Sample records for grid resource availability

  1. Surfer: An Extensible Pull-Based Framework for Resource Selection and Ranking

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.

    2004-01-01

    Grid computing aims to connect large numbers of geographically and organizationally distributed resources to increase computational power, resource utilization, and resource accessibility. In order to effectively utilize grids, users need to be connected to the best available resources at any given time. As grids are in constant flux, users cannot be expected to keep up with the configuration and status of the grid; thus, they must be provided with automatic resource brokering for selecting and ranking resources that meet the constraints and preferences they specify. This paper presents a new OGSI-compliant resource selection and ranking framework called Surfer that has been implemented as part of NASA's Information Power Grid (IPG) project. Surfer is highly extensible and may be integrated into any grid environment by adding information providers knowledgeable about that environment.
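
    A minimal sketch may make the constraint-filter-then-rank brokering pattern the abstract describes concrete. The record layout, function names, and scoring rule below are invented for illustration; they are not Surfer's actual OGSI interfaces.

        # Hypothetical records and scoring; not Surfer's actual OGSI interfaces.
        def select_and_rank(resources, constraints, preference):
            """Keep resources satisfying every constraint, then rank them by
            the preference score (higher is better)."""
            feasible = [r for r in resources if all(c(r) for c in constraints)]
            return sorted(feasible, key=preference, reverse=True)

        resources = [
            {"name": "hostA", "cpus": 64,  "mem_gb": 256, "load": 0.30},
            {"name": "hostB", "cpus": 16,  "mem_gb": 64,  "load": 0.05},
            {"name": "hostC", "cpus": 128, "mem_gb": 512, "load": 0.90},
        ]
        constraints = [lambda r: r["cpus"] >= 32, lambda r: r["mem_gb"] >= 128]
        preference = lambda r: r["cpus"] * (1.0 - r["load"])  # prefer idle capacity

        for r in select_and_rank(resources, constraints, preference):
            print(r["name"])   # hostA outranks hostC; hostB fails the constraints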

  2. Grid Technology as a Cyberinfrastructure for Delivering High-End Services to the Earth and Space Science Community

    NASA Technical Reports Server (NTRS)

    Hinke, Thomas H.

    2004-01-01

    Grid technology consists of middleware that permits distributed computations, data and sensors to be seamlessly integrated into a secure, single-sign-on processing environment. In this environment, a user has to identify and authenticate himself once to the grid middleware, and then can utilize any of the distributed resources to which he has been granted access. Grid technology allows resources that exist in enterprises that are under different administrative control to be securely integrated into a single processing environment. The grid community has adopted commercial web services technology as a means for implementing persistent, re-usable grid services that sit on top of the basic distributed processing environment that grids provide. These grid services can then form building blocks for even more complex grid services. Each grid service is characterized using the Web Service Description Language, which provides a description of the interface and how other applications can access it. The emerging Semantic grid work seeks to associate sufficient semantic information with each grid service such that applications will be able to automatically select, compose and, if necessary, substitute available equivalent services in order to assemble collections of services that are most appropriate for a particular application. Grid technology has been used to provide limited support to various Earth and space science applications. Looking to the future, this emerging grid service technology can provide a cyberinfrastructure for both the Earth and space science communities. Groups within these communities could transform those applications that have community-wide applicability into persistent grid services that are made widely available to their respective communities. In concert with grid-enabled data archives, users could easily create complex workflows that extract desired data from one or more archives and process it through an appropriate set of widely distributed grid services discovered using semantic grid technology. As required, high-end computational resources could be drawn from available grid resource pools. Using grid technology, this confluence of data, services and computational resources could easily be harnessed to transform data from many different sources into a desired product that is delivered to a user's workstation or to a web portal through which it could be accessed by its intended audience.

  3. An Experimental Framework for Executing Applications in Dynamic Grid Environments

    NASA Technical Reports Server (NTRS)

    Huedo, Eduardo; Montero, Ruben S.; Llorente, Ignacio M.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The Grid opens up opportunities for resource-starved scientists and engineers to harness highly distributed computing resources. A number of Grid middleware projects are currently available to support the simultaneous exploitation of heterogeneous resources distributed in different administrative domains. However, efficient job submission and management remain far from accessible to ordinary scientists and engineers due to the dynamic and complex nature of the Grid. This report describes a new Globus framework that allows an easier and more efficient execution of jobs in a 'submit and forget' fashion. Adaptation to dynamic Grid conditions is achieved by supporting automatic application migration following performance degradation, 'better' resource discovery, requirement change, owner decision or remote resource failure. The report also includes experimental results of the behavior of our framework on the TRGP testbed.

  4. Taxonomy for Modeling Demand Response Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsen, Daniel; Kiliccote, Sila; Sohn, Michael

    2014-08-01

    Demand response resources are an important component of modern grid management strategies. Accurate characterizations of DR resources are needed to develop systems of optimally managed grid operations and to plan future investments in generation, transmission, and distribution. The DOE Demand Response and Energy Storage Integration Study (DRESIS) project researched the degree to which demand response (DR) and energy storage can provide grid flexibility and stability in the Western Interconnection. In this work, DR resources were integrated with traditional generators in grid forecasting tools, specifically a production cost model of the Western Interconnection. As part of this study, LBNL developed a modeling framework for characterizing resource availability and response attributes of DR resources consistent with the governing architecture of the simulation modeling platform. In this report, we identify and describe the following response attributes required to accurately characterize DR resources: allowable response frequency, maximum response duration, minimum time needed to achieve load changes, necessary pre- or re-charging of integrated energy storage, costs of enablement, magnitude of controlled resources, and alignment of availability. We describe a framework for modeling these response attributes, and apply this framework to characterize 13 DR resources including residential, commercial, and industrial end-uses. We group these end-uses into three broad categories based on their response capabilities, and define a taxonomy for classifying DR resources within these categories. The three categories of resources exhibit different capabilities and differ in value to the grid. Results from the production cost model of the Western Interconnection illustrate that minor differences in resource attributes can have significant impact on grid utilization of DR resources. The implications of these findings will be explored in future DR valuation studies.

  5. Energy Management Challenges and Opportunities with Increased Intermittent Renewable Generation on the California Electrical Grid

    NASA Astrophysics Data System (ADS)

    Eichman, Joshua David

    Renewable resources, including wind, solar, geothermal, biomass, hydroelectric, wave and tidal, represent an opportunity for environmentally preferred generation of electricity that also increases energy security and independence. California is very proactive in encouraging the implementation of renewable energy, in part through legislation like Assembly Bill 32 and the development and execution of Renewable Portfolio Standards (RPS); however, renewable technologies are not without challenges. All renewable resources have some resource limitations, be that from location, capacity, cost or availability. Technologies like wind and solar are intermittent in nature but represent some of the most abundant resources for generating renewable electricity. If RPS goals are to be achieved, high levels of intermittent renewables must be considered. This work explores the effects of high penetration of renewables on a grid system with respect to resource availability, and identifies the key challenges that introducing these resources poses from the perspective of the grid. The HiGRID tool was developed for this analysis because no other tool could explore grid operation, while maintaining system reliability, with a diverse set of renewable resources and a wide array of complementary technologies including energy efficiency, demand response, energy storage technologies and electric transportation. This tool resolves the hourly operation of conventional generation resources (nuclear, coal, geothermal, natural gas and hydro). The resulting behavior from introducing additional renewable resources and the lifetime costs for each technology are analyzed.
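
    The hourly balancing problem this kind of tool resolves rests on the idea of net load: demand minus intermittent renewable output is what the conventional fleet must follow. A toy sketch with invented numbers, carrying none of HiGRID's actual detail:

        # Hourly demand minus wind and solar output = net load that the
        # conventional fleet must follow; all numbers are invented.
        demand_mw = [30000, 28000, 27000, 29000, 34000, 38000, 36000, 32000]
        wind_mw   = [ 4000,  5000,  5500,  5000,  3000,  2000,  2500,  3500]
        solar_mw  = [    0,     0,  1000,  6000,  9000,  4000,     0,     0]

        net_load = [d - w - s for d, w, s in zip(demand_mw, wind_mw, solar_mw)]
        print(net_load)
        print(max(net_load) - min(net_load), "MW of ramping range required")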

  6. Grid computing enhances standards-compatible geospatial catalogue service

    NASA Astrophysics Data System (ADS)

    Chen, Aijun; Di, Liping; Bai, Yuqi; Wei, Yaxing; Liu, Yang

    2010-04-01

    A catalogue service facilitates sharing, discovery, retrieval, management of, and access to large volumes of distributed geospatial resources, for example data, services, applications, and their replicas on the Internet. Grid computing provides an infrastructure for effective use of computing, storage, and other resources available online. The Open Geospatial Consortium has proposed a catalogue service specification and a series of profiles for promoting the interoperability of geospatial resources. By referring to the profile of the catalogue service for the Web, an innovative information model of a catalogue service is proposed to offer Grid-enabled registry, management, retrieval of and access to geospatial resources and their replicas. This information model extends the e-business registry information model by adopting several geospatial data and service metadata standards: the International Organization for Standardization (ISO)'s 19115/19119 standards and the US Federal Geographic Data Committee (FGDC) and US National Aeronautics and Space Administration (NASA) metadata standards for describing and indexing geospatial resources. In order to select the optimal geospatial resources and their replicas managed by the Grid, the Grid data management service and information service from the Globus Toolkit are closely integrated with the extended catalogue information model. Based on this new model, a catalogue service is implemented first as a Web service. Then, the catalogue service is further developed as a Grid service conforming to Grid service specifications. The catalogue service can be deployed in both the Web and Grid environments and accessed by standard Web services or authorized Grid services, respectively. The catalogue service has been implemented at the George Mason University/Center for Spatial Information Science and Systems (GMU/CSISS), managing more than 17 TB of geospatial data and geospatial Grid services. This service makes it easy to share and interoperate geospatial resources by using Grid technology and extends Grid technology into the geoscience communities.

  7. Sharing Data and Analytical Resources Securely in a Biomedical Research Grid Environment

    PubMed Central

    Langella, Stephen; Hastings, Shannon; Oster, Scott; Pan, Tony; Sharma, Ashish; Permar, Justin; Ervin, David; Cambazoglu, B. Barla; Kurc, Tahsin; Saltz, Joel

    2008-01-01

    Objectives To develop a security infrastructure to support controlled and secure access to data and analytical resources in a biomedical research Grid environment, while facilitating resource sharing among collaborators. Design A Grid security infrastructure, called Grid Authentication and Authorization with Reliably Distributed Services (GAARDS), is developed as a key architecture component of the NCI-funded cancer Biomedical Informatics Grid (caBIG™). GAARDS is designed to support, in a distributed environment, 1) efficient provisioning and federation of user identities and credentials; 2) group-based access control with which resource providers can enforce policies based on community-accepted groups and local groups; and 3) management of a trust fabric so that policies can be enforced based on required levels of assurance. Measurements GAARDS is implemented as a suite of Grid services and administrative tools. It provides three core services: Dorian for management and federation of user identities, Grid Trust Service for maintaining and provisioning a federated trust fabric within the Grid environment, and Grid Grouper for enforcing authorization policies based on both local and Grid-level groups. Results The GAARDS infrastructure is available as a stand-alone system and as a component of the caGrid infrastructure. More information about GAARDS can be accessed at http://www.cagrid.org. Conclusions GAARDS provides a comprehensive system to address the security challenges associated with environments in which resources may be located at different sites, requests to access the resources may cross institutional boundaries, and user credentials are created, managed, and revoked dynamically in a de-centralized manner. PMID:18308979
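
    An illustrative sketch of the group-based access control idea attributed to Grid Grouper above: a policy requiring membership in both a Grid-level group and a local group. All names and data structures are hypothetical, not the caGrid/GAARDS API.

        # Hypothetical policy: access requires membership in one Grid-level
        # group AND one local group; not the actual caGrid/GAARDS API.
        GRID_GROUPS = {"cabig/researchers": {"alice", "bob"}}
        LOCAL_GROUPS = {"siteA/radiology": {"alice", "carol"}}

        POLICY = {"image-archive": [("grid", "cabig/researchers"),
                                    ("local", "siteA/radiology")]}

        def is_member(scope, group, user):
            groups = GRID_GROUPS if scope == "grid" else LOCAL_GROUPS
            return user in groups.get(group, set())

        def authorize(user, resource):
            rules = POLICY.get(resource, [])
            return bool(rules) and all(is_member(s, g, user) for s, g in rules)

        print(authorize("alice", "image-archive"))  # True: in both groups
        print(authorize("bob", "image-archive"))    # False: no local membership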

  8. Tools and Techniques for Measuring and Improving Grid Performance

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Frumkin, M.; Smith, W.; VanderWijngaart, R.; Wong, P.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation provides information on NASA's geographically dispersed computing resources, and the various methods by which the disparate technologies are integrated within a nationwide computational grid. Many large-scale science and engineering projects are accomplished through the interaction of people, heterogeneous computing resources, information systems and instruments at different locations. The overall goal is to facilitate the routine interactions of these resources to reduce the time spent in design cycles, particularly for NASA's mission critical projects. The IPG (Information Power Grid) seeks to implement NASA's diverse computing resources in a fashion similar to the way in which electric power is made available.

  9. Emissions & Generation Resource Integrated Database (eGRID) Questions and Answers

    EPA Pesticide Factsheets

    eGRID is a comprehensive source of data on the environmental characteristics of almost all electric power generated in the United States. eGRID is based on available plant-specific data for all U.S. electricity generating plants that report data.

  10. Grid workflow job execution service 'Pilot'

    NASA Astrophysics Data System (ADS)

    Shamardin, Lev; Kryukov, Alexander; Demichev, Andrey; Ilyin, Vyacheslav

    2011-12-01

    'Pilot' is a grid job execution service for workflow jobs. The main goal of the service is to automate computations with multiple stages, since these can be expressed as simple workflows. Each job is a directed acyclic graph of tasks, and each task is an execution of something on a grid resource (or 'computing element'). Tasks may be submitted to any WS-GRAM (Globus Toolkit 4) service. The target resources for task execution are selected by the Pilot service from the set of available resources which match the specific requirements from the task and/or job definition. Some simple conditional execution logic is also provided. The 'Pilot' service is built on REST concepts and provides a simple API through authenticated HTTPS. This service is deployed and used in production in the Russian national grid project GridNNN.
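
    A small sketch of the workflow model described here: a job as a directed acyclic graph of tasks, each dispatched once its predecessors complete. The submission step is a stub standing in for WS-GRAM dispatch, and Python's standard graphlib stands in for Pilot's own machinery.

        # Job = DAG of tasks; graphlib (Python 3.9+) handles the ordering.
        from graphlib import TopologicalSorter

        def submit(task):
            print(f"submitting {task} to a matching computing element")

        # Each task maps to the set of tasks it depends on.
        job = {"simulate": set(),
               "reconstruct": {"simulate"},
               "analyze": {"reconstruct"},
               "merge": {"analyze", "reconstruct"}}

        ts = TopologicalSorter(job)
        ts.prepare()
        while ts.is_active():
            for task in ts.get_ready():   # tasks whose dependencies are done
                submit(task)
                ts.done(task)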

  11. Use of Emerging Grid Computing Technologies for the Analysis of LIGO Data

    NASA Astrophysics Data System (ADS)

    Koranda, Scott

    2004-03-01

    The LIGO Scientific Collaboration (LSC) today faces the challenge of enabling analysis of terabytes of LIGO data by hundreds of scientists from institutions all around the world. To meet this challenge the LSC is developing tools, infrastructure, applications, and expertise leveraging Grid Computing technologies available today, and making compute resources at sites across the United States and Europe available to LSC scientists. We use digital credentials for strong and secure authentication and authorization to compute resources and data. Building on top of products from the Globus project for high-speed data transfer and information discovery, we have created the Lightweight Data Replicator (LDR) to securely and robustly replicate data to resource sites. We have deployed at our computing sites the Virtual Data Toolkit (VDT) Server and Client packages, developed in collaboration with our partners in the GriPhyN and iVDGL projects, providing uniform access to distributed resources for users and their applications. Taken together, these Grid Computing technologies and infrastructure have formed the LSC DataGrid: a coherent and uniform environment across two continents for the analysis of gravitational-wave detector data. Much work, however, remains in order to scale current analyses, and recent lessons learned need to be integrated into the next generation of Grid middleware.

  12. Interoperable PKI Data Distribution in Computational Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pala, Massimiliano; Cholia, Shreyas; Rea, Scott A.

    One of the most successful working examples of virtual organizations, computational grids need authentication mechanisms that inter-operate across domain boundaries. Public Key Infrastructures (PKIs) provide sufficient flexibility to allow resource managers to securely grant access to their systems in such distributed environments. However, as PKIs grow and services are added to enhance both security and usability, users and applications must struggle to discover available resources, particularly when the Certification Authority (CA) is alien to the relying party. This article presents how to overcome these limitations of the current grid authentication model by integrating the PKI Resource Query Protocol (PRQP) into the Grid Security Infrastructure (GSI).

  13. Changing from computing grid to knowledge grid in life-science grid.

    PubMed

    Talukdar, Veera; Konar, Amit; Datta, Ayan; Choudhury, Anamika Roy

    2009-09-01

    Grid computing has a great potential to become a standard cyberinfrastructure for life sciences that often require high-performance computing and large data handling, which exceed the computing capacity of a single institution. Grid computing applies the resources of many computers in a network to a single problem at the same time. It is useful for scientific problems that require a great number of computer processing cycles or access to a large amount of data. As biologists, we are constantly discovering millions of genes and genome features, which are assembled in a library and distributed on computers around the world. This means that new, innovative methods must be developed that exploit the resources available for extensive calculations, for example grid computing. This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life scientific problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for community formation for sharing tacit knowledge among a community. By extending the concept of grid from computing grid to knowledge grid, it is possible to make use of a grid not only as sharable computing resources, but also as time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  14. A Unified Framework for Periodic, On-Demand, and User-Specified Software Information

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.

    2004-01-01

    Although grid computing can increase the number of resources available to a user, not all resources on the grid may have a software environment suitable for running a given application. To provide users with the necessary assistance for selecting resources with compatible software environments and/or for automatically establishing such environments, it is necessary to have an accurate source of information about the software installed across the grid. This paper presents a new OGSI-compliant software information service that has been implemented as part of NASA's Information Power Grid project. This service is built on top of a general framework for reconciling information from periodic, on-demand, and user-specified sources. Information is retrieved using standard XPath queries over a single unified namespace independent of the information's source. Two consumers of the provided software information, the IPG Resource Broker and the IPG Naturalization Service, are briefly described.
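
    The XPath-based retrieval the abstract mentions can be sketched with Python's standard library. The XML layout below is invented for illustration and is not the IPG service's actual schema.

        # Invented XML layout; the IPG service's real schema is not shown here.
        import xml.etree.ElementTree as ET

        doc = ET.fromstring("""\
        <software>
          <host name="hostA"><package name="python" version="2.3"/></host>
          <host name="hostB"><package name="gcc" version="3.3"/></host>
        </software>""")

        # List hosts offering python, using ElementTree's limited XPath support.
        for host in doc.findall("host"):
            pkg = host.find("package[@name='python']")  # attribute predicate
            if pkg is not None:
                print(host.get("name"), pkg.get("version"))   # hostA 2.3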

  15. A Solution Framework for Environmental Characterization Problems

    EPA Science Inventory

    This paper describes experiences developing a grid-enabled framework for solving environmental inverse problems. The solution approach taken here couples environmental simulation models with global search methods and requires readily available computational resources of the grid ...

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hummon, M.; Kiliccote, S.

    Demand response (DR) resources present a potentially important source of grid flexibility; however, DR in grid models is limited by data availability and modeling complexity. This presentation focuses on the co-optimization of DR resources to provide energy and ancillary services in a production cost model of the Colorado "test system". We assume each DR resource can provide energy services by either shedding load or shifting its use between different times, as well as operating reserves: frequency regulation, contingency reserve, and flexibility (or ramping) reserve. There are significant variations in the availabilities of different types of DR resources, which affect both the operational savings as well as the revenue for each DR resource. The results presented include the system-wide avoided fuel and generator start-up costs as well as the composite revenue for each DR resource by energy and operating reserves.

  17. Economic models for management of resources in peer-to-peer and grid computing

    NASA Astrophysics Data System (ADS)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

    The accelerated development in Peer-to-Peer (P2P) and Grid computing has positioned them as promising next generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments are a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of each of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, there exist various economic models for setting the price of goods based on supply and demand and their value to the user. They include the commodity market, posted price, tenders and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the necessary infrastructure to realize them. In addition to normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed that contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
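
    The deadline- and cost-based strategies mentioned for Nimrod/G can be illustrated with a toy selection rule: minimize cost subject to a deadline, or minimize time subject to a budget. The prices and throughputs below are invented, not Nimrod/G's actual scheduler.

        # Toy resource table: (name, price per job, throughput in jobs/hour).
        RESOURCES = [("cheap-cluster", 1.0, 10.0),
                     ("fast-cluster",  4.0, 50.0)]

        def time_h(r, jobs):
            return jobs / r[2]

        def cost(r, jobs):
            return jobs * r[1]

        def schedule(jobs, deadline_h=None, budget=None):
            if deadline_h is not None:   # minimize cost within the deadline
                ok = [r for r in RESOURCES if time_h(r, jobs) <= deadline_h]
                return min(ok, key=lambda r: cost(r, jobs), default=None)
            if budget is not None:       # minimize time within the budget
                ok = [r for r in RESOURCES if cost(r, jobs) <= budget]
                return min(ok, key=lambda r: time_h(r, jobs), default=None)

        print(schedule(100, deadline_h=12))  # cheap cluster still meets deadline
        print(schedule(100, budget=150))     # fast cluster would exceed budget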

  18. Sensitivity of power system operations to projected changes in water availability due to climate change: the Western U.S. case study

    NASA Astrophysics Data System (ADS)

    Voisin, N.; Macknick, J.; Fu, T.; O'Connell, M.; Zhou, T.; Brinkman, G.

    2017-12-01

    Water resources provide multiple critical services to the electrical grid through hydropower technologies, from generation to regulation of the electric grid (frequency, capacity reserve). Water resources can also represent vulnerabilities to the electric grid, as hydropower and thermo-electric facilities require water for operations. In the Western U.S., hydropower and thermo-electric plants that rely on fresh surface water represent 67% of the generating capacity. Prior studies have looked at the impact of change in water availability under future climate conditions on expected generating capacity in the Western U.S., but have not evaluated operational risks or changes resulting from climate. In this study, we systematically assess the impact of changes in water availability and air temperatures on power operations, i.e. we take into account the different grid services that water resources can provide to the electric grid (generation, regulation) in the system-level context of inter-regional coordination through the electric transmission network. We leverage the Coupled Model Intercomparison Project Phase 5 (CMIP5) hydrology simulations under historical and future climate conditions, and force the large-scale river routing and water management model MOSART-WM along with 2010-level sectoral water demands. Changes in monthly hydropower potential generation (including generation and reserves), as well as monthly generation capacity of thermo-electric plants, are derived for each power plant in the Western U.S. electric grid. We then utilize the PLEXOS electricity production cost model to optimize power system dispatch and cost decisions for the 2010 infrastructure under 100 years of historical and future (2050 horizon) hydroclimate conditions. We use economic metrics as well as operational metrics such as generation portfolio, emissions, and reserve margins to assess the changes in power system operations between historical and future normal and extreme water availability conditions. We provide insight on how this information can be used to support resource adequacy and grid expansion studies over the Western U.S. in the context of inter-annual variability and climate change.

  19. Job Scheduling in a Heterogeneous Grid Environment

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak

    2004-01-01

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
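
    A sketch of the kind of estimate such migration policies rely on, combining the three inputs the abstract lists: system performance, available bandwidth, and per-job data volume. The weighting and numbers are illustrative assumptions, not the paper's actual policy.

        # Illustrative turnaround estimate; weights and numbers are invented.
        def estimated_turnaround_s(job, site):
            transfer = (job["input_gb"] + job["output_gb"]) * 8 / site["bw_gbps"]
            compute = job["work_units"] / site["perf_units_per_s"]
            return site["queue_wait_s"] + transfer + compute

        def best_site(job, sites):
            return min(sites, key=lambda s: estimated_turnaround_s(job, s))

        job = {"input_gb": 50, "output_gb": 10, "work_units": 3.6e6}
        sites = [{"name": "local",  "bw_gbps": 10, "perf_units_per_s": 100,
                  "queue_wait_s": 7200},
                 {"name": "remote", "bw_gbps": 1,  "perf_units_per_s": 400,
                  "queue_wait_s": 600}]
        print(best_site(job, sites)["name"])  # remote wins despite slower link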

  20. Distributed data analysis in ATLAS

    NASA Astrophysics Data System (ADS)

    Nilsson, Paul; Atlas Collaboration

    2012-12-01

    Data analysis using grid resources is one of the fundamental challenges to be addressed before the start of LHC data taking. The ATLAS detector will produce petabytes of data per year, and roughly one thousand users will need to run physics analyses on this data. Appropriate user interfaces and helper applications have been made available to ensure that the grid resources can be used without requiring expertise in grid technology. These tools enlarge the number of grid users from a few production administrators to potentially all participating physicists. ATLAS makes use of three grid infrastructures for the distributed analysis: the EGEE sites, the Open Science Grid, and NorduGrid. These grids are managed by the gLite workload management system, the PanDA workload management system, and ARC middleware; many sites can be accessed via both the gLite WMS and PanDA. Users can choose between two front-end tools to access the distributed resources. Ganga is a tool co-developed with LHCb to provide a common interface to the multitude of execution backends (local, batch, and grid). The PanDA workload management system provides a set of utilities called PanDA Client; with these tools users can easily submit Athena analysis jobs to the PanDA-managed resources. Distributed data is managed by Don Quijote 2 (DQ2), a system developed by ATLAS; DQ2 is used to replicate datasets according to the data distribution policies and maintains a central catalog of file locations. The operation of the grid resources is continually monitored by the Ganga Robot functional testing system, and infrequent site stress tests are performed using the HammerCloud system. In addition, the DAST shift team is a group of power users who take shifts to provide distributed analysis user support; this team has effectively relieved the burden of support from the developers.

  1. Connecting Restricted, High-Availability, or Low-Latency Resources to a Seamless Global Pool for CMS

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Hufnagel, D.; Hurtado Anampa, K.; Jayatilaka, B.; Khan, F.; Larson, K.; Letts, J.; Mascheroni, M.; Mohapatra, A.; Marra Da Silva, J.; Mason, D.; Perez-Calero Yzquierdo, A.; Piperov, S.; Tiradani, A.; Verguilov, V.; CMS Collaboration

    2017-10-01

    The connection of diverse and sometimes non-Grid-enabled resource types to the CMS Global Pool, which is based on HTCondor and glideinWMS, has been a major goal of CMS. These resources range in type from a high-availability, low-latency facility at CERN for urgent calibration studies, called the CAF, to a local user facility at the Fermilab LPC, allocation-based computing resources at NERSC and SDSC, opportunistic resources provided through the Open Science Grid, commercial clouds, and others, as well as access to opportunistic cycles on the CMS High Level Trigger farm. In addition, we have provided the capability to give priority to local users of resources beyond those pledged to WLCG at CMS sites. Many of the solutions employed to bring these diverse resource types into the Global Pool have common elements, while some are very specific to a particular project. This paper details some of the strategies and solutions used to access these resources through the Global Pool in a seamless manner.

  2. Emissions & Generation Resource Integrated Database (eGRID), eGRID2010

    EPA Pesticide Factsheets

    The Emissions & Generation Resource Integrated Database (eGRID) is a comprehensive source of data on the environmental characteristics of almost all electric power generated in the United States. These environmental characteristics include air emissions for nitrogen oxides, sulfur dioxide, carbon dioxide, methane, and nitrous oxide; emissions rates; net generation; resource mix; and many other attributes. eGRID2010 contains the complete release of year 2007 data, as well as years 2005 and 2004 data. Excel spreadsheets, full documentation, summary data, eGRID subregion and NERC region representational maps, and GHG emission factors are included in this data set. The archived data in eGRID2002 contain years 1996 through 2000 data. For year 2007 data, the first Microsoft Excel workbook, Plant, contains boiler, generator, and plant spreadsheets. The second Microsoft Excel workbook, Aggregation, contains aggregated data at the state, electric generating company, parent company, power control area, eGRID subregion, NERC region, and U.S. total levels. The third Microsoft Excel workbook, ImportExport, contains state import-export data, as well as U.S. generation and consumption data for years 2007, 2005, and 2004. For eGRID data for years 2005 and 2004, a user-friendly web application, eGRIDweb, is available to select, view, print, and export specified data.

  3. A Structured-Grid Quality Measure for Simulated Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2004-01-01

    A structured-grid quality measure is proposed, combining three traditional measurements: intersection angles, stretching, and curvature. Quality assesses whether the generated grid provides the best possible tradeoffs in grid stretching and skewness that enable accurate flow predictions, whereas the grid density is assumed to be a constraint imposed by the available computational resources and the desired resolution of the flow field. The usefulness of this quality measure is assessed by comparing heat transfer predictions from grid convergence studies for grids of varying quality in the range of [0.6-0.8] on an 8° half-angle sphere-cone, at laminar, perfect gas, Mach 10 wind tunnel conditions.
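
    How three cell measures might fold into a single [0, 1] score can be sketched as follows; the normalizations and the geometric-mean combination are assumptions for illustration, not the paper's exact formulation.

        # Assumed normalizations and geometric-mean combination; a sketch only.
        import math

        def angle_quality(theta_deg):
            """1.0 for orthogonal grid-line intersections, lower when skewed."""
            return math.sin(math.radians(theta_deg))

        def stretch_quality(ratio):
            """1.0 for equal neighboring cell sizes, lower when stretched."""
            return min(ratio, 1.0 / ratio)

        def curvature_quality(turn_deg):
            """1.0 for straight grid lines, reduced by the turning angle."""
            return math.cos(math.radians(min(abs(turn_deg), 90.0)))

        def cell_quality(theta_deg, ratio, turn_deg):
            q = (angle_quality(theta_deg) * stretch_quality(ratio)
                 * curvature_quality(turn_deg))
            return q ** (1.0 / 3.0)    # geometric mean of the three measures

        print(round(cell_quality(80.0, 1.2, 5.0), 3))  # mildly skewed cell, ~0.93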

  4. Multi-Lab EV Smart Grid Integration Requirements Study. Providing Guidance on Technology Development and Demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markel, T.; Meintz, A.; Hardy, K.

    2015-05-28

    The report begins with a discussion of the current state of the energy and transportation systems, followed by a summary of some VGI scenarios and opportunities. The current efforts to create foundational interface standards are detailed, and the requirements for enabling PEVs as a grid resource are presented. Existing technology demonstrations that include vehicle-to-grid functions are summarized. The report also includes a data-based discussion on the magnitude and variability of PEVs as a grid resource, followed by an overview of existing simulation tools that can be used to explore the expansion of VGI to larger grid functions that might offer system and customer value. The document concludes with a summary of the requirements and potential action items that would support greater adoption of VGI.

  5. Using Grid Benchmarks for Dynamic Scheduling of Grid Applications

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Hood, Robert

    2003-01-01

    Navigation, or dynamic scheduling of applications on computational grids, can be improved through the use of an application-specific characterization of grid resources. Current grid information systems provide a description of the resources, but do not contain any application-specific information. We define a GridScape as the dynamic state of the grid resources. We measure the dynamic performance of these resources using the grid benchmarks. Then we use the GridScape for automatic assignment of the tasks of a grid application to grid resources. The scalability of the system is achieved by limiting the navigation overhead to a few percent of the application resource requirements. Our task submission and assignment protocol guarantees that the navigation system does not cause grid congestion. On a synthetic data mining application we demonstrate that GridScape-based task assignment reduces the application turnaround time.

  6. e-Human Grid Ecology - understanding and approaching the inverse tragedy of the commons in the e-Grid society.

    PubMed

    Knoch, Tobias A; Baumgärtner, Volkmar; de Zeeuw, Luc V; Grosveld, Frank G; Egger, Kurt

    2009-01-01

    With ever-new technologies emerging, the amount of information to be stored and processed is growing exponentially and is believed to be always at the limit. In contrast, however, huge resources are available in the IT sector, as in e.g. the renewable energy sector, which often are not used at all. This under-usage defies any rationale, especially in the IT sector, where e.g. virtualisation and grid approaches could be implemented quickly thanks to the great technical and fast turnover opportunities. Here, we describe this obvious paradox for the first time as the Inverse Tragedy of the Commons, in contrast to the Classical Tragedy of the Commons, where resources are overexploited. From this perspective the grid IT sector, attempting to share resources for better efficiency, reveals two challenges leading to the heart of the paradox: i) from a macro perspective, all grid infrastructures involve not only mere technical solutions but also, dominantly, all of the autopoietic social sub-systems ranging from religion to policy; ii) on the micro level, the individual players and their psychology and risk behaviour are of major importance for acting within the macro autopoietic framework. Thus, the challenges of grid implementation are similar to those of e.g. climate protection. This is well described by the classic Human Ecology triangle and our extension to a rectangle: invironment-individual-society-environment. Extension of this classical interdisciplinary field of basic and applied research to an e-Human Grid Ecology rationale allows the Inverse Tragedy of the Commons of the grid sector to be better understood and approached, and implies obvious guidelines for the day-to-day management of grid and other (networked) resources, which is of importance for many fields with similar paradoxes in (e-)society.

  7. Spatio-temporal complementarity of wind and solar power in India

    NASA Astrophysics Data System (ADS)

    Lolla, Savita; Baidya Roy, Somnath; Chowdhury, Sourangshu

    2015-04-01

    Wind and solar power are likely to be a part of the solution to the climate change problem. That is why they feature prominently in the energy policies of all industrial economies, including India. One of the major hindrances preventing an explosive growth of wind and solar energy is the issue of intermittency. This is a major problem because in a rapidly moving economy, energy production must match the patterns of energy demand. Moreover, sudden increases and decreases in energy supply may destabilize the power grids, leading to disruptions in power supply. In this work we explore whether the patterns of variability in wind and solar energy availability can offset each other so that a constant supply can be guaranteed. As a first step, this work focuses on seasonal-scale variability for each of the 5 regional power transmission grids in India. Communication within each grid is better than communication between grids; hence, it is assumed that the grids can switch sources relatively easily. Wind and solar resources are estimated using the MERRA Reanalysis data for the 1979-2013 period. Solar resources are calculated with a 20% conversion efficiency. Wind resources are estimated using a 2 MW turbine power curve. Total resources are obtained by optimizing the location and number of wind/solar energy farms. Preliminary results show that the southern and western grids are more appropriate for cogeneration than the other grids. Many studies on wind-solar cogeneration have focused on temporal complementarity at the local scale. However, this is one of the first studies to explore spatial complementarity over regional scales. This project may help accelerate renewable energy penetration in India by identifying regional grid(s) where the renewable energy intermittency problem can be minimized.
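
    The two resource conversions described, a 2 MW turbine power curve and a 20% solar conversion efficiency, can be sketched directly. The cut-in, rated, and cut-out speeds below are typical values assumed here, not taken from the paper.

        # Assumed cut-in/rated/cut-out speeds for a generic 2 MW turbine, and a
        # flat 20% solar conversion efficiency as stated in the abstract.
        def turbine_power_mw(wind_ms, cut_in=3.0, rated_ms=12.0,
                             cut_out=25.0, rated_mw=2.0):
            if wind_ms < cut_in or wind_ms > cut_out:
                return 0.0
            if wind_ms >= rated_ms:
                return rated_mw
            # Cubic ramp between cut-in and rated wind speed.
            return rated_mw * (wind_ms**3 - cut_in**3) / (rated_ms**3 - cut_in**3)

        def solar_power_mw(irradiance_wm2, panel_area_m2, efficiency=0.20):
            return irradiance_wm2 * panel_area_m2 * efficiency / 1e6

        print(round(turbine_power_mw(8.0), 3))   # part-load output, one turbine
        print(solar_power_mw(800.0, 10000.0))    # 1 ha of panels at 800 W/m^2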

  8. Monitoring System for the GRID Monte Carlo Mass Production in the H1 Experiment at DESY

    NASA Astrophysics Data System (ADS)

    Bystritskaya, Elena; Fomenko, Alexander; Gogitidze, Nelly; Lobodzinski, Bogdan

    2014-06-01

    The H1 Virtual Organization (VO), as one of the small VOs, employs most components of the EMI or gLite Middleware. In this framework, a monitoring system is designed for the H1 Experiment to identify and recognize within the GRID the resources best suited for the execution of CPU-time-consuming Monte Carlo (MC) simulation tasks (jobs). Monitored resources are Computing Elements (CEs), Storage Elements (SEs), WMS servers (WMSs), the CernVM File System (CVMFS) available to the VO HONE, and local GRID User Interfaces (UIs). The general principle of monitoring GRID elements is based on the execution of short test jobs on different CE queues, using submission through various WMSs and directly to the CREAM-CEs as well. Real H1 MC production jobs with a small number of events are used to perform the tests. Test jobs are periodically submitted into GRID queues, the status of these jobs is checked, output files of completed jobs are retrieved, the result of each job is analyzed, and the waiting time and run time are derived. Using this information, the status of the GRID elements is estimated and the most suitable ones are included in the automatically generated configuration files for use in the H1 MC production. The monitoring system allows for identification of problems in the GRID sites and for prompt reaction to them (for example by sending GGUS (Global Grid User Support) trouble tickets). The system can easily be adapted to identify the optimal resources for tasks other than MC production, simply by changing to the relevant test jobs. The monitoring system is written mostly in Python and Perl, with the insertion of a few shell scripts. In addition to the test monitoring system, we use information from real production jobs to monitor the availability and quality of the GRID resources. The monitoring tools register the number of job resubmissions, the percentage of failed and finished jobs relative to all jobs on the CEs, and determine the average values of waiting and running time for the involved GRID queues. CEs which do not meet the set criteria can be removed from the production chain by including them in an exception table. All of these monitoring actions lead to a more reliable and faster execution of MC requests.
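
    A sketch of the bookkeeping this implies: per-queue waiting times and success ratios from test jobs, with failing queues routed to the exception table. The thresholds and queue names are invented examples, not H1's actual criteria.

        # Illustrative per-queue evaluation with invented thresholds; queues
        # that fail the criteria go to the exception table.
        from statistics import mean

        def evaluate_queues(test_results, max_wait_s=3600, min_success=0.8):
            """test_results: queue -> list of (wait_s, run_s, succeeded)."""
            good, exceptions = [], []
            for queue, runs in test_results.items():
                success = mean(1.0 if ok else 0.0 for _, _, ok in runs)
                avg_wait = mean(w for w, _, _ in runs)
                if success >= min_success and avg_wait <= max_wait_s:
                    good.append(queue)
                else:
                    exceptions.append(queue)    # excluded from production config
            return good, exceptions

        results = {"ce1.example.org/long": [(600, 900, True), (800, 950, True)],
                   "ce2.example.org/long": [(5000, 900, True), (4500, 0, False)]}
        print(evaluate_queues(results))  # ce1 kept; ce2 to the exception table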

  9. Sensitivities and Tipping Points of Power System Operations to Fluctuations Caused by Water Availability and Fuel Prices

    NASA Astrophysics Data System (ADS)

    O'Connell, M.; Macknick, J.; Voisin, N.; Fu, T.

    2017-12-01

    The western US electric grid is highly dependent upon water resources for reliable operation. Hydropower and water-cooled thermoelectric technologies represent 67% of generating capacity in the western region of the US. While water resources provide a significant amount of generation and reliability for the grid, these same resources can represent vulnerabilities during times of drought or low-flow conditions. A lack of water affects water-dependent technologies and can result in more expensive generators needing to run in order to meet electric grid demand, resulting in higher electricity prices and a higher cost to operate the grid. A companion study assesses the impact of changes in water availability and air temperatures on power operations by directly derating hydro and thermo-electric generators. In this study we assess the sensitivities and tipping points of water availability compared with higher fuel prices in electricity sector operations. We evaluate the impacts of varying electricity prices by modifying fuel prices for coal and natural gas. We then analyze the difference in simulation results between changes in fuel prices in combination with water availability and air temperature variability. We simulate three fuel price scenarios for a 2010 baseline scenario along with 100 historical and future hydro-climate conditions. We use the PLEXOS electricity production cost model to optimize power system dispatch and cost decisions under each combination of fuel price and water constraint. Some of the metrics evaluated are total production cost, generation type mix, emissions, transmission congestion, and reserve procurement. These metrics give insight into how strained the system is, how much flexibility it still has, and to what extent water resource availability or fuel prices drive changes in electricity sector operations. This work will provide insights into current electricity operations as well as future cases of increased penetration of variable renewable generation technologies such as wind and solar.

  10. Improving an Assessment of Tidal Stream Energy Resource for Anchorage, Alaska

    NASA Astrophysics Data System (ADS)

    Xu, T.; Haas, K. A.

    2016-12-01

    Increasing global energy demand is driving the pursuit of new and innovative energy sources, leading to the need for assessing and utilizing alternative, productive and reliable energy resources. Tidal currents, characterized by periodicity and predictability, have long been explored and studied as a potential energy source, with a focus on many different locations with significant tidal ranges. However, a proper resource assessment cannot be accomplished without accurate knowledge of the spatial-temporal distribution and availability of tidal currents. Known for possessing one of the top tidal energy sources along the U.S. coastline, Cook Inlet, Alaska is the area of interest for this project. A previous regional-scale resource assessment has been completed; however, the present study focuses the assessment on the available power specifically near Anchorage while significantly improving the accuracy of the assessment following IEC guidelines. The Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system is configured to simulate the tidal flows with grid refinement techniques for a minimum of 32 days, encompassing an entire lunar cycle. Simulation results are validated by extracting tidal constituents with harmonic analysis and comparing tidal components with National Oceanic and Atmospheric Administration (NOAA) observations and predictions. Model calibration includes adjustments to bottom friction coefficients and the use of different tidal databases. Differences between NOAA observations and COAWST simulations decrease after applying grid refinement, compared with results from a former study without grid refinement. Also, energy extraction is simulated at potential sites to study the impact on the tidal resources. This study demonstrates the enhancement of the resource assessment using grid refinement to evaluate tidal energy near Anchorage within Cook Inlet, Alaska, the productivity that energy extraction can achieve, and the change in tidal currents caused by energy extraction.
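
    The harmonic-analysis validation step can be sketched as a least-squares fit of sinusoids at known constituent frequencies (M2 and K1 shown) to a current series; a minimal version, assuming hourly samples over the 32-day window mentioned above and a synthetic series in place of model output:

        # Least-squares harmonic fit at known tidal constituent frequencies.
        # M2 and K1 periods are standard values; the series below is synthetic.
        import numpy as np

        PERIODS_H = {"M2": 12.4206, "K1": 23.9345}   # constituent periods, hours

        def fit_constituents(t_hours, series):
            cols = [np.ones_like(t_hours)]           # mean-flow term
            for period in PERIODS_H.values():
                w = 2 * np.pi / period
                cols += [np.cos(w * t_hours), np.sin(w * t_hours)]
            coef, *_ = np.linalg.lstsq(np.column_stack(cols), series, rcond=None)
            return {name: np.hypot(coef[1 + 2*i], coef[2 + 2*i])
                    for i, name in enumerate(PERIODS_H)}     # amplitudes

        t = np.arange(0.0, 32 * 24, 1.0)             # 32 days of hourly samples
        w_m2 = 2 * np.pi / PERIODS_H["M2"]
        obs = 1.5 * np.cos(w_m2 * t - 0.4) + 0.05 * np.random.randn(t.size)
        print(fit_constituents(t, obs))              # M2 near 1.5, K1 near 0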

  11. Simulation of LHC events on a million threads

    NASA Astrophysics Data System (ADS)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.

    2015-12-01

    Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.

  12. Cloud Computing for Pharmacometrics: Using AWS, NONMEM, PsN, Grid Engine, and Sonic

    PubMed Central

    Sanduja, S; Jewell, P; Aron, E; Pharai, N

    2015-01-01

    Cloud computing allows pharmacometricians to access advanced hardware, network, and security resources available to expedite analysis and reporting. Cloud-based computing environments are available at a fraction of the time and effort when compared to traditional local datacenter-based solutions. This tutorial explains how to get started with building your own personal cloud computer cluster using Amazon Web Services (AWS), NONMEM, PsN, Grid Engine, and Sonic. PMID:26451333

  13. Cloud Computing for Pharmacometrics: Using AWS, NONMEM, PsN, Grid Engine, and Sonic.

    PubMed

    Sanduja, S; Jewell, P; Aron, E; Pharai, N

    2015-09-01

    Cloud computing allows pharmacometricians to access advanced hardware, network, and security resources available to expedite analysis and reporting. Cloud-based computing environments are available at a fraction of the time and effort when compared to traditional local datacenter-based solutions. This tutorial explains how to get started with building your own personal cloud computer cluster using Amazon Web Services (AWS), NONMEM, PsN, Grid Engine, and Sonic.

  14. A Concept for the One Degree Imager (ODI) Data Reduction Pipeline and Archiving System

    NASA Astrophysics Data System (ADS)

    Knezek, Patricia; Stobie, B.; Michael, S.; Valdes, F.; Marru, S.; Henschel, R.; Pierce, M.

    2010-05-01

    The One Degree Imager (ODI), currently being built by the WIYN Observatory, will provide tremendous possibilities for conducting diverse scientific programs. ODI will be a complex instrument, using non-conventional Orthogonal Transfer Array (OTA) detectors. Due to its large field of view, small pixel size, use of OTA technology, and expected frequent use, ODI will produce vast amounts of astronomical data. If ODI is to achieve its full potential, a data reduction pipeline must be developed. Long-term archiving must also be incorporated into the pipeline system to ensure the continued value of ODI data. This paper presents a concept for an ODI data reduction pipeline and archiving system. To limit costs and development time, our plan leverages existing software and hardware, including existing pipeline software, Science Gateways, Computational Grid & Cloud Technology, Indiana University's Data Capacitor and Massive Data Storage System, and TeraGrid compute resources. Existing pipeline software will be augmented to add functionality required to meet challenges specific to ODI, enhance end-user control, and enable the execution of the pipeline on grid resources including national grid resources such as the TeraGrid and Open Science Grid. The planned system offers consistent standard reductions and end-user flexibility when working with images beyond the initial instrument signature removal. It also gives end-users access to computational and storage resources far beyond what are typically available at most institutions. Overall, the proposed system provides a wide array of software tools and the necessary hardware resources to use them effectively.

  15. Optimizing Mars Airplane Trajectory with the Application Navigation System

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Riley, Derek

    2004-01-01

    Planning complex missions requires a number of programs to be executed in concert. The Application Navigation System (ANS), developed in the NAS Division, can execute many interdependent programs in a distributed environment. We show that the ANS simplifies user effort and reduces the time needed to optimize the trajectory of a Martian airplane. We use a software package, Cart3D, to evaluate trajectories and a shortest-path algorithm to determine the optimal trajectory. ANS employs the GridScape to represent the dynamic state of the available computer resources. Then, ANS uses a scheduler to dynamically assign ready tasks to machine resources, and the GridScape for tracking available resources and forecasting the completion time of running tasks. We demonstrate the system's capability to schedule and run the trajectory optimization application with efficiency exceeding 60% on 64 processors.
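
    Once each leg between candidate waypoints has a cost (here, from Cart3D-style evaluations), the optimal trajectory reduces to a standard shortest-path query. A minimal Dijkstra sketch over an invented leg-cost graph, not ANS's actual code:

        # Minimal Dijkstra over an invented leg-cost graph.
        import heapq

        def shortest_path(graph, start, goal):
            """graph: {waypoint: [(next_waypoint, leg_cost), ...]}"""
            queue, visited = [(0.0, start, [start])], set()
            while queue:
                cost, node, path = heapq.heappop(queue)
                if node == goal:
                    return cost, path
                if node in visited:
                    continue
                visited.add(node)
                for nxt, leg in graph.get(node, []):
                    if nxt not in visited:
                        heapq.heappush(queue, (cost + leg, nxt, path + [nxt]))
            return float("inf"), []

        legs = {"A": [("B", 2.0), ("C", 5.0)],
                "B": [("C", 1.0), ("D", 7.0)],
                "C": [("D", 2.0)]}
        print(shortest_path(legs, "A", "D"))   # (5.0, ['A', 'B', 'C', 'D'])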

  16. Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sulakhe, D.; Rodriguez, A.; Wilde, M.

    2008-03-01

    Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing the Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system that makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper will not go into the details of the problems involved or the lessons learned in using individual Grid resources, as these have already been published in our paper on the genome analysis research environment (GNARE); it will focus primarily on the architecture that makes GADU resource-independent and interoperable across heterogeneous Grid resources.

  17. Vulnerability of the US western electric grid to hydro-climatological conditions: How bad can it get?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voisin, N.; Kintner-Meyer, M.; Skaggs, R.

    Recent studies have highlighted the potential impact of climate change on US electricity generation capacity by exploring the effect of changes in stream temperatures on the available capacity of thermo-electric plants that rely on fresh-water cooling. However, little is known about the electric system impacts under extreme climate events such as drought. Vulnerability assessments are usually performed for a baseline water year or a specific drought, which do not provide insights into the full grid stress distribution across the diversity of climate events. In this paper we estimate the impacts of water availability on the electricity generation and transmission in the Western US grid for a range of historical water availability combinations. We softly couple an integrated water model, which includes climate, hydrology, routing, water resources management and socio-economic water demand models, into a grid model (production cost model) and simulate 30 years of historical hourly power flow conditions in the Western US grid. The experiment allows estimating the grid stress distribution as a function of inter-annual variability in regional water availability. Results indicate a clear correlation between grid vulnerability (as quantified in unmet energy demand and increased production cost) for the summer month of August and annual water availability. There is a 3% chance that at least 6% of the electricity demand cannot be met in August, and a 21% chance of not meeting 0.1% or more of the load in the Western US grid. The regional variability in water availability contributes significantly to the reliability of the grid and could provide trade-off opportunities in times of stress. This paper is the first to explore operational grid impacts imposed by droughts in the Western U.S. grid.

  18. The Czech National Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.

    2017-10-01

    The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a network fast enough to provide transparent access to all resources. We describe in more detail the computing infrastructure, which is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest part of the CPUs is still accessible via distributed TORQUE servers, providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided to the EGI FedCloud environment with a cloud interface, and there is also a Hadoop cluster provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFSv4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics will be given.

  19. An improved ant colony optimization algorithm with fault tolerance for job scheduling in grid computing systems

    PubMed Central

    Idris, Hajara; Junaidu, Sahalu B.; Adewumi, Aderemi O.

    2017-01-01

    The Grid scheduler schedules user jobs on the best available resource in terms of resource characteristics by optimizing job execution time. Resource failure in the Grid is no longer an exception but a regularly occurring event, as resources are increasingly being used by the scientific community to solve computationally intensive problems which typically run for days or even months. It is therefore absolutely essential that these long-running applications are able to tolerate failures and avoid re-computations from scratch after a resource failure has occurred, in order to satisfy the user's Quality of Service (QoS) requirement. Job scheduling with fault tolerance in Grid computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper uses the resource failure rate as well as a checkpoint-based rollback recovery strategy. Checkpointing aims at reducing the amount of work that is lost upon failure of the system by immediately saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show that there is an improvement in the user's QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated in it. The performance of the two algorithms was measured in terms of three main scheduling performance metrics: makespan, throughput and average turnaround time. PMID:28545075
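    As a rough illustration of the scheduling core, the sketch below runs a minimal ACO loop in which the heuristic desirability of a resource is discounted by its failure rate. The weighting scheme, parameters and data are illustrative assumptions, and the checkpoint/rollback recovery is omitted:

```python
# Minimal ACO job-scheduling sketch that biases against failure-prone
# resources; not the paper's exact formulation.
import random

JOBS = [4.0, 2.5, 7.0, 1.5, 3.0]             # job lengths (arbitrary units)
SPEED = [1.0, 2.0, 1.5]                       # resource speeds
FAIL_RATE = [0.05, 0.20, 0.10]                # per-resource failure rates

ALPHA, BETA, RHO, ANTS, ITERS = 1.0, 2.0, 0.5, 10, 50
tau = [[1.0] * len(SPEED) for _ in JOBS]      # pheromone per (job, resource)

def heuristic(j, r):
    # Favor fast resources, penalized by their failure rate.
    exec_time = JOBS[j] / SPEED[r]
    return 1.0 / (exec_time * (1.0 + FAIL_RATE[r]))

def construct():
    assign = []
    for j in range(len(JOBS)):
        weights = [tau[j][r] ** ALPHA * heuristic(j, r) ** BETA
                   for r in range(len(SPEED))]
        assign.append(random.choices(range(len(SPEED)), weights)[0])
    return assign

def makespan(assign):
    load = [0.0] * len(SPEED)
    for j, r in enumerate(assign):
        load[r] += JOBS[j] / SPEED[r]
    return max(load)

best, best_ms = None, float("inf")
for _ in range(ITERS):
    for _ in range(ANTS):
        a = construct()
        ms = makespan(a)
        if ms < best_ms:
            best, best_ms = a, ms
    # Evaporate, then let the best-so-far assignment deposit pheromone.
    tau = [[(1 - RHO) * t for t in row] for row in tau]
    for j, r in enumerate(best):
        tau[j][r] += 1.0 / best_ms

print("best mapping:", best, "makespan:", round(best_ms, 2))
```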

  20. A Hierarchical and Distributed Approach for Mapping Large Applications to Heterogeneous Grids using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak

    2003-01-01

    In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high-quality mappings, and is fast and scalable.
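    A flat, non-hierarchical toy version of such a GA-based mapper can be written in a few lines. The sketch below minimizes makespan for a handful of tasks on three resources, with illustrative parameters rather than the paper's three-phase hierarchical scheme:

```python
# Minimal genetic-algorithm sketch for mapping tasks onto heterogeneous
# resources to minimize makespan. Task costs, speeds and GA parameters
# are synthetic.
import random

TASKS = [5.0, 3.0, 8.0, 2.0, 6.0, 4.0]        # task costs
SPEED = [1.0, 2.5, 1.5]                        # resource speeds
POP, GENS, PMUT = 30, 100, 0.1

def makespan(mapping):
    load = [0.0] * len(SPEED)
    for t, r in enumerate(mapping):
        load[r] += TASKS[t] / SPEED[r]
    return max(load)

def tournament(pop):
    a, b = random.sample(pop, 2)
    return min(a, b, key=makespan)

pop = [[random.randrange(len(SPEED)) for _ in TASKS] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    for _ in range(POP):
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, len(TASKS))          # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < PMUT:                     # point mutation
            child[random.randrange(len(TASKS))] = random.randrange(len(SPEED))
        nxt.append(child)
    pop = nxt

best = min(pop, key=makespan)
print("best mapping:", best, "makespan:", round(makespan(best), 2))
```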

  1. FermiGrid - experience and future plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chadwick, K.; Berman, E.; Canal, P.

    2007-09-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and allows movement of work (jobs, data) between the Campus Grid and National Grids such as the Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site-wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure: the successes and the problems.

  2. A policy system for Grid Management and Monitoring

    NASA Astrophysics Data System (ADS)

    Stagni, Federico; Santinelli, Roberto; LHCb Collaboration

    2011-12-01

    Organizations using a Grid computing model are faced with non-traditional administrative challenges: the heterogeneous nature of the underlying resources requires professionals acting as Grid administrators. Members of a Virtual Organization (VO) can use a subset of the available resources and services in the grid infrastructure, and in an ideal world, the more resources are exploited, the better. In the real world, the fewer faulty services, the better: experienced Grid administrators apply procedures for adding and removing services based on their status, as reported by an ever-growing set of monitoring tools. When a procedure is agreed and well-exercised, a formal policy can be derived. For this reason, using the DIRAC framework in the LHCb collaboration, we developed a policy system that can enforce management and operational policies in a VO-specific fashion. A single policy makes an assessment of the status of a subject relative to one or more pieces of monitoring information. Subjects of the policies are monitored entities of an established Grid ontology. The status of a given entity is evaluated against a number of policies, whose results are then combined by a Policy Decision Point. These results are enforced in a Policy Enforcement Point, which provides plug-ins for actions such as raising alarms, sending notifications, and automatic addition and removal of services and resources from the Grid mask. Policy results are shown in the web portal, and site-specific views are also provided. This innovative system provides advantages in terms of procedure automation, information aggregation and problem solving.
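    The decision/enforcement split described above can be sketched as follows. The policy names, statuses and actions are hypothetical, not DIRAC's actual classes, and the combination rule (worst status wins) is an illustrative assumption:

```python
# Minimal sketch of a policy decision/enforcement pipeline.
from enum import Enum

class Status(Enum):
    ACTIVE = 1
    DEGRADED = 2
    BANNED = 3

def downtime_policy(entity):
    return Status.BANNED if entity["in_downtime"] else Status.ACTIVE

def efficiency_policy(entity):
    return Status.DEGRADED if entity["job_efficiency"] < 0.7 else Status.ACTIVE

def decision_point(entity, policies):
    """Combine individual policy results: the worst status wins."""
    return max((p(entity) for p in policies), key=lambda s: s.value)

def enforcement_point(entity, status):
    """Plug-in style actions keyed on the decided status."""
    actions = {
        Status.BANNED: lambda e: print(f"remove {e['name']} from the mask"),
        Status.DEGRADED: lambda e: print(f"notify admins about {e['name']}"),
        Status.ACTIVE: lambda e: None,
    }
    actions[status](entity)

site = {"name": "CE.example.org", "in_downtime": False, "job_efficiency": 0.55}
enforcement_point(site, decision_point(site, [downtime_policy, efficiency_policy]))
```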

  3. FermiGrid—experience and future plans

    NASA Astrophysics Data System (ADS)

    Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Sharma, N.; Timm, S.; Yocum, D. R.

    2008-07-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and allows movement of work (jobs, data) between the Campus Grid and National Grids such as the Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site-wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure: the successes and the problems.

  4. Oregon Magnetic and Gravity Maps and Data: A Web Site for Distribution of Data

    USGS Publications Warehouse

    Roberts, Carter W.; Kucks, Robert P.; Hill, Patricia L.

    2008-01-01

    This web site gives the results of a USGS project to acquire the best available, public-domain aeromagnetic and gravity data in the United States and merge these data into uniform, composite grids for each state. The results for the State of Oregon are presented here. Files of aeromagnetic and gravity grids and images are available for downloading. In Oregon, 49 magnetic surveys have been knit together to form a single digital grid and map. Also, a complete Bouguer gravity anomaly grid and map was generated from 40,665 gravity station measurements in and adjacent to Oregon. In addition, a map shows the location of the aeromagnetic surveys, color-coded by survey flight-line spacing. This project was supported by the Mineral Resource Program of the USGS.

  5. Dynamic partitioning as a way to exploit new computing paradigms: the cloud use case.

    NASA Astrophysics Data System (ADS)

    Ciaschini, Vincenzo; Dal Pra, Stefano; dell'Agnello, Luca

    2015-12-01

    The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which has proved successful and still meets its goals. However, Grid technology has not spread much over other communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. WLCG experiments aim to integrate their existing computing model with cloud deployments and take advantage of so-called opportunistic resources (including HPC facilities), which are usually not Grid compliant. One missing feature in the most common cloud frameworks is the concept of a job scheduler, which plays a key role in a traditional computing centre by enabling fair-share based access to the resources for the experiments in a scenario where demand greatly outstrips availability. At CNAF we are investigating the possibility of accessing the Tier-1 computing resources as an OpenStack based cloud service. The system, exploiting the dynamic partitioning mechanism already being used to enable multicore computing, allowed us to avoid a static splitting of the computing resources in the Tier-1 farm while permitting a share-friendly approach. The hosts in a dynamically partitioned farm may be moved to or from the partition, according to suitable policies for request and release of computing resources. Nodes being requested in the partition switch their role and become available to play a different one. In the cloud use case, hosts may switch from acting as Worker Nodes in the batch system farm to cloud compute node members, made available to tenants. In this paper we describe the dynamic partitioning concept, its implementation and its integration with our current batch system, LSF.
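    A minimal sketch of the partitioning idea, with hosts switching between batch and cloud roles under an illustrative one-host-per-call policy (not the actual CNAF implementation):

```python
# Minimal sketch of dynamic partitioning: hosts switch between a batch
# worker-node role and a cloud compute-node role according to demand.
# The rebalancing policy and thresholds are illustrative assumptions.
class Host:
    def __init__(self, name):
        self.name, self.role = name, "batch"

def rebalance(hosts, batch_queue_depth, cloud_requests):
    """Move at most one host per call toward the side with unmet demand."""
    if cloud_requests > 0:
        candidates = [h for h in hosts if h.role == "batch"]
        if candidates:
            candidates[0].role = "cloud"
    elif batch_queue_depth > 0:
        candidates = [h for h in hosts if h.role == "cloud"]
        if candidates:
            candidates[0].role = "batch"

farm = [Host(f"wn{i:03d}") for i in range(4)]
rebalance(farm, batch_queue_depth=0, cloud_requests=2)
print({h.name: h.role for h in farm})
```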

  6. Using OSG Computing Resources with (iLC)Dirac

    NASA Astrophysics Data System (ADS)

    Sailer, A.; Petric, M.; CLICdp Collaboration

    2017-10-01

    CPU cycles for small experiments and projects can be scarce, so making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called 'SiteDirectors', which submit directly to the local batch system. This in turn requires additional dedicated effort for small experiments at the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows accessing them within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure with the DIRAC software. To allow sending jobs to HTCondor-CE and legacy Globus computing elements inside DIRAC, the required wrapper classes were developed. Not only is the usage of these types of computing elements now completely transparent for all DIRAC instances, which makes DIRAC a flexible solution for OSG based virtual organisations, but it also allows LCG Grid sites to move to the HTCondor-CE software without shutting DIRAC based VOs out of their site. In these proceedings we detail how we interfaced the DIRAC system to the HTCondor-CE and Globus computing elements, explain the encountered obstacles and the solutions developed, and describe how the linear collider community uses resources in the OSG.
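    The wrapper-class pattern mentioned above can be sketched as a common computing-element interface with back-end-specific submission classes; the names and methods below are illustrative, not DIRAC's actual API:

```python
# Minimal sketch of the wrapper-class pattern: a common ComputingElement
# interface with one subclass per back end. Class and method names are
# hypothetical.
from abc import ABC, abstractmethod

class ComputingElement(ABC):
    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    @abstractmethod
    def submit(self, job_description: str) -> str:
        """Submit a job and return a back-end-specific job ID."""

class HTCondorCE(ComputingElement):
    def submit(self, job_description: str) -> str:
        # Would build an HTCondor submit description and contact the CE.
        return f"htcondor://{self.endpoint}/42"

class GlobusCE(ComputingElement):
    def submit(self, job_description: str) -> str:
        # Would translate the description and use GRAM-style submission.
        return f"gram://{self.endpoint}/42"

for ce in (HTCondorCE("ce1.osg.example"), GlobusCE("ce2.osg.example")):
    print(ce.submit("executable=run.sh"))
```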

  7. A Guidebook on Grid Interconnection and Islanded Operation of Mini-Grid Power Systems Up to 200 kW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greacen, Chris; Engel, Richard; Quetchenbach, Thomas

    A Guidebook on Grid Interconnection and Islanded Operation of Mini-Grid Power Systems Up to 200 kW is intended to help meet the widespread need for guidance, standards, and procedures for interconnecting mini-grids with the central electric grid as rural electrification advances in developing countries, bringing these once separate power systems together. The guidebook aims to help owners and operators of renewable energy mini-grids understand the technical options available, safety and reliability issues, and engineering and administrative costs of different choices for grid interconnection. The guidebook is intentionally brief but includes a number of appendices that point the reader to additional resources for in-depth information. Not included in the scope of the guidebook are policy concerns about “who pays for what,” how tariffs should be set, or other financial issues that are also paramount when “the little grid connects to the big grid.”

  8. Grid commerce, market-driven G-negotiation, and Grid resource management.

    PubMed

    Sim, Kwang Mong

    2006-12-01

    Although the management of resources is essential for realizing a computational grid, providing an efficient resource allocation mechanism is a complex undertaking. Since Grid providers and consumers may be independent bodies, negotiation among them is necessary. The contribution of this paper is showing that market-driven agents (MDAs) are appropriate tools for Grid resource negotiation. MDAs are e-negotiation agents designed with the flexibility of: 1) making adjustable amounts of concession taking into account market rivalry, outside options, and time preferences and 2) relaxing bargaining terms in the face of intense pressure. A heterogeneous testbed consisting of several types of e-negotiation agents to simulate a Grid computing environment was developed. It compares the performance of MDAs against other e-negotiation agents (e.g., Kasbah) in a Grid-commerce environment. Empirical results show that MDAs generally achieve: 1) higher budget efficiencies in many market situations than other e-negotiation agents in the testbed and 2) higher success rates in acquiring Grid resources under high Grid loadings.
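    A minimal sketch of a time- and market-dependent concession function in the spirit of MDAs; the functional form and parameters are illustrative assumptions, not the paper's exact model:

```python
# Minimal sketch of a market-driven concession strategy: the agent's bid
# moves from its ideal price toward its reserve price as the deadline
# nears, conceding faster when rivals are many and outside options few.
def proposal(t, deadline, ideal, reserve, rivals, options):
    """Price offered at time t; 'toughness' shapes the concession curve."""
    toughness = max(0.2, 1.0 + options - 0.5 * rivals)
    progress = (t / deadline) ** (1.0 / toughness)
    return ideal + (reserve - ideal) * progress

for t in (0, 5, 10):
    print(t, round(proposal(t, 10, ideal=2.0, reserve=8.0, rivals=3, options=1), 2))
```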

  9. Safe Grid

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Stewart, Helen; Korsmeyer, David (Technical Monitor)

    2003-01-01

    The biggest users of GRID technologies come from the science and technology communities, consisting of government, industry and academia (national and international). The NASA GRID is moving to a higher technology readiness level (TRL) today; as a joint effort among these leaders within government, academia, and industry, the NASA GRID plans to extend availability to enable scientists and engineers across these geographical boundaries to collaborate in solving important problems facing the world in the 21st century. In order to enable NASA programs and missions to use IPG resources for program and mission design, the IPG capabilities need to be accessible from inside the NASA center networks. However, because different NASA centers maintain different security domains, GRID penetration across different firewalls is a concern for center security staff. This is why some IPG resources have been separated from the NASA center networks. Also, because of center network security and ITAR concerns, the NASA IPG resource owner may not have full control over who can access the resources remotely from outside the NASA center. In order to obtain organizational approval for secured remote access, the IPG infrastructure needs to be adapted to work with the NASA business process. Improvements need to be made before the IPG can be used for NASA program and mission development. The Secured Advanced Federated Environment (SAFE) technology is designed to provide federated security across NASA center and NASA partner security domains. Instead of one giant center firewall, which can be difficult to modify for different GRID applications, the SAFE 'micro security domain' provides a large number of professionally managed 'micro firewalls' that allow NASA centers to accept remote IPG access without the worry of damaging other center resources. The SAFE policy-driven, capability-based federated security mechanism can enable remote access from outside of NASA centers that is jointly approved by the organization and the resource owner. A SAFE-enabled IPG can make IPG capabilities available to NASA mission design teams across different NASA center and partner company firewalls. This paper first discusses some of the potential security issues for the IPG to work across NASA center firewalls. We then present the SAFE federated security model. Finally, we present the concept of the architecture of a SAFE-enabled IPG and how it can benefit NASA mission development.

  10. Grid site availability evaluation and monitoring at CMS

    DOE PAGES

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; ...

    2017-10-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from hundreds to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC, the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site issues from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  11. Grid site availability evaluation and monitoring at CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from hundreds to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC, the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site issues from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  12. Grid site availability evaluation and monitoring at CMS

    NASA Astrophysics Data System (ADS)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; Lammel, Stephan; Sciabà, Andrea

    2017-10-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from hundreds to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC, the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Enhancements to better distinguish site issues from central service issues and to make evaluations more transparent and informative to site support staff are planned.
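    A minimal sketch of availability evaluation from periodic probe results, loosely in the spirit of the system described in these records; the probe names, the 15-minute sampling and the 80% usability threshold are illustrative assumptions:

```python
# Minimal sketch: site availability as the fraction of sampling intervals
# in which all critical probes passed. Probe names and data are synthetic.
def availability(samples):
    """Fraction of intervals in which every critical test succeeded."""
    ok = sum(1 for s in samples if all(s.values()))
    return ok / len(samples)

# One entry per 15-minute interval: result of each critical probe.
window = [{"ce_submit": True, "storage_read": True}] * 22 \
       + [{"ce_submit": False, "storage_read": True}] * 2

avail = availability(window)
print(f"site availability: {avail:.1%}",
      "-> usable" if avail >= 0.8 else "-> waiting room")
```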

  13. Emission & Generation Resource Integrated Database (eGRID)

    EPA Pesticide Factsheets

    The Emissions & Generation Resource Integrated Database (eGRID) is an integrated source of data on environmental characteristics of electric power generation. Twelve federal databases are represented by eGRID, which provides air emission and resource mix information for thousands of power plants and generating companies. eGRID allows direct comparison of the environmental attributes of electricity from different plants, companies, States, or regions of the power grid.

  14. mGrid: A load-balanced distributed computing environment for the remote execution of the user-defined Matlab code

    PubMed Central

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-01-01

    Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code, and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet. PMID:16539707

  15. mGrid: a load-balanced distributed computing environment for the remote execution of the user-defined Matlab code.

    PubMed

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-03-15

    Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code, and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet.
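    The core mGrid idea, shipping user-defined code together with its run-time variables to a worker, can be sketched in a few lines. The in-process "transport" below stands in for mGrid's PHP/Apache layer, and the whole sketch is illustrative rather than mGrid's actual Matlab-based mechanics:

```python
# Minimal sketch of remote execution with automatic code + variable
# shipping, using only the standard library.
import pickle

def remote_worker(payload: bytes) -> bytes:
    """Pretend server side: unpack code + variables, run, pack result."""
    func, args = pickle.loads(payload)
    return pickle.dumps(func(*args))

def remote_call(func, *args):
    """Client side: pack the function with its arguments and 'send' it."""
    result = remote_worker(pickle.dumps((func, args)))
    return pickle.loads(result)

def simulate(steps, dt):
    # Stand-in for a user-defined model; runs unchanged on the worker.
    x = 0.0
    for _ in range(steps):
        x += dt * (1.0 - x)
    return x

print(remote_call(simulate, 1000, 0.01))
```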

  16. OxfordGrid: a web interface for pairwise comparative map views.

    PubMed

    Yang, Hongyu; Gingle, Alan R

    2005-12-01

    OxfordGrid is a web application and database schema for storing and interactively displaying genetic map data in a comparative, dot-plot fashion. Its display is composed of a matrix of cells, each representing a pairwise comparison of mapped probe data for two linkage groups or chromosomes. These are arranged along the axes, with one forming the grid columns and the other the grid rows, and the degree and pattern of synteny/colinearity between the two linkage groups is manifested in each cell's dot density and structure. A mouse click over the selected grid cell launches an image map-based display for the selected cell. Both individual and linear groups of mapped probes can be selected and displayed. Also, configurable links can be used to access other web resources for mapped probe information. OxfordGrid is implemented in C#/ASP.NET and the package, including MySQL schema creation scripts, is available at ftp://cggc.agtec.uga.edu/OxfordGrid/.
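    A single OxfordGrid-style cell is essentially a scatter plot of shared probes at their two map positions; a minimal sketch with synthetic probe data:

```python
# Minimal sketch of one comparative dot-plot cell: each dot marks a probe
# mapped on both linkage groups, plotted at its two map positions.
import matplotlib.pyplot as plt

# (position on linkage group A, position on linkage group B) per shared probe
shared_probes = [(5, 7), (12, 15), (20, 24), (33, 30), (41, 45), (50, 52)]

xs, ys = zip(*shared_probes)
plt.scatter(xs, ys, s=12)
plt.xlabel("map position on linkage group A (cM)")
plt.ylabel("map position on linkage group B (cM)")
plt.title("pairwise comparative map view (synthetic data)")
plt.savefig("oxford_cell.png")   # one grid cell of the matrix
```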

  17. Illinois, Indiana, and Ohio Magnetic and Gravity Maps and Data: A Website for Distribution of Data

    USGS Publications Warehouse

    Daniels, David L.; Kucks, Robert P.; Hill, Patricia L.

    2008-01-01

    This web site gives the results of a USGS project to acquire the best available, public-domain aeromagnetic and gravity data in the United States and merge these data into uniform, composite grids for each state. The results for the three states, Illinois, Indiana, and Ohio, are presented here in one site. Files of aeromagnetic and gravity grids and images are available for these states for downloading. In Illinois, Indiana, and Ohio, 19 magnetic surveys have been knit together to form a single digital grid and map. Also, a complete Bouguer gravity anomaly grid and map was generated from 128,227 gravity station measurements in and adjacent to Illinois, Indiana, and Ohio. In addition, a map shows the location of the aeromagnetic surveys, color-coded by survey flight-line spacing. This project was supported by the Mineral Resource Program of the USGS.

  18. A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0

    PubMed Central

    Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.

    2014-01-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072

  19. Toward Verification of USM3D Extensions for Mixed Element Grids

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Frink, Neal T.; Ding, Ejiang; Parlette, Edward B.

    2013-01-01

    The unstructured tetrahedral grid cell-centered finite volume flow solver USM3D has recently been extended to handle mixed element grids composed of hexahedral, prismatic, pyramidal, and tetrahedral cells. Presently, two turbulence models, namely baseline Spalart-Allmaras (SA) and Menter Shear Stress Transport (SST), support mixed element grids. This paper provides an overview of the various numerical discretization options available in the newly enhanced USM3D. Using the SA model, the flow solver extensions are verified on three two-dimensional test cases available on the Turbulence Modeling Resource website at the NASA Langley Research Center. The test cases are the zero pressure gradient flat plate, planar shear, and bump-in-channel. The effect of cell topologies on the flow solution is also investigated using the planar shear case. Finally, an assessment of the various cell and face gradient options is performed on the zero pressure gradient flat plate case.

  20. Initial steps towards a production platform for DNA sequence analysis on the grid.

    PubMed

    Luyf, Angela C M; van Schaik, Barbera D C; de Vries, Michel; Baas, Frank; van Kampen, Antoine H C; Olabarriaga, Silvia D

    2010-12-14

    Bioinformatics is confronted with a new data explosion due to the availability of high-throughput DNA sequencers. Data storage and analysis become a problem on local servers, and therefore it is necessary to switch to other IT infrastructures. Grid and workflow technology can help to handle the data more efficiently, as well as facilitate collaborations. However, interfaces to grids are often unfriendly to novice users. In this study we reused a platform that was developed in the VL-e project for the analysis of medical images. Data transfer, workflow execution and job monitoring are operated from one graphical interface. We developed workflows for two sequence alignment tools (BLAST and BLAT) as a proof of concept. The analysis time was significantly reduced. All workflows and executables are available to the members of the Dutch Life Science Grid and the VL-e Medical virtual organizations. All components are open source and can be transported to other grid infrastructures. The availability of in-house expertise and tools facilitates the usage of grid resources by new users. Our first results indicate that this is a practical, powerful and scalable solution to address the capacity and collaboration issues raised by the deployment of next generation sequencers. We currently adopt this methodology on a daily basis for DNA sequencing and other applications. More information and source code are available via http://www.bioinformaticslaboratory.nl/

  1. Climate simulations and services on HPC, Cloud and Grid infrastructures

    NASA Astrophysics Data System (ADS)

    Cofino, Antonio S.; Blanco, Carlos; Minondo Tshuma, Antonio

    2017-04-01

    Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the climate community. These paradigms are modifying the way climate applications are executed. By using these technologies, the number, variety and complexity of experiments and resources are increasing substantially. But although computational capacity is increasing, the traditional applications and tools used by the community are not good enough to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of running climate simulations and services on Grid, Cloud and HPC infrastructures and how to tackle them. The Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. To solve those challenges, solutions using the DRM4G framework will be shown. DRM4G provides a good framework to manage a large volume and variety of computing resources for climate experiments. This work has been supported by the Spanish National R&D Plan under projects WRF4G (CGL2011-28864), INSIGNIA (CGL2016-79210-R) and MULTI-SDM (CGL2015-66583-R); the IS-ENES2 project from the 7FP of the European Commission (grant agreement no. 312979); the European Regional Development Fund (ERDF); and the Programa de Personal Investigador en Formación Predoctoral from Universidad de Cantabria and Government of Cantabria.

  2. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    NASA Astrophysics Data System (ADS)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has evolved widely over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific grid open source or industrial products; rather, it comprises a set of capabilities virtually within any kind of software, used to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active grid computing application field is the full virtualization of scientific instruments in order to increase their availability and decrease operational and maintenance costs. Computational and information grids allow real-world objects to be managed in a service-oriented way using industrial, widely adopted standards.

  3. Integrating Renewable Generation into Grid Operations: Four International Experiences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weimar, Mark R.; Mylrea, Michael E.; Levin, Todd

    International experiences with power sector restructuring and the resultant impacts on bulk power grid operations and planning may provide insight into policy questions for the evolving United States power grid, as resource mixes are changing in response to fuel prices, an aging generation fleet, and climate goals. Australia, Germany, Japan and the UK were selected to represent a range in the level and attributes of electricity industry liberalization in order to draw comparisons across a variety of regions in the United States, such as California, ERCOT, the Southwest Power Pool and the Southeast Reliability Region. The study draws conclusions through a literature review of the four case study countries with regard to the changing resource mix and the electricity industry sector structure and their impact on grid operations and planning. This paper derives lessons learned and synthesizes implications for the United States based on the challenges faced by the four selected countries. Each country was examined to determine the challenges to its bulk power sector based on its changing resource mix, market structure, policies driving the changing resource mix, and policies driving restructuring. Each country's approach to solving those challenges was examined, as well as how each country's market structure either exacerbated or mitigated the approaches to solving the challenges to its bulk power grid operations and planning. All countries' policies encourage renewable energy generation. One significant finding concerned the low to zero marginal cost of intermittent renewables and its potential negative impact on long-term resource adequacy. No dominant solution has emerged, although a capacity market was introduced in the UK and is being contemplated in Japan. Germany has proposed the Energy Market 2.0 to encourage flexible generation investment. The grid operator in Australia proposed several approaches to maintaining synchronous generation. Interconnections to other regions provide added opportunities for balancing that would not be available otherwise and, at this point, have allowed for the integration of renewables.

  4. Using the GlideinWMS System as a Common Resource Provisioning Layer in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balcas, J.; Belforte, S.; Bockelman, B.

    2015-12-23

    CMS will require access to more than 125k processor cores at the beginning of Run 2 in 2015 to carry out its ambitious physics program with more and higher-complexity events. During Run 1 these resources were predominantly provided by a mix of grid sites and local batch resources. During the long shutdown, cloud infrastructures, diverse opportunistic resources and HPC supercomputing centers were made available to CMS, which further complicated the operations of the submission infrastructure. In this presentation we will discuss the CMS effort to adopt and deploy the glideinWMS system as a common resource provisioning layer for grid, cloud, local batch, and opportunistic resources and sites. We will address the challenges associated with integrating the various types of resources, the efficiency gains and simplifications associated with using a common resource provisioning layer, and discuss the solutions found. We will finish with an outlook of future plans for how CMS is moving forward on resource provisioning for more heterogeneous architectures and services.

  5. Opportunistic Resource Usage in CMS

    NASA Astrophysics Data System (ADS)

    Kreuzer, Peter; Hufnagel, Dirk; Dykstra, D.; Gutsche, O.; Tadel, M.; Sfiligoi, I.; Letts, J.; Wuerthwein, F.; McCrea, A.; Bockelman, B.; Fajardo, E.; Linares, L.; Wagner, R.; Konstantinov, P.; Blumenfeld, B.; Bradley, D.; Cms Collaboration

    2014-06-01

    CMS is using a tiered setup of dedicated computing resources provided by sites distributed over the world and organized in WLCG. These sites pledge resources to CMS and prepare them especially for CMS to run the experiment's applications. But there are more resources available opportunistically, both on the GRID and in local university and research clusters, which can be used for CMS applications. We will present CMS' strategy to use opportunistic resources and prepare them dynamically to run CMS applications. CMS is able to run its applications on resources that can be reached through the GRID or through EC2-compliant cloud interfaces. Even resources that can be used through ssh login nodes can be harnessed. All of these usage modes are integrated transparently into the GlideIn WMS submission infrastructure, which is the basis of CMS' opportunistic resource usage strategy. Technologies like Parrot, to mount the software distribution via CVMFS, and xrootd, for access to data and simulation samples via the WAN, are used and will be described. We will summarize the experience with opportunistic resource usage and give an outlook for the restart of LHC data taking in 2015.

  6. Virtualizing access to scientific applications with the Application Hosting Environment

    NASA Astrophysics Data System (ADS)

    Zasada, S. J.; Coveney, P. V.

    2009-12-01

    The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion. Program summary. Program title: Application Hosting Environment 2.0 Catalogue identifier: AEEJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Public Licence, Version 2 No. of lines in distributed program, including test data, etc.: not applicable No. of bytes in distributed program, including test data, etc.: 1 685 603 766 Distribution format: tar.gz Programming language: Perl (server), Java (client) Computer: x86 Operating system: Linux (server), Linux/Windows/MacOS (client) RAM: 134 217 728 (server), 67 108 864 (client) bytes Classification: 6.5 External routines: VirtualBox (server), Java (client) Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2]. Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid. Solution method: To address the above problem, we have developed AHE, a lightweight interface designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications. Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox (http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide. Running time: Not applicable. References: J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf. P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406-418.

  7. The Legnaro-Padova distributed Tier-2: challenges and results

    NASA Astrophysics Data System (ADS)

    Badoer, Simone; Biasotto, Massimo; Costa, Fulvia; Crescente, Alberto; Fantinel, Sergio; Ferrari, Roberto; Gulmini, Michele; Maron, Gaetano; Michelotto, Michele; Sgaravatto, Massimo; Toniolo, Nicola

    2014-06-01

    The Legnaro-Padova Tier-2 is a computing facility serving the ALICE and CMS LHC experiments. It also supports other High Energy Physics experiments and other virtual organizations of different disciplines, which can opportunistically harness idle resources if available. The unique characteristic of this Tier-2 is its topology: the computational resources are spread over two different sites, about 15 km apart, the INFN Legnaro National Laboratories and the INFN Padova unit, connected through a 10 Gbps network link (soon to be updated to 20 Gbps). Nevertheless, these resources are seamlessly integrated and are exposed as a single computing facility. Despite this intrinsic complexity, the Legnaro-Padova Tier-2 ranks among the best Grid sites in terms of reliability and availability. The Tier-2 comprises about 190 worker nodes, providing about 26000 HS06 in total. These computing nodes are managed by the LSF local resource management system and are accessible using a Grid-based interface implemented through multiple CREAM CE front-ends. dCache, xrootd and Lustre are the storage systems in use at the Tier-2: about 1.5 PB of disk space is available to users in total, through multiple access protocols. A 10 Gbps network link, planned to be doubled in the coming months, connects the Tier-2 to the WAN. This link is used for the LHC Open Network Environment (LHCONE) and for other general purpose traffic. In this paper we discuss the experiences at the Legnaro-Padova Tier-2: the problems that had to be addressed, the lessons learned, and the implementation choices. We also present the tools used for the daily management operations. These include DOCET, a Java-based web tool designed, implemented and maintained at the Legnaro-Padova Tier-2 and deployed also at other sites, such as the Italian LHC T1. DOCET provides a uniform interface to manage all the information about the physical resources of a computing center. It is also used as a documentation repository available to the Tier-2 operations team. Finally, we discuss the foreseen developments of the existing infrastructure. These include in particular the evolution from a Grid-based resource towards a Cloud-based computing facility.

  8. Distributed Accounting on the Grid

    NASA Technical Reports Server (NTRS)

    Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.

    2001-01-01

    By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high performance computational and data storage resources into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the computational grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.
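    A minimal sketch of the distributed-accounting idea: per-site usage records aggregated into grid-wide totals and checked against allocations. Record fields and numbers are illustrative assumptions:

```python
# Minimal sketch of grid accounting: each site emits usage records; an
# aggregator totals consumption per user across sites and checks the
# totals against grid-wide allocations. All data are synthetic.
from collections import defaultdict

usage_records = [
    {"site": "siteA", "user": "alice", "cpu_hours": 120.0},
    {"site": "siteB", "user": "alice", "cpu_hours": 40.5},
    {"site": "siteA", "user": "bob",   "cpu_hours": 300.0},
]
allocations = {"alice": 200.0, "bob": 250.0}   # grid-wide CPU-hour grants

totals = defaultdict(float)
for rec in usage_records:
    totals[rec["user"]] += rec["cpu_hours"]

for user, used in totals.items():
    status = "over allocation" if used > allocations[user] else "ok"
    print(f"{user}: {used:.1f} of {allocations[user]:.1f} CPU-hours ({status})")
```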

  9. Role of Smarter Grids in Variable Renewable Resource Integration (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, M.

    2012-07-01

    This presentation discusses the role of smarter grids in variable renewable resource integration and references material from a forthcoming ISGAN issue paper: Smart Grid Contributions to Variable Renewable Resource Integration, co-written by the presenter and currently in review.

  10. Emissions & Generation Resource Integrated Database (eGRID), eGRID2012

    EPA Pesticide Factsheets

    The Emissions & Generation Resource Integrated Database (eGRID) is a comprehensive source of data on the environmental characteristics of almost all electric power generated in the United States. These environmental characteristics include air emissions for nitrogen oxides, sulfur dioxide, carbon dioxide, methane, and nitrous oxide; emissions rates; net generation; resource mix; and many other attributes. eGRID2012 Version 1.0 is the eighth edition of eGRID, which contains the complete release of year 2009 data, as well as year 2007, 2005, and 2004 data. For year 2009 data, all the data are contained in a single Microsoft Excel workbook, which contains boiler, generator, plant, state, power control area, eGRID subregion, NERC region, U.S. total and grid gross loss factor tabs. Full documentation, summary data, eGRID subregion and NERC region representational maps, and GHG emission factors are also released in this edition. The fourth edition of eGRID, eGRID2002 Version 2.01, containing year 1996 through 2000 data is located on the eGRID Archive page (http://www.epa.gov/cleanenergy/energy-resources/egrid/archive.html). The current edition of eGRID and the archived edition of eGRID contain the following years of data: 1996 - 2000, 2004, 2005, and 2007. eGRID has no other years of data.
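    As a minimal sketch of putting such a release to use, one might load a plant-level sheet and compare CO2 emission rates by state; the file name, sheet name and column names below are assumptions about the workbook layout rather than verified identifiers:

```python
# Minimal sketch: load a plant-level eGRID table and rank states by CO2
# emission rate. File, sheet and column names are hypothetical stand-ins
# for the actual eGRID2012 workbook layout.
import pandas as pd

plants = pd.read_excel("eGRID2012_data.xlsx", sheet_name="PLNT09")  # hypothetical
# Assumed columns: PSTATABB (state), PLNGENAN (net generation, MWh),
# PLCO2AN (annual CO2 emissions, tons).
rate = (plants.groupby("PSTATABB")
              .apply(lambda g: g["PLCO2AN"].sum() / g["PLNGENAN"].sum())
              .sort_values())
print(rate.head(10))   # ten states with the lowest CO2 tons per MWh
```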

  11. Optimum Aggregation and Control of Spatially Distributed Flexible Resources in Smart Grid

    DOE PAGES

    Bhattarai, Bishnu; Mendaza, Iker Diaz de Cerio; Myers, Kurt S.; ...

    2017-03-24

    This paper presents an algorithm to optimally aggregate spatially distributed flexible resources at strategic microgrid/smart-grid locations. The aggregation reduces a distribution network having thousands of nodes to an equivalent network with a few aggregated nodes, thereby enabling distribution system operators (DSOs) to make faster operational decisions. Moreover, the aggregation enables flexibility from small distributed flexible resources to be traded to different power and energy markets. A hierarchical control architecture comprising a combination of centralized and decentralized control approaches is proposed to practically deploy the aggregated flexibility. The proposed method serves as a great operational tool for DSOs to decide the exact amount of required flexibilities from different network section(s) for solving grid constraint violations. The effectiveness of the proposed method is demonstrated through simulation of three operational scenarios in a real low voltage distribution system having high penetrations of electric vehicles and heat pumps. Finally, the simulation results demonstrated that the aggregation helps DSOs not only in taking faster operational decisions, but also in effectively utilizing the available flexibility.
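    The aggregation step itself reduces to collapsing leaf-node flexibilities into one equivalent node per network section; a minimal sketch with synthetic device data (not the paper's algorithm, which also handles network constraints and control):

```python
# Minimal sketch of spatial aggregation: many leaf nodes collapse into
# one aggregated node per network section, summing up/down flexibility.
from collections import defaultdict

# (network section, up-regulation kW, down-regulation kW) per device
devices = [
    ("feeder1", 3.0, 1.5),   # e.g. a heat pump
    ("feeder1", 7.0, 7.0),   # e.g. an electric vehicle charger
    ("feeder2", 4.0, 2.0),
]

aggregated = defaultdict(lambda: [0.0, 0.0])
for section, up, down in devices:
    aggregated[section][0] += up
    aggregated[section][1] += down

for section, (up, down) in aggregated.items():
    print(f"{section}: +{up} kW / -{down} kW aggregated flexibility")
```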

  12. Optimum Aggregation and Control of Spatially Distributed Flexible Resources in Smart Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattarai, Bishnu; Mendaza, Iker Diaz de Cerio; Myers, Kurt S.

    This paper presents an algorithm to optimally aggregate spatially distributed flexible resources at strategic microgrid/smart-grid locations. The aggregation reduces a distribution network having thousands of nodes to an equivalent network with a few aggregated nodes, thereby enabling distribution system operators (DSOs) to make faster operational decisions. Moreover, the aggregation enables flexibility from small distributed flexible resources to be traded to different power and energy markets. A hierarchical control architecture comprising a combination of centralized and decentralized control approaches is proposed to practically deploy the aggregated flexibility. The proposed method serves as a great operational tool for DSOs to decide the exact amount of required flexibilities from different network section(s) for solving grid constraint violations. The effectiveness of the proposed method is demonstrated through simulation of three operational scenarios in a real low voltage distribution system having high penetrations of electric vehicles and heat pumps. Finally, the simulation results demonstrated that the aggregation helps DSOs not only in taking faster operational decisions, but also in effectively utilizing the available flexibility.

  13. Earth System Grid II (ESG): Turning Climate Model Datasets Into Community Resources

    NASA Astrophysics Data System (ADS)

    Williams, D.; Middleton, D.; Foster, I.; Nevedova, V.; Kesselman, C.; Chervenak, A.; Bharathi, S.; Drach, B.; Cinquni, L.; Brown, D.; Strand, G.; Fox, P.; Garcia, J.; Bernholdte, D.; Chanchio, K.; Pouchard, L.; Chen, M.; Shoshani, A.; Sim, A.

    2003-12-01

    High-resolution, long-duration simulations performed with advanced DOE SciDAC/NCAR climate models will produce tens of petabytes of output. To be useful, this output must be made available to global change impacts researchers nationwide, both at national laboratories and at universities, other research laboratories, and other institutions. To this end, we propose to create a new Earth System Grid, ESG-II - a virtual collaborative environment that links distributed centers, users, models, and data. ESG-II will provide scientists with virtual proximity to the distributed data and resources that they require to perform their research. The creation of this environment will significantly increase the scientific productivity of U.S. climate researchers by turning climate datasets into community resources. In creating ESG-II, we will integrate and extend a range of Grid and collaboratory technologies, including the DODS remote access protocols for environmental data, Globus Toolkit technologies for authentication, resource discovery, and resource access, and Data Grid technologies developed in other projects. We will develop new technologies for (1) creating and operating "filtering servers" capable of performing sophisticated analyses, and (2) delivering results to users. In so doing, we will simultaneously contribute to climate science and advance the state of the art in collaboratory technology. We expect our results to be useful to numerous other DOE projects. The three-year R&D program will be undertaken by a talented and experienced team of computer scientists at five laboratories (ANL, LBNL, LLNL, NCAR, ORNL) and one university (ISI), working in close collaboration with climate scientists at several sites.

  14. Integrating Grid Services into the Cray XT4 Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NERSC; Cholia, Shreyas; Lin, Hwa-Chun Wendy

    2009-05-01

    The 38,640-core Cray XT4 "Franklin" system at the National Energy Research Scientific Computing Center (NERSC) is a massively parallel resource available to Department of Energy researchers that also provides on-demand grid computing to the Open Science Grid. The integration of grid services on Franklin presented various challenges, including fundamental differences between the interactive and compute nodes, a stripped-down compute-node operating system without dynamic library support, a shared-root environment, and idiosyncratic application launching. In our work, we describe how we resolved these challenges on a running, general-purpose production system to provide on-demand compute, storage, accounting, and monitoring services through generic grid interfaces that mask the underlying system-specific details from the end user.

  15. 2025 California Demand Response Potential Study - Charting California’s Demand Response Future. Final Report on Phase 2 Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alstone, Peter; Potter, Jennifer; Piette, Mary Ann

    California’s legislative and regulatory goals for renewable energy are changing the power grid’s dynamics. Increased penetration of variable generation resources connected to the bulk power system, as well as distributed energy resources (DERs) connected to the distribution system, affects the grid’s reliable operation over many different time scales (e.g., days to hours to minutes to seconds). As the state continues this transition, it will require careful planning to ensure resources with the right characteristics are available to meet changing grid management needs. Demand response (DR) has the potential to provide important resources for keeping the electricity grid stable and efficient, to defer upgrades to generation, transmission, and distribution systems, and to deliver customer economic benefits. This study estimates the potential size and cost of future DR resources for California’s three investor-owned utilities (IOUs): Pacific Gas and Electric Company (PG&E), Southern California Edison Company (SCE), and San Diego Gas & Electric Company (SDG&E). Our goal is to provide data-driven insights as the California Public Utilities Commission (CPUC) evaluates how to enhance DR’s role in meeting California’s resource planning needs and operational requirements. We address two fundamental questions: 1. What cost-competitive DR service types will meet California’s future grid needs as it moves towards clean energy and advanced infrastructure? 2. What is the size and cost of the expected resource base for the DR service types?

  16. How to keep the Grid full and working with ATLAS production and physics jobs

    NASA Astrophysics Data System (ADS)

    Pacheco Pagés, A.; Barreiro Megino, F. H.; Cameron, D.; Fassi, F.; Filipcic, A.; Di Girolamo, A.; González de la Hoz, S.; Glushkov, I.; Maeno, T.; Walker, R.; Yang, W.; ATLAS Collaboration

    2017-10-01

    The ATLAS production system provides the infrastructure to process the millions of events collected during LHC Run 1 and the first two years of Run 2 using grids, clouds, and high-performance computing. In this contribution we address the strategies and improvements that have been implemented in the production system to optimize performance and to achieve the highest efficiency of available resources from an operational perspective, focusing on recent developments.

  17. Implementation of data node in spatial information grid based on WS resource framework and WS notification

    NASA Astrophysics Data System (ADS)

    Zhang, Dengrong; Yu, Le

    2006-10-01

    An approach to constructing a data node in a spatial information grid (SIG) based on the Web Service Resource Framework (WSRF) and Web Service Notification (WSN) is described in this paper. Attention is paid to constructing and implementing the SIG resource layer, its most important part. A study of this layer finds that common SIG architectures cannot support persistent interaction with the clients of the services because they inherit the "stateless" and "non-persistent" limitations of Web Services. A WSRF/WSN-based data node is designed to overcome these shortcomings. Three different access modes are employed to test the availability of this node. Experimental results demonstrate that this service node successfully responds to standard OGC requests and returns specific spatial data in different network environments, and that it is stateful, dynamic, and persistent.

  18. Opportunistic Resource Usage in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreuzer, Peter; Hufnagel, Dirk; Dykstra, D.

    2014-01-01

    CMS is using a tiered setup of dedicated computing resources provided by sites distributed over the world and organized in WLCG. These sites pledge resources to CMS and prepare them especially for running the experiment's applications. But there are more resources available opportunistically, both on the Grid and in local university and research clusters, which can be used for CMS applications. We will present CMS's strategy to use opportunistic resources and prepare them dynamically to run CMS applications. CMS is able to run its applications on resources that can be reached through the Grid and through EC2-compliant cloud interfaces. Even resources that can be used through ssh login nodes can be harnessed. All of these usage modes are integrated transparently into the GlideIn WMS submission infrastructure, which is the basis of CMS's opportunistic resource usage strategy. Technologies like Parrot, to mount the software distribution via CVMFS, and xrootd, for access to data and simulation samples via the WAN, are used and will be described. We will summarize the experience with opportunistic resource usage and give an outlook for the restart of LHC data taking in 2015.

  19. Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid.

    PubMed

    Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul

    2017-02-01

    Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, one of the typical applications supported by IoT and a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of cumulative power cost and service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, traditionally used for traffic engineering in computer networks, to derive bounds on both power supply and user demand that achieve a high service reliability to users. Through an extensive performance evaluation, our data show that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide sustainable service reliability to users in the power grid.
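
    The network-calculus idea in the abstract can be made concrete with a small calculation (a simplification under assumed hourly data, not the paper's model): treating cumulative demand as an arrival curve and cumulative generation as a service curve, the worst-case backlog bounds the storage needed to keep service reliable.

        # Illustrative backlog bound in the spirit of network calculus
        # (hypothetical numbers; the paper derives analytical bounds).
        def required_storage(demand_kw, supply_kw, dt_h=1.0):
            """Maximum cumulative deficit (kWh) of demand over supply."""
            deficit, worst = 0.0, 0.0
            for d, s in zip(demand_kw, supply_kw):
                deficit = max(0.0, deficit + (d - s) * dt_h)  # backlog recursion
                worst = max(worst, deficit)
            return worst

        demand = [3, 4, 6, 8, 7, 5]   # hypothetical hourly load, kW
        supply = [5, 5, 5, 5, 5, 5]   # hypothetical flat generation, kW
        print(required_storage(demand, supply))   # -> 6.0 kWh of storage needed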

  20. Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid

    PubMed Central

    Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul

    2017-01-01

    Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, one of the typical applications supported by IoT and a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of cumulative power cost and service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, traditionally used for traffic engineering in computer networks, to derive bounds on both power supply and user demand that achieve a high service reliability to users. Through an extensive performance evaluation, our data show that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide sustainable service reliability to users in the power grid. PMID:29354654

  1. Formation of Virtual Organizations in Grids: A Game-Theoretic Approach

    NASA Astrophysics Data System (ADS)

    Carroll, Thomas E.; Grosu, Daniel

    The execution of large scale grid applications requires the use of several computational resources owned by various Grid Service Providers (GSPs). GSPs must form Virtual Organizations (VOs) to be able to provide the composite resource to these applications. We consider grids as self-organizing systems composed of autonomous, self-interested GSPs that will organize themselves into VOs with every GSP having the objective of maximizing its profit. We formulate the resource composition among GSPs as a coalition formation problem and propose a game-theoretic framework based on cooperation structures to model it. Using this framework, we design a resource management system that supports the VO formation among GSPs in a grid computing system.

  2. MaGate Simulator: A Simulation Environment for a Decentralized Grid Scheduler

    NASA Astrophysics Data System (ADS)

    Huang, Ye; Brocco, Amos; Courant, Michele; Hirsbrunner, Beat; Kuonen, Pierre

    This paper presents a simulator for a decentralized modular grid scheduler named MaGate. MaGate’s design emphasizes scheduler interoperability, providing intelligent scheduling that serves the grid community as a whole. Each MaGate scheduler instance is able to deal with dynamic scheduling conditions and continuously arriving grid jobs; received jobs are either allocated to local resources or delegated to other MaGates for remote execution. The proposed MaGate simulator is based on the GridSim toolkit and the Alea simulator, and abstracts the features and behaviors of complex fundamental grid elements, such as grid jobs, grid resources, and grid users. Simulation of scheduling tasks is supported by a grid network overlay simulator executing distributed ant-based swarm intelligence algorithms to provide services such as group communication and resource discovery. For evaluation, a comparison of the behaviors of different collaborative policies among a community of MaGates is provided. The results support the use of the proposed approach as a functional, ready-to-use grid scheduler simulator.

  3. Emissions & Generation Resource Integrated Database (eGRID), eGRID2002 (with years 1996 - 2000 data)

    EPA Pesticide Factsheets

    The Emissions & Generation Resource Integrated Database (eGRID) is a comprehensive source of data on the environmental characteristics of almost all electric power generated in the United States. These environmental characteristics include air emissions of nitrogen oxides, sulfur dioxide, carbon dioxide, methane, nitrous oxide, and mercury; emissions rates; net generation; resource mix; and many other attributes. eGRID2002 (years 1996 through 2000 data) contains 16 Excel spreadsheets and the Technical Support Document, as well as the eGRID Data Browser, User's Manual, and Readme file. Archived eGRID data can be viewed as spreadsheets or by using the eGRID Data Browser. The eGRID spreadsheets can be manipulated by data users and enable users to view all the data underlying eGRID. The eGRID Data Browser enables users to view key data using powerful search features. Note that the eGRID Data Browser will not run on a Mac-based machine without Windows emulation.

  4. MrGrid: A Portable Grid Based Molecular Replacement Pipeline

    PubMed Central

    Reboul, Cyril F.; Androulakis, Steve G.; Phan, Jennifer M. N.; Whisstock, James C.; Goscinski, Wojtek J.; Abramson, David; Buckle, Ashley M.

    2010-01-01

    Background The crystallographic determination of protein structures can be computationally demanding, and for difficult cases can benefit from user-friendly interfaces to high-performance computing resources. Molecular replacement (MR) is a popular protein crystallographic technique that exploits the structural similarity between proteins that share some sequence similarity. But the need to trial permutations of search models, space group symmetries and other parameters makes MR time- and labour-intensive. However, MR calculations are embarrassingly parallel and thus ideally suited to distributed computing. In order to address this problem we have developed MrGrid, web-based software that allows multiple MR calculations to be executed across a grid of networked computers, allowing high-throughput MR. Methodology/Principal Findings MrGrid is a portable web-based application written in Java/JSP and Ruby, taking advantage of Apple Xgrid technology. Designed to interface with a user-defined Xgrid resource, the package manages the distribution of multiple MR runs to the available nodes on the Xgrid. We evaluated MrGrid using 10 different protein test cases on a network of 13 computers, and achieved an average speed-up factor of 5.69. Conclusions MrGrid enables the user to retrieve and manage the results of tens to hundreds of MR calculations quickly and via a single web interface, as well as broadening the range of strategies that can be attempted. This high-throughput approach allows parameter sweeps to be performed in parallel, improving the chances of MR success. PMID:20386612
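
    The embarrassingly parallel structure that MrGrid exploits can be sketched generically (illustrative only; MrGrid itself distributes runs over Apple Xgrid rather than Python's process pools, and run_mr below is a hypothetical stand-in for a real MR program):

        # Toy parallel parameter sweep over MR search parameters.
        from concurrent.futures import ProcessPoolExecutor
        from itertools import product

        def run_mr(job):
            model, space_group = job
            # A real run would invoke an MR program here; we fake a score.
            return model, space_group, hash(job) % 100

        jobs = list(product(["model_a", "model_b"], ["P212121", "C2", "P1"]))

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:
                for model, sg, score in pool.map(run_mr, jobs):
                    print(f"{model} / {sg}: score={score}")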

  5. Interfacing HTCondor-CE with OpenStack

    NASA Astrophysics Data System (ADS)

    Bockelman, B.; Caballero Bejar, J.; Hover, J.

    2017-10-01

    Over the past few years, Grid Computing technologies have reached a high level of maturity. One key aspect of this success has been the development and adoption of newer Compute Elements to interface external Grid users with local batch systems. These new Compute Elements allow for better handling of job requirements and more precise management of diverse local resources. However, despite this level of maturity, the Grid Computing world lacks diversity in local execution platforms. As Grid Computing technologies have historically been driven by the needs of the High Energy Physics community, most resource providers run the platform (operating system version and architecture) that best suits the needs of their particular users. In parallel, the development of virtualization and cloud technologies has accelerated recently, making available a variety of solutions, both commercial and academic, proprietary and open source. Virtualization facilitates performing computational tasks on platforms not available at most computing sites. This work attempts to join these technologies, allowing users to interact with computing sites through one of the standard Compute Elements, HTCondor-CE, but running their jobs within VMs on a local cloud platform, OpenStack, when needed. The system transparently re-routes end-user jobs into dynamically launched VM worker nodes when they have requirements that cannot be satisfied by the static local batch system nodes. Also, once the automated mechanisms are in place, it becomes straightforward to allow an end user to invoke a custom Virtual Machine at the site, allowing cloud resources to be used without requiring the user to establish a separate account. Both scenarios are described in this work.

  6. caGrid 1.0: an enterprise Grid infrastructure for biomedical research.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oster, S.; Langella, S.; Hastings, S.

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design: An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including (1) discovery, (2) integrated and large-scale data analysis, and (3) coordinated study. Measurements: The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community-provided services, and application programming interfaces for building client applications. Results: caGrid 1.0 was released to the caBIG community in December 2006. It is built on open-source components, and the caGrid source code is publicly and freely available under a liberal open-source license. The core software, associated tools, and documentation can be downloaded from the project website.

  7. DEM Based Modeling: Grid or TIN? The Answer Depends

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Moreno, H. A.

    2015-12-01

    The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course, with increasing watershed scale come corresponding increases in watershed complexity, including wide-ranging water management infrastructure and objectives, and ever-increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grid cells, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase the effort required in model setup, parameter estimation, and coupling with forcing data, which are often gridded. This presentation discusses the costs and benefits of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.

  8. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model

    PubMed Central

    Pinthong, Watthanai; Muangruen, Panya

    2016-01-01

    Development of high-throughput technologies, such as next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experimental data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high-performance computer (HPC) is required for the analysis and interpretation. However, HPCs are expensive and difficult to access. Other means have been developed to give researchers the power of an HPC without the need to purchase and maintain one, such as cloud computing services and grid computing systems. In this study, we implemented grid computing in a computer training center environment using the Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize an HPC. Fifty desktop computers were used to set up a grid system during off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system, and the results and processing times were compared to those from a single desktop computer and an HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, an HPC, and the grid system were 568, 24, and 5 days, respectively. Thus, the grid implementation of BLAST with BOINC is an efficient alternative to an HPC for sequence alignment. The grid implementation with BOINC also taps unused computing resources during off-hours and can be easily adapted to other available bioinformatics software. PMID:27547555
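
    The work-unit decomposition that makes BLAST fit the BOINC model can be sketched as follows (a hypothetical illustration of the chunking idea, not the authors' code):

        # Split a large set of sequence reads into fixed-size chunks so that
        # each desktop node can align one chunk independently.
        def make_work_units(reads, reads_per_unit=1000):
            """Yield (unit_id, chunk) pairs for independent workers."""
            for i in range(0, len(reads), reads_per_unit):
                yield i // reads_per_unit, reads[i:i + reads_per_unit]

        reads = [f"read_{n}" for n in range(4500)]   # hypothetical read IDs
        for unit_id, chunk in make_work_units(reads):
            print(f"work unit {unit_id}: {len(chunk)} reads")
        # 5 units: 1000, 1000, 1000, 1000, 500 reads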

  9. FermiGrid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yocum, D.R.; Berman, E.; Canal, P.

    2007-05-01

    As one of the founding members of the Open Science Grid Consortium (OSG), Fermilab enables coherent access to its production resources through the Grid infrastructure system called FermiGrid. This system successfully provides for centrally managed grid services, opportunistic resource access, development of OSG Interfaces for Fermilab, and an interface to the Fermilab dCache system. FermiGrid supports virtual organizations (VOs) including high energy physics experiments (USCMS, MINOS, D0, CDF, ILC), astrophysics experiments (SDSS, Auger, DES), biology experiments (GADU, Nanohub) and educational activities.

  10. Grid computing in large pharmaceutical molecular modeling.

    PubMed

    Claus, Brian L; Johnson, Stephen R

    2008-07-01

    Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.

  11. Examining Extreme Events Using Dynamically Downscaled 12-km WRF Simulations

    EPA Science Inventory

    Continued improvements in the speed and availability of computational resources have allowed dynamical downscaling of global climate model (GCM) projections to be conducted at increasingly finer grid scales and over extended time periods. The implementation of dynamical downscal...

  12. Data Grid Management Systems

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.; Jagatheesan, Arun; Rajasekar, Arcot; Wan, Michael; Schroeder, Wayne

    2004-01-01

    The "Grid" is an emerging infrastructure for coordinating access across autonomous organizations to distributed, heterogeneous computation and data resources. Data grids are being built around the world as the next generation data handling systems for sharing, publishing, and preserving data residing on storage systems located in multiple administrative domains. A data grid provides logical namespaces for users, digital entities and storage resources to create persistent identifiers for controlling access, enabling discovery, and managing wide area latencies. This paper introduces data grids and describes data grid use cases. The relevance of data grids to digital libraries and persistent archives is demonstrated, and research issues in data grids and grid dataflow management systems are discussed.

  13. [Analysis on difference of richness of traditional Chinese medicine resources in Chongqing based on grid technology].

    PubMed

    Zhang, Xiao-Bo; Qu, Xian-You; Li, Meng; Wang, Hui; Jing, Zhi-Xian; Liu, Xiang; Zhang, Zhi-Wei; Guo, Lan-Ping; Huang, Lu-Qi

    2017-11-01

    After the completion of the national and local medicine resources census, a large amount of data on Chinese medicine resources and their distribution will be summarized. Species richness between regions is a valid indicator for objectively reflecting inter-regional Chinese medicine resources. Because county areas differ greatly in size, assessing the richness of traditional Chinese medicine resources with the county as the statistical unit biases regional richness statistics. Regular-grid statistical methods can reduce the differences in richness estimates caused by statistical units of different sizes. Taking Chongqing as an example and based on existing survey data, the differences in richness of traditional Chinese medicine resources at different grid scales were compared and analyzed. The results showed that a 30 km grid can be selected, at which scale the richness of Chinese medicine resources in Chongqing better reflects the objective regional distribution of resource richness. Copyright© by the Chinese Pharmaceutical Association.
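
    The grid-statistics idea is straightforward to sketch (hypothetical occurrence records and a simplified planar coordinate system; the study's actual method may differ): occurrences are binned into square cells of the chosen scale and distinct species are counted per cell.

        # Grid-cell species richness (illustrative only).
        from collections import defaultdict

        def richness(records, cell_km=30.0):
            cells = defaultdict(set)
            for species, x_km, y_km in records:
                cell = (int(x_km // cell_km), int(y_km // cell_km))
                cells[cell].add(species)
            return {cell: len(spp) for cell, spp in cells.items()}

        records = [  # hypothetical (species, easting_km, northing_km) records
            ("Coptis chinensis", 12.0, 44.0),
            ("Gastrodia elata", 18.5, 40.2),
            ("Coptis chinensis", 75.0, 10.0),
        ]
        print(richness(records))   # {(0, 1): 2, (2, 0): 1}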

  14. Boosting CSP Production with Thermal Energy Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denholm, P.; Mehos, M.

    2012-06-01

    Combining concentrating solar power (CSP) with thermal energy storage shows promise for increasing grid flexibility by providing firm system capacity with a high ramp rate and acceptable part-load operation. When backed by energy storage capability, CSP can supplement photovoltaics by adding generation from solar resources during periods of low solar insolation. The falling cost of solar photovoltaic (PV)-generated electricity has led to a rapid increase in the deployment of PV and projections that PV could play a significant role in the future U.S. electric sector. The solar resource itself is virtually unlimited; however, the actual contribution of PV electricity is limited by several factors related to the current grid. The first is the limited coincidence between the solar resource and normal electricity demand patterns. The second is the limited flexibility of conventional generators to accommodate this highly variable generation resource. At high penetration of solar generation, increased grid flexibility will be needed to fully utilize the variable and uncertain output from PV generation and to shift energy production to periods of high demand or reduced solar output. Energy storage is one way to increase grid flexibility, and many storage options are available or under development. In this article, however, we consider a technology already beginning to be used at scale: thermal energy storage (TES) deployed with concentrating solar power (CSP). PV and CSP are both deployable in areas of high direct normal irradiance such as the U.S. Southwest. The role of these two technologies depends on their costs and relative value, including how their value to the grid changes as a function of what percentage of total generation they contribute, and how they may actually work together to increase the overall usefulness of the solar resource. Both PV and CSP use solar energy to generate electricity. A key difference is the ability of CSP to utilize high-efficiency TES, which turns CSP into a partially dispatchable resource. The addition of TES produces additional value by shifting the delivery of solar energy to periods of peak demand, providing firm capacity and ancillary services, and reducing integration challenges. Given the dispatchability of CSP enabled by TES, it is possible that PV and CSP are at least partially complementary. The dispatchability of CSP with TES can enable higher overall penetration of the grid by solar energy by providing solar-generated electricity during periods of cloudy weather or at night, when PV-generated power is unavailable. Such systems also have the potential to improve grid flexibility, thereby enabling greater penetration of PV energy (and other variable generation sources such as wind) than if PV were deployed without CSP.
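
    A toy dispatch loop illustrates why TES makes CSP partially dispatchable (a sketch with hypothetical numbers, not the authors' model): heat collected at midday is held in storage and converted to electricity during evening peak hours.

        # Shift midday thermal input to evening peak hours (illustrative).
        def csp_with_tes(solar_mwt, peak, cap_mwht, eta=0.4):
            """Charge TES off-peak, discharge through the power block at peak."""
            stored, out = 0.0, []
            for hour, heat in enumerate(solar_mwt):
                if hour in peak:                        # discharge at peak demand
                    out.append((stored + heat) * eta)   # eta: power-block efficiency
                    stored = 0.0
                else:                                   # charge the storage
                    stored = min(cap_mwht, stored + heat)
                    out.append(0.0)
            return out

        solar = [0, 2, 6, 8, 6, 2, 0, 0]   # hypothetical thermal input, MWth
        print(csp_with_tes(solar, peak={6, 7}, cap_mwht=20))
        # [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 8.0, 0.0] -> generation at the evening peak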

  15. Grid Computing and Collaboration Technology in Support of Fusion Energy Sciences

    NASA Astrophysics Data System (ADS)

    Schissel, D. P.

    2004-11-01

    The SciDAC Initiative is creating a computational grid designed to advance scientific understanding in fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling, and allowing more efficient use of experimental facilities. The philosophy is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as easy-to-use, network-available services. Access to services is stressed rather than portability. Services share the same basic security infrastructure, so that stakeholders can control their own resources, which helps ensure fair use of those resources. The collaborative control room is being developed using the open-source Access Grid software, which enables secure group-to-group collaboration with capabilities beyond teleconferencing, including application sharing and control. The ability to effectively integrate off-site scientists into a dynamic control room will be critical to the success of future international projects like ITER. Grid computing, the secure integration of computer systems over high-speed networks to provide on-demand access to data analysis capabilities and related functions, is being deployed as an alternative to traditional resource sharing among institutions. The first grid computational service deployed was the transport code TRANSP, including tools for run preparation, submission, monitoring and management. This approach saves user sites the laborious effort of maintaining a complex code while at the same time reducing the burden on developers by avoiding the support of a large number of heterogeneous installations. This tutorial will present the philosophy behind an advanced collaborative environment, give specific examples, and discuss its usage beyond FES.

  16. Coarsening of three-dimensional structured and unstructured grids for subsurface flow

    NASA Astrophysics Data System (ADS)

    Aarnes, Jørg Espen; Hauge, Vera Louise; Efendiev, Yalchin

    2007-11-01

    We present a generic, semi-automated algorithm for generating non-uniform coarse grids for modeling subsurface flow. The method is applicable to arbitrary grids and does not impose smoothness constraints on the coarse grid. One therefore avoids the conventional smoothing procedures that are commonly used to ensure that grids obtained with standard coarsening procedures are not too rough. The coarsening algorithm is very simple and essentially involves only two parameters that specify the level of coarsening. Consequently, the algorithm allows the user to specify the simulation grid dynamically to fit available computer resources and, e.g., to use the original geomodel as input for flow simulations. This is of great importance, since coarse-grid generation is normally the most time-consuming part of an upscaling phase and therefore the main obstacle that has prevented simulation workflows with user-defined resolution. We apply the coarsening algorithm to a series of two-phase flow problems on both structured (Cartesian) and unstructured grids. The numerical results demonstrate that one consistently obtains significantly more accurate results using the proposed non-uniform coarsening strategy than with corresponding uniform coarse grids with roughly the same number of cells.
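
    A one-dimensional caricature of such a two-parameter coarsening rule follows (illustrative only; the paper's algorithm operates on general 3D grids with flow-based indicators): consecutive fine cells are amalgamated until an accumulated indicator passes a lower bound, with an upper bound capping block size.

        # Merge fine cells into coarse blocks using two parameters (sketch).
        def coarsen(indicator, min_flow, max_cells):
            blocks, current, flow = [], [], 0.0
            for cell, q in enumerate(indicator):
                current.append(cell)
                flow += q
                if flow >= min_flow or len(current) >= max_cells:
                    blocks.append(current)        # close the coarse block
                    current, flow = [], 0.0
            if current:
                blocks.append(current)            # remainder block
            return blocks

        flow_indicator = [0.1, 0.2, 2.5, 0.1, 0.1, 0.1, 0.1, 3.0]  # hypothetical
        print(coarsen(flow_indicator, min_flow=1.0, max_cells=4))
        # [[0, 1, 2], [3, 4, 5, 6], [7]] -> fine resolution where flow is strong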

  17. Setting Up a Grid-CERT: Experiences of an Academic CSIRT

    ERIC Educational Resources Information Center

    Moller, Klaus

    2007-01-01

    Purpose: Grid computing has often been heralded as the next logical step after the worldwide web. Users of grids can access dynamic resources such as computer storage and use the computing resources of computers under the umbrella of a virtual organisation. Although grid computing is often compared to the worldwide web, it is vastly more complex…

  18. Magnitude and Variability of Controllable Charge Capacity Provided by Grid Connected Plug-in Electric Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scoffield, Don R; Smart, John; Salisbury, Shawn

    2015-03-01

    As market penetration of plug-in electric vehicles (PEVs) increases over time, the number of PEVs charging on the electric grid will also increase. As the number of PEVs increases, their ability to collectively impact the grid increases. The idea of a large body of PEVs connected to the grid presents an intriguing possibility: if utilities can control PEV charging, PEVs could act as a distributed resource to provide grid services. The technology required to control charging is available for modern PEVs. However, a system for wide-spread implementation of controllable charging, including robust communication between vehicles and utilities, does not currently exist. Therefore, the value of controllable charging must be assessed and weighed against the cost of building and operating such a system. In order to grasp the value of PEV charge control to the utility, the following must be understood: 1. The amount of controllable energy and power capacity available to the utility. 2. The variability of the controllable capacity from day to day and as the number of PEVs in the market increases.
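
    The first quantity, the instantaneous controllable capacity, reduces to filtering and summation over the connected fleet (a minimal sketch; the vehicle data and the 4-hour control window are hypothetical assumptions):

        # Aggregate controllable power and energy from plugged-in PEVs (sketch).
        vehicles = [
            # (plugged_in, enrolled, charge_rate_kw, needs_kwh) -- hypothetical
            (True,  True,  6.6, 12.0),
            (True,  True,  3.3,  0.0),   # full battery: no controllable energy
            (True,  False, 6.6,  8.0),   # not enrolled in the control program
            (False, True,  6.6, 20.0),   # not connected right now
        ]

        power_kw = sum(r for p, e, r, need in vehicles if p and e and need > 0)
        energy_kwh = sum(min(need, 4 * r)              # assumed 4-hour window
                         for p, e, r, need in vehicles if p and e)
        print(power_kw, energy_kwh)   # 6.6 kW controllable, 12.0 kWh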

  19. A gateway for phylogenetic analysis powered by grid computing featuring GARLI 2.0.

    PubMed

    Bazinet, Adam L; Zwickl, Derrick J; Cummings, Michael P

    2014-09-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  20. Electric Vehicle Charging and the California Power Sector: Evaluating the Effect of Location and Time on Greenhouse Gas Emissions

    NASA Astrophysics Data System (ADS)

    Sohnen, Julia Meagher

    This thesis explores the implications of the increased adoption of plug-in electric vehicles in California through their effect on the operation of the state's electric grid. The well-to-wheels emissions associated with driving an electric vehicle depend on the resource mix of the electricity grid used to charge the battery. We present a new least-cost dispatch model, EDGE-NET, for the California electricity grid, consisting of interconnected sub-regions that encompass the six largest state utilities, which can be used to evaluate the impact of growing electric vehicle demand on the existing power grid infrastructure and energy resources. This model considers the spatial and temporal dynamics of energy demand and supply when determining the regional impacts of additional charging profiles on the current electricity network. Model simulation runs for one year show generation and transmission congestion to be reasonably similar to historical data. Model simulation results show that average emissions and system costs associated with electricity generation vary significantly by time of day, season, and location. Marginal cost and emissions also exhibit seasonal and diurnal differences, but show less spatial variation. Sensitivity analysis of demand shows that the relative changes to average emissions and system costs respond asymmetrically to increases and decreases in electricity demand. These results depend on the grid mix at the time and the marginal power plant type. In minimizing total system cost, the model dispatches the lowest-cost resource to meet additional vehicle demand, regardless of location, as long as transmission capacity is available. The location of electric vehicle charging has a small effect on the marginal greenhouse gas emissions associated with additional generation, due to electricity losses in the transmission grid. We use a geographically explicit charging assessment model for California to develop and compare the effects of two charging profiles. Comparison of these two basic scenarios points to savings in greenhouse gas emissions and operational costs from delayed charging of electric vehicles. Vehicle charging simulations confirm that plug-in electric vehicles alone are unlikely to require additional generation or transmission infrastructure. EDGE-NET was successfully benchmarked against historical data for the present grid, but additional work is required to extend the model for future scenario evaluation. We discuss how the model might be adapted for high penetrations of variable renewable energy resources and for the use of grid storage. Renewable resources such as wind and solar in California vary significantly by time of day, season, and location; however, combining multiple resources from different geographic regions through transmission grid interconnection is expected to help mitigate the impacts of variability. EDGE-NET can evaluate the interaction of supply and demand through the existing transmission infrastructure and can identify critical network bottlenecks or areas for expansion. For this reason, EDGE-NET will be an important tool for evaluating energy policy scenarios.
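
    The least-cost dispatch logic at the core of such a model can be caricatured as a merit-order stack (a sketch with hypothetical plants; EDGE-NET additionally models the transmission network, losses, and unit constraints, all omitted here):

        # Merit-order dispatch: cheapest plants first; the last plant dispatched
        # is marginal, and its emissions rate applies to added load such as EVs.
        plants = [  # (name, capacity_MW, cost_$/MWh, CO2_kg/MWh) -- hypothetical
            ("hydro", 300, 5, 0),
            ("gas_ccgt", 400, 40, 370),
            ("gas_peaker", 200, 90, 550),
        ]

        def dispatch(demand_mw):
            schedule, marginal, remaining = [], None, demand_mw
            for name, cap, cost, co2 in sorted(plants, key=lambda p: p[2]):
                take = min(cap, remaining)
                if take > 0:
                    schedule.append((name, take))
                    marginal = (name, co2)   # last dispatched plant sets the margin
                remaining -= take
            return schedule, marginal

        schedule, marginal = dispatch(550)
        print(schedule)   # [('hydro', 300), ('gas_ccgt', 250)]
        print(marginal)   # ('gas_ccgt', 370): emissions rate seen by added load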

  1. Balancing Area Coordination: Efficiently Integrating Renewable Energy Into the Grid, Greening the Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katz, Jessica; Denholm, Paul; Cochran, Jaquelin

    2015-06-01

    Greening the Grid provides technical assistance to energy system planners, regulators, and grid operators to overcome challenges associated with integrating variable renewable energy into the grid. Coordinating balancing area operation can promote more cost and resource efficient integration of variable renewable energy, such as wind and solar, into power systems. This efficiency is achieved by sharing or coordinating balancing resources and operating reserves across larger geographic boundaries.
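
    The saving from pooling balancing resources follows from a standard statistical argument, sketched below with hypothetical numbers: if the areas' net variability is uncorrelated, the pooled reserve requirement scales with the square root of the sum of squares rather than the plain sum.

        # Reserve pooling under uncorrelated variability (standard statistics;
        # the per-area figures and coverage factor are hypothetical).
        import math

        area_sigma_mw = [120.0, 90.0, 60.0]   # per-area net variability (MW)
        k = 3.0                               # reserve coverage factor

        separate = sum(k * s for s in area_sigma_mw)
        pooled = k * math.sqrt(sum(s * s for s in area_sigma_mw))
        print(f"separate: {separate:.0f} MW, pooled: {pooled:.0f} MW")
        # separate: 810 MW, pooled: 485 MW -- the saving from coordination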

  2. Climate and Water Vulnerability of the US Electricity Grid Under High Penetrations of Renewable Energy

    NASA Astrophysics Data System (ADS)

    Macknick, J.; Miara, A.; O'Connell, M.; Vorosmarty, C. J.; Newmark, R. L.

    2017-12-01

    The US power sector is highly dependent upon water resources for reliable operations, primarily for thermoelectric cooling and hydropower technologies. Changes in the availability and temperature of water resources can limit electricity generation and cause outages at power plants, which substantially affect grid-level operational decisions. While the effects of water variability and climate change on individual power plants are well documented, prior studies have not identified the significance of these impacts at the regional systems level at which the grid operates, including whether there are risks of large-scale blackouts, brownouts, or increases in production costs. Adequately assessing electric grid system-level impacts requires detailed power sector modeling tools that can incorporate electric transmission infrastructure, capacity reserves, and other grid characteristics. Here we present, for the first time, a study of how climate and water variability affect operations of the power sector, considering different electricity sector configurations (low vs. high renewable) and environmental regulations. We use a case study of the US Eastern Interconnection, building off the Eastern Renewable Generation Integration Study (ERGIS) that explored the operational challenges of high penetrations of renewable energy on the grid. We evaluate climate-water constraints on individual power plants, using the Thermoelectric Power and Thermal Pollution (TP2M) model coupled with the PLEXOS electricity production cost model, in the context of broader electricity grid operations. Using a five-minute time step for future years, we analyze scenarios of 10% to 30% renewable energy penetration along with considerations of river temperature regulations to compare the cost, performance, and reliability tradeoffs of water-dependent thermoelectric generation and variable renewable energy technologies under climate stresses. This work provides novel insights into the resilience and reliability of different configurations of the US electric grid subject to changing climate conditions.

  3. Evaluation of the National Solar Radiation Database (NSRDB): 1998-2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Lopez, Anthony

    This paper validates the performance of the physics-based Physical Solar Model (PSM) data set in the National Solar Radiation Data Base (NSRDB) to quantify the accuracy of the magnitude and the spatial and temporal variability of the solar radiation data. Achieving higher penetrations of solar energy on the electric grid and reducing integration costs requires accurate knowledge of the available solar resource. Understanding the impacts of clouds and other meteorological constituents on the solar resource and quantifying intra-/inter-hour, seasonal, and interannual variability are essential for accurately designing utility-scale solar energy projects. Solar resource information can be obtained from ground-based measurement stations and/or from modeled data sets. The availability of measurements is scarce, both temporally and spatially, because it is expensive to maintain a high-density solar radiation measurement network that collects good quality data for long periods of time. On the other hand, high temporal and spatial resolution gridded satellite data can be used to estimate surface radiation for long periods of time and is extremely useful for solar energy development. Because of the advantages of satellite-based solar resource assessment, the National Renewable Energy Laboratory developed the PSM. The PSM produced gridded solar irradiance -- global horizontal irradiance (GHI), direct normal irradiance (DNI), and diffuse horizontal irradiance -- for the NSRDB at a 4-km by 4-km spatial resolution and half-hourly temporal resolution covering the 18 years from 1998-2015. The NSRDB also contains additional ancillary meteorological data sets, such as temperature, relative humidity, surface pressure, dew point, and wind speed. Details of the model and data are available at https://nsrdb.nrel.gov. The results described in this paper show that the hourly-averaged satellite-derived data have a systematic (bias) error of approximately +5% for GHI and less than +10% for DNI; however, the scatter (root mean square error [RMSE]) difference is higher for the hourly averages.
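
    The two headline error measures, bias and RMSE, are simple to compute for paired satellite/ground samples (hypothetical irradiance values; shown only to make the quoted percentages concrete, not NREL's code):

        # Relative bias and RMSE of satellite estimates against ground truth.
        import math

        ground = [510.0, 620.0, 480.0, 700.0]      # W/m^2, hypothetical
        satellite = [530.0, 650.0, 470.0, 745.0]

        n = len(ground)
        bias = sum(s - g for s, g in zip(satellite, ground)) / n
        rmse = math.sqrt(sum((s - g) ** 2 for s, g in zip(satellite, ground)) / n)
        mean_g = sum(ground) / n
        print(f"bias: {100 * bias / mean_g:+.1f}%  rmse: {100 * rmse / mean_g:.1f}%")
        # bias: +3.7%  rmse: 5.1%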

  4. Distributed analysis functional testing using GangaRobot in the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Legger, Federica; ATLAS Collaboration

    2011-12-01

    Automated distributed analysis tests are necessary to ensure smooth operations of the ATLAS grid resources. The HammerCloud framework allows for easy definition, submission and monitoring of grid test applications. Both functional and stress test applications can be defined in HammerCloud. Stress tests are large-scale tests meant to verify the behaviour of sites under heavy load. Functional tests are light user applications running at each site with high frequency, to ensure that the site functionalities are available at all times. Success or failure rates of these test jobs are individually monitored. Test definitions and results are stored in a database and made available to users and site administrators through a web interface. In this work we present the recent developments of the GangaRobot framework. GangaRobot monitors the outcome of functional tests, creates a blacklist of sites failing the tests, and exports the results to the ATLAS Site Status Board (SSB) and to the Service Availability Monitor (SAM), providing on the one hand a fast way to identify systematic or temporary site failures, and on the other hand allowing for an effective distribution of the workload on the available resources.
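
    The blacklisting step can be sketched as a threshold on recent functional-test success rates (illustrative counts and cutoff; the real GangaRobot exports its verdicts to the SSB and SAM rather than printing them):

        # Blacklist sites whose recent functional-test success rate is too low.
        results = {  # hypothetical site -> (passed, failed) test counts
            "SITE_A": (48, 2),
            "SITE_B": (10, 15),
            "SITE_C": (0, 0),    # no recent tests: treat as unknown, not failed
        }

        def blacklist(results, min_rate=0.8):
            bad = []
            for site, (ok, fail) in results.items():
                total = ok + fail
                if total and ok / total < min_rate:
                    bad.append(site)
            return bad

        print(blacklist(results))   # ['SITE_B']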

  5. HappyFace as a generic monitoring tool for HEP experiments

    NASA Astrophysics Data System (ADS)

    Kawamura, Gen; Magradze, Erekle; Musheghyan, Haykuhi; Quadt, Arnulf; Rzehorz, Gerhard

    2015-12-01

    The importance of monitoring on HEP grid computing systems is growing due to a significant increase in their complexity. Computer scientists and administrators have been studying and building effective ways to gather information on, and clarify the status of, each local grid infrastructure. The HappyFace project makes this workflow possible: it aggregates, processes and stores the information and status of different HEP monitoring resources in a common database and displays them through a single interface. However, this model of HappyFace relied on monitoring resources that are always under development in the HEP experiments. Consequently, HappyFace needed direct access methods to the grid application and grid service layers in the different HEP grid systems. To cope with this issue, we use a reliable HEP software repository, the CernVM File System. We propose a new implementation and architecture of HappyFace, the so-called grid-enabled HappyFace, which allows its basic framework to connect directly to grid user applications and grid collective services without involving the monitoring resources of the HEP grid systems. This approach gives HappyFace several advantages: portability, providing an independent and generic monitoring system among the HEP grid systems; functionality, allowing users to run various diagnostic tools in the individual HEP grid systems and grid sites; and flexibility, making HappyFace beneficial and open for various distributed grid computing environments. Different grid-enabled modules, to connect to the Ganga job monitoring system and to check the performance of grid transfers among the grid sites, have been implemented. The new HappyFace system has been successfully integrated, and it now displays the information and status of both the monitoring resources and the direct access to grid user applications and grid collective services.

  6. Self managing experiment resources

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Ubeda, M.; Tsaregorodtsev, A.; Romanovskiy, V.; Roiser, S.; Charpentier, P.; Graciani, R.

    2014-06-01

    Within this paper we present an autonomic computing resources management system, used by LHCb for assessing the status of their Grid resources. Virtual Organization Grids include heterogeneous resources: for example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites has generated a multitude of information systems, monitoring tools, ticket portals, and the like, which nowadays coexist and represent a very precious source of information for running the computing systems of HEP experiments as well as the sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resources metadata and a Status System (Resource Status System) delivering real-time information, the system controls the resources topology, independently of the resource types. The Resource Status System applies data mining techniques to all available information sources and assesses status changes, which are then propagated to the topology description. Obviously, giving full control to such an automated system is not risk-free. Therefore, in order to minimise the probability of misbehaviour, a battery of tests has been developed to certify the correctness of its assessments. We demonstrate the performance and efficiency of such a system in terms of cost reduction and reliability.

  7. Grid Data and Tools | Grid Modernization | NREL

    Science.gov Websites

    … technologies and strategies, including renewable resource data sets and models of the electric power system. Renewable Resource Data: a library of resource information to inform the design of efficient, integrated …

  8. DE-FG02-04ER25606 Identity Federation and Policy Management Guide: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphrey, Marty, A

    The goal of this 3-year project was to facilitate a more productive, dynamic matching between resource providers and resource consumers in Grid environments by explicitly specifying policies. Broadly, two problems were addressed by this project. First, there was no Open Grid Services Architecture (OGSA)-compliant mechanism for expressing, storing and retrieving user policies and Virtual Organization (VO) policies. Second, there were no tools to resolve and enforce policies in the Open Grid Services Architecture. To address these problems, our overall approach in this project was to make all policies explicit (e.g., virtual organization policies, resource provider policies, resource consumer policies), thereby facilitating policy matching and policy negotiation. Policies defined on a per-user basis were created, held, and updated in MyPolMan, allowing a Grid user to centralize (where appropriate) and manage his/her policies. Organizationally, the corresponding service was VOPolMan, in which the policies of the Virtual Organization are expressed, managed, and dynamically consulted. Overall, we successfully defined, prototyped, and evaluated policy-based resource management and access control for OGSA-based Grids. This DOE project partially supported 17 peer-reviewed publications on a number of different topics: general security for Grids, credential management, Web services/OGSA/OGSI, policy-based grid authorization (for remote execution and for access to information), policy-directed Grid data movement/placement, policies for large-scale virtual organizations, and large-scale policy-aware grid architectures. In addition to supporting the PI, this project partially supported the training of 5 PhD students.

  9. National Fusion Collaboratory: Grid Computing for Simulations and Experiments

    NASA Astrophysics Data System (ADS)

    Greenwald, Martin

    2004-05-01

    The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling, and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network-available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure that allows for secure authentication and resource authorization, which allows stakeholders to control their own resources such as computers, data and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from the implementation details such that transparency and ease-of-use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring and management have been developed and shared among a wide user base. This approach saves user sites from the laborious effort of maintaining such a large and complex code while at the same time reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid, a highly capable multi-point remote conferencing tool, and capabilities for sharing displays and analysis tools over local and wide-area networks.

  11. Using Conventional Hydropower to Help Alleviate Variable Resource Grid Integration Challenges in the Western U.S

    NASA Astrophysics Data System (ADS)

    Veselka, T. D.; Poch, L.

    2011-12-01

    Integrating high penetration levels of wind and solar energy resources into the power grid is a formidable challenge in virtually all interconnected systems, because supply and demand must remain in balance at all times. Since large-scale electricity storage is currently not economically viable, generation must exactly match electricity demand plus energy losses in the system as time unfolds. Therefore, as generation from variable resources such as wind and solar fluctuates, production from generating resources that are easier to control and dispatch needs to compensate for these fluctuations while at the same time responding to instantaneous changes in load and following daily load profiles. The grid in the Western U.S. is not exempt from the grid integration challenges associated with variable resources. However, one advantage that the power system in the Western U.S. has over many other regional power systems is that its footprint contains an abundance of hydropower resources. Hydropower plants, especially those with reservoir water storage, can change electricity production levels very quickly, both via a dispatcher and through automatic generation control. Since hydropower response time is typically much faster than that of other dispatchable resources such as steam or gas turbines, it is well suited to alleviating variable-resource grid integration issues. However, despite an abundance of hydropower resources and the current low penetration of variable resources in the Western U.S., problems have already surfaced. This spring in the Pacific Northwest, wetter-than-normal hydropower conditions in combination with transmission constraints resulted in controversial wind resource shedding. This action was taken because spilling the water would have increased dissolved oxygen levels downstream of dams, thereby significantly degrading fish habitats. The extent to which hydropower resources will be able to contribute toward a stable and reliable Western grid is currently being studied. Typically these studies consider the inherent flexibility of hydropower technologies, but tend to fall short on details regarding grid operations, institutional arrangements, and hydropower environmental regulations. This presentation focuses on an analysis that Argonne National Laboratory is conducting in collaboration with the Western Area Power Administration (Western). The analysis evaluates the extent to which Western's hydropower resources may help with grid integration challenges via a proposed Energy Imbalance Market. This market encompasses most of the Western Electricity Coordinating Council footprint. It changes grid operations such that real-time dispatch would be based, in part, on a 5-minute electricity market. The analysis includes many factors, such as site-specific environmental considerations at each of Western's hydropower facilities, long-term firm purchase agreements, and hydropower operating objectives and goals. Results of the analysis indicate that site-specific details significantly affect the ability of hydropower plants to respond to grid needs in a future with high penetration of variable resources.
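    As a toy illustration of the balancing role described above, the sketch below dispatches a single ramp-limited hydro unit against fluctuating wind in 5-minute intervals. All capacities, limits and time series are invented for the example and do not come from the Argonne/Western analysis.

```python
# Toy illustration (not Argonne's model): a fast-ramping hydro unit tries to
# keep supply equal to load as wind output fluctuates, one 5-minute market
# interval at a time. All numbers are illustrative assumptions.

RAMP_LIMIT_MW = 50.0                  # hydro ramp per 5-minute interval (assumed)
HYDRO_MIN, HYDRO_MAX = 100.0, 400.0   # dispatch bounds (assumed)

load = [900, 920, 950, 940, 960, 1000]   # MW per interval
wind = [300, 350, 280, 320, 250, 310]    # MW per interval
other_gen = 600.0                        # inflexible baseline generation (assumed)

hydro = 200.0
for t, (l, w) in enumerate(zip(load, wind)):
    target = l - w - other_gen           # what hydro must cover this interval
    step = max(-RAMP_LIMIT_MW, min(RAMP_LIMIT_MW, target - hydro))
    hydro = max(HYDRO_MIN, min(HYDRO_MAX, hydro + step))
    imbalance = (w + other_gen + hydro) - l
    print(f"t={t}: hydro={hydro:6.1f} MW, imbalance={imbalance:+6.1f} MW")
```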

  12. The Integration of CloudStack and OCCI/OpenNebula with DIRAC

    NASA Astrophysics Data System (ADS)

    Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan

    2012-12-01

    The increasing availability of Cloud resources is emerging as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing provides an easy way to access resources from both systems efficiently. This paper explains the integration of DIRAC with two open-source Cloud Managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools for managing the complexity and heterogeneity of distributed data center infrastructures, allowing virtual clusters to be created on demand, including public, private and hybrid clouds. This approach required developing an extension to the previous DIRAC Virtual Machine engine, originally written for Amazon EC2, to allow connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack the infrastructure has been kept more general, permitting other Virtual Machine sources and operating systems to be used. In both cases, the CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, such as the DIRAC Web Portal. The main purpose of this integration is to obtain a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC to a new conceptual denomination as interware, integrating different middleware. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach with the existing Grid solution. License Notice: Published under licence in Journal of Physics: Conference Series by IOP Publishing Ltd.

  13. How to Evaluate Mobile Health Applications: A Scoping Review.

    PubMed

    Fiore, Pasquale

    2017-01-01

    Evaluating mobile health applications requires specific criteria. Research suggests that evaluation grids and online web sites are available to give health care professionals a quick sense of a mobile application's quality, efficacy, and safety before adopting it. This article presents a scoping review and explores the resources available to health care professionals.

  14. A Genetic-Based Scheduling Algorithm to Minimize the Makespan of the Grid Applications

    NASA Astrophysics Data System (ADS)

    Entezari-Maleki, Reza; Movaghar, Ali

    Task scheduling algorithms in grid environments strive to maximize the overall throughput of the grid. To maximize throughput, the makespan of the grid tasks should be minimized. In this paper, a new task scheduling algorithm is proposed to assign tasks to grid resources with the goal of minimizing the total makespan of the tasks. The algorithm uses a genetic approach to find a suitable assignment of tasks to grid resources. The experimental results obtained from applying the proposed algorithm to schedule independent tasks within grid environments demonstrate its applicability, achieving schedules with comparatively lower makespan than other well-known scheduling algorithms such as Min-min, Max-min, RASA and Sufferage.
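    The paper's exact operators and parameters are not reproduced in this record; the sketch below is a generic genetic algorithm for the same problem, encoding a schedule as a task-to-resource vector and minimizing makespan over an invented expected-time-to-compute (ETC) matrix.

```python
import random

# Generic genetic task-to-resource scheduler minimizing makespan (a sketch,
# not the paper's algorithm). ETC[i][j] is the assumed expected time to
# compute task i on resource j.
random.seed(1)
N_TASKS, N_RES = 12, 3
ETC = [[random.uniform(5, 30) for _ in range(N_RES)] for _ in range(N_TASKS)]

def makespan(assign):
    """Makespan = completion time of the most loaded resource."""
    loads = [0.0] * N_RES
    for task, res in enumerate(assign):
        loads[res] += ETC[task][res]
    return max(loads)

def evolve(pop_size=40, generations=200, p_mut=0.1):
    pop = [[random.randrange(N_RES) for _ in range(N_TASKS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # binary tournament selection
        parents = [min(random.sample(pop, 2), key=makespan)
                   for _ in range(pop_size)]
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, N_TASKS)          # one-point crossover
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                child = [random.randrange(N_RES) if random.random() < p_mut
                         else gene for gene in child]   # per-gene mutation
                nxt.append(child)
        pop = nxt
    return min(pop, key=makespan)

best = evolve()
print("best makespan: %.1f" % makespan(best))
```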

  15. Scheduling quality of precise form sets which consist of tasks of circular type in GRID systems

    NASA Astrophysics Data System (ADS)

    Saak, A. E.; Kureichik, V. V.; Kravchenko, Y. A.

    2018-05-01

    Users' demand for computing power and the rise of technology favour the arrival of Grid systems. The quality of a Grid system's performance depends on the scheduling of its computing and time resources. In Grid systems with a centralized scheduling structure, the schedulable resources and a user's task are modeled by a resource quadrant and a resource rectangle, respectively. A non-Euclidean heuristic measure, which takes into consideration both the area and the form of an occupied resource region, is used to estimate the scheduling quality of heuristic algorithms. The authors use sets induced by the elements of square squaring as an example for studying the adaptability of a level polynomial algorithm with an excess and of one with minimal deviation.

  16. A System for Monitoring and Management of Computational Grids

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Biegel, Bryan (Technical Monitor)

    2002-01-01

    As organizations begin to deploy large computational grids, it has become apparent that systems for observation and control of the resources, services, and applications that make up such grids are needed. Administrators must observe the operation of resources and services to ensure that they are operating correctly, and must control the resources and services to ensure that their operation meets the needs of users. Users are also interested in the operation of resources and services so that they can choose the most appropriate ones to use. In this paper we describe a prototype system for monitoring and managing computational grids, along with the general software framework for control and observation in distributed environments on which it is based.

  17. Evaluating PRISM precipitation grid data as possible surrogates for station data at four sites in Oklahoma

    USDA-ARS?s Scientific Manuscript database

    The development of climate-sensitive decision support for agriculture or water resource management requires long time series of monthly precipitation for specific locations. Archived station data for many locations is available, but time continuity, quality, and spatial coverage of station data rem...

  18. Gap Assessment (FY 13 Update)

    DOE Data Explorer

    Getman, Dan

    2013-09-30

    To help guide its future data collection efforts, the DOE GTO funded a data gap analysis in FY2012 to identify high-potential hydrothermal areas where critical data are needed. This analysis was updated in FY2013, and the resulting datasets are represented by this metadata. The original process was published in FY2012 and is available here: https://pangea.stanford.edu/ERE/db/GeoConf/papers/SGW/2013/Esposito.pdf Though there are many types of data that can be used for hydrothermal exploration, five types of exploration data were targeted for this analysis. These data types were selected for their regional reconnaissance potential, and include many of the primary exploration techniques currently used by the geothermal industry: 1. well data, 2. geologic maps, 3. fault maps, 4. geochemistry data, and 5. geophysical data. To determine data coverage, metadata for exploration data (including data type, data status, and coverage information) were collected and catalogued from nodes on the National Geothermal Data System (NGDS). The intention of this analysis is that the data be updated from this source in a semi-automated fashion as new datasets are added to the NGDS nodes. In addition, an online tool was developed to allow all geothermal data providers to access this assessment, add metadata directly themselves, and view the results of the analysis via maps of data coverage in Geothermal Prospector (http://maps.nrel.gov/gt_prospector). A grid of the contiguous U.S. was created with 88,000 10-km by 10-km grid cells, and each cell was populated with the status of data availability for the five data types. Using these five data coverage maps and the USGS Resource Potential Map, sites were identified for future data collection efforts. These sites signify both that the USGS has indicated high favorability of occurrence of geothermal resources and that data gaps exist. The uploaded data are contained in two files for each data category. The first file contains the grid, in SHP format (shapefile); each populated grid cell represents a 10-km by 10-km area within which data are known to exist. The second file is a CSV (comma-separated values) file that contains all of the individual layers that intersected with the grid. This CSV can be joined with the map to retrieve a list of datasets that are available at any given site. The attributes in the CSV include: 1. grid_id: the id of the grid cell that the data intersect; 2. title: the name of the WFS service that intersected this grid cell; 3. abstract: the description of the WFS service that intersected this grid cell; 4. gap_type: the category of data availability these data fall within (as the current processing pulls data from NGDS, this category universally represents data that are available in the NGDS and ready for acquisition for analytic purposes); 5. proprietary_type: whether the data are considered proprietary; 6. service_type: the type of service; 7. base_url: the service URL.
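    As a hypothetical example of the join described above, the sketch below groups the CSV's intersection records by grid_id so each 10-km cell lists its available datasets. The file name, and the use of only three of the listed columns, are assumptions.

```python
import csv
from collections import defaultdict

# Hypothetical use of the gap-assessment CSV described above: group the
# intersecting NGDS service records by grid cell so each 10-km cell lists
# its datasets. The file name is an assumption; the column names follow
# the attribute list in the record.
datasets_by_cell = defaultdict(list)
with open("gap_assessment_intersections.csv", newline="") as f:
    for row in csv.DictReader(f):
        datasets_by_cell[row["grid_id"]].append(
            (row["title"], row["gap_type"], row["proprietary_type"]))

# Cells with the fewest intersecting datasets are candidate data gaps.
for cell, hits in sorted(datasets_by_cell.items(), key=lambda kv: len(kv[1]))[:5]:
    print(cell, len(hits))
```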

  19. A cross-domain communication resource scheduling method for grid-enabled communication networks

    NASA Astrophysics Data System (ADS)

    Zheng, Xiangquan; Wen, Xiang; Zhang, Yongding

    2011-10-01

    To support a wide range of grid applications in environments where various heterogeneous communication networks coexist, it is important to enable on-demand, dynamic integration and efficient co-sharing of cross-domain heterogeneous communication resources, thus providing communication services that no single communication resource could afford on its own. Based on plug-and-play co-sharing and soft integration of communication resources, a grid-enabled communication network is built up flexibly to provide on-demand communication services for grid applications with various quality-of-service requirements. Based on an analysis of joint job and communication resource scheduling in grid-enabled communication networks (GECN), this paper presents a cooperative cross-multi-domain communication resource scheduling method and describes its main processes: traffic requirement resolution for communication services, cross-multi-domain negotiation on communication resources, on-demand communication resource scheduling, and so on. The presented method provides communication service capability for cross-domain traffic delivery in GECNs. Further research toward validation and implementation of the presented method is outlined at the end.

  20. caGrid 1.0: An Enterprise Grid Infrastructure for Biomedical Research

    PubMed Central

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    Objective To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. Measurements The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. Conclusions While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community. PMID:18096909

  1. caGrid 1.0: an enterprise Grid infrastructure for biomedical research.

    PubMed

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community.

  2. Earth Science Data Grid System

    NASA Astrophysics Data System (ADS)

    Chi, Y.; Yang, R.; Kafatos, M.

    2004-05-01

    The Earth Science Data Grid System (ESDGS) is a software system supporting earth science data storage and access. It is built upon the Storage Resource Broker (SRB) data grid technology. We have developed a complete data grid system consisting of an SRB server, which provides users uniform access to diverse storage resources in a heterogeneous computing environment, and a metadata catalog server (MCAT), which manages the metadata associated with data sets, users, and resources. We have also developed earth science application metadata; geospatial, temporal, and content-based indexing; and several other tools. In this paper, we describe the software architecture and components of the data grid system, and use a practical example, the storage and access of rainfall data from the Tropical Rainfall Measuring Mission (TRMM), to illustrate its functionality and features.

  3. Practice of Meteorological Services in Turpan Solar Eco-City in China (Invited)

    NASA Astrophysics Data System (ADS)

    Shen, Y.; Chang, R.; He, X.; Jiang, Y.; Zhao, D.; Ma, J.

    2013-12-01

    Turpan Solar Eco-City is located in the Gobi in Northwest China and is one of the National New Energy Demonstration Cities. The city was planned and designed from October 2008 and constructed from May 2010, and the first phase of the project was completed by October 2013. Energy supply in Turpan Solar Eco-City comes mainly from PV power installed on all of the roofs, with a total capacity of 13.4 MW. Meteorological services played an important role in the planning and design of the city and in the running of its smart grid. 1) Solar energy resource assessment during the planning phase. Based on data observed at meteorological stations over the past 30 years, the solar energy resource was assessed and the available PV power generation capacity was calculated. The results showed that PV generation capacity is 1.3 times the power consumption; that is, the solar energy resource in Turpan is rich. 2) Determination of key meteorological parameters for architectural design. A professional solar energy resource station was constructed, observing Global Horizontal Irradiance, Inclined Total Solar Irradiance at 30 degrees, Inclined Total Solar Irradiance at the local latitude, and so on. From these measurements, the optimal tilt angle for the PV array was determined to be 30 degrees. The results indicated that the annual irradiation on the inclined plane at the optimal angle is 1.4% higher than on the plane inclined at the latitude angle, and 23.16% higher than on the horizontal plane. The diffuse ratio and the annual variation of the solar elevation angle are the two major factors that influence the irradiation on an inclined plane. 3) Solar energy resource forecasting for the smart grid. The Weather Research and Forecasting (WRF) model was used to forecast hourly solar radiation for the next 72 hours, and measured irradiance data were used to forecast minutely solar radiation for the next 4 hours. The forecasts were submitted to the smart grid and used to regulate the local grid and the city grid.
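    The tilt comparison above can be reproduced qualitatively with the standard isotropic-sky (Liu-Jordan) plane-of-array model sketched below. This is a textbook formula, not the project's method, and all input values are illustrative.

```python
import math

# Isotropic-sky (Liu-Jordan) plane-of-array irradiance for a single instant.
# A standard textbook model, not the Turpan project's method; all inputs
# below are illustrative assumptions.

def poa_irradiance(ghi, dni, dhi, sun_zenith, sun_azimuth, tilt, azimuth,
                   albedo=0.2):
    zen, bt = math.radians(sun_zenith), math.radians(tilt)
    # angle of incidence between the sun vector and the panel normal
    cos_aoi = (math.cos(zen) * math.cos(bt) +
               math.sin(zen) * math.sin(bt) *
               math.cos(math.radians(sun_azimuth - azimuth)))
    beam = dni * max(cos_aoi, 0.0)                   # direct component
    sky = dhi * (1 + math.cos(bt)) / 2               # isotropic sky diffuse
    ground = ghi * albedo * (1 - math.cos(bt)) / 2   # ground reflection
    return beam + sky + ground

# near-noon example: sun 35 deg from zenith, south-facing panel tilted 30 deg
print(f"{poa_irradiance(800, 850, 120, 35, 180, 30, 180):.0f} W/m^2")
```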

  4. LSST Resources for the Community

    NASA Astrophysics Data System (ADS)

    Jones, R. Lynne

    2011-01-01

    LSST will generate 100 petabytes of images and 20 petabytes of catalogs, covering 18,000-20,000 square degrees sampled every few days over a total of ten years -- all publicly available and exquisitely calibrated. The primary access to this data will be through Data Access Centers (DACs). DACs will provide access to catalogs of sources (single detections from individual images) and objects (associations of sources from multiple images). Simple user interfaces or direct SQL queries at the DAC can return user-specified portions of data from catalogs or images. More complex manipulations of the data, such as calculating multi-point correlation functions or creating alternative photo-z measurements on terabyte-scale data, can be completed with the DAC's own resources. Even more data-intensive computations requiring access to large numbers of image pixels at petabyte scale could also be conducted at the DAC, using compute resources allocated in a manner similar to a time allocation committee (TAC). DAC resources will be available to all individuals in member countries or institutes and to LSST science collaborations. DACs will also assist investigators with requests for allocations at national facilities such as the Petascale Computing Facility, TeraGrid, and Open Science Grid. Using data on this scale requires new approaches to accessibility and analysis, which are being developed through interactions with the LSST Science Collaborations. We are producing simulated images (as might be acquired by LSST) based on models of the universe and generating catalogs from these images (as well as from the base model) using the LSST data management framework in a series of data challenges. The resulting images and catalogs are being made available to the science collaborations to verify the algorithms and develop user interfaces. All LSST software is open source and available online, including preliminary catalog formats. We encourage feedback from the community.

  5. [Research on tumor information grid framework].

    PubMed

    Zhang, Haowei; Qin, Zhu; Liu, Ying; Tan, Jianghao; Cao, Haitao; Chen, Youping; Zhang, Ke; Ding, Yuqing

    2013-10-01

    In order to realize tumor disease information sharing and unified management, we utilized grid technology to effectively integrate the data and software resources distributed across various medical institutions, making the heterogeneous resources consistent and interoperable in both semantics and syntax. This article describes the tumor grid framework, in which service types are described in Web Service Description Language (WSDL) and XML Schema Definition (XSD), and clients use the serialized documents to operate on the distributed resources. The service objects can be built with the Unified Modeling Language (UML) as middleware to create application programming interfaces. All of the grid resources are registered in the index and published as Web Services based on the Web Services Resource Framework (WSRF). Using the system, we can build a multi-center, large-sample, networked tumor disease resource sharing framework to improve the level of development in medical scientific research institutions and the patients' quality of life.

  6. National Offshore Wind Energy Grid Interconnection Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel, John P.; Liu, Shu; Ibanez, Eduardo

    2014-07-30

    The National Offshore Wind Energy Grid Interconnection Study (NOWEGIS) considers the availability and potential impacts of interconnecting large amounts of offshore wind energy into the transmission system of the lower 48 contiguous United States. A total of 54 GW of offshore wind was assumed to be the target for the analyses conducted. A variety of issues are considered, including: the anticipated staging of offshore wind; offshore wind resource availability; offshore wind energy power production profiles; offshore wind variability; present and potential technologies for collection and delivery of offshore wind energy to the onshore grid; potential impacts to the existing utility systems most likely to receive large amounts of offshore wind; and regulatory influences on offshore wind development. The technology assessment considered the reliability of various high-voltage ac (HVAC) and high-voltage dc (HVDC) options and configurations. The utility system impacts of GW-scale integration of offshore wind are considered from an operational steady-state perspective and from a regional and national production cost perspective.

  7. Grid Enabled Geospatial Catalogue Web Service

    NASA Technical Reports Server (NTRS)

    Chen, Ai-Jun; Di, Li-Ping; Wei, Ya-Xing; Liu, Yang; Bui, Yu-Qi; Hu, Chau-Min; Mehrotra, Piyush

    2004-01-01

    Geospatial Catalogue Web Service is a vital service for sharing and interoperating volumes of distributed heterogeneous geospatial resources, such as data, services, applications, and their replicas, over the web. Based on Grid technology and the Open Geospatial Consortium (OGC) Catalogue Service - Web Information Model, this paper proposes a new information model for a Geospatial Catalogue Web Service, named GCWS, which securely provides Grid-based publishing, managing and querying of geospatial data and services, and transparent access to replica data and related services under the Grid environment. This information model integrates the information model of the Grid Replica Location Service (RLS)/Monitoring & Discovery Service (MDS) with the information model of the OGC Catalogue Service (CSW), and draws on the geospatial data metadata standards from ISO 19115, FGDC and the NASA EOS Core System, and service metadata standards from ISO 19119, to extend itself for expressing geospatial resources. Using GCWS, any valid geospatial user who belongs to an authorized Virtual Organization (VO) can securely publish and manage geospatial resources, and in particular query on-demand data in the virtual community and retrieve it through data-related services that provide functions such as subsetting, reformatting, and reprojection. This work facilitates geospatial resource sharing and interoperation under the Grid environment, making geospatial resources Grid-enabled and Grid technologies geospatially enabled. It also allows researchers to focus on science, and not on issues of computing capability, data location, processing and management. GCWS is also a key component for workflow-based virtual geospatial data production.

  8. gProcess and ESIP Platforms for Satellite Imagery Processing over the Grid

    NASA Astrophysics Data System (ADS)

    Bacu, Victor; Gorgan, Dorian; Rodila, Denisa; Pop, Florin; Neagu, Gabriel; Petcu, Dana

    2010-05-01

    The Environment oriented Satellite Data Processing Platform (ESIP) is developed through SEE-GRID-SCI (SEE-GRID eInfrastructure for regional eScience), co-funded by the European Commission through FP7 [1]. The gProcess platform [2] is a set of tools and services supporting the development and execution over the Grid of workflow-based processing, particularly satellite imagery processing. ESIP [3], [4] is built on top of the gProcess platform by adding a set of satellite image processing software modules and meteorological algorithms. Satellite images can reveal and supply important information on earth surface parameters, climate data, pollution levels, and weather conditions that can be used in different research areas. Generally, a satellite image processing algorithm can be decomposed into a set of modules that form a graph representation of the processing workflow. Two types of workflows can be defined in the gProcess platform: the abstract workflow (PDG - Process Description Graph), in which the user defines the algorithm conceptually, and the instantiated workflow (iPDG - instantiated PDG), which is the mapping of the PDG pattern onto particular satellite image and meteorological data [5]. The gProcess platform allows the definition of complex workflows by combining data resources, operators, services and sub-graphs. The gProcess platform is developed for the gLite middleware that is available in the EGEE and SEE-GRID infrastructures [6]. gProcess exposes its functionality through web services [7]. The Editor Web Service retrieves information on the resources available for developing complex workflows (operators, sub-graphs, services, supported resources, etc.). The Manager Web Service deals with resource management (uploading new resources such as workflows, operators, services, and data) and also retrieves information on workflows. The Executor Web Service manages the execution of instantiated workflows on the Grid infrastructure; in addition, it monitors execution and generates statistical data used to evaluate performance and optimize execution. The Viewer Web Service allows access to input and output data. To prove and validate the utility of the gProcess and ESIP platforms, the GreenView and GreenLand applications were developed. GreenView functionality includes the refinement of some meteorological data, such as temperature, and the calibration of satellite images based on field measurements. The GreenLand application performs classification of satellite images using a set of vegetation indices. The gProcess and ESIP platforms are also used in the GiSHEO project [8] to support the processing of Earth Observation data over the Grid in eGLE (the GiSHEO eLearning Environment). Performance experiments have revealed that workflow-based execution can improve the execution time of a satellite image processing algorithm [9]. Executing every workflow node on a different machine is not always reliable, however: some nodes take considerably longer than others and hold up the rest, degrading the total execution time, so the workflow nodes must be balanced carefully. Based on an optimization strategy, the workflow nodes can be grouped horizontally, vertically or in a hybrid approach. In this way, grouped operators are executed on one machine and the data transfer between workflow nodes is reduced. The dynamic nature of the Grid infrastructure makes it more exposed to failures, which can occur at worker nodes, in service availability, at storage elements, etc. Currently gProcess supports some basic error prevention and error management solutions; more advanced solutions will be integrated into the platform in the future. References: [1] SEE-GRID-SCI Project, http://www.see-grid-sci.eu/ [2] Bacu V., Stefanut T., Rodila D., Gorgan D., Process Description Graph Composition by gProcess Platform. HiPerGRID - 3rd International Workshop on High Performance Grid Middleware, 28 May, Bucharest. Proceedings of CSCS-17 Conference, Vol. 2, ISSN 2066-4451, pp. 423-430 (2009). [3] ESIP Platform, http://wiki.egee-see.org/index.php/JRA1_Commonalities [4] Gorgan D., Bacu V., Rodila D., Pop Fl., Petcu D., Experiments on ESIP - Environment oriented Satellite Data Processing Platform. SEE-GRID-SCI User Forum, 9-10 Dec 2009, Bogazici University, Istanbul, Turkey, ISBN: 978-975-403-510-0, pp. 157-166 (2009). [5] Radu A., Bacu V., Gorgan D., Diagrammatic Description of Satellite Image Processing Workflow. Workshop on Grid Computing Applications Development (GridCAD) at the SYNASC Symposium, 28 September 2007, Timisoara, IEEE Computer Press, ISBN 0-7695-3078-8, pp. 341-348 (2007). [6] Gorgan D., Bacu V., Stefanut T., Rodila D., Mihon D., Grid based Satellite Image Processing Platform for Earth Observation Applications Development. IDAACS'2009 - IEEE Fifth International Workshop on "Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications", 21-23 September, Cosenza, Italy, IEEE Computer Press, pp. 247-252 (2009). [7] Rodila D., Bacu V., Gorgan D., Integration of Satellite Image Operators as Workflows in the gProcess Application. Proceedings of ICCP2009 - IEEE 5th International Conference on Intelligent Computer Communication and Processing, 27-29 Aug 2009, Cluj-Napoca, ISBN: 978-1-4244-5007-7, pp. 355-358 (2009). [8] GiSHEO consortium, Project site, http://gisheo.info.uvt.ro [9] Bacu V., Gorgan D., Graph Based Evaluation of Satellite Imagery Processing over Grid. ISPDC 2008 - 7th International Symposium on Parallel and Distributed Computing, July 1-5, 2008, Krakow, Poland. IEEE Computer Society 2008, ISBN: 978-0-7695-3472-5, pp. 147-154.
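    The horizontal/vertical grouping mentioned in the abstract above can be sketched as a small graph transformation: fuse chains of workflow nodes that have exactly one successor and one predecessor, so each chain runs on one machine and its intermediate results never cross the network. The node names are invented; this illustrates the idea, not gProcess's actual scheduler.

```python
from collections import defaultdict

# "Vertical" grouping of a workflow graph (illustrative sketch): chains of
# strictly linear nodes are fused into one group per machine, reducing
# inter-node data transfer. Node names are invented.
edges = [("calibrate", "orthorectify"), ("orthorectify", "classify"),
         ("dem", "classify"), ("classify", "visualize")]

succ, pred = defaultdict(list), defaultdict(list)
for a, b in edges:
    succ[a].append(b)
    pred[b].append(a)
nodes = {n for e in edges for n in e}

# a chain head is any node not linearly preceded by exactly one other node
heads = [n for n in sorted(nodes)
         if len(pred[n]) != 1 or len(succ[pred[n][0]]) != 1]

groups = []
for head in heads:
    chain = [head]
    # extend while the link ahead is strictly linear (one succ, one pred)
    while len(succ[chain[-1]]) == 1 and len(pred[succ[chain[-1]][0]]) == 1:
        chain.append(succ[chain[-1]][0])
    groups.append(chain)

print(groups)
# [['calibrate', 'orthorectify'], ['classify', 'visualize'], ['dem']]
```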

  9. PVUSA: The value of photovoltaics in the distribution system. The Kerman Grid-Support Project

    NASA Astrophysics Data System (ADS)

    Wenger, Howard J.; Hoff, Thomas E.

    1995-05-01

    As part of the Photovoltaics for Utility Scale Applications (PVUSA) Project, Pacific Gas and Electric Company (PG&E) built the Kerman 500-kW photovoltaic power plant. Located near the end of a distribution feeder in a rural section of Fresno County, the plant was built not so much to demonstrate PV technology as to evaluate its interaction with the local distribution grid and quantify the available nontraditional grid-support benefits (those other than energy and capacity). As demand for new generation began to languish in the 1980s, and siting and permitting of power plants and transmission lines became more involved, utilities began considering smaller, distributed power sources. Potential benefits include shorter construction lead time, less capital outlay, and better utilization of existing assets. The results of a 1990/1991 PG&E study of the benefits a PV system brings to the distribution grid prompted the PVUSA Project to construct a plant at Kerman. Completed in 1993, the plant is believed to be the first built specifically to evaluate the multiple benefits to the grid of a strategically sited plant. Each of nine discrete benefits was evaluated in detail by first establishing the technical impact, then translating the results into present economic value. The benefits span the entire system, from the distribution feeder to the generation fleet. This work breaks new ground in the evaluation of distributed resources, and suggests that resource planning practices be expanded to account for these non-traditional benefits.

  10. A Greedy Double Auction Mechanism for Grid Resource Allocation

    NASA Astrophysics Data System (ADS)

    Ding, Ding; Luo, Siwei; Gao, Zhan

    To improve resource utilization and satisfy more users, a Greedy Double Auction Mechanism (GDAM) is proposed to allocate resources in grid environments. GDAM trades resources at discriminatory prices instead of a uniform price, reflecting the variance in requirements for profits and quantities. Moreover, GDAM applies different auction rules to different cases: over-demand, over-supply, and equilibrium of demand and supply. As a new mechanism for grid resource allocation, GDAM is proved to be strategy-proof, economically efficient, weakly budget-balanced and individually rational. Simulation results also confirm that GDAM outperforms the traditional mechanism on both total trade amount and user satisfaction percentage, especially as more users are involved in the auction market.
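    GDAM's exact pricing and case rules are in the paper; the sketch below is only a generic greedy double-auction matcher in the same spirit, sorting buyers by descending bid and sellers by ascending ask and trading each feasible pair at a per-pair (discriminatory) midpoint price.

```python
# Generic greedy double-auction matcher (illustrative only; GDAM's actual
# pricing and over-demand/over-supply rules are not reproduced here).

def greedy_double_auction(bids, asks):
    """bids/asks: lists of (agent_id, price, quantity). Returns trades as
    (buyer, seller, quantity, price) tuples."""
    buyers = sorted(bids, key=lambda x: -x[1])    # highest bid first
    sellers = sorted(asks, key=lambda x: x[1])    # lowest ask first
    trades, i, j = [], 0, 0
    while i < len(buyers) and j < len(sellers):
        b_id, bid, b_q = buyers[i]
        s_id, ask, s_q = sellers[j]
        if bid < ask:                 # no further profitable matches exist
            break
        q = min(b_q, s_q)
        trades.append((b_id, s_id, q, (bid + ask) / 2))  # discriminatory price
        b_q, s_q = b_q - q, s_q - q
        buyers[i] = (b_id, bid, b_q)
        sellers[j] = (s_id, ask, s_q)
        i += b_q == 0                 # advance past exhausted participants
        j += s_q == 0
    return trades

print(greedy_double_auction(
    bids=[("u1", 10, 5), ("u2", 8, 3), ("u3", 4, 2)],
    asks=[("r1", 5, 4), ("r2", 7, 6)]))
```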

  11. Physicists Get INSPIREd: INSPIRE Project and Grid Applications

    NASA Astrophysics Data System (ADS)

    Klem, Jukka; Iwaszkiewicz, Jan

    2011-12-01

    INSPIRE is the new high-energy physics scientific information system developed by CERN, DESY, Fermilab and SLAC. INSPIRE combines the curated and trusted contents of the SPIRES database with Invenio digital library technology. INSPIRE contains the entire HEP literature, with about one million records, and in addition to becoming the reference HEP scientific information platform, it aims to provide new kinds of data mining services and metrics to assess the impact of articles and authors. Grid and cloud computing provide new opportunities to offer better services in areas that require large CPU and storage resources, including document Optical Character Recognition (OCR) processing, full-text indexing of articles and improved metrics. D4Science-II is a European project that develops and operates an e-Infrastructure supporting Virtual Research Environments (VREs). It develops an enabling technology (gCube) which implements a mechanism for facilitating the interoperation of its e-Infrastructure with other autonomously running data e-Infrastructures. As a result, this creates the core of an e-Infrastructure ecosystem. INSPIRE is one of the e-Infrastructures participating in the D4Science-II project, in the context of which it makes some of its resources and services available to other members of the resulting ecosystem. Moreover, it benefits from the ecosystem via a dedicated Virtual Organization giving access to an array of resources, ranging from computing and storage resources of grid infrastructures to data and services.

  12. Wave Resource Characterization Using an Unstructured Grid Modeling Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Wei-Cheng; Yang, Zhaoqing; Wang, Taiping

    This paper presents a modeling study conducted on the central Oregon coast for wave resource characterization, using the unstructured-grid SWAN model coupled with a nested-grid WWIII model. The flexibility of models of various spatial resolutions and the effects of open-boundary conditions simulated by a nested-grid WWIII model with different physics packages were evaluated. The model results demonstrate the advantage of the unstructured-grid modeling approach in flexible model resolution, and good model skill in simulating the six wave resource parameters recommended by the International Electrotechnical Commission, in comparison to the observed data for 2009 at National Data Buoy Center Buoy 46050. Notably, spectral analysis indicates that the ST4 physics package improves upon the model skill of the ST2 physics package in predicting wave power density for large waves, which is important for wave resource assessment, device load calculation, and risk management. In addition, bivariate distributions show that the simulated sea state of maximum occurrence with the ST4 physics package matched the observed data better than that with the ST2 physics package. This study demonstrated that the unstructured-grid wave modeling approach, driven by the nested-grid regional WWIII outputs with the ST4 physics package, can efficiently provide accurate wave hindcasts to support wave resource characterization. Our study also suggests that wind effects need to be considered if the dimension of the model domain is greater than approximately 100 km, or O(10^2 km).
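    One of the IEC-recommended parameters mentioned above, omnidirectional wave power density, has a standard deep-water linear-theory estimate, sketched below. The buoy values used are illustrative, not NDBC 46050 measurements.

```python
import math

# Deep-water omnidirectional wave power density from significant wave height
# Hs and energy period Te, using the standard linear-theory estimate
# P = rho * g^2 * Hs^2 * Te / (64 * pi), in watts per metre of wave crest.
# The example Hs/Te values are illustrative, not NDBC 46050 measurements.

def wave_power_density(hs_m, te_s, rho=1025.0, g=9.81):
    return rho * g**2 * hs_m**2 * te_s / (64 * math.pi)

print(f"{wave_power_density(2.5, 10.0) / 1000:.1f} kW/m")  # ~30.7 kW/m
```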

  13. RANZCR Body Systems Framework of diagnostic imaging examination descriptors.

    PubMed

    Pitman, Alexander G; Penlington, Lisa; Doromal, Darren; Slater, Gregory; Vukolova, Natalia

    2014-08-01

    A unified and logical system of descriptors for diagnostic imaging examinations and procedures is a desirable resource for radiology in Australia and New Zealand and is needed to support core activities of RANZCR. Existing descriptor systems available in Australia and New Zealand (including the Medicare DIST and the ACC Schedule) have significant limitations and are inappropriate for broader clinical application. An anatomically based grid was constructed, with anatomical structures arranged in rows and diagnostic imaging modalities arranged in columns (including nuclear medicine and positron emission tomography). The grid was segregated into five body systems. The cells at the intersection of an anatomical structure row and an imaging modality column were populated with short, formulaic descriptors of the applicable diagnostic imaging examinations. Clinically illogical or physically impossible combinations were 'greyed out'. Where the same examination applied to different anatomical structures, the descriptor was kept identical for the purposes of streamlining. The resulting Body Systems Framework of diagnostic imaging examination descriptors lists all the reasonably common diagnostic imaging examinations currently performed in Australia and New Zealand using a unified grid structure allowing navigation by both referrers and radiologists. The Framework has been placed on the RANZCR website and is available for access free of charge by registered users. The Body Systems Framework of diagnostic imaging examination descriptors is a system of descriptors based on relationships between anatomical structures and imaging modalities. The Framework is now available as a resource and reference point for the radiology profession and to support core College activities. © 2014 The Royal Australian and New Zealand College of Radiologists.

  14. Towards Dynamic Service Level Agreement Negotiation:An Approach Based on WS-Agreement

    NASA Astrophysics Data System (ADS)

    Pichot, Antoine; Wäldrich, Oliver; Ziegler, Wolfgang; Wieder, Philipp

    In Grid, e-Science and e-Business environments, Service Level Agreements are often used to establish frameworks for the delivery of services between service providers and the organisations hosting the researchers. While these high-level SLAs define the overall quality of the services, it is desirable for the end-user to have dedicated service quality also for individual services, such as the orchestration of resources necessary for composed services. Grid-level scheduling services are typically responsible for the orchestration and co-ordination of resources in the Grid. Co-allocation, for example, requires the Grid-level scheduler to co-ordinate resource management systems located in different domains. As site autonomy has to be respected, negotiation is the only way to achieve the intended co-ordination. SLAs have emerged as a way to negotiate and manage resource usage in the Grid and are already adopted by a number of management systems. Therefore, it is natural to look for ways to adopt SLAs for Grid-level scheduling. In order to do this, efficient and flexible protocols are needed which support dynamic negotiation and creation of SLAs. In this paper we propose and discuss extensions to the WS-Agreement protocol addressing these issues.
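    WS-Agreement itself exchanges XML agreement documents; the sketch below merely illustrates, with plain Python objects standing in for those documents, the kind of bounded offer/counter-offer loop that dynamic negotiation adds on top of a one-shot offer-accept pattern. All terms and limits are invented.

```python
from dataclasses import dataclass

# Schematic of a bounded SLA negotiation loop (not the WS-Agreement wire
# format): the requester sends an offer, the provider accepts or counters
# with the closest feasible terms, and the requester adopts the counter.

@dataclass
class Offer:
    cpus: int
    start_hour: int   # earliest acceptable co-allocated start slot

def provider_counter(offer: Offer, free_cpus: int, earliest: int):
    """Accept if satisfiable; otherwise counter with feasible terms."""
    if offer.cpus <= free_cpus and offer.start_hour >= earliest:
        return "accept", offer
    return "counter", Offer(min(offer.cpus, free_cpus),
                            max(offer.start_hour, earliest))

offer = Offer(cpus=128, start_hour=9)
for round_no in range(3):                 # bounded number of rounds
    verdict, terms = provider_counter(offer, free_cpus=96, earliest=11)
    print(round_no, verdict, terms)
    if verdict == "accept":
        break
    offer = terms                         # requester adopts the counter-offer
```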

  15. A Development of Lightweight Grid Interface

    NASA Astrophysics Data System (ADS)

    Iwai, G.; Kawai, Y.; Sasaki, T.; Watase, Y.

    2011-12-01

    In order to support rapid development of Grid/Cloud-aware applications, we have developed an API that abstracts distributed computing infrastructures, based on SAGA (A Simple API for Grid Applications). SAGA, which is standardized in the OGF (Open Grid Forum), defines API specifications for access to distributed computing infrastructures such as Grids, Clouds and local computing resources. The Universal Grid API (UGAPI), a set of command line interfaces (CLIs) and APIs, aims to offer a simpler API that combines several SAGA interfaces with richer functionality. The UGAPI CLIs offer the typical functionality end users require for job management and file access on the different distributed computing infrastructures as well as on local computing resources. We have also built a web interface for particle therapy simulation and demonstrated a large-scale calculation using the different infrastructures at the same time. In this paper, we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources over the different infrastructures, with technical details and practical experiences.
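    Neither the UGAPI nor the SAGA calls are reproduced in this record; the sketch below only shows the general shape of such an abstraction layer, with one job-service interface dispatching to pluggable backends selected by URL scheme. Every name in it is hypothetical.

```python
import subprocess

# Hypothetical sketch of the abstraction pattern a SAGA-style layer provides:
# one job interface, multiple backends chosen by URL scheme. This is not the
# real UGAPI or SAGA API.

class LocalBackend:
    """Runs jobs on the local machine via subprocess."""
    def submit(self, executable, args):
        return subprocess.Popen([executable, *args])

class JobService:
    # further backends ('gram', 'cloud', ...) would plug in here
    _backends = {"fork": LocalBackend}

    def __init__(self, url):
        scheme = url.split("://", 1)[0]
        self._impl = self._backends[scheme]()

    def run(self, executable, *args):
        return self._impl.submit(executable, list(args))

js = JobService("fork://localhost")   # swap the scheme to target another infrastructure
job = js.run("/bin/echo", "hello grid")
job.wait()
```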

  16. Parallel high-performance grid computing: capabilities and opportunities of a novel demanding service and business class allowing highest resource efficiency.

    PubMed

    Kepper, Nick; Ettig, Ramona; Dickmann, Frank; Stehr, Rene; Grosveld, Frank G; Wedemann, Gero; Knoch, Tobias A

    2010-01-01

    Especially in the life-science and health-care sectors, IT requirements are immense, due to the large and complex systems to be analysed and simulated. Grid infrastructures play a rapidly increasing role here for research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for huge number-crunching of trivially parallelizable problems, parallel high-performance computing is increasingly required. Here, we show for the prime example of molecular dynamics simulations how the presence of large grid clusters with very fast network interconnects within grid infrastructures now allows efficient parallel high-performance grid computing, and thus combines the benefits of dedicated supercomputing centres and grid infrastructures. The demands of this service class are the highest, since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects are all needed in different combinations and must be considered in a highly dedicated manner to reach the highest performance efficiency. Beyond that, advanced and dedicated i) interaction with users, ii) job management, iii) accounting, and iv) billing not only combine classic with parallel high-performance grid usage but, more importantly, can also increase the efficiency of IT resource providers. Consequently, the mere "yes-we-can" becomes a real opportunity for sectors such as life science and health care, as well as for grid infrastructures, by reaching a higher level of resource efficiency.

  17. A Brokering Protocol for Agent-Based Grid Resource Discovery

    NASA Astrophysics Data System (ADS)

    Kang, Jaeyong; Sim, Kwang Mong

    Resource discovery is one of the basic and key aspects of grid resource management; it aims at searching for suitable resources to satisfy the requirements of users' applications. This paper introduces an agent-based brokering protocol that connects users and providers in grid environments. A connection algorithm that matches the requests and advertisements of users and providers based on pre-specified multiple criteria is devised and implemented. The connection algorithm consists of four stages: selection, evaluation, filtering, and recommendation. A series of experiments executing the protocol was carried out, and favorable results were obtained.
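    The four stages named above suggest a simple pipeline; the sketch below is an invented illustration of one (hard constraints, weighted scoring, a score threshold, then ranking), not the paper's actual criteria or weights.

```python
# Invented four-stage matching pipeline in the spirit of the protocol above
# (selection, evaluation, filtering, recommendation); attributes, weights
# and thresholds are illustrative assumptions.
resources = [
    {"id": "A", "cpus": 64, "price": 0.9, "load": 0.3},
    {"id": "B", "cpus": 16, "price": 0.4, "load": 0.7},
    {"id": "C", "cpus": 128, "price": 1.5, "load": 0.1},
]
request = {"min_cpus": 32, "max_price": 1.0}

# 1) selection: keep resources meeting the hard constraints
selected = [r for r in resources
            if r["cpus"] >= request["min_cpus"] and r["price"] <= request["max_price"]]

# 2) evaluation: score the survivors on the soft criteria
for r in selected:
    r["score"] = 0.6 * (1 - r["load"]) + 0.4 * (1 - r["price"] / request["max_price"])

# 3) filtering: drop resources below a score threshold
filtered = [r for r in selected if r["score"] >= 0.4]

# 4) recommendation: rank what remains, best match first
for r in sorted(filtered, key=lambda r: -r["score"]):
    print(r["id"], round(r["score"], 2))
```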

  18. JTS and its Application in Environmental Protection Applications

    NASA Astrophysics Data System (ADS)

    Atanassov, Emanouil; Gurov, Todor; Slavov, Dimitar; Ivanovska, Sofiya; Karaivanova, Aneta

    2010-05-01

    Environmental protection was identified as a domain of high interest for South East Europe, addressing practical problems related to security and quality of life. The gridification of three Bulgarian applications faces several challenges: MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies), which aims at an efficient Grid implementation of a sensitivity analysis of the Danish Eulerian Model; MSACM (Multi-Scale Atmospheric Composition Modeling), which aims to produce an integrated, multi-scale, Balkan-region-oriented modelling system able to interface the scales of the problem, from emissions on the urban scale to their transport and transformation on the local and regional scales; and MSERRHSA (Modeling System for Emergency Response to the Release of Harmful Substances in the Atmosphere), which aims to develop and deploy a modeling system for emergency response to releases of harmful substances in the atmosphere, targeted at the SEE and more specifically the Balkan region. These applications are resource-intensive, in terms of both CPU utilization and data transfers and storage. Their use for operational purposes imposes availability requirements that are difficult to meet in a dynamically changing Grid environment, and their validation is resource-intensive and time-consuming. The successful resolution of these problems requires collaborative work and support from the infrastructure operators, who in turn are interested in avoiding underutilization of resources. That is why we developed the Job Track Service and tested it during the development of the grid implementations of MCSAES, MSACM and MSERRHSA. The Job Track Service (JTS) is a grid middleware component that facilitates the provision of Quality of Service in grid infrastructures using gLite middleware, such as EGEE and SEEGRID. The service is based on messaging middleware and uses standard protocols such as AMQP (Advanced Message Queuing Protocol) and XMPP (eXtensible Messaging and Presence Protocol) for real-time communication, while its security model is based on GSI authentication. It enables resource owners to provide the most popular types of quality of service of execution to some of their users, using a standardized model. The first version of the service was offered to individual users. In this work we describe a new version of the Job Track Service offering application-specific functionality, geared towards the specific needs of the environmental modelling and protection applications and oriented towards collaborative usage by groups and subgroups of users. We used the modular design of the JTS to implement plugins enabling smoother interaction of the users with the Grid environment. Our experience shows improved response times and a decreased failure rate in executions of the applications. We also present observations from the use of the South East European Grid infrastructure.
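    The messaging pattern JTS builds on can be sketched as a job wrapper publishing status events to an AMQP queue that the tracking service consumes. The queue name, event fields and broker location below are illustrative assumptions, not JTS's actual schema.

```python
import json
import pika  # AMQP client; assumes a broker such as RabbitMQ is reachable

# Minimal sketch of the AMQP publish side of a job-tracking pattern like the
# one JTS builds on. Queue name, event fields and broker location are
# illustrative assumptions, not JTS's actual schema.
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.queue_declare(queue="jobtrack", durable=True)

event = {"job_id": "mcsaes-000123", "state": "Running", "site": "BG01-IPP"}
channel.basic_publish(exchange="",
                      routing_key="jobtrack",
                      body=json.dumps(event))
conn.close()
```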

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, A.; Lopez, A.; Sengupta, M.

    Typical Meteorological Year (TMY) data sets provide industry-standard resource information for building designers and are commonly used by the solar industry to estimate photovoltaic and concentrating solar power system performance. Historically, TMY data sets were only available for certain station locations, but current TMY data sets are available on the same grid as the National Solar Radiation Database data and are referred to as the gridded TMY. In this report, a comparison of TMY, typical direct (normal irradiance) year (TDY), and typical global (horizontal irradiance) year (TGY) data sets was performed to better understand the impact of ancillary weather variables upon them. These analyses identified geographical areas of high and low temporal and spatial variability, thereby providing insight into the representativeness of a particular TMY data set for use in renewable energy as well as other applications.

  20. Grid infrastructure for automatic processing of SAR data for flood applications

    NASA Astrophysics Data System (ADS)

    Kussul, Natalia; Skakun, Serhiy; Shelestov, Andrii

    2010-05-01

    More and more geosciences applications are being put onto Grids. Geosciences applications are complex: they involve elaborate workflows, computationally intensive environmental models, and the management and integration of heterogeneous data sets; Grids offer solutions to these problems. Many geosciences applications, especially those related to disaster management and mitigation, require geospatial services to be delivered on time. For example, information on flooded areas should be provided to the corresponding organizations (local authorities, civil protection agencies, UN agencies, etc.) within 24 h for resources to be allocated effectively to mitigate the disaster. Therefore, providing infrastructure and services that enable automatic generation of products based on the integration of heterogeneous data is a task of great importance. In this paper we present a Grid infrastructure for automatic processing of synthetic-aperture radar (SAR) satellite images to derive flood products. In particular, we use SAR data acquired by ESA's ENVISAT satellite, and neural networks to derive flood extent. The data are provided in operational mode from the ESA rolling archive (within an ESA Category-1 grant). We developed a portal that is based on the OpenLayers framework and provides an access point to the developed services. Through the portal the user can define a geographical region and search for the required data. Upon selection of data sets, a workflow is automatically generated and executed on the resources of the Grid infrastructure. For workflow execution and management we use the Karajan language. The workflow of SAR data processing consists of the following steps: image calibration, image orthorectification, image processing with neural networks, topographic effects removal, geocoding and transformation to lat/long projection, and visualisation. These steps are executed by different software, and can be executed by different resources of the Grid system. The resulting geospatial services are available in various OGC standards such as KML and WMS. Currently, the Grid infrastructure integrates the resources of several geographically distributed organizations, in particular: Space Research Institute NASU-NSAU (Ukraine), with deployed computational and storage nodes based on Globus Toolkit 4 (http://www.globus.org) and gLite 3 (http://glite.web.cern.ch) middleware, access to geospatial data and a Grid portal; Institute of Cybernetics of NASU (Ukraine), with deployed computational and storage nodes (SCIT-1/2/3 clusters) based on Globus Toolkit 4 middleware and access to computational resources (approximately 500 processors); and the Center of Earth Observation and Digital Earth, Chinese Academy of Sciences (CEODE-CAS, China), with deployed computational nodes based on Globus Toolkit 4 middleware and access to geospatial data (approximately 16 processors). We are currently adding new geospatial services based on optical satellite data, namely MODIS. This work is carried out jointly with CEODE-CAS. Using the workflow patterns developed for SAR data processing, we are building new workflows for optical data processing.
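    The six processing steps listed above form a linear chain, which the sketch below wires into one automatic pipeline; each placeholder function stands in for the actual software a Grid job would invoke.

```python
# Sketch of chaining the SAR processing steps listed above into one automatic
# pipeline. The step functions are placeholders for the actual software the
# Grid jobs invoke; the order follows the description in the text.

def calibrate(img): return img + ":cal"
def orthorectify(img): return img + ":ortho"
def classify_flood(img): return img + ":nn"        # neural-network water mask
def remove_topo_effects(img): return img + ":topo"
def geocode(img): return img + ":latlon"           # transform to lat/long
def visualise(img): return img + ":kml"            # e.g. KML/WMS product

PIPELINE = [calibrate, orthorectify, classify_flood,
            remove_topo_effects, geocode, visualise]

def run_pipeline(product):
    for step in PIPELINE:           # each step could run as a separate Grid job
        product = step(product)
    return product

print(run_pipeline("ASAR_scene_001"))
```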

  1. ATLAS user analysis on private cloud resources at GoeGrid

    NASA Astrophysics Data System (ADS)

    Glaser, F.; Nadal Serrano, J.; Grabowski, J.; Quadt, A.

    2015-12-01

    User analysis job demands can exceed available computing resources, especially before major conferences. ATLAS physics results can potentially be slowed down due to the lack of resources. For these reasons, cloud research and development activities are now included in the skeleton of the ATLAS computing model, which has been extended by using resources from commercial and private cloud providers to satisfy the demands. However, most of these activities are focused on Monte-Carlo production jobs, extending the resources at Tier-2. To evaluate the suitability of the cloud-computing model for user analysis jobs, we developed a framework to launch an ATLAS user analysis cluster in a cloud infrastructure on demand and evaluated two solutions. The first solution is entirely integrated in the Grid infrastructure by using the same mechanism, which is already in use at Tier-2: A designated Panda-Queue is monitored and additional worker nodes are launched in a cloud environment and assigned to a corresponding HTCondor queue according to the demand. Thereby, the use of cloud resources is completely transparent to the user. However, using this approach, submitted user analysis jobs can still suffer from a certain delay introduced by waiting time in the queue and the deployed infrastructure lacks customizability. Therefore, our second solution offers the possibility to easily deploy a totally private, customizable analysis cluster on private cloud resources belonging to the university.
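    The first solution described above is essentially a demand-tracking loop; the toy sketch below shows its shape (poll queued jobs, size the worker pool to match), with the queue query stubbed out and all thresholds invented.

```python
import time

# Toy version of the queue-driven provisioning described above: watch a
# queue's demand and start or retire cloud worker nodes to track it. The
# cloud and queue calls are stubs; thresholds and polling interval are
# illustrative assumptions.

def pending_jobs():
    """Stand-in for querying the monitored analysis queue."""
    return 42

workers = 0
TARGET_JOBS_PER_WORKER = 10

for _ in range(3):                  # a real service would loop indefinitely
    demand = pending_jobs()
    wanted = -(-demand // TARGET_JOBS_PER_WORKER)   # ceiling division
    if wanted > workers:
        print(f"launching {wanted - workers} cloud worker(s)")
    elif wanted < workers:
        print(f"retiring {workers - wanted} idle worker(s)")
    workers = wanted
    time.sleep(1)                   # poll interval (shortened for the sketch)
```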

  2. Grids, virtualization, and clouds at Fermilab

    DOE PAGES

    Timm, S.; Chadwick, K.; Garzoglio, G.; ...

    2014-06-11

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud and GPCF) and core computing (Virtual Services). Lastly, this work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.
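    The availability pattern described above (redundant service instances behind a routing layer that health-checks them) can be illustrated with a minimal Python sketch. This is not Fermilab's implementation; the hostnames, port and check logic below are hypothetical stand-ins for what an LVS director does natively.

    ```python
    # Illustrative health-checked routing across redundant replicas,
    # e.g. MySQL instances kept consistent via circular replication.

    import socket

    REPLICAS = ["vm-services-1.example.org", "vm-services-2.example.org"]
    PORT = 3306  # hypothetical service port

    def is_alive(host: str, port: int, timeout: float = 2.0) -> bool:
        """Simple TCP health check, as a director might perform."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def pick_backend() -> str:
        """Route to the first healthy replica; raise if all are down."""
        for host in REPLICAS:
            if is_alive(host, PORT):
                return host
        raise RuntimeError("no healthy replica available")

    if __name__ == "__main__":
        try:
            print("routing to", pick_backend())
        except RuntimeError as err:
            print(err)
    ```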

  3. Grids, virtualization, and clouds at Fermilab

    NASA Astrophysics Data System (ADS)

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud and GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  4. Neutron Science TeraGrid Gateway

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lynch, Vickie E; Chen, Meili; Cobb, John W

    The unique contributions of the Neutron Science TeraGrid Gateway (NSTG) are the connection of national user facility instrument data sources to the integrated cyberinfrastructure of the National Science Foundation TeraGrid and the development of a neutron science gateway that allows neutron scientists to use TeraGrid resources to analyze their data, including comparison of experiment with simulation. The NSTG is working in close collaboration with the Spallation Neutron Source (SNS) at Oak Ridge as its principal facility partner. The SNS is a next-generation neutron source; it has completed construction at a cost of $1.4 billion and is ramping up operations. The SNS will provide an order of magnitude greater flux than any previous facility in the world and will be available to all of the nation's scientists, independent of funding source, on a peer-reviewed merit basis. With this new capability, the neutron science community is facing orders of magnitude larger data sets and is at a critical point for data analysis and simulation. There is a recognized need for new ways to manage and analyze data to optimize both beam time and scientific output. The TeraGrid is providing new capabilities in the gateway for simulations using McStas and a fitting service on distributed TeraGrid resources to improve turnaround. NSTG staff are also exploring the replication of experimental data in archival storage. As part of the SNS partnership, the NSTG provides access to gateway support, cyberinfrastructure outreach, community development, and user support for the neutron science community. This community includes not only SNS staff and users but extends to all the major worldwide neutron scattering centers.

  5. Grid-based HPC astrophysical applications at INAF Catania.

    NASA Astrophysics Data System (ADS)

    Costa, A.; Calanducci, A.; Becciani, U.; Capuzzo Dolcetta, R.

    The research activity in the grid area at INAF Catania has been devoted to two main goals: the integration of a multiprocessor supercomputer (IBM SP4) within the INFN-GRID middleware and the development of a web portal, Astrocomp-G, for the submission of astrophysical jobs into the grid infrastructure. Most of the actual grid implementation infrastructure is based on commodity hardware, i.e. i386-architecture machines (Intel Celeron, Pentium III and IV, AMD Duron and Athlon) running Linux RedHat OS. We were the first institute to integrate a totally different machine, an IBM SP with RISC architecture and AIX OS, as a powerful Worker Node inside a grid infrastructure. We identified and ported to AIX the grid components dealing with job monitoring and execution, and properly tuned the Computing Element to deliver jobs to this special Worker Node. For testing purposes we used MARA, an astrophysical application for the analysis of light curve sequences. Astrocomp-G is a user-friendly front end to our grid site. Users who want to submit the astrophysical applications already available in the portal need to own a valid personal X509 certificate, in addition to a username and password released by the grid portal web master. The personal X509 certificate is a prerequisite for the creation of a short- or long-term proxy certificate that allows the grid infrastructure services to identify clearly whether the owner of a job has the permissions to use resources and data. X509 and proxy certificates are part of GSI (Grid Security Infrastructure), a standard security tool adopted by all major grid sites around the world.
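    The proxy-handling step can be sketched as below, assuming the standard Globus command-line tools grid-proxy-init and grid-proxy-info are installed; the 12-hour lifetime is illustrative.

    ```python
    # Sketch of GSI proxy handling before job submission. Relies on the
    # Globus CLI tools; error handling treats a missing or expired proxy
    # as "0 seconds left".

    import subprocess

    def proxy_seconds_left() -> int:
        """Remaining proxy lifetime in seconds; 0 if absent or expired."""
        try:
            out = subprocess.run(["grid-proxy-info", "-timeleft"],
                                 capture_output=True, text=True, check=True)
            return max(0, int(out.stdout.strip()))
        except (OSError, subprocess.CalledProcessError, ValueError):
            return 0

    def create_proxy(hours: int = 12) -> None:
        """Create a short-term proxy from the user's personal X509
        certificate (prompts for the private-key passphrase)."""
        subprocess.run(["grid-proxy-init", "-valid", f"{hours}:00"], check=True)

    if __name__ == "__main__":
        if proxy_seconds_left() == 0:
            create_proxy()
    ```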

  6. Payload and General Support Computer (PGSC) Detailed Test Objective (DTO) number 795 postflight report: STS-41

    NASA Technical Reports Server (NTRS)

    Adolf, Jurine A.; Beberness, Benjamin J.; Holden, Kritina L.

    1991-01-01

    Since 1983, the Space Transportation System (STS) had routinely flown the GRiD 1139 (80286) laptop computer as a portable onboard computing resource. In the spring of 1988, the GRiD 1530, an 80386-based machine, was chosen to replace the GRiD 1139. Human factors ground evaluations and detailed test objectives (DTOs) examined the usability of the available display types under different lighting conditions and various angle deviations. All proved unsuitable due to either flight qualification or usability problems. In 1990, an electroluminescent (EL) display for the GRiD 1530 became flight qualified, and another DTO was undertaken to examine this display on-orbit. Under conditions of indirect sunlight and low ambient light, the readability of the text and graphics was limited only by the observer's distance from the display. Although a problem with direct sunlight viewing still existed, there were no problems with large angular deviations or dark adaptation. No further evaluations were deemed necessary. The GRiD 1530 with the EL display was accepted by the STS program as the new standard for the PGSC.

  7. Renewable Resource Data | Grid Modernization | NREL

    Science.gov Websites

    Measurement and resource data, and tools related to U.S. biomass, geothermal, solar, and wind energy resources, to help energy system designers, building architects and engineers, renewable energy analysts, and others accelerate the integration of renewable energy technologies on the grid.

  8. Resource Management and Risk Mitigation in Online Storage Grids

    ERIC Educational Resources Information Center

    Du, Ye

    2010-01-01

    This dissertation examines the economic value of online storage resources that could be traded and shared as potential commodities and the consequential investments and deployment of such resources. The value proposition of emergent business models such as Akamai and Amazon S3 in online storage grids is capacity provision and content delivery at…

  9. Improving Resource Selection and Scheduling Using Predictions. Chapter 1

    NASA Technical Reports Server (NTRS)

    Smith, Warren

    2003-01-01

    The introduction of computational grids has resulted in several new problems in the area of scheduling that can be addressed using predictions. The first problem is selecting where to run an application on the many resources available in a grid. Our approach to help address this problem is to provide predictions of when an application would start to execute if submitted to specific scheduled computer systems. The second problem is gaining simultaneous access to multiple computer systems so that distributed applications can be executed. We help address this problem by investigating how to support advance reservations in local scheduling systems. Our approaches to both of these problems are based on predictions for the execution time of applications on space-shared parallel computers. As a side effect of this work, we also discuss how predictions of application run times can be used to improve scheduling performance.
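    The idea of predicting run times from similar historical jobs can be sketched as follows; the record fields and the simple averaging rule are illustrative, not the chapter's exact template-based algorithm.

    ```python
    # Toy run-time predictor: average the observed run times of past
    # jobs that match the new job's characteristics.

    from statistics import mean
    from typing import List, Optional, Tuple

    # (user, executable, cpus, observed run time in seconds)
    History = List[Tuple[str, str, int, float]]

    def predict_runtime(history: History, user: str,
                        executable: str, cpus: int) -> Optional[float]:
        """Return the mean run time of matching past jobs, or None."""
        similar = [t for (u, e, c, t) in history
                   if u == user and e == executable and c == cpus]
        return mean(similar) if similar else None

    history = [
        ("alice", "cfd_solver", 64, 3600.0),
        ("alice", "cfd_solver", 64, 4200.0),
        ("bob",   "cfd_solver", 32, 2000.0),
    ]
    print(predict_runtime(history, "alice", "cfd_solver", 64))  # 3900.0
    ```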

  10. Technical Analysis Feasibility Study on Smart Microgrid System in Sekolah Tinggi Teknik PLN

    NASA Astrophysics Data System (ADS)

    Suyanto, Heri

    2018-02-01

    Nowadays the application of new and renewable energy as the main resource for power plants has greatly increased. High penetration of renewable energy into the grid will influence the quality and reliability of the electricity system, owing to the intermittent characteristics of new and renewable energy resources. Smart grid or microgrid technology has the ability to deal with this intermittency, especially when renewable energy resources are integrated into the grid at large scale, and so it can improve the reliability and efficiency of the grid. We plan to implement a smart microgrid system at Sekolah Tinggi Teknik PLN as a pilot project. Before the pilot project starts, a feasibility study must be conducted. In this feasibility study, the renewable energy resources and the load characteristics at the site are measured, and the technical aspects are then analyzed. This paper presents the analysis of this feasibility study.

  11. A grid-enabled web service for low-resolution crystal structure refinement.

    PubMed

    O'Donovan, Daniel J; Stokes-Rees, Ian; Nam, Yunsun; Blacklow, Stephen C; Schröder, Gunnar F; Brunger, Axel T; Sliz, Piotr

    2012-03-01

    Deformable elastic network (DEN) restraints have proved to be a powerful tool for refining structures from low-resolution X-ray crystallographic data sets. Unfortunately, optimal refinement using DEN restraints requires extensive calculations and is often hindered by a lack of access to sufficient computational resources. The DEN web service presented here intends to provide structural biologists with access to resources for running computationally intensive DEN refinements in parallel on the Open Science Grid, the US cyberinfrastructure. Access to the grid is provided through a simple and intuitive web interface integrated into the SBGrid Science Portal. Using this portal, refinements combined with full parameter optimization that would take many thousands of hours on standard computational resources can now be completed in several hours. An example of the successful application of DEN restraints to the human Notch1 transcriptional complex using the grid resource, and summaries of all submitted refinements, are presented as justification.

  12. Analysis of the Transient Process of Wind Power Resources when there are Voltage Sags in the Distribution Grid

    NASA Astrophysics Data System (ADS)

    Nhu Y, Do

    2018-03-01

    Vietnam has many advantages in wind power resources, and both the capacity and the number of wind power projects in Vietnam are increasing over time. With the growing amount of wind power fed into the national grid, research and analysis are necessary to ensure the safety and reliability of the wind power connection. In the national distribution grid, voltage sags occur regularly and can strongly influence the operation of wind power; the most serious consequence is disconnection. This paper presents an analysis of the distribution grid's transient process when voltage sags occur. Based on the analysis, solutions are recommended to improve the reliability and effective operation of wind power resources.

  13. Institutional Support | Grid Modernization | NREL

    Science.gov Websites

    Objective technical assistance and information on the challenges posed by grid modernization and the increasing deployment of distributed energy and renewable resources.

  14. Distributed hierarchical control architecture for integrating smart grid assets during normal and disrupted operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalsi, Karan; Fuller, Jason C.; Somani, Abhishek

    Disclosed herein are representative embodiments of methods, apparatus, and systems for facilitating operation and control of a resource distribution system (such as a power grid). Among the disclosed embodiments is a distributed hierarchical control architecture (DHCA) that enables smart grid assets to effectively contribute to grid operations in a controllable manner, while helping to ensure system stability and equitably rewarding their contribution. Embodiments of the disclosed architecture can help unify the dispatch of these resources to provide both market-based and balancing services.

  15. Information Power Grid (IPG) Tutorial 2003

    NASA Technical Reports Server (NTRS)

    Meyers, George

    2003-01-01

    For NASA and the general community today, Grid middleware: a) provides tools to access and use data sources (databases, instruments, ...); b) provides tools to access computing (unique and generic); c) is an enabler of large-scale collaboration. Dynamically responding to needs is a key selling point of a grid: independent resources can be joined as appropriate to solve a problem. Provide tools to enable the building of frameworks for applications. Provide value-added services to the NASA user base for utilizing resources on the grid in new and more efficient ways.

  16. AGIS: The ATLAS Grid Information System

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Belov, Sergey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-12-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the whole ATLAS Grid needed by ATLAS Distributed Computing applications and services.

  17. An improved resource management model based on MDS

    NASA Astrophysics Data System (ADS)

    Yuan, Man; Sun, Changying; Li, Pengfei; Sun, Yongdong; He, Rui

    2005-11-01

    Grid technology provides a convenient way of managing Grid resources through the Monitoring and Discovery Service (MDS) proposed by the Globus Alliance. In a Grid environment, all kinds of resources, such as computational resources, storage resources and others, can be organized according to the MDS specifications. However, MDS is a theoretical framework, and in a small-world intranet with limited resources it has its own limitations. Based on MDS, we propose an improved lightweight method for managing corporate computational and storage resources in an intranet (IMDS). First, in MDS, resource description information is stored in LDAP. Although LDAP is a lightweight directory access protocol, in practice programmers rarely master how to access and store resource information in an LDAP store, which limits the use of MDS. In an intranet, resource descriptions can instead be stored in an RDBMS, where programmers and users can access the information through standard SQL. Second, in MDS, how the various Grid resources are monitored is not transparent to programmers and users, which limits its application scope. Resource monitoring based on SNMP is widely employed in intranets, so an SNMP-based resource monitoring method is integrated into MDS. Finally, all kinds of intranet resources can be described in XML, their description information stored in an RDBMS, such as MySQL, and retrieved through standard SQL, while dynamic information about the resources is sent to the resource store via SNMP. A prototype for resource description and monitoring was designed and implemented in an intranet (see the sketch below).
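    A minimal sketch of the RDBMS-backed registry idea follows, using SQLite for self-containment; the schema and rows are illustrative, and in IMDS the dynamic fields would be refreshed from SNMP data rather than hard-coded.

    ```python
    # Store resource descriptions in an RDBMS and query them with plain
    # SQL instead of LDAP searches.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE resources (
        name TEXT, kind TEXT, cpus INTEGER, free_cpus INTEGER)""")
    conn.executemany(
        "INSERT INTO resources VALUES (?, ?, ?, ?)",
        [("node-a", "compute", 16, 4),
         ("node-b", "compute", 32, 32),
         ("disk-1", "storage", 0, 0)])

    # Standard SQL replaces an LDAP search: find compute nodes with slack.
    # In IMDS, free_cpus would be kept current from SNMP monitoring data.
    for row in conn.execute(
            "SELECT name, free_cpus FROM resources "
            "WHERE kind = 'compute' AND free_cpus > 0 ORDER BY free_cpus DESC"):
        print(row)
    ```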

  18. Hydrological Scenario Using Tools and Applications Available in enviroGRIDS Portal

    NASA Astrophysics Data System (ADS)

    Bacu, V.; Mihon, D.; Stefanut, T.; Rodila, D.; Cau, P.; Manca, S.; Soru, C.; Gorgan, D.

    2012-04-01

    Nowadays decision makers, but also citizens, are concerned with the sustainability and vulnerability of land management practices in various respects, in particular regarding water quality and quantity in complex watersheds. The Black Sea Catchment is an important watershed in Central and Eastern Europe. In the FP7 project enviroGRIDS [1], a Web Portal was developed that incorporates different tools and applications focused on geospatial data management, hydrologic model calibration, execution and visualization, and training activities. This presentation highlights, from the end-user point of view, the scenario related to hydrological models using the tools and applications available in the enviroGRIDS Web Portal [2]. The development of SWAT (Soil and Water Assessment Tool) hydrological models is a well-known procedure for hydrological specialists [3]. Starting from the primary data (information on weather, soil properties, topography, vegetation, and land management practices of the particular watershed) used to develop SWAT hydrological models, through to specific reports about water quality in the studied watershed, the hydrological specialist uses different applications available in the enviroGRIDS portal. The tools and applications available through the enviroGRIDS portal do not deal with building up the SWAT hydrological models themselves. They are mainly focused on: the calibration procedure (gSWAT [4]), which uses the Grid computational infrastructure to speed up the calibration process; the development of specific scenarios (BASHYT [5]), which starts from an already calibrated SWAT hydrological model and defines new scenarios; the execution of scenarios (gSWATSim [6]), which executes the scenarios exported from BASHYT; and visualization (BASHYT), which displays charts, tables and maps. Each application is built up as a stack of functional layers. We combine layers of different applications through vertical interoperability in order to build the desired complex functionality; applications can also collaborate at the same architectural level, which represents horizontal interoperability. Both horizontal and vertical interoperability are accomplished by services and by exchanging data. The calibration procedure requires huge computational resources, which are provided by the Grid infrastructure, while scenario development through BASHYT requires a flexible way of interacting with the SWAT model in order to easily change the input model. The large SWAT user community, from the enviroGRIDS consortium or outside it, may greatly benefit from the tools and applications related to the calibration process, scenario development and execution in the enviroGRIDS portal. [1]. enviroGRIDS project, http://envirogrids.net/ [2]. Gorgan D., Abbaspour K., Cau P., Bacu V., Mihon D., Giuliani G., Ray N., Lehmann A., Grid Based Data Processing Tools and Applications for Black Sea Catchment Basin. IDAACS 2011 - The 6th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, 15-17 September 2011, Prague. IEEE Computer Press, pp. 223-228 (2011). [3]. Soil and Water Assessment Tool, http://www.brc.tamus.edu/swat/index.html [4]. Bacu V., Mihon D., Rodila D., Stefanut T., Gorgan D., Grid Based Architectural Components for SWAT Model Calibration. HPCS 2011 - International Conference on High Performance Computing and Simulation, 4-8 July, Istanbul, Turkey, ISBN 978-1-61284-381-0, doi: 10.1109/HPCSim.2011.5999824, pp. 193-198 (2011). [5]. Manca S., Soru C., Cau P., Meloni G., Fiori M., A multi model and multiscale, GIS oriented Web framework based on the SWAT model to face issues of water and soil resource vulnerability. Presentation at the 5th International SWAT Conference, August 3-7, 2009, http://www.brc.tamus.edu/swat/4thswatconf/docs/rooma/session5/Cau-Bashyt.pdf [6]. Bacu V., Mihon D., Stefanut T., Rodila D., Gorgan D., Cau P., Manca S., Grid Based Services and Tools for Hydrological Model Processing and Visualization. SYNASC 2011 - 13th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (in press).

  19. Smart grid integration of small-scale trigeneration systems

    NASA Astrophysics Data System (ADS)

    Vacheva, Gergana; Kanchev, Hristiyan; Hinov, Nikolay

    2017-12-01

    This paper presents a study on the possibilities for implementing local heating, air-conditioning and electricity generation (trigeneration) as a distributed energy resource in the Smart Grid. By means of microturbine-based generators and absorption chillers, buildings are able to meet their electrical load curve partially or entirely, or even supply power to the grid, by following their daily heating and air-conditioning schedule. The principles of small-scale cooling, heating and power generation systems are presented first; then the thermal calculations for an example building are performed: the heat losses due to thermal conductivity and the estimated daily heating and air-conditioning load curves. By considering daily power consumption curves and weather data for several winter and summer days, the heating/air-conditioning schedule is estimated, as is the electrical energy available from a microturbine-based cogeneration system. Simulation results confirm the potential of using cogeneration and trigeneration systems for local distributed electricity generation and grid support during the daily peaks of power consumption.

  20. Grid Technology as a Cyber Infrastructure for Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Hinke, Thomas H.

    2004-01-01

    This paper describes how grids and grid service technologies can be used to develop an infrastructure for the Earth Science community. This cyberinfrastructure would be populated with a hierarchy of services, including discipline-specific services such as those needed by the Earth Science community, as well as a set of core services that are needed by most applications. This core would include data-oriented services used for accessing and moving data, as well as computer-oriented services used to broker access to resources and control the execution of tasks on the grid. The availability of such an Earth Science cyberinfrastructure would ease the development of Earth Science applications. With such a cyberinfrastructure, application workflows could be created to extract data from one or more of the Earth Science archives and then process it by passing it through various persistent services that are part of the cyberinfrastructure, such as services to perform subsetting, reformatting, data mining and map projections.

  1. Quantifying variability of the solar resource using the Kriging method

    NASA Astrophysics Data System (ADS)

    Monger, Samuel Haze

    Energy consumption will steadily rise in the coming years, and if fossil fuels, particularly coal, continue to be the primary resource for electricity generation, our planet is going to face many hardships. Solar energy is the most abundant resource available to humankind, and although solar-generated power is still expensive, the technology is in a state of rapid development as governments strive to meet renewable energy goals as part of the effort to slow climate change and become less dependent on finite resources. However, there are many valid concerns associated with integrating high levels of solar energy into the transmission grid, owing to the rapid changes in power output and voltage of photovoltaic generation caused by drops in the solar resource. Therefore, a study was conducted to address these issues by quantifying the variability of solar irradiance over a specific area using a uniform grid of 45 irradiance sensors. Another goal of this study was to determine whether fewer measurement stations could be used in the quantification of variability. This thesis addresses these issues by using the Sandia Variability Index and a dead-band ramp algorithm in a statistical analysis of irradiance fluctuations on the regulation and sub-regulation time frames. A kriging method is introduced which accurately predicts variability using only four stations (a minimal kriging sketch follows below).
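    The following is a minimal ordinary-kriging sketch in the spirit of the study, assuming an exponential semivariogram with illustrative parameters; it does not reproduce the thesis's fitted model or the Sandia Variability Index.

    ```python
    # Ordinary kriging: interpolate irradiance at an unsampled point
    # from four stations by solving the kriging system
    # [[gamma(d_ij), 1], [1^T, 0]] w = [gamma(d_i0), 1].

    import numpy as np

    def variogram(h, sill=1.0, rng=500.0):
        """Exponential semivariogram as a function of separation h (m)."""
        return sill * (1.0 - np.exp(-3.0 * h / rng))

    def ordinary_krige(xy, z, target):
        n = len(z)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = variogram(d)   # station-to-station semivariances
        A[n, n] = 0.0              # Lagrange-multiplier corner
        b = np.ones(n + 1)
        b[:n] = variogram(np.linalg.norm(xy - target, axis=-1))
        w = np.linalg.solve(A, b)[:n]   # kriging weights
        return float(w @ z)             # estimate at the target point

    # Four stations (x, y in metres) and their irradiance readings (W/m^2)
    xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
    z = np.array([820.0, 800.0, 640.0, 610.0])
    print(ordinary_krige(xy, z, np.array([50.0, 50.0])))
    ```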

  2. The BioGRID Interaction Database: 2011 update

    PubMed Central

    Stark, Chris; Breitkreutz, Bobby-Joe; Chatr-aryamontri, Andrew; Boucher, Lorrie; Oughtred, Rose; Livstone, Michael S.; Nixon, Julie; Van Auken, Kimberly; Wang, Xiaodong; Shi, Xiaoqi; Reguly, Teresa; Rust, Jennifer M.; Winter, Andrew; Dolinski, Kara; Tyers, Mike

    2011-01-01

    The Biological General Repository for Interaction Datasets (BioGRID) is a public database that archives and disseminates genetic and protein interaction data from model organisms and humans (http://www.thebiogrid.org). BioGRID currently holds 347 966 interactions (170 162 genetic, 177 804 protein) curated from both high-throughput data sets and individual focused studies, as derived from over 23 000 publications in the primary literature. Complete coverage of the entire literature is maintained for budding yeast (Saccharomyces cerevisiae), fission yeast (Schizosaccharomyces pombe) and thale cress (Arabidopsis thaliana), and efforts to expand curation across multiple metazoan species are underway. The BioGRID houses 48 831 human protein interactions that have been curated from 10 247 publications. Current curation drives are focused on particular areas of biology to enable insights into conserved networks and pathways that are relevant to human health. The BioGRID 3.0 web interface contains new search and display features that enable rapid queries across multiple data types and sources. An automated Interaction Management System (IMS) is used to prioritize, coordinate and track curation across international sites and projects. BioGRID provides interaction data to several model organism databases, resources such as Entrez-Gene and other interaction meta-databases. The entire BioGRID 3.0 data collection may be downloaded in multiple file formats, including PSI MI XML. Source code for BioGRID 3.0 is freely available without any restrictions. PMID:21071413

  3. 76 FR 7867 - Proposed Collection; Comment Request; Cancer Biomedical Informatics Grid® (caBIG®) Support...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-11

    ... proposed projects to be submitted to the Office of Management and Budget (OMB) for review and approval... Information Technology (CBIIT) launched the enterprise phase of the caBIG® initiative in early 2007... resources available through the caBIG® Enterprise Support Network (ESN), including the caBIG®...

  4. Resilient Core Networks for Energy Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuntze, Nicolai; Rudolph, Carsten; Leivesley, Sally

    2014-07-28

    Substations and their control are crucial for the availability of electricity in today's energy distribution. Advanced energy grids with Distributed Energy Resources require higher complexity in substations, distributed functionality, and communication between devices inside substations and between substations. Substations also include more and more intelligent devices and ICT-based systems. All these devices are connected to other systems by different types of communication links or are situated in uncontrolled environments. Therefore, the risk of ICT-based attacks on energy grids is growing. Consequently, security measures to counter these risks need to be an intrinsic part of energy grids. This paper introduces the concept of a Resilient Core Network to interconnect substations. This core network provides essential security features, enables fast detection of attacks, and allows for a distributed and autonomous mitigation of ICT-based risks.

  5. System design and implementation of digital-image processing using computational grids

    NASA Astrophysics Data System (ADS)

    Shen, Zhanfeng; Luo, Jiancheng; Zhou, Chenghu; Huang, Guangyu; Ma, Weifeng; Ming, Dongping

    2005-06-01

    As a special type of digital image, remotely sensed images are playing increasingly important roles in our daily lives. Because of the enormous amounts of data involved and the difficulties of data processing and transfer, an important issue for computer and geoscience experts is developing Internet technology to implement rapid remotely sensed image processing. Computational grids are able to solve this problem effectively: these networks of computer workstations enable the sharing of data and resources, and are used by computer experts to resolve imbalances in network resources and lopsided usage. In China, computational grids combined with spatial-information-processing technology have formed a new technology, namely spatial-information grids. In the field of remotely sensed images, spatial-information grids work more effectively for network computing, data processing, resource sharing, task cooperation and so on. This paper focuses mainly on the application of computational grids to digital-image processing. First, we describe the architecture of digital-image processing on the basis of computational grids; its implementation is then discussed in detail with respect to middleware technology. The whole network-based intelligent image-processing system is evaluated on the basis of experimental analysis of remotely sensed image-processing tasks; the results confirm the feasibility of applying computational grids to digital-image processing.

  6. Energy trading market evolution to the energy internet: a feasibility review on the enabling internet of things (IoT) cloud technologies

    NASA Astrophysics Data System (ADS)

    Agavanakis, Kyriakos; Papageorgas, Panagiotis G.; Vokas, Georgios A.; Ampatis, Dionysios; Salame, Chafic

    2018-05-01

    The energy trading market is a consequence of the evolution of the grid, which so far has been highly regulated and accessible to only a small group of stakeholders. Being a fundamental part of national economies, its business models and operating regulatory structures have been the subject of intense research and experimentation. At the same time, the increasing integration of distributed energy resources at the microgrid level is shifting the grid infrastructure's dependence from fossil and nuclear sources to renewable energy sources, smart storage and smart management. In this paper, it is argued that this shift, which marks the transformation towards the next industrial era, puts a large number of smaller producers, and ultimately all end users, in the market foreground in the form of actively engaged prosumers. Furthermore, it is shown that the computational resources and technology to support an open, widely accessible and fair peer-to-peer trading market are already available, and that such an implementation is feasible and immediately achievable using just commercial products and a side-by-side approach in place of unrealistic big-bang grid upgrades.

  7. Scaling up ATLAS Event Service to production levels on opportunistic computing platforms

    NASA Astrophysics Data System (ADS)

    Benjamin, D.; Caballero, J.; Ernst, M.; Guan, W.; Hover, J.; Lesny, D.; Maeno, T.; Nilsson, P.; Tsulaia, V.; van Gemmeren, P.; Vaniachine, A.; Wang, F.; Wenaus, T.; ATLAS Collaboration

    2016-10-01

    Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, the Edison Cray XC30 supercomputer, backfill at Tier-2 and Tier-3 sites, opportunistic resources at the Open Science Grid (OSG), and the ATLAS High Level Trigger farm between data-taking periods. Because of specific aspects of opportunistic resources, such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.

  8. Impact of the 2017 Solar Eclipse on Smart Grid

    NASA Astrophysics Data System (ADS)

    Reda, I.; Andreas, A.; Sengupta, M.; Habte, A.

    2017-12-01

    With the increasing interest in using solar energy as a major contributor to renewable energy utilization, and with the focus on using smart grids to optimize the use of electrical energy based on demand and resources at different locations, there arises the need to know the Moon's position in the sky with respect to the Sun. When a solar eclipse occurs, the Moon's disk may totally or partially shade the Sun's disk, reducing the irradiance delivered by the Sun; consequently, a resource on the grid is affected. Knowledge of the Moon's position can then provide smart grid users with information about potential total or partial solar eclipses at different locations in the grid, so that other resources on the grid can be directed where needed when such phenomena occur (a rough feasibility check is sketched below). At least five solar eclipses occur yearly at different locations on Earth, and each can last three hours or more depending on the location, which can have devastating effects on smart grid users. On August 21, 2017 a partial solar eclipse will occur at the National Renewable Energy Laboratory in Golden, Colorado, USA. The solar irradiance will be measured during the eclipse and compared with data generated by a model for validation.
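    A rough version of such a check can be sketched with astropy (assumed to be installed): flag times when the topocentric Sun-Moon angular separation falls below roughly the sum of the two apparent disk radii. The 0.53-degree threshold and the site coordinates are illustrative approximations.

    ```python
    # Flag a potential (partial or total) solar eclipse at a grid node
    # by checking the apparent Sun-Moon separation from that location.

    import astropy.units as u
    from astropy.coordinates import EarthLocation, get_body
    from astropy.time import Time

    def eclipse_possible(when: Time, site: EarthLocation) -> bool:
        sun = get_body("sun", when, site)    # topocentric Sun position
        moon = get_body("moon", when, site)  # topocentric Moon position
        # ~0.53 deg approximates the combined apparent disk radii
        return bool(sun.separation(moon) < 0.53 * u.deg)

    # NREL, Golden, Colorado, near maximum of the 2017-08-21 partial eclipse
    golden = EarthLocation(lat=39.74 * u.deg, lon=-105.18 * u.deg,
                           height=1829 * u.m)
    print(eclipse_possible(Time("2017-08-21 17:45:00"), golden))
    ```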

  9. A bioinformatics knowledge discovery in text application for grid computing

    PubMed Central

    Castellano, Marcello; Mastronardi, Giuseppe; Bellotti, Roberto; Tarricone, Gianfranco

    2009-01-01

    Background A fundamental activity in biomedical research is knowledge discovery, i.e. the ability to search through large amounts of biomedical information such as documents and data. High-performance computational infrastructures, such as Grid technologies, are emerging as a possible infrastructure to tackle the intensive use of information and communication resources in the life sciences. The goal of this work was to develop a software middleware solution that exploits the many knowledge discovery applications on scalable and distributed computing systems, achieving intensive use of ICT resources. Methods The development of a grid application for Knowledge Discovery in Text using a middleware-solution-based methodology is presented. The system must be able to implement a user application model and process jobs by creating many parallel jobs for distribution across the computational nodes. Finally, the system must be aware of the available computational resources and their status, and must be able to monitor the execution of the parallel jobs. These operational requirements led to the design of a middleware that is specialized using user application modules. It includes a graphical user interface giving access to a node search system, a load-balancing system and a transfer optimizer that reduces communication costs. Results A middleware solution prototype and its performance evaluation in terms of the speed-up factor are shown. It was written in Java on Globus Toolkit 4 to build the grid infrastructure on GNU/Linux computer grid nodes. A test was carried out, and results are shown for the named-entity recognition search of symptoms and pathologies. The search was applied to a collection of 5,000 scientific documents taken from PubMed. Conclusion In this paper we discuss the development of a grid application based on a middleware solution. It has been tested on a Knowledge Discovery in Text process to extract new and useful information about symptoms and pathologies from a large collection of unstructured scientific documents. As an example, a Knowledge Discovery in Databases computation was applied to the output produced by the KDT user module to extract new knowledge about symptom and pathology bio-entities. PMID:19534749

  10. A bioinformatics knowledge discovery in text application for grid computing.

    PubMed

    Castellano, Marcello; Mastronardi, Giuseppe; Bellotti, Roberto; Tarricone, Gianfranco

    2009-06-16

    A fundamental activity in biomedical research is knowledge discovery, i.e. the ability to search through large amounts of biomedical information such as documents and data. High-performance computational infrastructures, such as Grid technologies, are emerging as a possible infrastructure to tackle the intensive use of information and communication resources in the life sciences. The goal of this work was to develop a software middleware solution that exploits the many knowledge discovery applications on scalable and distributed computing systems, achieving intensive use of ICT resources. The development of a grid application for Knowledge Discovery in Text using a middleware-solution-based methodology is presented. The system must be able to implement a user application model and process jobs by creating many parallel jobs for distribution across the computational nodes (a simple scatter/gather skeleton is sketched below). Finally, the system must be aware of the available computational resources and their status, and must be able to monitor the execution of the parallel jobs. These operational requirements led to the design of a middleware that is specialized using user application modules. It includes a graphical user interface giving access to a node search system, a load-balancing system and a transfer optimizer that reduces communication costs. A middleware solution prototype and its performance evaluation in terms of the speed-up factor are shown. It was written in Java on Globus Toolkit 4 to build the grid infrastructure on GNU/Linux computer grid nodes. A test was carried out, and results are shown for the named-entity recognition search of symptoms and pathologies. The search was applied to a collection of 5,000 scientific documents taken from PubMed. In this paper we discuss the development of a grid application based on a middleware solution. It has been tested on a Knowledge Discovery in Text process to extract new and useful information about symptoms and pathologies from a large collection of unstructured scientific documents. As an example, a Knowledge Discovery in Databases computation was applied to the output produced by the KDT user module to extract new knowledge about symptom and pathology bio-entities.
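    The scatter/gather behaviour described above can be sketched as below; find_entities is a hypothetical stand-in for the named-entity recognition job run on each grid node, and the local process pool stands in for grid scheduling.

    ```python
    # Split a document collection into parallel jobs and merge the
    # per-chunk results (scatter/gather).

    from concurrent.futures import ProcessPoolExecutor
    from typing import Dict, List

    def chunk(docs: List[str], n_jobs: int) -> List[List[str]]:
        """Split the collection into n_jobs roughly equal parts."""
        size = -(-len(docs) // n_jobs)  # ceiling division
        return [docs[i:i + size] for i in range(0, len(docs), size)]

    def find_entities(batch: List[str]) -> Dict[str, int]:
        """Hypothetical per-node job: count symptom/pathology mentions."""
        hits: Dict[str, int] = {}
        for doc in batch:
            for term in ("fever", "fibrosis"):  # toy term dictionary
                hits[term] = hits.get(term, 0) + doc.lower().count(term)
        return hits

    if __name__ == "__main__":
        docs = ["Fever and cough reported.",
                "Pulmonary fibrosis case study."] * 10
        with ProcessPoolExecutor() as pool:
            partials = pool.map(find_entities, chunk(docs, 4))
        totals: Dict[str, int] = {}
        for part in partials:
            for term, count in part.items():
                totals[term] = totals.get(term, 0) + count
        print(totals)
    ```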

  11. Integrating Solar Power onto the Electric Grid - Bridging the Gap between Atmospheric Science, Engineering and Economics

    NASA Astrophysics Data System (ADS)

    Ghonima, M. S.; Yang, H.; Zhong, X.; Ozge, B.; Sahu, D. K.; Kim, C. K.; Babacan, O.; Hanna, R.; Kurtz, B.; Mejia, F. A.; Nguyen, A.; Urquhart, B.; Chow, C. W.; Mathiesen, P.; Bosch, J.; Wang, G.

    2015-12-01

    One of the main obstacles to high penetration of solar power is the variable nature of solar power generation. To mitigate variability, grid operators have to schedule additional reliability resources, at considerable expense, to ensure that load requirements are met by generation. Thus, despite the decreasing cost of solar PV, the cost of integrating solar power will increase as the penetration of solar resources onto the electric grid increases. There are three principal tools currently available to mitigate variability impacts: (i) flexible generation, (ii) storage, either virtual (demand response) or physical devices, and (iii) solar forecasting. Storage devices are a powerful tool capable of ensuring smooth power output from renewable resources; however, the high cost of storage is prohibitive, and markets are still being designed to leverage their full potential and mitigate their limitations (e.g. empty storage). Solar forecasting provides valuable information on the daily net load profile and upcoming ramps (increasing or decreasing solar power output), thereby giving the grid advance warning to schedule ancillary generation more accurately, or to curtail solar power output. In order to develop solar forecasting as a tool that can be utilized by grid operators, we identified two focus areas: (i) developing solar forecast technology and improving solar forecast accuracy, and (ii) developing forecasts that can be incorporated within existing grid planning and operation infrastructure. The first area required atmospheric science and engineering research, while the second required detailed knowledge of energy markets and power engineering. Motivated by this background, we will emphasize area (i) in this talk and provide an overview of recent advancements in solar forecasting, especially in two areas: (a) numerical modeling tools for coastal stratocumulus to improve scheduling in the day-ahead California energy market; (b) development of a sky imager to provide short-term forecasts (0-20 min ahead) to improve optimization and control of equipment on distribution feeders with high penetration of solar. Leveraging such tools, which have seen extensive use in the atmospheric sciences, supports the development of accurate physics-based solar forecast models. Directions for future research are also provided.

  12. Blast2GO goes grid: developing a grid-enabled prototype for functional genomics analysis.

    PubMed

    Aparicio, G; Götz, S; Conesa, A; Segrelles, D; Blanquer, I; García, J M; Hernandez, V; Robles, M; Talon, M

    2006-01-01

    The vast amount and complexity of data generated in genomic research implies that new dedicated and powerful computational tools need to be developed to meet analysis requirements. Blast2GO (B2G) is a bioinformatics tool for Gene Ontology-based DNA or protein sequence annotation and function-based data mining. The application has been developed with the aim of offering an easy-to-use tool for functional genomics research. Typical B2G users are middle-sized genomics labs carrying out sequencing, EST and microarray projects, handling datasets of up to several thousand sequences. In the current version of B2G, the power and analytical potential of both annotation and function data mining is somewhat restricted by the computational power behind each particular installation. In order to offer enhanced computational capacity within this bioinformatics application, a Grid component is being developed. A prototype has been conceived for the particular problem of speeding up the BLAST searches to obtain fast results for large datasets. Many efforts in the literature concern the speeding up of BLAST searches, but few of them deal with the use of large heterogeneous production Grid infrastructures, which are the infrastructures that could reach the largest number of resources and the best load balancing for data access. The Grid service under development analyses requests based on the number of sequences, splitting them according to the available resources (see the sketch below). Lower-level computation is performed through mpiBLAST. The software architecture is based on the WSRF standard.
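    The request-splitting step might look like the following sketch, which divides a FASTA input into whole-record batches sized to an assumed worker count; the file contents and the batch rule are illustrative, not the service's actual policy.

    ```python
    # Split a FASTA input into batches of whole records, one batch per
    # available worker, so each batch can be dispatched to a separate
    # (e.g. mpiBLAST) job.

    from typing import List

    def split_fasta(text: str, n_workers: int) -> List[str]:
        """Split FASTA text into n_workers batches of whole records."""
        records = [">" + r for r in text.split(">") if r.strip()]
        size = -(-len(records) // n_workers)  # ceiling division
        return ["".join(records[i:i + size])
                for i in range(0, len(records), size)]

    fasta = ">seq1\nACGT\n>seq2\nGGCC\n>seq3\nTTAA\n>seq4\nCAGT\n"
    for i, batch in enumerate(split_fasta(fasta, 2)):
        print(f"--- batch {i} ---\n{batch}")
    ```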

  13. Interoperating Cloud-based Virtual Farms

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Colamaria, F.; Colella, D.; Casula, E.; Elia, D.; Franco, A.; Lusso, S.; Luparello, G.; Masera, M.; Miniello, G.; Mura, D.; Piano, S.; Vallero, S.; Venaruzzo, M.; Vino, G.

    2015-12-01

    The present work aims at optimizing the use of computing resources available at the Italian grid Tier-2 sites of the ALICE experiment at CERN LHC by making them accessible to interactive distributed analysis, thanks to modern solutions based on cloud computing. The scalability and elasticity of the computing resources via dynamic ("on-demand") provisioning is essentially limited by the size of the computing site, reaching the theoretical optimum only in the asymptotic case of infinite resources. The main challenge of the project is to overcome this limitation by federating different sites through a distributed cloud facility. Storage capacities of the participating sites are seen as a single federated storage area, removing the need to mirror data across them: high data-access efficiency is guaranteed by location-aware analysis software and storage interfaces, in a way that is transparent from an end-user perspective. Moreover, interactive analysis on the federated cloud reduces execution time with respect to grid batch jobs. Tests of the investigated solutions for both cloud computing and distributed storage on a wide area network will be presented.

  14. A Roadmap for caGrid, an Enterprise Grid Architecture for Biomedical Research

    PubMed Central

    Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Hong, Neil Chue

    2012-01-01

    caGrid is a middleware system which combines the Grid computing, the service oriented architecture, and the model driven architecture paradigms to support development of interoperable data and analytical resources and federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG™) program. This program is established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from biomedical research, Grid computing, and high performance computing communities. PMID:18560123

  15. A roadmap for caGrid, an enterprise Grid architecture for biomedical research.

    PubMed

    Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Chue Hong, Neil

    2008-01-01

    caGrid is a middleware system which combines the Grid computing, the service oriented architecture, and the model driven architecture paradigms to support development of interoperable data and analytical resources and federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG) program. This program is established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from biomedical research, Grid computing, and high performance computing communities.

  16. Research on the architecture and key technologies of SIG

    NASA Astrophysics Data System (ADS)

    Fu, Zhongliang; Meng, Qingxiang; Huang, Yan; Liu, Shufan

    2007-06-01

    Along with the development of computer networks, the Grid has become one of the hottest topics in research on the sharing and cooperation of Internet resources throughout the world. This paper presents a new five-layer architecture for SIG (comprising a Data Collecting Layer, Grid Layer, Service Layer, Application Layer and Client Layer), extending the traditional three-layer architecture (resource layer, service layer and client layer). The authors propose a new mixed network mode for the Spatial Information Grid that integrates CAG (Certificate Authority of Grid) and P2P (Peer to Peer) in the Grid Layer; in addition, they discuss some key technologies of SIG and analyse the functions of these key technologies.

  17. Accounting and Accountability for Distributed and Grid Systems

    NASA Technical Reports Server (NTRS)

    Thigpen, William; McGinnis, Laura F.; Hacker, Thomas J.

    2001-01-01

    While the advent of distributed and grid computing systems will open new opportunities for scientific exploration, the reality of such implementations could prove to be a system administrator's nightmare. A lot of effort is being spent on identifying and resolving the obvious problems of security, scheduling, authentication and authorization. Lurking in the background, though, are the largely unaddressed issues of accountability and usage accounting: (1) mapping resource usage to resource users; (2) defining usage economies or methods for resource exchange; (3) describing implementation standards that minimize and compartmentalize the tasks required for a site to participate in a grid.

  18. The Atmospheric Data Acquisition And Interpolation Process For Center-TRACON Automation System

    NASA Technical Reports Server (NTRS)

    Jardin, M. R.; Erzberger, H.; Denery, Dallas G. (Technical Monitor)

    1995-01-01

    The Center-TRACON Automation System (CTAS), an advanced new air traffic automation program, requires knowledge of spatial and temporal atmospheric conditions such as the wind speed and direction, the temperature and the pressure in order to accurately predict aircraft trajectories. Real-time atmospheric data is available in a grid format so that CTAS must interpolate between the grid points to estimate the atmospheric parameter values. The atmospheric data grid is generally not in the same coordinate system as that used by CTAS so that coordinate conversions are required. Both the interpolation and coordinate conversion processes can introduce errors into the atmospheric data and reduce interpolation accuracy. More accurate algorithms may be computationally expensive or may require a prohibitively large amount of data storage capacity so that trade-offs must be made between accuracy and the available computational and data storage resources. The atmospheric data acquisition and processing employed by CTAS will be outlined in this report. The effects of atmospheric data processing on CTAS trajectory prediction will also be analyzed, and several examples of the trajectory prediction process will be given.
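    The interpolation step can be illustrated with a minimal bilinear example on a 2-D grid; the field values and the fractional-index interface are illustrative, and CTAS's actual interpolation and coordinate conversions are more involved.

    ```python
    # Bilinear interpolation of a gridded atmospheric field (e.g. wind
    # speed) at a point between grid nodes.

    import numpy as np

    def bilinear(field: np.ndarray, x: float, y: float) -> float:
        """Interpolate field[i, j] at fractional indices (x, y)."""
        i, j = int(x), int(y)
        fx, fy = x - i, y - j
        return float(
            field[i, j]         * (1 - fx) * (1 - fy)
            + field[i + 1, j]     * fx       * (1 - fy)
            + field[i, j + 1]     * (1 - fx) * fy
            + field[i + 1, j + 1] * fx       * fy)

    wind = np.array([[10.0, 12.0],
                     [14.0, 18.0]])   # wind speed (m/s) at four grid points
    print(bilinear(wind, 0.5, 0.5))   # 13.5 m/s at the cell centre
    ```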

  19. Experimental Evaluation of Load Rejection Over-Voltage from Grid-Tied Solar Inverters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Austin; Hoke, Andy; Chakraborty, Sudipta; Ropp, Michael

    This paper investigates the impact of load rejection over-voltage (LRO) from commercially available grid-tied photovoltaic (PV) solar inverters. LRO can occur when a local feeder or breaker opens and the power output from a distributed energy resource exceeds the load power. Simplified models of current-controlled inverters can over-predict over-voltage magnitudes, thus it is useful to quantify LRO through hardware testing. The load rejection event was replicated using a hardware testbed at the National Renewable Energy Laboratory (NREL), and a set of commercially available PV inverters was tested to quantify the impact of LRO for a range of generation-to-load ratios. The magnitude and duration of the over-voltage events are reported in this paper along with a discussion of characteristic inverter output behavior. The results for the inverters under test showed that maximum over-voltage magnitudes were less than 200 percent of nominal voltage, and much lower in many test cases. These research results are important because utilities that interconnect inverter-based DER need to understand their characteristics under abnormal grid conditions.

  20. A methodology toward manufacturing grid-based virtual enterprise operation platform

    NASA Astrophysics Data System (ADS)

    Tan, Wenan; Xu, Yicheng; Xu, Wei; Xu, Lida; Zhao, Xianhua; Wang, Li; Fu, Liuliu

    2010-08-01

    Virtual enterprises (VEs) have become one of the main types of organisations in the manufacturing sector, through which consortium companies organise their manufacturing activities. To be competitive, a VE relies on complementary core competences among its members, achieved through resource sharing and agile manufacturing capacity. A manufacturing grid (M-Grid) is a platform on which production resources can be shared. In this article, an M-Grid-based VE operation platform (MGVEOP) is presented, as it enables the sharing of production resources among geographically distributed enterprises. The performance management system of the MGVEOP is based on the balanced scorecard and has the capacity for self-learning. The study shows that an MGVEOP can make a semi-automated process possible for a VE, and the proposed MGVEOP is efficient and agile.

  1. Smoothing effect for spatially distributed renewable resources and its impact on power grid robustness.

    PubMed

    Nagata, Motoki; Hirata, Yoshito; Fujiwara, Naoya; Tanaka, Gouhei; Suzuki, Hideyuki; Aihara, Kazuyuki

    2017-03-01

    In this paper, we show that the spatial correlation of renewable energy outputs greatly influences the robustness of power grids against large fluctuations of the effective power. First, we evaluate the spatial correlation among renewable energy outputs and find that it depends on the locations, although its influence on power grids is not well known. Second, employing the topology of the power grid in eastern Japan, we analyze the robustness of the power grid with spatially correlated renewable energy outputs. The analysis is performed using a realistic differential-algebraic equations model. The results show that the spatial correlation of the energy resources strongly degrades the robustness of the power grid. Our results suggest that the spatial correlation of renewable energy outputs should be considered when estimating the stability of power grids.
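    The first step, estimating spatial correlation from output time series, can be sketched as follows with synthetic data; the paper itself evaluates real renewable output records rather than the toy series below.

    ```python
    # Pairwise correlation of renewable outputs at three sites. Two
    # nearby sites share a common weather driver (high correlation);
    # a distant site does not (low correlation), so aggregating it
    # smooths fluctuations.

    import numpy as np

    rng = np.random.default_rng(0)
    common = rng.normal(size=1000)                    # shared weather driver
    site_a = common + 0.5 * rng.normal(size=1000)
    site_b = common + 0.5 * rng.normal(size=1000)     # nearby: correlated
    site_c = rng.normal(size=1000)                    # distant: uncorrelated

    outputs = np.vstack([site_a, site_b, site_c])
    corr = np.corrcoef(outputs)   # pairwise correlation matrix
    print(np.round(corr, 2))
    ```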

  2. Nomadic migration : a service environment for autonomic computing on the Grid

    NASA Astrophysics Data System (ADS)

    Lanfermann, Gerd

    2003-06-01

    In recent years, there has been a dramatic increase in available compute capacities. However, these “Grid resources” are rarely accessible in a continuous stream, but rather appear scattered across various machine types, platforms and operating systems, which are coupled by networks of fluctuating bandwidth. It becomes increasingly difficult for scientists to exploit available resources for their applications. We believe that intelligent, self-governing applications should be able to select resources in a dynamic and heterogeneous environment: migrating applications find a new resource when the old capacity is used up; spawning simulations launch algorithms on external machines to speed up the main execution; applications are restarted as soon as a failure is detected. All of these actions can be taken without human interaction. A distributed compute environment possesses an intrinsic unreliability. Any application that interacts with such an environment must be able to cope with its failing components: deteriorating networks, crashing machines, failing software. We construct a reliable service infrastructure by endowing a service environment with a peer-to-peer topology. This “Grid Peer Services” infrastructure accommodates high-level services like migration and spawning, as well as fundamental services for application launching, file transfer and resource selection. It utilizes existing Grid technology wherever possible to accomplish its tasks. An Application Information Server acts as a generic information registry for all participants in a service environment. The service environment that we developed allows an application, for example, to send a relocation request to a migration server. The server selects a new computer based on the transmitted resource requirements, transfers the application's checkpoint and binary to the new host, and resumes the simulation. Although the Grid's underlying resource substrate is not continuous, we achieve persistent computations on Grids by relocating the application. We show with real-world examples that a traditional genome analysis program can be easily modified to perform self-determined migrations in this service environment.

  3. Managing large-scale workflow execution from resource provisioning to provenance tracking: The CyberShake example

    USGS Publications Warehouse

    Deelman, E.; Callaghan, S.; Field, E.; Francoeur, H.; Graves, R.; Gupta, N.; Gupta, V.; Jordan, T.H.; Kesselman, C.; Maechling, P.; Mehringer, J.; Mehta, G.; Okaya, D.; Vahi, K.; Zhao, L.

    2006-01-01

    This paper discusses the process of building an environment where large-scale, complex, scientific analysis can be scheduled onto a heterogeneous collection of computational and storage resources. The example application is the Southern California Earthquake Center (SCEC) CyberShake project, an analysis designed to compute probabilistic seismic hazard curves for sites in the Los Angeles area. We explain which software tools were used to build the system and describe their functionality and interactions. We show the results of running the CyberShake analysis, which included over 250,000 jobs, using resources available through SCEC and the TeraGrid. © 2006 IEEE.

  4. Lambda Data Grid: Communications Architecture in Support of Grid Computing

    DTIC Science & Technology

    2006-12-21

    number of paradigm shifts in the 20th century, including the growth of large geographically dispersed teams and the use of simulations and computational...get results. The work in this thesis automates the orchestration of networks with other resources, better utilizing all resources in a time efficient...domains, over transatlantic links in around a minute. The main goal of this thesis is to build a new grid-computing paradigm that fully harnesses the

  5. GANGA: A tool for computational-task management and easy access to Grid resources

    NASA Astrophysics Data System (ADS)

    Mościcki, J. T.; Brochu, F.; Ebke, J.; Egede, U.; Elmsheuser, J.; Harrison, K.; Jones, R. W. L.; Lee, H. C.; Liko, D.; Maier, A.; Muraru, A.; Patrick, G. N.; Pajchel, K.; Reece, W.; Samset, B. H.; Slater, M. W.; Soroko, A.; Tan, C. L.; van der Ster, D. C.; Williams, M.

    2009-11-01

    In this paper, we present the computational task-management tool GANGA, which allows for the specification, submission, bookkeeping and post-processing of computational tasks on a wide set of distributed resources. GANGA has been developed to solve a problem increasingly common in scientific projects, which is that researchers must regularly switch between different processing systems, each with its own command set, to complete their computational tasks. GANGA provides a homogeneous environment for processing data on heterogeneous resources. We give examples from High Energy Physics, demonstrating how an analysis can be developed on a local system and then transparently moved to a Grid system for processing of all available data. GANGA has an API that can be used via an interactive interface, in scripts, or through a GUI. Specific knowledge about types of tasks or computational resources is provided at run-time through a plugin system, making new developments easy to integrate. We give an overview of the GANGA architecture, give examples of current use, and demonstrate how GANGA can be used in many different areas of science. Catalogue identifier: AEEN_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL No. of lines in distributed program, including test data, etc.: 224 590 No. of bytes in distributed program, including test data, etc.: 14 365 315 Distribution format: tar.gz Programming language: Python Computer: personal computers, laptops Operating system: Linux/Unix RAM: 1 MB Classification: 6.2, 6.5 Nature of problem: Management of computational tasks for scientific applications on heterogeneous distributed systems, including local, batch farms, opportunistic clusters and Grids. Solution method: High-level job management interface, including command line, scripting and GUI components. Restrictions: Access to the distributed resources depends on the installed 3rd-party software such as batch system client or Grid user interface.
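
    For readers unfamiliar with the tool, the local-to-Grid workflow described above looks roughly like the following session inside GANGA's interactive interface. The snippet follows GANGA's documented idioms of that era (Job, Executable, Local, LCG); exact plugin names may differ between versions, so treat it as a sketch rather than a version-specific recipe.

```python
# Run inside the ganga interactive shell, where these names are available.
j = Job()                                     # empty job object
j.application = Executable(exe='/bin/echo',   # what to run
                           args=['hello'])
j.backend = Local()                           # develop on the local system
j.submit()

j2 = j.copy()                                 # ...then move it to the Grid
j2.backend = LCG()                            # swap only the backend plugin
j2.submit()

jobs                                          # bookkeeping: list all jobs
```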

  6. A Framework for Control and Observation in Distributed Environments

    NASA Technical Reports Server (NTRS)

    Smith, Warren

    2001-01-01

    As organizations begin to deploy large computational grids, it has become apparent that systems for observation and control of the resources, services, and applications that make up such grids are needed. Administrators must observe the operation of resources and services to ensure that they are operating correctly and they must control the resources and services to ensure that their operation meets the needs of users. Further, users need to observe the performance of their applications so that this performance can be improved and control how their applications execute in a dynamic grid environment. In this paper we describe our software framework for control and observation of resources, services, and applications that supports such uses and we provide examples of how our framework can be used.

  7. A De-centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments

    NASA Technical Reports Server (NTRS)

    Arora, Manish; Das, Sajal K.; Biswas, Rupak

    2002-01-01

    In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper, we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is decentralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.
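
    A toy illustration of the N-resource idea (not the authors' algorithm, and all node data invented): rather than balancing CPU alone, score each candidate node by the worst per-resource utilization a job would cause, and place the job on the node with the lowest score.

```python
# N-resource-aware placement: pick the node whose maximum post-assignment
# utilization across all resources stays lowest.
from typing import Dict, List

def best_node(usage: Dict[str, List[float]],
              capacity: Dict[str, List[float]],
              demand: List[float]) -> str:
    """Return the node name minimizing worst-case resource utilization."""
    def score(name: str) -> float:
        used, cap = usage[name], capacity[name]
        return max((u + d) / c for u, d, c in zip(used, demand, cap))
    return min(usage, key=score)

# resources per node: [cpu cores, memory GB, network Gb/s]
capacity = {"a": [16, 64, 10], "b": [8, 128, 10]}
usage    = {"a": [12, 20,  2], "b": [2,  96,  1]}
print(best_node(usage, capacity, [4, 16, 1]))  # -> "b" ("a" is CPU-bound)
```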

  8. A De-Centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments

    NASA Technical Reports Server (NTRS)

    Arora, Manish; Das, Sajal K.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is de-centralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.

  9. Cyberinfrastructure for End-to-End Environmental Explorations

    NASA Astrophysics Data System (ADS)

    Merwade, V.; Kumar, S.; Song, C.; Zhao, L.; Govindaraju, R.; Niyogi, D.

    2007-12-01

    The design and implementation of a cyberinfrastructure for End-to-End Environmental Exploration (C4E4) is presented. The C4E4 framework addresses the need for an integrated data/computation platform for studying broad environmental impacts by combining heterogeneous data resources with state-of-the-art modeling and visualization tools. With Purdue being a TeraGrid Resource Provider, C4E4 builds on top of the Purdue TeraGrid data management system and Grid resources, and integrates them through a service-oriented workflow system. It allows researchers to construct environmental workflows for data discovery, access, transformation, modeling, and visualization. Using the C4E4 framework, we have implemented an end-to-end SWAT simulation and analysis workflow that connects our TeraGrid data and computation resources. It enables researchers to conduct comprehensive studies on the impact of land management practices in the St. Joseph watershed using data from various sources in hydrologic, atmospheric, agricultural, and other related disciplines.

  10. Sensing, Measurement, and Forecasting | Grid Modernization | NREL

    Science.gov Websites

    NREL's sensing, measurement, and forecasting work turns data into operational intelligence to support grid operations and planning. Grid operations involve assessing the grid's health in real time and predicting its behavior over timescales of minutes to hours and days to support advances in power system operations and planning.

  11. Sampling designs matching species biology produce accurate and affordable abundance indices

    PubMed Central

    Farley, Sean; Russell, Gareth J.; Butler, Matthew J.; Selinger, Jeff

    2013-01-01

    Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption that all individuals have an equal probability of capture; violating this assumption biases results. Does an alternative exist? Perhaps sampling only where resources attract animals (i.e., targeted sampling) would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some fail to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it is imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on global positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams, which provided a scenario for testing the targeted approach. Grid and targeted sampling varied by the number of traps, their location (placed randomly, systematically, or by expert opinion), and whether traps were stationary or moved between capture sessions. We began by identifying when to sample and whether bears had an equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, encounter rates, and the probabilities of capture and recapture. One grid configuration (49 km2 cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture sessions, which raised capture probabilities. The grid design was least biased (−10.5%), but imprecise (CV 21.2%), and used the most effort (16,100 trap-nights). The targeted configuration was more biased (−17.3%), but most precise (CV 12.3%), with the least effort (7,000 trap-nights). Targeted sampling generated encounter rates four times higher, and capture and recapture probabilities 11% and 60% higher, than grid sampling, in a sampling frame 88% smaller. Bears had unequal probability of capture with both sampling designs, partly because some bears never had traps available to sample them. Hence, grid and targeted sampling generated abundance indices, not estimates. Overall, targeted sampling provided the most accurate and affordable design to index abundance. Targeted sampling may offer an alternative method to index the abundance of other species inhabiting expansive and inaccessible landscapes elsewhere, provided they are attracted to resource concentrations. PMID:24392290
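
    The central statistical point, that unequal capture probabilities turn estimates into indices, can be reproduced in a few lines. The following Monte Carlo sketch uses invented parameters and a simple two-session Lincoln-Petersen estimator; it illustrates the bias mechanism only and is not the authors' simulation.

```python
# When some individuals never visit the traps (capture probability 0),
# a two-session Lincoln-Petersen estimate converges on the visitor
# population, not the true abundance -- an index, not an estimate.
import random

random.seed(1)
N = 500                               # true abundance
p_visitor, frac_visitor = 0.4, 0.8    # 20% of bears never reach the traps

def lincoln_petersen() -> float:
    p = [p_visitor if random.random() < frac_visitor else 0.0
         for _ in range(N)]
    s1 = {i for i, pi in enumerate(p) if random.random() < pi}
    s2 = {i for i, pi in enumerate(p) if random.random() < pi}
    return len(s1) * len(s2) / max(1, len(s1 & s2))

estimates = [lincoln_petersen() for _ in range(2000)]
print(f"true N = {N}, mean estimate = {sum(estimates)/len(estimates):.0f}")
# the mean lands near frac_visitor * N = 400: unsampled bears vanish
```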

  12. Network gateway security method for enterprise Grid: a literature review

    NASA Astrophysics Data System (ADS)

    Sujarwo, A.; Tan, J.

    2017-03-01

    The computational Grid has brought large computational resources closer to scientists. It enables people to run large computational jobs anytime and anywhere, without physical borders. However, the large and dispersed population of participants, whether users or computational providers, raises security problems. The challenge is how the security system, especially the component that filters data at the gateway, can adapt flexibly to the currently registered Grid participants. This paper surveys previous approaches to this challenge in order to identify a better, new method for the enterprise Grid. The outcome of the review is a dynamically controlled enterprise firewall that secures Grid resources from unwanted connections using a new firewall control method and components.
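
    A hedged sketch of the "dynamically controlled" idea the review converges on: the gateway's allow-rules are regenerated from the current list of registered Grid participants. The addresses and the service port below are invented placeholders, and rule generation stands in for whatever control interface a real gateway exposes.

```python
# Regenerate gateway allow-rules from the registered-participant list.
registered_participants = ["192.0.2.10", "192.0.2.22", "198.51.100.7"]
GRID_PORT = 2811   # e.g., a GridFTP-style service port (illustrative)

def build_rules(participants):
    """Allow registered hosts on the grid port, drop everything else."""
    rules = [f"iptables -A INPUT -p tcp --dport {GRID_PORT} "
             f"-s {ip} -j ACCEPT" for ip in participants]
    rules.append(f"iptables -A INPUT -p tcp --dport {GRID_PORT} -j DROP")
    return rules

print("\n".join(build_rules(registered_participants)))
```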

  13. Earth Science Data Grid System

    NASA Astrophysics Data System (ADS)

    Chi, Y.; Yang, R.; Kafatos, M.

    2004-12-01

    The Earth Science Data Grid System (ESDGS) is a software system supporting Earth science data storage and access. It is built upon the Storage Resource Broker (SRB) data grid technology. We have developed a complete data grid system consisting of an SRB server, providing users uniform access to diverse storage resources in a heterogeneous computing environment, and a metadata catalog server (MCAT), managing the metadata associated with data sets, users, and resources. We are also developing additional services for 1) metadata management, 2) geospatial, temporal, and content-based indexing, and 3) near/on-site data processing, in response to the unique needs of Earth science applications. In this paper, we describe the software architecture and components of the system, and use a practical example, supporting storage and access of rainfall data from the Tropical Rainfall Measuring Mission (TRMM), to illustrate its functionality and features.

  14. Impact of the 2017 Solar Eclipse on the Smart Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron M; Reda, Ibrahim M; Andreas, Afshin M

    With the increasing interest in using solar energy as a major contributor to renewable generation, and with the focus on using smart grids to optimize the use of electrical energy based on demand and resources from different locations, the need arises to know the moon's position in the sky with respect to the sun. When a solar eclipse occurs, the moon's disk might totally or partially shade the sun's disk, which can affect the irradiance level from the sun and consequently affect a resource on the electric grid. The moon's position can then provide smart grid users with information about how potential total or partial solar eclipses might affect different locations on the grid, so that other resources on the grid can be directed to where they might be needed when such phenomena occur. At least five solar eclipses occur yearly at different locations on Earth; they can last 3 hours or more depending on the location, and they can affect smart grid users. On August 21, 2017, partial and total solar eclipses occurred at many locations in the United States, including the National Renewable Energy Laboratory in Golden, Colorado. Solar irradiance measurements during the eclipse were compared to the data generated by a model for validation at eight locations.

  15. Design & implementation of distributed spatial computing node based on WPS

    NASA Astrophysics Data System (ADS)

    Liu, Liping; Li, Guoqing; Xie, Jibo

    2014-03-01

    Currently, research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in grid environments, while the importance of spatial computing resources is ignored. In order to implement the sharing and cooperation of spatial computing resources in a grid environment, this paper systematically examines the key technologies for constructing a Spatial Computing Node based on the WPS (Web Processing Service) specification by the OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype Spatial Computing Node is implemented and the relevant verification work in this environment is completed.
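
    For orientation, a WPS-based computing node is driven through the standard OGC WPS 1.0.0 operations. The sketch below is hypothetical: the endpoint URL and the process identifier are invented, and complex inputs would normally be sent in a POST body rather than as key-value pairs.

```python
# Calling a hypothetical WPS-based Spatial Computing Node.
import requests

WPS = "http://example.org/wps"  # hypothetical node endpoint

# 1. Discover what the node offers.
caps = requests.get(WPS, params={
    "service": "WPS", "version": "1.0.0", "request": "GetCapabilities"})

# 2. Inspect one process's inputs and outputs.
desc = requests.get(WPS, params={
    "service": "WPS", "version": "1.0.0",
    "request": "DescribeProcess", "identifier": "buffer"})

# 3. Run it (simple KVP Execute with inline inputs).
result = requests.get(WPS, params={
    "service": "WPS", "version": "1.0.0", "request": "Execute",
    "identifier": "buffer",
    "DataInputs": "geometry=POINT(30 10);distance=100"})
print(result.status_code)
```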

  16. Digital aeromagnetic anomaly data from eastern-most Guyana

    USGS Publications Warehouse

    Pierce, Herbert A.; Backjinski, Natalka; Manes, John-James

    1995-01-01

    The Center for Inter-American Mineral Resource Investigations (CIMRI) supported distribution and analysis of geoscientific and mineral resource related information concerning Latin America. CIMRI staff digitized aeromagnetic data for eastern-most Guyana as part of a preliminary regional assessment of minerals in the Guyana Shield, South America. The data were digitized from 145 aeromagnetic contour maps at a scale of 1:50,000 and merged into a single digital data set. The data were used to examine the Precambrian shield, greenstone belts, and other tectonic boundaries as well as explore ideas concerning mineral deposits within the area. A subset of these digital data were presented to the Guyanese government during early 1995 (Pierce, 1994). This Open-File report, consisting of this text and seven (7) 3.5" IBM-PC compatible ASCII magnetic disks, makes the digital data available to the public. Information regarding the source of data and subsequent processing is included below. The data were collected in Guyana by two contractors at different times. The first data were collected from 1962 to 1963; these are several aeromagnetic surveys covering parts of 12 quadrangles, funded by the United Nations and flown by Aero Service Corporation. The second and more extensive data set was collected from 1971 to 1972 under the Canadian International Development Agency, flown by Terra Surveys Ltd. under a contract with the Geological Survey of Guyana. The Guyana Government published the data as contour maps that are available in Georgetown through the Guyana Government. Coverage extends from about 2°45'N to 8°30'N latitude and from 60°0'W to 57°0'W longitude (see Figure 1). The contour maps were digitized at points where the magnetic contours intersect the flight lines. The data files include XYZ ASCII files, XYZ binary files, ASCII grids, and binary "standard USGS" grids. There are four grids consisting of the following data types: an unprojected raw data grid; an unprojected residual grid (International Geomagnetic Reference Field (IGRF) removed); a UTM projected residual (IGRF removed) grid; and a UTM projected residual grid with a second order surface removed. These data files were transferred to 3.5" 1.44 megabyte floppy disks readable on IBM-compatible personal computers. These data are also available from the Department of Commerce National Geophysical Data Center.
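
    The "second order surface removed" product corresponds to a standard trend-surface correction. A minimal sketch of that step follows, on synthetic data rather than the Guyana grids: fit a quadratic surface by least squares and subtract it, leaving the residual anomaly.

```python
# Fit z = c0 + c1*x + c2*y + c3*x**2 + c4*x*y + c5*y**2 by least squares
# and subtract it, leaving the residual anomaly values.
import numpy as np

def remove_second_order_surface(x, y, z):
    """Return residuals of z after removing a best-fit quadratic surface."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return z - A @ coeffs

# synthetic demo: regional gradient plus noise
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 500), rng.uniform(0, 100, 500)
z = 50 + 0.3 * x - 0.1 * y + 0.002 * x**2 + rng.normal(0, 1, 500)
print(remove_second_order_surface(x, y, z).std())  # ~1: trend removed
```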

  17. Optimal Wind Energy Integration in Large-Scale Electric Grids

    NASA Astrophysics Data System (ADS)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates challenges that affect electric grid reliability and economic operation: 1. congestion of transmission lines, 2. transmission line expansion, 3. large-scale wind energy integration, and 4. optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, expansion of transmission line capacity must be evaluated with methods that ensure optimal electric grid operation: the expansion must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject, and next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions are "where" to add capacity, "how much" to add, and "at which voltage level". Because of electric grid deregulation, transmission expansion is more complicated: it is now open to investors whose main interest in building new transmission lines is to generate revenue. Adding new transmission capacity helps relieve congestion, creates profit for investors who rent out their transmission capacity, and lowers electricity costs for end users. We propose a hybrid heuristic-deterministic method to determine new transmission line additions and increase transmission capacity. Renewable energy resources (RES) have zero operating cost, which makes them very attractive for generation companies and market participants. In addition, RES have zero carbon emissions, which helps relieve concerns about the environmental impacts of electric generation. RES include wind, solar, hydro, biomass, and geothermal. By 2030, the expectation is that more than 30% of electricity in the U.S. will come from RES. One major contributor to RES generation will be wind energy resources (WES), which will be an important component of the future generation portfolio. However, WES are by nature highly intermittent and volatile. Because of the expectation of high WES penetration and the nature of such resources, researchers are studying the effects of these resources on electric grid operation and adequacy from different aspects. Additionally, current market operations of electric grids add another complication to integrating RES, specifically WES. Mandates by market rules and long-term analysis of renewable penetration in large-scale electric grids have also been the focus of researchers in recent years.
    We advocate a method for studying high wind-resource penetration in large-scale electric grid operations. A PMU is a Global Positioning System (GPS)-based device that provides immediate and precise measurements of the voltage angle in a high-voltage transmission system. PMUs can update the status of a transmission line and related measurements (e.g., voltage magnitude and voltage phase angle) more frequently: every second, a PMU can provide 30 samples of measurements, compared to traditional systems (e.g., the supervisory control and data acquisition [SCADA] system), which provide one sample every 2 to 5 seconds. Because PMUs provide more measurement samples, they can improve electric grid reliability and observability. (Abstract shortened by UMI.)
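
    As one concrete illustration of the observability theme (a textbook heuristic, not the dissertation's method), PMU placement is often cast as a covering problem: a PMU at a bus observes that bus and its neighbors, and buses are chosen greedily until the whole network is observable. The six-bus network below is invented.

```python
# Greedy covering heuristic for PMU placement on an invented 6-bus grid.
network = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 5},
           4: {2, 5}, 5: {3, 4, 6}, 6: {5}}

def greedy_pmu_placement(adj):
    """Place PMUs until every bus is observed by at least one PMU."""
    unobserved, placed = set(adj), []
    while unobserved:
        # pick the bus whose PMU would newly observe the most buses
        bus = max(adj, key=lambda b: len(({b} | adj[b]) & unobserved))
        placed.append(bus)
        unobserved -= {bus} | adj[bus]
    return placed

print(greedy_pmu_placement(network))  # -> [2, 5] for this toy network
```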

  18. Spatiotemporal Variability of Drought in Pakistan through High-Resolution Daily Gridded In-Situ Observations

    NASA Astrophysics Data System (ADS)

    Bashir, F.; Zeng, X.; Gupta, H. V.; Hazenberg, P.

    2017-12-01

    Drought as an extreme event may have far-reaching socio-economic impacts on agriculture-based economies like Pakistan. Effective assessment of drought requires high-resolution, spatiotemporally continuous hydrometeorological information. For this purpose, new gridded analyses of in-situ daily observations of precipitation, maximum, minimum, and mean temperature, and diurnal temperature range are developed, covering the whole of Pakistan on a 0.01° latitude-longitude grid for a 54-year period (1960-2013). The number of participating meteorological observatories used in these gridded analyses is 2 to 6 times greater than in any other similar product available. This data set is used to identify extreme wet and dry periods and their spatial patterns across Pakistan using the Palmer Drought Severity Index (PDSI) and the Standardized Precipitation Index (SPI). The periodicity of extreme events is estimated at seasonal to decadal scales. Spatiotemporal signatures of drought incidence, indicating its extent and longevity in different areas, may help water resource managers and policy makers mitigate the severity of drought and its impact on food security through suitable adaptive techniques. Moreover, these high-resolution gridded in-situ observations of precipitation and temperature are used to evaluate other coarser-resolution gridded products.
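
    For readers unfamiliar with the SPI, its core computation is a distribution fit followed by a quantile transform. A minimal sketch follows, using synthetic rainfall; an operational SPI additionally handles zero-precipitation cases and fits each calendar month and accumulation window separately.

```python
# Standardized Precipitation Index, minimal form: fit a gamma
# distribution to an accumulation series and map its CDF onto a standard
# normal, so roughly -1 is moderately dry and +1 moderately wet.
import numpy as np
from scipy import stats

def spi(precip: np.ndarray) -> np.ndarray:
    """SPI values for one precipitation accumulation series."""
    shape, loc, scale = stats.gamma.fit(precip, floc=0)
    cdf = stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)

rng = np.random.default_rng(0)
monthly = rng.gamma(shape=2.0, scale=30.0, size=600)   # synthetic rainfall
print(spi(monthly)[:5].round(2))
```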

  19. Energy Systems Integration News - September 2016 | Energy Systems

    Science.gov Websites

    Newsletter highlights include a demonstration by Smarter Grid Solutions of a new distributed energy resources (DER) software control platform, utility interconnection requirements for distributed generation (DG) devices to disconnect from the grid, and OpenFMB distributed applications running on the microgrid test site to locally optimize renewable energy resources.

  20. Landlab: an Open-Source Python Library for Modeling Earth Surface Dynamics

    NASA Astrophysics Data System (ADS)

    Gasparini, N. M.; Adams, J. M.; Hobley, D. E. J.; Hutton, E.; Nudurupati, S. S.; Istanbulluoglu, E.; Tucker, G. E.

    2016-12-01

    Landlab is an open-source Python modeling library that enables users to easily build unique models to explore earth surface dynamics. The Landlab library provides a number of tools and functionalities that are common to many earth surface models, thus eliminating the need for a user to recode fundamental model elements each time she explores a new problem. For example, Landlab provides a gridding engine so that a user can build a uniform or nonuniform grid in one line of code. The library has tools for setting boundary conditions, adding data to a grid, and performing basic operations on the data, such as calculating gradients and curvature. The library also includes a number of process components, which are numerical implementations of physical processes. To create a model, a user creates a grid and couples together process components that act on grid variables. The current library has components for modeling a diverse range of processes, from overland flow generation to bedrock river incision, from soil wetting and drying to vegetation growth, succession and death. The code is freely available for download (https://github.com/landlab/landlab) or can be installed as a Python package. Landlab models can also be built and run on Hydroshare (www.hydroshare.org), an online collaborative environment for sharing hydrologic data, models, and code. Tutorials illustrating a wide range of Landlab capabilities such as building a grid, setting boundary conditions, reading in data, plotting, using components and building models are also available (https://github.com/landlab/tutorials). The code is also comprehensively documented both online and natively in Python. In this presentation, we illustrate the diverse capabilities of Landlab. We highlight existing functionality by illustrating outcomes from a range of models built with Landlab - including applications that explore landscape evolution and ecohydrology. Finally, we describe the range of resources available for new users.
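
    A flavor of the one-line gridding and component coupling described above, written against Landlab's documented idioms (exact signatures can vary between releases, so treat this as a sketch):

```python
# Build a grid, attach data, compute gradients, and run a process
# component -- the basic Landlab workflow.
import numpy as np
from landlab import RasterModelGrid
from landlab.components import LinearDiffuser

grid = RasterModelGrid((25, 40), xy_spacing=10.0)   # uniform grid, one line
z = grid.add_zeros("topographic__elevation", at="node")
z += np.random.rand(grid.number_of_nodes)           # attach data to the grid

slopes = grid.calc_grad_at_link("topographic__elevation")  # basic operation

# couple a process component to the grid and step it forward in time
diffuser = LinearDiffuser(grid, linear_diffusivity=0.01)
for _ in range(100):
    diffuser.run_one_step(dt=1000.0)
print(z.mean())
```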

  1. Network Coding Opportunities for Wireless Grids Formed by Mobile Devices

    NASA Astrophysics Data System (ADS)

    Nielsen, Karsten Fyhn; Madsen, Tatiana K.; Fitzek, Frank H. P.

    Wireless grids have potential in sharing communication, computational and storage resources, making these networks more powerful, more robust, and less cost intensive. However, to enjoy the benefits of cooperative resource sharing, a number of issues should be addressed, and the cost of the wireless link should be taken into account. We focus on the question of how nodes can efficiently communicate and distribute data in a wireless grid. We show the potential of a network coding approach, in which nodes have the possibility to combine packets, thus increasing the amount of information per transmission. Our implementation demonstrates the feasibility of network coding for wireless grids formed by mobile devices.
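
    The packet-combining gain referred to above can be shown with the classic two-receiver relay example: XORing two packets lets one broadcast serve two receivers that each already hold one of the originals. A minimal sketch:

```python
# A relay holding packets A and B broadcasts A XOR B once; a neighbor
# that already has A recovers B (and vice versa), so one transmission
# serves two receivers.
def xor(p: bytes, q: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(p, q))

packet_a = b"sensor reading #1"
packet_b = b"sensor reading #2"

coded = xor(packet_a, packet_b)           # single broadcast from the relay
assert xor(coded, packet_a) == packet_b   # node holding A decodes B
assert xor(coded, packet_b) == packet_a   # node holding B decodes A
```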

  2. Modularized Parallel Neutron Instrument Simulation on the TeraGrid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Meili; Cobb, John W; Hagen, Mark E

    2007-01-01

    In order to build a bridge between the TeraGrid (TG), a national scale cyberinfrastructure resource, and neutron science, the Neutron Science TeraGrid Gateway (NSTG) is focused on introducing productive HPC usage to the neutron science community, primarily the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL). Monte Carlo simulations are used as a powerful tool for instrument design and optimization at SNS. One of the successful efforts of a collaboration team composed of NSTG HPC experts and SNS instrument scientists is the development of a software facility named PSoNI, Parallelizing Simulations of Neutron Instruments. Parallelizing the traditional serial instrument simulation on TeraGrid resources, PSoNI quickly computes full instrument simulations at sufficient statistical levels for instrument design. Following the successful commissioning of SNS, three of the five commissioned instruments in the SNS target station will be available to initial users by the end of 2007. Advanced instrument study, proposal feasibility evaluation, and experiment planning are on the immediate schedule of SNS, which poses further requirements, such as flexibility and high runtime efficiency, on fast instrument simulation. PSoNI has been redesigned to meet the new challenges, and a preliminary version has been developed on TeraGrid. This paper explores the motivation and goals of the new design and the improved software structure. Further, it describes the realized new features, as seen from MPI-parallelized McStas running high-resolution design simulations of the SEQUOIA and BSS instruments at SNS. A discussion of future work, which targets fast simulation for automated experiment adjustment and comparing models to data in analysis, is also presented.
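
    The parallelization pattern described, independent Monte Carlo rays distributed across processors with results reduced at the end, looks roughly like the following mpi4py sketch (a stand-in ray trace, not PSoNI or McStas itself):

```python
# Each MPI rank traces an independent share of the neutron rays; detector
# counts are reduced to rank 0. Run with: mpiexec -n 4 python mc_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_RAYS = 1_000_000
rng = np.random.default_rng(seed=rank)          # independent streams

# stand-in for a real ray trace: score rays into a 1-D detector
hits = rng.normal(loc=0.0, scale=1.0, size=N_RAYS // size)
local_counts, _ = np.histogram(hits, bins=100, range=(-4, 4))

counts = np.zeros_like(local_counts)
comm.Reduce(local_counts, counts, op=MPI.SUM, root=0)
if rank == 0:
    print("total rays scored:", counts.sum())
```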

  3. Towards a Global Service Registry for the World-Wide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Field, Laurence; Alandes Pradillo, Maria; Di Girolamo, Alessandro

    2014-06-01

    The World-Wide LHC Computing Grid encompasses a set of heterogeneous information systems: from central portals such as the Open Science Grid's Information Management System and the Grid Operations Centre Database, to the WLCG information system, where the information sources are the Grid services themselves. Providing a consistent view of the information, which involves synchronising all these information systems, is a challenging activity that has led the LHC virtual organisations to create their own configuration databases. This experience, whereby each virtual organisation's configuration database interfaces with multiple information systems, has resulted in duplication of effort, especially relating to the use of manual checks for the handling of inconsistencies. The Global Service Registry aims to address this issue by providing a centralised service that aggregates information from multiple information systems. It shows both information on registered resources (i.e. what should be there) and on available resources (i.e. what is there). The main purpose is to simplify the synchronisation of the virtual organisations' own configuration databases, which are used for job submission and data management, through the provision of a single interface for obtaining all the information. By centralising the information, automated consistency and validation checks can be performed to improve the overall quality of the information provided. Although the GLUE 2.0 information model is used internally for the purpose of integration, the Global Service Registry is not dependent on any particular information model for ingestion or dissemination. The intention is to allow the virtual organisations' configuration databases to be decoupled from the underlying information systems in a transparent way and hence simplify any possible future migration due to the evolution of those systems. This paper presents the Global Service Registry architecture, its advantages compared to the current situation, and how it can support the evolution of information systems.
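
    The registry's consistency checking reduces, at its core, to comparing the registered view against the published view. A schematic sketch with invented service names:

```python
# Registered resources: what the configuration databases say should exist.
# Available resources: what the information systems actually publish.
registered = {"CE01.example.org", "SE02.example.org", "CE03.example.org"}
available  = {"CE01.example.org", "CE03.example.org", "SE09.example.org"}

missing    = registered - available   # should be there, but is not
unexpected = available - registered   # is there, but nobody registered it

print("missing:", sorted(missing))
print("unexpected:", sorted(unexpected))
```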

  4. A Security Architecture for Grid-enabling OGC Web Services

    NASA Astrophysics Data System (ADS)

    Angelini, Valerio; Petronzio, Luca

    2010-05-01

    In the proposed presentation we describe an architectural solution for enabling secure access to Grids, and possibly other large-scale on-demand processing infrastructures, through OGC (Open Geospatial Consortium) Web Services (OWS). This work has been carried out in the context of the security thread of the G-OWS Working Group. G-OWS (gLite enablement of OGC Web Services) is an international open initiative started in 2008 by the European CYCLOPS, GENESI-DR, and DORII Project Consortia in order to collect and coordinate experiences in the enablement of OWS's on top of the gLite Grid middleware. G-OWS investigates the development of Spatial Data and Information Infrastructures (SDI and SII) based on Grid/Cloud capacity in order to enable Earth Science applications and tools. Concerning security issues, the integration of OWS-compliant infrastructures and gLite Grids must address relevant challenges, due to their respective design principles. OWS's are part of a Web-based architecture that delegates security aspects to other specifications, whereas the gLite middleware implements the Grid paradigm with a strong security model (the gLite Grid Security Infrastructure: GSI). In our work we propose a Security Architectural Framework allowing the seamless use of Grid-enabled OGC Web Services through the federation of existing security systems (mostly Web based) with the gLite GSI. This is made possible by mediating between different security realms, whose mutual trust is established in advance during the deployment of the system itself. Our architecture is composed of three different security tiers: the user's security system, a specific G-OWS security system, and the gLite Grid Security Infrastructure. Applying the separation-of-concerns principle, each of these tiers is responsible for controlling access to a well-defined resource set, respectively: the user's organization resources, the geospatial resources and services, and the Grid resources. While the gLite middleware is tied to a consolidated security approach based on X.509 certificates, our system is able to support different kinds of user security infrastructures. Our central component, the G-OWS Security Framework, is based on the OASIS WS-Trust specifications and on the OGC GeoRM architectural framework. This makes it possible to satisfy advanced requirements such as the enforcement of specific geospatial policies and complex secure web service chained requests. The typical use case is a scientist belonging to a given organization who issues a request to a G-OWS Grid-enabled Web Service. The system initially asks the user to authenticate to his/her organization's security system and, after verifying the user's security credentials, translates the user's digital identity into a G-OWS identity. This identity is linked to a set of attributes describing the user's access rights to the G-OWS services and resources. Inside the G-OWS security system, access restrictions are applied making use of the enhanced geospatial capabilities specified by OGC GeoXACML. If the required action needs to make use of the Grid environment, the system checks whether the user is entitled to access a Grid infrastructure. In that case, his/her identity is translated into a temporary Grid security token using the Short Lived Credential Services (IGTF standard).
    In our case, for the specific gLite Grid infrastructure, some information (VOMS attributes) is plugged into the Grid security token to grant access to the user's Virtual Organization Grid resources. The resulting token is used to submit the request to the Grid and is also used by the various gLite middleware elements to verify the user's grants. Based on the presented framework, the G-OWS Security Working Group developed a prototype enabling the execution of OGC Web Services on the EGEE Production Grid through federation with a Shibboleth-based security infrastructure. Future plans aim to integrate other Web authentication services such as OpenID, Kerberos and WS-Federation.

  5. Uniformity on the grid via a configuration framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Igor V Terekhov et al.

    2003-03-11

    As the Grid permeates modern computing, Grid solutions continue to emerge and take shape. Grid development projects continue to provide higher-level services that evolve in functionality and operate with application-level concepts that are often specific to the virtual organizations that use them. Physically, however, grids are composed of sites whose resources are diverse and seldom project readily onto a grid's set of concepts. In practice, this also creates problems for site administrators who actually instantiate grid services. In this paper, we present a flexible, uniform framework to configure a grid site and its facilities, and otherwise describe the resources and services it offers. We start from a site configuration and instantiate services for resource advertisement, monitoring and data handling; we also apply our framework to hosting environment creation. We use our ideas in the Information Management part of the SAM-Grid project, a grid system which will deliver petabyte-scale data to hundreds of users. Our users are High Energy Physics experimenters who are scattered worldwide across dozens of institutions and always use facilities that are shared with other experiments as well as other grids. Our implementation represents information in the XML format and includes tools written in XQuery and XSLT.
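
    To make the idea concrete, here is a hypothetical miniature of such an XML site description and the kind of processing that instantiates services from it; the real SAM-Grid schema is not reproduced here.

```python
# One XML site description drives the instantiation of several services.
import xml.etree.ElementTree as ET

SITE_XML = """
<site name="example-site">
  <resource type="compute" name="farm1" cpus="256"/>
  <resource type="storage" name="cache1" size_tb="40"/>
  <service type="advertisement" endpoint="http://example.org/ad"/>
  <service type="monitoring" endpoint="http://example.org/mon"/>
</site>
"""

site = ET.fromstring(SITE_XML)
for svc in site.findall("service"):
    # in a real system, a service instance would be configured here
    print(f"instantiating {svc.get('type')} service at {svc.get('endpoint')}")
for res in site.findall("resource"):
    print(f"advertising {res.get('type')} resource {res.get('name')}")
```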

  6. High Quality Data for Grid Integration Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clifton, Andrew; Draxl, Caroline; Sengupta, Manajit

    As variable renewable power penetration levels increase in power systems worldwide, renewable integration studies are crucial to ensure continued economic and reliable operation of the power grid. The existing electric grid infrastructure in the US in particular poses significant limitations on wind power expansion. In this presentation we will shed light on requirements for grid integration studies as far as wind and solar energy are concerned. Because wind and solar plants are strongly impacted by weather, high-resolution and high-quality weather data are required to drive power system simulations. Future data sets will have to push the limits of numerical weather prediction to yield these high-resolution data sets, and wind data will have to be time-synchronized with solar data. Current wind and solar integration data sets are presented. The Wind Integration National Dataset (WIND) Toolkit is the largest and most complete grid integration data set publicly available to date. A meteorological data set, wind power production time series, and simulated forecasts created using the Weather Research and Forecasting Model run on a 2-km grid over the continental United States at a 5-min resolution is now publicly available for more than 126,000 land-based and offshore wind power production sites. The National Solar Radiation Database (NSRDB) is a similar high temporal- and spatial-resolution database of 18 years of solar resource data for North America and India. The need for high-resolution weather data pushes modeling towards finer scales and closer synchronization. We also present how we anticipate such datasets developing in the future, their benefits, and the challenges with using and disseminating such large amounts of data.

  7. The event notification and alarm system for the Open Science Grid operations center

    NASA Astrophysics Data System (ADS)

    Hayashi, S.; Teige, S.; Quick, R.

    2012-12-01

    The Open Science Grid (OSG) Operations Team operates a distributed set of services and tools that enable the utilization of the OSG by several HEP projects. Without these services, users of the OSG would not be able to run jobs, locate resources, obtain information about the status of systems, or generally use the OSG. For this reason these services must be highly available. This paper describes the automated monitoring and notification systems used to diagnose and report problems. Described here are the means used by OSG Operations to monitor systems such as physical facilities, network operations, server health, service availability and software error events. Once detected, an error condition generates a message sent to, for example, Email, SMS, Twitter, or an Instant Message Server. The mechanism being developed to integrate these monitoring systems into a prioritized and configurable alarming system is emphasized.

  8. On the Estimation of Errors in Sparse Bathymetric Geophysical Data Sets

    NASA Astrophysics Data System (ADS)

    Jakobsson, M.; Calder, B.; Mayer, L.; Armstrong, A.

    2001-05-01

    There is a growing demand in the geophysical community for better regional representations of the world ocean's bathymetry. However, given the vastness of the oceans and the relative limited coverage of even the most modern mapping systems, it is likely that many of the older data sets will remain part of our cumulative database for several more decades. Therefore, regional bathymetrical compilations that are based on a mixture of historic and contemporary data sets will have to remain the standard. This raises the problem of assembling bathymetric compilations and utilizing data sets not only with a heterogeneous cover but also with a wide range of accuracies. In combining these data to regularly spaced grids of bathymetric values, which the majority of numerical procedures in earth sciences require, we are often forced to use a complex interpolation scheme due to the sparseness and irregularity of the input data points. Consequently, we are faced with the difficult task of assessing the confidence that we can assign to the final grid product, a task that is not usually addressed in most bathymetric compilations. We approach the problem of assessing the confidence via a direct-simulation Monte Carlo method. We start with a small subset of data from the International Bathymetric Chart of the Arctic Ocean (IBCAO) grid model [Jakobsson et al., 2000]. This grid is compiled from a mixture of data sources ranging from single beam soundings with available metadata to spot soundings with no available metadata, to digitized contours; the test dataset shows examples of all of these types. From this database, we assign a priori error variances based on available meta-data, and when this is not available, based on a worst-case scenario in an essentially heuristic manner. We then generate a number of synthetic datasets by randomly perturbing the base data using normally distributed random variates, scaled according to the predicted error model. These datasets are then re-gridded using the same methodology as the original product, generating a set of plausible grid models of the regional bathymetry that we can use for standard error estimates. Finally, we repeat the entire random estimation process and analyze each run's standard error grids in order to examine sampling bias and variance in the predictions. The final products of the estimation are a collection of standard error grids, which we combine with the source data density in order to create a grid that contains information about the bathymetry model's reliability. Jakobsson, M., Cherkis, N., Woodward, J., Coakley, B., and Macnab, R., 2000, A new grid of Arctic bathymetry: A significant resource for scientists and mapmakers, EOS Transactions, American Geophysical Union, v. 81, no. 9, p. 89, 93, 96.
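
    The direct-simulation procedure is compact enough to sketch end to end. The code below uses synthetic soundings, a deliberately trivial nearest-cell gridder, and an invented two-tier a priori error model; the real compilation uses a far more sophisticated interpolator.

```python
# Perturb soundings by their a priori error model, re-grid each
# realization, and take the per-cell standard deviation across
# realizations as a standard-error grid.
import numpy as np

rng = np.random.default_rng(0)

def grid_mean(x, y, z, nx=20, ny=20):
    """Average soundings into a regular nx-by-ny grid (unit square)."""
    gsum = np.zeros((ny, nx)); gcnt = np.zeros((ny, nx))
    ix = np.clip((x * nx).astype(int), 0, nx - 1)
    iy = np.clip((y * ny).astype(int), 0, ny - 1)
    np.add.at(gsum, (iy, ix), z); np.add.at(gcnt, (iy, ix), 1)
    return gsum / np.maximum(gcnt, 1)

# synthetic soundings with per-point a priori sigma (metadata-dependent)
x, y = rng.random(5000), rng.random(5000)
depth = 3000 + 500 * np.sin(6 * x) * np.cos(4 * y)
sigma = rng.choice([5.0, 50.0], size=depth.size)   # modern vs historic data

realizations = np.stack([
    grid_mean(x, y, depth + rng.normal(0, sigma)) for _ in range(200)])
std_error_grid = realizations.std(axis=0)
print(std_error_grid.mean())
```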

  9. NWTC's Grid Capabilities Providing Value for Partners | News | NREL

    Science.gov Websites

    News highlights: the NWTC provides controlled grid conditions in which researchers can study interactions of grid impacts and resource variability on the system at the same time, a capability described as an unrivaled asset; related work began on a smaller 2.5-megawatt dynamometer as partners realized that grid impacts also need to be considered.

  10. NASA Astrophysics Data System (ADS)

    Knosp, B.; Neely, S.; Zimdars, P.; Mills, B.; Vance, N.

    2007-12-01

    The Microwave Limb Sounder (MLS) Science Computing Facility (SCF) stores over 50 terabytes of data, has over 240 computer processing hosts, and 64 users from around the world. These resources are spread over three primary geographical locations - the Jet Propulsion Laboratory (JPL), Raytheon RIS, and New Mexico Institute of Mining and Technology (NMT). A need for a grid network system was identified and defined to solve the problem of users competing for finite, and increasingly scarce, MLS SCF computing resources. Using Sun's Grid Engine software, a grid network was successfully created in a development environment that connected the JPL and Raytheon sites, established master and slave hosts, and demonstrated that transfer queues for jobs can work among multiple clusters in the same grid network. This poster will first describe MLS SCF resources and the lessons that were learned in the design and development phase of this project. It will then go on to discuss the test environment and plans for deployment by highlighting benchmarks and user experiences.

  11. Using Electric Vehicles to Meet Balancing Requirements Associated with Wind Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuffner, Francis K.; Kintner-Meyer, Michael CW

    2011-07-31

    Many states are deploying renewable generation sources at a significant rate to meet renewable portfolio standards. As part of this drive to meet renewable generation levels, significant additions of wind generation are planned. Due to the highly variable nature of wind generation, significant energy imbalances on the power system can be created and need to be handled. This report examines the impact on the Northwest Power Pool (NWPP) region for a 2019 expected wind scenario. One method for mitigating these imbalances is to utilize plug-in hybrid electric vehicles (PHEVs) or battery electric vehicles (BEVs) as assets to the grid. PHEVs and BEVs have the potential to meet this demand through both charging and discharging strategies. This report explores the usage of two different charging schemes: V2GHalf and V2GFull. In V2GHalf, PHEV/BEV charging is varied to absorb the additional imbalance from the wind generation, but never feeds power back into the grid. This scenario is highly desirable to automotive manufacturers, who harbor great concerns about battery warranty if vehicle-to-grid discharging is allowed. The second strategy, V2GFull, varies not only the charging of the vehicle battery, but also can vary the discharging of the battery back into the power grid. This scenario is currently less desirable to automotive manufacturers, but provides an additional resource benefit to PHEV/BEVs in meeting the additional imbalance imposed by wind. Key findings in the report relate to the PHEV/BEV population required to meet the additional imbalance when comparing V2GHalf to V2GFull populations, and when comparing home-only-charging and work-and-home-charging scenarios. Utilizing V2GFull strategies over V2GHalf resulted in a nearly 33% reduction in the number of vehicles required. This reduction indicates fewer vehicles are needed to meet the unhandled energy, but they would utilize discharging of the vehicle battery into the grid. This practice currently results in the voiding of automotive manufacturers' battery warranties, and is not feasible for many customers. The second key finding is the change in the required population when PHEV/BEV charging is available at both home and work. Allowing 10% of the vehicle population access to work charging resulted in nearly 80% of the grid benefit. Home-only charging requires, at best, 94% of the current NWPP light duty vehicle fleet to be a PHEV or BEV. With the introduction of full work charging availability, only 8% of the NWPP light duty vehicle fleet is required. Work charging has primarily been associated with mitigating range anxiety in new electric vehicle owners, but these studies indicate it has significant potential for improving grid reliability. The V2GHalf and V2GFull charging strategies of the report utilize grid frequency as an indication of the imbalance requirements. The introduction of public charging stations, as well as the potential for PHEV/BEVs to be used as a resource for renewable generation integration, creates conditions for additional products in the ancillary services market. In the United Kingdom, such a capability would be bid as a frequency product in the ancillary services market. Such a market could create the need for larger, third-party aggregators or services to manage the use of electric vehicles as a grid resource. Ultimately, customer adoption, usage patterns and habits, and feedback from the power and automotive industries will drive the need.
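
    The frequency-following behavior of the two strategies can be sketched in a few lines (parameters invented; a real implementation would add state-of-charge limits, deadbands, and driver constraints):

```python
# Charging power follows grid frequency deviation; V2GHalf clamps at zero
# (never discharges), while V2GFull may also feed power back to the grid.
P_MAX_KW = 6.6      # charger rating, illustrative
GAIN = 40.0         # kW per Hz of deviation, illustrative

def charge_setpoint_kw(freq_hz: float, mode: str) -> float:
    base = P_MAX_KW / 2                      # nominal half-rate charging
    p = base + GAIN * (freq_hz - 60.0)       # high frequency -> absorb more
    lower = 0.0 if mode == "V2GHalf" else -P_MAX_KW
    return max(lower, min(P_MAX_KW, p))

for f in (59.90, 60.00, 60.10):
    print(f, charge_setpoint_kw(f, "V2GHalf"), charge_setpoint_kw(f, "V2GFull"))
# at 59.90 Hz: V2GHalf backs off to 0 kW; V2GFull discharges at -0.7 kW
```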

  12. Biomass energy inventory and mapping system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kasile, J.D.

    1993-12-31

    A four-stage biomass energy inventory and mapping system was conducted for the entire State of Ohio. The products are a set of maps and an inventory of the State's energy biomass resource on a one-kilometer grid square basis in the Universal Transverse Mercator (UTM) system. Each square kilometer is identified and mapped showing total British Thermal Unit (BTU) energy availability. Land cover percentages and BTU values are provided for each of nine biomass strata types for each one-kilometer grid square. LANDSAT satellite data was used as the primary stratifier. The second-stage sampling was the photointerpretation of randomly selected one-kilometer grid squares that exactly corresponded to the LANDSAT one-kilometer grid square classification orientation. Field sampling comprised the third stage of the energy biomass inventory system and was combined with the fourth-stage sample of laboratory biomass energy analysis using a bomb calorimeter; the results were used to assign BTU values to the photointerpretation and to adjust the LANDSAT classification. The sampling error for the whole system was 3.91%.
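
    The per-cell roll-up implied by this description is simple arithmetic: each square kilometer's total BTU availability is the cover-weighted sum of per-stratum energy values. A sketch with invented strata and values:

```python
# Cover-weighted BTU roll-up for one grid cell (all figures invented).
CELL_AREA_HA = 100.0   # one square kilometer in hectares

# stratum -> (cover fraction in this cell, BTU per hectare from calorimetry)
cell = {"hardwood": (0.45, 9.0e9),
        "cropland": (0.30, 4.5e9),
        "grass":    (0.15, 2.0e9)}

total_btu = sum(cover * btu_ha * CELL_AREA_HA
                for cover, btu_ha in cell.values())
print(f"{total_btu:.3e} BTU for this grid square")
```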

  13. Smart Grid Interoperability Maturity Model Beta Version

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widergren, Steven E.; Drummond, R.; Giroti, Tony

    The GridWise Architecture Council was formed by the U.S. Department of Energy to promote and enable interoperability among the many entities that interact with the electric power system. This balanced team of industry representatives proposes principles for the development of interoperability concepts and standards. The Council provides industry guidance and tools that make it an available resource for smart grid implementations. In the spirit of advancing interoperability of an ecosystem of smart grid devices and systems, this document presents a model for evaluating the maturity of the artifacts and processes that specify the agreement of parties to collaborate across an information exchange interface. You are expected to have a solid understanding of large, complex system integration concepts and experience in dealing with software component interoperation. Those without this technical background should read the Executive Summary for a description of the purpose and contents of the document. Other documents, such as checklists, guides, and whitepapers, exist for targeted purposes and audiences. Please see the www.gridwiseac.org website for more products of the Council that may be of interest to you.

  14. High-quality weather data for grid integration studies

    NASA Astrophysics Data System (ADS)

    Draxl, C.

    2016-12-01

    As variable renewable power penetration levels increase in power systems worldwide, renewable integration studies are crucial to ensure continued economic and reliable operation of the power grid. In this talk we will shed light on requirements for grid integration studies as far as wind and solar energy are concerned. Because wind and solar plants are strongly impacted by weather, high-resolution and high-quality weather data are required to drive power system simulations. Future data sets will have to push the limits of numerical weather prediction to yield these high-resolution data sets, and wind data will have to be time-synchronized with solar data. Current wind and solar integration data sets will be presented. The Wind Integration National Dataset (WIND) Toolkit is the largest and most complete grid integration data set publicly available to date. A meteorological data set, wind power production time series, and simulated forecasts, created using the Weather Research and Forecasting Model run on a 2-km grid over the continental United States at a 5-min resolution, are now publicly available for more than 126,000 land-based and offshore wind power production sites. The Solar Integration National Dataset (SIND) is available as time-synchronized with the WIND Toolkit, and will allow for combined wind-solar grid integration studies. The National Solar Radiation Database (NSRDB) is a similar high temporal- and spatial-resolution database of 18 years of solar resource data for North America and India. Grid integration studies are also carried out in various countries that aim to increase their wind and solar penetration through combined wind and solar integration data sets. We will present a multi-year effort to directly support India's 24x7 energy access goal through a suite of activities aimed at enabling large-scale deployment of clean energy and energy efficiency. Another current effort is the North American Renewable Integration Study, which aims to provide a seamless data set across borders for a whole continent, to simulate and analyze the impacts of potential future large wind and solar power penetrations on bulk power system operations.
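
    The time-synchronization requirement mentioned above is easy to demonstrate with pandas. The series below are synthetic stand-ins; the actual WIND Toolkit and SIND data are distributed by NREL, and this sketch only shows the alignment step a combined wind-solar study would start from.

    # Sketch: align a wind and a solar series on one 5-minute index (synthetic data).
    import numpy as np
    import pandas as pd

    idx = pd.date_range("2013-06-01", periods=288, freq="5min")  # one day at 5 min
    wind = pd.Series(np.random.uniform(0, 16, len(idx)), index=idx, name="wind_mw")
    solar = pd.Series(np.clip(np.sin(np.linspace(0, np.pi, len(idx))) * 10, 0, None),
                      index=idx, name="solar_mw")

    combined = pd.concat([wind, solar], axis=1)  # time-synchronized frame
    hourly = combined.resample("1h").mean()      # coarser resolution for a study
    print(hourly.head())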

  15. A computer software system for integration and analysis of grid-based remote sensing data with other natural resource data. Remote Sensing Project

    NASA Technical Reports Server (NTRS)

    Tilmann, S. E.; Enslin, W. R.; Hill-Rowley, R.

    1977-01-01

    A computer-based information system is described that is designed to assist in the integration of commonly available spatial data for regional planning and resource analysis. The Resource Analysis Program (RAP) provides a variety of analytical and mapping phases for single-factor or multi-factor analyses. The unique analytical and graphic capabilities of RAP are demonstrated with a study conducted in Windsor Township, Eaton County, Michigan. Soil, land cover/use, topographic, and geological maps were used as a data base to develop an eleven-map portfolio. The major themes of the portfolio are land cover/use, non-point water pollution, waste disposal, and ground water recharge.

  16. Smarter Grid Solutions Works with NREL to Enhance Grid-Hosting Capacity

    Science.gov Websites

    Smarter Grid Solutions' technology autonomously manages, coordinates, and controls distributed energy resources in real time, extending to the coordination and real-time management of an entire distribution grid, subsuming the smart home and smart campus.

  17. Squid - a simple bioinformatics grid.

    PubMed

    Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M

    2005-08-03

    BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing-intensive repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need for high-end computers. Most distributed computing / grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than if working with only one computer. Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation containing a pre-configured example.
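
    The near-linear speedup claim is simply work division: a batch of independent queries split over N workers takes roughly 1/N the wall time. The sketch below mimics this with local processes standing in for grid nodes; blast_one() is a placeholder, and Squid's fault tolerance and job re-routing are not reproduced here.

    # Sketch of splitting a large batch of BLAST-like queries across N workers.
    from concurrent.futures import ProcessPoolExecutor

    def blast_one(sequence):
        """Placeholder for running BLAST on one query sequence."""
        return sequence[:10], len(sequence)  # pretend result: (id, score)

    def run_batch(sequences, n_nodes=4):
        # Independent queries split evenly, so wall time falls roughly as 1/N.
        with ProcessPoolExecutor(max_workers=n_nodes) as pool:
            return list(pool.map(blast_one, sequences))

    if __name__ == "__main__":
        queries = ["ACGT" * 50 for _ in range(100)]
        print(len(run_batch(queries)))  # -> 100 results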

  18. A novel LTE scheduling algorithm for green technology in smart grid.

    PubMed

    Hindia, Mohammad Nour; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Chayon, Muhammad Hasibur Rashid

    2015-01-01

    Smart grid (SG) application is being used nowadays to meet the demand of increasing power consumption. SG application is considered an ideal solution for combining renewable energy resources and the electrical grid by means of creating a bidirectional communication channel between the two systems. In this paper, three SG applications applicable to renewable energy systems, namely, distribution automation (DA), distributed energy system-storage (DER) and electric vehicle (EV), are investigated in order to study their suitability in a Long Term Evolution (LTE) network. To compensate for the weaknesses of existing scheduling algorithms, a novel bandwidth estimation and allocation technique and a new scheduling algorithm are proposed. The technique allocates available network resources based on application priority, whereas the algorithm makes scheduling decisions based on dynamic weighting factors of multiple criteria to satisfy the quality-of-service demands (delay, past average throughput and instantaneous transmission rate). Finally, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay and lower packet loss rate for DA and DER, as well as providing a degree of service for EV. In terms of fairness, the proposed algorithm shows 3%, 7%, and 9% better performance compared to exponential rule (EXP-Rule), modified-largest weighted delay first (M-LWDF) and exponential/PF (EXP/PF), respectively.
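
    The scheduling decision described above combines urgency and fairness terms per user. The exact weighting in the proposed algorithm is not reproduced here; the metric below is a generic, hypothetical illustration of how delay, instantaneous rate, and past average throughput can be folded into one score.

    # Hypothetical multi-criteria scheduling metric (illustrative, not the paper's).
    def schedule_metric(head_delay_s, deadline_s, inst_rate_bps, avg_thru_bps,
                        w_delay=1.0, w_rate=1.0):
        """Higher score -> user is scheduled first on this resource block."""
        urgency = head_delay_s / deadline_s                # grows as deadline nears
        fairness = inst_rate_bps / max(avg_thru_bps, 1.0)  # PF-style rate ratio
        return w_delay * urgency + w_rate * fairness

    # DA traffic close to its delay deadline outranks an EV session with slack:
    da = schedule_metric(0.09, 0.10, 5e6, 4e6)
    ev = schedule_metric(0.02, 0.50, 8e6, 6e6)
    print(da > ev)  # -> True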

  19. A Novel LTE Scheduling Algorithm for Green Technology in Smart Grid

    PubMed Central

    Hindia, Mohammad Nour; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Chayon, Muhammad Hasibur Rashid

    2015-01-01

    Smart grid (SG) application is being used nowadays to meet the demand of increasing power consumption. SG application is considered an ideal solution for combining renewable energy resources and the electrical grid by means of creating a bidirectional communication channel between the two systems. In this paper, three SG applications applicable to renewable energy systems, namely, distribution automation (DA), distributed energy system-storage (DER) and electric vehicle (EV), are investigated in order to study their suitability in a Long Term Evolution (LTE) network. To compensate for the weaknesses of existing scheduling algorithms, a novel bandwidth estimation and allocation technique and a new scheduling algorithm are proposed. The technique allocates available network resources based on application priority, whereas the algorithm makes scheduling decisions based on dynamic weighting factors of multiple criteria to satisfy the quality-of-service demands (delay, past average throughput and instantaneous transmission rate). Finally, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay and lower packet loss rate for DA and DER, as well as providing a degree of service for EV. In terms of fairness, the proposed algorithm shows 3%, 7%, and 9% better performance compared to exponential rule (EXP-Rule), modified-largest weighted delay first (M-LWDF) and exponential/PF (EXP/PF), respectively. PMID:25830703

  20. A Comparison of Satellite Based, Modeled Derived Daily Solar Radiation Data with Observed Data for the Continental US

    NASA Technical Reports Server (NTRS)

    White, Jeffrey W.; Hoogenboom, Gerrit; Wilkens, Paul W.; Stackhouse, Paul W., Jr.; Hoell, James M.

    2010-01-01

    Many applications of simulation models and related decision support tools for agriculture and natural resource management require daily meteorological data as inputs. Availability and quality of such data, however, often constrain research and decision support activities that require use of these tools. Daily solar radiation (SRAD) data are especially problematic because the instruments require electronic integrators, accurate sensors are expensive, and calibration standards are seldom available. The Prediction Of Worldwide Energy Resources (NASA/POWER; power.larc.nasa.gov) project at the NASA Langley Research Center estimates daily solar radiation from satellite observations of outgoing visible radiances together with atmospheric parameters derived from satellite observations and assimilation models. The solar data are available for a global 1 degree x 1 degree coordinate grid. SRAD can also be estimated based on attenuation of extraterrestrial radiation (Q0), using daily temperature and rainfall data to estimate the optical thickness of the atmosphere. This study compares daily solar radiation data from NASA/POWER (SRADNP) with instrument readings from 295 stations (SRADOB), as well as with values that were estimated with the WGENR solar generator. WGENR was used both with daily temperature and precipitation records from the stations reporting solar data and with records from the NOAA Cooperative Observer Program (COOP), thus providing two additional sources of solar data, SRADWG and SRADCO. Values of SRADNP for different grid cells consistently showed higher correlations (typically 0.85 to 0.95) with SRADOB data than did SRADWG or SRADCO for sites within the corresponding cells. Mean values of SRADOB, SRADWG and SRADNP for sites within a grid cell usually were within 1 MJ m-2 d-1 of each other, but NASA/POWER values averaged 1.1 MJ m-2 d-1 lower than SRADOB. The magnitude of this bias was greater at lower latitudes and during summer months, and may be at least partially explained by assumptions about ambient aerosol properties. Overall, the NASA/POWER solar radiation data are a promising resource for regional modeling studies where realistic accounting of historic variation is required.
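
    The comparison statistics reported above (per-site correlation and mean bias) can be computed directly. The arrays here are synthetic stand-ins for one site's year of observed (SRADOB) and satellite-derived (SRADNP) values.

    # Sketch: correlation and mean bias between two daily solar radiation series.
    import numpy as np

    rng = np.random.default_rng(0)
    srad_ob = rng.uniform(5, 30, 365)                  # observed, MJ m-2 d-1
    srad_np = srad_ob - 1.1 + rng.normal(0, 2.0, 365)  # synthetic satellite estimate

    r = np.corrcoef(srad_ob, srad_np)[0, 1]  # correlation
    bias = np.mean(srad_np - srad_ob)        # mean bias (negative if satellite low)
    print(f"r = {r:.2f}, bias = {bias:.2f} MJ m-2 d-1")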

  1. gLExec: gluing grid computing to the Unix world

    NASA Astrophysics Data System (ADS)

    Groep, D.; Koeroo, O.; Venekamp, G.

    2008-07-01

    The majority of compute resources in today's scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management are based around the concepts of a numeric 'user ID' and 'group ID' that are local to the resource. In contrast, grid concepts of user and group management are centered around globally assigned identifiers and VO membership, structures that are independent of any specific resource. At the fabric boundary, these 'grid identities' have to be translated to Unix user IDs. New job submission methodologies, such as job-execution web services, community-deployed local schedulers, and the late binding of user jobs in a grid-wide overlay network of 'pilot jobs', push this fabric boundary ever further down into the resource. gLExec, a light-weight (and thereby auditable) credential mapping and authorization system, addresses these issues. It can be run both on the fabric boundary, as part of an execution web service, and on the worker node in a late-binding scenario. In this contribution we describe the rationale for gLExec, how it interacts with site authorization and credential mapping frameworks such as LCAS, LCMAPS and GUMS, and how it can be used to improve site control and traceability in a pilot-job system.
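
    The mapping step gLExec performs can be pictured as a lease of a local pool account keyed on the grid identity. The toy sketch below is not gLExec's implementation (real sites delegate the policy to LCAS/LCMAPS or GUMS); the table and account names are invented.

    # Toy credential mapping: grid identity (DN + VO) -> local Unix pool account.
    POOL_ACCOUNTS = {"atlas": ["atlas001", "atlas002"], "cms": ["cms001"]}
    _leases = {}  # DN -> leased pool account

    def map_identity(dn, vo):
        """Return a stable local account for this grid identity."""
        if dn in _leases:
            return _leases[dn]  # an existing lease is reused
        accounts = POOL_ACCOUNTS[vo]
        account = accounts[len(_leases) % len(accounts)]  # naive lease policy
        _leases[dn] = account
        return account

    print(map_identity("/DC=org/DC=example/CN=Alice", "atlas"))  # -> atlas001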

  2. Semantic technologies in a decision support system

    NASA Astrophysics Data System (ADS)

    Wasielewska, K.; Ganzha, M.; Paprzycki, M.; Bǎdicǎ, C.; Ivanovic, M.; Lirkov, I.

    2015-10-01

    The aim of our work is to design a decision support system based on ontological representation of domain(s) and semantic technologies. Specifically, we consider the case when a Grid / Cloud user describes his/her requirements regarding a "resource" as a class expression from an ontology, while the instances of (the same) ontology represent available resources. The goal is to help the user find the best option with respect to his/her requirements, while remembering that the user's knowledge may be "limited." In this context, we discuss multiple approaches based on semantic data processing, which involve different "forms" of user interaction with the system. Specifically, we consider: (a) ontological matchmaking based on SPARQL queries and class expressions, (b) graph-based semantic closeness of instances representing user requirements (constructed from the class expression) and available resources, and (c) multicriteria analysis based on the AHP method, which utilizes expert domain knowledge (also ontologically represented).
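
    Option (a) above, ontological matchmaking via SPARQL, can be sketched with rdflib. The tiny ontology and property names are invented for illustration; a real system would query the domain ontology holding the resource instances.

    # Sketch of SPARQL matchmaking over resource instances (invented ontology).
    from rdflib import Graph

    TTL = """
    @prefix ex: <http://example.org/grid#> .
    ex:nodeA a ex:ComputeResource ; ex:memoryGB 64 ; ex:cores 16 .
    ex:nodeB a ex:ComputeResource ; ex:memoryGB 16 ; ex:cores 8 .
    """

    g = Graph()
    g.parse(data=TTL, format="turtle")

    # User requirement expressed as a query: at least 32 GB RAM and 8 cores.
    q = """
    PREFIX ex: <http://example.org/grid#>
    SELECT ?r WHERE {
      ?r a ex:ComputeResource ; ex:memoryGB ?m ; ex:cores ?c .
      FILTER (?m >= 32 && ?c >= 8)
    }
    """
    for row in g.query(q):
        print(row.r)  # -> http://example.org/grid#nodeA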

  3. Using Information Processing Techniques to Forecast, Schedule, and Deliver Sustainable Energy to Electric Vehicles

    NASA Astrophysics Data System (ADS)

    Pulusani, Praneeth R.

    As the number of electric vehicles on the road increases, the current power grid infrastructure will not be able to handle the additional load. Some approaches in the area of Smart Grid research attempt to mitigate this, but those approaches alone will not be sufficient. Those approaches, together with the traditional solution of increased power production, can result in an insufficient and imbalanced power grid, which can lead to transformer blowouts, blackouts, blown fuses, etc. The proposed solution supplements the "Smart Grid" to create a more sustainable power grid. To solve or mitigate the magnitude of the problem, measures can be taken that depend on weather forecast models. For instance, wind and solar forecasts can be used to create first-order Markov chain models that help predict the availability of additional power at certain times. These models are used in conjunction with the information processing layer and bidirectional signal processing components of electric vehicle charging systems to schedule the amount of energy transferred per time interval at various times. The research was divided into three distinct components: (1) Renewable Energy Supply Forecast Model, (2) Energy Demand Forecast from PEVs, and (3) Renewable Energy Resource Estimation. For the first component, power data from a local wind turbine and weather forecast data from NOAA were used to develop a wind energy forecast model, using a first-order Markov chain model as the foundation. In the second component, additional macro energy demand from PEVs in the Greater Rochester Area was forecasted by simulating concurrent driving routes. In the third component, historical data from renewable energy sources was analyzed to estimate the renewable resources needed to offset the energy demand from PEVs. The results from these models and components can be used in smart grid applications for scheduling and delivering energy. Several solutions are discussed to mitigate the problems of overloaded transformers, lack of energy supply, and higher utility costs.
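
    The first-order Markov chain mentioned above can be estimated directly from a discretized history of wind power. The sketch below uses synthetic data; the binning into four power states and the interval length are assumptions for illustration.

    # Sketch: estimate a first-order Markov transition matrix from state history.
    import numpy as np

    def transition_matrix(states, n_states):
        """Row-stochastic P where P[i, j] = Pr(next state = j | current = i)."""
        counts = np.zeros((n_states, n_states))
        for a, b in zip(states[:-1], states[1:]):
            counts[a, b] += 1
        rows = counts.sum(axis=1, keepdims=True)
        return counts / np.where(rows == 0, 1, rows)  # guard empty rows

    rng = np.random.default_rng(1)
    history = rng.integers(0, 4, 1000)  # synthetic: 4 wind power bins
    P = transition_matrix(history, 4)
    print(P[2])  # distribution over the next state, given state 2 now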

  4. A data colocation grid framework for big data medical image processing: backend design

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-03-01

    When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated against the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented via the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Results from three empirical experiments are presented and discussed: (1) a load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model empirically verified on the grid framework and compared with the cluster when deployed with a standard Sun Grid Engine (SGE), yielding an 8-fold reduction in wall-clock time and a 14-fold reduction in resource time, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.
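
    The "table scheme for fast data query" rests on HBase's lexicographic ordering of row keys: if keys lead with the fields a scan filters on, related images stay contiguous and a prefix scan touches few regions. The key layout below is illustrative, not the paper's actual scheme.

    # Sketch of a scan-friendly row-key layout (hypothetical fields and widths).
    def row_key(project, subject, session, scan):
        # Fixed-width fields keep lexicographic order == logical grouping order,
        # so all images of one project's subject sort into a contiguous range.
        return f"{project:>8}|{subject:>10}|{session:>6}|{scan:>4}"

    keys = [row_key("proj01", f"subj{i:04d}", "sess01", "T1") for i in (3, 1, 2)]
    print(sorted(keys))  # lexicographic order is HBase's storage and scan order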

  5. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design.

    PubMed

    Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A

    2018-03-01

    When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated against the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented via the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Results from three empirical experiments are presented and discussed: (1) a load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model empirically verified on the grid framework and compared with the cluster when deployed with a standard Sun Grid Engine (SGE), yielding an 8-fold reduction in wall-clock time and a 14-fold reduction in resource time, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.

  6. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design

    PubMed Central

    Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-01-01

    When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated against the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented via the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Results from three empirical experiments are presented and discussed: (1) a load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model empirically verified on the grid framework and compared with the cluster when deployed with a standard Sun Grid Engine (SGE), yielding an 8-fold reduction in wall-clock time and a 14-fold reduction in resource time, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available. PMID:29887668

  7. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    NASA Astrophysics Data System (ADS)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats

    2014-06-01

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by "Big Data" will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.

  8. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technology infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  9. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by 'Big Data' will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.

  10. Picking the Best from the All-Resources Menu: Advanced Tools for Resource Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan S

    Introduces the wide range of electric power system modeling types and the associated questions they can help answer. The presentation focuses on modeling needs for high levels of Distributed Energy Resources (DERs), renewables, and inverter-based technologies as alternatives to traditional centralized power systems. Covers Dynamics, Production Cost/QSTS, Metric Assessment, Resource Planning, and Integrated Simulations, with examples drawn from NREL's past and on-going projects. Presented at the McKnight Foundation workshop on 'An All-Resources Approach to Planning for a More Dynamic, Low-Carbon Grid,' exploring grid modernization options to replace retiring coal plants in Minnesota.

  11. NREL + SolarCity: Maximizing Solar Power on Electrical Grids Video Text

    Science.gov Websites

    Text excerpt from the NREL + SolarCity "Maximizing Solar Power on Electrical Grids" video, in which Ryan Hanley notes that the growth of distributed energy resources is becoming real and tangible, and Bryan Hannegan discusses distributed rooftop PV solar technologies and Hawaiian Electric Company's concerns about installing distributed energy resources on their grid.

  12. Assessing the prospective resource base for enhanced geothermal systems in Europe

    NASA Astrophysics Data System (ADS)

    Limberger, J.; Calcagno, P.; Manzella, A.; Trumpy, E.; Boxem, T.; Pluymaekers, M. P. D.; van Wees, J.-D.

    2014-12-01

    In this study the resource base for EGS (enhanced geothermal systems) in Europe was quantified and economically constrained, applying a discounted cash-flow model to different techno-economic scenarios for future EGS in 2020, 2030, and 2050. Temperature is a critical parameter that controls the amount of thermal energy available in the subsurface. Therefore, the first step in assessing the European resource base for EGS is the construction of a subsurface temperature model of onshore Europe. Subsurface temperatures were computed to a depth of 10 km below ground level for a regular 3-D hexahedral grid with a horizontal resolution of 10 km and a vertical resolution of 250 m. Vertical conductive heat transport was considered as the main heat transfer mechanism. Surface temperature and basal heat flow were used as boundary conditions for the top and bottom of the model, respectively. Where publicly available, the most recent and comprehensive regional temperature models, based on data from wells, were incorporated. With the modeled subsurface temperatures and future technical and economic scenarios, the technical potential and minimum levelized cost of energy (LCOE) were calculated for each grid cell of the temperature model. Calculations for a typical EGS scenario yield costs of EUR 215 MWh-1 in 2020, EUR 127 MWh-1 in 2030, and EUR 70 MWh-1 in 2050. Cutoff values of EUR 200 MWh-1 in 2020, EUR 150 MWh-1 in 2030, and EUR 100 MWh-1 in 2050 are imposed on the calculated LCOE values in each grid cell to limit the technical potential, resulting in an economic potential for Europe of 19 GWe in 2020, 22 GWe in 2030, and 522 GWe in 2050. The results of our approach not only provide an indication of prospective areas for future EGS in Europe, but also show a more realistic, cost-determined and depth-dependent distribution of the technical potential, obtained by applying different well cost models for 2020, 2030, and 2050.
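
    The economic screening above can be made concrete with a small discounted cash-flow calculation per grid cell. All numbers below are illustrative, not the study's well-cost models; only the LCOE arithmetic and the cutoff filtering follow the description.

    # Sketch: LCOE per grid cell from discounted costs and output, then a cutoff.
    def lcoe_eur_per_mwh(capex_eur, opex_eur_per_yr, mwh_per_yr,
                         rate=0.07, years=30):
        disc = [(1 + rate) ** -t for t in range(1, years + 1)]
        costs = capex_eur + opex_eur_per_yr * sum(disc)  # discounted lifetime cost
        energy = mwh_per_yr * sum(disc)                  # discounted lifetime output
        return costs / energy

    cells = {"cellA": (40e6, 1.2e6, 5.5e4), "cellB": (40e6, 1.2e6, 2.0e4)}
    cutoff_2050 = 100.0  # EUR/MWh cutoff, as in the 2050 scenario above
    economic = {c: lcoe_eur_per_mwh(*p) for c, p in cells.items()
                if lcoe_eur_per_mwh(*p) <= cutoff_2050}
    print(economic)  # cellA (~80 EUR/MWh) passes; cellB (~220) is excluded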

  13. Framework Resources Multiply Computing Power

    NASA Technical Reports Server (NTRS)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  14. Information Power Grid Posters

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    2003-01-01

    This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provides seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of a large-scale computing node into a distributed environment, remote access to high data rate instruments, and an exploratory grid environment.

  15. A Grid Infrastructure for Supporting Space-based Science Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Emerging technologies for computational grid infrastructures have the potential to revolutionize the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing for less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.

  16. An Advanced User Interface Approach for Complex Parameter Study Process Specification in the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob; Yan, Jerry C. (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have now become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are now seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers great resource opportunity but at the expense of great difficulty of use. We present an approach to this problem which stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  17. WebGIS based on semantic grid model and web services

    NASA Astrophysics Data System (ADS)

    Zhang, WangFei; Yue, CaiRong; Gao, JianGuo

    2009-10-01

    As the combination point of network technology and GIS technology, WebGIS has developed rapidly in recent years. With the restrictions of the Web and the characteristics of GIS, traditional WebGIS has some prominent problems in development. For example, it cannot accomplish the interoperability of heterogeneous spatial databases, and it cannot accomplish cross-platform data access. With the appearance of Web Service and Grid technology, great changes have occurred in the field of WebGIS. Web Service provides an interface that gives information at different sites the ability to share data and intercommunicate. The goal of Grid technology is to turn the internet into one large supercomputer, with which we can efficiently implement the overall sharing of computing resources, storage resources, data resources, information resources, knowledge resources and expert resources. But in WebGIS we only implement the physical connection of data and information, and this is far from enough. Because experts in different fields have different understandings of the world and follow different professional regulations, policies and habits, they reach different conclusions when observing the same geographic phenomenon, and semantic heterogeneity is produced; consequently, there are large differences in the same concept across fields. If WebGIS is used without considering this semantic heterogeneity, users' questions may be answered wrongly or not at all. To solve this problem, this paper puts forward and evaluates an effective method of combining the semantic grid and Web Services technology to develop WebGIS. In this paper, we studied the method to construct ontology and the method to combine Grid technology and Web Services, and with a detailed analysis of the computing characteristics and application model of distributed data, we designed an ontology-driven WebGIS query system based on Grid technology and Web Services.

  18. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) A comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3) Coupling of large-scale computing and data systems to scientific and engineering instruments (e.g., real-time interaction with experiments through real-time data analysis and interpretation presented to the experimentalist in ways that allow direct interaction with the experiment, instead of just with instrument control); (4) Highly interactive, augmented reality and virtual reality remote collaborations (e.g., an Ames / Boeing Remote Help Desk providing field maintenance use of coupled video and NDI to a remote, on-line airframe structures expert, who uses this data to index into detailed design databases and returns 3D internal aircraft geometry to the field); (5) Single computational problems too large for any single system (e.g., the rotorcraft reference calculation). Grids also have the potential to provide pools of resources that could be called on in extraordinary / rapid-response situations (such as disaster response) because they can provide common interfaces and access mechanisms, standardized management, and uniform user authentication and authorization for large collections of distributed resources (whether or not they normally function in concert). IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focused primarily on two types of users: the scientist / design engineer, whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments) and whose primary interface to IPG will be through various sorts of problem solving frameworks; and the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. The analysis of the needs of these two types of users provides a broad set of requirements that gives rise to a general set of required capabilities. The IPG project is intended to address all of these requirements. In some cases the required computing technology exists, and in some cases it must be researched and developed. The project is using available technology to provide a prototype set of capabilities in a persistent distributed computing testbed. Beyond this, there are required capabilities that are not immediately available, and whose development spans the range from near-term engineering development (one to two years) to much longer term R&D (three to six years). Additional information is contained in the original.

  19. AGIS: The ATLAS Grid Information System

    NASA Astrophysics Data System (ADS)

    Anisenkov, A.; Di Girolamo, A.; Klimentov, A.; Oleynik, D.; Petrosyan, A.; Atlas Collaboration

    2014-06-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we describe the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  20. Best Practices Handbook for the Collection and Use of Solar Resource Data for Solar Energy Applications: Second Edition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Manajit; Habte, Aron; Gueymard, Christian

    As the world looks for low-carbon sources of energy, solar power stands out as the single most abundant energy resource on Earth. Harnessing this energy is the challenge for this century. Photovoltaics, solar heating and cooling, and concentrating solar power (CSP) are primary forms of energy applications using sunlight. These solar energy systems use different technologies, collect different fractions of the solar resource, and have different siting requirements and production capabilities. Reliable information about the solar resource is required for every solar energy application. This holds true for small installations on a rooftop as well as for large solar power plants; however, solar resource information is of particular interest for large installations, because they require substantial investment, sometimes exceeding 1 billion dollars in construction costs. Before such a project is undertaken, the best possible information about the quality and reliability of the fuel source must be made available. That is, project developers need reliable data about the solar resource available at specific locations, including historic trends with seasonal, daily, hourly, and (preferably) subhourly variability, to predict the daily and annual performance of a proposed power plant. Without this data, an accurate financial analysis is not possible. Additionally, with the deployment of large amounts of distributed photovoltaics, there is an urgent need to integrate this source of generation to ensure the reliability and stability of the grid. Forecasting generation from the various sources will allow for larger penetrations of these generation sources because utilities and system operators can then ensure stable grid operations. Developed by the foremost experts in the field, who have come together under the umbrella of the International Energy Agency's Solar Heating and Cooling Task 46, this handbook summarizes state-of-the-art information about all the above topics.

  1. Electric Power Infrastructure Reliability and Security (EPIRS) Research and Development Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rick Meeker; L. Baldwin; Steinar Dale

    2010-03-31

    Power systems have become increasingly complex and face unprecedented challenges posed by population growth, climate change, national security issues, foreign energy dependence and an aging power infrastructure. Increased demand combined with increased economic and environmental constraints is forcing state, regional and national power grids to expand supply without the large safety and stability margins in generation and transmission capacity that have been the rule in the past. Deregulation, distributed generation, natural and man-made catastrophes and other causes serve to further challenge and complicate management of the electric power grid. To meet the challenges of the 21st century while also maintaining system reliability, the electric power grid must effectively integrate new and advanced technologies both in the actual equipment for energy conversion, transfer and use, and in the command, control, and communication systems by which effective and efficient operation of the system is orchestrated - in essence, the 'smart grid'. This evolution calls for advances in development, integration, analysis, and deployment approaches that ultimately seek to take into account, every step of the way, the dynamic behavior of the system, capturing critical effects due to interdependencies and interaction. This approach is necessary to better mitigate the risk of blackouts and other disruptions and to improve the flexibility and capacity of the grid. Building on prior Navy and Department of Energy investments in infrastructure and resources for electric power systems research, testing, modeling, and simulation at the Florida State University (FSU) Center for Advanced Power Systems (CAPS), this project has continued an initiative aimed at assuring reliable and secure grid operation through a more complete understanding and characterization of some of the key technologies that will be important in a modern electric system, while also fulfilling an education and outreach mission to provide future energy workforce talent and support the electric system stakeholder community. Building upon and extending portions of that research effort, this project has been focused in the following areas: (1) Building high-fidelity integrated power and controls hardware-in-the-loop research and development testbed capabilities (Figure 1). (2) Distributed Energy Resources Integration - (a) Testing Requirements and Methods for Fault Current Limiters, (b) Contributions to the Development of IEEE 1547.7, (c) Analysis of a STATCOM Application for Wind Resource Integration, (d) Development of a Grid-Interactive Inverter with Energy Storage Elements, (e) Simulation-Assisted Advancement of Microgrid Understanding and Applications; (3) Availability of High-Fidelity Dynamic Simulation Tools for Grid Disturbance Investigations; (4) HTS Material Characterization - (a) AC Loss Studies on High Temperature Superconductors, (b) Local Identification of Current-Limiting Mechanisms in Coated Conductors; (5) Cryogenic Dielectric Research; and (6) Workshops, education, and outreach.

  2. The GENIUS Grid Portal and robot certificates: a new tool for e-Science

    PubMed Central

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Milanesi, Luciano; Maggi, Giorgio Pietro; Vicario, Saverio

    2009-01-01

    Background Grid technology is the computing model which allows users to share a wide plethora of distributed computational resources regardless of their geographical location. Up to now, the high security policy required in order to access distributed computing resources has been a rather big limiting factor when trying to broaden the usage of Grids into a wide community of users. Grid security is indeed based on the Public Key Infrastructure (PKI) of X.509 certificates, and the procedure to get and manage those certificates is unfortunately not straightforward. A first step to make Grids more appealing for new users has recently been achieved with the adoption of robot certificates. Methods Robot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful, for instance, to automate grid service monitoring, data processing production, and distributed data collection systems. Basically these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by everyone interested in running the related applications in a Grid environment using a user-friendly graphic interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended in order to support the new authentication based on the adoption of these robot certificates. Results The work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can thus be extended to users belonging to several scientific domains, providing an asset in raising Grid awareness among a wide number of potential users. Conclusion The adoption of Grid portals extended with robot certificates can really contribute to creating transparent access to computational resources of Grid Infrastructures, enhancing the spread of this new paradigm in researchers' working life to address new global scientific challenges. The evaluated solution can of course be extended to other portals, applications and scientific communities. PMID:19534747

  3. The GENIUS Grid Portal and robot certificates: a new tool for e-Science.

    PubMed

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Milanesi, Luciano; Maggi, Giorgio Pietro; Vicario, Saverio

    2009-06-16

    Grid technology is the computing model which allows users to share a wide plethora of distributed computational resources regardless of their geographical location. Up to now, the high security policy required in order to access distributed computing resources has been a rather big limiting factor when trying to broaden the usage of Grids into a wide community of users. Grid security is indeed based on the Public Key Infrastructure (PKI) of X.509 certificates, and the procedure to get and manage those certificates is unfortunately not straightforward. A first step to make Grids more appealing for new users has recently been achieved with the adoption of robot certificates. Robot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful, for instance, to automate grid service monitoring, data processing production, and distributed data collection systems. Basically these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by everyone interested in running the related applications in a Grid environment using a user-friendly graphic interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended in order to support the new authentication based on the adoption of these robot certificates. The work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can thus be extended to users belonging to several scientific domains, providing an asset in raising Grid awareness among a wide number of potential users. The adoption of Grid portals extended with robot certificates can really contribute to creating transparent access to computational resources of Grid Infrastructures, enhancing the spread of this new paradigm in researchers' working life to address new global scientific challenges. The evaluated solution can of course be extended to other portals, applications and scientific communities.

  4. Chelonia: A self-healing, replicated storage system

    NASA Astrophysics Data System (ADS)

    Kerr Nilsen, Jon; Toor, Salman; Nagy, Zsombor; Read, Alex

    2011-12-01

    Chelonia is a novel grid storage system designed to fill the requirements gap between those of large, sophisticated scientific collaborations which have adopted the grid paradigm for their distributed storage needs, and those of corporate business communities gravitating towards the cloud paradigm. Chelonia is an integrated system of heterogeneous, geographically dispersed storage sites which is easily and dynamically expandable and optimized for high availability and scalability. The architecture and implementation in terms of web services running inside the Advanced Resource Connector Hosting Environment Daemon (ARC HED) are described, and results of tests in both local-area and wide-area networks that demonstrate the fault tolerance, stability and scalability of Chelonia are presented. In addition, example setups for production deployments for small and medium-sized VOs are described.

  5. Auspice: Automatic Service Planning in Cloud/Grid Environments

    NASA Astrophysics Data System (ADS)

    Chiu, David; Agrawal, Gagan

    Recent scientific advances have fostered a mounting number of services and data sets available for utilization. These resources, though scattered across disparate locations, are often loosely coupled both semantically and operationally. This loosely coupled relationship implies the possibility of linking together operations and data sets to answer queries. This task, generally known as automatic service composition, thus abstracts the process of complex scientific workflow planning from the user. We have been exploring a metadata-driven approach toward automatic service workflow composition, among other enabling mechanisms, in our system, Auspice: Automatic Service Planning in Cloud/Grid Environments. In this paper, we present a complete overview of our system's unique features and its outlook for future deployment as the Cloud computing paradigm becomes increasingly prominent in enabling scientific computing.

  6. Grid accounting service: state and future development

    NASA Astrophysics Data System (ADS)

    Levshina, T.; Sehgal, C.; Bockelman, B.; Weitzel, D.; Guru, A.

    2014-06-01

    During the last decade, large-scale federated distributed infrastructures have been continually developed and expanded. One of the crucial components of a cyberinfrastructure is an accounting service that collects data related to resource utilization and the identity of the users consuming those resources. The accounting service is important for verifying pledged resource allocations for particular groups and users, providing reports for funding agencies and resource providers, and understanding hardware provisioning requirements. It can also be used for end-to-end troubleshooting as well as billing purposes. In this work we describe Gratia, a federated accounting service jointly developed at Fermilab and the Holland Computing Center at the University of Nebraska-Lincoln. The Open Science Grid, Fermilab, HCC, and several other institutions have used Gratia in production for several years. Current development activities include expanding virtual machine provisioning information, XSEDE allocation usage accounting, and Campus Grid resource utilization. We also identify the directions of future work: improvement and expansion of Cloud accounting, persistent and elastic storage space allocation, and the incorporation of WAN and LAN network metrics.
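
    To make the role of such a service concrete, here is a minimal sketch of the kind of per-VO aggregation an accounting collector performs over job usage records. The record layout and field names are invented for illustration and do not reflect Gratia's actual schema.

    ```python
    from collections import defaultdict

    # Each record: (user, vo, wall_hours, cpu_hours); an invented layout.
    records = [
        ("alice", "cms", 10.0, 9.2),
        ("bob", "atlas", 4.5, 4.1),
        ("alice", "cms", 7.0, 6.8),
    ]

    def summarize_by_vo(records):
        """Aggregate job counts and wall/CPU hours per virtual organisation."""
        totals = defaultdict(lambda: {"jobs": 0, "wall": 0.0, "cpu": 0.0})
        for user, vo, wall, cpu in records:
            totals[vo]["jobs"] += 1
            totals[vo]["wall"] += wall
            totals[vo]["cpu"] += cpu
        return dict(totals)

    print(summarize_by_vo(records))
    # {'cms': {'jobs': 2, 'wall': 17.0, 'cpu': 16.0}, 'atlas': {'jobs': 1, ...}}
    ```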

  7. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    NASA Astrophysics Data System (ADS)

    Elmsheuser, Johannes; Medrano Llamas, Ramón; Legger, Federica; Sciabà, Andrea; Sciacca, Gianfranco; Úbeda García, Mario; van der Ster, Daniel

    2012-12-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion policies. A study of the historical test results for ATLAS, CMS and LHCb will be presented, including comparisons between the experiments’ grid availabilities and a search for site-based or temporal failure correlations. Finally, we will look to future plans that will allow users to gain new insights into the test results; these include developments to allow increased testing concurrency, increased scale in the number of metrics recorded per test job (up to hundreds), and increased scale in the historical job information (up to many millions of jobs per VO).
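
    As an illustration of the automatic site-exclusion idea mentioned above, the toy sketch below marks a site as excluded when its recent functional-test success rate falls under a threshold. The window size and threshold are invented placeholders, not the tuned ATLAS policy studied in the paper.

    ```python
    def site_status(test_results, threshold=0.8, window=20):
        """test_results: list of booleans (test passed?), most recent last."""
        recent = test_results[-window:]
        if not recent:
            return "unknown"
        success_rate = sum(recent) / len(recent)
        return "online" if success_rate >= threshold else "excluded"

    print(site_status([True] * 18 + [False] * 2))  # 0.90 -> online
    print(site_status([True] * 12 + [False] * 8))  # 0.60 -> excluded
    ```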

  8. Optimizing Resource Utilization in Grid Batch Systems

    NASA Astrophysics Data System (ADS)

    Gellrich, Andreas

    2012-12-01

    On Grid sites, the requirements of computing tasks (jobs) for computing, storage, and network resources differ widely. For instance, Monte Carlo production jobs are almost purely CPU-bound, whereas physics analysis jobs demand high data rates. In order to optimize the utilization of compute node resources, jobs must be distributed intelligently over the nodes. Although job resource requirements cannot be deduced directly, jobs are mapped to POSIX UIDs/GIDs according to the VO, VOMS group and role information contained in the VOMS proxy. The UID/GID then makes it possible to distinguish jobs, provided users use VOMS proxies as intended by the VO management, e.g. ‘role=production’ for Monte Carlo jobs. It is possible to set up and configure batch systems (queuing system and scheduler) at Grid sites based on these considerations, although scaling limits were observed with the scheduler MAUI. In tests, these limitations could be overcome with a home-made scheduler.
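
    The following toy sketch illustrates the scheduling idea in code: each job is classified by the VOMS role carried in its proxy, and the two classes are interleaved across worker nodes so that CPU-bound and data-intensive jobs mix on each node. The role names and the placement rule are illustrative assumptions, not the site's actual configuration or the home-made scheduler itself.

    ```python
    def classify(voms_role):
        # Treat 'production' as CPU-bound Monte Carlo work and everything
        # else as data-intensive analysis (an illustrative rule).
        return "cpu_bound" if voms_role == "production" else "io_bound"

    def assign(jobs, n_nodes):
        """jobs: list of (job_id, voms_role); returns {job_id: node index}."""
        counters = {"cpu_bound": 0, "io_bound": 0}
        placement = {}
        for job_id, role in jobs:
            cls = classify(role)
            # Offset the two round-robins so the classes interleave on nodes.
            offset = 0 if cls == "cpu_bound" else n_nodes // 2
            placement[job_id] = (counters[cls] + offset) % n_nodes
            counters[cls] += 1
        return placement

    print(assign([("j1", "production"), ("j2", "analysis"), ("j3", "production")], 4))
    # {'j1': 0, 'j2': 2, 'j3': 1}
    ```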

  9. ScyFlow: An Environment for the Visual Specification and Execution of Scientific Workflows

    NASA Technical Reports Server (NTRS)

    McCann, Karen M.; Yarrow, Maurice; DeVivo, Adrian; Mehrotra, Piyush

    2004-01-01

    With the advent of grid technologies, scientists and engineers are building more and more complex applications to utilize distributed grid resources. The core grid services provide a path for accessing and utilizing these resources in a secure and seamless fashion. However, what scientists need is an environment that allows them to specify their application runs at a high organizational level, and then supports efficient execution across any given set or sets of resources. We have been designing and implementing ScyFlow, a dual-interface architecture (both GUI and API) that addresses this problem. The scientist/user specifies the application tasks along with the necessary control and data flow, and monitors and manages the execution of the resulting workflow across the distributed resources. In this paper, we utilize two scenarios to provide the details of the two modules of the project, the visual editor and the runtime workflow engine.
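
    To make the control-flow handling concrete, here is a minimal sketch of a topological-order workflow executor of the general kind such a runtime engine embodies; the tasks and dependency edges are placeholders, and this is not ScyFlow's actual engine.

    ```python
    def run_workflow(tasks, deps):
        """tasks: {name: callable}; deps: {name: set of prerequisite names}."""
        done = set()
        while len(done) < len(tasks):
            ready = [t for t in tasks
                     if t not in done and deps.get(t, set()) <= done]
            if not ready:
                raise RuntimeError("cycle or unsatisfiable dependency")
            for t in ready:
                tasks[t]()  # a real engine would submit this to a grid resource
                done.add(t)

    run_workflow(
        {"fetch": lambda: print("fetch"),
         "process": lambda: print("process"),
         "plot": lambda: print("plot")},
        {"process": {"fetch"}, "plot": {"process"}},
    )
    ```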

  10. A highly optimized grid deployment: the metagenomic analysis example.

    PubMed

    Aparicio, Gabriel; Blanquer, Ignacio; Hernández, Vicente

    2008-01-01

    Computational resources and computationally expensive processes are two topics that are not growing at the same rate. The availability of large amounts of computing resources in Grid infrastructures does not mean that efficiency is not an important issue. It is necessary to analyze the whole process to improve partitioning and submission schemas, especially in the most critical experiments. This is the case for metagenomic analysis, and this text shows the work done in order to optimize a Grid deployment, which has led to a reduction of the response time and the failure rates. Metagenomic studies aim at processing samples of multiple specimens to extract the genes and proteins that belong to the different species. In many cases, the sequencing of the DNA of many microorganisms is hindered by the impossibility of growing significant samples of isolated specimens. Many bacteria cannot survive alone, and require interaction with other organisms. In such cases, the DNA information available belongs to different kinds of organisms. One important stage in metagenomic analysis consists of the extraction of fragments followed by a comparison and functional analysis stage. By comparison to existing chains whose function is well known, fragments can be classified. This process is computationally intensive and requires several iterations of alignment and phylogeny classification steps. Source samples reach several million sequences, each of which can reach thousands of nucleotides. These sequences are compared to a selected part of the "non-redundant" database, which covers only the information from eukaryotic species. From this first analysis, a refining process is performed and the alignment analysis is restarted from the results. This process requires several CPU-years. The article describes and analyzes the difficulties in fragmenting, automating and checking the above operations in current Grid production environments. This environment has been tuned up through an experimental study which has tested the most efficient and reliable resources, the optimal job size, and the data transfer and database re-indexing overhead. The environment should re-submit faulty jobs, detect endless tasks and ensure that the results are correctly retrieved and the workflow synchronised. The paper gives an outline of the structure of the system and the preparation steps performed to deal with this experiment.
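
    The partition-and-resubmit pattern described above can be sketched as follows. The chunk size, retry limit and the random stand-in for a grid job submission are illustrative assumptions, not the tuned values from the experimental study.

    ```python
    import random

    def chunks(seqs, size):
        """Partition a list of sequences into fixed-size grid jobs."""
        for i in range(0, len(seqs), size):
            yield seqs[i:i + size]

    def run_job(chunk):
        # Stand-in for submitting one alignment job and checking its outcome.
        return random.random() > 0.1

    def process_all(seqs, job_size=1000, max_retries=3):
        """Submit all chunks, resubmitting failures up to max_retries times."""
        permanently_failed = []
        for chunk in chunks(seqs, job_size):
            for attempt in range(max_retries):
                if run_job(chunk):
                    break
            else:  # every retry failed
                permanently_failed.append(chunk)
        return permanently_failed

    failed = process_all([f"seq{i}" for i in range(5000)])
    print(f"{len(failed)} chunks failed permanently")
    ```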

  11. [Application of digital earth technology in research of traditional Chinese medicine resources].

    PubMed

    Liu, Jinxin; Liu, Xinxin; Gao, Lu; Wei, Yingqin; Meng, Fanyun; Wang, Yongyan

    2011-02-01

    This paper describes digital earth technology and its core "3S" integration technology. The advance and promotion of "3S" technology provide more favorable means and technical support for Chinese medicine resource surveys, evaluation and appropriate zoning. The grid is a mature and popular technology that can connect all kinds of information resources. The authors sum up the application of digital earth technology in research on traditional Chinese medicine resources in recent years, and propose a new method and technical route for the investigation of traditional Chinese medicine resources, traditional Chinese medicine zoning and suitability assessment by combining digital earth technology and the grid.

  12. The Language Grid: supporting intercultural collaboration

    NASA Astrophysics Data System (ADS)

    Ishida, T.

    2018-03-01

    A variety of language resources already exist online. Unfortunately, since many language resources have usage restrictions, it is virtually impossible for each user to negotiate with every language resource provider when combining several resources to achieve the intended purpose. To increase the accessibility and usability of language resources (dictionaries, parallel texts, part-of-speech taggers, machine translators, etc.), we proposed the Language Grid [1]; it wraps existing language resources as atomic services and enables users to create new services by combining the atomic services, and reduces the negotiation costs related to intellectual property rights [4]. Our slogan is “language services from language resources.” We believe that modularization with recombination is the key to creating a full range of customized language environments for various user communities.

  13. Geoscience data visualization and analysis using GeoMapApp

    NASA Astrophysics Data System (ADS)

    Ferrini, Vicki; Carbotte, Suzanne; Ryan, William; Chan, Samantha

    2013-04-01

    Increased availability of geoscience data resources has resulted in new opportunities for developing visualization and analysis tools that not only promote data integration and synthesis, but also facilitate quantitative cross-disciplinary access to data. Interdisciplinary investigations, in particular, frequently require visualizations and quantitative access to specialized data resources across disciplines, which has historically required specialist knowledge of data formats and software tools. GeoMapApp (www.geomapapp.org) is a free online data visualization and analysis tool that provides direct quantitative access to a wide variety of geoscience data for a broad international interdisciplinary user community. While GeoMapApp provides access to online data resources, it can also be packaged to work offline through the deployment of a small portable hard drive. This mode of operation can be particularly useful during field programs to provide functionality and direct access to data when a network connection is not possible. Hundreds of data sets from a variety of repositories are directly accessible in GeoMapApp, without the need for the user to understand the specifics of file formats or data reduction procedures. Available data include global and regional gridded data and images, as well as tabular and vector datasets. In addition to basic visualization and data discovery functionality, users are provided with simple tools for creating customized maps and visualizations and for quantitatively interrogating data. Specialized data portals with advanced functionality are also provided for power users to further analyze data resources and access underlying component datasets. Users may import and analyze their own geospatial datasets by loading local versions of geospatial data and can access content made available through Web Feature Services (WFS) and Web Map Services (WMS). Once data are loaded in GeoMapApp, a variety of options are provided to export data and/or 2D/3D visualizations into common formats including grids, images, text files, spreadsheets, etc. Examples of interdisciplinary investigations that make use of GeoMapApp visualization and analysis functionality will be provided.

  14. funcLAB/G-service-oriented architecture for standards-based analysis of functional magnetic resonance imaging in HealthGrids.

    PubMed

    Erberich, Stephan G; Bhandekar, Manasee; Chervenak, Ann; Kesselman, Carl; Nelson, Marvin D

    2007-01-01

    Functional MRI is successfully being used in clinical and research applications including preoperative planning, language mapping, and outcome monitoring. However, clinical use of fMRI is less widespread due to the complexity of imaging, image workflow, and post-processing, and the lack of algorithmic standards hindering result comparability. As a consequence, widespread adoption of fMRI as a clinical tool is low, contributing to the uncertainty of community physicians about how to integrate fMRI into practice. In addition, training of physicians in fMRI is in its infancy and requires clinical and technical understanding. Therefore, many institutions which perform fMRI have a team of basic researchers and physicians to perform fMRI as a routine imaging tool. In order to provide fMRI as an advanced diagnostic tool to the benefit of a larger patient population, image acquisition and image post-processing must be streamlined, standardized, and available at any institution which does not have these resources available. Here we describe a software architecture, the functional imaging laboratory (funcLAB/G), which addresses (i) standardized image processing using Statistical Parametric Mapping and (ii) its extension to secure sharing and availability for the community using standards-based Grid technology (the Globus Toolkit). funcLAB/G carries the potential to overcome the limitations of fMRI in clinical use and thus makes standardized fMRI available to the broader healthcare enterprise utilizing the Internet and HealthGrid Web Services technology.

  15. Job Superscheduler Architecture and Performance in Computational Grid Environments

    NASA Technical Reports Server (NTRS)

    Shan, Hongzhang; Oliker, Leonid; Biswas, Rupak

    2003-01-01

    Computational grids hold great promise in utilizing geographically separated heterogeneous resources to solve large-scale complex scientific problems. However, a number of major technical hurdles, including distributed resource management and effective job scheduling, stand in the way of realizing these gains. In this paper, we propose a novel grid superscheduler architecture and three distributed job migration algorithms. We also model the critical interaction between the superscheduler and autonomous local schedulers. Extensive performance comparisons with ideal, central, and local schemes using real workloads from leading computational centers are conducted in a simulation environment. Additionally, synthetic workloads are used to perform a detailed sensitivity analysis of our superscheduler. Several key metrics demonstrate that substantial performance gains can be achieved via smart superscheduling in distributed computational grids.
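
    As a toy illustration of inter-site job migration, the sketch below repeatedly moves one queued job from the most loaded site to the least loaded one until the imbalance falls below a threshold. It is a deliberately simplified stand-in for the three migration algorithms proposed in the paper.

    ```python
    def rebalance(queues, threshold=2):
        """queues: {site: waiting jobs}; returns the list of migrations made."""
        moves = []
        while True:
            hi = max(queues, key=queues.get)
            lo = min(queues, key=queues.get)
            if queues[hi] - queues[lo] <= threshold:
                break
            queues[hi] -= 1
            queues[lo] += 1
            moves.append((hi, lo))
        return moves

    print(rebalance({"siteA": 10, "siteB": 3, "siteC": 1}))
    ```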

  16. A new service-oriented grid-based method for AIoT application and implementation

    NASA Astrophysics Data System (ADS)

    Zou, Yiqin; Quan, Li

    2017-07-01

    The traditional three-layer Internet of Things (IoT) model, which includes a physical perception layer, an information transfer layer and a service application layer, cannot completely express the complexity and diversity of the agricultural engineering area. It is hard to categorize, organize and manage agricultural things with these three layers. Based on the above requirements, we propose a new service-oriented grid-based method to set up and build the agricultural IoT. Considering the heterogeneity, limitation, transparency and leveling attributes of agricultural things, we propose an abstract model for all agricultural resources. This model is service-oriented and expressed with the Open Grid Services Architecture (OGSA). Information and data of agricultural things are described and encapsulated using XML in this model. Every agricultural engineering application will provide a service by enabling one application node in this service-oriented grid. A description of the Web Services Resource Framework (WSRF)-based Agricultural Internet of Things (AIoT) and the encapsulation method are also discussed in this paper for resource management in this model.

  17. Comparative Assessment of Tactics to Improve Primary Frequency Response Without Curtailing Solar Output in High Photovoltaic Interconnection Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Jin; Zhang, Yingchen; You, Shutang

    Power grid primary frequency response will be significantly impaired by increasing Photovoltaic (PV) penetration because of the decrease in inertia and governor response. PV inertia and governor emulation requires reserving PV output and leads to solar energy waste. This paper exploits current grid resources and explores energy storage for primary frequency response under high PV penetration at the interconnection level. Based on actual models of the U.S. Eastern Interconnection grid and the Texas grid, the effects of multiple factors associated with primary frequency response, including the governor ratio, governor deadband, droop rate, and fast load response, are assessed under high PV penetration scenarios. In addition, the performance of batteries and supercapacitors using different control strategies is studied in the two interconnections. The paper quantifies the potential of various resources to improve interconnection-level primary frequency response under high PV penetration without curtailing solar output.

  18. National Renewable Energy Laboratory (NREL) Topic 2 Final Report: End-to-End Communication and Control System to Support Clean Energy Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hudgins, Andrew P.; Carrillo, Ismael M.; Jin, Xin

    This document is the final report of a two-year development, test, and demonstration project, 'Cohesive Application of Standards-Based Connected Devices to Enable Clean Energy Technologies.' The project was part of the National Renewable Energy Laboratory's (NREL's) Integrated Network Testbed for Energy Grid Research and Technology (INTEGRATE) initiative hosted at the Energy Systems Integration Facility (ESIF). This project demonstrated techniques to control distribution grid events using the coordination of traditional distribution grid devices and high-penetration renewable resources and demand response. Using standard communication protocols and semantic standards, the project examined the use cases of high/low distribution voltage, requests for volt-ampere-reactive (VAR) power support, and transactive energy strategies using Volttron. Open source software, written by EPRI to control distributed energy resources (DER) and demand response (DR), was used by an advanced distribution management system (ADMS) to abstract the resources reporting to a collection of capabilities rather than needing to know specific resource types. This architecture allows for scaling both horizontally and vertically. Several new technologies were developed and tested. Messages from the ADMS based on the common information model (CIM) were developed to control the DER and DR management systems. The OpenADR standard was used to help manage grid events by turning loads off and on. Volttron technology was used to simulate a homeowner choosing the price at which to enter the demand response market. Finally, the ADMS used newly developed algorithms to coordinate these resources with a capacitor bank and voltage regulator to respond to grid events.

  19. EIAGRID: In-field optimization of seismic data acquisition by real-time subsurface imaging using a remote GRID computing environment.

    NASA Astrophysics Data System (ADS)

    Heilmann, B. Z.; Vallenilla Ferrara, A. M.

    2009-04-01

    The constant growth of contaminated sites, the unsustainable use of natural resources, and, last but not least, the hydrological risk related to extreme meteorological events and increased climate variability are major environmental issues of today. Finding solutions for these complex problems requires an integrated cross-disciplinary approach, providing a unified basis for environmental science and engineering. In computer science, grid computing is emerging worldwide as a formidable tool allowing distributed computation and data management with administratively-distant resources. Utilizing these modern High Performance Computing (HPC) technologies, the GRIDA3 project bundles several applications from different fields of geoscience aiming to support decision making for reasonable and responsible land use and resource management. In this abstract we present a geophysical application called EIAGRID that uses grid computing facilities to perform real-time subsurface imaging by on-the-fly processing of seismic field data and fast optimization of the processing workflow. Even though seismic reflection profiling has a broad application range, spanning from shallow targets at a few meters depth to targets at a depth of several kilometers, it is primarily used by the hydrocarbon industry and hardly ever for environmental purposes. The complexity of data acquisition and processing poses severe problems for environmental and geotechnical engineering: professional seismic processing software is expensive to buy and demands considerable experience from the user. In-field processing equipment needed for real-time data Quality Control (QC) and immediate optimization of the acquisition parameters is often not available for this kind of study. As a result, the data quality will be suboptimal. In the worst case, a crucial parameter such as receiver spacing, maximum offset, or recording time turns out later to be inappropriate and the complete acquisition campaign has to be repeated. The EIAGRID portal provides an innovative solution to this problem, combining state-of-the-art data processing methods and modern remote grid computing technology. In-field processing equipment is substituted by remote access to high performance grid computing facilities. The latter can be ubiquitously controlled by a user-friendly web-browser interface accessed from the field by any mobile computer using wireless data transmission technology such as UMTS (Universal Mobile Telecommunications System) or HSUPA/HSDPA (High-Speed Uplink/Downlink Packet Access). The complexity of data manipulation and processing, and thus also the time-demanding user interaction, is minimized by a data-driven and highly automated velocity analysis and imaging approach based on the Common-Reflection-Surface (CRS) stack. Furthermore, the huge computing power provided by the grid deployment allows parallel testing of alternative processing sequences and parameter settings, a feature which considerably reduces the turn-around times. A shared data storage using georeferencing tools and data grid technology is under current development. It will allow already accomplished projects to be published, making results, processing workflows and parameter settings available in a transparent and reproducible way. Creating a unified database shared by all users will facilitate complex studies and enable the use of data-crossing techniques to incorporate results of other environmental applications hosted on the GRIDA3 portal.

  20. The wave and tidal resource of Scotland

    NASA Astrophysics Data System (ADS)

    Neill, Simon; Vogler, Arne; Lewis, Matt; Goward-Brown, Alice

    2017-04-01

    As the marine renewable energy industry evolves, in parallel with an increase in the quantity of available data and improvements in validated numerical simulations, it is occasionally appropriate to re-assess the wave and tidal resource of a region. This is particularly true for Scotland - a leading nation that the international community monitors for developments in the marine renewable energy industry, and which has witnessed much progress in the sector over the last decade. With 7 leased wave and 17 leased tidal sites, Scotland is well poised to generate significant levels of electricity from its abundant natural marine resources. In this review of Scotland's wave and tidal resource, we present the theoretical and technical resource and provide an overview of commercial progress. We also discuss issues that affect future development of the marine energy seascape in Scotland, applicable to other regions of the world, including the potential for developing lower-energy sites, and grid connectivity.

  1. MIGS-GPU: Microarray Image Gridding and Segmentation on the GPU.

    PubMed

    Katsigiannis, Stamos; Zacharia, Eleni; Maroulis, Dimitris

    2017-05-01

    Complementary DNA (cDNA) microarray is a powerful tool for simultaneously studying the expression level of thousands of genes. Nevertheless, the analysis of microarray images remains an arduous and challenging task due to the poor quality of the images that often suffer from noise, artifacts, and uneven background. In this study, the MIGS-GPU [Microarray Image Gridding and Segmentation on Graphics Processing Unit (GPU)] software for gridding and segmenting microarray images is presented. MIGS-GPU's computations are performed on the GPU by means of the compute unified device architecture (CUDA) in order to achieve fast performance and increase the utilization of available system resources. Evaluation on both real and synthetic cDNA microarray images showed that MIGS-GPU provides better performance than state-of-the-art alternatives, while the proposed GPU implementation achieves significantly lower computational times compared to the respective CPU approaches. Consequently, MIGS-GPU can be an advantageous and useful tool for biomedical laboratories, offering a user-friendly interface that requires minimum input in order to run.

  2. Using Computing and Data Grids for Large-Scale Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2001-01-01

    We use the term "Grid" to refer to a software system that provides uniform and location independent access to geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. These emerging data and computing Grids promise to provide a highly capable and scalable environment for addressing large-scale science problems. We describe the requirements for science Grids, the resulting services and architecture of NASA's Information Power Grid (IPG) and DOE's Science Grid, and some of the scaling issues that have come up in their implementation.

  3. Forest resources of southeast Alaska, 2000: results of a single-phase systematic sample.

    Treesearch

    Willem W.S. van Hees

    2003-01-01

    A baseline assessment of forest resources in southeast Alaska was made by using a single-phase, unstratified, systematic-grid sample, with ground plots established at each grid intersection. Ratio-of-means estimators were used to develop population estimates. Forests cover an estimated 48 percent of the 22.9-million-acre southeast Alaska inventory unit. Dominant forest...

  4. Trends in life science grid: from computing grid to knowledge grid.

    PubMed

    Konagaya, Akihiko

    2006-12-18

    Grid computing has great potential to become a standard cyberinfrastructure for the life sciences, which often require high-performance computing and large-scale data handling that exceed the computing capacity of a single institution. This survey reviews the latest grid technologies from the viewpoints of the computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for community formation to share tacit knowledge within a community. Extending the concept of the grid from computing grid to knowledge grid, it is possible to make use of a grid not only as sharable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  5. Trends in life science grid: from computing grid to knowledge grid

    PubMed Central

    Konagaya, Akihiko

    2006-01-01

    Background Grid computing has great potential to become a standard cyberinfrastructure for the life sciences, which often require high-performance computing and large-scale data handling that exceed the computing capacity of a single institution. Results This survey reviews the latest grid technologies from the viewpoints of the computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for community formation to share tacit knowledge within a community. Conclusion Extending the concept of the grid from computing grid to knowledge grid, it is possible to make use of a grid not only as sharable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community. PMID:17254294

  6. Distributed Energy Systems: Security Implications of the Grid of the Future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stamber, Kevin L.; Kelic, Andjelka; Taylor, Robert A.

    2017-01-01

    Distributed Energy Resources (DER) are being added to the nation's electric grid, and as penetration of these resources increases, they have the potential to displace or offset large-scale, capital-intensive, centralized generation. Integration of DER into operation of the traditional electric grid requires automated operational control and communication of DER elements, from system measurement to control hardware and software, in conjunction with a utility's existing automated and human-directed control of other portions of the system. Implementation of DER technologies suggests a number of gaps from both a security and a policy perspective.

  7. Distribution Strategies for Solar and Wind Renewables in NW Europe

    NASA Astrophysics Data System (ADS)

    Smedley, Andrew; Webb, Ann

    2017-04-01

    Whilst the UNFCCC Paris Agreement on climate change was ratified in November 2016, that same year saw the highest global temperature anomaly on record, at 1.2°C above pre-industrial levels. As such, there is an urgent need to reduce CO2 emissions by moving away from fossil fuels and towards renewable electricity technologies. As the principal renewable technologies of solar PV and wind turbines contribute an increasing fraction of the electricity grid, questions of cumulative intermittency and the large-scale geographic distribution of each technology need to be addressed. In this study our initial emphasis is on calculating a relatively high spatial resolution (0.1° × 0.1°) daily gridded dataset of solar irradiance over a 10-year period (2006-2015). This is achieved by coupling established sources of satellite data (MODIS SSF level2 instantaneous footprint data) to a well-validated radiative transfer model, here LibRadTran. We utilise both a morning and an afternoon field for two cloud layers (optical depth and cloud fraction) interpolated to hourly grids, together with aerosol optical depth, topographic height and solar zenith angle. These input parameters are passed to a 5-D LUT of LibRadTran results to construct hourly estimates of the solar irradiance field, which is then integrated to a daily total. For the daily wind resource we rely on the 6-hourly height-adjusted ECMWF ERA-Interim reanalysis wind fields, separated into onshore, offshore and deep-water components. From these datasets of the solar and wind resources we construct 22 different distribution strategies for solar PV and wind turbines based on the long-term availability of each resource. Combining these distributions with the original daily gridded datasets enables each distribution strategy to be assessed in terms of day-to-day variability, the installed capacity required to maintain a baseline supply, and the relative proportions of each technology. Notably, for the NW European area considered, we find that distribution strategies that only deploy renewables in regions with the highest annual mean irradiance or wind resource also minimise the total required installed capacity and typically exhibit the smallest output range. Further, in the majority of strategies we find that the onshore and offshore wind resource fractions fall to zero, with the wind contribution being fully composed of deep-water installations. Only as the strategies increasingly concentrate each technology in areas with the highest annual mean resource do firstly offshore, and then onshore, wind contribute.

  8. Fast Grid Frequency Support from Distributed Inverter-Based Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoke, Anderson F

    This presentation summarizes power hardware-in-the-loop testing performed to evaluate the ability of distributed inverter-coupled generation to support grid frequency on the fastest time scales. The research found that distributed PV inverters and other DERs can effectively support the grid on sub-second time scales.

  9. GMLC Hawaii Regional Partnership: Distributed Inverter-Based Grid Frequency Support

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Austin; Hoke, Andy

    This presentation is part of a panel session at the IEEE ISGT conference on Grid Modernization Initiative projects. This segment of the panel session provides a brief overview of a Hawaii Regional Partnership project focusing on grid frequency support from distributed resources on the fastest time scales.

  10. SWAT use of gridded observations for simulating runoff - a Vietnam river basin study

    NASA Astrophysics Data System (ADS)

    Vu, M. T.; Raghavan, S. V.; Liong, S. Y.

    2011-12-01

    Many research studies that focus on basin hydrology have used the SWAT model to simulate runoff. One common practice in calibrating the SWAT model is the application of station rainfall data to simulate runoff. But over regions lacking robust station data, applying the model to study hydrological responses is problematic. For some countries and remote areas, rainfall data availability might be a constraint for many different reasons, such as lack of technology, wartime and financial limitations, making it difficult to construct runoff data. To overcome such a limitation, this research study uses some of the available globally gridded high-resolution precipitation datasets to simulate runoff. Five popular gridded observational precipitation datasets: (1) Asian Precipitation Highly Resolved Observational Data Integration Towards the Evaluation of Water Resources (APHRODITE), (2) Tropical Rainfall Measuring Mission (TRMM), (3) Precipitation Estimation from Remote Sensing Information using Artificial Neural Network (PERSIANN), (4) Global Precipitation Climatology Project (GPCP), (5) modified Global Historical Climatology Network version 2 (GHCN2), and one reanalysis dataset, National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR), are used to simulate runoff over the Dakbla River (a small tributary of the Mekong River) in Vietnam. Wherever possible, available station data are also used for comparison. Bilinear interpolation of these gridded datasets is used to input the precipitation data at the grid points closest to the station locations. Sensitivity analysis and auto-calibration are performed for the SWAT model. The Nash-Sutcliffe Efficiency (NSE) and Coefficient of Determination (R2) indices are used to benchmark the model performance. This entails a good understanding of the response of the hydrological model to different datasets and a quantification of the uncertainties in these datasets. Such a methodology is also useful for planning rainfall-runoff and even reservoir/river management at both rural and urban scales.
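
    The two benchmark indices named above can be computed as follows; the observed and simulated runoff series here are synthetic placeholders, not Dakbla River data.

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe Efficiency: 1 is a perfect match; values below 0
        mean the model is worse than predicting the observed mean."""
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def r2(obs, sim):
        """Coefficient of determination (squared Pearson correlation)."""
        return np.corrcoef(obs, sim)[0, 1] ** 2

    obs = [12.0, 15.0, 9.0, 20.0, 11.0]   # placeholder observed runoff
    sim = [11.5, 14.0, 10.2, 18.5, 12.0]  # placeholder simulated runoff
    print(nse(obs, sim), r2(obs, sim))
    ```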

  11. Data location-aware job scheduling in the grid. Application to the GridWay metascheduler

    NASA Astrophysics Data System (ADS)

    Delgado Peris, Antonio; Hernandez, Jose; Huedo, Eduardo; Llorente, Ignacio M.

    2010-04-01

    Grid infrastructures constitute nowadays the core of the computing facilities of the biggest LHC experiments. These experiments produce and manage petabytes of data per year and run thousands of computing jobs every day to process that data. It is the duty of metaschedulers to allocate the tasks to the most appropriate resources at the proper time. Our work reviews the policies that have been proposed for the scheduling of grid jobs in the context of very data-intensive applications. We indicate some of the practical problems that such models will face and describe what we consider essential characteristics of an optimal scheduling system: the aim to minimise not only job turnaround time but also data replication, flexibility to support different virtual organisation requirements, and the capability to coordinate the tasks of data placement and job allocation while keeping their execution decoupled. These ideas have guided the development of an enhanced prototype for GridWay, a general-purpose metascheduler, part of the Globus Toolkit and a member of EGEE's RESPECT program. GridWay's current scheduling algorithm is unaware of data location. Our prototype makes it possible for job requests to express data needs not only as absolute requirements but also as functions for resource ranking. As our tests show, this makes it more flexible than currently used resource brokers in implementing different data-aware scheduling algorithms.
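
    Below is a hedged sketch of a data-location-aware ranking function of the general kind the prototype enables: candidate resources are scored by free slots, discounted by how much of the required input data is already cached locally. The resource fields and weighting are invented for illustration and are not GridWay's actual rank expression syntax.

    ```python
    def rank(resource, required_files, weight=0.5):
        """Score a resource by free slots, discounted by data locality."""
        local = len(required_files & resource["cached_files"])
        locality = local / len(required_files) if required_files else 1.0
        return resource["free_slots"] * (weight + (1 - weight) * locality)

    resources = [
        {"name": "ce1", "free_slots": 40, "cached_files": {"f1", "f2"}},
        {"name": "ce2", "free_slots": 60, "cached_files": set()},
    ]
    need = {"f1", "f2", "f3"}
    best = max(resources, key=lambda r: rank(r, need))
    print(best["name"])  # ce1: fewer free slots, but most of the data is local
    ```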

  12. SoilGrids1km — Global Soil Information Based on Automated Mapping

    PubMed Central

    Hengl, Tomislav; de Jesus, Jorge Mendes; MacMillan, Robert A.; Batjes, Niels H.; Heuvelink, Gerard B. M.; Ribeiro, Eloi; Samuel-Rosa, Alessandro; Kempen, Bas; Leenaars, Johan G. B.; Walsh, Markus G.; Gonzalez, Maria Ruiperez

    2014-01-01

    Background Soils are widely recognized as a non-renewable natural resource and as biophysical carbon sinks. As such, there is a growing requirement for global soil information. Although several global soil information systems already exist, these tend to suffer from inconsistencies and limited spatial detail. Methodology/Principal Findings We present SoilGrids1km — a global 3D soil information system at 1 km resolution — containing spatial predictions for a selection of soil properties (at six standard depths): soil organic carbon (g kg−1), soil pH, sand, silt and clay fractions (%), bulk density (kg m−3), cation-exchange capacity (cmol+/kg), coarse fragments (%), soil organic carbon stock (t ha−1), depth to bedrock (cm), World Reference Base soil groups, and USDA Soil Taxonomy suborders. Our predictions are based on global spatial prediction models which we fitted, per soil variable, using a compilation of major international soil profile databases (ca. 110,000 soil profiles) and a selection of ca. 75 global environmental covariates representing soil forming factors. Results of regression modeling indicate that the most useful covariates for modeling soils at the global scale are climatic and biomass indices (based on MODIS images), lithology, and taxonomic mapping units derived from conventional soil survey (Harmonized World Soil Database). Prediction accuracies assessed using 5-fold cross-validation were between 23% and 51%. Conclusions/Significance SoilGrids1km provide an initial set of examples of soil spatial data for input into global models at a resolution and consistency not previously available. Some of the main limitations of the current version of SoilGrids1km are: (1) weak relationships between soil properties/classes and explanatory variables due to scale mismatches, (2) difficulty in obtaining covariates that capture soil forming factors, (3) low sampling density and spatial clustering of soil profile locations. However, as the SoilGrids system is highly automated and flexible, increasingly accurate predictions can be generated as new input data become available. SoilGrids1km are available for download via http://soilgrids.org under a Creative Commons Non Commercial license. PMID:25171179
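
    As an illustration of the evaluation protocol mentioned above, the sketch below runs 5-fold cross-validation of a regression model on synthetic stand-in data; the model choice and the data are assumptions for illustration, not the SoilGrids fitting pipeline.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))           # stand-in environmental covariates
    y = 2 * X[:, 0] + rng.normal(size=500)   # stand-in soil property

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(scores.mean())
    ```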

  13. Sustainability Challenge of Micro Hydro Power Development in Indonesia

    NASA Astrophysics Data System (ADS)

    Didik, H.; Bambang, P. N.; Asep, S.; Purwanto, Y. A.

    2018-05-01

    Rural electrification using renewable energy is the best choice for many locations far away from the national grid. Many renewable energy projects have been built for rural electrification, such as micro hydro power plants (MHPP) and solar photovoltaics (SPV). Sustainability is still the main challenge of off-grid renewable energy development for rural electrification in Indonesia. The objective of this paper is to review the sustainability of micro hydro power development in Indonesia. The research was carried out through field observation, interviews with MHPP management, and a review of research on MHPP in Indonesia. Sustainability issues include various aspects that can be classified into 5 dimensions: technical, economic, socio-cultural, institutional, and environmental. Technical factors that lead to sustainability problems are improper MHPP design and construction, improper operation and maintenance, and the availability of spare parts and expertise. In the economic dimension, problems are generally related to low electricity tariffs and the utilization of MHPP for productive use. In the social dimension, the issues are the growth of consumer load exceeding capacity, a reduced number of consumers, and lack of external institutional support. On the institutional side, problems are generally related to the ability of human resources to manage, operate and maintain the MHPP. Environmental factors that lead to sustainability problems of MHPP are scarcity of water discharge, conflict over water resources, land conversion over the watershed, and natural disasters.

  14. WPS mediation: An approach to process geospatial data on different computing backends

    NASA Astrophysics Data System (ADS)

    Giuliani, Gregory; Nativi, Stefano; Lehmann, Anthony; Ray, Nicolas

    2012-10-01

    The OGC Web Processing Service (WPS) specification allows generating information by processing distributed geospatial data made available through Spatial Data Infrastructures (SDIs). However, current SDIs have limited analytical capacities, and various problems emerge when trying to use them in data- and computing-intensive domains such as the environmental sciences. These problems are usually not solvable, or only partially solvable, using single computing resources. Therefore, the Geographic Information (GI) community is trying to benefit from the superior storage and computing capabilities offered by distributed computing (e.g., Grids, Clouds) methods and technologies. Currently, there is no commonly agreed approach to grid-enabling WPS. No implementation allows one to seamlessly execute a geoprocessing calculation following user requirements on different computing backends, ranging from a stand-alone GIS server up to computer clusters and large Grid infrastructures. Considering this issue, this paper presents a proof of concept obtained by mediating different geospatial and Grid software packages, and by proposing an extension of the WPS specification through two optional parameters. The applicability of this approach is demonstrated using a Normalized Difference Vegetation Index (NDVI) mediated WPS process, highlighting benefits and issues that need to be further investigated to improve performance.

  15. WIND Toolkit Offshore Summary Dataset

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draxl, Caroline; Musial, Walt; Scott, George

    This dataset contains summary statistics for offshore wind resources for the continental United States derived from the Wind Integration National Dataset (WIND) Toolkit. These data are available in two formats: GDB - compressed geodatabases containing statistical summaries aligned with lease blocks (aliquots) stored in a GIS format, partitioned into Pacific, Atlantic, and Gulf resource regions; HDF5 - statistical summaries of all points in the offshore Pacific, Atlantic, and Gulf regions. The HDF5 data are located on the original WIND Toolkit grid and have not been reassigned or downsampled to lease blocks. These data were developed under contract by NREL for the Bureau of Ocean Energy Management (BOEM).

  16. European grid services for global earth science

    NASA Astrophysics Data System (ADS)

    Brewer, S.; Sipos, G.

    2012-04-01

    This presentation will provide an overview of the distributed computing services that the European Grid Infrastructure (EGI) offers to the Earth Sciences community and also explain the processes whereby Earth Science users can engage with the infrastructure. One of the main overarching goals for EGI over the coming year is to diversify its user-base. EGI therefore - through the National Grid Initiatives (NGIs) that provide the bulk of resources that make up the infrastructure - offers a number of routes whereby users, either individually or as communities, can make use of its services. At one level there are two approaches to working with EGI: either users can make use of existing resources and contribute to their evolution and configuration; or alternatively they can work with EGI, and hence the NGIs, to incorporate their own resources into the infrastructure to take advantage of EGI's monitoring, networking and managing services. Adopting this approach does not imply a loss of ownership of the resources. Both of these approaches are entirely applicable to the Earth Sciences community. The former because researchers within this field have been involved with EGI (and previously EGEE) as a Heavy User Community and the latter because they have very specific needs, such as incorporating HPC services into their workflows, and these will require multi-skilled interventions to fully provide such services. In addition to the technical support services that EGI has been offering for the last year or so - the applications database, the training marketplace and the Virtual Organisation services - there now exists a dynamic short-term project framework that can be utilised to establish and operate services for Earth Science users. During this talk we will present a summary of various on-going projects that will be of interest to Earth Science users with the intention that suggestions for future projects will emerge from the subsequent discussions: • The Federated Cloud Task Force is already providing a cloud infrastructure through a few committed NGIs. This is being made available to research communities participating in the Task Force and the long-term aim is to integrate these national clouds into a pan-European infrastructure for scientific communities. • The MPI group provides support for application developers to port and scale up parallel applications to the global European Grid Infrastructure. • A lively portal developer and provider community that is able to setup and operate custom, application and/or community specific portals for members of the Earth Science community to interact with EGI. • A project to assess the possibilities for federated identity management in EGI and the readiness of EGI member states for federated authentication and authorisation mechanisms. • Operating resources and user support services to process data with new types of services and infrastructures, such as desktop grids, map-reduce frameworks, GPU clusters.

  17. Survey of Volumetric Grid Generators

    NASA Technical Reports Server (NTRS)

    Woo, Alex; Volakis, John; Hulbert, Greg; Case, Jeff; Presley, Leroy L. (Technical Monitor)

    1994-01-01

    This document is the result of an Internet Survey of Volumetric grid generators. As such we have included information from only the responses which were sent to us. After the initial publication and posting of this survey, we would encourage authors and users of grid generators to send further information. Here is the initial query posted to SIGGRID@nas and the USENET group sci.physics.computational.fluid-dynamics. Date: Sun, 30 Jan 94 11:37:52 -0800 From: woo (Alex Woo x6010 227-6 rm 315) Subject: Info Sought for Survey of Grid Generators I am collecting information and reviews of both government sponsored and commercial mesh generators for large scientific calculations, both block structured and unstructured. If you send me a review of a mesh generator, please indicate its availability and cost. If you are a commercial concern with information on a product, please also include references for possible reviewers. Please email to woo@ra-next.arc.nasa.gov. I will post a summary and probably write a short note for the IEEE Antennas and Propagation Magazine. Alex Woo, MS 227-6 woo@ames.arc.nasa.gov NASA Ames Research Center NASAMAIL ACWOO Moffett Field, CA 94035-1000 SPANET 24582::W00 (415) 604-6010 (FAX) 604-4357 fhplabs,decwrl,uunet)!ames!woo Disclaimer: These are not official statements of NASA or EMCC. We did not include all the submitted text here. Instead we have created a database entry in the freely available and widely used BIBTeX format which has a Uniform Resource Locator (URL) field pointing to more details. The BIBTeX database is modeled after those available from the BIBNET project at the University of Utah.

  18. Modelling noise propagation using Grid Resources. Progress within GDI-Grid

    NASA Astrophysics Data System (ADS)

    Kiehle, Christian; Mayer, Christian; Padberg, Alexander; Stapelfeld, Hartmut

    2010-05-01

    GDI-Grid (English: SDI-Grid) is a research project funded by the German Ministry for Science and Education (BMBF). It aims at bridging the gaps between OGC Web Services (OWS) and Grid infrastructures and at identifying the potential of utilizing the superior storage capacities and computational power of grid infrastructures for geospatial applications while keeping the well-known service interfaces specified by the OGC. The project considers all major OGC web service interfaces for web mapping (WMS), feature access (Web Feature Service), coverage access (Web Coverage Service) and processing (Web Processing Service). The major challenge within GDI-Grid is the harmonization of diverging standards as defined by standardization bodies for Grid computing and spatial information exchange. The project started in 2007 and will continue until June 2010. The concept for the gridification of OWS developed by lat/lon GmbH and the Department of Geography of the University of Bonn is applied to three real-world scenarios in order to check its practicability: a flood simulation, a scenario for emergency routing and a noise propagation simulation. The latter scenario is addressed by the Stapelfeldt Ingenieurgesellschaft mbH located in Dortmund, adapting their LimA software to utilize grid resources. Noise mapping of e.g. traffic noise in urban agglomerations and along major trunk roads is a recurring demand of the EU Noise Directive. Input data comprise the road network and traffic, terrain, buildings and noise protection screens, as well as population distribution. Noise impact levels are generally calculated on a 10 m grid and along relevant building facades. For each receiver position, sources within a typical range of 2000 m are split down into small segments, depending on local geometry. For each of the segments, propagation analysis includes diffraction effects caused by all obstacles on the path of sound propagation. This immensely intensive calculation needs to be performed for a major part of the European landscape. A LINUX version of the commercial LimA software for noise mapping analysis has been implemented on a test cluster within the German D-GRID computer network. Results and performance indicators will be presented. The presentation is an extension of last year's presentation "Spatial Data Infrastructures and Grid Computing: the GDI-Grid project", which described the gridification concept developed in the GDI-Grid project and provided an overview of the conceptual gaps between Grid Computing and Spatial Data Infrastructures. Results from the GDI-Grid project are incorporated in the OGC-OGF (Open Grid Forum) collaboration efforts as well as the OGC WPS 2.0 standards working group developing the next major version of the WPS specification.
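
    A drastically simplified sketch of the per-receiver loop described above is given below: it sums the contributions of all source segments within 2000 m using spherical spreading only, whereas a real engine such as LimA additionally models diffraction, ground and screening effects. All levels and positions are invented.

    ```python
    import math

    def receiver_level(receiver, segments, max_range=2000.0):
        """Energetic sum of free-field segment contributions at one receiver.
        segments: iterable of (x, y, Lw), with Lw a sound power level in dB."""
        total = 0.0
        rx, ry = receiver
        for sx, sy, lw in segments:
            d = math.hypot(sx - rx, sy - ry)
            if 0 < d <= max_range:
                lp = lw - 20 * math.log10(d) - 11  # spherical spreading
                total += 10 ** (lp / 10)
        return 10 * math.log10(total) if total > 0 else float("-inf")

    print(receiver_level((0, 0), [(100, 0, 95), (500, 300, 98)]))
    ```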

  19. Current Grid operation and future role of the Grid

    NASA Astrophysics Data System (ADS)

    Smirnova, O.

    2012-12-01

    Grid-like technologies and approaches have become an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the greater the burden on operations, and vice versa. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. The reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us. If no standardisation and convergence efforts take place, the Grid will remain limited to HEP; if, however, the current multitude of Grid-like systems converges to a generic, modular and extensible solution, the Grid will become true to its name.

  20. Evolving Distributed Generation Support Mechanisms: Case Studies from United States, Germany, United Kingdom, and Australia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowder, Travis; Zhou, Ella; Tian, Tian

    This report expands on a previous National Renewable Energy Laboratory (NREL) technical report (Lowder et al. 2015) that focused on the United States' unique approach to distributed generation photovoltaics (DGPV) support policies and business models. While the focus of that report was largely historical (i.e., detailing the policies and market developments that led to the growth of DGPV in the United States), this report looks forward, narrating recent changes to laws and regulations as well as the ongoing dialogues over how to incorporate distributed generation (DG) resources onto the electric grid. This report also broadens the scope of Lowder et al. (2015) to include additional countries and technologies. DGPV and storage are the principal technologies under consideration (owing to market readiness and deployment volumes), but the report also contemplates any generation resource that is (1) on the customer side of the meter, (2) used to, at least partly, offset a host's energy consumption, and/or (3) potentially available to provide grid support (e.g., through peak shaving and load shifting, ancillary services, and other means).

  1. The LHCb Grid Simulation: Proof of Concept

    NASA Astrophysics Data System (ADS)

    Hushchyn, M.; Ustyuzhanin, A.; Arzymatov, K.; Roiser, S.; Baranov, A.

    2017-10-01

    The Worldwide LHC Computing Grid provides researchers at different geographical locations with access to data and to the computational resources needed to analyze them. The grid has a hierarchical topology, with multiple sites distributed over the world that vary in their number of CPUs, amount of disk storage and connection bandwidth. The job scheduling and data distribution strategies are key elements of grid performance. Optimizing the algorithms for these tasks requires testing them on the real grid, which is hard to arrange. Having a grid simulator can simplify this task and therefore lead to better scheduling and data placement algorithms. In this paper we demonstrate a grid simulator for the LHCb distributed computing software.
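
    As a rough illustration of what such a simulator must capture, the sketch below models sites with differing CPU counts and greedily assigns each job to the CPU slot that frees up earliest. This is a toy scheduler written for this note, not the simulator described in the paper; the site names and runtimes are made up.

```python
import random

class Site:
    def __init__(self, name, cpus):
        self.name, self.cpus = name, cpus
        self.busy_until = [0.0] * cpus  # per-CPU next-free time

def schedule(jobs, sites):
    """Greedy scheduler: each job goes to the CPU that frees up earliest."""
    finish = {}
    for job_id, runtime in jobs:
        site = min(sites, key=lambda s: min(s.busy_until))
        slot = min(range(site.cpus), key=lambda k: site.busy_until[k])
        start = site.busy_until[slot]
        site.busy_until[slot] = start + runtime
        finish[job_id] = (site.name, start + runtime)
    return finish

random.seed(1)
sites = [Site("SITE-A", 4), Site("SITE-B", 2), Site("SITE-C", 2)]
jobs = [(j, random.uniform(1.0, 10.0)) for j in range(20)]
print(max(t for _, t in schedule(jobs, sites).values()))  # makespan
```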

  2. Distributed intrusion detection system based on grid security model

    NASA Astrophysics Data System (ADS)

    Su, Jie; Liu, Yahui

    2008-03-01

    Grid computing has developed rapidly with the development of network technology, and it can solve large-scale complex computing problems by sharing large-scale computing resources. In a grid environment, we can realize a distributed, load-balanced intrusion detection system. This paper first discusses the security mechanisms in grid computing and the function of PKI/CA in the grid security system, then describes how the characteristics of grid computing apply to a distributed intrusion detection system (IDS) based on an Artificial Immune System. Finally, it presents a distributed intrusion detection system based on the grid security system that can reduce processing delay and maintain detection rates.
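
    The Artificial Immune System family referenced above includes the classic negative-selection algorithm. A minimal sketch of that generic algorithm (not the paper's actual detector) is given below, using an r-contiguous-bits matching rule on random binary strings; all parameters are illustrative.

```python
import random

R = 5  # a detector matches a string if they agree on R contiguous bits

def matches(a, b, r=R):
    """r-contiguous-bits matching rule between two equal-length bit tuples."""
    run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        if run >= r:
            return True
    return False

def generate_detectors(self_set, n_detectors, length=8, max_tries=100000):
    """Negative selection: keep random detectors matching no 'self' sample."""
    detectors = []
    for _ in range(max_tries):
        if len(detectors) == n_detectors:
            break
        d = tuple(random.randint(0, 1) for _ in range(length))
        if not any(matches(d, s) for s in self_set):
            detectors.append(d)
    return detectors

random.seed(0)
normal = [tuple(random.randint(0, 1) for _ in range(8)) for _ in range(20)]
detectors = generate_detectors(normal, 10)
probe = tuple(random.randint(0, 1) for _ in range(8))
print("anomalous" if any(matches(probe, d) for d in detectors) else "normal")
```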

  3. Signal to Noise Ratio for Different Gridded Rainfall Products of Indian Monsoon

    NASA Astrophysics Data System (ADS)

    Nehra, P.; Shastri, H. K.; Ghosh, S.; Mishra, V.; Murtugudde, R. G.

    2014-12-01

    Gridded rainfall datasets provide useful information on the spatial and temporal distribution of precipitation over a region. For India, there are three gridded rainfall data products available, from the India Meteorological Department (IMD), the Tropical Rainfall Measuring Mission (TRMM) and the Asian Precipitation - Highly Resolved Observational Data Integration towards Evaluation of Water Resources (APHRODITE) project; these compile precipitation information obtained through satellite-based measurements and ground station data. The gridded rainfall data from IMD are available at spatial resolutions of 1°, 0.5° and 0.25°, whereas TRMM and APHRODITE are available at 0.25°. Here, we employ the 7 years (1998-2004) common to the three data products for the south-west monsoon season, i.e., the months June to September. We examine the temporal mean and standard deviation of the three products and observe substantial variation amongst them at 1° resolution, whereas at 0.25° resolution all the products are nearly identical. We determine the Signal to Noise Ratio (SNR) of the three products at 1° and 0.25° resolution based on a noise separation technique that applies a horizontal separation to the power spectrum generated with the Fast Fourier Transform (FFT). A methodology is developed for threshold-based separation of signal and noise from the power spectrum, treating the noise as white. The ratio of the signal variance to the noise variance gives the SNR. Determining the SNR for different regions over the country shows the highest SNR for APHRODITE at 0.25° resolution. The eastern part of India has the highest SNR in all cases considered, whereas the northernmost and southernmost Indian regions have the lowest SNR. An increasing linear trend is observed between the SNR values and the spatial variance of the corresponding region. The relationship between the computed SNR values and the interpolation method used for each dataset is also analyzed. The SNR analysis provides an effective tool to evaluate gridded precipitation data products. However, detailed analysis is needed to determine the processes that lead to these SNR distributions, so that the quality of the gridded rainfall data products can be further improved and the transferability of the gridding algorithms can be explored to produce a unified high-quality rainfall dataset.
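
    A minimal sketch of the white-noise separation idea is shown below: the noise floor of the FFT power spectrum is estimated as a flat (horizontal) level, here the spectral median, and everything above it is counted as signal. The threshold choice and the synthetic series are illustrative assumptions, not the authors' exact methodology.

```python
import numpy as np

def snr_white_noise(series):
    """Split a power spectrum into signal and white noise by a flat floor.

    The noise floor is estimated as the median spectral power (white noise
    has a flat spectrum); power above the floor counts as signal.
    """
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    floor = np.median(power)           # horizontal separation threshold
    noise_var = floor * len(power)     # flat floor across all frequencies
    signal_var = np.sum(np.maximum(power - floor, 0.0))
    return signal_var / noise_var

rng = np.random.default_rng(0)
t = np.arange(488)                      # four monsoon months x 4 seasons
rain = 5 * np.sin(2 * np.pi * t / 30) + rng.normal(0, 2, t.size)
print(round(snr_white_noise(rain), 2))
```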

  4. Multi-Scale Simulations of Past and Future Projections of Hydrology in Lake Tahoe Basin, California-Nevada (Invited)

    NASA Astrophysics Data System (ADS)

    Niswonger, R. G.; Huntington, J. L.; Dettinger, M. D.; Rajagopal, S.; Gardner, M.; Morton, C. G.; Reeves, D. M.; Pohll, G. M.

    2013-12-01

    Water resources in the Tahoe basin are susceptible to long-term climate change and extreme events because it is a middle-altitude, snow-dominated basin that experiences large inter-annual climate variations. Lake Tahoe provides a critical water supply for its basin and downstream populations, but changes in water supply are obscured by complex climatic and hydrologic gradients across the high-relief, geologically complex basin. An integrated surface water and groundwater model of the Lake Tahoe basin has been developed using GSFLOW to assess the effects of climate change and extreme events on surface water and groundwater resources. Key hydrologic mechanisms identified with this model explain recent changes in the water resources of the region. Critical vulnerabilities of regional water supplies and hazards were also explored. Maintaining a balance between (a) accurate representation of spatial features (e.g., geology, streams, and topography) and hydrologic response (i.e., groundwater, stream, lake, and wetland flows and storages), and (b) computational efficiency is a necessity for the desired model applications. Potential climatic influences on water resources are analyzed here in simulations of long-term water availability and flood responses to selected 100-year climate-model projections. GSFLOW is also used to simulate a scenario depicting an especially extreme storm event, constructed from a combination of two historical atmospheric-river storm events as part of the USGS MultiHazards Demonstration Project. Simulated groundwater levels, streamflow, wetlands, and lake levels compare well with measured values over a 30-year historical simulation period. Results are consistent for both small and large model grid cell sizes, owing to the model's ability to represent water table altitude, streams, and other hydrologic features at the sub-grid scale. Simulated hydrologic responses are affected by climate change, with less groundwater available during more frequent droughts. Simulated floods for the region indicate issues related to drainage in the developed areas around Lake Tahoe, and necessary dam releases that create downstream flood risks.

  5. Stability of synchrony against local intermittent fluctuations in tree-like power grids

    NASA Astrophysics Data System (ADS)

    Auer, Sabine; Hellmann, Frank; Krause, Marie; Kurths, Jürgen

    2017-12-01

    90% of all renewable power capacity in Germany is installed in tree-like distribution grids. Intermittent power fluctuations from such sources introduce new dynamics into the lower grid layers. At the same time, distributed resources will have to contribute to stabilizing the grid against these fluctuations in the future. In this paper, we model a system of distributed resources as oscillators on a tree-like, lossy power grid and study its ability to withstand desynchronization from localized intermittent renewable infeed. We find a remarkable interplay between the network structure and the position of the node at which the fluctuations are fed in. An important precondition for our findings is the presence of losses in distribution grids. The most network-central node then splits the network into branches with different influence on network stability. Troublemakers, i.e., nodes at which fluctuations excite the grid especially strongly, tend to lie in downstream branches with high net power outflow. For low coupling strength, we also find branches of nodes vulnerable to fluctuations anywhere in the network. These network regions can be predicted with high confidence using an eigenvector-based network measure that takes the turbulent nature of the perturbations into account. While we focus here on tree-like networks, the observed effects also appear, albeit less pronounced, for weakly meshed grids. On the other hand, the observed effects disappear for the lossless power grids often studied in the complex systems literature.
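
    The oscillator model referred to above is, in essence, a second-order Kuramoto (swing) model, and line losses can be mimicked with a Kuramoto-Sakaguchi phase shift in the coupling. The crude sketch below integrates such a model on a five-node tree with a noisy renewable infeed at one leaf; the topology, coupling, damping, loss phase and noise level are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Five-node tree: node 0 is the feed-in from the higher grid layer.
adj = {0: [1, 4], 1: [0, 2, 3], 2: [1], 3: [1], 4: [0]}
P = np.array([2.0, -0.5, -0.5, -0.5, -0.5])   # net injections, sum = 0
K, alpha, gamma = 6.0, 0.5, 0.1               # coupling, damping, loss phase

def accel(theta, omega, p):
    """Swing dynamics with Kuramoto-Sakaguchi (lossy) coupling."""
    coup = np.array([sum(K * np.sin(theta[j] - theta[i] - gamma)
                         for j in adj[i]) for i in range(5)])
    return p - alpha * omega + coup

rng = np.random.default_rng(1)
theta, omega, dt = np.zeros(5), np.zeros(5), 0.01
for _ in range(20000):                         # crude semi-implicit Euler
    p = P.copy()
    p[3] += 0.5 * rng.standard_normal()        # intermittent infeed at node 3
    omega += dt * accel(theta, omega, p)
    theta += dt * omega
print("max frequency deviation:", float(np.abs(omega).max()))
```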

  6. Mechanics of Flapping Flight: Analytical Formulations of Unsteady Aerodynamics, Kinematic Optimization, Flight Dynamics, and Control

    NASA Astrophysics Data System (ADS)

    Taneja, Jayant Kumar

    Electricity is an indispensable commodity to modern society, yet it is delivered via a grid architecture that remains largely unchanged over the past century. A host of factors are conspiring to topple this dated yet venerated design: developments in renewable electricity generation technology, policies to reduce greenhouse gas emissions, and advances in information technology for managing energy systems. Modern electric grids are emerging as complex distributed systems in which a portfolio of power generation resources, often incorporating fluctuating renewable resources such as wind and solar, must be managed dynamically to meet uncontrolled, time-varying demand. Uncertainty in both supply and demand makes control of modern electric grids fundamentally more challenging, and growing portfolios of renewables exacerbate the challenge. We study three electricity grids: the state of California, the province of Ontario, and the country of Germany. To understand the effects of increasing renewables, we develop a methodology to scale renewables penetration. Analyzing these grids yields key insights about rigid limits to renewables penetration and their implications in meeting long-term emissions targets. We argue that to achieve deep penetration of renewables, the operational model of the grid must be inverted, changing the paradigm from load-following supplies to supply-following loads. To alleviate the challenge of supply-demand matching on deeply renewable grids, we first examine well-known techniques, including altering management of existing supply resources, employing utility-scale energy storage, targeting energy efficiency improvements, and exercising basic demand-side management. Then, we create several instantiations of supply-following loads -- including refrigerators, heating and cooling systems, and laptop computers -- by employing a combination of sensor networks, advanced control techniques, and enhanced energy storage. We examine the capacity of each load for supply-following and study the behaviors of populations of these loads, assessing their potential at various levels of deployment throughout the California electricity grid. Using combinations of supply-following strategies, we can reduce peak natural gas generation by 19% on a model of the California grid with 60% renewables. We then assess remaining variability on this deeply renewable grid incorporating supply-following loads, characterizing additional capabilities needed to ensure supply-demand matching in future sustainable electricity grids.
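
    A supply-following load in its simplest form defers its run time into the hours of highest forecast renewable output. The sketch below schedules a deferrable load against an illustrative solar profile; it is a toy illustration of the paradigm, not any of the controllers built in this work.

```python
def supply_following_schedule(renewables, hours_needed):
    """Run a deferrable load during the hours with most renewable supply.

    renewables: forecast renewable generation per hour (arbitrary units).
    hours_needed: how many hours the load must run within the horizon.
    Returns a 0/1 on-off schedule of the same length.
    """
    ranked = sorted(range(len(renewables)),
                    key=lambda h: renewables[h], reverse=True)
    on_hours = set(ranked[:hours_needed])
    return [1 if h in on_hours else 0 for h in range(len(renewables))]

# Illustrative 24-hour solar forecast; load runs in the 4 sunniest hours.
solar = [0, 0, 0, 0, 0, 1, 3, 5, 7, 8, 9, 9,
         8, 7, 5, 3, 1, 0, 0, 0, 0, 0, 0, 0]
print(supply_following_schedule(solar, hours_needed=4))
```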

  7. Data privacy considerations in Intensive Care Grids.

    PubMed

    Luna, Jesus; Dikaiakos, Marios D; Kyprianou, Theodoros; Bilas, Angelos; Marazakis, Manolis

    2008-01-01

    Novel eHealth systems are being designed to provide a citizen-centered health system; however, the ever-growing demand for computing and data resources has required the adoption of Grid technologies. In most cases, this novel Health Grid requires not only conveying patients' personal data through public networks, but also storing it on shared resources outside the hospital premises. These features introduce new security concerns, in particular related to privacy. In this paper we survey current legal and technological approaches that have been taken to protect a patient's personal data in eHealth systems, with a particular focus on Intensive Care Grids. However, a security analysis of the Intensive Care Grid system (ICGrid) shows that these security mechanisms are not enough to provide a comprehensive solution, mainly because the data-at-rest is still vulnerable to attacks coming from untrusted Storage Elements, where an attacker may access it directly. To cope with these issues, we propose a new privacy-oriented protocol which uses a combination of encryption and fragmentation to improve data assurance while keeping compatibility with current legislation and Health Grid security mechanisms.
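
    The combination the protocol relies on, encrypting first and then fragmenting the ciphertext across Storage Elements, can be sketched in a few lines. The example below uses the third-party `cryptography` package's Fernet recipe and naive equal-size fragmentation; it illustrates the general idea only and is not the paper's protocol.

```python
from cryptography.fernet import Fernet

def protect(record: bytes, n_fragments: int):
    """Encrypt a record, then split the ciphertext into fragments so that
    no single untrusted Storage Element holds the whole object."""
    key = Fernet.generate_key()                 # stays with the trusted party
    ciphertext = Fernet(key).encrypt(record)
    step = -(-len(ciphertext) // n_fragments)   # ceiling division
    fragments = [ciphertext[i:i + step]
                 for i in range(0, len(ciphertext), step)]
    return key, fragments

def recover(key: bytes, fragments) -> bytes:
    """Reassemble the fragments and decrypt with the retained key."""
    return Fernet(key).decrypt(b"".join(fragments))

key, frags = protect(b"patient-id:42;bp:120/80", n_fragments=3)
assert recover(key, frags) == b"patient-id:42;bp:120/80"
```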

  8. THE VIRTUAL INSTRUMENT: SUPPORT FOR GRID-ENABLED MCELL SIMULATIONS

    PubMed Central

    Casanova, Henri; Berman, Francine; Bartol, Thomas; Gokcay, Erhan; Sejnowski, Terry; Birnbaum, Adam; Dongarra, Jack; Miller, Michelle; Ellisman, Mark; Faerman, Marcio; Obertelli, Graziano; Wolski, Rich; Pomerantz, Stuart; Stiles, Joel

    2010-01-01

    Ensembles of widely distributed, heterogeneous resources, or Grids, have emerged as popular platforms for large-scale scientific applications. In this paper we present the Virtual Instrument project, which provides an integrated application execution environment that enables end-users to run and interact with running scientific simulations on Grids. This work is performed in the specific context of MCell, a computational biology application. While MCell provides the basis for running simulations, its capabilities are currently limited in terms of scale, ease-of-use, and interactivity. These limitations preclude usage scenarios that are critical for scientific advances. Our goal is to create a scientific “Virtual Instrument” from MCell by allowing its users to transparently access Grid resources while being able to steer running simulations. In this paper, we motivate the Virtual Instrument project and discuss a number of relevant issues and accomplishments in the area of Grid software development and application scheduling. We then describe our software design and report on the current implementation. We verify and evaluate our design via experiments with MCell on a real-world Grid testbed. PMID:20689618

  9. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    NASA Astrophysics Data System (ADS)

    Gomez, Andres; Lara, Camilo; Kebschull, Udo

    2015-12-01

    Grids allow users flexible, on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used by the ALICE experiment at the European Organization for Nuclear Research (CERN). Physicists can submit jobs to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges: they are attractive targets for attackers seeking large computational resources. Since users can execute arbitrary code on the worker nodes of the Grid sites, special care must be taken with this environment. Automated tools to harden and monitor this scenario are required, and currently there is no integrated solution for this requirement. This paper describes a new security framework that allows the execution of job payloads in a sandboxed context. It also monitors process behavior to detect intrusions, even when new attack methods or zero-day vulnerabilities are exploited, using a Machine Learning approach. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware.
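
    For the behavior-monitoring part, one common approach is to train an anomaly detector on features of normal job processes and flag outliers. The sketch below does this with scikit-learn's IsolationForest on synthetic per-process features (syscall rate, outbound connections, CPU share); the features, values and model choice are illustrative assumptions, not the framework's actual design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# One feature vector per monitored job process: (syscalls/s, outbound
# connections, CPU share). All values below are synthetic placeholders.
rng = np.random.default_rng(0)
normal_jobs = rng.normal(loc=[200, 2, 0.9],
                         scale=[20, 1, 0.05], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_jobs)                 # learn normal payload behaviour

suspect = np.array([[950, 40, 0.2]])   # e.g. a crypto-miner-like profile
print(model.predict(suspect))          # -1 flags an anomaly, 1 is normal
```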

  10. Grid accounting service: state and future development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levshina, T.; Sehgal, C.; Bockelman, B.

    2014-01-01

    During the last decade, large-scale federated distributed infrastructures have been continually developed and expanded. One of the crucial components of a cyber-infrastructure is an accounting service that collects data related to resource utilization and the identity of the users using the resources. The accounting service is important for verifying pledged resource allocations for particular groups and users, providing reports for funding agencies and resource providers, and understanding hardware provisioning requirements. It can also be used for end-to-end troubleshooting as well as billing purposes. In this work we describe Gratia, a federated accounting service jointly developed at Fermilab and the Holland Computing Center at the University of Nebraska-Lincoln. The Open Science Grid, Fermilab, HCC, and several other institutions have used Gratia in production for several years. The current development activities include expanding Virtual Machine provisioning information, XSEDE allocation usage accounting, and Campus Grids resource utilization. We also identify the direction of future work: improvement and expansion of Cloud accounting, persistent and elastic storage space allocation, and the incorporation of WAN and LAN network metrics.

  11. MODPATH-LGR; documentation of a computer program for particle tracking in shared-node locally refined grids by using MODFLOW-LGR

    USGS Publications Warehouse

    Dickinson, Jesse; Hanson, R.T.; Mehl, Steffen W.; Hill, Mary C.

    2011-01-01

    The computer program described in this report, MODPATH-LGR, is designed to allow simulation of particle tracking in locally refined grids. The locally refined grids are simulated by using MODFLOW-LGR, which is based on MODFLOW-2005, the three-dimensional groundwater-flow model published by the U.S. Geological Survey. The documentation includes brief descriptions of the methods used and detailed descriptions of the required input files and how the output files are typically used. The code for this model is available for downloading from the World Wide Web from a U.S. Geological Survey software repository. The repository is accessible from the U.S. Geological Survey Water Resources Information Web page at http://water.usgs.gov/software/ground_water.html. The performance of the MODPATH-LGR program has been tested in a variety of applications. Future applications, however, might reveal errors that were not detected in the test simulations. Users are requested to notify the U.S. Geological Survey of any errors found in this document or the computer program by using the email address available on the Web site. Updates might occasionally be made to this document and to the MODPATH-LGR program, and users should check the Web site periodically.

  12. The agent-based spatial information semantic grid

    NASA Astrophysics Data System (ADS)

    Cui, Wei; Zhu, YaQiong; Zhou, Yong; Li, Deren

    2006-10-01

    Analyzing the characteristics of multi-agent systems and geographic ontologies, the concept of the Agent-based Spatial Information Semantic Grid (ASISG) is defined and its architecture is advanced. The ASISG is composed of multi-agent systems and a geographic ontology. The multi-agent systems comprise User Agents, a General Ontology Agent, Geo-Agents, Broker Agents, Resource Agents, Spatial Data Analysis Agents, Spatial Data Access Agents, a Task Execution Agent and a Monitor Agent. The architecture of the ASISG has three layers: the fabric layer, the grid management layer and the application layer. The fabric layer, which is composed of Data Access Agents, Resource Agents and Geo-Agents, encapsulates the data of spatial information systems and exposes a conceptual interface to the grid management layer. The grid management layer, which is composed of the General Ontology Agent, the Task Execution Agent, the Monitor Agent and the Data Analysis Agents, uses a hybrid method to manage all resources registered with the General Ontology Agent, which is described by a general ontology system. The hybrid method combines resource dissemination and resource discovery: dissemination pushes resources from Local Ontology Agents to the General Ontology Agent, while discovery pulls resources from the General Ontology Agent to Local Ontology Agents. A Local Ontology Agent is derived from a specific domain and describes the semantic information of a local GIS. The Local Ontology Agents can be filtered to construct a virtual organization that provides a global scheme. The virtual organization lightens the burden on users because they need not search for information site by site manually. The application layer, which is composed of User Agents, Geo-Agents and the Task Execution Agent, provides a corresponding interface to a domain user. The functions that the ASISG should provide are: 1) Integration of different spatial information systems on the semantic level; the grid management layer establishes a virtual environment that seamlessly integrates all GIS nodes. 2) Semantic-level search and query: when the resource management system searches data across different spatial information systems, it translates the meaning of the different Local Ontology Agents rather than accessing data directly. 3) Transparent data access: users can access information from a remote site as if it were a local disk, because the General Ontology Agent automatically links data through the Data Agents, which connect ontology concepts to GIS data. 4) The capability of processing massive spatial data: storing, accessing and managing massive spatial data from TB to PB scale; efficiently analyzing and processing spatial data to produce models, information and knowledge; and providing 3D and multimedia visualization services. 5) The capability of high-performance computing and processing of spatial information: solving spatial problems with high precision, high quality and at large scale, and processing spatial information in real time or on time, with high speed and high efficiency. 6) The capability of sharing spatial resources: the distributed heterogeneous spatial information resources are shared, integrated and made interoperable on the semantic level, so as to make the best use of spatial information resources such as computing resources, storage devices, spatial data (integrated from GIS, RS and GPS), spatial applications and services, and GIS platforms. 7) The capability of integrating legacy GIS systems: the ASISG can not only be used to construct new advanced spatial application systems, but can also integrate legacy GIS systems, so as to preserve extensibility and inheritance and protect users' investments. 8) The capability of collaboration: large-scale spatial information applications and services always involve different departments in different geographic places, so remote and uniform services are needed. 9) The capability of supporting the integration of heterogeneous systems: large-scale spatial information systems are always synthesized applications, so the ASISG should provide interoperation and consistency by adopting open and applied technology standards. 10) The capability of adapting to dynamic changes: business requirements, application patterns, management strategies and IT products change endlessly for any department, so the ASISG should be self-adaptive. Two examples are provided in this paper; they show in detail how to design a semantic grid based on multi-agent systems and ontologies. In conclusion, the semantic grid of spatial information systems can improve the integration and interoperability of a spatial information grid.

  13. High-resolution integration of water, energy, and climate models to assess electricity grid vulnerabilities to climate change

    NASA Astrophysics Data System (ADS)

    Meng, M.; Macknick, J.; Tidwell, V. C.; Zagona, E. A.; Magee, T. M.; Bennett, K.; Middleton, R. S.

    2017-12-01

    The U.S. electricity sector depends on large amounts of water for hydropower generation and for cooling thermoelectric power plants. Variability in water quantity and temperature due to climate change could reduce the performance and reliability of individual power plants and of the electric grid as a system. While studies have modeled water usage in power systems planning, few have linked grid operations with physical water constraints or with climate-induced changes in water resources to capture the role of the energy-water nexus in power system flexibility and adequacy. In addition, many hydrologic and hydropower models have a limited representation of power sector water demands and of grid interaction opportunities such as demand response and ancillary services. A multi-model framework was developed to integrate and harmonize electricity, water, and climate models, allowing for high-resolution simulation of the spatial, temporal, and physical dynamics of these interacting systems. The San Juan River basin in the Southwestern U.S., which contains thermoelectric power plants, hydropower facilities, and multiple non-energy water demands, was chosen as a case study. Downscaled data from three global climate models and predicted regional water demand changes were implemented in the simulations. The Variable Infiltration Capacity hydrologic model was used to project inflows, ambient air temperature, and humidity in the San Juan River basin. The resulting river operations, water deliveries, water shortage sharing agreements, new water demands, and hydroelectricity generation at the basin scale were estimated with RiverWare. The impacts of water availability and temperature on electric grid dispatch, curtailment, cooling water usage, and electricity generation cost were modeled in PLEXOS. Lack of water availability resulting from climate, new water demands, and shortage sharing agreements will require thermoelectric generators to drastically decrease power production, by as much as 50% during intensifying drought scenarios, which can have broader electricity-sector system implications. Results relevant to stakeholder and power provider interests highlight the vulnerabilities in grid operations driven by water shortage agreements and changes in the climate.

  14. Advanced Cloud Forecasting for Solar Energy’s Impact on Grid Modernization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werth, D.; Nichols, R.

    Solar energy production is subject to variability in the solar resource – clouds and aerosols will reduce the available solar irradiance and inhibit power production. The fact that solar irradiance can vary by large amounts at small timescales and in an unpredictable way means that power utilities are reluctant to assign to their solar plants a large portion of future energy demand – the needed power might be unavailable, forcing the utility to make costly adjustments to its daily portfolio. The availability and predictability of solar radiation therefore represent important research topics for increasing the power produced by renewable sources.

  15. Integration and Exposure of Large Scale Computational Resources Across the Earth System Grid Federation (ESGF)

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Maxwell, T. P.; Doutriaux, C.; Williams, D. N.; Chaudhary, A.; Ames, S.

    2015-12-01

    As the size of remote sensing observations and model output data grows, the volume of the data has become overwhelming, even to many scientific experts. As societies are forced to better understand, mitigate, and adapt to climate change, the combination of Earth observation data and global climate model projections is crucial not only to scientists but to policy makers, downstream applications, and even the public. Scientific progress on understanding climate is critically dependent on the availability of a reliable infrastructure that promotes data access, management, and provenance. The Earth System Grid Federation (ESGF) has created such an environment for the Intergovernmental Panel on Climate Change (IPCC). ESGF provides a federated global cyberinfrastructure for access to and management of model outputs generated for the IPCC Assessment Reports (AR). The current generation of the ESGF federated grid allows consumers of the data to find and download data, with limited capabilities for server-side processing. Since the amount of data for future ARs is expected to grow dramatically, ESGF is working on integrating server-side analytics throughout the federation. The ESGF Compute Working Team (CWT) has created a Web Processing Service (WPS) Application Programming Interface (API) to enable access to scalable computational resources. The API is the exposure point to high-performance computing resources across the federation. Specifically, the API allows users to execute simple operations, such as maximum, minimum, average, and anomalies, on ESGF data without having to download the data. These operations are executed at the ESGF data node site, with access to large amounts of parallel computing capability. This presentation will highlight the WPS API and its capabilities, provide implementation details, and discuss future developments.
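
    WPS is an OGC standard, so an Execute call can be issued as a plain key-value HTTP request. The sketch below shows the general shape of such a request with the `requests` library; the endpoint URL, process identifier and input syntax are hypothetical placeholders, since each ESGF CWT deployment publishes its own.

```python
import requests

# Hypothetical endpoint and process name; real deployments publish theirs.
WPS_URL = "https://esgf-node.example.org/wps"

params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "average",                    # a server-side reduction
    "datainputs": "variable=tas;domain=global;years=1981-2010",
}
response = requests.get(WPS_URL, params=params, timeout=60)
print(response.status_code)
print(response.text[:200])   # WPS replies with an XML status/result document
```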

  16. Code IN Exhibits - Supercomputing 2000

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.
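
    The bookkeeping at the heart of any parameter-study tool is the Cartesian product of parameter values, one job specification per combination. A minimal sketch follows; the parameter names and values are made up for illustration.

```python
import itertools

# Cartesian product of parameter values -> one job specification each.
parameters = {
    "mach":  [0.5, 0.7, 0.9],
    "alpha": [0.0, 2.0, 4.0],
    "grid":  ["coarse", "fine"],
}

names = list(parameters)
jobs = [dict(zip(names, combo))
        for combo in itertools.product(*(parameters[n] for n in names))]

for i, job in enumerate(jobs[:3]):
    print(f"job {i}: {job}")
print(len(jobs))   # 3 * 3 * 2 = 18 jobs in total
```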

  17. NeuroLOG: a community-driven middleware design.

    PubMed

    Montagnat, Johan; Gaignard, Alban; Lingrand, Diane; Rojas Balderrama, Javier; Collet, Philippe; Lahire, Philippe

    2008-01-01

    The NeuroLOG project designs an ambitious neurosciences middleware, drawing on many existing components and learning from past project experiences. It targets a focused application area and adopts a user-centric perspective to meet neuroscientists' expectations. It aims at fostering the adoption of HealthGrids in a pre-clinical community. This paper details the project's design study and the methodology proposed to achieve the integration of heterogeneous site data schemas and the definition of a site-centric policy. The NeuroLOG middleware will bridge HealthGrid and local resources to match users' desire to control their resources and to provide a transitional model towards HealthGrids.

  18. A 47-Year Daily Gridded Precipitation Dataset for Asia Based on a Dense Network of Rain Gauges -APHRODITE project-

    NASA Astrophysics Data System (ADS)

    Yatagai, A. I.; Yasutomi, N.; Hamada, A.; Kamiguchi, K.; Arakawa, O.

    2009-12-01

    A daily gridded precipitation dataset for 1961-2007 has been created by collecting rain gauge observation data across Asia through the activities of the Asian Precipitation - Highly Resolved Observational Data Integration towards Evaluation of Water Resources (APHRODITE) project. We have already released APHRODITE's daily gridded precipitation product (APHRO_V0902) for 1961-2004 (Yatagai et al., 2009); the number of valid stations was between 5,000 and 12,000, representing 2.3 to 4.5 times the data available through the Global Telecommunication System network, which is used for most daily gridded precipitation products. APHRO_V0902 is the only long-term (1961 onward) continental-scale daily product that contains a dense network of daily rain gauge data for Asia, including the Himalayas and the mountainous areas of the Middle East. The product has already contributed to studies such as the evaluation of Asian water resources, the diagnosis of climate change, statistical downscaling, and the verification of numerical model simulations and high-resolution satellite precipitation estimates. We are currently improving the quality control (QC) schemes and interpolation algorithms, and we make continuous efforts in data collection. In addition, we have undertaken capacity-building activities, such as training seminars to which we invite researchers and programmers from the Asian meteorological organizations that provided observation data for us. Furthermore, we feed the errata (QC) information back to these organizations and data centers. The next version of the algorithm will be fixed in December 2009 (APHRO_V0912), and we will update the product up to 2007. Our progress and the advantages of the next product will be shown at the AGU Fall Meeting in 2009.

  19. OGC and Grid Interoperability in enviroGRIDS Project

    NASA Astrophysics Data System (ADS)

    Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas

    2010-05-01

    EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure of the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science oriented applications, as does the Grid oriented technology that is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is the interoperability between geospatial and Grid infrastructures, providing both the basic and the extended features of the two technologies. Geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields but especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the issues they introduce (data management, secure data transfer, data distribution and data computation), the need for an infrastructure capable of managing all those problems becomes an important aspect. The Grid promotes and facilitates the secure interoperation of heterogeneous distributed geospatial data within a distributed environment, supports the creation and management of large distributed computational jobs, and assures a security level for the communication and transfer of messages based on certificates. This presentation analyses and discusses the most significant use cases for enabling the interoperability of OGC Web services with the Grid environment, and focuses on the description and implementation of the most promising one. In these use cases we give special attention to issues such as: the relations between the computational grid and the OGC Web service protocols; the advantages offered by Grid technology, such as providing secure interoperability between distributed geospatial resources; and the issues introduced by the integration of distributed geospatial data in a secure environment: data and service discovery, management, access and computation. The enviroGRIDS project proposes a new architecture which allows a flexible and scalable approach for integrating the geospatial domain, represented by the OGC Web services, with the Grid domain, represented by the gLite middleware. The parallelism offered by the Grid technology is discussed and explored at the data level, the management level and the computation level. The analysis is carried out for OGC Web service interoperability in general, but specific details are given for the Web Map Service (WMS), Web Feature Service (WFS), Web Coverage Service (WCS), Web Processing Service (WPS) and Catalogue Service for the Web (CSW). Issues regarding the mapping and the interoperability between the OGC and the Grid standards and protocols are analyzed, as they are the basis for solving the communication problems between the two environments, grid and geospatial. The presentation mainly highlights how the Grid environment and Grid application capabilities can be extended and utilized in geospatial interoperability.
Interoperability between geospatial and Grid infrastructures provides features such as the specific complex geospatial functionality and the high computational power and security of the Grid, high spatial model resolution and wide geographical coverage, and the flexible combination and interoperability of geographical models. In accordance with Service Oriented Architecture concepts and the requirements of interoperability between geospatial and Grid infrastructures, each main piece of functionality is visible from the enviroGRIDS Portal and, consequently, from end-user applications such as Decision Maker/Citizen oriented applications. The enviroGRIDS portal is the single entry point for users into the system, and the portal presents a uniform graphical user interface style. Main reference for further information: [1] enviroGRIDS Project, http://www.envirogrids.net/

  20. Systems Engineering Building Advances Power Grid Research

    ScienceCinema

    Virden, Jud; Huang, Henry; Skare, Paul; Dagle, Jeff; Imhoff, Carl; Stoustrup, Jakob; Melton, Ron; Stiles, Dennis; Pratt, Rob

    2018-01-16

    Researchers and industry are now better equipped to tackle the nation’s most pressing energy challenges through PNNL’s new Systems Engineering Building – including challenges in grid modernization, buildings efficiency and renewable energy integration. This lab links real-time grid data, software platforms, specialized laboratories and advanced computing resources for the design and demonstration of new tools to modernize the grid and increase buildings energy efficiency.

  1. Smart Grid Interoperability Maturity Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widergren, Steven E.; Levinson, Alex; Mater, J.

    2010-04-28

    The integration of automation associated with electricity resources (including transmission and distribution automation and demand-side resources operated by end-users) is key to supporting greater efficiencies and incorporating variable renewable resources and electric vehicles into the power system. The integration problems faced by this community are analogous to those faced in the health industry, emergency services, and other complex communities with many stakeholders. To highlight this issue and encourage communication and the development of a smart grid interoperability community, the GridWise Architecture Council (GWAC) created an Interoperability Context-Setting Framework. This "conceptual model" has been helpful to explain the importance of organizational alignment in addition to technical and informational interface specifications for "smart grid" devices and systems. As a next step to building a community sensitive to interoperability, the GWAC is investigating an interoperability maturity model (IMM) based on work done by others to address similar circumstances. The objective is to create a tool or set of tools that encourages a culture of interoperability in this emerging community. The tools would measure status and progress, analyze gaps, and prioritize efforts to improve the situation.

  2. Appalachian Basin Play Fairway Analysis: Thermal Quality Analysis in Low-Temperature Geothermal Play Fairway Analysis (GPFA-AB

    DOE Data Explorer

    Teresa E. Jordan

    2015-11-15

    This collection of files is part of a larger dataset uploaded in support of Low Temperature Geothermal Play Fairway Analysis for the Appalachian Basin (GPFA-AB, DOE Project DE-EE0006726). Phase 1 of the GPFA-AB project identified potential Geothermal Play Fairways within the Appalachian basin of Pennsylvania, West Virginia and New York. This was accomplished through analysis of 4 key criteria or ‘risks’: thermal quality, natural reservoir productivity, risk of seismicity, and heat utilization. Each of these analyses represents a distinct project task, with the fifth task encompassing the combination of the 4 risk factors. Supporting data for all five tasks has been uploaded into the Geothermal Data Repository node of the National Geothermal Data System (NGDS). This submission comprises the data for the Thermal Quality Analysis (project task 1) and includes all of the necessary shapefiles, rasters, datasets, code, and references to code repositories that were used to create the thermal resource and risk factor maps as part of the GPFA-AB project. The identified Geothermal Play Fairways are also provided with the larger dataset. Figures (.png) are provided as examples of the shapefiles and rasters. The regional standardized 1 square km grid used in the project is also provided as points (cell centers), polygons, and as a raster. Two ArcGIS toolboxes are available: 1) RegionalGridModels.tbx for creating resource and risk factor maps on the standardized grid, and 2) ThermalRiskFactorModels.tbx for use in making the thermal resource maps and cross sections. These toolboxes contain “item description” documentation for each model within the toolbox, and for the toolbox itself. This submission also contains three R scripts: 1) AddNewSeisFields.R to add seismic risk data to attribute tables of seismic risk, 2) StratifiedKrigingInterpolation.R for the interpolations used in the thermal resource analysis, and 3) LeaveOneOutCrossValidation.R for the cross validations used in the thermal interpolations. Some file descriptions make reference to various 'memos'. These are contained within the final report submitted October 16, 2015. Each zipped file in the submission contains an 'about' document describing the full Thermal Quality Analysis content available, along with key sources, authors, citation, use guidelines, and assumptions, with the specific file(s) contained within the .zip file highlighted.
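
    As an illustration of the leave-one-out cross validation used for the thermal interpolations, the sketch below validates a simple inverse-distance interpolator on synthetic points. The submission's actual scripts are in R and use stratified kriging, so this Python analogue only mirrors the validation logic; all data and parameters are made up.

```python
import numpy as np

def idw(x, y, xs, ys, vs, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from sample points."""
    d = np.hypot(xs - x, ys - y)
    if np.any(d == 0):
        return vs[np.argmin(d)]        # exact hit: return the sample value
    w = 1.0 / d ** power
    return np.sum(w * vs) / np.sum(w)

def loocv_rmse(xs, ys, vs):
    """Leave one point out, predict it from the rest, and collect errors."""
    errors = []
    for i in range(len(vs)):
        mask = np.arange(len(vs)) != i
        est = idw(xs[i], ys[i], xs[mask], ys[mask], vs[mask])
        errors.append(est - vs[i])
    return float(np.sqrt(np.mean(np.square(errors))))

rng = np.random.default_rng(2)
xs, ys = rng.uniform(0, 100, 30), rng.uniform(0, 100, 30)
vs = 25 + 0.03 * xs + rng.normal(0, 0.5, 30)   # synthetic temperatures
print(loocv_rmse(xs, ys, vs))
```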

  3. Enabling campus grids with open science grid technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weitzel, Derek; Bockelman, Brian; Swanson, David

    2011-01-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  4. Utilizing data grid architecture for the backup and recovery of clinical image data.

    PubMed

    Liu, Brent J; Zhou, M Z; Documet, J

    2005-01-01

    Grid Computing represents the latest and most exciting technology to evolve from the familiar realm of parallel, peer-to-peer and client-server models. However, there has been limited investigation into the impact of this emerging technology in medical imaging and informatics. In particular, PACS technology, an established clinical image repository system, while having matured significantly during the past ten years, still remains weak in the area of clinical image data backup. Current solutions are expensive or time consuming and the technology is far from foolproof. Many large-scale PACS archive systems still encounter downtime for hours or days, which has the critical effect of crippling daily clinical operations. In this paper, a review of current backup solutions will be presented along with a brief introduction to grid technology. Finally, research and development utilizing the grid architecture for the recovery of clinical image data, in particular, PACS image data, will be presented. The focus of this paper is centered on applying a grid computing architecture to a DICOM environment since DICOM has become the standard for clinical image data and PACS utilizes this standard. A federation of PACS can be created allowing a failed PACS archive to recover its image data from others in the federation in a seamless fashion. The design reflects the five-layer architecture of grid computing: Fabric, Resource, Connectivity, Collective, and Application Layers. The testbed Data Grid is composed of one research laboratory and two clinical sites. The Globus 3.0 Toolkit (Co-developed by the Argonne National Laboratory and Information Sciences Institute, USC) for developing the core and user level middleware is utilized to achieve grid connectivity. The successful implementation and evaluation of utilizing data grid architecture for clinical PACS data backup and recovery will provide an understanding of the methodology for using Data Grid in clinical image data backup for PACS, as well as establishment of benchmarks for performance from future grid technology improvements. In addition, the testbed can serve as a road map for expanded research into large enterprise and federation level data grids to guarantee CA (Continuous Availability, 99.999% up time) in a variety of medical data archiving, retrieval, and distribution scenarios.

  5. Grid Integration of Offshore Wind | Wind | NREL

    Science.gov Websites

    Photograph of a wind turbine in the ocean, located about 10 kilometers off the coast of Arklow, Ireland. Much can be learned from the existing land-based integration research for handling the variability and uncertainty of the wind resource.

  6. Glossary of AWS Acrinabs. Acronyms, Initialisms, and Abbreviations Commonly Used in Air Weather Service

    DTIC Science & Technology

    1991-01-01

    FYDP: Five Year Defense Plan; FSI: Fog Stability Index; G: gravity, giga-; GCM: Global Circulation Model; GOES-TAP: GOES imagery processing and dissemination system; GCS: grid course; GOFS: Global Ocean Flux Study; GRID: Global Resource Information Data-Base; GEMAG: geomagnetic; GRIST: grazing-incidence solar

  7. RTDS-Based Design and Simulation of Distributed P-Q Power Resources in Smart Grid

    NASA Astrophysics Data System (ADS)

    Taylor, Zachariah David

    In this thesis, we propose to utilize a battery system, together with its power electronics interfaces and bidirectional charger, as a distributed P-Q resource in power distribution networks. First, we present an optimization-based approach to operating such distributed P-Q resources based on the characteristics of the battery and charger system as well as the features and needs of the power distribution network. Then, we use the RTDS Simulator, an industry-standard power systems simulation tool, to develop two RTDS-based designs. The first design is based on an ideal four-quadrant distributed P-Q power resource. The second design is based on a detailed four-quadrant distributed P-Q power resource built from power electronics components. The hardware and power electronics circuitry as well as the control units are explained for the second design. After that, given the two RTDS designs, we conduct extensive RTDS simulations to assess the performance of the designed distributed P-Q power resource in an IEEE 13-bus test system. We observe that the proposed design can noticeably improve the operational performance of the power distribution grid in at least four key aspects: reducing power loss, active power peak load shaving at the substation, reactive power peak load shaving at the substation, and voltage regulation. We examine these performance measures across three design cases. Case 1: there is no P-Q power resource available on the power distribution network. Case 2: the installed P-Q power resource only supports active power, i.e., it only utilizes its battery component. Case 3: the installed P-Q power resource supports both active and reactive power, i.e., it utilizes both its battery component and its power electronics charger component. Finally, we present interpretations of the simulation results and suggest future work.
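
    A four-quadrant P-Q resource is bounded by the converter's apparent-power rating: with the battery supplying active power P, the charger can contribute reactive power up to sqrt(S^2 - P^2). A minimal sketch with an illustrative 7.2 kVA rating follows.

```python
import math

def q_available(s_rating_kva, p_kw):
    """Reactive-power headroom of a four-quadrant charger: the battery
    sets P, and the converter's apparent-power rating caps sqrt(P^2 + Q^2)."""
    if abs(p_kw) > s_rating_kva:
        return 0.0   # P alone already exceeds the converter rating
    return math.sqrt(s_rating_kva ** 2 - p_kw ** 2)

# Illustrative rating: a 7.2 kVA residential bidirectional charger.
for p in (0.0, 3.0, 6.0, 7.0):
    print(f"P = {p:4.1f} kW -> Q up to +/- {q_available(7.2, p):.2f} kvar")
```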

  8. Multi-agent coordination algorithms for control of distributed energy resources in smart grids

    NASA Astrophysics Data System (ADS)

    Cortes, Andres

    Sustainable energy is a top priority for researchers these days, since electricity and transportation are pillars of modern society. Integration of clean energy technologies such as wind, solar, and plug-in electric vehicles (PEVs) is a major engineering challenge in the operation and management of power systems. This is due to the uncertain nature of renewable energy technologies and the large amount of extra load that PEVs would add to the power grid. Given the networked structure of a power system, multi-agent control and optimization strategies are natural approaches to address the various problems of interest for the safe and reliable operation of the power grid. The distributed computation in multi-agent algorithms addresses three problems at the same time: i) it allows for the handling of problems with millions of variables that a single processor cannot compute, ii) it allows a certain independence and privacy to electricity customers by not requiring any usage information, and iii) it is robust to localized failures in the communication network, being able to solve problems by simply neglecting the failing section of the system. We propose various algorithms to coordinate storage, generation, and demand resources in a power grid using multi-agent computation and decentralized decision making. First, we introduce a hierarchical vehicle-one-grid (V1G) algorithm for the coordination of PEVs under usage constraints, where energy only flows from the grid into the batteries of PEVs. We then present a hierarchical vehicle-to-grid (V2G) algorithm for PEV coordination that takes into consideration line capacity constraints in the distribution grid, and where energy flows both ways, from the grid into the batteries and from the batteries to the grid. Next, we develop a greedy-like hierarchical algorithm for the management of demand response events with on/off loads. Finally, we introduce distributed algorithms for the optimal control of distributed energy resources, i.e., generation and storage in a microgrid. The algorithms we present are provably correct and tested in simulation. Each algorithm is assumed to work on a particular network topology, and simulation studies are carried out in order to demonstrate their convergence properties to a desired solution.
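
    A building block common to such multi-agent schemes is average consensus over the communication graph: each agent repeatedly averages with its neighbors and, without any central coordinator, learns a global quantity (here, the mean charging demand) that a local controller can then act on. The sketch below is a generic illustration of this primitive, not any specific algorithm from this work; the graph, demands and step size are made up.

```python
import numpy as np

# Communication graph between PEV agents: a ring of 5 chargers.
neighbors = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}

def consensus_average(values, steps=200, eps=0.2):
    """Each agent repeatedly averages with its neighbours; all values
    converge to the global mean without a central coordinator."""
    x = np.array(values, dtype=float)
    for _ in range(steps):
        x_new = x.copy()
        for i, nbrs in neighbors.items():
            x_new[i] += eps * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x

demands = [6.6, 3.3, 7.2, 1.4, 6.6]   # kW requested by each PEV
print(consensus_average(demands))      # -> mean demand, known at every node
```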

  9. GLIDE: a grid-based light-weight infrastructure for data-intensive environments

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Malek, Sam; Beckman, Nels; Mikic-Rakic, Marija; Medvidovic, Nenad; Chrichton, Daniel J.

    2005-01-01

    The promise of the grid is that it will enable public access and sharing of immense amounts of computational and data resources among dynamic coalitions of individuals and institutions. However, the current grid solutions make several limiting assumptions that curtail their widespread adoption. To address these limitations, we present GLIDE, a prototype light-weight, data-intensive middleware infrastructure that enables access to the robust data and computational power of the grid on DREAM platforms.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jay Tillay

    For three years, Sandia National Laboratories, Georgia Institute of Technology, and University of Illinois at Urbana-Champaign investigated a smart grid vision in which renewable-centric Virtual Power Plants (VPPs) provided ancillary services with interoperable distributed energy resources (DER). This team researched, designed, built, and evaluated real-time VPP designs incorporating DER forecasting, stochastic optimization, controls, and cyber security to construct a system capable of delivering reliable ancillary services, which have been traditionally provided by large power plants or other dedicated equipment. VPPs have become possible through an evolving landscape of state and national interconnection standards, which now require DER to include grid-support functionality and communications capabilities. This makes it possible for third party aggregators to provide a range of critical grid services such as voltage regulation, frequency regulation, and contingency reserves to grid operators. This paradigm (a) enables renewable energy, demand response, and energy storage to participate in grid operations and provide grid services, (b) improves grid reliability by providing additional operating reserves for utilities, independent system operators (ISOs), and regional transmission organizations (RTOs), and (c) removes renewable energy high-penetration barriers by providing services with photovoltaics and wind resources that traditionally were the jobs of thermal generators. Therefore, it is believed VPP deployment will have far-reaching positive consequences for grid operations and may provide a robust pathway to high penetrations of renewables on US power systems. In this report, we design VPPs to provide a range of grid-support services and demonstrate one VPP which simultaneously provides bulk-system energy and ancillary reserves.

  11. Past Seminars and Workshops | Energy Systems Integration Facility | NREL

    Science.gov Websites

    Past seminars and workshops include: Distributed Optimization and Control of Sustainable Power Systems Workshop; Integrating PV in Distributed Grids; Unintentional Islands in Power Systems with Distributed Resources Webinar; Smart Grid Educational Series.

  12. Failure probability analysis of optical grid

    NASA Astrophysics Data System (ADS)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    Optical grid, an integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems become extensively applied to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based analysis method for the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the performance of different backup strategies in reducing the application failure probability can be compared, so that the different requirements of different clients can be satisfied according to their respective application failure probabilities. In an optical grid, when an application described by a DAG (directed acyclic graph) is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the failure probability requirement and improve network resource utilization, realizing a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in an optical grid.
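
    Under an independence assumption, a task-based calculation reduces to simple products over the DAG's tasks, and an independent backup copy multiplies a task's failure probability by that of its replica. A minimal sketch with illustrative numbers follows; it shows the arithmetic only, not the paper's MDSA algorithm.

```python
from math import prod

def app_failure_probability(task_failure_probs):
    """Failure probability of a DAG application when any task failure
    fails the whole run and task failures are independent."""
    return 1.0 - prod(1.0 - p for p in task_failure_probs)

def with_backup(p_primary, p_backup):
    """A task with an independent backup fails only if both copies fail."""
    return p_primary * p_backup

tasks = [0.01, 0.02, 0.005, 0.01]                # illustrative per-task risks
print(app_failure_probability(tasks))            # no backups
print(app_failure_probability([with_backup(p, p) for p in tasks]))
```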

  13. Experimental Verification and Integration of a Next Generation Smart Power Management System

    NASA Astrophysics Data System (ADS)

    Clemmer, Tavis B.

    With the increase in energy demand by the residential community in this country and the diminishing fossil fuel resources used for electric energy production, there is a need for a system to efficiently manage power within a residence. The Smart Green Power Node (SGPN) is a next-generation energy management system that automates on-site energy production, storage, consumption, and grid usage to yield the most savings for both the utility and the consumer. Such a system automatically manages on-site distributed generation sources, such as a photovoltaic (PV) input and battery storage, to curtail grid energy usage when the price is high. The SGPN high-level control features an advanced modular algorithm that incorporates weather data for projected PV generation, battery health monitoring algorithms, user preferences for load prioritization within the home in case of an outage, Time of Use (ToU) grid power pricing, and the status of on-site resources to intelligently schedule and manage power flow between the grid, loads, and the on-site resources. The SGPN has a scalable, modular architecture such that it can be customized for user-specific applications. This drove the topology of the SGPN, which connects on-site resources at a low-voltage DC microbus; a two-stage bi-directional inverter/rectifier then couples the AC load and the residential grid connection to on-site generation. The SGPN has been designed, built, and is undergoing testing. Hardware test results are consistent with the design goals and indicate that the SGPN is a viable system; recommended changes and future work are discussed.

  14. Digital Library Storage using iRODS Data Grids

    NASA Astrophysics Data System (ADS)

    Hedges, Mark; Blanke, Tobias; Hasan, Adil

    Digital repository software provides a powerful and flexible infrastructure for managing and delivering complex digital resources and metadata. However, issues can arise in managing the very large, distributed data files that may constitute these resources. This paper describes an implementation approach that combines the Fedora digital repository software with a storage layer implemented as a data grid, using the iRODS middleware developed by DICE (Data Intensive Cyber Environments) as the successor to SRB. This approach allows us to use Fedora's flexible architecture to manage the structure of resources and to provide application-layer services to users. The grid-based storage layer provides efficient support for managing and processing the underlying distributed data objects, which may be very large (e.g. audio-visual material). The Rule Engine built into iRODS is used to integrate complex workflows at the data level that need not be visible to users, e.g. digital preservation functionality.

  15. Greening the Grid: Solar and Wind Grid Integration Study for the Luzon-Visayas System of the Philippines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrows, Clayton P.; Katz, Jessica R.; Cochran, Jaquelin M.

    The Republic of the Philippines is home to abundant solar, wind, and other renewable energy (RE) resources that contribute to the national government's vision to ensure sustainable, secure, sufficient, accessible, and affordable energy. Because solar and wind resources are variable and uncertain, significant generation from these resources necessitates an evolution in power system planning and operation. To support Philippine power sector planners in evaluating the impacts and opportunities associated with achieving high levels of variable RE penetration, the Department of Energy of the Philippines (DOE) and the United States Agency for International Development (USAID) have spearheaded this study along with a group of modeling representatives from across the Philippine electricity industry, which seeks to characterize the operational impacts of reaching high solar and wind targets in the Philippine power system, with a specific focus on the integrated Luzon-Visayas grids.

  16. Managing competing elastic Grid and Cloud scientific computing applications using OpenNebula

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, other smaller applications can benefit from automatic elasticity; the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, will be described and the very first operational experiences with a small number of strategies for timely allocation and release of resources will be discussed.
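    A minimal sketch of one such strategy for timely allocation and release, assuming a queue-length signal is available; the helper names are hypothetical and this is not the OpenNebula-based mechanism described by the authors.

    ```python
    # Threshold policy sketch: grow the worker pool to drain the queue, shrink
    # it when workers sit idle. Hypothetical names; illustrative only.

    def scaling_delta(queued_jobs, running_workers,
                      jobs_per_worker=4, max_workers=50):
        """Positive result: workers to instantiate; negative: workers to release."""
        target = min(max_workers, -(-queued_jobs // jobs_per_worker))  # ceiling division
        return target - running_workers

    print(scaling_delta(queued_jobs=13, running_workers=2))  # -> 2 (scale up)
    print(scaling_delta(queued_jobs=0, running_workers=2))   # -> -2 (scale down)
    ```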

  17. COMP Superscalar, an interoperable programming framework

    NASA Astrophysics Data System (ADS)

    Badia, Rosa M.; Conejero, Javier; Diaz, Carlos; Ejarque, Jorge; Lezzi, Daniele; Lordan, Francesc; Ramon-Cortes, Cristian; Sirvent, Raul

    2015-12-01

    COMPSs is a programming framework that aims to facilitate the parallelization of existing applications written in Java, C/C++ and Python scripts. For that purpose, it offers a simple programming model based on sequential development in which the user is mainly responsible for (i) identifying the functions to be executed as asynchronous parallel tasks and (ii) marking them with Java annotations or standard Python decorators. A runtime system is in charge of exploiting the inherent concurrency of the code, automatically detecting and enforcing the data dependencies between tasks and spawning these tasks to the available resources, which can be nodes in a cluster, clouds or grids. In cloud environments, COMPSs provides scalability and elasticity features allowing the dynamic provision of resources.
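    As a rough illustration of this decorator-driven model, the toy stand-in below marks plain functions as asynchronous tasks and lets futures carry the data dependencies between them; it sketches the general idea only and is not the COMPSs/PyCOMPSs API.

    ```python
    # Toy stand-in for a decorator-based task model: decorated functions run
    # asynchronously and waiting on input futures enforces data dependencies.

    import concurrent.futures

    _pool = concurrent.futures.ThreadPoolExecutor()

    def task(fn):
        """Submit the decorated function asynchronously; inputs may be futures."""
        def wrapper(*args):
            def run():
                # Resolving input futures here is what enforces dependencies.
                resolved = [a.result() if isinstance(a, concurrent.futures.Future)
                            else a for a in args]
                return fn(*resolved)
            return _pool.submit(run)
        return wrapper

    @task
    def square(x):
        return x * x

    @task
    def add(a, b):
        return a + b

    # add() runs only after both square() tasks have produced their results.
    print(add(square(3), square(4)).result())  # -> 25
    ```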

  18. Deployment and Operational Experiences with CernVM-FS at the GridKa Tier-1 Center

    NASA Astrophysics Data System (ADS)

    Alef, Manfred; Jäger, Axel; Petzold, Andreas; Verstege, Bernhard

    2012-12-01

    In 2012 the GridKa Tier-1 computing center hosts 130 kHS06 of computing resources and 14 PB of disk and 17 PB of tape space. These resources are shared between the four LHC VOs and a number of national and international VOs from high energy physics and other sciences. CernVM-FS has been deployed at GridKa to supplement the existing NFS-based system for accessing VO software on the worker nodes. It provides a solution tailored to the requirements of the LHC VOs. We will focus on the first operational experiences and the monitoring of CernVM-FS on the worker nodes and the squid caches.

  19. Multi-state time-varying reliability evaluation of smart grid with flexible demand resources utilizing Lz transform

    NASA Astrophysics Data System (ADS)

    Jia, Heping; Jin, Wende; Ding, Yi; Song, Yonghua; Yu, Dezhao

    2017-01-01

    With the expanding proportion of renewable energy generation and the development of smart grid technologies, flexible demand resources (FDRs) have been utilized as an approach to accommodating renewable energies. However, the multiple uncertainties of FDRs may influence the reliable and secure operation of the smart grid. Multi-state reliability models for a single FDR and for aggregated FDRs are proposed in this paper, taking into account the responsive abilities of FDRs and random failures of both FDR devices and the information system. The proposed reliability evaluation technique is based on the Lz transform method, which can formulate time-varying reliability indices. A modified IEEE-RTS is used as an illustration of the proposed technique.
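    For reference, the Lz transform named above maps a discrete-state continuous-time Markov process G(t), with performance levels g_j and state probabilities p_j(t), to a polynomial in z; a standard statement of the definition, following the Lz-transform literature, is:

    ```latex
    % Lz transform of a discrete-state continuous-time Markov process G(t)
    % with K states, performance levels g_j and state probabilities p_j(t):
    L_z\{G(t)\} = \sum_{j=1}^{K} p_j(t)\, z^{g_j}
    ```

    Time-varying reliability indices are then read off from the terms whose performance levels satisfy the demand at each instant.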

  20. Integrating Wind and Solar on the Grid: NREL Analysis Leads the Way

    Science.gov Websites

    [Map provided by NREL.] Integrating Wind and Solar on the Grid: NREL Analysis Leads the Way. NREL studies confirm big wind, solar potential for grid integration. To fully harvest the nation's bountiful wind and solar resources, it is critical to know how much

  1. A Scheduling Algorithm for Computational Grids that Minimizes Centralized Processing in Genome Assembly of Next-Generation Sequencing Data

    PubMed Central

    Lima, Jakelyne; Cerdeira, Louise Teixeira; Bol, Erick; Schneider, Maria Paula Cruz; Silva, Artur; Azevedo, Vasco; Abelém, Antônio Jorge Gomes

    2012-01-01

    Improvements in genome sequencing techniques have resulted in the generation of huge volumes of data. As a consequence of this progress, the genome assembly stage demands even more computational power, since the incoming sequence files contain large amounts of data. To speed up the process, it is often necessary to distribute the workload among a group of machines; however, this requires hardware and software solutions specially configured for the purpose. Grid computing tries to simplify this aggregation of resources, but does not always offer the best possible performance, due to the heterogeneity and decentralized management of its resources. It is therefore necessary to develop software that takes these peculiarities into account. To this end, we developed an algorithm that adapts the de novo assembly software ABySS to operate efficiently in grids. We ran ABySS with and without our algorithm in the grid simulator SimGrid. Tests showed that our algorithm is viable, flexible, and scalable even in a heterogeneous environment, and that it improved genome assembly time in computational grids without changing the quality of the result. PMID:22461785
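    One ingredient a grid-aware scheduler like this needs is a way to split the input in proportion to heterogeneous node capacity rather than evenly. The sketch below shows only that proportional-split step; the helper and node names are hypothetical, and this is not the algorithm evaluated in the paper.

    ```python
    # Hedged sketch: divide sequencing reads among heterogeneous grid nodes in
    # proportion to benchmarked compute power so slower nodes do not stall the
    # assembly. Hypothetical helper; illustrative only.

    def partition_reads(total_reads, node_power):
        """Assign each node a share of reads proportional to its compute power."""
        total_power = sum(node_power.values())
        shares = {n: int(total_reads * p / total_power)
                  for n, p in node_power.items()}
        fastest = max(node_power, key=node_power.get)
        shares[fastest] += total_reads - sum(shares.values())  # rounding remainder
        return shares

    print(partition_reads(1_000_000, {"nodeA": 3.0, "nodeB": 1.0, "nodeC": 1.0}))
    # -> {'nodeA': 600000, 'nodeB': 200000, 'nodeC': 200000}
    ```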

  2. Sustaining and Extending the Open Science Grid: Science Innovation on a PetaScale Nationwide Facility (DE-FC02-06ER41436) SciDAC-2 Closeout Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron; Shank, James; Ernst, Michael

    Under this SciDAC-2 grant the project's goal was to stimulate new discoveries by providing scientists with effective and dependable access to an unprecedented national distributed computational facility: the Open Science Grid (OSG). We proposed to achieve this through the work of the Open Science Grid Consortium: a unique hands-on multi-disciplinary collaboration of scientists, software developers and providers of computing resources. Together the stakeholders in this consortium sustain and use a shared distributed computing environment that transforms simulation and experimental science in the US. The OSG consortium is an open collaboration that actively engages new research communities. We operate an open facility that brings together a broad spectrum of compute, storage, and networking resources and interfaces to other cyberinfrastructures, including the US XSEDE (previously TeraGrid) and the European Enabling Grids for E-sciencE (EGEE), as well as campus and regional grids. We leverage middleware provided by computer science groups, facility IT support organizations, and computing programs of application communities for the benefit of consortium members and the US national CI.

  3. A New Family of Multilevel Grid Connected Inverters Based on Packed U Cell Topology.

    PubMed

    Pakdel, Majid; Jalilzadeh, Saeid

    2017-09-29

    In this paper a novel packed U cell (PUC) based multilevel grid-connected inverter is proposed. Unlike the U cell arrangement, which consists of two power switches and one capacitor, the proposed converter topology uses a lower DC power supply from renewable energy resources, such as photovoltaic (PV) arrays, as its base power source. The proposed topology offers higher efficiency and lower cost, using a small number of power switches and a lower DC power source supplied from renewable energy resources. The other capacitor voltages are derived from the base DC power source using isolated DC-DC power converters. The operating principle of the proposed transformerless multilevel grid-connected inverter is analyzed theoretically, and its operation is verified through simulation studies. An experimental prototype based on the STM32F407 Discovery controller board was built to verify the simulation results.

  4. Option Grids to facilitate shared decision making for patients with Osteoarthritis of the knee: protocol for a single site, efficacy trial.

    PubMed

    Marrin, Katy; Wood, Fiona; Firth, Jill; Kinsey, Katharine; Edwards, Adrian; Brain, Kate E; Newcombe, Robert G; Nye, Alan; Pickles, Timothy; Hawthorne, Kamila; Elwyn, Glyn

    2014-04-07

    Despite policy interest, an ethical imperative, and evidence of the benefits of patient decision support tools, the adoption of shared decision making (SDM) in day-to-day clinical practice remains slow and is inhibited by barriers that include culture and attitudes, as well as resource and time pressures. Patient decision support tools often require high levels of health and computer literacy. Option Grids are one-page evidence-based summaries of the available condition-specific treatment options, listing patients' frequently asked questions. They are designed to be brief and accessible enough to support a better dialogue between patients and clinicians during routine consultations. This paper describes a study to assess whether an Option Grid for osteoarthritis of the knee (OA of the knee) facilitates SDM, and explores the use of Option Grids by patients disadvantaged by language or poor health literacy. This will be a stepped wedge exploratory trial involving 72 patients with OA of the knee referred from primary medical care to a specialist musculoskeletal service in Oldham. Six physiotherapists will sequentially join the trial and consult with six patients using usual care procedures. After a period of brief training in using the Option Grid, the same six physiotherapists will consult with six further patients using an Option Grid in the consultation. The primary outcome will be the efficacy of the Option Grid in facilitating SDM as measured by observational scores using the OPTION scale. Comparisons will be made between patients who have received the Option Grid and those who received usual care. A Decision Quality Measure (DQM) will assess quality of decision making. The health literacy of patients will be measured using the REALM-R instrument. Consultations will be observed and audio-recorded. Interviews will be conducted with the physiotherapists, patients and any interpreters present to explore their views of using the Option Grid. Option Grids offer a potential solution to the barriers to implementing traditional decision aids into routine clinical practice. The study will assess whether Option Grids can facilitate SDM in day-to-day clinical practice and explore their use with patients disadvantaged by language or poor health literacy. Current Controlled Trials ISRCTN94871417.

  5. Elastic extension of a local analysis facility on external clouds for the LHC experiments

    NASA Astrophysics Data System (ADS)

    Ciaschini, V.; Codispoti, G.; Rinaldi, L.; Aiftimiei, D. C.; Bonacorsi, D.; Calligola, P.; Dal Pra, S.; De Girolamo, D.; Di Maria, R.; Grandi, C.; Michelotto, D.; Panella, M.; Taneja, S.; Semeria, F.

    2017-10-01

    The computing infrastructures serving the LHC experiments have been designed to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may, however, generate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, the LHC experiments are exploring the opportunity to access Cloud resources provided by external partners or commercial providers. In this work we present the proof of concept of the elastic extension of a local analysis facility, specifically the Bologna Tier-3 Grid site, for the LHC experiments hosted at the site, on an external OpenStack infrastructure. We focus on the Cloud Bursting of the Grid site using DynFarm, a newly designed tool that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on an OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time they serve as an extension of the farm for the local usage.

  6. Energy footprint and carbon emission reduction using off-the-grid solar-powered mixing for lagoon treatment.

    PubMed

    Jiang, Yuyuan; Bebee, Brian; Mendoza, Alvaro; Robinson, Alice K; Zhang, Xiaying; Rosso, Diego

    2018-01-01

    Mixing is the driver of the energy footprint of water resource recovery in lagoons. With the availability of solar-powered equipment, one potential measure to decrease the environmental impacts of treatment is to transition to off-the-grid treatment. We studied the comparative scenarios of an existing grid-powered mixer and a solar-powered mixer. Testing was conducted to monitor water quality and to guarantee that effluent concentrations were maintained equally between the two scenarios. Meanwhile, energy consumption was recorded with an electrical energy monitor by the wastewater treatment utility, and the carbon emission changes were calculated using the emission intensity of the power utility. The results show that after the replacement, both energy usage and energy costs were significantly reduced, with energy usage having decreased by 70% and its cost by 47%. Additionally, carbon-equivalent emissions from electricity importation dropped by 64%, with the effect on the overall carbon emissions (i.e., including all other contributions from the process) decreasing from 3.8% to 1.5%.
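    The carbon accounting described above reduces to multiplying imported energy by the power utility's emission intensity. The sketch below replays that arithmetic with placeholder numbers; the emission intensity and energy figures are assumptions, not the study's metered data.

    ```python
    # Worked example of the accounting: emissions attributable to grid imports
    # equal imported energy times the utility's emission intensity.

    EMISSION_INTENSITY = 0.4  # tCO2e per MWh, assumed utility figure

    def import_emissions(annual_mwh):
        """Carbon-equivalent emissions from grid electricity imports."""
        return annual_mwh * EMISSION_INTENSITY

    before = import_emissions(100.0)         # grid-powered mixer
    after = import_emissions(100.0 * 0.30)   # mixer now draws 70% less energy
    print(f"emission reduction from imports: {(before - after) / before:.0%}")
    ```

    With a fixed emission intensity the reduction tracks the energy savings one-for-one; the study's 64% figure reflects its own metered data and utility emission factors.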

  7. OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid.

    PubMed

    Poehlman, William L; Rynge, Mats; Branton, Chris; Balamurugan, D; Feltus, Frank A

    2016-01-01

    High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments.

  8. OSG-GEM: Gene Expression Matrix Construction Using the Open Science Grid

    PubMed Central

    Poehlman, William L.; Rynge, Mats; Branton, Chris; Balamurugan, D.; Feltus, Frank A.

    2016-01-01

    High-throughput DNA sequencing technology has revolutionized the study of gene expression while introducing significant computational challenges for biologists. These computational challenges include access to sufficient computer hardware and functional data processing workflows. Both these challenges are addressed with our scalable, open-source Pegasus workflow for processing high-throughput DNA sequence datasets into a gene expression matrix (GEM) using computational resources available to U.S.-based researchers on the Open Science Grid (OSG). We describe the usage of the workflow (OSG-GEM), discuss workflow design, inspect performance data, and assess accuracy in mapping paired-end sequencing reads to a reference genome. A target OSG-GEM user is proficient with the Linux command line and possesses basic bioinformatics experience. The user may run this workflow directly on the OSG or adapt it to novel computing environments. PMID:27499617

  9. The Montage architecture for grid-enabled science processing of large, distributed datasets

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Katz, Daniel S.; Prince, Thomas; Berriman, Bruce G.; Good, John C.; Laity, Anastasia C.; Deelman, Ewa; Singh, Gurmeet; Su, Mei-Hui

    2004-01-01

    Montage is an Earth Science Technology Office (ESTO) Computational Technologies (CT) Round III Grand Challenge investigation to deploy a portable, compute-intensive, custom astronomical image mosaicking service for the National Virtual Observatory (NVO). Although Montage is developing a compute- and data-intensive service for the astronomy community, we are also helping to address a problem that spans both Earth and Space science, namely how to efficiently access and process multi-terabyte, distributed datasets. In both communities, the datasets are massive, and are stored in distributed archives that are, in most cases, remote from the available computational resources. Therefore, state-of-the-art computational grid technologies are a key element of the Montage portal architecture. This paper describes the aspects of the Montage design that are applicable to both the Earth and Space science communities.

  10. Simulation of Etching in Chlorine Discharges Using an Integrated Feature Evolution-Plasma Model

    NASA Technical Reports Server (NTRS)

    Hwang, Helen H.; Bose, Deepak; Govindan, T. R.; Meyyappan, M.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    To better utilize its vast collection of heterogeneous resources that are geographically distributed across the United States, NASA is constructing a computational grid called the Information Power Grid (IPG). This paper describes various tools and techniques that we are developing to measure and improve the performance of a broad class of NASA applications when run on the IPG. In particular, we are investigating the areas of grid benchmarking, grid monitoring, user-level application scheduling, and decentralized system-level scheduling.

  11. GridLAB-D: An Agent-Based Simulation Framework for Smart Grids

    DOE PAGES

    Chassin, David P.; Fuller, Jason C.; Djilali, Ned

    2014-01-01

    Simulation of smart grid technologies requires a fundamentally new approach to integrated modeling of power systems, energy markets, building technologies, and the plethora of other resources and assets that are becoming part of modern electricity production, delivery, and consumption systems. As a result, the US Department of Energy’s Office of Electricity commissioned the development of a new type of power system simulation tool called GridLAB-D that uses an agent-based approach to simulating smart grids. This paper presents the numerical methods and approach to time-series simulation used by GridLAB-D and reviews applications in power system studies, market design, building control system design, and integration of wind power in a smart grid.

  12. GridLAB-D: An Agent-Based Simulation Framework for Smart Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Fuller, Jason C.; Djilali, Ned

    2014-06-23

    Simulation of smart grid technologies requires a fundamentally new approach to integrated modeling of power systems, energy markets, building technologies, and the plethora of other resources and assets that are becoming part of modern electricity production, delivery, and consumption systems. As a result, the US Department of Energy’s Office of Electricity commissioned the development of a new type of power system simulation tool called GridLAB-D that uses an agent-based approach to simulating smart grids. This paper presents the numerical methods and approach to time-series simulation used by GridLAB-D and reviews applications in power system studies, market design, building control system design, and integration of wind power in a smart grid.

  13. IEEE Smart Grid Series of Standards IEEE 2030 (Interoperability) and IEEE 1547 (Interconnection) Status: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basso, T.; DeBlasio, R.

    The IEEE American National Standards smart grid publications and standards development projects IEEE 2030, which addresses smart grid interoperability, and IEEE 1547™, which addresses distributed resources interconnection with the grid, have made substantial progress since 2009. The IEEE 2030™ and 1547 standards series focus on systems-level aspects and cover many of the technical integration issues involved in a mature smart grid. The status and highlights of these two IEEE series of standards, which are sponsored by IEEE Standards Coordinating Committee 21 (SCC21), are provided in this paper.

  14. Resource management and scheduling policy based on grid for AIoT

    NASA Astrophysics Data System (ADS)

    Zou, Yiqin; Quan, Li

    2017-07-01

    This paper presents research on grid-based resource management and scheduling policy for the Agricultural Internet of Things (AIoT). Given the variety of complex, heterogeneous agricultural resources in AIoT, it is difficult to represent them in a unified way; from an abstract perspective, however, there are common models that can express their characteristics and features. Based on this, we propose a high-level model called the Agricultural Resource Hierarchy Model (ARHM), which can be used for modeling various resources, and introduce an agricultural resource modeling method based on it. Compared with the traditional application-oriented three-layer model, ARHM can hide the differences between applications, giving all applications a unified interface layer and allowing them to be implemented without distinction. Furthermore, the paper proposes a Web Service Resource Framework (WSRF)-based resource management method and the encapsulation structure for it. Finally, it focuses on a multi-agent-based agricultural resource scheduler, a collaborative service-provider pattern spanning multiple agricultural production domains.

  15. A national-scale authentication infrastructure.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, R.; Engert, D.; Foster, I.

    2000-12-01

    Today, individuals and institutions in science and industry are increasingly forming virtual organizations to pool resources and tackle a common goal. Participants in virtual organizations commonly need to share resources such as data archives, computer cycles, and networks - resources usually available only with restrictions based on the requested resource's nature and the user's identity. Thus, any sharing mechanism must have the ability to authenticate the user's identity and determine if the user is authorized to request the resource. Virtual organizations tend to be fluid, however, so authentication mechanisms must be flexible and lightweight, allowing administrators to quickly establish and change resource-sharing arrangements. However, because virtual organizations complement rather than replace existing institutions, sharing mechanisms cannot change local policies and must allow individual institutions to maintain control over their own resources. Our group has created and deployed an authentication and authorization infrastructure that meets these requirements: the Grid Security Infrastructure. GSI offers secure single sign-ons and preserves site control over access policies and local security. It provides its own versions of common applications, such as FTP and remote login, and a programming interface for creating secure applications.

  16. Network and computing infrastructure for scientific applications in Georgia

    NASA Astrophysics Data System (ADS)

    Kvatadze, R.; Modebadze, Z.

    2016-09-01

    The status of the network and computing infrastructure and the services available to the research and education community of Georgia are presented. The Research and Educational Networking Association (GRENA) provides the following network services: Internet connectivity, network services, cyber security, technical support, etc. Computing resources used by the research teams are located at GRENA and at major state universities. The GE-01-GRENA site is included in the European Grid infrastructure. The paper also contains information about the Learning Center's programs and the research and development projects in which GRENA participates.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    The technology necessary to build net zero energy buildings (NZEBs) is ready and available today; however, building to net zero energy performance levels can be challenging. Energy efficiency measures, onsite energy generation resources, load matching and grid interaction, climatic factors, and local policies vary from location to location and require unique methods of constructing NZEBs. It is recommended that Components start looking into how to construct and operate NZEBs now, as there is a learning curve to net zero construction and FY 2020 is just around the corner.

  18. An Efficient Means of Adaptive Refinement Within Systems of Overset Grids

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1996-01-01

    An efficient means of adaptive refinement within systems of overset grids is presented. Problem domains are segregated into near-body and off-body fields. Near-body fields are discretized via overlapping body-fitted grids that extend only a short distance from body surfaces. Off-body fields are discretized via systems of overlapping uniform Cartesian grids of varying levels of refinement. A novel off-body grid generation and management scheme provides the mechanism for carrying out adaptive refinement of off-body flow dynamics and solid body motion. The scheme allows for very efficient use of memory resources, and for flow solvers and domain connectivity routines that can exploit the structure inherent to uniform Cartesian grids.

  19. Concurrent negotiation and coordination for grid resource coallocation.

    PubMed

    Sim, Kwang Mong; Shi, Benyun

    2010-06-01

    Bolstering resource coallocation is essential for realizing the Grid vision, because computationally intensive applications often require multiple computing resources from different administrative domains. Given that resource providers and consumers may have different requirements, successfully obtaining commitments through concurrent negotiations with multiple resource providers to simultaneously access several resources is a very challenging task for consumers. The impetus of this paper is that it is one of the earliest works that consider a concurrent negotiation mechanism for Grid resource coallocation. The concurrent negotiation mechanism is designed for 1) managing (de)commitment of contracts through one-to-many negotiations and 2) coordination of multiple concurrent one-to-many negotiations between a consumer and multiple resource providers. The novel contributions of this paper are devising 1) a utility-oriented coordination (UOC) strategy, 2) three classes of commitment management strategies (CMSs) for concurrent negotiation, and 3) the negotiation protocols of consumers and providers. Implementing these ideas in a testbed, three series of experiments were carried out in a variety of settings to compare the following: 1) the CMSs in this paper with the work of others in a single one-to-many negotiation environment for one resource where decommitment is allowed for both provider and consumer agents; 2) the performance of the three classes of CMSs in different resource market types; and 3) the UOC strategy with the work of others [e.g., the patient coordination strategy (PCS)] for coordinating multiple concurrent negotiations. Empirical results show the following: 1) the UOC strategy achieved higher utility, faster negotiation speed, and higher success rates than PCS for different resource market types; and 2) the CMS in this paper achieved higher final utility than the CMS in other works. Additionally, the properties of the three classes of CMSs in different kinds of resource markets are also verified.

  20. Parallel Processing of Images in Mobile Devices using BOINC

    NASA Astrophysics Data System (ADS)

    Curiel, Mariela; Calle, David F.; Santamaría, Alfredo S.; Suarez, David F.; Flórez, Leonardo

    2018-04-01

    Medical image processing helps health professionals make decisions for the diagnosis and treatment of patients. Since some algorithms for processing images require substantial amounts of resources, one can take advantage of distributed or parallel computing. A mobile grid can be an adequate computing infrastructure for this problem; a mobile grid is a grid that includes mobile devices as resource providers. In a previous step of this research, we selected BOINC as the infrastructure on which to build our mobile grid. However, parallel processing of images on mobile devices poses at least two important challenges: executing standard image-processing libraries, and obtaining adequate performance compared to grids of desktop computers. By the time we started our research, the use of BOINC on mobile devices also involved two issues: a) executing programs on mobile devices required modifying the code to insert calls to the BOINC API, and b) dividing the image among the mobile devices, as well as merging the results, required additional code in some BOINC components. This article presents answers to these four challenges.

  1. Distributed data mining on grids: services, tools, and applications.

    PubMed

    Cannataro, Mario; Congiusta, Antonio; Pugliese, Andrea; Talia, Domenico; Trunfio, Paolo

    2004-12-01

    Data mining algorithms are widely used today for the analysis of large corporate and scientific datasets stored in databases and data archives. Industry, science, and commerce fields often need to analyze very large datasets maintained over geographically distributed sites by using the computational power of distributed and parallel systems. The grid can play a significant role in providing effective computational support for distributed knowledge discovery applications. For the development of data mining applications on grids we designed a system called Knowledge Grid. This paper describes the Knowledge Grid framework and presents the toolset provided by the Knowledge Grid for implementing distributed knowledge discovery. The paper discusses how to design and implement data mining applications by using the Knowledge Grid tools, starting from searching grid resources, composing software and data components, and executing the resulting data mining process on a grid. Some performance results are also discussed.

  2. Reinforcement learning techniques for controlling resources in power networks

    NASA Astrophysics Data System (ADS)

    Kowli, Anupama Sunil

    As power grids transition towards increased reliance on renewable generation, energy storage and demand response resources, an effective control architecture is required to harness the full functionalities of these resources. There is a critical need for control techniques that recognize the unique characteristics of the different resources and exploit the flexibility afforded by them to provide ancillary services to the grid. The work presented in this dissertation addresses these needs. Specifically, new algorithms are proposed, which allow control synthesis in settings wherein the precise distribution of the uncertainty and its temporal statistics are not known. These algorithms are based on recent developments in Markov decision theory, approximate dynamic programming and reinforcement learning. They impose minimal assumptions on the system model and allow the control to be "learned" based on the actual dynamics of the system. Furthermore, they can accommodate complex constraints such as capacity and ramping limits on generation resources, state-of-charge constraints on storage resources, comfort-related limitations on demand response resources and power flow limits on transmission lines. Numerical studies demonstrating applications of these algorithms to practical control problems in power systems are discussed. Results demonstrate how the proposed control algorithms can be used to improve the performance and reduce the computational complexity of the economic dispatch mechanism in a power network. We argue that the proposed algorithms are eminently suitable to develop operational decision-making tools for large power grids with many resources and many sources of uncertainty.
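    To make the approach concrete, the toy below applies tabular Q-learning to a one-unit storage dispatch problem: the controller never sees the price distribution, only sampled transitions, and still learns a buy-low/sell-high policy. The state space, prices, and learning constants are illustrative assumptions, far simpler than the constrained resources treated in the dissertation.

    ```python
    # Toy tabular Q-learning for storage dispatch; model-free in the spirit of
    # the reinforcement learning approach described above. Illustrative only.

    import random

    random.seed(0)
    ACTIONS = ["charge", "idle", "discharge"]
    PRICES = {"low": 10.0, "high": 50.0}

    def step(price, soc, action):
        """1-unit battery: pay the price to charge, earn it to discharge."""
        if action == "charge" and soc == 0:
            return -PRICES[price], 1
        if action == "discharge" and soc == 1:
            return PRICES[price], 0
        return 0.0, soc  # idle, or an infeasible action treated as idle

    Q = {((p, s), a): 0.0 for p in PRICES for s in (0, 1) for a in ACTIONS}
    alpha, gamma, eps = 0.1, 0.9, 0.1

    price, soc = "low", 0
    for _ in range(20000):
        state = (price, soc)
        action = (random.choice(ACTIONS) if random.random() < eps
                  else max(ACTIONS, key=lambda a: Q[(state, a)]))
        r, soc = step(price, soc, action)
        price = random.choice(["low", "high"])  # exogenous, unknown price process
        Q[(state, action)] += alpha * (
            r + gamma * max(Q[((price, soc), a)] for a in ACTIONS)
            - Q[(state, action)])

    for p in ("low", "high"):
        for s in (0, 1):
            print(p, s, max(ACTIONS, key=lambda a: Q[((p, s), a)]))
    # Learned policy: charge when price is low and the battery is empty,
    # discharge when price is high and the battery is full, otherwise idle.
    ```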

  3. Cloud Computing for the Grid: GridControl: A Software Platform to Support the Smart Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    GENI Project: Cornell University is creating a new software platform for grid operators called GridControl that will utilize cloud computing to more efficiently control the grid. In a cloud computing system, there are minimal hardware and software demands on users. The user can tap into a network of computers that is housed elsewhere (the cloud) and the network runs computer applications for the user. The user only needs interface software to access all of the cloud’s data resources, which can be as simple as a web browser. Cloud computing can reduce costs, facilitate innovation through sharing, empower users, and improve the overall reliability of a dispersed system. Cornell’s GridControl will focus on 4 elements: delivering the state of the grid to users quickly and reliably; building networked, scalable grid-control software; tailoring services to emerging smart grid uses; and simulating smart grid behavior under various conditions.

  4. OpenClimateGIS - A Web Service Providing Climate Model Data in Commonly Used Geospatial Formats

    NASA Astrophysics Data System (ADS)

    Erickson, T. A.; Koziol, B. W.; Rood, R. B.

    2011-12-01

    The goal of the OpenClimateGIS project is to make climate model datasets readily available in the commonly used, modern geospatial formats employed by GIS software, browser-based mapping tools, and virtual globes. The climate modeling community typically stores climate data in multidimensional gridded formats capable of efficiently storing large volumes of data (such as netCDF and GRIB), while the geospatial community typically uses flexible vector and raster formats that are capable of storing small volumes of data (relative to the multidimensional gridded formats). OpenClimateGIS seeks to address this difference in data formats by clipping climate data to user-specified vector geometries (i.e. areas of interest) and translating the gridded data on-the-fly into multiple vector formats. The OpenClimateGIS system does not store climate data archives locally, but rather works in conjunction with external climate archives that expose climate data via the OPeNDAP protocol. OpenClimateGIS provides a RESTful API web service for accessing climate data resources via HTTP, allowing a wide range of applications to access the climate data. The OpenClimateGIS system has been developed using open source development practices and the source code is publicly available. The project integrates libraries from several other open source projects (including Django, PostGIS, numpy, Shapely, and netcdf4-python). OpenClimateGIS development is supported by a grant from NOAA's Climate Program Office.
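    The core clipping idea, keeping only the grid cells that fall within a user-specified geometry, can be sketched in a few lines of numpy and Shapely. The toy arrays below stand in for a netCDF variable; none of this is the OpenClimateGIS API itself, and the example assumes numpy and shapely are installed.

    ```python
    # Sketch of the clipping step: keep grid cells whose centers fall inside
    # a user-supplied vector geometry. Toy data; illustrative only.

    import numpy as np
    from shapely.geometry import Point, Polygon

    # Toy 1-degree grid with cell centers at 0.5, 1.5, ..., 9.5 and a fake field.
    lons, lats = np.meshgrid(np.arange(10) + 0.5, np.arange(10) + 0.5)
    temps = 15.0 + lats  # warmer toward higher latitudes, for visibility

    area_of_interest = Polygon([(2, 2), (7, 2), (7, 7), (2, 7)])

    mask = np.array([[area_of_interest.contains(Point(lon, lat))
                      for lon, lat in zip(row_lon, row_lat)]
                     for row_lon, row_lat in zip(lons, lats)])

    clipped = np.where(mask, temps, np.nan)
    print(f"cells kept: {mask.sum()}, mean inside: {np.nanmean(clipped):.1f}")
    ```

    A production service would additionally read the grid and field from an OPeNDAP endpoint and serialize the masked cells into the requested vector format, which is the translation step the abstract describes.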

  5. Mapping the spatial distribution of global anthropogenic mercury atmospheric emission inventories

    NASA Astrophysics Data System (ADS)

    Wilson, Simon J.; Steenhuisen, Frits; Pacyna, Jozef M.; Pacyna, Elisabeth G.

    This paper describes the procedures employed to spatially distribute global inventories of anthropogenic emissions of mercury to the atmosphere, prepared by Pacyna, E.G., Pacyna, J.M., Steenhuisen, F., Wilson, S. [2006. Global anthropogenic mercury emission inventory for 2000. Atmospheric Environment, this issue, doi:10.1016/j.atmosenv.2006.03.041], and briefly discusses the results of this work. A new spatially distributed global emission inventory for the (nominal) year 2000, and a revised version of the 1995 inventory, are presented. Emission estimates for total mercury and major species groups are distributed within latitude/longitude-based grids with resolutions of 1°×1° and 0.5°×0.5°. A key component in the spatial distribution procedure is the use of population distribution as a surrogate parameter to distribute emissions from sources that cannot be accurately geographically located. In this connection, new gridded population datasets were prepared, based on the CIESIN GPW3 datasets (CIESIN, 2004. Gridded Population of the World (GPW), Version 3. Center for International Earth Science Information Network (CIESIN), Columbia University and Centro Internacional de Agricultura Tropical (CIAT). GPW3 data are available at http://beta.sedac.ciesin.columbia.edu/gpw/index.jsp). The spatially distributed emission inventories and population datasets prepared in the course of this work are available on the Internet at www.amap.no/Resources/HgEmissions/
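    The surrogate-distribution step reduces to a proportional allocation: each grid cell receives a share of the national total equal to its share of the national population. A worked example with placeholder numbers:

    ```python
    # Proportional allocation of a national emission total over grid cells,
    # using population as the surrogate. Placeholder numbers, not the paper's.

    national_total_kg = 1000.0
    cell_population = {"cell_1": 200, "cell_2": 500, "cell_3": 300}

    total_pop = sum(cell_population.values())
    gridded = {cell: national_total_kg * pop / total_pop
               for cell, pop in cell_population.items()}
    print(gridded)  # {'cell_1': 200.0, 'cell_2': 500.0, 'cell_3': 300.0}
    ```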

  6. The Semantic Retrieval of Spatial Data Service Based on Ontology in SIG

    NASA Astrophysics Data System (ADS)

    Sun, S.; Liu, D.; Li, G.; Yu, W.

    2011-08-01

    Research on SIG (Spatial Information Grid) mainly addresses the problem of how to connect different computing resources so that users can use all the resources in the Grid transparently and seamlessly. In SIG, spatial data services are described by various specifications that use different meta-information for each kind of service. This kind of standardization cannot resolve the problem of semantic heterogeneity, which may prevent users from obtaining the required resources. This paper attempts to solve two kinds of semantic heterogeneity (name heterogeneity and structure heterogeneity) in spatial data service retrieval based on ontology; in addition, using the hierarchical subsumption relationships among concepts in the ontology, query words can be extended so that more resources can be matched and found for the user. These applications of ontology in spatial data resource retrieval help to improve the capability of keyword matching and to find more related resources.

  7. Evolving Distributed Generation Support Mechanisms: Case Studies from United States, Germany, United Kingdom, and Australia (Chinese translation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Shengru; Lowder, Travis R; Tian, Tian

    This is the Chinese translation of NREL/TP-6A20-67613. This report expands on a previous National Renewable Energy Laboratory (NREL) technical report (Lowder et al. 2015) that focused on the United States' unique approach to distributed generation photovoltaics (DGPV) support policies and business models. While the focus of that report was largely historical (i.e., detailing the policies and market developments that led to the growth of DGPV in the United States), this report looks forward, narrating recent changes to laws and regulations as well as the ongoing dialogues over how to incorporate distributed generation (DG) resources onto the electric grid. This report also broadens the scope of Lowder et al. (2015) to include additional countries and technologies. DGPV and storage are the principal technologies under consideration (owing to market readiness and deployment volumes), but the report also contemplates any generation resource that is (1) on the customer side of the meter, (2) used to, at least partly, offset a host's energy consumption, and/or (3) potentially available to provide grid support (e.g., through peak shaving and load shifting, ancillary services, and other means).

  8. Fungal genome resources at NCBI.

    PubMed

    Robbertse, B; Tatusova, T

    2011-09-01

    The National Center for Biotechnology Information (NCBI) is well known for the nucleotide sequence archive GenBank and the sequence analysis tool BLAST. However, NCBI integrates many types of biomolecular data from a variety of sources and makes them available to the scientific community as interactive web resources as well as organized releases of bulk data. These tools are available to explore and compare fungal genomes. Searching all databases with Fungi [organism] at http://www.ncbi.nlm.nih.gov/ is the quickest way to find resources of interest with fungal entries. Some tools, though, are resource specific and can be accessed indirectly from a particular database in the Entrez system. These include graphical viewers and comparative analysis tools such as TaxPlot, TaxMap and UniGene DDD (found via the UniGene homepage). Gene and BioProject pages also serve as portals to external data such as community annotation websites, BioGRID and UniProt. There are many different ways of accessing genomic data at NCBI. Depending on the focus and goal of a research project or the level of interest, a user would select a particular route for accessing genomic databases and resources. This review article describes methods of accessing fungal genome data and provides examples that illustrate the use of analysis tools.

  9. Online production validation in a HEP environment

    NASA Astrophysics Data System (ADS)

    Harenberg, T.; Kuhl, T.; Lang, N.; Mättig, P.; Sandhoff, M.; Schwanenberger, C.; Volkmer, F.

    2017-03-01

    In high energy physics (HEP) event simulations, petabytes of data are processed and stored, requiring millions of CPU-years. This enormous demand for computing resources is handled by centers distributed worldwide, which form part of the LHC computing grid. The consumption of such a large amount of resources demands efficient simulation production and early detection of potential errors. In this article we present a new monitoring framework for grid environments, which polls a measure of data quality during job execution. This online monitoring facilitates the early detection of configuration errors (especially in simulation parameters), and may thus contribute to significant savings in computing resources.

  10. Coverage-maximization in networks under resource constraints.

    PubMed

    Nandi, Subrata; Brusch, Lutz; Deutsch, Andreas; Ganguly, Niloy

    2010-06-01

    Efficient coverage algorithms are essential for information search or dispersal in all kinds of networks. We define an extended coverage problem which accounts for the constrained resources of consumed bandwidth B and time T. Our solution to the network challenge is here studied for regular grids only. Using methods from statistical mechanics, we develop a coverage algorithm with proliferating message packets and a temporally modulated proliferation rate. The algorithm performs as efficiently as a single random walker but O(B^((d-2)/d)) times faster, resulting in significant service speed-up on a regular grid of dimension d. The algorithm is numerically compared to a class of generalized proliferating random walk strategies and, on regular grids, is shown to perform best in terms of the product metric of speed and efficiency.
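    A toy version of the proliferating-walker idea is easy to state: walkers step randomly on a periodic grid and occasionally copy themselves, trading bandwidth (more packets) for coverage time. The sketch below uses a constant proliferation rate for brevity, whereas the algorithm above modulates the rate in time; all parameters are illustrative assumptions.

    ```python
    # Toy proliferating random walk on a periodic 2D grid: packets walk
    # randomly and occasionally copy themselves. Illustrative only.

    import random

    random.seed(1)
    N = 50                          # N x N grid with periodic boundaries
    STEPS, P_PROLIFERATE = 300, 0.02
    MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

    walkers = [(N // 2, N // 2)]    # all packets start at the source node
    visited = {walkers[0]}

    for _ in range(STEPS):
        next_walkers = []
        for x, y in walkers:
            dx, dy = random.choice(MOVES)
            pos = ((x + dx) % N, (y + dy) % N)
            visited.add(pos)
            next_walkers.append(pos)
            if random.random() < P_PROLIFERATE:   # packet copies itself
                next_walkers.append(pos)
        walkers = next_walkers

    print(f"covered {len(visited)} of {N * N} cells using {len(walkers)} packets")
    ```

    The final walker count is a rough proxy for the bandwidth consumed, which is the quantity the extended coverage problem constrains alongside time.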

  11. Grid parity analysis of stand-alone hybrid microgrids: A comparative study of Germany, Pakistan, South Africa and the United States

    NASA Astrophysics Data System (ADS)

    Siddiqui, Jawad M.

    Grid parity for alternative energy resources occurs when the cost of electricity generated from the source is lower than or equal to the purchase price of power from the electricity grid. This thesis quantitatively analyzes the evolution of hybrid stand-alone microgrids in the US, Germany, Pakistan and South Africa to determine grid parity for a solar PV/diesel/battery hybrid system. The Energy System Model (ESM) and NREL's Hybrid Optimization of Multiple Energy Resources (HOMER) software are used to simulate microgrid operation and determine a levelized cost of electricity (LCOE) figure for each location. This cost per kWh is then compared with two distinct estimates of future retail electricity prices at each location to determine grid parity points. The analysis reveals that future estimates of LCOE for such hybrid stand-alone microgrids range within 35-55 cents/kWh over the 25-year study period. Grid parity occurs earlier in locations with higher power prices or unreliable grids. For Pakistan, grid parity has already arrived, while Germany hits parity between 2023 and 2029. Results for South Africa suggest a parity range of 2040-2045. In the US, places with low grid prices do not hit parity during the study period. Sensitivity analysis reveals the significant impact of financing and the cost of capital on these grid parity points, particularly in the developing markets of Pakistan and South Africa. Overall, the study concludes that variations in energy markets may determine the fate of emerging energy technologies like microgrids. However, policy interventions have a significant impact on the final outcome, such as grid parity in this case. Measures such as eliminating policy uncertainty and improving financing can help these grids overcome barriers in developing economies, where they may find greater use much earlier in time.
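    The parity comparison itself is a simple search over the study horizon: escalate the retail tariff, decay the microgrid LCOE, and report the first crossing year. The sketch below uses placeholder trajectories, not the ESM/HOMER outputs or the thesis's numbers.

    ```python
    # Hedged sketch of the parity search: first year in which an assumed
    # microgrid LCOE falls below an escalating retail tariff. Placeholder data.

    def parity_year(lcoe, lcoe_decline, tariff, tariff_growth,
                    start_year=2015, horizon=25):
        """Return the first crossing year, or None if parity is never reached."""
        for year in range(start_year, start_year + horizon):
            if lcoe <= tariff:
                return year
            lcoe *= 1.0 - lcoe_decline       # technology costs fall
            tariff *= 1.0 + tariff_growth    # retail prices escalate
        return None

    # A high-tariff market reaches parity early; a cheap, flat-priced grid
    # never does within the study period.
    print(parity_year(0.45, 0.03, 0.30, 0.05))  # -> 2021
    print(parity_year(0.45, 0.03, 0.12, 0.00))  # -> None
    ```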

  12. Feasibility Study of Economics and Performance of Solar Photovoltaics at Johnson County Landfill

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salasovich, J.; Mosey, G.

    2012-01-01

    The U.S. Environmental Protection Agency (EPA), in accordance with the RE-Powering America's Land initiative, selected the Johnson County Landfill in Shawnee, Kansas, for a feasibility study of renewable energy production. Citizens of Shawnee, city planners, and site managers are interested in redevelopment uses for landfills in Kansas that are particularly well suited for grid-tied solar photovoltaic (PV) installation. This report assesses the Johnson County Landfill for possible grid-tied PV installations and estimates the cost, performance, and site impacts of three different PV options: crystalline silicon (fixed tilt), crystalline silicon (single-axis tracking), and thin film (fixed tilt). Each option represents a standalone system that can be sized to use an entire available site area. In addition, the report outlines financing options that could assist in the implementation of a system. The feasibility of PV systems installed on landfills is highly impacted by the available area for an array, solar resource, operating status, landfill cap status, distance to transmission lines, and distance to major roads. The report findings are applicable to other landfills in the surrounding area.

  13. Elastic Extension of a CMS Computing Centre Resources on External Clouds

    NASA Astrophysics Data System (ADS)

    Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.

    2016-10-01

    After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may, however, generate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the “Cloud Bursting” of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time they serve as an extension of the farm for the local usage. The amount of resources allocated can thus be elastically modeled to cope with the needs of the CMS experiment and local users. Moreover, a direct access/integration of OpenStack resources to the CMS workload management system is explored. In this paper we present this approach, we report on the performances of the on-demand allocated resources, and we discuss the lessons learned and the next steps.

  14. The ATLAS Production System Evolution: New Data Processing and Analysis Paradigm for the LHC Run2 and High-Luminosity

    NASA Astrophysics Data System (ADS)

    Barreiro, F. H.; Borodin, M.; De, K.; Golubkov, D.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Padolski, S.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The second generation of the ATLAS Production System, called ProdSys2, is a distributed workload manager that runs hundreds of thousands of jobs daily, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as grids, clouds, supercomputers and volunteer computers. The system dynamically assigns a group of jobs (a task) to a group of geographically distributed computing resources. Dynamic assignment and resource utilization is one of the major features of the system; it did not exist in the earliest versions of the production system, where the Grid resource topology was predefined using national and/or geographical patterns. The Production System has a sophisticated job fault-recovery mechanism, which allows multi-terabyte tasks to run efficiently without human intervention. We have implemented a “train” model and open-ended production, which allow tasks to be submitted automatically as soon as a new set of data is available and allow physics-group data processing and analysis to be chained with central production by the experiment. We present an overview of the ATLAS Production System and the features and architecture of its major components: task definition, web user interface and monitoring. We describe the important design decisions and lessons learned from operational experience during the first year of LHC Run2. We also report the performance of the designed system and how various workflows, such as data (re)processing, Monte Carlo and physics group production, and user analysis, are scheduled and executed within one production system on heterogeneous computing resources.

  15. Proposal for grid computing for nuclear applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.

    2014-02-12

    The use of computer clusters for the computational sciences, including computational physics, is vital, as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application could use resources within the grid to speed up the computing process.

  16. Grid-Level Application of Electrical Energy Storage: Example Use Cases in the United States and China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yingchen; Gevorgian, Vahan; Wang, Caixia

    Electrical energy storage (EES) systems are expected to play an increasing role in helping the United States and China, the world's largest economies with the two largest power systems, meet the challenges of integrating more variable renewable resources and enhancing the reliability of power systems by improving the operating capabilities of the electric grid. EES systems are becoming integral components of a resilient and efficient grid through a diverse set of applications that include energy management, load shifting, frequency regulation, grid stabilization, and voltage support.

  17. Strategies and Decision Support Systems for Integrating Variable Energy Resources in Control Centers for Reliable Grid Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Lawrence E.

    This report provides findings from the field regarding the best ways in which to guide operational strategies, business processes and control room tools to support the integration of renewable energy into electrical grids.

  18. Energy Systems Integration: Demonstrating Distributed Resource Communications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-01-01

    Overview fact sheet about the Electric Power Research Institute (EPRI) and Schneider Electric Integrated Network Testbed for Energy Grid Research and Technology Experimentation (INTEGRATE) project at the Energy Systems Integration Facility. INTEGRATE is part of the U.S. Department of Energy's Grid Modernization Initiative.

  19. Demand Response Availability Profiles for California in the Year 2020

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsen, Daniel; Sohn, Michael; Piette, Mary Ann

    2014-11-01

    Demand response (DR) is considered a valuable resource for keeping the electrical grid stable and efficient and for deferring upgrades to generation, transmission, and distribution systems. Optimal planning, however, requires simulations to determine how much infrastructure upgrading can be deferred. Production cost modeling is a technique that simulates the dispatch of generators to meet demand and reserves in each hour of the year, at minimal cost. By integrating demand response resources into a production cost model (PCM), their value to the grid can be estimated and used to inform operations and infrastructure planning. DR availability profiles and constraints for 13 end-uses in California for the year 2020 were developed by Lawrence Berkeley National Laboratory (LBNL) and integrated into a production cost model by Lawrence Livermore National Laboratory (LLNL) for the California Energy Commission's Value of Energy Storage and Demand Response for Renewable Integration in California Study. This report summarizes the process for developing the DR availability profiles for California and their aggregate capabilities. While LBNL provided potential hourly DR profiles for a regulation product in the ancillary services market and a five-minute load-following product in the energy market for LLNL's study, additional results for contingency reserves and an assumed flexible product are also defined. These additional products are included in the analysis for managing the high ramps associated with renewable generation, and capacity products are also presented in this report.
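
    The hourly availability profiles at the heart of this work can be pictured as an 8760-element vector of curtailable megawatts per end-use. The sketch below builds one such vector; the afternoon window, summer-only season, and megawatt value are invented for illustration and are not LBNL's numbers.

        import numpy as np

        HOURS_PER_YEAR = 8760

        def availability_profile(window=(12, 18), mw=50.0, summer_only=True):
            """Hourly DR availability: `mw` of load shed offered during an
            afternoon window; all values are hypothetical."""
            avail = np.zeros(HOURS_PER_YEAR)
            for h in range(HOURS_PER_YEAR):
                day_of_year, hour = divmod(h, 24)
                month = 1 + day_of_year // 30  # rough month index for the sketch
                if summer_only and not (6 <= month <= 9):
                    continue
                if window[0] <= hour < window[1]:
                    avail[h] = mw
            return avail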

  20. The Marine Geoscience Data System and the Global Multi-Resolution Topography Synthesis: Online Resources for Exploring Ocean Mapping Data

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Carbotte, S. M.

    2016-02-01

    The Marine Geoscience Data System (MGDS: www.marine-geo.org) provides a suite of tools and services for free public access to data acquired throughout the global oceans, including maps, grids, near-bottom photos, and geologic interpretations that are essential for habitat characterization and marine spatial planning. Users can explore, discover, and download data through a combination of APIs and front-end interfaces that include dynamic service-driven maps, a geospatially enabled search engine, and an easy-to-navigate user interface for browsing and discovering related data. MGDS offers domain-specific data curation with a team of scientists and data specialists who use a suite of back-end tools for introspection of data files and metadata assembly to verify data quality and ensure that data are well documented for long-term preservation and re-use. Funded by the NSF as part of the multi-disciplinary IEDA Data Facility, MGDS also offers data DOI registration and links between data and scientific publications. MGDS produces and curates the Global Multi-Resolution Topography Synthesis (GMRT: gmrt.marine-geo.org), a continuously updated digital elevation model that seamlessly integrates multi-resolution elevation data from a variety of sources, including the GEBCO 2014 (approximately 1 km resolution) and International Bathymetric Chart of the Southern Ocean (approximately 500 m) compilations. A significant component of GMRT is ship-based multibeam sonar data, publicly available through NOAA's National Centers for Environmental Information, that are cleaned and quality controlled by the MGDS team and gridded at their full spatial resolution (typically 100 m in the deep sea). Additional components include gridded bathymetry products contributed by individual scientists (up to meter-scale resolution in places), publicly accessible regional bathymetry, and high-resolution terrestrial elevation data. New data are added to GMRT on an ongoing basis, with two scheduled releases per year. GMRT is available as both gridded data and images that can be viewed and downloaded directly through the Java application GeoMapApp (www.geomapapp.org) and the web-based GMRT MapTool. In addition, the GMRT GridServer API provides programmatic access to grids, imagery, profiles, and single-point elevation values.
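
    A hedged example of the programmatic access mentioned above: querying a single point elevation over HTTP. The endpoint path, parameter names, and response format below are assumptions based on the abstract's description; consult the GMRT documentation for the actual GridServer interface.

        import requests

        def point_elevation(lat, lon):
            """Fetch one elevation value; URL and parameters are hypothetical."""
            resp = requests.get(
                "https://www.gmrt.org/services/PointServer",
                params={"latitude": lat, "longitude": lon, "format": "text/plain"},
                timeout=30,
            )
            resp.raise_for_status()
            return float(resp.text.strip())

        print(point_elevation(19.7, -156.1))  # metres, negative below sea level (assumed)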

  1. North-East Asian Super Grid: Renewable energy mix and economics

    NASA Astrophysics Data System (ADS)

    Breyer, Christian; Bogdanov, Dmitrii; Komoto, Keiichi; Ehara, Tomoki; Song, Jinsoo; Enebish, Namjil

    2015-08-01

    Further development of the North-East Asian energy system is at a crossroads due to severe limitations of the current conventional-energy-based system. For North-East Asia, it is proposed that the excellent solar and wind resources of the Gobi desert could enable the transformation towards a 100% renewable energy system. An hourly resolved model describes an energy system for North-East Asia, subdivided into 14 regions interconnected by high-voltage direct current (HVDC) transmission grids. Simulations are made for highly centralized, decentralized and country-wide grid scenarios. The resulting total-system levelized costs of electricity (LCOE) are 0.065 and 0.081 €/(kW·h) for the centralized and decentralized approaches, respectively, for 2030 assumptions. The presented results for 100% renewable-resource-based energy systems are lower in LCOE by about 30-40% than recent findings in Europe for conventional alternatives. This research clearly indicates that a 100% renewable-resource-based energy system is the real policy option.
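
    The LCOE figures quoted above follow the standard definition: discounted lifetime costs divided by discounted lifetime energy. A minimal sketch, with hypothetical plant parameters chosen only to land near the quoted range:

        def lcoe(capex, annual_opex, annual_energy_kwh, years, rate):
            """Levelized cost of electricity: discounted lifetime costs
            divided by discounted lifetime energy."""
            costs = capex + sum(annual_opex / (1 + rate) ** t for t in range(1, years + 1))
            energy = sum(annual_energy_kwh / (1 + rate) ** t for t in range(1, years + 1))
            return costs / energy

        # Hypothetical plant, per installed kW: EUR 1200 capex, EUR 24/yr opex,
        # 1800 full-load hours, 25-year life, 7% discount rate.
        print(lcoe(1200.0, 24.0, 1800.0, 25, 0.07))  # ~0.07 EUR per kWh

    For these invented inputs the result is roughly 0.07 EUR/kWh, between the two scenario values reported in the abstract.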

  2. A framework supporting the development of a Grid portal for analysis based on ROI.

    PubMed

    Ichikawa, K; Date, S; Kaishima, T; Shimojo, S

    2005-01-01

    In our research on brain function analysis, users require two different simultaneous types of processing: interactive processing of a specific part of the data and high-performance batch processing of an entire dataset. The difference between these two types of processing lies in whether or not the analysis is restricted to data in a region of interest (ROI). In this study, we propose a Grid portal with a mechanism to freely assign computing resources to users in a Grid environment according to these two different processing requirements. We constructed a Grid portal that integrates interactive and batch processing through the following two mechanisms. First, a job steering mechanism controls job execution based on user-tagged priority across organizations with heterogeneous computing resources; interactive jobs are processed in preference to batch jobs by this mechanism. Second, a priority-based result delivery mechanism administers a ranking of data significance. The portal ensures the turn-around time of interactive processing through the priority-based job control mechanism and provides users with quality of service (QoS) for interactive processing. Users can access the analysis results of interactive jobs in preference to those of batch jobs. The Grid portal has also achieved high-performance computation for MEG analysis with batch processing in the Grid environment. The priority-based job control mechanism makes it possible to freely assign computing resources to users' requirements, and the achievement of high-performance computation contributes greatly to the overall progress of brain science. The portal has thus made it possible for users to flexibly apply large computational power to whatever they want to analyze.
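
    The two-tier scheduling policy described above, in which interactive ROI jobs always run before batch jobs, can be sketched with a priority queue; the two-level priority and the job strings are illustrative, not the portal's implementation.

        import heapq
        import itertools

        INTERACTIVE, BATCH = 0, 1      # lower value = higher scheduling priority
        _counter = itertools.count()   # tie-breaker preserving submission order
        _queue = []

        def submit(job, kind):
            heapq.heappush(_queue, (kind, next(_counter), job))

        def next_job():
            """Pop the highest-priority job: any interactive (ROI) job runs
            before every batch job, mirroring the portal's QoS guarantee."""
            return heapq.heappop(_queue)[2] if _queue else None

        submit("whole-dataset MEG analysis", BATCH)
        submit("ROI analysis for user A", INTERACTIVE)
        print(next_job())  # -> "ROI analysis for user A"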

  3. Overgeneration from Solar Energy in California. A Field Guide to the Duck Chart

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denholm, Paul; O'Connell, Matthew; Brinkman, Gregory

    In 2013, the California Independent System Operator published the 'duck chart,' which shows a significant drop in mid-day net load on a spring day as solar photovoltaics (PV) are added to the system. The chart raises concerns that the conventional power system will be unable to accommodate the ramp rate and range needed to fully utilize solar energy, particularly on days characterized by the duck shape. This could result in 'overgeneration' and curtailed renewable energy, increasing its costs and reducing its environmental benefits. This paper explores the duck chart in detail, examining how much PV might need to be curtailed if additional grid flexibility measures are not taken, and how curtailment rates can be decreased by changing grid operational practices. It finds that under "business-as-usual" types of assumptions and corresponding levels of grid flexibility in California, solar penetrations as low as 20% of annual energy could lead to marginal curtailment rates that exceed 30%. However, by allowing (or requiring) distributed PV and storage (including new installations that are part of the California storage mandate) to provide grid services, system flexibility could be greatly enhanced. Doing so could significantly reduce curtailment and allow much greater penetration of variable generation resources. Overall, the work described in this paper points to the need to fully integrate distributed resources into grid system planning and operations to allow maximum use of the solar resource.
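
    The overgeneration mechanism behind the duck chart reduces to a simple hourly calculation: PV must be curtailed whenever net load (load minus PV) would drop below the minimum stable output of the committed fleet. A minimal sketch with invented numbers:

        def curtailment(load, pv, min_gen):
            """Hourly curtailed PV energy when net load would fall below the
            fleet's minimum stable generation level (all values in MW)."""
            curtailed = []
            for l, p in zip(load, pv):
                net = l - p
                curtailed.append(max(0.0, min_gen - net))
            return curtailed

        # Hypothetical spring day, six daylight hours, 12 GW minimum generation.
        load = [22000, 24000, 25000, 24000, 23000, 26000]
        pv   = [ 6000, 11000, 14000, 13000,  9000,  2000]
        print(curtailment(load, pv, 12000))  # -> [0.0, 0.0, 1000.0, 1000.0, 0.0, 0.0]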

  4. Overgeneration from Solar Energy in California - A Field Guide to the Duck Chart

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denholm, Paul; Brinkman, Gregory; Jorgenson, Jennie

    In 2013, the California Independent System Operator published the "duck chart," which shows a significant drop in mid-day net load on a spring day as solar photovoltaics (PV) are added to the system. The chart raises concerns that the conventional power system will be unable to accommodate the ramp rate and range needed to fully utilize solar energy, particularly on days characterized by the duck shape. This could result in "overgeneration" and curtailed renewable energy, increasing its costs and reducing its environmental benefits. This paper explores the duck chart in detail, examining how much PV might need to be curtailed if additional grid flexibility measures are not taken, and how curtailment rates can be decreased by changing grid operational practices. It finds that under business-as-usual types of assumptions and corresponding levels of grid flexibility in California, solar penetrations as low as 20 percent of annual energy could lead to marginal curtailment rates that exceed 30 percent. However, by allowing (or requiring) distributed PV and storage (including new installations that are part of the California storage mandate) to provide grid services, system flexibility could be greatly enhanced. Doing so could significantly reduce curtailment and allow much greater penetration of variable generation resources in achieving a 50 percent renewable portfolio standard. Overall, the work described in this paper points to the need to fully integrate distributed resources into grid system planning and operations to allow maximum use of the solar resource.

  5. Multi-Megawatt-Scale Power-Hardware-in-the-Loop Interface for Testing Ancillary Grid Services by Converter-Coupled Generation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koralewicz, Przemyslaw J; Gevorgian, Vahan; Wallen, Robert B

    Power-hardware-in-the-loop (PHIL) is a simulation tool that can support electrical systems engineers in the development and experimental validation of novel, advanced control schemes that ensure the robustness and resiliency of electrical grids that have high penetrations of low-inertia variable renewable resources. With PHIL, the impact of the device under test on a generation or distribution system can be analyzed using a real-time simulator (RTS). PHIL allows for the interconnection of the RTS with a 7 megavolt ampere (MVA) power amplifier to test multi-megawatt renewable assets available at the National Wind Technology Center (NWTC). This paper addresses issues related to the development of a PHIL interface that allows testing hardware devices at actual scale. In particular, the novel PHIL interface algorithm and high-speed digital interface, which minimize the critical loop delay, are discussed.

  6. Multi-Megawatt-Scale Power-Hardware-in-the-Loop Interface for Testing Ancillary Grid Services by Converter-Coupled Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koralewicz, Przemyslaw J; Gevorgian, Vahan; Wallen, Robert B

    Power-hardware-in-the-loop (PHIL) is a simulation tool that can support electrical systems engineers in the development and experimental validation of novel, advanced control schemes that ensure the robustness and resiliency of electrical grids that have high penetrations of low-inertia variable renewable resources. With PHIL, the impact of the device under test on a generation or distribution system can be analyzed using a real-time simulator (RTS). PHIL allows for the interconnection of the RTS with a 7 megavolt ampere (MVA) power amplifier to test multi-megawatt renewable assets available at the National Wind Technology Center (NWTC). This paper addresses issues related to the development of a PHIL interface that allows testing hardware devices at actual scale. In particular, the novel PHIL interface algorithm and high-speed digital interface, which minimize the critical loop delay, are discussed.

  7. Test Protocols for Advanced Inverter Interoperability Functions – Main Document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jay Dean; Gonzalez, Sigifredo; Ralph, Mark E.

    2013-11-01

    Distributed energy resources (DER) such as photovoltaic (PV) systems, when deployed on a large scale, are capable of significantly influencing the operation of power systems. Looking to the future, stakeholders are working on standards to make it possible to manage the potentially complex interactions between DER and the power system. In 2009, the Electric Power Research Institute (EPRI), Sandia National Laboratories (SNL) with the U.S. Department of Energy (DOE), and the Solar Electric Power Association (SEPA) initiated a large industry collaborative to identify and standardize definitions for a set of DER grid support functions. While the initial effort concentrated on grid-tied PV inverters and energy storage systems, the concepts have applicability to all DER. A partial product of this ongoing effort is a reference definitions document (IEC TR 61850-90-7, Object models for power converters in distributed energy resources (DER) systems) that has become a basis for expansion of related International Electrotechnical Commission (IEC) standards and is supported by the US National Institute of Standards and Technology (NIST) Smart Grid Interoperability Panel (SGIP). Some industry-led organizations advancing communications protocols have also embraced this work. As standards continue to evolve, it is necessary to develop test protocols to independently verify that inverters properly execute the advanced functions. Interoperability is assured by establishing common definitions for the functions and a method to test compliance with operational requirements. This document describes test protocols developed by SNL to evaluate the electrical performance and operational capabilities of PV inverters and energy storage, as described in IEC TR 61850-90-7. While many of these functions are not currently required by existing grid codes or may not be widely available commercially, the industry is rapidly moving in that direction. Interoperability issues are already apparent as some of these inverter capabilities are incorporated in large demonstration and commercial projects. The test protocols are intended to be used to verify acceptable performance of inverters within the standard framework described in IEC TR 61850-90-7. These test protocols, as they are refined and validated over time, can become precursors for future certification test procedures for DER advanced grid support functions.
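
    One of the advanced functions covered by such test protocols is the volt-var response, which IEC TR 61850-90-7 models as a piecewise-linear curve of reactive power versus terminal voltage. The sketch below evaluates such a curve; the particular curve points are illustrative, not values mandated by any standard or taken from the SNL protocols.

        def volt_var(v_pu, curve=((0.92, 0.44), (0.98, 0.0), (1.02, 0.0), (1.08, -0.44))):
            """Piecewise-linear volt-var response: reactive power (per unit of
            rated VA) as a function of terminal voltage (per unit)."""
            pts = sorted(curve)
            if v_pu <= pts[0][0]:
                return pts[0][1]
            for (v1, q1), (v2, q2) in zip(pts, pts[1:]):
                if v_pu <= v2:
                    # Linear interpolation between neighbouring curve points.
                    return q1 + (q2 - q1) * (v_pu - v1) / (v2 - v1)
            return pts[-1][1]

        print(volt_var(0.95))  # -> 0.22: inject vars to support low voltage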

  8. Test Protocols for Advanced Inverter Interoperability Functions - Appendices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jay Dean; Gonzalez, Sigifredo; Ralph, Mark E.

    2013-11-01

    Distributed energy resources (DER) such as photovoltaic (PV) systems, when deployed on a large scale, are capable of significantly influencing the operation of power systems. Looking to the future, stakeholders are working on standards to make it possible to manage the potentially complex interactions between DER and the power system. In 2009, the Electric Power Research Institute (EPRI), Sandia National Laboratories (SNL) with the U.S. Department of Energy (DOE), and the Solar Electric Power Association (SEPA) initiated a large industry collaborative to identify and standardize definitions for a set of DER grid support functions. While the initial effort concentrated on grid-tied PV inverters and energy storage systems, the concepts have applicability to all DER. A partial product of this ongoing effort is a reference definitions document (IEC TR 61850-90-7, Object models for power converters in distributed energy resources (DER) systems) that has become a basis for expansion of related International Electrotechnical Commission (IEC) standards and is supported by the US National Institute of Standards and Technology (NIST) Smart Grid Interoperability Panel (SGIP). Some industry-led organizations advancing communications protocols have also embraced this work. As standards continue to evolve, it is necessary to develop test protocols to independently verify that inverters properly execute the advanced functions. Interoperability is assured by establishing common definitions for the functions and a method to test compliance with operational requirements. This document describes test protocols developed by SNL to evaluate the electrical performance and operational capabilities of PV inverters and energy storage, as described in IEC TR 61850-90-7. While many of these functions are not currently required by existing grid codes or may not be widely available commercially, the industry is rapidly moving in that direction. Interoperability issues are already apparent as some of these inverter capabilities are incorporated in large demonstration and commercial projects. The test protocols are intended to be used to verify acceptable performance of inverters within the standard framework described in IEC TR 61850-90-7. These test protocols, as they are refined and validated over time, can become precursors for future certification test procedures for DER advanced grid support functions.

  9. Release of a 10-m-resolution DEM for the whole Italian territory: a new, freely available resource for research purposes

    NASA Astrophysics Data System (ADS)

    Tarquini, S.; Nannipieri, L.; Favalli, M.; Fornaciai, A.; Vinci, S.; Doumaz, F.

    2012-04-01

    Digital elevation models (DEMs) are fundamental in any kind of environmental or morphological study. DEMs are obtained from a variety of sources and generated in several ways. Nowadays, a few global-coverage elevation datasets are available for free (e.g., SRTM, http://www.jpl.nasa.gov/srtm; ASTER, http://asterweb.jpl.nasa.gov/). When the matrix of a DEM is also used for computational purposes, the choice of the elevation dataset that best suits the target of the study is crucial. The increasing use of DEM-based numerical simulation tools (e.g., for gravity-driven mass flows) would benefit greatly from topography of higher resolution and higher accuracy than is available at planetary scale, yet such elevation datasets are neither easily nor freely available for all countries worldwide. Here we introduce a new web resource that makes a 10-m-resolution DEM of the whole Italian territory freely available for research purposes. The creation of this elevation dataset was presented by Tarquini et al. (2007). The DEM was obtained in triangular irregular network (TIN) format starting from heterogeneous vector datasets, consisting mostly of elevation contour lines and elevation points derived from several sources. The input vector database was carefully cleaned up to obtain an improved seamless TIN, refined by using the DEST algorithm to improve the Delaunay tessellation. The whole TINITALY/01 DEM was converted to grid format (10-m cell size) according to a tiled structure composed of 193 square tiles, each 50 km on a side. The grid database consists of more than 3 billion cells and occupies almost 12 GB of disk space. A web GIS has been created (http://tinitaly.pi.ingv.it/) where a seamless layer of full-resolution (10 m) images derived from the whole DEM (in both color-shaded and anaglyph modes) is open for browsing. Accredited users are allowed to download the elevation dataset.
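
    The quoted size of the gridded database can be sanity-checked from the tiling: 193 tiles of 50 km per side at a 10-m cell size give an upper bound consistent with the "more than 3 billion cells" figure, presumably because some tiles are only partially filled.

        tile_side_m, cell_m, n_tiles = 50_000, 10, 193
        cells_per_tile = (tile_side_m // cell_m) ** 2   # 5000 x 5000 = 25,000,000
        print(f"{cells_per_tile * n_tiles:,}")          # 4,825,000,000-cell upper bound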

  10. Integration of Grid and Local Batch Resources at DESY

    NASA Astrophysics Data System (ADS)

    Beyer, Christoph; Finnern, Thomas; Gellrich, Andreas; Hartmann, Thomas; Kemp, Yves; Lewendel, Birgit

    2017-10-01

    As one of the largest resource centres, DESY has to support the differing workflows of users from various scientific backgrounds: HEP experiments in the WLCG or Belle II, local HEP users, and physicists from other fields such as photon science or accelerator development. By abandoning specific worker-node setups in favour of generic flat nodes, with middleware resources provided via CVMFS, we gain the flexibility to subsume different use cases in a homogeneous environment. Grid jobs and the local batch system are managed in an HTCondor-based setup that accepts pilot, user, and containerized jobs. The unified setup allows dynamic re-assignment of resources between the different use cases. Monitoring is implemented on global batch-system metrics as well as at a per-job level using the corresponding cgroup information.

  11. Wind and Solar on the Power Grid: Myths and Misperceptions, Greening the Grid (Spanish Version)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denholm, Paul; Cochran, Jaquelin; Brancucci Martinez-Anido, Carlo

    This is the Spanish version of 'Greening the Grid - Wind and Solar on the Power Grid: Myths and Misperceptions'. Wind and solar are inherently more variable and uncertain than the traditional dispatchable thermal and hydro generators that have historically provided a majority of grid-supplied electricity. The unique characteristics of variable renewable energy (VRE) resources have resulted in many misperceptions regarding their contribution to a low-cost and reliable power grid. Common areas of concern include: 1) the potential need for increased operating reserves, 2) the impact of variability and uncertainty on the operating costs and pollutant emissions of thermal plants, and 3) the technical limits on VRE penetration rates to maintain grid stability and reliability. This fact sheet corrects misperceptions in these areas.

  12. Developing Water Resource Security in a Greenhouse Gas Constrained Context - A Case Study in California

    NASA Astrophysics Data System (ADS)

    Tarroja, B.; Aghakouchak, A.; Samuelsen, S.

    2015-12-01

    The onset of drought conditions in regions such as California due to shortfalls in precipitation has brought renewed attention to the vulnerability of our water supply paradigm to changes in climate patterns. In the face of a changing climate that can exacerbate drought conditions in already dry areas, building resiliency into our water supply infrastructure requires some decoupling of water supply availability from climate behavior through conservation, efficiency, and alternative water supply measures such as desalination and water reuse. The installation of these measures requires varying degrees of direct energy input and/or affects the energy usage of the water supply infrastructure (conveyance, treatment, distribution, wastewater treatment). These impacts have implications for greenhouse gas emissions, through direct fuel usage or through effects on emissions from the electric grid. At the scale at which these measures may need to be deployed to secure water supply availability, especially under climate-change-impacted hydrology, they can potentially pose obstacles to meeting greenhouse gas emissions reduction and renewable utilization goals. Therefore, the portfolio of these measures must be chosen such that detrimental impacts on greenhouse gas emissions are minimized. This study combines climate data with a water reservoir network model and an electric grid dispatch model of the California water-energy system to evaluate 1) the different pathways and the scale of alternative water resource measures needed to secure water supply availability and 2) the impacts of following these pathways on the ability to meet greenhouse gas and renewable utilization goals. It was found that, depending on the water supply measure portfolio implemented, impacts on greenhouse gas emissions and renewable utilization can be either beneficial or detrimental, and that optimizing the portfolio is more important under climate change conditions due to the scale of the measures required.

  13. Hybrid PV/Wind Power Systems Incorporating Battery Storage and Considering the Stochastic Nature of Renewable Resources

    NASA Astrophysics Data System (ADS)

    Barnawi, Abdulwasa Bakr

    Hybrid power generation systems and distributed generation technology are attracting more investment due to the growing demand for energy and the increasing awareness of emissions and their environmental impacts, such as global warming and pollution. The price fluctuation of crude oil is an additional reason for the leading oil-producing countries to consider renewable resources as an alternative. Saudi Arabia, the top oil-exporting country in the world, announced the "Saudi Arabia Vision 2030," which targets generating 9.5 GW of electricity from renewable resources. Two of the most promising renewable technologies are wind turbines (WT) and photovoltaic cells (PV). The integration or hybridization of photovoltaics and wind turbines with battery storage leads to higher adequacy and redundancy for both autonomous and grid-connected systems. This study presents a method for optimal generation unit planning that installs the proper number of solar cells, wind turbines, and batteries in such a way that the net present value (NPV) is minimized while the overall system redundancy and adequacy are maximized. A new renewable fraction technique (RFT) is used to perform the generation unit planning. RFT was tested and validated against particle swarm optimization and HOMER Pro under the same conditions and environment. Randomness and uncertainty in renewable resources and load are considered. Both autonomous and grid-connected system designs were adopted in the optimal generation unit planning process, and an uncertainty factor was designed and incorporated in both. In the autonomous hybrid system design model, a strategy including an additional amount of operating reserve, as a percentage of the hourly load, was adopted to deal with resource uncertainty, since the battery storage system is the only backup. In the grid-connected hybrid system design model, demand response was incorporated, in addition to the designed uncertainty factor, to overcome the impact of uncertainty and to perform energy trading between the hybrid grid utility and the main grid utility. After the generation unit planning was carried out and component sizing was determined, an adequacy evaluation was conducted by calculating the loss-of-load-expectation adequacy index for different contingency criteria, considering the probability of equipment failure. Finally, microgrid planning was conducted by finding the proper size and location for installing distributed generation units in a radial distribution network.
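
    The NPV objective used in the generation unit planning is the standard discounted cash-flow sum. A minimal sketch evaluating one candidate component mix; the component counts, prices, and revenues below are invented for illustration:

        def npv(cash_flows, rate):
            """Net present value of yearly cash flows (year 0 = upfront cost)."""
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

        # Hypothetical mix: upfront capex, then yearly revenue net of O&M.
        capex = -(40 * 3000 + 6 * 250_000 + 100 * 400)   # 40 PV kW, 6 WTs, 100 batteries
        yearly = 95_000 - 18_000                          # energy sales minus O&M
        flows = [capex] + [yearly] * 20
        print(round(npv(flows, 0.06)))

    A planner would evaluate this figure for every candidate mix and keep the one that best trades off cost against the adequacy indices described in the abstract.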

  14. Digital-map grids of mean-annual precipitation for 1961-90, and generalized skew coefficients of annual maximum streamflow for Oklahoma

    USGS Publications Warehouse

    Rea, A.H.; Tortorelli, R.L.

    1997-01-01

    This digital report contains two digital-map grids of data that were used to develop peak-flow regression equations in Tortorelli, 1997, 'Techniques for estimating peak-streamflow frequency for unregulated streams and streams regulated by small floodwater retarding structures in Oklahoma,' U.S. Geological Survey Water-Resources Investigations Report 97-4202. One data set is a grid of mean annual precipitation, in inches, based on the period 1961-90, for Oklahoma. The data set was derived from the PRISM (Parameter-elevation Regressions on Independent Slopes Model) mean annual precipitation grid for the United States, developed by Daly, Neilson, and Phillips (1994, 'A statistical-topographic model for mapping climatological precipitation over mountainous terrain:' Journal of Applied Meteorology, v. 33, no. 2, p. 140-158). The second data set is a grid of generalized skew coefficients of logarithms of annual maximum streamflow for Oklahoma streams less than or equal to 2,510 square miles in drainage area. This grid of skew coefficients is taken from figure 11 of Tortorelli and Bergman, 1985, 'Techniques for estimating flood peak discharges for unregulated streams and streams regulated by small floodwater retarding structures in Oklahoma,' U.S. Geological Survey Water-Resources Investigations Report 84-4358. To save disk space, the skew coefficient values have been multiplied by 100 and rounded to integers with two significant digits. The data sets are provided in an ASCII grid format.
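
    Decoding the stored grid therefore just means dividing the integers by 100. A small sketch for reading such an ASCII grid; the header convention and nodata value are assumptions about the file layout, not taken from the report:

        def read_skew_grid(path, nodata=-9999):
            """Read an ASCII grid of skew-times-100 integers, decode to floats."""
            rows = []
            with open(path) as f:
                for line in f:
                    vals = line.split()
                    if not vals or line[:1].isalpha():  # skip blanks and header lines
                        continue
                    rows.append([None if int(v) == nodata else int(v) / 100.0
                                 for v in vals])
            return rows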

  15. Using Micro-Synchrophasor Data for Advanced Distribution Grid Planning and Operations Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Emma; Kiliccote, Sila; McParland, Charles

    2014-07-01

    This report reviews the potential for distribution-grid phase-angle data that will be available from new micro-synchrophasors (µPMUs) to be utilized in existing distribution-grid planning and operations analysis. These data could augment the current diagnostic capabilities of grid analysis software, used in both planning and operations for applications such as fault location, and provide data for more accurate modeling of the distribution system. µPMUs are new distribution-grid sensors that will advance measurement and diagnostic capabilities and provide improved visibility of the distribution grid, enabling analysis of the grid's increasingly complex loads, which include features such as large volumes of distributed generation (DG). Large volumes of DG lead to concerns about continued reliable operation of the grid, due to changing power-flow characteristics and active generation with its own protection and control capabilities. Using µPMU data on the change in voltage phase angle between two points, in conjunction with new and existing distribution-grid planning and operational tools, is expected to enable model validation, state estimation, fault location, and renewable resource/load characterization. Our findings include: data measurement is outstripping the processing capabilities of planning and operational tools; not every tool can visualize a voltage phase-angle measurement to the degree of accuracy measured by advanced sensors, and the degree of measurement accuracy required for the distribution grid is not defined; solving methods cannot handle the high volumes of data generated by modern sensors, so new models and solving methods (such as graph trace analysis) are needed; and standardization of sensor-data communications platforms in planning and applications tools would allow integration of different vendors' sensors and advanced measurement devices. In addition, data from advanced sources such as µPMUs could be used to validate models to improve and ensure accuracy, providing information on normally estimated values such as underground conductor impedance and the characterization of complex loads. Although the input of high-fidelity data to existing tools will be challenging, µPMU data on phase angle (as well as other data from advanced sensors) will be useful for basic operational decisions that are based on a trend of changing data.
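
    A classic use of the phase-angle difference that µPMUs measure is estimating real-power flow across a segment of known reactance, via the standard approximation P = (V1·V2/X)·sin(delta). The feeder voltage, reactance, and angle below are invented for illustration:

        import math

        def power_flow_mw(v1_kv, v2_kv, angle_deg, reactance_ohm):
            """Approximate real power (MW) across a line of reactance X from the
            voltage phase-angle difference measured by two synchrophasors."""
            delta = math.radians(angle_deg)
            return (v1_kv * v2_kv / reactance_ohm) * math.sin(delta)

        # Hypothetical 12.47-kV feeder segment, X = 0.5 ohm, 0.6 degree shift.
        print(power_flow_mw(12.47, 12.47, 0.6, 0.5))  # ~3.3 MW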

  16. Wiki-Based Rapid Prototyping for Teaching-Material Design in E-Learning Grids

    ERIC Educational Resources Information Center

    Shih, Wen-Chung; Tseng, Shian-Shyong; Yang, Chao-Tung

    2008-01-01

    Grid computing environments with abundant resources can support innovative e-Learning applications, and are promising platforms for e-Learning. To support individualized and adaptive learning, teachers are encouraged to develop various teaching materials according to different requirements. However, traditional methodologies for designing teaching…

  17. IEEE 1547 Standards Advancing Grid Modernization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basso, Thomas; Chakraborty, Sudipta; Hoke, Andy

    Technology advances, including the development of advanced distributed energy resources (DER) and grid-integrated operations and controls functionalities, have surpassed the requirements in current standards and codes for DER interconnection with the distribution grid. The full revisions of IEEE Standards 1547 (requirements for DER-grid interconnection and interoperability) and 1547.1 (test procedures for conformance to 1547) are establishing requirements and best practices for state-of-the-art DER, including variable renewable energy sources. The revised standards will also address challenges associated with interoperability and transmission-level effects, in addition to strictly addressing distribution-grid needs. This paper provides the status and future direction of the ongoing development of the 1547 standards.

  18. [Analysis of expectations on the nurse's leadership in the light of Grid's theories].

    PubMed

    Trevizan, M A; Mendes, I A; Hayashida, M; Galvão, C M; Cury, S R

    2001-01-01

    Based on the understanding that leadership is a fundamental resource for nurses in health institutions, the aim of the authors was to analyze, under the light of Blake & Mouton's Grid Theories, the expectations of the Nursing team regarding nurse's leadership. The analysis was based on four investigations performed in different contexts of Brazilian Nursing and data were collected through the application of the "Grid & Leadership in Nursing Instrument" developed by Trevizan. Results show that the subjects prefer the Grid style 9.9. The authors discuss the results and emphasize the need for the development of leadership in Nursing.

  19. Role of Smart Grids in Integrating Renewable Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Speer, B.; Miller, M.; Schaffer, W.

    2015-05-27

    This report was prepared for the International Smart Grid Action Network (ISGAN), which periodically publishes briefs and discussion papers on key topics of smart grid development globally. The topic of this report was selected by a multilateral group of national experts participating in ISGAN Annex 4, a working group that aims to produce synthesis insights for decision makers. This report is an update of a 2012 ISGAN Annex 4 report entitled “Smart Grid Contributions to Variable Renewable Resource Integration.” That report and other past publications of ISGAN Annexes can be found at www.iea-isgan.org and at www.cleanenergysolutions.org.

  20. A Simple XML Producer-Consumer Protocol

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)

    2001-01-01

    There are many different projects from government, academia, and industry that provide services for delivering events in distributed environments. The problem with these event services is that they are not general enough to support all uses and they speak different protocols, so they cannot interoperate. We require such interoperability when, for example, we wish to analyze the performance of an application in a distributed environment. Such an analysis might require performance information from the application, computer systems, networks, and scientific instruments. In this work we propose and evaluate a standard XML-based protocol for the transmission of events in distributed systems. One recent trend in government and academic research is the development and deployment of computational grids. Computational grids are large-scale distributed systems that typically consist of high-performance compute, storage, and networking resources. Examples of such computational grids are the DOE Science Grid, the NASA Information Power Grid (IPG), and the NSF Partnerships for Advanced Computational Infrastructure (PACIs). The major effort in deploying these grids is in the area of developing the software services that allow users to execute applications on these large and diverse sets of resources. These services include security, execution of remote applications, managing remote data, access to information about resources and services, and so on. There are several toolkits for providing these services, such as Globus, Legion, and Condor. As part of these efforts to develop computational grids, the Global Grid Forum is working to standardize the protocols and APIs used by various grid services. This standardization will allow interoperability between the client and server software of the toolkits that provide the grid services. The goal of the Performance Working Group of the Grid Forum is to standardize protocols and representations related to the storage and distribution of performance data. These standard protocols and representations must support tasks such as profiling parallel applications, monitoring the status of computers and networks, and monitoring the performance of services provided by a computational grid. This paper describes a proposed protocol and data representation for the exchange of events in a distributed system. The protocol exchanges messages formatted in XML, and it can be layered atop any low-level communication protocol such as TCP or UDP. Further, we describe Java and C++ implementations of this protocol and discuss their performance. The next section provides some further background information. Section 3 describes the main communication patterns of our protocol. Section 4 describes how we represent events and related information using XML. Section 5 describes our protocol, and Section 6 discusses the performance of two implementations of the protocol. Finally, an appendix provides the XML Schema definition of our protocol and event information.
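
    The shape of such an XML event exchange, layered on TCP, can be sketched in a few lines; the element and attribute names below are illustrative and are not the schema proposed in the paper.

        import socket
        from xml.etree import ElementTree as ET

        def send_event(host, port, name, value, ts):
            """Producer: serialize one event as XML and push it over TCP."""
            ev = ET.Element("event", {"name": name, "timestamp": ts})
            ET.SubElement(ev, "value").text = str(value)
            with socket.create_connection((host, port)) as s:
                s.sendall(ET.tostring(ev))

        def receive_one(port):
            """Consumer: accept one connection and parse the event."""
            with socket.socket() as srv:
                srv.bind(("", port))
                srv.listen(1)
                conn, _ = srv.accept()
                with conn:
                    data = conn.recv(65536)
            ev = ET.fromstring(data)
            return ev.get("name"), ev.findtext("value"), ev.get("timestamp")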

  1. An Analysis of the Published Mineral Resource Estimates of the Haji-Gak Iron Deposit, Afghanistan

    USGS Publications Warehouse

    Sutphin, D.M.; Renaud, K.M.; Drew, L.J.

    2011-01-01

    The Haji-Gak iron deposit of eastern Bamyan Province, eastern Afghanistan, was studied extensively and resource calculations were made in the 1960s by Afghan and Russian geologists. Recalculation of the resource estimates verifies the original estimates for categories A (in-place resources known in detail), B (in-place resources known in moderate detail), and C1 (in-place resources estimated from sparse data), totaling 110.8 Mt, or about 6% of the resources, as supportable by the methods used in the 1960s. C2 resources (based on a loose exploration grid with little data) rest on one ore grade from one drill hole, and P2 (prognosis) resources rest on field observations, field measurements, and an ore grade derived by averaging grades from three better-sampled ore bodies. C2 and P2 resources total 1,659.1 Mt, or about 94% of the total resources in the deposit. The vast P2 resources have not been drilled or sampled to confirm their extent or quality. The purpose of this article is to independently evaluate the resources of the Haji-Gak iron deposit by using the available geologic and mineral resource information, including geologic maps and cross sections, sampling data, and the analog estimating techniques of the 1960s, to determine the size and tenor of the deposit. © 2011 International Association for Mathematical Geology (outside the USA).

  2. Opportunities for Joint Water–Energy Management: Sensitivity of the 2010 Western U.S. Electricity Grid Operations to Climate Oscillations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voisin, N.; Kintner-Meyer, M.; Wu, D.

    The 2016 SECURE Water Act report's natural water availability benchmark, combined with the 2010 level of water demand from an integrated assessment model, is used as input to drive a large-scale water management model. The regulated flow at hydropower plants and thermoelectric plants in the Western U.S. electricity grid (WECC) is translated into potential hydropower generation and generation capacity constraints. The impact of water constraints on the reliability (unserved energy, reserve margin) and cost (production cost, carbon emissions) of 2010-level WECC power system operations is assessed using an electricity production cost model (PCM). Use of the PCM reveals the changes in generation dispatch that reflect the inter-regional interdependencies in water-constrained generation and the ability to use other generation resources to meet all electricity loads in the WECC. August grid operational benchmarks show a range of sensitivity in production cost (-8 to +11%) and carbon emissions (-7 to +11%). The reference reserve margin threshold of 15% above peak load is maintained in the scenarios analyzed, but in 5 out of 55 years unserved energy is observed when normal operations are maintained. There is 1 chance in 10 that a year will show unserved energy in August, which defines the system's historical performance threshold to support impact, vulnerability, and adaptation analyses. For seasonal and longer-term planning, i.e., multi-year drought, we demonstrate how the Water Scarcity Grid Impact Factor and climate oscillations (ENSO, PDO) can be used to plan for joint water-electricity management to maintain grid reliability.

  3. Technology solutions for wind integration in Ercot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    Texas has for more than a decade led all other states in the U.S. with the most wind generation capacity on the U.S. electric grid. The State recognized the value that wind energy could provide, and committed early on to build out the transmission system necessary to move power from the windy regions in West Texas to the major population centers across the state. It also signaled support for renewables on the grid by adopting an aggressive renewable portfolio standard (RPS). The joining of these conditions with favorable Federal tax credits has driven the rapid growth in Texas wind capacity since its small beginning in 2000. In addition to the major transmission grid upgrades, there have been a number of technology and policy improvements that have kept the grid reliable while adding more and more intermittent wind generation. Technology advancements such as better wind forecasting and deployment of a nodal market system have improved the grid efficiency of wind. Successful large-scale wind integration into the electric grid, however, continues to pose challenges. The continuing rapid growth in wind energy calls for a number of technology additions that will be needed to reliably accommodate an expected 65% increase in future wind resources. The Center for the Commercialization of Electric Technologies (CCET) recognized this technology challenge in 2009 when it submitted an application for funding of a regional demonstration project under the Recovery Act program administered by the U.S. Department of Energy. Under that program the administration announced the largest energy grid modernization investment in U.S. history, making available some $3.4 billion in grants to fund development of a broad range of technologies for a more efficient and reliable electric system, including the growth of renewable energy sources like wind and solar. At that time, Texas was (and still is) the nation's leader in the integration of wind into the grid, and was investing heavily in the infrastructure needed to increase the viability of this important resource. To help Texas and the rest of the nation address the challenges associated with the integration of large amounts of renewables, CCET seized on the federal opportunity to undertake a multi-faceted project aimed at demonstrating the viability of new "smart grid" technologies to facilitate larger amounts of wind energy through better system monitoring capabilities, enhanced operator visualization, and improved load management. In early 2010, CCET was awarded a $27 million grant, half funded by the Department of Energy and half funded by project participants. With this funding, CCET undertook the project named Discovery Across Texas, which has demonstrated how existing and new technologies can better integrate wind power into the state's grid. The following pages summarize the results of seven technology demonstrations that will help Texas and the nation meet this wind integration challenge.

  4. Technology solutions for wind integration in ERCOT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    Texas has for more than a decade led all other states in the U.S. with the most wind generation capacity on the U.S. electric grid. The State recognized the value that wind energy could provide, and committed early on to build out the transmission system necessary to move power from the windy regions in West Texas to the major population centers across the state. It also signaled support for renewables on the grid by adopting an aggressive renewable portfolio standard (RPS). The joining of these conditions with favorable Federal tax credits has driven the rapid growth in Texas wind capacity since its small beginning in 2000. In addition to the major transmission grid upgrades, there have been a number of technology and policy improvements that have kept the grid reliable while adding more and more intermittent wind generation. Technology advancements such as better wind forecasting and deployment of a nodal market system have improved the grid efficiency of wind. Successful large-scale wind integration into the electric grid, however, continues to pose challenges. The continuing rapid growth in wind energy calls for a number of technology additions that will be needed to reliably accommodate an expected 65% increase in future wind resources. The Center for the Commercialization of Electric Technologies (CCET) recognized this technology challenge in 2009 when it submitted an application for funding of a regional demonstration project under the Recovery Act program administered by the U.S. Department of Energy. Under that program the administration announced the largest energy grid modernization investment in U.S. history, making available some $3.4 billion in grants to fund development of a broad range of technologies for a more efficient and reliable electric system, including the growth of renewable energy sources like wind and solar. At that time, Texas was (and still is) the nation's leader in the integration of wind into the grid, and was investing heavily in the infrastructure needed to increase the viability of this important resource. To help Texas and the rest of the nation address the challenges associated with the integration of large amounts of renewables, CCET seized on the federal opportunity to undertake a multi-faceted project aimed at demonstrating the viability of new "smart grid" technologies to facilitate larger amounts of wind energy through better system monitoring capabilities, enhanced operator visualization, and improved load management. In early 2010, CCET was awarded a $27 million grant, half funded by the Department of Energy and half funded by project participants. With this funding, CCET undertook the project named Discovery Across Texas, which has demonstrated how existing and new technologies can better integrate wind power into the state's grid. The following pages summarize the results of seven technology demonstrations that will help Texas and the nation meet this wind integration challenge.

  5. The Supernovae Analysis Application (SNAP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayless, Amanda J.; Fryer, Christopher Lee; Wollaeger, Ryan Thomas

    The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets, with statistical comparison software and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  6. The Supernovae Analysis Application (SNAP)

    DOE PAGES

    Bayless, Amanda J.; Fryer, Christopher Lee; Wollaeger, Ryan Thomas; ...

    2017-09-06

    The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets, with statistical comparison software and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  7. The Supernovae Analysis Application (SNAP)

    NASA Astrophysics Data System (ADS)

    Bayless, Amanda J.; Fryer, Chris L.; Wollaeger, Ryan; Wiggins, Brandon; Even, Wesley; de la Rosa, Janie; Roming, Peter W. A.; Frey, Lucy; Young, Patrick A.; Thorpe, Rob; Powell, Luke; Landers, Rachel; Persson, Heather D.; Hay, Rebecca

    2017-09-01

    The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  8. HOMER Economic Models - US Navy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Jason William; Myers, Kurt Steven

    This letter report has been prepared by Idaho National Laboratory for US Navy NAVFAC EXWC to support the testing of pre-commercial SIREN (Simulated Integration of Renewable Energy Networks) computer software models. In logistics mode, SIREN software simulates the combination of renewable power sources (solar arrays, wind turbines, and energy storage systems) in supplying an electrical demand. NAVFAC EXWC will create SIREN logistics models of existing or planned renewable energy projects at five Navy locations (San Nicolas Island, AUTEC, New London, & China Lake), and INL will deliver additional HOMER computer models for comparative analysis. In transient mode, SIREN simulates the short-time-scale variation of electrical parameters when a power outage or other destabilizing event occurs. In the HOMER model, a variety of inputs are entered, such as location coordinates, generators, PV arrays, wind turbines, batteries, converters, grid costs/usage, solar resources, wind resources, temperatures, fuels, and electric loads. HOMER's optimization and sensitivity-analysis algorithms then evaluate the economic and technical feasibility of these technology options and account for variations in technology costs, electric load, and energy resource availability. The Navy can then compare HOMER's optimization and sensitivity results to those of the SIREN model. The U.S. Department of Energy (DOE) Idaho National Laboratory (INL) possesses unique expertise and experience in the software, hardware, and systems design for the integration of renewable energy into the electrical grid. NAVFAC EXWC will draw upon this expertise to complete mission requirements.

  9. Analysis of off-grid hybrid wind turbine/solar PV water pumping systems

    USDA-ARS?s Scientific Manuscript database

    While many remote water pumping systems exist (e.g., mechanical windmills, solar photovoltaic, wind-electric, diesel powered), very few combine both the wind and solar energy resources to possibly improve the reliability and the performance of the system. In this paper, off-grid wind turbine (WT) a...

  10. BelleII@home: Integrate volunteer computing resources into DIRAC in a secure way

    NASA Astrophysics Data System (ADS)

    Wu, Wenjing; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo; Kan, Wenxiao; Urquijo, Phillip

    2017-10-01

    The exploitation of volunteer computing resources has become a popular practice in the HEP computing community because of the huge amount of potential computing power it provides. Recent HEP experiments have used grid middleware to organize their services and resources; however, the middleware relies heavily on X.509 authentication, which is at odds with the untrusted nature of volunteer computing resources. One big challenge in utilizing volunteer computing resources is therefore how to integrate them into the grid middleware in a secure way. The DIRAC interware, commonly used as the major component of the grid computing infrastructure for several HEP experiments, poses an even bigger challenge to this paradox, as its pilot is more closely coupled with operations requiring X.509 authentication than the pilot implementations in its peer grid interware. The Belle II experiment is a B-factory experiment at KEK, and it uses DIRAC for its distributed computing. In the BelleII@home project, in order to integrate volunteer computing resources into the Belle II distributed computing platform in a secure way, we adopted a new approach which detaches the payload running from the Belle II DIRAC pilot (a customized pilot that pulls and processes jobs from the Belle II distributed computing platform), so that the payload can run on volunteer computers without requiring any X.509 authentication. In this approach we developed a gateway service running on a trusted server which handles all the operations requiring X.509 authentication. So far, we have developed and deployed the prototype of BelleII@home and tested its full workflow, which proves the feasibility of this approach. This approach can also be applied to HPC systems whose worker nodes do not have the outbound connectivity needed to interact with the DIRAC system in general.
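
    The abstract gives the architecture but no implementation details; the following minimal sketch illustrates the detached-payload pattern under stated assumptions. All class, token, and job names are hypothetical; the point is that only the trusted gateway ever touches credential-protected operations, while the volunteer side holds no certificate at all.

        # Illustrative sketch of the detached-payload pattern (not BelleII@home code).
        # The volunteer talks only to a trusted gateway; only the gateway holds the
        # X.509 credential needed to talk to the grid middleware.

        import uuid

        class Gateway:
            """Runs on a trusted server; the only component holding grid credentials."""
            def __init__(self):
                self.pending = {"job-001": {"payload": "simulate 500 events"}}

            def fetch_job(self, volunteer_token):
                # Hypothetical: token validation stubbed for brevity. In the real
                # pattern the gateway would use its own X.509 proxy here to pull a
                # job from the credential-protected middleware.
                job_id, spec = self.pending.popitem()
                return {"id": job_id, "spec": spec, "lease": str(uuid.uuid4())}

            def upload_result(self, lease, result):
                # The gateway, not the volunteer, authenticates the upload.
                print(f"uploading result for lease {lease}: {result}")

        # Volunteer side: no certificate anywhere, just an opaque token.
        gw = Gateway()
        job = gw.fetch_job(volunteer_token="opaque-volunteer-token")
        gw.upload_result(job["lease"], result=f"done: {job['spec']['payload']}")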

  11. The OSG Open Facility: an on-ramp for opportunistic scientific computing

    NASA Astrophysics Data System (ADS)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.; Gardner, R.; Rynge, M.; Würthwein, F.

    2017-10-01

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  12. The OSG Open Facility: An On-Ramp for Opportunistic Scientific Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayatilaka, B.; Levshina, T.; Sehgal, C.

    The Open Science Grid (OSG) is a large, robust computing grid that started primarily as a collection of sites associated with large HEP experiments such as ATLAS, CDF, CMS, and DZero, but has evolved in recent years to a much larger user and resource platform. In addition to meeting the US LHC community’s computational needs, the OSG continues to be one of the largest providers of distributed high-throughput computing (DHTC) to researchers from a wide variety of disciplines via the OSG Open Facility. The Open Facility consists of OSG resources that are available opportunistically to users other than resource owners and their collaborators. In the past two years, the Open Facility has doubled its annual throughput to over 200 million wall hours. More than half of these resources are used by over 100 individual researchers from over 60 institutions in fields such as biology, medicine, math, economics, and many others. Over 10% of these individual users utilized in excess of 1 million computational hours each in the past year. The largest source of these cycles is temporary unused capacity at institutions affiliated with US LHC computational sites. An increasing fraction, however, comes from university HPC clusters and large national infrastructure supercomputers offering unused capacity. Such expansions have allowed the OSG to provide ample computational resources to both individual researchers and small groups as well as sizable international science collaborations such as LIGO, AMS, IceCube, and sPHENIX. Opening up access to the Fermilab FabrIc for Frontier Experiments (FIFE) project has also allowed experiments such as mu2e and NOvA to make substantial use of Open Facility resources, the former with over 40 million wall hours in a year. We present how this expansion was accomplished as well as future plans for keeping the OSG Open Facility at the forefront of enabling scientific research by way of DHTC.

  13. Integrating Clinical Trial Imaging Data Resources Using Service-Oriented Architecture and Grid Computing

    PubMed Central

    Cladé, Thierry; Snyder, Joshua C.

    2010-01-01

    Clinical trials which use imaging typically require data management and workflow integration across several parties. We identify opportunities for all parties involved to realize benefits with a modular interoperability model based on service-oriented architecture and grid computing principles. We discuss middleware products for implementation of this model, and propose caGrid as an ideal candidate due to its healthcare focus; free, open source license; and mature developer tools and support. PMID:20449775

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magee, Thoman

    The Consolidated Edison, Inc., of New York (Con Edison) Secure Interoperable Open Smart Grid Demonstration Project (SGDP), sponsored by the United States (US) Department of Energy (DOE), demonstrated that the reliability, efficiency, and flexibility of the grid can be improved through a combination of enhanced monitoring and control capabilities using systems and resources that interoperate within a secure services framework. The project demonstrated the capability to shift, balance, and reduce load where and when needed in response to system contingencies or emergencies by leveraging controllable field assets. The range of field assets includes curtailable customer loads, distributed generation (DG), battery storage, electric vehicle (EV) charging stations, building management systems (BMS), home area networks (HANs), high-voltage monitoring, and advanced metering infrastructure (AMI). The SGDP enables the seamless integration and control of these field assets through a common, cyber-secure, interoperable control platform, which integrates a number of existing legacy control and data systems, as well as new smart grid (SG) systems and applications. By integrating advanced technologies for monitoring and control, the SGDP helps target and reduce peak load growth, improves the reliability and efficiency of Con Edison’s grid, and increases the ability to accommodate the growing use of distributed resources. Con Edison is dedicated to lowering costs, improving reliability and customer service, and reducing its impact on the environment for its customers. These objectives also align with the policy objectives of New York State as a whole. To help meet these objectives, Con Edison’s long-term vision for the distribution grid relies on the successful integration and control of a growing penetration of distributed resources, including demand response (DR) resources, battery storage units, and DG. For example, Con Edison is expecting significant long-term growth of DG. The SGDP enables the efficient, flexible integration of these disparate resources and lays the architectural foundations for future scalability. Con Edison assembled an SGDP team of more than 16 different project partners, including technology vendors and participating organizations, and the Con Edison team provided overall guidance and project management. Project team members are listed in Table 1-1.

  15. Experience in using commercial clouds in CMS

    NASA Astrophysics Data System (ADS)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration

    2017-10-01

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.

  16. An improved global wind resource estimate for integrated assessment models

    DOE PAGES

    Eurek, Kelly; Sullivan, Patrick; Gleason, Michael; ...

    2017-11-25

    This study summarizes initial steps to improving the robustness and accuracy of global renewable resource and techno-economic assessments for use in integrated assessment models. We outline a method to construct country-level wind resource supply curves, delineated by resource quality and other parameters. Using mesoscale reanalysis data, we generate estimates for wind quality, both terrestrial and offshore, across the globe. Because not all land or water area is suitable for development, appropriate database layers provide exclusions to reduce the total resource to its technical potential. We expand upon estimates from related studies by: using a globally consistent data source of uniquely detailed wind speed characterizations; assuming a non-constant coefficient of performance for adjusting power curves for altitude; categorizing the distance from resource sites to the electric power grid; and characterizing offshore exclusions on the basis of sea ice concentrations. The product, then, is technical potential by country, classified by resource quality as determined by net capacity factor. Additional classification dimensions are available, including distance to transmission networks for terrestrial wind and distance to shore and water depth for offshore. We estimate a total global wind generation potential of 560 PWh for terrestrial wind with 90% of resource classified as low-to-mid quality, and 315 PWh for offshore wind with 67% classified as mid-to-high quality. These estimates are based on 3.5 MW composite wind turbines with 90 m hub heights, 0.95 availability, 90% array efficiency, and 5 MW/km² deployment density in non-excluded areas. We compare the underlying technical assumptions and results with other global assessments.
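
    The stated turbine and deployment assumptions lend themselves to a quick back-of-envelope check. The sketch below computes annual generation for one resource class using the abstract's 5 MW/km² density, 0.95 availability, and 90% array efficiency; the developable area and capacity factor are hypothetical placeholders, and in the study the net capacity factor may already fold in some of these losses.

        # Back-of-envelope wind generation potential for one resource class, using
        # the assumptions stated in the abstract (5 MW/km^2, 0.95 availability,
        # 90% array efficiency). Area and capacity factor are hypothetical.

        HOURS_PER_YEAR = 8760

        def annual_generation_twh(area_km2, capacity_factor,
                                  density_mw_per_km2=5.0,
                                  availability=0.95, array_efficiency=0.90):
            capacity_mw = area_km2 * density_mw_per_km2
            mwh = (capacity_mw * capacity_factor * availability
                   * array_efficiency * HOURS_PER_YEAR)
            return mwh / 1e6  # MWh -> TWh

        # e.g. 10,000 km^2 of mid-quality resource at a 35% capacity factor:
        print(f"{annual_generation_twh(10_000, 0.35):.1f} TWh/yr")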

  17. Experience in using commercial clouds in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.

  18. An improved global wind resource estimate for integrated assessment models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eurek, Kelly; Sullivan, Patrick; Gleason, Michael

    This study summarizes initial steps to improving the robustness and accuracy of global renewable resource and techno-economic assessments for use in integrated assessment models. We outline a method to construct country-level wind resource supply curves, delineated by resource quality and other parameters. Using mesoscale reanalysis data, we generate estimates for wind quality, both terrestrial and offshore, across the globe. Because not all land or water area is suitable for development, appropriate database layers provide exclusions to reduce the total resource to its technical potential. We expand upon estimates from related studies by: using a globally consistent data source of uniquely detailed wind speed characterizations; assuming a non-constant coefficient of performance for adjusting power curves for altitude; categorizing the distance from resource sites to the electric power grid; and characterizing offshore exclusions on the basis of sea ice concentrations. The product, then, is technical potential by country, classified by resource quality as determined by net capacity factor. Additional classification dimensions are available, including distance to transmission networks for terrestrial wind and distance to shore and water depth for offshore. We estimate a total global wind generation potential of 560 PWh for terrestrial wind with 90% of resource classified as low-to-mid quality, and 315 PWh for offshore wind with 67% classified as mid-to-high quality. These estimates are based on 3.5 MW composite wind turbines with 90 m hub heights, 0.95 availability, 90% array efficiency, and 5 MW/km² deployment density in non-excluded areas. We compare the underlying technical assumptions and results with other global assessments.

  19. Critical Infrastructure Protection: EMP Impacts on the U.S. Electric Grid

    NASA Astrophysics Data System (ADS)

    Boston, Edwin J., Jr.

    The purpose of this research is to identify the United States electric grid infrastructure's vulnerabilities to electromagnetic pulse attacks and the cyber-based impacts of those vulnerabilities on the electric grid. Additionally, the research identifies multiple defensive strategies designed to harden the electric grid against electromagnetic pulse attack that include prevention, mitigation, and recovery postures. Research results confirm the importance of the electric grid to the United States' critical infrastructure system and that an electromagnetic pulse attack against the electric grid could result in electric grid degradation, damage to critical infrastructure(s), and the potential for societal collapse. The conclusions of this research indicate that while an electromagnetic pulse attack against the United States electric grid could have catastrophic impacts on American society, there are currently many defensive strategies under consideration designed to prevent, mitigate, and/or recover from an electromagnetic pulse attack. However, additional research is essential to further identify future target hardening opportunities, efficient implementation strategies, and funding resources.

  20. Fault tolerance in computational grids: perspectives, challenges, and issues.

    PubMed

    Haider, Sajjad; Nazir, Babar

    2016-01-01

    Computational grids are established with the intention of providing shared access to hardware- and software-based resources, with special reference to increased computational capabilities. Fault tolerance is one of the most important issues faced by computational grids. The main contribution of this survey is the creation of an extended classification of problems that occur in computational grid environments. The proposed classification will help researchers, developers, and maintainers of grids to understand the types of issues to be anticipated. Moreover, different types of problems, such as omission, interaction, and timing-related faults, have been identified that need to be handled on various layers of the computational grid. In this survey, an analysis and examination are also performed pertaining to fault tolerance and fault detection mechanisms. Our conclusion is that a dependable and reliable grid can only be established when more emphasis is placed on fault identification. Moreover, our survey reveals that adaptive and intelligent fault identification and tolerance techniques can improve the dependability of grid working environments.

  1. Transactive System: Part I: Theoretical Underpinnings of Payoff Functions, Control Decisions, Information Privacy, and Solution Concepts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lian, Jianming; Zhang, Wei; Sun, Y.

    The increased penetration of renewable energy has significantly changed the conditions and the operational timing of the electricity grid. More flexible, faster ramping resources are needed to compensate for the uncertainty and variability introduced by renewable energy. Distributed energy resources (DERs) such as distributed generators, energy storage, and controllable loads could help manage the power grid in terms of both economic efficiency and operational reliability. In order to realize the benefits of DERs, coordination and control approaches must be designed to enable seamless integration of DERs into the power grid. Transactive coordination and control is a new approach for DER integration, where individual resources are automated and engaged through market interaction. Transactive approaches use economic signals (prices or incentives) to engage DERs. These economic signals must reflect the true value of the DER contributions, so that they seamlessly and equitably compete for the opportunities that today are only available to grid-owned assets. Value signals must be communicated to the DERs in near-real time, the assets must be imbued with new forms of distributed intelligence and control to take advantage of the opportunities presented by these signals, and they must be capable of negotiating and transacting a range of market-driven energy services. The concepts of transactive energy systems are not new, but build upon evolutionary economic changes in financial and electric power markets. These concepts also recognize the different regional structures of wholesale power markets, electricity delivery markets, retail markets, and vertically integrated service provider markets. Although transactive energy systems are not revolutionary, they will be transformational in their ability to provide flexibility and operational efficiency. A main goal of this research is to establish a useful foundation for analysis of transactive energy systems and to facilitate new transactive energy system design with demonstrable guarantees on stability and performance. Specifically, the goals are to (1) establish a theoretical basis for evaluating the performance of different transactive systems, (2) devise tools to address canonical problems that exemplify challenges and scenarios of transactive systems, and (3) provide guidelines for design of future transactive systems. This report, Part 1 of a two part series, advances the above-listed research objectives by reviewing existing transactive systems and identifying a theoretical foundation that integrates payoff functions, control decisions, information privacy, and mathematical solution concepts.
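
    As a toy illustration of coordination through price signals (not the report's actual models), the sketch below bisects on a price until the aggregate response of a few hypothetical price-responsive DER loads matches the capacity the grid can supply; the bid curves and units are invented.

        # Toy illustration of price-based DER coordination. Each flexible load
        # consumes less as the price signal rises; we search for the price at
        # which aggregate demand matches the available supply.

        def clearing_price(loads, supply_kw, p_lo=0.0, p_hi=1.0, iters=50):
            """Bisect on price; each load is (max_kw, sensitivity_kw_per_dollar)."""
            for _ in range(iters):
                p = 0.5 * (p_lo + p_hi)
                demand = sum(max(0.0, max_kw - sens * p) for max_kw, sens in loads)
                if demand > supply_kw:
                    p_lo = p   # too much demand: raise the price signal
                else:
                    p_hi = p
            return p

        loads = [(5.0, 10.0), (3.0, 4.0), (8.0, 20.0)]  # hypothetical bid curves
        print(f"clearing price: ${clearing_price(loads, supply_kw=9.0):.3f}/kWh")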

  2. Advances in Grid Computing for the FabrIc for Frontier Experiments Project at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herner, K.; Alba Hernandez, A. F.; Bhat, S.

    The FabrIc for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana to help experiments manage their large-scale production workflows. This group in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called Distributed Computing Access with Federated Identities (DCAFI) has been put in place that has eliminated our dependence on a Fermilab-specific third-party Certificate Authority service and better accommodates FIFE collaborators without a Fermilab Kerberos account. DCAFI integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and a MyProxy service using a new general purpose open source tool. We will discuss the general FIFE onboarding strategy, progress in expanding FIFE experiments' presence on the Open Science Grid, new tools for job monitoring, the POMS service, and the DCAFI project.

  3. Advances in Grid Computing for the Fabric for Frontier Experiments Project at Fermilab

    NASA Astrophysics Data System (ADS)

    Herner, K.; Alba Hernandez, A. F.; Bhat, S.; Box, D.; Boyd, J.; Di Benedetto, V.; Ding, P.; Dykstra, D.; Fattoruso, M.; Garzoglio, G.; Kirby, M.; Kreymer, A.; Levshina, T.; Mazzacane, A.; Mengel, M.; Mhashilkar, P.; Podstavkov, V.; Retzke, K.; Sharma, N.; Teheran, J.

    2017-10-01

    The Fabric for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana to help experiments manage their large-scale production workflows. This group in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called Distributed Computing Access with Federated Identities (DCAFI) has been put in place that has eliminated our dependence on a Fermilab-specific third-party Certificate Authority service and better accommodates FIFE collaborators without a Fermilab Kerberos account. DCAFI integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and a MyProxy service using a new general purpose open source tool. We will discuss the general FIFE onboarding strategy, progress in expanding FIFE experiments' presence on the Open Science Grid, new tools for job monitoring, the POMS service, and the DCAFI project.

  4. JINR cloud infrastructure evolution

    NASA Astrophysics Data System (ADS)

    Baranov, A. V.; Balashov, N. A.; Kutovskiy, N. A.; Semenov, R. N.

    2016-09-01

    To fulfil JINR's commitments in different national and international projects related to the use of modern information technologies, such as cloud and grid computing, as well as to provide a modern tool for JINR users in their scientific research, a cloud infrastructure was deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. OpenNebula software was chosen as the cloud platform. Initially it was set up in a simple configuration with a single front-end host and a few cloud nodes. Some custom development was done to tune the JINR cloud installation to fit local needs: a web form in the cloud web-interface for resource requests, a menu item with cloud utilization statistics, user authentication via Kerberos, and a custom driver for OpenVZ containers. Because of high demand for the cloud service and over-utilization of its resources, it was re-designed to cover increasing users' needs in capacity, availability, and reliability. Recently a new cloud instance has been deployed in a high-availability configuration with a distributed network file system and additional computing power.

  5. NPSS on NASA's IPG: Using CORBA and Globus to Coordinate Multidisciplinary Aeroscience Applications

    NASA Technical Reports Server (NTRS)

    Lopez, Isaac; Follen, Gregory J.; Gutierrez, Richard; Naiman, Cynthia G.; Foster, Ian; Ginsburg, Brian; Larsson, Olle; Martin, Stuart; Tuecke, Steven; Woodford, David

    2000-01-01

    Within NASA's High Performance Computing and Communication (HPCC) program, the NASA Glenn Research Center is developing an environment for the analysis/design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. To this end, NPSS integrates multiple disciplines such as aerodynamics, structures, and heat transfer and supports "numerical zooming" between 0-dimensional and 1-, 2-, and 3-dimensional component engine codes. In order to facilitate the timely and cost-effective capture of complex physical processes, NPSS uses object-oriented technologies such as C++ objects to encapsulate individual engine components and CORBA ORBs for object communication and deployment across heterogeneous computing platforms. Recently, the HPCC program has initiated a concept called the Information Power Grid (IPG), a virtual computing environment that integrates computers and other resources at different sites. IPG implements a range of Grid services such as resource discovery, scheduling, security, instrumentation, and data access, many of which are provided by the Globus toolkit. IPG facilities have the potential to benefit NPSS considerably. For example, NPSS should in principle be able to use Grid services to discover dynamically and then co-schedule the resources required for a particular engine simulation, rather than relying on manual placement of ORBs as at present. Grid services can also be used to initiate simulation components on parallel computers (MPPs) and to address inter-site security issues that currently hinder the coupling of components across multiple sites. These considerations led NASA Glenn and Globus project personnel to formulate a collaborative project designed to evaluate whether and how benefits such as those just listed can be achieved in practice. This project involves, first, development of the basic techniques required to achieve co-existence of commodity object technologies and Grid technologies, and second, the evaluation of these techniques in the context of NPSS-oriented challenge problems. The work on basic techniques seeks to understand how "commodity" technologies (CORBA, DCOM, Excel, etc.) can be used in concert with specialized "Grid" technologies (for security, MPP scheduling, etc.). In principle, this coordinated use should be straightforward because of the Globus and IPG philosophy of providing low-level Grid mechanisms that can be used to implement a wide variety of application-level programming models. (Globus technologies have previously been used to implement Grid-enabled message-passing libraries, collaborative environments, and parameter study tools, among others.) Results obtained to date are encouraging: we have successfully demonstrated a CORBA to Globus resource manager gateway that allows the use of CORBA RPCs to control submission and execution of programs on workstations and MPPs; a gateway from the CORBA Trader service to the Grid information service; and a preliminary integration of CORBA and Grid security mechanisms. The two challenge problems that we consider are the following: 1) Desktop-controlled parameter study. Here, an Excel spreadsheet is used to define and control a CFD parameter study, via a CORBA interface to a high throughput broker that runs individual cases on different IPG resources. 2) Aviation safety.
Here, about 100 near-real-time jobs running NPSS need to be submitted and run, with data returned in near real time. Evaluation will address such issues as time to port, execution time, potential scalability of simulation, and reliability of resources. The full paper will present the following information: 1. A detailed analysis of the requirements that NPSS applications place on IPG. 2. A description of the techniques used to meet these requirements via the coordinated use of CORBA and Globus. 3. A description of results obtained to date in the first two challenge problems.

  6. Assessing the Wave Energy Potential of Jamaica, a Greater Antilles Island, through Dynamic Modelling

    NASA Astrophysics Data System (ADS)

    Daley, A. P., Jr.; Dorville, J. F. M.; Taylor, M. A.

    2017-12-01

    Globally, wave energy has been on the rise as a result of the impacts of climate change and continuous fluctuation in oil prices. The water's inertia provides waves with greater stability than that of other renewable energy sources such as solar and wind. Jamaica is part of the Greater Antilles Arc and has over 1000 km of coastline with an abundance of shallow water, approximately 80% of it within a 50 km band. This configuration provides a wealth of sites for wave exploitation even in minimal wave energy conditions. Aside from harnessing the ocean's waves, converters can be viewed as a tool for protection of coastal areas against natural marine occurrences. Jamaica has done extensive studies where solar, hydro, and wind resources are concerned; however, there have been no studies to date on the country's wave energy resources. The aim of this study is to bridge this gap by characterizing Jamaica's wave energy resource, generated in a half-closed Caribbean Sea, using data available from buoys, altimetric satellites, and numerical models. Available data have been used to assess the available resource in the coastal area for the last 12 years. Statistical analysis of the available energy is determined using the sea state (Hs, Tp, and Dir) and the atmospheric forcing (10 m wind, atmospheric pressure, sea-air temperature) relating to the season. The chain of dynamical models (WW3-SWAN-SWASH) is presented, allowing for the tracking of the propagation of wave energy from an offshore region to the nearshore zone, along with its interaction with areas of shallow depth. This will provide a better assessment of the energy and the quality of the waves closer to the electrical grid. Climate prediction is used to estimate the sea state and the exploitable wave energy up to 2100, together with an analysis of the possible usage of the available coastal resource over the same period. The main results show small but exploitable resources, with seasonal variability in the available energy but not in wave direction.
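
    A first-pass resource characterization of the kind described can be computed directly from the sea state (Hs, Tp) using the standard deep-water wave energy flux formula; the Te ≈ 0.9 Tp spectral assumption and the example sea state below are illustrative, not values from the study.

        # Deep-water wave energy flux per metre of wave crest:
        #   P = rho * g^2 * Hs^2 * Te / (64 * pi)
        import math

        RHO = 1025.0   # seawater density, kg/m^3
        G = 9.81       # gravitational acceleration, m/s^2

        def wave_power_kw_per_m(hs_m, tp_s, te_over_tp=0.9):
            """Te ~ 0.9*Tp is a common spectral assumption, not a study value."""
            te = te_over_tp * tp_s
            return RHO * G**2 * hs_m**2 * te / (64.0 * math.pi) / 1000.0

        # e.g. a moderate sea state (hypothetical values):
        print(f"{wave_power_kw_per_m(1.5, 8.0):.1f} kW/m")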

  7. Transmission Technologies and Operational Characteristic Analysis of Hybrid UHV AC/DC Power Grids in China

    NASA Astrophysics Data System (ADS)

    Tian, Zhang; Yanfeng, Gong

    2017-05-01

    In order to resolve the mismatch between the demand for and the distribution of primary energy resources, Ultra High Voltage (UHV) power grids should be developed rapidly to meet the development of energy bases and the connection of large-scale renewable energy. This paper reviews the latest research on AC/DC transmission technologies, summarizes the characteristics of AC/DC power grids, and concludes that China’s power grids are entering a new period of large-scale hybrid UHV AC/DC operation in which the characteristics of “strong DC and weak AC” become increasingly prominent. Possible problems in the operation of AC/DC power grids are discussed, and the interactions between the AC and DC grids are studied intensively. To address these problems, a preliminary scheme is summarized as follows: strengthening backbone structures, enhancing AC/DC transmission technologies, promoting protection measures for clean energy connecting to the grid, and taking actions to solve stability problems of voltage and frequency. These measures can help hybrid UHV AC/DC power grids adapt to the operating mode of large power grids, thus guaranteeing the security and stability of the power system.

  8. A smart grid simulation testbed using Matlab/Simulink

    NASA Astrophysics Data System (ADS)

    Mallapuram, Sriharsha; Moulema, Paul; Yu, Wei

    2014-06-01

    The smart grid is the integration of computing and communication technologies into a power grid with the goal of enabling real-time control and a reliable, secure, and efficient energy system [1]. With the increased interest of the research community and stakeholders towards the smart grid, a number of solutions and algorithms have been developed and proposed to address issues related to smart grid operations and functions. Those technologies and solutions need to be tested and validated using software simulators before implementation. In this paper, we developed a general smart grid simulation model in the MATLAB/Simulink environment, which integrates renewable energy resources, energy storage technology, and load monitoring and control capability. To demonstrate and validate the effectiveness of our simulation model, we created simulation scenarios and performed simulations using a real-world data set provided by the Pecan Street Research Institute.
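
    While the testbed itself is built in MATLAB/Simulink, the core time-step logic it integrates (renewable generation, storage, and load) can be sketched in a few lines; the Python analogue below uses invented data and a simple charge/discharge rule.

        # Minimal time-step sketch of a PV + battery + load balance (illustrative
        # analogue of the Simulink model, with invented data).

        pv_kw   = [0.0, 0.0, 1.5, 3.0, 2.5, 0.5]  # hypothetical PV output (kW)
        load_kw = [2.0, 2.0, 2.5, 2.0, 3.0, 2.5]  # hypothetical load (kW)
        soc_kwh, cap_kwh, dt_h = 4.0, 10.0, 1.0   # battery state of charge, capacity

        for hour, (pv, load) in enumerate(zip(pv_kw, load_kw)):
            net = pv - load                        # + surplus, - deficit (kW)
            if net >= 0:                           # charge the battery with surplus
                soc_kwh += min(net * dt_h, cap_kwh - soc_kwh)
                grid = 0.0
            else:                                  # discharge, import the remainder
                discharge = min(-net * dt_h, soc_kwh)
                soc_kwh -= discharge
                grid = -net - discharge / dt_h
            print(f"h{hour}: battery {soc_kwh:4.1f} kWh, grid import {grid:4.1f} kW")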

  9. Earth System Grid II, Turning Climate Datasets into Community Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Middleton, Don

    2006-08-01

    The Earth System Grid (ESG) II project, funded by the Department of Energy’s Scientific Discovery through Advanced Computing program, has transformed climate data into community resources. ESG II has accomplished this goal by creating a virtual collaborative environment that links climate centers and users around the world to models and data via a computing Grid, which is based on the Department of Energy’s supercomputing resources and the Internet. Our project’s success stems from partnerships between climate researchers and computer scientists to advance basic and applied research in the terrestrial, atmospheric, and oceanic sciences. By interfacing with other climate science projects, we have learned that commonly used methods to manage and remotely distribute data among related groups lack infrastructure and under-utilize existing technologies. Knowledge and expertise gained from ESG II have helped the climate community plan strategies to manage a rapidly growing data environment more effectively. Moreover, approaches and technologies developed under the ESG project have impacted data-simulation integration in other disciplines, such as astrophysics, molecular biology and materials science.

  10. Unraveling the Importance of Climate Change Resilience in Planning the Future Sustainable Energy System

    NASA Astrophysics Data System (ADS)

    Tarroja, B.; AghaKouchak, A.; Forrest, K.; Chiang, F.; Samuelsen, S.

    2017-12-01

    In response to concerns regarding the environmental impacts of the current energy resource mix, significant research efforts have been focused on determining the future energy resource mix to meet emissions reduction and environmental sustainability goals. Many of these studies focus on various constraints such as costs, grid operability requirements, and environmental performance, and develop different plans for the rollout of energy resources between the present and future years. One aspect that has not yet been systematically taken into account in these planning studies, however, is the potential impacts that changing climates may have on the availability and performance of key energy resources that compose these plans. This presentation will focus on a case study for California which analyzes the impacts of climate change on the greenhouse gas emissions and renewable resource utilization of an energy resource plan developed by Energy and Environmental Economics for meeting the state's goal of an 80% reduction in greenhouse gas emissions by 2050. Specifically, climate change impacts on three aspects of the energy system are investigated: 1) changes in hydropower generation due to altered precipitation, streamflow and runoff patterns, 2) changes in the availability of solar thermal and geothermal power plant capacity due to shifting water availability, and 3) changes in the residential and commercial electric building loads due to increased temperatures. These impacts were discovered to cause the proposed resource plan to deviate from meeting its emissions target by up to 5.9 MMT CO2e/yr and exhibit a reduction in renewable resource penetration of up to 3.1% of total electric energy. The impacts of climate change on energy system performance were found to be mitigated by increasing the flexibility of the energy system through increased storage and electric load dispatchability. Overall, this study highlights the importance of taking into account and building resilience against potential climate change impacts on the energy system in planning the future energy resource mix.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Timothy M.; Kadavil, Rahul; Palmintier, Bryan

    The 21st century electric power grid is transforming, with an unprecedented increase in demand and an influx of new technologies. In the United States Energy Independence and Security Act of 2007, Title XIII sets the tenets for modernizing the electricity grid through what is known as the 'Smart Grid Initiative.' This initiative calls for increased design, deployment, and integration of distributed energy resources, smart technologies and appliances, and advanced storage devices. The deployment of these new technologies requires rethinking and re-engineering the traditional boundaries between different electric power system domains.

  12. The Overgrid Interface for Computational Simulations on Overset Grids

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    Computational simulations using overset grids typically involve multiple steps and a variety of software modules. A graphical interface called OVERGRID has been specially designed for such purposes. Data required and created by the different steps include geometry, grids, domain connectivity information and flow solver input parameters. The interface provides a unified environment for the visualization, processing, generation and diagnosis of such data. General modules are available for the manipulation of structured grids and unstructured surface triangulations. Modules more specific for the overset approach include surface curve generators, hyperbolic and algebraic surface grid generators, a hyperbolic volume grid generator, Cartesian box grid generators, and domain connectivity pre-processing tools. An interface provides automatic selection and viewing of flow solver boundary conditions, and various other flow solver inputs. For problems involving multiple components in relative motion, a module is available to build the component/grid relationships and to prescribe and animate the dynamics of the different components.

  13. Efficient Development of High Fidelity Structured Volume Grids for Hypersonic Flow Simulations

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2003-01-01

    A new technique for the control of grid line spacing and intersection angles of a structured volume grid, using elliptic partial differential equations (PDEs), is presented. Existing structured grid generation algorithms make use of source term hybridization to provide control of grid lines, imposing orthogonality implicitly at the boundary and explicitly on the interior of the domain. A bridging function between the two types of grid line control is typically used to blend the different orthogonality formulations. It is shown that utilizing such a bridging function with source term hybridization can result in the excessive use of computational resources and diminishes robustness. A new approach, Anisotropic Lagrange Based Trans-Finite Interpolation (ALBTFI), is offered as a replacement for source term hybridization. The ALBTFI technique captures the essence of the desired grid controls while improving the convergence rate of the elliptic PDEs when compared with source term hybridization. Grid generation on a blunt cone and a Shuttle Orbiter is used to demonstrate and assess the ALBTFI technique, which is shown to be as much as 50% faster, more robust, and produces higher quality grids than source term hybridization.
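
    For reference, the transfinite interpolation that ALBTFI builds upon can be written compactly; the sketch below is the textbook 2D TFI Boolean sum over four boundary curves, not ALBTFI itself, and the example boundary shapes are arbitrary.

        # Textbook 2D transfinite interpolation (TFI): blend four boundary
        # curves into an interior structured grid via the bilinear Boolean sum.
        import numpy as np

        def tfi(bottom, top, left, right):
            """Each boundary is an array of (x, y) points; returns (ni, nj, 2)."""
            ni, nj = len(bottom), len(left)
            s = np.linspace(0.0, 1.0, ni)[:, None, None]   # i-direction parameter
            t = np.linspace(0.0, 1.0, nj)[None, :, None]   # j-direction parameter
            b, tp = bottom[:, None, :], top[:, None, :]
            l, r = left[None, :, :], right[None, :, :]
            return ((1 - t) * b + t * tp + (1 - s) * l + s * r
                    - (1 - s) * (1 - t) * bottom[0] - s * (1 - t) * bottom[-1]
                    - (1 - s) * t * top[0] - s * t * top[-1])

        # Example: unit square with a bulged top boundary.
        n = 11
        x = np.linspace(0, 1, n)
        bottom = np.stack([x, np.zeros(n)], axis=1)
        top    = np.stack([x, 1.0 + 0.2 * np.sin(np.pi * x)], axis=1)
        left   = np.stack([np.zeros(n), np.linspace(0, top[0, 1], n)], axis=1)
        right  = np.stack([np.ones(n),  np.linspace(0, top[-1, 1], n)], axis=1)
        print(tfi(bottom, top, left, right).shape)  # (11, 11, 2)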

  14. Disruptive Ideas for Power Grid Security and Resilience With DER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Erfan

    This presentation by Erfan Ibrahim was prepared for NREL's 2017 Cybersecurity and Resilience Workshop on distributed energy resource (DER) best practices. The presentation provides an overview of NREL's Cyber-Physical Systems Security and Resilience R&D Center, the Center's approach to cybersecurity, and disruptive ideas for power grid security and resilience with DER.

  15. Complete distributed computing environment for a HEP experiment: experience with ARC-connected infrastructure for ATLAS

    NASA Astrophysics Data System (ADS)

    Read, A.; Taga, A.; O-Saada, F.; Pajchel, K.; Samset, B. H.; Cameron, D.

    2008-07-01

    Computing and storage resources connected by the NorduGrid ARC middleware in the Nordic countries, Switzerland, and Slovenia form part of the ATLAS computing Grid. This infrastructure is being commissioned with the ongoing ATLAS Monte Carlo simulation production in preparation for the commencement of data taking in 2008. The unique non-intrusive architecture of ARC, its straightforward interplay with the ATLAS Production System via the Dulcinea executor, and its performance during the commissioning exercise are described. ARC support for flexible and powerful end-user analysis within the GANGA distributed analysis framework is also shown. Whereas the storage solution for this Grid was earlier based on a large, distributed collection of GridFTP servers, the ATLAS computing design includes a structured SRM-based system with a limited number of storage endpoints. The characteristics, integration, and performance of the old and new storage solutions are presented. Although the hardware resources in this Grid are quite modest, it has provided more than double the agreed contribution to the ATLAS production with an efficiency above 95% during long periods of stable operation.

  16. The GridPP DIRAC project - DIRAC for non-LHC communities

    NASA Astrophysics Data System (ADS)

    Bauer, D.; Colling, D.; Currie, R.; Fayer, S.; Huffman, A.; Martyniak, J.; Rand, D.; Richards, A.

    2015-12-01

    The GridPP consortium in the UK is currently testing a multi-VO DIRAC service aimed at non-LHC VOs. These VOs (Virtual Organisations) are typically small and generally do not have a dedicated computing support post. The majority of these represent particle physics experiments (e.g. NA62 and COMET), although the scope of the DIRAC service is not limited to this field. A few VOs have designed bespoke tools around the EMI-WMS & LFC, while others have so far eschewed distributed resources as they perceive the overhead for accessing them to be too high. The aim of the GridPP DIRAC project is to provide an easily adaptable toolkit for such VOs in order to lower the threshold for access to distributed resources such as Grid and cloud computing. As well as hosting a centrally run DIRAC service, we will also publish our changes and additions to the upstream DIRAC codebase under an open-source license. We report on the current status of this project and show increasing adoption of DIRAC within the non-LHC communities.

  17. Efficient Redundancy Techniques in Cloud and Desktop Grid Systems using MAP/G/c-type Queues

    NASA Astrophysics Data System (ADS)

    Chakravarthy, Srinivas R.; Rumyantsev, Alexander

    2018-03-01

    Cloud computing is continuing to prove its flexibility and versatility in helping industries and businesses as well as academia as a way of providing needed computing capacity. As an important alternative to cloud computing, desktop grids make it possible to utilize the idle computer resources of an enterprise or community by means of a distributed computing system, providing a more secure and controllable environment with lower operational expenses. Further, both cloud computing and desktop grids are meant to optimize limited resources and at the same time to decrease the expected latency for users. The crucial parameter for optimization, both in cloud computing and in desktop grids, is the level of redundancy (replication) for service requests/workunits. In this paper we study optimal replication policies by considering three variations of Fork-Join systems in the context of a multi-server queueing system with a versatile point process for the arrivals. For services we consider phase-type distributions as well as shifted exponential and Weibull. We use both analytical and simulation approaches in our analysis and report some interesting qualitative results.
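
    The replication trade-off at the heart of the paper can be seen in a small Monte Carlo experiment: cloning a request to r servers cuts latency (the fastest replica wins) but multiplies the capacity consumed. The sketch below uses shifted-exponential service times, one of the distributions the paper considers, with invented parameters.

        # Monte Carlo sketch of the replication trade-off: latency is the minimum
        # of r i.i.d. shifted-exponential service times, but r replicas occupy
        # r times the capacity. Parameters are invented for illustration.
        import random

        def mean_latency(r, shift=1.0, rate=0.5, trials=50_000):
            total = 0.0
            for _ in range(trials):
                total += shift + min(random.expovariate(rate) for _ in range(r))
            return total / trials

        for r in (1, 2, 3, 4):
            lat = mean_latency(r)
            print(f"r={r}: latency {lat:.2f}, capacity used {r * lat:.2f}")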

  18. Jobs masonry in LHCb with elastic Grid Jobs

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Charpentier, Ph

    2015-12-01

    In any distributed computing infrastructure, a job is normally forbidden to run for an indefinite amount of time. This limitation is implemented using different technologies, the most common one being the CPU time limit imposed by batch queues. It is therefore important to have a good estimate of how much CPU work a job will require: otherwise, it might be killed by the batch system, or by whatever system is controlling the jobs’ execution. In many modern interwares, the jobs are actually executed by pilot jobs, which can use the whole available time to run multiple consecutive jobs. If at some point the available time in a pilot is too short for the execution of any job, it has to be released, even though it could have been used efficiently by a shorter job. Within LHCbDIRAC, the LHCb extension of the DIRAC interware, we developed a simple way to fully exploit the computing capabilities available to a pilot, even on resources with limited time capabilities, by adding elasticity to production Monte Carlo (MC) simulation jobs. With our approach, independently of the time available, LHCbDIRAC will always have the possibility to execute an MC job, whose length will be adapted to the available amount of time: therefore the same job, running on different computing resources with different time limits, will produce different amounts of events. The decision on the number of events to be produced is made just in time at the start of the job, when the capabilities of the resource are known. In order to know how many events an MC job will be instructed to produce, LHCbDIRAC simply requires three values: the CPU-work per event for that type of job, the power of the machine it is running on, and the time left for the job before being killed. Knowing these values, we can estimate the number of events the job will be able to simulate with the available CPU time. This paper will demonstrate that, using this simple but effective solution, LHCb manages to make a more efficient use of the available resources, and that it can easily use new types of resources. An example is represented by resources provided by batch queues, where low-priority MC jobs can be used as "masonry" jobs in multi-jobs pilots. A second example is represented by opportunistic resources with limited available time.
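
    The event-count decision described above follows directly from the three quantities named in the text. A minimal sketch, assuming HS06-style benchmark units and an invented safety margin (neither is specified in the abstract):

        # Just-in-time event-count estimate from the three inputs named in the
        # text: CPU-work per event, machine power, and time left. The unit choice
        # (HS06) and the safety margin are assumptions, not values from the paper.

        def events_to_produce(cpu_work_per_event_hs06s, power_hs06, time_left_s,
                              safety_margin=0.9):
            available_work = power_hs06 * time_left_s * safety_margin
            return max(0, int(available_work // cpu_work_per_event_hs06s))

        # e.g. hypothetical: 500 HS06.s/event on a 10 HS06 slot, 6 hours left:
        print(events_to_produce(500.0, 10.0, 6 * 3600))  # -> 388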

  19. Evolution of user analysis on the grid in ATLAS

    NASA Astrophysics Data System (ADS)

    Dewhurst, A.; Legger, F.; ATLAS Collaboration

    2017-10-01

    More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and system capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Typical user workflows on the grid, and their associated metrics, are discussed. Measurements of user job performance and typical requirements are also shown.

  20. [Tumor Data Interacted System Design Based on Grid Platform].

    PubMed

    Liu, Ying; Cao, Jiaji; Zhang, Haowei; Zhang, Ke

    2016-06-01

    In order to satisfy the demands of massive and heterogeneous tumor clinical data processing and multi-center collaborative diagnosis and treatment of tumor diseases, a Tumor Data Interacted System (TDIS) was established based on a grid platform, realizing a virtualized platform for tumor diagnosis services that shares tumor information in real time under standardized management. The system adopts Globus Toolkit 4.0 tools to build an open grid service framework and encapsulates data resources based on the Web Services Resource Framework (WSRF). The system uses middleware technology to provide a unified access interface for heterogeneous data interaction, which optimizes the interactive process with virtualized services to query and call tumor information resources flexibly. For massive amounts of heterogeneous tumor data, a federated storage and multiple-authorization mode is selected as the security services mechanism, with real-time monitoring and load balancing. The system can cooperatively manage multi-center heterogeneous tumor data to realize tumor patient data query, sharing, and analysis, and can compare and match resources in a typical clinical database or in the clinical information databases of other service nodes; thus it can assist doctors in consulting similar cases and drawing up multidisciplinary treatment plans for tumors. Consequently, the system can improve the efficiency of tumor diagnosis and treatment, and promote the development of the collaborative tumor diagnosis model.

  1. Maui Smart Grid Demonstration Project Managing Distribution System Resources for Improved Service Quality and Reliability, Transmission Congestion Relief, and Grid Support Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    2014-09-30

    The Maui Smart Grid Project (MSGP) is under the leadership of the Hawaii Natural Energy Institute (HNEI) of the University of Hawaii at Manoa. The project team includes Maui Electric Company, Ltd. (MECO), Hawaiian Electric Company, Inc. (HECO), Sentech (a division of SRA International, Inc.), Silver Spring Networks (SSN), Alstom Grid, Maui Economic Development Board (MEDB), University of Hawaii-Maui College (UHMC), and the County of Maui. MSGP was supported by the U.S. Department of Energy (DOE) under Cooperative Agreement Number DE-FC26-08NT02871, with approximately 50% co-funding supplied by MECO. The project was designed to develop and demonstrate an integrated monitoring, communications, database, applications, and decision support solution that aggregates renewable energy (RE), other distributed generation (DG), energy storage, and demand response technologies in a distribution system to achieve both distribution and transmission-level benefits. The application of these new technologies and procedures will increase MECO’s visibility into system conditions, with the expected benefits of enabling more renewable energy resources to be integrated into the grid, improving service quality, increasing overall reliability of the power system, and ultimately reducing costs to both MECO and its customers.

  2. IEEE 1547 and 2030 Standards for Distributed Energy Resources Interconnection and Interoperability with the Electricity Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basso, T.

    Public-private partnerships have been a mainstay of the U.S. Department of Energy and the National Renewable Energy Laboratory (DOE/NREL) approach to research and development. These partnerships also include technology development that enables grid modernization and distributed energy resources (DER) advancement, especially renewable energy systems integration with the grid. Through DOE/NREL and industry support of Institute of Electrical and Electronics Engineers (IEEE) standards development, the IEEE 1547 series of standards has helped shape the way utilities and other businesses have worked together to realize increasing amounts of DER interconnected with the distribution grid. And more recently, the IEEE 2030 series of standards is helping to further realize greater implementation of communications and information technologies that provide interoperability solutions for enhanced integration of DER and loads with the grid. For these standards development partnerships, for approximately $1 of federal funding, industry partnering has contributed $5. In this report, the status update is presented for the American National Standards IEEE 1547 and IEEE 2030 series of standards. A short synopsis of the history of the 1547 standards is first presented, then the current status and future direction of the ongoing standards development activities are discussed.

  3. Contributing opportunistic resources to the grid with HTCondor-CE-Bosco

    NASA Astrophysics Data System (ADS)

    Weitzel, Derek; Bockelman, Brian

    2017-10-01

    The HTCondor-CE [1] is the primary Compute Element (CE) software for the Open Science Grid. While it offers many advantages for large sites, for smaller, WLCG Tier-3 sites or opportunistic clusters, it can be a difficult task to install, configure, and maintain the HTCondor-CE. Installing a CE typically involves understanding several pieces of software, installing hundreds of packages on a dedicated node, updating several configuration files, and implementing grid authentication mechanisms. On the other hand, accessing remote clusters from personal computers has been dramatically improved with Bosco: site admins only need to set up SSH public key authentication and appropriate accounts on a login host. In this paper, we take a new approach with the HTCondor-CE-Bosco, a CE which combines the flexibility and reliability of the HTCondor-CE with the easy-to-install Bosco. The administrators of the opportunistic resource are not required to install any software: only SSH access and a user account are required from the host site. The OSG can then run the grid-specific portions from a central location. This provides a new, more centralized, model for running grid services, which complements the traditional distributed model. We will show the architecture of a HTCondor-CE-Bosco enabled site, as well as feedback from multiple sites that have deployed it.

  4. Aggregation server for grid-integrated vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kempton, Willett

    2015-05-26

    Methods, systems, and apparatus for aggregating electric power flow between an electric grid and electric vehicles are disclosed. An apparatus for aggregating power flow may include a memory and a processor coupled to the memory to receive electric vehicle equipment (EVE) attributes from a plurality of EVEs, aggregate EVE attributes, predict total available capacity based on the EVE attributes, and dispatch at least a portion of the total available capacity to the grid. Power flow may be aggregated by receiving EVE operational parameters from each EVE, aggregating the received EVE operational parameters, predicting total available capacity based on the aggregated EVE operational parameters, and dispatching at least a portion of the total available capacity to the grid.
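
    A rough Python illustration of the claimed flow: predict fleet capacity from per-vehicle operational parameters, then dispatch a portion of it to the grid. All field names (state_of_charge_kwh, reserve_kwh, etc.) and the 80% dispatch fraction are invented for illustration and do not come from the patent.

        from dataclasses import dataclass

        @dataclass
        class EveParams:
            vehicle_id: str
            state_of_charge_kwh: float   # energy currently stored
            reserve_kwh: float           # energy the driver wants to keep
            max_discharge_kw: float      # power limit of the charger/inverter

        def available_capacity_kw(fleet: list[EveParams], window_h: float) -> float:
            """Predict total capacity the fleet can offer over a time window."""
            total = 0.0
            for ev in fleet:
                dispatchable_kwh = max(0.0, ev.state_of_charge_kwh - ev.reserve_kwh)
                # Limited both by stored energy and by the power rating.
                total += min(ev.max_discharge_kw, dispatchable_kwh / window_h)
            return total

        fleet = [EveParams("ev1", 40.0, 20.0, 10.0), EveParams("ev2", 60.0, 30.0, 7.0)]
        capacity = available_capacity_kw(fleet, window_h=1.0)
        dispatch = 0.8 * capacity  # dispatch a portion of the predicted capacity
        print(f"available {capacity:.1f} kW, dispatching {dispatch:.1f} kW")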

  5. Modernizing Distribution System Restoration to Achieve Grid Resiliency Against Extreme Weather Events: An Integrated Solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chen; Wang, Jianhui; Ton, Dan

    Recent severe power outages caused by extreme weather hazards have highlighted the importance and urgency of improving the resilience of the electric power grid. As the distribution grids still remain vulnerable to natural disasters, the power industry has focused on methods of restoring distribution systems after disasters in an effective and quick manner. The current distribution system restoration practice for utilities is mainly based on predetermined priorities and tends to be inefficient and suboptimal, and the lack of situational awareness after the hazard significantly delays the restoration process. As a result, customers may experience an extended blackout, which causes large economic loss. On the other hand, the emerging advanced devices and technologies enabled through grid modernization efforts have the potential to improve the distribution system restoration strategy. However, utilizing these resources to aid the utilities in better distribution system restoration decision-making in response to extreme weather events is a challenging task. Therefore, this paper proposes an integrated solution: a distribution system restoration decision support tool designed by leveraging resources developed for grid modernization. We first review the current distribution restoration practice and discuss why it is inadequate in response to extreme weather events. Then we describe how the grid modernization efforts could benefit distribution system restoration, and we propose an integrated solution in the form of a decision support tool to achieve the goal. The advantages of the solution include improving situational awareness of the system damage status and facilitating survivability for customers. The paper provides a comprehensive review of how the existing methodologies in the literature could be leveraged to achieve the key advantages. The benefits of the developed system restoration decision support tool include the optimal and efficient allocation of repair crews and resources, the expediting of the restoration process, and the reduction of outage durations for customers, in response to severe blackouts due to extreme weather hazards.

  6. Power management and frequency regulation for microgrid and smart grid: A real-time demand response approach

    NASA Astrophysics Data System (ADS)

    Pourmousavi Kani, Seyyed Ali

    Future power systems (known as the smart grid) will experience a high penetration level of variable distributed energy resources to bring abundant, affordable, clean, efficient, and reliable electric power to all consumers. However, they might suffer from the uncertain and variable nature of this generation in terms of reliability, and especially in providing the required balancing reserves. In the current power system structure, balancing reserves (provided by spinning and non-spinning power generation units) are usually supplied by conventional fossil-fueled power plants. However, such power plants are not the favorite option for the smart grid because of their low efficiency, high emissions, and expensive capital investments in transmission and distribution facilities, to name a few. Providing regulation services in the presence of variable distributed energy resources would be even more difficult for islanded microgrids. The impact and effectiveness of demand response (DR) are still not clear at the distribution and transmission levels; in other words, there is no solid research reported in the literature on evaluating the impact of DR on power system dynamic performance. In order to address these issues, a real-time demand response approach along with real-time power management (specifically for microgrids) is proposed in this research. The real-time demand response solution is utilized at the transmission level (through a load-frequency control model) and the distribution level (in both the islanded and grid-tied modes) to provide effective and fast regulation services for the stable operation of the power system. Then, multiple real-time power management algorithms for grid-tied and islanded microgrids are proposed to operate microgrids economically and effectively. Extensive dynamic modeling of generation, storage, and load, as well as different controller designs, is developed throughout this research to provide appropriate models and a simulation environment for evaluating the effectiveness of the proposed methodologies. Simulation results revealed the effectiveness of the proposed methods in providing balancing reserves and in the economic and stable operation of microgrids. The proposed tools and approaches can significantly enhance the application of microgrids and demand response in the smart grid era. They will also help to increase the penetration level of variable distributed generation resources in the smart grid.

  7. Sort-Mid tasks scheduling algorithm in grid computing.

    PubMed

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. Many researchers have developed variant scheduling algorithms to approach optimality, and these show good performance in task scheduling with respect to resource selection. However, exploiting the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to obtain an average value by sorting the list of completion times of each task. Then, the maximum average is identified. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is then removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.
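
    The steps above are concrete enough to sketch. Below is a minimal Python rendering of one plausible reading of the heuristic: average each task's completion times across machines, repeatedly pick the task with the largest average, and place it on the machine where it finishes earliest. The ready-time bookkeeping is an assumption for illustration, not taken from the paper.

        def sort_mid(ct):
            """ct[t][m] = estimated completion time of task t on machine m."""
            n_tasks, n_machines = len(ct), len(ct[0])
            ready = [0.0] * n_machines          # when each machine becomes free
            unallocated = set(range(n_tasks))
            schedule = {}
            while unallocated:
                # Task with the maximum average completion time goes first...
                t = max(unallocated, key=lambda t: sum(ct[t]) / n_machines)
                # ...onto the machine where it would finish earliest.
                m = min(range(n_machines), key=lambda m: ready[m] + ct[t][m])
                ready[m] += ct[t][m]
                schedule[t] = m
                unallocated.remove(t)
            return schedule, max(ready)          # allocation and makespan

        sched, makespan = sort_mid([[4.0, 8.0], [6.0, 3.0], [5.0, 5.0]])
        print(sched, makespan)                   # {0: 0, 2: 1, 1: 1} 8.0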

  8. Sort-Mid tasks scheduling algorithm in grid computing

    PubMed Central

    Reda, Naglaa M.; Tawfik, A.; Marzok, Mohamed A.; Khamis, Soheir M.

    2014-01-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. Many researchers have developed variant scheduling algorithms to approach optimality, and these show good performance in task scheduling with respect to resource selection. However, exploiting the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to obtain an average value by sorting the list of completion times of each task. Then, the maximum average is identified. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is then removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan. PMID:26644937

  9. Time-aware service-classified spectrum defragmentation algorithm for flex-grid optical networks

    NASA Astrophysics Data System (ADS)

    Qiu, Yang; Xu, Jing

    2018-01-01

    By employing sophisticated routing and spectrum assignment (RSA) algorithms together with a finer spectrum granularity (namely, the frequency slot) in resource allocation procedures, flex-grid optical networks can accommodate diverse kinds of services with high spectrum-allocation flexibility and resource-utilization efficiency. However, the continuity and contiguity constraints in spectrum allocation procedures may induce isolated, small-sized, unoccupied spectral blocks (known as spectrum fragments) in flex-grid optical networks. Although these spectrum fragments are left unoccupied, they can hardly be utilized directly by subsequent service requests because of their spectral characteristics and the constraints in spectrum allocation. In this way, the existence of spectrum fragments may exhaust the available spectrum resources for an incoming service request and thus worsen networking performance. Many reactive defragmentation algorithms have therefore been proposed to handle fragmented spectrum resources by re-optimizing the routing paths and spectrum resources of existing services. But this re-optimization may disrupt the traffic of existing services and require extra components. By comparison, proactive defragmentation algorithms (e.g. fragmentation-aware algorithms) suppress spectrum fragments at their generation instead of handling already fragmented resources. Although proactive algorithms induce no traffic disruption and require no extra components, they leave the generated spectrum fragments unhandled, which greatly limits their defragmentation efficiency. In this paper, by comprehensively considering the characteristics of both the reactive and the proactive approaches, we propose a time-aware service-classified (TASC) spectrum defragmentation algorithm, which simultaneously employs proactive and reactive mechanisms to suppress spectrum fragments with awareness of the services' types and duration times. By dividing the spectrum resources into several flexible groups according to service type, and limiting both the spectrum allocation and the spectrum re-tuning for a given service to one specific spectrum group according to its type, the proposed TASC algorithm can not only suppress fragment generation inside each spectrum group but also handle the fragments generated between two adjacent groups. In this way, it achieves higher defragmentation efficiency than either purely reactive or purely proactive algorithms. Additionally, as fragment generation is restrained between spectrum groups and the defragmentation procedure is limited to within each spectrum group, the traffic disruption induced for existing services can be reduced. Moreover, the proposed TASC algorithm always re-tunes the spectrum resources of the service with the maximum duration time first during defragmentation, which further reduces spectrum fragments, since services with longer duration times are more likely to induce spectrum fragments than services with shorter duration times. The simulation results show that the proposed TASC defragmentation algorithm can significantly reduce the number of generated spectrum fragments while improving the service blocking performance.
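
    As a toy illustration of the two mechanisms just described, the sketch below confines first-fit allocation to per-class slot groups and orders defragmentation by remaining duration. The slot ranges, class names, and data structures are invented, and the real algorithm's RSA machinery is omitted.

        # Per-service-class slot groups (invented sizes).
        GROUPS = {"video": range(0, 40), "data": range(40, 80)}

        def first_fit(occupied, group, width):
            """Find `width` contiguous free slots inside the class's own group."""
            slots = list(group)
            for i in range(len(slots) - width + 1):
                window = slots[i:i + width]
                if all(s not in occupied for s in window):
                    return window
            return None

        def defrag_order(connections):
            """Re-tune the connection with the longest remaining duration first."""
            return sorted(connections, key=lambda c: c["duration"], reverse=True)

        occupied = {0, 1, 5, 6, 7}
        print(first_fit(occupied, GROUPS["video"], width=3))   # -> [2, 3, 4]
        conns = [{"id": 1, "duration": 10}, {"id": 2, "duration": 30}]
        print([c["id"] for c in defrag_order(conns)])          # -> [2, 1]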

  10. glideinWMS—a generic pilot-based workload management system

    NASA Astrophysics Data System (ADS)

    Sfiligoi, I.

    2008-07-01

    Grid resources are distributed among hundreds of independent Grid sites, requiring a higher-level Workload Management System (WMS) if they are to be used efficiently. Pilot jobs have been used for this purpose by many communities, bringing increased reliability, global fair share, and just-in-time resource matching. glideinWMS is a WMS based on the Condor glidein concept, i.e. a regular Condor pool in which the Condor daemons (startds) are started by pilot jobs, and real jobs are vanilla, standard, or MPI universe jobs. The glideinWMS is composed of a set of Glidein Factories, handling the submission of pilot jobs to a set of Grid sites, and a set of VO Frontends, requesting pilot submission based on the status of user jobs. This paper contains a structural overview of glideinWMS as well as a detailed description of the current implementation and the current scalability limits.
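
    The division of labor between Frontends and Factories can be caricatured in a few lines of Python. The one-pilot-per-idle-job policy, the pool cap, and the round-robin site choice below are simplifications for illustration, not the behavior of the real system.

        def frontend_request(idle_jobs: int, running_pilots: int, max_pilots: int) -> int:
            """Ask for enough pilots to cover idle jobs, capped by a pool limit."""
            wanted = idle_jobs - running_pilots
            return max(0, min(wanted, max_pilots - running_pilots))

        def factory_submit(n_pilots: int, sites: list[str]) -> dict[str, int]:
            """Spread pilot submissions round-robin over the configured sites."""
            submissions = {s: 0 for s in sites}
            for i in range(n_pilots):
                submissions[sites[i % len(sites)]] += 1
            return submissions

        n = frontend_request(idle_jobs=120, running_pilots=40, max_pilots=100)
        print(n, factory_submit(n, ["site_a", "site_b", "site_c"]))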

  11. glideinWMS - A generic pilot-based Workload Management System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sfiligoi, Igor (Fermilab)

    Grid resources are distributed among hundreds of independent Grid sites, requiring a higher-level Workload Management System (WMS) if they are to be used efficiently. Pilot jobs have been used for this purpose by many communities, bringing increased reliability, global fair share, and just-in-time resource matching. GlideinWMS is a WMS based on the Condor glidein concept, i.e. a regular Condor pool in which the Condor daemons (startds) are started by pilot jobs, and real jobs are vanilla, standard, or MPI universe jobs. The glideinWMS is composed of a set of Glidein Factories, handling the submission of pilot jobs to a set of Grid sites, and a set of VO Frontends, requesting pilot submission based on the status of user jobs. This paper contains a structural overview of glideinWMS as well as a detailed description of the current implementation and the current scalability limits.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denholm, Paul

    While it may seem obvious that wind and solar 'need' energy storage to be successfully integrated into the world's electricity grids, both detailed integration studies and real-world experience have shown that storage is only one of many options that could enable substantially increased growth of these renewable resources. This talk will discuss the potential role of energy storage in integrating wind and solar, demonstrating that in the near term perhaps less exciting, but often more cost-effective, alternatives will likely provide much of the grid flexibility needed to add renewable resources. The talk will also demonstrate that the decreasing value of PV and wind at increased penetration creates greater opportunities for storage. It also demonstrates that 'the sun doesn't always shine and the wind doesn't always blow' is only one reason why energy storage may be an increasingly attractive solution to the challenges of operating the grid of the future.

  13. Framework for modeling high-impact, low-frequency power grid events to support risk-informed decisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veeramany, Arun; Unwin, Stephen D.; Coles, Garill A.

    2016-06-25

    Natural and man-made hazardous events resulting in loss of grid infrastructure assets challenge the security and resilience of the electric power grid. However, the planning and allocation of appropriate contingency resources for such events requires an understanding of their likelihood and the extent of their potential impact. Where these events are of low likelihood, a risk-informed perspective on planning can be difficult, as the statistical basis needed to directly estimate the probabilities and consequences of their occurrence does not exist. Because risk-informed decisions rely on such knowledge, a basis for modeling the risk associated with high-impact, low-frequency events (HILFs) is essential. Insights from such a model indicate where resources are most rationally and effectively expended. A risk-informed realization of designing and maintaining a grid resilient to HILFs will demand consideration of a spectrum of hazards/threats to infrastructure integrity, an understanding of their likelihoods of occurrence, treatment of the fragilities of critical assets to the stressors induced by such events, and through modeling grid network topology, the extent of damage associated with these scenarios. The model resulting from integration of these elements will allow sensitivity assessments based on optional risk management strategies, such as alternative pooling, staging and logistic strategies, and emergency contingency planning. This study is focused on the development of an end-to-end HILF risk-assessment framework. Such a framework is intended to provide the conceptual and overarching technical basis for the development of HILF risk models that can inform decision-makers across numerous stakeholder groups in directing resources optimally towards the management of risks to operational continuity.

  14. An Analysis of Security and Privacy Issues in Smart Grid Software Architectures on Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmhan, Yogesh; Kumbhare, Alok; Cao, Baohua

    2011-07-09

    Power utilities globally are increasingly upgrading to Smart Grids that use bi-directional communication with the consumer to enable an information-driven approach to distributed energy management. Clouds offer features well suited for Smart Grid software platforms and applications, such as elastic resources and shared services. However, the security and privacy concerns inherent in an information rich Smart Grid environment are further exacerbated by their deployment on Clouds. Here, we present an analysis of security and privacy issues in a Smart Grids software architecture operating on different Cloud environments, in the form of a taxonomy. We use the Los Angeles Smart Grid Project that is underway in the largest U.S. municipal utility to drive this analysis that will benefit both Cloud practitioners targeting Smart Grid applications, and Cloud researchers investigating security and privacy.

  15. Smart Grid Status and Metrics Report Appendices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balducci, Patrick J.; Antonopoulos, Chrissi A.; Clements, Samuel L.

    A smart grid uses digital power control and communication technology to improve the reliability, security, flexibility, and efficiency of the electric system, from large generation through the delivery systems to electricity consumers and a growing number of distributed generation and storage resources. To convey progress made in achieving the vision of a smart grid, this report uses a set of six characteristics derived from the National Energy Technology Laboratory Modern Grid Strategy. The Smart Grid Status and Metrics Report defines and examines 21 metrics that collectively provide insight into the grid’s capacity to embody these characteristics. This appendix presents papers covering each of the 21 metrics identified in Section 2.1 of the Smart Grid Status and Metrics Report. These metric papers were prepared in advance of the main body of the report and collectively form its informational backbone.

  16. Cartograms Facilitate Communication of Climate Change Risks and Responsibilities

    NASA Astrophysics Data System (ADS)

    Döll, Petra

    2017-12-01

    Communication of climate change (CC) risks is challenging, in particular if global-scale, spatially resolved quantitative information is to be conveyed. Typically, visualization of CC risks, which arise from the combination of hazard, exposure, and vulnerability, is confined to showing only the hazards in the form of global thematic maps. This paper explores the potential of contiguous value-by-area cartograms, that is, distorted density-equalizing maps, for improving communication of CC risks and the countries' differentiated responsibilities for CC. Two global-scale cartogram sets visualize, as an example, groundwater-related CC risks in 0.5° grid cells, while another one visualizes the correlation of (cumulative) fossil-fuel carbon dioxide emissions with the countries' population and gross domestic product. Viewers of the latter set visually recognize the lack of global equity and that the countries' wealth has been built on harmful emissions. I recommend that CC risks be communicated by bivariate gridded cartograms showing the hazard in color and population, or a combination of population and a vulnerability indicator, by distortion of grid cells. Gridded cartograms are also appropriate for visualizing the availability of natural resources to humans. For communicating complex information, sets of cartograms should be carefully designed instead of presenting single cartograms. Inclusion of a conventionally distorted map enhances the viewers' capability to take up the information represented by distortion. Empirical studies about the capability of global cartograms to convey complex information and to trigger moral emotions should be conducted, with a special focus on risk communication.

  17. Hawaiian Electric Advanced Inverter Grid Support Function Laboratory Validation and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Austin; Nagarajan, Adarsh; Prabakar, Kumar

    The objective for this test plan was to better understand how to utilize the performance capabilities of advanced inverter functions to allow the interconnection of distributed energy resource (DER) systems to support the new Customer Self-Supply, Customer Grid-Supply, and other future DER programs. The purpose of this project was: 1) to characterize how the tested grid supportive inverters performed the functions of interest, 2) to evaluate the grid supportive inverters in an environment that emulates the dynamics of O'ahu's electrical distribution system, and 3) to gain insight into the benefits of the grid support functions on selected O'ahu island distribution feeders. These goals were achieved through laboratory testing of photovoltaic inverters, including power hardware-in-the-loop testing.

  18. Enhancement of Voltage Stability of DC Smart Grid During Islanded Mode by Load Shedding Scheme

    NASA Astrophysics Data System (ADS)

    Nassor, Thabit Salim; Senjyu, Tomonobu; Yona, Atsushi

    2015-10-01

    This paper presents the voltage stability of a DC smart grid based on renewable energy resources during grid-connected and islanded modes. During islanded mode, load shedding based on the state of charge of the battery and the distribution line voltage is proposed for voltage stability and for preserving power to critical loads. The analyzed power system comprises a wind turbine, a photovoltaic generator, a storage battery as a controllable load, DC loads, and power converters. A fuzzy logic control strategy is applied for power consumption control of the controllable loads and of the grid-connected dual active bridge series resonant converters. The proposed DC smart grid operation has been verified by simulation using MATLAB® and PLECS® Blockset. The obtained results show the effectiveness of the proposed method.
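
    For illustration only, here is a Python sketch of a threshold-based shedding rule of the general kind described. The paper itself uses a fuzzy logic controller; all thresholds, the nominal 380 V bus, and the load data below are invented.

        NOMINAL_V = 380.0  # assumed DC bus nominal voltage (not from the paper)

        def loads_to_shed(soc: float, bus_v: float, loads: list[dict]) -> list[str]:
            """Return non-critical loads to disconnect, lowest priority first."""
            stress = (soc < 0.3) or (bus_v < 0.95 * NOMINAL_V)
            if not stress:
                return []
            sheddable = [l for l in loads if not l["critical"]]
            sheddable.sort(key=lambda l: l["priority"])   # shed low priority first
            # Shed more aggressively the deeper the state-of-charge deficit.
            n = len(sheddable) if soc < 0.15 else max(1, len(sheddable) // 2)
            return [l["name"] for l in sheddable[:n]]

        loads = [{"name": "lighting", "critical": False, "priority": 1},
                 {"name": "hvac", "critical": False, "priority": 2},
                 {"name": "icu", "critical": True, "priority": 9}]
        print(loads_to_shed(soc=0.25, bus_v=352.0, loads=loads))  # -> ['lighting']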

  19. Design of Grid Portal System Based on RIA

    NASA Astrophysics Data System (ADS)

    Cao, Caifeng; Luo, Jianguo; Qiu, Zhixin

    Grid portals are an important branch of grid research. In order to overcome the weak expressiveness, poor interactivity, low operating efficiency, and other shortcomings of the first and second generations of grid portal systems, RIA technology was introduced. A new portal architecture was designed based on RIA and Web services. A concrete implementation scheme of the portal system is presented using Adobe Flex/Flash technology, forming a new design pattern. In terms of system architecture, the design pattern combines the advantages of B/S and C/S, balances the server and client sides, optimizes system performance, and achieves platform independence. In terms of system function, it realizes grid service calls, provides a client interface with a rich user experience, and integrates local resources using FABridge, LCDS, Flash Player, and other components.

  20. Adaptive Grid Refinement for Atmospheric Boundary Layer Simulations

    NASA Astrophysics Data System (ADS)

    van Hooft, Antoon; van Heerwaarden, Chiel; Popinet, Stephane; van der linden, Steven; de Roode, Stephan; van de Wiel, Bas

    2017-04-01

    We validate and benchmark an adaptive mesh refinement (AMR) algorithm for numerical simulations of the atmospheric boundary layer (ABL). The AMR technique aims to distribute computational resources efficiently over a domain by refining and coarsening the numerical grid locally and in time. This can be beneficial for studying cases in which length scales vary significantly in time and space. We present results for a case describing the growth and decay of a convective boundary layer. The AMR results are benchmarked against two runs using a fixed, finely meshed grid: first with the same numerical formulation as the AMR code, and second with a code dedicated to ABL studies. Compared to the fixed and isotropic grid runs, the AMR algorithm can coarsen and refine the grid such that accurate results are obtained while using only a fraction of the grid cells. Performance-wise, the AMR run was cheaper than the fixed and isotropic grid run with a similar numerical formulation. However, for this specific case, the dedicated code outperformed both aforementioned runs.
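
    The core refine/coarsen decision can be illustrated in one dimension. Production AMR codes use tree grids and wavelet-style error estimates, so the simple gradient criterion and tolerances below are simplifications for illustration only.

        def amr_flags(field, dx, refine_tol, coarsen_tol):
            """Flag each cell of a 1-D field for refinement or coarsening."""
            flags = []
            for i in range(len(field)):
                lo, hi = max(i - 1, 0), min(i + 1, len(field) - 1)
                # Centered gradient estimate (one-sided at the boundaries).
                grad = abs(field[hi] - field[lo]) / (max(hi - lo, 1) * dx)
                if grad > refine_tol:
                    flags.append("refine")
                elif grad < coarsen_tol:
                    flags.append("coarsen")
                else:
                    flags.append("keep")
            return flags

        # A step-like profile: only the interior jump triggers refinement.
        profile = [0.0, 0.0, 0.1, 1.0, 1.0, 1.0]
        print(amr_flags(profile, dx=1.0, refine_tol=0.3, coarsen_tol=0.05))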

  1. Wind Sensing and Modeling | Grid Modernization | NREL

    Science.gov Websites

    NREL's wind sensing and modeling work supports the deployment of wind-based generation technologies for all stages of a plant's life, from resource estimates to simulation at the turbine, wind plant, and regional scales for resource prospecting and resource assessment.

  2. Using a Decision Grid Process to Build Consensus in Electronic Resources Cancellation Decisions

    ERIC Educational Resources Information Center

    Foudy, Gerri; McManus, Alesia

    2005-01-01

    Many libraries are expending an increasing part of their collections budgets on electronic resources. At the same time many libraries, especially those which are state funded, face diminishing budgets and high rates of inflation for serials subscriptions in all formats, including electronic resources. Therefore, many libraries need to develop ways…

  3. Sustainable recycling technologies for Solar PV off-grid system

    NASA Astrophysics Data System (ADS)

    Uppal, Bhavesh; Tamboli, Adish; Wubhayavedantapuram, Nandan

    2017-11-01

    Policy makers throughout the world have accepted climate change as a repercussion of fossil fuel exploitation. This has led governments to integrate renewable energy streams into their national energy mix. PV off-grid systems have been at the forefront of this transition because of their continually increasing efficiency and cost effectiveness. These systems are expected to produce large amounts of different waste streams at the end of their lifetime. It is important that these waste streams be recycled because of the scarcity of available resources. Our study found that separate research efforts have been carried out to increase the recycling efficiencies of individual PV system components, but there is a lack of comprehensive, methodical research detailing efficient and sustainable recycling processes for the entire PV off-grid system. This paper reviews current and future recycling technologies for PV off-grid systems and presents a scheme of the most sustainable recycling technologies with the potential for adoption. Full Recovery End-of-Life Photovoltaic (FRELP) recycling technology can offer opportunities to sustainably recycle crystalline silicon PV modules. Electro-hydrometallurgical processes and vacuum technologies can be used for recovering lead from lead-acid batteries with a high recovery rate. The metals in the WEEE can be recycled using a combination of biometallurgical technology, vacuum metallurgical technology, and other advanced metallurgical technologies (ultrasonic and mechano-chemical technology), while the plastic components can be effectively recycled without separation by using compatibilizers. All these advanced technologies, when used in combination with each other, provide sustainable recycling options for the growing waste from PV off-grid systems. These promising technologies still need further improvement and require proper integration techniques before implementation.

  4. ARC SDK: A toolbox for distributed computing and data applications

    NASA Astrophysics Data System (ADS)

    Skou Andersen, M.; Cameron, D.; Lindemann, J.

    2014-06-01

    Grid middleware suites provide tools to perform the basic tasks of job submission and retrieval and data access, however these tools tend to be low-level, operating on individual jobs or files and lacking in higher-level concepts. User communities therefore generally develop their own application-layer software catering to their specific communities' needs on top of the Grid middleware. It is thus important for the Grid middleware to provide a friendly, well documented and simple to use interface for the applications to build upon. The Advanced Resource Connector (ARC), developed by NorduGrid, provides a Software Development Kit (SDK) which enables applications to use the middleware for job and data management. This paper presents the architecture and functionality of the ARC SDK along with an example graphical application developed with the SDK. The SDK consists of a set of libraries accessible through Application Programming Interfaces (API) in several languages. It contains extensive documentation and example code and is available on multiple platforms. The libraries provide generic interfaces and rely on plugins to support a given technology or protocol, and this modular design makes it easy to add a new plugin if the application requires supporting additional technologies. The ARC Graphical Clients package is a graphical user interface built on top of the ARC SDK and the Qt toolkit and it is presented here as a fully functional example of an application. It provides a graphical interface to enable job submission and management at the click of a button, and allows data on any Grid storage system to be manipulated using a visual file system hierarchy, as if it were a regular file system.

  5. Grid and Cloud for Developing Countries

    NASA Astrophysics Data System (ADS)

    Petitdidier, Monique

    2014-05-01

    The European Grid e-infrastructure has shown the capacity to connect geographically distributed heterogeneous compute resources in a secure way, taking advantage of a robust and fast REN (Research and Education Network). In many countries, as in Africa, the first step has been to implement a REN, with regional organizations such as Ubuntunet, WACREN, or ASREN coordinating the development and improvement of the network and its interconnection. Internet connectivity in those countries is still expanding rapidly. The second step has been to meet the compute needs of scientists. Even though many of them have their own laptops, multi-core or not, for more and more applications this is not enough, because they face intensive computing due to the large amount of data to be processed and/or complex codes. So far, one solution has been to go abroad, to Europe or America, to run large applications, or simply not to participate in international communities. The Grid is very attractive for connecting geographically distributed heterogeneous resources, aggregating new ones, and creating new sites on the REN with secure access. All users have the same services even if they have no resources in their own institute. With faster and more robust internet they will be able to take advantage of the European Grid. There are different initiatives to provide resources and training, such as the UNESCO/HP Brain Gain initiative and EUMEDGrid. Nowadays clouds are becoming very attractive, and they are starting to be developed in some countries. This talk presents the challenges these countries face in implementing such e-infrastructures and in developing, in parallel, scientific and technical research and education in the new technologies, illustrated by examples.

  6. Spaceflight Operations Services Grid (SOSG)

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Thigpen, William W.

    2004-01-01

    In an effort to adapt existing space flight operations services to emerging Grid technologies, we are developing a prototype space flight operations Grid. This prototype is based on the operational services being provided to the International Space Station's Payload operations located at the Marshall Space Flight Center, Alabama. The prototype services will be Grid or Web enabled and provided to four user communities through portal technology. Users will have the opportunity to assess the value and feasibility of Grid technologies for their specific areas or disciplines. In this presentation, descriptions of the prototype development, User-based services, Grid-based services, and status of the project will be presented. Expected benefits, findings, and observations (if any) to date will also be discussed. The focus of the presentation will be on the project in general, status to date, and future plans. The end-use services to be included in the prototype are voice, video, telemetry, commanding, collaboration tools, and visualization, among others. Security is addressed throughout the project and is being designed into the Grid technologies and standards development. The project is divided into three phases. Phase One establishes the baseline User-based services required for space flight operations listed above. Phase Two involves applying Grid/web technologies to the User-based services and developing portals for access by users. Phase Three will allow NASA and end users to evaluate the services and determine the future of the technology as applied to space flight operational services. Although Phase One, which includes the development of the quasi-operational User-based services of the prototype, will be completed by March 2004, the application of Grid technologies to these services will have just begun. We will provide the status of applying Grid technologies to the individual User-based services. This effort will result in an extensible environment that incorporates existing and new spaceflight services into a standards-based framework providing current and future NASA programs with cost savings and new and evolvable methods to conduct science. This project will demonstrate how the use of new programming paradigms such as web and grid services can provide three significant benefits to the cost-effective delivery of spaceflight services. They will enable applications to operate more efficiently by being able to utilize pooled resources. They will also permit the reuse of common services to rapidly construct new and more powerful applications. Finally, they will permit easy and secure access to services via a combination of grid and portal technology by a distributed user community consisting of NASA operations centers, scientists, the educational community, and even the general population as outreach. The approach will be to deploy existing mission support applications such as the Telescience Resource Kit (TReK) and new applications under development, such as the Grid Video Distribution System (GViDS), together with existing grid applications and services such as high-performance computing and visualization services provided by NASA's Information Power Grid (IPG) in the MSFC's Payload Operations Integration Center (POIC) HOSC Annex. Once the initial applications have been moved to the grid, a process will begin to apply the new programming paradigms to integrate them where possible. 
For example, with GViDS, instead of viewing the Distribution service as an application that must run on a single node, the new approach is to build it such that it can be dispatched across a pool of resources in response to dynamic loads. To make this a reality, reusable services will be critical, such as a brokering service to locate appropriate resources within the pool. This brokering service can then be used by other applications such as the TReK. To expand further, if the GViDS application is constructed using a services-based model, then other applications such as the Video Auditorium can use GViDS as a service to easily incorporate these video streams into a collaborative conference. Finally, as these applications are re-factored into this new services-based paradigm, the construction of portals to integrate them will be a simple process. As a result, portals can be tailored to meet the requirements of specific user communities.

  7. A Survey of Shape Parameterization Techniques

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1999-01-01

    This paper provides a survey of shape parameterization techniques for multidisciplinary optimization and highlights some emerging ideas. The survey focuses on the suitability of available techniques for complex configurations, with suitability criteria based on the efficiency, effectiveness, ease of implementation, and availability of analytical sensitivities for geometry and grids. The paper also contains a section on field grid regeneration, grid deformation, and sensitivity analysis techniques.

  8. Coupled Crop/Hydrology Model to Estimate Expanded Irrigation Impact on Water Resources

    NASA Astrophysics Data System (ADS)

    Handyside, C. T.; Cruise, J.

    2017-12-01

    A coupled agricultural and hydrologic systems model is used to examine the environmental impact of irrigation in the Southeast. A gridded crop model for the Southeast is used to determine regional irrigation demand, and this demand is used in a regional hydrologic model to determine the hydrologic impact of irrigation. For the Southeast to maintain or expand irrigated agricultural production and adapt to climate change and climate variability, it will require integrated agricultural and hydrologic system models that can calculate irrigation demand and the impact of this demand on river hydrology. These integrated models can be used as (1) historical tools to examine the vulnerability of expanded irrigation to past climate extremes, (2) future tools to examine the sustainability of expanded irrigation under future climate scenarios, and (3) real-time tools to allow dynamic water resource management. Such tools are necessary to assure stakeholders and the public that irrigation can be carried out in a sustainable manner. The system tools discussed include a gridded version of the DSSAT crop modeling system, referred to as GriDSSAT. The irrigation demand from GriDSSAT is coupled to a regional hydrologic model (WaSSI) developed by the Eastern Forest Environmental Threat Assessment Center of the USDA Forest Service. The crop model provides the dynamic irrigation demand, which is a function of the weather, while the hydrologic model includes all other competing uses of water. Examples of using the crop model coupled with the hydrologic model include historical analyses that show the change in hydrology as additional acres of irrigated land are added to watersheds. The first-order change in hydrology is computed in terms of changes in the Water Availability Stress Index (WaSSI), the ratio of water demand (irrigation, public water supply, industrial use, etc.) to water availability from the hydrologic model. Statistics such as the number of times certain WaSSI thresholds are exceeded are calculated to show the impact of expanded irrigation during times of hydrologic drought and the coincident use of water by other sectors. Integrated downstream impacts of irrigation are also calculated through changes in flows throughout the whole river system.
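
    Since the abstract defines the index as the ratio of water demand to water availability, a toy calculation (with made-up flows, in million gallons per day) shows how added irrigation demand moves the index and how threshold exceedances would be counted:

        def wassi(demands_mgd: dict, availability_mgd: float) -> float:
            """Water Availability Stress Index: total demand over availability."""
            return sum(demands_mgd.values()) / availability_mgd

        base = {"public_supply": 30.0, "industry": 20.0}         # invented flows
        print(wassi(base, availability_mgd=100.0))                # 0.5
        with_irrigation = {**base, "irrigation": 35.0}
        print(wassi(with_irrigation, availability_mgd=100.0))     # 0.85

        # Counting threshold exceedances over a record, as the abstract suggests:
        series = [0.4, 0.9, 1.1, 0.7, 1.3]
        print(sum(w > 1.0 for w in series))                       # 2 periods exceed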

  9. An Improved Global Wind Resource Estimate for Integrated Assessment Models: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eurek, Kelly; Sullivan, Patrick; Gleason, Michael

    This paper summarizes initial steps toward improving the robustness and accuracy of global renewable resource and techno-economic assessments for use in integrated assessment models. We outline a method to construct country-level wind resource supply curves, delineated by resource quality and other parameters. Using mesoscale reanalysis data, we generate estimates of wind quality, both terrestrial and offshore, across the globe. Because not all land or water area is suitable for development, appropriate database layers provide exclusions to reduce the total resource to its technical potential. We expand upon estimates from related studies by: using a globally consistent data source of uniquely detailed wind speed characterizations; assuming a non-constant coefficient of performance for adjusting power curves for altitude; categorizing the distance from resource sites to the electric power grid; and characterizing offshore exclusions on the basis of sea ice concentrations. The product, then, is technical potential by country, classified by resource quality as determined by net capacity factor. Additional classification dimensions are available, including distance to transmission networks for terrestrial wind and distance to shore and water depth for offshore wind. We estimate a total global wind generation potential of 560 PWh for terrestrial wind, with 90% of the resource classified as low-to-mid quality, and 315 PWh for offshore wind, with 67% classified as mid-to-high quality. These estimates are based on 3.5 MW composite wind turbines with 90 m hub heights, 0.95 availability, 90% array efficiency, and 5 MW/km2 deployment density in non-excluded areas. We compare the underlying technical assumptions and results with other global assessments.
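
    The stated assumptions allow a back-of-the-envelope check of the accounting. In the sketch below only the 5 MW/km2 density, 0.95 availability, and 90% array efficiency come from the abstract; the area and gross capacity factor are invented inputs.

        def annual_energy_twh(area_km2, gross_cf, density_mw_km2=5.0,
                              availability=0.95, array_eff=0.90):
            """Annual energy from installable capacity and a derated capacity factor."""
            capacity_mw = area_km2 * density_mw_km2
            net_cf = gross_cf * availability * array_eff
            return capacity_mw * net_cf * 8760 / 1e6   # MWh -> TWh

        # e.g. 10,000 km2 of non-excluded land at a 35% gross capacity factor:
        print(f"{annual_energy_twh(10_000, 0.35):.0f} TWh/yr")  # ~131 TWh/yr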

  10. The effect of supplemental food on the growth rates of neonatal, young, and adult cotton rats (Sigmodon hispidus) in northeastern Kansas, USA

    NASA Astrophysics Data System (ADS)

    Eifler, Maria A.; Slade, Norman A.; Doonan, Terry J.

    2003-09-01

    In food-limited populations, the presence of extra food resources can influence the way individuals allocate energy to growth and reproduction. We experimentally increased food available to cotton rats (Sigmodon hispidus) near the northern limit of their range over a 2-year period and tested the hypothesis that seasonal growth rates would be enhanced by supplemental food during winter and spring when natural food levels are low. We also examined whether additional food resources were allocated to somatic growth or reproductive effort by pregnant and lactating females. The effect of supplemental food on growth varied with mass and season, but did not influence the growth rates of most cotton rats during spring and winter. In winter, small animals on supplemented grids had higher growth rates than small animals on control grids, but females in spring had lower growth rates under supplemented conditions. Growth rates of supplemented cotton rats were enhanced in summer. Northern cotton rat populations may use season-specific foraging strategies, maximizing energy intake during the reproductive season and minimizing time spent foraging in winter. Adult females invest extra resources in reproduction rather than in somatic growth. Pregnant females receiving supplemental food had higher growth rates than control females, and dependent pups (≤ 1 month of age) born to supplemented mothers had higher growth rates than those born to control mothers. Increased body size seems to confer an advantage during the reproductive season, but has no concomitant advantage to overwinter survival.

  11. Stochastic Characterization of Communication Network Latency for Wide Area Grid Control Applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameme, Dan Selorm Kwami; Guttromson, Ross

    This report characterizes communications network latency under various network topologies and qualities of service (QoS). The characterizations are probabilistic in nature, allowing deeper analysis of stability for Internet Protocol (IP) based feedback control systems used in grid applications. The work involves the use of Raspberry Pi computers as a proxy for a controlled resource, and an ns-3 network simulator on a Linux server to create an experimental platform (testbed) that can be used to model wide-area grid control network communications in smart grid. Modbus protocol is used for information transport, and Routing Information Protocol is used for dynamic route selection within the simulated network.
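
    A sketch of the kind of probabilistic summary such a testbed produces: gather round-trip samples per topology/QoS scenario and reduce them to an empirical distribution. The latency samples below are synthetic stand-ins (the actual study drives Modbus traffic through an ns-3 simulation), and the 15 ms deadline is an invented example.

        import random
        import statistics

        def summarize(samples_ms: list) -> dict:
            """Mean and key percentiles of a latency sample."""
            qs = statistics.quantiles(samples_ms, n=100)
            return {"mean": statistics.mean(samples_ms),
                    "p50": qs[49], "p95": qs[94], "p99": qs[98]}

        random.seed(1)
        # Synthetic stand-in for measured latencies: base delay plus a heavy tail.
        samples = [5.0 + random.expovariate(1 / 2.0) for _ in range(10_000)]
        print({k: round(v, 2) for k, v in summarize(samples).items()})

        # A control engineer can then ask, for example, what fraction of
        # feedback messages would miss a 15 ms deadline:
        print(sum(s > 15.0 for s in samples) / len(samples))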

  12. NREL’s Controllable Grid Interface Saves Time and Resources, Improves Reliability of Renewable Energy Technologies; NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The National Renewable Energy Laboratory's (NREL) controllable grid interface (CGI) test system at the National Wind Technology Center (NWTC) is one of two user facilities at NREL capable of testing and analyzing the integration of megawatt-scale renewable energy systems. The CGI specializes in testing of multimegawatt-scale wind and photovoltaic (PV) technologies as well as energy storage devices, transformers, control and protection equipment at medium-voltage levels, allowing the determination of the grid impacts of the tested technology.

  13. Stability and Scalability of the CMS Global Pool: Pushing HTCondor and GlideinWMS to New Limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balcas, J.; Bockelman, B.; Hufnagel, D.

    The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. These resources are becoming more diverse in their accessibility and configuration over time. Furthermore, the challenge of stably running at higher and higher scales while introducing new modes of operation such as multi-core pilots, as well as the chaotic nature of physics analysis workflows, places huge strains on the submission infrastructure. This paper details some of the most important challenges to scalability and stability that the CMS Global Pool has faced since the beginning of the LHC Run II and how they were overcome.

  14. Stability and scalability of the CMS Global Pool: Pushing HTCondor and glideinWMS to new limits

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Hufnagel, D.; Hurtado Anampa, K.; Aftab Khan, F.; Larson, K.; Letts, J.; Marra da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

    The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. These resources are becoming more diverse in their accessibility and configuration over time. Furthermore, the challenge of stably running at higher and higher scales while introducing new modes of operation such as multi-core pilots, as well as the chaotic nature of physics analysis workflows, places huge strains on the submission infrastructure. This paper details some of the most important challenges to scalability and stability that the CMS Global Pool has faced since the beginning of the LHC Run II and how they were overcome.

  15. New aerogravity and aeromagnetic anomaly data over Lomonosov Ridge and adjacent areas for bathymetric and tectonic mapping

    NASA Astrophysics Data System (ADS)

    Dossing, A.; Olesen, A. V.; Forsberg, R.

    2010-12-01

    Results of an 800 x 800 km aero-gravity and aeromagnetic survey (LOMGRAV) of the southern Lomonosov Ridge and surrounding area are presented. The survey was acquired by the Danish National Space Center, DTU, in cooperation with Natural Resources Canada in spring 2009, as a net of ~NE-SW flight lines spaced 8-10 km apart. Nominal flight level was 2000 ft. We have compiled a detailed 2.5 x 2.5 km gravity anomaly grid based on the LOMGRAV data and existing data from the southern Arctic Ocean (NRL98/99) and the North Greenland continental margin (KMS98/99). The gravity grid reveals detailed, elongated high-low anomaly patterns over the Lomonosov Ridge, which are interpreted as the presence of narrow ridges and subbasins. Distinct local topography is also interpreted over the southernmost part of the Lomonosov Ridge, where existing bathymetry compilations suggest a smooth topography due to the lack of data. A new bathymetry model for the region, predicted by formalized inversion of the available gravity data, is presented. Finally, a detailed magnetic anomaly grid has been compiled from the LOMGRAV data and existing NRL98/99 and PMAP data. New tectonic features are revealed, particularly in the Amerasia Basin, compared with existing magnetic anomaly data from the region.

  16. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA, the Production and Distributed Analysis workload management system, was developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed, and made available to bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when run on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline, we split the input files into chunks, which are processed separately on different nodes as separate PALEOMIX inputs, and finally merge the output files; this is very similar to what ATLAS does to process and simulate data. We dramatically decreased the total wall time thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce the payload execution time for mammoth DNA samples from weeks to days.
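
    The chunking approach is ordinary scatter-gather, which the file-based toy below illustrates. Here process() is a stand-in for one PALEOMIX grid job, and none of this reflects PanDA's actual submission interface.

        from pathlib import Path

        def split(path: Path, lines_per_chunk: int) -> list:
            """Scatter: write the input out as independent chunk files."""
            lines = path.read_text().splitlines(keepends=True)
            chunks = []
            for i in range(0, len(lines), lines_per_chunk):
                chunk = path.with_suffix(f".chunk{i // lines_per_chunk}")
                chunk.write_text("".join(lines[i:i + lines_per_chunk]))
                chunks.append(chunk)
            return chunks

        def process(chunk: Path) -> Path:
            """Stand-in for one grid job run on a single chunk."""
            out = Path(str(chunk) + ".out")
            out.write_text(chunk.read_text().upper())
            return out

        def merge(outputs: list, merged: Path) -> None:
            """Gather: concatenate the per-chunk outputs in order."""
            merged.write_text("".join(o.read_text() for o in outputs))

        src = Path("reads.txt")
        src.write_text("acgt\nttga\nccta\n")
        merge([process(c) for c in split(src, lines_per_chunk=1)], Path("merged.txt"))
        print(Path("merged.txt").read_text())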

  17. Research on wind power grid-connected operation and dispatching strategies of Liaoning power grid

    NASA Astrophysics Data System (ADS)

    Han, Qiu; Qu, Zhi; Zhou, Zhi; He, Xiaoyang; Li, Tie; Jin, Xiaoming; Li, Jinze; Ling, Zhaowei

    2018-02-01

    As a kind of clean energy, wind power has developed rapidly in recent years. Liaoning Province has abundant wind resources, and its total installed wind power capacity is among the highest. With large-scale grid-connected wind power operation, the contradiction between wind power utilization and peak load regulation of the power grid has become more prominent. Starting from the generation mix and installed capacity of the Liaoning power grid, this paper analyzes the distribution and spatio-temporal output characteristics of wind farms, as well as the prediction accuracy, curtailment, and off-grid behavior of wind power. Based on a deep analysis of the seasonal characteristics of the network load, the composition and distribution of the main loads are presented. To address the conflict between wind power acceptance and power grid regulation, dispatching strategies are given, covering unit maintenance scheduling, spinning reserve, and energy storage settings, based on an analysis of the operating characteristics and response times of thermal and hydroelectric units; these strategies can meet the demand for wind power acceptance and provide a solution to improve the level of power grid dispatching.

  18. Real-Time Very High-Resolution Regional 4D Assimilation in Supporting CRYSTAL-FACE Experiment

    NASA Technical Reports Server (NTRS)

    Wang, Donghai; Minnis, Patrick

    2004-01-01

    To better understand tropical cirrus cloud physical properties and formation processes, with a view toward successful modeling of the Earth's climate, the CRYSTAL-FACE (Cirrus Regional Study of Tropical Anvils and Cirrus Layers - Florida Area Cirrus Experiment) field experiment took place over southern Florida from 1 July to 29 July 2002. During the entire field campaign, a very high-resolution numerical weather prediction (NWP) and assimilation system was run in support of the mission, with supercomputing resources provided by the NASA Center for Computational Sciences (NCCS). Using the NOAA NCEP Eta forecast for boundary conditions and as a first guess for initial conditions assimilated with all available observations, two nested 15/3 km grids were employed over the CRYSTAL-FACE experiment area. The 15-km grid covers the southeast US domain and was run twice daily for a 36-hour forecast starting at 0000 UTC and 1200 UTC. The nested 3-km grid, covering only southern Florida, was used for 9-hour and 18-hour forecasts starting at 1500 and 0600 UTC, respectively. The forecasting system provided more accurate and higher spatial- and temporal-resolution forecasts of 4-D atmospheric fields over the experiment area than were available from standard weather forecast models. These forecasts were essential for flight planning during both the afternoon prior to a flight day and the morning of a flight day, and were used to help decide takeoff times and the optimal flight areas for accomplishing the mission objectives. See more detailed products on the web site http://asd-www.larc.nasa.gov/mode/crystal. The model/assimilation output gridded data are archived on the NCCS UniTree system in HDF format, at 30-min intervals for the real-time forecasts and 5-min intervals for the post-mission case studies. In particular, the data set includes the 3-D cloud fields (cloud liquid water, rain water, cloud ice, snow and graupel/hail).

  19. A robust nonlinear stabilizer as a controller for improving transient stability in micro-grids.

    PubMed

    Azimi, Seyed Mohammad; Afsharnia, Saeed

    2017-01-01

    This paper proposes a parametric-Lyapunov approach to the design of a stabilizer aimed at improving the transient stability of micro-grids (MGs). This strategy is applied to electronically-interfaced distributed resources (EI-DRs) operating with a unified control configuration applicable to all operational modes (i.e. grid-connected mode, islanded mode, and mode transitions). The proposed approach employs a simple structure compared with other nonlinear controllers, allowing ready implementation of the stabilizer. A new parametric-Lyapunov function is proposed, rendering the proposed stabilizer more effective in damping system transition transients. The robustness of the proposed stabilizer is verified through both time-domain simulations and mathematical proofs, and an ultimate bound is derived for the frequency transition transients. The proposed stabilizer operates by deploying solely local information, with no need for communication links. The deteriorating effects of primary-resource delays on transient stability are also treated analytically. Finally, the effectiveness of the proposed stabilizer is evaluated through time-domain simulations and compared with recently developed stabilizers on a multi-resource MG. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
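
    The "ultimate bound" mentioned above can be illustrated with the standard Lyapunov ultimate-boundedness argument; the inequality below is generic, not the paper's actual parametric-Lyapunov function.

    ```latex
    % Generic ultimate-boundedness argument (illustrative only; V is a
    % placeholder, not the paper's parametric-Lyapunov function).
    \[
    \dot{V}(x) \le -\alpha V(x) + \beta, \qquad \alpha, \beta > 0,
    \]
    \[
    \Rightarrow \quad
    V(x(t)) \le V(x(0))\,e^{-\alpha t}
      + \frac{\beta}{\alpha}\left(1 - e^{-\alpha t}\right)
    \;\xrightarrow[t \to \infty]{}\; \frac{\beta}{\alpha},
    \]
    % so trajectories ultimately enter the set \{x : V(x) <= beta/alpha\},
    % which is what an ultimate bound on the frequency transients expresses.
    ```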

  20. Grid Work

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Pointwise Inc.'s Gridgen software is a system for the generation of 3D (three-dimensional) multiple-block, structured grids. Gridgen is a visually-oriented, graphics-based interactive code used to decompose a 3D domain into blocks, distribute grid points on curves, initialize and refine grid points on surfaces, and initialize volume grid points. Gridgen is available to U.S. citizens and American-owned companies by license.

  1. Modeling impacts of climate change on freshwater availability in Africa

    NASA Astrophysics Data System (ADS)

    Faramarzi, Monireh; Abbaspour, Karim C.; Ashraf Vaghefi, Saeid; Farzaneh, Mohammad Reza; Zehnder, Alexander J. B.; Srinivasan, Raghavan; Yang, Hong

    2013-02-01

    This study analyzes the impact of climate change on freshwater availability in Africa at the subbasin level for the period 2020-2040. Future climate projections from five global circulation models (GCMs) under four IPCC emission scenarios were fed into an existing SWAT hydrological model to project the impact on different components of water resources across the African continent. The GCMs were downscaled based on observed data from the Climate Research Unit to represent local climate conditions at 0.5° grid spatial resolution. The results show that for Africa as a whole, the mean total quantity of water resources is likely to increase. For individual subbasins and countries, variations are substantial. Although uncertainties in the simulated results are high, we found that in many regions and countries most of the climate scenarios projected the same direction of change in water resources, suggesting relatively high confidence in the projections. The assessment of the number of dry days and the frequency of their occurrence suggests an increase in drought events and their duration in the future. Overall, the dry regions have higher uncertainties than the wet regions in the projected impacts on water resources. This poses an additional challenge to agriculture in dry regions, where water shortage is already severe while irrigation is expected to become more important for stabilizing and increasing food production.

  2. 2015 California Demand Response Potential Study - Charting California’s Demand Response Future. Interim Report on Phase 1 Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alstone, Peter; Potter, Jennifer; Piette, Mary Ann

    Demand response (DR) is an important resource for keeping the electricity grid stable and efficient; deferring upgrades to generation, transmission, and distribution systems; and providing other customer economic benefits. This study estimates the potential size and cost of the available DR resource for California’s three investor-owned utilities (IOUs), as the California Public Utilities Commission (CPUC) evaluates how to enhance the role of DR in meeting California’s resource planning needs and operational requirements. As the state forges a clean energy future, the contributions of wind and solar electricity from centralized and distributed generation will fundamentally change the power grid’s operational dynamics. This transition requires careful planning to ensure sufficient capacity is available with the right characteristics – flexibility and fast response – to meet reliability needs. Illustrated is a snapshot of how net load (the difference between demand and intermittent renewables) is expected to shift. Increasing contributions from renewable generation introduce steeper ramps and shift the hours that drive capacity needs into the evening. These hours of peak capacity need are indicated by the black dots on the plots. Ultimately, this study quantifies the ability and the cost of using DR resources to help meet the capacity need at these forecasted critical hours in the state.

  3. XML-based data model and architecture for a knowledge-based grid-enabled problem-solving environment for high-throughput biological imaging.

    PubMed

    Ahmed, Wamiq M; Lenz, Dominik; Liu, Jia; Paul Robinson, J; Ghafoor, Arif

    2008-03-01

    High-throughput biological imaging uses automated imaging devices to collect a large number of microscopic images for analysis of biological systems and validation of scientific hypotheses. Efficient manipulation of these datasets for knowledge discovery requires high-performance computational resources, efficient storage, and automated tools for extracting and sharing such knowledge among different research sites. Newly emerging grid technologies provide powerful means for exploiting the full potential of these imaging techniques. Efficient utilization of grid resources requires the development of knowledge-based tools and services that combine domain knowledge with analysis algorithms. In this paper, we first investigate how grid infrastructure can facilitate high-throughput biological imaging research, and present an architecture for providing knowledge-based grid services for this field. We identify two levels of knowledge-based services: the first provides tools for extracting spatiotemporal knowledge from image sets, and the second provides high-level knowledge management and reasoning services. We then present cellular imaging markup language, an XML-based language for the modeling of biological images and the representation of spatiotemporal knowledge. This scheme can be used for spatiotemporal event composition, matching, and automated knowledge extraction and representation for large biological imaging datasets. We demonstrate the expressive power of this formalism by means of different examples and extensive experimental results.
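
    To make the idea of an XML-based spatiotemporal imaging description concrete, here is a tiny Python sketch that builds such a document. The element and attribute names are invented for illustration; they are not the actual schema defined in the paper.

    ```python
    # Illustrative only: a minimal XML description of one spatiotemporal
    # imaging event. Element/attribute names are hypothetical, not the
    # paper's cellular imaging markup language schema.
    import xml.etree.ElementTree as ET

    doc = ET.Element("imagingExperiment", id="exp-001")
    frame = ET.SubElement(doc, "frame", t="12.5")            # time point (s)
    cell = ET.SubElement(frame, "cell", id="cell-42")
    ET.SubElement(cell, "position", x="118", y="64", z="3")  # pixel coords
    ET.SubElement(cell, "event", type="mitosis")             # annotated event

    print(ET.tostring(doc, encoding="unicode"))
    ```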

  4. Grid computing technology for hydrological applications

    NASA Astrophysics Data System (ADS)

    Lecca, G.; Petitdidier, M.; Hluchy, L.; Ivanovic, M.; Kussul, N.; Ray, N.; Thieron, V.

    2011-06-01

    Advances in e-Infrastructure promise to revolutionize sensing systems and the ways in which data are collected and assimilated, and in which complex water systems are simulated and visualized. According to the EU Infrastructure 2010 work-programme, data and compute infrastructures and their underlying technologies, whether oriented toward scientific challenges or complex problem solving in engineering, are expected to converge into so-called knowledge infrastructures, leading to more effective research, education and innovation in the next decade and beyond. Grid technology is recognized as a fundamental component of e-Infrastructures. Nevertheless, this emerging paradigm raises several topics, including data management, algorithm optimization, security, performance (speed, throughput, bandwidth, etc.), and scientific cooperation and collaboration issues, that require further examination to fully exploit it and to better inform future research policies. The paper illustrates the results of six different surface and subsurface hydrology applications that have been deployed on the Grid. All the applications aim to answer strong requirements from civil society at large, relating to natural and anthropogenic risks. Grid technology has been successfully tested to improve flood prediction, groundwater resources management and Black Sea hydrological surveying by providing large computing resources. It is also shown that Grid technology facilitates e-cooperation among partners by means of services for authentication and authorization, seamless access to distributed data sources, data protection and access rights, and standardization.

  5. SuperB Simulation Production System

    NASA Astrophysics Data System (ADS)

    Tomassetti, L.; Bianchi, F.; Ciaschini, V.; Corvo, M.; Del Prete, D.; Di Simone, A.; Donvito, G.; Fella, A.; Franchini, P.; Giacomini, F.; Gianoli, A.; Longo, S.; Luitz, S.; Luppi, E.; Manzali, M.; Pardi, S.; Paolini, A.; Perez, A.; Rama, M.; Russo, G.; Santeramo, B.; Stroili, R.

    2012-12-01

    The SuperB asymmetric e+e- collider and detector, to be built at the newly founded Nicola Cabibbo Lab, will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab^-1 and a peak luminosity of 10^36 cm^-2 s^-1. The SuperB Computing group is developing a simulation production framework capable of satisfying the experiment's needs. It provides access to distributed resources in order to support both the detector design definition and its performance evaluation studies. During the last year the framework has evolved in terms of job workflow, Grid service interfaces and technology adoption. A complete code refactoring and sub-component language porting now permit the framework to sustain distributed production involving resources from two continents and multiple Grid flavors. In this paper we give a complete description of the current state of the production system, its evolution and its integration with Grid services; in particular, we focus on the utilization of new Grid component features, as in LB and WMS version 3. Results from the last official SuperB production cycle are also reported.

  6. A Green Prison: The Santa Rita Jail Campus Microgrid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marnay, Chris; DeForest, Nicholas; Lai, Judy

    2012-01-22

    A large microgrid project is nearing completion at Alameda County’s twenty-two-year-old, 45 ha, 4,000-inmate Santa Rita Jail, about 70 km east of San Francisco. Often described as a green prison, it has a considerable installed base of distributed energy resources (DER), including an eight-year-old 1.2 MW PV array, a five-year-old 1 MW fuel cell with heat recovery, and considerable efficiency investments. A current US$14 M expansion adds a 2 MW/4 MWh Li-ion battery, a static disconnect switch, and various controls upgrades. During grid blackouts, or when conditions favor it, the Jail can now disconnect from the grid and operate as an island, using the on-site resources described together with its back-up diesel generators. In other words, the Santa Rita Jail is a true microgrid, or μgrid, because it fills both requirements: it is a locally controlled system, and it can operate both grid-connected and islanded. The battery’s electronics include Consortium for Electric Reliability Technology Solutions (CERTS) Microgrid technology. This enables the battery to maintain energy balance using droops, without need for a fast control system.
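
    The droop behavior mentioned in the last sentence can be sketched as a simple frequency-droop rule of the kind CERTS-style inverters use to share load without a fast communication link. The set-point and droop parameters below are invented for illustration, not the Santa Rita Jail settings.

    ```python
    # Illustrative frequency-droop rule: parameters are assumptions, though
    # P_RATED echoes the 2 MW battery rating quoted above.
    F_NOM = 60.0        # nominal frequency (Hz)
    P_SET = 1.0e6       # battery power set-point (W), assumed
    DROOP = 0.05        # 5% droop: full rated swing over 5% of F_NOM
    P_RATED = 2.0e6     # battery converter rating (W)

    def droop_power(freq_hz: float) -> float:
        """Power command: inject more when frequency sags, absorb when it rises."""
        delta = (F_NOM - freq_hz) / (DROOP * F_NOM)   # per-unit frequency error
        p = P_SET + delta * P_RATED
        return max(-P_RATED, min(P_RATED, p))          # clamp to the rating

    print(droop_power(59.9))  # under-frequency -> battery discharges harder
    ```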

  7. omniClassifier: a Desktop Grid Computing System for Big Data Prediction Modeling

    PubMed Central

    Phan, John H.; Kothari, Sonal; Wang, May D.

    2016-01-01

    Robust prediction models are important for numerous science, engineering, and biomedical applications. However, best-practice procedures for optimizing prediction models can be computationally complex, especially when choosing models from among hundreds or thousands of parameter choices. Computational complexity has further increased with the growth of data in these fields, concurrent with the era of “Big Data”. Grid computing is a potential solution to the computational challenges of Big Data. Desktop grid computing, which uses idle CPU cycles of commodity desktop machines, coupled with commercial cloud computing resources, can enable research labs to gain easier and more cost-effective access to vast computing resources. We have developed omniClassifier, a multi-purpose prediction modeling application that provides researchers with a tool for conducting machine learning research within the guidelines of recommended best practices. omniClassifier is implemented as a desktop grid computing system using the Berkeley Open Infrastructure for Network Computing (BOINC) middleware. In addition to describing implementation details, we use various gene expression datasets to demonstrate the potential scalability of omniClassifier for efficient and robust Big Data prediction modeling. A prototype of omniClassifier can be accessed at http://omniclassifier.bme.gatech.edu/. PMID:27532062

  8. Privacy protection in HealthGrid: distributing encryption management over the VO.

    PubMed

    Torres, Erik; de Alfonso, Carlos; Blanquer, Ignacio; Hernández, Vicente

    2006-01-01

    Grid technologies have proven very successful in tackling challenging problems in which data access and processing is a bottleneck. Notwithstanding the benefits that Grid technologies could bring to health applications, privacy leakage in current DataGrid technologies, due to the sharing of data in VOs and the use of remote resources, hinders their widespread adoption. Privacy control has therefore become a key requirement for the adoption of Grids in the healthcare sector. Encrypted storage of confidential data effectively reduces the risk of disclosure. A self-enforcing scheme for encrypted data storage can be achieved by combining Grid security systems with distributed key management and classical cryptography techniques. Virtual Organizations, as the main unit of user management in the Grid, can provide a way to organize key sharing, access control lists and secure encryption management. This paper provides programming models and discusses the value, costs and behavior of such a system implemented on top of one of the latest Grid middlewares. This work is partially funded by the Spanish Ministry of Science and Technology in the frame of the project Investigación y Desarrollo de Servicios GRID: Aplicación a Modelos Cliente-Servidor, Colaborativos y de Alta Productividad, with reference TIC2003-01318.
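
    One classical way to distribute encryption-key custody over the members of a VO is n-of-n XOR secret splitting: the data key is recoverable only when all shares are combined. The sketch below is a textbook illustration under that assumption, not the scheme implemented in the paper.

    ```python
    # Minimal n-of-n XOR secret splitting (classical technique, shown for
    # illustration; not the paper's key-management scheme).
    import secrets
    from functools import reduce

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split_key(key: bytes, n: int) -> list[bytes]:
        """Split `key` into n shares; all n are required to reconstruct it."""
        shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
        shares.append(reduce(xor, shares, key))  # key ^ r1 ^ ... ^ r_{n-1}
        return shares

    def join_key(shares: list[bytes]) -> bytes:
        return reduce(xor, shares)

    key = secrets.token_bytes(32)      # e.g., an AES-256 data key
    parts = split_key(key, 3)          # e.g., one share per VO key server
    assert join_key(parts) == key
    ```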

  9. Adapting the iSNOBAL model for improved visualization in a GIS environment

    NASA Astrophysics Data System (ADS)

    Johansen, W. J.; Delparte, D.

    2014-12-01

    Snowmelt is a primary source of crucial water resources in much of the western United States. Researchers are developing models that estimate snowmelt to aid in water resource management. One such model is the image snowcover energy and mass balance (iSNOBAL) model, which uses input climate grids to simulate the development and melting of snowpack in mountainous regions. This study applies the model to the Reynolds Creek Experimental Watershed in southwestern Idaho, utilizing novel approaches incorporating geographic information systems (GIS). To improve visualization of the iSNOBAL model, we have adapted it to run in a GIS environment, which is suited to both input grid creation and visualization of results. The data used for input grid creation can be stored locally or on a web server. Kriging interpolation embedded within Python scripts is used to create air temperature, soil temperature, humidity, and precipitation grids, while built-in GIS and existing tools are used to create solar radiation and wind grids. Additional Python scripting is then used to perform the model calculations. The final product is a user-friendly and accessible version of the iSNOBAL model, including the ability to easily visualize and interact with model results, all within a web- or desktop-based GIS environment. This environment allows interactive manipulation of model parameters and visualization of the resulting input grids for the model calculations. Future work is moving toward adapting the model for use in a 3D gaming engine for improved visualization and interaction.
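
    The station-to-grid interpolation step described above uses kriging; as a simpler, dependency-free stand-in for the same gridding step, here is an inverse-distance-weighting sketch with invented station data.

    ```python
    # Station-to-grid interpolation sketch (IDW shown as a simplified
    # stand-in for the kriging used in the iSNOBAL workflow).
    import numpy as np

    def idw_grid(stations_xy: np.ndarray, values: np.ndarray,
                 grid_x: np.ndarray, grid_y: np.ndarray,
                 power: float = 2.0) -> np.ndarray:
        """Interpolate point observations (e.g., air temperature) to a grid."""
        gx, gy = np.meshgrid(grid_x, grid_y)
        cells = np.column_stack([gx.ravel(), gy.ravel()])
        d = np.linalg.norm(cells[:, None, :] - stations_xy[None, :, :], axis=2)
        w = 1.0 / np.maximum(d, 1e-9) ** power     # avoid division by zero
        z = (w @ values) / w.sum(axis=1)
        return z.reshape(gx.shape)

    # Three hypothetical stations (x, y) with observed temperatures (deg C)
    xy = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
    t = np.array([1.5, -0.5, -3.0])
    grid = idw_grid(xy, t, np.linspace(0, 10, 50), np.linspace(0, 8, 40))
    ```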

  10. Grid integration and smart grid implementation of emerging technologies in electric power systems through approximate dynamic programming

    NASA Astrophysics Data System (ADS)

    Xiao, Jingjie

    A key hurdle for implementing real-time pricing of electricity is the lack of consumer response. Solutions to overcome this hurdle include energy management systems that automatically optimize household appliance usage, such as plug-in hybrid electric vehicle charging (and discharging, with vehicle-to-grid), via two-way communication with the grid. Real-time pricing, combined with household automation devices, has the potential to accommodate an increasing penetration of plug-in hybrid electric vehicles. In addition, an intelligent energy controller on the consumer side can help increase the utilization rate of intermittent renewable resources, as demand can be managed to match the output profile of renewables, making intermittent resources such as wind and solar more economically competitive in the long run. One of the main goals of this dissertation is to show how real-time retail pricing, aided by control automation devices, can be integrated into the wholesale electricity market under various uncertainties through approximate dynamic programming. What distinguishes this study from the existing literature is that wholesale electricity prices are endogenously determined as we solve a system operator's economic dispatch problem on an hourly basis over the entire optimization horizon. This modeling and algorithmic framework allows the feedback loop between electricity prices and electricity consumption to be fully captured. While we seek a near-optimal solution using approximate dynamic programming, deterministic linear programming benchmarks are used to demonstrate the quality of our solutions. The other goal of the dissertation is to use this framework to provide numerical evidence in the debate on whether real-time pricing is superior to the current flat-rate structure in terms of both economic and environmental impacts. For this purpose, the modeling and algorithmic framework is tested on a large-scale case with hundreds of power plants, based on data available for California, making our findings useful for policy makers, system operators and utility companies seeking a concrete understanding of the scale of the impact of real-time pricing.
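
    The price-responsive charging decision at the heart of this setup can be illustrated with a toy finite-horizon dynamic program: choose hourly charge amounts to fill a battery at minimum cost. The prices and sizes below are invented; the dissertation determines prices endogenously and uses *approximate* DP to handle realistic scale, whereas this toy problem is small enough for exact backward induction.

    ```python
    # Toy exact DP for price-responsive EV charging (illustrative stand-in
    # for the ADP approach; all numbers are assumptions).
    import numpy as np

    prices = np.array([40.0, 25.0, 18.0, 22.0, 35.0])  # $/unit for 5 hours
    CAP = 4      # battery capacity in unit-sized charge increments
    RATE = 2     # max increments per hour
    T = len(prices)

    V = np.full((T + 1, CAP + 1), np.inf)
    V[T, CAP] = 0.0                      # terminal condition: battery full
    policy = np.zeros((T, CAP + 1), dtype=int)

    for t in range(T - 1, -1, -1):       # backward induction
        for s in range(CAP + 1):
            for a in range(min(RATE, CAP - s) + 1):
                cost = prices[t] * a + V[t + 1, s + a]
                if cost < V[t, s]:
                    V[t, s], policy[t, s] = cost, a

    # Roll the optimal policy forward from an empty battery
    s, plan = 0, []
    for t in range(T):
        a = policy[t, s]
        plan.append(a)
        s += a
    print(plan, V[0, 0])   # charging concentrates in the cheapest hours
    ```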

  11. Foundational Report Series: Advanced Distribution Management Systems for Grid Modernization, DMS Integration of Distributed Energy Resources and Microgrids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Ravindra; Reilly, James T.; Wang, Jianhui

    Deregulation of the electric utility industry, environmental concerns associated with traditional fossil fuel-based power plants, volatility of electric energy costs, Federal and State regulatory support of “green” energy, and rapid technological developments all support the growth of Distributed Energy Resources (DERs) in electric utility systems and ensure an important role for DERs in the smart grid and other aspects of modern utilities. DERs include distributed generation (DG) systems, such as renewables; controllable loads (also known as demand response); and energy storage systems. This report describes the role of aggregators of DERs in providing optimal services to distribution networks, through DER monitoring and control systems—collectively referred to as a Distributed Energy Resource Management System (DERMS)—and microgrids in various configurations.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Small Wind Electric Systems: A Colorado Consumer's Guide provides consumers with information to help them determine whether a small wind electric system can provide all or a portion of the energy they need for their home or business based on their wind resource, energy needs, and their economics. Topics discussed in the guide include how to make a home more energy efficient, how to choose the correct turbine size, the parts of a wind electric system, how to determine whether enough wind resource exists, how to choose the best site for a turbine, how to connect a system to the utility grid, and whether it's possible to become independent of the utility grid using wind energy. In addition, the cover of the guide contains a regional wind resource map and a list of incentives and contacts for more information.

  13. Diversity in computing technologies and strategies for dynamic resource allocation

    DOE PAGES

    Garzoglio, G.; Gutsche, O.

    2015-12-23

    Here, High Energy Physics (HEP) is a very data-intensive and trivially parallelizable science discipline. HEP is probing nature at increasingly finer detail, requiring ever-increasing computational resources to process and analyze experimental data. In this paper, we discuss how HEP has provisioned resources so far using Grid technologies, how HEP is starting to include new resource providers such as commercial Clouds and HPC installations, and how HEP is transparently provisioning resources at these diverse providers.

  14. Wind Energy Resource Atlas of Sri Lanka and the Maldives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, D.; Schwartz, M.; Scott, G.

    2003-08-01

    The Wind Energy Resource Atlas of Sri Lanka and the Maldives, produced by the National Renewable Energy Laboratory's (NREL's) wind resource group, identifies the wind characteristics and distribution of the wind resource in Sri Lanka and the Maldives. The detailed wind resource maps and other information contained in the atlas facilitate the identification of prospective areas for the use of wind energy technologies, both for utility-scale power generation and for off-grid wind energy applications.

  15. The data storage grid: the next generation of fault-tolerant storage for backup and disaster recovery of clinical images

    NASA Astrophysics Data System (ADS)

    King, Nelson E.; Liu, Brent; Zhou, Zheng; Documet, Jorge; Huang, H. K.

    2005-04-01

    Grid computing represents the latest and most exciting technology to evolve from the familiar realm of parallel, peer-to-peer and client-server models, and it can address the problem of fault-tolerant storage for backup and recovery of clinical images. We have researched and developed a novel Data Grid testbed involving several federated PAC systems based on grid architecture. By integrating a grid computing architecture into the DICOM environment, a failed PACS archive can recover its image data from others in the federation in a timely and seamless fashion. The design reflects the five-layer architecture of grid computing: the Fabric, Resource, Connectivity, Collective, and Application layers. The testbed Data Grid architecture representing three federated PAC systems, namely the fault-tolerant PACS archive server at the Image Processing and Informatics Laboratory, Marina del Rey; the clinical PACS at Saint John's Health Center, Santa Monica; and the clinical PACS at the Healthcare Consultation Center II, USC Health Science Campus, will be presented. The successful demonstration of the Data Grid in the testbed will provide an understanding of the Data Grid concept in clinical image data backup, establish performance benchmarks for future grid technology improvements, and serve as a road map for expanded research into large enterprise- and federation-level data grids that guarantee 99.999% uptime.

  16. Experience on HTCondor batch system for HEP and other research fields at KISTI-GSDC

    NASA Astrophysics Data System (ADS)

    Ahn, S. U.; Jaikar, A.; Kong, B.; Yeo, I.; Bae, S.; Kim, J.

    2017-10-01

    The Global Science experimental Data hub Center (GSDC) at the Korea Institute of Science and Technology Information (KISTI), located at Daejeon in South Korea, is the unique datacenter in the country that supports, with its computing resources, fundamental research fields dealing with large-scale data. For historical reasons it has run the Torque batch system, while recently it has started running HTCondor for new systems. Having different kinds of batch systems implies inefficiency in terms of resource management and utilization. We conducted research on resource management with HTCondor for several user scenarios corresponding to the user environments that GSDC currently supports. Recent research on resource usage patterns at GSDC was taken into account to build plausible user scenarios. Checkpointing and the Super-Collector model of HTCondor give us a more efficient and flexible way to manage resources, and the Grid Gate provided by HTCondor helps interface with the Grid environment. In this paper, an overview of the essential HTCondor features exploited in this work is given, and practical examples of HTCondor cluster configuration for our cases are presented.

  17. On transferring the grid technology to the biomedical community.

    PubMed

    Mohammed, Yassene; Sax, Ulrich; Dickmann, Frank; Lippert, Joerg; Solodenko, Juri; von Voigt, Gabriele; Smith, Matthew; Rienhoff, Otto

    2010-01-01

    Natural scientists such as physicists pioneered the sharing of computing resources, which resulted in the Grid. The inter-domain transfer of this technology has been an intuitive process. Some difficulties facing the life-science community can be understood using Bozeman's "Effectiveness Model of Technology Transfer". Bozeman's and classical technology transfer approaches deal with technologies that have achieved a certain stability; Grid and Cloud solutions are technologies still in flux. We illustrate how Grid computing creates new difficulties for the technology transfer process that are not considered in Bozeman's model. We show why the success of health Grids should be measured by the qualified scientific human capital and opportunities created, and not primarily by market impact. With two examples we show how the Grid technology transfer theory corresponds to reality. We conclude with recommendations that can help improve the adoption of Grid solutions in the biomedical community. These results give a more concise explanation of the difficulties most life-science IT projects face in their late funding periods, and show some leveraging steps that can help to overcome the "vale of tears".

  18. An Intelligent Approach to Strengthening of the Rural Electrical Power Supply Using Renewable Energy Resources

    NASA Astrophysics Data System (ADS)

    Robert, F. C.; Sisodia, G. S.; Gopalan, S.

    2017-08-01

    The healthy growth of an economy lies in the balance between rural and urban development. Several developing countries have achieved successful growth of urban areas, yet rural infrastructure has been neglected until recently. Rural electrical grids are weak, with heavy losses and low capacity. Renewable energy represents an efficient way to generate electricity locally; however, renewable generation may be limited by the low grid capacity, and current solutions focus on grid reinforcement only. This article presents a model for improving renewable energy integration in rural grids through the intelligent combination of three strategies: 1) grid reinforcement, 2) use of storage, and 3) renewable energy curtailment. Such an approach provides a way to integrate a maximum of renewable generation on low-capacity grids while minimising project cost and increasing asset utilisation. The test cases show that a grid connection agreement and a main inverter sized at 60 kW (resp. 80 kW) can accommodate a 100 kWp solar park (resp. a 100 kW wind turbine) with minimal storage.
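
    A back-of-envelope check of the sizing argument above: compute the energy curtailed when a 100 kWp solar park feeds a 60 kW grid connection. The hourly generation profile below is invented for illustration.

    ```python
    # Rough curtailment calculation under an assumed daily solar profile.
    import numpy as np

    LIMIT_KW = 60.0
    gen_kw = np.array([0, 0, 5, 20, 45, 70, 90, 100, 95, 75, 50, 25, 5, 0],
                      dtype=float)                      # hourly output (kW)

    curtailed = np.maximum(gen_kw - LIMIT_KW, 0.0)      # power above the limit
    exported = np.minimum(gen_kw, LIMIT_KW)             # power the grid accepts

    total = gen_kw.sum()
    print(f"energy produced : {total:6.0f} kWh")
    print(f"energy exported : {exported.sum():6.0f} kWh")
    print(f"curtailed       : {curtailed.sum():6.0f} kWh "
          f"({100 * curtailed.sum() / total:.1f}%)")
    # Storage sized to absorb the daily curtailed energy would avoid this loss.
    ```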

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    von Laszewski, G.; Foster, I.; Gawor, J.

    In this paper we report on the features of the Java Commodity Grid Kit. The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus protocols, allowing the Java CoG Kit to also communicate with the C Globus reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise, and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus software. In this paper we also report on efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure-Java resource management system that enables one to run Globus jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.

  20. Operating a production pilot factory serving several scientific domains

    NASA Astrophysics Data System (ADS)

    Sfiligoi, I.; Würthwein, F.; Andrews, W.; Dost, J. M.; MacNeill, I.; McCrea, A.; Sheripon, E.; Murphy, C. W.

    2011-12-01

    Pilot infrastructures are becoming prominent players in the Grid environment. One of their major advantages is the reduced effort required of the user communities (also known as Virtual Organizations, or VOs), due to the outsourcing of the Grid interfacing services, i.e. the pilot factory, to Grid experts. One such pilot factory, based on the glideinWMS pilot infrastructure, is operated by the Open Science Grid at the University of California San Diego (UCSD). This pilot factory serves multiple VOs from several scientific domains. Currently the three major clients are the analysis operations of the HEP experiment CMS; the community VO HCC, which serves mostly math, biology and computer science users; and the structural biology VO NEBioGrid. The UCSD glidein factory allows the served VOs to use Grid resources distributed over 150 sites in North and South America, Europe, and Asia. This paper presents the steps taken to create a production-quality pilot factory, together with the challenges encountered along the road.

  1. Exascale Virtualized and Programmable Distributed Cyber Resource Control: Final Scientific Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, S.J.Ben; Lauer, Gregory S.

    Extreme-scale science drives the need for distributed exascale processing and communications that are carefully, yet flexibly, managed. The exponential growth of data from scientific simulations, experiments, collaborative data analyses and remote visualization, and the GRID computing requirements of scientists in fields as diverse as high energy physics, climate change, genomics, fusion, synchrotron radiation, material science and medicine, cannot be accommodated by simply applying existing transport protocols to faster pipes. Further, scientific challenges today demand diverse research teams, heightening the need for, and increasing the complexity of, collaboration. To address these issues within the network and physical layers, we have performed a number of research activities surrounding effective allocation and management of elastic optical network (EON) resources, focusing particularly on FlexGrid transponders. FlexGrid transponders support building Layer-1 connections at a wide range of bandwidths and reconfiguring them rapidly. This new flexibility supports complex new ways of using the physical layer that must be carefully managed and hidden from the scientist end-users. FlexGrid networks utilize flexible (or elastic) spectral bandwidths for each data link instead of fixed wavelength grids; this flexibility in spectrum allocation brings many appealing features to network operations. Current networks are designed for the worst-case impairments in transmission performance, and the assigned spectrum is over-provisioned; in contrast, FlexGrid networks can operate with the highest spectral efficiency and minimum bandwidth for the given traffic demand while meeting the minimum quality-of-transmission (QoT) requirement. The two primary focuses of our research are: (1) resource and spectrum allocation (RSA) for IP traffic over EONs, and (2) RSA for cross-domain optical networks. Previous work concentrated primarily on large file transfers within a single domain. Adding support for IP traffic changes the nature of the RSA problem: instead of choosing to accept or deny each request for network support, IP traffic is inherently elastic and thus lends itself to a bandwidth-maximization formulation. We developed a number of algorithms that could be easily deployed within existing and new FlexGrid networks, leading to networks that better support scientific collaboration. Cross-domain RSA research is essential to support large-scale FlexGrid networks, since configuration information is generally not shared or coordinated across domains. The results presented here are in their early stages; they are technically feasible and practical, but still require coordination among organizations and equipment owners and a higher-layer framework for managing network requests.
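
    For orientation, a bandwidth-maximization RSA formulation of the kind mentioned above can be written as a linear program. The version below is deliberately simplified: it omits the spectrum-contiguity and modulation constraints a full FlexGrid model requires, and is not the report's actual formulation.

    ```latex
    % Simplified bandwidth-maximization RSA formulation (illustrative).
    \[
    \max \sum_{d \in D} b_d
    \quad \text{s.t.} \quad
    \sum_{d \,:\, \ell \in \mathrm{path}(d)} b_d \le C_\ell \;\; \forall \ell,
    \qquad
    0 \le b_d \le r_d \;\; \forall d \in D,
    \]
    % where b_d is the bandwidth granted to elastic IP demand d, r_d its
    % request, and C_l the spectrum capacity of fiber link l.
    ```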

  2. Improvements to the gridding of precipitation data across Europe under the E-OBS scheme

    NASA Astrophysics Data System (ADS)

    Cornes, Richard; van den Besselaar, Else; Jones, Phil; van der Schrier, Gerard; Verver, Ge

    2016-04-01

    Gridded precipitation data are a valuable resource for analyzing past variations and trends in the hydroclimate. Such data also provide a reference with which model simulations may be driven, and against which they may be compared and/or adjusted. The E-OBS precipitation dataset is widely used for such analyses across Europe, and is particularly valuable since it provides a spatially complete, daily field across the European domain. In this analysis, improvements to the E-OBS precipitation dataset are presented that aim to provide a more reliable estimate of grid-box precipitation values, particularly in mountainous areas and in regions with a relative sparsity of input station data. The established three-stage E-OBS gridding scheme is retained, whereby monthly precipitation totals are gridded using a thin-plate spline; daily anomalies are gridded using indicator kriging; and the final dataset is produced by multiplying the two grids. The current analysis focuses on improving the monthly thin-plate spline, which has overall control over the final daily dataset. The results from different techniques are compared, and the influence on the final daily data is assessed by comparing the data against gridded country-wide datasets produced by various National Meteorological Services.
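
    The three-stage composition described above can be sketched in Python. Station locations and totals below are invented, and the daily anomaly grid is shown as a ready-made array (E-OBS derives it via indicator kriging); only the thin-plate-spline-then-multiply structure mirrors the scheme.

    ```python
    # Sketch of the monthly-spline x daily-anomaly composition (stage names
    # per the abstract; all data are invented).
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Stage 1: monthly precipitation totals at stations -> thin-plate spline
    stations = np.array([[5.0, 52.0], [6.5, 51.0], [4.2, 50.5]])  # lon, lat
    monthly_mm = np.array([62.0, 48.0, 75.0])
    lon, lat = np.meshgrid(np.linspace(4, 7, 30), np.linspace(50, 53, 30))
    cells = np.column_stack([lon.ravel(), lat.ravel()])
    monthly_grid = RBFInterpolator(
        stations, monthly_mm, kernel="thin_plate_spline"
    )(cells).reshape(lon.shape)

    # Stage 2: daily anomaly grid (fraction of the monthly total falling
    # today); in E-OBS this comes from indicator kriging of daily anomalies.
    daily_anom = np.full(lon.shape, 1.0 / 30.0)

    # Stage 3: the final daily field is the product of the two grids
    daily_mm = monthly_grid * daily_anom
    ```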

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    von Laszewski, G.; Gawor, J.; Lane, P.

    In this paper we report on the features of the Java Commodity Grid Kit (Java CoG Kit). The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus Toolkit protocols, allowing the Java CoG Kit to also communicate with the services distributed as part of the C Globus Toolkit reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus Toolkit software. In this paper we also report on efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure-Java resource management system that enables one to run Grid jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.

  4. Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service

    NASA Astrophysics Data System (ADS)

    Cameron, D.; Filipčič, A.; Guan, W.; Tsulaia, V.; Walker, R.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments; therefore, exploiting them requires a change in strategy for the experiment. They may be highly restrictive in what can be run or in connections to the outside world, or may tolerate opportunistic usage only on condition that tasks can be terminated without warning. The Advanced Resource Connector Computing Element (ARC CE), with its nonintrusive architecture, is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the ATLAS Event Service (AES) primarily to address the issue of jobs that can be terminated at any point, when opportunistic computing capacity is needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in a restrictive environment. In addition to the technical details, results from the deployment of this solution at the SuperMUC HPC centre in Munich are shown.

  5. Visualizing NetCDF Files by Using the EverVIEW Data Viewer

    USGS Publications Warehouse

    Conzelmann, Craig; Romañach, Stephanie S.

    2010-01-01

    Over the past few years, modelers in South Florida have started using the Network Common Data Form (NetCDF) as the standard container format for storing hydrologic and ecologic modeling inputs and outputs. With its origins in the meteorological discipline, NetCDF was created by the Unidata Program Center at the University Corporation for Atmospheric Research, in conjunction with the National Aeronautics and Space Administration and other organizations. NetCDF is a portable, scalable, self-describing, binary file format optimized for storing array-based scientific data. Despite the attributes that make NetCDF desirable to the modeling community, many natural resource managers have few desktop software packages that can consume NetCDF and unlock the valuable data contained within. The U.S. Geological Survey and the Joint Ecosystem Modeling group, an ecological modeling community of practice, are working to address this need with the EverVIEW Data Viewer. Available for several operating systems, this desktop software currently supports graphical display of NetCDF data as spatial overlays on a three-dimensional globe and views of grid-cell values in tabular form. An included Open Geospatial Consortium-compliant Web-mapping service client and charting interface allow the user to view Web-available spatial data as additional map overlays and provide simple charting visualizations of NetCDF grid values.
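
    The "self-describing, array-based" nature of NetCDF is easy to see from code. A minimal reading example with the netCDF4 Python library follows; the file name and variable name are hypothetical, not EverVIEW internals.

    ```python
    # Minimal NetCDF read: list the variables a file describes about itself,
    # then pull one time step of a gridded variable as a 2-D array.
    # "stage_output.nc" and "water_stage" are hypothetical names.
    from netCDF4 import Dataset

    with Dataset("stage_output.nc") as ds:
        print(ds.variables.keys())            # self-describing: named variables
        stage = ds.variables["water_stage"]
        print(stage.dimensions, stage.shape)  # e.g., ('time', 'y', 'x')
        grid = stage[0, :, :]                 # first time step as a 2-D array
        print(grid.min(), grid.max())
    ```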

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katz, Jessica; Denholm, Paul; Pless, Jacquelyn

    Wind and solar are inherently more variable and uncertain than the traditional dispatchable thermal and hydro generators that have historically provided a majority of grid-supplied electricity. The unique characteristics of variable renewable energy (VRE) resources have resulted in many misperceptions regarding their contribution to a low-cost and reliable power grid. Common areas of concern include: 1) the potential need for increased operating reserves, 2) the impact of variability and uncertainty on operating costs and pollutant emissions of thermal plants, and 3) the technical limits of VRE penetration rates to maintain grid stability and reliability. This fact sheet corrects misperceptions in these areas.

  7. Computational fluid dynamics for propulsion technology: Geometric grid visualization in CFD-based propulsion technology research

    NASA Technical Reports Server (NTRS)

    Ziebarth, John P.; Meyer, Doug

    1992-01-01

    The coordination of necessary resources, facilities, and special personnel to provide technical integration activities in the area of computational fluid dynamics applied to propulsion technology is examined. This involves the coordination of CFD activities among government, industry, and universities. Current geometry modeling, grid generation, and graphical methods are established for use in the analysis of CFD design methodologies.

  8. Progress in Machine Learning Studies for the CMS Computing Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo

    Here, computing systems for the LHC experiments were developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for the LHC experiments and their recent evolution can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of the LHC experiments in Run-1 and Run-2 so far.

  9. Progress in Machine Learning Studies for the CMS Computing Infrastructure

    DOE PAGES

    Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo; ...

    2017-12-06

    Here, computing systems for the LHC experiments were developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for the LHC experiments and their recent evolution can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of the LHC experiments in Run-1 and Run-2 so far.

  10. Cybersecurity for distributed energy resources and smart inverters

    DOE PAGES

    Qi, Junjian; Hahn, Adam; Lu, Xiaonan; ...

    2016-12-01

    The increased penetration of distributed energy resources (DER) will significantly increase the number of devices that are owned and controlled by consumers and third parties. These devices have a significant dependency on digital communication and control, which presents a growing risk from cyber attacks. This paper proposes a holistic attack-resilient framework to protect the integrated DER and the critical power grid infrastructure from malicious cyber attacks, helping ensure the secure integration of DER without harming grid reliability and stability. Specifically, we discuss the architecture of the cyber-physical power system with a high penetration of DER and analyze the unique cybersecurity challenges introduced by DER integration. Next, we summarize important attack scenarios against DER, propose a systematic DER resilience analysis methodology, and develop effective and quantifiable resilience metrics and design principles. Lastly, we introduce attack prevention, detection, and response measures specifically designed for DER integration across the cyber, physical device, and utility layers of the future smart grid.

  11. First Gridded Spatial Field Reconstructions of Snow from Tree Rings

    NASA Astrophysics Data System (ADS)

    Coulthard, B. L.; Anchukaitis, K. J.; Pederson, G. T.; Alder, J. R.; Hostetler, S. W.; Gray, S. T.

    2017-12-01

    Western North America's mountain snowpacks provide critical water resources for human populations and ecosystems. Warmer temperatures and changing precipitation patterns will increasingly alter the quantity, extent, and persistence of snow in coming decades. A comprehensive understanding of the causes and range of long-term variability in this system is required for forecasting future anomalies, but snowpack observations are limited and sparse. While individual tree-ring-based annual snowpack reconstructions have been developed for specific regions and mountain ranges, we present here the first collection of spatially explicit gridded field reconstructions of seasonal snowpack within the American Rocky Mountains. Capitalizing on a new western North American network of over 700 snow-sensitive tree-ring chronologies, as well as recent advances in PRISM-based snow modeling, our gridded reconstructions offer a full space-time characterization of snow and associated water-resource fluctuations over several centuries. The quality of the reconstructions is evaluated against existing observations, proxy records, and an independently developed first-order monthly snow model.

  12. Challenges for Social Control in Wireless Mobile Grids

    NASA Astrophysics Data System (ADS)

    Balke, Tina; Eymann, Torsten

    The evolution of mobile phones has led to new wireless mobile grids that lack a central controlling instance and require the cooperation of autonomous entities that can voluntarily commit resources, forming a common pool which can be used to achieve common and/or individual goals. The social dilemma in such systems is that it is advantageous for rational users to access the common-pool resources without any commitment of their own, since every commitment has its price (see ? for example). However, if a substantial number of users followed this selfish strategy, the network itself would be at stake. Thus, the question arises of how cooperation can be fostered in wireless mobile grids. Whereas many papers have dealt with this question from a technical point of view, this paper instead concentrates on a concept that has lately been discussed a great deal in this regard: social control. Social control concepts are contrasted with technical approaches, and the resulting challenges for social concepts, as well as possible solutions to these challenges, are discussed.

  13. Implementation of a SOA-Based Service Deployment Platform with Portal

    NASA Astrophysics Data System (ADS)

    Yang, Chao-Tung; Yu, Shih-Chi; Lai, Chung-Che; Liu, Jung-Chun; Chu, William C.

    In this paper we propose a Service Oriented Architecture (SOA) to provide a flexible and serviceable environment. SOA arose from commercial requirements; it integrates many techniques developed over ten years to provide solutions across different platforms, programming languages and user communities. SOA provides the connection, with a protocol, between service providers and service users. We then review the performance and reliability problems, and finally apply SOA to our Grid and Hadoop platforms. A service acts as an interface in front of the Resource Broker in the Grid, where the Resource Broker is middleware that provides functions for developers; Hadoop has a file replication feature to ensure file reliability. Services provided on the Grid and Hadoop are centralized. We design a portal in which users can use services directly or register services through the service provider. The portal also offers a service workflow function so that users can customize services according to the needs of their jobs.

  14. Cybersecurity for distributed energy resources and smart inverters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Junjian; Hahn, Adam; Lu, Xiaonan

    The increased penetration of distributed energy resources (DER) will significantly increase the number of devices that are owned and controlled by consumers and third parties. These devices have a significant dependency on digital communication and control, which presents a growing risk from cyber attacks. This paper proposes a holistic attack-resilient framework to protect the integrated DER and the critical power grid infrastructure from malicious cyber attacks, helping ensure the secure integration of DER without harming grid reliability and stability. Specifically, we discuss the architecture of the cyber-physical power system with a high penetration of DER and analyze the unique cybersecurity challenges introduced by DER integration. Next, we summarize important attack scenarios against DER, propose a systematic DER resilience analysis methodology, and develop effective and quantifiable resilience metrics and design principles. Lastly, we introduce attack prevention, detection, and response measures specifically designed for DER integration across the cyber, physical device, and utility layers of the future smart grid.

  15. Smart grid as a service: a discussion on design issues.

    PubMed

    Chao, Hung-Lin; Tsai, Chen-Chou; Hsiung, Pao-Ann; Chou, I-Hsin

    2014-01-01

    Smart grid allows the integration of distributed renewable energy resources into the conventional electricity distribution power grid such that the goals of reduction in power cost and in environmental pollution can be met through an intelligent and efficient matching between power generators and power loads. Currently, this rapidly developing infrastructure is not as "smart" as it should be because of the lack of a flexible, scalable, and adaptive structure. As a solution, this work proposes smart grid as a service (SGaaS), which not only allows a smart grid to be composed of basic services, but also allows power users to choose between different services based on their own requirements. The two important issues of service-level agreements and composition of services are also addressed in this work. Finally, we give the details of how SGaaS can be implemented using a FIPA-compliant JADE multiagent system.

  16. Interconnection, Integration, and Interactive Impact Analysis of Microgrids and Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Ning; Wang, Jianhui; Singh, Ravindra

    2017-01-01

    Distribution management systems (DMSs) are increasingly used by distribution system operators (DSOs) to manage the distribution grid and to monitor the status of both power imported from the transmission grid and power generated locally by a distributed energy resource (DER), to ensure that power flows and voltages along the feeders are maintained within designed limits and that appropriate measures are taken to guarantee service continuity and energy security. When microgrids are deployed and interconnected to the distribution grids, they will have an impact on the operation of the distribution grid. The challenge is to design this interconnection in such a way that it enhances the reliability and security of the distribution grid and the loads embedded in the microgrid, while providing economic benefits to all stakeholders, including the microgrid owner and operator and the distribution system operator.

  17. First Steps in the Smart Grid Framework: An Optimal and Feasible Pathway Toward Power System Reform in Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bracho, Riccardo; Linvill, Carl; Sedano, Richard

    With the vision to transform the power sector, Mexico included in the new laws and regulations the deployment of smart grid technologies and provided various attributes to the Ministry of Energy and the Energy Regulatory Commission to enact public policies and regulation. The use of smart grid technologies can have a significant impact on the integration of variable renewable energy resources while maintaining reliability and stability of the system, significantly reducing technical and non-technical electricity losses in the grid, improving cyber security, and allowing consumers to make distributed generation and demand response decisions. This report describes for Mexico's Ministry of Energy (SENER) an overall approach (Optimal Feasible Pathway) for moving forward with smart grid policy development in Mexico to enable increasing electric generation from renewable energy in a way that optimizes system stability and reliability in an efficient and cost-effective manner.

  18. Smart Grid as a Service: A Discussion on Design Issues

    PubMed Central

    Tsai, Chen-Chou; Chou, I-Hsin

    2014-01-01

    Smart grid allows the integration of distributed renewable energy resources into the conventional electricity distribution power grid such that the goals of reduction in power cost and in environmental pollution can be met through an intelligent and efficient matching between power generators and power loads. Currently, this rapidly developing infrastructure is not as “smart” as it should be because of the lack of a flexible, scalable, and adaptive structure. As a solution, this work proposes smart grid as a service (SGaaS), which not only allows a smart grid to be composed of basic services, but also allows power users to choose between different services based on their own requirements. The two important issues of service-level agreements and composition of services are also addressed in this work. Finally, we give the details of how SGaaS can be implemented using a FIPA-compliant JADE multiagent system. PMID:25243214

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deka, Deepjyoti; Backhaus, Scott N.; Chertkov, Michael

    Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), the learning of the operational radial structure is coupled with the problems of estimating nodal consumption statistics and inferring the line parameters of the grid. Based on a Linear-Coupled (LC) approximation of the AC power flow equations, polynomial-time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. The structure learning algorithm is then extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms is demonstrated through simulations on several distribution test cases.
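
    For orientation, linearizations of this family relate squared voltage magnitudes linearly to line flows on a radial feeder. The widely used LinDistFlow relation is shown below as a standard relative of the paper's Linear-Coupled approximation, not its exact equations.

    ```latex
    % LinDistFlow-type linearization of AC power flow on a radial feeder
    % (standard textbook form, shown for orientation only).
    \[
    v_i - v_j \;\approx\; 2\,\bigl(r_{ij} P_{ij} + x_{ij} Q_{ij}\bigr),
    \]
    % where v_i = |V_i|^2, r_ij and x_ij are the line resistance and
    % reactance, and P_ij, Q_ij the real and reactive flows from node i to
    % node j. Linearity in the flows is what makes structure and parameter
    % estimation from nodal voltage statistics tractable.
    ```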

  20. Evaluation of automated global mapping of Reference Soil Groups of WRB2015

    NASA Astrophysics Data System (ADS)

    Mantel, Stephan; Caspari, Thomas; Kempen, Bas; Schad, Peter; Eberhardt, Einar; Ruiperez Gonzalez, Maria

    2017-04-01

    SoilGrids is an automated system that provides global predictions for standard numeric soil properties at seven standard depths down to 200 cm, currently at spatial resolutions of 1 km and 250 m. In addition, the system provides predictions of depth to bedrock and the distribution of soil classes based on the WRB and USDA Soil Taxonomy (ST). In SoilGrids250m (1), soil classes (WRB, version 2006) consist of the RSG and the first prefix qualifier, whereas in SoilGrids1km (2), the soil class was assessed at RSG level. Automated mapping of World Reference Base (WRB) Reference Soil Groups (RSGs) at a global level has great advantages: maps can be updated in a short time span with relatively little effort when new data become available. To translate soil names from older versions of FAO/WRB and from the national classification systems of the source data into names according to WRB 2006, SoilGrids uses correlation tables. Soil properties and classes are predicted independently of each other, which means that soil property-soil property and soil property-soil class combinations for the same cells do not necessarily yield logical combinations when the map layers are studied jointly. The model prediction procedure is robust and is probably a minor source of error in the prediction of RSGs; the quality of the original soil classifications in the data and the use of correlation tables appear to be the largest sources of error in mapping RSG distribution patterns. Predicted patterns of dominant RSGs were evaluated in selected areas and sources of error were identified. Suggestions are made for improving WRB2015 RSG distribution predictions in SoilGrids. Keywords: automated global mapping; World Reference Base for Soil Resources; data evaluation; data quality assurance. References: (1) Hengl T, de Jesus JM, Heuvelink GBM, Ruiperez Gonzalez M, Kilibarda M, et al. (2016) SoilGrids250m: global gridded soil information based on Machine Learning. Earth System Science Data (ESSD), in review. (2) Hengl T, de Jesus JM, MacMillan RA, Batjes NH, Heuvelink GBM, et al. (2014) SoilGrids1km — Global Soil Information Based on Automated Mapping. PLoS ONE 9(8): e105992. doi:10.1371/journal.pone.0105992
