Sample records for existing grid infrastructure

  1. GEMSS: grid-infrastructure for medical service provision.

    PubMed

    Benkner, S; Berti, G; Engelbrecht, G; Fingberg, J; Kohring, G; Middleton, S E; Schmidt, R

    2005-01-01

    The European GEMSS Project is concerned with the creation of medical Grid service prototypes and their evaluation in a secure service-oriented infrastructure for distributed, on-demand supercomputing. Key aspects of the GEMSS Grid middleware include negotiable QoS support for time-critical service provision, flexible support for business models, and security at all levels in order to ensure privacy of patient data as well as compliance with EU law. The GEMSS Grid infrastructure is based on a service-oriented architecture and is being built on top of existing standard Grid and Web technologies. The GEMSS infrastructure offers a generic Grid service provision framework that hides the complexity of transforming existing applications into Grid services. For the development of client-side applications or portals, a pluggable component framework has been developed, providing developers with full control over business processes, service discovery, QoS negotiation, and workflow, while keeping their underlying implementation hidden from view. A first version of the GEMSS Grid infrastructure is operational and has been used for the set-up of a Grid test-bed deploying six medical Grid service prototypes including maxillo-facial surgery simulation, neuro-surgery support, radio-surgery planning, inhaled drug-delivery simulation, cardiovascular simulation and advanced image reconstruction. The GEMSS Grid infrastructure is based on standard Web Services technology with an anticipated future transition path towards the OGSA standard proposed by the Global Grid Forum. GEMSS demonstrates that the Grid can be used to provide medical practitioners and researchers with access to advanced simulation and image processing services for improved preoperative planning and near real-time surgical support.
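
    The negotiable QoS support described above can be sketched as a client-side selection step: ask several providers for an offer and pick the cheapest one that meets the deadline. This is a hypothetical illustration, not the GEMSS API; the `QosOffer` fields, the `negotiate()` helper, and the provider names are all invented, assuming deadline and price are the negotiated parameters.

```python
# Hypothetical sketch of negotiable QoS for a time-critical Grid service:
# the client collects offers and picks the cheapest one meeting its deadline.
from dataclasses import dataclass

@dataclass
class QosOffer:
    provider: str
    deadline_s: int   # guaranteed completion time in seconds
    price: float      # cost of running the service

def negotiate(offers, required_deadline_s):
    """Return the cheapest offer that satisfies the deadline, or None."""
    feasible = [o for o in offers if o.deadline_s <= required_deadline_s]
    return min(feasible, key=lambda o: o.price) if feasible else None

offers = [
    QosOffer("site-a", deadline_s=3600, price=120.0),
    QosOffer("site-b", deadline_s=1800, price=200.0),
    QosOffer("site-c", deadline_s=1200, price=350.0),
]

best = negotiate(offers, required_deadline_s=2000)
print(best.provider)  # site-b: cheapest offer meeting the 2000 s deadline
```

    A real negotiation would be a multi-round protocol with the service provider; the single-pass filter here only shows the shape of the decision.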

  2. Increasing the resilience and security of the United States' power infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Happenny, Sean F.

    2015-08-01

    The United States' power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off of existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power infrastructure control and distribution paradigms by utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Understanding how these systems behave in real-world conditions will lead to new ways to make our power infrastructure more resilient and secure. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security and the aging networks protecting them are becoming easier to attack.

  3. Concept of intellectual charging system for electrical and plug-in hybrid vehicles in Russian Federation

    NASA Astrophysics Data System (ADS)

    Kolbasov, A.; Karpukhin, K.; Terenchenko, A.; Kavalchuk, I.

    2018-02-01

    Electric vehicles have become the most common solution to improve sustainability of the transportation systems all around the world. Despite all benefits, wide adaptation of electric vehicles requires major changes in the infrastructure, including grid adaptation to the rapidly increased power demand and development of the Connected Car concept. This paper discusses the approaches to improve usability of electric vehicles, by creating suitable web-services, with possible connections vehicle-to-vehicle, vehicle-to-infrastructure, and vehicle-to-grid. Developed concept combines information about electrical loads on the grid in specific direction, navigation information from the on-board system, existing and empty charging slots and power availability. In addition, this paper presents the universal concept of the photovoltaic integrated charging stations, which are connected to the developed information systems. It helps to achieve rapid adaptation of the overall infrastructure to the needs of the electric vehicles users with minor changes in the existing grid and loads.
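
    The station-selection logic such a web service could apply might look like the following sketch. All field names (`distance_km`, `free_slots`, `headroom_kw`) and the ranking rule are assumptions for illustration, not taken from the paper.

```python
# Hypothetical ranking of photovoltaic charging stations, combining grid
# headroom, free charging slots, and navigation distance into one ordering.
def rank_stations(stations):
    """Prefer nearby stations with free slots and spare grid capacity."""
    usable = [s for s in stations if s["free_slots"] > 0]
    # Lower distance is better; higher grid headroom (kW) is better.
    return sorted(usable, key=lambda s: (s["distance_km"], -s["headroom_kw"]))

stations = [
    {"id": "pv-1", "distance_km": 2.5, "free_slots": 0, "headroom_kw": 50},
    {"id": "pv-2", "distance_km": 4.0, "free_slots": 3, "headroom_kw": 20},
    {"id": "pv-3", "distance_km": 4.0, "free_slots": 1, "headroom_kw": 80},
]

best = rank_stations(stations)[0]
print(best["id"])  # pv-3: same distance as pv-2 but more grid headroom
```

    The point of the sketch is the data fusion: grid load, slot availability, and on-board navigation data all feed one decision, which is what the paper's vehicle-to-grid/vehicle-to-infrastructure concept enables.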

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Happenny, Sean F.

    The United States’ power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off of existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power distribution networks utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security and the networks protecting them are becoming easier to breach. Providing a virtual power substation network to each student team at the National Collegiate Cyber Defense Competition, thereby supporting the education of future cyber security professionals, is another way PNNL is helping to strengthen the security of the nation’s power infrastructure.

  5. caGrid 1.0: a Grid enterprise architecture for cancer research.

    PubMed

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2007-10-11

    caGrid is the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. The current release, caGrid version 1.0, is developed as the production Grid software infrastructure of caBIG. Based on feedback from adopters of the previous version (caGrid 0.5), it has been significantly enhanced with new features and improvements to existing components. This paper presents an overview of caGrid 1.0, its main components, and enhancements over caGrid 0.5.

  6. caGrid 1.0: A Grid Enterprise Architecture for Cancer Research

    PubMed Central

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2007-01-01

    caGrid is the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. The current release, caGrid version 1.0, is developed as the production Grid software infrastructure of caBIG™. Based on feedback from adopters of the previous version (caGrid 0.5), it has been significantly enhanced with new features and improvements to existing components. This paper presents an overview of caGrid 1.0, its main components, and enhancements over caGrid 0.5. PMID:18693901

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalimunthe, Amty Ma’rufah Ardhiyah; Mindara, Jajat Yuda; Panatarani, Camellia

    Smart grids and distributed generation should be part of the solution to global climate change and to the energy crisis surrounding fossil fuel, the main source of electrical power generation. In order to meet rising electrical power demand and increasing service quality demands, as well as to reduce pollution, the existing power grid infrastructure should be developed into a smart grid with distributed power generation, which provides a great opportunity to address issues related to energy efficiency, energy security, power quality and aging infrastructure. The existing distribution system is conventionally an AC grid, while renewable resources require a DC grid system. This paper introduces a model of a smart DC grid with stable power generation and minimal, compact circuitry that can be implemented very cost-effectively with simple components. PC-based application software was developed to show the condition of the grid and to control it, making the grid 'smart'. The model is then subjected to a severe system perturbation, such as an incremental change in loads, to test the stability of the system. It is concluded that the system is able to detect and control voltage stability, indicating the ability of the power system to maintain a steady voltage within permissible ranges under normal conditions.
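
    A minimal sketch of the voltage-stability check described above, assuming a 48 V nominal DC bus and a ±5% permissible band (both values are illustrative; the abstract does not specify them):

```python
# Toy supervisor check for a smart DC grid: flag voltage samples that fall
# outside an assumed permissible band around the nominal bus voltage.
NOMINAL_V = 48.0   # assumed nominal DC bus voltage (illustrative)
TOLERANCE = 0.05   # assumed +/-5% permissible range (illustrative)

def within_permissible_range(v, nominal=NOMINAL_V, tol=TOLERANCE):
    return nominal * (1 - tol) <= v <= nominal * (1 + tol)

def check_bus(samples):
    """Return indices of samples outside the permissible voltage range."""
    return [i for i, v in enumerate(samples) if not within_permissible_range(v)]

# A load step at sample 2 drags the bus below the lower limit (45.6 V).
samples = [48.1, 47.9, 45.2, 46.0, 48.0]
print(check_bus(samples))  # [2]
```

    The paper's PC-based software would react to such a violation (e.g. by shedding load); the sketch only covers the detection half.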

  8. Analysis of the World Experience of Smart Grid Deployment: Economic Effectiveness Issues

    NASA Astrophysics Data System (ADS)

    Ratner, S. V.; Nizhegorodtsev, R. M.

    2018-06-01

    Despite the positive dynamics in the growth of RES-based power production in the electric power systems of many countries, the further development of commercially mature technologies of wind and solar generation is often constrained by the existing grid infrastructure and conventional energy supply practices. The integration of large wind and solar power plants into a single power grid and the development of microgeneration require the widespread introduction of a new smart grid technology cluster (smart power grids), whose technical advantages over conventional grids have been fairly well studied, while questions of their economic effectiveness remain open. Estimating and forecasting the potential economic effects of introducing innovative technologies in the power sector, during the stage preceding commercial deployment, is a methodologically difficult task that requires knowledge from different sciences. This paper contains an analysis of smart grid project implementation in Europe and the United States. Interval estimates are obtained for their basic economic parameters. It was revealed that the majority of implemented smart grid projects are not yet commercially effective, since their positive externalities are usually not recognized on the revenue side due to the lack of universal methods for monetizing public benefits. The results of the research can be used in modernization and development planning for existing grid infrastructure both at the federal level and at the level of certain regions and territories.

  9. The GILDA t-Infrastructure: grid training activities in Africa and future opportunities

    NASA Astrophysics Data System (ADS)

    Ardizzone, V.; Barbera, R.; Ciuffo, L.; Giorgio, E.

    2009-04-01

    Scientists, educators, and students from many parts of the world are not able to take advantage of ICT because the digital divide is growing and prevents less developed countries from exploiting its benefits. Instead of becoming more empowered and involved in worldwide developments, they are becoming increasingly marginalised as the world of education and science becomes increasingly Internet-dependent. The Grid INFN Laboratory for Dissemination Activities (GILDA) has, for almost five years, been spreading awareness of Grid technology to a large audience, training new communities and fostering new organisations to provide resources. The knowledge dissemination process guided by the training activities is a key factor in ensuring that all users can fully understand the characteristics of the Grid services offered by large existing e-Infrastructures. GILDA is becoming a "de facto" standard in training infrastructures (t-Infrastructures) and is adopted by many grid projects worldwide. In this contribution we report on the latest status of GILDA services and on the training activities recently carried out in sub-Saharan Africa (Malawi and South Africa). Particular care will be devoted to showing how GILDA can be "cloned" to satisfy both the education and research demands of African organisations. The opportunities to benefit from GILDA in the framework of the EPIKH project, as well as the plans of the European Commission on grid training and education for the 2010-2011 calls of its 7th Framework Programme, will be presented and discussed.

  10. International Symposium on Grids and Clouds (ISGC) 2016

    NASA Astrophysics Data System (ADS)

    The International Symposium on Grids and Clouds (ISGC) 2016 will be held at Academia Sinica in Taipei, Taiwan from 13-18 March 2016, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). The theme of ISGC 2016 focuses on “Ubiquitous e-infrastructures and Applications”. Contemporary research is impossible without a strong IT component - researchers rely on the existence of stable and widely available e-infrastructures and their higher level functions and properties. As a result of these expectations, e-Infrastructures are becoming ubiquitous, providing an environment that supports large scale collaborations that deal with global challenges as well as smaller and temporal research communities focusing on particular scientific problems. To support those diversified communities and their needs, the e-Infrastructures themselves are becoming more layered and multifaceted, supporting larger groups of applications. Following the call for last year's conference, ISGC 2016 continues its aim to bring together users and application developers with those responsible for the development and operation of multi-purpose ubiquitous e-Infrastructures. Topics of discussion include Physics (including HEP) and Engineering Applications, Biomedicine & Life Sciences Applications, Earth & Environmental Sciences & Biodiversity Applications, Humanities, Arts, and Social Sciences (HASS) Applications, Virtual Research Environment (including Middleware, tools, services, workflow, etc.), Data Management, Big Data, Networking & Security, Infrastructure & Operations, Infrastructure Clouds and Virtualisation, Interoperability, Business Models & Sustainability, Highly Distributed Computing Systems, and High Performance & Technical Computing (HPTC), etc.

  11. Grid computing in large pharmaceutical molecular modeling.

    PubMed

    Claus, Brian L; Johnson, Stephen R

    2008-07-01

    Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.

  12. A Security Architecture for Grid-enabling OGC Web Services

    NASA Astrophysics Data System (ADS)

    Angelini, Valerio; Petronzio, Luca

    2010-05-01

    In the proposed presentation we describe an architectural solution for enabling secure access to Grids and possibly other large scale on-demand processing infrastructures through OGC (Open Geospatial Consortium) Web Services (OWS). This work has been carried out in the context of the security thread of the G-OWS Working Group. G-OWS (gLite enablement of OGC Web Services) is an international open initiative started in 2008 by the European CYCLOPS, GENESI-DR, and DORII Project Consortia in order to collect and coordinate experiences in the enablement of OWSs on top of the gLite Grid middleware. G-OWS investigates the problem of the development of Spatial Data and Information Infrastructures (SDI and SII) based on Grid/Cloud capacity in order to enable Earth Science applications and tools. Concerning security issues, the integration of OWS-compliant infrastructures and gLite Grids needs to address relevant challenges, due to their respective design principles. In fact, OWSs are part of a Web-based architecture that delegates security aspects to other specifications, whereas the gLite middleware implements the Grid paradigm with a strong security model (the gLite Grid Security Infrastructure: GSI). In our work we propose a Security Architectural Framework allowing the seamless use of Grid-enabled OGC Web Services through the federation of existing security systems (mostly web based) with the gLite GSI. This is made possible by mediating between different security realms, whose mutual trust is established in advance during the deployment of the system itself. Our architecture is composed of three security tiers: the user's security system, a specific G-OWS security system, and the gLite Grid Security Infrastructure.
Applying the separation-of-concerns principle, each of these tiers is responsible for controlling access to a well-defined resource set, respectively: the user's organization resources, the geospatial resources and services, and the Grid resources. While the gLite middleware is tied to a consolidated security approach based on X.509 certificates, our system is able to support different kinds of user security infrastructures. Our central component, the G-OWS Security Framework, is based on the OASIS WS-Trust specifications and on the OGC GeoRM architectural framework. This makes it possible to satisfy advanced requirements such as the enforcement of specific geospatial policies and complex secure web service chained requests. The typical use case is represented by a scientist belonging to a given organization who issues a request to a G-OWS Grid-enabled Web Service. The system initially asks the user to authenticate to his/her organization's security system and, after verification of the user's security credentials, it translates the user's digital identity into a G-OWS identity. This identity is linked to a set of attributes describing the user's access rights to the G-OWS services and resources. Inside the G-OWS Security system, access restrictions are applied making use of the enhanced geospatial capabilities specified by the OGC GeoXACML. If the required action needs to make use of the Grid environment, the system checks whether the user is entitled to access a Grid infrastructure. In that case his/her identity is translated to a temporary Grid security token using the Short Lived Credential Services (IGTF Standard). In our case, for the specific gLite Grid infrastructure, some information (VOMS Attributes) is plugged into the Grid Security Token to grant access to the user's Virtual Organization Grid resources. The resulting token is used to submit the request to the Grid and also by the various gLite middleware elements to verify the user's grants.
Based on the presented framework, the G-OWS Security Working Group developed a prototype enabling the execution of OGC Web Services on the EGEE Production Grid through federation with a Shibboleth-based security infrastructure. Future plans aim to integrate other Web authentication services such as OpenID, Kerberos and WS-Federation.
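
    The three-tier credential translation described above can be sketched as follows. Everything here is illustrative: the real system uses WS-Trust, GeoXACML policies and VOMS attributes, whereas this toy version models each tier as a plain function and invents all names and data structures.

```python
# Toy model of the G-OWS credential chain: home-organization login ->
# G-OWS identity with geospatial attributes -> short-lived Grid token.
def authenticate_home_org(user, password, directory):
    """Tier 1: the user's own organization verifies the credentials."""
    return directory.get(user) == password

def to_gows_identity(user, attribute_store):
    """Tier 2: translate the verified identity into a G-OWS identity."""
    return {"subject": user, "geo_rights": attribute_store.get(user, [])}

def to_grid_token(gows_id, vo):
    """Tier 3: mint a short-lived token carrying VO attributes."""
    if "grid" not in gows_id["geo_rights"]:
        raise PermissionError("user not entitled to Grid access")
    return {"subject": gows_id["subject"], "vo": vo, "lifetime_s": 3600}

directory = {"alice": "s3cret"}               # tier-1 credential store
attributes = {"alice": ["wms:read", "grid"]}  # tier-2 access rights

assert authenticate_home_org("alice", "s3cret", directory)
token = to_grid_token(to_gows_identity("alice", attributes), vo="earth-sci")
print(token["vo"])  # earth-sci
```

    The key design point survives the simplification: each tier trusts only the tier before it, so the user's home password never reaches the Grid layer.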

  13. 3rd Annual Earth System Grid Federation and Ultrascale Visualization Climate Data Analysis Tools Face-to-Face Meeting Report December 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Dean N.

    The climate and weather data science community gathered December 3–5, 2013, at Lawrence Livermore National Laboratory, in Livermore, California, for the third annual Earth System Grid Federation (ESGF) and Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) Face-to-Face (F2F) Meeting, which was hosted by the Department of Energy, National Aeronautics and Space Administration, National Oceanic and Atmospheric Administration, the European Infrastructure for the European Network of Earth System Modelling, and the Australian Department of Education. Both ESGF and UV-CDAT are global collaborations designed to develop a new generation of open-source software infrastructure that provides distributed access and analysis to observed and simulated data from the climate and weather communities. The tools and infrastructure developed under these international multi-agency collaborations are critical to understanding extreme weather conditions and long-term climate change, while the F2F meetings help to build a stronger climate and weather data science community and stronger federated software infrastructure. The 2013 F2F meeting determined requirements for existing and impending national and international community projects; enhancements needed for data distribution, analysis, and visualization infrastructure; and standards and resources needed for better collaborations.

  14. AVQS: attack route-based vulnerability quantification scheme for smart grid.

    PubMed

    Ko, Jongbin; Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. As a result, a smart grid system has potential security threats in its network connectivity. To address this problem, we develop and apply a novel scheme to measure vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize security problems. However, existing vulnerability quantification schemes are not suitable for smart grids because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment, to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification.
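
    A hedged sketch of route-based aggregation in the spirit of AVQS (the paper's actual scoring formulas are not reproduced here): each hop on an attack route contributes a node vulnerability score, and an assumed end-to-end security weight discounts the total.

```python
# Illustrative route-based vulnerability aggregation: per-node scores are
# summed along an attack route and discounted by end-to-end security.
def route_score(route, node_scores, e2e_security):
    """Sum per-node vulnerability along the route, discounted by the
    end-to-end security strength (0 = no protection, 1 = perfect)."""
    raw = sum(node_scores[n] for n in route)
    return raw * (1.0 - e2e_security)

# Invented per-node scores for an advanced metering infrastructure (AMI).
node_scores = {"meter": 6.5, "collector": 4.0, "headend": 8.0}

# Two AMI attack routes toward the head-end system.
direct = route_score(["meter", "headend"], node_scores, e2e_security=0.3)
relayed = route_score(["meter", "collector", "headend"], node_scores,
                      e2e_security=0.3)
print(direct < relayed)  # True: the longer route exposes more nodes
```

    This is the property the abstract argues CVSS misses: two attacks on the same target can score differently because their routes traverse different network elements.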

  15. European grid services for global earth science

    NASA Astrophysics Data System (ADS)

    Brewer, S.; Sipos, G.

    2012-04-01

    This presentation will provide an overview of the distributed computing services that the European Grid Infrastructure (EGI) offers to the Earth Sciences community and also explain the processes whereby Earth Science users can engage with the infrastructure. One of the main overarching goals for EGI over the coming year is to diversify its user-base. EGI therefore - through the National Grid Initiatives (NGIs) that provide the bulk of resources that make up the infrastructure - offers a number of routes whereby users, either individually or as communities, can make use of its services. At one level there are two approaches to working with EGI: either users can make use of existing resources and contribute to their evolution and configuration; or alternatively they can work with EGI, and hence the NGIs, to incorporate their own resources into the infrastructure to take advantage of EGI's monitoring, networking and managing services. Adopting this approach does not imply a loss of ownership of the resources. Both of these approaches are entirely applicable to the Earth Sciences community. The former because researchers within this field have been involved with EGI (and previously EGEE) as a Heavy User Community and the latter because they have very specific needs, such as incorporating HPC services into their workflows, and these will require multi-skilled interventions to fully provide such services. In addition to the technical support services that EGI has been offering for the last year or so - the applications database, the training marketplace and the Virtual Organisation services - there now exists a dynamic short-term project framework that can be utilised to establish and operate services for Earth Science users. 
During this talk we will present a summary of various on-going projects that will be of interest to Earth Science users, with the intention that suggestions for future projects will emerge from the subsequent discussions:
    • The Federated Cloud Task Force is already providing a cloud infrastructure through a few committed NGIs. This is being made available to research communities participating in the Task Force, and the long-term aim is to integrate these national clouds into a pan-European infrastructure for scientific communities.
    • The MPI group provides support for application developers to port and scale up parallel applications to the global European Grid Infrastructure.
    • A lively portal developer and provider community is able to set up and operate custom, application- and/or community-specific portals for members of the Earth Science community to interact with EGI.
    • A project to assess the possibilities for federated identity management in EGI and the readiness of EGI member states for federated authentication and authorisation mechanisms.
    • Operating resources and user support services to process data with new types of services and infrastructures, such as desktop grids, map-reduce frameworks, and GPU clusters.

  16. Grid Computing at GSI for ALICE and FAIR - present and future

    NASA Astrophysics Data System (ADS)

    Schwarz, Kilian; Uhlig, Florian; Karabowicz, Radoslaw; Montiel-Gonzalez, Almudena; Zynovyev, Mykhaylo; Preuss, Carsten

    2012-12-01

    The future FAIR experiments CBM and PANDA have computing requirements that fall in a category that could currently not be satisfied by one single computing centre. One needs a larger, distributed computing infrastructure to cope with the amount of data to be simulated and analysed. Since 2002, GSI has operated a tier2 centre for ALICE@CERN. The central component of the GSI computing facility, and hence the core of the ALICE tier2 centre, is an LSF/SGE batch farm, currently split into three subclusters with a total of 15000 CPU cores shared by the participating experiments, and accessible both locally and soon also completely via Grid. In terms of data storage, a 5.5 PB Lustre file system, directly accessible from all worker nodes, is maintained, as well as a 300 TB xrootd-based Grid storage element. Based on this existing expertise, and utilising ALICE's middleware ‘AliEn’, the Grid infrastructure for PANDA and CBM is being built. Besides a tier0 centre at GSI, the computing Grids of the two FAIR collaborations now encompass more than 17 sites in 11 countries and are constantly expanding. The operation of the distributed FAIR computing infrastructure benefits significantly from the experience gained with the ALICE tier2 centre. A close collaboration between ALICE Offline and FAIR provides mutual advantages. The employment of a common Grid middleware as well as compatible simulation and analysis software frameworks ensures significant synergy effects.

  17. Critical Infrastructure Protection: EMP Impacts on the U.S. Electric Grid

    NASA Astrophysics Data System (ADS)

    Boston, Edwin J., Jr.

    The purpose of this research is to identify the vulnerabilities of the United States electric grid infrastructure to electromagnetic pulse attacks and the cyber-based impacts of those vulnerabilities on the electric grid. Additionally, the research identifies multiple defensive strategies designed to harden the electric grid against electromagnetic pulse attack, including prevention, mitigation and recovery postures. Research results confirm the importance of the electric grid to the United States critical infrastructure system and that an electromagnetic pulse attack against the electric grid could result in electric grid degradation, damage to critical infrastructures, and the potential for societal collapse. The conclusions of this research indicate that while an electromagnetic pulse attack against the United States electric grid could have catastrophic impacts on American society, many defensive strategies designed to prevent, mitigate, and/or recover from an electromagnetic pulse attack are currently under consideration. However, additional research is essential to further identify future target hardening opportunities, efficient implementation strategies and funding resources.

  18. Collaborative Access Control For Critical Infrastructures

    NASA Astrophysics Data System (ADS)

    Baina, Amine; El Kalam, Anas Abou; Deswarte, Yves; Kaaniche, Mohamed

    A critical infrastructure (CI) can fail with various degrees of severity due to physical and logical vulnerabilities. Since many interdependencies exist between CIs, failures can have dramatic consequences on the entire infrastructure. This paper focuses on threats that affect information and communication systems that constitute the critical information infrastructure (CII). A new collaborative access control framework called PolyOrBAC is proposed to address security problems that are specific to CIIs. The framework offers each organization participating in a CII the ability to collaborate with other organizations while maintaining control of its resources and internal security policy. The approach is demonstrated on a practical scenario involving the electrical power grid.

  19. AVQS: Attack Route-Based Vulnerability Quantification Scheme for Smart Grid

    PubMed Central

    Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. As a result, a smart grid system has potential security threats in its network connectivity. To address this problem, we develop and apply a novel scheme to measure vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize security problems. However, existing vulnerability quantification schemes are not suitable for smart grids because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment, to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification. PMID:25152923

  20. Consolidation and development roadmap of the EMI middleware

    NASA Astrophysics Data System (ADS)

    Kónya, B.; Aiftimiei, C.; Cecchi, M.; Field, L.; Fuhrmann, P.; Nilsen, J. K.; White, J.

    2012-12-01

    Scientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often incompatible middleware solutions. The European Middleware Initiative (EMI) is a collaboration of the major European middleware providers ARC, dCache, gLite and UNICORE. EMI aims to: deliver a consolidated set of middleware components for deployment in EGI, PRACE and other Distributed Computing Infrastructures; extend the interoperability between grids and other computing infrastructures; strengthen the reliability of the services; establish a sustainable model to maintain and evolve the middleware; fulfil the requirements of the user communities. This paper presents the consolidation and development objectives of the EMI software stack covering the last two years. The EMI development roadmap is introduced along the four technical areas of compute, data, security and infrastructure. The compute area plan focuses on consolidation of standards and agreements through a unified interface for job submission and management, a common format for accounting, the wide adoption of GLUE schema version 2.0 and the provision of a common framework for the execution of parallel jobs. The security area is working towards a unified security model and lowering the barriers to Grid usage by allowing users to gain access with their own credentials. The data area is focusing on implementing standards to ensure interoperability with other grids and industry components and to reuse already existing clients in operating systems and open source distributions. 
One of the highlights of the infrastructure area is the consolidation of the information system services via the creation of a common information backbone.
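The EMI compute-area goal above (a unified interface for job submission across ARC, gLite and UNICORE) can be illustrated with a hedged sketch: one common job description rendered into different middleware dialects. The dialect formats below are illustrative only, not the actual JDL/xRSL syntaxes.

```python
def to_jdl(job):
    """Render a job dict as a gLite-style JDL-like string (illustrative)."""
    lines = [f'Executable = "{job["executable"]}";',
             f'Arguments = "{" ".join(job["arguments"])}";']
    return "[\n  " + "\n  ".join(lines) + "\n]"

def to_xrsl(job):
    """Render the same job as an ARC xRSL-like string (illustrative)."""
    return (f'&(executable="{job["executable"]}")'
            f'(arguments="{" ".join(job["arguments"])}")')

# One middleware-neutral description, many backend renderings.
job = {"executable": "/bin/echo", "arguments": ["hello", "grid"]}
print(to_jdl(job))
print(to_xrsl(job))
```

The point of the sketch is the consolidation direction: user code holds a single job description, and translation to each middleware becomes a pluggable backend concern.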

  1. Elastic Cloud Computing Infrastructures in the Open Cirrus Testbed Implemented via Eucalyptus

    NASA Astrophysics Data System (ADS)

    Baun, Christian; Kunze, Marcel

    Cloud computing realizes the advantages and overcomes some restrictions of the grid computing paradigm. Elastic infrastructures can easily be created and managed by cloud users. In order to accelerate research on data center management and cloud services, the OpenCirrus™ research testbed has been started by HP, Intel and Yahoo!. Although commercial cloud offerings are proprietary, open source solutions exist in the field of IaaS with Eucalyptus, PaaS with AppScale and at the application layer with Hadoop MapReduce. This paper examines the I/O performance of cloud computing infrastructures implemented with Eucalyptus in contrast to Amazon S3.
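The kind of I/O measurement the abstract describes can be sketched with a toy local benchmark (this is not the paper's tool; a real comparison would target Eucalyptus volumes or S3 endpoints rather than a local temp file):

```python
import os
import tempfile
import time

def measure_io(size_mb=8, chunk=1 << 20):
    """Write then read size_mb of data; return (write_MBps, read_MBps).
    A toy stand-in for benchmarking cloud storage back ends."""
    buf = os.urandom(chunk)
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
        t0 = time.perf_counter()
        for _ in range(size_mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())          # include the flush-to-disk cost
        write_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk):
            pass
    read_s = time.perf_counter() - t0
    os.remove(path)
    return size_mb / write_s, size_mb / read_s

w, r = measure_io()
print(f"write {w:.1f} MB/s, read {r:.1f} MB/s")
```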

  2. A Development of Lightweight Grid Interface

    NASA Astrophysics Data System (ADS)

    Iwai, G.; Kawai, Y.; Sasaki, T.; Watase, Y.

    2011-12-01

    In order to help rapid development of Grid/Cloud-aware applications, we have developed an API to abstract distributed computing infrastructures based on SAGA (A Simple API for Grid Applications). SAGA, which is standardized in the OGF (Open Grid Forum), defines API specifications for accessing distributed computing infrastructures such as Grid, Cloud and local computing resources. The Universal Grid API (UGAPI), a set of command line interfaces (CLIs) and APIs, aims to offer a simpler API combining several SAGA interfaces with richer functionalities. The UGAPI CLIs offer typical functionalities required by end users for job management and file access on different distributed computing infrastructures as well as local computing resources. We have also built a web interface for particle therapy simulation and demonstrated large-scale calculation using different infrastructures at the same time. In this paper, we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources across different infrastructures, with technical details and practical experiences.
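The abstraction idea behind SAGA and UGAPI can be sketched as a thin facade over interchangeable backends. The class and method names below are hypothetical stand-ins, not the actual UGAPI or SAGA interfaces; only a local backend is shown.

```python
import subprocess

class LocalBackend:
    """Runs jobs on the local machine; a stand-in for adaptors that
    would target Grid or Cloud back ends behind the same facade."""
    def submit(self, executable, args):
        proc = subprocess.run([executable, *args],
                              capture_output=True, text=True)
        return {"state": "DONE" if proc.returncode == 0 else "FAILED",
                "output": proc.stdout}

class JobService:
    """Hypothetical UGAPI-style facade: one run() call, any backend."""
    def __init__(self, backend):
        self.backend = backend
    def run(self, executable, *args):
        return self.backend.submit(executable, list(args))

# Application code is identical whether the backend is local or remote.
job = JobService(LocalBackend()).run("echo", "hello")
print(job["state"], job["output"].strip())
```

Swapping `LocalBackend` for a Grid or Cloud adaptor is what lets the same client code drive different infrastructures at once, which is the scenario the paper demonstrates.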

  3. Application of green IT for physics data processing at INCDTIM

    NASA Astrophysics Data System (ADS)

    Farcas, Felix; Trusca, Radu; Albert, Stefan; Szabo, Izabella; Popeneciu, Gabriel

    2012-02-01

    Green IT is the next-generation technology used in datacenters around the world, and its benefits are of economic and financial interest. The new technologies are energy efficient, reduce cost and avoid potential disruptions to the existing infrastructure. The most critical problem arises in the cooling systems, which are essential to the functioning of a datacenter. Green IT used in a Grid network benefits the environment and is the next phase in computing infrastructure, one that will fundamentally change the way we think about and use computing power. At the National Institute for Research and Development of Isotopic and Molecular Technologies Cluj-Napoca (INCDTIM) we have implemented this kind of technology, and it has helped us process data in different domains, bringing INCDTIM onto the major Grid map with the RO-14-ITIM Grid site. In this paper we present the benefits that the new technology brought us and the results obtained in the year since the implementation of the new green technology.

  4. caGrid 1.0 : an enterprise Grid infrastructure for biomedical research.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oster, S.; Langella, S.; Hastings, S.

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design: An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including (1) discovery, (2) integrated and large-scale data analysis, and (3) coordinated study. Measurements: The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results: caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid.

  5. FermiGrid - experience and future plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chadwick, K.; Berman, E.; Canal, P.

    2007-09-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure: the successes and the problems.

  6. Web service module for access to g-Lite

    NASA Astrophysics Data System (ADS)

    Goranova, R.; Goranov, G.

    2012-10-01

    G-Lite is a lightweight grid middleware for grid computing installed on all clusters of the European Grid Infrastructure (EGI). The middleware is partially service-oriented and does not provide well-defined Web services for job management. The existing Web services in the environment cannot be directly used by grid users for building service compositions in the EGI. In this article we present a module of well-defined Web services for job management in the EGI. We describe the architecture of the module and the design of the developed Web services. The presented Web services are composable and can participate in service compositions (workflows). An example of usage of the module with tools for service compositions in g-Lite is shown.
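The composability the abstract emphasizes (well-defined job-management operations that can be chained into workflows) can be sketched in miniature. The operation names below are hypothetical, not the module's actual WSDL interface, and the "execution" is faked so the composition pattern stays visible.

```python
_jobs = {}

def submit_job(command):
    """Submit a command; a real service would hand it to a gLite WMS."""
    job_id = f"job-{len(_jobs) + 1}"
    _jobs[job_id] = {"state": "DONE", "output": f"ran: {command}"}
    return job_id

def job_status(job_id):
    return _jobs[job_id]["state"]

def job_output(job_id):
    return _jobs[job_id]["output"]

def workflow(commands):
    """Compose the three operations: submit each command, keep the
    outputs of jobs that finished -- the pattern a BPEL-style service
    composition would encode."""
    ids = [submit_job(c) for c in commands]
    return [job_output(j) for j in ids if job_status(j) == "DONE"]

results = workflow(["hostname", "date"])
print(results)
```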

  7. Reliability analysis of interdependent lattices

    NASA Astrophysics Data System (ADS)

    Limiao, Zhang; Daqing, Li; Pengju, Qin; Bowen, Fu; Yinan, Jiang; Zio, Enrico; Rui, Kang

    2016-06-01

    Network reliability analysis has drawn much attention recently due to the risks of catastrophic damage in networked infrastructures. These infrastructures are dependent on each other as a result of various interactions. However, most reliability analyses of these interdependent networks do not consider spatial constraints, which are found to be important for the robustness of infrastructures including power grids and transport systems. Here we study the reliability properties of interdependent lattices with different ranges of spatial constraints. Our study shows that interdependent lattices with strong spatial constraints are more resilient than interdependent Erdős–Rényi networks. There exists an intermediate range of spatial constraints at which the interdependent lattices have minimal resilience.
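The cascade mechanism studied in work like this can be sketched with a highly simplified toy model (not the paper's model): two lattices on a torus, each site depending on a site in the other lattice at most `r` steps away, where a site survives only if its dependency partner survives and it sits in its own lattice's giant component.

```python
import random
from collections import deque

def giant_component(alive, L):
    """Largest 4-connected cluster among alive sites on an L x L torus."""
    seen, best = set(), set()
    for start in alive:
        if start in seen:
            continue
        comp, q = {start}, deque([start])
        seen.add(start)
        while q:
            x, y = q.popleft()
            for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                n = (nx % L, ny % L)
                if n in alive and n not in seen:
                    seen.add(n); comp.add(n); q.append(n)
        if len(comp) > len(best):
            best = comp
    return best

def cascade(L=16, r=2, p_remove=0.2, seed=1):
    """Toy interdependent-lattice cascade; returns the surviving
    fraction of lattice A after removing p_remove of its sites."""
    rng = random.Random(seed)
    sites = [(x, y) for x in range(L) for y in range(L)]
    # Spatially constrained dependency: partner at most r steps away.
    dep = {s: ((s[0] + rng.randint(-r, r)) % L,
               (s[1] + rng.randint(-r, r)) % L) for s in sites}
    A = {s for s in sites if rng.random() > p_remove}
    B = set(sites)
    while True:
        A2 = giant_component(A, L) & {s for s in A if dep[s] in B}
        B2 = giant_component(B, L) & {s for s in B if dep[s] in A2}
        if A2 == A and B2 == B:
            return len(A) / (L * L)
        A, B = A2, B2

survivors = cascade()
print(f"surviving fraction: {survivors:.2f}")
```

Varying `r` in this sketch is the analogue of varying the range of the spatial constraint in the paper.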

  8. Grid-based HPC astrophysical applications at INAF Catania.

    NASA Astrophysics Data System (ADS)

    Costa, A.; Calanducci, A.; Becciani, U.; Capuzzo Dolcetta, R.

    The research activity in the grid area at INAF Catania has been devoted to two main goals: the integration of a multiprocessor supercomputer (IBM SP4) within the INFN-GRID middleware and the development of a web portal, Astrocomp-G, for the submission of astrophysical jobs to the grid infrastructure. Most of the current grid infrastructure is based on commodity hardware, i.e. i386-architecture machines (Intel Celeron, Pentium III, IV, AMD Duron, Athlon) running Linux Red Hat OS. We were the first institute to integrate a totally different machine, an IBM SP with RISC architecture and AIX OS, as a powerful Worker Node inside a grid infrastructure. We identified and ported to AIX OS the grid components dealing with job monitoring and execution, and properly tuned the Computing Element to deliver jobs to this special Worker Node. For testing purposes we used MARA, an astrophysical application for the analysis of light curve sequences. Astrocomp-G is a user-friendly front end to our grid site. Users who want to submit the astrophysical applications already available in the portal need to own a valid personal X.509 certificate in addition to a username and password released by the grid portal webmaster. The personal X.509 certificate is a prerequisite for the creation of a short- or long-term proxy certificate that allows the grid infrastructure services to clearly identify whether the owner of the job has the permissions to use resources and data. X.509 and proxy certificates are part of GSI (Grid Security Infrastructure), a standard security tool adopted by all major grid sites around the world.

  9. FermiGrid—experience and future plans

    NASA Astrophysics Data System (ADS)

    Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Sharma, N.; Timm, S.; Yocum, D. R.

    2008-07-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.

  10. User-level framework for performance monitoring of HPC applications

    NASA Astrophysics Data System (ADS)

    Hristova, R.; Goranov, G.

    2013-10-01

    HP-SEE links the existing HPC facilities in South East Europe into a common infrastructure. Analysis of the performance of High-Performance Computing (HPC) applications in the infrastructure can serve the end user as a diagnostic of the overall performance of their applications. The existing monitoring tools for HP-SEE provide the end user with only aggregated information across all applications. Usually, the user does not have permission to select only the information relevant to them and their applications. In this article we present a framework for performance monitoring of HPC applications in the HP-SEE infrastructure. The framework provides standardized performance metrics, which every user can use to monitor their applications. Furthermore, a programming interface is developed as part of the framework. The interface allows the user to publish metrics data from their application and to read and analyze the gathered information. Publishing and reading through the framework is possible only with a grid certificate valid for the infrastructure; therefore the user is authorized to access only the data for their own applications.
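The per-user access model the abstract describes (publish and read metrics only for applications you own, identified by your certificate) can be sketched as follows. The class, the "certificate subject" strings, and the ownership rule are illustrative simplifications, not the framework's actual interface.

```python
class MetricsStore:
    """Toy per-user metrics store: the first certificate subject to
    publish for an application becomes its owner; only the owner may
    publish further samples or read them back."""
    def __init__(self):
        self.owners, self.data = {}, {}

    def publish(self, cert_subject, app, metric, value):
        self.owners.setdefault(app, cert_subject)
        if self.owners[app] != cert_subject:
            raise PermissionError("not the owner of this application")
        self.data.setdefault(app, []).append((metric, value))

    def read(self, cert_subject, app):
        if self.owners.get(app) != cert_subject:
            raise PermissionError("not the owner of this application")
        return self.data[app]

store = MetricsStore()
store.publish("/C=BG/CN=Alice", "sim-app", "wallclock_s", 412.0)
print(store.read("/C=BG/CN=Alice", "sim-app"))
```

In the real framework the subject would come from a validated grid certificate rather than a plain string, but the authorization shape is the same.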

  11. The open science grid

    NASA Astrophysics Data System (ADS)

    Pordes, Ruth; OSG Consortium; Petravick, Don; Kramer, Bill; Olson, Doug; Livny, Miron; Roy, Alain; Avery, Paul; Blackburn, Kent; Wenaus, Torre; Würthwein, Frank; Foster, Ian; Gardner, Rob; Wilde, Mike; Blatecky, Alan; McGee, John; Quick, Rob

    2007-07-01

    The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. OSG provides support for and evolution of the infrastructure through activities that cover operations, security, software, troubleshooting, addition of new capabilities, and support for existing and engagement with new communities. The OSG SciDAC-2 project provides specific activities to manage and evolve the distributed infrastructure and support its use. The innovative aspects of the project are the maintenance and performance of a collaborative (shared & common) petascale national facility over tens of autonomous computing sites, for many hundreds of users, transferring terabytes of data a day, executing tens of thousands of jobs a day, and providing robust and usable resources for scientific groups of all types and sizes. More information can be found at the OSG web site: www.opensciencegrid.org.

  12. Complex Networks and Critical Infrastructures

    NASA Astrophysics Data System (ADS)

    Setola, Roberto; de Porcellinis, Stefano

    The term “Critical Infrastructures” denotes all those technological infrastructures, such as electric grids, telecommunication networks, railways, healthcare systems and financial circuits, that are increasingly relevant for the welfare of our countries. Each of these infrastructures is a complex, highly non-linear, geographically dispersed cluster of systems that interacts with its human owners, operators and users and with the other infrastructures. Their increased relevance, together with political and technological scenarios that have increased their exposure to accidental failures and deliberate attacks, demands different and innovative protection strategies (generally indicated as CIP, Critical Infrastructure Protection). To this end it is mandatory to understand the mechanisms that regulate the dynamics of these infrastructures. In this framework, an interesting approach is the one provided by complex networks. In this paper we illustrate some results achieved by considering structural and functional properties of the corresponding topological networks, both when each infrastructure is considered as an autonomous system and when we also take into account the dependencies existing among the different infrastructures.
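A minimal version of the structural-robustness analysis such work performs: compare the giant component of a toy hub-and-spoke "infrastructure" under targeted (highest-degree-first) versus random node removal. The network and the analysis are illustrative, not taken from the chapter.

```python
import random

def largest_component(nodes, edges):
    """Size of the largest connected component restricted to `nodes`."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v); adj[v].add(u)
    seen, best = set(), 0
    for n in nodes:
        if n in seen:
            continue
        stack, size = [n], 0
        seen.add(n)
        while stack:
            cur = stack.pop()
            size += 1
            for m in adj[cur]:
                if m not in seen:
                    seen.add(m); stack.append(m)
        best = max(best, size)
    return best

def attack(nodes, edges, remove, targeted):
    """Remove `remove` nodes (by degree if targeted, else at random)
    and return the surviving giant-component size."""
    degree = {n: 0 for n in nodes}
    for u, v in edges:
        degree[u] += 1; degree[v] += 1
    if targeted:
        order = sorted(nodes, key=lambda n: -degree[n])
    else:
        order = random.Random(0).sample(list(nodes), len(nodes))
    kept = set(nodes) - set(order[:remove])
    return largest_component(kept, edges)

# Toy infrastructure: one hub serving every node, plus a weak ring.
nodes = list(range(11))
edges = ([(0, i) for i in range(1, 11)]
         + [(i, i + 1) for i in range(1, 10)] + [(10, 1)])

print("targeted removal of 3:", attack(nodes, edges, 3, True))
print("random removal of 3:", attack(nodes, edges, 3, False))
```

Targeted removal takes out the hub first, which is exactly the vulnerability of heterogeneous-degree infrastructure topologies that the complex-networks literature highlights.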

  13. caGrid 1.0: An Enterprise Grid Infrastructure for Biomedical Research

    PubMed Central

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    Objective To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. Measurements The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. Conclusions While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community. PMID:18096909

  14. caGrid 1.0: an enterprise Grid infrastructure for biomedical research.

    PubMed

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community.

  15. Data distribution service-based interoperability framework for smart grid testbed infrastructure

    DOE PAGES

    Youssef, Tarek A.; Elsayed, Ahmed T.; Mohammed, Osama A.

    2016-03-02

    This study presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurements and control network. The advantages of utilizing the data-centric over message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves the communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamic participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programming interface for the testbed infrastructure are developed in order to facilitate interoperability and remote access to the testbed. This interface allows control, monitoring, and performing of experiments remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).
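The data-centric pattern the paper builds on DDS can be sketched with a toy in-process bus: publishers write keyed samples to topics, subscribers register callbacks, and late joiners receive the last value per key (a crude stand-in for DDS durability and discovery). This is not the DDS API, just the shape of the idea.

```python
from collections import defaultdict

class DataBus:
    """Toy data-centric bus: state lives on the bus per (topic, key),
    decoupling publishers from subscribers."""
    def __init__(self):
        self.last = defaultdict(dict)   # topic -> key -> latest sample
        self.subs = defaultdict(list)   # topic -> callbacks

    def publish(self, topic, key, sample):
        self.last[topic][key] = sample
        for cb in self.subs[topic]:
            cb(key, sample)

    def subscribe(self, topic, cb):
        self.subs[topic].append(cb)
        for key, sample in self.last[topic].items():  # late-join catch-up
            cb(key, sample)

bus = DataBus()
bus.publish("grid/measurements", "feeder1", {"voltage": 1.02})
received = []
bus.subscribe("grid/measurements", lambda k, s: received.append((k, s)))
bus.publish("grid/measurements", "feeder2", {"voltage": 0.98})
print(received)
```

Because subscribers bind to topics rather than to specific senders, nodes can join and leave dynamically without reconfiguring peers, which is the single-point-of-failure avoidance the abstract describes.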

  16. An Attack-Resilient Middleware Architecture for Grid Integration of Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Yifu; Mendis, Gihan J.; He, Youbiao

    In recent years, the increasing penetration of Distributed Energy Resources (DERs) has made an impact on the operation of electric power systems. In the grid integration of DERs, data acquisition systems and communications infrastructure are crucial technologies for maintaining system economic efficiency and reliability. Since most of these generators are relatively small, dedicated communications investments for every generator are capital cost prohibitive. Combining real-time attack-resilient communications middleware with Internet of Things (IoT) technologies allows for the use of existing infrastructure. In this paper, we propose an intelligent communication middleware that utilizes Quality of Experience (QoE) metrics to complement the conventional Quality of Service (QoS) evaluation. Furthermore, our middleware employs deep learning techniques to detect and defend against congestion attacks. The simulation results illustrate the efficiency of our proposed communications middleware architecture.
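The paper's detector is a deep-learning model; as a hedged stand-in, the congestion-detection idea can be shown with a simple statistical rule: flag samples whose latency deviates far from a trailing window. The trace and thresholds are made up for illustration.

```python
from statistics import mean, stdev

def congestion_alarm(latencies_ms, window=20, threshold=3.0):
    """Flag indices whose latency deviates more than `threshold`
    standard deviations from the trailing window -- a simple
    statistical stand-in for a learned congestion detector."""
    alarms = []
    for i in range(window, len(latencies_ms)):
        ref = latencies_ms[i - window:i]
        mu, sigma = mean(ref), stdev(ref)
        if sigma > 0 and abs(latencies_ms[i] - mu) > threshold * sigma:
            alarms.append(i)
    return alarms

# Steady ~10 ms traffic, then a burst typical of a congestion attack.
trace = [10.0, 10.2, 9.9, 10.1, 10.0] * 5 + [80.0, 85.0, 90.0]
alarms = congestion_alarm(trace)
print(alarms)
```

A learned model earns its keep when attacks are subtler than this burst; the middleware's deep-learning approach targets exactly those harder cases.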

  17. 2014 Earth System Grid Federation and Ultrascale Visualization Climate Data Analysis Tools Conference Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Dean N.

    2015-01-27

    The climate and weather data science community met December 9–11, 2014, in Livermore, California, for the fourth annual Earth System Grid Federation (ESGF) and Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT) Face-to-Face (F2F) Conference, hosted by the Department of Energy, the National Aeronautics and Space Administration, the National Oceanic and Atmospheric Administration, the European Infrastructure for the European Network of Earth System Modelling, and the Australian Department of Education. Both ESGF and UV-CDAT remain global collaborations committed to developing a new generation of open-source software infrastructure that provides distributed access and analysis to simulated and observed data from the climate and weather communities. The tools and infrastructure created under these international multi-agency collaborations are critical to understanding extreme weather conditions and long-term climate change. In addition, the F2F conference fosters a stronger climate and weather data science community and facilitates a stronger federated software infrastructure. The 2014 F2F conference detailed the progress of ESGF, UV-CDAT, and other community efforts over the year and set new priorities and requirements for existing and impending national and international community projects, such as the Coupled Model Intercomparison Project Phase Six. Specifically discussed at the conference were project capabilities and enhancement needs for data distribution, analysis, visualization, hardware and network infrastructure, standards, and resources.

  18. Boundary condition identification for a grid model by experimental and numerical dynamic analysis

    NASA Astrophysics Data System (ADS)

    Mao, Qiang; Devitis, John; Mazzotti, Matteo; Bartoli, Ivan; Moon, Franklin; Sjoblom, Kurt; Aktan, Emin

    2015-04-01

    There is a growing need to characterize unknown foundations and assess substructures in existing bridges. This is becoming an important issue for the serviceability and safety of bridges, as well as for the possibility of partially reusing existing infrastructure. Within this broader context, this paper investigates the possibility of identifying, locating and quantifying changes in boundary conditions, leveraging a simply supported grid structure with a composite deck. Multi-reference impact tests are performed on the grid model, and one supporting bearing is modified by replacing a steel cylindrical roller with a roller of compliant material. Impact-based modal analysis provides global modal parameters, such as damped natural frequencies, mode shapes and the flexibility matrix, that are used as indicators of boundary condition changes. An updating process combining a hybrid optimization algorithm and the finite element software suite ABAQUS is presented in this paper. The updated ABAQUS model of the grid, which simulates the supporting bearing with springs, is used to detect and quantify the change in boundary conditions.
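The model-updating idea (tune a spring stiffness until predicted modal frequencies match measured ones) can be shown in miniature with a single-degree-of-freedom spring-mass model and a grid search. The numbers are hypothetical; the real study updates a full ABAQUS model with a hybrid optimizer.

```python
import math

def natural_freq(k, m=100.0):
    """Natural frequency (Hz) of a 1-DOF spring-mass model of a bearing."""
    return math.sqrt(k / m) / (2 * math.pi)

def identify_stiffness(f_measured, candidates, m=100.0):
    """Model updating in miniature: pick the spring stiffness whose
    predicted frequency best matches the measured one."""
    return min(candidates, key=lambda k: abs(natural_freq(k, m) - f_measured))

k_true = 2.5e6                      # hypothetical bearing stiffness, N/m
f_meas = natural_freq(k_true)       # "measured" modal frequency
candidates = [1e6 * s for s in (0.5, 1, 1.5, 2, 2.5, 3)]
print(f"identified stiffness: {identify_stiffness(f_meas, candidates):.0f} N/m")
```

Replacing the steel roller with a compliant one lowers the effective spring stiffness, which shifts the measured frequencies; the updating loop recovers that shift, as the sketch shows for the 1-DOF case.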

  19. AstroGrid: the UK's Virtual Observatory Initiative

    NASA Astrophysics Data System (ADS)

    Mann, Robert G.; Astrogrid Consortium; Lawrence, Andy; Davenhall, Clive; Mann, Bob; McMahon, Richard; Irwin, Mike; Walton, Nic; Rixon, Guy; Watson, Mike; Osborne, Julian; Page, Clive; Allan, Peter; Giaretta, David; Perry, Chris; Pike, Dave; Sherman, John; Murtagh, Fionn; Harra, Louise; Bentley, Bob; Mason, Keith; Garrington, Simon

    AstroGrid is the UK's Virtual Observatory (VO) initiative. It brings together the principal astronomical data centres in the UK, and has been funded to the tune of ~£5M over the next three years, via PPARC, as part of the UK e-science programme. Its twin goals are the provision of the infrastructure and tools for the federation and exploitation of large astronomical (X-ray to radio), solar and space plasma physics datasets, and the delivery of federations of current datasets for its user communities to exploit using those tools. Whilst AstroGrid's work will be centred on existing and future (e.g. VISTA) UK datasets, it will seek solutions to generic VO problems and will contribute to the developing international virtual observatory framework: AstroGrid is a member of the EU-funded Astrophysical Virtual Observatory project, has close links to a second EU Grid initiative, the European Grid of Solar Observations (EGSO), and will seek an active role in the development of the common standards on which the international virtual observatory will rely. In this paper we shall primarily describe the concrete plans for AstroGrid's one-year Phase A study, which will centre on: (i) the definition of detailed science requirements through community consultation; (ii) the undertaking of a "functionality market survey" to test the utility of existing technologies for the VO; and (iii) a pilot programme of database federations, each addressing different aspects of the general database federation problem. Further information can be found at the AstroGrid web site.

  20. Distributed Accounting on the Grid

    NASA Technical Reports Server (NTRS)

    Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.

    2001-01-01

    By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high performance computational and data storage resources together into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the Computational grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.
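The distributed-accounting idea can be sketched as follows: each site reports usage records locally, and a grid-level view aggregates them against a user's allocation. The record fields and numbers are illustrative, not a published usage-record schema.

```python
# Usage records as individual sites might report them.
records = [
    {"site": "siteA", "user": "alice", "cpu_hours": 120.0},
    {"site": "siteB", "user": "alice", "cpu_hours": 30.5},
    {"site": "siteA", "user": "bob",   "cpu_hours": 10.0},
]

# Grid-level allocations granted to each user.
allocations = {"alice": 200.0, "bob": 50.0}

def remaining(user):
    """Allocation left after summing the user's usage across all sites."""
    used = sum(r["cpu_hours"] for r in records if r["user"] == user)
    return allocations[user] - used

for user in allocations:
    print(f"{user}: {remaining(user):.1f} CPU-hours remaining")
```

The hard parts the paper addresses sit around this core: collecting such records reliably from autonomous sites and keeping the distributed totals consistent enough to enforce allocations.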

  1. Power Systems Integration Laboratory | Energy Systems Integration Facility

    Science.gov Websites

    Website excerpt describing the Power Systems Integration Laboratory's key infrastructure: grid simulator, load bank, Opal-RT real-time simulator, battery, inverter mounting racks, data acquisition, house power, and a PV simulator, supporting inverter tests such as frequency-watt response and grid anomaly ride-through.

  2. Current Grid operation and future role of the Grid

    NASA Astrophysics Data System (ADS)

    Smirnova, O.

    2012-12-01

    Grid-like technologies and approaches became an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the greater the burden on operations, and vice versa. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under the significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. The reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider, and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us.
If no standardisation and convergence efforts will take place, Grid will become limited to HEP; if however the current multitude of Grid-like systems will converge to a generic, modular and extensible solution, Grid will become true to its name.

  3. Sharing Data and Analytical Resources Securely in a Biomedical Research Grid Environment

    PubMed Central

    Langella, Stephen; Hastings, Shannon; Oster, Scott; Pan, Tony; Sharma, Ashish; Permar, Justin; Ervin, David; Cambazoglu, B. Barla; Kurc, Tahsin; Saltz, Joel

    2008-01-01

    Objectives To develop a security infrastructure to support controlled and secure access to data and analytical resources in a biomedical research Grid environment, while facilitating resource sharing among collaborators. Design A Grid security infrastructure, called Grid Authentication and Authorization with Reliably Distributed Services (GAARDS), was developed as a key architecture component of the NCI-funded cancer Biomedical Informatics Grid (caBIG™). GAARDS is designed to support, in a distributed environment: 1) efficient provisioning and federation of user identities and credentials; 2) group-based access control, with which resource providers can enforce policies based on community-accepted groups and local groups; and 3) management of a trust fabric so that policies can be enforced based on required levels of assurance. Measurements GAARDS is implemented as a suite of Grid services and administrative tools. It provides three core services: Dorian for management and federation of user identities, the Grid Trust Service for maintaining and provisioning a federated trust fabric within the Grid environment, and Grid Grouper for enforcing authorization policies based on both local and Grid-level groups. Results The GAARDS infrastructure is available as a stand-alone system and as a component of the caGrid infrastructure. More information about GAARDS can be accessed at http://www.cagrid.org. Conclusions GAARDS provides a comprehensive system to address the security challenges associated with environments in which resources may be located at different sites, requests to access the resources may cross institutional boundaries, and user credentials are created, managed, and revoked dynamically in a decentralized manner. PMID:18308979
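
The group-based authorization idea behind Grid Grouper can be illustrated with a small sketch. This is not the GAARDS API (which is a service-oriented Java suite); the group names, identities, and policy layout below are invented for illustration only.

```python
# Sketch of group-based access control in the spirit of Grid Grouper.
# Hypothetical data: membership may come from a Grid-level group service
# or from a local (institutional) group store.
GROUPS = {
    "grid:caBIG/researchers": {"/O=caBIG/CN=alice", "/O=caBIG/CN=bob"},
    "local:radiology-staff": {"/O=caBIG/CN=carol"},
}

def is_authorized(identity, policy_groups):
    """Grant access if the identity belongs to any group named by the policy."""
    return any(identity in GROUPS.get(g, set()) for g in policy_groups)

# A resource policy referencing one Grid-level and one local group.
policy = ["grid:caBIG/researchers", "local:radiology-staff"]
print(is_authorized("/O=caBIG/CN=alice", policy))    # True
print(is_authorized("/O=caBIG/CN=mallory", policy))  # False
```

The point of the design is that the resource provider only names groups in its policy; who belongs to those groups is managed elsewhere, locally or Grid-wide.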

  4. Building Stronger State Partnerships with the US Department of Energy (Energy Assurance)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mike Keogh

    2011-09-30

    From 2007 until 2011, the National Association of Regulatory Utility Commissioners (NARUC) engaged in a partnership with the National Energy Technology Lab (NETL) to improve State-Federal coordination on electricity policy and energy assurance issues. This project allowed State Public Utility Commissioners and their staffs to engage at the most cutting-edge level in the arenas of energy assurance and electricity policy. Four tasks were outlined in the Statement of Performance Objectives: Task 1 - Training for Commissions on Critical Infrastructure Topics; Task 2 - Analyze and Implement Recommendations on Energy Assurance Issues; Task 3 - Ongoing liaison activities & outreach to build stronger networks between federal agencies and state regulators; and Task 4 - Additional Activities. Although four tasks were prescribed, in practice these tasks were carried out under two major activity areas: the critical infrastructure and energy assurance partnership with the US Department of Energy's Infrastructure Security and Emergency Response office, and the National Council on Electricity Policy, a collaborative which since 1994 has brought together State and Federal policymakers to address the most pressing issues facing the grid, from restructuring to smart grid implementation. On critical infrastructure protection, this cooperative agreement helped State officials achieve several important advances. The lead role on NARUC's side was played by our Committee on Critical Infrastructure Protection. Key lessons learned in this arena include the following: (1) Tabletops and exercises work - they improve the capacity of policymakers and their industry counterparts to face the most challenging energy emergencies, and thereby equip these actors to face everything up to that point as well. (2) Information sharing is critical - connecting people who need information with people who have information is a key success factor. However, exposure of critical infrastructure information to bad actors also creates new vulnerabilities. (3) Tensions exist between the transparency-driven basis of regulatory activity and the information-protection requirements of asset protection. (4) Coordination between states is a key success factor - because comparatively little federal authority exists over electricity and other energy infrastructure, the interstate nature of these energy grids defies centralized command-and-control governance. Patchwork responses are a risk when issues are addressed state by state. Coordination is the key to ensuring a consistent response to shared threats. In electricity policy, the National Council on Electricity Policy continued to make important strides forward. Coordinated electricity policy among States remains the best surrogate for an absent national electricity policy. In every area from energy efficiency to clean coal, State policies are driving the country's electricity policy, and regional responses to climate change, infrastructure planning, market operation, and new technology deployment depend on a forum for bringing the States together.

  5. Beyond grid security

    NASA Astrophysics Data System (ADS)

    Hoeft, B.; Epting, U.; Koenig, T.

    2008-07-01

    While many fields relevant to Grid security are already covered by existing working groups, their remit rarely goes beyond the scope of the Grid infrastructure itself. However, security issues pertaining to the internal set-up of compute centres have at least as much impact on Grid security. Thus, this talk briefly presents the EU ISSeG project (Integrated Site Security for Grids). In contrast to groups such as the OSCT (Operational Security Coordination Team) and the JSPG (Joint Security Policy Group), the purpose of ISSeG is to provide a holistic approach to security for Grid computer centres, from strategic considerations to an implementation plan and its deployment. The generalised methodology of Integrated Site Security (ISS) is based on the knowledge gained during its implementation at several sites as well as through security audits, and this will be briefly discussed. Several examples of ISS implementation tasks at the Forschungszentrum Karlsruhe will be presented, including segregation of the network for administration and maintenance and the implementation of Application Gateways. Furthermore, the web-based ISSeG training material will be introduced. This aims to offer ISS implementation guidance to other Grid installations in order to help avoid common pitfalls.

  6. GLIDE: a grid-based light-weight infrastructure for data-intensive environments

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris A.; Malek, Sam; Beckman, Nels; Mikic-Rakic, Marija; Medvidovic, Nenad; Chrichton, Daniel J.

    2005-01-01

    The promise of the grid is that it will enable public access and sharing of immense amounts of computational and data resources among dynamic coalitions of individuals and institutions. However, the current grid solutions make several limiting assumptions that curtail their widespread adoption. To address these limitations, we present GLIDE, a prototype light-weight, data-intensive middleware infrastructure that enables access to the robust data and computational power of the grid on DREAM platforms.

  7. Pervasive access to MRI bias artifact suppression service on a grid.

    PubMed

    Ardizzone, Edoardo; Gambino, Orazio; Genco, Alessandro; Pirrone, Roberto; Sorce, Salvatore

    2009-01-01

    Bias artifact corrupts MRIs in such a way that the image is afflicted by illumination variations. Some of the authors proposed the exponential entropy-driven homomorphic unsharp masking (E(2)D-HUM) algorithm, which corrects this artifact without any a priori hypothesis about the tissues or the MRI modality. Moreover, E(2)D-HUM is independent of the body part under examination and does not require any particular training task. People who want to use this algorithm, which is Matlab-based, have to configure their own computers in order to execute it. Furthermore, they have to be skilled in Matlab to exploit all the features of the algorithm. In this paper, we propose to make this algorithm available as a service on a grid infrastructure, so that people can use it almost anywhere, in a pervasive fashion, by means of a suitable user interface running on smartphones. The proposed solution allows physicians to use the E(2)D-HUM algorithm (or any other kind of algorithm, provided it is available as a service on the grid), which is executed remotely somewhere in the grid, with the results sent back to the user's device. This way, physicians do not need to know how to use Matlab to process their images. The pervasive service provision for medical image enhancement is presented, along with some experimental results obtained using smartphones connected to an existing Globus-based grid infrastructure.
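
The abstract does not give the E(2)D-HUM algorithm itself, but the classic homomorphic unsharp masking scheme it builds on can be sketched: the bias is modelled as a smooth multiplicative illumination field, estimated with a low-pass filter and divided out while preserving the global mean intensity. The box-blur radius and the synthetic test image below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def box_blur(img, radius):
    """Separable box filter (edge-padded), used here as a crude low-pass
    estimate of the smooth multiplicative bias field."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")

    def running_mean(a, axis):
        # sliding-window mean via cumulative sums
        c = np.cumsum(np.insert(a, 0, 0.0, axis=axis), axis=axis)
        n = a.shape[axis]
        upper = np.take(c, np.arange(k, n + 1), axis=axis)
        lower = np.take(c, np.arange(0, n + 1 - k), axis=axis)
        return (upper - lower) / k

    return running_mean(running_mean(padded, 0), 1)

def hum_correct(img, radius=16, eps=1e-6):
    """Homomorphic unsharp masking: divide out the low-frequency bias and
    rescale so the global mean intensity is preserved."""
    bias = box_blur(img, radius)
    return img * (img.mean() / (bias + eps))

# Synthetic demo: flat 'tissue' corrupted by a multiplicative intensity ramp.
truth = np.full((64, 64), 100.0)
ramp = np.tile(np.linspace(0.5, 1.5, 64), (64, 1))
observed = truth * ramp
corrected = hum_correct(observed)
print(corrected.std() < observed.std())  # illumination variation is reduced
```

In the interior of the image the blur of the ramp equals the ramp itself, so the correction is exact there; residual error is confined to the borders.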

  8. Electric Vehicle Charging and the California Power Sector: Evaluating the Effect of Location and Time on Greenhouse Gas Emissions

    NASA Astrophysics Data System (ADS)

    Sohnen, Julia Meagher

    This thesis explores the implications of the increased adoption of plug-in electric vehicles in California through their effect on the operation of the state's electric grid. The well-to-wheels emissions associated with driving an electric vehicle depend on the resource mix of the electricity grid used to charge the battery. We present a new least-cost dispatch model, EDGE-NET, for the California electricity grid, consisting of interconnected sub-regions that encompass the six largest state utilities, which can be used to evaluate the impact of growing electric vehicle demand on existing power grid infrastructure and energy resources. This model considers the spatial and temporal dynamics of energy demand and supply when determining the regional impacts of additional charging profiles on the current electricity network. Model simulation runs for one year show generation and transmission congestion to be reasonably similar to historical data. Model simulation results show that average emissions and system costs associated with electricity generation vary significantly by time of day, season, and location. Marginal costs and emissions also exhibit seasonal and diurnal differences, but show less spatial variation. Sensitivity analysis of demand shows that the relative changes to average emissions and system costs respond asymmetrically to increases and decreases in electricity demand. These results depend on the grid mix at the time and the marginal power plant type. In minimizing total system cost, the model will choose to dispatch the lowest-cost resource to meet additional vehicle demand, regardless of location, as long as transmission capacity is available. The location of electric vehicle charging has a small effect on the marginal greenhouse gas emissions associated with additional generation, due to electricity losses in the transmission grid. We use a geographically explicit charging assessment model for California to develop and compare the effects of two charging profiles. Comparison of these two basic scenarios points to savings in greenhouse gas emissions and operational costs from delayed charging of electric vehicles. Vehicle charging simulations confirm that plug-in electric vehicles alone are unlikely to require additional generation or transmission infrastructure. EDGE-NET was successfully benchmarked against historical data for the present grid, but additional work is required to expand the model for future scenario evaluation. We discuss how the model might be adapted for high penetrations of variable renewable energy resources and the use of grid storage. Renewable resources such as wind and solar in California vary significantly by time of day, season, and location. However, the combination of multiple resources from different geographic regions through transmission grid interconnection is expected to help mitigate the impacts of variability. EDGE-NET can evaluate the interaction of supply and demand through the existing transmission infrastructure and can identify critical network bottlenecks or areas for expansion. For this reason, EDGE-NET will be an important tool for evaluating energy policy scenarios.

  9. Bringing Federated Identity to Grid Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teheran, Jeny

    The Fermi National Accelerator Laboratory (FNAL) is facing the challenge of providing scientific data access and grid submission to scientific collaborations that span the globe but are hosted at FNAL. Users in these collaborations are currently required to register as FNAL users and obtain FNAL credentials to access grid resources for their scientific computations. These requirements burden researchers with managing additional authentication credentials and put additional load on FNAL for managing user identities. Our design integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and MyProxy with the FNAL grid submission system to provide secure access for users from diverse experiments and collaborations without requiring each user to have authentication credentials from FNAL. The design automates the handling of certificates so users do not need to manage them manually. Although the initial implementation is for FNAL's grid submission system, the design and the core of the implementation are general and could be applied to other distributed computing systems.

  10. A Distributed Middleware Architecture for Attack-Resilient Communications in Smart Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Brian S; Wu, Yifu; Wei, Jin

    Distributed Energy Resources (DERs) are being increasingly accepted as an excellent complement to traditional energy sources in smart grids. As most of these generators are geographically dispersed, dedicated communications investments for every generator are capital-cost prohibitive. Real-time distributed communications middleware, which supervises, organizes and schedules tremendous amounts of data traffic in smart grids with high penetrations of DERs, allows for the use of existing network infrastructure. In this paper, we propose a distributed attack-resilient middleware architecture that detects and mitigates congestion attacks by exploiting Quality of Experience (QoE) measures to complement the conventional Quality of Service (QoS) information. The simulation results illustrate the efficiency of our proposed communications middleware architecture.
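
A hypothetical sketch of the core idea, using application-level QoE to corroborate network-level QoS before declaring a congestion attack, might look as follows. The metric names and thresholds are invented assumptions; the abstract does not specify them.

```python
from dataclasses import dataclass

@dataclass
class LinkSample:
    latency_ms: float   # QoS: network round-trip latency
    loss_rate: float    # QoS: packet loss fraction, 0..1
    qoe_score: float    # QoE: application-level quality, 0 (bad) .. 5 (good)

def classify(sample, latency_lim=200.0, loss_lim=0.05, qoe_lim=2.5):
    """QoS degradation alone may be benign congestion; only when the
    application-level QoE also collapses is an attack suspected."""
    qos_degraded = sample.latency_ms > latency_lim or sample.loss_rate > loss_lim
    if qos_degraded and sample.qoe_score < qoe_lim:
        return "suspected-attack"   # both layers degraded: mitigate (throttle/reroute)
    if qos_degraded:
        return "congestion"         # network-only degradation
    return "normal"

print(classify(LinkSample(250.0, 0.10, 1.0)))  # suspected-attack
```

The design choice illustrated is that neither metric family is trusted alone: QoS flags the anomaly, QoE discriminates attack-like degradation from ordinary load.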

  11. A Distributed Middleware Architecture for Attack-Resilient Communications in Smart Grids: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Yifu; Wei, Jin; Hodge, Bri-Mathias

    Distributed energy resources (DERs) are being increasingly accepted as an excellent complement to traditional energy sources in smart grids. Because most of these generators are geographically dispersed, dedicated communications investments for every generator are capital-cost prohibitive. Real-time distributed communications middleware, which supervises, organizes, and schedules tremendous amounts of data traffic in smart grids with high penetrations of DERs, allows for the use of existing network infrastructure. In this paper, we propose a distributed attack-resilient middleware architecture that detects and mitigates congestion attacks by exploiting quality of experience measures to complement the conventional quality of service information. The simulation results illustrate the efficiency of our proposed communications middleware architecture.

  12. An infrastructure for the integration of geoscience instruments and sensors on the Grid

    NASA Astrophysics Data System (ADS)

    Pugliese, R.; Prica, M.; Kourousias, G.; Del Linz, A.; Curri, A.

    2009-04-01

    The Grid, as a computing paradigm, has long held the attention of both academia and industry [1]. The distributed and expandable nature of its general architecture results in scalability and more efficient utilisation of computing infrastructures. The scientific community, including that of the geosciences, often handles problems with very high requirements for data processing, transfer, and storage [2,3]. This has raised interest in Grid technologies, but these are often viewed solely as an access gateway to HPC. Suitable Grid infrastructures could provide the geoscience community with additional benefits, such as sharing, remote access, and control of scientific systems. These systems can be scientific instruments, sensors, robots, cameras, and any other device used in the geosciences. A practical, general, and feasible solution for Grid-enabling such devices requires non-intrusive extensions to core parts of the current Grid architecture. We propose an extended version of an architecture [4] that can serve as a solution to the problem: the Grid Instrument Element (IE) [5]. It is an addition to the existing core Grid parts, the Computing Element (CE) and the Storage Element (SE), which serve the purposes their names suggest. The IE and its related technologies have been developed in the EU project on the Deployment of Remote Instrumentation Infrastructure (DORII1). In DORII, partners from various scientific communities, including those of earthquake, environmental, and experimental science, have adopted the Instrument Element technology in order to integrate their devices into the Grid.
    The Oceanographic and coastal observation and modelling Mediterranean Ocean Observing Network (OGS2), a DORII partner, is in the process of deploying the above-mentioned Grid technologies on two types of observational modules: Argo profiling floats and a novel Autonomous Underwater Vehicle (AUV). In this paper, i) we define the need for the integration of instrumentation into the Grid, ii) we introduce the Instrument Element solution, iii) we demonstrate a suitable end-user web portal for accessing Grid resources, and iv) we describe, from a Grid-technology point of view, the process of integrating two advanced environmental monitoring devices into the Grid. References [1] M. Surridge, S. Taylor, D. De Roure, and E. Zaluska, "Experiences with GRIA—Industrial Applications on a Web Services Grid," e-Science and Grid Computing, First International Conference on e-Science and Grid Computing, 2005, pp. 98-105. [2] A. Chervenak, I. Foster, C. Kesselman, C. Salisbury, and S. Tuecke, "The data grid: Towards an architecture for the distributed management and analysis of large scientific datasets," Journal of Network and Computer Applications, vol. 23, 2000, pp. 187-200. [3] B. Allcock, J. Bester, J. Bresnahan, A.L. Chervenak, I. Foster, C. Kesselman, S. Meder, V. Nefedova, D. Quesnel, and S. Tuecke, "Data management and transfer in high-performance computational grid environments," Parallel Computing, vol. 28, 2002, pp. 749-771. [4] E. Frizziero, M. Gulmini, F. Lelli, G. Maron, A. Oh, S. Orlando, A. Petrucci, S. Squizzato, and S. Traldi, "Instrument Element: A New Grid component that Enables the Control of Remote Instrumentation," Proceedings of the Sixth IEEE International Symposium on Cluster Computing and the Grid (CCGRID'06)-Volume 00, IEEE Computer Society Washington, DC, USA, 2006. [5] R. Ranon, L. De Marco, A. Senerchia, S. Gabrielli, L. Chittaro, R. Pugliese, L. Del Cano, F. Asnicar, and M.
Prica, "A Web-based Tool for Collaborative Access to Scientific Instruments in Cyberinfrastructures." 1 The DORII project is supported by the European Commission within the 7th Framework Programme (FP7/2007-2013) under grant agreement no. RI-213110. URL: http://www.dorii.eu 2 Istituto Nazionale di Oceanografia e di Geofisica Sperimentale. URL: http://www.ogs.trieste.it

  13. Parallel high-performance grid computing: capabilities and opportunities of a novel demanding service and business class allowing highest resource efficiency.

    PubMed

    Kepper, Nick; Ettig, Ramona; Dickmann, Frank; Stehr, Rene; Grosveld, Frank G; Wedemann, Gero; Knoch, Tobias A

    2010-01-01

    Especially in the life-science and health-care sectors, huge IT requirements arise from the large and complex systems to be analysed and simulated. Grid infrastructures play a rapidly increasing role here for research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for huge number-crunching of trivially parallelizable problems, parallel high-performance computing is increasingly required. Here, we show, for the prime example of molecular dynamics simulations, how the presence of large grid clusters with very fast network interconnects within grid infrastructures now allows efficient parallel high-performance grid computing, and thus combines the benefits of dedicated supercomputing centres and grid infrastructures. The demands of this service class are the highest, since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects are all needed in different combinations and must be considered in a highly dedicated manner to reach the highest performance efficiency. Beyond this, advanced and dedicated i) interaction with users, ii) management of jobs, iii) accounting, and iv) billing not only combine classic with parallel high-performance grid usage, but, more importantly, can also increase the efficiency of IT resource providers. Consequently, the mere "yes-we-can" becomes a real opportunity for areas such as the life-science and health-care sectors, as well as for grid infrastructures, by reaching higher levels of resource efficiency.

  14. Mediated definite delegation - Certified Grid jobs in ALICE and beyond

    NASA Astrophysics Data System (ADS)

    Schreiner, Steffen; Grigoras, Costin; Litmaath, Maarten; Betev, Latchezar; Buchmann, Johannes

    2012-12-01

    Grid computing infrastructures need to provide traceability and accounting of their users’ activity and protection against misuse and privilege escalation, where the delegation of privileges in the course of a job submission is a key concern. This work describes an improved handling of Multi-user Grid Jobs in the ALICE Grid Services. A security analysis of the ALICE Grid job model is presented with derived security objectives, followed by a discussion of existing approaches of unrestricted delegation based on X.509 proxy certificates and the Grid middleware gLExec. Unrestricted delegation has severe security consequences and limitations, most importantly allowing for identity theft and forgery of jobs and data. These limitations are discussed and formulated, both in general and with respect to an adoption in line with Multi-user Grid Jobs. A new general model of mediated definite delegation is developed, allowing a broker to dynamically process and assign Grid jobs to agents while providing strong accountability and long-term traceability. A prototype implementation allowing for fully certified Grid jobs is presented as well as a potential interaction with gLExec. The achieved improvements regarding system security, malicious job exploitation, identity protection, and accountability are emphasized, including a discussion of non-repudiation in the face of malicious Grid jobs.
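
The contrast between unrestricted proxy delegation and mediated definite delegation can be modelled in miniature: the broker signs a definite statement binding one job to one agent, so a resource can verify the assignment, and a stolen ticket is useless to any other agent. HMAC stands in for the X.509 signatures used in practice; all names and fields below are illustrative, not the ALICE implementation.

```python
import hashlib
import hmac
import json

BROKER_KEY = b"broker-secret"  # stands in for the broker's private key

def assign_job(job_id, payload, agent):
    """Broker-side: sign a definite statement 'job J is assigned to agent A'."""
    body = {"job": job_id, "payload": payload, "agent": agent}
    mac = hmac.new(BROKER_KEY, json.dumps(body, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {**body, "sig": mac}

def verify(assignment, presenting_agent):
    """Resource-side: check the signature AND that the presenter is the
    assigned agent - the delegation is definite, not transferable."""
    body = {k: assignment[k] for k in ("job", "payload", "agent")}
    mac = hmac.new(BROKER_KEY, json.dumps(body, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return (hmac.compare_digest(mac, assignment["sig"])
            and assignment["agent"] == presenting_agent)

ticket = assign_job("job-42", "run analysis", "agent-7")
print(verify(ticket, "agent-7"))   # True: the assigned agent
print(verify(ticket, "agent-9"))   # False: identity theft fails
```

With an unrestricted proxy, by contrast, anyone holding the credential could submit arbitrary work in the user's name; the signed binding is what provides the accountability and traceability the abstract emphasises.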

  15. DICOMGrid: a middleware to integrate PACS and EELA-2 grid infrastructure

    NASA Astrophysics Data System (ADS)

    Moreno, Ramon A.; de Sá Rebelo, Marina; Gutierrez, Marco A.

    2010-03-01

    Medical images provide a wealth of information for physicians, but the huge amount of data produced by medical imaging equipment in a modern health institution is not yet explored to its full potential. Nowadays, medical images are used in hospitals mostly as part of routine activities, while their intrinsic value for research is underestimated. Medical images can be used for the development of new visualization techniques, new algorithms for patient care, and new image processing techniques. These research areas usually require the use of huge volumes of data to obtain significant results, along with enormous computing capabilities. Such qualities are characteristic of grid computing systems such as the EELA-2 infrastructure. Grid technologies allow the sharing of data on a large scale in a safe and integrated environment and offer high computing capabilities. In this paper we describe DICOMGrid, a middleware to store and retrieve medical images, properly anonymized, that can be used by researchers to test new processing techniques, using the computational power offered by grid technology. A prototype of DICOMGrid is under evaluation and permits the submission of jobs into the EELA-2 grid infrastructure while offering a simple interface that requires minimal understanding of the grid operation.

  16. Infrastructure for Integration of Legacy Electrical Equipment into a Smart-Grid Using Wireless Sensor Networks.

    PubMed

    de Araújo, Paulo Régis C; Filho, Raimir Holanda; Rodrigues, Joel J P C; Oliveira, João P C M; Braga, Stephanie A

    2018-04-24

    At present, the standardisation of electrical equipment communications is on the rise. In particular, manufacturers are releasing equipment for the smart grid endowed with communication protocols such as DNP3, IEC 61850, and MODBUS. However, there is legacy equipment operating in the electricity distribution network that cannot communicate using any of these protocols. Thus, we propose an infrastructure to allow the integration of legacy electrical equipment into smart grids by using wireless sensor networks (WSNs). In this infrastructure, each legacy electrical device is connected to a sensor node, and the sink node runs a middleware that enables the integration of this device into a smart grid based on suitable communication protocols. This middleware performs tasks such as the translation of messages between the power substation control centre (PSCC) and electrical equipment in the smart grid. Moreover, the infrastructure satisfies certain requirements for communication between the electrical equipment and the PSCC, such as enhanced security, short response time, and automatic configuration. The paper's contributions include a solution that enables electrical companies to integrate their legacy equipment into smart-grid networks relying on any of the above-mentioned communication protocols. This integration will reduce the costs related to the modernisation of power substations.
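
The message-translation role of the sink-node middleware can be illustrated with a toy register map: a legacy device exposes raw register values, and the middleware converts them into named, unit-carrying measurements for the PSCC. The registers, scale factors, and record format below are invented for illustration; they do not reflect the paper's actual protocol bindings.

```python
# Hypothetical register map for a legacy device: register -> (name, unit, scale).
REGISTER_MAP = {
    0x0000: ("voltage", "V", 0.1),
    0x0001: ("current", "A", 0.01),
    0x0002: ("frequency", "Hz", 0.01),
}

def translate(raw_registers):
    """Convert raw register reads into named measurement records for the PSCC."""
    records = []
    for reg, value in sorted(raw_registers.items()):
        name, unit, scale = REGISTER_MAP[reg]
        records.append({"measurement": name,
                        "value": round(value * scale, 3),
                        "unit": unit})
    return records

print(translate({0x0000: 2305, 0x0002: 5998}))
```

The same mapping would run in reverse for commands sent from the PSCC down to the legacy device, which is the two-way translation the middleware is responsible for.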

  17. Infrastructure for Integration of Legacy Electrical Equipment into a Smart-Grid Using Wireless Sensor Networks

    PubMed Central

    de Araújo, Paulo Régis C.; Filho, Raimir Holanda; Oliveira, João P. C. M.; Braga, Stephanie A.

    2018-01-01

    At present, the standardisation of electrical equipment communications is on the rise. In particular, manufacturers are releasing equipment for the smart grid endowed with communication protocols such as DNP3, IEC 61850, and MODBUS. However, there is legacy equipment operating in the electricity distribution network that cannot communicate using any of these protocols. Thus, we propose an infrastructure to allow the integration of legacy electrical equipment into smart grids by using wireless sensor networks (WSNs). In this infrastructure, each legacy electrical device is connected to a sensor node, and the sink node runs a middleware that enables the integration of this device into a smart grid based on suitable communication protocols. This middleware performs tasks such as the translation of messages between the power substation control centre (PSCC) and electrical equipment in the smart grid. Moreover, the infrastructure satisfies certain requirements for communication between the electrical equipment and the PSCC, such as enhanced security, short response time, and automatic configuration. The paper's contributions include a solution that enables electrical companies to integrate their legacy equipment into smart-grid networks relying on any of the above-mentioned communication protocols. This integration will reduce the costs related to the modernisation of power substations. PMID:29695099

  18. Federated ontology-based queries over cancer data

    PubMed Central

    2012-01-01

    Background Personalised medicine provides patients with treatments that are specific to their genetic profiles. It requires efficient data sharing of disparate data types across a variety of scientific disciplines, such as molecular biology, pathology, radiology and clinical practice. Personalised medicine aims to offer the safest and most effective therapeutic strategy based on the gene variations of each subject. In particular, this is valid in oncology, where knowledge about genetic mutations has already led to new therapies. Current molecular biology techniques (microarrays, proteomics, epigenetic technology and improved DNA sequencing technology) enable better characterisation of cancer tumours. The vast amounts of data, however, coupled with the use of different terms - or semantic heterogeneity - in each discipline makes the retrieval and integration of information difficult. Results Existing software infrastructures for data-sharing in the cancer domain, such as caGrid, support access to distributed information. caGrid follows a service-oriented model-driven architecture. Each data source in caGrid is associated with metadata at increasing levels of abstraction, including syntactic, structural, reference and domain metadata. The domain metadata consists of ontology-based annotations associated with the structural information of each data source. However, caGrid's current querying functionality is given at the structural metadata level, without capitalising on the ontology-based annotations. This paper presents the design of and theoretical foundations for distributed ontology-based queries over cancer research data. Concept-based queries are reformulated to the target query language, where join conditions between multiple data sources are found by exploiting the semantic annotations. The system has been implemented, as a proof of concept, over the caGrid infrastructure. The approach is applicable to other model-driven architectures. 
A graphical user interface has been developed, supporting ontology-based queries over caGrid data sources. An extensive evaluation of the query reformulation technique is included. Conclusions To support personalised medicine in oncology, it is crucial to retrieve and integrate molecular, pathology, radiology and clinical data in an efficient manner. The semantic heterogeneity of the data makes this a challenging task. Ontologies provide a formal framework to support querying and integration. This paper provides an ontology-based solution for querying distributed databases over service-oriented, model-driven infrastructures. PMID:22373043
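
The reformulation idea, finding join conditions between data sources by matching their ontology annotations, can be sketched in a few lines. The schema, concept identifiers, and query shape below are invented; caGrid's actual query language and metadata model are considerably richer.

```python
# Hypothetical semantic annotations: (table, column) -> ontology concept.
ANNOTATIONS = {
    ("pathology.specimen", "patient_id"): "NCIt:Patient",
    ("radiology.scan", "subject"): "NCIt:Patient",
    ("pathology.specimen", "diagnosis"): "NCIt:Diagnosis",
}

def reformulate(select_concepts, tables):
    """Rewrite a concept-based query as SQL: select the columns annotated
    with the requested concepts, and join tables wherever two columns
    carry the same concept annotation."""
    cols = [f"{t}.{c}" for (t, c), concept in ANNOTATIONS.items()
            if t in tables and concept in select_concepts]
    joins = []
    items = list(ANNOTATIONS.items())
    for i, ((t1, c1), k1) in enumerate(items):
        for (t2, c2), k2 in items[i + 1:]:
            if k1 == k2 and t1 != t2 and t1 in tables and t2 in tables:
                joins.append(f"{t1}.{c1} = {t2}.{c2}")
    return (f"SELECT {', '.join(sorted(cols))} FROM {', '.join(tables)}"
            + (f" WHERE {' AND '.join(joins)}" if joins else ""))

print(reformulate({"NCIt:Diagnosis"}, ["pathology.specimen", "radiology.scan"]))
```

The point being illustrated is that the user states *what* concept is wanted; the structural details of the join are recovered from the annotations rather than written by hand.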

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Youssef, Tarek A.; Elsayed, Ahmed T.; Mohammed, Osama A.

    This study presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurements and control network. The advantages of utilizing a data-centric over a message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves the communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamic participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programming interface for the testbed infrastructure are developed in order to facilitate interoperability and remote access to the testbed. This interface allows experiments to be controlled, monitored, and performed remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).
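    The data-centric pattern the abstract contrasts with message-centric communication can be sketched in a few lines. This is not the DDS API; it is a minimal in-process stand-in showing why publishers and subscribers that couple only to named topics, never to each other, avoid a single point of failure and let late-joining nodes discover the current data state.

```python
# Minimal in-process stand-in for a data-centric bus in the spirit of DDS.
# Publishers and subscribers couple to named topics (the data), not to each
# other, so nodes can join or fail independently of any central broker.
from collections import defaultdict

class DataBus:
    def __init__(self):
        self._subs = defaultdict(list)   # topic -> subscriber callbacks
        self._last = {}                  # topic -> latest sample (data state)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)
        if topic in self._last:          # late joiners still see current state
            callback(self._last[topic])

    def publish(self, topic, sample):
        self._last[topic] = sample
        for cb in self._subs[topic]:
            cb(sample)

bus = DataBus()
readings = []
bus.publish("grid/feeder1/voltage", 119.8)            # published before any subscriber
bus.subscribe("grid/feeder1/voltage", readings.append)
bus.publish("grid/feeder1/voltage", 120.4)
print(readings)   # [119.8, 120.4]
```

    Real DDS adds QoS policies, discovery over the network, and typed topics; the sketch only illustrates the decoupling that the abstract credits for reliability.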

  20. Economic models for management of resources in peer-to-peer and grid computing

    NASA Astrophysics Data System (ADS)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

    The accelerated development in Peer-to-Peer (P2P) and Grid computing has positioned them as promising next generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments are complex undertakings. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real world market, there exist various economic models for setting the price for goods based on supply-and-demand and their value to the user. They include commodity market, posted price, tenders and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the necessary infrastructure to realize them. In addition to normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed that contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
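    Two of the economic models named in the abstract, posted price and auctions, are easy to sketch for a single Grid resource slot. The bidder names and prices below are invented for illustration; Nimrod/G's actual deadline and cost-based scheduling is more elaborate.

```python
# Toy illustration of two pricing models for allocating one resource slot.

def posted_price(offer, bids):
    """Posted price: the first consumer willing to pay the posted price wins."""
    for user, bid in bids:
        if bid >= offer:
            return user, offer
    return None, None

def second_price_auction(bids):
    """Sealed-bid (Vickrey) auction: highest bidder wins, pays second-highest bid."""
    ranked = sorted(bids, key=lambda ub: ub[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

bids = [("alice", 12.0), ("bob", 9.5), ("carol", 14.0)]
print(posted_price(10.0, bids))        # ('alice', 10.0)
print(second_price_auction(bids))      # ('carol', 12.0)
```

    The contrast is the one the paper draws: posted price is simple and predictable for the provider, while an auction discovers the resource's value from demand.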

  1. Grid computing technology for hydrological applications

    NASA Astrophysics Data System (ADS)

    Lecca, G.; Petitdidier, M.; Hluchy, L.; Ivanovic, M.; Kussul, N.; Ray, N.; Thieron, V.

    2011-06-01

    Advances in e-Infrastructure promise to revolutionize sensing systems and the way in which data are collected and assimilated, and complex water systems are simulated and visualized. According to the EU Infrastructure 2010 work-programme, data and compute infrastructures and their underlying technologies, either oriented to tackle scientific challenges or complex problem solving in engineering, are expected to converge into so-called knowledge infrastructures, leading to more effective research, education and innovation in the next decade and beyond. Grid technology is recognized as a fundamental component of e-Infrastructures. Nevertheless, this emerging paradigm highlights several topics, including data management, algorithm optimization, security, performance (speed, throughput, bandwidth, etc.), and scientific cooperation and collaboration issues, that require further examination in order to fully exploit it and to better inform future research policies. The paper illustrates the results of six different surface and subsurface hydrology applications that have been deployed on the Grid. All the applications aim to answer strong requirements from civil society at large, relating to natural and anthropogenic risks. Grid technology has been successfully tested to improve flood prediction, groundwater resources management and Black Sea hydrological survey, by providing large computing resources. It is also shown that Grid technology facilitates e-cooperation among partners by means of services for authentication and authorization, seamless access to distributed data sources, data protection and access rights, and standardization.

  2. SEE-GRID eInfrastructure for Regional eScience

    NASA Astrophysics Data System (ADS)

    Prnjat, Ognjen; Balaz, Antun; Vudragovic, Dusan; Liabotis, Ioannis; Sener, Cevat; Marovic, Branko; Kozlovszky, Miklos; Neagu, Gabriel

    In the past 6 years, a number of targeted initiatives, funded by the European Commission via its information society and RTD programmes and Greek infrastructure development actions, have articulated successful regional development actions in South East Europe that can serve as a role model for other international developments. The SEEREN (South-East European Research and Education Networking initiative) project, through its two phases, established the SEE segment of the pan-European GÉANT network and successfully connected the research and scientific communities in the region. Currently, the SEE-LIGHT project is working towards establishing a dark-fiber backbone that will interconnect most national Research and Education networks in the region. On the distributed computing and storage provisioning (i.e. Grid) plane, the SEE-GRID (South-East European GRID e-Infrastructure Development) project, similarly through its two phases, has established a strong human network in the area of scientific computing, has set up a powerful regional Grid infrastructure, and has attracted a number of applications from different fields and countries throughout South-East Europe. The current SEE-GRID-SCI project, ending in April 2010, empowers the regional user communities from fields of meteorology, seismology and environmental protection in common use and sharing of the regional e-Infrastructure. Technical initiatives currently in formulation focus on a set of coordinated actions in the area of HPC and on application fields making use of HPC. Finally, the current SEERA-EI project brings together policy makers - programme managers from 10 countries in the region. The project aims to establish a communication platform between programme managers, pave the way towards a common e-Infrastructure strategy and vision, and implement concrete actions for common funding of electronic infrastructures on the regional level.
The regional vision on establishing an e-Infrastructure compatible with European developments, and empowering the scientists in the region in equal participation in the use of pan-European infrastructures, is materializing through the above initiatives. This model has a number of concrete operational and organizational guidelines which can be adapted to help e-Infrastructure developments in other world regions. In this paper we review the most important developments and contributions by the SEE-GRID-SCI project.

  3. Approach to sustainable e-Infrastructures - The case of the Latin American Grid

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Diacovo, Ramon; Brasileiro, Francisco; Carvalho, Diego; Dutra, Inês; Faerman, Marcio; Gavillet, Philippe; Hoeger, Herbert; Lopez Pourailly, Maria Jose; Marechal, Bernard; Garcia, Rafael Mayo; Neumann Ciuffo, Leandro; Ramos Pollan, Paul; Scardaci, Diego; Stanton, Michael

    2010-05-01

    The EELA (E-Infrastructure shared between Europe and Latin America) and EELA-2 (E-science grid facility for Europe and Latin America) projects, co-funded by the European Commission under FP6 and FP7, respectively, have been successful in building a high capacity, production-quality, scalable Grid Facility for a wide spectrum of applications (e.g. Earth & Life Sciences, High energy physics, etc.) from several European and Latin American User Communities. This paper presents the 4-year experience of EELA and EELA-2 in: • Providing each Member Institution the unique opportunity to benefit from a huge distributed computing platform for its research activities, in particular through initiatives such as OurGrid, which proposes so-called Opportunistic Grid Computing, well adapted to small and medium Research Laboratories such as most of those of Latin America and Africa; • Developing a realistic strategy to ensure the long-term continuity of the e-Infrastructure in the Latin American continent, beyond the term of the EELA-2 project, in association with CLARA and collaborating with EGI. Previous interactions between EELA and African Grid members at events such as the IST Africa'07, 08 and 09, the International Conference on Open Access'08 and EuroAfriCa-ICT'08, to which EELA and EELA-2 contributed, have shown that the e-Infrastructure situation in Africa compares well with the Latin American one. This means that African Grids are likely to face the same problems that EELA and EELA-2 experienced, especially in getting the necessary user and decision-maker support to create NGIs and, later, a possible continent-wide African Grid Initiative (AGI). The hope is that the EELA-2 endeavour towards sustainability as described in this presentation could help the progress of African Grids.

  4. Using ESB and BPEL for Evolving Healthcare Systems Towards Pervasive, Grid-Enabled SOA

    NASA Astrophysics Data System (ADS)

    Koufi, V.; Malamateniou, F.; Papakonstantinou, D.; Vassilacopoulos, G.

    Healthcare organizations often face the challenge of integrating diverse and geographically disparate information technology systems to respond to changing requirements and to exploit the capabilities of modern technologies. Hence, systems evolution, through modification and extension of the existing information technology infrastructure, becomes a necessity. Moreover, the availability of these systems at the point of care when needed is a vital issue for the quality of healthcare provided to patients. This chapter takes a process perspective of healthcare delivery within and across organizational boundaries and presents a disciplined approach for evolving healthcare systems towards a pervasive, grid-enabled service-oriented architecture, using enterprise service bus (ESB) middleware technology for resolving integration issues, the Business Process Execution Language (BPEL) for supporting collaboration requirements, and grid middleware technology for both addressing common SOA scalability requirements and complementing existing system functionality. In such an environment, appropriate security mechanisms must ensure authorized access to integrated healthcare services and data. To this end, a security framework addressing security aspects such as authorization and access control is also presented.

  5. Distributed Energy Systems Integration and Demand Optimization for Autonomous Operations and Electric Grid Transactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghatikar, Girish; Mashayekh, Salman; Stadler, Michael

    Distributed power systems in the U.S. and globally are evolving to provide reliable and clean energy to consumers. In California, existing regulations require significant increases in renewable generation, as well as identification of customer-side distributed energy resources (DER) controls, communication technologies, and standards for interconnection with the electric grid systems. As DER deployment expands, customer-side DER control and optimization will be critical for system flexibility and demand response (DR) participation, which improves the economic viability of DER systems. Current DER systems integration and communication challenges include leveraging the existing DER and DR technology and systems infrastructure, and enabling optimized cost, energy and carbon choices for customers to deploy interoperable grid transactions and renewable energy systems at scale. Our paper presents a cost-effective solution to these challenges by exploring communication technologies and information models for DER system integration and interoperability. This system uses open standards and optimization models for resource planning based on dynamic-pricing notifications and autonomous operations within various domains of the smart grid energy system. It identifies architectures and customer engagement strategies in dynamic DR pricing transactions to generate feedback information models for load flexibility, load profiles, and participation schedules. The models are tested at a real site in California, Fort Hunter Liggett (FHL). Furthermore, our results for FHL show that the model fits within the existing and new DR business models and networked systems for transactive energy concepts. Integrated energy systems, communication networks, and modeling tools that coordinate supply-side networks and DER will enable electric grid system operators to use DER for grid transactions in an integrated system.
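    The kind of price-responsive resource planning described above can be sketched as a greedy allocation of a flexible load against a dynamic-pricing signal. The prices, load size, and power cap below are invented for illustration; the optimization models actually used at FHL are not detailed in the abstract.

```python
# Hedged sketch of price-responsive load scheduling: shift a flexible load
# (e.g. EV charging) to the cheapest hours of a dynamic-pricing signal.

def schedule_flexible_load(prices, kwh_needed, kw_max):
    """Greedily allocate energy to the cheapest hours, capped at kw_max per hour."""
    schedule = [0.0] * len(prices)
    for hour in sorted(range(len(prices)), key=prices.__getitem__):
        draw = min(kw_max, kwh_needed)
        schedule[hour] = draw
        kwh_needed -= draw
        if kwh_needed <= 0:
            break
    return schedule

prices = [0.30, 0.12, 0.10, 0.25]      # $/kWh for four hours (invented)
plan = schedule_flexible_load(prices, kwh_needed=6.0, kw_max=4.0)
print(plan)                             # [0.0, 2.0, 4.0, 0.0]
```

    A real DR optimization would also respect deadlines, battery constraints, and participation schedules, but the feedback loop is the same: price signal in, load profile out.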

  6. The Anatomy of a Grid portal

    NASA Astrophysics Data System (ADS)

    Licari, Daniele; Calzolari, Federico

    2011-12-01

    In this paper we introduce a new approach to Grid portals, with reference to our implementation. L-GRID is a light portal to access the EGEE/EGI Grid infrastructure via Web, allowing users to submit their jobs from a common Web browser in a few minutes, without any knowledge about the Grid infrastructure. It provides control over the complete lifecycle of a Grid Job, from its submission and status monitoring, to the output retrieval. The system, implemented as a client-server architecture, is based on the Globus Grid middleware. The client-side application is based on a Java applet; the server relies on a Globus User Interface. There is no need for user registration on the server side; the user needs only a personal X.509 certificate. The system is user-friendly, secure (it uses the SSL protocol and mechanisms for dynamic delegation and identity creation in public key infrastructures), highly customizable, open source, and easy to install. The X.509 personal certificate never leaves the local machine. This reduces the time spent on job submission, while granting higher efficiency and a better security level in proxy delegation and management.

  7. Stability assessment of a multi-port power electronic interface for hybrid micro-grid applications

    NASA Astrophysics Data System (ADS)

    Shamsi, Pourya

    Migration to an industrial society increases the demand for electrical energy. Meanwhile, societal pressure to preserve the environment and reduce pollution drives the search for cleaner energy sources. Therefore, there has been a growth in distributed generation from renewable sources in the past decade. Existing regulations and power system coordination do not allow for massive integration of distributed generation throughout the grid. Moreover, the current infrastructures are not designed for interfacing distributed and deregulated generation. In order to remedy this problem, a hybrid micro-grid based on nano-grids is introduced. This system consists of a reliable micro-grid structure that provides a smooth transition from the current distribution networks to smart micro-grid systems. Multi-port power electronic interfaces are introduced to manage the local generation, storage, and consumption. Afterwards, a model for this micro-grid is derived. Using this model, the stability of the system under a variety of source- and load-induced disturbances is studied. Moreover, a pole-zero study of the micro-grid is performed under various loading conditions. An experimental setup of this micro-grid is developed, and the validity of the model in emulating the dynamic behavior of the system is verified. This study provides a theory for a novel hybrid micro-grid as well as models for stability assessment of the proposed micro-grid.

  8. Microgrid Design Toolkit (MDT) User Guide Software v1.2.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eddy, John P.

    2017-08-01

    The Microgrid Design Toolkit (MDT) supports decision analysis for new ("greenfield") microgrid designs as well as microgrids with existing infrastructure. The current version of MDT includes two main capabilities. The first capability, the Microgrid Sizing Capability (MSC), is used to determine the size and composition of a new, grid connected microgrid in the early stages of the design process. MSC is focused on developing a microgrid that is economically viable when connected to the grid. The second capability is focused on designing a microgrid for operation in islanded mode. This second capability relies on two models: the Technology Management Optimization (TMO) model and the Performance Reliability Model (PRM).

  9. GreenView and GreenLand Applications Development on SEE-GRID Infrastructure

    NASA Astrophysics Data System (ADS)

    Mihon, Danut; Bacu, Victor; Gorgan, Dorian; Mészáros, Róbert; Gelybó, Györgyi; Stefanut, Teodor

    2010-05-01

    The GreenView and GreenLand applications [1] have been developed through the SEE-GRID-SCI (SEE-GRID eInfrastructure for regional eScience) FP7 project co-funded by the European Commission [2]. The development of environment applications is a challenge for Grid technologies and software development methodologies. This presentation exemplifies the development of the GreenView and GreenLand applications over the SEE-GRID infrastructure by the Grid Application Development Methodology [3]. Today's environmental applications are used in various domains of Earth Science such as meteorology, ground and atmospheric pollution, ground metal detection or weather prediction. These applications run on satellite images (e.g. Landsat, MERIS, MODIS, etc.) and the accuracy of output results depends mostly on the quality of these images. The main challenge for such environmental applications is the need for computation power and storage capacity (some images are almost 1 GB in size) in order to process such a large data volume. Most applications requiring high computation resources have been migrated onto the Grid infrastructure. This infrastructure offers computing power by running atomic application components on different Grid nodes in sequential or parallel mode. The middleware used between the Grid infrastructure and client applications is ESIP (Environment Oriented Satellite Image Processing Platform), which is based on the gProcess platform [4]. In its current form, gProcess is used for launching new processes on the Grid nodes, but also for monitoring the execution status of these processes. This presentation highlights two case studies of Grid based environmental applications, GreenView and GreenLand [5].
GreenView is used in correlation with MODIS (Moderate Resolution Imaging Spectroradiometer) satellite images and meteorological datasets, in order to produce pseudo-colored temperature and vegetation maps for different geographical CEE (Central Eastern Europe) regions. On the other hand, GreenLand is used for generating maps for different vegetation indexes (e.g. NDVI, EVI, SAVI, GEMI) based on Landsat satellite images. Both applications use interpolation and random value generation algorithms, as well as specific formulas for computing vegetation index values. The GreenView and GreenLand applications have been tested over the SEE-GRID infrastructure and their performance evaluation is reported in [6]. The improvement of the execution time (obtained through a better parallelization of jobs), the extension of geographical areas to other parts of the Earth, and new user interaction techniques on spatial data and large sets of satellite images are the goals of future work. References [1] GreenView application on Wiki, http://wiki.egee-see.org/index.php/GreenView [2] SEE-GRID-SCI Project, http://www.see-grid-sci.eu/ [3] Gorgan D., Stefanut T., Bâcu V., Mihon D., Grid based Environment Application Development Methodology, SCICOM, 7th International Conference on "Large-Scale Scientific Computations", 4-8 June, 2009, Sozopol, Bulgaria, (To be published by Springer), (2009). [4] Gorgan D., Bacu V., Stefanut T., Rodila D., Mihon D., Grid based Satellite Image Processing Platform for Earth Observation Applications Development. IDAACS'2009 - IEEE Fifth International Workshop on "Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications", 21-23 September, Cosenza, Italy, IEEE Published in Computer Press, 247-252 (2009). [5] Mihon D., Bacu V., Stefanut T., Gorgan D., "Grid Based Environment Application Development - GreenView Application".
ICCP2009 - IEEE 5th International Conference on Intelligent Computer Communication and Processing, 27 Aug, 2009 Cluj-Napoca. Published by IEEE Computer Press, pp. 275-282 (2009). [6] Danut Mihon, Victor Bacu, Dorian Gorgan, Róbert Mészáros, Györgyi Gelybó, Teodor Stefanut, Practical Considerations on the GreenView Application Development and Execution over SEE-GRID. SEE-GRID-SCI User Forum, 9-10 Dec 2009, Bogazici University, Istanbul, Turkey, ISBN: 978-975-403-510-0, pp. 167-175 (2009).
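    The vegetation indexes mentioned above (NDVI, EVI, SAVI, GEMI) are computed by band arithmetic on satellite images. NDVI, the simplest, uses only the red and near-infrared reflectances: NDVI = (NIR - Red) / (NIR + Red), ranging over [-1, 1]. The sample reflectance values below are invented for illustration.

```python
# NDVI (Normalized Difference Vegetation Index) from per-pixel reflectances.
# Healthy vegetation reflects strongly in near-infrared and absorbs red,
# so dense vegetation pushes the index towards 1.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

print(round(ndvi(nir=0.45, red=0.09), 3))   # 0.667 -> dense vegetation
print(round(ndvi(nir=0.22, red=0.18), 3))   # 0.1   -> sparse cover / bare ground
```

    On the Grid, the same per-pixel formula is simply applied across tiles of a Landsat scene in parallel, which is why these applications parallelize so naturally.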

  10. The Evolution of the Internet Community and the"Yet-to-Evolve" Smart Grid Community: Parallels and Lessons-to-be-Learned

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McParland, Charles

    The Smart Grid envisions a transformed US power distribution grid that enables communicating devices, under human supervision, to moderate loads and increase overall system stability and security. This vision explicitly promotes increased participation from a community that, in the past, has had little involvement in power grid operations - the consumer. The potential size of this new community and its members' extensive experience with the public Internet prompts an analysis of the evolution and current state of the Internet as a predictor for best practices in the architectural design of certain portions of the Smart Grid network. Although still evolving, the vision of the Smart Grid is that of a community of communicating and cooperating energy related devices that can be directed to route power and modulate loads in pursuit of an integrated, efficient and secure electrical power grid. The remaking of the present power grid into the Smart Grid is considered to be as fundamentally transformative as previous developments such as modern computing technology and high bandwidth data communications. However, unlike these earlier developments, which relied on the discovery of critical new technologies (e.g. the transistor or optical fiber transmission lines), the technologies required for the Smart Grid currently exist and, in many cases, are already widely deployed. In contrast to other examples of technical transformations, the path (and success) of the Smart Grid will be determined not by its technology, but by its system architecture. Fortunately, we have a recent example of a transformative force of similar scope that shares a fundamental dependence on our existing communications infrastructure - namely, the Internet. We will explore several ways in which the scale of the Internet and expectations of its users have shaped the present Internet environment.
As the presence of consumers within the Smart Grid increases, some experiences from the early growth of the Internet are expected to be informative and pertinent.

  11. Cyber-physical security of Wide-Area Monitoring, Protection and Control in a smart grid environment

    PubMed Central

    Ashok, Aditya; Hahn, Adam; Govindarasu, Manimaran

    2013-01-01

    Smart grid initiatives will produce a grid that is increasingly dependent on its cyber infrastructure in order to support the numerous power applications necessary to provide improved grid monitoring and control capabilities. However, recent findings documented in government reports and other literature indicate a growing threat of cyber-based attacks, in both number and sophistication, targeting the nation’s electric grid and other critical infrastructures. Specifically, this paper discusses cyber-physical security of Wide-Area Monitoring, Protection and Control (WAMPAC) from a coordinated cyber attack perspective and introduces a game-theoretic approach to address the issue. Finally, the paper briefly describes how cyber-physical testbeds can be used to evaluate the security research and perform realistic attack-defense studies for smart grid type environments. PMID:25685516

  12. Cyber-physical security of Wide-Area Monitoring, Protection and Control in a smart grid environment.

    PubMed

    Ashok, Aditya; Hahn, Adam; Govindarasu, Manimaran

    2014-07-01

    Smart grid initiatives will produce a grid that is increasingly dependent on its cyber infrastructure in order to support the numerous power applications necessary to provide improved grid monitoring and control capabilities. However, recent findings documented in government reports and other literature indicate a growing threat of cyber-based attacks, in both number and sophistication, targeting the nation's electric grid and other critical infrastructures. Specifically, this paper discusses cyber-physical security of Wide-Area Monitoring, Protection and Control (WAMPAC) from a coordinated cyber attack perspective and introduces a game-theoretic approach to address the issue. Finally, the paper briefly describes how cyber-physical testbeds can be used to evaluate the security research and perform realistic attack-defense studies for smart grid type environments.

  13. Evaluation of Service Level Agreement Approaches for Portfolio Management in the Financial Industry

    NASA Astrophysics Data System (ADS)

    Pontz, Tobias; Grauer, Manfred; Kuebert, Roland; Tenschert, Axel; Koller, Bastian

    The idea of service-oriented Grid computing seems to have the potential for a fundamental paradigm change and a new architectural alignment concerning the design of IT infrastructures. There is a wide range of technical approaches from scientific communities which describe basic infrastructures and middlewares for integrating Grid resources, so that Grid applications are now technically realizable. Hence, Grid computing needs viable business models and enhanced infrastructures to move from academic application to commercial application. For commercial usage of these evolutions, service level agreements are needed. The approaches developed so far are primarily of academic interest and mostly have not been put into practice. In this paper, five service level agreement approaches have been evaluated against a business use case from the financial industry. Based on this evaluation, a management architecture has been designed and implemented as a prototype.

  14. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  15. A Theoretical Secure Enterprise Architecture for Multi Revenue Generating Smart Grid Sub Electric Infrastructure

    ERIC Educational Resources Information Center

    Chaudhry, Hina

    2013-01-01

    This study is a part of the smart grid initiative providing electric vehicle charging infrastructure. It is a refueling structure, an energy generating photovoltaic system and charge point electric vehicle charging station. The system will utilize advanced design and technology allowing electricity to flow from the site's normal electric service…

  16. Energy Theft in the Advanced Metering Infrastructure

    NASA Astrophysics Data System (ADS)

    McLaughlin, Stephen; Podkuiko, Dmitry; McDaniel, Patrick

    Global energy generation and delivery systems are transitioning to a new computerized "smart grid". One of the principal components of the smart grid is an advanced metering infrastructure (AMI). AMI replaces the analog meters with computerized systems that report usage over digital communication interfaces, e.g., phone lines. However, with this infrastructure comes new risk. In this paper, we consider adversarial means of defrauding the electrical grid by manipulating AMI systems. We document the methods adversaries will use to attempt to manipulate energy usage data, and validate the viability of these attacks by performing penetration testing on commodity devices. Through these activities, we demonstrate that not only is theft still possible in AMI systems, but that current AMI devices introduce a myriad of new vectors for achieving it.

  17. Grids, virtualization, and clouds at Fermilab

    DOE PAGES

    Timm, S.; Chadwick, K.; Garzoglio, G.; ...

    2014-06-11

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud and GPCF) and core computing (Virtual Services). Lastly, this work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  18. Grids, virtualization, and clouds at Fermilab

    NASA Astrophysics Data System (ADS)

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  19. Improving Distribution Resiliency with Microgrids and State and Parameter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuffner, Francis K.; Williams, Tess L.; Schneider, Kevin P.

    Modern society relies on low-cost, reliable electrical power, both to maintain industry and to provide basic social services to the populace. When major disturbances occur, such as Hurricane Katrina or Hurricane Sandy, the nation's electrical infrastructure can experience significant outages. To help prevent the spread of these outages, as well as to facilitate faster restoration afterwards, various approaches to improving the resiliency of the power system are needed. Two such approaches are breaking the system into smaller microgrid sections and improving insight into operations so that failures or mis-operations can be detected before they become critical. By breaking the system into smaller microgrid islands, power can be maintained in areas where distributed generation and energy storage resources are still available but bulk power generation is no longer connected. Additionally, microgrid systems can maintain service to local pockets of customers when there has been extensive damage to the local distribution system. However, microgrids are grid-connected the majority of the time, and implementing and operating a grid-connected microgrid differs substantially from operating an islanded one. This report discusses work conducted by the Pacific Northwest National Laboratory that developed improvements to simulation tools to capture the characteristics of microgrids, and shows how they can be used to develop new operational strategies. These operational strategies reduce the cost of microgrid operation and increase the reliability and resilience of the nation's electricity infrastructure. In addition to the ability to break the system into microgrids, improved observability into the state of the distribution grid can make the power system more resilient. State estimation on the transmission system already provides great insight into grid operations and detects abnormal conditions by leveraging existing measurements. These transmission-level approaches are extended here, using advanced metering infrastructure and other distribution-level measurements to create a three-phase, unbalanced distribution state estimation approach. With distribution-level state estimation, the grid can be operated more efficiently, and outages or equipment failures can be caught faster, improving the overall resilience and reliability of the grid.
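Distribution state estimation of the kind described above builds on the same weighted least-squares (WLS) formulation long used at transmission level. As a hedged illustration only (the report's actual three-phase, unbalanced formulation is considerably more involved), a minimal linear WLS estimator can be sketched as follows; the matrix `H`, measurements `z`, and measurement sigmas are invented toy values:

```python
import numpy as np

def wls_state_estimate(H, z, sigma):
    """Solve the weighted least-squares problem min_x (z - Hx)^T W (z - Hx)
    with W = diag(1/sigma_i^2); x solves the normal equations (H^T W H) x = H^T W z."""
    W = np.diag(1.0 / np.asarray(sigma, dtype=float) ** 2)
    gain = H.T @ W @ H                   # gain matrix
    return np.linalg.solve(gain, H.T @ W @ z)

# Toy system: two state variables seen through three redundant measurements.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([0.98, 1.02])
z = H @ x_true                           # noise-free measurements for clarity
x_hat = wls_state_estimate(H, z, sigma=[0.01, 0.01, 0.02])
```

With redundant measurements, the same gain matrix also supports bad-data detection via the weighted residuals, which is one way abnormal conditions are flagged.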

  20. Framework for modeling high-impact, low-frequency power grid events to support risk-informed decisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veeramany, Arun; Unwin, Stephen D.; Coles, Garill A.

    2016-06-25

    Natural and man-made hazardous events resulting in loss of grid infrastructure assets challenge the security and resilience of the electric power grid. However, the planning and allocation of appropriate contingency resources for such events requires an understanding of their likelihood and the extent of their potential impact. Where these events are of low likelihood, a risk-informed perspective on planning can be difficult, as the statistical basis needed to directly estimate the probabilities and consequences of their occurrence does not exist. Because risk-informed decisions rely on such knowledge, a basis for modeling the risk associated with high-impact, low-frequency events (HILFs) is essential. Insights from such a model indicate where resources are most rationally and effectively expended. A risk-informed realization of designing and maintaining a grid resilient to HILFs will demand consideration of a spectrum of hazards/threats to infrastructure integrity, an understanding of their likelihoods of occurrence, treatment of the fragilities of critical assets to the stressors induced by such events, and, through modeling grid network topology, the extent of damage associated with these scenarios. The model resulting from integration of these elements will allow sensitivity assessments based on optional risk management strategies, such as alternative pooling, staging and logistic strategies, and emergency contingency planning. This study is focused on the development of an end-to-end HILF risk-assessment framework. Such a framework is intended to provide the conceptual and overarching technical basis for the development of HILF risk models that can inform decision-makers across numerous stakeholder groups in directing resources optimally towards the management of risks to operational continuity.

  1. ICT-infrastructures for hydrometeorology science and natural disaster societal impact assessment: the DRIHMS project

    NASA Astrophysics Data System (ADS)

    Parodi, A.; Craig, G. C.; Clematis, A.; Kranzlmueller, D.; Schiffers, M.; Morando, M.; Rebora, N.; Trasforini, E.; D'Agostino, D.; Keil, K.

    2010-09-01

    Hydrometeorological science has made strong progress over the last decade at the European and worldwide level: new modeling tools, post-processing methodologies, observational data and corresponding ICT (Information and Communication Technology) technologies are available. Recent European efforts in developing a platform for e-Science, such as EGEE (Enabling Grids for E-sciencE), SEEGRID-SCI (South East Europe GRID e-Infrastructure for regional e-Science), and the German C3-Grid, have demonstrated their ability to provide an ideal basis for the sharing of complex hydrometeorological data sets and tools. Despite these early initiatives, however, awareness of the potential of Grid technology as a catalyst for future hydrometeorological research is still low, and both adoption and exploitation have been astonishingly slow, not only within individual EC member states but also on a European scale. With this background in mind, and given that European ICT infrastructures are in the process of transitioning to a sustainable and permanent service utility, as underlined by the European Grid Initiative (EGI) and the Partnership for Advanced Computing in Europe (PRACE), the Distributed Research Infrastructure for Hydro-Meteorology Study (DRIHMS, co-funded by the EC under the 7th Framework Programme) project has been initiated. The goal of DRIHMS is the promotion of Grids in particular and e-Infrastructures in general within the European hydrometeorological research (HMR) community through the diffusion of a Grid platform for e-collaboration in this earth science sector: the idea is to further boost European research excellence and competitiveness in the fields of hydrometeorological research and Grid research by bridging the gaps between these two scientific communities.
Furthermore, the project is intended to transfer the results to areas beyond hydrometeorological science proper, supporting the assessment of the effects of extreme hydrometeorological events on society and the development of tools improving the adaptation and resilience of society to the challenges of climate change. This paper provides an overview of the DRIHMS ideas and presents the results of the DRIHMS HMR and ICT surveys.

  2. Grid Modernization | NREL

    Science.gov Websites

    NREL conducts research and development to improve the nation's electrical grid infrastructure, making it more flexible, reliable, and secure. Featured topics include the IEEE 1547 interconnection standard and the Basic Research Needs report; NREL grid research areas include controls, power systems design and studies, security and resilience, and institutional support.

  3. Sun-Burned: Space Weather's Impact on United States National Security

    NASA Astrophysics Data System (ADS)

    Stebbins, B.

    2014-12-01

    The heightened media attention surrounding the 2013-14 solar maximum presented an excellent opportunity to examine the ever-increasing vulnerability of US national security and its Department of Defense to space weather. This vulnerability exists for three principal reasons: 1) a massive US space-based infrastructure; 2) an almost exclusive reliance on an aging and stressed continental US power grid; and 3) a direct dependence upon a US economy adapted to the conveniences of space and uninterrupted power. I tailored my research for national security policymakers and military strategists in an endeavor to initiate and inform a substantive dialogue on America's preparation for, and response to, a major solar event that would severely degrade core national security capabilities, such as military operations. Significant risk to the Department of Defense exists from powerful events that could impact its space-based infrastructure and even the terrestrial power grid. Given this ever-present and increasing risk to the United States, my work advocates raising the issue of space weather and its impacts to the level of a national security threat. With the current solar cycle having already peaked and the next projected solar maximum just a decade away, the government has a relatively small window to make policy decisions that prepare the nation and its Defense Department to mitigate impacts from these potentially catastrophic phenomena.

  4. Recovery Act-SmartGrid regional demonstration transmission and distribution (T&D) Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hedges, Edward T.

    This document represents the Final Technical Report for the Kansas City Power & Light Company (KCP&L) Green Impact Zone SmartGrid Demonstration Project (SGDP). The KCP&L project is partially funded by Department of Energy (DOE) Regional Smart Grid Demonstration Project cooperative agreement DE-OE0000221 in the Transmission and Distribution Infrastructure application area. This Final Technical Report summarizes the KCP&L SGDP as of April 30, 2015 and includes summaries of the project design, implementation, operations, and analysis performed as of that date.

  5. A Grid Infrastructure for Supporting Space-based Science Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Emerging technologies for computational grid infrastructures have the potential to revolutionize the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing at less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers with the first real opportunity for an effective distributed collaborative environment, with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.

  6. Life science research and drug discovery at the turn of the 21st century: the experience of SwissBioGrid.

    PubMed

    den Besten, Matthijs; Thomas, Arthur J; Schroeder, Ralph

    2009-04-22

    It is often said that the life sciences are transforming into an information science. As laboratory experiments are starting to yield ever increasing amounts of data and the capacity to deal with those data is catching up, an increasing share of scientific activity is seen to be taking place outside the laboratories, sifting through the data and modelling "in silico" the processes observed "in vitro." The transformation of the life sciences and similar developments in other disciplines have inspired a variety of initiatives around the world to create technical infrastructure to support the new scientific practices that are emerging. The e-Science programme in the United Kingdom and the NSF Office for Cyberinfrastructure are examples of these. In Switzerland there have been no such national initiatives. Yet, this has not prevented scientists from exploring the development of similar types of computing infrastructures. In 2004, a group of researchers in Switzerland established a project, SwissBioGrid, to explore whether Grid computing technologies could be successfully deployed within the life sciences. This paper presents their experiences as a case study of how the life sciences are currently operating as an information science and presents the lessons learned about how existing institutional and technical arrangements facilitate or impede this operation. SwissBioGrid gave rise to two pilot projects: one for proteomics data analysis and the other for high-throughput molecular docking ("virtual screening") to find new drugs for neglected diseases (specifically, for dengue fever). 
The proteomics project was an example of a data management problem, applying many different analysis algorithms to Terabyte-sized datasets from mass spectrometry, involving comparisons with many different reference databases; the virtual screening project was more a purely computational problem, modelling the interactions of millions of small molecules with a limited number of protein targets on the coat of the dengue virus. Both present interesting lessons about how scientific practices are changing when they tackle the problems of large-scale data analysis and data management by means of creating a novel technical infrastructure. In the experience of SwissBioGrid, data intensive discovery has a lot to gain from close collaboration with industry and harnessing distributed computing power. Yet the diversity in life science research implies only a limited role for generic infrastructure; and the transience of support means that researchers need to integrate their efforts with others if they want to sustain the benefits of their success, which are otherwise lost.

  7. Modelling noise propagation using Grid Resources. Progress within GDI-Grid

    NASA Astrophysics Data System (ADS)

    Kiehle, Christian; Mayer, Christian; Padberg, Alexander; Stapelfeld, Hartmut

    2010-05-01

    GDI-Grid (English: SDI-Grid) is a research project funded by the German Ministry for Science and Education (BMBF). It aims at bridging the gaps between OGC Web Services (OWS) and Grid infrastructures, and at identifying the potential of utilizing the superior storage capacities and computational power of Grid infrastructures for geospatial applications while keeping the well-known service interfaces specified by the OGC. The project considers all major OGC web service interfaces for mapping (Web Map Service), feature access (Web Feature Service), coverage access (Web Coverage Service) and processing (Web Processing Service). The major challenge within GDI-Grid is the harmonization of diverging standards as defined by standardization bodies for Grid computing and spatial information exchange. The project started in 2007 and will continue until June 2010. The concept for the gridification of OWS developed by lat/lon GmbH and the Department of Geography of the University of Bonn is applied to three real-world scenarios in order to check its practicability: a flood simulation, a scenario for emergency routing and a noise propagation simulation. The latter scenario is addressed by the Stapelfeldt Ingenieurgesellschaft mbH, located in Dortmund, which is adapting its LimA software to utilize Grid resources. Noise mapping of, e.g., traffic noise in urban agglomerations and along major trunk roads is a recurring demand of the EU Noise Directive. Input data comprise the road network and traffic volumes, terrain, buildings and noise protection screens, as well as population distribution. Noise impact levels are generally calculated on a 10 m grid and along relevant building facades. For each receiver position, sources within a typical range of 2000 m are split into small segments, depending on local geometry.
For each of the segments, propagation analysis includes diffraction effects caused by all obstacles on the path of sound propagation. This computationally intensive calculation needs to be performed for a major part of the European landscape. A Linux version of the commercial LimA software for noise mapping analysis has been implemented on a test cluster within the German D-Grid computer network. Results and performance indicators will be presented. The presentation is an extension of last year's presentation "Spatial Data Infrastructures and Grid Computing: the GDI-Grid project", which described the gridification concept developed in the GDI-Grid project and provided an overview of the conceptual gaps between Grid computing and Spatial Data Infrastructures. Results from the GDI-Grid project are incorporated in the OGC-OGF (Open Grid Forum) collaboration efforts as well as in the OGC WPS 2.0 standards working group developing the next major version of the WPS specification.
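For intuition about the per-source summation step described above, here is a deliberately simplified free-field sketch (not the LimA implementation): each source contributes a level attenuated by spherical spreading, and contributions are summed energetically. The diffraction, ground and screening effects that dominate the real per-segment cost are omitted.

```python
import math

def receiver_level(source_levels_dB, distances_m):
    """Energetic sum of point-source contributions at one receiver.
    Free-field spherical spreading only: Lp = Lw - 20*log10(d) - 11."""
    total_energy = 0.0
    for Lw, d in zip(source_levels_dB, distances_m):
        Lp = Lw - 20.0 * math.log10(d) - 11.0   # geometric attenuation with distance
        total_energy += 10.0 ** (Lp / 10.0)     # incoherent (energy) addition
    return 10.0 * math.log10(total_energy)

# One 100 dB source at 10 m gives 69 dB; a second identical source adds ~3 dB.
single = receiver_level([100.0], [10.0])
double = receiver_level([100.0, 100.0], [10.0, 10.0])
```

Because each receiver repeats this sum over thousands of segments, the per-map cost grows with receivers times segments, which is why the abstract calls the full-landscape calculation a natural fit for Grid resources.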

  8. Progress in Machine Learning Studies for the CMS Computing Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo

    Here, computing systems for LHC experiments have developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for LHC experiments and of its recent evolution can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of the LHC experiments in Run-1 and Run-2 so far.

  9. Progress in Machine Learning Studies for the CMS Computing Infrastructure

    DOE PAGES

    Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo; ...

    2017-12-06

    Here, computing systems for LHC experiments have developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for LHC experiments and of its recent evolution can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of the LHC experiments in Run-1 and Run-2 so far.

  10. Experiences of engineering Grid-based medical software.

    PubMed

    Estrella, F; Hauer, T; McClatchey, R; Odeh, M; Rogulin, D; Solomonides, T

    2007-08-01

    Grid-based technologies are emerging as potential solutions for managing and collaborating on distributed resources in the biomedical domain. Few examples exist, however, of successful implementations of Grid-enabled medical systems, and even fewer have been deployed for evaluation in practice. The objective of this paper is to evaluate the use in clinical practice of a Grid-based imaging prototype and to establish directions for engineering future medical Grid developments and their subsequent deployment. The MammoGrid project has deployed a prototype system for clinicians using the Grid as its information infrastructure. To assist in the specification of the system requirements (and for the first time in healthgrid applications), use-case modelling has been carried out in close collaboration with clinicians and radiologists who had no prior experience of this modelling technique. A critical qualitative and, where possible, quantitative analysis of the MammoGrid prototype is presented, leading to a set of recommendations from the delivery of the first deployed Grid-based medical imaging application. We report critically on the application of software engineering techniques in the specification and implementation of the MammoGrid project and show that use-case modelling is a suitable vehicle for representing medical requirements and for communicating effectively with the clinical community. This paper also discusses the practical advantages and limitations of applying the Grid to real-life clinical applications and presents the consequent lessons learned. The work presented in this paper demonstrates that, given suitable commitment from collaborating radiologists, it is practical to deploy medical imaging analysis applications on the Grid in clinical practice, but that standardization in and stability of the Grid software are a necessary prerequisite for successful healthgrids.
The MammoGrid prototype has therefore paved the way for further advanced Grid-based deployments in the medical and biomedical domains.

  11. Complex Dynamics of the Power Transmission Grid (and other Critical Infrastructures)

    NASA Astrophysics Data System (ADS)

    Newman, David

    2015-03-01

    Our modern societies depend crucially on a web of complex critical infrastructures such as power transmission networks, communication systems, transportation networks and many others. These infrastructure systems display a great number of the characteristic properties of complex systems. Important among these characteristics, they exhibit infrequent large cascading failures that often obey a power law distribution in their probability versus size. This power law behavior suggests that conventional risk analysis does not apply to these systems. It is thought that much of this behavior comes from the dynamical evolution of the system as it ages, is repaired, upgraded, and as the operational rules evolve, with human decision making playing an important role in the dynamics. In this talk, infrastructure systems as complex dynamical systems will be introduced and some of their properties explored. The majority of the talk will then be focused on the electric power transmission grid, though many of the results can be easily applied to other infrastructures. General properties of the grid will be discussed and results from a dynamical complex systems power transmission model will be compared with real world data. Then we will look at a variety of uses of this type of model. As examples, we will discuss the impact of size and network homogeneity on grid robustness, the change in risk of failure as the generation mix changes (for example, more distributed vs. centralized), as well as the effect of operational changes such as changing the operational risk aversion or grid upgrade strategies. One of the important outcomes from this work is the realization that "improvements" in the system components and operational efficiency do not always improve the system robustness, and can in fact greatly increase the risk, when measured as the risk of large failure.
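The power-law tail of cascade sizes mentioned above is usually characterized by its exponent. As an illustrative aside (this is the standard continuous maximum-likelihood estimator, not the speaker's model), the exponent can be estimated and sanity-checked on synthetic data:

```python
import math
import random

def powerlaw_alpha_mle(sizes, xmin):
    """Continuous maximum-likelihood estimator for a power-law tail exponent:
    alpha_hat = 1 + n / sum(ln(x_i / xmin)) over the observations x_i >= xmin."""
    tail = [x for x in sizes if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic check: draw from p(x) ~ x^(-2.5) for x >= 1 by inverse-transform
# sampling, x = (1 - u)^(-1/(alpha - 1)) with u uniform on [0, 1), then re-estimate.
random.seed(0)
samples = [(1.0 - random.random()) ** (-1.0 / 1.5) for _ in range(200_000)]
alpha_hat = powerlaw_alpha_mle(samples, xmin=1.0)
```

A heavy tail like this is exactly why conventional (thin-tailed) risk analysis fails: for exponents below 3 the size variance diverges, so rare huge cascades dominate the expected damage.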

  12. Preservation Environments

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.

    2004-01-01

    The long-term preservation of digital entities requires mechanisms to manage the authenticity of massive data collections that are written to archival storage systems. Preservation environments impose authenticity constraints and manage the evolution of the storage system technology by building infrastructure independent solutions. This seeming paradox, the need for large archives, while avoiding dependence upon vendor specific solutions, is resolved through use of data grid technology. Data grids provide the storage repository abstractions that make it possible to migrate collections between vendor specific products, while ensuring the authenticity of the archived data. Data grids provide the software infrastructure that interfaces vendor-specific storage archives to preservation environments.

  13. Enabling fast charging - Infrastructure and economic considerations

    NASA Astrophysics Data System (ADS)

    Burnham, Andrew; Dufek, Eric J.; Stephens, Thomas; Francfort, James; Michelbacher, Christopher; Carlson, Richard B.; Zhang, Jiucai; Vijayagopal, Ram; Dias, Fernando; Mohanpurkar, Manish; Scoffield, Don; Hardy, Keith; Shirk, Matthew; Hovsapian, Rob; Ahmed, Shabbir; Bloom, Ira; Jansen, Andrew N.; Keyser, Matthew; Kreuzer, Cory; Markel, Anthony; Meintz, Andrew; Pesaran, Ahmad; Tanim, Tanvir R.

    2017-11-01

    The ability to charge battery electric vehicles (BEVs) on a time scale that is on par with the time to fuel an internal combustion engine vehicle (ICEV) would remove a significant barrier to the adoption of BEVs. However, for viability, fast charging at this time scale also needs to occur at a price that is acceptable to consumers. Therefore, the cost drivers for both BEV owners and charging station providers are analyzed. In addition, key infrastructure considerations are examined, including grid stability and delivery of power, the design of fast charging stations, and the design and use of electric vehicle service equipment. Each of these aspects has technical barriers that need to be addressed and is directly linked to economic impacts on use and implementation. This discussion focuses on both the economic and infrastructure issues which exist and need to be addressed for the effective implementation of fast charging at 400 kW and above. In so doing, it has been found that there is a distinct need to effectively manage the intermittent, high power demand of fast charging, strategically plan infrastructure corridors, and further understand the cost of operation of charging infrastructure and BEVs.
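The 400 kW figure can be put in perspective with simple energy arithmetic. A hedged back-of-envelope sketch (the pack size, state-of-charge window and efficiency below are hypothetical, and real sessions taper in power as the battery fills):

```python
def charge_minutes(pack_kWh, soc_start, soc_end, power_kW, efficiency=0.9):
    """Back-of-envelope charging time assuming constant power delivery and a
    flat charging efficiency; real sessions taper at high state of charge."""
    energy_from_grid_kWh = pack_kWh * (soc_end - soc_start) / efficiency
    return 60.0 * energy_from_grid_kWh / power_kW

# A hypothetical 90 kWh pack charged from 10% to 80% state of charge:
fast = charge_minutes(90.0, 0.10, 0.80, power_kW=400.0)  # about 10.5 minutes
slow = charge_minutes(90.0, 0.10, 0.80, power_kW=50.0)   # about 84 minutes
```

The same arithmetic shows the grid-side problem: a handful of simultaneous 400 kW sessions is a multi-megawatt, highly intermittent load, which is why the paper stresses demand management and corridor planning.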

  14. Enabling fast charging – Infrastructure and economic considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnham, Andrew; Dufek, Eric J.; Stephens, Thomas

    The ability to charge battery electric vehicles (BEVs) on a time scale that is on par with the time to fuel an internal combustion engine vehicle (ICEV) would remove a significant barrier to the adoption of BEVs. However, for viability, fast charging at this time scale also needs to occur at a price that is acceptable to consumers. Therefore, the cost drivers for both BEV owners and charging station providers are analyzed. In addition, key infrastructure considerations are examined, including grid stability and delivery of power, the design of fast charging stations, and the design and use of electric vehicle service equipment. Each of these aspects has technical barriers that need to be addressed and is directly linked to economic impacts on use and implementation. This discussion focuses on both the economic and infrastructure issues which exist and need to be addressed for the effective implementation of fast charging at 400 kW and above. In so doing, it has been found that there is a distinct need to effectively manage the intermittent, high power demand of fast charging, strategically plan infrastructure corridors, and further understand the cost of operation of charging infrastructure and BEVs.

  15. Enabling fast charging – Infrastructure and economic considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnham, Andrew; Dufek, Eric J.; Stephens, Thomas

    The ability to charge battery electric vehicles (BEVs) on a time scale that is on par with the time to fuel an internal combustion engine vehicle (ICEV) would remove a significant barrier to the adoption of BEVs. However, for viability, fast charging at this time scale also needs to occur at a price that is acceptable to consumers. Therefore, the cost drivers for both BEV owners and charging station providers are analyzed. In addition, key infrastructure considerations are examined, including grid stability and delivery of power, the design of fast charging stations, and the design and use of electric vehicle service equipment. Each of these aspects has technical barriers that need to be addressed and is directly linked to economic impacts on use and implementation. Here, this discussion focuses on both the economic and infrastructure issues which exist and need to be addressed for the effective implementation of fast charging up to 350 kW. In doing so, it has been found that there is a distinct need to effectively manage the intermittent, high power demand of fast charging, strategically plan infrastructure corridors, and further understand the cost of operation of charging infrastructure and BEVs.

  16. Enabling fast charging – Infrastructure and economic considerations

    DOE PAGES

    Burnham, Andrew; Dufek, Eric J.; Stephens, Thomas; ...

    2017-10-23

    The ability to charge battery electric vehicles (BEVs) on a time scale that is on par with the time to fuel an internal combustion engine vehicle (ICEV) would remove a significant barrier to the adoption of BEVs. However, for viability, fast charging at this time scale also needs to occur at a price that is acceptable to consumers. Therefore, the cost drivers for both BEV owners and charging station providers are analyzed. In addition, key infrastructure considerations are examined, including grid stability and delivery of power, the design of fast charging stations, and the design and use of electric vehicle service equipment. Each of these aspects has technical barriers that need to be addressed and is directly linked to economic impacts on use and implementation. Here, this discussion focuses on both the economic and infrastructure issues which exist and need to be addressed for the effective implementation of fast charging up to 350 kW. In doing so, it has been found that there is a distinct need to effectively manage the intermittent, high power demand of fast charging, strategically plan infrastructure corridors, and further understand the cost of operation of charging infrastructure and BEVs.

  17. Land Cover Change Community-based Processing and Analysis System (LC-ComPS): Lessons Learned from Technology Infusion

    NASA Astrophysics Data System (ADS)

    Masek, J.; Rao, A.; Gao, F.; Davis, P.; Jackson, G.; Huang, C.; Weinstein, B.

    2008-12-01

    The Land Cover Change Community-based Processing and Analysis System (LC-ComPS) combines grid technology, existing science modules, and dynamic workflows to enable users to complete advanced land data processing on data available from local and distributed archives. Changes in land cover represent a direct link between human activities and the global environment, and in turn affect Earth's climate. Thus characterizing land cover change has become a major goal for Earth observation science. Many science algorithms exist to generate new products (e.g., surface reflectance, change detection) used to study land cover change. The overall objective of the LC-ComPS is to release a set of tools and services to the land science community that can be implemented as a flexible LC-ComPS to produce surface reflectance and land-cover change information with ground resolution on the order of Landsat-class instruments. This package includes software modules for pre-processing Landsat-type satellite imagery (calibration, atmospheric correction, orthorectification, precision registration, BRDF correction) for performing land-cover change analysis and includes pre-built workflow chains to automatically generate surface reflectance and land-cover change products based on user input. In order to meet the project objectives, the team created the infrastructure (i.e., client-server system with graphical and machine interfaces) to expand the use of these existing science algorithm capabilities in a community with distributed, large data archives and processing centers. Because of the distributed nature of the user community, grid technology was chosen to unite the dispersed community resources. At that time, grid computing was not used consistently and operationally within the Earth science research community. 
Therefore, there was a learning curve in configuring and implementing the underlying public key infrastructure (PKI) interfaces required for user authentication, secure file transfer, and remote job execution on the grid network of machines. In addition, science support was needed to verify that the grid technology did not have any adverse effects on the science module outputs. Other open-source, unproven technologies, such as a workflow package to manage jobs submitted by the user, were infused into the overall system with successful results. This presentation will discuss the basic capabilities of LC-ComPS, explain how the technology was infused, provide lessons learned for using and integrating the various technologies while developing and operating the system, and finally outline plans moving forward (maintenance and operations decisions) based on the experience to date.
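The pre-built workflow chains mentioned above can be pictured as an ordered composition of processing stages. The sketch below is purely illustrative: the stage names come from the abstract, but the functions, data model, and scaling constants are invented for the example and are not LC-ComPS code.

```python
# Hypothetical sketch of an LC-ComPS-style pre-processing chain. Stage names
# follow the abstract; functions and constants are invented for illustration.

def calibrate(scene):
    # convert raw digital numbers (DN) to at-sensor radiance (placeholder gain)
    scene["radiance"] = [dn * 0.1 for dn in scene["dn"]]
    return scene

def atmospheric_correction(scene):
    # derive surface reflectance from radiance (placeholder correction)
    scene["reflectance"] = [r / 3.0 for r in scene["radiance"]]
    return scene

def run_chain(scene, stages):
    # a workflow chain is an ordered composition of stages; in the real system
    # each stage would be a grid job linked by the workflow package
    for stage in stages:
        scene = stage(scene)
    return scene

scene = run_chain({"dn": [100, 200]}, [calibrate, atmospheric_correction])
```

In the real system each stage would run as a remote grid job, which is why the PKI setup for authentication and secure file transfer described above was a prerequisite.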

  18. A Smart Home Test Bed for Undergraduate Education to Bridge the Curriculum Gap from Traditional Power Systems to Modernized Smart Grids

    ERIC Educational Resources Information Center

    Hu, Qinran; Li, Fangxing; Chen, Chien-fei

    2015-01-01

    There is a worldwide trend to modernize old power grid infrastructures to form future smart grids, which will achieve efficient, flexible energy consumption by using the latest technologies in communication, computing, and control. Smart grid initiatives are moving power systems curricula toward smart grids. Although the components of smart grids…

  19. Geospatial Applications on Different Parallel and Distributed Systems in enviroGRIDS Project

    NASA Astrophysics Data System (ADS)

    Rodila, D.; Bacu, V.; Gorgan, D.

    2012-04-01

The execution of Earth Science applications and services on parallel and distributed systems has become a necessity, especially due to the large amounts of Geospatial data these applications require and the large geographical areas they cover. The parallelization of these applications addresses important performance issues and can range from task parallelism to data parallelism. Parallel and distributed architectures such as Grid, Cloud, Multicore, etc. seem to offer the necessary functionalities to solve important problems in the Earth Science domain: storing, distribution, management, processing and security of Geospatial data, execution of complex processing through task and data parallelism, etc. A main goal of the FP7-funded project enviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is the development of a Spatial Data Infrastructure targeting this catchment region, along with standardized and specialized tools for storing, analyzing, processing and visualizing the Geospatial data concerning this area. To achieve these objectives, enviroGRIDS deals with the execution of different Earth Science applications, such as hydrological models, Geospatial Web services standardized by the Open Geospatial Consortium (OGC) and others, on parallel and distributed architectures to maximize the obtained performance. This presentation analyzes the integration and execution of Geospatial applications on different parallel and distributed architectures and the possibility of choosing among these architectures, based on application characteristics and user requirements, through a specialized component. Versions of the proposed platform have been used in the enviroGRIDS project on different use cases, such as the execution of Geospatial Web services both on Web and Grid infrastructures [2] and the execution of SWAT hydrological models both on Grid and Multicore architectures [3].
The current focus is to integrate the Cloud infrastructure into the proposed platform; Cloud computing is still a paradigm with critical problems to be solved despite great efforts and investments. Cloud computing comes as a new way of delivering resources while using a large set of old as well as new technologies and tools to provide the necessary functionalities. The main challenges in Cloud computing, most of them also identified in the Open Cloud Manifesto (2009), concern resource management and monitoring, data and application interoperability and portability, security, scalability, software licensing, etc. We propose a platform able to execute different Geospatial applications on different parallel and distributed architectures such as Grid, Cloud, Multicore, etc., with the possibility of choosing among these architectures based on application characteristics and complexity, user requirements, required performance, cost, etc. The redirection of execution to a selected architecture is realized through a specialized component and offers a flexible way of achieving the best performance under the existing constraints.
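The "specialized component" that redirects execution can be imagined as a simple scorer over application characteristics and user requirements. The following sketch is hypothetical: the criteria, weights, and field names are assumptions for illustration, not the enviroGRIDS implementation.

```python
# Illustrative sketch (not enviroGRIDS code) of a component that redirects
# execution by scoring each architecture against application characteristics
# and user requirements. Criteria, weights, and field names are assumptions.

def choose_architecture(app):
    scores = {"Multicore": 0, "Grid": 0, "Cloud": 0}
    if app["data_gb"] < 1:
        scores["Multicore"] += 2   # small data: avoid transfer overhead
    else:
        scores["Grid"] += 2        # large, distributed data archives
    if app["needs_elasticity"]:
        scores["Cloud"] += 2       # on-demand provisioning of extra resources
    if app["low_cost"]:
        scores["Grid"] += 1        # shared community infrastructure
    return max(scores, key=scores.get)

arch = choose_architecture({"data_gb": 50, "needs_elasticity": False, "low_cost": True})
```

A real selector would also weigh job runtime, site availability, and licensing, but the pattern (characteristics in, architecture out) is the same.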

  20. Creating a Network Model for the Integration of a Dynamic and Static Supervisory Control and Data Acquisition (SCADA) Test Environment

    DTIC Science & Technology

    2011-03-01

they can continue to leverage these capabilities (building Smart Grid infrastructure and providing Internet connectivity to every home) while ensuring... Figure 9. Smart Grid Interoperability; Figure 10. Smart Grid Integration; Figure 11. National Smart Grid Initiatives

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Ching-Yen; Youn, Edward; Chynoweth, Joshua

As the number of Electric Vehicles (EVs) increases, charging infrastructure becomes more important. When there is a power shortage during the day, the charging infrastructure should have the option either to shut off the power to the charging stations or to lower the power delivered to the EVs in order to satisfy the needs of the grid. This paper proposes a design for a smart charging infrastructure capable of providing power to several EVs from one circuit by multiplexing power, with charge control and safety systems to prevent electric shock. The safety design is implemented at different levels that include both the server and the smart charging stations. With this smart charging infrastructure, a shortage of energy in a local grid could be addressed by our EV charging management system.
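One way to picture the multiplexing idea is proportional curtailment of the charging sessions that share a circuit. The function below is an illustrative sketch under assumed names and numbers, not the paper's control system.

```python
def allocate_power(circuit_limit_kw, sessions):
    """Split one circuit's capacity across charging sessions. During a grid
    shortage the server lowers circuit_limit_kw and every EV is curtailed in
    proportion to its request. Illustrative sketch only; the real system also
    handles safety interlocks and on/off multiplexing."""
    requested = sum(sessions.values())
    if requested <= circuit_limit_kw:
        return dict(sessions)  # no shortage: everyone gets what they asked for
    scale = circuit_limit_kw / requested
    return {ev: kw * scale for ev, kw in sessions.items()}

# two EVs request 9.9 kW total on a 6.6 kW circuit
plan = allocate_power(6.6, {"ev1": 6.6, "ev2": 3.3})
```

Time-multiplexing (rotating full power among vehicles) is an alternative policy under the same interface.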

  2. Full Multigrid Flow Solver

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.; Thomas, James L.; Biedron, Robert T.; Diskin, Boris

    2005-01-01

    FMG3D (full multigrid 3 dimensions) is a pilot computer program that solves equations of fluid flow using a finite difference representation on a structured grid. Infrastructure exists for three dimensions but the current implementation treats only two dimensions. Written in Fortran 90, FMG3D takes advantage of the recursive subroutine feature, dynamic memory allocation, and structured-programming constructs of that language. FMG3D supports multi-block grids with three types of block-to-block interfaces: periodic, C-zero, and C-infinity. For all three types, grid points must match at interfaces. For periodic and C-infinity types, derivatives of grid metrics must be continuous at interfaces. The available equation sets are as follows: scalar elliptic equations, scalar convection equations, and the pressure-Poisson formulation of the Navier-Stokes equations for an incompressible fluid. All the equation sets are implemented with nonzero forcing functions to enable the use of user-specified solutions to assist in verification and validation. The equations are solved with a full multigrid scheme using a full approximation scheme to converge the solution on each succeeding grid level. Restriction to the next coarser mesh uses direct injection for variables and full weighting for residual quantities; prolongation of the coarse grid correction from the coarse mesh to the fine mesh uses bilinear interpolation; and prolongation of the coarse grid solution uses bicubic interpolation.
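The restriction and prolongation steps described above can be illustrated with a toy 1D two-grid cycle for -u'' = f with homogeneous Dirichlet boundaries. This sketch shows only the smooth / restrict / coarse-correct / prolong pattern: residuals are full-weighted as in the abstract, and the correction is prolonged by linear interpolation, the 1D analogue of the bilinear operator. FMG3D's actual FAS full-multigrid scheme, its bicubic prolongation of solutions, and its multi-block grids are far more elaborate.

```python
import numpy as np

def smooth(u, f, h, sweeps, omega=2.0 / 3.0):
    # weighted-Jacobi smoother: damps the oscillatory error components
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict_fw(r):
    # full weighting of the residual onto the coarse grid
    rc = np.zeros((r.size + 1) // 2)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def prolong(uc):
    # linear interpolation of the coarse-grid correction to the fine grid
    uf = np.zeros(2 * uc.size - 1)
    uf[::2] = uc
    uf[1::2] = 0.5 * (uc[:-1] + uc[1:])
    return uf

def two_grid(u, f, h):
    u = smooth(u, f, h, 50)
    rc = restrict_fw(residual(u, f, h))
    # heavy smoothing of the coarse error equation stands in for recursion
    ec = smooth(np.zeros(rc.size), rc, 2 * h, 200)
    return smooth(u + prolong(ec), f, h, 50)

n, h = 17, 1.0 / 16.0
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)   # exact solution is sin(pi * x)
u = np.zeros(n)
for _ in range(5):
    u = two_grid(u, f, h)
```

After a few cycles the remaining error is dominated by the O(h^2) discretization error, which is the behavior a full multigrid scheme exploits to reach discretization accuracy in near-optimal work.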

  3. VERCE, Virtual Earthquake and Seismology Research Community in Europe, a new ESFRI initiative integrating data infrastructure, Grid and HPC infrastructures for data integration, data analysis and data modeling in seismology

    NASA Astrophysics Data System (ADS)

    van Hemert, Jano; Vilotte, Jean-Pierre

    2010-05-01

Research in earthquake and seismology addresses fundamental problems in understanding Earth's internal wave sources and structures, and supports applications addressing societal concerns about natural hazards, energy resources and environmental change. This community is central to the European Plate Observing System (EPOS), the ESFRI initiative in solid Earth Sciences. Global and regional seismology monitoring systems are continuously operated and are transmitting a growing wealth of data from Europe and from around the world. These tremendous volumes of seismograms, i.e., records of ground motions as a function of time, have a definite multi-use attribute, which puts a great premium on open-access data infrastructures that are integrated globally. In Europe, the earthquake and seismology community is part of the European Integrated Data Archives (EIDA) infrastructure and is structured around "horizontal" data services. On top of this distributed data archive system, the community has recently developed, within the EC project NERIES, advanced SOA-based web services and a unified portal system. Enabling advanced analysis of these data by utilising a data-aware distributed computing environment is instrumental to fully exploit the cornucopia of data and to guarantee optimal operation of the high-cost monitoring facilities. The strategy of VERCE is driven by the needs of data-intensive applications in data mining and modelling and will be illustrated through a set of applications. It aims to provide a comprehensive architecture and framework adapted to the scale and the diversity of these applications, and to integrate the community data infrastructure with Grid and HPC infrastructures. A first novel aspect is a service-oriented architecture that provides well-equipped integrated workbenches, with an efficient communication layer between data and Grid infrastructures, augmented with bridges to the HPC facilities.
A second novel aspect is the coupling between Grid data analysis and HPC data modelling applications through workflow and data sharing mechanisms. VERCE will develop important interactions with the European infrastructure initiatives in Grid and HPC computing. The VERCE team: CNRS-France (IPG Paris, LGIT Grenoble), UEDIN (UK), KNMI-ORFEUS (Holland), EMSC, INGV (Italy), LMU (Germany), ULIV (UK), BADW-LRZ (Germany), SCAI (Germany), CINECA (Italy)

  4. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    NASA Astrophysics Data System (ADS)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats

    2014-06-01

Scientific communities have been at the forefront of adopting new computing technologies and methodologies. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of science driven by "Big Data" will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on the cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.
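The bursting idea, fill grid slots first and overflow to cloud, can be reduced to a toy provisioning policy. The sketch below is an assumption-laden illustration; GlideinWMS's real frontend/factory matchmaking and pilot pressure logic are far richer, and the function and field names here are invented.

```python
# Toy cloud-bursting policy, loosely modelled on the idea in the abstract:
# send pilots to free grid slots first and "burst" the overflow to a cloud
# provider, capped by a quota. Names and thresholds are assumptions.

def plan_pilots(idle_jobs, grid_free_slots, cloud_quota):
    grid_pilots = min(idle_jobs, grid_free_slots)
    cloud_pilots = min(idle_jobs - grid_pilots, cloud_quota)  # burst the overflow
    return {"grid": grid_pilots,
            "cloud": cloud_pilots,
            "unserved": idle_jobs - grid_pilots - cloud_pilots}

plan = plan_pilots(idle_jobs=1000, grid_free_slots=600, cloud_quota=300)
```

Because pilots join one overlay HTCondor pool regardless of origin, user jobs need no changes when the overflow runs on cloud resources.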

  5. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt

Scientific communities have been at the forefront of adopting new computing technologies and methodologies. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of science driven by 'Big Data' will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on the cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.

  6. Prototyping a Web-of-Energy Architecture for Smart Integration of Sensor Networks in Smart Grids Domain.

    PubMed

    Caballero, Víctor; Vernet, David; Zaballos, Agustín; Corral, Guiomar

    2018-01-30

Sensor networks and the Internet of Things have driven the evolution of traditional electric power distribution networks towards a new paradigm referred to as Smart Grid. However, the different elements that compose the Information and Communication Technologies (ICTs) layer of a Smart Grid are usually conceived as isolated systems that typically result in rigid hardware architectures, which are hard to interoperate, manage and adapt to new situations. If the Smart Grid paradigm is to be presented as a solution to the demand for distributed and intelligent energy management systems, it is necessary to deploy innovative IT infrastructures to support these smart functions. One of the main issues of Smart Grids is the heterogeneity of the communication protocols used by the smart sensor devices that integrate them. The use of the concept of the Web of Things is proposed in this work to tackle this problem. More specifically, the implementation of a Smart Grid's Web of Things, coined as the Web of Energy, is introduced. The purpose of this paper is to propose the use of the Web of Energy, by means of the Actor Model paradigm, to address the latent deployment and management limitations of Smart Grids. Smart Grid designers can use the Actor Model as a design model for an infrastructure that supports the intelligent functions demanded and is capable of grouping and converting the heterogeneity of traditional infrastructures into the homogeneity feature of the Web of Things. The experiments conducted endorse the feasibility of this solution and encourage practitioners to direct their efforts in this direction.
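A minimal actor-style sketch conveys how per-protocol actors can translate heterogeneous sensor messages into homogeneous web-style messages. The protocol names, message shapes, and the single-threaded mailbox loop below are illustrative assumptions, not the paper's implementation.

```python
import queue

# Minimal actor-model sketch of the "Web of Energy" idea: one actor per
# heterogeneous sensor protocol, each translating native readings into a
# homogeneous message. All names and formats here are invented examples.

class TranslatorActor:
    def __init__(self, protocol, to_web):
        self.protocol = protocol
        self.to_web = to_web          # protocol-specific translation function
        self.mailbox = queue.Queue()  # actors communicate only via messages

    def send(self, msg):
        self.mailbox.put(msg)

    def process(self, out):
        # react to queued messages; no shared mutable state between actors
        while not self.mailbox.empty():
            out.append(self.to_web(self.mailbox.get()))

modbus = TranslatorActor("modbus", lambda m: {"sensor": m[0], "watts": m[1]})
zigbee = TranslatorActor("zigbee", lambda m: {"sensor": m["id"], "watts": m["p"]})
modbus.send(("meter-1", 230))               # native tuple-style reading
zigbee.send({"id": "meter-2", "p": 115})    # native dict-style reading
web_messages = []
for actor in (modbus, zigbee):
    actor.process(web_messages)
```

Downstream consumers then see one uniform message format regardless of the device protocol, which is the homogeneity property the abstract attributes to the Web of Things.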

  7. OGC and Grid Interoperability in enviroGRIDS Project

    NASA Astrophysics Data System (ADS)

    Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas

    2010-05-01

EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure of the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science oriented applications, as does Grid-oriented technology, which is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is the interoperability between geospatial and Grid infrastructures by providing the basic and extended features of both technologies. Geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields but especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the additional issues introduced (data management, secure data transfer, data distribution and data computation), the need for an infrastructure capable of managing all those problems becomes an important aspect. The Grid promotes and facilitates the secure interoperation of heterogeneous distributed geospatial data within a distributed environment, enables the creation and management of large distributed computational jobs, and assures a security level for communication and transfer of messages based on certificates. This presentation analyzes and discusses the most significant use cases for enabling OGC Web services interoperability with the Grid environment and focuses on the description and implementation of the most promising one.
In these use cases we pay special attention to issues such as: the relations between the computational grid and the OGC Web service protocols; the advantages offered by Grid technology, such as providing secure interoperability between distributed geospatial resources; and the issues introduced by the integration of distributed geospatial data in a secure environment: data and service discovery, management, access and computation. The enviroGRIDS project proposes a new architecture which allows a flexible and scalable approach for integrating the geospatial domain, represented by the OGC Web services, with the Grid domain, represented by the gLite middleware. The parallelism offered by the Grid technology is discussed and explored at the data level, management level and computation level. The analysis is carried out for OGC Web service interoperability in general, but specific details are emphasized for the Web Map Service (WMS), Web Feature Service (WFS), Web Coverage Service (WCS), Web Processing Service (WPS) and Catalog Service for the Web (CSW). Issues regarding the mapping and the interoperability between the OGC and the Grid standards and protocols are analyzed, as they are the basis for solving the communication problems between the two environments: grid and geospatial. The presentation mainly highlights how the Grid environment and Grid application capabilities can be extended and utilized in geospatial interoperability. Interoperability between geospatial and Grid infrastructures provides features such as the specific geospatial complex functionality and the high computational power and security of the Grid, high spatial model resolution and wide geographical area coverage, and flexible combination and interoperability of the geographical models.
In accordance with Service Oriented Architecture concepts and the requirements of interoperability between geospatial and Grid infrastructures, each main piece of functionality is visible from the enviroGRIDS Portal and, consequently, from end-user applications such as Decision Maker/Citizen oriented Applications. The enviroGRIDS portal is the user's single entry point into the system and presents a uniform graphical user interface. Main reference for further information: [1] enviroGRIDS Project, http://www.envirogrids.net/
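One interoperability pattern suggested by such use cases is wrapping an OGC WPS Execute request as a job description for the Grid middleware. The sketch below is hypothetical: the JDL-like field names, the process name, and the executable path are invented for illustration and do not reflect the project's actual gLite integration.

```python
# Hypothetical mapping from a WPS Execute request to a grid job description.
# Field names mimic JDL in spirit only; paths and processes are invented.

def wps_to_grid_job(wps_request):
    return {
        "Executable": "/opt/wps/%s" % wps_request["process"],  # invented path
        "Arguments": " ".join("%s=%s" % kv for kv in sorted(wps_request["inputs"].items())),
        "InputSandbox": list(wps_request["inputs"].values()),  # files shipped to the node
    }

job = wps_to_grid_job({"process": "ndvi", "inputs": {"scene": "scene42.tif"}})
```

A real bridge would also map the WPS response back to the job's output sandbox and handle the certificate-based security layer mentioned above.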

  8. GRID-Launcher v.1.0.

    NASA Astrophysics Data System (ADS)

    Deniskina, N.; Brescia, M.; Cavuoti, S.; d'Angelo, G.; Laurino, O.; Longo, G.

GRID-launcher-1.0 was built within the VO-Tech framework as a software interface between the UK-ASTROGRID and generic GRID infrastructures, in order to allow any ASTROGRID user to launch computing-intensive tasks on the GRID from the ASTROGRID Workbench or Desktop. Even though it is of general application, so far the Grid-Launcher has been tested on a few selected software packages (VONeural-MLP, VONeural-SVM, Sextractor and SWARP) and on the SCOPE-GRID.

  9. The ALICE analysis train system

    NASA Astrophysics Data System (ADS)

    Zimmermann, Markus; ALICE Collaboration

    2015-05-01

    In the ALICE experiment hundreds of users are analyzing big datasets on a Grid system. High throughput and short turn-around times are achieved by a centralized system called the LEGO trains. This system combines analysis from different users in so-called analysis trains which are then executed within the same Grid jobs thereby reducing the number of times the data needs to be read from the storage systems. The centralized trains improve the performance, the usability for users and the bookkeeping in comparison to single user analysis. The train system builds upon the already existing ALICE tools, i.e. the analysis framework as well as the Grid submission and monitoring infrastructure. The entry point to the train system is a web interface which is used to configure the analysis and the desired datasets as well as to test and submit the train. Several measures have been implemented to reduce the time a train needs to finish and to increase the CPU efficiency.
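The I/O saving behind the trains can be shown with a toy model in which every wagon (user task) processes each input file during a single shared read. This is an illustrative sketch of the principle only, not the LEGO train code, and the data model is invented.

```python
# Toy model of an analysis train: many user tasks ("wagons") share one pass
# over the data, so each file is read once instead of once per task.

def run_train(files, wagons):
    reads = 0
    results = {name: [] for name, _ in wagons}
    for f in files:
        data = f["events"]   # one read of the file ...
        reads += 1
        for name, task in wagons:
            results[name].append(task(data))  # ... shared by every wagon
    return reads, results

wagons = [("count", len), ("total", sum)]
reads, results = run_train([{"events": [1, 2, 3]}, {"events": [4, 5]}], wagons)
# running the wagons as separate jobs would cost len(files) * len(wagons) reads
```

With N wagons the storage systems see roughly N times fewer reads, which is where the throughput and turn-around gains of the centralized trains come from.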

  10. Physicists Get INSPIREd: INSPIRE Project and Grid Applications

    NASA Astrophysics Data System (ADS)

    Klem, Jukka; Iwaszkiewicz, Jan

    2011-12-01

    INSPIRE is the new high-energy physics scientific information system developed by CERN, DESY, Fermilab and SLAC. INSPIRE combines the curated and trusted contents of SPIRES database with Invenio digital library technology. INSPIRE contains the entire HEP literature with about one million records and in addition to becoming the reference HEP scientific information platform, it aims to provide new kinds of data mining services and metrics to assess the impact of articles and authors. Grid and cloud computing provide new opportunities to offer better services in areas that require large CPU and storage resources including document Optical Character Recognition (OCR) processing, full-text indexing of articles and improved metrics. D4Science-II is a European project that develops and operates an e-Infrastructure supporting Virtual Research Environments (VREs). It develops an enabling technology (gCube) which implements a mechanism for facilitating the interoperation of its e-Infrastructure with other autonomously running data e-Infrastructures. As a result, this creates the core of an e-Infrastructure ecosystem. INSPIRE is one of the e-Infrastructures participating in D4Science-II project. In the context of the D4Science-II project, the INSPIRE e-Infrastructure makes available some of its resources and services to other members of the resulting ecosystem. Moreover, it benefits from the ecosystem via a dedicated Virtual Organization giving access to an array of resources ranging from computing and storage resources of grid infrastructures to data and services.

  11. Unlocking the potential of smart grid technologies with behavioral science

    PubMed Central

    Sintov, Nicole D.; Schultz, P. Wesley

    2015-01-01

    Smart grid systems aim to provide a more stable and adaptable electricity infrastructure, and to maximize energy efficiency. Grid-linked technologies vary widely in form and function, but generally share common potentials: to reduce energy consumption via efficiency and/or curtailment, to shift use to off-peak times of day, and to enable distributed storage and generation options. Although end users are central players in these systems, they are sometimes not central considerations in technology or program design, and in some cases, their motivations for participating in such systems are not fully appreciated. Behavioral science can be instrumental in engaging end-users and maximizing the impact of smart grid technologies. In this paper, we present emerging technologies made possible by a smart grid infrastructure, and for each we highlight ways in which behavioral science can be applied to enhance their impact on energy savings. PMID:25914666

  12. Unlocking the potential of smart grid technologies with behavioral science.

    PubMed

    Sintov, Nicole D; Schultz, P Wesley

    2015-01-01

    Smart grid systems aim to provide a more stable and adaptable electricity infrastructure, and to maximize energy efficiency. Grid-linked technologies vary widely in form and function, but generally share common potentials: to reduce energy consumption via efficiency and/or curtailment, to shift use to off-peak times of day, and to enable distributed storage and generation options. Although end users are central players in these systems, they are sometimes not central considerations in technology or program design, and in some cases, their motivations for participating in such systems are not fully appreciated. Behavioral science can be instrumental in engaging end-users and maximizing the impact of smart grid technologies. In this paper, we present emerging technologies made possible by a smart grid infrastructure, and for each we highlight ways in which behavioral science can be applied to enhance their impact on energy savings.

  13. Unlocking the potential of smart grid technologies with behavioral science

    DOE PAGES

    Sintov, Nicole D.; Schultz, P. Wesley

    2015-04-09

Smart grid systems aim to provide a more stable and adaptable electricity infrastructure, and to maximize energy efficiency. Grid-linked technologies vary widely in form and function, but generally share common potentials: to reduce energy consumption via efficiency and/or curtailment, to shift use to off-peak times of day, and to enable distributed storage and generation options. Although end users are central players in these systems, they are sometimes not central considerations in technology or program design, and in some cases, their motivations for participating in such systems are not fully appreciated. Behavioral science can be instrumental in engaging end-users and maximizing the impact of smart grid technologies. In this study, we present emerging technologies made possible by a smart grid infrastructure, and for each we highlight ways in which behavioral science can be applied to enhance their impact on energy savings.

  14. Unlocking the potential of smart grid technologies with behavioral science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sintov, Nicole D.; Schultz, P. Wesley

Smart grid systems aim to provide a more stable and adaptable electricity infrastructure, and to maximize energy efficiency. Grid-linked technologies vary widely in form and function, but generally share common potentials: to reduce energy consumption via efficiency and/or curtailment, to shift use to off-peak times of day, and to enable distributed storage and generation options. Although end users are central players in these systems, they are sometimes not central considerations in technology or program design, and in some cases, their motivations for participating in such systems are not fully appreciated. Behavioral science can be instrumental in engaging end-users and maximizing the impact of smart grid technologies. In this study, we present emerging technologies made possible by a smart grid infrastructure, and for each we highlight ways in which behavioral science can be applied to enhance their impact on energy savings.

  15. Advanced e-Infrastructures for Civil Protection applications: the CYCLOPS Project

    NASA Astrophysics Data System (ADS)

    Mazzetti, P.; Nativi, S.; Verlato, M.; Ayral, P. A.; Fiorucci, P.; Pina, A.; Oliveira, J.; Sorani, R.

    2009-04-01

    During the full cycle of the emergency management, Civil Protection operative procedures involve many actors belonging to several institutions (civil protection agencies, public administrations, research centers, etc.) playing different roles (decision-makers, data and service providers, emergency squads, etc.). In this context the sharing of information is a vital requirement to make correct and effective decisions. Therefore a European-wide technological infrastructure providing a distributed and coordinated access to different kinds of resources (data, information, services, expertise, etc.) could enhance existing Civil Protection applications and even enable new ones. Such European Civil Protection e-Infrastructure should be designed taking into account the specific requirements of Civil Protection applications and the state-of-the-art in the scientific and technological disciplines which could make the emergency management more effective. In the recent years Grid technologies have reached a mature state providing a platform for secure and coordinated resource sharing between the participants collected in the so-called Virtual Organizations. Moreover the Earth and Space Sciences Informatics provide the conceptual tools for modeling the geospatial information shared in Civil Protection applications during its entire lifecycle. Therefore a European Civil Protection e-infrastructure might be based on a Grid platform enhanced with Earth Sciences services. In the context of the 6th Framework Programme the EU co-funded Project CYCLOPS (CYber-infrastructure for CiviL protection Operative ProcedureS), ended in December 2008, has addressed the problem of defining the requirements and identifying the research strategies and innovation guidelines towards an advanced e-Infrastructure for Civil Protection. Starting from the requirement analysis CYCLOPS has proposed an architectural framework for a European Civil Protection e-Infrastructure. 
This architectural framework has been evaluated through the development of prototypes of two operative applications used by the Italian Civil Protection for Wild Fires Risk Assessment (RISICO) and by the French Civil Protection for Flash Flood Risk Management (SPC-GD). The results of these studies and proofs of concept have been used as the basis for the definition of research and innovation strategies aiming at the detailed design and implementation of the infrastructure. In particular, the main research themes and topics to be addressed have been identified and detailed. Finally, the obstacles to the innovation required for the adoption of this infrastructure, and possible strategies to overcome them, have been discussed.

  16. Operating a production pilot factory serving several scientific domains

    NASA Astrophysics Data System (ADS)

    Sfiligoi, I.; Würthwein, F.; Andrews, W.; Dost, J. M.; MacNeill, I.; McCrea, A.; Sheripon, E.; Murphy, C. W.

    2011-12-01

    Pilot infrastructures are becoming prominent players in the Grid environment. A major advantage is the reduced effort required from the user communities (also known as Virtual Organizations, or VOs), achieved by outsourcing the Grid interfacing services, i.e. the pilot factory, to Grid experts. One such pilot factory, based on the glideinWMS pilot infrastructure, is operated by the Open Science Grid at the University of California San Diego (UCSD). This pilot factory serves multiple VOs from several scientific domains. Currently the three major clients are the analysis operations of the HEP experiment CMS, the community VO HCC, which serves mostly math, biology and computer science users, and the structural biology VO NEBioGrid. The UCSD glidein factory allows the served VOs to use Grid resources distributed over 150 sites in North and South America, Europe, and Asia. This paper presents the steps taken to create a production-quality pilot factory, together with the challenges encountered along the way.

  17. Real-Time Optimization and Control of Next-Generation Distribution Infrastructure

    Science.gov Websites

    This project develops innovative, real-time optimization and control methods for next-generation distribution infrastructure.

  18. Interoperable PKI Data Distribution in Computational Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pala, Massimiliano; Cholia, Shreyas; Rea, Scott A.

    One of the most successful working examples of virtual organizations, computational grids need authentication mechanisms that inter-operate across domain boundaries. Public Key Infrastructures (PKIs) provide sufficient flexibility to allow resource managers to securely grant access to their systems in such distributed environments. However, as PKIs grow and services are added to enhance both security and usability, users and applications must struggle to discover available resources, particularly when the Certification Authority (CA) is alien to the relying party. This article presents how to overcome these limitations of the current grid authentication model by integrating the PKI Resource Query Protocol (PRQP) into the Grid Security Infrastructure (GSI).

  19. Fieldservers and Sensor Service Grid as Real-time Monitoring Infrastructure for Ubiquitous Sensor Networks

    PubMed Central

    Honda, Kiyoshi; Shrestha, Aadit; Witayangkurn, Apichon; Chinnachodteeranun, Rassarin; Shimamura, Hiroshi

    2009-01-01

    The fieldserver is an Internet-based observation robot that provides an outdoor solution for monitoring environmental parameters in real time. The data from its sensors can be collected on a central server infrastructure and published on the Internet. The information from the sensor network will contribute to monitoring and modeling of various environmental issues in Asia, including agriculture, food, pollution, disasters, and climate change. An initiative called Sensor Asia is developing an infrastructure called the Sensor Service Grid (SSG), which integrates fieldservers and Web GIS to realize easy and low-cost installation and operation of ubiquitous field sensor networks. PMID:22574018

  20. Application of large-scale computing infrastructure for diverse environmental research applications using GC3Pie

    NASA Astrophysics Data System (ADS)

    Maffioletti, Sergio; Dawes, Nicholas; Bavay, Mathias; Sarni, Sofiane; Lehning, Michael

    2013-04-01

    The Swiss Experiment platform (SwissEx: http://www.swiss-experiment.ch) provides a distributed storage and processing infrastructure for environmental research experiments. The aim of the second-phase project (the Open Support Platform for Environmental Research, OSPER, 2012-2015) is to develop the existing infrastructure to provide scientists with an improved workflow, including pre-defined, documented and connected processing routines. A large-scale computing and data facility is required to provide reliable and scalable access to data for analysis, and it is desirable that such an infrastructure be free of the limitations of traditional data handling methods. Such an infrastructure has been developed using the cloud-based part of the Swiss national infrastructure SMSCG (http://www.smscg.ch) and Academic Cloud. The infrastructure under construction supports two main usage models: 1) Ad-hoc data analysis scripts: simple processing scripts, written by the environmental researchers themselves, which can be applied to large data sets via the high-performance infrastructure. Examples of this type are spatial statistical analysis scripts (R-based), mostly computed on raw meteorological and/or soil moisture data, which provide processed output in the form of a grid, a plot, or a KML file. 2) Complex models: a more intensive data analysis pipeline centered (initially) around the physical process model Alpine3D and the MeteoIO plugin; depending on the data set, this may require a tightly coupled infrastructure. SMSCG already supports Alpine3D executions as both regular grid jobs and as virtual software appliances. A dedicated appliance with the Alpine3D-specific libraries has been created and made available through the SMSCG infrastructure.
The analysis pipelines are activated and supervised by simple control scripts that, depending on the data fetched from the meteorological stations, launch new instances of the Alpine3D appliance, execute location-based subroutines at each grid point, and store the results back into the central repository for post-processing. An optional extension of this infrastructure would provide a 'ring buffer'-type database, such that model results (e.g. test runs made to check parameter dependency or for development) can be visualised and downloaded after completion without being committed to permanent storage.
Data organization: Data collected from sensors are archived and classified in distributed sites connected with an open-source software middleware, GSN. Publicly available data can be accessed through common web services and via a cloud storage server (based on Swift). Collocation of the data and processing in the cloud would eventually eliminate data transfer requirements.
Execution control logic: Execution of the data analysis pipelines (for both the R-based analysis and the Alpine3D simulations) has been implemented using the GC3Pie framework developed by UZH (https://code.google.com/p/gc3pie/). This allows the pipelines to be described in terms of software appliances and executed at large scale with fault tolerance. GC3Pie also allows the execution of large campaigns of appliances to be supervised as a single simulation. This poster will present the fundamental architectural components of the data analysis pipelines together with initial experimental results.
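    The campaign-style execution control described above can be sketched in plain Python. This is a toy stand-in for the fan-out-and-supervise pattern, not the actual GC3Pie API; `run_campaign` and the per-grid-point `run_model` callable are illustrative names.

```python
import concurrent.futures

def run_campaign(grid_points, run_model, max_workers=4):
    """Fan out one model execution per grid point, gather results,
    and record failures for resubmission by the supervisor."""
    results, failed = {}, []
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(run_model, p): p for p in grid_points}
        for fut in concurrent.futures.as_completed(futures):
            point = futures[fut]
            try:
                results[point] = fut.result()
            except Exception:
                failed.append(point)  # a real supervisor would resubmit these
    return results, failed
```

    A framework like GC3Pie adds what this sketch omits: persistent job state, resubmission policies, and dispatch to remote grid or cloud back-ends rather than local threads.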

  1. FermiGrid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yocum, D.R.; Berman, E.; Canal, P.

    2007-05-01

    As one of the founding members of the Open Science Grid Consortium (OSG), Fermilab enables coherent access to its production resources through the Grid infrastructure system called FermiGrid. This system successfully provides for centrally managed grid services, opportunistic resource access, development of OSG Interfaces for Fermilab, and an interface to the Fermilab dCache system. FermiGrid supports virtual organizations (VOs) including high energy physics experiments (USCMS, MINOS, D0, CDF, ILC), astrophysics experiments (SDSS, Auger, DES), biology experiments (GADU, Nanohub) and educational activities.

  2. Towards an advanced e-Infrastructure for Civil Protection applications: Research Strategies and Innovation Guidelines

    NASA Astrophysics Data System (ADS)

    Mazzetti, P.; Nativi, S.; Verlato, M.; Angelini, V.

    2009-04-01

    In the context of the EU co-funded project CYCLOPS (http://www.cyclops-project.eu) the problem of designing an advanced e-Infrastructure for Civil Protection (CP) applications has been addressed. As a preliminary step, studies of European CP systems and operational applications were performed in order to define their specific system requirements. At a higher level it was verified that CP applications are usually conceived to map CP business processes involving different levels of processing, including data access, data processing, and output visualization. At their core they usually run one or more Earth Science models for information extraction. The traditional approach based on the development of monolithic applications presents some limitations related to flexibility (e.g. the possibility of running the same models with different input data sources, or different models with the same data sources) and scalability (e.g. launching several runs for different scenarios, or implementing more accurate and computationally demanding models). Flexibility can be addressed by adopting a modular design based on an SOA and standard services and models, such as OWS and ISO for geospatial services. Distributed computing and storage solutions can improve scalability. Based on these considerations, an architectural framework has been defined. It consists of a Web Service layer, providing advanced services for CP applications (e.g. standard geospatial data sharing and processing services), built on top of an underlying Grid platform. This framework has been tested through the development of prototypes as proofs of concept. These theoretical studies and proofs of concept demonstrated that, although Grid and geospatial technologies can provide significant benefits to CP applications in terms of scalability and flexibility, current platforms are designed around requirements different from those of CP.
In particular, CP applications have strict requirements in terms of: a) real-time capabilities, prioritizing response time over accuracy; b) security services able to support complex data policies and trust relationships; c) interoperability with existing or planned infrastructures (e.g. e-Government, INSPIRE-compliant, etc.). These requirements are in fact the main reason why CP applications differ from Earth Science applications. Therefore further research is required to design and implement an advanced e-Infrastructure satisfying these specific requirements. Five themes requiring further research were identified: Grid Infrastructure Enhancement, Advanced Middleware for CP Applications, Security and Data Policies, CP Applications Enablement, and Interoperability. For each theme several research topics were proposed and detailed, targeted at solving specific problems in the implementation of an effective operational European e-Infrastructure for CP applications.

  3. Security architecture for health grid using ambient intelligence.

    PubMed

    Naqvi, S; Riguidel, M; Demeure, I

    2005-01-01

    To propose a novel approach for incorporating ambient intelligence into the health grid security architecture. Security concerns are severely impeding the grid community's efforts to expand into health applications. In this paper, we propose a high-level approach to incorporating ambient intelligence into the health grid security architecture and argue that this will significantly improve the current state of the grid security paradigm while providing a more user-friendly environment. We believe that the time is right to shift the onus of traditional security mechanisms onto new technologies. Incorporating ambient intelligence into the security architecture of a grid will not only make the security paradigm more robust but also provide an attractive vision for the future of computing by bringing the two worlds together. In this article we propose an evolutionary approach to utilizing smart devices in the grid security architecture. We argue that such an infrastructure will add unique features to existing grid security paradigms by offering fortified and continuous monitoring. The new security architecture will be comprehensive in nature without being cumbersome for users, since it adapts to their needs without prying into their lives. We have identified a new security architecture paradigm for the health grid that will not only make the security mechanism robust but will also provide a high level of user-friendliness. As our approach is a first contribution to this problem, a number of issues remain open for future research. However, the prospects are fascinating.

  4. The GENIUS Grid Portal and robot certificates: a new tool for e-Science

    PubMed Central

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Milanesi, Luciano; Maggi, Giorgio Pietro; Vicario, Saverio

    2009-01-01

    Background Grid technology is the computing model that allows users to share a plethora of distributed computational resources regardless of their geographical location. Up to now, the strict security policies required to access distributed computing resources have been a major limiting factor in broadening the usage of Grids to a wide community of users. Grid security is indeed based on the Public Key Infrastructure (PKI) of X.509 certificates, and the procedure to obtain and manage those certificates is unfortunately not straightforward. A first step towards making Grids more appealing for new users has recently been achieved with the adoption of robot certificates. Methods Robot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful, for instance, to automate grid service monitoring, data processing production, and distributed data collection systems. Basically, these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by everyone interested in running the related applications in a Grid environment through a user-friendly graphical interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended to support the new authentication based on these robot certificates. Results The work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can thus be extended to users belonging to several scientific domains, providing an asset for raising Grid awareness among a wide number of potential users.
Conclusion The adoption of Grid portals extended with robot certificates can significantly contribute to creating transparent access to the computational resources of Grid infrastructures, helping spread this new paradigm through researchers' working lives and enabling them to address new global scientific challenges. The evaluated solution can of course be extended to other portals, applications and scientific communities. PMID:19534747

  5. The GENIUS Grid Portal and robot certificates: a new tool for e-Science.

    PubMed

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Milanesi, Luciano; Maggi, Giorgio Pietro; Vicario, Saverio

    2009-06-16

    Grid technology is the computing model that allows users to share a plethora of distributed computational resources regardless of their geographical location. Up to now, the strict security policies required to access distributed computing resources have been a major limiting factor in broadening the usage of Grids to a wide community of users. Grid security is indeed based on the Public Key Infrastructure (PKI) of X.509 certificates, and the procedure to obtain and manage those certificates is unfortunately not straightforward. A first step towards making Grids more appealing for new users has recently been achieved with the adoption of robot certificates. Robot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful, for instance, to automate grid service monitoring, data processing production, and distributed data collection systems. Basically, these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by everyone interested in running the related applications in a Grid environment through a user-friendly graphical interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended to support the new authentication based on these robot certificates. The work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can thus be extended to users belonging to several scientific domains, providing an asset for raising Grid awareness among a wide number of potential users.
The adoption of Grid portals extended with robot certificates can significantly contribute to creating transparent access to the computational resources of Grid infrastructures, helping spread this new paradigm through researchers' working lives and enabling them to address new global scientific challenges. The evaluated solution can of course be extended to other portals, applications and scientific communities.

  6. A tool for optimization of the production and user analysis on the Grid

    NASA Astrophysics Data System (ADS)

    Grigoras, Costin; Carminati, Federico; Vladimirovna Datskova, Olga; Schreiner, Steffen; Lee, Sehoon; Zhu, Jianlin; Gheata, Mihaela; Gheata, Andrei; Saiz, Pablo; Betev, Latchezar; Furano, Fabrizio; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Bagnasco, Stefano; Peters, Andreas Joachim; Saiz Santos, Maria Dolores

    2011-12-01

    With the LHC and ALICE entering full operation and production mode, the amount of simulation, RAW data processing and end-user analysis computational tasks is increasing. The efficient management of all these tasks, which differ widely in lifecycle, amounts of processed data and methods of analyzing the end result, required the development and deployment of new tools in addition to the existing Grid infrastructure. To facilitate the management of the large-scale simulation and raw data reconstruction tasks, ALICE has developed a production framework called the Lightweight Production Manager (LPM). LPM automatically submits jobs to the Grid based on triggers and conditions, for example after the completion of a physics run. It follows the evolution of each job and publishes the results on the web for worldwide access by ALICE physicists. This framework is tightly integrated with the ALICE Grid framework AliEn. In addition to publishing job status, LPM provides a fully authenticated interface to the AliEn Grid catalogue for browsing and downloading files, and in the near future will provide simple types of data analysis through ROOT plugins. The framework is also being extended to allow management of end-user jobs.

  7. Prototyping a Web-of-Energy Architecture for Smart Integration of Sensor Networks in Smart Grids Domain

    PubMed Central

    Vernet, David; Corral, Guiomar

    2018-01-01

    Sensor networks and the Internet of Things have driven the evolution of traditional electric power distribution networks towards a new paradigm referred to as the Smart Grid. However, the different elements that compose the Information and Communication Technologies (ICT) layer of a Smart Grid are usually conceived as isolated systems that typically result in rigid hardware architectures, which are hard to interoperate, manage, and adapt to new situations. If the Smart Grid paradigm is to be presented as a solution to the demand for distributed and intelligent energy management systems, it is necessary to deploy innovative IT infrastructures to support these smart functions. One of the main issues of Smart Grids is the heterogeneity of the communication protocols used by the smart sensor devices that integrate them. This work proposes the concept of the Web of Things to tackle this problem. More specifically, it introduces the implementation of a Smart Grid's Web of Things, coined the Web of Energy. The purpose of this paper is to propose the use of the Web of Energy, by means of the Actor Model paradigm, to address the latent deployment and management limitations of Smart Grids. Smart Grid designers can use the Actor Model as a design model for an infrastructure that supports the intelligent functions demanded and is capable of grouping and converting the heterogeneity of traditional infrastructures into the homogeneity featured by the Web of Things. The experiments conducted endorse the feasibility of this solution and encourage practitioners to direct their efforts towards it. PMID:29385748
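    The Actor Model idea sketched above (one mailbox per device protocol, heterogeneous messages normalized into one homogeneous representation) can be illustrated with a minimal Python toy; the `ProtocolActor` class and its `translate` hook are our illustrative inventions, not the architecture actually implemented in the paper.

```python
import queue
import threading

class ProtocolActor(threading.Thread):
    """Toy actor: consumes protocol-specific sensor messages from its mailbox
    and re-emits them in one homogeneous, Web-of-Things-style form."""
    def __init__(self, translate):
        super().__init__(daemon=True)
        self.mailbox = queue.Queue()   # incoming heterogeneous messages
        self.out = queue.Queue()       # normalized output
        self.translate = translate

    def run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:            # poison pill stops the actor
                break
            self.out.put(self.translate(msg))
```

    The appeal of the pattern is that each device protocol only needs its own `translate` function; everything downstream of the actors sees a single message format.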

  8. Final Report Feasibility Study for the California Wave Energy Test Center (CalWavesm)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blakeslee, Samuel Norman; Toman, William I.; Williams, Richard B.

    The California Wave Energy Test Center (CalWave) Feasibility Study project was funded over multiple phases by the Department of Energy to perform an interdisciplinary feasibility assessment analyzing the engineering, permitting, and stakeholder requirements to establish an open-water, fully energetic, grid-connected wave energy test center off the coast of California, for the purposes of advancing U.S. wave energy research, development, and testing capabilities. Work under this grant included wave energy resource characterization, grid impact and interconnection requirements, port infrastructure and maritime industry capability/suitability to accommodate the industry at research, demonstration and commercial scale, and macro- and micro-siting considerations. CalWave Phase I performed a macro-siting and down-selection process focusing on two potential test sites in California: Humboldt Bay and Vandenberg Air Force Base. This work resulted in the Vandenberg Air Force Base site being chosen as the most favorable site based on a peer-reviewed criteria matrix. CalWave Phase II focused on four siting location alternatives along the Vandenberg Air Force Base coastline and culminated in a final siting down-selection. Key outcomes from this work include completion of preliminary engineering and systems integration work, a robust turnkey cost estimate, a shoreside and subsea hazards assessment, a storm wave analysis, lessons-learned reports from several maritime disciplines, test center benchmarking against existing international test sites, analysis of existing applicable environmental literature, completion of a preliminary regulatory, permitting and licensing roadmap, robust interaction and engagement with state and federal regulatory agency personnel and local stakeholders, and the population of a Draft Federal Energy Regulatory Commission (FERC) Preliminary Application Document (PAD).
Analysis of existing offshore oil and gas infrastructure was also performed to assess the potential value and re-use scenarios of offshore platform infrastructure and the associated subsea power cables and shoreside substations. The CalWave project team was well balanced, comprising experts from industry, academia, and state and federal regulatory agencies. The feasibility study finds that the CalWave Test Center has the potential to provide the most viable path to commercialization for wave energy in the United States.

  9. Continental-Scale Estimates of Runoff Using Future Climate ...

    EPA Pesticide Factsheets

    Recent runoff events have had serious repercussions for both natural ecosystems and human infrastructure. Understanding how shifts in storm event intensities are expected to change runoff responses is valuable for local, regional, and landscape planning. To address this challenge, relative changes in runoff under predicted future climate conditions were estimated over different biophysical areas of the CONterminous U.S. (CONUS). Runoff was estimated using the Curve Number (CN) method developed by the USDA Soil Conservation Service (USDA, 1986). A seamless gridded dataset representing a CN for existing land use/land cover (LULC) across the CONUS was used along with two different storm event grids created specifically for this effort. The two storm event grids represent a 2-year and a 100-year, 24-hour storm event under current climate conditions. The storm event grids were generated using a compilation of county-scale Texas USGS Intensity-Duration-Frequency (IDF) data (provided by William Asquith, USGS, Lubbock, Texas) and the NOAA Atlas 2 and NOAA Atlas 14 gridded data sets. Future CN runoff was predicted using extreme storm event grids created with a method based on Kao and Ganguly (2011), in which precipitation extremes reflect changes in the saturated water vapor pressure of the atmosphere in response to temperature changes. The Clausius-Clapeyron relationship establishes that the total water vapor mass of fully saturated air increases with increasing temperature, leading to more intense precipitation extremes under warming.
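    The SCS Curve Number method referenced above maps a storm depth P and a curve number CN to a runoff depth Q via S = 1000/CN - 10 (potential maximum retention, inches) and Q = (P - 0.2S)^2 / (P + 0.8S) when P exceeds the initial abstraction 0.2S. A direct transcription:

```python
def scs_runoff(precip_in, cn):
    """SCS Curve Number runoff depth (inches) for a 24-hour storm depth."""
    s = 1000.0 / cn - 10.0        # potential maximum retention
    ia = 0.2 * s                  # initial abstraction
    if precip_in <= ia:
        return 0.0                # all rainfall absorbed before runoff starts
    # note: precip_in + 0.8*s == (precip_in - ia) + s, since ia = 0.2*s
    return (precip_in - ia) ** 2 / (precip_in - ia + s)
```

    For example, a 5-inch storm on CN 80 land gives S = 2.5 in, Ia = 0.5 in, and Q = 4.5^2 / 7 ≈ 2.89 in of runoff; higher CNs (more impervious cover) yield more runoff for the same storm.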

  10. Cyberwarfare on the Electricity Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murarka, N.; Ramesh, V.C.

    2000-03-20

    The report analyzes the possibility of cyberwarfare against the electricity infrastructure. The ongoing deregulation of the electricity industry makes the power grid all the more vulnerable to cyber attacks. The report models the components of the power system's information systems, potential threats, and protective measures, and thereby offers a framework for infrastructure protection.

  11. Assistive Awareness in Smart Grids

    NASA Astrophysics Data System (ADS)

    Bourazeri, Aikaterini; Almajano, Pablo; Rodriguez, Inmaculada; Lopez-Sanchez, Maite

    The following sections are included: * Introduction * Background * The User-Infrastructure Interface * User Engagement through Assistive Awareness * Research Impact * Serious Games for Smart Grids * Serious Game Technology * Game scenario * Game mechanics * Related Work * Summary and Conclusions

  12. SimWIND: A Geospatial Infrastructure Model for Wind Energy Production and Transmission

    NASA Astrophysics Data System (ADS)

    Middleton, R. S.; Phillips, B. R.; Bielicki, J. M.

    2009-12-01

    Wind is a clean, enduring energy resource with a capacity to satisfy 20% or more of the electricity needs in the United States. A chief obstacle to realizing this potential is the general paucity of electrical transmission lines between promising wind resources and primary load centers. Successful exploitation of this resource will therefore require carefully planned enhancements to the electric grid. To this end, we present the model SimWIND for self-consistent optimization of the geospatial arrangement and cost of wind energy production and transmission infrastructure. Given a set of wind farm sites that satisfy meteorological viability and stakeholder interest, our model simultaneously determines where and how much electricity to produce, where to build new transmission infrastructure and with what capacity, and where to use existing infrastructure in order to minimize the cost for delivering a given amount of electricity to key markets. Costs and routing of transmission line construction take into account geographic and social factors, as well as connection and delivery expenses (transformers, substations, etc.). We apply our model to Texas and consider how findings complement the 2008 Electric Reliability Council of Texas (ERCOT) Competitive Renewable Energy Zones (CREZ) Transmission Optimization Study. Results suggest that integrated optimization of wind energy infrastructure and cost using SimWIND could play a critical role in wind energy planning efforts.
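    The terrain-aware transmission-routing step can be illustrated with a least-cost path over a raster of per-cell construction-cost multipliers (a generic Dijkstra sketch under that assumption, not SimWIND's actual formulation; geographic and social factors would be folded into the cell costs):

```python
import heapq

def route_cost(cost_grid, start, goal):
    """Dijkstra over a raster; cost accrues on each cell entered.
    start/goal are (row, col) tuples; returns the minimum total cost."""
    rows, cols = len(cost_grid), len(cost_grid[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost_grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")
```

    Raising a cell's multiplier (e.g. over protected land) steers the optimal corridor around it, which is the basic mechanism behind geospatially optimized line routing.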

  13. Cybersecurity Awareness in the Power Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean; Franklin, Lyndsey; Le Blanc, Katya L.

    2016-07-10

    We report on a series of interviews and observations conducted with control room dispatchers in a bulk electric system. These dispatchers must react quickly to incidents as they happen in order to ensure the reliability and safe operation of the power grid. They do not have time to evaluate incidents for signs of cyber-attack as part of their initial response. Cyber-attack detection involves multiple personnel from a variety of roles at both local and regional levels. Smart grid technology will improve the detection and defense capabilities of the future grid; however, the current infrastructure remains a mixture of old and new equipment which will continue to operate for some time. Thus, research still needs to focus on strategies for detecting malicious activity on the current infrastructure, as well as on protection and remediation.

  14. Business Case Analysis of the Marine Corps Base Pendleton Virtual Smart Grid

    DTIC Science & Technology

    2017-06-01

    Metering Infrastructure on DOD installations. An examination of five case studies highlights the costs and benefits of the Virtual Smart Grid (VSG) developed by Space and Naval Warfare Systems Command for use at Marine Corps Base Pendleton.

  15. Outlook for grid service technologies within the @neurIST eHealth environment.

    PubMed

    Arbona, A; Benkner, S; Fingberg, J; Frangi, A F; Hofmann, M; Hose, D R; Lonsdale, G; Ruefenacht, D; Viceconti, M

    2006-01-01

    The aim of the @neurIST project is to create an IT infrastructure for the management of all processes linked to research, diagnosis and treatment development for complex and multi-factorial diseases. The IT infrastructure will be developed for one such disease, cerebral aneurysm and subarachnoid haemorrhage, but its core technologies will be transferable to meet the needs of other medical areas. Since the IT infrastructure for @neurIST will need to encompass data repositories, computational analysis services and information systems handling multi-scale, multi-modal information at distributed sites, the natural basis for the IT infrastructure is a Grid Service middleware. The project will adopt a service-oriented architecture because it aims to provide a system addressing the needs of medical researchers, clinicians and health care specialists (and their IT providers/systems) and medical supplier/consulting industries.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Ching-Yen; Chu, Peter; Gadh, Rajit

    Currently, when Electric Vehicles (EVs) are charging, they have only the option to charge at a selected current or not charge at all. When there is a power shortage during the day, the charging infrastructure should have the option either to shut off the power to the charging stations or to lower the power delivered to the EVs in order to satisfy the needs of the grid. There is a need for technology that controls the current being disbursed to these electric vehicles. This paper proposes a design for a smart charging infrastructure capable of providing power to several EVs from one circuit by multiplexing power and providing charge control. The smart charging infrastructure includes the server and the smart charging station. With this smart charging infrastructure, a shortage of energy in a local grid could be addressed by our EV management system.
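    One simple way to "lower the power to the EVs" when a shared circuit is oversubscribed is to scale each vehicle's requested charging current proportionally; this allocation policy is our illustration, not necessarily the one the paper's charge controller uses:

```python
def allocate_current(circuit_limit_a, requests_a):
    """Scale EV charging-current requests (amps) so their sum fits
    the shared circuit limit; pass requests through when under limit."""
    total = sum(requests_a)
    if total <= circuit_limit_a:
        return list(requests_a)          # no shortage: grant as requested
    scale = circuit_limit_a / total      # shortage: shrink proportionally
    return [r * scale for r in requests_a]
```

    A real controller would also respect per-standard minimum currents (an EVSE cannot advertise arbitrarily low limits) and might instead time-multiplex vehicles, as the paper's design does.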

  17. The Czech National Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.

    2017-10-01

    The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a network fast enough to provide transparent access to all resources. We describe the computing infrastructure in more detail; it is based on several different technologies and covers grid, cloud and MapReduce environments. While the largest share of CPUs is still accessible via distributed TORQUE servers, which provide an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided to the EGI FedCloud environment with a cloud interface, and there is also a Hadoop cluster provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM, or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFSv4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics is given.

  18. A Comparison of a Solar Power Satellite Concept to a Concentrating Solar Power System

    NASA Technical Reports Server (NTRS)

    Smitherman, David V.

    2013-01-01

    A comparison is made of a solar power satellite (SPS) concept in geostationary Earth orbit to a concentrating solar power (CSP) system on the ground to analyze overall efficiencies of each infrastructure from solar radiance at 1 AU to conversion and transmission of electrical energy into the power grid on the Earth's surface. Each system is sized for a 1-gigawatt output to the power grid and then further analyzed to determine primary collector infrastructure areas. Findings indicate that even though the SPS concept has a higher end-to-end efficiency, the combined space and ground collector infrastructure is still about the same size as a comparable CSP system on the ground.
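    The end-to-end comparison the abstract describes reduces to simple arithmetic: divide the desired grid output by the irradiance times the product of the conversion efficiencies in the chain. The efficiency and irradiance values below are illustrative placeholders, not figures from the paper.

```python
def collector_area_km2(output_w, irradiance_w_m2, efficiencies):
    """Collector area needed for a given grid output, after losses."""
    eta = 1.0
    for e in efficiencies:
        eta *= e
    return output_w / (irradiance_w_m2 * eta) / 1e6  # m^2 -> km^2

# Space-based chain (placeholder values): PV conversion, microwave
# transmission, rectenna capture, grid interface; full 1366 W/m^2 at 1 AU.
sps = collector_area_km2(1e9, 1366, [0.30, 0.80, 0.85, 0.95])
# Ground chain (placeholder values): thermal conversion and grid interface,
# with a much lower average irradiance due to night, weather and latitude.
csp = collector_area_km2(1e9, 250, [0.20, 0.90])
print(f"SPS collector ~{sps:.1f} km^2, CSP collector ~{csp:.1f} km^2")
```

    With these invented numbers the higher space-side efficiency is offset by the launch-side collector still needing several square kilometres, echoing the paper's qualitative finding.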

  19. The Integration of CloudStack and OCCI/OpenNebula with DIRAC

    NASA Astrophysics Data System (ADS)

    Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan

    2012-12-01

    The increasing availability of Cloud resources is emerging as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing provides an easy way to access resources from both systems efficiently. This paper explains the integration of DIRAC with two open-source Cloud managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools to manage the complexity and heterogeneity of distributed data center infrastructures, allowing virtual clusters to be created on demand across public, private and hybrid clouds. This approach required developing an extension to the previous DIRAC Virtual Machine engine, which was written for Amazon EC2, to allow connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack, the infrastructure has been kept more general, permitting other Virtual Machine sources and operating systems to be used. In both cases, the CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, such as the DIRAC Web Portal. The main purpose of this integration is to obtain a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC towards a new conceptual denomination as interware, integrating different middleware. Users from different communities do not need to care about the installation of the standard software available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach to the existing Grid solution.
License Notice: Published under licence in Journal of Physics: Conference Series by IOP Publishing Ltd.

  20. Grids and clouds in the Czech NGI

    NASA Astrophysics Data System (ADS)

    Kundrát, Jan; Adam, Martin; Adamová, Dagmar; Chudoba, Jiří; Kouba, Tomáš; Lokajíček, Miloš; Mikula, Alexandr; Říkal, Václav; Švec, Jan; Vohnout, Rudolf

    2016-09-01

    There are several infrastructure operators within the Czech Republic NGI (National Grid Initiative) which provide users with access to high-performance computing facilities over a grid and cloud interface. This article focuses on those where the primary author has personal first-hand experience. We cover some operational issues as well as the history of these facilities.

  1. Future opportunities and trends for e-infrastructures and life sciences: going beyond the grid to enable life science data analysis

    PubMed Central

    Duarte, Afonso M. S.; Psomopoulos, Fotis E.; Blanchet, Christophe; Bonvin, Alexandre M. J. J.; Corpas, Manuel; Franc, Alain; Jimenez, Rafael C.; de Lucas, Jesus M.; Nyrönen, Tommi; Sipos, Gergely; Suhr, Stephanie B.

    2015-01-01

    With the increasingly rapid growth of data in life sciences we are witnessing a major transition in the way research is conducted, from hypothesis-driven studies to data-driven simulations of whole systems. Such approaches necessitate the use of large-scale computational resources and e-infrastructures, such as the European Grid Infrastructure (EGI). EGI, one of the key enablers of the digital European Research Area, is a federation of resource providers set up to deliver sustainable, integrated and secure computing services to European researchers and their international partners. Here we aim to provide the state of the art of Grid/Cloud computing in EU research as viewed from within the field of life sciences, focusing on key infrastructures and projects within the life sciences community. Rather than focusing purely on the technical aspects underlying the currently provided solutions, we outline the design aspects and key characteristics that can be identified across major research approaches. Overall, we aim to provide significant insights into the road ahead by establishing ever-strengthening connections between EGI as a whole and the life sciences community. PMID:26157454

  2. Future opportunities and trends for e-infrastructures and life sciences: going beyond the grid to enable life science data analysis.

    PubMed

    Duarte, Afonso M S; Psomopoulos, Fotis E; Blanchet, Christophe; Bonvin, Alexandre M J J; Corpas, Manuel; Franc, Alain; Jimenez, Rafael C; de Lucas, Jesus M; Nyrönen, Tommi; Sipos, Gergely; Suhr, Stephanie B

    2015-01-01

    With the increasingly rapid growth of data in life sciences we are witnessing a major transition in the way research is conducted, from hypothesis-driven studies to data-driven simulations of whole systems. Such approaches necessitate the use of large-scale computational resources and e-infrastructures, such as the European Grid Infrastructure (EGI). EGI, one of the key enablers of the digital European Research Area, is a federation of resource providers set up to deliver sustainable, integrated and secure computing services to European researchers and their international partners. Here we aim to provide the state of the art of Grid/Cloud computing in EU research as viewed from within the field of life sciences, focusing on key infrastructures and projects within the life sciences community. Rather than focusing purely on the technical aspects underlying the currently provided solutions, we outline the design aspects and key characteristics that can be identified across major research approaches. Overall, we aim to provide significant insights into the road ahead by establishing ever-strengthening connections between EGI as a whole and the life sciences community.

  3. A Framework for Testing Automated Detection, Diagnosis, and Remediation Systems on the Smart Grid

    NASA Technical Reports Server (NTRS)

    Lau, Shing-hon

    2011-01-01

    America's electrical grid is currently undergoing a multi-billion dollar modernization effort aimed at producing a highly reliable critical national infrastructure for power - a Smart Grid. While the goals for the Smart Grid include upgrades to accommodate large quantities of clean, but transient, renewable energy and upgrades to provide customers with real-time pricing information, perhaps the most important objective is to create an electrical grid with a greatly increased robustness.

  4. Processing LHC data in the UK

    PubMed Central

    Colling, D.; Britton, D.; Gordon, J.; Lloyd, S.; Doyle, A.; Gronbech, P.; Coles, J.; Sansum, A.; Patrick, G.; Jones, R.; Middleton, R.; Kelsey, D.; Cass, A.; Geddes, N.; Clark, P.; Barnby, L.

    2013-01-01

    The Large Hadron Collider (LHC) is one of the greatest scientific endeavours to date. The construction of the collider itself and the experiments that collect data from it represent a huge investment, both financially and in terms of human effort, in our hope to understand the way the Universe works at a deeper level. Yet the volumes of data produced are so large that they cannot be analysed at any single computing centre. Instead, the experiments have all adopted distributed computing models based on the LHC Computing Grid. Without the correct functioning of this grid infrastructure the experiments would not be able to understand the data that they have collected. Within the UK, the Grid infrastructure needed by the experiments is provided by the GridPP project. We report on the operations, performance and contributions made to the experiments by the GridPP project during the years of 2010 and 2011—the first two significant years of the running of the LHC. PMID:23230163

  5. Nbody Simulations and Weak Gravitational Lensing using new HPC-Grid resources: the PI2S2 project

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Antonuccio-Delogu, V.; Costa, A.; Comparato, M.

    2008-08-01

    We present the new grid infrastructure project and the research activities that have already started in Sicily and will be completed by next year. The PI2S2 project of the COMETA consortium is funded by the Italian Ministry of University and Research and will be completed in 2009; funds come from the European Union Structural Funds for Objective 1 regions. The project, together with a similar project called Trinacria GRID Virtual Laboratory (TriGrid VL), aims to create in Sicily a computational grid for e-science and e-commerce applications, with the main goal of increasing the technological innovation of local enterprises and their competitiveness on the global market. The PI2S2 project aims to build and develop an e-Infrastructure in Sicily, based on the grid paradigm, mainly for research activity using the grid environment and High Performance Computing systems. As an example, we present the first results of a new grid version of FLY, a tree N-body code developed by the INAF Astrophysical Observatory of Catania and already published in the CPC Program Library, which will be used in the weak gravitational lensing field.

  6. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  7. The StratusLab cloud distribution: Use-cases and support for scientific applications

    NASA Astrophysics Data System (ADS)

    Floros, E.

    2012-04-01

    The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, the Claudia service manager and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely computing (life-cycle management of virtual machines), storage, appliance management and networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites, a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities like elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. In this direction, we have developed and currently provide support for setting up general-purpose computing solutions like Hadoop, MPI and Torque clusters. Regarding scientific applications, the project is collaborating closely with the bioinformatics community to prepare VM appliances and deploy optimized services for bioinformatics applications.
In a similar manner, additional scientific disciplines like Earth science can take advantage of StratusLab cloud solutions. Interested users are welcome to join StratusLab's user community by getting access to the reference cloud services deployed by the project and offered to the public.

  8. Data Grid Management Systems

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.; Jagatheesan, Arun; Rajasekar, Arcot; Wan, Michael; Schroeder, Wayne

    2004-01-01

    The "Grid" is an emerging infrastructure for coordinating access across autonomous organizations to distributed, heterogeneous computation and data resources. Data grids are being built around the world as the next generation data handling systems for sharing, publishing, and preserving data residing on storage systems located in multiple administrative domains. A data grid provides logical namespaces for users, digital entities and storage resources to create persistent identifiers for controlling access, enabling discovery, and managing wide area latencies. This paper introduces data grids and describes data grid use cases. The relevance of data grids to digital libraries and persistent archives is demonstrated, and research issues in data grids and grid dataflow management systems are discussed.
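    The logical-namespace idea described above can be sketched as a tiny replica catalogue that maps a persistent logical file name to physical replicas held in different administrative domains. All names, hosts and paths below are invented for illustration; real data grids use dedicated catalogue services rather than an in-memory dict.

```python
# Toy replica catalogue: one logical file name (LFN), several physical copies.
replica_catalog = {
    "lfn://climate/run42/output.nc": [
        "gsiftp://storage.site-a.example/data/run42/output.nc",
        "srm://tape.site-b.example/archive/run42/output.nc",
    ],
}

def resolve(lfn, prefer="gsiftp"):
    """Pick a physical replica for a logical file name,
    preferring a given access protocol when one is available."""
    replicas = replica_catalog[lfn]
    for url in replicas:
        if url.startswith(prefer):
            return url
    return replicas[0]

print(resolve("lfn://climate/run42/output.nc"))
```

    Because clients only ever hold the logical name, replicas can be moved, archived or duplicated across domains without breaking persistent identifiers.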

  9. e-Infrastructures for e-Sciences 2013 A CHAIN-REDS Workshop organised under the aegis of the European Commission

    NASA Astrophysics Data System (ADS)

    The CHAIN-REDS Project is organising a workshop on "e-Infrastructures for e-Sciences" focusing on Cloud Computing and Data Repositories under the aegis of the European Commission and in co-location with the International Conference on e-Science 2013 (IEEE2013) that will be held in Beijing, P.R. of China on October 17-22, 2013. The core objective of the CHAIN-REDS project is to promote, coordinate and support the effort of a critical mass of non-European e-Infrastructures for Research and Education to collaborate with Europe addressing interoperability and interoperation of Grids and other Distributed Computing Infrastructures (DCI). From this perspective, CHAIN-REDS will optimise the interoperation of European infrastructures with those present in 6 other regions of the world, both from a development and use point of view, and catering to different communities. Overall, CHAIN-REDS will provide input for future strategies and decision-making regarding collaboration with other regions on e-Infrastructure deployment and availability of related data; it will raise the visibility of e-Infrastructures towards intercontinental audiences, covering most of the world and will provide support to establish globally connected and interoperable infrastructures, in particular between the EU and the developing regions. Organised by IHEP, INFN and Sigma Orionis with the support of all project partners, this workshop will aim at: - Presenting the state of the art of Cloud computing in Europe and in China and discussing the opportunities offered by having interoperable and federated e-Infrastructures; - Exploring the existing initiatives of Data Infrastructures in Europe and China, and highlighting the Data Repositories of interest for the Virtual Research Communities in several domains such as Health, Agriculture, Climate, etc.

  10. Quantifying the Digital Divide: A Scientific Overview of Network Connectivity and Grid Infrastructure in South Asian Countries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, Shahryar Muhammad; /SLAC /NUST, Rawalpindi; Cottrell, R.Les

    2007-10-30

    The future of computing in High Energy Physics (HEP) applications depends on both the network and the Grid infrastructure. South Asian countries such as India and Pakistan are making significant progress by building clusters as well as improving their network infrastructure. However, to facilitate the use of these resources, they need to address issues of network connectivity in order to be among the leading participants in computing for HEP experiments. In this paper we classify the connectivity of academic and research institutions in South Asia. The quantitative measurements are carried out using the PingER methodology, an approach that induces minimal ICMP traffic to gather active end-to-end network statistics. The PingER project has been measuring Internet performance for the last decade. Currently the measurement infrastructure comprises over 700 hosts in more than 130 countries, which collectively represent approximately 99% of the world's Internet-connected population. Thus, we are well positioned to characterize the world's connectivity. Here we present the current state of the National Research and Education Networks (NRENs) and Grid infrastructure in the South Asian countries and identify areas of concern. We also present comparisons between South Asia and other developing as well as developed regions. We show that there is a strong correlation between network performance and several human development indices.
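    The PingER approach boils down to sending periodic probe packets along a path and summarizing the round-trip times. A toy version of the derived statistics, assuming a fixed list of RTT samples rather than live ICMP probes, might look like:

```python
from statistics import median

def summarize_probes(rtts_ms):
    """Summarize one probe train; rtts_ms has one entry per probe,
    with None marking a lost packet."""
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    return {
        "loss_pct": loss_pct,
        "min_ms": min(received),
        "median_ms": median(received),
        "max_ms": max(received),
    }

# Invented sample: 10 probes on an intercontinental path, 2 lost, one outlier.
stats = summarize_probes([212.1, 215.4, None, 210.9, 640.2,
                          214.8, None, 213.0, 211.5, 216.2])
print(stats)
```

    Loss percentage and the spread between median and maximum RTT are exactly the kinds of indicators the paper correlates with development indices.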

  11. IGI (the Italian Grid initiative) and its impact on the Astrophysics community

    NASA Astrophysics Data System (ADS)

    Pasian, F.; Vuerli, C.; Taffoni, G.

    IGI - the Association for the Italian Grid Infrastructure - has been established as a consortium of 14 national institutions to provide long-term sustainability to the Italian Grid. Its formal predecessor, the Grid.it project, came to a close in 2006; to extend the benefits of that project, IGI has taken over and acts as the national coordinator for the different sectors of the Italian e-Infrastructure present in EGEE. IGI plans to support activities in a vast range of scientific disciplines - e.g. Physics, Astrophysics, Biology, Health, Chemistry, Geophysics, Economy, Finance - and possible extensions to other sectors such as Civil Protection, e-Learning, and dissemination in universities and secondary schools. Among these, the Astrophysics community is active as a user, porting applications of various kinds, but also as a resource provider in terms of computing power and storage, and as a middleware developer.

  12. A new algorithm for grid-based hydrologic analysis by incorporating stormwater infrastructure

    NASA Astrophysics Data System (ADS)

    Choi, Yosoon; Yi, Huiuk; Park, Hyeong-Dong

    2011-08-01

    We developed a new algorithm, the Adaptive Stormwater Infrastructure (ASI) algorithm, to incorporate ancillary data sets related to stormwater infrastructure into the grid-based hydrologic analysis. The algorithm simultaneously considers the effects of the surface stormwater collector network (e.g., diversions, roadside ditches, and canals) and underground stormwater conveyance systems (e.g., waterway tunnels, collector pipes, and culverts). The surface drainage flows controlled by the surface runoff collector network are superimposed onto the flow directions derived from a DEM. After examining the connections between inlets and outfalls in the underground stormwater conveyance system, the flow accumulation and delineation of watersheds are calculated based on recursive computations. Application of the algorithm to the Sangdong tailings dam in Korea revealed superior performance to that of a conventional D8 single-flow algorithm in terms of providing reasonable hydrologic information on watersheds with stormwater infrastructure.
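    The core of the approach lends itself to a compact sketch: compute D8 steepest-descent flow directions from a DEM, then override the direction at an inlet cell so it drains directly to a piped outfall before accumulating flow. The 3x3 grid, the pipe, and the function names below are invented for illustration; this is not the authors' ASI implementation.

```python
# Tiny DEM (elevations); water generally drains toward the low corner (2,2).
dem = [[9, 8, 7],
       [8, 6, 5],
       [7, 5, 1]]

def d8_downstream(dem):
    """For each cell, the steepest-descent neighbour (None at the outlet)."""
    rows, cols = len(dem), len(dem[0])
    down = {}
    for r in range(rows):
        for c in range(cols):
            best, best_drop = None, 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr, dc) != (0, 0) and 0 <= rr < rows and 0 <= cc < cols:
                        drop = dem[r][c] - dem[rr][cc]
                        if drop > best_drop:
                            best, best_drop = (rr, cc), drop
            down[(r, c)] = best
    return down

down = d8_downstream(dem)
down[(0, 0)] = (2, 2)  # hypothetical stormwater pipe: inlet (0,0) -> outfall (2,2)

def accumulation(down, cell):
    """Number of cells draining through `cell`, itself included (recursive)."""
    return 1 + sum(accumulation(down, c) for c in down if down[c] == cell)

print(accumulation(down, (2, 2)))  # all 9 cells drain to the outlet
```

    The override step is the essence of the ASI idea: infrastructure links replace topographic flow directions wherever inlets and outfalls are connected.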

  13. Connecting Electric Vehicles to the Grid for Greater Infrastructure

    Science.gov Websites

    [Photo captions from the original web page: EV charging integrated with the grid at the Energy Systems Integration Facility, which serves as a test bed for assessing various EV charging scenarios, including EVs feeding power back to the grid to serve as mobile power generators. Photos by Dennis Schroeder, NREL.]

  14. Earth System Grid II, Turning Climate Datasets into Community Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Middleton, Don

    2006-08-01

    The Earth System Grid (ESG) II project, funded by the Department of Energy’s Scientific Discovery through Advanced Computing program, has transformed climate data into community resources. ESG II has accomplished this goal by creating a virtual collaborative environment that links climate centers and users around the world to models and data via a computing Grid, which is based on the Department of Energy’s supercomputing resources and the Internet. Our project’s success stems from partnerships between climate researchers and computer scientists to advance basic and applied research in the terrestrial, atmospheric, and oceanic sciences. By interfacing with other climate science projects, we have learned that commonly used methods to manage and remotely distribute data among related groups lack infrastructure and under-utilize existing technologies. Knowledge and expertise gained from ESG II have helped the climate community plan strategies to manage a rapidly growing data environment more effectively. Moreover, approaches and technologies developed under the ESG project have influenced data-simulation integration in other disciplines, such as astrophysics, molecular biology and materials science.

  15. A Comparison Of A Solar Power Satellite Concept To A Concentrating Solar Power System

    NASA Technical Reports Server (NTRS)

    Smitherman, David V.

    2013-01-01

    A comparison is made of a Solar Power Satellite concept in geostationary Earth orbit to a Concentrating Solar Power system on the ground to analyze overall efficiencies of each infrastructure from solar radiance at 1 AU to conversion and transmission of electrical energy into the power grid on the Earth's surface. Each system is sized for a 1-gigawatt output to the power grid and then further analyzed to determine primary collector infrastructure areas. Findings indicate that even though the Solar Power Satellite concept has a higher end-to-end efficiency, the combined space and ground collector infrastructure is still about the same size as a comparable Concentrating Solar Power system on the ground.

  16. Resilient Military Systems and the Advanced Cyber Threat

    DTIC Science & Technology

    2013-01-01

    …systems; intelligence, surveillance, and reconnaissance systems; logistics and human resource systems; and mobile as well as fixed-infrastructure… significant portions of military and critical infrastructure: power generation, communications, fuel and transportation, emergency services, financial… vulnerabilities in the domestic power grid and critical infrastructure systems. DoD, and the United States, is extremely reliant on the…

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Ching-Yen; Shepelev, Aleksey; Qiu, Charlie

    With an increased number of Electric Vehicles (EVs) on the roads, charging infrastructure is gaining an ever-more important role in simultaneously meeting the needs of the local distribution grid and of EV users. This paper proposes a mesh network RFID system for user identification and charging authorization as part of a smart charging infrastructure providing charge monitoring and control. The Zigbee-based mesh network RFID provides a cost-efficient solution to identify and authorize vehicles for charging, and would allow EV charging to be conducted effectively while observing grid constraints and meeting the needs of EV drivers.

  18. Testbeds for Assessing Critical Scenarios in Power Control Systems

    NASA Astrophysics Data System (ADS)

    Dondossola, Giovanna; Deconinck, Geert; Garrone, Fabrizio; Beitollahi, Hakem

    The paper presents a set of control system scenarios implemented in two testbeds developed in the context of the European Project CRUTIAL - CRitical UTility InfrastructurAL Resilience. The selected scenarios refer to power control systems encompassing information and communication security of SCADA systems for grid teleoperation, impact of attacks on inter-operator communications in power emergency conditions, impact of intentional faults on the secondary and tertiary control in power grids with distributed generators. Two testbeds have been developed for assessing the effect of the attacks and prototyping resilient architectures.

  19. A national-scale authentication infrastructure.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, R.; Engert, D.; Foster, I.

    2000-12-01

    Today, individuals and institutions in science and industry are increasingly forming virtual organizations to pool resources and tackle a common goal. Participants in virtual organizations commonly need to share resources such as data archives, computer cycles, and networks - resources usually available only with restrictions based on the requested resource's nature and the user's identity. Thus, any sharing mechanism must be able to authenticate the user's identity and determine whether the user is authorized to request the resource. Virtual organizations tend to be fluid, however, so authentication mechanisms must be flexible and lightweight, allowing administrators to quickly establish and change resource-sharing arrangements. However, because virtual organizations complement rather than replace existing institutions, sharing mechanisms cannot change local policies and must allow individual institutions to maintain control over their own resources. Our group has created and deployed an authentication and authorization infrastructure that meets these requirements: the Grid Security Infrastructure. GSI offers secure single sign-on and preserves site control over access policies and local security. It provides its own versions of common applications, such as FTP and remote login, and a programming interface for creating secure applications.

  20. Grid-Enabled Quantitative Analysis of Breast Cancer

    DTIC Science & Technology

    2010-10-01

    large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer… research, we designed a pilot study utilizing large-scale parallel Grid computing, harnessing nationwide infrastructure for medical image analysis. Also…

  1. Wireless Communications in Smart Grid

    NASA Astrophysics Data System (ADS)

    Bojkovic, Zoran; Bakmaz, Bojan

    Communication networks play a crucial role in smart grid, as the intelligence of this complex system is built based on information exchange across the power grid. Wireless communications and networking are among the most economical ways to build the essential part of the scalable communication infrastructure for smart grid. In particular, wireless networks will be deployed widely in the smart grid for automatic meter reading, remote system and customer site monitoring, as well as equipment fault diagnosing. With an increasing interest from both the academic and industrial communities, this chapter systematically investigates recent advances in wireless communication technology for the smart grid.

  2. Framework for Modeling High-Impact, Low-Frequency Power Grid Events to Support Risk-Informed Decisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veeramany, Arun; Unwin, Stephen D.; Coles, Garill A.

    2015-12-03

    Natural and man-made hazardous events resulting in loss of grid infrastructure assets challenge the electric power grid’s security and resilience. However, the planning and allocation of appropriate contingency resources for such events requires an understanding of their likelihood and the extent of their potential impact. Where these events are of low likelihood, a risk-informed perspective on planning can be problematic, as there exists an insufficient statistical basis to directly estimate the probabilities and consequences of their occurrence. Since risk-informed decisions rely on such knowledge, a basis for modeling the risk associated with high-impact, low-frequency events (HILFs) is essential. Insights from such a model can inform where resources are most rationally and effectively expended. The present effort is focused on the development of a HILF risk assessment framework. Such a framework is intended to provide the conceptual and overarching technical basis for the development of HILF risk models that can inform decision makers across numerous stakeholder sectors. The North American Electric Reliability Corporation (NERC) 2014 Standard TPL-001-4 considers severe events for transmission reliability planning, but does not address events of such severity that they have the potential to fail a substantial fraction of grid assets over a region, such as geomagnetic disturbances (GMDs), extreme seismic events, and coordinated cyber-physical attacks. These are beyond current planning guidelines. As noted, the risks associated with such events cannot be statistically estimated from historic experience; however, there exists a stable of risk modeling techniques for rare events that have proven valuable across a wide range of engineering application domains.

There is an active and growing interest in evaluating the value of risk management techniques in the State transmission planning and emergency response communities, some of it in the context of grid modernization activities. The availability of a grid HILF risk model, integrated across multi-hazard domains, which, when interrogated, can support transparent, defensible and effective decisions, is an attractive prospect among these communities. In this report, we document an integrated HILF risk framework intended to inform the development of risk models. These models would be based on the systematic and comprehensive (to within scope) characterization of hazards to the level of detail required for modeling risk; identification of the stressors associated with the hazards (i.e., the means of impacting the grid and supporting infrastructure); characterization of the vulnerability of assets to these stressors and the probabilities of asset compromise; the grid’s dynamic response to the asset failures; and assessment of the subsequent severity of consequences with respect to selected impact metrics, such as power outage duration and geographic reach. Specifically, the current framework is being developed to: (1) provide the conceptual and overarching technical paradigms for the development of risk models; (2) identify the classes of models required to implement the framework, providing examples of existing models and identifying where modeling gaps exist; (3) identify the types of data required, addressing circumstances under which data are sparse and the formal elicitation of informed judgment might be required; and (4) identify means by which the resultant risk models might be interrogated to form the necessary basis for risk management.
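    The hazard-to-consequence chain the framework formalizes can be illustrated with back-of-the-envelope arithmetic: expected annual impact as the sum over hazards of occurrence probability times conditional asset-failure probability times consequence. All probabilities, hazard names and consequence figures below are invented placeholders, not values from the report.

```python
hazards = [
    # (name, annual probability, P(widespread asset failure | event),
    #  consequence in MW-days of outage) -- all numbers illustrative
    ("severe geomagnetic disturbance",     1 / 100, 0.4, 50_000),
    ("extreme regional earthquake",        1 / 500, 0.6, 80_000),
    ("coordinated cyber-physical attack",  1 / 50,  0.2, 30_000),
]

def expected_annual_impact(hazards):
    """Sum of P(hazard) * P(failure | hazard) * consequence over all hazards."""
    return sum(p * pf * c for _, p, pf, c in hazards)

print(f"{expected_annual_impact(hazards):.0f} MW-days/year")
```

    A real HILF model would replace each scalar with a distribution elicited from experts and propagate uncertainty, but the decision-relevant quantity has this same product-and-sum shape.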

  3. Kwf-Grid workflow management system for Earth science applications

    NASA Astrophysics Data System (ADS)

    Tran, V.; Hluchy, L.

    2009-04-01

    In this paper, we present a workflow management tool for Earth science applications in EGEE. The tool was originally developed within the K-wf Grid project ("Knowledge-based Workflow System for Grid Applications", under the 6th Framework Programme) for GT4 middleware and offers many advanced features, such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting the tool to gLite middleware for Earth science applications. The workflow management system is intended to: semi-automatically compose a workflow of Grid services; execute the composed workflow application in a Grid computing environment; monitor the performance of the Grid infrastructure and the Grid applications; analyze the resulting monitoring information; capture the knowledge contained in that information by means of intelligent agents; and finally reuse the knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. K-wf Grid workflow engines can support different types of jobs (e.g. GRAM jobs, web services) in a workflow. A new class of gLite job has been added to the system, allowing it to manage and execute gLite jobs on the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite allows EGEE users to adopt the system and benefit from its advanced features. The system is primarily tested and evaluated with applications from the ES clusters.

  4. Wireless Infrastructure M2M Network For Distributed Power Grid Monitoring

    PubMed Central

    Gharavi, Hamid; Hu, Bin

    2018-01-01

    With the massive integration of distributed renewable energy sources (RESs) into the power system, the demand for timely and reliable network quality monitoring, control, and fault analysis is rapidly growing. Following the successful deployment of Phasor Measurement Units (PMUs) in transmission systems for power monitoring, a new opportunity to utilize PMU measurement data for power quality assessment in distribution grid systems is emerging. The main problem, however, is that a distribution grid system does not normally have the support of an infrastructure network. Therefore, the main objective in this paper is to develop a Machine-to-Machine (M2M) communication network that can support wide-ranging sensory data, including high-rate synchrophasor data for real-time communication. In particular, we evaluate the suitability of the emerging IEEE 802.11ah standard by exploiting its important features, such as classifying the power grid sensory data into different categories according to their traffic characteristics. For performance evaluation, we use our hardware-in-the-loop grid communication network testbed to assess the performance of the network. PMID:29503505
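    The classification idea described above can be sketched as a simple priority mapping. This is an illustrative sketch only: the class names and latency budgets below are assumptions, not values taken from IEEE 802.11ah or from the paper.

```python
# Illustrative sketch: sort smart grid sensory traffic into classes by
# latency tolerance, in the spirit of the 802.11ah-based scheme above.
# Class names and thresholds are assumed for demonstration.

TRAFFIC_CLASSES = [
    # (class name, maximum tolerable latency in milliseconds)
    ("synchrophasor", 20),      # high-rate PMU data, near real-time
    ("fault_alarm", 100),       # protection and fault events
    ("meter_reading", 60_000),  # periodic billing data
]

def classify(latency_budget_ms: float) -> str:
    """Return the first (highest-priority) class whose budget covers the data."""
    for name, budget in TRAFFIC_CLASSES:
        if latency_budget_ms <= budget:
            return name
    return "best_effort"

print(classify(10))   # synchrophasor
print(classify(500))  # meter_reading
```

    A scheduler built on such a mapping could then assign each class to a different access category or restricted access window.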

  5. Parallel Processing of Images in Mobile Devices using BOINC

    NASA Astrophysics Data System (ADS)

    Curiel, Mariela; Calle, David F.; Santamaría, Alfredo S.; Suarez, David F.; Flórez, Leonardo

    2018-04-01

    Medical image processing helps health professionals make decisions for the diagnosis and treatment of patients. Since some algorithms for processing images require substantial amounts of resources, one could take advantage of distributed or parallel computing. A mobile grid can be an adequate computing infrastructure for this problem. A mobile grid is a grid that includes mobile devices as resource providers. In a previous step of this research, we selected BOINC as the infrastructure on which to build our mobile grid. However, parallel processing of images in mobile devices poses at least two important challenges: the execution of standard libraries for processing images and obtaining adequate performance when compared to desktop computer grids. By the time we started our research, the use of BOINC in mobile devices also involved two issues: a) executing programs in mobile devices required modifying the code to insert calls to the BOINC API, and b) dividing the image among the mobile devices, as well as merging the results, required additional code in some BOINC components. This article presents answers to these four challenges.
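    The split-and-merge step mentioned above can be sketched in a few lines. This is a hypothetical illustration, not BOINC code: the image is modeled as a 2-D list of pixel values, divided into contiguous row strips for the devices, processed independently, and merged back.

```python
# Hypothetical sketch of dividing an image among devices and merging
# the results, mirroring the challenge described above. Not BOINC code.

def split_rows(image, n_parts):
    """Split an image (list of rows) into n_parts contiguous row strips."""
    base, extra = divmod(len(image), n_parts)
    strips, start = [], 0
    for i in range(n_parts):
        size = base + (1 if i < extra else 0)
        strips.append(image[start:start + size])
        start += size
    return strips

def merge_rows(strips):
    """Concatenate processed strips back into one image."""
    return [row for strip in strips for row in strip]

def invert(strip):  # stand-in for the per-device processing kernel
    return [[255 - p for p in row] for row in strip]

image = [[i * 4 + j for j in range(4)] for i in range(5)]
result = merge_rows([invert(s) for s in split_rows(image, 3)])
```

    In a real deployment, each strip would be shipped to a device as a work unit and the merge would run on the server once all results return.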

  6. Wireless Infrastructure M2M Network For Distributed Power Grid Monitoring.

    PubMed

    Gharavi, Hamid; Hu, Bin

    2017-01-01

    With the massive integration of distributed renewable energy sources (RESs) into the power system, the demand for timely and reliable network quality monitoring, control, and fault analysis is rapidly growing. Following the successful deployment of Phasor Measurement Units (PMUs) in transmission systems for power monitoring, a new opportunity to utilize PMU measurement data for power quality assessment in distribution grid systems is emerging. The main problem, however, is that a distribution grid system does not normally have the support of an infrastructure network. Therefore, the main objective in this paper is to develop a Machine-to-Machine (M2M) communication network that can support wide-ranging sensory data, including high-rate synchrophasor data for real-time communication. In particular, we evaluate the suitability of the emerging IEEE 802.11ah standard by exploiting its important features, such as classifying the power grid sensory data into different categories according to their traffic characteristics. For performance evaluation, we use our hardware-in-the-loop grid communication network testbed to assess the performance of the network.

  7. Testing as a Service with HammerCloud

    NASA Astrophysics Data System (ADS)

    Medrano Llamas, Ramón; Barrand, Quentin; Elmsheuser, Johannes; Legger, Federica; Sciacca, Gianfranco; Sciabà, Andrea; van der Ster, Daniel

    2014-06-01

    HammerCloud was created to meet the grid community's need to test resources and automate operations from a user perspective. Recent developments in the IT space propose a shift to software-defined data centres, in which every layer of the infrastructure can be offered as a service. Testing and monitoring are an integral part of the development, validation, and operations of big systems like the grid. This area is not escaping the paradigm shift, and Testing as a Service (TaaS) offerings are starting to seem natural: they allow testing any infrastructure service, such as the Infrastructure as a Service (IaaS) platforms being deployed in many grid sites, from both the functional and stress perspectives. This work reviews the recent developments in HammerCloud and its evolution towards a TaaS conception, in particular its deployment on the Agile Infrastructure platform at CERN and the testing of many IaaS providers across Europe in the context of experiment requirements. The first section reviews the architectural changes that a service running in the cloud needs, such as an orchestration service or new storage requirements, in order to provide functional and stress testing. The second section reviews the first tests of infrastructure providers in light of the challenges discovered from the architectural point of view. Finally, the third section evaluates future requirements for scalability and features to increase testing productivity.

  8. Performance evaluation of cognitive radio in advanced metering infrastructure communication

    NASA Astrophysics Data System (ADS)

    Hiew, Yik-Kuan; Mohd Aripin, Norazizah; Din, Norashidah Md

    2016-03-01

    Smart grid is an intelligent electricity grid system. A reliable two-way communication system is required to transmit both critical and non-critical smart grid data. However, it is difficult to locate a huge chunk of dedicated spectrum for smart grid communications. Hence, cognitive radio based communication is applied. Cognitive radio allows smart grid users to access licensed spectrums opportunistically, under the constraint of not causing harmful interference to licensed users. In this paper, a cognitive radio based smart grid communication framework is proposed. The smart grid framework consists of the Home Area Network (HAN) and the Advanced Metering Infrastructure (AMI), while AMI is made up of the Neighborhood Area Network (NAN) and the Wide Area Network (WAN). In this paper, the authors report only the findings for AMI communication. AMI is the smart grid domain that comprises smart meters, a data aggregator unit, and a billing center. Meter data are collected by smart meters and transmitted to the data aggregator unit using a cognitive 802.11 technique; the data aggregator unit then relays the data to the billing center using cognitive WiMAX and TV white space. The performance of cognitive radio in AMI communication is investigated using Network Simulator 2. Simulation results show that cognitive radio improves the latency and throughput performance of AMI. Cognitive radio also improves the spectrum utilization efficiency of the WiMAX band from 5.92% to 9.24% and the duty cycle of the TV band from 6.6% to 10.77%.
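    The opportunistic-access rule at the heart of the scheme above can be sketched as follows. The channel names and the toy sensing model are illustrative assumptions; a real cognitive radio would use energy detection or a spectrum database rather than a fixed busy set.

```python
# Minimal sketch of opportunistic spectrum access: transmit on a
# licensed channel only when sensing reports it idle, otherwise try
# the next candidate band. The sensing model here is a toy.

def pick_channel(channels, is_idle):
    """Return the first candidate channel sensed idle, else None."""
    for ch in channels:
        if is_idle(ch):
            return ch
    return None

# Toy sensing model: a fixed set of channels occupied by primary users.
busy = {"wimax_band"}
chosen = pick_channel(["wimax_band", "tv_white_space"],
                      lambda ch: ch not in busy)
print(chosen)  # tv_white_space
```

    The "no harmful interference" constraint shows up here as the hard requirement that a busy channel is never returned, even if no alternative exists.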

  9. The International Symposium on Grids and Clouds

    NASA Astrophysics Data System (ADS)

    The International Symposium on Grids and Clouds (ISGC) 2012 will be held at Academia Sinica in Taipei from 26 February to 2 March 2012, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). 2012 marks the tenth anniversary of ISGC, which over the last decade has tracked the convergence, collaboration, and innovation of individual researchers across the Asia-Pacific region into a coherent community. With the continuous support and dedication of the delegates, ISGC has provided the primary international distributed computing platform where distinguished researchers and collaboration partners from around the world share their knowledge and experiences. The last decade has seen the wide-scale emergence of e-Infrastructure as a critical asset for the modern e-Scientist. The emergence of large-scale research infrastructures and instruments that produce a torrent of electronic data is forcing a generational change in the scientific process and the mechanisms used to analyse the resulting data deluge. No longer can the processing of these vast amounts of data and the production of relevant scientific results be undertaken by a single scientist. Virtual Research Communities that span organisations around the world, through an integrated digital infrastructure that connects the trust and administrative domains of multiple resource providers, have become critical in supporting these analyses. Topics covered in ISGC 2012 include: High Energy Physics, Biomedicine & Life Sciences, Earth Science, Environmental Changes and Natural Disaster Mitigation, Humanities & Social Sciences, Operations & Management, Middleware & Interoperability, Security and Networking, Infrastructure Clouds & Virtualisation, Business Models & Sustainability, Data Management, Distributed Volunteer & Desktop Grid Computing, High Throughput Computing, and High Performance, Manycore & GPU Computing.

  10. Grid Modernization Laboratory Consortium - Testing and Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroposki, Benjamin; Skare, Paul; Pratt, Rob

    This paper highlights some of the unique testing capabilities and projects being performed at several national laboratories as part of the U.S. Department of Energy Grid Modernization Laboratory Consortium. As part of this effort, the Grid Modernization Laboratory Consortium Testing Network is being developed to accelerate grid modernization by enabling access to a comprehensive testing infrastructure and creating a repository of validated models and simulation tools that will be publicly available. This work is key to accelerating the development, validation, standardization, adoption, and deployment of new grid technologies to help meet U.S. energy goals.

  11. Legislation Seeks to Protect Power Grid From Space Weather

    NASA Astrophysics Data System (ADS)

    Tretkoff, Ernie

    2010-05-01

    Proposed legislation would help protect the U.S. power grid against space weather and other threats. The Grid Reliability and Infrastructure Defense Act (GRID Act) would give the Federal Energy Regulatory Commission (FERC) authority to develop and enforce standards for power companies to protect the electric grid from geomagnetic storms and threats such as a terrorist attack using electromagnetic pulse (EMP) weapons. The act unanimously passed the U.S. House Committee on Energy and Commerce in April and will proceed to a vote in the full House of Representatives.

  12. iSERVO: Implementing the International Solid Earth Research Virtual Observatory by Integrating Computational Grid and Geographical Information Web Services

    NASA Astrophysics Data System (ADS)

    Aktas, Mehmet; Aydin, Galip; Donnellan, Andrea; Fox, Geoffrey; Granat, Robert; Grant, Lisa; Lyzenga, Greg; McLeod, Dennis; Pallickara, Shrideep; Parker, Jay; Pierce, Marlon; Rundle, John; Sayar, Ahmet; Tullis, Terry

    2006-12-01

    We describe the goals and initial implementation of the International Solid Earth Virtual Observatory (iSERVO). This system is built using a Web Services approach to Grid computing infrastructure and is accessed via a component-based Web portal user interface. We describe our implementations of services used by this system, including Geographical Information System (GIS)-based data grid services for accessing remote data repositories and job management services for controlling multiple execution steps. iSERVO is an example of a larger trend to build globally scalable scientific computing infrastructures using the Service Oriented Architecture approach. Adoption of this approach raises a number of research challenges in millisecond-latency message systems suitable for internet-enabled scientific applications. We review our research in these areas.

  13. Elastic extension of a local analysis facility on external clouds for the LHC experiments

    NASA Astrophysics Data System (ADS)

    Ciaschini, V.; Codispoti, G.; Rinaldi, L.; Aiftimiei, D. C.; Bonacorsi, D.; Calligola, P.; Dal Pra, S.; De Girolamo, D.; Di Maria, R.; Grandi, C.; Michelotto, D.; Panella, M.; Taneja, S.; Semeria, F.

    2017-10-01

    The computing infrastructures serving the LHC experiments have been designed to cope with, at most, the average amount of data recorded. Usage peaks, as already observed in Run-I, may however generate large backlogs, delaying the completion of data reconstruction and ultimately the availability of data for physics analysis. In order to cope with production peaks, the LHC experiments are exploring the opportunity to access Cloud resources provided by external partners or commercial providers. In this work we present a proof of concept of the elastic extension of a local analysis facility, specifically the Bologna Tier-3 Grid site, onto an external OpenStack infrastructure, for the LHC experiments hosted at the site. We focus on Cloud Bursting of the Grid site using DynFarm, a newly designed tool that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on an OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time serve as an extension of the farm for local usage.

  14. A Study Examining Photovoltaic (PV) Solar Power as an Alternative for the Rebuilding of the Iraqi Electrical Power Generation Infrastructure

    DTIC Science & Technology

    2005-06-01

    Keywords: Logistics, BA-5590, BB-390, BB-2590, PVPC, Iraq, Power Grid, Infrastructure, Cost Estimate, Photovoltaic Power Conversion (PVPC), MPPT. This project examines the cost and feasibility of using photovoltaic (PV) solar power to assist in the rebuilding of the Iraqi electrical power generation infrastructure.

  15. New York Solar Smart DG Hub-Resilient Solar Project: Economic and Resiliency Impact of PV and Storage on New York Critical Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Kate; Burman, Kari; Simpkins, Travis

    Resilient PV, which is solar paired with storage ('solar-plus-storage'), provides value both during normal grid operation and power outages, as opposed to traditional solar PV, which functions only when the electric grid is operating. During normal grid operations, resilient PV systems help host sites generate revenue and/or reduce electricity bill charges. During grid outages, resilient PV provides critical emergency power that can help people in need and ease demand on emergency fuel supplies. The combination of grid interruptions during recent storms, the proliferation of solar PV, and the growing deployment of battery storage technologies has generated significant interest in using these assets for both economic and resiliency benefits. This report analyzes the technical and economic viability of resilient PV on three critical infrastructure sites in New York City (NYC): a school that is part of a coastal storm shelter system, a fire station, and a NYCHA senior center that serves as a cooling center during heat emergencies. This analysis differs from previous solar-plus-storage studies by placing a monetary value on resiliency and thus, in essence, modeling a new revenue stream for the avoided cost of a power outage. Analysis results show that resilient PV is economically viable for NYC's critical infrastructure and that it may be similarly beneficial to other commercial buildings across the city. This report will help city building owners, managers, and policymakers better understand the economic and resiliency benefits of resilient PV. As NYC fortifies its building stock against future storms of increasing severity, resilient PV can play an important role in disaster response and recovery while also supporting city greenhouse gas emission reduction targets and relieving stress to the electric grid from growing power demands.

  16. Low-carbon infrastructure strategies for cities

    NASA Astrophysics Data System (ADS)

    Kennedy, C. A.; Ibrahim, N.; Hoornweg, D.

    2014-05-01

    Reducing greenhouse gas emissions to avert potentially disastrous global climate change requires substantial redevelopment of infrastructure systems. Cities are recognized as key actors for leading such climate change mitigation efforts. We have studied the greenhouse gas inventories and underlying characteristics of 22 global cities. These cities differ in terms of their climates, income, levels of industrial activity, urban form and existing carbon intensity of electricity supply. Here we show how these differences in city characteristics lead to wide variations in the type of strategies that can be used for reducing emissions. Cities experiencing greater than ~1,500 heating degree days (below an 18 °C base), for example, will review building construction and retrofitting for cold climates. Electrification of infrastructure technologies is effective for cities where the carbon intensity of the grid is lower than ~600 tCO2e GWh-1 whereas transportation strategies will differ between low urban density (<~6,000 persons km-2) and high urban density (>~6,000 persons km-2) cities. As nation states negotiate targets and develop policies for reducing greenhouse gas emissions, attention to the specific characteristics of their cities will broaden and improve their suite of options. Beyond carbon pricing, markets and taxation, governments may develop policies and target spending towards low-carbon urban infrastructure.
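    The thresholds quoted above can be read as a simple decision rule over city characteristics. The function below is an illustrative sketch, not the authors' model; the strategy labels paraphrase the abstract.

```python
# Illustrative decision rule built from the thresholds in the abstract:
# >1,500 heating degree days -> cold-climate building strategies;
# grid intensity < ~600 tCO2e/GWh -> electrification is effective;
# ~6,000 persons/km^2 separates low- and high-density transport options.
# Strategy labels are paraphrases, not the authors' terminology.

def city_strategies(hdd, grid_tco2e_per_gwh, density_per_km2):
    strategies = []
    if hdd > 1500:
        strategies.append("building retrofit for cold climates")
    if grid_tco2e_per_gwh < 600:
        strategies.append("electrification of infrastructure")
    if density_per_km2 > 6000:
        strategies.append("high-density transport strategies")
    else:
        strategies.append("low-density transport strategies")
    return strategies

print(city_strategies(hdd=3000, grid_tco2e_per_gwh=200,
                      density_per_km2=8000))
```

    A cold city on a clean, dense grid triggers all three levers, while a warm, low-density city on a carbon-intensive grid is left mainly with transport measures.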

  17. WISDOM-II: screening against multiple targets implicated in malaria using computational grid infrastructures.

    PubMed

    Kasam, Vinod; Salzemann, Jean; Botha, Marli; Dacosta, Ana; Degliesposti, Gianluca; Isea, Raul; Kim, Doman; Maass, Astrid; Kenyon, Colin; Rastelli, Giulio; Hofmann-Apitius, Martin; Breton, Vincent

    2009-05-01

    Despite continuous efforts of the international community to reduce the impact of malaria on developing countries, no significant progress has been made in recent years, and the discovery of new drugs is needed more than ever. Of the many proteins involved in the metabolic activities of the Plasmodium parasite, some are promising targets for rational drug discovery. Recent years have witnessed the emergence of grids, highly distributed computing infrastructures particularly well suited to embarrassingly parallel computations such as docking. In 2005, a first attempt at using grids for large-scale virtual screening focused on plasmepsins and resulted in the identification of previously unknown scaffolds, which were confirmed in vitro to be active plasmepsin inhibitors. Following this success, a second deployment took place in the fall of 2006, focusing on one well-known target, dihydrofolate reductase (DHFR), and on a new promising one, glutathione-S-transferase. In silico drug design, especially virtual high-throughput screening (vHTS), is a widely accepted technology in lead identification and lead optimization. This approach therefore builds upon progress made in computational chemistry, to achieve more accurate in silico docking, and in information technology, to design and operate large-scale grid infrastructures. On the computational side, a sustained infrastructure has been developed: docking at large scale, different strategies for result analysis, on-the-fly storage of results in MySQL databases, and the application of molecular dynamics refinement together with MM-PBSA and MM-GBSA rescoring. The modeling results obtained are very promising, and in vitro experiments are underway for all the targets against which screening was performed. The current paper describes this rational drug discovery activity at large scale, especially molecular docking using the FlexX software on computational grids to find hits against three different targets (PfGST, PfDHFR, and PvDHFR, wild type and mutant forms) implicated in malaria. The grid-enabled virtual screening approach is proposed to produce focused compound libraries for other biological targets relevant to fighting the infectious diseases of the developing world.

  18. Security and Cloud Outsourcing Framework for Economic Dispatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarker, Mushfiqur R.; Wang, Jianhui; Li, Zuyi

    The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm for solving these problems consists of in-house high-performance computing infrastructures, which have the drawbacks of high capital expenditure, maintenance, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains: highly confidential grid data are susceptible to cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. The security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and the problem structure, thus enabling secure outsourcing to the cloud. Results show that for large grid test cases the performance gains and costs outperform those of the in-house infrastructure.
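    As background for the Economic Dispatch application named above, the underlying optimization can be sketched as a minimal merit-order dispatch: meet demand at least cost by loading generators in order of marginal cost. The generator data are invented, and this deliberately ignores the confidentiality-preserving transformation that is the paper's actual contribution.

```python
# Minimal merit-order economic dispatch sketch. Generator data are
# invented; this illustrates the underlying ED problem only, not the
# paper's confidentiality-preserving formulation or a full LP.

def dispatch(generators, demand):
    """generators: list of (name, capacity_mw, cost_per_mwh) tuples.

    Loads generators cheapest-first until demand is met; returns a
    {name: dispatched_mw} plan or raises if capacity is insufficient.
    """
    plan, remaining = {}, demand
    for name, cap, _cost in sorted(generators, key=lambda g: g[2]):
        take = min(cap, remaining)
        if take > 0:
            plan[name] = take
            remaining -= take
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return plan

gens = [("coal", 100, 30.0), ("gas", 80, 50.0), ("peaker", 40, 120.0)]
print(dispatch(gens, 150))  # {'coal': 100, 'gas': 50}
```

    The full ED linear program adds network and ramping constraints; the paper's framework then applies an affine masking of the LP data so the cloud solver never sees the real problem.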

  19. Security and Cloud Outsourcing Framework for Economic Dispatch

    DOE PAGES

    Sarker, Mushfiqur R.; Wang, Jianhui; Li, Zuyi; ...

    2017-04-24

    The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm for solving these problems consists of in-house high-performance computing infrastructures, which have the drawbacks of high capital expenditure, maintenance, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains: highly confidential grid data are susceptible to cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. The security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and the problem structure, thus enabling secure outsourcing to the cloud. Results show that for large grid test cases the performance gains and costs outperform those of the in-house infrastructure.

  20. Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldhaber, Steve; Holland, Marika

    The major goal of this project was to contribute improvements to the infrastructure of an Earth System Model in order to support research in the Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System project. In support of this, the NCAR team accomplished two main tasks: improving input/output performance of the model and improving atmospheric model simulation quality. Improvement of the performance and scalability of data input and diagnostic output within the model required a new infrastructure which can efficiently handle the unstructured grids common in multiscale simulations. This allows for a more computationally efficient model, enabling more years of Earth System simulation. The quality of the model simulations was improved by reducing grid-point noise in the spectral element version of the Community Atmosphere Model (CAM-SE). This was achieved by running the physics of the model using grid-cell data on a finite-volume grid.

  1. Grid Technology as a Cyber Infrastructure for Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Hinke, Thomas H.

    2004-01-01

    This paper describes how grids and grid service technologies can be used to develop an infrastructure for the Earth Science community. This cyberinfrastructure would be populated with a hierarchy of services, including discipline-specific services such as those needed by the Earth Science community, as well as a set of core services that are needed by most applications. This core would include data-oriented services used for accessing and moving data as well as computer-oriented services used to broker access to resources and control the execution of tasks on the grid. The availability of such an Earth Science cyberinfrastructure would ease the development of Earth Science applications. With such a cyberinfrastructure, application workflows could be created to extract data from one or more of the Earth Science archives and then process it by passing it through various persistent services that are part of the cyberinfrastructure, such as services to perform subsetting, reformatting, data mining, and map projections.

  2. Making the most of cloud storage - a toolkit for exploitation by WLCG experiments

    NASA Astrophysics Data System (ADS)

    Alvarez Ayllon, Alejandro; Arsuaga Rios, Maria; Bitzes, Georgios; Furano, Fabrizio; Keeble, Oliver; Manzi, Andrea

    2017-10-01

    Understanding how cloud storage can be effectively used, either standalone or in support of its associated compute, is now an important consideration for WLCG. We report on a suite of extensions to familiar tools targeted at enabling the integration of cloud object stores into traditional grid infrastructures and workflows. Notable updates include support for a number of object store flavours in FTS3, Davix and gfal2, including mitigations for lack of vector reads; the extension of Dynafed to operate as a bridge between grid and cloud domains; protocol translation in FTS3; the implementation of extensions to DPM (also implemented by the dCache project) to allow 3rd party transfers over HTTP. The result is a toolkit which facilitates data movement and access between grid and cloud infrastructures, broadening the range of workflows suitable for cloud. We report on deployment scenarios and prototype experience, explaining how, for example, an Amazon S3 or Azure allocation can be exploited by grid workflows.

  3. The vacuum platform

    NASA Astrophysics Data System (ADS)

    McNab, A.

    2017-10-01

    This paper describes GridPP’s Vacuum Platform for managing virtual machines (VMs), which has been used to run production workloads for WLCG and other HEP experiments. The platform provides a uniform interface between VMs and the sites they run at, whether the site is organised as an Infrastructure-as-a-Service cloud system such as OpenStack, or an Infrastructure-as-a-Client system such as Vac. The paper describes our experience in using this platform, in developing and operating VM lifecycle managers Vac and Vcycle, and in interacting with VMs provided by LHCb, ATLAS, ALICE, CMS, and the GridPP DIRAC service to run production workloads.

  4. Final Report Feasibility Study for the California Wave Energy Test Center (CalWavesm) - Volume #2 - Appendices #16-17

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dooher, Brendan; Toman, William I.; Davy, Doug M.

    The California Wave Energy Test Center (CalWave) Feasibility Study project was funded over multiple phases by the Department of Energy to perform an interdisciplinary feasibility assessment to analyze the engineering, permitting, and stakeholder requirements to establish an open water, fully energetic, grid connected, wave energy test center off the coast of California for the purposes of advancing U.S. wave energy research, development, and testing capabilities. Work under this grant included wave energy resource characterization, grid impact and interconnection requirements, port infrastructure and maritime industry capability/suitability to accommodate the industry at research, demonstration and commercial scale, and macro- and micro-siting considerations. CalWave Phase I performed a macro-siting and down-selection process focusing on two potential test sites in California: Humboldt Bay and Vandenberg Air Force Base. This work resulted in the Vandenberg Air Force Base site being chosen as the most favorable site based on a peer reviewed criteria matrix. CalWave Phase II focused on four siting location alternatives along the Vandenberg Air Force Base coastline and culminated with a final siting down-selection. Key outcomes from this work include completion of preliminary engineering and systems integration work, a robust turnkey cost estimate, shoreside and subsea hazards assessment, storm wave analysis, lessons learned reports from several maritime disciplines, test center benchmarking as compared to existing international test sites, analysis of existing applicable environmental literature, the completion of a preliminary regulatory, permitting and licensing roadmap, robust interaction and engagement with state and federal regulatory agency personnel and local stakeholders, and the population of a Draft Federal Energy Regulatory Commission (FERC) Preliminary Application Document (PAD).
Analysis of existing offshore oil and gas infrastructure was also performed to assess the potential value and re-use scenarios of offshore platform infrastructure and associated subsea power cables and shoreside substations. The CalWave project team was well balanced, comprising experts from industry, academia, and state and federal regulatory agencies. The CalWave feasibility study finds that the CalWave Test Center has the potential to provide the most viable path to commercialization for wave energy in the United States.

  5. Electric Sector Integration | Energy Analysis | NREL

    Science.gov Websites

    investigates the potential impacts of expanding renewable technology deployment on grid operations and Electric System Flexibility and Storage Impacts on Conventional Generators Transmission Infrastructure Generation Our grid integration studies use state-of-the-art modeling and analysis to evaluate the impacts of

  6. Present and Future Energy Scenario in India

    NASA Astrophysics Data System (ADS)

    Kumar, S.; Bhattacharyya, B.; Gupta, V. K.

    2014-09-01

    India's energy sector is one of the most critical components of the infrastructure that affects India's economic growth, and is therefore also one of the largest industries in India. India has the 5th largest electricity generating capacity and is the 6th largest energy consumer, accounting for around 3.4% of global energy consumption. India's energy demand has grown at 3.6% per annum over the past 30 years. Energy consumption rises in step with human progress: an ever-growing population, improving living standards, and the industrialization of developing countries. More recently, smart grid technology has come to play an important role in this energy scenario. A smart grid is an electric power system that enhances grid reliability and efficiency by automatically responding to system disturbances. This paper discusses the new communication infrastructure and scheme designed to integrate data.

  7. Grid infrastructure for automatic processing of SAR data for flood applications

    NASA Astrophysics Data System (ADS)

    Kussul, Natalia; Skakun, Serhiy; Shelestov, Andrii

    2010-05-01

    More and more geosciences applications are being put onto Grids. Geosciences applications are complex: they involve elaborate workflows, computationally intensive environmental models, and the management and integration of heterogeneous data sets, and Grid computing offers solutions to these problems. Many geosciences applications, especially those related to disaster management and mitigation, require geospatial services to be delivered in a timely manner. For example, information on flooded areas should be provided to the corresponding organizations (local authorities, civil protection agencies, UN agencies, etc.) within 24 h so that the resources required to mitigate the disaster can be allocated effectively. Therefore, providing an infrastructure and services that enable automatic generation of products based on the integration of heterogeneous data is a task of great importance. In this paper we present a Grid infrastructure for automatic processing of synthetic-aperture radar (SAR) satellite images to derive flood products. In particular, we use SAR data acquired by ESA's ENVISAT satellite, and neural networks to derive flood extent. The data are provided in operational mode from the ESA rolling archive (within an ESA Category-1 grant). We developed a portal that is based on the OpenLayers framework and provides an access point to the developed services. Through the portal the user can define a geographical region and search for the required data. Upon selection of data sets, a workflow is automatically generated and executed on the resources of the Grid infrastructure. For workflow execution and management we use the Karajan language. The workflow of SAR data processing consists of the following steps: image calibration, image orthorectification, image processing with neural networks, topographic effects removal, geocoding and transformation to lat/long projection, and visualisation. 
These steps are executed by different software, and can be executed by different resources of the Grid system. The resulting geospatial services are available in various OGC standards such as KML and WMS. Currently, the Grid infrastructure integrates the resources of several geographically distributed organizations, in particular: Space Research Institute NASU-NSAU (Ukraine) with deployed computational and storage nodes based on Globus Toolkit 4 (http://www.globus.org) and gLite 3 (http://glite.web.cern.ch) middleware, access to geospatial data and a Grid portal; Institute of Cybernetics of NASU (Ukraine) with deployed computational and storage nodes (SCIT-1/2/3 clusters) based on Globus Toolkit 4 middleware and access to computational resources (approximately 500 processors); Center of Earth Observation and Digital Earth, Chinese Academy of Sciences (CEODE-CAS, China) with deployed computational nodes based on Globus Toolkit 4 middleware and access to geospatial data (approximately 16 processors). We are currently adding new geospatial services based on optical satellite data, namely MODIS. This work is carried out jointly with CEODE-CAS. Using the workflow patterns that were developed for SAR data processing, we are building new workflows for optical data processing.
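The processing chain in this abstract (calibration, orthorectification, neural-network classification, topographic correction, geocoding, visualisation) is a linear workflow. A minimal sketch of such a stage chain follows; the stage names mirror the abstract, but the function bodies are placeholders, not the project's actual SAR algorithms:

```python
# Sketch of a linear SAR-processing workflow as composable stages.
# Stage names follow the abstract; bodies are illustrative placeholders.

def calibrate(image):
    return {**image, "calibrated": True}

def orthorectify(image):
    return {**image, "orthorectified": True}

def classify_flood(image):
    # Stand-in for the neural-network flood-extent classifier.
    return {**image, "flood_mask": "flood-extent"}

def geocode(image):
    return {**image, "projection": "lat/long"}

PIPELINE = [calibrate, orthorectify, classify_flood, geocode]

def run_workflow(image, stages=PIPELINE):
    """Execute each stage in order, as a workflow engine would."""
    for stage in stages:
        image = stage(image)
    return image

product = run_workflow({"scene": "ASAR-scene-001"})
```

Because each stage takes and returns the same image record, stages can be assigned to different Grid resources, exactly as the abstract describes.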

  8. NASA's Participation in the National Computational Grid

    NASA Technical Reports Server (NTRS)

    Feiereisen, William J.; Zornetzer, Steve F. (Technical Monitor)

    1998-01-01

    Over the last several years it has become evident that the character of NASA's supercomputing needs has changed. One of the major missions of the agency is to support the design and manufacture of aero- and space-vehicles with technologies that will significantly reduce their cost. It is becoming clear that improvements in the process of aerospace design and manufacturing will require a high performance information infrastructure that allows geographically dispersed teams to draw upon resources that are broader than traditional supercomputing. A computational grid draws together our information resources into one system. We can foresee the time when a Grid will allow engineers and scientists to use the tools of supercomputers, databases and on line experimental devices in a virtual environment to collaborate with distant colleagues. The concept of a computational grid has been spoken of for many years, but several events in recent times are conspiring to allow us to actually build one. In late 1997 the National Science Foundation initiated the Partnerships for Advanced Computational Infrastructure (PACI), which is built around the idea of distributed high performance computing. The Alliance, led by the National Center for Supercomputing Applications (NCSA), and the National Partnership for Advanced Computational Infrastructure (NPACI), led by the San Diego Supercomputing Center, have been instrumental in drawing together the "Grid Community" to identify the technology bottlenecks and propose a research agenda to address them. During the same period NASA has begun to reformulate parts of two major high performance computing research programs to concentrate on distributed high performance computing and has banded together with the PACI centers to address the research agenda in common.

  9. 75 FR 26206 - Implementing the National Broadband Plan by Studying the Communications Requirements of Electric...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-11

    ... information about electricity infrastructure's current and projected communications requirements, as well as...'s electricity infrastructure need to employ adequate communications technologies that serve their... Smart Grid and the other technologies that will evolve and change how electricity is produced, consumed...

  10. The Particle Physics Data Grid. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron

    2002-08-16

    The main objective of the Particle Physics Data Grid (PPDG) project has been to implement and evaluate distributed (Grid-enabled) data access and management technology for current and future particle and nuclear physics experiments. The specific goals of PPDG have been to design, implement, and deploy a Grid-based software infrastructure capable of supporting the data generation, processing and analysis needs common to the physics experiments represented by the participants, and to adapt experiment-specific software to operate in the Grid environment and to exploit this infrastructure. To accomplish these goals, the PPDG focused on the implementation and deployment of several critical services: reliable and efficient file replication service, high-speed data transfer services, multisite file caching and staging service, and reliable and recoverable job management services. The focus of the activity was the job management services and the interplay between these services and distributed data access in a Grid environment. Software was developed to study the interaction between HENP applications and distributed data storage fabric. One key conclusion was the need for a reliable and recoverable tool for managing large collections of interdependent jobs. An attached document provides an overview of the current status of the Directed Acyclic Graph Manager (DAGMan) with its main features and capabilities.

  11. Heterogeneous Wireless Networks for Smart Grid Distribution Systems: Advantages and Limitations.

    PubMed

    Khalifa, Tarek; Abdrabou, Atef; Shaban, Khaled; Gaouda, A M

    2018-05-11

    Supporting a conventional power grid with advanced communication capabilities is a cornerstone of transforming it into a smart grid. A reliable communication infrastructure with a high throughput can lay the foundation towards the ultimate objective of a fully automated power grid with self-healing capabilities. In order to realize this objective, the communication infrastructure of a power distribution network needs to be extended to cover all substations including medium/low voltage ones. This shall enable information exchange among substations for a variety of system automation purposes with a low latency that suits time-critical applications. This paper proposes the integration of two heterogeneous wireless technologies (such as WiFi and cellular 3G/4G) to provide reliable and fast communication among primary and secondary distribution substations. This integration allows the transmission of different data packets (not packet replicas) over two radio interfaces, making these interfaces act like one data pipe. Thus, the paper investigates the applicability and effectiveness of employing heterogeneous wireless networks (HWNs) in achieving the desired reliability and timeliness requirements of future smart grids. We study the performance of HWNs in a realistic scenario under different data transfer loads and packet loss ratios. Our findings reveal that HWNs can be a viable data transfer option for smart grids.
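The "one data pipe" idea, sending distinct packets rather than replicas over two radio interfaces, can be illustrated with a simple round-robin scheduler. The interface names and packet labels below are illustrative assumptions, not the paper's actual protocol:

```python
from collections import defaultdict
from itertools import cycle

def schedule_packets(packets, interfaces=("wifi", "cellular")):
    """Assign distinct packets (not replicas) to interfaces round-robin,
    so the two radios together behave like one aggregated data pipe."""
    assignment = defaultdict(list)
    for pkt, iface in zip(packets, cycle(interfaces)):
        assignment[iface].append(pkt)
    return dict(assignment)

plan = schedule_packets([f"pkt{i}" for i in range(6)])
# wifi carries pkt0, pkt2, pkt4; cellular carries pkt1, pkt3, pkt5
```

A real scheduler would weight the split by each link's throughput and loss ratio, which is what the paper's HWN evaluation varies.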

  12. Heterogeneous Wireless Networks for Smart Grid Distribution Systems: Advantages and Limitations

    PubMed Central

    Khalifa, Tarek; Abdrabou, Atef; Gaouda, A. M.

    2018-01-01

    Supporting a conventional power grid with advanced communication capabilities is a cornerstone of transforming it into a smart grid. A reliable communication infrastructure with a high throughput can lay the foundation towards the ultimate objective of a fully automated power grid with self-healing capabilities. In order to realize this objective, the communication infrastructure of a power distribution network needs to be extended to cover all substations including medium/low voltage ones. This shall enable information exchange among substations for a variety of system automation purposes with a low latency that suits time-critical applications. This paper proposes the integration of two heterogeneous wireless technologies (such as WiFi and cellular 3G/4G) to provide reliable and fast communication among primary and secondary distribution substations. This integration allows the transmission of different data packets (not packet replicas) over two radio interfaces, making these interfaces act like one data pipe. Thus, the paper investigates the applicability and effectiveness of employing heterogeneous wireless networks (HWNs) in achieving the desired reliability and timeliness requirements of future smart grids. We study the performance of HWNs in a realistic scenario under different data transfer loads and packet loss ratios. Our findings reveal that HWNs can be a viable data transfer option for smart grids. PMID:29751633

  13. Energy-Water Microgrid Case Study at the University of Arizona's BioSphere 2

    NASA Astrophysics Data System (ADS)

    Daw, J.; Macknick, J.; Kandt, A.; Giraldez, J.

    2016-12-01

    Microgrids can provide reliable and cost-effective energy services in a variety of conditions and locations. To date, there has been minimal effort invested in developing energy-water microgrids that demonstrate the feasibility and leverage the synergies associated with designing and operating renewable energy and water systems in a coordinated framework. Water and wastewater treatment equipment can be operated in ways to provide ancillary services to the electrical grid and renewable energy can be utilized to power water-related infrastructure, but the potential for co-managed systems has not yet been quantified or fully characterized. Co-management and optimization of energy and water resources could lead to improved reliability and economic operating conditions. Energy-water microgrids could be a promising solution to improve energy and water resource management for islands, rural communities, distributed generation, Defense operations, and many parts of the world lacking critical infrastructure. The National Renewable Energy Laboratory (NREL) and the University of Arizona have been jointly researching energy-water microgrid opportunities through an effort at the university's BioSphere 2 (B2) Earth systems science research facility. B2 is an ideal case study for an energy-water microgrid test site, given its size, its unique mission and operations, the existence and criticality of water and energy infrastructure, and its ability to operate connected to or disconnected from the local electrical grid. Moreover, B2 is a premier facility for undertaking agricultural research, providing an excellent opportunity to evaluate connections and tradeoffs in the food-energy-water nexus. 
The research effort at B2 identified the technical potential and associated benefits of an energy-water microgrid through the evaluation of energy ancillary services and peak load reductions and quantified the potential for B2 water-related loads to be utilized and modified to provide grid services in the context of an optimized energy-water microgrid. The foundational work performed at B2 also serves as a model that can be built upon for identifying relevant energy-water microgrid data, analytical requirements, and operational challenges associated with development of future energy-water microgrids.

  14. 75 FR 6180 - Mission Statement; Secretarial China Clean Energy Business Development Mission; May 16-21, 2010

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-08

    ... addition, Hong Kong has an efficient, transparent legal system based on common law principles that offer... 2020. The current grid infrastructure system is unable to support greater electricity movement from... sector, including traditional transmission/distribution systems and smart grid technologies, offers huge...

  15. AstroGrid-D: Grid technology for astronomical science

    NASA Astrophysics Data System (ADS)

    Enke, Harry; Steinmetz, Matthias; Adorf, Hans-Martin; Beck-Ratzka, Alexander; Breitling, Frank; Brüsemeister, Thomas; Carlson, Arthur; Ensslin, Torsten; Högqvist, Mikael; Nickelt, Iliya; Radke, Thomas; Reinefeld, Alexander; Reiser, Angelika; Scholl, Tobias; Spurzem, Rainer; Steinacker, Jürgen; Voges, Wolfgang; Wambsganß, Joachim; White, Steve

    2011-02-01

    We present status and results of AstroGrid-D, a joint effort of astrophysicists and computer scientists to employ grid technology for scientific applications. AstroGrid-D provides access to a network of distributed machines with a set of commands as well as software interfaces. It allows simple use of compute and storage facilities, and the scheduling and monitoring of compute tasks and data management. It is based on the Globus Toolkit middleware (GT4). Chapter 1 describes the context which led to the demand for advanced software solutions in Astrophysics, and we state the goals of the project. We then present characteristic astrophysical applications that have been implemented on AstroGrid-D in Chapter 2. We describe simulations of different complexity, compute-intensive calculations running on multiple sites (Section 2.1), and advanced applications for specific scientific purposes (Section 2.2), such as a connection to robotic telescopes (Section 2.2.3). These examples show how grid execution improves, e.g., the scientific workflow. Chapter 3 explains the software tools and services that we adapted or newly developed. Section 3.1 is focused on the administrative aspects of the infrastructure, to manage users and monitor activity. Section 3.2 characterises the central components of our architecture: the AstroGrid-D information service to collect and store metadata, a file management system, the data management system, and a job manager for automatic submission of compute tasks. We summarise the successfully established infrastructure in Chapter 4, concluding with our future plans to establish AstroGrid-D as a platform of modern e-Astronomy.

  16. Enabling campus grids with open science grid technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weitzel, Derek; Bockelman, Brian; Swanson, David

    2011-01-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billings, Jay J.; Bonior, Jason D.; Evans, Philip G.

    Securely transferring timing information in the electrical grid is a critical component of securing the nation's infrastructure from cyber attacks. One solution to this problem is to use quantum information to securely transfer the timing information across sites. This software provides such an infrastructure using a standard Java webserver that pulls the quantum information from associated hardware.

  18. Context-aware system design

    NASA Astrophysics Data System (ADS)

    Chan, Christine S.; Ostertag, Michael H.; Akyürek, Alper Sinan; Šimunić Rosing, Tajana

    2017-05-01

    The Internet of Things envisions a web-connected infrastructure of billions of sensors and actuation devices. However, the current state-of-the-art presents another reality: monolithic end-to-end applications tightly coupled to a limited set of sensors and actuators. Growing such applications with new devices or behaviors, or extending the existing infrastructure with new applications, involves redesign and redeployment. We instead propose a modular approach to these applications, breaking them into an equivalent set of functional units (context engines) whose input/output transformations are driven by general-purpose machine learning, demonstrating an improvement in compute redundancy and computational complexity with minimal impact on accuracy. In conjunction with formal data specifications, or ontologies, we can replace application-specific implementations with a composition of context engines that use common statistical learning to generate output, thus improving context reuse. We implement interconnected context-aware applications using our approach, extracting user context from sensors in both healthcare and grid applications. We compare our infrastructure to single-stage monolithic implementations with single-point communications between sensor nodes and the cloud servers, demonstrating a reduction in combined system energy by 22-45%, and multiplying the battery lifetime of power-constrained devices by at least 22x, with easy deployment across different architectures and devices.
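The context-engine decomposition described above, small reusable functional units replacing a monolithic pipeline, can be sketched as composable memoized functions; memoization is one simple way to obtain the compute-redundancy savings the abstract mentions. The sensor names and thresholds below are invented for illustration, not taken from the paper:

```python
# Sketch: context engines as reusable, memoized input->context transforms.
# Applications are compositions of engines rather than monoliths.

def engine(transform):
    """Wrap a transformation as a context engine with simple memoization,
    so engines shared by several applications avoid redundant compute."""
    cache = {}
    def run(inputs):
        key = tuple(sorted(inputs.items()))
        if key not in cache:
            cache[key] = transform(inputs)
        return cache[key]
    return run

# Two illustrative engines for a healthcare-style application.
activity = engine(lambda s: {"activity": "walking" if s["accel"] > 1.2 else "resting"})
alert = engine(lambda c: {"alert": c["activity"] == "walking" and c["hr"] > 150})

sensors = {"accel": 1.5, "hr": 160}
ctx = activity({"accel": sensors["accel"]})
out = alert({"activity": ctx["activity"], "hr": sensors["hr"]})
```

A grid application could reuse the same `activity` engine with different downstream engines, which is the context-reuse point the paper makes.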

  19. Evolution of the use of relational and NoSQL databases in the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Barberis, D.

    2016-09-01

    The ATLAS experiment used for many years a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, run and event metadata. The rapid development of "NoSQL" databases (structured storage services) in the last five years allowed an extended and complementary usage of traditional relational databases and new structured storage tools in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by the modern storage systems. The trend is towards using the best tool for each kind of data, separating for example the intrinsically relational metadata from payload storage, and records that are frequently updated and benefit from transactions from archived information. Access to all components has to be orchestrated by specialised services that run on front-end machines and shield the user from the complexity of data storage infrastructure. This paper describes this technology evolution in the ATLAS database infrastructure and presents a few examples of large database applications that benefit from it.

  20. Distinction of Concept and Discussion on Construction Idea of Smart Water Grid Project

    NASA Astrophysics Data System (ADS)

    Ye, Y.; Yizi, S., Sr.; Lili, L., Sr.; Sang, X.; Zhai, J.

    2016-12-01

    The smart water grid project includes construction of a water physical grid consisting of various flow-regulating infrastructures, construction of a water information grid in line with the trend of intelligent technology, and construction of a water management grid featured by system and mechanism construction and the systemization of regulation decision-making. It is the integrated platform and comprehensive carrier for water conservancy practices. Currently, there is still dispute over the construction idea of the smart water grid project, which nevertheless represents the future development trend of water management and is receiving increasing emphasis. Based on a distinction between the concepts of water grid and water grid engineering, the paper explains the concept of water grid intelligentization, actively probes into the construction idea of the smart water grid project in our country, and presents the scientific problems to be solved as well as the core technologies to be mastered for smart water grid construction.

  1. Smart Grid Maturity Model: Model Definition. A Framework for Smart Grid Transformation

    DTIC Science & Technology

    2010-09-01

    adoption of more efficient and reliable generation sources and would allow consumer-generated electricity (e.g., solar power and wind) to be connected to...program that pays customers (or credits their accounts) for customer-provided electricity such as from solar panels to the grid or electric vehicles...deployed. CUST-5.3 Plug-and-play customer-based generation (e.g., wind and solar ) is supported. This includes the necessary infrastructure, such

  2. Association rule mining on grid monitoring data to detect error sources

    NASA Astrophysics Data System (ADS)

    Maier, Gerhild; Schiffers, Michael; Kranzlmueller, Dieter; Gaidioz, Benjamin

    2010-04-01

    Error handling is a crucial task in an infrastructure as complex as a grid. Several monitoring tools are in place, which report failing grid jobs including exit codes. However, the exit codes do not always denote the actual fault which caused the job failure. Human time and knowledge are required to manually trace errors back to the real fault underlying them. We perform association rule mining on grid job monitoring data to automatically retrieve knowledge about the grid components' behavior by taking dependencies between grid job characteristics into account. In this way, problematic grid components are located automatically, and this information, expressed by association rules, is visualized in a web interface. This work decreases the time needed for fault recovery and improves a grid's reliability.
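Association rule mining over job records reduces, at its core, to counting co-occurring job attributes and filtering candidate rules by support and confidence. A toy sketch follows (not the authors' implementation); the job attributes are hypothetical:

```python
from collections import Counter
from itertools import combinations

def mine_rules(jobs, min_support=0.3, min_confidence=0.8):
    """Find rules antecedent -> consequent among pairs of job attributes.
    support = P(both attributes), confidence = P(consequent | antecedent)."""
    n = len(jobs)
    item_counts = Counter()
    pair_counts = Counter()
    for job in jobs:
        items = sorted(job)
        item_counts.update(items)
        pair_counts.update(combinations(items, 2))
    rules = []
    for (a, b), count in pair_counts.items():
        if count / n < min_support:
            continue
        for ante, cons in ((a, b), (b, a)):
            confidence = count / item_counts[ante]
            if confidence >= min_confidence:
                rules.append((ante, cons, confidence))
    return rules

# Hypothetical monitoring records: attribute sets per failed/finished job.
jobs = [
    {"site=X", "error=IO"},
    {"site=X", "error=IO"},
    {"site=X", "error=IO"},
    {"site=Y", "status=ok"},
]
rules = mine_rules(jobs)
# Jobs at site X always fail with IO errors -> site X is a suspect component.
```

Real implementations use Apriori-style pruning to scale beyond attribute pairs, but the support/confidence filtering is the same idea.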

  3. Managing competing elastic Grid and Cloud scientific computing applications using OpenNebula

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, other smaller applications can benefit from automatic elasticity; the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, will be described, and the very first operational experiences with a small number of strategies for timely allocation and release of resources will be discussed.
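The elasticity policy sketched in the abstract, grow when jobs queue and capacity remains, release idle resources promptly so the saturated site can reclaim them, can be captured by a small decision function. The parameters are illustrative and do not correspond to OpenNebula's API:

```python
def elastic_decision(queued_jobs, running_vms, idle_vms, max_vms):
    """Decide how many worker VMs to start (positive) or stop (negative).
    Scale up only while capacity remains below the quota; release idle
    VMs promptly so the larger static Grid site can reclaim the cores."""
    if queued_jobs > 0:
        headroom = max_vms - running_vms
        return min(queued_jobs, headroom)
    return -idle_vms

# Jobs waiting, quota of 10 VMs, 8 running: start 2 more.
start = elastic_decision(queued_jobs=5, running_vms=8, idle_vms=0, max_vms=10)
# Queue drained, 3 VMs idle: release all 3 immediately.
stop = elastic_decision(queued_jobs=0, running_vms=8, idle_vms=3, max_vms=10)
```

In a saturated regime the release branch matters as much as the growth branch, which is why the paper emphasises timely allocation *and* release.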

  4. Security-Oriented and Load-Balancing Wireless Data Routing Game in the Integration of Advanced Metering Infrastructure Network in Smart Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Fulin; Cao, Yang; Zhang, Jun Jason

    Ensuring flexible and reliable data routing is indispensable for the integration of Advanced Metering Infrastructure (AMI) networks, so we propose a security-oriented and load-balancing wireless data routing scheme. A novel utility function is designed based on the security routing scheme. We then model the interactive security-oriented routing strategy among meter data concentrators or smart grid meters as a mixed-strategy network formation game. Finally, this problem results in a stable probabilistic routing scheme obtained with the proposed distributed learning algorithm. One contribution is a study of how different types of applications affect the routing selection strategy and the strategy tendency. Another contribution is that the chosen strategy of our mixed routing can adaptively converge to a new mixed-strategy Nash equilibrium (MSNE) during the learning process in the smart grid.
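A learning process that converges to a mixed (probabilistic) routing strategy can be illustrated with a multiplicative-weights update, a standard stand-in for the paper's distributed learning algorithm, whose details the abstract does not give. The route costs below are hypothetical:

```python
def learn_routing(costs, rounds=200, eta=0.1):
    """Full-information multiplicative-weights learning of a mixed routing
    strategy: each round, every route's weight shrinks in proportion to
    its cost (e.g. insecurity plus congestion), and the normalized weights
    form the probabilistic routing scheme."""
    weights = [1.0] * len(costs)
    for _ in range(rounds):
        weights = [w * (1.0 - eta * c) for w, c in zip(weights, costs)]
    total = sum(weights)
    return [w / total for w in weights]

# Route 0 is cheap (secure, lightly loaded); it should dominate the mix.
mix = learn_routing([0.1, 0.9, 0.8])
```

With static costs the mix concentrates on the cheapest route; in the game-theoretic setting each node's costs depend on the others' strategies, and the joint learning converges to a mixed-strategy equilibrium instead.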

  5. Driving rural energy access: a second-life application for electric-vehicle batteries

    NASA Astrophysics Data System (ADS)

    Ambrose, Hanjiro; Gershenson, Dimitry; Gershenson, Alexander; Kammen, Daniel

    2014-09-01

    Building rural energy infrastructure in developing countries remains a significant financial, policy and technological challenge. The growth of the electric vehicle (EV) industry will rapidly expand the resource of partially degraded, ‘retired’, but still usable batteries in 2016 and beyond. These batteries can become the storage hubs for community-scale grids in the developing world. We model the resource and performance potential and the technological and economic aspects of the utilization of retired EV batteries in rural and decentralized mini- and micro-grids. We develop and explore four economic scenarios across three battery chemistries to examine the impacts on transport and recycling logistics. We find that EVs sold through 2020 will produce 120-549 GWh in retired storage potential by 2028. Outlining two use scenarios for decentralized systems, we discuss the possible impacts on global electrification rates. We find that used EV batteries can provide a cost-effective and lower environmental impact alternative to existing lead-acid storage systems in these applications.
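The headline figure (120-549 GWh of retired storage potential by 2028) is a fleet-scale capacity projection. The underlying arithmetic can be sketched as follows; all input values are illustrative, not taken from the paper's scenarios:

```python
def second_life_capacity_gwh(vehicles, pack_kwh, retired_fraction,
                             remaining_fraction):
    """Retired-storage potential in GWh: fleet size x pack size x share of
    packs retired x usable capacity remaining at retirement (EV packs are
    typically retired with most of their original capacity intact)."""
    return vehicles * pack_kwh * retired_fraction * remaining_fraction / 1e6

# Illustrative inputs: 10 M EVs with 24 kWh packs, all retired by the
# target year, 70% capacity remaining -> 168 GWh, within the paper's
# 120-549 GWh range.
estimate = second_life_capacity_gwh(10_000_000, 24, 1.0, 0.7)
```

The paper's wide range reflects uncertainty in exactly these inputs across its four economic scenarios and three battery chemistries.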

  6. Renewable energy sources, the internet of things and the third industrial revolution: Smart grid and contemporary information and communication technologies

    NASA Astrophysics Data System (ADS)

    Kitsios, Aristidis; Bousakas, Konstantinos; Salame, Takla; Bogno, Bachirou; Papageorgas, Panagiotis; Vokas, Georgios A.; Mauffay, Fabrice; Petit, Pierre; Aillerie, Michel; Charles, Jean-Pierre

    2017-02-01

    In this paper, the energy efficiency of a contemporary Smart Grid based on Distributed Renewable Energy Sources (DRES) is examined from the perspective of the communication systems used between the energy loads and the energy sources. Internet of Things (IoT) technologies based on the existing Web infrastructure can clearly be introduced in this direction, especially when combined with long-range low-bandwidth networking technologies, power-line communication technologies, and optimization methodologies for renewable energy generation. The renewable energy generation optimization will be based on devices embedded in the PV panels and the wind power generators, which will rely on bidirectional communications with local gateways and remote control stations to achieve energy efficiency. Smart meters and DRES combined with IoT communications will be the enabling technologies for the ultimate fusion of Internet technology and renewable energy generation, realizing the Energy Internet.

  7. The CMS Tier0 goes cloud and grid for LHC Run 2

    DOE PAGES

    Hufnagel, Dirk

    2015-12-23

    In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. Furthermore, this contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015.

  8. The CMS TierO goes Cloud and Grid for LHC Run 2

    NASA Astrophysics Data System (ADS)

    Hufnagel, Dirk

    2015-12-01

    In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. This contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015.

  9. The Climate-G Portal: a Grid Enabled Scientific Gateway for Climate Change

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Negro, Alessandro; Aloisio, Giovanni

    2010-05-01

    Grid portals are web gateways that conceal the underlying infrastructure behind pervasive, transparent, user-friendly, ubiquitous and seamless access to heterogeneous and geographically spread resources (i.e. storage, computational facilities, services, sensors, networks, databases). In short, they provide an enhanced problem-solving environment for modern, large-scale scientific and engineering problems. Scientific gateways can revolutionize the way scientists and researchers organize and carry out their activities. Access to distributed resources, complex workflow capabilities, and community-oriented functionalities are just some of the features such a web-based environment can provide. In the context of the EGEE NA4 Earth Science Cluster, Climate-G is a distributed testbed focusing on climate change research topics. The Euro-Mediterranean Center for Climate Change (CMCC) is actively participating in the testbed, providing the scientific gateway (Climate-G Portal) that gives access to the entire infrastructure. The Climate-G Portal has to face important and critical challenges and to satisfy key requirements; the most relevant ones are presented and discussed in the following. Transparency: the portal has to provide transparent access to the underlying infrastructure, shielding users from low-level details and the complexity of a distributed grid environment. Security: users must be authenticated and authorized on the portal to access and exploit its functionalities. A wide set of roles is needed so that the proper one can be assigned to each user. Access to the computational grid must be completely secured, since the target infrastructure for running jobs is a production grid environment; a security infrastructure (based on X509v3 digital certificates) is strongly needed. Pervasiveness and ubiquity: access to the system must be pervasive and ubiquitous, which follows naturally from the web-based approach. Usability and simplicity: the portal has to provide simple, high-level and user-friendly interfaces to ease access to and exploitation of the entire system. Coexistence of general-purpose and domain-oriented services: along with general-purpose services (file transfer, job submission, etc.), the portal has to provide domain-specific services and functionalities. Subsetting of data, visualization of 2D maps around a virtual globe, and delivery of maps through OGC-compliant interfaces (i.e. Web Map Service - WMS) are just some examples. Since April 2009, about 70 users (85% from the climate change community) have been granted access to the portal. A key challenge of this work is the idea of providing users with an integrated working environment: a place where scientists can find huge amounts of data, complete metadata support, a wide set of data access services, data visualization and analysis tools, easy access to the underlying grid infrastructure, and advanced monitoring interfaces.

  10. Addressing Global Warming, Air Pollution, Energy Security, and Jobs with Roadmaps for Changing the All-Purpose Energy Infrastructure of the 50 United States

    NASA Astrophysics Data System (ADS)

    Jacobson, M. Z.

    2014-12-01

    Global warming, air pollution, and energy insecurity are three of the most significant problems facing the world today. This talk discusses the development of technical and economic plans to convert the energy infrastructure of each of the 50 United States to those powered by 100% wind, water, and sunlight (WWS) for all purposes, namely electricity, transportation, industry, and heating/cooling, after energy efficiency measures have been accounted for. The plans call for all new energy to be WWS by 2020, ~80% conversion of existing energy by 2030, and 100% by 2050 through aggressive policy measures and natural transition. Resource availability, footprint and spacing areas required, jobs created versus lost, energy costs, avoided costs from air pollution mortality and morbidity and climate damage, and methods of ensuring reliability of the grid are discussed. Please see http://web.stanford.edu/group/efmh/jacobson/Articles/I/WWS-50-USState-plans.html

  11. Load Segmentation for Convergence of Distribution Automation and Advanced Metering Infrastructure Systems

    NASA Astrophysics Data System (ADS)

    Pamulaparthy, Balakrishna; KS, Swarup; Kommu, Rajagopal

    2014-12-01

    Distribution automation (DA) applications are limited to feeder level today and have zero visibility outside of the substation feeder and reaching down to the low-voltage distribution network level. This has become a major obstacle in realizing many automated functions and enhancing existing DA capabilities. Advanced metering infrastructure (AMI) systems are being widely deployed by utilities across the world creating system-wide communications access to every monitoring and service point, which collects data from smart meters and sensors in short time intervals, in response to utility needs. DA and AMI systems convergence provides unique opportunities and capabilities for distribution grid modernization with the DA system acting as a controller and AMI system acting as feedback to DA system, for which DA applications have to understand and use the AMI data selectively and effectively. In this paper, we propose a load segmentation method that helps the DA system to accurately understand and use the AMI data for various automation applications with a suitable case study on power restoration.

  12. Using Taxonomic Indexing Trees to Efficiently Retrieve SCORM-Compliant Documents in e-Learning Grids

    ERIC Educational Resources Information Center

    Shih, Wen-Chung; Tseng, Shian-Shyong; Yang, Chao-Tung

    2008-01-01

    With the flourishing development of e-Learning, more and more SCORM-compliant teaching materials are developed by institutes and individuals in different sites. In addition, the e-Learning grid is emerging as an infrastructure to enhance traditional e-Learning systems. Therefore, information retrieval schemes supporting SCORM-compliant documents…

  13. Complete distributed computing environment for a HEP experiment: experience with ARC-connected infrastructure for ATLAS

    NASA Astrophysics Data System (ADS)

    Read, A.; Taga, A.; O-Saada, F.; Pajchel, K.; Samset, B. H.; Cameron, D.

    2008-07-01

    Computing and storage resources connected by the Nordugrid ARC middleware in the Nordic countries, Switzerland and Slovenia are a part of the ATLAS computing Grid. This infrastructure is being commissioned with the ongoing ATLAS Monte Carlo simulation production in preparation for the commencement of data taking in 2008. The unique non-intrusive architecture of ARC, its straightforward interplay with the ATLAS Production System via the Dulcinea executor, and its performance during the commissioning exercise are described. ARC support for flexible and powerful end-user analysis within the GANGA distributed analysis framework is also shown. Whereas the storage solution for this Grid was earlier based on a large, distributed collection of GridFTP-servers, the ATLAS computing design includes a structured SRM-based system with a limited number of storage endpoints. The characteristics, integration and performance of the old and new storage solutions are presented. Although the hardware resources in this Grid are quite modest, it has provided more than double the agreed contribution to the ATLAS production with an efficiency above 95% during long periods of stable operation.

  14. The smart meter and a smarter consumer: quantifying the benefits of smart meter implementation in the United States.

    PubMed

    Cook, Brendan; Gazzano, Jerrome; Gunay, Zeynep; Hiller, Lucas; Mahajan, Sakshi; Taskan, Aynur; Vilogorac, Samra

    2012-04-23

    The electric grid in the United States has been suffering from underinvestment for years, and now faces pressing challenges from rising demand and deteriorating infrastructure. High congestion levels in transmission lines are greatly reducing the efficiency of electricity generation and distribution. In this paper, we assess the faults of the current electric grid and quantify the costs of maintaining the current system into the future. While the proposed "smart grid" contains many proposals to upgrade the ailing infrastructure of the electric grid, we argue that smart meter installation in each U.S. household will offer a significant reduction in peak demand on the current system. A smart meter is a device which monitors a household's electricity consumption in real-time, and has the ability to display real-time pricing in each household. We conclude that these devices will provide short-term and long-term benefits to utilities and consumers. The smart meter will enable utilities to closely monitor electricity consumption in real-time, while also allowing households to adjust electricity consumption in response to real-time price adjustments.
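
    The abstract's core mechanism, households adjusting consumption in response to real-time prices, can be sketched with a simple constant-elasticity demand model. The function, parameter names and elasticity value below are illustrative assumptions, not figures from the paper:

```python
def price_responsive_load(base_kw, price, reference_price, elasticity=-0.2):
    """Estimate household load (kW) after a real-time price signal.

    Constant-elasticity model: a 1% price rise changes demand by
    `elasticity` percent. All parameter values are illustrative.
    """
    return base_kw * (price / reference_price) ** elasticity

# At the reference price, load is unchanged.
baseline = price_responsive_load(2.0, 0.10, 0.10)

# When the real-time price doubles at peak, the modeled load falls.
peak = price_responsive_load(2.0, 0.20, 0.10)
print(f"baseline: {baseline:.2f} kW, at 2x price: {peak:.2f} kW")
```

    Aggregated over many households, this kind of price response is the peak-demand reduction the authors quantify.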

  15. Strategies, Protections and Mitigations for Electric Grid from Electromagnetic Pulse Effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, Rita Ann; Frickey, Steven Jay

    2016-01-01

    The mission of DOE’s Office of Electricity Delivery and Energy Reliability (OE) is to lead national efforts to modernize the electricity delivery system, enhance the security and reliability of America’s energy infrastructure and facilitate recovery from disruptions to the energy supply. One of the threats OE is concerned about is a high-altitude electromagnetic pulse (HEMP) from a nuclear explosion; an electromagnetic pulse (EMP), or E1 pulse, can also be generated by EMP weapons. DOE-OE provides federal leadership and technical guidance in addressing electric grid issues. The Idaho National Laboratory (INL) was chosen to conduct the EMP study for DOE-OE due to its capabilities and experience in setting up EMP experiments on the electric grid, conducting vulnerability assessments, and developing innovative technology to increase infrastructure resiliency. This report identifies known impacts of EMP threats, known mitigations and their effectiveness, potential costs of mitigation, areas for government and private partnerships in protecting the electric grid from EMP, and gaps in our knowledge and protection strategies.

  16. A secure and efficiently searchable health information architecture.

    PubMed

    Yasnoff, William A

    2016-06-01

    Patient-centric repositories of health records are an important component of health information infrastructure. However, patient information in a single repository is potentially vulnerable to loss of the entire dataset from a single unauthorized intrusion. A new health record storage architecture, the personal grid, eliminates this risk by separately storing and encrypting each person's record. The tradeoff for this improved security is that a personal grid repository must be sequentially searched, since each record must be individually accessed and decrypted. To allow reasonable search times for large numbers of records, parallel processing with hundreds (or even thousands) of on-demand virtual servers (now available in cloud computing environments) is used. Estimated search times for a 10 million record personal grid using 500 servers vary from 7 to 33 min depending on the complexity of the query. Since extremely rapid searching is not a critical requirement of health information infrastructure, the personal grid may provide a practical and useful alternative architecture that eliminates the large-scale security vulnerabilities of traditional databases by sacrificing unnecessary searching speed.
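
    The abstract's search-time estimate can be reproduced with back-of-the-envelope arithmetic: split the records evenly across the servers and multiply by a per-record decrypt-and-match cost. The per-record costs below are assumptions chosen to land in the reported 7 to 33 minute range, not figures from the paper:

```python
import math

def parallel_search_minutes(n_records, n_servers, per_record_ms):
    """Wall-clock time to scan a personal-grid repository when records
    are split evenly across servers and each record must be individually
    decrypted and matched (per_record_ms is an assumed cost)."""
    records_per_server = math.ceil(n_records / n_servers)
    return records_per_server * per_record_ms / 1000 / 60

# 10 million records on 500 servers, as in the abstract:
simple_query = parallel_search_minutes(10_000_000, 500, per_record_ms=21)
complex_query = parallel_search_minutes(10_000_000, 500, per_record_ms=99)
print(f"{simple_query:.0f} to {complex_query:.0f} minutes")
```

    With these assumed costs each server scans 20,000 records, giving 7 and 33 minutes for the simple and complex queries respectively.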

  17. A Public Health Grid (PHGrid): Architecture and value proposition for 21st century public health.

    PubMed

    Savel, T; Hall, K; Lee, B; McMullin, V; Miles, M; Stinn, J; White, P; Washington, D; Boyd, T; Lenert, L

    2010-07-01

    This manuscript describes the value of and proposal for a high-level architectural framework for a Public Health Grid (PHGrid), which the authors feel has the capability to afford the public health community a robust technology infrastructure for secure and timely data, information, and knowledge exchange, not only within the public health domain, but between public health and the overall health care system. The CDC facilitated multiple Proof-of-Concept (PoC) projects, leveraging an open-source-based software development methodology, to test four hypotheses with regard to this high-level framework. The outcomes of the four PoCs, in combination with the use of the Federal Enterprise Architecture Framework (FEAF) and the newly emerging Federal Segment Architecture Methodology (FSAM), were used to develop and refine a high-level architectural framework for a Public Health Grid infrastructure. The authors were successful in documenting a robust high-level architectural framework for a PHGrid. The documentation generated provided the level of granularity needed to validate the proposal, and included examples of both information standards and services to be implemented. Both the results of the PoCs and feedback from selected public health partners were used to develop the granular documentation. A robust high-level cohesive architectural framework for a Public Health Grid (PHGrid) has been successfully articulated, with its feasibility demonstrated via multiple PoCs. In order to successfully implement this framework for a Public Health Grid, the authors recommend moving forward with a three-pronged approach focusing on interoperability and standards, streamlining the PHGrid infrastructure, and developing robust and high-impact public health services.

  18. A Grid Metadata Service for Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Negro, Alessandro; Aloisio, Giovanni

    2010-05-01

    Critical challenges for climate modeling researchers are strongly connected with increasingly complex simulation models and the huge quantities of datasets they produce. Future trends in climate modeling will only increase computational and storage requirements. For this reason, the ability to transparently access both computational and data resources for large-scale, complex climate simulations must be considered a key requirement for Earth Science and Environmental distributed systems. From the data management perspective, (i) the quantity of data will continuously increase, (ii) data will become more and more distributed and widespread, (iii) data sharing/federation will represent a key challenge among different sites distributed worldwide, and (iv) the potential community of users (large and heterogeneous) will be interested in discovering experimental results, searching metadata, browsing collections of files, comparing different results, displaying output, etc. A key element for carrying out data search and discovery, and for managing and accessing huge, distributed amounts of data, is the metadata handling framework. What we propose for the management of distributed datasets is the GRelC service (a data grid solution focusing on metadata management). Unlike classical approaches, the proposed data grid solution is able to address scalability, transparency, security, efficiency and interoperability. The GRelC service we propose provides access to metadata stored in different and widespread data sources (relational databases running on top of MySQL, Oracle, DB2, etc., leveraging SQL as the query language, as well as XML databases - XIndice, eXist, and libxml2-based documents, adopting either XPath or XQuery), providing a strong data virtualization layer in a grid environment. Such a technological solution for distributed metadata management (i) leverages well-known, widely adopted standards (W3C, OASIS, etc.); (ii) supports role-based management (based on VOMS), which increases flexibility and scalability; (iii) provides full support for the Grid Security Infrastructure (authorization, mutual authentication, data integrity, data confidentiality and delegation); (iv) is compatible with existing grid middleware such as gLite and Globus; and finally (v) is currently adopted at the Euro-Mediterranean Centre for Climate Change (CMCC - Italy) to manage the entire CMCC data production activity, as well as in the international Climate-G testbed.

  19. WLCG scale testing during CMS data challenges

    NASA Astrophysics Data System (ADS)

    Gutsche, O.; Hajdu, C.

    2008-07-01

    The CMS computing model to process and analyze LHC collision data follows a data-location driven approach and is using the WLCG infrastructure to provide access to GRID resources. As a preparation for data taking, CMS tests its computing model during dedicated data challenges. An important part of the challenges is the test of the user analysis which poses a special challenge for the infrastructure with its random distributed access patterns. The CMS Remote Analysis Builder (CRAB) handles all interactions with the WLCG infrastructure transparently for the user. During the 2006 challenge, CMS set its goal to test the infrastructure at a scale of 50,000 user jobs per day using CRAB. Both direct submissions by individual users and automated submissions by robots were used to achieve this goal. A report will be given about the outcome of the user analysis part of the challenge using both the EGEE and OSG parts of the WLCG. In particular, the difference in submission between both GRID middlewares (resource broker vs. direct submission) will be discussed. In the end, an outlook for the 2007 data challenge is given.

  20. Climate simulations and services on HPC, Cloud and Grid infrastructures

    NASA Astrophysics Data System (ADS)

    Cofino, Antonio S.; Blanco, Carlos; Minondo Tshuma, Antonio

    2017-04-01

    Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the climate community. These paradigms are modifying the way climate applications are being executed. By using these technologies, the number, variety and complexity of experiments and resources are increasing substantially. But although computational capacity is increasing, the traditional applications and tools used by the community are not good enough to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of running climate simulations and services on Grid, Cloud and HPC infrastructures, and how to tackle them. The Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. To solve those challenges, solutions using the DRM4G framework will be shown. DRM4G provides a good framework to manage a large volume and variety of computing resources for climate experiments. This work has been supported by the Spanish National R&D Plan under projects WRF4G (CGL2011-28864), INSIGNIA (CGL2016-79210-R) and MULTI-SDM (CGL2015-66583-R); the IS-ENES2 project from the 7FP of the European Commission (grant agreement no. 312979); the European Regional Development Fund (ERDF); and the Programa de Personal Investigador en Formación Predoctoral from Universidad de Cantabria and Government of Cantabria.

  1. GEMSS: privacy and security for a medical Grid.

    PubMed

    Middleton, S E; Herveg, J A M; Crazzolara, F; Marvin, D; Poullet, Y

    2005-01-01

    The GEMSS project is developing a secure Grid infrastructure through which six medical simulation services can be invoked. We examine the legal and security framework within which GEMSS operates. We provide a legal qualification of the operations performed upon patient data, in view of EU directive 95/46, when using medical applications on the GEMSS Grid. We identify appropriate measures to ensure security and describe the legal rationale behind our choice of security technology. Our legal analysis demonstrates there must be an identified controller (typically a hospital) of patient data. The controller must then choose a processor (in this context a Grid service provider) that provides sufficient guarantees with respect to the security of their technical and organizational data processing procedures. These guarantees must ensure a level of security appropriate to the risks, with due regard to the state of the art and the cost of their implementation. Our security solutions are based on a public key infrastructure (PKI), transport level security and end-to-end security mechanisms in line with the web service security specifications (WS-Security, WS-Trust and WS-SecureConversation). The GEMSS infrastructure ensures a degree of protection of patient data that is appropriate for the health care sector, and is in line with the European directives. We hope that GEMSS will become synonymous with high security data processing, providing a framework by which GEMSS service providers can provide the security guarantees required by hospitals with regard to the processing of patient data.

  2. Initial steps towards a production platform for DNA sequence analysis on the grid.

    PubMed

    Luyf, Angela C M; van Schaik, Barbera D C; de Vries, Michel; Baas, Frank; van Kampen, Antoine H C; Olabarriaga, Silvia D

    2010-12-14

    Bioinformatics is confronted with a new data explosion due to the availability of high-throughput DNA sequencers. Data storage and analysis become a problem on local servers, and therefore a switch to other IT infrastructures is needed. Grid and workflow technology can help to handle the data more efficiently, as well as facilitate collaborations. However, interfaces to grids are often unfriendly to novice users. In this study we reused a platform that was developed in the VL-e project for the analysis of medical images. Data transfer, workflow execution and job monitoring are operated from one graphical interface. We developed workflows for two sequence alignment tools (BLAST and BLAT) as a proof of concept. The analysis time was significantly reduced. All workflows and executables are available to the members of the Dutch Life Science Grid and the VL-e Medical virtual organizations. All components are open source and can be ported to other grid infrastructures. The availability of in-house expertise and tools facilitates the usage of grid resources by new users. Our first results indicate that this is a practical, powerful and scalable solution to address the capacity and collaboration issues raised by the deployment of next-generation sequencers. We currently use this methodology on a daily basis for DNA sequencing and other applications. More information and source code are available via http://www.bioinformaticslaboratory.nl/

  3. Grid today, clouds on the horizon

    NASA Astrophysics Data System (ADS)

    Shiers, Jamie

    2009-04-01

    By the time of CCP 2008, the largest scientific machine in the world - the Large Hadron Collider - had been cooled down as scheduled to its operational temperature of below 2 kelvin and injection tests were starting. Collisions of proton beams at 5+5 TeV were expected within one to two months of the initial tests, with data taking at design energy (7+7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our "Higgs in one basket" - that of Grid computing [The Worldwide LHC Computing Grid (WLCG), in: Proceedings of the Conference on Computational Physics 2006 (CCP 2006), vol. 177, 2007, pp. 219-223]. After many years of preparation, 2008 saw a final "Common Computing Readiness Challenge" (CCRC'08) - aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change - as always - is on the horizon. The current funding model for Grids - which in Europe has been through 3 generations of EGEE projects, together with related projects in other parts of the world, including South America - is evolving towards a long-term, sustainable e-infrastructure, like the European Grid Initiative (EGI) [The European Grid Initiative Design Study, website at http://web.eu-egi.eu/]. At the same time, potentially new paradigms, such as that of "Cloud Computing", are emerging. This paper summarizes the results of CCRC'08 and discusses the potential impact of future Grid funding on both regional and international application communities. It contrasts Grid and Cloud computing models from both technical and sociological points of view. Finally, it discusses the requirements from production application communities, in terms of stability and continuity in the medium to long term.

  4. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model

    PubMed Central

    Pinthong, Watthanai; Muangruen, Panya

    2016-01-01

    Development of high-throughput technologies, such as next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experiment data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high performance computer (HPC) is required for the analysis and interpretation. However, HPCs are expensive and difficult to access. Other means, such as cloud computing services and grid computing systems, were developed to give researchers the power of an HPC without the need to purchase and maintain one. In this study, we implemented grid computing in a computer training center environment using the Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize an HPC. Fifty desktop computers were used to set up the grid system during off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from an Illumina platform were aligned to the human genome database by BLAST on the grid system. The results and processing times were compared to those from a single desktop computer and an HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, the HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST with BOINC is an efficient alternative to an HPC for sequence alignment. The grid implementation with BOINC also helped tap unused computing resources during off-hours and could easily be adapted to other available bioinformatics software. PMID:27547555
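
    The reported durations imply the following speedups relative to the single desktop PC; this is simple arithmetic on the abstract's numbers (the variable names are ours):

```python
# Estimated BLAST run times for 4 million reads, in days (from the abstract).
durations_days = {"desktop PC": 568, "HPC": 24, "BOINC grid": 5}

# Speedup of each platform relative to the single desktop PC.
speedup = {p: durations_days["desktop PC"] / d for p, d in durations_days.items()}

for platform in durations_days:
    print(f"{platform:10s}: {durations_days[platform]:3d} days, "
          f"{speedup[platform]:6.1f}x vs desktop")
```

    The 50-machine grid thus delivered roughly a 114x speedup over one desktop, though the three runs were measured on different hardware, so the ratios compare platforms rather than pure parallel efficiency.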

  5. Wide-area, real-time monitoring and visualization system

    DOEpatents

    Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.

    2013-03-19

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  6. Wide-area, real-time monitoring and visualization system

    DOEpatents

    Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA

    2011-11-15

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  7. Real-time performance monitoring and management system

    DOEpatents

    Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA

    2007-06-19

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  8. About the Need of Combining Power Market and Power Grid Model Results for Future Energy System Scenarios

    NASA Astrophysics Data System (ADS)

    Mende, Denis; Böttger, Diana; Löwer, Lothar; Becker, Holger; Akbulut, Alev; Stock, Sebastian

    2018-02-01

    The European power grid infrastructure faces various challenges due to the expansion of renewable energy sources (RES). To investigate the interactions between power generation and the power grid, models of both the power market and the power grid are necessary. This paper describes the basic functionality and working principles of both types of model, as well as the steps needed to couple power market results with the power grid model. Combining these models is beneficial both for obtaining realistic power flow scenarios in the grid model and for passing results of the power flow and its restrictions back to the market model. Focus is placed on the power grid model and possible application examples such as algorithms for grid analysis, operation and dynamic equipment modelling.
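
    The coupling loop described above can be sketched on a toy two-node system (all names, costs and limits below are hypothetical, and the dispatch and flow logic is deliberately simplified): the market model dispatches the cheapest generation, the grid model checks the resulting line flow, and a violated line limit is passed back to the market as a redispatch constraint.

```python
def market_dispatch(load_mw, gens, max_remote_mw=float("inf")):
    """Merit-order dispatch: fill the load from the cheapest generators first.
    `max_remote_mw` caps generation at the remote node (the grid feedback)."""
    dispatch = {}
    remaining = load_mw
    for name, (cost, cap, remote) in sorted(gens.items(), key=lambda g: g[1][0]):
        limit = min(cap, max_remote_mw) if remote else cap
        mw = min(remaining, limit)
        dispatch[name] = mw
        remaining -= mw
    return dispatch

# Cheap wind at node A, expensive gas at node B; the load sits at node B,
# so everything produced at A must flow over the single A-B line.
gens = {"wind_A": (10.0, 800.0, True), "gas_B": (60.0, 800.0, False)}
load_mw, line_limit_mw = 700.0, 500.0

dispatch = market_dispatch(load_mw, gens)
flow_ab = dispatch["wind_A"]            # grid model: flow on line A-B
if flow_ab > line_limit_mw:             # restriction passed back to the market
    dispatch = market_dispatch(load_mw, gens, max_remote_mw=line_limit_mw)

print(dispatch)
```

    The market-only result (700 MW of wind) violates the 500 MW line limit; after the grid model's feedback, 500 MW of wind plus 200 MW of local gas serves the load. A real coupling would use full AC or DC power flow over many nodes, but the iteration structure is the same.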

  9. Network Computing Infrastructure to Share Tools and Data in Global Nuclear Energy Partnership

    NASA Astrophysics Data System (ADS)

    Kim, Guehee; Suzuki, Yoshio; Teshima, Naoya

    CCSE/JAEA (Center for Computational Science and e-Systems/Japan Atomic Energy Agency) integrated a prototype network computing infrastructure for sharing tools and data to support the U.S.-Japan collaboration in GNEP (Global Nuclear Energy Partnership). We focused on three technical issues in applying our information process infrastructure: accessibility, security, and usability. In designing the prototype system, we integrated and improved both network and Web technologies. For accessibility, we adopted SSL-VPN (Secure Sockets Layer-Virtual Private Network) technology to enable access across firewalls. For security, we developed an authentication gateway based on the PKI (Public Key Infrastructure) authentication mechanism. We also set a fine-grained access control policy for shared tools and data and used a shared-key encryption method to protect tools and data against leakage to third parties. For usability, we chose Web browsers as the user interface and developed a Web application providing functions for sharing tools and data. Using the WebDAV (Web-based Distributed Authoring and Versioning) function, users can manipulate shared tools and data through a Windows-like folder environment. We implemented the prototype system on AEGIS (Atomic Energy Grid Infrastructure), the Grid infrastructure for atomic energy research developed by CCSE/JAEA. The prototype system was applied to trial use in the first period of GNEP.

  10. High Quality Data for Grid Integration Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clifton, Andrew; Draxl, Caroline; Sengupta, Manajit

    As variable renewable power penetration levels increase in power systems worldwide, renewable integration studies are crucial to ensure continued economic and reliable operation of the power grid. The existing electric grid infrastructure in the US in particular poses significant limitations on wind power expansion. In this presentation we will shed light on requirements for grid integration studies as far as wind and solar energy are concerned. Because wind and solar plants are strongly impacted by weather, high-resolution and high-quality weather data are required to drive power system simulations. Future data sets will have to push the limits of numerical weather prediction to yield these high-resolution data sets, and wind data will have to be time-synchronized with solar data. Current wind and solar integration data sets are presented. The Wind Integration National Dataset (WIND) Toolkit is the largest and most complete grid integration data set publicly available to date. A meteorological data set, wind power production time series, and simulated forecasts created using the Weather Research and Forecasting Model run on a 2-km grid over the continental United States at a 5-min resolution are now publicly available for more than 126,000 land-based and offshore wind power production sites. The National Solar Radiation Database (NSRDB) is a similar high temporal- and spatial-resolution database of 18 years of solar resource data for North America and India. The need for high-resolution weather data pushes modeling towards finer scales and closer synchronization. We also present how we anticipate such datasets developing in the future, their benefits, and the challenges of using and disseminating such large amounts of data.

  11. Is there a need for government interventions to adapt energy infrastructures to climate change? A German case study

    NASA Astrophysics Data System (ADS)

    Groth, Markus; Cortekar, Jörg

    2015-04-01

    Adapting to climate change is becoming more and more important in climate change policy. Responding to climate change now involves both mitigation, to address the cause, and adaptation, as a response to already ongoing and expected changes. These changes are also relevant for the current and future energy sector in Germany, a sector that in the course of the German Energiewende must also manage a fundamental shift in energy supply from fossil fuels to renewable energies over the next decades. It also needs to be considered that the energy sector is a critical infrastructure in the European Union that needs to be protected. Critical infrastructures can be defined as organisations or facilities of special importance for the country and its people whose failure or functional impairment would lead to severe supply bottlenecks, significant disturbance of public order or other dramatic consequences. Regarding adaptation to climate change, the main question is whether adaptation options will be implemented voluntarily by companies. This will be the case when a measure is considered a private good and is economically beneficial. If, on the contrary, a measure is considered a public good, additional incentives are needed. Based on a synthesis of the current knowledge on the possible impacts of climate change on the German energy sector along its value-added chain, the paper points out that power distribution and the grid infrastructure are consistently attributed the highest vulnerability. Direct physical impacts and damages to the transmission and distribution grids, utility poles, power transformers, and relay stations are expected due to more intense extreme weather events such as storms, floods or thunderstorms. Furthermore, the foundations of utility poles can be eroded, and relay stations or power transformers can be flooded, which might cause short circuits.
Besides these impacts, which damage the physical infrastructure, efficiency losses in electricity transmission might also occur due to very high or very low temperatures. While vulnerabilities in power generation primarily result in efficiency losses, interferences at the grid level could cause power outages with cascade effects on other sectors of society and the economy. The paper argues that these possible impacts of a changing climate should be taken into account in the upcoming infrastructure projects in the course of the Energiewende. Governmental interventions, such as legal obligations or incentives through economic instruments, are therefore justifiable, for example for measures to adapt the grid infrastructure, a critical infrastructure that needs to be protected against current and future impacts of climate change.

  12. Intrusion detection system using Online Sequence Extreme Learning Machine (OS-ELM) in advanced metering infrastructure of smart grid.

    PubMed

    Li, Yuancheng; Qiu, Rixuan; Jing, Sitong

    2018-01-01

    Advanced Metering Infrastructure (AMI), a core component of the smart grid, realizes two-way communication of electricity data by interconnecting with a computer network. At the same time, it brings many new security threats, and traditional intrusion detection methods cannot satisfy the security requirements of AMI. In this paper, an intrusion detection system based on the Online Sequence Extreme Learning Machine (OS-ELM) is established and used to detect attacks in AMI, and a comparative analysis with other algorithms is carried out. Simulation results show that, compared with other intrusion detection methods, the OS-ELM-based method is superior in detection speed and accuracy.
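    The abstract gives no implementation details, but the OS-ELM training scheme it refers to is well documented: hidden-layer weights are fixed at random, and only the output weights are updated chunk by chunk with a recursive least-squares rule, so new metering data can be absorbed without retraining on old data. A minimal NumPy sketch, with all names, sizes, and labels chosen for illustration rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

class OSELM:
    """Online Sequential Extreme Learning Machine, binary-classifier sketch."""

    def __init__(self, n_features, n_hidden=40):
        # Hidden-layer weights are random and never trained (the ELM trick).
        self.W = rng.normal(size=(n_features, n_hidden))
        self.b = rng.normal(size=n_hidden)
        self.P = None      # running inverse of the regularized H^T H
        self.beta = None   # output weights

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit_initial(self, X, y):
        """Batch phase on the first chunk of labeled traffic records."""
        H = self._hidden(X)
        self.P = np.linalg.inv(H.T @ H + 1e-3 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ y

    def fit_sequential(self, X, y):
        """Recursive least-squares update for each subsequent chunk."""
        H = self._hidden(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (y - H @ self.beta)

    def predict(self, X):
        # Labels: 0 = normal traffic, 1 = attack (threshold at 0.5).
        return (self._hidden(X) @ self.beta > 0.5).astype(int)
```

    Because each update solves only a small linear system over the current chunk, the model keeps up with streaming AMI traffic, which is the property behind the detection-speed claim above.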

  13. Impact of electric vehicles on the IEEE 34 node distribution infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zeming; Shalalfel, Laith; Beshir, Mohammed J.

    With the growing penetration of electric vehicles into our daily life owing to their economic and environmental benefits, there will be both opportunities and challenges for utilities when adopting plug-in electric vehicles (PEV) in the distribution network. In this study, a thorough analysis based on a real-world project is conducted to evaluate the impacts of electric vehicle infrastructure on the grid with respect to system load flow, load factor, and voltage stability. The IEEE 34 node test feeder was selected and tested with different case scenarios utilizing the Electrical Distribution Design (EDD) software to find the potential impacts to the grid.

  14. Impact of electric vehicles on the IEEE 34 node distribution infrastructure

    DOE PAGES

    Jiang, Zeming; Shalalfel, Laith; Beshir, Mohammed J.

    2014-10-01

    With the growing penetration of electric vehicles into our daily life owing to their economic and environmental benefits, there will be both opportunities and challenges for utilities when adopting plug-in electric vehicles (PEV) in the distribution network. In this study, a thorough analysis based on a real-world project is conducted to evaluate the impacts of electric vehicle infrastructure on the grid with respect to system load flow, load factor, and voltage stability. The IEEE 34 node test feeder was selected and tested with different case scenarios utilizing the Electrical Distribution Design (EDD) software to find the potential impacts to the grid.

  15. Use of Emerging Grid Computing Technologies for the Analysis of LIGO Data

    NASA Astrophysics Data System (ADS)

    Koranda, Scott

    2004-03-01

    The LIGO Scientific Collaboration (LSC) today faces the challenge of enabling analysis of terabytes of LIGO data by hundreds of scientists from institutions all around the world. To meet this challenge the LSC is developing tools, infrastructure, applications, and expertise leveraging Grid Computing technologies available today, and making compute resources at sites across the United States and Europe available to LSC scientists. We use digital credentials for strong and secure authentication and authorization to compute resources and data. Building on top of products from the Globus project for high-speed data transfer and information discovery, we have created the Lightweight Data Replicator (LDR) to securely and robustly replicate data to resource sites. We have deployed at our computing sites the Virtual Data Toolkit (VDT) Server and Client packages, developed in collaboration with our partners in the GriPhyN and iVDGL projects, providing uniform access to distributed resources for users and their applications. Taken together, these Grid Computing technologies and infrastructure have formed the LSC DataGrid: a coherent and uniform environment across two continents for the analysis of gravitational-wave detector data. Much work remains, however, to scale current analyses, and recent lessons learned need to be integrated into the next generation of Grid middleware.

  16. Context-aware access control for pervasive access to process-based healthcare systems.

    PubMed

    Koufi, Vassiliki; Vassilacopoulos, George

    2008-01-01

    Healthcare is an increasingly collaborative enterprise involving a broad range of healthcare services provided by many individuals and organizations. Grid technology has been widely recognized as a means for integrating disparate computing resources in the healthcare field. Moreover, Grid portal applications can be developed on a wireless and mobile infrastructure to execute healthcare processes which, in turn, can provide remote access to Grid database services. Such an environment provides ubiquitous and pervasive access to integrated healthcare services at the point of care, thus improving healthcare quality. In such environments, the ability to provide an effective access control mechanism that meets the requirement of the least privilege principle is essential. Adherence to the least privilege principle requires continuous adjustments of user permissions in order to adapt to the current situation. This paper presents a context-aware access control mechanism for HDGPortal, a Grid portal application which provides access to workflow-based healthcare processes using wireless Personal Digital Assistants. The proposed mechanism builds upon and enhances security mechanisms provided by the Grid Security Infrastructure. It provides tight, just-in-time permissions so that authorized users get access to specific objects according to the current context. These permissions are subject to continuous adjustments triggered by the changing context. Thus, the risk of compromising information integrity during task executions is reduced.

  17. Smart Grid Enabled L2 EVSE for the Commercial Market

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weeks, John; Pugh, Jerry

    In 2011, the DOE issued Funding Opportunity DE-FOA-0000554 as a means of addressing two major task areas identified by the Grid Integration Tech Team (GITT) that would help transition electric vehicles from a market driven by early adopters and environmental supporters to a market with mainstream volumes. Per DE-FOA-0000554, these tasks were: to reduce the cost of Electric Vehicle Supply Equipment (EVSE), thereby increasing the likelihood of the build-out of EV charging infrastructure (the goal of increasing the number of EVSE available was to ease concerns over range anxiety and promote the adoption of electric vehicles); and to allow EV loads to be managed via the smart grid, thereby maintaining power quality, reliability and affordability, while protecting installed distribution equipment. In December of that year, the DOE awarded one of the two contracts targeted toward commercial EVSE to Eaton, and in early 2012, we began in earnest the process of developing a Smart Grid Enabled L2 EVSE for the Commercial Market (hereafter known as the DOE Charger). The design of the Smart Grid Enabled L2 EVSE was based primarily on the FOA requirements along with input from the Electric Transportation Infrastructure (hereafter ETI) product line marketing team, who aided in the development of the customer requirements.

  18. IEA Wind TCP Task 26: Impacts of Wind Turbine Technology on the System Value of Wind in Europe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lantz, Eric J.; Riva, Alberto D.; Hethey, Janos

    This report analyzes the impact of different land-based wind turbine designs on grid integration and related system value and cost. This topic has been studied in a number of previous publications, showing the potential benefits of wind turbine technologies that feature higher capacity factors. Building on the existing literature, this study aims to quantify the effects of different land-based wind turbine designs in the context of a projection of the European power system to 2030. This study contributes insights on the quantitative effects in a likely European market setup, taking into account the effect of existing infrastructure on both existing conventional and renewable generation capacities. Furthermore, the market effects are put into perspective by comparing cost estimates for deploying different types of turbine design. Although the study focuses on Europe, similar considerations and results can be applied to other power systems with high wind penetration.

  19. Water Resources Sustainability in Northwest Mexico: Analysis of Regional Infrastructure Plans under Historical and Climate Change Scenarios

    NASA Astrophysics Data System (ADS)

    Che, D.; Robles-Morua, A.; Mayer, A. S.; Vivoni, E. R.

    2012-12-01

    The arid state of Sonora, Mexico, has embarked on a large water infrastructure project to provide additional water supply and improved sanitation to the growing capital of Hermosillo. The main component of the Sonora SI project involves an interbasin transfer from rural to urban water users that has generated conflicts over water among different social sectors. Through interactions with regional stakeholders from agricultural and water management agencies, we ascertained the need for a long-term assessment of the water resources of one of the system components, the Sonora River Basin (SRB). A semi-distributed, daily watershed model that includes current and proposed reservoir infrastructure was applied to the SRB. This simulation framework allowed us to explore alternative scenarios of water supply from the SRB to Hermosillo under historical (1980-2010) and future (2031-2040) periods that include the impact of climate change. We compared three precipitation forcing scenarios for the historical period: (1) a network of ground observations from Mexican water agencies; (2) gridded fields from the North America Land Data Assimilation System (NLDAS) at 12 km resolution; and (3) gridded fields from the Weather Research and Forecasting (WRF) model at 10 km resolution. These were compared to daily historical observations at two stream gauging stations and two reservoirs to generate confidence in the simulation tools. We then tested the impact of climate change through the use of the A2 emissions scenario and HadCM3 boundary forcing on the WRF simulations of a future period. Our analysis is focused on the combined impact of existing and proposed reservoir infrastructure at two new sites on the water supply management in the SRB under historical and future climate conditions. 
We also explore the impact of climate variability and change on the bimodal precipitation pattern from winter frontal storms and the summertime North American monsoon, and its consequences for water management. Our results are presented in the form of flow duration, reliability and exceedance frequency curves that are commonly used by water management agencies. Through this effort, we anticipate building confidence among regional stakeholders in utilizing hydrological models in the development of water infrastructure plans and fostering conversations that address water sustainability issues.

  20. Network and computing infrastructure for scientific applications in Georgia

    NASA Astrophysics Data System (ADS)

    Kvatadze, R.; Modebadze, Z.

    2016-09-01

    The status of the network and computing infrastructure and the services available to the research and education community of Georgia are presented. The Research and Educational Networking Association (GRENA) provides Internet connectivity, network services, cyber security, technical support, etc. Computing resources used by the research teams are located at GRENA and at major state universities. The GE-01-GRENA site is included in the European Grid Infrastructure. The paper also contains information about the programs of the Learning Center and the research and development projects in which GRENA is participating.

  1. DRIHM: Distributed Research Infrastructure for Hydro-Meteorology

    NASA Astrophysics Data System (ADS)

    Parodi, A.; Rebora, N.; Kranzlmueller, D.; Schiffers, M.; Clematis, A.; Tafferner, A.; Garrote, L. M.; Llasat Botija, M.; Caumont, O.; Richard, E.; Cros, P.; Dimitrijevic, V.; Jagers, B.; Harpham, Q.; Hooper, R. P.

    2012-12-01

    Hydro-Meteorology Research (HMR) is an area of critical scientific importance and of high societal relevance. It plays a key role in guiding predictions relevant to the safety and prosperity of humans and ecosystems in highly urbanized areas, coastal zones, and agricultural landscapes. Of special interest and urgency within HMR is the problem of understanding and predicting the impacts of severe hydro-meteorological events, such as flash floods and landslides in areas of complex orography, on humans and the environment under ongoing climate change. At the heart of this challenge lies the ability to provide easy access to hydrometeorological data and models, and to facilitate the collaboration between meteorologists, hydrologists, and Earth science experts for accelerated scientific advances in this field. To face these problems the DRIHM (Distributed Research Infrastructure for Hydro-Meteorology) project is developing a prototype e-Science environment to facilitate this collaboration and provide end-to-end HMR services (models, datasets and post-processing tools) at the European level, with the ability to expand to global scale (e.g. cooperation with Earth Cube related initiatives). The objectives of DRIHM are to lead the definition of a common long-term strategy, to foster the development of new HMR models and observational archives for the study of severe hydrometeorological events, to promote the execution and analysis of high-end simulations, and to support the dissemination of predictive models as decision analysis tools. DRIHM combines European expertise in HMR, Grid computing, and High Performance Computing (HPC). Joint research activities will improve the efficient use of the European e-Infrastructures, notably Grid and HPC, for HMR modelling and observational databases, model evaluation tool sets and access to HMR model results.
Networking activities will disseminate DRIHM results at the European and global levels in order to increase the cohesion of European and possibly worldwide HMR communities and increase the awareness of ICT potential for HMR. Service activities will deploy the end-to-end DRIHM services and tools in support of HMR networks and virtual organizations on top of the existing European e-Infrastructures.

  2. A Testbed Environment for Buildings-to-Grid Cyber Resilience Research and Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sridhar, Siddharth; Ashok, Aditya; Mylrea, Michael E.

    The Smart Grid is characterized by the proliferation of advanced digital controllers at all levels of its operational hierarchy, from generation to end consumption. Such controllers within modern residential and commercial buildings enable grid operators to exercise fine-grained control over energy consumption through several emerging Buildings-to-Grid (B2G) applications. Though this capability promises significant benefits in terms of operational economics and improved reliability, cybersecurity weaknesses in the supporting infrastructure could be exploited to detrimental effect, and this necessitates focused research efforts on two fronts. First, the understanding of how cyber attacks in the B2G space could impact grid reliability and to what extent. Second, the development and validation of cyber-physical application-specific countermeasures that are complementary to traditional infrastructure cybersecurity mechanisms for enhanced cyber attack detection and mitigation. The PNNL B2G testbed is currently being developed to address these core research needs. Specifically, the B2G testbed combines high-fidelity buildings+grid simulators and industry-grade building automation and Supervisory Control and Data Acquisition (SCADA) systems in an integrated, realistic, and reconfigurable environment capable of supporting attack-impact-detection-mitigation experimentation. In this paper, we articulate the need for research testbeds to model various B2G applications broadly by looking at the end-to-end operational hierarchy of the Smart Grid. Finally, the paper not only describes the architecture of the B2G testbed in detail, but also addresses the broad spectrum of B2G resilience research it is capable of supporting based on the smart grid operational hierarchy identified earlier.

  3. Monitoring of services with non-relational databases and map-reduce framework

    NASA Astrophysics Data System (ADS)

    Babik, M.; Souto, F.

    2012-12-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
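    As a toy illustration of the processing model described above, availability can be computed from raw probe results with a map phase that emits one (service, (ok, total)) pair per measurement and a reduce phase that sums them per key. The record fields below are invented for the example and are not SAM's actual schema:

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit one (service, (ok_count, total_count)) pair per probe result.
    for r in records:
        yield (r["service"], (1 if r["status"] == "OK" else 0, 1))

def reduce_phase(pairs):
    # Reduce: sum OK counts and totals per service, then divide.
    totals = defaultdict(lambda: [0, 0])
    for service, (ok, n) in pairs:
        totals[service][0] += ok
        totals[service][1] += n
    return {service: ok / n for service, (ok, n) in totals.items()}

availability = reduce_phase(map_phase([
    {"service": "CE", "status": "OK"},
    {"service": "CE", "status": "OK"},
    {"service": "CE", "status": "CRITICAL"},
    {"service": "SRM", "status": "OK"},
]))  # CE -> 2/3, SRM -> 1.0
```

    In a Cassandra, HBase, or MongoDB deployment the reduce step would run as a distributed job over the stored raw measurements rather than in-process, but the aggregation logic stays the same, which is what makes re-processing of raw data attractive.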

  4. Lights Out: Foreseeable Catastrophic Effects of Geomagnetic Storms on the North American Power Grid and How to Mitigate Them

    DTIC Science & Technology

    2011-08-21

    poultry, pork, beef, fish, and other meat products also are typically automated operations, done on electrically driven processing lines. ... Infrastructure ... Power Outage Impact on Consumables (Food, Water, Medication) ... transportation, consumables (food, water, and medication), and emergency services, are so highly dependent on reliable power supply from the grid, a

  5. Grid-Enabled Quantitative Analysis of Breast Cancer

    DTIC Science & Technology

    2009-10-01

    large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer ... pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image analysis parameters. Additionally, we investigated the use of cutting-edge data analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast

  6. WPS mediation: An approach to process geospatial data on different computing backends

    NASA Astrophysics Data System (ADS)

    Giuliani, Gregory; Nativi, Stefano; Lehmann, Anthony; Ray, Nicolas

    2012-10-01

    The OGC Web Processing Service (WPS) specification allows generating information by processing distributed geospatial data made available through Spatial Data Infrastructures (SDIs). However, current SDIs have limited analytical capacities, and various problems emerge when trying to use them in data- and computing-intensive domains such as the environmental sciences. These problems are usually not, or only partially, solvable using single computing resources. Therefore, the Geographic Information (GI) community is trying to benefit from the superior storage and computing capabilities offered by distributed computing methods and technologies (e.g., Grids, Clouds). Currently, there is no commonly agreed approach to grid-enable WPS: no implementation allows one to seamlessly execute a geoprocessing calculation, following user requirements, on different computing backends ranging from a stand-alone GIS server up to computer clusters and large Grid infrastructures. Considering this issue, this paper presents a proof of concept that mediates between different geospatial and Grid software packages and proposes an extension of the WPS specification through two optional parameters. The applicability of this approach is demonstrated using a Normalized Difference Vegetation Index (NDVI) mediated WPS process, highlighting benefits and issues that need to be further investigated to improve performance.
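    The abstract does not name the two proposed optional parameters, so the sketch below uses placeholder names (`backend`, `jobmanager`) purely to show how such backend-selection hints could ride along in a standard WPS 1.0.0 Execute request in KVP (GET) encoding:

```python
from urllib.parse import urlencode

def build_execute_request(base_url, identifier, inputs,
                          backend=None, jobmanager=None):
    """Build a WPS 1.0.0 Execute request URL in KVP (GET) encoding.

    `backend` and `jobmanager` are hypothetical stand-ins for the two
    optional parameters proposed in the paper; their real names are not
    given in the abstract.
    """
    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": identifier,
        # WPS KVP packs inputs as "name=value" pairs separated by ";".
        "datainputs": ";".join(f"{k}={v}" for k, v in inputs.items()),
    }
    if backend:
        params["backend"] = backend        # e.g. "standalone", "cluster", "grid"
    if jobmanager:
        params["jobmanager"] = jobmanager  # e.g. a Grid job-submission endpoint
    return base_url + "?" + urlencode(params)

url = build_execute_request("http://example.org/wps", "NDVI",
                            {"red": "B4.tif", "nir": "B8.tif"},
                            backend="grid")
```

    A server that ignores unknown KVP parameters would still behave as a plain WPS, which is what makes such an extension optional rather than breaking.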

  7. Grid accounting service: state and future development

    NASA Astrophysics Data System (ADS)

    Levshina, T.; Sehgal, C.; Bockelman, B.; Weitzel, D.; Guru, A.

    2014-06-01

    During the last decade, large-scale federated distributed infrastructures have been continually developed and expanded. One of the crucial components of a cyber-infrastructure is an accounting service that collects data on resource utilization and the identity of users using resources. The accounting service is important for verifying pledged resource allocations for particular groups and users, providing reports for funding agencies and resource providers, and understanding hardware provisioning requirements. It can also be used for end-to-end troubleshooting as well as billing purposes. In this work we describe Gratia, a federated accounting service jointly developed at Fermilab and the Holland Computing Center at the University of Nebraska-Lincoln. The Open Science Grid, Fermilab, HCC, and several other institutions have used Gratia in production for several years. Current development activities include expanding Virtual Machine provisioning information, XSEDE allocation usage accounting, and Campus Grids resource utilization. We also identify directions for future work: improvement and expansion of Cloud accounting, persistent and elastic storage space allocation, and the incorporation of WAN and LAN network metrics.

  8. Integration of a neuroimaging processing pipeline into a pan-canadian computing grid

    NASA Astrophysics Data System (ADS)

    Lavoie-Courchesne, S.; Rioux, P.; Chouinard-Decorte, F.; Sherif, T.; Rousseau, M.-E.; Das, S.; Adalat, R.; Doyon, J.; Craddock, C.; Margulies, D.; Chu, C.; Lyttelton, O.; Evans, A. C.; Bellec, P.

    2012-02-01

    The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.

  9. Sealife: a semantic grid browser for the life sciences applied to the study of infectious diseases.

    PubMed

    Schroeder, Michael; Burger, Albert; Kostkova, Patty; Stevens, Robert; Habermann, Bianca; Dieng-Kuntz, Rose

    2006-01-01

    The objective of Sealife is the conception and realisation of a semantic Grid browser for the life sciences, which will link the existing Web to the currently emerging eScience infrastructure. The SeaLife Browser will allow users to automatically link a host of Web servers and Web/Grid services to the Web content they are visiting. This will be accomplished using eScience's growing number of Web/Grid services and its XML-based standards and ontologies. The browser will identify terms in the pages being browsed using the background knowledge held in ontologies. Through the use of Semantic Hyperlinks, which link identified ontology terms to servers and services, the SeaLife Browser will offer a new dimension of context-based information integration. In this paper, we give an overview of the different components of the browser and their interplay. The SeaLife Browser will be demonstrated within three application scenarios in evidence-based medicine, literature & patent mining, and molecular biology, all relating to the study of infectious diseases. The three applications vertically integrate the molecule/cell, tissue/organ and patient/population levels by covering the analysis of high-throughput screening data for endocytosis (the molecular entry pathway into the cell), the expression of proteins in the spatial context of tissues and organs, and a high-level library on infectious diseases designed for clinicians and their patients. For more information see http://www.biote.ctu-dresden.de/sealife.

  10. Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) Technology Infrastructure for a Distributed Data Network

    PubMed Central

    Schilling, Lisa M.; Kwan, Bethany M.; Drolshagen, Charles T.; Hosokawa, Patrick W.; Brandt, Elias; Pace, Wilson D.; Uhrich, Christopher; Kamerick, Michael; Bunting, Aidan; Payne, Philip R.O.; Stephens, William E.; George, Joseph M.; Vance, Mark; Giacomini, Kelli; Braddy, Jason; Green, Mika K.; Kahn, Michael G.

    2013-01-01

    Introduction: Distributed Data Networks (DDNs) offer infrastructure solutions for sharing electronic health data across disparate data sources to support comparative effectiveness research. Data sharing mechanisms must address technical and governance concerns stemming from network security and data disclosure laws and best practices, such as HIPAA. Methods: The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) deploys TRIAD grid technology, a common data model, detailed technical documentation, and custom software for data harmonization to facilitate data sharing in collaboration with stakeholders in the care of safety net populations. Data sharing partners host TRIAD grid nodes containing harmonized clinical data within their internal or hosted network environments. Authorized users can use a central web-based query system to request analytic data sets. Discussion: SAFTINet DDN infrastructure achieved a number of data sharing objectives, including scalable and sustainable systems for ensuring harmonized data structures and terminologies and secure distributed queries. Initial implementation challenges were resolved through iterative discussions, development and implementation of technical documentation, governance, and technology solutions. PMID:25848567

  11. Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) Technology Infrastructure for a Distributed Data Network.

    PubMed

    Schilling, Lisa M; Kwan, Bethany M; Drolshagen, Charles T; Hosokawa, Patrick W; Brandt, Elias; Pace, Wilson D; Uhrich, Christopher; Kamerick, Michael; Bunting, Aidan; Payne, Philip R O; Stephens, William E; George, Joseph M; Vance, Mark; Giacomini, Kelli; Braddy, Jason; Green, Mika K; Kahn, Michael G

    2013-01-01

    Distributed Data Networks (DDNs) offer infrastructure solutions for sharing electronic health data across disparate data sources to support comparative effectiveness research. Data sharing mechanisms must address technical and governance concerns stemming from network security and data disclosure laws and best practices, such as HIPAA. The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) deploys TRIAD grid technology, a common data model, detailed technical documentation, and custom software for data harmonization to facilitate data sharing in collaboration with stakeholders in the care of safety net populations. Data sharing partners host TRIAD grid nodes containing harmonized clinical data within their internal or hosted network environments. Authorized users can use a central web-based query system to request analytic data sets. SAFTINet DDN infrastructure achieved a number of data sharing objectives, including scalable and sustainable systems for ensuring harmonized data structures and terminologies and secure distributed queries. Initial implementation challenges were resolved through iterative discussions, development and implementation of technical documentation, governance, and technology solutions.
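    The central-query pattern can be sketched as a fan-out over harmonized partner datasets in which only aggregate results leave each node. The schema, field names and counting query below are hypothetical illustrations, not SAFTINet's actual data model or TRIAD's API.

```python
# Illustrative federated count query over harmonized partner nodes: the
# predicate runs locally at each node and only aggregate counts are returned.
# (Node names, records and fields are invented for illustration.)
def federated_count(nodes, predicate):
    """Fan a query out to each node; only per-node counts leave the nodes."""
    return {name: sum(1 for rec in records if predicate(rec))
            for name, records in nodes.items()}

nodes = {
    "clinic_a": [{"dx": "asthma", "age": 34}, {"dx": "copd", "age": 61}],
    "clinic_b": [{"dx": "asthma", "age": 8}],
}
counts = federated_count(nodes, lambda r: r["dx"] == "asthma")
total = sum(counts.values())
```

    Keeping row-level data behind each node and shipping only the query (and back, only aggregates) is what lets a DDN satisfy disclosure rules such as HIPAA while still answering network-wide questions.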

  12. Intrusion detection system using Online Sequence Extreme Learning Machine (OS-ELM) in advanced metering infrastructure of smart grid

    PubMed Central

    Li, Yuancheng; Jing, Sitong

    2018-01-01

    Advanced Metering Infrastructure (AMI), a core component of the smart grid, realizes two-way communication of electricity data by interconnecting with a computer network. At the same time, it introduces many new security threats, and traditional intrusion detection methods cannot satisfy the security requirements of AMI. In this paper, an intrusion detection system based on the Online Sequence Extreme Learning Machine (OS-ELM) is established, used to detect attacks in AMI, and compared with other algorithms. Simulation results show that, compared with other intrusion detection methods, the OS-ELM-based method is superior in both detection speed and accuracy. PMID:29485990
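    To make the learning scheme concrete, here is a minimal numerical sketch of the OS-ELM idea: a fixed random tanh hidden layer whose output weights are updated chunk by chunk with recursive least squares, so new traffic can be absorbed without retraining on old data. The class, its parameters and the toy regression task are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyOSELM:
    """Random tanh hidden layer; output weights fit by recursive least squares."""
    def __init__(self, n_in, n_hidden, lam=1e-3):
        self.W = rng.standard_normal((n_in, n_hidden))   # fixed input weights
        self.b = rng.standard_normal(n_hidden)           # fixed biases
        self.P = np.eye(n_hidden) / lam                  # running inverse of H^T H (+ ridge)
        self.beta = np.zeros((n_hidden, 1))              # output weights

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def partial_fit(self, X, T):
        """Absorb one chunk (X, T) without revisiting earlier chunks."""
        H = self._hidden(X)
        K = self.P @ H.T @ np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Stream a toy regression task chunk by chunk, as AMI traffic would arrive.
X = rng.standard_normal((200, 2))
T = X[:, :1] + 0.5 * X[:, 1:]          # target: a simple function of the inputs
model = ToyOSELM(n_in=2, n_hidden=60)
for i in range(0, 200, 50):
    model.partial_fit(X[i:i + 50], T[i:i + 50])
mse = float(np.mean((model.predict(X) - T) ** 2))
```

    The sequential update is what matters for AMI: detection keeps pace with the meter data stream because each chunk costs one small matrix inversion rather than a full refit.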

  13. Evolution of user analysis on the grid in ATLAS

    NASA Astrophysics Data System (ADS)

    Dewhurst, A.; Legger, F.; ATLAS Collaboration

    2017-10-01

    More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN, using 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and the system's capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have proven to yield a solid distributed analysis system for ATLAS users. Typical user workflows on the grid, and their associated metrics, are discussed. Measurements of user job performance and typical requirements are also shown.

  14. The ACGT Master Ontology and its applications – Towards an ontology-driven cancer research and management system

    PubMed Central

    Brochhausen, Mathias; Spear, Andrew D.; Cocos, Cristian; Weiler, Gabriele; Martín, Luis; Anguita, Alberto; Stenzhorn, Holger; Daskalaki, Evangelia; Schera, Fatima; Schwarz, Ulf; Sfakianakis, Stelios; Kiefer, Stephan; Dörr, Martin; Graf, Norbert; Tsiknakis, Manolis

    2017-01-01

    Objective This paper introduces the objectives, methods and results of ontology development in the EU co-funded project Advancing Clinico-genomic Trials on Cancer – Open Grid Services for Improving Medical Knowledge Discovery (ACGT). While the available data in the life sciences has recently grown both in amount and quality, the full exploitation of it is being hindered by the use of different underlying technologies, coding systems, category schemes and reporting methods on the part of different research groups. The goal of the ACGT project is to contribute to the resolution of these problems by developing an ontology-driven, semantic grid services infrastructure that will enable efficient execution of discovery-driven scientific workflows in the context of multi-centric, post-genomic clinical trials. The focus of the present paper is the ACGT Master Ontology (MO). Methods ACGT project researchers undertook a systematic review of existing domain and upper-level ontologies, as well as of existing ontology design software, implementation methods, and end-user interfaces. This included the careful study of best practices, design principles and evaluation methods for ontology design, maintenance, implementation, and versioning, as well as for use on the part of domain experts and clinicians. 
Results To date, the results of the ACGT project include (i) the development of a master ontology (the ACGT-MO) based on clearly defined principles of ontology development and evaluation; (ii) the development of a technical infrastructure (the ACGT Platform) that implements the ACGT-MO utilizing independent tools, components and resources that have been developed based on open architectural standards, and which includes an application updating and evolving the ontology efficiently in response to end-user needs; and (iii) the development of an Ontology-based Trial Management Application (ObTiMA) that integrates the ACGT-MO into the design process of clinical trials in order to guarantee automatic semantic integration without the need to perform a separate mapping process. PMID:20438862

  15. JTS and its Application in Environmental Protection Applications

    NASA Astrophysics Data System (ADS)

    Atanassov, Emanouil; Gurov, Todor; Slavov, Dimitar; Ivanovska, Sofiya; Karaivanova, Aneta

    2010-05-01

    Environmental protection was identified as a domain of high interest for South East Europe, addressing practical problems related to security and quality of life. The gridification of the Bulgarian applications MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies, which aims to develop an efficient Grid implementation of a sensitivity analysis of the Danish Eulerian Model), MSACM (Multi-Scale Atmospheric Composition Modeling, which aims to produce an integrated, multi-scale, Balkan-region-oriented modelling system able to interface the scales of the problem from emissions on the urban scale to their transport and transformation on the local and regional scales) and MSERRHSA (Modeling System for Emergency Response to the Release of Harmful Substances in the Atmosphere, which aims to develop and deploy a modeling system for emergency response to the release of harmful substances in the atmosphere, targeted at South East Europe and more specifically the Balkan region) faces several challenges. These applications are resource intensive, in terms of both CPU utilization and data transfers and storage. Their use for operational purposes imposes requirements on resource availability that are difficult to meet in a dynamically changing Grid environment, and their validation is itself resource intensive and time consuming. Resolving these problems requires collaborative work and support from the infrastructure operators, who in turn want to avoid underutilization of resources. We therefore developed the Job Track Service (JTS) and tested it during the development of the Grid implementations of MCSAES, MSACM and MSERRHSA. The JTS is a grid middleware component which facilitates the provision of Quality of Service (QoS) in grid infrastructures based on gLite middleware, such as EGEE and SEEGRID. The service is based on messaging middleware and uses standard protocols such as AMQP (Advanced Message Queuing Protocol) and XMPP (eXtensible Messaging and Presence Protocol) for real-time communication, while its security model is based on GSI authentication. It enables resource owners to offer the most popular types of execution QoS to some of their users through a standardized model. The first version of the service was aimed at individual users. In this work we describe a new version of the JTS offering application-specific functionality, geared towards the specific needs of the environmental modelling and protection applications and oriented towards collaborative usage by groups and subgroups of users. We used the modular design of the JTS to implement plugins that enable smoother interaction between the users and the Grid environment. Our experience shows improved response times and a decreased failure rate for executions of the applications. We present these observations from the use of the South East European Grid infrastructure.
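    The job-tracking idea behind such a service can be illustrated with a small, purely in-memory sketch that collects status events per job and derives simple QoS metrics from them. The event names, API and metrics below are invented for illustration; the real JTS communicates over AMQP/XMPP with GSI-based security rather than in-process calls.

```python
# Hypothetical sketch of job-status tracking in the spirit of a Job Track
# Service: collect (time, status) events per job, then derive QoS metrics.
from collections import defaultdict

class JobTracker:
    def __init__(self):
        self.events = defaultdict(list)   # job_id -> [(t_seconds, status)]

    def report(self, job_id, t, status):
        self.events[job_id].append((t, status))

    def failure_rate(self):
        finals = [ev[-1][1] for ev in self.events.values()]
        return finals.count("failed") / len(finals)

    def mean_turnaround(self):
        spans = [ev[-1][0] - ev[0][0] for ev in self.events.values()]
        return sum(spans) / len(spans)

jt = JobTracker()
jt.report("j1", 0, "submitted"); jt.report("j1", 30, "done")
jt.report("j2", 5, "submitted"); jt.report("j2", 45, "failed")
```

    Metrics like these are exactly what lets an operator verify a promised QoS level (and spot underutilization) without inspecting individual jobs.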

  16. Implementing Production Grids

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Ziobarth, John (Technical Monitor)

    2002-01-01

    We have presented the essence of the experience gained in building two production Grids, and provided some of the global context for this work. As the reader might imagine, there were a lot of false starts, refinements to the approaches and to the software, and several substantial integration projects (SRB and Condor integrated with Globus) to get where we are today. However, the point of this paper is to try to make it substantially easier for others to get to the point where the Information Power Grid (IPG) and the DOE Science Grids are today. This is what is needed in order to move us toward the vision of a common cyber infrastructure for science. The author would also like to remind readers that this paper primarily represents the actual experiences that resulted from specific architectural and software choices during the design and implementation of these two Grids. The choices made were dictated by the criteria laid out in section 1. There is a lot more Grid software available today than there was four years ago, and several of these packages are being integrated into IPG and the DOE Grids. However, the foundation choices of Globus, SRB, and Condor would not be significantly different today than they were four years ago. Nonetheless, if the GGF is successful in its work - and we have every reason to believe that it will be - then in a few years we will see that the 28 functions provided by these packages will be defined in terms of protocols and APIs, and there will be several robust implementations available for each of the basic components, especially the Grid Common Services. The impact of the emerging Web Grid Services work is not yet clear. It will likely have a substantial impact on building higher-level services; however, it is the opinion of the author that this will in no way obviate the need for the Grid Common Services. These are the foundation of Grids, and the focus of almost all of the operational and persistent infrastructure aspects of Grids.

  17. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    NASA Astrophysics Data System (ADS)

    Adakin, A.; Anisenkov, A.; Belov, S.; Chubarov, D.; Kalyuzhny, V.; Kaplin, V.; Korol, A.; Kuchin, N.; Lomakin, S.; Nikultsev, V.; Skovpen, K.; Sukharev, A.; Zaytsev, A.

    2012-12-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies, and the Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM&MG), and the Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  18. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    NASA Astrophysics Data System (ADS)

    Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

    2014-06-01

    The challenge of providing a resilient and scalable computational and data management solution for massive-scale research environments requires continuous exploration of new technologies and techniques. The aim of this project has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising performance and high availability.

  19. Integration of end-user Cloud storage for CMS analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit end-user Cloud storage for the distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities, as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned on the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and the infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.

  20. Integration of end-user Cloud storage for CMS analysis

    DOE PAGES

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez; ...

    2017-05-19

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit end-user Cloud storage for the distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities, as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned on the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and the infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.

  1. Towards a 3d Spatial Urban Energy Modelling Approach

    NASA Astrophysics Data System (ADS)

    Bahu, J.-M.; Koch, A.; Kremers, E.; Murshed, S. M.

    2013-09-01

    Today's need to reduce the environmental impact of energy use imposes dramatic changes on energy infrastructure and on existing demand patterns (e.g. buildings) in their specific context. In addition, future energy systems are expected to integrate a considerable share of fluctuating power sources and equally a high share of distributed generation of electricity. Energy system models capable of describing such future systems and allowing the simulation of the impact of these developments thus require a spatial representation in order to reflect the local context and the boundary conditions. This paper describes two recent research approaches developed at EIFER in the fields of (a) geo-localised simulation of heat energy demand in cities based on 3D morphological data and (b) spatially explicit Agent-Based Models (ABM) for the simulation of smart grids. 3D city models were used to assess the solar potential and heat energy demand of residential buildings, enabling cities to target building refurbishment potentials. Distributed energy systems require innovative modelling techniques in which individual components are represented and can interact. With this approach, several smart grid demonstrators were simulated, in which heterogeneous models are spatially represented. Coupling 3D geodata with energy system ABMs offers advantages for both approaches. On the one hand, energy system models can be enhanced with high-resolution data from 3D city models and their semantic relations; furthermore, they allow for spatial analysis and visualisation of the results, with emphasis on spatial and structural correlations among the different layers (e.g. infrastructure, buildings, administrative zones), providing an integrated approach. On the other hand, 3D models can benefit from a more detailed system description of the energy infrastructure, representing dynamic phenomena, and from high-resolution models of energy use at component level.
The proposed modelling strategies conceptually and practically integrate urban spatial and energy planning approaches. The combined modelling approach that will be developed based on the described sectorial models holds the potential to represent hybrid energy systems coupling distributed generation of electricity with thermal conversion systems.
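    The agent-based side of such a model can be sketched in a few lines: individual demand agents respond to a broadcast signal, and the feeder load emerges from their aggregation. The agent class, loads and price rule below are invented for illustration, not EIFER's actual models.

```python
# Minimal agent-based sketch of distributed demand on a feeder: each household
# agent decides its own load from a broadcast price signal, and the system
# state is the aggregate. (Agents, loads and the rule are illustrative.)
class HouseholdAgent:
    def __init__(self, base_load_kw, flexible_kw):
        self.base = base_load_kw   # inflexible share of demand
        self.flex = flexible_kw    # share shed under a high price

    def demand(self, price_signal):
        # Shed the flexible share when the grid broadcasts a high price.
        return self.base + (0.0 if price_signal > 1.0 else self.flex)

def feeder_load(agents, price_signal):
    return sum(a.demand(price_signal) for a in agents)

agents = [HouseholdAgent(1.0, 0.5) for _ in range(10)]
normal = feeder_load(agents, price_signal=0.8)
peak = feeder_load(agents, price_signal=1.5)
```

    Making such agents spatially explicit (attaching each to a building from a 3D city model) is the coupling the paper argues for: the same aggregation then respects feeder topology and local building stock.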

  2. The International Symposium on Grids and Clouds and the Open Grid Forum

    NASA Astrophysics Data System (ADS)

    The International Symposium on Grids and Clouds 2011 [1] was held at Academia Sinica in Taipei, Taiwan on 19th to 25th March 2011. A series of workshops and tutorials preceded the symposium. The aim of ISGC is to promote the use of grid and cloud computing in the Asia-Pacific region. Over the 9 years that ISGC has been running, the programme has evolved to become more user-community focused, with subjects reaching out to a larger population. Research communities are making widespread use of distributed computing facilities. By linking together data centers, production grids, desktop systems or public clouds, many researchers are able to do more research and produce results more quickly. They could do much more if the computing infrastructures they use worked together more effectively. Changes in the way we approach distributed computing, and new services from commercial providers, mean that boundaries are starting to blur. This opens the way for hybrid solutions that make it easier for researchers to get their job done. Consequently, the theme for ISGC 2011 was the opportunities that better-integrated computing infrastructures can bring, and the steps needed to achieve the vision of a seamless global research infrastructure. 2011 is a year of firsts for ISGC. First, the title: while the acronym remains the same, its meaning has changed to reflect the evolution of computing: The International Symposium on Grids and Clouds. Secondly, the programming: ISGC has always included topical workshops and tutorials, but 2011 is the first year that ISGC has been held in conjunction with the Open Grid Forum [2], which held its 31st meeting with a series of working group sessions. The ISGC plenary session included keynote speakers from OGF who highlighted the relevance of standards for the research community. ISGC, with its focus on applications and operational aspects, complemented OGF's focus on standards development.
ISGC brought to OGF real-life use cases and needs to be addressed, while OGF exposed the state of current developments and the issues to be resolved if commonalities are to be exploited. Another first concerns the Proceedings: for 2011, an open-access online publishing scheme will ensure these Proceedings appear more quickly and reach more people, providing a long-term online archive of the event. The symposium attracted more than 212 participants from 29 countries spanning Asia, Europe and the Americas. Coming so soon after the earthquake and tsunami in Japan, the participation of our Japanese colleagues was particularly appreciated. Keynotes by invited speakers highlighted the impact of distributed computing infrastructures in the social sciences and humanities, high energy physics, and the earth and life sciences. Plenary sessions entitled Grid Activities in Asia Pacific surveyed the state of grid deployment across 11 Asian countries. Through the parallel sessions, the impact of distributed computing infrastructures in a range of research disciplines was highlighted. Operational procedures, middleware and security aspects were addressed in dedicated sessions. The symposium was covered online in real time by the GridCast team from the GridTalk project: a running blog included summaries of specific sessions, as well as video interviews with keynote speakers and personalities, and photos. As in all regions of the world, grid and cloud computing has to prove it is adding value to researchers if it is to be accepted by them, and demonstrate its impact on society as a whole if it is to be supported by national governments, funding agencies and the general public. ISGC has helped foster the emergence of a strong regional interest in the earth and life sciences, notably for natural disaster mitigation and bioinformatics studies. Prof. Simon C. Lin organised an intense social programme, with a gastronomic tour of Taipei culminating in a banquet for all the symposium's participants at the hotel Palais de Chine. I would like to thank all the members of the programme committee, the participants and, above all, our hosts, Prof. Simon C. Lin and his excellent support team at Academia Sinica. Dr. Bob Jones, Programme Chair. [1] http://event.twgrid.org/isgc2011/ [2] http://www.gridforum.org/

  3. Towards Dynamic Authentication in the Grid — Secure and Mobile Business Workflows Using GSet

    NASA Astrophysics Data System (ADS)

    Mangler, Jürgen; Schikuta, Erich; Witzany, Christoph; Jorns, Oliver; Ul Haq, Irfan; Wanek, Helmut

    Until now, the research community has mainly focused on the technical aspects of Grid computing and neglected commercial issues. Recently, however, the community has come to accept that the success of the Grid depends crucially on commercial exploitation. In our vision, Foster's and Kesselman's statement "The Grid is all about sharing." has to be extended by "... and making money out of it!". To allow for the realization of this vision, the trustworthiness of the underlying technology needs to be ensured. This can be achieved by the use of gSET (Gridified Secure Electronic Transaction) as a basic technology for trust management and secure accounting in the presented Grid-based workflow. We present a framework, conceptually and technically, from the area of the Mobile Grid, which justifies the Grid infrastructure as a viable platform to enable commercially successful business workflows.

  4. Design and implementation of spatial knowledge grid for integrated spatial analysis

    NASA Astrophysics Data System (ADS)

    Liu, Xiangnan; Guan, Li; Wang, Ping

    2006-10-01

    Supported by the spatial information grid (SIG), the spatial knowledge grid (SKG) for integrated spatial analysis utilizes middleware technology to construct the spatial information grid computation environment and spatial information service system, develops spatial-entity-oriented spatial data organization technology, and carries out deep computation of spatial structure and spatial process patterns on the basis of the Grid GIS infrastructure, the spatial data grid and the spatial information grid (in its specialized definition). At the same time, it realizes complex spatial pattern expression and spatial function process simulation by taking the spatial intelligent agent as the core of spatial initiative computation. Moreover, through the establishment of a virtual geographical environment with blended man-machine interactivity, complex spatial modeling, networked cooperative work and knowledge-driven spatial community decision-making are achieved. The framework of SKG is discussed systematically in this paper, and its implementation flow and key technologies are presented with examples of overlay analysis.

  5. AGIS: The ATLAS Grid Information System

    NASA Astrophysics Data System (ADS)

    Anisenkov, A.; Di Girolamo, A.; Klimentov, A.; Oleynik, D.; Petrosyan, A.; Atlas Collaboration

    2014-06-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we describe the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  6. Integration of Grid and Sensor Web for Flood Monitoring and Risk Assessment from Heterogeneous Data

    NASA Astrophysics Data System (ADS)

    Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii

    2013-04-01

    Over the last decades we have witnessed an upward global trend in natural disaster occurrence. Hydrological and meteorological disasters such as floods are the main contributors to this pattern. In recent years flood management has shifted from protection against floods to managing the risks of floods (the European Flood Risk Directive). In order to enable operational flood monitoring and assessment of flood risk, an infrastructure with standardized interfaces and services is required; Grid and Sensor Web technologies can meet these requirements. In this paper we present a general approach to flood monitoring and risk assessment based on heterogeneous geospatial data acquired from multiple sources. To enable operational flood risk assessment, the integration of Grid and Sensor Web approaches is proposed [1]. The Grid represents a distributed environment that integrates heterogeneous computing and storage resources administrated by multiple organizations. The Sensor Web is an emerging paradigm for integrating heterogeneous satellite and in situ sensors and data systems into a common informational infrastructure that produces products on demand. The basic Sensor Web functionality includes sensor discovery, the triggering of events by observed or predicted conditions, remote data access, and processing capabilities to generate and deliver data products. The Sensor Web is governed by a set of standards, called Sensor Web Enablement (SWE), developed by the Open Geospatial Consortium (OGC). Various practical issues regarding the integration of the Sensor Web with Grids are discussed in the study. We show how the Sensor Web can benefit from using Grids and vice versa. For example, Sensor Web services such as SOS, SPS and SAS can benefit from integration with a Grid platform like the Globus Toolkit.
The proposed approach is implemented within the Sensor Web framework for flood monitoring and risk assessment, and a case study of exploiting this framework, namely the Namibia SensorWeb Pilot Project, is described. The project was created as a testbed for evaluating and prototyping key technologies for rapid acquisition and distribution of data products for decision support systems to monitor floods and enable flood risk assessment. The system provides access to real-time products on rainfall estimates and flood potential forecasts derived from the Tropical Rainfall Measuring Mission (TRMM) with a lag time of 6 h, alerts from the Global Disaster Alert and Coordination System (GDACS) with a lag time of 4 h, and the Coupled Routing and Excess STorage (CREST) model to generate alerts. These alerts are used to trigger satellite observations. With the deployed SPS service for NASA's EO-1 satellite it is possible to automatically task the sensor, with a re-imaging capability of less than 8 h. Therefore, with the computational and storage services provided by the Grid and cloud infrastructure, it was possible to generate flood maps within 24-48 h after a trigger was raised. To enable interoperability between system components and services, OGC-compliant standards are utilized. [1] Hluchy L., Kussul N., Shelestov A., Skakun S., Kravchenko O., Gripich Y., Kopp P., Lupian E., "The Data Fusion Grid Infrastructure: Project Objectives and Achievements," Computing and Informatics, 2010, vol. 29, no. 2, pp. 319-334.
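    The alert-to-tasking step described above can be sketched roughly as a threshold rule: when a rainfall estimate for a basin crosses a limit, a (hypothetical) re-imaging request is emitted. The threshold value and basin names below are illustrative, not the project's actual configuration or the SPS interface.

```python
# Illustrative alert-to-tasking rule: basins whose rainfall estimate crosses
# a threshold are queued for a satellite re-imaging request. The threshold
# and basin names are invented; a real system would call an OGC SPS service.
def plan_taskings(rain_mm, threshold=100.0):
    """Return basins whose rainfall estimate warrants a re-imaging request."""
    return sorted(basin for basin, mm in rain_mm.items() if mm >= threshold)

estimates = {"Caprivi": 140.0, "Kavango": 80.0, "Zambezi": 150.0}
taskings = plan_taskings(estimates)
```

    In the deployed pipeline this decision sits between the TRMM/GDACS/CREST alert feeds and the SPS tasking call, which is what keeps the 24-48 h map turnaround automatic.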

  7. Index-based reactive power compensation scheme for voltage regulation

    NASA Astrophysics Data System (ADS)

    Dike, Damian Obioma

    2008-10-01

    Increasing demand for electrical power arising from deregulation, together with the restrictions posed on the construction of new transmission lines by environmental, socioeconomic, and political issues, has led to higher grid loading. Consequently, voltage instability has become a major concern, and reactive power support is vital to enhance transmission grid performance. Improved reactive power support for a distressed grid is possible through the application of the relatively unfamiliar emerging technologies of Flexible AC Transmission Systems (FACTS) devices and Distributed Energy Resources (DERs). In addition to these infrastructure issues, a lack of situational awareness by system operators can cause major power outages, as evidenced by the widespread North American blackout of August 14, 2003. This and many other recent major outages have highlighted the inadequacies of existing power system indexes. In this work, a novel index-based reactive compensation scheme appropriate for both on-line and off-line computation of grid status has been developed. A new voltage stability index (Ls-index) suitable for long transmission lines was developed, simulated, and compared to the existing two-machine-modeled L-index. This showed the effect of long-distance power wheeling among regional transmission organizations. The dissertation further provided models for index-modulated voltage source converters (VSC) and index-based load flow analysis of both FACTS and microgrid-interconnected power systems using the Newton-Raphson load flow model incorporating multiple FACTS devices. The developed package has been made user-friendly through an interactive graphical user interface and implemented on the IEEE 14-, 30-, and 300-bus systems. The results showed that reactive compensation has a system-wide effect, provided readily accessible system status indicators, ensured seamless DER interconnection through new islanding modes, and enhanced VSC utilization. 
These outcomes may contribute to optimal utilization of compensation devices and available transfer capability, as well as reduce system outages through better regulation of operating voltages.
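As a rough illustration of the kind of index involved, the classical two-bus L-index (which the dissertation extends to an Ls-index for long lines) can be computed as follows. All per-unit values are invented for the example, and this is the textbook L-index, not the Ls-index developed in the work:

```python
# Two-bus voltage stability illustration: a source E feeds a load S
# through impedance Z. The load voltage solves V = E - Z * conj(S / V),
# and the classical L-index L = |1 - E/V| approaches 1.0 at the
# voltage-stability limit (L = 0 at no load).

def solve_load_voltage(e_src=1.0, z=0.05 + 0.25j, s_load=0.8 + 0.4j,
                       iters=50):
    """Fixed-point solution for the load-bus voltage (per unit)."""
    v = complex(e_src)
    for _ in range(iters):
        v = e_src - z * (s_load / v).conjugate()   # I = conj(S/V)
    return v

def l_index(e_src, v_load):
    """Classical two-bus L-index; 0 = no load, 1 = collapse limit."""
    return abs(1 - e_src / v_load)

v = solve_load_voltage()
L = l_index(1.0, v)   # a stable operating point gives 0 < L < 1
```

Heavier loading depresses the load voltage and pushes L toward 1, which is what makes such indexes usable as on-line grid status indicators.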

  8. A Community-Based Approach to Leading the Nation in Smart Energy Use

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    2013-12-31

    Project Objectives: The AEP Ohio gridSMART® Demonstration Project (Project) achieved the following objectives: • Built a secure, interoperable, and integrated smart grid infrastructure in northeast central Ohio that demonstrated the ability to maximize distribution system efficiency and reliability, along with consumer use of demand response programs that reduced energy consumption, peak demand, and fossil fuel emissions. • Actively attracted, educated, enlisted, and retained consumers in innovative business models that provided tools and information for reducing consumption and peak demand. • Provided the U.S. Department of Energy (DOE) information to evaluate technologies and preferred smart grid business models to be extended nationally. Project Description: Ohio Power Company (the surviving company of a merger with Columbus Southern Power Company), doing business as AEP Ohio, took a community-based approach and incorporated a full suite of advanced smart grid technologies for 110,000 consumers in an area selected for its concentration and diversity of distribution infrastructure and consumers. It was organized and aligned around: • Technology, implementation, and operations • Consumer and stakeholder acceptance • Data management and benefit assessment Combined, these functional areas served as the foundation of the Project to integrate commercially available products, innovative technologies, and new consumer products and services within a secure two-way communication network between the utility and consumers. The Project included Advanced Metering Infrastructure (AMI), a Distribution Management System (DMS), Distribution Automation Circuit Reconfiguration (DACR), Volt/VAR Optimization (VVO), and Consumer Programs (CP). These technologies were combined with two-way consumer communication and information sharing, demand response, dynamic pricing, and consumer products such as plug-in electric vehicles and smart appliances. 
In addition, the Project incorporated comprehensive cyber security capabilities, interoperability, and a data assessment that, with grid simulation capabilities, made the demonstration results an adaptable, integrated solution for AEP Ohio and the nation.

  9. A bioinformatics knowledge discovery in text application for grid computing

    PubMed Central

    Castellano, Marcello; Mastronardi, Giuseppe; Bellotti, Roberto; Tarricone, Gianfranco

    2009-01-01

    Background A fundamental activity in biomedical research is Knowledge Discovery, the ability to search through large amounts of biomedical information such as documents and data. High-performance computational infrastructures, such as Grid technologies, are emerging as a possible means to tackle the intensive use of information and communication resources in the life sciences. The goal of this work was to develop a software middleware solution that exploits knowledge discovery applications on scalable and distributed computing systems to make intensive use of ICT resources. Methods The development of a grid application for Knowledge Discovery in Text using a middleware-based methodology is presented. The system must be able to model the user application and process jobs so as to create many parallel jobs distributed over the computational nodes. Finally, the system must be aware of the computational resources available and their status, and must be able to monitor the execution of the parallel jobs. These operational requirements led to the design of a middleware to be specialized by user application modules. It included a graphical user interface giving access to a node search system, a load balancing system, and a transfer optimizer to reduce communication costs. Results A prototype middleware solution and its performance evaluation in terms of the speed-up factor are shown. It was written in Java on Globus Toolkit 4 to build the grid infrastructure based on GNU/Linux computer grid nodes. A test was carried out, and results are shown for the named-entity recognition search of symptoms and pathologies. The search was applied to a collection of 5,000 scientific documents taken from PubMed. Conclusion In this paper we discuss the development of a grid application based on a middleware solution. 
It has been tested on a knowledge discovery in text process to extract new and useful information about symptoms and pathologies from a large collection of unstructured scientific documents. As an example, a Knowledge Discovery in Database computation was applied to the output produced by the KDT user module to extract new knowledge about symptom and pathology bio-entities. PMID:19534749
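The job-splitting scheme described above (one document collection, many parallel named-entity jobs) can be sketched as follows. The symptom list, chunking, and thread pool are illustrative stand-ins for the paper's dictionary resources and its Globus-dispatched grid jobs:

```python
# Partition a document collection into chunks and run a naive
# dictionary-based entity matcher on each chunk in parallel.

from concurrent.futures import ThreadPoolExecutor

SYMPTOMS = {"fever", "cough", "fatigue"}   # toy term dictionary

def find_entities(doc: str) -> list:
    """Naive named-entity recognition: match known symptom terms."""
    words = {w.strip(".,;").lower() for w in doc.split()}
    return sorted(words & SYMPTOMS)

def process_chunk(docs: list) -> list:
    return [find_entities(d) for d in docs]

def run_parallel(collection, n_jobs=2):
    """Split the collection into contiguous chunks, one per worker."""
    size = max(1, -(-len(collection) // n_jobs))   # ceil division
    chunks = [collection[i:i + size] for i in range(0, len(collection), size)]
    with ThreadPoolExecutor(max_workers=n_jobs) as ex:
        results = list(ex.map(process_chunk, chunks))
    return [r for chunk in results for r in chunk]   # preserve order

docs = ["Patient reports fever and cough.", "No fatigue observed."]
hits = run_parallel(docs)
```

In the paper's system each chunk would become a separate grid job monitored by the middleware; the load balancer and transfer optimizer then decide where each chunk runs.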

  10. A bioinformatics knowledge discovery in text application for grid computing.

    PubMed

    Castellano, Marcello; Mastronardi, Giuseppe; Bellotti, Roberto; Tarricone, Gianfranco

    2009-06-16

    A fundamental activity in biomedical research is Knowledge Discovery, the ability to search through large amounts of biomedical information such as documents and data. High-performance computational infrastructures, such as Grid technologies, are emerging as a possible means to tackle the intensive use of information and communication resources in the life sciences. The goal of this work was to develop a software middleware solution that exploits knowledge discovery applications on scalable and distributed computing systems to make intensive use of ICT resources. The development of a grid application for Knowledge Discovery in Text using a middleware-based methodology is presented. The system must be able to model the user application and process jobs so as to create many parallel jobs distributed over the computational nodes. Finally, the system must be aware of the computational resources available and their status, and must be able to monitor the execution of the parallel jobs. These operational requirements led to the design of a middleware to be specialized by user application modules. It included a graphical user interface giving access to a node search system, a load balancing system, and a transfer optimizer to reduce communication costs. A prototype middleware solution and its performance evaluation in terms of the speed-up factor are shown. It was written in Java on Globus Toolkit 4 to build the grid infrastructure based on GNU/Linux computer grid nodes. A test was carried out, and results are shown for the named-entity recognition search of symptoms and pathologies. The search was applied to a collection of 5,000 scientific documents taken from PubMed. In this paper we discuss the development of a grid application based on a middleware solution. 
It has been tested on a knowledge discovery in text process to extract new and useful information about symptoms and pathologies from a large collection of unstructured scientific documents. As an example, a Knowledge Discovery in Database computation was applied to the output produced by the KDT user module to extract new knowledge about symptom and pathology bio-entities.

  11. A Transparent Translation from Legacy System Model into Common Information Model: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Fei; Simpson, Jeffrey; Zhang, Yingchen

    Advances in the smart grid are pushing utilities towards better monitoring, control, and analysis of distribution systems, and require extensive cyber-based intelligent systems and applications to realize various functionalities. The ability of systems, or components within systems, to interact and exchange services or information with each other is key to the success of smart grid technologies, and it requires an efficient information-exchange and data-sharing infrastructure. The Common Information Model (CIM) is a standard that allows different applications to exchange information about an electrical system, and it has become a widely accepted solution for information exchange among different platforms and applications. However, most existing legacy systems were not developed using CIM but use their own languages. Integrating such legacy systems is a challenge for utilities, and the appropriate utilization of the integrated legacy systems is even more intricate. This paper therefore develops an approach and an open-source tool to translate legacy system models into CIM format. The developed tool is tested on a commercial distribution management system, and simulation results have proved its effectiveness.
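The translation idea can be sketched as follows: a legacy record, here an invented dict, is mapped onto CIM RDF/XML. The CIM class and property names (cim:ACLineSegment, cim:IdentifiedObject.name, cim:ACLineSegment.r) follow the IEC 61970 vocabulary, but the legacy field names and the overall mapping are illustrative assumptions, not the paper's actual tool:

```python
# Map a legacy line record onto a minimal CIM RDF/XML fragment.

import xml.etree.ElementTree as ET

CIM_NS = "http://iec.ch/TC57/2013/CIM-schema-cim16#"
RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def legacy_line_to_cim(record: dict) -> ET.Element:
    """Translate one legacy line-segment record into CIM RDF/XML."""
    ET.register_namespace("cim", CIM_NS)
    ET.register_namespace("rdf", RDF_NS)
    root = ET.Element(f"{{{RDF_NS}}}RDF")
    seg = ET.SubElement(root, f"{{{CIM_NS}}}ACLineSegment",
                        {f"{{{RDF_NS}}}ID": record["id"]})
    name = ET.SubElement(seg, f"{{{CIM_NS}}}IdentifiedObject.name")
    name.text = record["name"]
    r = ET.SubElement(seg, f"{{{CIM_NS}}}ACLineSegment.r")
    r.text = str(record["resistance_ohm"])
    return root

# Invented legacy record, standing in for a proprietary model export.
legacy = {"id": "LN-001", "name": "Feeder 12 segment", "resistance_ohm": 0.12}
xml_text = ET.tostring(legacy_line_to_cim(legacy), encoding="unicode")
```

A real translator must also resolve object references between records (terminals, nodes, containers), which is where most of the difficulty in legacy-to-CIM conversion lies.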

  12. Cultured Construction: Global Evidence of the Impact of National Values on Renewable Electricity Infrastructure Choice.

    PubMed

    Kaminsky, Jessica A

    2016-02-16

    Renewable electricity is an important tool in the fight against climate change, but globally these technologies are still in the early stages of diffusion. To contribute to our understanding of the factors driving this diffusion, I study relationships between national values (measured by Hofstede's cultural dimensions) and renewable electricity adoption at the national level. Existing data for 66 nations (representing an equal number of developed and developing economies) are used in the analysis. Somewhat dependent on the limited data available to control for grid reliability and the cost of electricity, I find that three of Hofstede's dimensions (high uncertainty avoidance, low masculinity-femininity, and high individualism-collectivism) have significant exponential relationships with renewable electricity adoption. The dimension of uncertainty avoidance appears particularly appropriate for practical application, and projects or organizations implementing renewable electricity policy, designs, or construction should attend to it. Indeed, as the data imply that renewable technologies are being used to manage risk in electricity supply, geographies with unreliable grids are particularly likely to be open to renewable electricity technologies.

  13. Grid enablement of OpenGeospatial Web Services: the G-OWS Working Group

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo

    2010-05-01

    In recent decades two main paradigms for resource sharing emerged and reached maturity: the Web and the Grid. Both have proven suitable for building Distributed Computing Infrastructures (DCIs) supporting the coordinated sharing of resources (i.e. data, information, services, etc.) on the Internet. Grid and Web DCIs have much in common as a result of their underlying Internet technology (protocols, models, and specifications). However, being based on different requirements and architectural approaches, they show some differences as well. The Web's "major goal was to be a shared information space through which people and machines could communicate" [Berners-Lee 1996]. The success of the Web, and its consequent pervasiveness, made it appealing for building specialized systems such as Spatial Data Infrastructures (SDIs). In these systems the introduction of Web-based geo-information technologies enables specialized services for geospatial data sharing and processing. The Grid was born to achieve "flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources" [Foster 2001]. It specifically focuses on large-scale resource sharing, innovative applications, and, in some cases, high-performance orientation. In the Earth and Space Sciences (ESS), most of the information handled is geo-referenced, since spatial and temporal meta-information is of primary importance in many application domains: Earth sciences, disaster management, environmental sciences, etc. On the other hand, several application areas need to run complex models which require the large processing and storage capabilities that Grids are able to provide. Therefore the integration of geo-information and Grid technologies could be a valuable approach to enabling advanced ESS applications. 
Currently both geo-information and Grid technologies have reached a high level of maturity, allowing such an integration to be built on existing solutions. More specifically, the Open Geospatial Consortium (OGC) Web Services (OWS) specifications play a fundamental role in geospatial information sharing (e.g. in the INSPIRE Implementing Rules, the GEOSS architecture, GMES services, etc.). On the Grid side, the gLite middleware, developed in the European EGEE (Enabling Grids for E-sciencE) projects, is widely deployed in Europe and beyond, has proven highly scalable, and is one of the middlewares chosen for the future European Grid Infrastructure (EGI) initiative. Therefore, convergence between OWS and gLite technologies would be desirable for seamless access to Grid capabilities through OWS-compliant systems. However, to achieve this harmonization some obstacles must be overcome. Firstly, a semantic mismatch must be addressed: gLite handles low-level (i.e. close to the machine) concepts like "file", "data", "instruments", "job", etc., while geo-information services handle higher-level (closer to the human) concepts like "coverage", "observation", "measurement", "model", etc. Secondly, an architectural mismatch must be addressed: OWS implements a Web service-oriented architecture which is stateless, synchronous, and with no embedded security (which is delegated to other specifications), while gLite implements the Grid paradigm in an architecture which is stateful, asynchronous (though not fully event-based), and with strong embedded security (based on the VO paradigm). In recent years many initiatives and projects have worked out possible approaches for implementing Grid-enabled OWSs. 
To mention some: (i) in 2007 the OGC signed a Memorandum of Understanding with the Open Grid Forum, "a community of users, developers, and vendors leading the global standardization effort for grid computing"; (ii) the OGC identified "WPS Profiles - Conflation; and Grid processing" as one of the tasks in the Geo Processing Workflow theme of OWS Phase 6 (OWS-6); (iii) several national, European, and international projects investigated different aspects of this integration, developing demonstrators and proofs of concept. In this context, "gLite enablement of OpenGeospatial Web Services" (G-OWS) is an initiative started in 2008 by the European CYCLOPS, GENESI-DR, and DORII project consortia in order to collect and coordinate experiences on the enablement of OWS on top of the gLite middleware [GOWS]. Currently G-OWS counts ten member organizations from Europe and beyond, with four European projects involved. It has broadened its scope to the development of Spatial Data and Information Infrastructures (SDI and SII) based on Grid/Cloud capacity in order to enable Earth science applications and tools. Its operational objectives are the following: i) to contribute to the OGC-OGF initiative; ii) to release a reference implementation as standard gLite APIs (under the gLite software license); iii) to release a reference model (including procedures and guidelines) for OWS Grid-ification, as far as gLite is concerned; iv) to foster and promote the formation of consortia for participation in projects and initiatives aimed at building Grid-enabled SDIs. To achieve these objectives G-OWS bases its activities on two main guiding principles: a) the adoption of a service-oriented architecture based on the information modelling approach, and b) standardization as a means of achieving interoperability (i.e. adoption of standards from ISO TC211, OGC OWS, and OGF). 
In its first year of activity G-OWS designed a general architectural framework stemming from the FP6 CYCLOPS studies and enriched by the outcomes of the other projects and initiatives involved (i.e. FP7 GENESI-DR, FP7 DORII, AIST GeoGrid, etc.). Proofs of concept have been developed to demonstrate the flexibility and scalability of this architectural framework. The G-OWS WG developed implementations of a gLite-enabled Web Coverage Service (WCS) and Web Processing Service (WPS), and an implementation of Shibboleth authentication for gLite-enabled OWS in order to evaluate the possible integration of Web and Grid security models. The presentation will aim to communicate the G-OWS organization, activities, future plans, and means for involving the ESSI community. References [Berners-Lee 1996] T. Berners-Lee, "WWW: Past, present, and future", IEEE Computer, 29(10), Oct. 1996, pp. 69-77. [Foster 2001] I. Foster, C. Kesselman and S. Tuecke, "The Anatomy of the Grid", The International Journal of High Performance Computing Applications, 15(3):200-222, Fall 2001. [GOWS] G-OWS WG, https://www.g-ows.org/, accessed: 15 January 2010

  14. The Road to Success: Importance of Construction on Reconstruction in Conflict-Affected States

    DTIC Science & Technology

    2011-12-01

    provision of infrastructure services, formation of the market, management of the state's assets, international relations, and rule of law (p. 5). Both... international capability to speed project completion and raise quality of critical infrastructure development such as the electrical grid. Using...

  15. Integrating TRENCADIS components in gLite to share DICOM medical images and structured reports.

    PubMed

    Blanquer, Ignacio; Hernández, Vicente; Salavert, José; Segrelles, Damià

    2010-01-01

    The problem of sharing medical information among different centres has been tackled by many projects. Several of them target the specific problem of sharing DICOM images and structured reports (DICOM-SR), such as the TRENCADIS project. In this paper we propose sharing and organizing DICOM data and DICOM-SR metadata using the existing deployed Grid infrastructures compliant with gLite, such as EGEE or the Spanish NGI. These infrastructures contribute a large amount of storage resources for creating knowledge databases and also provide metadata storage resources (such as AMGA) to organize reports semantically in a tree structure. First, we present the extension of the TRENCADIS architecture to use gLite components (LFC, AMGA, SE) for the sake of increasing interoperability. Using the metadata from DICOM-SR, while maintaining its tree structure, enables federating different but compatible diagnostic structures and simplifies the definition of complex queries. This article describes how to do this in AMGA and shows an approach to efficiently code radiology reports to enable the multi-centre federation of data resources.
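The tree-structured metadata organization described above can be pictured with a toy model: reports attached to hierarchical collection paths and queried by subtree plus attribute filters. AMGA exposes a similar directory-like metadata model, but the paths, fields, and values here are invented for illustration:

```python
# Toy tree-structured report catalogue: path -> report attributes.

REPORTS = {
    "/radiology/chest/ct/r001": {"finding": "nodule", "modality": "CT"},
    "/radiology/chest/xr/r002": {"finding": "normal", "modality": "XR"},
    "/radiology/abdomen/ct/r003": {"finding": "lesion", "modality": "CT"},
}

def query(subtree: str, **filters):
    """Return report paths under a subtree matching all attribute filters."""
    prefix = subtree.rstrip("/") + "/"
    return sorted(
        path for path, attrs in REPORTS.items()
        if path.startswith(prefix)
        and all(attrs.get(k) == v for k, v in filters.items())
    )
```

Keeping the DICOM-SR tree shape in the metadata catalogue is what lets compatible diagnostic structures from different centres be federated under a common subtree and queried uniformly.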

  16. Smart Grid Adoption Likeliness Framework: Comparing Idaho and National Residential Consumers' Perceptions

    NASA Astrophysics Data System (ADS)

    Baiya, Evanson G.

    New energy technologies that provide real-time visibility of the electricity grid's performance, along with the ability to address unusual events in the grid and allow consumers to manage their energy use, are being developed in the United States. Primary drivers for the new technologies include the growing energy demand, tightening environmental regulations, aging electricity infrastructure, and rising consumer demand to become more involved in managing individual energy usage. In the literature and in practice, it is unclear if, and to what extent, residential consumers will adopt smart grid technologies. The purpose of this quantitative study was to examine the relationships between demographic characteristics, perceptions, and the likelihood of adopting smart grid technologies among residential energy consumers. The results of a 31-item survey were analyzed for differences within the Idaho consumers and compared against national consumers. Analysis of variance was used to examine possible differences between the dependent variable of likeliness to adopt smart grid technologies and the independent variables of age, gender, residential ownership, and residential location. No differences were found among Idaho consumers in their likeliness to adopt smart grid technologies. An independent sample t-test was used to examine possible differences between the two groups of Idaho consumers and national consumers in their level of interest in receiving detailed feedback information on energy usage, the added convenience of the smart grid, renewable energy, the willingness to pay for infrastructure costs, and the likeliness to adopt smart grid technologies. The level of interest in receiving detailed feedback information on energy usage was significantly different between the two groups (t = 3.11, p = .0023), while the other variables were similar. 
The study contributes to technology adoption research regarding specific consumer perceptions and provides a framework that estimates the likeliness of adopting smart grid technologies by residential consumers. The study findings could assist public utility managers and technology adoption researchers as they develop strategies to enable wide-scale adoption of smart grid technologies as a solution to the energy problem. Future research should be conducted among commercial and industrial energy consumers to further validate the findings and conclusions of this research.
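As a minimal illustration of the independent-samples t statistic behind the group comparison above, the following computes Welch's variant from scratch. The Likert-style scores are invented for the example and are not the study's survey data:

```python
# Welch's t statistic for two independent samples, from first principles.

from statistics import mean, variance
from math import sqrt

def welch_t(sample_a, sample_b):
    """t = (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    return (mean(sample_a) - mean(sample_b)) / sqrt(va / na + vb / nb)

idaho = [4, 5, 3, 4, 4, 5, 3]       # invented Likert-style scores
national = [3, 3, 2, 4, 3, 2, 3]
t = welch_t(idaho, national)        # positive: first group scored higher
```

A significance decision additionally needs the Welch-Satterthwaite degrees of freedom and a t distribution lookup, which standard statistics packages provide.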

  17. Grid-supported Medical Digital Library.

    PubMed

    Kosiedowski, Michal; Mazurek, Cezary; Stroinski, Maciej; Weglarz, Jan

    2007-01-01

    Secure, flexible, and efficient storage and access of digital medical data is one of the key elements of delivering successful telemedical systems. To this end, grid technologies designed and developed over recent years, and the grid infrastructures deployed with their use, seem to provide an excellent opportunity for the creation of a powerful environment capable of delivering tools and services for medical data storage, access, and processing. In this paper we present the early results of our work towards establishing a Medical Digital Library supported by grid technologies and discuss future directions of its development. This work is part of the "Telemedycyna Wielkopolska" project, which aims to develop a telemedical system supporting regional healthcare.

  18. Synergy Between Archives, VO, and the Grid at ESAC

    NASA Astrophysics Data System (ADS)

    Arviset, C.; Alvarez, R.; Gabriel, C.; Osuna, P.; Ott, S.

    2011-07-01

    Over the years, in support of the Science Operations Centres at ESAC, we have set up two Grid infrastructures. These have been built: 1) to facilitate daily research for scientists at ESAC; 2) to provide high computing capabilities for project data processing pipelines (e.g., Herschel); 3) to support science operations activities (e.g., calibration monitoring). Furthermore, closer collaboration between the science archives, the Virtual Observatory (VO), and data processing activities has led to another Grid use case: the Remote Interface to XMM-Newton SAS Analysis (RISA). This web service-based system allows users to launch SAS tasks transparently on the Grid, save results on HTTP-based storage, and visualize them through VO tools. This paper presents real and operational use cases of Grid usage in these contexts.

  19. CMS Connect

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Gardner, R., Jr.; Hurtado Anampa, K.; Jayatilaka, B.; Aftab Khan, F.; Lannon, K.; Larson, K.; Letts, J.; Marra Da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

    The CMS experiment collects and analyzes large amounts of data coming from high energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled on batch-oriented platforms. The CMS Global Pool of computing resources provides over 100K dedicated CPU cores, and another 50K to 100K CPU cores from opportunistic resources, for these kinds of tasks. Even though production and event-processing analysis workflows are already managed by existing tools, there is still a lack of support for submitting final-stage, Condor-like analysis jobs familiar to Tier-3 or local computing facility users into these distributed resources in a way that is both integrated with other CMS services and friendly to users. CMS Connect is a set of computing tools and services designed to augment existing services in the CMS physics community, focusing on such Condor analysis jobs. It is based on the CI-Connect platform developed by the Open Science Grid and uses the CMS GlideinWMS infrastructure to transparently plug CMS global grid resources into a virtual pool accessed via a single submission machine. This paper describes the developments and deployment of CMS Connect beyond the CI-Connect platform to integrate the service with CMS-specific needs, including site-specific submission, job accounting, and automated reporting to standard CMS monitoring resources, in a way that is effortless for users.
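The "Condor-like analysis jobs" referred to above are driven by HTCondor submit descriptions. A minimal, generic example of the form such a description takes is sketched below; the executable name, file names, and resource request are illustrative assumptions, not CMS Connect defaults:

```text
# Illustrative HTCondor submit description for a final-stage analysis job.
executable   = run_analysis.sh
arguments    = $(ProcId)
output       = job.$(ProcId).out
error        = job.$(ProcId).err
log          = analysis.log
request_cpus = 1
queue 10
```

Submitting this with `condor_submit` queues ten numbered jobs; a service like CMS Connect then routes them from the submission machine into the distributed glidein pool instead of a local cluster.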

  20. A technological review on electric vehicle DC charging stations using photovoltaic sources

    NASA Astrophysics Data System (ADS)

    Youssef, Cheddadi; Fatima, Errahimi; najia, Es-sbai; Chakib, Alaoui

    2018-05-01

    Within the next few years, electrified vehicles are destined to become an essential component of the transport sector. Consequently, the charging infrastructure must be developed at the same time. Within this substructure, photovoltaic-assisted charging stations are attracting substantial interest due to increased environmental awareness, cost reductions, and the rising efficiency of PV modules. The intention of this paper is to review the technological status of photovoltaic-electric vehicle (PV-EV) charging stations over the last decade. PV-EV charging stations fall into two categories: PV-grid and PV-standalone charging systems. From a practical point of view, the distinction between the two architectures is the bidirectional inverter, which is added to link the station to the smart grid. The technological infrastructure includes the common hardware components of every station, namely: the PV array, a dc-dc converter with MPPT control, an energy storage unit, a bidirectional dc charger, and an inverter. We investigate, compare, and evaluate many valuable studies covering the design and control of PV-EV charging systems. Additionally, this concise overview reports studies addressing charging standards, power converter topologies focusing on the adoption of vehicle-to-grid (V2G) technology, and the control of both PV-grid and PV-standalone DC charging systems.
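The MPPT control mentioned for the dc-dc stage is commonly implemented with a perturb-and-observe loop. The following is a minimal sketch on a toy PV curve; the linear I-V model, step size, and starting point are illustrative assumptions, not a real module's characteristics:

```python
# Perturb-and-observe MPPT on a toy PV curve. With i = isc*(1 - v/voc),
# power P = v*i peaks at v = voc/2 = 15 V for this linear model
# (real PV curves are nonlinear, but the tracking logic is the same).

def pv_power(v):
    """Toy PV power curve: isc = 8 A, voc = 30 V, maximum near 15 V."""
    return max(0.0, v * 8.0 * (1 - v / 30.0))

def perturb_and_observe(v0=20.0, step=0.5, iters=200):
    """Perturb the operating voltage; reverse direction when power drops."""
    v, p_prev, direction = v0, pv_power(v0), 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()   # settles into oscillation around the MPP
```

The fixed step trades tracking speed against steady-state oscillation around the maximum power point, which is why many of the reviewed designs use variable-step or incremental-conductance variants.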

  1. A Roadmap for caGrid, an Enterprise Grid Architecture for Biomedical Research

    PubMed Central

    Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Hong, Neil Chue

    2012-01-01

    caGrid is a middleware system which combines the Grid computing, service-oriented architecture, and model-driven architecture paradigms to support the development of interoperable data and analytical resources and the federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG™) program. This program was established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research, with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from the biomedical research, Grid computing, and high-performance computing communities. PMID:18560123

  2. A roadmap for caGrid, an enterprise Grid architecture for biomedical research.

    PubMed

    Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Chue Hong, Neil

    2008-01-01

    caGrid is a middleware system which combines the Grid computing, service-oriented architecture, and model-driven architecture paradigms to support the development of interoperable data and analytical resources and the federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG) program. This program was established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research, with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from the biomedical research, Grid computing, and high-performance computing communities.

  3. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Bockelman, B.; Blomer, J.; Herner, K.; Levshina, T.; Slyz, M.

    2015-12-01

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job) and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality in the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called "alien cache" to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid, cached locally at the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.

  4. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, D.; Bockelman, B.; Blomer, J.

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job) and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality in the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called 'alien cache' to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid, cached locally at the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.
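The alien-cache deployment described in records 3 and 4 comes down to a few CVMFS client settings. The sketch below shows the general shape of such a configuration; the repository name and cache path are placeholders, and a real site would use its own values:

```shell
# /etc/cvmfs/default.local -- hypothetical client configuration for an
# "alien" cache kept on a shared high-bandwidth filesystem.
CVMFS_REPOSITORIES=aux.example.org           # placeholder repository name
CVMFS_ALIEN_CACHE=/lustre/cvmfs-alien-cache  # shared cache on the data servers
CVMFS_SHARED_CACHE=no                        # alien cache requires non-shared mode
CVMFS_QUOTA_LIMIT=-1                         # cache cleanup is managed externally
```

With an alien cache, every worker node at the site reads and writes the same cache directory, which is what allows the shared data servers to replace both the per-node disk caches and the site squids.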

  5. Optimizing the Prioritization of Natural Disaster Recovery Projects

    DTIC Science & Technology

    2007-03-01

    collection, and basic utility and infrastructure restoration. The restoration of utilities can include temporary bridges, temporary water and sewage lines...interrupted such as in the case of the 9/11 disaster. Perhaps next time our enemies may target our power grid or water systems. It is the duty of...Transportation The amount and type of transportation infrastructure damage a repair project addresses Water The amount and type of water

  6. Designing for Wide-Area Situation Awareness in Future Power Grid Operations

    NASA Astrophysics Data System (ADS)

    Tran, Fiona F.

    Power grid operation uncertainty and complexity continue to increase with the rise of electricity market deregulation, renewable generation, and interconnectedness between multiple jurisdictions. Human operators need appropriate wide-area visualizations to help them monitor system status and ensure reliable operation of the interconnected power grid. We observed transmission operations at a control centre, conducted critical incident interviews, and led focus group sessions with operators. The results informed a Work Domain Analysis of power grid operations, which in turn informed an Ecological Interface Design concept for wide-area monitoring. I validated the design concepts through tabletop discussions and a usability evaluation with operators, which yielded a mean System Usability Scale score of 77 out of 100. The design concepts aim to support an operator's complete and accurate understanding of the power grid state, which operators increasingly require due to the critical nature of power grid infrastructure and growing sources of system uncertainty.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadi, Mohammad A. H.; Dasgupta, Dipankar; Ali, Mohammad Hassan

    The important backbone of the smart grid is the cyber/information infrastructure, which is primarily used to communicate with different grid components. A smart grid is a complex cyber-physical system containing a large number and variety of sources, devices, controllers, and loads. Therefore, the smart grid is vulnerable to grid-related disturbances. For such a dynamic system, disturbance and intrusion detection is a paramount issue. This paper presents a Simulink- and Opnet-based co-simulated platform to carry out cyber-intrusions into the cyber network of modern power systems and the smart grid. The IEEE 30-bus power system model is used to demonstrate the effectiveness of the simulated testbed. The experiments were performed by disturbing the circuit breakers' reclosing time through a cyber-attack. Different disturbance situations in the test system are considered, and the results indicate the effectiveness of the proposed co-simulated scheme.

  8. Grid accounting service: state and future development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levshina, T.; Sehgal, C.; Bockelman, B.

    2014-01-01

    During the last decade, large-scale federated distributed infrastructures have been continually developed and expanded. One of the crucial components of a cyber-infrastructure is an accounting service that collects data related to resource utilization and the identity of users using those resources. The accounting service is important for verifying pledged resource allocations for particular groups and users, providing reports for funding agencies and resource providers, and understanding hardware provisioning requirements. It can also be used for end-to-end troubleshooting as well as billing purposes. In this work we describe Gratia, a federated accounting service jointly developed at Fermilab and the Holland Computing Center (HCC) at the University of Nebraska-Lincoln. The Open Science Grid, Fermilab, HCC, and several other institutions have used Gratia in production for several years. The current development activities include expanding Virtual Machine provisioning information, XSEDE allocation usage accounting, and Campus Grids resource utilization. We also identify the directions of future work: improvement and expansion of Cloud accounting, persistent and elastic storage space allocation, and the incorporation of WAN and LAN network metrics.
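The core of an accounting service like the one described above is the aggregation of raw usage records into per-group and per-user totals. The Python sketch below illustrates that idea only; the record fields and function names are invented for illustration and are not Gratia's actual schema or API:

```python
# Toy aggregation of raw usage records into wall-hours per (VO, user).
# The record layout here is an assumption, not Gratia's schema.
from collections import defaultdict

def summarize_usage(records):
    """Sum wall-clock hours per (vo, user) pair from raw job records."""
    totals = defaultdict(float)
    for rec in records:
        totals[(rec["vo"], rec["user"])] += rec["wall_seconds"] / 3600.0
    return dict(totals)

jobs = [
    {"vo": "cms", "user": "alice", "wall_seconds": 7200},
    {"vo": "cms", "user": "alice", "wall_seconds": 1800},
    {"vo": "osg", "user": "bob", "wall_seconds": 3600},
]
report = summarize_usage(jobs)  # {-> ("cms", "alice"): 2.5, ("osg", "bob"): 1.0}
```

The same per-group totals feed the allocation checks, funding-agency reports, and billing uses the abstract lists.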

  9. Dashboard Task Monitor for Managing ATLAS User Analysis on the Grid

    NASA Astrophysics Data System (ADS)

    Sargsyan, L.; Andreeva, J.; Jha, M.; Karavakis, E.; Kokoszkiewicz, L.; Saiz, P.; Schovancova, J.; Tuckett, D.; Atlas Collaboration

    2014-06-01

    The organization of the distributed user analysis on the Worldwide LHC Computing Grid (WLCG) infrastructure is one of the most challenging tasks among the computing activities at the Large Hadron Collider. The Experiment Dashboard offers a solution that not only monitors but also manages (kill, resubmit) user tasks and jobs via a web interface. The ATLAS Dashboard Task Monitor provides analysis users with a tool that is independent of the operating system and Grid environment. This contribution describes the functionality of the application and its implementation details, in particular authentication, authorization and audit of the management operations.
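The management side of such a tool reduces to a small set of audited state transitions on jobs. The Python sketch below models the kill and resubmit operations with an audit trail; the states, class names, and log format are assumptions for illustration, not the Dashboard's implementation:

```python
# Minimal job-management state machine with an audit trail, loosely
# modelling the kill/resubmit operations the abstract describes.
class Job:
    def __init__(self, job_id):
        self.job_id = job_id
        self.state = "running"
        self.attempts = 1

class TaskMonitor:
    def __init__(self, jobs):
        self.jobs = {j.job_id: j for j in jobs}
        self.audit = []  # every management operation is recorded (who, what, which)

    def kill(self, user, job_id):
        self.jobs[job_id].state = "killed"
        self.audit.append((user, "kill", job_id))

    def resubmit(self, user, job_id):
        job = self.jobs[job_id]
        if job.state in ("failed", "killed"):  # only terminal jobs are resubmitted
            job.state = "running"
            job.attempts += 1
            self.audit.append((user, "resubmit", job_id))

mon = TaskMonitor([Job("j1"), Job("j2")])
mon.kill("alice", "j1")
mon.resubmit("alice", "j1")
```

The audit list corresponds to the "audit of the management operations" the abstract mentions: every action is attributable to a user.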

  10. A Study of ATLAS Grid Performance for Distributed Analysis

    NASA Astrophysics Data System (ADS)

    Panitkin, Sergey; Fine, Valery; Wenaus, Torre

    2012-12-01

    In the past two years the ATLAS Collaboration at the LHC has collected a large volume of data and published a number of groundbreaking papers. The Grid-based ATLAS distributed computing infrastructure played a crucial role in enabling timely analysis of the data. We will present a study of the performance and usage of the ATLAS Grid as a platform for physics analysis in 2011. This includes studies of general properties as well as timing properties of user jobs (wait time, run time, etc.). These studies are based on mining of data archived by the PanDA workload management system.
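The timing studies mentioned above amount to computing summary statistics over archived job records. A minimal Python sketch follows, with an invented record layout rather than PanDA's actual schema:

```python
# Toy timing study over archived job records: derive wait time and run time
# from (submit, start, end) timestamps, then summarize. Data is illustrative.
import statistics

jobs = [  # (submit, start, end) timestamps in seconds
    (0, 120, 720),
    (10, 400, 1000),
    (20, 80, 2000),
]

wait_times = [start - submit for submit, start, _ in jobs]  # queueing delay
run_times = [end - start for _, start, end in jobs]         # execution time

summary = {
    "median_wait_s": statistics.median(wait_times),
    "median_run_s": statistics.median(run_times),
}
```

Medians are a natural choice here because grid job timing distributions are typically heavy-tailed, so means would be dominated by a few stragglers.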

  11. Separating Added Value from Hype: Some Experiences and Prognostications

    NASA Astrophysics Data System (ADS)

    Reed, Dan

    2004-03-01

    These are exciting times for the interplay of science and computing technology. As new data archives, instruments and computing facilities are connected nationally and internationally, a new model of distributed scientific collaboration is emerging. However, any new technology brings both opportunities and challenges -- Grids are no exception. In this talk, we will discuss some of the experiences deploying Grid software in production environments, illustrated with experiences from the NSF PACI Alliance, the NSF Extensible Terascale Facility (ETF) and other Grid projects. From these experiences, we derive some guidelines for deployment and some suggestions for community engagement, software development and infrastructure

  12. SMART Grid Study Act of 2013

    THOMAS, 113th Congress

    Rep. Payne, Donald M., Jr. [D-NJ-10]

    2013-08-01

    House - 09/06/2013 Referred to the Subcommittee on Cybersecurity, Infrastructure Protection, and Security Technologies. Status: Introduced.

  13. Towards resiliency with micro-grids: Portfolio optimization and investment under uncertainty

    NASA Astrophysics Data System (ADS)

    Gharieh, Kaveh

    Energy security and a sustained supply of power are critical for community welfare and economic growth. In the face of the increased frequency and intensity of extreme weather events that can result in power grid outages, the value of micro-grids in improving communities' power reliability and resiliency is becoming more important. Micro-grids' capability to operate in islanded mode under stressed conditions dramatically decreases the economic loss of critical infrastructure during power shortages. More widespread participation of micro-grids in the wholesale energy market in the near future makes the development of new investment models necessary. However, short- and long-term market and price risks, along with the impacts of risk factors, must be taken into consideration in developing such models. This work proposes a set of models and tools to address different problems associated with micro-grid assets, including optimal portfolio selection, investment, and financing at both the community level and that of a sample critical infrastructure (i.e., a wastewater treatment plant). The models account for short-term operational volatilities and long-term market uncertainties. A number of analytical methodologies and financial concepts have been adopted to develop the aforementioned models, as follows. (1) Capital budgeting planning and portfolio optimization models with Monte Carlo stochastic scenario generation are applied to derive the optimal investment decision for a portfolio of micro-grid assets, considering risk factors and multiple sources of uncertainty. (2) Real Option theory, Monte Carlo simulation, and stochastic optimization techniques are applied to obtain optimal modularized investment decisions for hydrogen tri-generation systems in wastewater treatment facilities, considering multiple sources of uncertainty. (3) The Public Private Partnership (PPP) financing concept, coupled with an investment horizon approach, is applied to estimate the public and private parties' revenue shares from a community-level micro-grid project over the course of the assets' lifetime, considering their optimal operation under uncertainty.
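Step (1) above, Monte Carlo scenario generation followed by portfolio selection under a budget, can be sketched as follows. All asset names, costs, benefits, and distributions are invented toy values, not results from the thesis:

```python
# Toy Monte Carlo portfolio selection for micro-grid assets: enumerate
# feasible portfolios under a budget and pick the best expected net benefit.
import random
from itertools import combinations

random.seed(42)
assets = {            # name: (capital cost, mean annual benefit) -- invented
    "pv":      (100, 30),
    "battery": ( 80, 20),
    "chp":     (120, 40),
}
BUDGET = 200
N_SCENARIOS = 2000

def expected_benefit(portfolio):
    # Each scenario perturbs benefits +/-30% to mimic market uncertainty.
    total = 0.0
    for _ in range(N_SCENARIOS):
        total += sum(assets[a][1] * random.uniform(0.7, 1.3) for a in portfolio)
    return total / N_SCENARIOS

best = None
for r in range(1, len(assets) + 1):
    for combo in combinations(assets, r):
        cost = sum(assets[a][0] for a in combo)
        if cost <= BUDGET:  # budget-feasible portfolios only
            score = expected_benefit(combo) - 0.1 * cost  # crude capital charge
            if best is None or score > best[1]:
                best = (combo, score)
```

A real capital-budgeting model would replace the brute-force enumeration with a stochastic program and the uniform noise with calibrated price and demand scenarios; the structure, however, is the same.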

  14. Using OSG Computing Resources with (iLC)Dirac

    NASA Astrophysics Data System (ADS)

    Sailer, A.; Petric, M.; CLICdp Collaboration

    2017-10-01

    CPU cycles for small experiments and projects can be scarce, thus making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called ‘SiteDirectors’, which submit directly to the local batch system. This in turn requires additional dedicated effort for small experiments at the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows accessing them within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure. To allow sending jobs to HTCondor-CE and legacy Globus computing elements from DIRAC, the required wrapper classes were developed. Not only is the usage of these types of computing elements now completely transparent for all DIRAC instances, which makes DIRAC a flexible solution for OSG-based virtual organisations, but it also allows LCG grid sites to move to the HTCondor-CE software without shutting DIRAC-based VOs out of their sites. In these proceedings we detail how we interfaced the DIRAC system to the HTCondor-CE and Globus computing elements, explain the encountered obstacles and the solutions developed, and describe how the linear collider community uses resources in the OSG.
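The wrapper-class approach described above keeps caller code backend-agnostic behind a common computing-element interface. The Python sketch below illustrates that pattern only; the class and method names are invented and do not reflect DIRAC's actual API:

```python
# Pattern sketch: one abstract computing-element interface, one concrete
# wrapper per backend, so submission code never mentions a backend.
from abc import ABC, abstractmethod

class ComputingElement(ABC):
    @abstractmethod
    def submit(self, job_script: str) -> str:
        """Submit a job and return a backend-specific job identifier."""

class HTCondorCE(ComputingElement):
    def submit(self, job_script):
        # Stand-in for invoking condor_submit against the remote CE.
        return f"htcondor://{hash(job_script) & 0xFFFF}"

class GlobusCE(ComputingElement):
    def submit(self, job_script):
        # Stand-in for invoking the legacy Globus GRAM submission path.
        return f"gram://{hash(job_script) & 0xFFFF}"

def run_everywhere(ces, job_script):
    # Caller code is identical for every backend -- the point of the wrappers.
    return [ce.submit(job_script) for ce in ces]

ids = run_everywhere([HTCondorCE(), GlobusCE()], "echo hello")
```

Adding a new CE type then means writing one more subclass, with no changes to the code that schedules or tracks jobs.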

  15. Grid-enabled mammographic auditing and training system

    NASA Astrophysics Data System (ADS)

    Yap, M. H.; Gale, A. G.

    2008-03-01

    Effective use of new technologies to support healthcare initiatives is important, and current research is moving towards implementing secure grid-enabled healthcare provision. In the UK, a large-scale collaborative research project (GIMI: Generic Infrastructures for Medical Informatics), concerned with the development of a secure IT infrastructure to support very widespread medical research across the country, is underway. Nationally, there are some 109 breast screening centers and a growing number of individuals (circa 650) performing approximately 1.5 million screening examinations per year. At the same time, there is a serious and ongoing national workforce issue in screening, which has seen a loss of consultant mammographers and a growth in specially trained technologists and other non-radiologists. Thus there is a need to offer effective and efficient mammographic training so as to maintain high levels of screening skills. Consequently, a grid-based system has been proposed, which has the benefit of offering very large volumes of training cases that mammographers can access anytime and anywhere. A database of screening cases, spread geographically across three university systems, is used as a test set of known cases. The GIMI mammography training system first audits these cases to ensure that they are appropriately described and annotated. Subsequently, the cases are utilized for training in the grid-based system that has been developed. This paper briefly reviews the background to the project and then details the ongoing research. In conclusion, we discuss the contributions, limitations, and future plans of such a grid-based approach.

  16. The Earth System Grid Federation: An Open Infrastructure for Access to Distributed Geospatial Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ananthakrishnan, Rachana; Bell, Gavin; Cinquini, Luca

    2013-01-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).

  17. The Earth System Grid Federation: An Open Infrastructure for Access to Distributed Geo-Spatial Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinquini, Luca; Crichton, Daniel; Miller, Neill

    2012-01-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).

  18. The Earth System Grid Federation : an Open Infrastructure for Access to Distributed Geospatial Data

    NASA Technical Reports Server (NTRS)

    Cinquini, Luca; Crichton, Daniel; Mattmann, Chris; Harney, John; Shipman, Galen; Wang, Feiyi; Ananthakrishnan, Rachana; Miller, Neill; Denvil, Sebastian; Morgan, Mark

    2012-01-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).
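A practical consequence of the peer-node architecture described in records 16-18 is that a client must merge and de-duplicate hits returned by several independently administered nodes. The Python sketch below illustrates that merging step with hard-coded stand-ins for the distributed search responses; the field names are assumptions, not the ESGF search API's schema:

```python
# Toy federated-search merge: each node answers the same query; the client
# de-duplicates by dataset id, keeping the newest version it has seen.
def merge_results(*node_responses):
    """Merge per-node hit lists into one de-duplicated, ordered result set."""
    merged = {}
    for hits in node_responses:
        for hit in hits:
            cur = merged.get(hit["id"])
            if cur is None or hit["version"] > cur["version"]:
                merged[hit["id"]] = hit
    return sorted(merged.values(), key=lambda h: h["id"])

# Hard-coded stand-ins for two peer nodes' responses to one query.
node_a = [{"id": "cmip5.tas", "version": 1}, {"id": "obs4mips.pr", "version": 2}]
node_b = [{"id": "cmip5.tas", "version": 3}]
results = merge_results(node_a, node_b)
```

Keeping the highest version mirrors the practical need, in a federation of replicated archives, to prefer the latest published revision of a dataset over stale replicas.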

  19. A Smart Power Electronic Multiconverter for the Residential Sector.

    PubMed

    Guerrero-Martinez, Miguel Angel; Milanes-Montero, Maria Isabel; Barrero-Gonzalez, Fermin; Miñambres-Marcos, Victor Manuel; Romero-Cadaval, Enrique; Gonzalez-Romera, Eva

    2017-05-26

    The future of the grid includes distributed generation and smart grid technologies. Demand Side Management (DSM) systems will also be essential to achieve a high level of reliability and robustness in power systems. To that end, expanding the Advanced Metering Infrastructure (AMI) and Energy Management Systems (EMS) is necessary. The trend is towards the creation of energy resource hubs, such as the smart community concept. This paper presents a smart multiconverter system for the residential/housing sector with a Hybrid Energy Storage System (HESS) consisting of a supercapacitor and a battery, and with local photovoltaic (PV) energy source integration. The device works as a distributed energy unit located in each house of the community, receiving active power set-points provided by a smart community EMS. This central EMS is responsible for managing the active energy flows between the electricity grid, renewable energy sources, storage equipment, and loads existing in the community. The proposed multiconverter is responsible for complying with the reference active power set-points with proper power quality; guaranteeing that the local PV modules operate with a Maximum Power Point Tracking (MPPT) algorithm; and extending the lifetime of the battery thanks to the cooperative operation of the HESS. A simulation model has been developed in order to show the detailed operation of the system. Finally, a prototype of the multiconverter platform has been implemented and some experimental tests have been carried out to validate it.

  20. A Smart Power Electronic Multiconverter for the Residential Sector

    PubMed Central

    Guerrero-Martinez, Miguel Angel; Milanes-Montero, Maria Isabel; Barrero-Gonzalez, Fermin; Miñambres-Marcos, Victor Manuel; Romero-Cadaval, Enrique; Gonzalez-Romera, Eva

    2017-01-01

    The future of the grid includes distributed generation and smart grid technologies. Demand Side Management (DSM) systems will also be essential to achieve a high level of reliability and robustness in power systems. To that end, expanding the Advanced Metering Infrastructure (AMI) and Energy Management Systems (EMS) is necessary. The trend is towards the creation of energy resource hubs, such as the smart community concept. This paper presents a smart multiconverter system for the residential/housing sector with a Hybrid Energy Storage System (HESS) consisting of a supercapacitor and a battery, and with local photovoltaic (PV) energy source integration. The device works as a distributed energy unit located in each house of the community, receiving active power set-points provided by a smart community EMS. This central EMS is responsible for managing the active energy flows between the electricity grid, renewable energy sources, storage equipment, and loads existing in the community. The proposed multiconverter is responsible for complying with the reference active power set-points with proper power quality; guaranteeing that the local PV modules operate with a Maximum Power Point Tracking (MPPT) algorithm; and extending the lifetime of the battery thanks to the cooperative operation of the HESS. A simulation model has been developed in order to show the detailed operation of the system. Finally, a prototype of the multiconverter platform has been implemented and some experimental tests have been carried out to validate it. PMID:28587131
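The central EMS role described in records 19 and 20, handing each house-level multiconverter an active power set-point, can be illustrated with a toy proportional dispatch. The allocation rule and all numbers are invented for illustration, not taken from the paper:

```python
# Toy community EMS dispatch: split a community-level active-power
# set-point among houses in proportion to each house's available headroom
# (e.g. local PV plus storage capacity). Rule and values are invented.
def dispatch(total_setpoint_kw, headroom_kw):
    """Proportionally allocate the community set-point among houses."""
    total_headroom = sum(headroom_kw.values())
    return {
        house: round(total_setpoint_kw * h / total_headroom, 3)
        for house, h in headroom_kw.items()
    }

setpoints = dispatch(10.0, {"house1": 4.0, "house2": 2.0, "house3": 2.0})
```

Each house's multiconverter would then track its set-point locally while running MPPT on the PV modules and splitting fast and slow power components between the supercapacitor and the battery.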

  1. mGrid: A load-balanced distributed computing environment for the remote execution of the user-defined Matlab code

    PubMed Central

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-01-01

    Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code, and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. 
Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet. PMID:16539707

  2. mGrid: a load-balanced distributed computing environment for the remote execution of the user-defined Matlab code.

    PubMed

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-03-15

    Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code, and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. 
Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet.
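The distinguishing mGrid feature described above is that user-defined code travels with its run-time variables. The Python sketch below simulates that pack-ship-execute cycle in-process; mGrid itself is built from Matlab, PHP scripts, and the Apache web server, so this is an illustration of the idea, not its implementation:

```python
# Simulated code-plus-variables distribution: the caller's function and
# arguments are serialized together, "shipped", and executed by a worker.
import pickle

def remote_worker(payload_bytes):
    """Stand-in for the remote end: unpack and run the shipped work unit."""
    func, args = pickle.loads(payload_bytes)
    return func(*args)

def submit(func, *args):
    """Client side: pack user code and run-time variables together."""
    payload = pickle.dumps((func, args))
    return remote_worker(payload)  # a real system would send this over HTTP

def column_sums(matrix):
    """Example of 'user-defined code' that the remote side has never seen."""
    return [sum(col) for col in zip(*matrix)]

result = submit(column_sums, [[1, 2], [3, 4]])  # -> [4, 6]
```

The point the abstract makes is precisely this: because the code ships with the call, no one has to pre-install user libraries on the participating machines.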

  3. mantisGRID: a grid platform for DICOM medical images management in Colombia and Latin America.

    PubMed

    Garcia Ruiz, Manuel; Garcia Chaves, Alvin; Ruiz Ibañez, Carlos; Gutierrez Mazo, Jorge Mario; Ramirez Giraldo, Juan Carlos; Pelaez Echavarria, Alejandro; Valencia Diaz, Edison; Pelaez Restrepo, Gustavo; Montoya Munera, Edwin Nelson; Garcia Loaiza, Bernardo; Gomez Gonzalez, Sebastian

    2011-04-01

    This paper presents the mantisGRID project, an interinstitutional initiative from Colombian medical and academic centers aiming to provide medical grid services for Colombia and Latin America. mantisGRID is a grid platform, based on open-source grid infrastructure, that provides the necessary services to access and exchange medical images and associated information following the digital imaging and communications in medicine (DICOM) and health level 7 standards. The paper focuses first on the data abstraction architecture, which is achieved via Open Grid Services Architecture Data Access and Integration (OGSA-DAI) services and supported by the Globus Toolkit. The grid currently uses a 30-Mb bandwidth of the Colombian High Technology Academic Network, RENATA, connected to Internet 2. The paper also includes a discussion of the relational database created to handle the DICOM objects, which were represented using Extensible Markup Language Schema documents, as well as other features implemented, such as data security, user authentication, and patient confidentiality. Grid performance was tested using the three currently operational nodes, and the results demonstrated comparable query times between the mantisGRID (OGSA-DAI) and distributed MySQL databases, especially for a large number of records.

  4. Battery Electric Vehicle Driving and Charging Behavior Observed Early in The EV Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John Smart; Stephen Schey

    2012-04-01

    As concern about society's dependence on petroleum-based transportation fuels increases, many see plug-in electric vehicles (PEV) as enablers to diversifying transportation energy sources. These vehicles, which include plug-in hybrid electric vehicles (PHEV), range-extended electric vehicles (EREV), and battery electric vehicles (BEV), draw some or all of their power from electricity stored in batteries, which are charged by the electric grid. In order for PEVs to be accepted by the mass market, electric charging infrastructure must also be deployed. Charging infrastructure must be safe, convenient, and financially sustainable. Additionally, electric utilities must be able to manage PEV charging demand on the electric grid. In the fall of 2009, a large-scale PEV infrastructure demonstration was launched to deploy an unprecedented number of PEVs and charging infrastructure. This demonstration, called The EV Project, is led by Electric Transportation Engineering Corporation (eTec) and funded by the U.S. Department of Energy. eTec is partnering with Nissan North America to deploy up to 4,700 Nissan Leaf BEVs and 11,210 charging units in five market areas in Arizona, California, Oregon, Tennessee, and Washington. With the assistance of the Idaho National Laboratory, eTec will collect and analyze data to characterize vehicle consumer driving and charging behavior, evaluate the effectiveness of charging infrastructure, and understand the impact of PEV charging on the electric grid. Trials of various revenue systems for commercial and public charging infrastructure will also be conducted. The ultimate goal of The EV Project is to capture lessons learned to enable the mass deployment of PEVs. This paper is the first in a series of papers documenting the progress and findings of The EV Project.
This paper describes key research objectives of The EV Project and establishes the project background, including lessons learned from previous infrastructure deployment and PEV demonstrations. One such previous study was a PHEV demonstration conducted by the U.S. Department of Energy's Advanced Vehicle Testing Activity (AVTA), led by the Idaho National Laboratory (INL). AVTA's PHEV demonstration involved over 250 vehicles in the United States, Canada, and Finland. This paper summarizes driving and charging behavior observed in that demonstration, including the distribution of distance driven between charging events, charging frequency, and the resulting proportion of operation in charge-depleting mode. Charging demand relative to time of day and day of the week will also be shown. Conclusions from the PHEV demonstration will be given that highlight the need for expanded analysis in The EV Project. For example, the AVTA PHEV demonstration showed that in the absence of controlled charging by the vehicle owner or electric utility, the majority of vehicles were charged in the evening hours, coincident with typical utility peak demand. Given this baseline, The EV Project will demonstrate the effects of consumer charge control and grid-side charge management on electricity demand. This paper will outline further analyses which will be performed by eTec and INL to document driving and charging behavior of vehicles operated in an infrastructure-rich environment.

  5. Applying a Space-Based Security Recovery Scheme for Critical Homeland Security Cyberinfrastructure Utilizing the NASA Tracking and Data Relay (TDRS) Based Space Network

    NASA Technical Reports Server (NTRS)

    Shaw, Harry C.; McLaughlin, Brian; Stocklin, Frank; Fortin, Andre; Israel, David; Dissanayake, Asoka; Gilliand, Denise; LaFontaine, Richard; Broomandan, Richard; Hyunh, Nancy

    2015-01-01

    Protection of the national infrastructure is a high priority for cybersecurity of the homeland. Critical infrastructure such as the national power grid, commercial financial networks, and communications networks have been successfully invaded and re-invaded by foreign and domestic attackers. The ability to re-establish authentication and confidentiality of the network participants via secure channels that have not been compromised would be an important countermeasure to compromise of our critical network infrastructure. This paper describes a concept of operations by which the NASA Tracking and Data Relay (TDRS) constellation of spacecraft in conjunction with the White Sands Complex (WSC) Ground Station host a security recovery system for re-establishing secure network communications in the event of a national or regional cyberattack. Users would perform security and network restoral functions via a Broadcast Satellite Service (BSS) from the TDRS constellation. The BSS enrollment only requires that each network location have a receive antenna and satellite receiver. This would be no more complex than setting up a DIRECTV-like receiver at each network location with separate network connectivity. A GEO BSS would allow a mass re-enrollment of network nodes (up to nationwide) simultaneously depending upon downlink characteristics. This paper details the spectrum requirements, link budget, notional assets and communications requirements for the scheme. It describes the architecture of such a system and the manner in which it leverages the existing secure infrastructure which is already in place and managed by the NASA GSFC Space Network Project.

  6. Advances in Grid Computing for the FabrIc for Frontier Experiments Project at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herner, K.; Alba Hernandez, A. F.; Bhat, S.

    The FabrIc for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back-end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana to help experiments manage their large-scale production workflows. This group in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and support troubleshooting and triage in case of problems.
Recently a new certificate management infrastructure called Distributed Computing Access with Federated Identities (DCAFI) has been put in place that has eliminated our dependence on a Fermilab-specific third-party Certificate Authority service and better accommodates FIFE collaborators without a Fermilab Kerberos account. DCAFI integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and a MyProxy service using a new general purpose open source tool. We will discuss the general FIFE onboarding strategy, progress in expanding FIFE experiments' presence on the Open Science Grid, new tools for job monitoring, the POMS service, and the DCAFI project.

  7. Advances in Grid Computing for the Fabric for Frontier Experiments Project at Fermilab

    NASA Astrophysics Data System (ADS)

    Herner, K.; Alba Hernandez, A. F.; Bhat, S.; Box, D.; Boyd, J.; Di Benedetto, V.; Ding, P.; Dykstra, D.; Fattoruso, M.; Garzoglio, G.; Kirby, M.; Kreymer, A.; Levshina, T.; Mazzacane, A.; Mengel, M.; Mhashilkar, P.; Podstavkov, V.; Retzke, K.; Sharma, N.; Teheran, J.

    2017-10-01

    The Fabric for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back-end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana to help experiments manage their large-scale production workflows. This group in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and support troubleshooting and triage in case of problems.
Recently a new certificate management infrastructure called Distributed Computing Access with Federated Identities (DCAFI) has been put in place that has eliminated our dependence on a Fermilab-specific third-party Certificate Authority service and better accommodates FIFE collaborators without a Fermilab Kerberos account. DCAFI integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and a MyProxy service using a new general purpose open source tool. We will discuss the general FIFE onboarding strategy, progress in expanding FIFE experiments' presence on the Open Science Grid, new tools for job monitoring, the POMS service, and the DCAFI project.

  8. Plug-in hybrid electric vehicles: battery degradation, grid support, emissions, and battery size tradeoffs

    NASA Astrophysics Data System (ADS)

    Peterson, Scott B.

    Plug-in hybrid electric vehicles (PHEVs) may become a substantial part of the transportation fleet in a decade or two. This dissertation investigates battery degradation, and how introducing PHEVs may influence the electricity grid, emissions, and petroleum use in the US. It examines the effects of combined driving and vehicle-to-grid (V2G) usage on lifetime performance of commercial Li-ion cells. The testing shows promising capacity fade performance: more than 95% of the original cell capacity remains after thousands of driving days. Statistical analyses indicate that rapid vehicle motive cycling degraded the cells more than slower, V2G galvanostatic cycling. These data are used to examine the potential economic implications of using vehicle batteries to store grid electricity generated at off-peak hours for off-vehicle use during peak hours. The maximum annual profit with perfect market information and no battery degradation cost ranged from ~US$140 to $250 in the three cities. If measured battery degradation is applied, the maximum annual profit decreases to ~$10-$120. The dissertation predicts the increase in electricity load and emissions due to vehicle battery charging in PJM and NYISO with the current generators, with a $50/tonne CO2 price, and with existing coal generators retrofitted with 80% CO2 capture. It also models emissions using natural gas or wind+gas. We examined PHEV fleet percentages between 0.4 and 50%. Compared to 2020 CAFE standards, net CO2 emissions in New York are reduced by switching from gasoline to electricity; coal-heavy PJM shows smaller benefits unless coal units are fitted with CCS or replaced with lower-CO2 generation. NOx is reduced in both RTOs, but there is upward pressure on SO2 emissions or allowance prices under a cap. Finally the dissertation compares increasing the all-electric range (AER) of PHEVs to installing charging infrastructure.
Fuel use was modeled with the National Household Travel Survey and the Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET) model. It was found that increasing the AER of plug-in hybrids was a more cost-effective solution to reducing gasoline consumption than installing charging infrastructure. Comparison of results to the current subsidy structure shows various options to improve future PHEV or other vehicle subsidy programs.
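    The arbitrage logic behind the profit estimates above (buy grid electricity off-peak, resell the stored energy at peak, optionally pricing in battery wear per kWh of throughput) can be reproduced with a back-of-envelope calculation. All numbers in this Python sketch are assumptions chosen for illustration, not values from the dissertation; they merely land in the same ballpark as the reported ranges.

```python
def annual_arbitrage_profit(usable_kwh, p_offpeak, p_peak,
                            efficiency, degradation_cost_per_kwh, days=250):
    """Profit from buying off-peak energy and selling it back at peak prices.

    degradation_cost_per_kwh prices battery wear per kWh of throughput;
    setting it to zero gives the 'no degradation cost' upper bound.
    Prices are in $/kWh; all parameter values here are assumed.
    """
    bought = usable_kwh / efficiency              # kWh drawn from the grid
    revenue_per_day = usable_kwh * p_peak         # kWh sold at the peak price
    cost_per_day = bought * p_offpeak + usable_kwh * degradation_cost_per_kwh
    return days * (revenue_per_day - cost_per_day)

# Upper bound with no degradation cost...
print(round(annual_arbitrage_profit(10, 0.05, 0.15, 0.85, 0.0)))    # 228
# ...and the same trade once battery wear is priced in.
print(round(annual_arbitrage_profit(10, 0.05, 0.15, 0.85, 0.05)))   # 103
```

    The point the dissertation makes falls out directly: a plausible per-kWh wear cost can consume most of the apparent arbitrage margin.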

  9. Trial Implementation of a Multihazard Risk Assessment Framework for High-Impact Low-Frequency Power Grid Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veeramany, Arun; Coles, Garill A.; Unwin, Stephen D.

    The Pacific Northwest National Laboratory developed a risk framework for modeling high-impact, low-frequency power grid events to support risk-informed decisions. In this paper, we briefly recap the framework and demonstrate its implementation for seismic and geomagnetic hazards using a benchmark reliability test system. We describe integration of a collection of models implemented to perform hazard analysis, fragility evaluation, consequence estimation, and postevent restoration. We demonstrate the value of the framework as a multihazard power grid risk assessment and management tool. As a result, the research will benefit transmission planners and emergency planners by improving their ability to maintain a resilient grid infrastructure against impacts from major events.

  10. Grids: The Top Ten Questions

    DOE PAGES

    Schopf, Jennifer M.; Nitzberg, Bill

    2002-01-01

    The design and implementation of a national computing system and data grid has become a reachable goal from both the computer science and computational science point of view. A distributed infrastructure capable of sophisticated computational functions can bring many benefits to scientific work, but poses many challenges, both technical and socio-political. Technical challenges include having basic software tools, higher-level services, functioning and pervasive security, and standards, while socio-political issues include building a user community, adding incentives for sites to be part of a user-centric environment, and educating funding sources about the needs of this community. This paper details the areas relating to Grid research that we feel still need to be addressed to fully leverage the advantages of the Grid.

  11. Trial Implementation of a Multihazard Risk Assessment Framework for High-Impact Low-Frequency Power Grid Events

    DOE PAGES

    Veeramany, Arun; Coles, Garill A.; Unwin, Stephen D.; ...

    2017-08-25

    The Pacific Northwest National Laboratory developed a risk framework for modeling high-impact, low-frequency power grid events to support risk-informed decisions. In this paper, we briefly recap the framework and demonstrate its implementation for seismic and geomagnetic hazards using a benchmark reliability test system. We describe integration of a collection of models implemented to perform hazard analysis, fragility evaluation, consequence estimation, and postevent restoration. We demonstrate the value of the framework as a multihazard power grid risk assessment and management tool. As a result, the research will benefit transmission planners and emergency planners by improving their ability to maintain a resilient grid infrastructure against impacts from major events.

  12. Experimental demonstration of an OpenFlow based software-defined optical network employing packet, fixed and flexible DWDM grid technologies on an international multi-domain testbed.

    PubMed

    Channegowda, M; Nejabati, R; Rashidi Fard, M; Peng, S; Amaya, N; Zervas, G; Simeonidou, D; Vilalta, R; Casellas, R; Martínez, R; Muñoz, R; Liu, L; Tsuritani, T; Morita, I; Autenrieth, A; Elbers, J P; Kostecki, P; Kaczmarek, P

    2013-03-11

    Software defined networking (SDN) and flexible grid optical transport technology are two key technologies that allow network operators to customize their infrastructure based on application requirements, thereby minimizing the extra capital and operational costs required for hosting new applications. In this paper, for the first time, we report on the design, implementation and demonstration of a novel OpenFlow based SDN unified control plane allowing seamless operation across heterogeneous state-of-the-art optical and packet transport domains. We verify and experimentally evaluate OpenFlow protocol extensions for flexible DWDM grid transport technology along with its integration with fixed DWDM grid and layer-2 packet switching.

  13. A Petri Net model for distributed energy system

    NASA Astrophysics Data System (ADS)

    Konopko, Joanna

    2015-12-01

    Electrical networks need to evolve to become more intelligent, more flexible and less costly. The smart grid is the next-generation power grid: it uses two-way flows of electricity and information to create a distributed, automated energy delivery network. Building a comprehensive smart grid is a challenge for system protection, optimization and energy efficiency. Proper modeling and analysis is needed to build an extensive distributed energy system and intelligent electricity infrastructure. In this paper, a complete model of the smart grid is proposed using Generalized Stochastic Petri Nets (GSPN). Simulation of the created model is also explored, allowing analysis of how closely the behavior of the model matches the usage of the real smart grid.
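    As a flavor of the approach, a stochastic Petri net with exponentially timed transitions can be simulated using race semantics in a few lines. The toy net below, a single grid component cycling between "up" and "down" places, is a hypothetical Python illustration and not the paper's GSPN model; the function and place names are invented.

```python
import random

def simulate(places, transitions, t_end, seed=1):
    """Simulate a stochastic Petri net with exponential firing delays.

    places: dict mapping place name -> token count.
    transitions: list of (input_places, output_places, rate).
    Returns the fraction of time the 'up' place held a token
    (availability of the toy component).
    """
    rng = random.Random(seed)
    t, up_time = 0.0, 0.0
    while t < t_end:
        enabled = [(i, o, r) for i, o, r in transitions
                   if all(places[p] >= 1 for p in i)]
        if not enabled:
            break
        # Race semantics: each enabled transition samples an Exp(rate)
        # delay; the minimum fires first.
        delays = [(rng.expovariate(r), i, o) for i, o, r in enabled]
        dt, ins, outs = min(delays)
        dt = min(dt, t_end - t)
        if places["up"] >= 1:
            up_time += dt
        t += dt
        if t >= t_end:
            break
        for p in ins:
            places[p] -= 1
        for p in outs:
            places[p] += 1
    return up_time / t_end

net = [(("up",), ("down",), 0.1),    # failure transition (rate 0.1)
       (("down",), ("up",), 1.0)]    # repair transition (rate 1.0)
availability = simulate({"up": 1, "down": 0}, net, t_end=10000.0)
print(round(availability, 2))        # close to 1.0/(1.0+0.1) ≈ 0.91
```

    With the assumed rates, the simulated availability converges to the analytical steady-state value repair/(repair+failure), which is the kind of check such models enable before scaling up to a full grid net.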

  14. International Symposium on Grids and Clouds (ISGC) 2014

    NASA Astrophysics Data System (ADS)

    The International Symposium on Grids and Clouds (ISGC) 2014 will be held at Academia Sinica in Taipei, Taiwan from 23-28 March 2014, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). “Bringing the data scientist to global e-Infrastructures” is the theme of ISGC 2014. The last decade has seen phenomenal growth in the production of data in all forms by all research communities, producing a deluge of data from which information and knowledge need to be extracted. Key to this success will be the data scientist - educated to use advanced algorithms, applications and infrastructures - collaborating internationally to tackle society’s challenges. ISGC 2014 will bring together researchers working in all aspects of data science from different disciplines around the world to collaborate and educate themselves in the latest achievements and techniques being used to tackle the data deluge. In addition to the regular workshops, technical presentations and plenary keynotes, ISGC this year will focus on how to grow the data science community by considering the educational foundation needed for tomorrow’s data scientist. Topics of discussion include Physics (including HEP) and Engineering Applications, Biomedicine & Life Sciences Applications, Earth & Environmental Sciences & Biodiversity Applications, Humanities & Social Sciences Applications, Virtual Research Environment (including Middleware, tools, services, workflow, ... etc.), Data Management, Big Data, Infrastructure & Operations Management, Infrastructure Clouds and Virtualisation, Interoperability, Business Models & Sustainability, Highly Distributed Computing Systems, and High Performance & Technical Computing (HPTC).

  15. Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; He, Fei; Ma, Chris Y. T.

    In several critical infrastructures, the cyber and physical parts are correlated so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch component attacks, and hence must be accounted for to ensure infrastructure resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.
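    The structure of such a game can be illustrated with a toy discrete version. The Python sketch below solves a sequential simplification (the provider commits to a reinforcement level, the attacker best-responds) rather than the paper's simultaneous Nash formulation, and its survival function and cost values are invented placeholders, not the paper's correlation model.

```python
N = 10                       # components in the infrastructure (assumed)
C_DEF, C_ATK = 0.04, 0.05    # per-component reinforcement / attack costs (assumed)

def survival(attacked, reinforced):
    """Toy survival probability: attacks only succeed on unreinforced components."""
    return 1.0 - max(attacked - reinforced, 0) / N

def attacker_best_response(reinforced):
    # Attacker utility: probability of bringing the system down minus attack cost.
    return max(range(N + 1),
               key=lambda a: (1.0 - survival(a, reinforced)) - C_ATK * a)

def provider_choice():
    # Provider utility: survival probability minus reinforcement cost,
    # anticipating the attacker's best response.
    def payoff(r):
        a = attacker_best_response(r)
        return survival(a, r) - C_DEF * r
    return max(range(N + 1), key=payoff)

r = provider_choice()
a = attacker_best_response(r)
print(r, a)   # reinforcing 5 components deters the attack entirely: 5 0
```

    Even this toy version reproduces a qualitative message of such models: with the assumed costs, a moderate level of reinforcement makes attacking unprofitable, so the equilibrium attack size drops to zero.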

  16. Network information attacks on the control systems of power facilities belonging to the critical infrastructure

    NASA Astrophysics Data System (ADS)

    Loginov, E. L.; Raikov, A. N.

    2015-04-01

    The largest-scale accidents that occurred as a consequence of network information attacks on the control systems of power facilities belonging to the United States' critical infrastructure are analyzed in the context of possibilities available in modern decision support systems. Trends in the development of technologies for inflicting damage to smart grids are formulated. A volume matrix of parameters characterizing attacks on facilities is constructed. A model describing the performance of a critical infrastructure's control system after an attack is developed. The recently adopted measures and legislative acts aimed at achieving more efficient protection of critical infrastructure are considered. Approaches to cognitive modeling and networked expertise of intricate situations for supporting the decision-making process, and to setting up a system of indicators for anticipatory monitoring of critical infrastructure, are proposed.

  17. Energy Management and Optimization Methods for Grid Energy Storage Systems

    DOE PAGES

    Byrne, Raymond H.; Nguyen, Tu A.; Copp, David A.; ...

    2017-08-24

    Today, the stability of the electric power grid is maintained through real-time balancing of generation and demand. Grid-scale energy storage systems are increasingly being deployed to provide grid operators the flexibility needed to maintain this balance. Energy storage also imparts resiliency and robustness to the grid infrastructure. Over the last few years, there has been a significant increase in the deployment of large-scale energy storage systems. This growth has been driven by improvements in the cost and performance of energy storage technologies and the need to accommodate distributed generation, as well as incentives and government mandates. Energy management systems (EMSs) and optimization methods are required to effectively and safely utilize energy storage as a flexible grid asset that can provide multiple grid services. The EMS needs to be able to accommodate a variety of use cases and regulatory environments. In this paper, we provide a brief history of grid-scale energy storage, an overview of EMS architectures, and a summary of the leading applications for storage. These serve as a foundation for a discussion of EMS optimization methods and design.
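    One of the leading storage applications such an EMS optimizes is energy arbitrage. The Python sketch below greedily pairs the cheapest charging hours with the most expensive discharging hours against an assumed day-ahead price curve; real EMSs co-optimize multiple grid services under far richer network and safety constraints, so this is only a minimal illustration with invented numbers.

```python
def arbitrage_schedule(prices, capacity_kwh, power_kw, efficiency=0.9):
    """Return hourly setpoints in kW (+ = discharge, - = charge).

    Greedy heuristic: charge during the cheapest hours and discharge
    during the priciest ones, sized so one full cycle fits the capacity.
    """
    hours = sorted(range(len(prices)), key=lambda h: prices[h])
    n = int(capacity_kwh / power_kw)       # hours needed for a full cycle
    charge_hours = set(hours[:n])          # cheapest hours: charge
    discharge_hours = set(hours[-n:])      # priciest hours: discharge
    plan = []
    for h in range(len(prices)):
        if h in charge_hours:
            plan.append(-power_kw)
        elif h in discharge_hours:
            plan.append(power_kw * efficiency)   # round-trip losses on discharge
        else:
            plan.append(0.0)
    return plan

# Assumed 24-hour day-ahead prices in $/MWh (illustrative only).
prices = [30, 28, 25, 24, 26, 35, 60, 80, 75, 55, 40, 38,
          36, 34, 33, 37, 50, 85, 90, 70, 45, 38, 33, 31]
plan = arbitrage_schedule(prices, capacity_kwh=200, power_kw=50)
# Each hourly setpoint in kW over one hour is the energy in kWh.
revenue = sum(p * q / 1000 for p, q in zip(prices, plan))
print(round(revenue, 1))
```

    A production EMS would replace this greedy pairing with an optimization (typically a linear or mixed-integer program) that also tracks the state of charge hour by hour, but the price-spread intuition is the same.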

  18. Energy Management and Optimization Methods for Grid Energy Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrne, Raymond H.; Nguyen, Tu A.; Copp, David A.

    Today, the stability of the electric power grid is maintained through real-time balancing of generation and demand. Grid-scale energy storage systems are increasingly being deployed to provide grid operators the flexibility needed to maintain this balance. Energy storage also imparts resiliency and robustness to the grid infrastructure. Over the last few years, there has been a significant increase in the deployment of large-scale energy storage systems. This growth has been driven by improvements in the cost and performance of energy storage technologies and the need to accommodate distributed generation, as well as incentives and government mandates. Energy management systems (EMSs) and optimization methods are required to effectively and safely utilize energy storage as a flexible grid asset that can provide multiple grid services. The EMS needs to be able to accommodate a variety of use cases and regulatory environments. In this paper, we provide a brief history of grid-scale energy storage, an overview of EMS architectures, and a summary of the leading applications for storage. These serve as a foundation for a discussion of EMS optimization methods and design.

  19. DEM Based Modeling: Grid or TIN? The Answer Depends

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Moreno, H. A.

    2015-12-01

    The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course, with increasing watershed scale come corresponding increases in watershed complexity, including wide-ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grids, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase the required effort in model setup, parameter estimation, and coupling with forcing data, which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high-performance computing watershed simulator.

  20. The GENIUS grid portal and the robot certificates to perform phylogenetic analysis on large scale: a success story from the International LIBI project

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; Rocca, Giuseppe La; Maggi, Giorgio Pietro; Milanesi, Luciano; Vicario, Saverio

    This paper depicts the solution proposed by INFN to allow users who do not own a personal digital certificate, and therefore do not belong to any specific Virtual Organization (VO), to access Grid infrastructures via the GENIUS Grid portal enabled with robot certificates. Robot certificates, also known as portal certificates, are associated with a specific application that the user wants to share with the whole Grid community and have recently been introduced by the EUGridPMA (European Policy Management Authority for Grid Authentication) to perform automated tasks on Grids on behalf of users. They have proven extremely useful for automating grid service monitoring, data processing production, distributed data collection systems, etc. In this paper, robot certificates have been used to allow bioinformaticians involved in the Italian LIBI project to perform large-scale phylogenetic analyses. The distributed environment set up in this work strongly simplifies grid access for occasional users and represents a valuable step forward in widening the community of users.

  1. Modern Grid Initiative Distribution Taxonomy Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Kevin P.; Chen, Yousu; Chassin, David P.

    2008-11-01

    This is the final report for the development of a taxonomy of prototypical electrical distribution feeders. Two of the primary goals of the Department of Energy's (DOE) Modern Grid Initiative (MGI) are 'to accelerate the modernization of our nation's electricity grid' and to 'support demonstrations of systems of key technologies that can serve as the foundation for an integrated, modern power grid'. A key component to the realization of these goals is the effective implementation of new, as well as existing, 'smart grid technologies'. Possibly the largest barrier that has been identified in the deployment of smart grid technologies is the inability to evaluate how their deployment will affect the electricity infrastructure, both locally and on a regional scale. The inability to evaluate the impacts of these technologies is primarily due to the lack of detailed electrical distribution feeder information. While detailed distribution feeder information does reside with the various distribution utilities, there is no central repository of information that can be openly accessed. The role of Pacific Northwest National Laboratory (PNNL) in the MGI for FY08 was to collect distribution feeder models, in the SynerGEE® format, from electric utilities around the nation so that they could be analyzed to identify regional differences in feeder design and operation. Based on this analysis PNNL developed a taxonomy of 24 prototypical feeder models in the GridLAB-D simulation environment that contain the fundamental characteristics of non-urban core, radial distribution feeders from the various regions of the U.S. Weighting factors for these feeders are also presented so that they can be used to generate a representative sample for various regions within the United States.
The final product presented in this report is a toolset that enables the evaluation of new smart grid technologies, with the ability to aggregate their effects to regional and national levels. The distribution feeder models presented in this report are based on actual utility models but do not contain any proprietary or system-specific information. As a result, the models discussed in this report can be openly distributed to industry, academia, or any interested entity, in order to facilitate the ability to evaluate smart grid technologies.

  2. VOSpace: a Prototype for Grid 2.0

    NASA Astrophysics Data System (ADS)

    Graham, M. J.; Morris, D.; Rixon, G.

    2007-10-01

    As Grid 1.0 was characterized by distributed computation, so Grid 2.0 will be characterized by distributed data and the infrastructure needed to support and exploit it: the emerging success of Amazon S3 is already testimony to this. VOSpace is the IVOA interface standard for accessing distributed data. Although the base definition (VOSpace 1.0) only relates to flat, unconnected data stores, subsequent versions will add additional layers of functionality. In this paper, we consider how incorporating popular web concepts such as folksonomies (tagging), social networking, and data-spaces could lead to a much richer data environment than provided by a traditional collection of networked data stores.

  3. How to keep the Grid full and working with ATLAS production and physics jobs

    NASA Astrophysics Data System (ADS)

    Pacheco Pagés, A.; Barreiro Megino, F. H.; Cameron, D.; Fassi, F.; Filipcic, A.; Di Girolamo, A.; González de la Hoz, S.; Glushkov, I.; Maeno, T.; Walker, R.; Yang, W.; ATLAS Collaboration

    2017-10-01

    The ATLAS production system provides the infrastructure to process the millions of events collected during LHC Run 1 and the first two years of Run 2 using grids, clouds and high-performance computing. We address in this contribution the strategies and improvements that have been implemented in the production system for optimal performance and to achieve the highest efficiency of available resources from an operational perspective. We focus on the recent developments.

  4. Magnetic storms and induction hazards

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Balch, Christopher

    2014-01-01

    Magnetic storms are potentially hazardous to the activities and technological infrastructure of modern civilization. This reality was dramatically demonstrated during the great magnetic storm of March 1989, when surface geoelectric fields, produced by the interaction of the time-varying geomagnetic field with the Earth's electrically conducting interior, coupled onto the overlying Hydro-Québec electric power grid in Canada. Protective relays were tripped, the grid collapsed, and about 9 million people were temporarily left without electricity [Bolduc, 2002].

  5. Smart Grid Risk Management

    NASA Astrophysics Data System (ADS)

    Abad Lopez, Carlos Adrian

    Current electricity infrastructure is being stressed from several directions -- high demand, unreliable supply, extreme weather conditions, accidents, among others. Infrastructure planners have, traditionally, focused on only the cost of the system; today, resilience and sustainability are increasingly becoming more important. In this dissertation, we develop computational tools for efficiently managing electricity resources to help create a more reliable and sustainable electrical grid. The tools we present in this work will help electric utilities coordinate demand to allow the smooth and large scale integration of renewable sources of energy into traditional grids, as well as provide infrastructure planners and operators in developing countries a framework for making informed planning and control decisions in the presence of uncertainty. Demand-side management is considered as the most viable solution for maintaining grid stability as generation from intermittent renewable sources increases. Demand-side management, particularly demand response (DR) programs that attempt to alter the energy consumption of customers either by using price-based incentives or up-front power interruption contracts, is more cost-effective and sustainable in addressing short-term supply-demand imbalances when compared with the alternative that involves increasing fossil fuel-based fast spinning reserves. An essential step in compensating participating customers and benchmarking the effectiveness of DR programs is to be able to independently detect the load reduction from observed meter data. Electric utilities implementing automated DR programs through direct load control switches are also interested in detecting the reduction in demand to efficiently pinpoint non-functioning devices to reduce maintenance costs. 
We develop sparse optimization methods for detecting a small change in the demand for electricity of a customer in response to a price change or signal from the utility, dynamic learning methods for scheduling the maintenance of direct load control switches whose operating state is not directly observable and can only be inferred from the metered electricity consumption, and machine learning methods for accurately forecasting the load of hundreds of thousands of residential, commercial and industrial customers. These algorithms have been implemented in the software system provided by AutoGrid, Inc., and this system has helped several utilities in the Pacific Northwest, Oklahoma, California and Texas provide more reliable power to their customers at significantly reduced prices. Providing power to widely spread out communities in developing countries using the conventional power grid is not economically feasible. The most attractive alternative source of affordable energy for these communities is solar micro-grids. We discuss risk-aware robust methods to optimally size and operate solar micro-grids in the presence of uncertain demand and uncertain renewable generation. These algorithms help system operators to increase their revenue while making their systems more resilient to inclement weather conditions.
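One way to read "sparse optimization for detecting a small change in demand" is soft-thresholding of the residual between observed and baseline load, which is the closed-form solution of a lasso-style penalty. This is a minimal sketch of that idea, not AutoGrid's actual algorithm; the threshold `lam` is an assumed tuning parameter:

```python
def soft_threshold(x, lam):
    """Closed-form minimizer of 0.5*(x - z)**2 + lam*|z| over z."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def detect_reduction(observed, baseline, lam=0.5):
    """Sparse per-interval estimate of the load change: residuals with
    magnitude below lam are treated as noise and zeroed out."""
    return [soft_threshold(o - b, lam) for o, b in zip(observed, baseline)]
```

Only intervals whose deviation from the baseline forecast exceeds the threshold survive as detected reductions, so a genuine demand-response event shows up as a few nonzero entries against a mostly zero background.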

  6. Electric Power Infrastructure Reliability and Security (EPIRS) Research and Development Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rick Meeker; L. Baldwin; Steinar Dale

    2010-03-31

    Power systems have become increasingly complex and face unprecedented challenges posed by population growth, climate change, national security issues, foreign energy dependence and an aging power infrastructure. Increased demand combined with increased economic and environmental constraints is forcing state, regional and national power grids to expand supply without the large safety and stability margins in generation and transmission capacity that have been the rule in the past. Deregulation, distributed generation, natural and man-made catastrophes and other causes serve to further challenge and complicate management of the electric power grid. To meet the challenges of the 21st century while also maintaining system reliability, the electric power grid must effectively integrate new and advanced technologies both in the actual equipment for energy conversion, transfer and use, and in the command, control, and communication systems by which effective and efficient operation of the system is orchestrated - in essence, the 'smart grid'. This evolution calls for advances in development, integration, analysis, and deployment approaches that ultimately seek to take into account, every step of the way, the dynamic behavior of the system, capturing critical effects due to interdependencies and interaction. This approach is necessary to better mitigate the risk of blackouts and other disruptions and to improve the flexibility and capacity of the grid.
Building on prior Navy and Department of Energy investments in infrastructure and resources for electric power systems research, testing, modeling, and simulation at the Florida State University (FSU) Center for Advanced Power Systems (CAPS), this project has continued an initiative aimed at assuring reliable and secure grid operation through a more complete understanding and characterization of some of the key technologies that will be important in a modern electric system, while also fulfilling an education and outreach mission to provide future energy workforce talent and support the electric system stakeholder community. Building upon and extending portions of that research effort, this project has been focused in the following areas: (1) Building high-fidelity integrated power and controls hardware-in-the-loop research and development testbed capabilities (Figure 1). (2) Distributed Energy Resources Integration - (a) Testing Requirements and Methods for Fault Current Limiters, (b) Contributions to the Development of IEEE 1547.7, (c) Analysis of a STATCOM Application for Wind Resource Integration, (d) Development of a Grid-Interactive Inverter with Energy Storage Elements, (e) Simulation-Assisted Advancement of Microgrid Understanding and Applications; (3) Availability of High-Fidelity Dynamic Simulation Tools for Grid Disturbance Investigations; (4) HTS Material Characterization - (a) AC Loss Studies on High Temperature Superconductors, (b) Local Identification of Current-Limiting Mechanisms in Coated Conductors; (5) Cryogenic Dielectric Research; and (6) Workshops, education, and outreach.

  7. Utilities Power Change: Engaging Commercial Customers in Workplace Charging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lommele, Stephen; Dafoe, Wendy

    As stewards of an electric grid that is available almost anywhere people park, utilities that support workplace charging are uniquely positioned to help their commercial customers be a part of the rapidly expanding network of charging infrastructure. Utilities understand the distinctive challenges of their customers, have access to technical information about electrical infrastructure, and have deep experience modeling and managing demand for electricity. This case study highlights the experiences of two utilities with workplace charging programs.

  8. Compounded effects of heat waves and droughts over the Western Electricity Grid: spatio-temporal scales of impacts and predictability toward mitigation and adaptation.

    NASA Astrophysics Data System (ADS)

    Voisin, N.; Kintner-Meyer, M.; Skaggs, R.; Xie, Y.; Wu, D.; Nguyen, T. B.; Fu, T.; Zhou, T.

    2016-12-01

    Heat waves and droughts are projected to become more frequent and intense. We have seen in the past the effects of each of these extreme climate events on electricity demand and constrained electricity generation, challenging power system operations. Our aim here is to understand the compounding effects under historical conditions. We present a benchmark of Western US grid performance under 55 years of historical climate, including droughts, using 2010 levels of water demand and water management infrastructure, and 2010-level electricity grid infrastructure and operations. We leverage CMIP5 historical hydrology simulations and force a large-scale river-routing and reservoir model with 2010-level sectoral water demands. The regulated flow at each water-dependent generating plant is processed to adjust the water-dependent electricity generation parameterization in a production cost model that represents 2010-level power system operations with hourly energy demand of 2010. The resulting benchmark includes a risk distribution of several grid performance metrics (unserved energy, production cost, carbon emissions) as a function of inter-annual variability in regional water availability and predictability using large-scale climate oscillations. In the second part of the presentation, we describe an approach to map historical heat waves onto this benchmark grid performance using a building energy demand model. The impact of the heat waves, combined with the impact of droughts, is explored at multiple scales to understand the compounding effects. Vulnerabilities of the power generation and transmission systems are highlighted to guide future adaptation.

  9. Blast2GO goes grid: developing a grid-enabled prototype for functional genomics analysis.

    PubMed

    Aparicio, G; Götz, S; Conesa, A; Segrelles, D; Blanquer, I; García, J M; Hernandez, V; Robles, M; Talon, M

    2006-01-01

    The vast amount and complexity of data generated in genomic research imply that new dedicated and powerful computational tools need to be developed to meet their analysis requirements. Blast2GO (B2G) is a bioinformatics tool for Gene Ontology-based DNA or protein sequence annotation and function-based data mining. The application has been developed with the aim of offering an easy-to-use tool for functional genomics research. Typical B2G users are medium-sized genomics labs carrying out sequencing, EST and microarray projects, handling datasets of up to several thousand sequences. In the current version of B2G, the power and analytical potential of both annotation and function data-mining is somewhat restricted by the computational power behind each particular installation. In order to offer the possibility of an enhanced computational capacity within this bioinformatics application, a Grid component is being developed. A prototype has been conceived for the particular problem of speeding up the Blast searches to obtain fast results for large datasets. Many efforts have been reported in the literature concerning the speeding up of Blast searches, but few of them deal with the use of large heterogeneous production Grid infrastructures. These are the infrastructures that could reach the largest number of resources and the best load balancing for data access. The Grid Service under development will analyse requests based on the number of sequences, splitting them according to the available resources. Lower-level computation will be performed through MPIBLAST. The software architecture is based on the WSRF standard.
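The request-splitting step described above, dividing a batch of sequences across heterogeneous Grid resources, can be sketched as a proportional allocation. The worker capacities here are hypothetical, and this is an illustration of the general idea rather than the prototype's actual scheduler:

```python
def split_by_capacity(n_seqs, capacities):
    """Assign n_seqs sequences to workers in proportion to capacity,
    handing out any remainder to the largest fractional shares."""
    total = sum(capacities)
    shares = [n_seqs * c / total for c in capacities]
    alloc = [int(s) for s in shares]          # floor of each share
    rem = n_seqs - sum(alloc)                 # sequences still unassigned
    by_fraction = sorted(range(len(shares)),
                         key=lambda i: shares[i] - alloc[i], reverse=True)
    for i in by_fraction[:rem]:
        alloc[i] += 1
    return alloc
```

For example, splitting 10 sequences across three workers with relative capacities 1, 1 and 2 assigns the double-capacity worker half the batch, with the leftover sequence going to the worker with the largest fractional share.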

  10. Open Science in the Cloud: Towards a Universal Platform for Scientific and Statistical Computing

    NASA Astrophysics Data System (ADS)

    Chine, Karim

    The UK, through the e-Science program, the US through the NSF-funded cyber infrastructure and the European Union through the ICT Calls aimed to provide "the technological solution to the problem of efficiently connecting data, computers, and people with the goal of enabling derivation of novel scientific theories and knowledge".1 The Grid (Foster, 2002; Foster; Kesselman, Nick, & Tuecke, 2002), foreseen as a major accelerator of discovery, didn't meet the expectations it had excited at its beginnings and was not adopted by the broad population of research professionals. The Grid is a good tool for particle physicists and it has allowed them to tackle the tremendous computational challenges inherent to their field. However, as a technology and paradigm for delivering computing on demand, it doesn't work and it can't be fixed. On one hand, "the abstractions that Grids expose - to the end-user, to the deployers and to application developers - are inappropriate and they need to be higher level" (Jha, Merzky, & Fox), and on the other hand, academic Grids are inherently economically unsustainable. They can't compete with a service outsourced to the Industry whose quality and price would be driven by market forces. The virtualization technologies and their corollary, the Infrastructure-as-a-Service (IaaS) style cloud, hold the promise to enable what the Grid failed to deliver: a sustainable environment for computational sciences that would lower the barriers for accessing federated computational resources, software tools and data; enable collaboration and resources sharing and provide the building blocks of a ubiquitous platform for traceable and reproducible computational research.

  11. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Babik, M.; Bhatt, K.; Chand, P.; Collados, D.; Duggal, V.; Fuente, P.; Hayashi, S.; Imamagic, E.; Joshi, P.; Kalmady, R.; Karnani, U.; Kumar, V.; Lapka, W.; Quick, R.; Tarragon, J.; Teige, S.; Triantafyllidis, C.

    2012-12-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO[1] managers, service managers, management), from different middleware providers (ARC[2], dCache[3], gLite[4], UNICORE[5] and VDT[6]), consortiums (WLCG[7], EMI[11], EGI[15], OSG[13]), and operational teams (GOC[16], OMB[8], OTAG[9], CSIRT[10]). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG[27] portal where it is exposed to other clients. This monitoring workflow profits from the interoperability established between the SAM[19] and RSV[20] frameworks. We show how these two distributed structures are capable of uniting technologies and hiding the complexity around them, making them easy to be used by the community. Finally, the different supported deployment strategies, tailored not only for monitoring the entire infrastructure but also for monitoring sites and virtual organizations, are presented and the associated operational benefits highlighted.

  12. Smart grid as a service: a discussion on design issues.

    PubMed

    Chao, Hung-Lin; Tsai, Chen-Chou; Hsiung, Pao-Ann; Chou, I-Hsin

    2014-01-01

    Smart grid allows the integration of distributed renewable energy resources into the conventional electricity distribution power grid such that the goals of reduction in power cost and in environment pollution can be met through an intelligent and efficient matching between power generators and power loads. Currently, this rapidly developing infrastructure is not as "smart" as it should be because of the lack of a flexible, scalable, and adaptive structure. As a solution, this work proposes smart grid as a service (SGaaS), which not only allows a smart grid to be composed out of basic services, but also allows power users to choose between different services based on their own requirements. The two important issues of service-level agreements and composition of services are also addressed in this work. Finally, we give the details of how SGaaS can be implemented using a FIPA-compliant JADE multiagent system.

  13. A Taxonomy on Accountability and Privacy Issues in Smart Grids

    NASA Astrophysics Data System (ADS)

    Naik, Ameya; Shahnasser, Hamid

    2017-07-01

    Cyber-Physical Systems (CPS) are combinations of computation, networking, and physical processes. Embedded computers and networks monitor and control the physical processes, which affect computations and vice versa. Two applications of cyber-physical systems are health-care and the smart grid. In this paper, we have considered privacy aspects of cyber-physical systems applicable to the smart grid. The smart grid, in collaboration with different stakeholders, can help improve power generation, communication, distribution and consumption. Proper management and monitoring of energy usage by customers and utilities can be achieved through proper transmission and electricity flow; however, cyber vulnerability increases with this greater assimilation and linkage. This paper discusses various frameworks and architectures proposed for achieving accountability in smart grids by addressing privacy issues in the Advanced Metering Infrastructure (AMI). This paper also highlights additional work needed for accountability in more precise specifications such as uncertainty, ambiguity, indistinctness, unmanageability, and undetectability.

  14. Smart Grid as a Service: A Discussion on Design Issues

    PubMed Central

    Tsai, Chen-Chou; Chou, I-Hsin

    2014-01-01

    Smart grid allows the integration of distributed renewable energy resources into the conventional electricity distribution power grid such that the goals of reduction in power cost and in environment pollution can be met through an intelligent and efficient matching between power generators and power loads. Currently, this rapidly developing infrastructure is not as “smart” as it should be because of the lack of a flexible, scalable, and adaptive structure. As a solution, this work proposes smart grid as a service (SGaaS), which not only allows a smart grid to be composed out of basic services, but also allows power users to choose between different services based on their own requirements. The two important issues of service-level agreements and composition of services are also addressed in this work. Finally, we give the details of how SGaaS can be implemented using a FIPA-compliant JADE multiagent system. PMID:25243214

  15. Changing from computing grid to knowledge grid in life-science grid.

    PubMed

    Talukdar, Veera; Konar, Amit; Datta, Ayan; Choudhury, Anamika Roy

    2009-09-01

    Grid computing has a great potential to become a standard cyber infrastructure for life sciences that often require high-performance computing and large data handling, which exceed the computing capacity of a single institution. Grid computing applies the resources of many computers in a network to a single problem at the same time. It is useful for scientific problems that require a great number of computer processing cycles or access to a large amount of data. As biologists, we are constantly discovering millions of genes and genome features, which are assembled in a library and distributed on computers around the world. This means that new, innovative methods must be developed that exploit the resources available for extensive calculations - for example, grid computing. This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for community formation to share tacit knowledge within a community. By extending the concept of grid from computing grid to knowledge grid, it is possible to make use of a grid not only as sharable computing resources, but also as the time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hightower, Marion Michael; Baca, Michael J.; VanderMey, Carissa

    In June 2016, the Department of Energy's (DOE's) Office of Energy Efficiency and Renewable Energy (EERE) in collaboration with the Renewable Energy Branch for the Hawaii State Energy Office (HSEO), the Hawaii Community Development Authority (HCDA), the United States Navy (Navy), and Sandia National Laboratories (Sandia) established a project to 1) assess the current functionality of the energy infrastructure at the Kalaeloa Community Development District, and 2) evaluate options to use both existing and new distributed and renewable energy generation and storage resources within advanced microgrid frameworks to cost-effectively enhance energy security and reliability for critical stakeholder needs during both short-term and extended electric power outages. This report discusses the results of a stakeholder workshop and associated site visits conducted by Sandia in October 2016 to identify major Kalaeloa stakeholder and tenant energy issues, concerns, and priorities. The report also documents information on the performance and cost benefits of a range of possible energy system improvement options including traditional electric grid upgrade approaches, advanced microgrid upgrades, and combined grid/microgrid improvements. The costs and benefits of the different improvement options are presented, comparing options to see how well they address the energy system reliability, sustainability, and resiliency priorities identified by the Kalaeloa stakeholders.

  17. Adapting a commercial power system simulator for smart grid based system study and vulnerability assessment

    NASA Astrophysics Data System (ADS)

    Navaratne, Uditha Sudheera

    The smart grid is the future of the power grid. Smart meters and the associated network play a major role in the distributed system of the smart grid. Advanced Metering Infrastructure (AMI) can enhance the reliability of the grid, generate efficient energy management opportunities and enable many innovations around the future smart grid. These innovations involve intense research not only on the AMI network itself but also on the influence an AMI network can have upon the rest of the power grid. This research describes a smart meter testbed with hardware-in-the-loop that can facilitate future research in an AMI network. The smart meters in the testbed were developed such that their functionality can be customized to simulate any given scenario, such as integrating new hardware components into a smart meter or developing new encryption algorithms in firmware. These smart meters were integrated into the power system simulator to simulate the power flow variation in the power grid under different AMI activities. Each smart meter in the network also provides a communication interface to the home area network. This research delivers a testbed for emulating AMI activities and monitoring their effect on the smart grid.

  18. The Emerging Interdependence of the Electric Power Grid & Information and Communication Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taft, Jeffrey D.; Becker-Dippmann, Angela S.

    2015-08-01

    This paper examines the implications of emerging interdependencies between the electric power grid and Information and Communication Technology (ICT). Over the past two decades, electricity and ICT infrastructure have become increasingly interdependent, driven by a combination of factors including advances in sensor, network and software technologies and progress in their deployment, the need to provide increasing levels of wide-area situational awareness regarding grid conditions, and the promise of enhanced operational efficiencies. Grid operators’ ability to utilize new and closer-to-real-time data generated by sensors throughout the system is providing early returns, particularly with respect to management of the transmission system for purposes of reliability, coordination, congestion management, and integration of variable electricity resources such as wind generation.

  19. A Petri Net model for distributed energy system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konopko, Joanna

    2015-12-31

    Electrical networks need to evolve to become more intelligent, more flexible and less costly. The smart grid, the next generation of power delivery, uses two-way flows of electricity and information to create a distributed, automated energy delivery network. Building a comprehensive smart grid is a challenge for system protection, optimization and energy efficiency. Proper modeling and analysis are needed to build an extensive distributed energy system and an intelligent electricity infrastructure. In this paper, a whole model of the smart grid is proposed using Generalized Stochastic Petri Nets (GSPN). The simulation of the created model is also explored. The simulation of the model has allowed the analysis of how close the behavior of the model is to that of the real smart grid.
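A minimal stochastic Petri net in the GSPN spirit can be written in a few lines. The grid-failure/repair example and its rates below are invented for illustration and are not taken from the paper's model:

```python
import random

class GSPN:
    """Minimal stochastic Petri net: places hold tokens, and timed
    transitions fire after exponentially distributed delays."""
    def __init__(self, marking):
        self.marking = dict(marking)
        self.transitions = []  # (name, inputs, outputs, rate)

    def add_transition(self, name, inputs, outputs, rate):
        self.transitions.append((name, inputs, outputs, rate))

    def enabled(self):
        """Transitions whose input places all hold enough tokens."""
        return [t for t in self.transitions
                if all(self.marking.get(p, 0) >= n for p, n in t[1].items())]

    def step(self, rng):
        """Race the enabled transitions; fire the one with minimal delay."""
        candidates = self.enabled()
        if not candidates:
            return None
        delays = [(rng.expovariate(t[3]), t) for t in candidates]
        dt, (name, ins, outs, _) = min(delays, key=lambda d: d[0])
        for p, n in ins.items():
            self.marking[p] -= n
        for p, n in outs.items():
            self.marking[p] = self.marking.get(p, 0) + n
        return name, dt

# Toy grid-availability model: one token marks the grid state.
net = GSPN({"up": 1, "down": 0})
net.add_transition("fail",   {"up": 1},   {"down": 1}, rate=0.1)
net.add_transition("repair", {"down": 1}, {"up": 1},   rate=1.0)
```

Repeatedly calling `net.step(...)` produces a trajectory of failures and repairs whose statistics (e.g. fraction of time "up") can then be compared against the behavior expected of the real system, which is the kind of analysis the paper performs at full-grid scale.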

  20. INFN-Pisa scientific computation environment (GRID, HPC and Interactive Analysis)

    NASA Astrophysics Data System (ADS)

    Arezzini, S.; Carboni, A.; Caruso, G.; Ciampa, A.; Coscetti, S.; Mazzoni, E.; Piras, S.

    2014-06-01

    The INFN-Pisa Tier2 infrastructure is described, optimized not only for GRID CPU and storage access, but also for a more interactive use of the resources in order to provide good solutions for the final data analysis step. The data center, equipped with about 6700 production cores, permits the use of modern analysis techniques realized via advanced statistical tools (like RooFit and RooStats) implemented in multicore systems. In particular, a POSIX file storage access integrated with standard SRM access is provided. The unified storage infrastructure, based on GPFS and Xrootd, is therefore described, used both as an SRM data repository and for interactive POSIX access. Such a common infrastructure allows users transparent access to the Tier2 data for their interactive analysis. The organization of a specialized many-core CPU facility devoted to interactive analysis is also described, along with the login mechanism integrated with the INFN-AAI (National INFN Infrastructure) to extend site access and use to a geographically distributed community. Such infrastructure is also used as a national computing facility serving the INFN theoretical community, enabling a synergic use of computing and storage resources. Our center, initially developed for the HEP community, is now growing and also includes fully integrated HPC resources. In recent years a cluster facility (1000 cores, parallel use via InfiniBand connection) has been installed and managed, and we are now updating this facility so that it will provide resources for all the intermediate-level HPC computing needs of the INFN theoretical national community.

  1. Laboratory for energy smart systems (LESS).

    DOT National Transportation Integrated Search

    2016-12-01

    The US power grid is ageing fast, and the societal and environmental pressures for clean energy are increasing more than ever. The ageing power infrastructure poses major limitations on energy reliability and resiliency, especially in lieu of recent extr...

  2. Impact of wind farms with energy storage on transient stability

    NASA Astrophysics Data System (ADS)

    Bowman, Douglas Allen

    Today's energy infrastructure will need to rapidly expand in terms of reliability and flexibility due to aging infrastructure, changing energy market conditions, projected load increases, and system reliability requirements. Over the past few decades, several states in the U.S. have moved to require an increase in wind penetration. These requirements will have impacts on grid reliability given the inherent intermittency of wind generation, and much research has been completed on the impact of wind on grid reliability. Energy storage has been proposed as a tool to provide greater levels of reliability; however, little research has occurred in the area of wind with storage and its impact on stability under different possible scenarios. This thesis addresses the impact of wind farm penetration on transient stability when energy storage is added. The results show that battery energy storage located at the wind energy site can improve the stability response of the system.

  3. The smart meter and a smarter consumer: quantifying the benefits of smart meter implementation in the United States

    PubMed Central

    2012-01-01

    The electric grid in the United States has been suffering from underinvestment for years, and now faces pressing challenges from rising demand and deteriorating infrastructure. High congestion levels in transmission lines are greatly reducing the efficiency of electricity generation and distribution. In this paper, we assess the faults of the current electric grid and quantify the costs of maintaining the current system into the future. While the proposed “smart grid” contains many proposals to upgrade the ailing infrastructure of the electric grid, we argue that smart meter installation in each U.S. household will offer a significant reduction in peak demand on the current system. A smart meter is a device which monitors a household’s electricity consumption in real-time, and has the ability to display real-time pricing in each household. We conclude that these devices will provide short-term and long-term benefits to utilities and consumers. The smart meter will enable utilities to closely monitor electricity consumption in real-time, while also allowing households to adjust electricity consumption in response to real-time price adjustments. PMID:22540990
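The real-time price response described above is often modeled with a constant elasticity of demand. This sketch assumes a hypothetical elasticity of -0.2 rather than an estimate from the paper:

```python
def adjust_consumption(baseline_kwh, price, ref_price, elasticity=-0.2):
    """Constant-elasticity demand response: consumption scales as
    (price / ref_price) ** elasticity, so a price rise cuts usage."""
    return baseline_kwh * (price / ref_price) ** elasticity
```

At the reference price the household consumes its baseline; when the utility signals a peak-period price of, say, twice the reference, consumption falls, which is exactly the peak-shaving effect the paper attributes to smart meter deployment.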

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meintz, A.; Markel, T.; Burton, E.

    Analysis has been performed on the Transportation Secure Data Center (TSDC) warehouse of collected GPS second-by-second driving-profile data of vehicles in the Atlanta, Chicago, Fresno, Kansas City, Los Angeles, Sacramento, and San Francisco Consolidated Statistical Areas (CSAs) to understand in-motion wireless power transfer introduction scenarios. In this work it has been shown that electrification of 1% of road miles could reduce fuel use by 25% for Hybrid Electric Vehicles (HEVs) in these CSAs. This analysis of strategically located infrastructure offers a promising approach to reducing fuel consumption; however, even the most promising 1% of road miles determined by these seven analysis scenarios still represents an impressive 2,700 miles of roadway to electrify. Therefore, to mitigate the infrastructure capital costs, integration of the grid-tied power electronics in the Wireless Power Transfer (WPT) system at the DC link with photovoltaic and/or battery storage is suggested. The integration of these resources would allow the hardware to provide additional revenue through grid services at times of low traffic volume; conversely, at times of high traffic volume these resources could reduce the peak demand that the WPT system would otherwise add to the grid.

  5. UMTS Network Stations

    NASA Astrophysics Data System (ADS)

    Hernandez, C.

    2010-09-01

    The weakness of small island electrical grids is a handicap for electrical generation from renewable energy sources. With the intention of maximizing the installation of photovoltaic generators in the Canary Islands, the need arises to develop a solar forecasting system that makes it possible to know in advance the amount of PV-generated electricity that will be fed into the grid from the PV power plants installed on the islands. The forecasting tools need feedback of real weather data in "real time" from remote weather stations. Nevertheless, the transfer of these data to the computing servers is very complicated with the old point-to-point telecommunication systems, which allow neither the transfer of data from several remote weather stations simultaneously nor a high sampling frequency of weather parameters, due to the slowness of the connection. This project has developed a telecommunications infrastructure that allows sensorized remote stations to send data from their sensors, once every minute and simultaneously, to the calculation server running the solar forecasting numerical models. To this end, the Canary Islands Institute of Technology has added a sophisticated communications network to its 30 weather stations measuring irradiation at strategic sites: areas with high penetration of photovoltaic generation or with the potential to host grid-connected photovoltaic power plants in the future. In each of the stations, irradiance and temperature measurement instruments have been installed: irradiance over an inclined silicon cell, global radiation on a horizontal surface, and ambient temperature. Mobile telephone devices have been installed and programmed in each of the weather stations, which allow the transfer of their data taking advantage of the UMTS service offered by the local telephone operator. 
Every minute the computer server running the numerical weather forecasting models receives data inputs from 120 instruments distributed over the 30 radiometric stations. As a result, there now exists a stable, flexible, safe and economical infrastructure of radiometric stations and telecommunications that allows, on the one hand, data to be received in real time from all 30 remote weather stations and, on the other hand, communication with the stations in order to reprogram them and carry out maintenance work.

  6. Operational and Strategic Implementation of Dynamic Line Rating for Optimized Wind Energy Generation Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gentle, Jake Paul

    2016-12-01

    One primary goal of rendering today's transmission grid "smarter" is to optimize and better manage its power transfer capacity in real time. Power transfer capacity is affected by three main elements: stability, voltage limits, and thermal ratings. All three are critical, but thermal ratings represent the greatest opportunity to quickly, reliably and economically utilize the grid's true capacity. With the "Smarter Grid", new solutions have been sought to give operators a better grasp of real-time conditions, allowing them to manage and extend the usefulness of existing transmission infrastructure in a safe and reliable manner. The objective of the INL Wind Program is to provide industry a Dynamic Line Rating (DLR) solution that is state of the art as measured by cost, accuracy and dependability, enabling human operators to make informed decisions and take appropriate actions without overloading the system or impacting the reliability of the grid. In addition to mitigating transmission line congestion to better integrate wind, DLR also offers the opportunity to improve the grid through optimized utilization of transmission lines to relieve congestion in general. As wind-generated energy has become a bigger part of the nation's energy portfolio, researchers have learned that wind not only turns turbine blades to generate electricity, but can also cool transmission lines and increase transfer capabilities significantly, sometimes by up to 60 percent. INL's DLR development supports EERE and the Wind Energy Technology Office's goals by informing system planners and grid operators of available transmission capacity beyond typical Static Line Ratings (SLRs). SLRs are based on a fixed set of conservative environmental conditions to establish a limit on the amount of current lines can safely carry without overheating. 
Using commercially available weather monitors mounted on industry-informed custom brackets developed by INL, in combination with Computational Fluid Dynamics (CFD) enhanced weather analysis and DLR software, INL's project offers the potential of safely providing line ampacities up to 40 percent or more above existing SLRs by using real-time information rather than overly conservative SLRs.
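
The wind-cooling effect behind dynamic line rating can be illustrated with a greatly simplified steady-state conductor heat balance (the coefficients below are hypothetical placeholders, not the IEEE Std 738 correlations a real DLR system would use): at the maximum conductor temperature, resistive heating I²R balances convective plus radiative cooling minus solar gain, so ampacity grows with wind speed.

```python
import math

# Greatly simplified per-metre conductor thermal balance (hypothetical
# coefficients): ampacity is the current at which I^2 * R equals net cooling.
# Forced convection is modelled as proportional to sqrt(wind speed) times the
# conductor-to-ambient temperature rise.

def ampacity(wind_speed, t_cond=75.0, t_amb=25.0,
             r_ac=1e-4, q_solar=10.0, c_conv=2.0, c_rad=0.05):
    """Return the balancing current in amperes for a given wind speed (m/s)."""
    q_conv = c_conv * math.sqrt(wind_speed) * (t_cond - t_amb)  # forced convection
    q_rad = c_rad * (t_cond - t_amb)                            # radiative cooling
    return math.sqrt(max(q_conv + q_rad - q_solar, 0.0) / r_ac)

static = ampacity(0.6)   # conservative low-wind assumption behind an SLR
dynamic = ampacity(5.0)  # measured wind actually cooling the line
print(round(static), round(dynamic), round(dynamic / static, 2))
```

Even with toy coefficients, the square-root dependence on wind speed shows why measured winds can raise ratings by tens of percent over a static low-wind assumption.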

  7. e-Science on Earthquake Disaster Mitigation by EUAsiaGrid

    NASA Astrophysics Data System (ADS)

    Yen, Eric; Lin, Simon; Chen, Hsin-Yen; Chao, Li; Huang, Bor-Shoh; Liang, Wen-Tzong

    2010-05-01

    Although earthquakes are not predictable at this moment, with the aid of accurate seismic wave propagation analysis we can simulate the potential hazards at all distances from possible fault sources by understanding the source rupture process during large earthquakes. With the integration of a strong ground-motion sensor network, an earthquake data center and seismic wave propagation analysis over the gLite e-Science infrastructure, we can gain much better knowledge of the impact and vulnerability of potential earthquake hazards. On the other hand, this application also demonstrates the e-Science way to investigate unknown earth structure. Regional integration of earthquake sensor networks can aid fast event reporting and accurate event data collection. Federation of earthquake data centers entails consolidation and sharing of seismology and geology knowledge. Capability building in seismic wave propagation analysis implies the predictability of potential hazard impacts. With the gLite infrastructure and the EUAsiaGrid collaboration framework, earth scientists from Taiwan, Vietnam, the Philippines and Thailand are working together to alleviate potential seismic threats by making use of Grid technologies, and also to support seismology research by e-Science. A cross-continental e-infrastructure, based on EGEE and EUAsiaGrid, has been established for seismic wave forward simulation and risk estimation. Both the computing challenge of seismic wave analysis among five European and Asian partners and the data challenge of data center federation have been exercised and verified. A Seismogram-on-Demand service has also been developed for the automatic generation of a seismogram at any sensor point for a specific epicenter. To ease access to all the services based on users' workflows and retain maximal flexibility, a Seismology Science Gateway integrating data, computation, workflow, services and user communities will be implemented based on typical use cases. 
In the future, extension of the earthquake wave propagation to tsunami mitigation would be feasible once the user community support is in place.

  8. Regional Renewable Energy Cooperatives

    NASA Astrophysics Data System (ADS)

    Hazendonk, P.; Brown, M. B.; Byrne, J. M.; Harrison, T.; Mueller, R.; Peacock, K.; Usher, J.; Yalamova, R.; Kroebel, R.; Larsen, J.; McNaughton, R.

    2014-12-01

    We are building a multidisciplinary research program linking researchers in agriculture, business, earth science, engineering, humanities and social science. Our goal is to match renewable energy supply with reformed energy demands. The program will be focused on (i) understanding and modifying energy demand, and (ii) the design and implementation of diverse renewable energy networks. Geomatics technology will be used to map existing energy and waste flows on a neighbourhood, municipal, and regional level. Optimal sites and combinations of sites for solar and wind electrical generation (ridges, rooftops, valley walls) will be identified. Geomatics-based site and grid analyses will identify the best locations for energy production based on efficient production and connectivity to regional grids and transportation. The design of networks for the utilization of waste streams of heat, water, and animal and human waste for energy production will be investigated. Agriculture, cities and industry produce many waste streams that are not well utilized. Therefore, establishing a renewable energy resource mapping and planning program for electrical generation, waste heat and energy recovery, biomass collection, and biochar, biodiesel and syngas production is critical to regional energy optimization. Electrical storage and demand management are two priorities that will be investigated. Regional-scale cooperatives may use electric vehicle batteries and innovations such as pumped storage and concentrated solar molten-salt heat storage for steam turbine electrical generation. Energy demand management is poorly explored in Canada and elsewhere: our homes and businesses operate on unrestricted demand. Simple monitoring and energy demand-ranking software can easily reduce peak demands and move lower-ranked uses to non-peak periods, thereby reducing the grid size needed to meet peak demands. 
Peak demand strains the current energy grid capacity and often requires demand balancing projects and infrastructure that is highly inefficient due to overall low utilization.
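
The demand-ranking idea described above can be sketched as a greedy scheduler (the hours, loads, ranks, and capacity cap below are all hypothetical): loads pushing an hour over a target grid capacity are deferred lowest-rank-first and re-run in the quietest hour.

```python
# Sketch of demand-ranking peak shaving (hypothetical loads and ranks):
# a higher rank means a more essential use that stays in place; low-ranked
# loads are deferred until each hour's total fits under the capacity cap.

def enforce_capacity(schedule, cap):
    """schedule maps hour -> list of (rank, kW) loads; mutates it in place."""
    deferred = []
    for loads in schedule.values():
        loads.sort()                               # least essential loads first
        while sum(kw for _, kw in loads) > cap:
            deferred.append(loads.pop(0))          # defer the lowest-ranked load
    for load in deferred:
        quiet = min(schedule, key=lambda h: sum(kw for _, kw in schedule[h]))
        schedule[quiet].append(load)               # re-run it off-peak
    return schedule

sched = {17: [(3, 2.0), (1, 1.5), (2, 1.0)],       # evening peak hours
         18: [(3, 2.5), (1, 1.2)],
         3:  [(3, 0.4)]}                           # near-idle night hour
enforce_capacity(sched, cap=4.0)
peaks = {h: sum(kw for _, kw in loads) for h, loads in sched.items()}
print(peaks)  # hourly totals after shifting; the 4.5 kW peak is gone
```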

  9. An Intelligent Approach to Strengthening of the Rural Electrical Power Supply Using Renewable Energy Resources

    NASA Astrophysics Data System (ADS)

    Robert, F. C.; Sisodia, G. S.; Gopalan, S.

    2017-08-01

    The healthy growth of an economy lies in the balance between rural and urban development. Several developing countries have achieved successful growth of urban areas, yet rural infrastructure has been neglected until recently. Rural electrical grids are weak, with heavy losses and low capacity. Renewable energy represents an efficient way to generate electricity locally. However, renewable energy generation may be limited by the low grid capacity. Current solutions focus on grid reinforcement only. This article presents a model for improving renewable energy integration in rural grids through the intelligent combination of three strategies: 1) grid reinforcement, 2) use of storage, and 3) renewable energy curtailment. Such an approach provides a solution to integrate a maximum of renewable energy generation on low-capacity grids while minimising project cost and increasing the percentage of utilisation of assets. The test cases show that a grid connection agreement and a main inverter sized at 60 kW (resp. 80 kW) can accommodate a 100 kWp solar park (resp. 100 kW wind turbine) with minimal storage.
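
The clip-store-curtail strategy can be illustrated with a back-of-envelope energy balance (the bell-shaped generation profile, connection limit, and battery size below are hypothetical stand-ins, not the paper's test cases):

```python
# Toy daily energy balance for an undersized grid connection (hypothetical
# numbers): generation up to the export limit is fed to the grid, excess goes
# into a battery until it is full, and the remainder is curtailed.

def clip_and_store(gen_kw, limit_kw, batt_kwh):
    """One sample per hour; returns (exported, stored, curtailed) in kWh."""
    exported = stored = curtailed = 0.0
    for g in gen_kw:
        excess = max(g - limit_kw, 0.0)
        to_batt = min(excess, batt_kwh - stored)   # battery absorbs what it can
        stored += to_batt
        curtailed += excess - to_batt
        exported += min(g, limit_kw)
    return exported, stored, curtailed

# Crude bell-shaped daily profile peaking at 100 kW around midday
profile = [max(0.0, 100.0 - 4.0 * (h - 12) ** 2) for h in range(24)]
exported, stored, curtailed = clip_and_store(profile, limit_kw=60.0, batt_kwh=50.0)
total = sum(profile)
print(round(100 * curtailed / total, 1), "% of daily energy curtailed")
```

The point of the sketch is the trade-off the paper optimises: a larger battery or connection reduces curtailment, but each extra kWh of storage is used only for the midday excess.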

  10. Development of Armenian-Georgian Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Mickaelian, Areg; Kochiashvili, Nino; Astsatryan, Hrach; Harutyunian, Haik; Magakyan, Tigran; Chargeishvili, Ketevan; Natsvlishvili, Rezo; Kukhianidze, Vasil; Ramishvili, Giorgi; Sargsyan, Lusine; Sinamyan, Parandzem; Kochiashvili, Ia; Mikayelyan, Gor

    2009-10-01

    The Armenian-Georgian Virtual Observatory (ArGVO) project is the first initiative in the world to create a regional VO infrastructure based on national VO projects and a regional Grid. The Byurakan and Abastumani Astrophysical Observatories have been scientific partners since 1946, after the establishment of the Byurakan observatory. The Armenian VO project (ArVO) has been under development since 2005 and is a part of the International Virtual Observatory Alliance (IVOA). It is based on the Digitized First Byurakan Survey (DFBS, the digitized version of the famous Markarian survey) and other Armenian archival data. Similarly, the Georgian VO will be created to serve as a research environment to utilize the digitized Georgian plate archives. Therefore, one of the main goals for the creation of the regional VO is the digitization of the large number of plates preserved at the plate stacks of these two observatories. The total amount is more than 100,000 plates. Observational programs of high importance have been selected and some 3000 plates will be digitized during the next two years; the priority is defined by the usefulness of the material for future science projects, such as searches for new objects, optical identifications of radio, IR, and X-ray sources, studies of variability and proper motions, etc. Once the digitized material is available in VO standards, a VO database will be activated through the regional Grid infrastructure. This partnership is being carried out in the framework of the ISTC project A-1606 "Development of Armenian-Georgian Grid Infrastructure and Applications in the Fields of High Energy Physics, Astrophysics and Quantum Physics".

  11. Reducing Cascading Failure Risk by Increasing Infrastructure Network Interdependence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korkali, Mert; Veneman, Jason G.; Tivnan, Brian F.

    Increased coupling between critical infrastructure networks, such as power and communication systems, has important implications for the reliability and security of these systems. To understand the effects of power-communication coupling, several researchers have studied models of interdependent networks and reported that increased coupling can increase vulnerability. However, these conclusions come largely from models that have substantially different mechanisms of cascading failure, relative to those found in actual power and communication networks, and that do not capture the benefits of connecting systems with complementary capabilities. In order to understand the importance of these details, this paper compares network vulnerability in simple topological models and in models that more accurately capture the dynamics of cascading in power systems. First, we compare a simple model of topological contagion to a model of cascading in power systems and find that the power grid model shows a higher level of vulnerability, relative to the contagion model. Second, we compare a percolation model of topological cascading in coupled networks to three different models of power networks coupled to communication systems. Again, the more accurate models suggest very different conclusions than the percolation model. In all but the most extreme case, the physics-based power grid models indicate that increased power-communication coupling decreases vulnerability. This is opposite from what one would conclude from the percolation model, in which zero coupling is optimal. Only in an extreme case, in which communication failures immediately cause grid failures, did we find that increased coupling can be harmful. Together, these results suggest design strategies for reducing the risk of cascades in interdependent infrastructure systems.
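
A minimal version of the topological-contagion model that the authors contrast with physics-based power-flow simulation can be sketched as follows (the ring-plus-hub topology, seed, and thresholds are hypothetical, chosen only to make the threshold sensitivity visible):

```python
from collections import deque

# Threshold contagion on a small graph: a node fails once the failed fraction
# of its neighbours reaches a threshold, and a single seed failure propagates
# to a fixed point. This is the style of purely topological model the paper
# argues can mislead relative to power-flow-based cascade models.

def cascade_size(adj, seed, threshold):
    failed = {seed}
    queue = deque(adj[seed])
    while queue:
        node = queue.popleft()
        if node in failed:
            continue
        if sum(nb in failed for nb in adj[node]) / len(adj[node]) >= threshold:
            failed.add(node)
            queue.extend(adj[node])   # re-examine neighbours of the new failure
    return len(failed)

# Ring of 10 nodes, each also tied to a central hub (node 10)
n = 10
adj = {i: {(i - 1) % n, (i + 1) % n, n} for i in range(n)}
adj[n] = set(range(n))

robust = cascade_size(adj, seed=0, threshold=0.5)
fragile = cascade_size(adj, seed=0, threshold=0.3)
print(robust, fragile)  # a modest threshold drop turns one fault into a full blackout
```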

  12. Reducing Cascading Failure Risk by Increasing Infrastructure Network Interdependence

    DOE PAGES

    Korkali, Mert; Veneman, Jason G.; Tivnan, Brian F.; ...

    2017-03-20

    Increased coupling between critical infrastructure networks, such as power and communication systems, has important implications for the reliability and security of these systems. To understand the effects of power-communication coupling, several researchers have studied models of interdependent networks and reported that increased coupling can increase vulnerability. However, these conclusions come largely from models that have substantially different mechanisms of cascading failure, relative to those found in actual power and communication networks, and that do not capture the benefits of connecting systems with complementary capabilities. In order to understand the importance of these details, this paper compares network vulnerability in simple topological models and in models that more accurately capture the dynamics of cascading in power systems. First, we compare a simple model of topological contagion to a model of cascading in power systems and find that the power grid model shows a higher level of vulnerability, relative to the contagion model. Second, we compare a percolation model of topological cascading in coupled networks to three different models of power networks coupled to communication systems. Again, the more accurate models suggest very different conclusions than the percolation model. In all but the most extreme case, the physics-based power grid models indicate that increased power-communication coupling decreases vulnerability. This is opposite from what one would conclude from the percolation model, in which zero coupling is optimal. Only in an extreme case, in which communication failures immediately cause grid failures, did we find that increased coupling can be harmful. Together, these results suggest design strategies for reducing the risk of cascades in interdependent infrastructure systems.

  13. Earth Science community support in the EGI-Inspire Project

    NASA Astrophysics Data System (ADS)

    Schwichtenberg, H.

    2012-04-01

    The Earth Science Grid community has been following its strategy of propagating Grid technology to the ES disciplines, setting up interactive collaboration among the members of the community and stimulating the interest of stakeholders on the political level for ten years now. This strategy was described in a roadmap published in the Earth Science Informatics journal. It was applied through different European Grid projects and led to a large Grid Earth Science VRC that covers a variety of ES disciplines; in the end, all of them were facing the same kinds of ICT problems. The penetration of Grid in the ES community is indicated by the variety of applications, the number of countries in which ES applications are ported, the number of papers in international journals and the number of related PhDs. Among the six virtual organisations belonging to ES, one, ESR, is generic. Three others (env.see-grid-sci.eu, meteo.see-grid-sci.eu and seismo.see-grid-sci.eu) are thematic and regional (South Eastern Europe), covering environment, meteorology and seismology. The sixth VO, EGEODE, is for the users of the Geocluster software. There are also ES users in national VOs or VOs related to projects. The services for the ES task in EGI-Inspire concern the data that are a key part of any ES application. The ES community requires several interfaces to access data and metadata outside of the EGI infrastructure, e.g. by using grid-enabled database interfaces. The data centres have also developed service tools for basic research activities such as searching, browsing and downloading these datasets, but these are not accessible from applications executed on the Grid. The ES task in EGI-Inspire aims to make these tools accessible from the Grid. 
In collaboration with GENESI-DR (Ground European Network for Earth Science Interoperations - Digital Repositories) this task is maintaining and evolving an interface in response to new requirements that will allow data in the GENESI-DR infrastructure to be accessed from EGI resources to enable future research activities by this HUC. The international climate community for IPCC has created the Earth System Grid (ESG) to store and share climate data. There is a need to interface ESG with EGI for climate studies - parametric, regional and impact aspects. Critical points concern the interoperability of security mechanism between both "organisations", data protection policy, data transfer, data storage and data caching. Presenter: Horst Schwichtenberg Co-Authors: Monique Petitdidier (IPSL), Andre Gemünd (SCAI), Wim Som de Cerff (KNMI), Michael Schnell (SCAI)

  14. The global coastline dataset: the observed relation between erosion and sea-level rise

    NASA Astrophysics Data System (ADS)

    Donchyts, G.; Baart, F.; Luijendijk, A.; Hagenaars, G.

    2017-12-01

    Erosion of sandy coasts is considered one of the key risks of sea-level rise. Because sandy coastlines of the world are often highly populated, erosive coastline trends result in risk to populations and infrastructure. Most of our understanding of the relation between sea-level rise and coastal erosion is based on local or regional observations and generalizations of numerical and physical experiments. Until recently there was no reliable global scale assessment of the location of sandy coasts and their rate of erosion and accretion. Here we present the global coastline dataset that covers erosion indicators on a local scale with global coverage. The dataset uses our global coastline transects grid defined with an alongshore spacing of 250 m and a cross shore length extending 1 km seaward and 1 km landward. This grid matches up with pre-existing local grids where available. We present the latest results on validation of coastal-erosion trends (based on optical satellites) and classification of sandy versus non-sandy coasts. We show the relation between sea-level rise (based both on tide-gauges and multi-mission satellite altimetry) and observed erosion trends over the last decades, taking into account broken-coastline trends (for example due to nourishments). An interactive web application presents the publicly-accessible results using a backend based on Google Earth Engine. It allows both researchers and stakeholders to use objective estimates of coastline trends, particularly when authoritative sources are not available.
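
The transect-grid construction described above can be sketched in planar coordinates (the straight test coastline and the function are illustrative assumptions; the actual dataset works on geographic coordinates and real shorelines):

```python
import math

# Sketch of building a transect grid along a coastline polyline (coordinates
# in metres, hypothetical): sample a point every 250 m along the shore and
# attach a transect perpendicular to the local shoreline direction, extending
# 1 km to either side, matching the spacing and reach quoted in the abstract.

def transects(coast, spacing=250.0, reach=1000.0):
    out = []
    dist_next = 0.0                                   # distance into the next segment
    for (x0, y0), (x1, y1) in zip(coast, coast[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        ux, uy = (x1 - x0) / seg, (y1 - y0) / seg     # along-shore unit vector
        nx, ny = -uy, ux                              # shore-normal unit vector
        d = dist_next
        while d <= seg:
            px, py = x0 + ux * d, y0 + uy * d
            out.append(((px - nx * reach, py - ny * reach),
                        (px + nx * reach, py + ny * reach)))
            d += spacing
        dist_next = d - seg                           # carry remainder into next segment
    return out

# A straight 1 km west-east coastline yields 5 transects, 250 m apart
lines = transects([(0.0, 0.0), (1000.0, 0.0)])
print(len(lines), lines[0])
```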

  15. DPM: Future Proof Storage

    NASA Astrophysics Data System (ADS)

    Alvarez, Alejandro; Beche, Alexandre; Furano, Fabrizio; Hellmich, Martin; Keeble, Oliver; Rocha, Ricardo

    2012-12-01

    The Disk Pool Manager (DPM) is a lightweight solution for grid-enabled disk storage management. Operated at more than 240 sites, it has the widest distribution of all grid storage solutions in the WLCG infrastructure. It provides an easy way to manage and configure disk pools, and exposes multiple interfaces for data access (rfio, xroot, nfs, gridftp and http/dav) and control (srm). During the last year we have been working on providing stable, high-performance data access to our storage system using standard protocols, while extending the storage management functionality and adapting both configuration and deployment procedures to reuse commonly used building blocks. In this contribution we cover in detail the extensive evaluation we have performed of our new HTTP/WebDAV and NFS 4.1 frontends, in terms of functionality and performance. We summarize the issues we faced and the solutions we developed to turn them into valid alternatives to the existing grid protocols, namely the additional work required to provide multi-stream transfers for high-performance wide area access, support for third-party copies, credential delegation, and the required changes in the experiment and fabric management frameworks and tools. We describe new functionality that has been added to ease system administration, such as different filesystem weights and a faster disk drain, and new configuration and monitoring solutions based on the industry standards Puppet and Nagios. Finally, we explain some of the internal changes we had to make in the DPM architecture to better handle the additional load from the analysis use cases.

  16. Individualized grid-enabled mammographic training system

    NASA Astrophysics Data System (ADS)

    Yap, M. H.; Gale, A. G.

    2009-02-01

    The PERFORMS self-assessment scheme measures individuals' skills in identifying key mammographic features on sets of known cases. One aspect of this is that it allows radiologists' skills to be trained, based on their data from this scheme. Consequently, a new strategy is introduced to provide revision training based on mammographic features that the radiologist has had difficulty with in these sets. Doing this requires a large number of random cases to provide dynamic, unique, and up-to-date training modules for each individual. We propose GIMI (Generic Infrastructure in Medical Informatics) middleware as the solution to harvest cases from distributed grid servers. The GIMI middleware enables existing and legacy data to support healthcare delivery, research, and training. It is technology-agnostic, data-agnostic, and has a security policy. The trainee examines each case, indicating the location of regions of interest, and completes an evaluation form to record mammographic feature labelling, diagnosis, and decisions. For feedback, the trainee can choose to have immediate feedback after examining each case or batch feedback after examining a number of cases. All of the trainees' results are recorded in a database which also contains their trainee profiles. A full report can be prepared for the trainee after they have completed their training. This project demonstrates the practicality of a grid-based individualised training strategy and its efficacy in generating dynamic training modules within the coverage/outreach of the GIMI middleware. The advantages and limitations of the approach are discussed together with future plans.

  17. ICT-based hydrometeorology science and natural disaster societal impact assessment

    NASA Astrophysics Data System (ADS)

    Parodi, A.; Clematis, A.; Craig, G. C.; Kranzmueller, D.

    2009-09-01

    In the Lisbon strategy, the 2005 European Council identified knowledge and innovation as the engines of sustainable growth and stated that it is essential to build a fully inclusive information society. In parallel, the World Conference on Disaster Reduction (Hyogo, 2005) defined among its thematic priorities the improvement of international cooperation in hydrometeorology research activities. This was recently confirmed at the joint press conference of the Centre for Research on the Epidemiology of Disasters (CRED) with the United Nations International Strategy for Disaster Reduction (UNISDR) Secretariat, held in January 2009, where it was noted that flood and storm events are among the natural disasters that most impact human life. Hydrometeorological science has made strong progress over the last decade at the European and worldwide level: new modelling tools, post-processing methodologies and observational data are available. Recent European efforts in developing platforms for e-science, like EGEE (Enabling Grids for E-sciencE), SEE-GRID-SCI (South East Europe GRID e-Infrastructure for regional e-Science), and the German C3-Grid, provide an ideal basis for the sharing of complex hydrometeorological data sets and tools. Despite these early initiatives, however, awareness of the potential of Grid technology as a catalyst for future hydrometeorological research is still low, and both adoption and exploitation have been astonishingly slow, not only within individual EC member states, but also on a European scale. 
With this background in mind, the goal of the Distributed Research Infrastructure for Hydro-Meteorology Study (DRIHMS) project is the promotion of the Grid culture within the European hydrometeorological research community through the diffusion of a Grid platform for e-collaboration in this earth science sector: the idea is to further boost European research excellence and competitiveness in the fields of hydrometeorological research and Grid research by bridging the gaps between these two scientific communities. Furthermore the project is intended to transfer the results to areas beyond the strict hydrometeorology science as a support for the assessment of the effects of extreme hydrometeorological events on society and for the development of the tools improving the adaptation and resilience of society to the challenges of climate change.

  18. Developing infrastructure for interconnecting transportation network and electric grid.

    DOT National Transportation Integrated Search

    2011-09-01

    This report is primarily focused on the development of mathematical models that can be used to : support decisions regarding a charging station location and installation problem. The major parts : of developing the models included identification of t...

  19. Integrated Energy System Simulation | Grid Modernization | NREL

    Science.gov Websites

    Systems Integration Facility Control Room. For example, if the goal is to provide heat and electricity to infrastructure-and used when needed. For example, mid-day in early to late spring, sunshine is abundant, but

  20. 75 FR 29338 - Energy Efficiency of Natural Gas Infrastructure and Operations Conference; Final Notice of Public...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-25

    ... recovery projects and issues associated with fugitive methane. Bruce Hedman, ICF International, on behalf... associated with fugitive methane. Richard D. Murphy, S.V.P. Energy Solutions Services, National Grid, on...

  1. Complex data management for landslide monitoring in emergency conditions

    NASA Astrophysics Data System (ADS)

    Intrieri, Emanuele; Bardi, Federica; Fanti, Riccardo; Gigli, Giovanni; Fidolini, Francesco; Casagli, Nicola; Costanzo, Sandra; Raffo, Antonio; Di Massa, Giuseppe; Versace, Pasquale

    2017-04-01

    Urbanization, especially in mountain areas, can be considered a major cause for high landslide risk because of the increased exposure of elements at risk. Among the elements at risk, important communication routes such as highways, can be classified as critical infrastructures, since their rupture can cause deaths and chain effects with catastrophic damages on society. The resiliency policy involves prevention activities but also, and more importantly, those activities needed to maintain functionality after disruption and promptly alert incoming catastrophes. To tackle these issues, early warning systems are increasingly employed. However, a gap exists between the ever more technologically advanced instruments and the actual capability of exploiting their full potential. This is due to several factors such as the limited internet connectivity with respect to big data transfers, or the impossibility for operators to check a continuous flow of real time information. A ground-based interferometric synthetic aperture radar was installed along the A16 highway (Campania Region, Southern Italy) to monitor an unstable slope threatening this infrastructure. The installation was in an area where the only internet connection available was 3G, with a limit of 2 gigabyte data transfer per month. On the other hand interferometric data are complex numbers organized in a matrix where each pixel contains both phase and amplitude information of the backscattered signal. The radar employed produced a 1001x1001 complex matrix (corresponding to 7 megabytes) every 5 minutes. Therefore there was the need to reduce the massive data flow produced by the radar. For this reason data were locally and automatically elaborated in order to produce, from a complex matrix, a simple ASCII grid containing only the pixel by pixel displacement value, which is derived from the phase information. 
Then, since interferometry only measures the displacement component projected along the radar line of sight, the data needed to be re-projected. This was performed by dividing the ASCII grid by a correction matrix in which every element was the percentage of the actual displacement measurable by the radar; this percentage can be obtained by trigonometric arguments, knowing the position of the radar and the direction of movement of the landslides (which, in our case, corresponded with the slope direction), thus enabling the calculation of the radar line of sight. To further reduce the size of the grids, they were cropped to contain only those pixels from which relevant information could be extracted. The ASCII grids were also averaged to reduce noise, yielding 8-hour and 24-hour averaged grids. According to the early warning procedures that were defined, during periods characterized by low or null slope movement, only the 8-hour and 24-hour data were transferred, together with the last displacement measurement of a reduced number of control points. The transfer was performed by transforming the grids into strings and sending them through a middleware to the Data Acquisition and Elaboration Centre, where control-point displacement values were compared with warning thresholds and the grids were projected in a GIS environment as 2D displacement maps.
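    The two processing steps described above (converting phase to line-of-sight displacement and re-projecting by a correction matrix) can be sketched as follows. This is an illustrative sketch only: the wavelength, the geometry, and the masking threshold are assumptions, not the actual parameters of the A16 radar installation.

```python
import numpy as np

WAVELENGTH_M = 0.0174  # assumed Ku-band radar wavelength, not the actual instrument value

def phase_to_los_displacement(delta_phase):
    """Convert an interferometric phase-difference grid (radians) into
    line-of-sight (LOS) displacement (metres): d = -(lambda / 4*pi) * dphi."""
    return -(WAVELENGTH_M / (4.0 * np.pi)) * delta_phase

def los_correction_matrix(pixel_xyz, radar_xyz, motion_dir):
    """For each pixel, the fraction of the true (slope-parallel) displacement
    visible along the radar LOS: the cosine of the angle between the LOS
    unit vector and the assumed direction of movement."""
    los = np.asarray(pixel_xyz, dtype=float) - np.asarray(radar_xyz, dtype=float)
    los /= np.linalg.norm(los, axis=-1, keepdims=True)
    motion = np.asarray(motion_dir, dtype=float)
    motion /= np.linalg.norm(motion)
    return los @ motion  # per-pixel dot product

def reproject(d_los, correction, min_fraction=0.1):
    """Divide LOS displacement by the correction matrix; mask pixels where
    the radar is nearly blind to the motion direction."""
    out = np.full_like(d_los, np.nan)
    visible = np.abs(correction) >= min_fraction
    out[visible] = d_los[visible] / correction[visible]
    return out
```

    A pixel whose motion is perpendicular to the line of sight gets a near-zero correction factor and is masked rather than divided, which avoids amplifying noise into spurious displacement.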

  2. Improving Grid Resilience through Informed Decision-making (IGRID)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burnham, Laurie; Stamber, Kevin L.; Jeffers, Robert Fredric

    The transformation of the distribution grid from a centralized to a decentralized architecture, with bi-directional power and data flows, is made possible by a surge in network intelligence and grid automation. While these changes are largely beneficial, the interface between grid operator and automated technologies is not well understood, nor are the benefits and risks of automation. Quantifying and understanding the latter is an important facet of grid resilience that needs to be fully investigated. The work described in this document represents the first empirical study aimed at identifying and mitigating the vulnerabilities posed by automation for a grid that for the foreseeable future will remain a human-in-the-loop critical infrastructure. Our scenario-based methodology enabled us to conduct a series of experimental studies to identify causal relationships between grid-operator performance and automated technologies and to collect measurements of human performance as a function of automation. Our findings, though preliminary, suggest there are predictive patterns in the interplay between human operators and automation, patterns that can inform the rollout of distribution automation and the hiring and training of operators, and contribute in multiple and significant ways to the field of grid resilience.

  3. Research and Deployment a Hospital Open Software Platform for e-Health on the Grid System at VAST/IAMI

    NASA Astrophysics Data System (ADS)

    van Tuyet, Dao; Tuan, Ngo Anh; van Lang, Tran

    Grid computing has become an increasingly active topic in recent years, attracting the attention of scientists from many fields, and many Grid systems have been built to serve people's demands. Tools for developing Grid systems, such as Globus, gLite and UNICORE, are still under constant development; gLite in particular, a Grid middleware, has been developed by the European scientific community in recent years. The constant growth of Grid technology has opened the way for new opportunities in terms of information and data exchange in a secure and collaborative context. These new opportunities can be exploited to offer physicians new telemedicine services in order to improve their collaborative capacities. Our platform gives physicians an easy-to-use telemedicine environment to manage and share patient information (such as electronic medical records and DICOM-formatted images) between remote locations. This paper presents the Grid infrastructure based on gLite; some main components of gLite; the challenge scenario in which new applications can be developed to improve collaborative work between scientists; and the process of deploying the Hospital Open software Platform for E-health (HOPE) on the Grid.

  4. Decentral Smart Grid Control

    NASA Astrophysics Data System (ADS)

    Schäfer, Benjamin; Matthiae, Moritz; Timme, Marc; Witthaut, Dirk

    2015-01-01

    Stable operation of complex flow and transportation networks requires balanced supply and demand. For the operation of electric power grids—due to their increasing fraction of renewable energy sources—a pressing challenge is to fit the fluctuations in decentralized supply to the distributed and temporally varying demands. To achieve this goal, common smart grid concepts suggest collecting consumer demand data, evaluating them centrally given current supply, and sending price information back to customers for them to decide about usage. Besides restrictions regarding cyber security, privacy protection and large required investments, it remains unclear how such central smart grid options guarantee overall stability. Here we propose a Decentral Smart Grid Control, where the price is directly linked to the local grid frequency at each customer. The grid frequency provides all necessary information about the current power balance, such that it is sufficient to match supply and demand without the need for a centralized IT infrastructure. We analyze the performance and the dynamical stability of the power grid with such a control system. Our results suggest that the proposed Decentral Smart Grid Control is feasible independent of effective measurement delays, if frequencies are averaged over sufficiently large time intervals.
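    The control idea can be illustrated with a toy discrete-time loop in which each customer reads the local frequency, derives a price from it, and adjusts demand elastically; the imbalance between supply and demand then moves the frequency. All numbers here (nominal frequency, price sensitivity, elasticity, loop gain) are invented for illustration and are not taken from the paper.

```python
F_NOMINAL = 50.0   # Hz, assumed nominal grid frequency
BASE_PRICE = 40.0  # assumed base price (arbitrary currency/MWh)

def local_price(freq_hz, sensitivity=20.0):
    """Price read off the locally measured frequency: under-frequency
    signals scarcity and raises the price; over-frequency lowers it."""
    return BASE_PRICE - sensitivity * (freq_hz - F_NOMINAL)

def simulate(supply=100.0, base_demand=105.0, elasticity=0.5,
             gain=0.01, steps=500):
    """Iterate: measure frequency -> derive local price -> elastic demand
    responds -> the supply/demand imbalance nudges the frequency."""
    f = 49.5  # start away from equilibrium
    for _ in range(steps):
        price = local_price(f)
        demand = base_demand * (1.0 - elasticity * (price - BASE_PRICE) / BASE_PRICE)
        f += gain * (supply - demand)  # imbalance drives frequency
    return f, demand
```

    In this toy loop the system settles at the frequency where demand equals supply, illustrating how the frequency alone can carry the balancing signal without any central IT infrastructure.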

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kok, Koen; Widergren, Steve

    Secure, Clean and Efficient Energy is one of the great societal challenges of our time. Electricity as a sustainable energy carrier plays a central role in the most effective transition scenarios towards sustainability. To harness this potential, the current electricity infrastructure needs to be rigorously re-engineered into an integrated and intelligent electricity system: the smart grid. Key elements of the smart grid vision are the coordination mechanisms. In such a system, vast numbers of devices, currently just passively connected to the grid, will become actively involved in system-wide and local coordination tasks. In this light, transactive energy (TE) is emerging as a strong contender for orchestrating the coordinated operation of so many devices.

  6. A Semantic Grid Oriented to E-Tourism

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao Ming

    With the increasing complexity of tourism business models and tasks, there is a clear need for a next-generation e-Tourism infrastructure to support flexible automation, integration, computation, storage, and collaboration. Several enabling technologies, such as the semantic Web, Web services, agents and grid computing, have been applied in different e-Tourism applications, but no unified framework exists to integrate them all. This paper therefore presents a promising e-Tourism framework based on the emerging semantic grid, in which a number of key design issues are discussed, including the architecture, ontology structure, semantic reconciliation, service and resource discovery, role-based authorization and intelligent agents. The paper concludes with an implementation of the framework.

  7. Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

    NASA Astrophysics Data System (ADS)

    Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.

    In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures are currently undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability and issues including data distribution, software heterogeneity, and ad hoc hardware availability commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.

  8. From Ions to Wires to the Grid: The Transformational Science of LANL Research in High-Tc Superconducting Tapes and Electric Power Applications

    ScienceCinema

    Marken, Ken

    2018-01-09

    The Department of Energy (DOE) Office of Electricity Delivery and Energy Reliability (OE) has been tasked to lead national efforts to modernize the electric grid, enhance security and reliability of the energy infrastructure, and facilitate recovery from disruptions to energy supplies. LANL has pioneered the development of coated conductors – high-temperature superconducting (HTS) tapes – which permit dramatically greater current densities than conventional copper cable, and enable new technologies to secure the national electric grid. Sustained world-class research from concept, demonstration, transfer, and ongoing industrial support has moved this idea from the laboratory to the commercial marketplace.

  9. Exploring New Models for Utility Distributed Energy Resource Planning and Integration: SMUD and Con Edison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2018-01-23

    As a result of the rapid growth of renewable energy in the United States, the U.S. electric grid is undergoing a monumental shift away from its historical status quo. These changes are occurring at both the centralized and local levels and have been driven by a number of different factors, including large declines in renewable energy costs, federal and state incentives and mandates, and advances in the underlying technology. Higher levels of variable-generation renewable energy, however, may require new and increasingly complex methods for utilities to operate and maintain the grid while also attempting to limit the costly build-out of supporting grid infrastructure.

  10. Air Pollution Monitoring and Mining Based on Sensor Grid in London

    PubMed Central

    Ma, Yajie; Richards, Mark; Ghanem, Moustafa; Guo, Yike; Hassard, John

    2008-01-01

    In this paper, we present a distributed infrastructure based on wireless sensor networks and Grid computing technology for air pollution monitoring and mining, which aims to develop low-cost and ubiquitous sensor networks to collect real-time, large-scale and comprehensive environmental data from road traffic emissions for air pollution monitoring in urban environments. The main informatics challenges with respect to constructing the high-throughput sensor Grid are discussed in this paper. We present a two-layer network framework, a P2P e-Science Grid architecture, and a distributed data mining algorithm as the solutions to address the challenges. We simulated the system in TinyOS to examine the operation of each sensor as well as the networking performance. We also present the distributed data mining results to examine the effectiveness of the algorithm. PMID:27879895

  11. Air Pollution Monitoring and Mining Based on Sensor Grid in London.

    PubMed

    Ma, Yajie; Richards, Mark; Ghanem, Moustafa; Guo, Yike; Hassard, John

    2008-06-01

    In this paper, we present a distributed infrastructure based on wireless sensor networks and Grid computing technology for air pollution monitoring and mining, which aims to develop low-cost and ubiquitous sensor networks to collect real-time, large-scale and comprehensive environmental data from road traffic emissions for air pollution monitoring in urban environments. The main informatics challenges with respect to constructing the high-throughput sensor Grid are discussed in this paper. We present a two-layer network framework, a P2P e-Science Grid architecture, and a distributed data mining algorithm as the solutions to address the challenges. We simulated the system in TinyOS to examine the operation of each sensor as well as the networking performance. We also present the distributed data mining results to examine the effectiveness of the algorithm.

  12. The ATLAS Simulation Infrastructure

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2010-09-25

    The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including that supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools allowing the software validation, performance testing, and the validation of the simulated output against known physics processes.

  13. Grid Modernization - Sandia Energy

    Science.gov Websites


  14. Off-grid MEMS sensors configurations for transportation applications.

    DOT National Transportation Integrated Search

    2013-10-01

    The worsening problem of aging and deficient infrastructure in this nation and across the world has demonstrated the need for an improved system to monitor and maintain these structures. The field of structural health monitoring has grown in recent y...

  15. Latvian Education Informatization System LIIS

    ERIC Educational Resources Information Center

    Bicevskis, Janis; Andzans, Agnis; Ikaunieks, Evalds; Medvedis, Inga; Straujums, Uldis; Vezis, Viesturs

    2004-01-01

    The Latvian Education Informatization System LIIS project covers the whole information grid: education content, management, information services, infrastructure and user training at several levels--schools, school boards and Ministry of Education and Science. Informatization is the maintained process of creating the technical, economical and…

  16. High-Efficiency Food Production in a Renewable Energy Based Micro-Grid Power System

    NASA Technical Reports Server (NTRS)

    Bubenheim, David; Meiners, Dennis

    2016-01-01

    Controlled Environment Agriculture (CEA) systems can be used to produce high-quality, desirable food year round, and the fresh produce can positively contribute to the health and well-being of residents in communities with difficult supply logistics. While CEA has many positive outcomes for a remote community, the associated high electric demands have prohibited widespread implementation in what is typically already a fully subscribed power generation and distribution system. Recent advances in CEA technologies as well as renewable power generation, storage, and micro-grid management are increasing system efficiency and expanding the possibilities for enhancing community supporting infrastructure without increasing demands for outside supplied fuels. We will present examples of how new lighting, nutrient delivery, and energy management and control systems can enable significant increases in food production efficiency while maintaining high yields in CEA. Examples from Alaskan communities where initial incorporation of renewable power generation, energy storage and grid management techniques have already reduced diesel fuel consumption for electric generation by more than 40% and expanded grid capacity will be presented. We will discuss how renewable power generation, efficient grid management to extract maximum community service per kW, and novel energy storage approaches can expand the food production, water supply, waste treatment, sanitation and other community support services without traditional increases of consumable fuels supplied from outside the community. These capabilities offer communities a range of choices for enhancing their services. The examples represent a synergy of technology advancement efforts to develop sustainable community support systems for future space-based human habitats and practical implementation of infrastructure components to increase efficiency and enhance health and well-being in remote communities today and tomorrow.

  17. High-Efficiency Food Production in a Renewable Energy Based Micro-Grid

    NASA Technical Reports Server (NTRS)

    Bubenheim, David L.

    2017-01-01

    Controlled Environment Agriculture (CEA) systems can be used to produce high-quality, desirable food year round, and the fresh produce can positively contribute to the health and well-being of residents in communities with difficult supply logistics. While CEA has many positive outcomes for a remote community, the associated high electric demands have prohibited widespread implementation in what is typically already a fully subscribed power generation and distribution system. Recent advances in CEA technologies as well as renewable power generation, storage, and micro-grid management are increasing system efficiency and expanding the possibilities for enhancing community supporting infrastructure without increasing demands for outside supplied fuels. We will present examples of how new lighting, nutrient delivery, and energy management and control systems can enable significant increases in food production efficiency while maintaining high yields in CEA. Examples from Alaskan communities where initial incorporation of renewable power generation, energy storage and grid management techniques have already reduced diesel fuel consumption for electric generation by more than 40% and expanded grid capacity will be presented. We will discuss how renewable power generation, efficient grid management to extract maximum community service per kW, and novel energy storage approaches can expand the food production, water supply, waste treatment, sanitation and other community support services without traditional increases of consumable fuels supplied from outside the community. These capabilities offer communities a range of choices for enhancing their services. The examples represent a synergy of technology advancement efforts to develop sustainable community support systems for future space-based human habitats and practical implementation of infrastructure components to increase efficiency and enhance health and well-being in remote communities today and tomorrow.

  18. Can developing countries leapfrog the centralized electrification paradigm?

    DOE PAGES

    Levin, Todd; Thomas, Valerie M.

    2016-02-04

    Due to the rapidly decreasing costs of small renewable electricity generation systems, centralized power systems are no longer a necessary condition of universal access to modern energy services. Developing countries, where centralized electricity infrastructures are less developed, may be able to adopt these new technologies more quickly. We first review the costs of grid extension and distributed solar home systems (SHSs) as reported by a number of different studies. We then present a general analytic framework for analyzing the choice between extending the grid and implementing distributed solar home systems. Drawing upon reported grid expansion cost data for three specific regions, we demonstrate this framework by determining the electricity consumption levels at which the costs of provision through centralized and decentralized approaches are equivalent in these regions. We then calculate the SHS capital costs that are necessary for these technologies to provide each of five tiers of energy access, as defined by the United Nations Sustainable Energy for All initiative. Our results suggest that solar home systems can play an important role in achieving universal access to basic energy services. The extent of this role depends on three primary factors: SHS costs, grid expansion costs, and centralized generation costs. Given current technology costs, centralized systems will still be required to enable higher levels of consumption; however, cost reduction trends have the potential to disrupt this paradigm. Furthermore, by looking ahead rather than replicating older infrastructure styles, developing countries can leapfrog to a more distributed electricity service model.
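    The breakeven idea in this framework can be sketched with a minimal annualized-cost comparison: grid extension carries a fixed wire cost plus a per-kWh central-generation charge, while SHS capital is assumed to scale with the annual energy delivered. All figures below are hypothetical, not the study's regional data.

```python
def crf(rate, years):
    """Capital recovery factor: annualizes an up-front capital cost
    over `years` at discount `rate`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def breakeven_annual_kwh(grid_ext_cost, gen_cost_per_kwh,
                         shs_cost_per_annual_kwh, rate=0.08, years=20):
    """Annual consumption at which grid extension (annualized fixed wire
    cost plus per-kWh generation) costs the same as solar home systems
    (capital assumed proportional to annual energy delivered)."""
    a = crf(rate, years)
    shs_per_kwh = shs_cost_per_annual_kwh * a  # annualized SHS cost per kWh
    if shs_per_kwh <= gen_cost_per_kwh:
        return float("inf")  # SHS cheaper at every level; grid never breaks even
    return grid_ext_cost * a / (shs_per_kwh - gen_cost_per_kwh)
```

    Below the breakeven consumption the SHS is the cheaper provision route; above it, grid extension wins, which is the qualitative pattern the study describes for its three regions.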

  19. Trustworthy Cyber Infrastructure for the Power Grid (TCIPG) Final Technical Report - November 20, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, William H.; Sauer, Peter W.; Valdes, Alfonso

    The Trustworthy Cyber Infrastructure for the Power Grid project (TCIPG) was funded by DOE and DHS for a period of performance that ran from October 1, 2009 to August 31, 2015. The partnership included the University of Illinois at Urbana-Champaign (lead institution) and partner institutions Arizona State University (replacing original partner UC Davis when faculty moved), Dartmouth College, and Washington State University. TCIPG was a unique public-private partnership of government, academia, and industry that was formed to meet the challenge of keeping our power grid secure. TCIPG followed from the earlier NSF-funded TCIP project, which kicked off in 2005. At that time, awareness of cyber security and resiliency in grid systems (and in control systems in general) was low, and the term “smart grid” was not in wide use. The original partnership was formed from a team of academic researchers with a shared vision for the importance of research in this area, and a commitment to producing more impactful results through early involvement of industry. From the TCIPG standpoint, “industry” meant both utilities (investor-owned as well as cooperatives and municipals) and system vendors (who sell technology to the utility sector). Although TCIPG was a university-led initiative, we have from the start stressed real-world impact and partnership with industry. That has led to real-world adoption of TCIPG technologies within the industry, achieving practical benefits. This report summarizes the achievements of TCIPG over its period of performance.

  20. Infrastructure Systems for Advanced Computing in E-science applications

    NASA Astrophysics Data System (ADS)

    Terzo, Olivier

    2013-04-01

    In the e-science field there is a growing need for computing infrastructure that is more dynamic and customizable, with an on-demand model of use that follows the exact request in terms of resources and storage capacity. The integration of grid and cloud infrastructure solutions makes it possible to offer services that adapt their availability by scaling resources up and down. The main challenge for e-science domains will be to implement infrastructure solutions for scientific computing that adapt dynamically to the demand for computing resources, with a strong emphasis on optimizing resource use in order to reduce investment costs. Instrumentation, data volumes, algorithms, and analysis all add complexity for applications that require high processing power and storage for a limited time, often exceeding the computational resources available to the majority of laboratories and research units in an organization. Very often it is necessary to adapt, or even rethink, tools and algorithms, and to consolidate existing applications through a phase of reverse engineering, in order to deploy them on a cloud infrastructure. For example, in areas such as rainfall monitoring, meteorological analysis, hydrometeorology, climatology, bioinformatics (next-generation sequencing), computational electromagnetics, and radio occultation, the complexity of the analysis raises several issues, such as processing time, the scheduling of processing tasks, the storage of results, and multi-user environments. For these reasons, it is necessary to rethink how e-science applications are written so that they are already adapted to exploit the potential of cloud computing services through the IaaS, PaaS, and SaaS layers. 
Another important focus is the creation and use of hybrid infrastructures, typically a federation between private and public clouds: when all the resources owned by the organization are in use, a federated cloud infrastructure makes it easy to add resources from the public cloud to meet computational and storage needs, and to release them when the processes are finished. In this hybrid model, the scheduling approach is important for managing both cloud types. Thanks to this infrastructure model, resources are always available for additional IT capacity that can be used on demand for a limited time, without having to purchase additional servers.
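    The bursting behaviour described here can be sketched as a greedy placement rule: fill the private cloud first, then rent public-cloud capacity for the overflow and release it when jobs finish. The job/core model is a simplification invented for illustration, not an actual federation scheduler.

```python
def place_jobs(jobs, private_cores):
    """Greedy hybrid-cloud placement: `jobs` maps job name -> cores
    requested. Jobs that fit in the remaining private capacity run
    privately; the rest burst to the public cloud on demand."""
    placements, free = {}, private_cores
    for name, cores in jobs.items():
        if cores <= free:
            placements[name] = "private"
            free -= cores
        else:
            placements[name] = "public"  # burst: rented for the job's duration
    return placements
```

    A real federation scheduler would also weigh data-transfer cost and public-cloud pricing before bursting, but the fill-private-first rule captures the cost intuition of the paragraph above.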

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snow, Dr., Joel

    This final report is presented by Langston University (LU) for the project entitled "Langston University High Energy Physics" (LUHEP) under the direction of principal investigator (PI) and project director Professor Joel Snow. The project encompassed high energy physics research performed at hadron colliders. The PI is a collaborator on the DZero experiment at Fermi National Accelerator Laboratory in Batavia, IL, USA and the ATLAS experiment at CERN in Geneva, Switzerland, and was during the entire project period from April 1, 1999 until May 14, 2012. Both experiments seek to understand the fundamental constituents of the physical universe and the forces that govern their interactions. In 1999, as a member of the Online Systems group for Run 2, the PI developed a cross-platform, Python-based graphical user interface (GUI) application for monitoring and control of EPICS-based devices for control room use. This served as a model for other developers to enhance and build on for further monitoring and control tasks written in Python. Subsequently the PI created and developed a cross-platform C++ GUI utilizing a networked client-server paradigm and based on ROOT, the object-oriented analysis framework from CERN. The GUI served as a user interface to the Examine tasks running in the DØ control room which monitored the status and integrity of data taking for Run 2. The PI developed the histogram server/control interface to the GUI client for the EXAMINE processes. The histogram server was built from the ROOT framework and was integrated into the DØ framework used for online monitoring programs and offline analysis. The PI developed the first implementation of displaying histograms dynamically generated by ROOT in a Web browser. The PI's work resulted in several talks and papers at international conferences and workshops. The PI established computing software infrastructure at LU and U. 
Oklahoma (OU) to do analysis of DZero production data and produce simulation data for the experiment. Eventually this included the FNAL SAM data grid system, the SAMGrid (SG) infrastructure, and the Open Science Grid software stacks for computing and storage elements. At the end of 2003 Snow took on the role of global Monte Carlo production coordinator for the DØ experiment, a role that continues to this day. In January 2004 Snow started working with the SAMGrid development team to help debug, deploy, and integrate SAMGrid with DØ Monte Carlo production. Snow installed and configured SG execution and client sites at LUHEP and OUHEP, and an SG scheduler site at LUHEP. The PI developed a Python-based GUI (DAJ) that acts as a front end for job submission to SAMGrid. The GUI interfaces to the DZero Monte Carlo (MC) request system that uses SAM to manage MC requests by the physics analysis groups. DAJ significantly simplified SG job submission and was deployed in DZero in an effort to increase the user base of SG. The following year was the advent of SAMGrid job submission to the Open Science Grid (OSG) and LHC Computing Grid (LCG) through a forwarding mechanism. The PI oversaw the integration of these grids into the existing production infrastructure. The PI developed an automatic MC (Automc) request processing system capable of operating without user intervention (other than getting grid credentials), and able to submit to any number of sites on various grids. The system manages production at all but 2 sites. The system was deployed at Fermilab and remains operating there today. The PI's work in distributed computing resulted in several talks at international conferences. UTA, OU, and LU were chosen as the collaborating institutions that form the Southwest Tier 2 Center (SWT2) for ATLAS. 
During the project period the PI contributed to the online and offline software infrastructure through his work with the Run 2 online group, and played a major role in Monte Carlo production for DZero. During the part of the project period in which the PI served as MC production coordinator, MC production increased very significantly. In the first year of the PI's tenure as production coordinator, production was 159 M events and 6.7 TB of data. During the last year of the project period, production was 2,342 M events and 262 TB of data. That is a factor of 15 increase in events and 39 in data volume. The increase occurred with improvements in computer hardware and networks, through the use of grid technology on diverse resources, and through increased automation and efficiency of the production process. LU HEP developed and deployed the automatic MC request processing system in use at FNAL. The complementary strategies of automation and grid production served DZero well. Fermilab has recognized LU HEP's contribution to DZero by allowing the PI to devote full time to research activities by appointing him a guest scientist for the last six years of the project period.

  2. OOI CyberInfrastructure - Next Generation Oceanographic Research

    NASA Astrophysics Data System (ADS)

    Farcas, C.; Fox, P.; Arrott, M.; Farcas, E.; Klacansky, I.; Krueger, I.; Meisinger, M.; Orcutt, J.

    2008-12-01

    Software has become a key enabling technology for scientific discovery, observation, modeling, and exploitation of natural phenomena. New value emerges from the integration of individual subsystems into networked federations of capabilities exposed to the scientific community. Such data-intensive interoperability networks are crucial for future scientific collaborative research, as they open up new ways of fusing data from different sources and across various domains, and of performing analysis over wide geographic areas. The recently established NSF OOI program, through its CyberInfrastructure component, addresses this challenge by providing broad access from sensor networks for data acquisition up to computational grids for massive computations, with a binding infrastructure facilitating policy management and governance of the emerging system-of-scientific-systems. We provide insight into the integration core of this effort, namely, a hierarchic service-oriented architecture for a robust, performant, and maintainable implementation. We first discuss the relationship between data management and CI crosscutting concerns such as identity management, policy and governance, which define the organizational contexts for data access and usage. Next, we detail critical services including data ingestion, transformation, preservation, inventory, and presentation. To address interoperability issues between data represented in various formats, we employ a semantic framework derived from the Earth System Grid technology, a canonical representation for scientific data based on DAP/OPeNDAP, and related data publishers such as ERDDAP. Finally, we briefly present the underlying transport based on a messaging infrastructure over the AMQP protocol, and the preservation based on a distributed file system through SDSC iRODS.

  3. Integrating Solar Power onto the Electric Grid - Bridging the Gap between Atmospheric Science, Engineering and Economics

    NASA Astrophysics Data System (ADS)

    Ghonima, M. S.; Yang, H.; Zhong, X.; Ozge, B.; Sahu, D. K.; Kim, C. K.; Babacan, O.; Hanna, R.; Kurtz, B.; Mejia, F. A.; Nguyen, A.; Urquhart, B.; Chow, C. W.; Mathiesen, P.; Bosch, J.; Wang, G.

    2015-12-01

    One of the main obstacles to high penetrations of solar power is the variable nature of solar power generation. To mitigate variability, grid operators have to schedule additional reliability resources, at considerable expense, to ensure that load requirements are met by generation. Thus, despite the decreasing cost of solar PV, the cost of integrating solar power will increase as the penetration of solar resources onto the electric grid increases. There are three principal tools currently available to mitigate variability impacts: (i) flexible generation, (ii) storage, either virtual (demand response) or physical devices, and (iii) solar forecasting. Storage devices are a powerful tool capable of ensuring smooth power output from renewable resources. However, the high cost of storage is prohibitive, and markets are still being designed to leverage their full potential and mitigate their limitations (e.g. empty storage). Solar forecasting provides valuable information on the daily net load profile and upcoming ramps (increasing or decreasing solar power output), thereby giving the grid advance warning to schedule ancillary generation more accurately, or to curtail solar power output. In order to develop solar forecasting as a tool that can be utilized by grid operators, we identified two focus areas: (i) develop solar forecast technology and improve solar forecast accuracy, and (ii) develop forecasts that can be incorporated within existing grid planning and operation infrastructure. The first area requires atmospheric science and engineering research, while the second requires detailed knowledge of energy markets and power engineering. Motivated by this background, we will emphasize area (i) in this talk and provide an overview of recent advancements in solar forecasting, especially in two areas: (a) numerical modeling tools for coastal stratocumulus to improve scheduling in the day-ahead California energy market;
(b) development of a sky imager to provide short-term forecasts (0-20 min ahead) to improve optimization and control of equipment on distribution feeders with high penetration of solar. Leveraging such tools, which have seen extensive use in the atmospheric sciences, supports the development of accurate physics-based solar forecast models. Directions for future research are also provided.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.

    The intersection of technology and economics is where all the Smart Grid benefits arise. If we pursue one without the other, utilities and consumers hardly see any enduring benefit at all, and the investment made in the underlying infrastructure, justified on the basis of those benefits, is wasted. (author)

  5. Higher Education Facilities: The SmartGrid Earns a Doctorate in Economics

    ERIC Educational Resources Information Center

    Tysseling, John C.; Zibelman, Audrey; Freifeld, Allen

    2011-01-01

    Most higher education facilities have already accomplished some measure of a "microgrid" investment with building control systems (BCS), energy management systems (EMS), and advanced metering infrastructure (AMI) installations. Available energy production facilities may include boilers, chillers, cogeneration, thermal storage, electrical…

  6. Hypersonic Threats to the Homeland

    DTIC Science & Technology

    2017-03-28

    facilities. This defensive grid initiative can help stimulate R&D for hyperloop transportation and high-speed railways for the aging infrastructure...Observe, Orient, Decide, Act (OODA) loop. In a tactical situation a warfighter makes decisions as he or she observes the environment; then the

  7. Overset Grid Methods Applied to Nonlinear Potential Flows

    NASA Technical Reports Server (NTRS)

    Holst, Terry; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The objectives of this viewgraph presentation are to develop a Chimera-based potential methodology compatible with OVERFLOW and the OVERFLOW infrastructure, creating options for an advanced problem-solving environment, and to significantly reduce turnaround time for aerodynamic analysis and design (primarily at cruise conditions).

  8. Instant provisioning of wavelength service using quasi-circuit optical burst switching

    NASA Astrophysics Data System (ADS)

    Xie, Hongyi; Li, Yanhe; Zheng, Xiaoping; Zhang, Hanyi

    2006-09-01

    Due to recent outstanding advances in optical networking technology, pervasive Grid computing will be a feasible option in the near future. As a Grid infrastructure, optical networks must be able to handle different Grid traffic patterns with various traffic characteristics as well as different QoS requirements. With current optical switching technology, optical circuit switching (OCS) is suitable for data-intensive Grid applications, while optical burst switching (OBS) is suitable for submitting small Grid jobs. However, some emerging Grid applications, such as multimedia editing, generate high-bandwidth, short-lived traffic. This kind of traffic is not well supported by either OCS or conventional OBS, because of the considerable path setup delay and bandwidth waste in OCS and the inherent loss in OBS. Quasi-Circuit OBS (QCOBS) is proposed in this paper to address this challenge, providing a one-way reserved, nearly lossless, instantly provisioned wavelength service in OBS networks. Simulation results show that QCOBS achieves lossless transmission at low and moderate loads, and very low loss probability at high loads, with proper guard time configuration.
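    As background for the loss figures quoted above, blocking on a wavelength-contention link is commonly approximated with the Erlang-B formula from teletraffic theory. The sketch below is our own illustration of that background calculation, not code or parameters from the paper; it shows how loss probability falls as wavelengths are added for a fixed offered load.

```python
def erlang_b(offered_load, num_channels):
    """Erlang-B blocking probability, via the numerically stable recursion
    B(E, 0) = 1;  B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1))."""
    b = 1.0
    for m in range(1, num_channels + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

# Loss drops sharply with more wavelengths at a fixed offered load of 2 Erlangs.
for wavelengths in (1, 2, 4, 8):
    print(wavelengths, round(erlang_b(2.0, wavelengths), 4))
```

The same recursion is what makes "very low loss at high loads" plausible once enough wavelengths (or a guard-time-protected quasi-circuit) are available.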

  9. Peer-to-peer Cooperative Scheduling Architecture for National Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Matyska, Ludek; Ruda, Miroslav; Toth, Simon

    For some ten years, the Czech National Grid Infrastructure MetaCentrum has used a single central PBSPro installation to schedule jobs across the country. This centralized approach keeps full track of all the clusters, providing support for jobs spanning several sites, an implementation of the fair-share policy, and better overall control of the grid environment. Despite steady progress in stability and resilience to intermittent, very short network failures, the growing number of sites and processors makes this architecture, with its single point of failure and scalability limits, obsolete. As a result, a new scheduling architecture is proposed, which relies on higher autonomy of clusters. It is based on a peer-to-peer network of semi-independent schedulers for each site or even each cluster. Each scheduler accepts jobs for the whole infrastructure, cooperating with other schedulers on the implementation of global policies like central job accounting, fair-share, or submission of jobs across several sites. The scheduling system is integrated with the Magrathea system to support scheduling of virtual clusters, including the setup of their internal network, again eventually spanning several sites. On the other hand, each scheduler is local to one of several clusters and is able to directly control and submit jobs to them even if the connection to the other scheduling peers is lost. In parallel with the change of the overall architecture, the scheduling system itself is being replaced. Instead of PBSPro, chosen originally for its declared support of large-scale distributed environments, the new scheduling architecture is based on the open-source Torque system. The implementation of and support for the most desired properties in PBSPro and Torque are discussed, and the necessary modifications to Torque to support the MetaCentrum scheduling architecture are presented as well.

  10. Renewable Energy for Rural Schools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jimenez, A.C.; Lawand, T.

    2000-11-28

    This publication addresses the need for energy in schools, primarily those schools that are not connected to the electric grid. This guide will apply mostly to primary and secondary schools located in non-electrified areas. In areas where grid power is expensive and unreliable, this guide can be used to examine alternatives to conventional power. The authors' goal is to help the reader accurately assess a school's energy needs, evaluate appropriate and cost-effective technologies to meet those needs, and implement an effective infrastructure to install and maintain the hardware.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ladendorff, Marlene Z.

    Considerable money and effort have been expended by generation, transmission, and distribution entities in North America to implement the North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) standards for the bulk electric system. Assumptions have been made that, as a result of the implementation of the standards, the grid is more cyber secure than it was pre-NERC CIP, but are there data supporting these claims, or only speculation? Has the implementation of the standards had an effect on the grid? A research study developed to address these and other questions produced surprising results.

  12. Co-Simulation Platform For Characterizing Cyber Attacks in Cyber Physical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadi, Mohammad A. H.; Ali, Mohammad Hassan; Dasgupta, Dipankar

    Smart grid is a complex cyber-physical system containing a large number and variety of sources, devices, controllers, and loads. The communication/information infrastructure is the backbone of the smart grid system, through which the different grid components are connected with each other. Therefore, the drawbacks of information-technology-related issues are also becoming a part of the smart grid. Further, the smart grid is also vulnerable to grid-related disturbances. For such a dynamic system, disturbance and intrusion detection is a paramount issue. This paper presents a Simulink- and OPNET-based co-simulated test bed to carry out a cyber intrusion in a cyber network for modern power systems and smart grids. The effect of the cyber intrusion on the physical power system is also presented. The IEEE 30-bus power system model is used to demonstrate the effectiveness of the simulated test bed. The experiments were performed by disturbing the circuit breakers' reclosing time through a cyber attack in the cyber network. Different disturbance situations in the proposed test system are considered, and the results indicate the effectiveness of the proposed co-simulation scheme.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonder, J.; Brooker, A.; Burton, E.

    This presentation discusses current research at NREL on advanced wireless power transfer vehicle and infrastructure analysis. The potential benefits of E-roadway include more electrified driving miles from battery electric vehicles, plug-in hybrid electric vehicles, or even properly equipped hybrid electric vehicles (i.e., more electrified miles could be obtained from a given battery size, or electrified driving miles could be maintained while using smaller and less expensive batteries, thereby increasing cost competitiveness and potential market penetration). The system optimization aspect is key given the potential impact of this technology on the vehicles, the power grid and the road infrastructure.

  14. Performance measurement and modeling of component applications in a high performance computing environment : a case study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armstrong, Robert C.; Ray, Jaideep; Malony, A.

    2003-11-01

    We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and to construct performance models for two of them. Both computational and message-passing performance are addressed.

  15. Service-Oriented Architecture for NVO and TeraGrid Computing

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Miller, Craig; Williams, Roy; Steenberg, Conrad; Graham, Matthew

    2008-01-01

    The National Virtual Observatory (NVO) Extensible Secure Scalable Service Infrastructure (NESSSI) is a Web service architecture and software framework that enables Web-based astronomical data publishing and processing on grid computers such as the National Science Foundation's TeraGrid. Characteristics of this architecture include the following: (1) Services are created, managed, and upgraded by their developers, who are trusted users of computing platforms on which the services are deployed. (2) Service jobs can be initiated by means of Java or Python client programs run on a command line or with Web portals. (3) Access is granted within a graduated security scheme in which the size of a job that can be initiated depends on the level of authentication of the user.

  16. Using CREAM and CEMonitor for job submission and management in the gLite middleware

    NASA Astrophysics Data System (ADS)

    Aiftimiei, C.; Andreetto, P.; Bertocco, S.; Dalla Fina, S.; Dorigo, A.; Frizziero, E.; Gianelle, A.; Marzolla, M.; Mazzucato, M.; Mendez Lorenzo, P.; Miccio, V.; Sgaravatto, M.; Traldi, S.; Zangrando, L.

    2010-04-01

    In this paper we describe the use of the CREAM and CEMonitor services for job submission and management within the gLite Grid middleware. Both CREAM and CEMonitor address one of the most fundamental operations of a Grid middleware, namely job submission and management. Specifically, CREAM is a job management service used for submitting, managing, and monitoring computational jobs. CEMonitor is an event notification framework which can be coupled with CREAM to provide users with asynchronous job status change notifications. Both components have been integrated in the gLite Workload Management System by means of ICE (Interface to CREAM Environment). These software components have been released for production in the EGEE Grid infrastructure and, in the case of the CEMonitor service, also in the OSG Grid. In this paper we report the current status of these services, the results achieved, and the issues that still have to be addressed.

  17. Game-Theoretic strategies for systems of components using product-form utilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.

    Many critical infrastructures are composed of multiple systems of components which are correlated, so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.
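    To make the product-form setup concrete, the toy script below (our own illustrative sketch, not the authors' model) enumerates pure-strategy Nash equilibria for one provider and one attacker whose utilities multiply a survival probability by a value-minus-cost term. The contest-style survival function, the value of 10, and the unit costs are all assumed for illustration.

```python
import itertools

def utilities(d, a, value=10.0):
    """Product-form payoffs: survival probability times a net-value term.
    d = components reinforced, a = components attacked (both >= 1)."""
    survive = d / (d + a)                     # assumed contest success function
    u_provider = survive * (value - d)        # unit reinforcement cost of 1
    u_attacker = (1 - survive) * (value - a)  # unit attack cost of 1
    return u_provider, u_attacker

def pure_nash(max_n=10):
    """Return all (d, a) profiles where neither side gains by a unilateral deviation."""
    equilibria = []
    for d, a in itertools.product(range(1, max_n + 1), repeat=2):
        u_p, u_a = utilities(d, a)
        if all(utilities(d2, a)[0] <= u_p for d2 in range(1, max_n + 1)) and \
           all(utilities(d, a2)[1] <= u_a for a2 in range(1, max_n + 1)):
            equilibria.append((d, a))
    return equilibria

print(pure_nash())  # the symmetric profile (3, 3) is among the equilibria
```

The paper's first-order differential conditions play the role of this brute-force deviation check in the continuous setting.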

  18. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable for smaller sites as well. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs such as EC2 and OCCI.

  19. Heat demand mapping and district heating grid expansion analysis: Case study of Velika Gorica

    NASA Astrophysics Data System (ADS)

    Dorotić, Hrvoje; Novosel, Tomislav; Duić, Neven; Pukšec, Tomislav

    2017-10-01

    Highly efficient cogeneration and district heating systems have significant potential for primary energy savings and the reduction of greenhouse gas emissions through the utilization of waste heat and renewable energy sources. These potentials are still highly underutilized in most European countries. They also play a key role in the planning of future energy systems due to their positive impact on the integration of intermittent renewable energy sources, for example wind and solar in combination with power-to-heat technologies. In order to ensure optimal levels of district heating penetration in an energy system, a comprehensive analysis is necessary to determine the actual demands and the potential energy supply. An economic analysis of grid expansion using GIS-based mapping methods has not been demonstrated so far. This paper presents a heat demand mapping methodology and the use of its output for district heating network expansion analysis. The results show that more than 59% of the heat demand could be covered by district heating in the city of Velika Gorica, which is twice the present share. The most important reason for district heating's unfulfilled potential is the already existing natural gas infrastructure.
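    A common screening step in heat demand mapping of this kind is to compare the linear heat density of a mapped zone (annual heat demand per metre of distribution route) against an economic viability threshold. The sketch below is our own illustration of that screening logic, not the paper's methodology; the 2 MWh/(m·yr) threshold and the sample numbers are assumed placeholders.

```python
def linear_heat_density(annual_demand_mwh, route_length_m):
    """Annual heat demand per metre of district heating route, in MWh/(m*yr)."""
    return annual_demand_mwh / route_length_m

def dh_viable(annual_demand_mwh, route_length_m, threshold_mwh_per_m=2.0):
    """Screen a mapped zone: connection is worthwhile if linear heat density
    exceeds an (assumed) economic threshold in MWh/(m*yr)."""
    return linear_heat_density(annual_demand_mwh, route_length_m) >= threshold_mwh_per_m

# A dense urban block: 3000 MWh/yr served by 1 km of pipe -> 3.0 MWh/(m*yr)
print(dh_viable(3000, 1000))  # True
```

Running such a check per GIS cell is what turns a heat demand map into a grid expansion map.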

  20. Cost- and reliability-oriented aggregation point association in long-term evolution and passive optical network hybrid access infrastructure for smart grid neighborhood area network

    NASA Astrophysics Data System (ADS)

    Cheng, Xiao; Feng, Lei; Zhou, Fanqin; Wei, Lei; Yu, Peng; Li, Wenjing

    2018-02-01

    With the rapid development of the smart grid, the data aggregation point (AP) in the neighborhood area network (NAN) is becoming increasingly important for forwarding information between the home area network and the wide area network. Due to limited budget, no single access technology can meet the ongoing requirements on AP coverage. This paper first introduces a wired and wireless hybrid access network integrating long-term evolution (LTE) and passive optical network (PON) systems for the NAN, which allows a good trade-off among cost, flexibility, and reliability. Then, based on the already existing wireless LTE network, an AP association optimization model is proposed to make the PON serve as many APs as possible, considering both economic efficiency and network reliability. Moreover, given the features of the constraints and variables of this NP-hard problem, a hybrid intelligent optimization algorithm is proposed, combining the genetic, ant colony, and dynamic greedy algorithms. By comparison with other published methods, simulation results verify the performance of the proposed method in improving AP coverage and the performance of the proposed algorithm in terms of convergence.
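    The greedy component of such a hybrid can be pictured as a cost-effectiveness pass: rank candidate APs by customers served per unit of fiber cost and attach them to the PON until the budget is spent. This is our own minimal illustration of that idea, not the paper's algorithm; the AP names, costs, and budget are invented.

```python
def greedy_ap_association(aps, budget):
    """aps: {name: (customers_served, fiber_cost)}.
    Attach APs to the PON in order of customers-per-cost ratio
    until the fiber budget runs out; the rest stay on LTE."""
    chosen, spent = [], 0.0
    ranked = sorted(aps, key=lambda n: aps[n][0] / aps[n][1], reverse=True)
    for name in ranked:
        customers, cost = aps[name]
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, sum(aps[n][0] for n in chosen)

aps = {"A": (100, 10), "B": (90, 30), "C": (40, 5)}
print(greedy_ap_association(aps, budget=20))  # (['A', 'C'], 140)
```

In the paper's hybrid scheme, genetic and ant colony search would then perturb such a greedy solution toward better coverage and reliability.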

  1. Report on Integration of Existing Grid Models for N-R HES Interaction Focused on Balancing Authorities for Sub-hour Penalties and Opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McJunkin, Timothy; Epiney, Aaron; Rabiti, Cristian

    2017-06-01

    This report summarizes the effort in the Nuclear-Renewable Hybrid Energy System (N-R HES) project on the level 4 milestone: integrating existing grid models, at shorter time intervals than those models currently resolve, into the factors for optimization with the Risk Analysis Virtual Environment (RAVEN) and Modelica [1] optimizations and economic analysis that are the focus of the project to date.

  2. ON JOINT DETERMINISTIC GRID MODELING AND SUB-GRID VARIABILITY CONCEPTUAL FRAMEWORK FOR MODEL EVALUATION

    EPA Science Inventory

    The general situation (exemplified in urban areas) where a significant degree of sub-grid variability (SGV) exists in grid models poses problems when comparing grid-based air quality modeling results with observations. Typically, grid models ignore or parameterize processes ...

  3. Multipath Routing of Fragmented Data Transfer in a Smart Grid Environment

    NASA Astrophysics Data System (ADS)

    Borgohain, Tuhin; Borgohain, Amardeep; Borgohain, Rajdeep; Sanyal, Sugata

    2015-02-01

    The purpose of this paper is to present a general survey of the existing communication modes inside a smart grid, the existing security loopholes, and their countermeasures. We then suggest a detailed countermeasure, building upon the Jigsaw-based secure data transfer [8], for enhanced security of the data flow inside the communication system of a smart grid. The paper has been written without consideration of any factor of interoperability between the various security techniques inside a smart grid.

  4. Helix Nebula: Enabling federation of existing data infrastructures and data services to an overarching cross-domain e-infrastructure

    NASA Astrophysics Data System (ADS)

    Lengert, Wolfgang; Farres, Jordi; Lanari, Riccardo; Casu, Francesco; Manunta, Michele; Lassalle-Balier, Gerard

    2014-05-01

    Helix Nebula has established a growing public-private partnership of more than 30 commercial cloud providers, SMEs, and publicly funded research organisations and e-infrastructures. The Helix Nebula strategy is to establish a federated cloud service across Europe. Three high-profile flagships, sponsored by CERN (high energy physics), EMBL (life sciences) and ESA/DLR/CNES/CNR (earth science), have been deployed and extensively tested within this federated environment. The commitments behind these initial flagships have created a critical mass that attracts suppliers and users to the initiative, to work together towards an "Information as a Service" market place. Significant progress has been achieved in implementing the following four programmatic goals (as outlined in the strategic plan, Ref. 1): Goal #1: establish a cloud computing infrastructure for the European Research Area (ERA), serving as a platform for innovation and evolution of the overall infrastructure. Goal #2: identify and adopt suitable policies for trust, security and privacy that can be provided at a European level by the European cloud computing framework and infrastructure. Goal #3: create a light-weight governance structure for the future European cloud computing infrastructure that involves all the stakeholders and can evolve over time as the infrastructure, services and user base grow. Goal #4: define a funding scheme involving the three stakeholder groups (service suppliers, users, EC and national funding agencies) in a public-private partnership model to implement a cloud computing infrastructure that delivers a sustainable business environment adhering to European-level policies. Now, in 2014, a first version of this generic cross-domain e-infrastructure is ready to go into operations, building on a federation of European industry and contributors (data, tools, knowledge, ...). This presentation describes how Helix Nebula is being used in the domain of earth science, focusing on geohazards.
The so-called "Supersite Exploitation Platform" (SSEP) provides scientists with an overarching federated e-infrastructure with very fast access to (i) large volumes of data (EO/non-space data), (ii) computing resources (e.g. hybrid cloud/grid), (iii) processing software (e.g. toolboxes, RTMs, retrieval baselines, visualization routines), and (iv) general platform capabilities (e.g. user management and access control, accounting, information portal, collaborative tools, social networks, etc.). In this federation each data provider remains in full control of the implementation of its data policy. This presentation outlines the architecture (technical and services) supporting very heterogeneous science domains as well as the procedures for newcomers to join the Helix Nebula Market Place. Ref. 1: http://cds.cern.ch/record/1374172/files/CERN-OPEN-2011-036.pdf

  5. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-10-01

    In this article we propose two grid generation methods for global ocean general circulation models. Contrary to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries to those with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and the overall smooth grid cell size transition. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids could potentially achieve the alignment of grid lines to the large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the grids are orthogonal curvilinear, they can be easily utilized by the majority of ocean general circulation models that are based on finite difference and require grid orthogonality. The proposed grid generation algorithms can also be applied to the grid generation for regional ocean modeling where complex land-sea distribution is present.

  6. HTTP as a Data Access Protocol: Trials with XrootD in CMS’s AAA Project

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B. P.; Kcira, D.; Newman, H.; Vlimant, J.; Hendricks, T. W.; CMS Collaboration

    2017-10-01

    The main goal of the project is to demonstrate the ability to use HTTP data federations in a manner analogous to the existing AAA infrastructure of the CMS experiment. An initial testbed at Caltech has been built, and changes in the CMS software (CMSSW) are being implemented in order to improve HTTP support. The testbed consists of a set of machines at the Caltech Tier2 that improve the support infrastructure for data federations at CMS. As a first step, we are building systems that produce and ingest network data transfers of up to 80 Gbps. In collaboration with AAA, HTTP support is enabled at the US redirector and the Caltech testbed. A plugin for CMSSW is being developed for HTTP access based on the DaviX software. It will replace the present fork/exec of curl for HTTP access. In addition, extensions to the XRootD HTTP implementation are being developed to add functionality to it, such as client-based monitoring identifiers. In the future, patches will be developed to better integrate HTTP-over-XRootD with the Open Science Grid (OSG) distribution. First results of the transfer tests using HTTP are presented in this paper, together with details about the initial setup.

  7. Grid-Enabled High Energy Physics Research using a Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Mahmood, Akhtar

    2005-04-01

    At Edinboro University of Pennsylvania, we have built an 8-node, 25-Gflops Beowulf cluster with 2.5 TB of disk storage space to carry out grid-enabled, data-intensive high energy physics research for the ATLAS experiment via Grid3. We will describe how we built and configured our cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes. Once fully functional, the cluster will be part of Grid3 [www.ivdgl.org/grid3]. The current ATLAS simulation grid application models the entire physical process, from the proton-proton collisions and the detector's response to the collision debris through the complete reconstruction of the event from analyses of these responses. The end result is a detailed set of data that simulates a real physical collision event inside a particle detector. Grid is the new IT infrastructure for 21st-century science -- a new computing paradigm that is poised to transform the practice of large-scale data-intensive research in science and engineering. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in high energy physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.

  8. Can developing countries leapfrog the centralized electrification paradigm?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levin, Todd; Thomas, Valerie M.

    Due to the rapidly decreasing costs of small renewable electricity generation systems, centralized power systems are no longer a necessary condition for universal access to modern energy services. Developing countries, where centralized electricity infrastructures are less developed, may be able to adopt these new technologies more quickly. We first review the costs of grid extension and distributed solar home systems (SHSs) as reported by a number of different studies. We then present a general analytic framework for analyzing the choice between extending the grid and implementing distributed solar home systems. Drawing upon reported grid expansion cost data for three specific regions, we demonstrate this framework by determining the electricity consumption levels at which the costs of provision through centralized and decentralized approaches are equivalent in these regions. We then calculate the SHS capital costs necessary for these technologies to provide each of five tiers of energy access, as defined by the United Nations Sustainable Energy for All initiative. Our results suggest that solar home systems can play an important role in achieving universal access to basic energy services. The extent of this role depends on three primary factors: SHS costs, grid expansion costs, and centralized generation costs. Given current technology costs, centralized systems will still be required to enable higher levels of consumption; however, cost reduction trends have the potential to disrupt this paradigm. By looking ahead rather than replicating older infrastructure styles, developing countries can leapfrog to a more distributed electricity service model. (C) 2016 International Energy Initiative. Published by Elsevier Inc. All rights reserved.
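    The breakeven idea in the framework above can be sketched as follows: annualize the fixed cost of each option with a capital recovery factor, then solve for the consumption level at which the two annual cost lines cross. The function names and the numbers below are our own illustrative assumptions, not figures from the study.

```python
def crf(rate, years):
    """Capital recovery factor: converts a capital cost into an
    equivalent annual payment at the given discount rate."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def breakeven_kwh(fixed_grid, fixed_shs, per_kwh_grid, per_kwh_shs):
    """Annual consumption at which annualized grid-extension cost equals
    annualized SHS cost: fixed_grid + m_grid*E = fixed_shs + m_shs*E."""
    return (fixed_grid - fixed_shs) / (per_kwh_shs - per_kwh_grid)

# Illustrative inputs: grid extension has a high fixed but low marginal cost,
# the SHS the reverse, so the grid wins only above the breakeven consumption.
print(round(crf(0.10, 20), 4))               # about 0.1175
print(breakeven_kwh(500, 100, 0.10, 0.90))   # about 500 kWh per year
```

Below the breakeven level a solar home system is the cheaper provision route; above it, grid extension is, which is the comparison the study carries out with region-specific cost data.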

  9. Virtual Control Systems Environment (VCSE)

    ScienceCinema

    Atkins, Will

    2018-02-14

    Will Atkins, a Sandia National Laboratories computer engineer discusses cybersecurity research work for process control systems. Will explains his work on the Virtual Control Systems Environment project to develop a modeling and simulation framework of the U.S. electric grid in order to study and mitigate possible cyberattacks on infrastructure.

  10. Engaging in cross-border power exchange and trade via the Arab Gulf states power grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fraser, Hamish; Al-Asaad, Hassan K.

    2008-12-15

    When construction is complete in 2010, the interconnector being established among the Gulf states will enhance their electricity infrastructure while increasing the reliability and security of power supply. The interconnector will also foster exchanges of energy and facilitate cross-border trade. (author)

  11. Large-scale data analysis of power grid resilience across multiple US service regions

    NASA Astrophysics Data System (ADS)

    Ji, Chuanyi; Wei, Yun; Mei, Henry; Calzada, Jorge; Carey, Matthew; Church, Steve; Hayes, Timothy; Nugent, Brian; Stella, Gregory; Wallace, Matthew; White, Joe; Wilcox, Robert

    2016-05-01

    Severe weather events frequently result in large-scale power failures, affecting millions of people for extended durations. However, the lack of comprehensive, detailed failure and recovery data has impeded large-scale resilience studies. Here, we analyse data from four major service regions representing Upstate New York during Super Storm Sandy and daily operations. Using non-stationary spatiotemporal random processes that relate infrastructural failures to recoveries and cost, our data analysis shows that local power failures have a disproportionately large non-local impact on people (that is, the top 20% of failures interrupted 84% of services to customers). A large number (89%) of small failures, represented by the bottom 34% of customers and commonplace devices, resulted in 56% of the total cost of 28 million customer interruption hours. Our study shows that extreme weather does not cause, but rather exacerbates, existing vulnerabilities, which are obscured in daily operations.
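    The concentration statistic quoted above (the top 20% of failures interrupting 84% of services) is a Lorenz-curve-style summary that can be computed as below; the sample failure sizes are invented for illustration and are not the study's data.

```python
# Illustrative sketch of the concentration statistic quoted above: the share
# of customer interruptions caused by the largest fraction of failures.

def top_share(failure_sizes, top_fraction=0.2):
    """Fraction of total customer interruptions attributable to the top
    `top_fraction` of failures, ranked by size (Lorenz-curve style)."""
    sizes = sorted(failure_sizes, reverse=True)
    k = max(1, int(len(sizes) * top_fraction))
    return sum(sizes[:k]) / sum(sizes)

# Heavy-tailed toy data: a few huge outages, many small ones.
sample = [5000, 2000, 800] + [10] * 97
share = top_share(sample)  # most interruptions come from the top 20%
```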

  12. Processing of the WLCG monitoring data using NoSQL

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.

    2014-06-01

    The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.
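    The kind of aggregation such a monitoring pipeline performs can be sketched generically; the event fields and site names below are hypothetical, not the Experiment Dashboard's actual schema.

```python
# Generic sketch of the aggregation step a job-monitoring dashboard performs
# over a stream of events, of the kind a NoSQL map-reduce job would run at
# scale. Field names and site names are made up for illustration.
from collections import defaultdict

def summarize_jobs(events):
    """Count jobs per (site, status) pair."""
    counts = defaultdict(int)
    for e in events:
        counts[(e["site"], e["status"])] += 1
    return dict(counts)

events = [
    {"site": "CERN-PROD", "status": "done"},
    {"site": "CERN-PROD", "status": "failed"},
    {"site": "FZK-LCG2", "status": "done"},
    {"site": "CERN-PROD", "status": "done"},
]
summary = summarize_jobs(events)  # → {("CERN-PROD", "done"): 2, ...}
```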

  13. The caBIG Terminology Review Process

    PubMed Central

    Cimino, James J.; Hayamizu, Terry F.; Bodenreider, Olivier; Davis, Brian; Stafford, Grace A.; Ringwald, Martin

    2009-01-01

    The National Cancer Institute (NCI) is developing an integrated biomedical informatics infrastructure, the cancer Biomedical Informatics Grid (caBIG®), to support collaboration within the cancer research community. A key part of the caBIG architecture is the establishment of terminology standards for representing data. In order to evaluate the suitability of existing controlled terminologies, the caBIG Vocabulary and Data Elements Workspace (VCDE WS) working group has developed a set of criteria that serve to assess a terminology's structure, content, documentation, and editorial process. This paper describes the evolution of these criteria and the results of their use in evaluating four standard terminologies: the Gene Ontology (GO), the NCI Thesaurus (NCIt), the Common Terminology for Adverse Events (known as CTCAE), and the laboratory portion of the Logical Objects, Identifiers, Names and Codes (LOINC). The resulting caBIG criteria are presented as a matrix that may be applicable to any terminology standardization effort. PMID:19154797

  14. Integration of Cloud resources in the LHCb Distributed Computing

    NASA Astrophysics Data System (ADS)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific DIRAC extension (LHCbDirac) as an interware for its Distributed Computing, which so far seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures: it is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment already deployed IaaS in production during 2013. With this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs, based on the idea of a Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  15. 44 CFR 201.7 - Tribal Mitigation Plans.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of existing and future buildings, infrastructure, and critical facilities located in the identified... effects of each hazard, with particular emphasis on new and existing buildings and infrastructure. (iii...

  16. CALS Infrastructure Analysis. Draft. Volume 21

    DOT National Transportation Integrated Search

    1990-03-01

    This executive overview to the DoD CALS Infrastructure Analysis Report summarizes the Components' current effort to modernize the DoD technical data infrastructure. This infrastructure includes all existing and planned capabilities to acquire, manage...

  17. Regulatory Incentives and Disincentives for Utility Investments in Grid Modernization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kihm, Steve; Beecher, Janice; Lehr, Ronald L.

    Electric power is America's most capital-intensive industry, with more than $100 billion invested each year in energy infrastructure. Investment needs are likely to grow as electric utilities make power systems more reliable and resilient, deploy advanced digital technologies, and facilitate new services to meet some consumers' expectations for greater choice and control. But do current regulatory approaches provide the appropriate incentives for grid modernization investments? This report presents three perspectives: - Financial analyst Steve Kihm begins by explaining that any major investor-owned electric utility that wants to raise capital today can do so at a reasonable cost. The question is whether utility managers want to raise capital for grid modernization. Specifically, they look for investments that create the most value for their existing shareholders. In cases where grid modernization investments are not the best choice in terms of shareholder value, Kihm describes shareholder incentive mechanisms that regulators could consider to encourage such investments when they are in the public interest. - From an institutional perspective, Dr. Janice Beecher finds that the traditional rate-base/rate-of-return regulatory model provides powerful incentives for utilities to pursue investments, cost control, efficiency and even innovation, and it is well suited to the policy objectives of grid modernization. Prudence of grid modernization investments (fair returns) depends on careful evaluation of the specific asset, and any special incentives (bonus returns) should be used only if they promote economic efficiency consistent with the core goals of economic regulation. According to Beecher, realizing the promises of grid modernization depends on effective implementation of the traditional regulatory model and ratemaking tools to serve the public interest.
    - Conversely, former commissioner and clean energy consultant Ron Lehr says that rapid electric industry changes require a better alignment of utility investment incentives with the changes challenging the electricity sector, emerging grid modernization options and benefits, and public policies. For example, investor-owned utilities typically have an incentive to make capital investments, but rarely to employ expense-based solutions, since utilities do not earn profits on expenses. Further, Lehr cites a variety of factors that stand in the way of creating well-targeted and well-aligned utility incentives, including litigated regulatory processes. These may be a poor choice for finding the right balance among competing interests, establishing rules of prospective application, justifying demonstrations of new technologies and approaches to meeting emerging consumer demands, and keeping pace with rapid change.
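    The capital bias that Lehr describes follows directly from the rate-base arithmetic of cost-of-service regulation, and can be illustrated with a minimal sketch; all dollar figures and the 9% allowed return are hypothetical.

```python
# Illustration of the capital bias under rate-base/rate-of-return regulation
# discussed above: a utility earns its allowed return on capitalized
# investment (the rate base) but merely passes expenses through to
# customers. All numbers are hypothetical.

def revenue_requirement(rate_base, allowed_return, expenses):
    """Revenue the regulator lets the utility collect from ratepayers."""
    return expenses + rate_base * allowed_return

def shareholder_profit(rate_base, allowed_return):
    """Only the return on rate base flows to shareholders."""
    return rate_base * allowed_return

# Two ways to solve the same grid problem: a $100M capital project vs. an
# expense-based fix, at a 9% allowed return.
capex_profit = shareholder_profit(rate_base=100e6, allowed_return=0.09)
opex_profit = shareholder_profit(rate_base=0.0, allowed_return=0.09)
# The capital project earns $9M/year for shareholders; the expense-based
# solution earns nothing - hence the incentive Lehr points out.
```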

  18. Leveraging Our Expertise To Inform International RE Roadmaps | Energy

    Science.gov Websites

    NREL leveraged its expertise to inform international renewable energy roadmaps, including clean energy targets to support Mexico's renewable energy goal. NREL and its Mexico partners developed a roadmap outlining the steps institutions need to take to determine how the electricity infrastructure and systems must change to accommodate high levels of renewables. The roadmap focuses on analysis methodologies, including grid expansion.

  20. Hydrogen Infrastructure Testing and Research Facility Video (Text Version)

    Science.gov Websites

    This text version of the Hydrogen Infrastructure Testing and Research Facility video describes NREL's work on grid integration, continuous code improvement, fuel cell vehicle operation, and renewable hydrogen stations. NREL's research on hydrogen safety provides guidance for the safe operation, handling, and use of hydrogen, supporting standards development and the testing of fuel cell and hydrogen components for operation and safety. Building on NREL's Wind-to...

  1. FY2017 Electrification Annual Progress Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    During fiscal year 2017 (FY 2017), the U.S. Department of Energy (DOE) Vehicle Technologies Office (VTO) funded early-stage research & development (R&D) projects that address batteries and electrification of the U.S. transportation sector. The VTO Electrification Sub-Program is composed of Electric Drive Technologies and Grid Integration activities. The Electric Drive Technologies group conducts R&D projects that advance electric motor and power electronics technologies. The Grid and Charging Infrastructure group conducts R&D projects that advance grid modernization and electric vehicle charging technologies. This document presents a brief overview of the Electrification Sub-Program and progress reports for its R&D projects. Each of the progress reports provides a project overview and highlights of the technical results that were accomplished in FY 2017.

  2. The vulnerabilities of the power-grid system: renewable microgrids as an alternative source of energy.

    PubMed

    Meyer, Victor; Myres, Charles; Bakshi, Nitin

    2010-03-01

    The objective of this paper is to analyse the vulnerabilities of current power-grid systems and to propose alternatives to using fossil fuel power generation and infrastructure solutions in the form of microgrids, particularly those from renewable energy sources. One of the key potential benefits of microgrids, apart from their inherent sustainability and ecological advantages, is increased resilience. The analysis is targeted towards the context of business process outsourcing in India. However, much of the research on vulnerabilities has been derived from the USA and as such many of the examples cite vulnerabilities in the USA and other developed economies. Nevertheless, the vulnerabilities noted are to a degree common to all grid systems, and so the analysis may be more broadly applicable.

  3. Communication Security for Control Systems in Smart Grid

    NASA Astrophysics Data System (ADS)

    Robles, Rosslin John; Kim, Tai-Hoon

    As an example of a control system, Supervisory Control and Data Acquisition (SCADA) systems can be relatively simple, such as one that monitors the environmental conditions of a small office building, or incredibly complex, such as a system that monitors all the activity in a nuclear power plant or the activity of a municipal water system. SCADA systems are basically process control systems, designed to automate systems such as traffic control, power grid management, and waste processing. Connecting SCADA to the Internet can provide many advantages in terms of control, data viewing and generation. SCADA infrastructures such as the electricity grid can also be part of a Smart Grid, but connecting SCADA to a public network raises many security issues. To address these issues, a SCADA communication security solution is proposed.

  4. Radiosurgery planning supported by the GEMSS grid.

    PubMed

    Fenner, J W; Mehrem, R A; Ganesan, V; Riley, S; Middleton, S E; Potter, K; Walton, L

    2005-01-01

    GEMSS (Grid Enabled Medical Simulation Services IST-2001-37153) is an EU project funded to provide a test bed for Grid-enabled health applications. Its purpose is evaluation of Grid computing in the health sector. The health context imposes particular constraints on Grid infrastructure design, and it is this that has driven the feature set of the middleware. In addition to security, the time critical nature of health applications is accommodated by a Quality of Service component, and support for a well defined business model is also included. This paper documents experience of a GEMSS compliant radiosurgery application running within the Medical Physics department at the Royal Hallamshire Hospital in the UK. An outline of the Grid-enabled RAPT radiosurgery application is presented and preliminary experience of its use in the hospital environment is reported. The performance of the software is compared against GammaPlan (an industry standard) and advantages/disadvantages are highlighted. The RAPT software relies on features of the GEMSS middleware that are integral to the success of this application, and together they provide a glimpse of an enabling technology that can impact upon patient management in the 21st century.

  5. Grid Application Meta-Repository System: Repository Interconnectivity and Cross-domain Application Usage in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Tudose, Alexandru; Terstyansky, Gabor; Kacsuk, Peter; Winter, Stephen

    Grid Application Repositories vary greatly in terms of access interface, security system, implementation technology, communication protocols and repository model. This diversity has become a significant limitation in terms of interoperability and inter-repository access. This paper presents the Grid Application Meta-Repository System (GAMRS) as a solution that offers better options for the management of Grid applications. GAMRS proposes a generic repository architecture, which allows any Grid Application Repository (GAR) to be connected to the system independent of their underlying technology. It also presents applications in a uniform manner and makes applications from all connected repositories visible to web search engines, OGSI/WSRF Grid Services and other OAI (Open Archive Initiative)-compliant repositories. GAMRS can also function as a repository in its own right and can store applications under a new repository model. With the help of this model, applications can be presented as embedded in virtual machines (VM) and therefore they can be run in their native environments and can easily be deployed on virtualized infrastructures allowing interoperability with new generation technologies such as cloud computing, application-on-demand, automatic service/application deployments and automatic VM generation.

  6. Provably secure time distribution for the electric grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith IV, Amos M; Evans, Philip G; Williams, Brian P

    We demonstrate a quantum time distribution (QTD) method that combines the precision of optical timing techniques with the integrity of quantum key distribution (QKD). Critical infrastructure is dependent on microprocessor- and programmable-logic-based monitoring and control systems. The distribution of timing information across the electric grid is accomplished by GPS signals, which are known to be vulnerable to spoofing. We demonstrate a method for synchronizing remote clocks based on the arrival time of photons in a modified QKD system. This has the advantage that the signal can be verified by examining the quantum states of the photons, similar to QKD.
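    The record does not detail the QTD protocol itself, but photon-arrival clock synchronization builds on the same two-way time-transfer arithmetic used by classical schemes such as NTP; the sketch below shows that generic calculation, with made-up timestamps.

```python
# Classical two-way time-transfer arithmetic that arrival-time-based clock
# synchronization builds on. t1/t4 are timestamps on the local clock,
# t2/t3 on the remote clock; this is a generic sketch, not the paper's
# QTD protocol.

def clock_offset_and_delay(t1, t2, t3, t4):
    """NTP-style estimate of remote clock offset and one-way path delay,
    assuming a symmetric path."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = ((t4 - t1) - (t3 - t2)) / 2.0
    return offset, delay

# Toy scenario: remote clock ahead by 5 units, symmetric path delay of 2.
t1 = 100.0   # request sent (local clock)
t2 = 107.0   # request received (remote clock)
t3 = 108.0   # reply sent (remote clock)
t4 = 105.0   # reply received (local clock)
offset, delay = clock_offset_and_delay(t1, t2, t3, t4)  # → (5.0, 2.0)
```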

  7. Residential Customer Enrollment in Time-based Rate and Enabling Technology Programs: Smart Grid Investment Grant Consumer Behavior Study Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Todd, Annika; Cappers, Peter; Goldman, Charles

    2013-05-01

    The U.S. Department of Energy’s (DOE’s) Smart Grid Investment Grant (SGIG) program is working with a subset of the 99 SGIG projects undertaking Consumer Behavior Studies (CBS), which examine the response of mass market consumers (i.e., residential and small commercial customers) to time-varying electricity prices (referred to herein as time-based rate programs) in conjunction with the deployment of advanced metering infrastructure (AMI) and associated technologies. The effort presents an opportunity to advance the electric industry’s understanding of consumer behavior.

  8. Sustainable Energy in Remote Indonesian Grids. Accelerating Project Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirsch, Brian; Burman, Kari; Davidson, Carolyn

    2015-06-30

    Sustainable Energy for Remote Indonesian Grids (SERIG) is a U.S. Department of Energy (DOE) funded initiative to support Indonesia’s efforts to develop clean energy and increase access to electricity in remote locations throughout the country. With DOE support, the SERIG implementation team consists of the National Renewable Energy Laboratory (NREL) and Winrock International’s Jakarta, Indonesia office. Through technical assistance that includes techno-economic feasibility evaluation for selected projects, government-to-government coordination, infrastructure assessment, stakeholder outreach, and policy analysis, SERIG seeks to provide opportunities for individual project development and a collective framework for national replication.

  9. Using fleets of electric-drive vehicles for grid support

    NASA Astrophysics Data System (ADS)

    Tomić, Jasna; Kempton, Willett

    Electric-drive vehicles can provide power to the electric grid when they are parked (vehicle-to-grid power). We evaluated the economic potential of two utility-owned fleets of battery-electric vehicles to provide power for a specific electricity market, regulation, in four US regional regulation services markets. The two battery-electric fleet cases are: (a) 100 Th!nk City vehicles and (b) 252 Toyota RAV4s. Important variables are: (a) the market value of regulation services, (b) the power capacity (kW) of the electrical connections and wiring, and (c) the energy capacity (kWh) of the vehicle's battery. With a few exceptions when the annual market value of regulation was low, we find that vehicle-to-grid power for regulation services is profitable across all four markets analyzed. Assuming no more than current Level 2 charging infrastructure (6.6 kW), the annual net profit for the Th!nk City fleet is from US$7,000 to US$70,000 providing regulation down only. For the RAV4 fleet the annual net profit ranges from US$24,000 to US$260,000 providing regulation down and up. Vehicle-to-grid power could provide a significant revenue stream that would improve the economics of grid-connected electric-drive vehicles and further encourage their adoption. It would also improve the stability of the electrical grid.
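    The fleet revenue arithmetic described above can be sketched roughly as follows; the regulation price, plug-in hours, and cost figures are hypothetical stand-ins, not the paper's inputs.

```python
# Back-of-the-envelope sketch of the vehicle-to-grid regulation revenue
# calculation described above. All prices and parameters are hypothetical.

def annual_regulation_profit(n_vehicles, kw_per_vehicle, price_per_mw_h,
                             hours_plugged_per_year, annual_cost_per_vehicle):
    """Net annual profit of a fleet selling regulation capacity.

    Revenue = capacity (MW) * capacity price ($/MW-h) * plugged-in hours;
    costs lump together wiring, wear, and other per-vehicle expenses.
    """
    capacity_mw = n_vehicles * kw_per_vehicle / 1000.0
    revenue = capacity_mw * price_per_mw_h * hours_plugged_per_year
    cost = n_vehicles * annual_cost_per_vehicle
    return revenue - cost

# 100 vehicles on 6.6 kW Level 2 connections, a $30/MW-h regulation price,
# plugged in 18 h/day, and $500/vehicle/year in costs.
profit = annual_regulation_profit(100, 6.6, 30.0, 18 * 365, 500.0)
```

    With these made-up inputs the fleet nets roughly $80,000/year, the same order of magnitude as the ranges reported in the record.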

  10. Scientific Grid activities and PKI deployment in the Cybermedia Center, Osaka University.

    PubMed

    Akiyama, Toyokazu; Teranishi, Yuuichi; Nozaki, Kazunori; Kato, Seiichi; Shimojo, Shinji; Peltier, Steven T; Lin, Abel; Molina, Tomas; Yang, George; Lee, David; Ellisman, Mark; Naito, Sei; Koike, Atsushi; Matsumoto, Shuichi; Yoshida, Kiyokazu; Mori, Hirotaro

    2005-10-01

    The Cybermedia Center (CMC), Osaka University, is a research institution that offers knowledge and technology resources obtained from advanced research in the areas of large-scale computation, information and communication, multimedia content and education. Currently, CMC is involved in Japanese national Grid projects such as JGN II (Japan Gigabit Network), NAREGI and BioGrid. Not limited to Japan, CMC also actively takes part in international activities such as PRAGMA. In these projects and international collaborations, CMC has developed a Grid system that allows scientists to perform their analysis by remote-controlling the world's largest ultra-high-voltage electron microscope, located at Osaka University. In another undertaking, CMC has assumed a leadership role in BioGrid by sharing its experience and knowledge of system development in the area of biology. In this paper, we give an overview of the BioGrid project and introduce the progress of the Telescience unit, which collaborates with the Telescience Project led by the National Center for Microscopy and Imaging Research (NCMIR). Furthermore, CMC collaborates with seven computing centers in Japan, NAREGI and the National Institute of Informatics to deploy a PKI-based authentication infrastructure. The current status of this project and future collaboration with Grid projects are delineated in this paper.

  11. Failure probability analysis of optical grid

    NASA Astrophysics Data System (ADS)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    Optical grid, the integrated computing environment based on optical networks, is expected to be an efficient infrastructure for supporting advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems are applied extensively to data processing, the application failure probability has become an important indicator of application quality and an important consideration for operators. This paper presents a task-based method for analyzing the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the effectiveness of different backup strategies in reducing it can be compared, so that the differing requirements of different clients can each be satisfied. In an optical grid, when an application based on a DAG (directed acyclic graph) is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). This new application scheduling algorithm guarantees the required failure probability while improving network resource utilization, realizing a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in an optical grid.
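    The task-based failure-probability arithmetic can be illustrated with a minimal sketch, assuming independent task failures and simple replication-style backups; the probabilities are made up for illustration.

```python
# Sketch of task-based application failure probability, as discussed above:
# the application fails if any task fails, and each backup replica reduces
# a task's failure probability multiplicatively (independence assumed).

def task_failure_with_backups(p_fail, n_backups=0):
    """A task fails only if the primary and every backup replica fail."""
    return p_fail ** (n_backups + 1)

def application_failure_probability(task_probs, backups=0):
    """Application fails unless every task succeeds."""
    p_success = 1.0
    for p in task_probs:
        p_success *= 1.0 - task_failure_with_backups(p, backups)
    return 1.0 - p_success

tasks = [0.01, 0.02, 0.005]      # per-task failure probabilities
no_backup = application_failure_probability(tasks)      # ≈ 0.0347
one_backup = application_failure_probability(tasks, 1)  # far smaller
```

    Comparing the two results shows the trade-off the scheduling algorithm navigates: backups sharply cut the failure probability but consume extra resources.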

  12. HappyFace as a generic monitoring tool for HEP experiments

    NASA Astrophysics Data System (ADS)

    Kawamura, Gen; Magradze, Erekle; Musheghyan, Haykuhi; Quadt, Arnulf; Rzehorz, Gerhard

    2015-12-01

    The importance of monitoring on HEP grid computing systems is growing due to a significant increase in their complexity. Computer scientists and administrators have been studying and building effective ways to gather information on, and clarify the status of, each local grid infrastructure. The HappyFace project aims at making the above-mentioned workflow possible. It aggregates, processes and stores the information and the status of different HEP monitoring resources in the common HappyFace database, and displays the information and the status through a single interface. However, this model of HappyFace relied on monitoring resources which are always under development in the HEP experiments. Consequently, HappyFace needed direct access methods to the grid application and grid service layers in the different HEP grid systems. To cope with this issue, we use a reliable HEP software repository, the CernVM File System. We propose a new implementation and architecture of HappyFace, the so-called grid-enabled HappyFace. It allows its basic framework to connect directly to the grid user applications and the grid collective services, without involving the monitoring resources in the HEP grid systems. This approach gives HappyFace several advantages: portability, to provide an independent and generic monitoring system among the HEP grid systems; functionality, to allow users to perform various diagnostics in the individual HEP grid systems and grid sites; and flexibility, to make HappyFace beneficial and open for various distributed grid computing environments. Different grid-enabled modules, to connect to the Ganga job monitoring system and to check the performance of grid transfers among the grid sites, have been implemented.
The new HappyFace system has been successfully integrated and now it displays the information and the status of both the monitoring resources and the direct access to the grid user applications and the grid collective services.

  13. Autonomous Energy Grids: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroposki, Benjamin D; Dall-Anese, Emiliano; Bernstein, Andrey

    With much higher levels of distributed energy resources - variable generation, energy storage, and controllable loads, to mention just a few - being deployed into power systems, the data deluge from pervasive metering of energy grids, and the shaping of multi-level ancillary-service markets, current frameworks for monitoring, controlling, and optimizing large-scale energy systems are becoming increasingly inadequate. This position paper outlines the concept of 'Autonomous Energy Grids' (AEGs): systems that are supported by a scalable, reconfigurable, and self-organizing information and control infrastructure, can be extremely secure and resilient (self-healing), and self-optimize in real time for economic and reliable performance while systematically integrating energy in all forms. AEGs rely on scalable, self-configuring cellular building blocks that ensure that each 'cell' can self-optimize when isolated from a larger grid, as well as partake in the optimal operation of a larger grid when interconnected. To realize this vision, this paper describes the concepts and key research directions in the broad domains of optimization theory, control theory, big-data analytics, and complex system modeling that will be necessary to realize the AEG vision.

  14. Enabling Object Storage via shims for Grid Middleware

    NASA Astrophysics Data System (ADS)

    Cadellin Skipsey, Samuel; De Witt, Shaun; Dewhurst, Alastair; Britton, David; Roy, Gareth; Crooks, David

    2015-12-01

    The Object Store model has quickly become the basis of most commercially successful mass storage infrastructure, backing so-called "Cloud" storage such as Amazon S3, but also underlying the implementation of most parallel distributed storage systems. Many of the assumptions in Object Store design are similar, but not identical, to concepts in the design of Grid Storage Elements, although the requirement for "POSIX-like" filesystem structures on top of SEs makes the disjunction seem larger. As modern Object Stores provide many features that most Grid SEs do not (block-level striping, parallel access, automatic file repair, etc.), it is of interest to see how easily we can provide interfaces to typical Object Stores via plugins and shims for Grid tools, and how well experiments can adapt their data models to them. We present evaluation of, and first-deployment experiences with, (for example) Xrootd-Ceph interfaces for direct object-store access, as part of an initiative within GridPP[1] hosted at RAL. Additionally, we discuss the tradeoffs and experience of developing plugins for the currently popular Ceph parallel distributed filesystem for the GFAL2 access layer, at Glasgow.

  15. Computation of Asteroid Proper Elements on the Grid

    NASA Astrophysics Data System (ADS)

    Novakovic, B.; Balaz, A.; Knezevic, Z.; Potocnik, M.

    2009-12-01

    A procedure for gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time-consuming computations and make them more efficient is justified by the large increase of observational data expected from the next generation of all-sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids have been derived since the Grid infrastructure was first used for this purpose, and the average time for catalog updates is significantly shortened with respect to the time needed with stand-alone workstations. We also present the basics of Grid computing, the concepts of Grid middleware and its Workload Management System. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of a comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for future work.

  16. Connected Vehicle Infrastructure : Deployment and Funding Overview

    DOT National Transportation Integrated Search

    2018-01-01

    This report reviews existing and proposed legislation relevant to connected vehicle infrastructure (CVI) implementation, identifies existing funding mechanisms for CVI implementation, reviews CVI pilot programs and case studies, and provides an overv...

  17. A Messaging Infrastructure for WLCG

    NASA Astrophysics Data System (ADS)

    Casey, James; Cons, Lionel; Lapka, Wojciech; Paladin, Massimo; Skaburskas, Konstantin

    2011-12-01

    During the EGEE-III project operational tools such as SAM, Nagios, Gridview, the regional Dashboard and GGUS moved to a communication architecture based on ActiveMQ, an open-source enterprise messaging solution. LHC experiments, in particular ATLAS, developed prototypes of systems using the same messaging infrastructure, validating the system for their use-cases. In this paper we describe the WLCG messaging use cases and outline an improved messaging architecture based on the experience gained during the EGEE-III period. We show how this provides a solid basis for many applications, including the grid middleware, to improve their resilience and reliability.

  18. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems.
Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3) Coupling large-scale computing and data systems to scientific and engineering instruments (e.g., real-time interaction with experiments through real-time data analysis and interpretation presented to the experimentalist in ways that allow direct interaction with the experiment, instead of just with instrument control); (4) Highly interactive, augmented reality and virtual reality remote collaborations (e.g., Ames / Boeing Remote Help Desk providing field maintenance use of coupled video and NDI to a remote, on-line airframe structures expert who uses this data to index into detailed design databases, and returns 3D internal aircraft geometry to the field); (5) Single computational problems too large for any single system (e.g., the rotorcraft reference calculation). Grids also have the potential to provide pools of resources that could be called on in extraordinary / rapid response situations (such as disaster response) because they can provide common interfaces and access mechanisms, standardized management, and uniform user authentication and authorization, for large collections of distributed resources (whether or not they normally function in concert). IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focused primarily on two types of users: the scientist / design engineer whose primary interest is problem solving (e.g.
determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user is the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. The results of the analysis of the needs of these two types of users provide a broad set of requirements that gives rise to a general set of required capabilities. The IPG project is intended to address all of these requirements. In some cases the required computing technology exists, and in some cases it must be researched and developed. The project is using available technology to provide a prototype set of capabilities in a persistent distributed computing testbed. Beyond this, there are required capabilities that are not immediately available, and whose development spans the range from near-term engineering development (one to two years) to much longer term R&D (three to six years). Additional information is contained in the original.

  19. ATLAS user analysis on private cloud resources at GoeGrid

    NASA Astrophysics Data System (ADS)

    Glaser, F.; Nadal Serrano, J.; Grabowski, J.; Quadt, A.

    2015-12-01

    User analysis job demands can exceed available computing resources, especially before major conferences, and ATLAS physics results can potentially be delayed by the lack of resources. For these reasons, cloud research and development activities are now included in the skeleton of the ATLAS computing model, which has been extended to use resources from commercial and private cloud providers to satisfy the demands. However, most of these activities are focused on Monte Carlo production jobs, extending the resources at Tier-2. To evaluate the suitability of the cloud-computing model for user analysis jobs, we developed a framework to launch an ATLAS user analysis cluster in a cloud infrastructure on demand and evaluated two solutions. The first solution is entirely integrated into the Grid infrastructure using the same mechanism already in use at Tier-2: a designated PanDA queue is monitored, and additional worker nodes are launched in a cloud environment and assigned to a corresponding HTCondor queue according to demand. Thereby, the use of cloud resources is completely transparent to the user. However, with this approach, submitted user analysis jobs can still suffer a certain delay from waiting time in the queue, and the deployed infrastructure lacks customizability. Therefore, our second solution offers the possibility to easily deploy a fully private, customizable analysis cluster on private cloud resources belonging to the university.

  20. Reducing Stator Current Harmonics for a Doubly-Fed Induction Generator Connected to a Distorted Grid

    DTIC Science & Technology

    2013-09-01

    electric grid voltage harmonics, which is a potential obstacle for implementing stable wind-energy systems. Two existing rotor voltage controllers...speed of the DFIG can be adjusted to optimize turbine efficiency for given wind conditions. A common method for controlling the operating speed is

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magee, Thoman

    The Consolidated Edison, Inc., of New York (Con Edison) Secure Interoperable Open Smart Grid Demonstration Project (SGDP), sponsored by the United States (US) Department of Energy (DOE), demonstrated that the reliability, efficiency, and flexibility of the grid can be improved through a combination of enhanced monitoring and control capabilities using systems and resources that interoperate within a secure services framework. The project demonstrated the capability to shift, balance, and reduce load where and when needed in response to system contingencies or emergencies by leveraging controllable field assets. The range of field assets includes curtailable customer loads, distributed generation (DG), battery storage, electric vehicle (EV) charging stations, building management systems (BMS), home area networks (HANs), high-voltage monitoring, and advanced metering infrastructure (AMI). The SGDP enables the seamless integration and control of these field assets through a common, cyber-secure, interoperable control platform, which integrates a number of existing legacy control and data systems, as well as new smart grid (SG) systems and applications. By integrating advanced technologies for monitoring and control, the SGDP helps target and reduce peak load growth, improves the reliability and efficiency of Con Edison’s grid, and increases the ability to accommodate the growing use of distributed resources. Con Edison is dedicated to lowering costs, improving reliability and customer service, and reducing its impact on the environment for its customers. These objectives also align with the policy objectives of New York State as a whole. To help meet these objectives, Con Edison’s long-term vision for the distribution grid relies on the successful integration and control of a growing penetration of distributed resources, including demand response (DR) resources, battery storage units, and DG.
For example, Con Edison is expecting significant long-term growth of DG. The SGDP enables the efficient, flexible integration of these disparate resources and lays the architectural foundations for future scalability. Con Edison assembled an SGDP team of more than 16 different project partners, including technology vendors and participating organizations, with the Con Edison team providing overall guidance and project management. Project team members are listed in Table 1-1.

  2. Effect of infrastructure design on commons dilemmas in social-ecological system dynamics.

    PubMed

    Yu, David J; Qubbaj, Murad R; Muneepeerakul, Rachata; Anderies, John M; Aggarwal, Rimjhim M

    2015-10-27

    The use of shared infrastructure to direct natural processes for the benefit of humans has been a central feature of human social organization for millennia. Today, more than ever, people interact with one another and the environment through shared human-made infrastructure (the Internet, transportation, the energy grid, etc.). However, there has been relatively little work on how the design characteristics of shared infrastructure affect the dynamics of social-ecological systems (SESs) and the capacity of groups to solve social dilemmas associated with its provision. Developing such understanding is especially important in the context of global change where design criteria must consider how specific aspects of infrastructure affect the capacity of SESs to maintain vital functions in the face of shocks. Using small-scale irrigated agriculture (the most ancient and ubiquitous example of public infrastructure systems) as a model system, we show that two design features related to scale and the structure of benefit flows can induce fundamental changes in qualitative behavior, i.e., regime shifts. By relating the required maintenance threshold (a design feature related to infrastructure scale) to the incentives facing users under different regimes, our work also provides some general guidance on determinants of robustness of SESs under globalization-related stresses.

  3. Effect of infrastructure design on commons dilemmas in social-ecological system dynamics

    PubMed Central

    Yu, David J.; Qubbaj, Murad R.; Muneepeerakul, Rachata; Anderies, John M.; Aggarwal, Rimjhim M.

    2015-01-01

    The use of shared infrastructure to direct natural processes for the benefit of humans has been a central feature of human social organization for millennia. Today, more than ever, people interact with one another and the environment through shared human-made infrastructure (the Internet, transportation, the energy grid, etc.). However, there has been relatively little work on how the design characteristics of shared infrastructure affect the dynamics of social-ecological systems (SESs) and the capacity of groups to solve social dilemmas associated with its provision. Developing such understanding is especially important in the context of global change where design criteria must consider how specific aspects of infrastructure affect the capacity of SESs to maintain vital functions in the face of shocks. Using small-scale irrigated agriculture (the most ancient and ubiquitous example of public infrastructure systems) as a model system, we show that two design features related to scale and the structure of benefit flows can induce fundamental changes in qualitative behavior, i.e., regime shifts. By relating the required maintenance threshold (a design feature related to infrastructure scale) to the incentives facing users under different regimes, our work also provides some general guidance on determinants of robustness of SESs under globalization-related stresses. PMID:26460043

  4. Guest Editorial High Performance Computing (HPC) Applications for a More Resilient and Efficient Power Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu Henry; Tate, Zeb; Abhyankar, Shrirang

    The power grid has been evolving over the last 120 years, but it is seeing more changes in this decade and the next than it has seen over the past century. In particular, the widespread deployment of intermittent renewable generation, smart loads and devices, hierarchical and distributed control technologies, phasor measurement units, energy storage, and widespread usage of electric vehicles will require fundamental changes in methods and tools for the operation and planning of the power grid. The resulting new dynamic and stochastic behaviors will demand the inclusion of more complexity in modeling the power grid. Solving such complex models in the traditional computing environment will be a major challenge. Along with the increasing complexity of power system models, the increasing complexity of smart grid data further adds to the prevailing challenges. In this environment, as the myriad smart sensors and meters in the power grid increase by multiple orders of magnitude, so do the volume and speed of the data. The information infrastructure will need to change drastically to support the exchange of enormous amounts of data, as smart grid applications will need the capability to collect, assimilate, analyze, and process the data to meet real-time grid functions. High performance computing (HPC) holds the promise to enhance these functions, but it is a great resource that has not been fully explored and adopted for the power grid domain.

  5. Expanding access to off-grid rural electrification in Africa: An analysis of community-based micro-grids in Kenya

    NASA Astrophysics Data System (ADS)

    Kirubi, Charles Gathu

    Community micro-grids have played a central role in increasing access to off-grid rural electrification (RE) in many regions of the developing world, notably South Asia. However, the promise of community micro-grids in sub-Sahara Africa remains largely unexplored. My study explores the potential and limits of community micro-grids as options for increasing access to off-grid RE in sub-Sahara Africa. Contextualized in five community micro-grids in rural Kenya, my study is framed through theories of collective action and combines qualitative and quantitative methods, including household surveys, electronic data logging, and regression analysis. The main contribution of my research is demonstrating the circumstances under which community micro-grids can contribute to rural development and the conditions under which individuals are likely to initiate and participate in such projects collectively. With regard to rural development, I demonstrate that access to electricity enables the use of electric equipment and tools by small and micro-enterprises, resulting in significant improvement in productivity per worker (100-200%, depending on the task at hand) and a corresponding growth in income levels on the order of 20-70%, depending on the product made. Access to electricity simultaneously enables and improves delivery of social and business services from a wide range of village-level infrastructure (e.g. schools, markets, water pumps) while improving the productivity of agricultural activities. Moreover, when local electricity users have an ability to charge and enforce cost-reflective tariffs and electricity consumption is closely linked to productive uses that generate incomes, cost recovery is feasible. By their nature---a new technology delivering highly valued services to the elites and other members, limited local experience and expertise, high capital costs---community micro-grids are good candidates for elite domination.
Even so, elite control does not necessarily lead to elite capture. Experiences from different micro-grid settings illustrate the manner in which a coincidence of interest between the elites and the rest of the members, and access to external support, can create incentives and mechanisms to enable community-wide access to scarce services, hence mitigating elite capture. Moreover, access to external support was found to increase the likelihood of participation for relatively poor households. The policy-relevant message from this research is two-fold. In rural areas with suitable sites for micro-hydro power, the potential for community micro-grids appears considerable, to the extent that this option would seem to represent "the road not taken" as far as policies and initiatives aimed at expanding RE are concerned in Kenya and other African countries with comparable settings. However, local participatory initiatives not complemented by external technical assistance run a considerable risk of locking rural households into relatively more costly and poor-quality services. By taking advantage of existing networks of local organizations, including micro-finance agencies, or by building such networks, the government and development partners can make available to local communities the necessary support---financial, technical, or regulatory---essential for efficient design of micro-grids, in addition to facilitating equitable distribution of electricity benefits.

  6. Grid computing enhances standards-compatible geospatial catalogue service

    NASA Astrophysics Data System (ADS)

    Chen, Aijun; Di, Liping; Bai, Yuqi; Wei, Yaxing; Liu, Yang

    2010-04-01

    A catalogue service facilitates sharing, discovery, retrieval, management of, and access to large volumes of distributed geospatial resources, for example data, services, applications, and their replicas on the Internet. Grid computing provides an infrastructure for effective use of computing, storage, and other resources available online. The Open Geospatial Consortium has proposed a catalogue service specification and a series of profiles for promoting the interoperability of geospatial resources. By referring to the profile of the catalogue service for the Web, an innovative information model of a catalogue service is proposed to offer Grid-enabled registry, management, retrieval of and access to geospatial resources and their replicas. This information model extends the e-business registry information model by adopting several geospatial data and service metadata standards—the International Organization for Standardization (ISO)'s 19115/19119 standards and the US Federal Geographic Data Committee (FGDC) and US National Aeronautics and Space Administration (NASA) metadata standards for describing and indexing geospatial resources. In order to select the optimal geospatial resources and their replicas managed by the Grid, the Grid data management service and information service from the Globus Toolkit are closely integrated with the extended catalogue information model. Based on this new model, a catalogue service is implemented first as a Web service. Then, the catalogue service is further developed as a Grid service conforming to Grid service specifications. The catalogue service can be deployed in both the Web and Grid environments and accessed by standard Web services or authorized Grid services, respectively. The catalogue service has been implemented at the George Mason University/Center for Spatial Information Science and Systems (GMU/CSISS), managing more than 17 TB of geospatial data and geospatial Grid services.
This service makes it easy to share geospatial resources and make them interoperable by using Grid technology, and it extends Grid technology into the geoscience communities.

  7. Beacons for supporting lunar landing navigation

    NASA Astrophysics Data System (ADS)

    Theil, Stephan; Bora, Leonardo

    2017-03-01

    Current and future planetary exploration missions involve a landing on the target celestial body. Almost all of these landing missions are currently relying on a combination of inertial and optical sensor measurements to determine the current flight state with respect to the target body and the desired landing site. As soon as an infrastructure at the landing site exists, the requirements as well as conditions change for vehicles landing close to this existing infrastructure. This paper investigates the options for ground-based infrastructure supporting the onboard navigation system and analyzes the impact on the achievable navigation accuracy. For that purpose, the paper starts with an existing navigation architecture based on optical navigation and extends it with measurements to support navigation with ground infrastructure. A scenario of lunar landing is simulated and the provided functions of the ground infrastructure as well as the location with respect to the landing site are evaluated. The results are analyzed and discussed.

  8. Resilient Grid Operational Strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasqualini, Donatella

    Extreme weather-related disturbances, such as hurricanes, are a leading cause of grid outages historically. Although physical asset hardening is perhaps the most common way to mitigate the impacts of severe weather, operational strategies may be deployed to limit the extent of societal and economic losses associated with weather-related physical damage. The purpose of this study is to examine bulk power-system operational strategies that can be deployed to mitigate the impact of severe weather disruptions caused by hurricanes, thereby increasing grid resilience to maintain continuity of critical infrastructure during extreme weather. To estimate the impacts of resilient grid operational strategies, Los Alamos National Laboratory (LANL) developed a framework for hurricane probabilistic risk analysis (PRA). The probabilistic nature of this framework allows us to estimate the probability distribution of likely impacts, as opposed to the worst-case impacts. The project scope does not include strategies that are not operations related, such as transmission system hardening (e.g., undergrounding, transmission tower reinforcement and substation flood protection) and solutions in the distribution network.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agalgaonkar, Yashodhan P.; Hammerstrom, Donald J.

    The Pacific Northwest Smart Grid Demonstration (PNWSGD) was a smart grid technology performance evaluation project that included multiple U.S. states and cooperation from multiple electric utilities in the northwest region. One of the local objectives for the project was to achieve improved distribution system reliability. Toward this end, some PNWSGD utilities automated their distribution systems, including the application of fault detection, isolation, and restoration and advanced metering infrastructure. In light of this investment, a major challenge was to establish a correlation between implementation of these smart grid technologies and actual improvements of distribution system reliability. This paper proposes using Welch’s t-test to objectively determine and quantify whether distribution system reliability is improving over time. The proposed methodology is generic, and it can be implemented by any utility after calculation of the standard reliability indices. The effectiveness of the proposed hypothesis testing approach is demonstrated through comprehensive practical results. It is believed that wider adoption of the proposed approach can help utilities to evaluate a realistic long-term performance of smart grid technologies.
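
    The hypothesis-testing idea above can be sketched briefly. The reliability-index samples below are hypothetical illustrations (not PNWSGD data), and the helper function is an assumed stand-alone implementation of Welch's t-test, which does not require equal variances between the two periods.

```python
# Sketch of the hypothesis test described above. The SAIDI samples are
# hypothetical (hours of outage per customer-year), NOT project data.
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    se2 = va / na + vb / nb  # squared standard error of the mean difference
    t = (statistics.mean(sample_a) - statistics.mean(sample_b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

saidi_before = [2.1, 1.9, 2.4, 2.2, 2.0, 2.3]  # before smart grid deployment
saidi_after = [1.6, 1.8, 1.5, 1.7, 1.4, 1.9]   # after deployment

t, df = welch_t(saidi_before, saidi_after)
print(f"t = {t:.2f}, df = {df:.1f}")
# A large positive t relative to the one-sided critical value for this df
# would indicate a statistically significant reliability improvement.
```

    For these made-up samples the Welch-Satterthwaite degrees of freedom come out near 10, so the computed t would be compared against the corresponding one-sided critical value (about 1.81 at the 5% level).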

  10. GOES-R GS Product Generation Infrastructure Operations

    NASA Astrophysics Data System (ADS)

    Blanton, M.; Gundy, J.

    2012-12-01

    GOES-R GS Product Generation Infrastructure Operations: The GOES-R Ground System (GS) will produce a much larger set of products with higher data density than previous GOES systems. This requires considerably greater compute and memory resources to achieve the necessary latency and availability for these products. Over time, new algorithms could be added and existing ones removed or updated, but the GOES-R GS cannot go down during this time. To meet these GOES-R GS processing needs, the Harris Corporation will implement a Product Generation (PG) infrastructure that is scalable, extensible, modular, and reliable. The core of the PG infrastructure is the Service Based Architecture (SBA), which includes the Distributed Data Fabric (DDF). The SBA is the middleware that encapsulates and manages the science algorithms that generate products. It is divided into three parts: the Executive, which manages and configures the algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. The SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Because algorithms require product data from other algorithms, scalable and reliable messaging is necessary. The SBA uses the DDF to provide this data communication layer between algorithms. The DDF provides an abstract interface over a distributed and persistent multi-layered storage system (memory-based caching above disk-based storage) and an event system that allows algorithm services to know when data is available and to get the data they need to begin processing when they need it.
Together, the SBA and the DDF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
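
    The Executive / Dispatcher / Strategy decomposition described above can be illustrated with a minimal in-process sketch. All class and method names below are assumptions for illustration only, not the actual Harris implementation; the toy "algorithm" simply counts its input samples.

```python
class Strategy:
    """Decides when enough input products are available to run."""
    def __init__(self, required_inputs):
        self.required = set(required_inputs)

    def ready(self, available):
        return self.required <= set(available)

class Dispatcher:
    """Collects input products on behalf of the algorithm."""
    def __init__(self):
        self.products = {}

    def deliver(self, name, data):
        self.products[name] = data

class Executive:
    """Wraps an algorithm as a service and runs it when its Strategy allows."""
    def __init__(self, algorithm, strategy, dispatcher):
        self.algorithm = algorithm
        self.strategy = strategy
        self.dispatcher = dispatcher

    def on_data(self, name, data):
        self.dispatcher.deliver(name, data)
        if self.strategy.ready(self.dispatcher.products):
            return self.algorithm(self.dispatcher.products)
        return None  # still waiting for inputs

# A toy "product algorithm" that needs two inputs before it can run.
cloud_mask = Executive(
    algorithm=lambda p: {"cloud_mask": len(p["radiances"])},
    strategy=Strategy(["radiances", "geolocation"]),
    dispatcher=Dispatcher(),
)
result = cloud_mask.on_data("radiances", [1, 2, 3])  # None: still waiting
result = cloud_mask.on_data("geolocation", [0.0])    # now the algorithm runs
print(result)
```

    Because each service only reacts to data-availability events, an algorithm wrapped this way can be added or replaced without touching the others, which is the plug-and-play property the abstract emphasizes.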

  11. Grid and Cloud for Developing Countries

    NASA Astrophysics Data System (ADS)

    Petitdidier, Monique

    2014-05-01

    The European Grid e-infrastructure has shown the capacity to connect geographically distributed, heterogeneous compute resources in a secure way, taking advantage of a robust and fast REN (Research and Education Network). In many countries, as in Africa, the first step has been to implement a REN, with regional organizations like Ubuntunet, WACREN or ASREN coordinating the development and improvement of the network and its interconnection. Internet connectivity in those countries is still growing rapidly. The second step has been to meet the computing needs of the scientists. Even though many of them have their own laptops, multi-core or not, this is not enough for more and more applications, because they have to face intensive computing due to the large amount of data to be processed and/or complex codes. So far one solution has been to go abroad, to Europe or America, to run large applications, or not to participate in international communities. The Grid is very attractive for connecting geographically distributed heterogeneous resources, aggregating new ones, and creating new sites on the REN with secure access. All users have the same services even if they have no resources in their institute. With faster and more robust Internet they will be able to take advantage of the European Grid. There are different initiatives to provide resources and training, like the UNESCO/HP Brain Gain initiative and EUMEDGrid. Nowadays Clouds have become very attractive, and they are starting to be developed in some countries. In this talk, the challenges for those countries in implementing such e-infrastructures and in developing, in parallel, scientific and technical research and education in the new technologies will be presented, illustrated by examples.

  12. Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation

    NASA Astrophysics Data System (ADS)

    Anisenkov, A. V.

    2018-03-01

    In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the Worldwide LHC Computing Grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and provide a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS Grid Information System (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in the development of a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).

  13. Three-dimensional hybrid grid generation using advancing front techniques

    NASA Technical Reports Server (NTRS)

    Steinbrenner, John P.; Noack, Ralph W.

    1995-01-01

    A new 3-dimensional hybrid grid generation technique has been developed, based on ideas of advancing fronts for both structured and unstructured grids. In this approach, structured grids are first generated independently around individual components of the geometry. Fronts are initialized on these structured grids and advanced outward so that new cells are extracted directly from the structured grids. Employing typical advancing front techniques, cells are rejected if they intersect the existing front or fail other criteria. When no more viable structured cells exist, further cells are advanced in an unstructured manner to close off the overall domain, resulting in a grid of 'hybrid' form. There are two primary advantages to the hybrid formulation. First, generating blocks with limited regard to topology eliminates the bottleneck encountered when a multiple-block system is used to fully encapsulate a domain. Individual blocks may be generated free of external constraints, which significantly reduces the generation time. Secondly, grid points near the body (presumably with high aspect ratio) still maintain a structured (non-triangular or tetrahedral) character, thereby maximizing grid quality and solution accuracy near the surface.
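    The front-intersection rejection step described above can be illustrated in miniature. The sketch below is a hedged 2D analogue (the paper's method is 3D and face-based; the geometry and names here are invented): a candidate cell edge is rejected if it properly crosses an edge already on the front, using a standard orientation (signed-area) test.

```python
# Toy 2D version of advancing-front cell rejection: reject a candidate
# edge that properly intersects an edge already on the front.
def _orient(a, b, c):
    """Signed area of triangle abc: >0 left turn, <0 right turn."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True if segments p1-p2 and q1-q2 properly intersect."""
    d1, d2 = _orient(q1, q2, p1), _orient(q1, q2, p2)
    d3, d4 = _orient(p1, p2, q1), _orient(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

front = [((0.0, 0.0), (1.0, 0.0))]        # one edge already on the front
candidate = ((0.5, -0.5), (0.5, 0.5))     # new edge piercing the front
reject = any(segments_cross(e[0], e[1], candidate[0], candidate[1]) for e in front)
print(reject)  # True: this candidate cell would be rejected
```

    In the paper's 3D setting the same idea applies to candidate cell faces against front faces, with additional quality criteria beyond pure intersection.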

  14. Valuation of Electric Power System Services and Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kintner-Meyer, Michael C. W.; Homer, Juliet S.; Balducci, Patrick J.

    Accurate valuation of existing and new technologies and grid services has been recognized as important to stimulate investment in grid modernization. Clear, transparent, and accepted methods for estimating the total value (i.e., total benefits minus cost) of grid technologies and services are necessary for decision makers to make informed decisions. This applies to home owners interested in distributed energy technologies, as well as to service providers offering new demand response services and utility executives evaluating the best investment strategies to meet their service obligations. However, current valuation methods lack consistency, methodological rigor, and often the capability to identify and quantify the multiple benefits of grid assets or of new and innovative services. Distributed grid assets often have multiple benefits that are difficult to quantify because of the locational context in which they operate. The value is temporally, operationally, and spatially specific; it varies widely by distribution system, transmission network topology, and the composition of the generation mix. The Electric Power Research Institute (EPRI) recently established a benefit-cost framework that proposes a process for estimating the multiple benefits of distributed energy resources (DERs) and the associated costs. This document proposes an extension of that endeavor: a generalizable valuation framework that quantifies the broad set of values for a wide range of technologies (including energy efficiency options, distributed resources, transmission, and generation) as well as policy options that affect all aspects of the entire generation and delivery system of the electricity infrastructure. The extension includes a comprehensive valuation framework of monetizable and non-monetizable benefits of new technologies and services beyond the traditional reliability objectives. The benefits are characterized into the following categories: sustainability, affordability, security, flexibility, and resilience. This document defines the elements of a generic valuation framework and process, as well as system properties and metrics by which value streams can be derived. The valuation process can be applied to determine the marginal value of incremental system changes. This process is typically performed when estimating the value of a particular project (e.g., a merchant generator or a distributed photovoltaic (PV) rooftop installation). Alternatively, the framework can be used when a widespread change in grid operation, generation mix, or transmission topology is to be valued; in this case a comprehensive system analysis is required.

  15. Building Geospatial Web Services for Ecological Monitoring and Forecasting

    NASA Astrophysics Data System (ADS)

    Hiatt, S. H.; Hashimoto, H.; Melton, F. S.; Michaelis, A. R.; Milesi, C.; Nemani, R. R.; Wang, W.

    2008-12-01

    The Terrestrial Observation and Prediction System (TOPS) at NASA Ames Research Center is a modeling system that generates a suite of gridded data products in near real-time that are designed to enhance management decisions related to floods, droughts, forest fires, human health, as well as crop, range, and forest production. While these data products introduce great possibilities for assisting management decisions and informing further research, realization of their full potential is complicated by their sheer volume and by the need for an infrastructure for remotely browsing, visualizing, and analyzing the data. In order to address these difficulties we have built an OGC-compliant WMS and WCS server based on an open source software stack that provides standardized access to our archive of data. This server is built using the open source Java library GeoTools, which achieves efficient I/O and image rendering through Java Advanced Imaging. We developed spatio-temporal raster management capabilities using the PostGrid raster indexation engine. We provide visualization and browsing capabilities through a customized Ajax web interface derived from the ka-Map project. This interface allows resource managers to quickly assess ecosystem conditions and identify significant trends and anomalies from within their web browser without the need to download source data or install special software. Our standardized web services also expose TOPS data to a range of potential clients, from web mapping applications to virtual globes and desktop GIS packages. However, support for managing the temporal dimension of our data is currently limited in existing software systems. Future work will attempt to overcome this shortcoming by building time-series visualization and analysis tools that can be integrated with existing geospatial software.

  16. Smart Grid Educational Series | Energy Systems Integration Facility | NREL

    Science.gov Websites

    Covers the smart grid from generation through transmission, all the way to the distribution infrastructure. Workshop materials include presentations on the MultiSpeak data model standard and Essence anomaly detection for ICS, along with key takeaways from breakout group discussions.

  17. 76 FR 80338 - Secretarial India Infrastructure Business Development Mission, March 25-30, 2012

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-23

    .../ from consumers on a near real-time basis and improve system reliability Moving to a smart grid to... technologies in India. The real challenge in the power sector in India lies in managing the upgrading of the....export.gov/newsletter/march2008/initiatives.html for additional information). Expenses for travel...

  18. Development of a 2nd Generation Decision Support Tool to Optimize Resource and Energy Recovery for Municipal Solid Waste

    EPA Science Inventory

    In 2012, EPA’s Office of Research and Development released the MSW decision support tool (MSW-DST) to help identify strategies for more sustainable MSW management. Depending upon local infrastructure, energy grid mix, population density, and waste composition and quantity, the m...

  19. Developing a European grid infrastructure for cancer research: vision, architecture and services

    PubMed Central

    Tsiknakis, M; Rueping, S; Martin, L; Sfakianakis, S; Bucur, A; Sengstag, T; Brochhausen, M; Pucaski, J; Graf, N

    2007-01-01

    Life sciences are currently at the centre of an information revolution. The nature and amount of information now available opens up areas of research that were once in the realm of science fiction. During this information revolution, the data-gathering capabilities have greatly surpassed the data-analysis techniques. Data integration across heterogeneous data sources and data aggregation across different aspects of the biomedical spectrum, therefore, is at the centre of current biomedical and pharmaceutical R&D. This paper reports on original results from the ACGT integrated project, focusing on the design and development of a European Biomedical Grid infrastructure in support of multi-centric, post-genomic clinical trials (CTs) on cancer. Post-genomic CTs use multi-level clinical and genomic data and advanced computational analysis and visualization tools to test hypotheses in trying to identify the molecular reasons for a disease and the stratification of patients in terms of treatment. The paper provides a presentation of the needs of users involved in post-genomic CTs and presents indicative scenarios, which drive the requirements of the engineering phase of the project. Subsequently, the initial architecture specified by the project is presented, and its services are classified and discussed. A range of such key services, including the Master Ontology on Cancer, which lie at the heart of the integration architecture of the project, is presented. Special efforts have been taken to describe the methodological and technological framework of the project, enabling the creation of a legally compliant and trustworthy infrastructure. Finally, a short discussion of the forthcoming work is included, and the potential involvement of the cancer research community in further development or utilization of the infrastructure is described. PMID:22275955

  20. Shock waves on complex networks

    NASA Astrophysics Data System (ADS)

    Mones, Enys; Araújo, Nuno A. M.; Vicsek, Tamás; Herrmann, Hans J.

    2014-05-01

    Power grids, road maps, and river streams are examples of infrastructural networks which are highly vulnerable to external perturbations. An abrupt local change of load (voltage, traffic density, or water level) might propagate in a cascading way and affect a significant fraction of the network. Almost discontinuous perturbations can be modeled by shock waves which can eventually interfere constructively and endanger the normal functionality of the infrastructure. We study their dynamics by solving the Burgers equation under random perturbations on several real and artificial directed graphs. Even for graphs with a narrow distribution of node properties (e.g., degree or betweenness), a steady state is reached exhibiting a heterogeneous load distribution, having a difference of one order of magnitude between the highest and average loads. Unexpectedly we find for the European power grid and for finite Watts-Strogatz networks a broad pronounced bimodal distribution for the loads. To identify the most vulnerable nodes, we introduce the concept of node-basin size, a purely topological property which we show to be strongly correlated to the average load of a node.

  1. Using the GlideinWMS System as a Common Resource Provisioning Layer in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balcas, J.; Belforte, S.; Bockelman, B.

    2015-12-23

    CMS will require access to more than 125k processor cores for the beginning of Run 2 in 2015 to carry out its ambitious physics program with more and higher-complexity events. During Run 1 these resources were predominantly provided by a mix of grid sites and local batch resources. During the long shutdown, cloud infrastructures, diverse opportunistic resources, and HPC supercomputing centers were made available to CMS, which further complicated the operation of the submission infrastructure. In this presentation we will discuss the CMS effort to adopt and deploy the glideinWMS system as a common resource provisioning layer for grid, cloud, local batch, and opportunistic resources and sites. We will address the challenges associated with integrating the various types of resources, the efficiency gains and simplifications associated with using a common resource provisioning layer, and discuss the solutions found. We will finish with an outlook of future plans for how CMS is moving forward on resource provisioning for more heterogeneous architectures and services.

  2. Grid-based International Network for Flu observation (g-INFO).

    PubMed

    Doan, Trung-Tung; Bernard, Aurélien; Da-Costa, Ana Lucia; Bloch, Vincent; Le, Thanh-Hoa; Legre, Yannick; Maigne, Lydia; Salzemann, Jean; Sarramia, David; Nguyen, Hong-Quang; Breton, Vincent

    2010-01-01

    The 2009 H1N1 outbreak has demonstrated that continuing vigilance, planning, and strong public health research capability are essential defenses against emerging health threats. Molecular epidemiology of influenza virus strains provides scientists with clues about the temporal and geographic evolution of the virus. In the present paper, researchers from France and Vietnam propose a global surveillance network based on grid technology: the goal is to federate influenza data servers and automatically deploy molecular epidemiology studies. A first prototype based on AMGA and the WISDOM Production Environment extracts influenza H1N1 sequence data daily from NCBI and processes them through a phylogenetic analysis pipeline deployed on the EGEE and AuverGrid e-infrastructures. The analysis results are displayed on a web portal (http://g-info.healthgrid.org) for epidemiologists to monitor the H1N1 pandemic.

  3. Intelligent energy allocation strategy for PHEV charging station using gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Rahman, Imran; Vasant, Pandian M.; Singh, Balbir Singh Mahinder; Abdullah-Al-Wadud, M.

    2014-10-01

    Recent research on the use of green technologies to reduce pollution and increase the penetration of renewable energy sources in the transportation sector is gaining popularity. The development of a smart grid environment focusing on PHEVs may also alleviate some of the prevailing grid problems by enabling the implementation of the Vehicle-to-Grid (V2G) concept. Intelligent energy management is an important issue which has already drawn much attention from researchers. Most of these works require the formulation of mathematical models which extensively use computational intelligence-based optimization techniques to solve many technical problems. Higher penetration of PHEVs requires adequate charging infrastructure as well as smart charging strategies. We used the Gravitational Search Algorithm (GSA) to intelligently allocate energy to the PHEVs, considering constraints such as energy price, remaining battery capacity, and remaining charging time.
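    As a hedged sketch of the GSA mechanics only (fitness → mass → gravitational pull → motion), the toy below minimizes a plain quadratic; the paper's actual allocation objective with price, battery-state, and charging-time constraints is not reproduced, and all parameter values are invented.

```python
# Minimal Gravitational Search Algorithm (GSA) sketch: fitter agents get
# larger masses, every agent accelerates toward the others in proportion
# to their masses, and the gravitational constant decays over time.
import random

def gsa(fitness, dim=2, agents=20, iters=100, g0=6.0, seed=1):
    rnd = random.Random(seed)
    x = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(agents)]
    v = [[0.0] * dim for _ in range(agents)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fit = [fitness(p) for p in x]
        for p, f in zip(x, fit):
            if f < best_f:
                best_x, best_f = list(p), f   # track best-ever solution
        g = g0 * (1 - t / iters)              # decaying gravitational constant
        lo, hi = min(fit), max(fit)
        m = [(f - hi) / (lo - hi + 1e-12) for f in fit]   # best agent -> mass 1
        s = sum(m) + 1e-12
        m = [mi / s for mi in m]              # normalized masses
        for i in range(agents):
            acc = [0.0] * dim
            for j in range(agents):
                if i == j:
                    continue
                dist = sum((x[j][d] - x[i][d]) ** 2 for d in range(dim)) ** 0.5
                for d in range(dim):
                    acc[d] += rnd.random() * g * m[j] * (x[j][d] - x[i][d]) / (dist + 1e-12)
            for d in range(dim):
                v[i][d] = rnd.random() * v[i][d] + acc[d]
                x[i][d] += v[i][d]
    return best_x, best_f

bx, bf = gsa(lambda p: sum(c * c for c in p))   # minimize ||p||^2
print([round(c, 3) for c in bx], round(bf, 4))
```

    In an allocation setting each coordinate of an agent would instead encode the power assigned to one vehicle, with the fitness penalizing constraint violations.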

  4. Long Island Smart Energy Corridor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mui, Ming

    The Long Island Power Authority (LIPA) has teamed with Stony Brook University (Stony Brook or SBU) and Farmingdale State College (Farmingdale or FSC), two branches of the State University of New York (SUNY), to create a “Smart Energy Corridor.” The project, located along the Route 110 business corridor on Long Island, New York, demonstrated the integration of a suite of Smart Grid technologies from substations to end-use loads. The Smart Energy Corridor Project included the following key features: -TECHNOLOGY: Demonstrated a full range of smart energy technologies, including substation and distribution feeder automation, a fiber and radio communications backbone, advanced metering infrastructure (AMI), a meter data management (MDM) system (which LIPA implemented outside of this project), field tools automation, customer-level energy management including automated energy management systems, and integration with distributed generation and plug-in hybrid electric vehicles. -MARKETING: A rigorous market test that identified customer response to an alternative time-of-use pricing plan and varying levels of information and analytical support. -CYBER SECURITY: Tested cyber security vulnerabilities in Smart Grid hardware, network, and application layers. Developed recommendations for policies, procedures, and technical controls to prevent or foil cyber-attacks and to harden the Smart Grid infrastructure. -RELIABILITY: Leveraged new Smart Grid-enabled data to increase system efficiency and reliability. Developed enhanced load forecasting, phase balancing, and voltage control techniques designed to work hand-in-hand with the Smart Grid technologies. -OUTREACH: Implemented public outreach and educational initiatives that were linked directly to the demonstration of Smart Grid technologies, tools, techniques, and system configurations. This included the creation of full-scale operating models demonstrating the application of Smart Grid technologies in business and residential settings. Farmingdale State College held three international conferences on energy and sustainability and Smart Grid related technologies and policies. These conferences, in addition to public seminars, increased understanding and acceptance of the Smart Grid transformation by the general public, business, industry, and municipalities in the Long Island and greater New York region. -JOB CREATION: Provided training for the Smart Grid and clean energy jobs of the future at both Farmingdale and Stony Brook. Stony Brook focused its “Cradle to Fortune 500” suite of economic development resources on the opportunities emerging from the project, helping to create new technologies, new businesses, and new jobs. To achieve these features, LIPA and its sub-recipients, FSC and SBU, each had separate but complementary objectives. For LIPA, the Smart Energy Corridor meant (1) validating Smart Grid technologies; (2) quantifying Smart Grid costs and benefits; and (3) providing insights into how Smart Grid applications can be better implemented, readily adapted, and replicated in individual homes and businesses. LIPA installed 2,550 AMI meters (exceeding the 500 AMI meters in the original plan), created three “smart” substations serving the Corridor, and installed additional distribution automation elements, including two-way communications and digital controls over various feeders and capacitor banks. It gathered and analyzed customer behavior information on how customers responded to a new “smart” TOU rate and to various levels of information and analytical tools.

  5. Development of stable Grid service at the next generation system of KEKCC

    NASA Astrophysics Data System (ADS)

    Nakamura, T.; Iwai, G.; Matsunaga, H.; Murakami, K.; Sasaki, T.; Suzuki, S.; Takase, W.

    2017-10-01

    Many experiments in the field of accelerator-based science are running at the High Energy Accelerator Research Organization (KEK) using the SuperKEKB and J-PARC accelerators in Japan. The computing demand at KEK from the various experiments for data processing, analysis, and MC simulation is steadily increasing. This is not limited to high-energy experiments: the computing requirements of hadron and neutrino experiments and of several astroparticle physics projects are also rapidly increasing due to very high precision measurements. Under this situation, several projects supported by KEK, including the Belle II, T2K, ILC, and KAGRA experiments, are going to utilize Grid computing infrastructure as their main computing resource. The Grid system and services at KEK, which are already in production, have been upgraded for further stable operation at the same time as the whole-scale hardware replacement of the KEK Central Computer System (KEKCC). The next-generation KEKCC system started operation at the beginning of September 2016. The basic Grid services, e.g. BDII, VOMS, LFC, the CREAM computing element, and the StoRM storage element, are deployed on a more robust hardware configuration. Since raw data transfer is one of the most important tasks for the KEKCC, two redundant GridFTP servers are attached to the StoRM service instances with 40 Gbps network bandwidth on the LHCONE routing. These are dedicated to Belle II raw data transfer to other sites, apart from the servers used for data transfer by the other VOs. Additionally, we prepared a redundant configuration for the database-oriented services such as LFC and AMGA using LifeKeeper. The LFC service consists of two read/write servers and two read-only servers for the Belle II experiment, each with an individual database for load balancing. The FTS3 service is newly deployed for Belle II data distribution. A CVMFS stratum-0 service has been started for the Belle II software repository, and a stratum-1 service is provided for the other VOs. In this way, there are many upgrades to the production Grid services at the KEK Computing Research Center. In this paper, we introduce the detailed hardware configuration of the Grid instances and several mechanisms used to construct a robust Grid system in the next generation of KEKCC.

  6. Informatic infrastructure for Climatological and Oceanographic data based on THREDDS technology in a Grid environment

    NASA Astrophysics Data System (ADS)

    Tronconi, C.; Forneris, V.; Santoleri, R.

    2009-04-01

    CNR-ISAC-GOS is responsible for the Mediterranean Sea satellite operational system in the framework of the MOON Partnership. This observing system acquires satellite data and produces Near Real Time, Delayed Time, and Re-analysis Ocean Colour and Sea Surface Temperature products covering the Mediterranean and Black Seas and regional basins. In the framework of several projects (MERSEA, PRIMI, Adricosm Star, SeaDataNet, MyOcean, ECOOP), GOS is producing climatological/satellite datasets based on optimal interpolation and a specific regional algorithm for chlorophyll, updated in Near Real Time and in delayed mode. GOS has built: • an informatics infrastructure for data repository and delivery based on THREDDS technology. The datasets are generated in NETCDF format, compliant with both the CF convention and the international satellite-oceanographic specification, as prescribed by GHRSST (for SST). All data produced are made available to users through a THREDDS server catalog. • A LAS, installed in order to exploit the potential of NETCDF data and the OPENDAP URL; it provides flexible access to geo-referenced scientific data. • A Grid environment based on Globus Technologies (GT4) connecting more than one institute; in particular, exploiting the CNR and ESA clusters makes it possible to reprocess 12 years of chlorophyll data in less than one month (estimated processing time on a single-core PC: 9 months). In the poster we will give an overview of: • the features of the THREDDS catalogs, pointing out the powerful characteristics of this new middleware that has replaced the "old" OPENDAP server; • the importance of adopting a common format (such as NETCDF) for data exchange; • the tools (e.g., LAS) connected with THREDDS and the NETCDF format; • the Grid infrastructure at ISAC. We will also present specific basin-scale High Resolution products and Ultra High Resolution regional/coastal products available in these catalogs.

  7. A new mix of power for the ESO installations in Chile: greener, more reliable, cheaper

    NASA Astrophysics Data System (ADS)

    Filippi, G.; Tamai, R.; Kalaitzoglou, D.; Wild, W.; Delorme, A.; Rioseco, D.

    2016-07-01

    The demand for the highest sky quality in astronomical research often forces observatories to be located in areas not easily reached by existing power infrastructure. At the same time, the availability and cost of power is a primary factor for sustainable operations, and power generation is a potential source of CO2 pollution. As part of its green initiatives, ESO is in the process of replacing the power sources for its own (La Silla and Paranal-Armazones) and shared (ALMA) installations in Chile in order to provide them with more reliable, more affordable, and smaller-CO2-footprint power solutions. Connection to the Chilean interconnected power systems (grid), which will make extensive use of Non-Conventional Renewable Energy (NCRE), together with the use of less polluting fuels wherever self-generation cannot be avoided, are the key building blocks of the solutions selected for each site. In addition, considerations such as environmental impact and, if required, partnership with other entities have also been taken into account. After years of preparatory work, to which the Chilean authorities provided great help and support, ESO has now launched an articulated program to upgrade the existing agreements/facilities: i) at the La Silla Observatory, moving from free to regulated grid-client status through an agreement with a private solar farm initiative; ii) at the Paranal-Armazones Observatory, moving from local generation using liquefied petroleum gas (LPG) to connection to the grid, which will make extensive use of NCRE; and iii) at the ALMA Observatory, where ESO participates together with North American and East Asian partners, replacing LPG as fuel for the local turbine generation system with less polluting natural gas (NG) supplied by a pipe connection, eliminating the pollution caused by LPG trucks (currently one LPG truck from the VIII region, Bio Bio, to the II region, ALMA, and back every day, for a total of 3000 km). The technologies used, the status of completion of the different projects, and the expected benefits are discussed in this paper.

  8. Wavelet-enabled progressive data Access and Storage Protocol (WASP)

    NASA Astrophysics Data System (ADS)

    Clyne, J.; Frank, L.; Lesperance, T.; Norton, A.

    2015-12-01

    Current practices for storing numerical simulation outputs hail from an era when the disparity between compute and I/O performance was not as great as it is today. The memory contents for every sample, computed at every grid point location, are simply saved at some prescribed temporal frequency. Though straightforward, this approach fails to take advantage of the coherency in neighboring grid points that invariably exists in numerical solutions to mathematical models. Exploiting such coherence is essential to digital multimedia; DVD-Video, digital cameras, streaming movies and audio are all possible today because of transform-based compression schemes that make substantial reductions in data possible by taking advantage of the strong correlation between adjacent samples in both space and time. Such methods can also be exploited to enable progressive data refinement in a manner akin to that used in ubiquitous digital mapping applications: views from far away are shown in coarsened detail to provide context, and can be progressively refined as the user zooms in on a localized region of interest. The NSF funded WASP project aims to provide a common, NetCDF-compatible software framework for supporting wavelet-based, multi-scale, progressive data, enabling interactive exploration of large data sets for the geoscience communities. This presentation will provide an overview of this work in progress to develop community cyber-infrastructure for the efficient analysis of very large data sets.
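    The transform-based progressive refinement WASP builds on can be shown with a one-level Haar wavelet, the simplest such transform. This is a hedged sketch of the principle only (WASP targets multidimensional NetCDF data; the signal and helper names here are invented): the coarse averages alone give the "zoomed-out" view, and adding the stored details back refines it exactly.

```python
# One-level Haar wavelet transform: split a signal into coarse averages
# (the low-resolution view) and details (the refinement terms).
def haar_forward(data):
    """Split an even-length signal into pairwise averages and differences."""
    coarse = [(a + b) / 2 for a, b in zip(data[0::2], data[1::2])]
    detail = [(a - b) / 2 for a, b in zip(data[0::2], data[1::2])]
    return coarse, detail

def haar_inverse(coarse, detail):
    """Reconstruct the signal; zeroed details would yield only the coarse view."""
    out = []
    for c, d in zip(coarse, detail):
        out.extend([c + d, c - d])
    return out

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 2.0, 0.0]
coarse, detail = haar_forward(signal)
print(coarse)                        # → [5.0, 11.0, 8.0, 1.0]: half-resolution view
print(haar_inverse(coarse, detail))  # exact reconstruction once details are fetched
```

    Applying the split recursively to the coarse band yields the multi-scale hierarchy that lets a client fetch far-away context cheaply and stream in detail coefficients only for the region being zoomed.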

  9. AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics

    NASA Astrophysics Data System (ADS)

    Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.

    2017-05-01

    We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs on the disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving hybrid model equations are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an already existing well-tested hybrid model of plasma that runs in parallel using multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for computation of Faraday’s Equation, resulting in an explicit-implicit scheme for the hybrid model equation. We show that the proposed scheme is stable and accurate. We examine the AMITIS energy conservation and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.
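    One core kernel mentioned above, calculating macroscopic properties of particles on a grid, can be sketched with a 1D cloud-in-cell deposition. This is a generic PIC illustration, not AMITIS code (which performs the equivalent step in 3D CUDA kernels); the positions, grid size, and weights below are invented.

```python
# 1D cloud-in-cell (CIC) deposition on a periodic grid: each particle's
# weight is shared linearly between its two neighboring grid nodes.
def deposit_cic(positions, nx, dx=1.0, weight=1.0):
    """Deposit particle weights onto nx nodes; assumes 0 <= x < nx*dx."""
    rho = [0.0] * nx
    for xp in positions:
        s = xp / dx
        i = int(s)                       # left grid node index
        frac = s - i                     # fractional distance to that node
        rho[i % nx] += weight * (1 - frac)
        rho[(i + 1) % nx] += weight * frac
    return rho

rho = deposit_cic([0.25, 1.5, 3.9], nx=4)
print(rho)  # total deposited charge equals the number of particles
```

    On a GPU this loop is the classic scatter-with-collisions problem: many particles update the same node, so hybrid-PIC codes rely on atomic adds or per-block accumulation to make the deposition race-free.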

  10. The effect of the NERC CIP standards on the reliability of the North American Bulk Electric System

    DOE PAGES

    Ladendorff, Marlene Z.

    2016-06-01

    Considerable money and effort have been expended by generation, transmission, and distribution entities in North America to implement the North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) standards for the bulk electric system. Assumptions have been made that, as a result of the implementation of the standards, the grid is more cyber secure than it was pre-NERC CIP, but are there data supporting these claims, or only speculation? Has the implementation of the standards had an effect on the grid? A research study developed to address these and other questions provided surprising results.

  11. Belle II grid computing: An overview of the distributed data management system.

    NASA Astrophysics Data System (ADS)

    Bansal, Vikas; Schram, Malachi; Belle II Collaboration

    2017-01-01

    The Belle II experiment at the SuperKEKB collider in Tsukuba, Japan, will start physics data taking in 2018 and will accumulate 50 ab⁻¹ of e+e− collision data, about 50 times larger than the data set of the Belle experiment. The computing requirements of Belle II are comparable to those of a Run I LHC experiment. Computing at this scale requires efficient use of the compute grids in North America, Asia and Europe and will take advantage of upgrades to the high-speed global network. We present the architecture of data flow and data handling as a part of the Belle II computing infrastructure.

  12. Polar lunar power ring: Propulsion energy resource

    NASA Technical Reports Server (NTRS)

    Galloway, Graham Scott

    1990-01-01

    A ring shaped grid of photovoltaic solar collectors encircling a lunar pole at 80 to 85 degrees latitude is proposed as the primary research, development, and construction goal for an initial lunar base. The polar Lunar Power Ring (LPR) is designed to provide continuous electrical power in ever increasing amounts as collectors are added to the ring grid. The LPR can provide electricity for any purpose indefinitely, barring a meteor strike. The associated rail infrastructure and inherently expandable power levels place the LPR as an ideal tool to power an innovative propulsion research facility or a trans-Jovian fleet. The proposed initial output range is 90 MW to 90 GW.

  13. Modular AC Nano-Grid with Four-Quadrant Micro-Inverters and High-Efficiency DC-DC Conversion

    NASA Astrophysics Data System (ADS)

    Poshtkouhi, Shahab

    A significant portion of the population in developing countries live in remote communities, where the power infrastructure and the required capital investment to set up local grids do not exist. This is due to the fuel shipment and utilization costs required for fossil fuel based generators, which are traditionally used in these local grids, as well as high upfront costs associated with the centralized Energy Storage Systems (ESS). This dissertation targets modular AC nano-grids for these remote communities developed at minimal capital cost, where the generators are replaced with multiple inverters, connected to either Photovoltaic (PV) or battery modules, which can be gradually added to the nano-grid. A distributed droop-based control architecture is presented for the PV and battery Micro-Inverters (MIV) in order to achieve frequency and voltage stability, as well as active and reactive power sharing. The nano-grid voltage is regulated collectively in either one of four operational regions. Effective load sharing and transient handling are demonstrated experimentally by forming a nano-grid which consists of two custom 500 W MIVs. The MIVs forming the nano-grid have to meet certain requirements. A two-stage MIV architecture and control scheme with four-quadrant power-flow between the nano-grid, the PV/battery and optional short-term storage is presented. The short-term storage is realized using high energy-density Lithium-Ion Capacitor (LIC) technology. A real-time power smoothing algorithm utilizing LIC modules is developed and tested, while the performance of the 100 W MIV is experimentally verified under closed-loop dynamic conditions. Two main limitations of the DAB topology, as the core of the MIV architecture's dc-dc stage, are addressed: 1) This topology demonstrates poor efficiency and limited regulation accuracy at low power. 
These are improved by introducing a modified topology that operates the DAB in Flyback mode, achieving up to an 8% increase in converter efficiency. 2) The DAB topology needs four digital isolators for driving the active switches on the far side of the isolation boundary. Two Phase-Locked-Loop (PLL) based synchronization schemes are introduced in order to reduce the number of required digital isolators, hence increasing reliability and reducing implementation costs. One of these schemes is demonstrated on a discrete 150 W DAB prototype, while both are implemented on-chip in a 0.18 μm 80 V BCD process. In addition, the power stage of the primary side of a 1 MHz, 50 W DAB converter is fully integrated on the same die. By using such a high switching frequency, the size of the passive elements in the DAB is reduced, resulting in further cost reductions for the MIV. The results of this dissertation pave the way for affordable nano-grids with minimal capital cost, reliable performance, and reduced complexity.
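
    The droop-based sharing described above admits a compact sketch: each inverter lowers its output frequency and voltage linearly with its active and reactive power, so parallel units settle on a common operating point without communicating. The gains and ratings below are illustrative placeholders, not values from the dissertation.

```python
def droop_frequency(p_out, f_nom=50.0, p_rated=500.0, k_f=0.5):
    """P-f droop: frequency falls linearly as active power output rises."""
    return f_nom - k_f * (p_out / p_rated)

def droop_voltage(q_out, v_nom=230.0, q_rated=300.0, k_v=5.0):
    """Q-V droop: voltage falls linearly as reactive power output rises."""
    return v_nom - k_v * (q_out / q_rated)

# Two identical 500 W inverters each carrying half of a 600 W load settle
# at the same frequency, which is what lets droop control share load
# without any communication between the units.
f1 = droop_frequency(300.0)
f2 = droop_frequency(300.0)
print(f1, f1 == f2)  # 49.7 True
```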

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, Dave; Garzoglio, Gabriele; Kim, Hyunwoo

    As of 2012, a number of US Department of Energy (DOE) National Laboratories have access to a 100 Gb/s wide-area network backbone. The ESnet Advanced Networking Initiative (ANI) project is intended to develop a prototype network based on emerging 100 Gb/s Ethernet technology. The ANI network will support DOE's science research programs. A 100 Gb/s network test bed is a key component of the ANI project. The test bed offers the opportunity for early evaluation of 100 Gb/s network infrastructure for supporting the high-impact data movement typical of science collaborations and experiments. In order to make effective use of this advanced infrastructure, the applications and middleware currently used by the distributed computing systems of large-scale science need to be adapted and tested within the new environment, with gaps in functionality identified and corrected. As a user of the ANI test bed, Fermilab aims to study the issues related to end-to-end integration and use of 100 Gb/s networks for the event simulation and analysis applications of physics experiments. In this paper we discuss our findings from evaluating existing HEP middleware and application components, including GridFTP, Globus Online, etc., in the high-speed environment. These include possible recommendations to system administrators and to application and middleware developers on changes that would enable production use of 100 Gb/s networks, including data storage, caching, and wide-area access.
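
    One practical issue when adapting middleware such as GridFTP to a 100 Gb/s path is the bandwidth-delay product that TCP windows must cover. A back-of-the-envelope sketch; the round-trip time and stream count are assumed figures, not measurements from this work:

```python
# Bandwidth-delay product for a long-haul 100 Gb/s path (illustrative numbers).
link_bps = 100e9  # 100 Gb/s
rtt_s = 0.05      # assumed 50 ms round-trip time

bdp_bytes = link_bps * rtt_s / 8
print(f"BDP: {bdp_bytes / 1e6:.0f} MB")  # BDP: 625 MB

# A single TCP stream would need a ~625 MB window to keep the pipe full;
# GridFTP's parallel streams divide that requirement across N connections.
n_streams = 16
print(f"per-stream window: {bdp_bytes / n_streams / 1e6:.1f} MB")  # 39.1 MB
```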

  15. gProcess and ESIP Platforms for Satellite Imagery Processing over the Grid

    NASA Astrophysics Data System (ADS)

    Bacu, Victor; Gorgan, Dorian; Rodila, Denisa; Pop, Florin; Neagu, Gabriel; Petcu, Dana

    2010-05-01

    The Environment oriented Satellite Data Processing Platform (ESIP) is developed through the SEE-GRID-SCI project (SEE-GRID eInfrastructure for regional eScience), co-funded by the European Commission through FP7 [1]. The gProcess Platform [2] is a set of tools and services supporting the development and execution over the Grid of workflow-based processing, particularly satellite imagery processing. ESIP [3], [4] is built on top of the gProcess platform by adding a set of satellite image processing software modules and meteorological algorithms. Satellite images can reveal and supply important information on earth surface parameters, climate data, pollution levels, and weather conditions that can be used in different research areas. Generally, the processing algorithms for satellite images can be decomposed into a set of modules that form a graph representation of the processing workflow. Two types of workflows can be defined in the gProcess platform: the abstract workflow (PDG - Process Description Graph), in which the user defines the algorithm conceptually, and the instantiated workflow (iPDG - instantiated PDG), which is the mapping of the PDG pattern onto particular satellite image and meteorological data [5]. The gProcess platform allows the definition of complex workflows by combining data resources, operators, services and sub-graphs. The gProcess platform is developed for the gLite middleware that is available in the EGEE and SEE-GRID infrastructures [6]. gProcess exposes its functionality through web services [7]. The Editor Web Service retrieves information on the available resources that are used to develop complex workflows (available operators, sub-graphs, services, supported resources, etc.). The Manager Web Service deals with resource management (uploading new resources such as workflows, operators, services, data, etc.) and in addition retrieves information on workflows.
The Executor Web Service manages the execution of the instantiated workflows on the Grid infrastructure. In addition, this web service monitors the execution and generates statistical data that are important for evaluating performance and optimizing execution. The Viewer Web Service allows access to input and output data. To prove and validate the utility of the gProcess and ESIP platforms, the GreenView and GreenLand applications were developed. The GreenView functionality includes the refinement of meteorological data such as temperature, and the calibration of satellite images based on field measurements. The GreenLand application performs the classification of satellite images by using a set of vegetation indices. The gProcess and ESIP platforms are used as well in the GiSHEO project [8] to support the processing of Earth Observation data over the Grid in eGLE (the GiSHEO eLearning Environment). Performance-assessment experiments were conducted and revealed that workflow-based execution can improve the execution time of a satellite image processing algorithm [9]. Executing every workflow node on a different machine is not always efficient: some nodes are more time-consuming than others, and these slow nodes dominate the total execution time, so it is important to balance the workflow nodes correctly. Based on an optimization strategy, the workflow nodes can be grouped horizontally, vertically, or in a hybrid approach. In this way, the grouped operators are executed on one machine and the data transfer between workflow nodes is reduced. The dynamic nature of the Grid infrastructure makes it more exposed to failures, which can occur at worker nodes, services, storage elements, etc.
Currently, gProcess supports some basic error prevention and error management solutions. In the future, more advanced error prevention and management solutions will be integrated into the gProcess platform. References [1] SEE-GRID-SCI Project, http://www.see-grid-sci.eu/ [2] Bacu V., Stefanut T., Rodila D., Gorgan D., Process Description Graph Composition by gProcess Platform. HiPerGRID - 3rd International Workshop on High Performance Grid Middleware, 28 May, Bucharest. Proceedings of CSCS-17 Conference, Vol.2., ISSN 2066-4451, pp. 423-430, (2009). [3] ESIP Platform, http://wiki.egee-see.org/index.php/JRA1_Commonalities [4] Gorgan D., Bacu V., Rodila D., Pop Fl., Petcu D., Experiments on ESIP - Environment oriented Satellite Data Processing Platform. SEE-GRID-SCI User Forum, 9-10 Dec 2009, Bogazici University, Istanbul, Turkey, ISBN: 978-975-403-510-0, pp. 157-166 (2009). [5] Radu, A., Bacu, V., Gorgan, D., Diagrammatic Description of Satellite Image Processing Workflow. Workshop on Grid Computing Applications Development (GridCAD) at the SYNASC Symposium, 28 September 2007, Timisoara, IEEE Computer Press, ISBN 0-7695-3078-8, 2007, pp. 341-348 (2007). [6] Gorgan D., Bacu V., Stefanut T., Rodila D., Mihon D., Grid based Satellite Image Processing Platform for Earth Observation Applications Development. IDAACS'2009 - IEEE Fifth International Workshop on "Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications", 21-23 September, Cosenza, Italy, IEEE Computer Press, pp. 247-252 (2009). [7] Rodila D., Bacu V., Gorgan D., Integration of Satellite Image Operators as Workflows in the gProcess Application. Proceedings of ICCP2009 - IEEE 5th International Conference on Intelligent Computer Communication and Processing, 27-29 Aug, 2009 Cluj-Napoca. ISBN: 978-1-4244-5007-7, pp. 355-358 (2009).
[8] GiSHEO consortium, Project site, http://gisheo.info.uvt.ro [9] Bacu V., Gorgan D., Graph Based Evaluation of Satellite Imagery Processing over Grid. ISPDC 2008 - 7th International Symposium on Parallel and Distributed Computing, July 1-5, 2008, Krakow, Poland. IEEE Computer Society 2008, ISBN: 978-0-7695-3472-5, pp. 147-154.
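
    The PDG/iPDG distinction above can be pictured as a template graph whose data inputs are bound later. The sketch below is a hypothetical illustration of that idea, not gProcess code; the operator, node, and dataset names are invented.

```python
# Abstract workflow (PDG): operator nodes wired by name, data inputs unbound.
pdg = {
    "nodes": {
        "calibrate": {"op": "calibration", "inputs": ["raw_image", "field_data"]},
        "ndvi": {"op": "vegetation_index", "inputs": ["calibrate"]},
    },
    "outputs": ["ndvi"],
}

def instantiate(pdg, bindings):
    """Produce an iPDG by binding concrete datasets to the free inputs."""
    free = {i for node in pdg["nodes"].values() for i in node["inputs"]
            if i not in pdg["nodes"]}  # inputs that are not other nodes
    missing = free - set(bindings)
    if missing:
        raise ValueError(f"unbound inputs: {sorted(missing)}")
    return {"template": pdg, "bindings": dict(bindings)}

# Binding one satellite image and one set of field measurements yields a
# concrete, executable workflow instance (file names are invented).
ipdg = instantiate(pdg, {"raw_image": "modis_2009_142.hdf",
                         "field_data": "stations_2009.csv"})
```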

  16. Information-theoretic characterization of dynamic energy systems

    NASA Astrophysics Data System (ADS)

    Bevis, Troy Lawson

    The latter half of the 20th century saw tremendous growth in nearly every aspect of civilization. From the internet to transportation, the various infrastructures relied upon by society have become exponentially more complex. Energy systems are no exception, and today the power grid is one of the largest infrastructures in the history of the world. The growing infrastructure has led to an increase not only in the amount of energy produced, but also in the expectations of the energy systems themselves. The need for a power grid that is reliable, secure, and efficient is apparent, and there have been several initiatives to provide such a system. These increased expectations have led to a growth in the renewable energy sources being integrated into the grid, a change that increases efficiency and disperses generation throughout the system. Although this change in the grid infrastructure is beneficial, it leads to grand challenges in system-level control and operation. As the number of sources increases and becomes geographically distributed, the control systems are no longer local to the system. This means that communication networks must be enhanced to support multiple devices that must communicate reliably. A common solution for these new systems is to use wide area networks for the communication network, as opposed to point-to-point communication. Although a wide area network will support a large number of devices, it generally comes with a compromise in the form of latency in the communication system. The device controller now has latency injected into the feedback loop of the system. Also, renewable energy sources are largely non-dispatchable generation; that is, they are never guaranteed to be online and supplying the demanded energy. As renewable generation is typically modeled as a stochastic process, it would be useful to include this behavior in the control system algorithms.
The combination of communication latency and stochastic sources is compounded by the dynamics of the grid itself. Loads are constantly changing, as are the sources; this can sometimes lead to quick changes in system states. There is a need for a metric that takes into consideration all of the factors detailed above: the amount of information that is available in the system and the rate at which that information loses its value. In a dynamic system, information is only valid for a length of time, and the controller must be able to account for the decay of currently held information. This thesis presents information theory metrics in a way that is useful for application to dynamic energy systems. A test case involving the synchronization of several generators is presented for analysis and application of the theory. The objective is to synchronize all the generators and connect them to a common bus. As the phase shift of each generator is a random process, the effects of latency and information decay can be directly observed. The results of the experiments clearly show that the expected outcomes are observed and that entropy and information theory provide a valid metric for timing requirement extraction.
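
    The proposed metric combines the information content of a source with its decay over communication latency. A minimal sketch, assuming Shannon entropy for the former and an exponential decay model (with an invented decay rate) for the latter:

```python
import math

def entropy_bits(probs):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def residual_information(h_bits, latency_s, decay_rate=2.0):
    """Toy model: the value of held information decays exponentially
    with the communication latency (decay_rate is an invented figure)."""
    return h_bits * math.exp(-decay_rate * latency_s)

# A generator phase measured over 8 equally likely states carries 3 bits;
# after 0.5 s of network latency, much less of it is still actionable
# for the synchronization controller.
h = entropy_bits([1 / 8] * 8)
print(h)                             # 3.0
print(residual_information(h, 0.5))  # ~1.10
```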

  17. Synchrotron Imaging Computations on the Grid without the Computing Element

    NASA Astrophysics Data System (ADS)

    Curri, A.; Pugliese, R.; Borghes, R.; Kourousias, G.

    2011-12-01

    Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm, computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a neighbouring concept to that of traditional Control Systems. As a further extension, we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces a Control System that manages a series of computationally demanding Scientific Imaging tasks in an online manner. Instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not easily have been satisfied by our Grid's middleware (gLite) due to the latencies that often occur during the job submission and queuing phases. Moreover, the required deployment of a set of TANGO devices could not have been done in a standard gLite WN. While certain core Grid components were avoided, the Grid Security infrastructure was utilised in the final solution.

  18. Downscaling seasonal to centennial simulations on distributed computing infrastructures using WRF model. The WRF4G project

    NASA Astrophysics Data System (ADS)

    Cofino, A. S.; Fernández Quiruelas, V.; Blanco Real, J. C.; García Díez, M.; Fernández, J.

    2013-12-01

    Nowadays, Grid Computing is a powerful computational tool that is ready to be used by the scientific community in different areas (such as biomedicine, astrophysics, climate, etc.). However, the use of these distributed computing infrastructures (DCIs) is not yet common practice in climate research, and only a few teams and applications in this area take advantage of them. Thus, the WRF4G project objective is to popularize the use of this technology in the atmospheric sciences. To achieve this objective, one of the most widely used applications has been taken (WRF, a limited-area model and successor of the MM5 model), which has a user community of more than 8000 researchers worldwide. This community develops its research activity in different areas and could benefit from the advantages of Grid resources (case study simulations, regional hind-cast/forecast, sensitivity studies, etc.). The WRF model is used by many groups in the climate research community to carry out downscaling simulations, so this community will also benefit. However, Grid infrastructures have some drawbacks for the execution of applications that make intensive use of CPU and memory for long periods of time. This makes it necessary to develop a specific framework (middleware). This middleware encapsulates the application and provides appropriate services for the monitoring and management of the simulations and the data. Thus, another objective of the WRF4G project is the development of a generic adaptation of WRF to DCIs. It should simplify access to the DCIs for researchers and also free them from the technical and computational aspects of using these DCIs.
Finally, in order to demonstrate the ability of WRF4G to solve actual scientific challenges of interest and relevance to climate science (implying a high computational cost), we will show results from different kinds of downscaling experiments, such as ERA-Interim re-analysis, CMIP5 models, and seasonal forecasts. WRF4G is being used to run WRF simulations which contribute to the CORDEX initiative and other projects such as SPECS and EUPORIAS. This work has been partially funded by the European Regional Development Fund (ERDF) and the Spanish National R&D Plan 2008-2011 (CGL2011-28864).

  19. A practical approach to virtualization in HEP

    NASA Astrophysics Data System (ADS)

    Buncic, P.; Aguado Sánchez, C.; Blomer, J.; Harutyunyan, A.; Mudrinic, M.

    2011-01-01

    In the attempt to solve the problem of processing data coming from the LHC experiments at CERN at a rate of 15 PB per year, for almost a decade the High Energy Physics (HEP) community has focused its efforts on the development of the Worldwide LHC Computing Grid. This generated large interest and expectations, promising to revolutionize computing. Meanwhile, having initially taken part in the Grid standardization process, industry has moved in a different direction and started promoting the Cloud Computing paradigm, which aims to solve problems on a similar scale and in an equally seamless way as was expected in the idealized Grid approach. A key enabling technology behind Cloud computing is server virtualization. In early 2008, an R&D project was established in the PH-SFT group at CERN to investigate how virtualization technology could be used to improve and simplify the daily interaction of physicists with experiment software frameworks and the Grid infrastructure. In this article we first briefly compare the Grid and Cloud computing paradigms and then summarize the results of the R&D activity, pointing out where and how virtualization technology could be effectively used in our field in order to maximize practical benefits whilst avoiding potential pitfalls.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuruganti, Phani Teja

    The smart grid is a combined process of revitalizing traditional power grid applications and introducing new applications to improve the efficiency of power generation, transmission and distribution. This can be achieved by leveraging advanced communication and networking technologies. Therefore, the selection of the appropriate communication technology for different smart grid applications has been widely debated in the recent past. After comparing different possible technologies, a recent research study arrived at the conclusion that 3G cellular technology is the right choice for distribution-side smart grid applications like smart metering, advanced distribution automation and demand response management systems. In this paper, we argue that current 3G/4G cellular technologies are not an appropriate choice for smart grid distribution applications and propose a Hybrid Spread Spectrum (HSS) based Advanced Metering Infrastructure (AMI) as an alternative to 3G/4G technologies. We present a preliminary PHY and MAC layer design of an HSS-based AMI network and evaluate their performance using MATLAB and NS2 simulations. We also propose a time-hierarchical scheme that can significantly reduce the volume of random access traffic generated during blackouts and the delay in power outage reporting.

  1. Intelligent Interoperable Agent Toolkit (I2AT)

    DTIC Science & Technology

    2005-02-01

    Keywords: Agents, Agent Infrastructure, Intelligent Agents. ...those that occur while the submarine is submerged. Using CoABS Grid/Jini service discovery events backed up with a small amount of internal bookkeeping...

  2. Fuzzy architecture assessment for critical infrastructure resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, George

    2012-12-01

    This paper presents an approach for the selection of alternative architectures in a connected infrastructure system to increase the resilience of the overall infrastructure system. The paper begins with a description of resilience and critical infrastructure, then summarizes existing approaches to resilience, and presents a fuzzy-rule based method for selecting among alternative infrastructure architectures. This methodology includes the considerations that are most important when deciding on an approach to resilience. The paper concludes with a proposed approach which builds on existing resilience architecting methods by integrating key system aspects using fuzzy memberships and fuzzy rule sets. This novel approach aids the systems architect in considering resilience during the evaluation of architectures for adoption into the final system architecture.

  3. Application of Approximate Pattern Matching in Two Dimensional Spaces to Grid Layout for Biochemical Network Maps

    PubMed Central

    Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki

    2012-01-01

    Background For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance their understandability and traceability. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often incur a high computational cost because they must consider complicated positional constraints throughout the entire optimization process. Results We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of calculation time, the numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performance compared with other existing grid layouts. Conclusions Use of an approximate pattern matching algorithm quickly redistributes the nodes laid out by the fast, non-grid algorithm onto the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html. PMID:22679486
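
    The core of the hybrid approach, redistributing preprocessed node positions onto free square grid points, can be sketched greedily. This is a simplification for illustration, not the paper's approximate pattern matching algorithm; the node names and coordinates are invented.

```python
def snap_to_grid(positions, spacing=1.0):
    """Assign each laid-out node to the nearest free square grid point.

    `positions` maps node -> (x, y) from a fast non-grid layout; when the
    nearest cell is taken, search outward ring by ring for a free one.
    """
    taken = set()
    result = {}
    for node, (x, y) in positions.items():
        gx, gy = round(x / spacing), round(y / spacing)
        r, placed = 0, False
        while not placed:
            for dx in range(-r, r + 1):
                for dy in range(-r, r + 1):
                    if max(abs(dx), abs(dy)) != r:  # only the ring at radius r
                        continue
                    cell = (gx + dx, gy + dy)
                    if cell not in taken:
                        taken.add(cell)
                        result[node] = (cell[0] * spacing, cell[1] * spacing)
                        placed = True
                        break
                if placed:
                    break
            r += 1
    return result

# Three nodes from a preprocessor layout; "A" and "B" contend for (0, 0),
# so "B" is pushed to an adjacent free cell, keeping every node distinct.
layout = snap_to_grid({"A": (0.1, 0.2), "B": (0.3, -0.1), "C": (1.6, 0.9)})
```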

  4. Application of approximate pattern matching in two dimensional spaces to grid layout for biochemical network maps.

    PubMed

    Inoue, Kentaro; Shimozono, Shinichi; Yoshida, Hideaki; Kurata, Hiroyuki

    2012-01-01

    For visualizing large-scale biochemical network maps, it is important to calculate the coordinates of molecular nodes quickly and to enhance their understandability and traceability. The grid layout is effective in drawing compact, orderly, balanced network maps with node label spaces, but existing grid layout algorithms often incur a high computational cost because they must consider complicated positional constraints throughout the entire optimization process. We propose a hybrid grid layout algorithm that consists of a non-grid, fast layout (preprocessor) algorithm and an approximate pattern matching algorithm that distributes the preprocessed nodes on square grid points. To demonstrate the feasibility of the hybrid layout algorithm, it is characterized in terms of calculation time, the numbers of edge-edge and node-edge crossings, relative edge lengths, and F-measures. The proposed algorithm achieves outstanding performance compared with other existing grid layouts. Use of an approximate pattern matching algorithm quickly redistributes the nodes laid out by the fast, non-grid algorithm onto the square grid points, while preserving the topological relationships among the nodes. The proposed algorithm is a novel use of pattern matching, thereby providing a breakthrough for grid layout. This application program can be freely downloaded from http://www.cadlive.jp/hybridlayout/hybridlayout.html.

  5. Distributed data analysis in ATLAS

    NASA Astrophysics Data System (ADS)

    Nilsson, Paul; Atlas Collaboration

    2012-12-01

    Data analysis using grid resources is one of the fundamental challenges to be addressed before the start of LHC data taking. The ATLAS detector will produce petabytes of data per year, and roughly one thousand users will need to run physics analyses on this data. Appropriate user interfaces and helper applications have been made available to ensure that the grid resources can be used without requiring expertise in grid technology. These tools enlarge the number of grid users from a few production administrators to potentially all participating physicists. ATLAS makes use of three grid infrastructures for distributed analysis: the EGEE sites, the Open Science Grid, and NorduGrid. These grids are managed by the gLite workload management system, the PanDA workload management system, and ARC middleware, respectively; many sites can be accessed via both the gLite WMS and PanDA. Users can choose between two front-end tools to access the distributed resources. Ganga is a tool co-developed with LHCb to provide a common interface to the multitude of execution backends (local, batch, and grid). The PanDA workload management system provides a set of utilities called PanDA Client; with these tools users can easily submit Athena analysis jobs to the PanDA-managed resources. Distributed data is managed by Don Quixote 2 (DQ2), a system developed by ATLAS; DQ2 is used to replicate datasets according to the data distribution policies and maintains a central catalog of file locations. The operation of the grid resources is continually monitored by the Ganga Robot functional testing system, and infrequent site stress tests are performed using the HammerCloud system. In addition, the DAST shift team is a group of power users who take shifts to provide distributed analysis user support; this team has effectively relieved the burden of support from the developers.

  6. Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid.

    PubMed

    Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul

    2017-02-01

    Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, one of the typical applications supported by IoT and a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of cumulative power cost and service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, which has traditionally been used for traffic engineering in computer networks, to derive the bounds of both power supply and user demand needed to achieve high service reliability to users. Through an extensive performance evaluation, our data show that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide sustainable service reliability to users in the power grid.
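
    The network-calculus bounds mentioned above are classically derived from a token-bucket arrival curve and a rate-latency service curve. The sketch below loosely transplants those standard bounds into energy terms, in the spirit of the paper's supply/demand analysis; all figures are invented.

```python
def backlog_bound(b, r, R, T):
    """Worst-case backlog for a token-bucket arrival curve a(t) = b + r*t
    served by a rate-latency service curve s(t) = R * max(0, t - T)."""
    assert R >= r, "long-run supply must cover long-run demand"
    return b + r * T

def delay_bound(b, r, R, T):
    """Worst-case delay for the same pair of curves."""
    assert R >= r
    return T + b / R

# Demand burst b = 50 kWh on top of a sustained r = 10 kW, met by an
# R = 12 kW supply that reacts after a latency of T = 2 h:
print(backlog_bound(50, 10, 12, 2))  # 70   (worst-case unserved kWh)
print(delay_bound(50, 10, 12, 2))    # ~6.17 (worst-case hours of delay)
```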

  7. Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid

    PubMed Central

    Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul

    2017-01-01

    Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, one of the typical applications supported by IoT and a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of cumulative power cost and service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, which has traditionally been used for traffic engineering in computer networks, to derive the bounds of both power supply and user demand needed to achieve high service reliability to users. Through an extensive performance evaluation, our data show that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide sustainable service reliability to users in the power grid. PMID:29354654

  8. Digital Library Storage using iRODS Data Grids

    NASA Astrophysics Data System (ADS)

    Hedges, Mark; Blanke, Tobias; Hasan, Adil

    Digital repository software provides a powerful and flexible infrastructure for managing and delivering complex digital resources and metadata. However, issues can arise in managing the very large, distributed data files that may constitute these resources. This paper describes an implementation approach that combines the Fedora digital repository software with a storage layer implemented as a data grid, using the iRODS middleware developed by DICE (Data Intensive Cyber Environments) as the successor to SRB. This approach allows us to use Fedora's flexible architecture to manage the structure of resources and to provide application-layer services to users. The grid-based storage layer provides efficient support for managing and processing the underlying distributed data objects, which may be very large (e.g. audio-visual material). The Rule Engine built into iRODS is used to integrate, at the data level, complex workflows that need not be visible to users, e.g. digital preservation functionality.

  9. Dynamically induced cascading failures in power grids.

    PubMed

    Schäfer, Benjamin; Witthaut, Dirk; Timme, Marc; Latora, Vito

    2018-05-17

    Reliable functioning of infrastructure networks is essential for our modern society. Cascading failures are the cause of most large-scale network outages. Although cascading failures often exhibit dynamical transients, the modeling of cascades has so far mainly focused on the analysis of sequences of steady states. In this article, we focus on electrical transmission networks and introduce a framework that takes into account both the event-based nature of cascades and the essentials of the network dynamics. We find that transients of the order of seconds in the flows of a power grid play a crucial role in the emergence of collective behaviors. We finally propose a forecasting method to identify critical lines and components in advance or during operation. Overall, our work highlights the relevance of dynamically induced failures on the synchronization dynamics of national power grids of different European countries and provides methods to predict and model cascading failures.
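
    The role of second-scale transients can be illustrated with a minimal swing-equation sketch (a standard textbook model, not the authors' framework): after a disturbance, the power flowing over a line transiently overshoots its new steady-state value, which is precisely the regime a steady-state-only cascade analysis misses. All parameters below are illustrative.

```python
import math

# One machine against an infinite bus: inertia M, damping gamma, power
# injection P, line coupling K. The line flow is K*sin(theta).
M, gamma, P, K = 1.0, 0.5, 1.0, 2.0
theta, omega, dt = 0.0, 0.0, 1e-3        # start away from the fixed point

flows = []
for _ in range(30_000):                  # 30 s of simulated time
    acc = (P - gamma * omega - K * math.sin(theta)) / M
    omega += acc * dt                    # semi-implicit Euler step
    theta += omega * dt
    flows.append(K * math.sin(theta))

steady_flow = P                          # equilibrium: K*sin(theta*) = P
peak_flow = max(flows)
# The transient peak exceeds the steady-state flow, so a line rated only
# for steady-state loading could trip during the first seconds.
print(peak_flow > steady_flow, abs(flows[-1] - steady_flow) < 0.05)
```

    A cascade model that jumps directly between steady states would see only the final flow P and miss the transient peak entirely.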

  10. Quantifying Power Grid Risk from Geomagnetic Storms

    NASA Astrophysics Data System (ADS)

    Homeier, N.; Wei, L. H.; Gannon, J. L.

    2012-12-01

    We are creating a statistical model of the geophysical environment that can be used to quantify the geomagnetic storm hazard to power grid infrastructure. Our model is developed using a database of surface electric fields for the continental United States during a set of historical geomagnetic storms. These electric fields are derived from the SUPERMAG compilation of worldwide magnetometer data and surface impedances from the United States Geological Survey. This electric field data can be combined with a power grid model to determine geomagnetically induced currents (GICs) per node and reactive MVARs at each minute during a storm. Using publicly available substation locations, we derive location-based relative risk maps by combining magnetic latitude and ground conductivity. We also estimate the surface electric fields during the August 1972 geomagnetic storm that caused a telephone cable outage across the middle of the United States. This event produced the largest surface electric fields in the continental U.S. in at least the past 40 years.
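
    The "electric field plus grid model yields GIC" step can be sketched for the simplest possible case: a single grounded line in a uniform geoelectric field. The field strength, line geometry, and resistance below are made-up illustrative numbers, not values from this study.

```python
# For a geoelectric field assumed uniform over the line (Ex eastward,
# Ey northward, in V/km), the driving voltage is the line integral
# of E along the conductor, which reduces to Ex*Lx + Ey*Ly over the
# line's east and north extents.
Ex, Ey = 2.0, 0.5            # V/km, illustrative storm-time field
Lx, Ly = 100.0, 50.0         # km, east/north components of the line path

voltage = Ex * Lx + Ey * Ly  # 225 V driven along the line
R_total = 5.0                # ohms: line + transformer + grounding resistance
gic = voltage / R_total      # quasi-DC geomagnetically induced current

print(gic)                   # 45.0 A through the transformer neutrals
```

    A real grid requires solving this for every line and grounding point simultaneously (a network problem), but the per-line driving term is exactly this line integral.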

  11. The QUANTGRID Project (RO)—Quantum Security in GRID Computing Applications

    NASA Astrophysics Data System (ADS)

    Dima, M.; Dulea, M.; Petre, M.; Petre, C.; Mitrica, B.; Stoica, M.; Udrea, M.; Sterian, R.; Sterian, P.

    2010-01-01

    The QUANTGRID Project, financed through the National Center for Programme Management (CNMP-Romania), is the first attempt at using Quantum Crypted Communications (QCC) in large-scale operations, such as GRID Computing, and conceivably in the years ahead in the banking sector and other security-tight communications. In connection with the GRID activities of the Center for Computing & Communications (Nat.'l Inst. Nucl. Phys.—IFIN-HH), the Quantum Optics Lab. (Nat.'l Inst. Plasma and Lasers—INFLPR) and the Physics Dept. (University Polytechnica—UPB), the project will build a demonstrator infrastructure for this technology. The status of the project in its incipient phase is reported, featuring tests for communications in classical security mode: socket-level communications under AES (Advanced Encryption Std.), both implemented as proprietary C++ code. An outline of the planned undertaking of the project is communicated, highlighting its impact in quantum physics, coherent optics and information technology.

  12. The equal load-sharing model of cascade failures in power grids

    NASA Astrophysics Data System (ADS)

    Scala, Antonio; De Sanctis Lucentini, Pier Giorgio

    2016-11-01

    Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing power demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into "super-grids".

  13. Abruptness of Cascade Failures in Power Grids

    NASA Astrophysics Data System (ADS)

    Pahwa, Sakshi; Scoglio, Caterina; Scala, Antonio

    2014-01-01

    Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results on real, realistic and synthetic networks indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into "super-grids".

  14. Abruptness of cascade failures in power grids.

    PubMed

    Pahwa, Sakshi; Scoglio, Caterina; Scala, Antonio

    2014-01-15

    Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results on real, realistic and synthetic networks indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into "super-grids".
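
    The equal load-sharing mechanism studied in this and the two preceding records can be illustrated with the classic fiber-bundle cascade, a standard statistical-physics toy model (not the authors' exact power-grid mapping): when a component fails, its load is shared equally among the survivors, which can push further components over threshold and trigger an abrupt, system-wide breakdown. The thresholds and loads below are illustrative.

```python
import random

def cascade(total_load, thresholds):
    """Equal load-sharing cascade: return the number of surviving components.

    Each surviving component carries total_load / n_survivors; any
    component whose threshold is exceeded fails, and the load is
    redistributed until the configuration is stable (or empty).
    """
    alive = list(thresholds)
    while alive:
        share = total_load / len(alive)
        survivors = [th for th in alive if th >= share]
        if len(survivors) == len(alive):   # stable configuration
            return len(survivors)
        alive = survivors
    return 0                               # complete breakdown (blackout)

random.seed(42)
thresholds = [random.uniform(0.5, 1.5) for _ in range(1000)]

light = cascade(400.0, thresholds)   # low load: everything survives
heavy = cascade(900.0, thresholds)   # high load: abrupt global failure
print(light, heavy)                  # 1000 0
```

    The transition between the two outcomes is sharp: a modest increase in load flips the system from full survival to total collapse, the "abruptness" that the abstract shows becomes a genuine first-order transition in the large-size limit.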

  15. The QuakeSim Project: Web Services for Managing Geophysical Data and Applications

    NASA Astrophysics Data System (ADS)

    Pierce, Marlon E.; Fox, Geoffrey C.; Aktas, Mehmet S.; Aydin, Galip; Gadgil, Harshawardhan; Qi, Zhigang; Sayar, Ahmet

    2008-04-01

    We describe our distributed systems research efforts to build the “cyberinfrastructure” components that constitute a geophysical Grid, or more accurately, a Grid of Grids. Service-oriented computing principles are used to build a distributed infrastructure of Web accessible components for accessing data and scientific applications. Our data services fall into two major categories: Archival, database-backed services based around Geographical Information System (GIS) standards from the Open Geospatial Consortium, and streaming services that can be used to filter and route real-time data sources such as Global Positioning System data streams. Execution support services include application execution management services and services for transferring remote files. These data and execution service families are bound together through metadata information and workflow services for service orchestration. Users may access the system through the QuakeSim scientific Web portal, which is built using a portlet component approach.

  16. A study of the feasibility of pneumatic transport of municipal solid waste and recyclables in Manhattan using existing transportation infrastructure.

    DOT National Transportation Integrated Search

    2013-07-01

    This study explored possibilities for using existing transportation infrastructure for the cost-effective installation of pneumatic waste-collection technology in Manhattan. If shown to be economically and operationally feasible, reducing the num...

  17. The Volume Grid Manipulator (VGM): A Grid Reusability Tool

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    1997-01-01

    This document is a manual describing how to use the Volume Grid Manipulation (VGM) software. The code is specifically designed to alter or manipulate existing surface and volume structured grids to improve grid quality through the reduction of grid line skewness, removal of negative volumes, and adaptation of surface and volume grids to flow field gradients. The software uses a command language to perform all manipulations, thereby offering the capability of executing multiple manipulations on a single grid during an execution of the code. The command language can be input to the VGM code by a UNIX-style redirected file, or interactively while the code is executing. The manual consists of 14 sections. The first is an introduction to grid manipulation: where it is most applicable and where the strengths of such software can be utilized. The next two sections describe the memory management and the manipulation command language. The following 8 sections describe simple and complex manipulations that can be used in conjunction with one another to smooth, adapt, and reuse existing grids for various computations. These are accompanied by a tutorial section that describes how to use the commands and manipulations to solve actual grid generation problems. The last two sections are a command reference guide and a troubleshooting section to aid in the use of the code as well as describe problems associated with generated scripts for manipulation control.

  18. EUDAT: A New Cross-Disciplinary Data Infrastructure For Science

    NASA Astrophysics Data System (ADS)

    Lecarpentier, Damien; Michelini, Alberto; Wittenburg, Peter

    2013-04-01

    In recent years significant investments have been made by the European Commission and European member states to create a pan-European e-Infrastructure supporting multiple research communities. As a result, a European e-Infrastructure ecosystem is currently taking shape, with communication networks, distributed grids and HPC facilities providing European researchers from all fields with state-of-the-art instruments and services that support the deployment of new research facilities on a pan-European level. However, the accelerated proliferation of data - newly available from powerful new scientific instruments, simulations and the digitization of existing resources - has created a new impetus for increasing efforts and investments in order to tackle the specific challenges of data management, and to ensure a coherent approach to research data access and preservation. EUDAT is a pan-European initiative that started in October 2011 and which aims to help overcome these challenges by laying out the foundations of a Collaborative Data Infrastructure (CDI) in which centres offering community-specific support services to their users could rely on a set of common data services shared between different research communities. Although research communities from different disciplines have different ambitions and approaches - particularly with respect to data organization and content - they also share many basic service requirements. This commonality makes it possible for EUDAT to establish common data services, designed to support multiple research communities, as part of this CDI. During the first year, EUDAT has been reviewing the approaches and requirements of a first subset of communities from linguistics (CLARIN), solid earth sciences (EPOS), climate sciences (ENES), environmental sciences (LIFEWATCH), and biological and medical sciences (VPH), and shortlisted four generic services to be deployed as shared services on the EUDAT infrastructure. 
These services are data replication from site to site, data staging to compute facilities, metadata, and easy storage. A number of enabling services such as distributed authentication and authorization, persistent identifiers, hosting of services, workspaces and centre registry were also discussed. The services being designed in EUDAT will thus be of interest to a broad range of communities that lack their own robust data infrastructures, or that are simply looking for additional storage and/or computing capacities to better access, use, re-use, and preserve their data. The first pilots were completed in 2012, and a pre-production-ready operational infrastructure, comprising five sites (RZG, CINECA, SARA, CSC, FZJ) and offering 480TB of online storage and 4PB of near-line (tape) storage, was established, initially serving four user communities (ENES, EPOS, CLARIN, VPH). These services shall be available to all communities in a production environment by 2014. Although EUDAT has initially focused on a subset of research communities, it aims to engage with other communities interested in adapting their solutions or contributing to the design of the infrastructure. Discussions with other research communities - belonging to the fields of environmental sciences, biomedical science, physics, social sciences and humanities - have already begun and are following a pattern similar to the one we adopted with the initial communities. The next step will consist of integrating representatives from these communities into the existing pilots and task forces so as to include them in the process of designing the services and, ultimately, shaping the future CDI.

  19. Airborne biological hazards and urban transport infrastructure: current challenges and future directions.

    PubMed

    Nasir, Zaheer Ahmad; Campos, Luiza Cintra; Christie, Nicola; Colbeck, Ian

    2016-08-01

    Exposure to airborne biological hazards in an ever-expanding urban transport infrastructure and highly diverse mobile population is of growing concern, in terms of both public health and biosecurity. The existing policies and practices on design, construction and operation of these infrastructures may have severe implications for airborne disease transmission, particularly in the event of a pandemic or intentional release of biological agents. This paper reviews existing knowledge on airborne disease transmission in different modes of transport, highlights the factors enhancing the vulnerability of transport infrastructures to airborne disease transmission, discusses the potential protection measures and identifies the research gaps in order to build a bioresilient transport infrastructure. The unification of security and public health research, inclusion of public health security concepts at the design and planning phase, and a holistic system approach involving all the stakeholders over the life cycle of transport infrastructure hold the key to mitigating the challenges posed by biological hazards in twenty-first-century transport infrastructure.

  20. The climate impacts of bioenergy systems depend on market and regulatory policy contexts.

    PubMed

    Lemoine, Derek M; Plevin, Richard J; Cohn, Avery S; Jones, Andrew D; Brandt, Adam R; Vergara, Sintana E; Kammen, Daniel M

    2010-10-01

    Biomass can help reduce greenhouse gas (GHG) emissions by displacing petroleum in the transportation sector, by displacing fossil-based electricity, and by sequestering atmospheric carbon. Which use mitigates the most emissions depends on market and regulatory contexts outside the scope of attributional life cycle assessments. We show that bioelectricity's advantage over liquid biofuels depends on the GHG intensity of the electricity displaced. Bioelectricity that displaces coal-fired electricity could reduce GHG emissions, but bioelectricity that displaces wind electricity could increase GHG emissions. The electricity displaced depends upon existing infrastructure and policies affecting the electric grid. These findings demonstrate how model assumptions about whether the vehicle fleet and bioenergy use are fixed or free parameters constrain the policy questions an analysis can inform. Our bioenergy life cycle assessment can inform questions about a bioenergy mandate's optimal allocation between liquid fuels and electricity generation, but questions about the optimal level of bioenergy use require analyses with different assumptions about fixed and free parameters.
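
    The displacement argument reduces to simple arithmetic. The lifecycle intensities below are illustrative placeholder values chosen for the sketch, not figures from the paper.

```python
# Net GHG effect of 1 kWh of bioelectricity = its own lifecycle intensity
# minus the intensity of whatever generation it displaces on the grid.
BIO = 0.23    # kg CO2e/kWh, assumed lifecycle intensity of bioelectricity
COAL = 1.0    # kg CO2e/kWh, illustrative coal-fired generation
WIND = 0.01   # kg CO2e/kWh, illustrative wind generation

net_vs_coal = BIO - COAL   # negative: emissions fall when coal is displaced
net_vs_wind = BIO - WIND   # positive: emissions rise when wind is displaced
print(net_vs_coal < 0 < net_vs_wind)   # True
```

    The sign of the net effect flips with the displaced resource, which is the abstract's point: the climate benefit is a property of the grid context, not of the biomass alone.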
