NASA Astrophysics Data System (ADS)
Hey, Tony
2002-08-01
After defining what is meant by the term 'e-Science', this talk will survey e-Science and Grid activity in Europe. The two largest initiatives in Europe are the European Commission's portfolio of Grid projects and the UK e-Science program. The EU, under its R&D Framework Program, is funding nearly twenty Grid projects in a wide variety of application areas. These projects are at varying stages of maturity, and this talk will focus on the subset that has made the most significant progress. This includes the EU DataGrid project led by CERN and two projects - EuroGrid and GRIP - that evolved from the German national UNICORE project. A summary of the other EU Grid projects will be included. The UK e-Science initiative is a 180M program entirely focused on e-Science applications requiring resource sharing, a virtual organization and a Grid infrastructure. The UK program is unique for three reasons: (1) the program covers all areas of science and engineering; (2) all of the funding is devoted to Grid application and middleware development, not to funding major hardware platforms; and (3) there is an explicit connection with industry to produce robust and secure industrial-strength versions of Grid middleware that could be used in business-critical applications. A part of the funding, around 50M, together with a required 'matching' $30M from industry in collaborative projects, forms the UK e-Science 'Core Program'. It is the responsibility of the Core Program to identify and support a set of generic middleware requirements that have emerged from a requirements analysis of the e-Science application projects. This has led to a much more data-centric vision of 'the Grid' in the UK, in which access to HPC facilities forms only one element. More important for the UK projects are issues such as enabling access to, and federation of, scientific data held in files, relational databases and other archives.
Automatic annotation of data generated by high throughput experiments with XML-based metadata is seen as a key step towards developing higher-level Grid services for information retrieval and knowledge discovery. The talk will conclude with a survey of other Grid initiatives across Europe and look at possible future European projects.
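The XML-based annotation step described here can be sketched in a few lines. The element names and the wrapper function below are illustrative assumptions for a hypothetical high-throughput experiment record, not a schema from the UK program:

```python
import xml.etree.ElementTree as ET

def annotate(sample_id, instrument, params):
    """Wrap a raw experiment record in a minimal XML metadata envelope.

    All element names here are hypothetical, for illustration only.
    """
    root = ET.Element("dataset")
    ET.SubElement(root, "sample").text = sample_id
    ET.SubElement(root, "instrument").text = instrument
    parameters = ET.SubElement(root, "parameters")
    for name, value in params.items():
        # One child element per acquisition parameter.
        ET.SubElement(parameters, name).text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_doc = annotate("S-0042", "microarray-scanner", {"exposure_ms": 120})
```

Metadata attached automatically at data-generation time, in this spirit, is what makes later Grid-level search and knowledge-discovery services possible.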
Spaceflight Operations Services Grid (SOSG) Project
NASA Technical Reports Server (NTRS)
Bradford, Robert; Lisotta, Anthony
2004-01-01
The motivation, goals, and objectives of the Space Operations Services Grid (SOSG) Project are covered in this viewgraph presentation. The goals and objectives of SOSG include: 1) Developing a grid-enabled prototype providing space-based ground operations end-user services, through a collaborative effort between NASA, academia, and industry, to assess the technical and cost feasibility of implementing Grid technologies in the space operations arena; 2) Providing to space operations organizations and processes, through a single secure portal(s), access to all the information technology (Grid and Web based) services necessary for program/project development, operations, and the ultimate creation of new processes, information and knowledge.
NREL: International Activities - Energy Access
…experience with off-grid solutions to support mini and microgrid projects, policies, and programs that are prohibitively expensive… Investment interest in mini and microgrids for energy access has been growing among… A Quality Assurance Framework (QAF) for mini-grids was developed to address the root challenges to providing…
DE-FG02-04ER25606 Identity Federation and Policy Management Guide: Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphrey, Marty A.
The goal of this 3-year project was to facilitate a more productive, dynamic matching between resource providers and resource consumers in Grid environments by explicitly specifying policies. Broadly, two problems were addressed by this project. First, there was a lack of an Open Grid Services Architecture (OGSA)-compliant mechanism for expressing, storing and retrieving user policies and Virtual Organization (VO) policies. Second, there was a lack of tools to resolve and enforce policies in OGSA. To address these problems, our overall approach in this project was to make all policies explicit (e.g., virtual organization policies, resource provider policies, resource consumer policies), thereby facilitating policy matching and policy negotiation. Policies defined on a per-user basis were created, held, and updated in MyPolMan, allowing a Grid user to centralize (where appropriate) and manage his/her policies. Organizationally, the corresponding service was VOPolMan, in which the policies of the Virtual Organization are expressed, managed, and dynamically consulted. Overall, we successfully defined, prototyped, and evaluated policy-based resource management and access control for OGSA-based Grids. This DOE project partially supported 17 peer-reviewed publications on a number of different topics: general security for Grids, credential management, Web services/OGSA/OGSI, policy-based grid authorization (for remote execution and for access to information), policy-directed Grid data movement/placement, policies for large-scale virtual organizations, and large-scale policy-aware grid architectures. In addition to supporting the PI, this project partially supported the training of 5 PhD students.
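The matching the report describes can be illustrated with a toy model: a per-user policy (as MyPolMan might hold) is combined with a VO policy (as VOPolMan might hold) by intersecting permissions and taking the stricter of any quantitative limits. The field names and the intersection rule are assumptions for illustration, not the project's actual policy language:

```python
def match_policies(user_policy, vo_policy):
    """A request is allowed only if both policies permit the action on the
    resource; the effective quota is the stricter of the two limits."""
    return {
        "actions": user_policy["actions"] & vo_policy["actions"],
        "resources": user_policy["resources"] & vo_policy["resources"],
        "max_cpu_hours": min(user_policy["max_cpu_hours"],
                             vo_policy["max_cpu_hours"]),
    }

# Hypothetical explicit policies for one user and one VO:
user = {"actions": {"submit", "read"}, "resources": {"clusterA"},
        "max_cpu_hours": 100}
vo = {"actions": {"submit"}, "resources": {"clusterA", "clusterB"},
      "max_cpu_hours": 500}
effective = match_policies(user, vo)
```

Making both sides explicit in some such form is what allows matching and negotiation to be automated rather than settled out-of-band.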
NASA Astrophysics Data System (ADS)
Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; Rocca, Giuseppe La; Maggi, Giorgio Pietro; Milanesi, Luciano; Vicario, Saverio
This paper describes the solution proposed by INFN to allow users who do not own a personal digital certificate, and therefore do not belong to any specific Virtual Organization (VO), to access Grid infrastructures via the GENIUS Grid portal enabled with robot certificates. Robot certificates, also known as portal certificates, are associated with a specific application that the user wants to share with the whole Grid community; they were recently introduced by the EUGridPMA (European Policy Management Authority for Grid Authentication) to perform automated tasks on Grids on behalf of users. They have proven extremely useful for automating grid service monitoring, data processing production, distributed data collection systems, etc. In this paper, robot certificates are used to allow bioinformaticians involved in the Italian LIBI project to perform large-scale phylogenetic analyses. The distributed environment set up in this work greatly simplifies Grid access for occasional users and represents a valuable step toward widening the community of Grid users.
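The trade-off robot certificates introduce can be seen in a small sketch: on the grid side every job runs under the single robot credential, so per-user attribution has to be kept by the portal itself. The class, field names, and certificate subject below are hypothetical, not the GENIUS portal's implementation:

```python
class RobotPortal:
    """Toy model of a portal that submits jobs under one shared robot
    certificate while keeping its own per-user audit trail."""

    def __init__(self, robot_dn):
        self.robot_dn = robot_dn  # distinguished name of the robot certificate
        self.audit = []           # portal-side user -> job mapping

    def submit(self, portal_user, application, job_id):
        # All jobs appear on the grid under the robot credential...
        grid_identity = self.robot_dn
        # ...so per-user attribution lives only in the portal's audit log.
        self.audit.append({"user": portal_user, "app": application,
                           "job": job_id, "grid_dn": grid_identity})
        return grid_identity

portal = RobotPortal("/C=IT/O=INFN/CN=Robot: phylogenetics portal")
dn = portal.submit("alice", "phylogenetic-analysis", "job-001")
```

This is why robot certificates are tied to a specific application rather than to a person: the portal, not the grid, remains responsible for knowing who asked for what.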
SARS Grid--an AG-based disease management and collaborative platform.
Hung, Shu-Hui; Hung, Tsung-Chieh; Juang, Jer-Nan
2006-01-01
This paper describes the development of the NCHC's Severe Acute Respiratory Syndrome (SARS) Grid project, an Access Grid (AG)-based disease management and collaborative platform that allowed SARS patients' medical data to be dynamically shared and discussed between hospitals and doctors using AG's video teleconferencing (VTC) capabilities. During the height of the SARS epidemic in Asia, SARS Grid and the SARShope website significantly curbed the spread of SARS by helping doctors manage the in-hospital and in-home care of quarantined SARS patients through medical data exchange and the monitoring of patients' symptoms. Now that the SARS epidemic has ended, the primary function of the SARS Grid project is that of a web-based informatics tool to increase public awareness of SARS and other epidemic diseases. Additionally, the SARS Grid project can be viewed and further studied as an outstanding model of epidemic disease prevention and/or containment.
The GridPP DIRAC project - DIRAC for non-LHC communities
NASA Astrophysics Data System (ADS)
Bauer, D.; Colling, D.; Currie, R.; Fayer, S.; Huffman, A.; Martyniak, J.; Rand, D.; Richards, A.
2015-12-01
The GridPP consortium in the UK is currently testing a multi-VO DIRAC service aimed at non-LHC VOs. These VOs (Virtual Organisations) are typically small and generally do not have a dedicated computing support post. The majority of these represent particle physics experiments (e.g. NA62 and COMET), although the scope of the DIRAC service is not limited to this field. A few VOs have designed bespoke tools around the EMI-WMS & LFC, while others have so far eschewed distributed resources as they perceive the overhead for accessing them to be too high. The aim of the GridPP DIRAC project is to provide an easily adaptable toolkit for such VOs in order to lower the threshold for access to distributed resources such as Grid and cloud computing. As well as hosting a centrally run DIRAC service, we will also publish our changes and additions to the upstream DIRAC codebase under an open-source license. We report on the current status of this project and show increasing adoption of DIRAC within the non-LHC communities.
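The kind of threshold-lowering toolkit described here can be sketched as a thin wrapper that fills in VO-specific defaults, so a small VO's users only state what is unique to their job. The field names, defaults, and site strings below are illustrative assumptions, not the actual DIRAC job description language:

```python
# Hypothetical per-VO defaults a multi-VO service might maintain centrally,
# so individual experimenters never have to learn them.
VO_DEFAULTS = {
    "na62": {"platform": "EL7", "sites": ["LCG.SiteA.uk"]},
    "comet": {"platform": "EL7", "sites": ["LCG.SiteB.uk"]},
}

def describe_job(vo, executable, arguments, extra=None):
    """Build a job description from the few fields a user cares about,
    layering VO defaults underneath and user overrides on top."""
    job = {"vo": vo, "executable": executable, "arguments": arguments}
    job.update(VO_DEFAULTS.get(vo, {}))
    job.update(extra or {})
    return job

job = describe_job("na62", "./run_mc.sh", "--events 1000")
```

The same idea underlies hosting the service centrally: the cost of learning the submission machinery is paid once by the consortium instead of once per VO.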
RGLite, an interface between ROOT and gLite—PROOF on the Grid
NASA Astrophysics Data System (ADS)
Malzacher, P.; Manafov, A.; Schwarz, K.
2008-07-01
Using the gLitePROOF package it is possible to perform PROOF-based distributed data analysis on the gLite Grid. The LHC experiments have managed to run globally distributed Monte Carlo productions on the Grid; now the development of tools for data analysis is in the foreground. To grant access, interfaces must be provided. The ROOT/PROOF framework is used as a starting point. Using abstract ROOT classes (TGrid, ...), interfaces can be implemented via which Grid access from ROOT can be accomplished. A concrete implementation exists for the ALICE Grid environment AliEn. Within the D-Grid project, an interface to gLite, the common Grid middleware of all LHC experiments, has been created. As a result, it is possible to query Grid file catalogues from ROOT for the location of the data to be analysed, and Grid jobs can be submitted into a gLite-based Grid. The status of the jobs can be queried, and their results can be obtained.
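The design described here, abstract ROOT classes with one concrete implementation per middleware, is the classic abstract-interface pattern. A Python stand-in for the C++ classes, where the method names are illustrative rather than the real TGrid API:

```python
from abc import ABC, abstractmethod

class Grid(ABC):
    """Abstract access class, in the spirit of ROOT's TGrid."""

    @abstractmethod
    def query_catalogue(self, pattern):
        """Return logical file names matching a pattern."""

class AliEnGrid(Grid):
    # Stand-in for the AliEn backend.
    def query_catalogue(self, pattern):
        return [f"alien://{pattern}/file1.root"]

class GLiteGrid(Grid):
    # Stand-in for the gLite backend added by the D-Grid project.
    def query_catalogue(self, pattern):
        return [f"lfn://{pattern}/file1.root"]

def locate(grid: Grid, pattern):
    # Analysis code depends only on the abstract interface, so the same
    # PROOF session could run against either middleware.
    return grid.query_catalogue(pattern)

files = locate(GLiteGrid(), "/grid/alice/data")
```

Swapping AliEnGrid for GLiteGrid changes no analysis code, which is exactly what makes adding a new middleware backend tractable.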
Modelling noise propagation using Grid Resources. Progress within GDI-Grid
NASA Astrophysics Data System (ADS)
Kiehle, Christian; Mayer, Christian; Padberg, Alexander; Stapelfeld, Hartmut
2010-05-01
GDI-Grid (English: SDI-Grid) is a research project funded by the German Ministry for Science and Education (BMBF). It aims at bridging the gaps between OGC Web Services (OWS) and Grid infrastructures, and at identifying the potential of utilizing the superior storage capacities and computational power of grid infrastructures for geospatial applications while keeping the well-known service interfaces specified by the OGC. The project considers all major OGC web service interfaces: web mapping (WMS), feature access (Web Feature Service), coverage access (Web Coverage Service) and processing (Web Processing Service). The major challenge within GDI-Grid is the harmonization of diverging standards as defined by standardization bodies for Grid computing and spatial information exchange. The project started in 2007 and will continue until June 2010. The concept for the gridification of OWS developed by lat/lon GmbH and the Department of Geography of the University of Bonn is applied to three real-world scenarios in order to check its practicability: a flood simulation, a scenario for emergency routing, and a noise propagation simulation. The latter scenario is addressed by Stapelfeldt Ingenieurgesellschaft mbH, located in Dortmund, which is adapting its LimA software to utilize grid resources. Noise mapping of, e.g., traffic noise in urban agglomerations and along major trunk roads is a recurring demand of the EU Noise Directive. The input data comprises the road network and traffic, terrain, buildings and noise protection screens, as well as population distribution. Noise impact levels are generally calculated on a 10 m grid and along relevant building facades. For each receiver position, sources within a typical range of 2000 m are split into small segments, depending on local geometry. For each segment, the propagation analysis includes diffraction effects caused by all obstacles on the sound propagation path.
This computationally intensive calculation needs to be performed for a major part of the European landscape. A Linux version of the commercial LimA software for noise mapping analysis has been implemented on a test cluster within the German D-Grid computer network. Results and performance indicators will be presented. The presentation is an extension of last year's presentation, "Spatial Data Infrastructures and Grid Computing: the GDI-Grid project", which described the gridification concept developed in the GDI-Grid project and provided an overview of the conceptual gaps between Grid computing and spatial data infrastructures. Results from the GDI-Grid project are incorporated into the OGC-OGF (Open Grid Forum) collaboration efforts as well as the OGC WPS 2.0 standards working group, which is developing the next major version of the WPS specification.
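The per-receiver computation described above combines many per-segment contributions, and sound levels in dB must be combined by energetic summation rather than added arithmetically. A minimal sketch with a bare geometric-spreading term; real LimA runs additionally model diffraction, screening and ground effects:

```python
import math

def free_field_level(source_db, distance_m):
    """Point-source geometric spreading: -20*log10(r) relative to 1 m."""
    return source_db - 20 * math.log10(distance_m)

def level_at_receiver(segment_levels_db):
    """Energetic sum of segment contributions:
    L = 10*log10(sum of 10^(Li/10))."""
    energy = sum(10 ** (level / 10) for level in segment_levels_db)
    return 10 * math.log10(energy)

# A 90 dB source heard at 10 m loses 20 dB to spreading alone.
attenuated = free_field_level(90.0, 10.0)

# Two equal 60 dB contributions combine to about 63 dB, not 120 dB.
combined = level_at_receiver([60.0, 60.0])
```

The cost the abstract alludes to follows directly from this structure: every receiver on the 10 m grid sums over every segment within 2000 m, which is why the workload maps naturally onto grid resources.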
Earth Science community support in the EGI-Inspire Project
NASA Astrophysics Data System (ADS)
Schwichtenberg, H.
2012-04-01
For ten years, the Earth Science Grid community has been following its strategy of propagating Grid technology to the ES disciplines, setting up interactive collaboration among the members of the community, and stimulating the interest of stakeholders at the political level. This strategy was described in a roadmap published in the Earth Science Informatics journal. It was applied through different European Grid projects and led to a large Grid Earth Science VRC that covers a variety of ES disciplines; in the end, all of them were facing the same kinds of ICT problems. The penetration of Grid in the ES community is indicated by the variety of applications, the number of countries in which ES applications are ported, the number of papers in international journals, and the number of related PhDs. Among the six virtual organisations belonging to ES, one, ESR, is generic. Three others - env.see-grid-sci.eu, meteo.see-grid-sci.eu and seismo.see-grid-sci.eu - are thematic and regional (South Eastern Europe), covering environment, meteorology and seismology. Another VO, EGEODE, is for the users of the Geocluster software. There are also ES users in national VOs or in VOs related to projects. The services for the ES task in EGI-Inspire concern the data that are a key part of any ES application. The ES community requires several interfaces to access data and metadata outside of the EGI infrastructure, e.g. by using grid-enabled database interfaces. The data centres have also developed service tools for basic research activities such as searching, browsing and downloading these datasets, but these are not accessible from applications executed on the Grid. The ES task in EGI-Inspire aims to make these tools accessible from the Grid.
In collaboration with GENESI-DR (Ground European Network for Earth Science Interoperations - Digital Repositories), this task is maintaining and evolving an interface, in response to new requirements, that will allow data in the GENESI-DR infrastructure to be accessed from EGI resources to enable future research activities by this HUC. The international climate community, for the IPCC, has created the Earth System Grid (ESG) to store and share climate data. There is a need to interface ESG with EGI for climate studies - parametric, regional and impact aspects. Critical points concern the interoperability of security mechanisms between both "organisations", data protection policy, data transfer, data storage and data caching. Presenter: Horst Schwichtenberg. Co-Authors: Monique Petitdidier (IPSL), Andre Gemünd (SCAI), Wim Som de Cerff (KNMI), Michael Schnell (SCAI)
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Laszewski, G.; Foster, I.; Gawor, J.
In this paper we report on the features of the Java Commodity Grid Kit. The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus protocols, allowing the Java CoG Kit to communicate also with the C Globus reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise, and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus software. In this paper we also report on the efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure-Java resource management system that enables one to run Globus jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.
Grid-supported Medical Digital Library.
Kosiedowski, Michal; Mazurek, Cezary; Stroinski, Maciej; Weglarz, Jan
2007-01-01
Secure, flexible and efficient storage of, and access to, digital medical data is one of the key elements for delivering successful telemedical systems. To this end, the grid technologies designed and developed over recent years, and the grid infrastructures deployed with their use, seem to provide an excellent opportunity for the creation of a powerful environment capable of delivering tools and services for medical data storage, access and processing. In this paper we present the early results of our work towards establishing a Medical Digital Library supported by grid technologies and discuss future directions of its development. This work is part of the "Telemedycyna Wielkopolska" project, which aims to develop a telemedical system for the support of regional healthcare.
Integration of advanced technologies to enhance problem-based learning over distance: Project TOUCH.
Jacobs, Joshua; Caudell, Thomas; Wilks, David; Keep, Marcus F; Mitchell, Steven; Buchanan, Holly; Saland, Linda; Rosenheimer, Julie; Lozanoff, Beth K; Lozanoff, Scott; Saiki, Stanley; Alverson, Dale
2003-01-01
Distance education delivery has increased dramatically in recent years as a result of the rapid advancement of communication technology. The National Computational Science Alliance's Access Grid represents a significant advancement in communication technology with potential for distance medical education. The purpose of this study is to provide an overview of the TOUCH project (Telehealth Outreach for Unified Community Health; http://hsc.unm.edu/touch), with special emphasis on the process of problem-based learning case development for distribution over the Access Grid. The objective of the TOUCH project is to use emerging Internet-based technology to overcome geographic barriers to the delivery of tutorial sessions to medical students pursuing rotations at remote sites. The TOUCH project is also aimed at developing a patient simulation engine and an immersive virtual reality environment to achieve a realistic health care scenario that enhances the learning experience. A traumatic head injury case is developed and distributed over the Access Grid as a demonstration of the TOUCH system. Project TOUCH serves as an example of a computer-based learning system for developing and implementing problem-based learning cases within the medical curriculum, but this system should be easily applicable to other educational environments and disciplines involving functional and clinical anatomy. Future phases will explore PC versions of the TOUCH cases for increased distribution. Copyright 2003 Wiley-Liss, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Laszewski, G.; Gawor, J.; Lane, P.
In this paper we report on the features of the Java Commodity Grid Kit (Java CoG Kit). The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus Toolkit protocols, allowing the Java CoG Kit to also communicate with the services distributed as part of the C Globus Toolkit reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus Toolkit software. In this paper we also report on the efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure-Java resource management system that enables one to run Grid jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.
…techno-economic studies of model projects involving grid-tied and off-grid implementations of renewable power systems… Research Interests: power sector transformation in diverse socio-economic systems; tailoring energy access for remote communities to their economic aspirations; concepts of societal…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-05
... integrated U.S. natural gas pipeline system. GLLC notes that due to the Gulf LNG Terminal's direct access to multiple major interstate pipelines and indirect access to the national gas pipeline grid, the Project's... possible impacts that the Export Project might have on natural gas supply and pricing. Navigant's analysis...
Progress on the Fabric for Frontier Experiments Project at Fermilab
NASA Astrophysics Data System (ADS)
Box, Dennis; Boyd, Joseph; Dykstra, Dave; Garzoglio, Gabriele; Herner, Kenneth; Kirby, Michael; Kreymer, Arthur; Levshina, Tanya; Mhashilkar, Parag; Sharma, Neha
2015-12-01
The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high-throughput computing, data management, database access, and collaboration within experiments. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating several services into experiment computing operations, including new job submission services, software and reference data distribution through CVMFS repositories, a flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion to additional projects will be discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, has aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high-throughput computing for future physics experiments worldwide.
Data management and analysis for the Earth System Grid
NASA Astrophysics Data System (ADS)
Williams, D. N.; Ananthakrishnan, R.; Bernholdt, D. E.; Bharathi, S.; Brown, D.; Chen, M.; Chervenak, A. L.; Cinquini, L.; Drach, R.; Foster, I. T.; Fox, P.; Hankin, S.; Henson, V. E.; Jones, P.; Middleton, D. E.; Schwidder, J.; Schweitzer, R.; Schuler, R.; Shoshani, A.; Siebenlist, F.; Sim, A.; Strand, W. G.; Wilhelmi, N.; Su, M.
2008-07-01
The international climate community is expected to generate hundreds of petabytes of simulation data within the next five to seven years. This data must be accessed and analyzed by thousands of analysts worldwide in order to provide accurate and timely estimates of the likely impact of climate change on physical, biological, and human systems. Climate change is thus not only a scientific challenge of the first order but also a major technological challenge. In order to address this technological challenge, the Earth System Grid Center for Enabling Technologies (ESG-CET) has been established within the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC)-2 program, with support from the offices of Advanced Scientific Computing Research and Biological and Environmental Research. ESG-CET's mission is to provide climate researchers worldwide with access to the data, information, models, analysis tools, and computational capabilities required to make sense of enormous climate simulation datasets. Its specific goals are to (1) make data more useful to climate researchers by developing Grid technology that enhances data usability; (2) meet specific distributed database, data access, and data movement needs of national and international climate projects; (3) provide a universal and secure web-based data access portal for broad multi-model data collections; and (4) provide a wide range of Grid-enabled climate data analysis tools and diagnostic methods to international climate centers and U.S. government agencies. Building on the successes of the previous Earth System Grid (ESG) project, which has enabled thousands of researchers to access tens of terabytes of data from a small number of ESG sites, ESG-CET is working to integrate a far larger number of distributed data providers, high-bandwidth wide-area networks, and remote computers in a highly collaborative problem-solving environment.
Life Cycle Assessment of Solar Photovoltaic Microgrid Systems in Off-Grid Communities.
Bilich, Andrew; Langham, Kevin; Geyer, Roland; Goyal, Love; Hansen, James; Krishnan, Anjana; Bergesen, Joseph; Sinha, Parikhit
2017-01-17
Access to a reliable source of electricity creates significant benefits for developing communities. Smaller versions of electricity grids, known as microgrids, have been developed as a solution to energy access problems. Using attributional life cycle assessment, this project evaluates the environmental and energy impacts of three photovoltaic (PV) microgrids compared to other energy options for a model village in Kenya. When normalized per kilowatt hour of electricity consumed, PV microgrids, particularly PV-battery systems, have lower impacts than other energy access solutions in climate change, particulate matter, photochemical oxidants, and terrestrial acidification. When compared to small-scale diesel generators, PV-battery systems save 94-99% in the above categories. When compared to the marginal electricity grid in Kenya, PV-battery systems save 80-88%. Contribution analysis suggests that electricity and primary metal use during component manufacturing, particularly battery manufacturing, are the largest contributors to overall PV-battery microgrid impacts. Accordingly, additional savings could be seen from changing battery manufacturing location and ensuring end-of-life recycling. Overall, this project highlights the potential for PV microgrids to be feasible, adaptable, long-term energy access solutions, with health and environmental advantages compared to traditional electrification options.
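The reported percentage savings follow from a simple normalization: impacts are expressed per kilowatt hour consumed and then compared against the alternative. The impact numbers below are made up for illustration; the paper's figures come from the full life-cycle inventory:

```python
def saving_percent(impact_pv, impact_alt):
    """Percentage reduction of the PV system relative to the alternative,
    with both impacts already normalized per kWh consumed."""
    return 100 * (1 - impact_pv / impact_alt)

# Hypothetical climate-change impacts in kg CO2-eq per kWh consumed:
pv_battery = 0.05
diesel_generator = 1.25
saving = saving_percent(pv_battery, diesel_generator)
```

Normalizing per kWh consumed (rather than generated) matters because battery losses and oversizing differ between the options being compared.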
Enhancing Discovery, Search, and Access of NASA Hydrological Data by Leveraging GEOSS
NASA Technical Reports Server (NTRS)
Teng, William L.
2015-01-01
An ongoing NASA-funded project has removed a longstanding barrier to accessing NASA data (i.e., accessing archived time-step array data as point-time series) for selected variables of the North American and Global Land Data Assimilation Systems (NLDAS and GLDAS, respectively) and other EOSDIS (Earth Observing System Data and Information System) data sets (e.g., precipitation, soil moisture). These time series ("data rods") are pre-generated. Data rods Web services are accessible through the CUAHSI Hydrologic Information System (HIS) and the Goddard Earth Sciences Data and Information Services Center (GES DISC) but are not easily discoverable by users of other, non-NASA data systems. The Global Earth Observation System of Systems (GEOSS) is a logical mechanism for providing access to the data rods. An ongoing GEOSS Water Services project aims to develop a distributed, global registry of water data, map, and modeling services, cataloged using the standards and procedures of the Open Geospatial Consortium and the World Meteorological Organization. The data rods project has demonstrated the feasibility of leveraging the GEOSS infrastructure to help provide access to time series of model grid information, or to grids of information over a geographical domain for a particular time interval. A recently begun, related NASA-funded ACCESS-GEOSS project expands on these prior efforts. Current work is focused on improving both the performance of on-the-fly (OTF) data rods generation and the Web interfaces from which users can easily discover, search, and access NASA data.
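The essence of the data-rods reorganization is a transposition: archives store one spatial grid per time step, while hydrology users want one grid point across all time steps. A toy sketch of that reshaping (real NLDAS/GLDAS grids are far larger, and the production rods are pre-generated rather than built on demand):

```python
def to_data_rods(time_step_grids):
    """Transpose {time: {(row, col): value}} into
    {(row, col): [(time, value), ...]} so one point's full history is
    a single lookup instead of one read per archived time step."""
    rods = {}
    for time in sorted(time_step_grids):
        for point, value in time_step_grids[time].items():
            rods.setdefault(point, []).append((time, value))
    return rods

# Two tiny daily grids with two points each:
grids = {
    "2015-01-01": {(0, 0): 1.2, (0, 1): 0.7},
    "2015-01-02": {(0, 0): 1.5, (0, 1): 0.9},
}
rods = to_data_rods(grids)
```

The access-pattern mismatch this fixes is the "longstanding barrier" of the abstract: a multi-year point query against time-step archives touches thousands of files, while a rod answers it in one.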
mantisGRID: a grid platform for DICOM medical images management in Colombia and Latin America.
Garcia Ruiz, Manuel; Garcia Chaves, Alvin; Ruiz Ibañez, Carlos; Gutierrez Mazo, Jorge Mario; Ramirez Giraldo, Juan Carlos; Pelaez Echavarria, Alejandro; Valencia Diaz, Edison; Pelaez Restrepo, Gustavo; Montoya Munera, Edwin Nelson; Garcia Loaiza, Bernardo; Gomez Gonzalez, Sebastian
2011-04-01
This paper presents the mantisGRID project, an interinstitutional initiative from Colombian medical and academic centers that aims to provide medical grid services for Colombia and Latin America. mantisGRID is a grid platform, based on open-source grid infrastructure, that provides the necessary services to access and exchange medical images and associated information following the digital imaging and communications in medicine (DICOM) and Health Level 7 standards. The paper focuses first on the data abstraction architecture, which is achieved via Open Grid Services Architecture Data Access and Integration (OGSA-DAI) services and supported by the Globus Toolkit. The grid currently uses a 30-Mb bandwidth of the Colombian High Technology Academic Network, RENATA, connected to Internet2. The paper also includes a discussion of the relational database created to handle the DICOM objects, which were represented using Extensible Markup Language Schema documents, as well as of other features implemented, such as data security, user authentication, and patient confidentiality. Grid performance was tested using the three currently operative nodes, and the results demonstrated comparable query times between the mantisGRID (OGSA-DAI) and distributed MySQL databases, especially for a large number of records.
Grist : grid-based data mining for astronomy
NASA Technical Reports Server (NTRS)
Jacob, Joseph C.; Katz, Daniel S.; Miller, Craig D.; Walia, Harshpreet; Williams, Roy; Djorgovski, S. George; Graham, Matthew J.; Mahabal, Ashish; Babu, Jogesh; Berk, Daniel E. Vanden;
2004-01-01
The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the 'hyperatlas' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.
Grist: Grid-based Data Mining for Astronomy
NASA Astrophysics Data System (ADS)
Jacob, J. C.; Katz, D. S.; Miller, C. D.; Walia, H.; Williams, R. D.; Djorgovski, S. G.; Graham, M. J.; Mahabal, A. A.; Babu, G. J.; vanden Berk, D. E.; Nichol, R.
2005-12-01
The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the 'hyperatlas' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.
Progress on the FabrIc for Frontier Experiments project at Fermilab
Box, Dennis; Boyd, Joseph; Dykstra, Dave; ...
2015-12-23
The FabrIc for Frontier Experiments (FIFE) project is an ambitious, major-impact initiative within the Fermilab Scientific Computing Division designed to lead the computing model for Fermilab experiments. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying needs and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of Open Science Grid solutions for high throughput computing, data management, database access, and collaboration within experiments. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating several services into experiment computing operations, including new job submission services, software and reference data distribution through CVMFS repositories, a flexible data transfer client, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion to additional projects are discussed. FIFE has taken a leading role in the definition of the computing model for Fermilab experiments, has aided in the design of computing for experiments beyond Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.
Sustainable Energy in Remote Indonesian Grids. Accelerating Project Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirsch, Brian; Burman, Kari; Davidson, Carolyn
2015-06-30
Sustainable Energy for Remote Indonesian Grids (SERIG) is a U.S. Department of Energy (DOE) funded initiative to support Indonesia’s efforts to develop clean energy and increase access to electricity in remote locations throughout the country. With DOE support, the SERIG implementation team consists of the National Renewable Energy Laboratory (NREL) and Winrock International’s Jakarta, Indonesia office. Through technical assistance that includes techno-economic feasibility evaluation for selected projects, government-to-government coordination, infrastructure assessment, stakeholder outreach, and policy analysis, SERIG seeks to provide opportunities for individual project development and a collective framework for national replication.
Accessing eSDO Solar Image Processing and Visualization through AstroGrid
NASA Astrophysics Data System (ADS)
Auden, E.; Dalla, S.
2008-08-01
The eSDO project is funded by the UK's Science and Technology Facilities Council (STFC) to integrate Solar Dynamics Observatory (SDO) data, algorithms, and visualization tools with the UK's Virtual Observatory project, AstroGrid. In preparation for the SDO launch in January 2009, the eSDO team has developed nine algorithms covering coronal behaviour, feature recognition, and global / local helioseismology. Each of these algorithms has been deployed as an AstroGrid Common Execution Architecture (CEA) application so that they can be included in complex VO workflows. In addition, the PLASTIC-enabled eSDO "Streaming Tool" online movie application allows users to search multi-instrument solar archives through AstroGrid web services and visualise the image data through galleries, an interactive movie viewing applet, and QuickTime movies generated on-the-fly.
The QuakeSim Project: Web Services for Managing Geophysical Data and Applications
NASA Astrophysics Data System (ADS)
Pierce, Marlon E.; Fox, Geoffrey C.; Aktas, Mehmet S.; Aydin, Galip; Gadgil, Harshawardhan; Qi, Zhigang; Sayar, Ahmet
2008-04-01
We describe our distributed systems research efforts to build the “cyberinfrastructure” components that constitute a geophysical Grid, or more accurately, a Grid of Grids. Service-oriented computing principles are used to build a distributed infrastructure of Web accessible components for accessing data and scientific applications. Our data services fall into two major categories: Archival, database-backed services based around Geographical Information System (GIS) standards from the Open Geospatial Consortium, and streaming services that can be used to filter and route real-time data sources such as Global Positioning System data streams. Execution support services include application execution management services and services for transferring remote files. These data and execution service families are bound together through metadata information and workflow services for service orchestration. Users may access the system through the QuakeSim scientific Web portal, which is built using a portlet component approach.
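The streaming data services described above filter and route real-time sources such as GPS streams. A toy version of that filtering step is sketched below; the field names and threshold are illustrative assumptions, not QuakeSim's actual message format.

```python
# Pass through only GPS epochs whose displacement exceeds a threshold,
# the kind of filter a streaming service might apply before routing
# data onward to analysis applications.
def displacement_filter(stream, threshold_mm):
    """Yield only epochs with displacement above threshold_mm."""
    for epoch in stream:
        if epoch["disp_mm"] > threshold_mm:
            yield epoch

epochs = [
    {"station": "P395", "t": 0, "disp_mm": 0.4},
    {"station": "P395", "t": 1, "disp_mm": 6.2},   # possible transient
    {"station": "P395", "t": 2, "disp_mm": 0.3},
]
flagged = list(displacement_filter(epochs, threshold_mm=5.0))
print([e["t"] for e in flagged])  # [1]
```

Because the filter is a generator, it can be chained with other filters and routing steps without buffering the whole stream, which matches the filter-and-route architecture the abstract describes.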
Privacy protection in HealthGrid: distributing encryption management over the VO.
Torres, Erik; de Alfonso, Carlos; Blanquer, Ignacio; Hernández, Vicente
2006-01-01
Grid technologies have proven to be very successful in tackling challenging problems in which data access and processing is a bottleneck. Notwithstanding the benefits that Grid technologies could bring to health applications, privacy leakage in current DataGrid technologies, due to the sharing of data in VOs and the use of remote resources, hinders their widespread adoption. Privacy control for Grid technology has become a key requirement for the adoption of Grids in the healthcare sector. Encrypted storage of confidential data effectively reduces the risk of disclosure. A self-enforcing scheme for encrypted data storage can be achieved by combining Grid security systems with distributed key management and classical cryptography techniques. Virtual Organizations, as the main unit of user management in Grids, can provide a way to organize key sharing, access control lists, and secure encryption management. This paper provides programming models and discusses the value, costs, and behavior of such a system implemented on top of one of the latest Grid middlewares. This work is partially funded by the Spanish Ministry of Science and Technology in the frame of the project Investigación y Desarrollo de Servicios GRID: Aplicación a Modelos Cliente-Servidor, Colaborativos y de Alta Productividad, with reference TIC2003-01318.
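The idea of distributing encryption management over the VO can be illustrated with the simplest possible building block: an n-of-n XOR secret sharing of a data-encryption key, so that no single grid node holds the key alone. This is a minimal sketch of the concept only, not the paper's actual protocol (a production HealthGrid would use threshold cryptography and proper key servers).

```python
import secrets

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split `key` into n shares; all n are required to reconstruct it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original key."""
    key = bytes(len(shares[0]))
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

dek = secrets.token_bytes(16)   # data-encryption key for one medical record
shares = split_key(dek, 3)      # e.g. one share per VO key-management node
assert combine(shares) == dek   # only the full set of shares recovers the key
```

Any proper subset of shares is statistically independent of the key, so a compromised storage node learns nothing about the encrypted medical data it hosts.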
The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications
NASA Technical Reports Server (NTRS)
Johnston, William E.
2002-01-01
With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.
Introducing MCgrid 2.0: Projecting cross section calculations on grids
NASA Astrophysics Data System (ADS)
Bothmann, Enrico; Hartland, Nathan; Schumann, Steffen
2015-11-01
MCgrid is a software package that provides access to interpolation tools for Monte Carlo event generator codes, allowing for the fast and flexible variation of scales, coupling parameters and PDFs in cutting edge leading- and next-to-leading-order QCD calculations. We present the upgrade to version 2.0 which has a broader scope of interfaced interpolation tools, now providing access to fastNLO, and features an approximated treatment for the projection of MC@NLO-type calculations onto interpolation grids. MCgrid 2.0 also now supports the extended information provided through the HepMC event record used in the recent SHERPA version 2.2.0. The additional information provided therein allows for the support of multi-jet merged QCD calculations in a future update of MCgrid.
A Computing Infrastructure for Supporting Climate Studies
NASA Astrophysics Data System (ADS)
Yang, C.; Bambacus, M.; Freeman, S. M.; Huang, Q.; Li, J.; Sun, M.; Xu, C.; Wojcik, G. S.; Cahalan, R. F.; NASA Climate @ Home Project Team
2011-12-01
Climate change is one of the major challenges facing us in the 21st century. Scientists build many models to simulate the past and predict climate change over the next decades or century. Most of the models run at low resolution, with some targeting high resolution in linkage to practical climate change preparedness. To calibrate and validate the models, millions of model runs are needed to find the best simulation and configuration. This paper introduces the NASA Climate@Home project, which builds a virtual supercomputer based on advanced computing technologies such as cloud computing and grid computing. The Climate@Home computing infrastructure includes several aspects: 1) a cloud computing platform is utilized to manage potential spikes in access to the centralized components, such as the grid computing server for dispatching model runs and collecting results; 2) a grid computing engine is developed based on MapReduce to dispatch models and model configurations, and to collect simulation results and contribution statistics; 3) a portal serves as the entry point to the project, providing management, sharing, and data exploration for end users; 4) scientists can access customized tools to configure model runs and visualize model results; 5) the public can follow the project on Twitter and Facebook for the latest updates. This paper will introduce the latest progress of the project and demonstrate the operational system during the AGU fall meeting. It will also discuss how this technology can become a trailblazer for other climate studies and relevant sciences, and share how the challenges in computation and software integration were solved.
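The dispatch-and-collect pattern of a MapReduce-style grid engine can be sketched in a few lines. The scoring function and configuration fields below are made-up stand-ins for a real climate model run; only the map (dispatch runs) / reduce (collect the best result) structure reflects the architecture described above.

```python
from functools import reduce

def run_model(config):
    """Stand-in for one climate model run: return (config, misfit score)."""
    # Pretend the misfit against observations is minimized near 3.0.
    return config, abs(config["sensitivity"] - 3.0)

configs = [{"sensitivity": s} for s in (1.5, 2.5, 3.25, 4.5)]
results = map(run_model, configs)                              # dispatch phase
best = reduce(lambda a, b: a if a[1] <= b[1] else b, results)  # collect phase
print(best[0])  # {'sensitivity': 3.25}
```

In the real system each `run_model` call would be a job dispatched to a volunteer or grid node, and the reduce step would run on the central server that aggregates returned results.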
Smart Grid Demonstration Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Craig; Carroll, Paul; Bell, Abigail
The National Rural Electric Cooperative Association (NRECA) organized the NRECA-U.S. Department of Energy (DOE) Smart Grid Demonstration Project (DE-OE0000222) to install and study a broad range of advanced smart grid technologies in a demonstration that spanned 23 electric cooperatives in 12 states. More than 205,444 pieces of electronic equipment and more than 100,000 minor items (brackets, labels, mounting hardware, fiber optic cable, etc.) were installed to upgrade and enhance the efficiency, reliability, and resiliency of the power networks at the participating co-ops. The objective of this project was to build a path for other electric utilities, and particularly electrical cooperatives, to adopt emerging smart grid technology when it can improve utility operations, thus advancing the co-ops’ familiarity and comfort with such technology. Specifically, the project executed multiple subprojects employing a range of emerging smart grid technologies to test their cost-effectiveness and, where the technology demonstrated value, provided case studies that will enable other electric utilities, particularly electric cooperatives, to use these technologies. NRECA structured the project according to the following three areas: demonstration of smart grid technology; advancement of standards to enable the interoperability of components; and improvement of grid cyber security. We termed these three areas Technology Deployment Study, Interoperability, and Cyber Security. Although the deployment of technology and studying the demonstration projects at co-ops accounted for the largest portion of the project budget by far, we see our accomplishments in each of the areas as critical to advancing the smart grid. All project deliverables have been published. Technology Deployment Study: The deliverable was a set of 11 single-topic technical reports in areas related to the listed technologies.
Each of these reports has already been submitted to DOE, distributed to co-ops, and posted for universal access at www.nreca.coop/smartgrid. This research is available for widespread distribution to both cooperative members and non-members. These reports are listed in Table 1.2. Interoperability: The deliverable in this area was the advancement of the MultiSpeak™ interoperability standard from version 4.0 to version 5.0, and improvement of the MultiSpeak™ documentation to include more than 100 use cases. This deliverable substantially expanded the scope and usability of MultiSpeak™, the most widely deployed utility interoperability standard, now in use by more than 900 utilities. MultiSpeak™ documentation can be accessed at www.multispeak.org. Cyber Security: NRECA's starting point was to develop cyber security tools that incorporate succinct guidance on best practices. The deliverables were: cyber security extensions to MultiSpeak™, which allow more secure message exchanges; a Guide to Developing a Cyber Security and Risk Mitigation Plan; a Cyber Security Risk Mitigation Checklist; a Cyber Security Plan Template that co-ops can use to create their own cyber security plans; and Security Questions for Smart Grid Vendors.
THE VIRTUAL INSTRUMENT: SUPPORT FOR GRID-ENABLED MCELL SIMULATIONS
Casanova, Henri; Berman, Francine; Bartol, Thomas; Gokcay, Erhan; Sejnowski, Terry; Birnbaum, Adam; Dongarra, Jack; Miller, Michelle; Ellisman, Mark; Faerman, Marcio; Obertelli, Graziano; Wolski, Rich; Pomerantz, Stuart; Stiles, Joel
2010-01-01
Ensembles of widely distributed, heterogeneous resources, or Grids, have emerged as popular platforms for large-scale scientific applications. In this paper we present the Virtual Instrument project, which provides an integrated application execution environment that enables end-users to run and interact with running scientific simulations on Grids. This work is performed in the specific context of MCell, a computational biology application. While MCell provides the basis for running simulations, its capabilities are currently limited in terms of scale, ease-of-use, and interactivity. These limitations preclude usage scenarios that are critical for scientific advances. Our goal is to create a scientific “Virtual Instrument” from MCell by allowing its users to transparently access Grid resources while being able to steer running simulations. In this paper, we motivate the Virtual Instrument project and discuss a number of relevant issues and accomplishments in the area of Grid software development and application scheduling. We then describe our software design and report on the current implementation. We verify and evaluate our design via experiments with MCell on a real-world Grid testbed. PMID:20689618
The BioGRID interaction database: 2013 update.
Chatr-Aryamontri, Andrew; Breitkreutz, Bobby-Joe; Heinicke, Sven; Boucher, Lorrie; Winter, Andrew; Stark, Chris; Nixon, Julie; Ramage, Lindsay; Kolas, Nadine; O'Donnell, Lara; Reguly, Teresa; Breitkreutz, Ashton; Sellam, Adnane; Chen, Daici; Chang, Christie; Rust, Jennifer; Livstone, Michael; Oughtred, Rose; Dolinski, Kara; Tyers, Mike
2013-01-01
The Biological General Repository for Interaction Datasets (BioGRID: http://thebiogrid.org) is an open access archive of genetic and protein interactions that are curated from the primary biomedical literature for all major model organism species. As of September 2012, BioGRID houses more than 500 000 manually annotated interactions from more than 30 model organisms. BioGRID maintains complete curation coverage of the literature for the budding yeast Saccharomyces cerevisiae, the fission yeast Schizosaccharomyces pombe and the model plant Arabidopsis thaliana. A number of themed curation projects in areas of biomedical importance are also supported. BioGRID has established collaborations and/or shares data records for the annotation of interactions and phenotypes with most major model organism databases, including Saccharomyces Genome Database, PomBase, WormBase, FlyBase and The Arabidopsis Information Resource. BioGRID also actively engages with the text-mining community to benchmark and deploy automated tools to expedite curation workflows. BioGRID data are freely accessible through both a user-defined interactive interface and in batch downloads in a wide variety of formats, including PSI-MI 2.5 and tab-delimited files. BioGRID records can also be interrogated and analyzed with a series of new bioinformatics tools, which include a post-translational modification viewer, a graphical viewer, a REST service and a Cytoscape plugin.
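Working with the tab-delimited batch downloads mentioned above amounts to parsing rows of interactor pairs. The two-column layout below is a deliberate simplification for illustration; the real BioGRID files carry many more columns (identifiers, experimental systems, evidence codes), as documented on thebiogrid.org.

```python
import csv
import io

# A miniature BioGRID-style tab-delimited file, inlined for the example.
data = io.StringIO(
    "INTERACTOR_A\tINTERACTOR_B\n"
    "CDC28\tCLN2\n"
    "CDC28\tCLB5\n"
)
reader = csv.DictReader(data, delimiter="\t")
# Collect all recorded partners of the yeast kinase CDC28.
partners = [row["INTERACTOR_B"] for row in reader
            if row["INTERACTOR_A"] == "CDC28"]
print(partners)  # ['CLN2', 'CLB5']
```

The same loop works unchanged on a real download by replacing the `StringIO` object with an open file handle.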
Surfer: An Extensible Pull-Based Framework for Resource Selection and Ranking
NASA Technical Reports Server (NTRS)
Zolano, Paul Z.
2004-01-01
Grid computing aims to connect large numbers of geographically and organizationally distributed resources to increase computational power, resource utilization, and resource accessibility. In order to effectively utilize grids, users need to be connected to the best available resources at any given time. As grids are in constant flux, users cannot be expected to keep up with the configuration and status of the grid, thus they must be provided with automatic resource brokering for selecting and ranking resources meeting constraints and preferences they specify. This paper presents a new OGSI-compliant resource selection and ranking framework called Surfer that has been implemented as part of NASA's Information Power Grid (IPG) project. Surfer is highly extensible and may be integrated into any grid environment by adding information providers knowledgeable about that environment.
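The constraint-then-preference brokering described above can be sketched as a filter followed by a sort. The resource attributes and the preference metric below are illustrative assumptions, not Surfer's actual information-provider schema.

```python
# Hypothetical resource records as an information provider might report them.
resources = [
    {"name": "hostA", "cpus": 64, "mem_gb": 256, "load": 0.9},
    {"name": "hostB", "cpus": 32, "mem_gb": 128, "load": 0.2},
    {"name": "hostC", "cpus": 8,  "mem_gb": 16,  "load": 0.1},
]

def select_and_rank(resources, min_cpus, min_mem_gb):
    # Hard constraints: drop resources that cannot run the job at all.
    eligible = [r for r in resources
                if r["cpus"] >= min_cpus and r["mem_gb"] >= min_mem_gb]
    # Preference: rank lightly loaded machines first.
    return sorted(eligible, key=lambda r: r["load"])

ranked = select_and_rank(resources, min_cpus=16, min_mem_gb=64)
print([r["name"] for r in ranked])  # ['hostB', 'hostA']
```

Extensibility in this style comes from swapping in different constraint predicates and preference keys per environment, without changing the brokering loop itself.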
Cloud Computing for the Grid: GridControl: A Software Platform to Support the Smart Grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
GENI Project: Cornell University is creating a new software platform for grid operators called GridControl that will utilize cloud computing to more efficiently control the grid. In a cloud computing system, there are minimal hardware and software demands on users. The user can tap into a network of computers that is housed elsewhere (the cloud), and the network runs computer applications for the user. The user only needs interface software to access all of the cloud’s data resources, which can be as simple as a web browser. Cloud computing can reduce costs, facilitate innovation through sharing, empower users, and improve the overall reliability of a dispersed system. Cornell’s GridControl will focus on four elements: delivering the state of the grid to users quickly and reliably; building networked, scalable grid-control software; tailoring services to emerging smart grid uses; and simulating smart grid behavior under various conditions.
Grid Technology as a Cyber Infrastructure for Earth Science Applications
NASA Technical Reports Server (NTRS)
Hinke, Thomas H.
2004-01-01
This paper describes how grids and grid service technologies can be used to develop an infrastructure for the Earth Science community. This cyberinfrastructure would be populated with a hierarchy of services, including discipline-specific services such as those needed by the Earth Science community, as well as a set of core services that are needed by most applications. This core would include data-oriented services used for accessing and moving data as well as computer-oriented services used to broker access to resources and control the execution of tasks on the grid. The availability of such an Earth Science cyberinfrastructure would ease the development of Earth Science applications. With such a cyberinfrastructure, application workflows could be created to extract data from one or more of the Earth Science archives and then process it by passing it through various persistent services that are part of the cyberinfrastructure, such as services to perform subsetting, reformatting, data mining and map projections.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Gail-Joon
The project seeks an innovative framework to enable users to access and selectively share resources in distributed environments, enhancing the scalability of information sharing. We have investigated secure sharing and assurance approaches for ad-hoc collaboration, focusing on Grid, Cloud, and ad-hoc network environments.
Using OSG Computing Resources with (iLC)Dirac
NASA Astrophysics Data System (ADS)
Sailer, A.; Petric, M.; CLICdp Collaboration
2017-10-01
CPU cycles for small experiments and projects can be scarce; thus, making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called 'SiteDirectors', which submit directly to the local batch system. This in turn requires additional dedicated effort for small experiments at the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows accessing them from within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure. To allow sending jobs to HTCondor-CE and legacy Globus computing elements from DIRAC, the required wrapper classes were developed. Not only is the usage of these types of computing elements now completely transparent for all DIRAC instances, which makes DIRAC a flexible solution for OSG-based virtual organisations, but it also allows LCG grid sites to move to the HTCondor-CE software without shutting DIRAC-based VOs out of their sites. In these proceedings we detail how we interfaced the DIRAC system to the HTCondor-CE and Globus computing elements, explain the obstacles encountered and the solutions developed, and describe how the linear collider community uses resources in the OSG.
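The wrapper-class approach can be illustrated in outline: a common computing-element interface with per-middleware submission backends, so callers never see which CE type serves them. The class and method names below are assumptions for illustration only, not DIRAC's actual API, and the backends are stubs rather than real condor/GRAM clients.

```python
from abc import ABC, abstractmethod

class ComputingElement(ABC):
    @abstractmethod
    def submit(self, job_script: str) -> str:
        """Submit a job and return a backend-specific job ID."""

class HTCondorCE(ComputingElement):
    def submit(self, job_script: str) -> str:
        # A real backend would invoke condor_submit against the CE endpoint.
        return f"htcondor://{abs(hash(job_script)) & 0xffff}"

class GlobusCE(ComputingElement):
    def submit(self, job_script: str) -> str:
        # A real backend would use a GRAM client against the Globus gatekeeper.
        return f"globus://{abs(hash(job_script)) & 0xffff}"

# Callers depend only on the interface, so supporting a new CE type
# means adding one subclass, with no change to submission logic.
schemes = [ce.submit("echo hello").split("://")[0]
           for ce in (HTCondorCE(), GlobusCE())]
print(schemes)  # ['htcondor', 'globus']
```

This is the sense in which the abstract says the CE types become "completely transparent" to DIRAC instances: the interface, not the middleware, is what the rest of the system is written against.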
The judgement of simultaneous commutation failure in HVDC about hierarchical connection to AC grid
NASA Astrophysics Data System (ADS)
Li, Ming; Song, Xinli; Huang, Daoshan; Liu, Wenzhuo; Zhao, Shutao; Ye, Xiaohui; Meng, Hang
2017-09-01
Hierarchical connection to the AC grid at the inverter side of UHVDC has been adopted in several projects. This paper introduces the structure of the hierarchical connection mode and compares it with the traditional one using the CIGRE HVDC benchmark case. A criterion for simultaneous commutation failure, based on identical valve currents, is then derived. To verify the accuracy of the criterion, PSD-BPA (Bonneville Power Administration) simulations of a specified voltage drop in the East China power grid were carried out, confirming the correctness of the formula.
NASA Astrophysics Data System (ADS)
Kirubi, Charles Gathu
Community micro-grids have played a central role in increasing access to off-grid rural electrification (RE) in many regions of the developing world, notably South Asia. However, the promise of community micro-grids in sub-Sahara Africa remains largely unexplored. My study explores the potential and limits of community micro-grids as options for increasing access to off-grid RE in sub-Sahara Africa. Contextualized in five community micro-grids in rural Kenya, my study is framed through theories of collective action and combines qualitative and quantitative methods, including household surveys, electronic data logging and regression analysis. The main contribution of my research is demonstrating the circumstances under which community micro-grids can contribute to rural development and the conditions under which individuals are likely to initiate and participate in such projects collectively. With regard to rural development, I demonstrate that access to electricity enables the use of electric equipment and tools by small and micro-enterprises, resulting in significant improvement in productivity per worker (100--200% depending on the task at hand) and a corresponding growth in income levels in the order of 20--70%, depending on the product made. Access to electricity simultaneously enables and improves delivery of social and business services from a wide range of village-level infrastructure (e.g. schools, markets, water pumps) while improving the productivity of agricultural activities. Moreover, when local electricity users have an ability to charge and enforce cost-reflective tariffs and electricity consumption is closely linked to productive uses that generate incomes, cost recovery is feasible. By their nature---a new technology delivering highly valued services by the elites and other members, limited local experience and expertise, high capital costs---community micro-grids are good candidates for elite-domination. 
Even so, elite control does not necessarily lead to elite capture. Experiences from different micro-grid settings illustrate the manner in which a coincidence of interest between the elites and the rest of members and access to external support can create incentives and mechanisms to enable community-wide access to scarce services, hence mitigating elite capture. Moreover, access to external support was found to increase the likelihood of participation for the relatively poor households. The policy-relevant message from this research is two-fold. In rural areas with suitable sites for micro-hydro power, the potential for community micro-grids appears considerable, to the extent that this option would seem to represent "the road not taken" as far as policies and initiatives aimed at expanding RE are concerned in Kenya and other African countries with comparable settings. However, local participatory initiatives not complemented by external technical assistance run a considerable risk of locking rural households into relatively more costly and poor-quality services. By taking advantage of existing local organizations and/or building a dense network of them, including micro-finance agencies, the government and development partners can make available to local communities the necessary support---financial, technical or regulatory---essential for efficient design of micro-grids, in addition to facilitating equitable distribution of electricity benefits.
UNOSAT: the First University Brazilian Nanosatellite
NASA Astrophysics Data System (ADS)
Stancato, F.; Oliveira, E. M. Manhas M., Jr.; Mendes, L. H.; Oliveira, G.
2002-01-01
In 2000, an educational undergraduate aerospace group called SPACE was created at Universidade Norte do Paraná (UNOPAR). During the 51st International Astronautical Congress in Brazil, the participating students came into contact with small satellite programs from several universities. Motivated by these contacts, they began a nanosatellite project feasibility study. Contacts were made at the beginning of 2001 to explore a launch opportunity as a secondary payload on the third qualification flight test of the VLS, the Brazilian launcher. A positive answer soon came, and the project began. A very simple nanosatellite design was chosen: the mission was to downlink five telemetry parameters and one voice message. The project work was divided among the thirteen undergraduate students: structure, radio link, solar panels, energy controller module, telemetry, and instrumentation. They were also responsible for the system tests, integration, and the final tests. An outreach activity was also carried out: a local broadcast radio company ran a competition among its listeners to select the message and the voice of the first Brazilian to "speak" from space. This was also an unusual way to obtain funding from a sponsor, who received free media exposure. The paper presents the program management strategy, what worked and what did not, the funding strategy, and the educational benefits. The launch is scheduled for the second semester of 2002.
An Experimental Framework for Executing Applications in Dynamic Grid Environments
NASA Technical Reports Server (NTRS)
Huedo, Eduardo; Montero, Ruben S.; Llorente, Ignacio M.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The Grid opens up opportunities for resource-starved scientists and engineers to harness highly distributed computing resources. A number of Grid middleware projects are currently available to support the simultaneous exploitation of heterogeneous resources distributed in different administrative domains. However, efficient job submission and management remain far from accessible to ordinary scientists and engineers due to the dynamic and complex nature of the Grid. This report describes a new Globus framework that allows an easier and more efficient execution of jobs in a 'submit and forget' fashion. Adaptation to dynamic Grid conditions is achieved by supporting automatic application migration following performance degradation, 'better' resource discovery, requirement change, owner decision or remote resource failure. The report also includes experimental results of the behavior of our framework on the TRGP testbed.
Review of the development of multi-terminal HVDC and DC power grid
NASA Astrophysics Data System (ADS)
Chen, Y. X.
2017-11-01
Traditional power equipment, power-grid structures, and operation technology are increasingly unable to cope with large-scale renewable energy integration into the grid. Thus, we must adopt new technologies, new equipment, and new grid structures to satisfy future requirements in energy patterns. Accordingly, the multiterminal direct current (MTDC) transmission system is receiving increasing attention. This paper starts with a brief description of current developments in MTDC worldwide. MTDC practice is introduced through the Italy-Corsica-Sardinia three-terminal high-voltage DC (HVDC) project, which has been placed into practical operation. We then describe the basic characteristics and regulation of multiterminal DC transmission, and several mainstream control methods are described. In the third chapter, the key hardware and software technologies that restrict the development of MTDC systems are discussed. This chapter focuses on the comparison of double-ended HVDC and multiterminal HVDC in most aspects, and subsequently elaborates the key and difficult points of MTDC development. Finally, this paper summarizes the prospects of a DC power grid. Within a few decades, China could build a strong cross-strait AC-DC hybrid power grid.
Statistical emulators of maize, rice, soybean and wheat yields from global gridded crop models
Blanc, Élodie
2017-01-26
This study provides statistical emulators of crop yields based on global gridded crop model simulations from the Inter-Sectoral Impact Model Intercomparison Project Fast Track. The ensemble of simulations is used to build a panel of annual crop yields from five crop models and corresponding monthly summer weather variables for over a century at the grid cell level globally. This dataset is then used to estimate, for each crop and gridded crop model, the statistical relationship between yields, temperature, precipitation and carbon dioxide. This study considers a new functional form to better capture the non-linear response of yields to weather, especially for extreme temperature and precipitation events, and now accounts for the effect of soil type. In- and out-of-sample validations show that the statistical emulators are able to replicate the spatial patterns of crop yield levels and their changes over time projected by the crop models reasonably well, although the accuracy of the emulators varies by model and by region. This study therefore provides a reliable and accessible alternative to global gridded crop yield models. By emulating crop yields for several models using parsimonious equations, the tools provide a computationally efficient method to account for uncertainty in climate change impact assessments.
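As a rough illustration of what such a statistical emulator looks like, the sketch below fits a quadratic response surface in temperature, precipitation and CO2 to synthetic "crop model" output; the functional form, coefficients and variable ranges are invented for illustration and are not the study's actual specification.

```python
import numpy as np

# Illustrative sketch: emulate a crop model with a quadratic response
# surface in temperature (T), precipitation (P) and CO2 (C).
# All numbers and the functional form are assumptions for illustration.
rng = np.random.default_rng(0)
n = 500
T = rng.uniform(10, 35, n)       # growing-season mean temperature, deg C
P = rng.uniform(100, 900, n)     # growing-season precipitation, mm
C = rng.uniform(350, 550, n)     # CO2 concentration, ppm

# Synthetic "crop model" yield: non-linear in T and P, linear in C.
yield_true = 8.0 - 0.01 * (T - 24) ** 2 + 0.004 * P - 2e-6 * P**2 + 0.005 * C

# Parsimonious emulator: ordinary least squares on a polynomial basis.
X = np.column_stack([np.ones(n), T, T**2, P, P**2, C])
beta, *_ = np.linalg.lstsq(X, yield_true, rcond=None)

# Out-of-sample evaluation at one new point (T=25, P=500, C=400).
x_new = np.array([1.0, 25.0, 625.0, 500.0, 250000.0, 400.0])
pred = x_new @ beta
```

Because the synthetic response lies exactly in the span of the basis, the fit recovers it; with real crop-model output the residual error is what the in- and out-of-sample validations quantify.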
3MdB: the Mexican Million Models database
NASA Astrophysics Data System (ADS)
Morisset, C.; Delgado-Inglada, G.
2014-10-01
The 3MdB is an original effort to construct a large multipurpose database of photoionization models. It is a more modern version of a previous attempt based on Cloudy3D and IDL tools, and it is accessed through MySQL requests. The models are computed with the well-known and widely used Cloudy photoionization code (Ferland et al. 2013). The database is designed to host grids of models, with distinct references identifying each project and facilitating the extraction of the desired data. We present here a description of the way the database is managed and some of the projects that use 3MdB. Anybody can ask for a grid to be run and stored in 3MdB, increasing the visibility of the grid and its potential side applications.
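As an illustration of the kind of MySQL request involved, the sketch below runs a typical grid-extraction query against a throwaway in-memory SQLite database standing in for the 3MdB server; the table name, column names and values are hypothetical and do not reflect the real 3MdB schema.

```python
import sqlite3

# Hypothetical stand-in for the 3MdB server: an in-memory SQLite database
# with an invented schema (project reference, ionization parameter, one
# emission-line intensity). The real schema must be taken from the project.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab (ref TEXT, logU REAL, line_5007 REAL)")
con.executemany(
    "INSERT INTO tab VALUES (?, ?, ?)",
    [("PNe_2014", -2.0, 11.2), ("PNe_2014", -3.0, 8.7), ("HII_CHIm", -2.5, 9.9)],
)

# Typical request: pull one project's grid, filtered on an input parameter.
rows = con.execute(
    "SELECT logU, line_5007 FROM tab WHERE ref = ? AND logU >= ? ORDER BY logU",
    ("PNe_2014", -2.5),
).fetchall()
```

The `ref` column plays the role described in the abstract: a per-project tag that lets users extract only the grid they are interested in.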
Multi-objective Optimization of Solar-driven Hollow-fiber Membrane Distillation Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nenoff, Tina M.; Moore, Sarah E.; Mirchandani, Sera
Securing additional water sources remains a primary concern for arid regions in both the developed and developing world. Climate change is causing fluctuations in the frequency and duration of precipitation, which can be seen as prolonged droughts in some arid areas. Droughts decrease the reliability of surface water supplies, which forces communities to find alternate primary water sources. In many cases, ground water can supplement the use of surface supplies during periods of drought, reducing the need for above-ground storage without sacrificing reliability objectives. Unfortunately, accessible ground waters are often brackish, requiring desalination prior to use, and underdeveloped infrastructure and inconsistent electrical grid access can create obstacles to groundwater desalination in developing regions. The objectives of the proposed project are to (i) mathematically simulate the operation of hollow fiber membrane distillation systems and (ii) optimize system design for off-grid treatment of brackish water. It is anticipated that methods developed here can be used to supply potable water at many off-grid locations in semi-arid regions, including parts of the Navajo Reservation. This research is a collaborative project between Sandia and the University of Arizona.
High-Performance Secure Database Access Technologies for HEP Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Vranicar; John Weicher
2006-04-17
The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research, where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing.
We believe that an innovative database architecture in which secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems' security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for tight integration of the secure authorization layer with the database server engine, resulting in both improved performance and improved security. Phase I focused on the development of a proof-of-concept prototype using Argonne National Laboratory's (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project's current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution for secure database access.
OpenClimateGIS - A Web Service Providing Climate Model Data in Commonly Used Geospatial Formats
NASA Astrophysics Data System (ADS)
Erickson, T. A.; Koziol, B. W.; Rood, R. B.
2011-12-01
The goal of the OpenClimateGIS project is to make climate model datasets readily available in the commonly used, modern geospatial formats supported by GIS software, browser-based mapping tools, and virtual globes. The climate modeling community typically stores climate data in multidimensional gridded formats capable of efficiently storing large volumes of data (such as netCDF and GRIB), while the geospatial community typically uses flexible vector and raster formats that are suited to storing small volumes of data (relative to the multidimensional gridded formats). OpenClimateGIS addresses this difference in data formats by clipping climate data to user-specified vector geometries (i.e. areas of interest) and translating the gridded data on-the-fly into multiple vector formats. The OpenClimateGIS system does not store climate data archives locally, but rather works in conjunction with external climate archives that expose climate data via the OPeNDAP protocol. OpenClimateGIS provides a RESTful web service API for accessing climate data resources via HTTP, allowing a wide range of applications to access the climate data. The OpenClimateGIS system has been developed using open source development practices, and the source code is publicly available. The project integrates libraries from several other open source projects (including Django, PostGIS, numpy, Shapely, and netcdf4-python). OpenClimateGIS development is supported by a grant from NOAA's Climate Program Office.
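The clipping step can be illustrated with a minimal numpy sketch that selects the grid cells falling inside an area of interest; a simple latitude/longitude box stands in for the arbitrary vector geometries (handled via Shapely) that the real service accepts, and all coordinates and values below are invented.

```python
import numpy as np

# Minimal conceptual sketch of the clip step: keep only the grid cells
# whose centres fall inside a user-specified area of interest. A lat/lon
# box stands in for an arbitrary vector geometry.
lats = np.arange(30.0, 50.0, 1.0)      # cell-centre latitudes (20 cells)
lons = np.arange(-110.0, -90.0, 1.0)   # cell-centre longitudes (20 cells)
data = np.arange(lats.size * lons.size, dtype=float).reshape(lats.size, lons.size)

# Area of interest: 35-40 N, 105-100 W.
lat_mask = (lats >= 35.0) & (lats <= 40.0)
lon_mask = (lons >= -105.0) & (lons <= -100.0)
clipped = data[np.ix_(lat_mask, lon_mask)]   # 6 x 6 sub-grid
```

The clipped sub-grid would then be translated into the requested vector format (e.g. one polygon feature per cell), which is where the on-the-fly format translation described above comes in.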
NASA Astrophysics Data System (ADS)
Arndt, Jan Erik; Schenke, Hans Werner; Jakobsson, Martin; Nitsche, Frank O.; Buys, Gwen; Goleby, Bruce; Rebesco, Michele; Bohoyo, Fernando; Hong, Jongkuk; Black, Jenny; Greku, Rudolf; Udintsev, Gleb; Barrios, Felipe; Reynoso-Peralta, Walter; Taisei, Morishita; Wigley, Rochelle
2013-06-01
International Bathymetric Chart of the Southern Ocean (IBCSO) Version 1.0 is a new digital bathymetric model (DBM) portraying the seafloor of the circum-Antarctic waters south of 60°S. IBCSO is a regional mapping project of the General Bathymetric Chart of the Oceans (GEBCO). The IBCSO Version 1.0 DBM has been compiled from all available bathymetric data collectively gathered by more than 30 institutions from 15 countries. These data include multibeam and single-beam echo soundings, digitized depths from nautical charts, regional bathymetric gridded compilations, and predicted bathymetry. Specific gridding techniques were applied to compile the DBM from the bathymetric data of different origin, spatial distribution, resolution, and quality. The IBCSO Version 1.0 DBM has a resolution of 500 × 500 m, based on a polar stereographic projection, and is publicly available together with a digital chart for printing from the project website (www.ibcso.org) and at
NREL Partnership Develops Off-Grid Energy Access through Quality Assurance
NREL has teamed with the Global Lighting
The Particle Physics Data Grid. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livny, Miron
2002-08-16
The main objective of the Particle Physics Data Grid (PPDG) project has been to implement and evaluate distributed (Grid-enabled) data access and management technology for current and future particle and nuclear physics experiments. The specific goals of PPDG have been to design, implement, and deploy a Grid-based software infrastructure capable of supporting the data generation, processing and analysis needs common to the physics experiments represented by the participants, and to adapt experiment-specific software to operate in the Grid environment and to exploit this infrastructure. To accomplish these goals, the PPDG focused on the implementation and deployment of several critical services: a reliable and efficient file replication service, high-speed data transfer services, a multisite file caching and staging service, and reliable and recoverable job management services. The focus of the activity was the job management services and the interplay between these services and distributed data access in a Grid environment. Software was developed to study the interaction between HENP applications and the distributed data storage fabric. One key conclusion was the need for a reliable and recoverable tool for managing large collections of interdependent jobs. An attached document provides an overview of the current status of the Directed Acyclic Graph Manager (DAGMan) with its main features and capabilities.
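A DAGMan input file for such a collection of interdependent jobs looks roughly like the hypothetical sketch below (the submit-description filenames are invented); DAGMan runs each node's job only after its parents succeed, and RETRY makes a node recoverable after failure:

```
# Hypothetical DAG: B and C run after A completes; D runs after both.
JOB A generate.sub
JOB B process_b.sub
JOB C process_c.sub
JOB D analyze.sub
PARENT A CHILD B C
PARENT B C CHILD D
RETRY A 3
```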
DOE Office of Scientific and Technical Information (OSTI.GOV)
Box, D.; Boyd, J.; Di Benedetto, V.
2016-01-01
The FabrIc for Frontier Experiments (FIFE) project is an initiative within the Fermilab Scientific Computing Division designed to steer the computing model for non-LHC Fermilab experiments across multiple physics areas. FIFE is a collaborative effort between experimenters and computing professionals to design and develop integrated computing models for experiments of varying size, needs, and infrastructure. The major focus of the FIFE project is the development, deployment, and integration of solutions for high throughput computing, data management, database access and collaboration management within an experiment. To accomplish this goal, FIFE has developed workflows that utilize Open Science Grid compute sites along with dedicated and commercial cloud resources. The FIFE project has made significant progress integrating into experiment computing operations several services, including a common job submission service, software and reference data distribution through CVMFS repositories, flexible and robust data transfer clients, and access to opportunistic resources on the Open Science Grid. The progress with current experiments and plans for expansion with additional projects will be discussed. FIFE has taken the leading role in defining the computing model for Fermilab experiments, aided in the design of experiments beyond those hosted at Fermilab, and will continue to define the future direction of high throughput computing for future physics experiments worldwide.
HappyFace as a generic monitoring tool for HEP experiments
NASA Astrophysics Data System (ADS)
Kawamura, Gen; Magradze, Erekle; Musheghyan, Haykuhi; Quadt, Arnulf; Rzehorz, Gerhard
2015-12-01
The importance of monitoring on HEP grid computing systems is growing due to a significant increase in their complexity. Computer scientists and administrators have been studying and building effective ways to gather information on, and clarify the status of, each local grid infrastructure. The HappyFace project aims at making the above-mentioned workflow possible. It aggregates, processes and stores the information and the status of different HEP monitoring resources in the common HappyFace database, and displays the information and the status through a single interface. However, this model of HappyFace relied on monitoring resources that are always under development in the HEP experiments. Consequently, HappyFace needed direct access methods to the grid application and grid service layers in the different HEP grid systems. To cope with this issue, we use a reliable HEP software repository, the CernVM File System. We propose a new implementation and architecture of HappyFace, the so-called grid-enabled HappyFace. It allows its basic framework to connect directly to the grid user applications and the grid collective services, without involving the monitoring resources in the HEP grid systems. This approach gives HappyFace several advantages: portability, providing an independent and generic monitoring system among the HEP grid systems; functionality, allowing users to run various diagnostic tools in the individual HEP grid systems and grid sites; and flexibility, making HappyFace beneficial and open for various distributed grid computing environments. Different grid-enabled modules, to connect to the Ganga job monitoring system and to check the performance of grid transfers among the grid sites, have been implemented.
The new HappyFace system has been successfully integrated, and it now displays the information and the status of both the monitoring resources and the direct access to the grid user applications and the grid collective services.
Globally Gridded Satellite (GridSat) Observations for Climate Studies
NASA Technical Reports Server (NTRS)
Knapp, Kenneth R.; Ansari, Steve; Bain, Caroline L.; Bourassa, Mark A.; Dickinson, Michael J.; Funk, Chris; Helms, Chip N.; Hennon, Christopher C.; Holmes, Christopher D.; Huffman, George J.;
2012-01-01
Geostationary satellites have provided routine, high temporal resolution Earth observations since the 1970s. Despite the long period of record, use of these data in climate studies has been limited for numerous reasons, among them: there is no central archive of geostationary data for all international satellites, full temporal and spatial resolution data are voluminous, and diverse calibration and navigation formats encumber the uniform processing needed for multi-satellite climate studies. The International Satellite Cloud Climatology Project set the stage for overcoming these issues by archiving a subset of the full resolution geostationary data at approximately 10 km resolution at 3-hourly intervals since 1983. Recent efforts at NOAA's National Climatic Data Center to provide convenient access to these data include remapping the data to a standard map projection, recalibrating the data to optimize temporal homogeneity, extending the record of observations back to 1980, and reformatting the data for broad public distribution. The Gridded Satellite (GridSat) dataset includes observations from the visible, infrared window, and infrared water vapor channels. Data are stored in the netCDF format using standards that permit a wide variety of tools and libraries to quickly and easily process the data. A novel data layering approach, together with appropriate satellite and file metadata, allows users to access GridSat data at varying levels of complexity based on their needs. The result is a climate data record already in use by the meteorological community. Examples include reanalysis of tropical cyclones, studies of global precipitation, and detection and tracking of the intertropical convergence zone.
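The remapping idea can be sketched as assigning satellite pixel coordinates to cells of a regular equal-angle grid; the 0.07-degree spacing below is an assumed nominal value, and the pixel coordinates are synthetic.

```python
import numpy as np

# Sketch of the remap step: bin satellite pixels onto a regular global
# equal-angle grid starting at (-90, -180). The 0.07-degree spacing is an
# assumption for illustration; the pixel coordinates are synthetic.
res = 0.07
pixel_lat = np.array([10.003, 10.071, -45.12])
pixel_lon = np.array([120.01, 120.08, 30.55])

# Row/column indices of the grid cell each pixel falls into.
row = np.floor((pixel_lat + 90.0) / res).astype(int)
col = np.floor((pixel_lon + 180.0) / res).astype(int)
```

Pixels sharing a (row, col) pair land in the same grid cell, where they would be averaged (or otherwise composited) to produce one gridded value per cell per time step.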
An infrastructure for the integration of geoscience instruments and sensors on the Grid
NASA Astrophysics Data System (ADS)
Pugliese, R.; Prica, M.; Kourousias, G.; Del Linz, A.; Curri, A.
2009-04-01
The Grid, as a computing paradigm, has long held the attention of both academia and industry [1]. The distributed and expandable nature of its general architecture results in scalability and more efficient utilisation of computing infrastructures. The scientific community, including that of the geosciences, often handles problems with very high requirements in data processing, transfer, and storage [2,3]. This has raised interest in Grid technologies, but these are often viewed solely as an access gateway to HPC. Suitable Grid infrastructures could provide the geoscience community with additional benefits such as sharing, remote access and control of scientific systems. These systems can be scientific instruments, sensors, robots, cameras and any other device used in the geosciences. A practical, general, and feasible solution for Grid-enabling such devices requires non-intrusive extensions to core parts of the current Grid architecture. We propose an extended version of an architecture [4] that can serve as the solution to the problem: the Grid Instrument Element (IE) [5]. It is an addition to the existing core Grid parts, the Computing Element (CE) and the Storage Element (SE), which serve the purposes their names suggest. The IE that we will be referring to, and the related technologies, have been developed in the EU project on the Deployment of Remote Instrumentation Infrastructure (DORII1). In DORII, partners from various scientific communities, including those of earthquake, environmental, and experimental science, have adopted the technology of the Instrument Element in order to integrate their devices into the Grid. The Oceanographic and coastal observation and modelling Mediterranean Ocean Observing Network (OGS2), a DORII partner, is in the process of deploying the above-mentioned Grid technologies on two types of observational modules: Argo profiling floats and a novel Autonomous Underwater Vehicle (AUV).
In this paper i) we define the need for integration of instrumentation in the Grid, ii) we introduce the solution of the Instrument Element, iii) we demonstrate a suitable end-user web portal for accessing Grid resources, iv) we describe from the Grid-technological point of view the process of the integration to the Grid of two advanced environmental monitoring devices. References [1] M. Surridge, S. Taylor, D. De Roure, and E. Zaluska, "Experiences with GRIA—Industrial Applications on a Web Services Grid," e-Science and Grid Computing, First International Conference on e-Science and Grid Computing, 2005, pp. 98-105. [2] A. Chervenak, I. Foster, C. Kesselman, C. Salisbury, and S. Tuecke, "The data grid: Towards an architecture for the distributed management and analysis of large scientific datasets," Journal of Network and Computer Applications, vol. 23, 2000, pp. 187-200. [3] B. Allcock, J. Bester, J. Bresnahan, A.L. Chervenak, I. Foster, C. Kesselman, S. Meder, V. Nefedova, D. Quesnel, and S. Tuecke, "Data management and transfer in high-performance computational grid environments," Parallel Computing, vol. 28, 2002, pp. 749-771. [4] E. Frizziero, M. Gulmini, F. Lelli, G. Maron, A. Oh, S. Orlando, A. Petrucci, S. Squizzato, and S. Traldi, "Instrument Element: A New Grid component that Enables the Control of Remote Instrumentation," Proceedings of the Sixth IEEE International Symposium on Cluster Computing and the Grid (CCGRID'06)-Volume 00, IEEE Computer Society Washington, DC, USA, 2006. [5] R. Ranon, L. De Marco, A. Senerchia, S. Gabrielli, L. Chittaro, R. Pugliese, L. Del Cano, F. Asnicar, and M. Prica, "A Web-based Tool for Collaborative Access to Scientific Instruments in Cyberinfrastructures." 1 The DORII project is supported by the European Commission within the 7th Framework Programme (FP7/2007-2013) under grant agreement no. RI-213110. URL: http://www.dorii.eu 2 Istituto Nazionale di Oceanografia e di Geofisica Sperimentale. URL: http://www.ogs.trieste.it
The StratusLab cloud distribution: Use-cases and support for scientific applications
NASA Astrophysics Data System (ADS)
Floros, E.
2012-04-01
The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, the Claudia service manager and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely computing (life-cycle management of virtual machines), storage, appliance management and networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites; a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities like elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. In this direction, we have developed and currently provide support for setting up general-purpose computing solutions like Hadoop, MPI and Torque clusters. Regarding scientific applications, the project is collaborating closely with the bioinformatics community to prepare VM appliances and deploy optimized services for bioinformatics applications.
In a similar manner, additional scientific disciplines such as Earth Science can take advantage of StratusLab cloud solutions. Interested users are welcome to join StratusLab's user community by getting access to the reference cloud services deployed by the project and offered to the public.
Manual actuator [for spacecraft exercising machines]
NASA Technical Reports Server (NTRS)
Gause, R. L.; Glenn, C. G. (Inventor)
1974-01-01
An actuator for an exercising machine employable by a crewman aboard a manned spacecraft is presented. The actuator is characterized by a force delivery arm projecting from a rotary input shaft of an exercising machine and having a force input handle extended orthogonally from its distal end. The handle includes a hand-grip configured to be received within the palm of the crewman's hand; a grid pivotally supported for angular displacement between a first position, wherein the grid is disposed in overlying juxtaposition with the hand-grip, and a second position, angularly displaced from the first, affording access to the hand-grip; and a latching mechanism fixed to the sole of a shoe worn by the crewman for latching the shoe to the grid when the grid is in the first position.
OpenZika: An IBM World Community Grid Project to Accelerate Zika Virus Drug Discovery
Perryman, Alexander L.; Horta Andrade, Carolina
2016-01-01
The Zika virus outbreak in the Americas has caused global concern. To help accelerate this fight against Zika, we launched the OpenZika project. OpenZika is an IBM World Community Grid Project that uses distributed computing on millions of computers and Android devices to run docking experiments, in order to dock tens of millions of drug-like compounds against crystal structures and homology models of Zika proteins (and other related flavivirus targets). This will enable the identification of new candidates that can then be tested in vitro, to advance the discovery and development of new antiviral drugs against the Zika virus. The docking data is being made openly accessible so that all members of the global research community can use it to further advance drug discovery studies against Zika and other related flaviviruses. PMID:27764115
Smart-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Venkat K; Palmintier, Bryan S; Hodge, Brian S
The National Renewable Energy Laboratory (NREL), in collaboration with the Massachusetts Institute of Technology (MIT), Universidad Pontificia Comillas (Comillas-IIT, Spain) and GE Grid Solutions, is working on an ARPA-E GRID DATA project, titled Smart-DS, to create: 1) high-quality, realistic, synthetic distribution network models, and 2) advanced tools for automated scenario generation based on high-resolution weather data and generation growth projections. Through these advancements, the Smart-DS project is envisioned to accelerate the development, testing, and adoption of advanced algorithms, approaches, and technologies for sustainable and resilient electric power systems, especially in the realm of U.S. distribution systems. This talk will present the goals and overall approach of the Smart-DS project, including the process of creating the synthetic distribution datasets using the reference network model (RNM) and the comprehensive validation process to ensure network realism, feasibility, and applicability to advanced use cases. The talk will provide demonstrations of early versions of the synthetic models, along with the lessons learnt from expert engagements to enhance future iterations. Finally, the scenario generation framework, its development plans, and coordination with GRID DATA repository teams to house these datasets for public access will also be discussed.
Sino/American cooperation for rural electrification in China
NASA Astrophysics Data System (ADS)
Wallace, William L.; Tsuo, Y. Simon
1997-02-01
Rapid growth in economic development, coupled with the absence of an electric grid in large areas of the rural countryside, has created a need for new energy sources in both urban centers and rural areas of China. The need for new sources of energy for rural electrification in China is very large: 120 million people in remote regions do not have access to an electric grid, and over 300 coastal islands in China are unelectrified. Even in heavily populated regions of China where there is an electric grid, there are severe shortages of electric power and limited grid access for village populations. To meet energy demands in rural China, renewable energy in the form of solar, wind, and biomass resources is being utilized as a cost-effective alternative to grid extension and to the use of diesel and gasoline generators. An Energy Efficiency and Renewable Energy Protocol Agreement was signed by the U.S. Department of Energy with the Chinese State Science and Technology Commission in Beijing in February 1995. Under this agreement, projects using photovoltaics for rural electrification are being conducted in Gansu Province in western China and in Inner Mongolia in northern China, providing the basis for much wider deployment and use of photovoltaics to meet China's growing rural energy demands.
A Security Architecture for Grid-enabling OGC Web Services
NASA Astrophysics Data System (ADS)
Angelini, Valerio; Petronzio, Luca
2010-05-01
In the proposed presentation we describe an architectural solution for enabling secure access to Grids, and possibly other large-scale on-demand processing infrastructures, through OGC (Open Geospatial Consortium) Web Services (OWS). This work has been carried out in the context of the security thread of the G-OWS Working Group. G-OWS (gLite enablement of OGC Web Services) is an international open initiative started in 2008 by the European CYCLOPS, GENESI-DR, and DORII Project Consortia in order to collect and coordinate experiences in the enablement of OWS's on top of the gLite Grid middleware. G-OWS investigates the development of Spatial Data and Information Infrastructures (SDI and SII) based on Grid/Cloud capacity in order to enable Earth Science applications and tools. Concerning security, the integration of OWS-compliant infrastructures and gLite Grids needs to address relevant challenges arising from their respective design principles: OWS's are part of a Web-based architecture that delegates security aspects to other specifications, whereas the gLite middleware implements the Grid paradigm with a strong security model (the gLite Grid Security Infrastructure: GSI). In our work we propose a Security Architectural Framework allowing the seamless use of Grid-enabled OGC Web Services through the federation of existing, mostly Web-based, security systems with the gLite GSI. This is made possible by mediating between different security realms, whose mutual trust is established in advance during the deployment of the system itself. Our architecture is composed of three security tiers: the user's security system, a specific G-OWS security system, and the gLite Grid Security Infrastructure.
Applying the separation-of-concerns principle, each of these tiers is responsible for controlling access to a well-defined resource set: respectively, the user's organization resources, the geospatial resources and services, and the Grid resources. While the gLite middleware is tied to a consolidated security approach based on X.509 certificates, our system is able to support different kinds of user security infrastructures. Our central component, the G-OWS Security Framework, is based on the OASIS WS-Trust specifications and on the OGC GeoRM architectural framework. This makes it possible to satisfy advanced requirements such as the enforcement of specific geospatial policies and complex secure chained Web service requests. The typical use case is a scientist belonging to a given organization who issues a request to a G-OWS Grid-enabled Web Service. The system initially asks the user to authenticate to his/her organization's security system and, after verification of the user's security credentials, translates the user's digital identity into a G-OWS identity. This identity is linked to a set of attributes describing the user's access rights to the G-OWS services and resources. Inside the G-OWS security system, access restrictions are applied making use of the enhanced geospatial capabilities specified by OGC GeoXACML. If the required action needs to make use of the Grid environment, the system checks whether the user is entitled to access a Grid infrastructure. In that case his/her identity is translated into a temporary Grid security token using the Short Lived Credential Services (IGTF standard). In our case, for the specific gLite Grid infrastructure, some information (VOMS attributes) is plugged into the Grid security token to grant access to the user's Virtual Organization Grid resources. The resulting token is used to submit the request to the Grid and also by the various gLite middleware elements to verify the user's grants.
Based on the presented framework, the G-OWS Security Working Group developed a prototype enabling the execution of OGC Web Services on the EGEE Production Grid through federation with a Shibboleth-based security infrastructure. Future plans aim to integrate other Web authentication services such as OpenID, Kerberos and WS-Federation.
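The three-tier identity translation described above can be sketched in miniature. This is a hypothetical illustration, not G-OWS code: all class and function names are invented, and the real system federates Shibboleth, WS-Trust and VOMS credentials rather than Python dictionaries.

```python
# Hypothetical sketch of a three-tier identity translation flow like the one
# described above. Names are illustrative, not part of any G-OWS API.
from dataclasses import dataclass

@dataclass
class HomeIdentity:          # tier 1: the user's organization (e.g. Shibboleth)
    user: str
    organization: str

@dataclass
class GOWSIdentity:          # tier 2: identity with geospatial access attributes
    user: str
    attributes: dict

@dataclass
class GridToken:             # tier 3: short-lived Grid credential (SLCS-style)
    subject: str
    vo_attributes: list

def authenticate(user: str, org: str, directory: dict) -> HomeIdentity:
    """Verify the user against the home organization's security system."""
    if user not in directory.get(org, ()):
        raise PermissionError(f"{user} unknown to {org}")
    return HomeIdentity(user, org)

def to_gows_identity(home: HomeIdentity, rights: dict) -> GOWSIdentity:
    """Translate the home identity into an identity carrying access rights."""
    return GOWSIdentity(home.user, rights.get(home.user, {}))

def to_grid_token(ident: GOWSIdentity, vo: str) -> GridToken:
    """Issue a temporary Grid token, plugging in VO attributes (VOMS-style)."""
    if not ident.attributes.get("grid", False):
        raise PermissionError("user not entitled to access the Grid")
    return GridToken(subject=ident.user, vo_attributes=[f"/{vo}/Role=user"])

# Walk one request through the three tiers.
directory = {"uni-example": {"alice"}}
rights = {"alice": {"wms": True, "grid": True}}
home = authenticate("alice", "uni-example", directory)
token = to_grid_token(to_gows_identity(home, rights), vo="earth-science")
print(token.subject, token.vo_attributes)
```

The point of the sketch is the chain of translations: each tier only trusts the credential issued by the tier before it, mirroring the pre-established mutual trust between realms.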
NASA Technical Reports Server (NTRS)
Jacob, Joseph; Katz, Daniel; Prince, Thomas; Berriman, Graham; Good, John; Laity, Anastasia
2006-01-01
The final version (3.0) of the Montage software has been released. To recapitulate from previous NASA Tech Briefs articles about Montage: this software generates custom, science-grade mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. This software can be executed on single-processor computers, multi-processor computers, and such networks of geographically dispersed computers as the National Science Foundation's TeraGrid or NASA's Information Power Grid. The primary advantage of running Montage in a grid environment is that computations can be done on a remote supercomputer for efficiency. Multiple computers at different sites can be used for different parts of a computation, a significant advantage in cases of computations for large mosaics that demand more processor time than is available at any one site. Version 3.0 incorporates several improvements over prior versions. The most significant improvement is that this version is accessible to scientists located anywhere, through operational Web services that provide access to data from several large astronomical surveys and construct mosaics on either local workstations or remote computational grids as needed.
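The co-addition of overlapping tiles into one mosaic, the essence of any mosaicking workflow, can be illustrated with a toy weighted-mean sketch. This is not the Montage algorithm itself, which also handles reprojection and background matching; the function name and tile layout are invented.

```python
# Toy mosaic co-addition: overlapping image tiles already on a common pixel
# grid are combined by an unweighted mean in their overlap regions.
def coadd(tiles, shape):
    """tiles: list of (row0, col0, 2-D list of pixel values) placed on an
    output grid of the given shape; returns the averaged mosaic."""
    total = [[0.0] * shape[1] for _ in range(shape[0])]
    count = [[0] * shape[1] for _ in range(shape[0])]
    for r0, c0, data in tiles:
        for i, row in enumerate(data):
            for j, v in enumerate(row):
                total[r0 + i][c0 + j] += v
                count[r0 + i][c0 + j] += 1
    return [[t / c if c else 0.0 for t, c in zip(trow, crow)]
            for trow, crow in zip(total, count)]

# Two 2x2 tiles overlapping in the middle column of a 2x3 mosaic.
mosaic = coadd([(0, 0, [[1, 1], [1, 1]]),
                (0, 1, [[3, 3], [3, 3]])], shape=(2, 3))
print(mosaic)  # middle column averages to 2.0
```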
Grid Computing and Collaboration Technology in Support of Fusion Energy Sciences
NASA Astrophysics Data System (ADS)
Schissel, D. P.
2004-11-01
The SciDAC Initiative is creating a computational grid designed to advance scientific understanding in fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling, and allowing more efficient use of experimental facilities. The philosophy is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as easy-to-use, network-available services. Access to services is stressed rather than portability. Services share the same basic security infrastructure, so that stakeholders can control their own resources and fair use of those resources is ensured. The collaborative control room is being developed using the open-source Access Grid software, which enables secure group-to-group collaboration with capabilities beyond teleconferencing, including application sharing and control. The ability to effectively integrate off-site scientists into a dynamic control room will be critical to the success of future international projects like ITER. Grid computing, the secure integration of computer systems over high-speed networks to provide on-demand access to data analysis capabilities and related functions, is being deployed as an alternative to traditional resource sharing among institutions. The first grid computational service deployed was the transport code TRANSP, including tools for run preparation, submission, monitoring and management. This approach saves user sites the laborious effort of maintaining a complex code while at the same time reducing the burden on developers by avoiding the support of a large number of heterogeneous installations. This tutorial will present the philosophy behind an advanced collaborative environment, give specific examples, and discuss its usage beyond FES.
High resolution global gridded data for use in population studies
NASA Astrophysics Data System (ADS)
Lloyd, Christopher T.; Sorichetta, Alessandro; Tatem, Andrew J.
2017-01-01
Recent years have seen substantial growth in openly available satellite and other geospatial data layers, which represent a range of metrics relevant to global human population mapping at fine spatial scales. The specifications of such data differ widely and therefore the harmonisation of data layers is a prerequisite to constructing detailed and contemporary spatial datasets which accurately describe population distributions. Such datasets are vital to measure impacts of population growth, monitor change, and plan interventions. To this end the WorldPop Project has produced an open access archive of 3 and 30 arc-second resolution gridded data. Four tiled raster datasets form the basis of the archive: (i) Viewfinder Panoramas topography clipped to Global ADMinistrative area (GADM) coastlines; (ii) a matching ISO 3166 country identification grid; (iii) country area; (iv) and slope layer. Further layers include transport networks, landcover, nightlights, precipitation, travel time to major cities, and waterways. Datasets and production methodology are here described. The archive can be downloaded both from the WorldPop Dataverse Repository and the WorldPop Project website.
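As a concrete illustration of working with such gridded layers, a single point can be located on a global 30 arc-second grid as follows. This is a sketch assuming a north-west grid origin at 90N, 180W with row-major north-to-south storage; the actual WorldPop file layout may differ.

```python
# Map a latitude/longitude to (row, col) on a global grid of given
# arc-second resolution, assuming the origin is the north-west corner
# (90 N, 180 W) and rows run from north to south.
def latlon_to_index(lat, lon, res_arcsec=30):
    row = int((90.0 - lat) * 3600 / res_arcsec)
    col = int((lon + 180.0) * 3600 / res_arcsec)
    return row, col

# A 30 arc-second global grid has 21600 rows x 43200 columns;
# the equator/prime-meridian point lands at its centre.
print(latlon_to_index(0.0, 0.0))  # (10800, 21600)
```

A "single point overlay" in an archive like this then reduces to reading the raster value at the returned (row, col).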
Use of Emerging Grid Computing Technologies for the Analysis of LIGO Data
NASA Astrophysics Data System (ADS)
Koranda, Scott
2004-03-01
The LIGO Scientific Collaboration (LSC) today faces the challenge of enabling analysis of terabytes of LIGO data by hundreds of scientists from institutions all around the world. To meet this challenge the LSC is developing tools, infrastructure, applications, and expertise leveraging Grid Computing technologies available today, and making available to LSC scientists compute resources at sites across the United States and Europe. We use digital credentials for strong and secure authentication and authorization to compute resources and data. Building on top of products from the Globus project for high-speed data transfer and information discovery we have created the Lightweight Data Replicator (LDR) to securely and robustly replicate data to resource sites. We have deployed at our computing sites the Virtual Data Toolkit (VDT) Server and Client packages, developed in collaboration with our partners in the GriPhyN and iVDGL projects, providing uniform access to distributed resources for users and their applications. Taken together these Grid Computing technologies and infrastructure have formed the LSC DataGrid--a coherent and uniform environment across two continents for the analysis of gravitational-wave detector data. Much work, however, remains in order to scale current analyses and recent lessons learned need to be integrated into the next generation of Grid middleware.
Globally Gridded Satellite observations for climate studies
Knapp, K.R.; Ansari, S.; Bain, C.L.; Bourassa, M.A.; Dickinson, M.J.; Funk, Chris; Helms, C.N.; Hennon, C.C.; Holmes, C.D.; Huffman, G.J.; Kossin, J.P.; Lee, H.-T.; Loew, A.; Magnusdottir, G.
2011-01-01
Geostationary satellites have provided routine, high temporal resolution Earth observations since the 1970s. Despite the long period of record, use of these data in climate studies has been limited for numerous reasons, among them that no central archive of geostationary data for all international satellites exists, full temporal and spatial resolution data are voluminous, and diverse calibration and navigation formats encumber the uniform processing needed for multisatellite climate studies. The International Satellite Cloud Climatology Project (ISCCP) set the stage for overcoming these issues by archiving a subset of the full-resolution geostationary data at ~10-km resolution at 3-hourly intervals since 1983. Recent efforts at NOAA's National Climatic Data Center to provide convenient access to these data include remapping the data to a standard map projection, recalibrating the data to optimize temporal homogeneity, extending the record of observations back to 1980, and reformatting the data for broad public distribution. The Gridded Satellite (GridSat) dataset includes observations from the visible, infrared window, and infrared water vapor channels. Data are stored in Network Common Data Format (netCDF) using standards that permit a wide variety of tools and libraries to process the data quickly and easily. A novel data layering approach, together with appropriate satellite and file metadata, allows users to access GridSat data at varying levels of complexity based on their needs. The result is a climate data record already in use by the meteorological community. Examples include reanalysis of tropical cyclones, studies of global precipitation, and detection and tracking of the intertropical convergence zone.
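As one concrete example of indexing such a record, the 3-hourly sampling can be mapped to and from slot numbers counted from the start of the record (1980, per the text). This is an illustrative sketch of the time indexing only, not the GridSat file-naming or netCDF convention.

```python
# Index a 3-hourly synoptic record that starts in 1980 (as the extended
# GridSat record does): map an observation time to its slot number and back.
from datetime import datetime, timedelta

EPOCH = datetime(1980, 1, 1)

def slot_index(when: datetime, hours: int = 3) -> int:
    """Number of whole `hours`-long intervals from the record start."""
    return int((when - EPOCH) / timedelta(hours=hours))

def slot_time(index: int, hours: int = 3) -> datetime:
    """Inverse mapping: start time of the given slot."""
    return EPOCH + timedelta(hours=hours * index)

idx = slot_index(datetime(1983, 7, 1, 14, 30))
print(idx, slot_time(idx))  # slot 10220 starts at 1983-07-01 12:00
```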
Earth System Grid II (ESG): Turning Climate Model Datasets Into Community Resources
NASA Astrophysics Data System (ADS)
Williams, D.; Middleton, D.; Foster, I.; Nevedova, V.; Kesselman, C.; Chervenak, A.; Bharathi, S.; Drach, B.; Cinquni, L.; Brown, D.; Strand, G.; Fox, P.; Garcia, J.; Bernholdte, D.; Chanchio, K.; Pouchard, L.; Chen, M.; Shoshani, A.; Sim, A.
2003-12-01
High-resolution, long-duration simulations performed with advanced DOE SciDAC/NCAR climate models will produce tens of petabytes of output. To be useful, this output must be made available to global change impacts researchers nationwide, both at national laboratories and at universities, other research laboratories, and other institutions. To this end, we propose to create a new Earth System Grid, ESG-II - a virtual collaborative environment that links distributed centers, users, models, and data. ESG-II will provide scientists with virtual proximity to the distributed data and resources that they require to perform their research. The creation of this environment will significantly increase the scientific productivity of U.S. climate researchers by turning climate datasets into community resources. In creating ESG-II, we will integrate and extend a range of Grid and collaboratory technologies, including the DODS remote access protocols for environmental data, Globus Toolkit technologies for authentication, resource discovery, and resource access, and Data Grid technologies developed in other projects. We will develop new technologies for (1) creating and operating "filtering servers" capable of performing sophisticated analyses, and (2) delivering results to users. In so doing, we will simultaneously contribute to climate science and advance the state of the art in collaboratory technology. We expect our results to be useful to numerous other DOE projects. The three-year R&D program will be undertaken by a talented and experienced team of computer scientists at five laboratories (ANL, LBNL, LLNL, NCAR, ORNL) and one university (ISI), working in close collaboration with climate scientists at several sites.
NASA Astrophysics Data System (ADS)
Teng, W. L.; Maidment, D. R.; Rodell, M.; Strub, R. F.; Arctur, D. K.; Ames, D. P.; Vollmer, B.; Seiler, E.
2014-12-01
An ongoing NASA-funded project has removed a longstanding barrier to accessing NASA data (i.e., accessing archived time-step array data as point-time series) for selected variables of the North American and Global Land Data Assimilation Systems (NLDAS and GLDAS, respectively) and other EOSDIS (Earth Observing System Data Information System) data sets. These time series ("data rods") are pre-generated or generated on-the-fly (OTF), leveraging the NASA Simple Subset Wizard (SSW), a gateway to NASA data centers. Data rods Web services are accessible through the CUAHSI Hydrologic Information System (HIS) and the Goddard Earth Sciences Data and Information Services Center (GES DISC) but are not easily discoverable by users of other non-NASA data systems. The Global Earth Observation System of Systems (GEOSS) is a logical mechanism for providing access to the data rods, both pre-generated and OTF. There is an ongoing series of multi-organizational GEOSS Architecture Implementation Pilots, now in Phase-7 (AIP-7) and with a strong water sub-theme, that is aimed at the GEOSS Water Strategic Target "to produce [by 2015] comprehensive sets of data and information products to support decision-making for efficient management of the world's water resources, based on coordinated, sustained observations of the water cycle on multiple scales." The aim of this "GEOSS Water Services" project is to develop a distributed, global registry of water data, map, and modeling services catalogued using the standards and procedures of the Open Geospatial Consortium and the World Meteorological Organization. This project has already demonstrated that the GEOSS infrastructure can be leveraged to help provide access to time series of model grid information (e.g., NLDAS, GLDAS) or grids of information over a geographical domain for a particular time interval. 
A new NASA-funded project has begun to expand on these early efforts to enhance the discovery, search, and access of NASA data by non-NASA users. It comprises the following key aspects: 1. leverage the SSW API and the EOS Clearing House (ECHO); 2. register data rods services in GEOSS; 3. develop Web Feature Services (WFS) for data rods; 4. enhance metadata in WFS; 5. make non-NASA data visible to NASA users by leveraging SSW; 6. develop hydrological use cases to guide project deployment and serve as metrics.
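The core "data rods" idea, reorganizing archived time-step arrays so that one grid cell reads out as a time series, can be shown in miniature. This is an illustrative sketch: real granules are NLDAS/GLDAS files served through SSW, not in-memory lists.

```python
# "Data rods" in miniature: given a stack of time-step grids, extract the
# values of one grid cell across time as a point time series.
def extract_rod(granules, row, col):
    """granules: list of (timestamp, 2-D grid); returns [(timestamp, value)]."""
    return [(t, grid[row][col]) for t, grid in granules]

granules = [
    ("2014-01-01", [[1.0, 2.0], [3.0, 4.0]]),
    ("2014-01-02", [[1.5, 2.5], [3.5, 4.5]]),
]
rod = extract_rod(granules, row=1, col=0)
print(rod)  # [('2014-01-01', 3.0), ('2014-01-02', 3.5)]
```

Pre-generating rods for popular variables, versus generating them on the fly, is exactly the trade-off between storage and latency the project describes.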
Integrated Access to Solar Observations With EGSO
NASA Astrophysics Data System (ADS)
Csillaghy, A.
2003-12-01
Co-authors: J. Aboudarham (2), E. Antonucci (3), R. D. Bentely (4), L. Ciminiera (5), A. Finkelstein (4), J. B. Gurman (6), F. Hill (7), D. Pike (8), I. Scholl (9), V. Zharkova (10) and the EGSO development team. Institutions: (2) Observatoire de Paris-Meudon (France); (3) INAF - Istituto Nazionale di Astrofisica (Italy); (4) University College London (U.K.); (5) Politecnico di Torino (Italy); (6) NASA Goddard Space Flight Center (USA); (7) National Solar Observatory (USA); (8) Rutherford Appleton Lab. (U.K.); (9) Institut d'Astrophysique Spatiale, Universite de Paris-Sud (France); (10) University of Bradford (U.K.). Abstract: The European Grid of Solar Observations (EGSO) is the European contribution to the deployment of a virtual solar observatory. The project is funded under the Information Society Technologies (IST) thematic programme of the European Commission's Fifth Framework. EGSO started in March 2002 and will last until March 2005. The project is categorized as a computer science effort, and many of the issues it addresses are general to grid projects. Nevertheless, EGSO also benefits its application domains, including solar physics, space weather, climate physics and astrophysics. With EGSO, researchers as well as the general public can access and combine solar data from distributed archives in an integrated virtual solar resource. Users express queries based on various search parameters, and the search possibilities of EGSO extend those of traditional data access systems. For instance, users can formulate a query to search for simultaneous observations of a specific solar event in a given number of wavelengths. In other words, users can search for observations on the basis of events and phenomena, rather than just time and location. The software architecture consists of three collaborating components: a consumer, a broker and a provider.
The first component, the consumer, organizes the end user interaction and controls requests submitted to the grid. The consumer is thus in charge of tasks such as request handling, request composition, data visualization and data caching. The second component, the provider, is dedicated to data provision and processing. It links the grid to individual data providers and data centers. The third component, the broker, collects information about providers and allows consumers to perform searches on the grid. Each component can exist in multiple instances. This follows a basic grid concept: the failure or unavailability of a single component will not cause a failure of the whole system, as other instances will take over the processing of requests. The architecture relies on a global data model for the semantics. The data model is in some sense the brains of the grid: it provides a description of the information entities available within the grid, as well as a description of their relationships. EGSO is now in the development phase. A demonstration (www.egso.org/demo) is provided to give an idea of how the system will function once the project is completed. The demonstration focuses on retrieving data needed to determine the energy released in the solar atmosphere during the impulsive phase of flares. It allows finding simultaneous observations in the visible, UV, soft X-rays, hard X-rays, gamma-rays, and radio. The types of observations that can be specified are images at high spatial and temporal resolution as well as integrated emission and spectra from an as yet limited set of instruments, including the NASA spacecraft TRACE, SOHO and RHESSI, and the ground-based observatories Phoenix-2 in Switzerland and Meudon Observatory in France.
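The broker's event-based search for simultaneous observations can be sketched as a time-overlap query across wavelengths. The catalog structure and field names below are invented for illustration and are not taken from EGSO.

```python
# Event-based search sketch: find observations that overlap an event's time
# window in every requested wavelength, as an EGSO-style broker might.
def simultaneous(catalog, start, end, wavelengths):
    """catalog: list of dicts with 'instrument', 'wavelength', 'start', 'end'.
    Returns {wavelength: [instruments]} for observations overlapping
    [start, end], or None if any requested wavelength is uncovered."""
    hits = {w: [] for w in wavelengths}
    for obs in catalog:
        w = obs["wavelength"]
        if w in hits and obs["start"] < end and obs["end"] > start:
            hits[w].append(obs["instrument"])
    return hits if all(hits.values()) else None

catalog = [
    {"instrument": "TRACE",     "wavelength": "UV",     "start": 10, "end": 40},
    {"instrument": "RHESSI",    "wavelength": "hard X", "start": 20, "end": 50},
    {"instrument": "Phoenix-2", "wavelength": "radio",  "start": 0,  "end": 15},
]
print(simultaneous(catalog, start=25, end=35, wavelengths=["UV", "hard X"]))
```

Returning None when any wavelength is uncovered mirrors the "simultaneous in a given number of wavelengths" semantics of the query: partial coverage is not a match.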
Global Soil Information Facilities - Component Worldgrids.org
NASA Astrophysics Data System (ADS)
Reuter, H. I.; Hengl, T.
2012-04-01
GSIF (Global Soil Information Facilities) is ISRIC's framework for the production of open soil data. It has been inspired by global environmental data initiatives (e.g. OneGeology, GBIF). The main practical motivation for GSIF is to build a cyber-infrastructure to collate legacy (i.e., historic) soil data currently under threat of being lost forever and to generate new soil information. The objective of the worldgrids component is a (de)centralized repository for collecting, storing, accessing and interacting with gridded datasets of global soil covariate data for production mapping, as part of the larger GSIF. It is the physical implementation of the expectation that ISRIC would lead and coordinate a project to assemble a core set of global environmental covariates to (partly) support local efforts to produce global soil property maps. Currently, over 100 layers at 5 km and 1 km resolution with global coverage can be accessed via www.worldgrids.org. Three different functionalities are implemented to extract data in an OGC-compliant manner: (i) single-point overlay; (ii) mass point overlay; (iii) zone grid overlay with reporting of different statistical parameters. The presentation will focus on the datasets, the functionalities, access via the R project and the ArcGIS globalsoilmap.net Toolbox, as well as on future enhancements to the worldgrids platform.
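The zone grid overlay with statistical reporting can be illustrated by a minimal zonal-mean computation. This is a sketch with toy in-memory grids; worldgrids.org serves real raster layers.

```python
# Zonal statistics sketch: average a covariate grid within each zone of a
# matching zone grid (e.g. mean elevation per country-identification zone).
from collections import defaultdict

def zonal_mean(values, zones):
    """values, zones: equally shaped 2-D lists; returns {zone: mean value}."""
    sums, counts = defaultdict(float), defaultdict(int)
    for vrow, zrow in zip(values, zones):
        for v, z in zip(vrow, zrow):
            sums[z] += v
            counts[z] += 1
    return {z: sums[z] / counts[z] for z in sums}

elevation = [[100, 200], [300, 400]]
country   = [[1,   1  ], [2,   2  ]]
print(zonal_mean(elevation, country))  # {1: 150.0, 2: 350.0}
```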
Approach to sustainable e-Infrastructures - The case of the Latin American Grid
NASA Astrophysics Data System (ADS)
Barbera, Roberto; Diacovo, Ramon; Brasileiro, Francisco; Carvalho, Diego; Dutra, Inês; Faerman, Marcio; Gavillet, Philippe; Hoeger, Herbert; Lopez Pourailly, Maria Jose; Marechal, Bernard; Garcia, Rafael Mayo; Neumann Ciuffo, Leandro; Ramos Pollan, Paul; Scardaci, Diego; Stanton, Michael
2010-05-01
The EELA (E-Infrastructure shared between Europe and Latin America) and EELA-2 (E-science grid facility for Europe and Latin America) projects, co-funded by the European Commission under FP6 and FP7, respectively, have been successful in building a high-capacity, production-quality, scalable Grid facility for a wide spectrum of applications (e.g. Earth & Life Sciences, High Energy Physics, etc.) from several European and Latin American user communities. This paper presents the 4-year experience of EELA and EELA-2 in: • providing each member institution the unique opportunity to benefit from a huge distributed computing platform for its research activities, in particular through initiatives such as OurGrid, which proposes so-called Opportunistic Grid Computing, well adapted to small and medium research laboratories such as most of those in Latin America and Africa; • developing a realistic strategy to ensure the long-term continuity of the e-Infrastructure on the Latin American continent, beyond the term of the EELA-2 project, in association with CLARA and in collaboration with EGI. Previous interactions between EELA and African Grid members at events such as IST Africa '07, '08 and '09, the International Conference on Open Access '08 and EuroAfriCa-ICT '08, to which EELA and EELA-2 contributed, have shown that the e-Infrastructure situation in Africa compares well with the Latin American one. This means that African Grids are likely to face the same problems that EELA and EELA-2 experienced, especially in getting the necessary user and decision-maker support to create NGIs and, later, a possible continent-wide African Grid Initiative (AGI). The hope is that the EELA-2 endeavour towards sustainability, as described in this presentation, can help the progress of African Grids.
OGC and Grid Interoperability in enviroGRIDS Project
NASA Astrophysics Data System (ADS)
Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas
2010-05-01
EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure for the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science applications, while Grid technology is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is therefore interoperability between geospatial and Grid infrastructures, providing both the basic and the extended features of the two technologies. Geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments, through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields but especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the issues they introduce (data management, secure data transfer, data distribution and data computation), an infrastructure capable of managing all these problems becomes important. The Grid promotes and facilitates secure interoperation of heterogeneous distributed geospatial data within a distributed environment, supports the creation and management of large distributed computational jobs, and assures a security level for communication and message transfer based on certificates. This presentation analyses and discusses the most significant use cases for enabling OGC Web service interoperability with the Grid environment, and focuses on the description and implementation of the most promising one.
In these use cases we give special attention to issues such as the relations between the computational Grid and the OGC Web service protocols, the advantages offered by Grid technology (such as secure interoperability between distributed geospatial resources), and the issues introduced by the integration of distributed geospatial data in a secure environment: data and service discovery, management, access and computation. The enviroGRIDS project proposes a new architecture which allows a flexible and scalable approach for integrating the geospatial domain, represented by OGC Web services, with the Grid domain, represented by the gLite middleware. The parallelism offered by Grid technology is discussed and explored at the data, management and computation levels. The analysis is carried out for OGC Web service interoperability in general, but specific details are given for the Web Map Service (WMS), Web Feature Service (WFS), Web Coverage Service (WCS), Web Processing Service (WPS) and Catalogue Service for the Web (CSW). Issues regarding the mapping and interoperability between OGC and Grid standards and protocols are analyzed, as they are the basis for solving the communication problems between the two environments, Grid and geospatial. The presentation mainly highlights how Grid environment and Grid application capabilities can be extended and utilized for geospatial interoperability. Interoperability between geospatial and Grid infrastructures provides features such as specific complex geospatial functionality combined with the high computational power and security of the Grid, high spatial model resolution and wide geographical coverage, and flexible combination and interoperability of geographical models.
In accordance with Service Oriented Architecture concepts and the requirements of interoperability between geospatial and Grid infrastructures, each main piece of functionality is visible from the enviroGRIDS Portal and, consequently, from end-user applications such as Decision Maker/Citizen oriented applications. The enviroGRIDS portal is the single entry point for users into the system, and the portal presents a uniform graphical user interface. Main reference for further information: [1] enviroGRIDS Project, http://www.envirogrids.net/
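The OGC services discussed above share a common key-value-pair request encoding; as a concrete example, a standard WMS 1.1.1 GetMap request can be assembled as follows. The endpoint URL and layer name are hypothetical; the parameter names are those defined by the WMS specification.

```python
# Build a WMS 1.1.1 GetMap request URL from the standard parameters.
from urllib.parse import urlencode

def getmap_url(base, layers, bbox, width, height):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),
        "STYLES": "",
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(c) for c in bbox),   # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

# Hypothetical endpoint; bounding box roughly covering the Black Sea.
url = getmap_url("https://example.org/wms", ["landcover"],
                 (27.0, 40.0, 42.0, 48.0), 800, 600)
print(url)
```

Grid-enabling such a service leaves this client-facing encoding untouched; the mediation with gLite happens behind the endpoint.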
Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes
NASA Technical Reports Server (NTRS)
Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak
2004-01-01
High-order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the nonconforming interfaces between elements. A new technique is introduced to efficiently implement MEM on 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming or deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. The method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction of the number of elements used and the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable.
We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel Benchmarks (NPB). In this paper, we present some interesting performance results of our OpenMP parallel implementation on different architectures such as the SGI Origin2000, SGI Altix, and Cray MTA-2.
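The two-step "intermediate mortar" idea above can be illustrated with a small tensor-product sketch. This is only an illustration of the algebra (the sizes, random matrix and data are invented; a real mortar method would derive the 1-D matrices from L2 projection between polynomial bases): applying a small matrix along each axis in turn is equivalent to applying the large Kronecker-product matrix, without ever forming it.

```python
import numpy as np

# Hypothetical sizes: a fine 2-D face with n points per direction is
# projected onto a coarser mortar with m points per direction.
n, m = 8, 5
rng = np.random.default_rng(0)

# P stands in for a 1-D projection matrix (m x n); in a real mortar
# method it would come from L2 projection between polynomial bases.
P = rng.standard_normal((m, n))

# Face data on the fine element (n x n).
U = rng.standard_normal((n, n))

# One-step projection: form the full (m*m x n*n) matrix explicitly.
full = np.kron(P, P) @ U.ravel()

# Two-step projection: apply the small 1-D matrix along each axis in
# turn (the "intermediate mortar"), never forming the large matrix.
V = P @ U @ P.T

assert np.allclose(full, V.ravel())
```

For 3-D faces and mortars the same identity lets the projection be applied one direction at a time, which is why only 2-D-derived matrices are ever needed.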
Barriers and Solutions to Smart Water Grid Development.
Cheong, So-Min; Choi, Gye-Woon; Lee, Ho-Sun
2016-03-01
This limited review of smart water grid (SWG) development, challenges, and solutions provides an initial assessment of early attempts at operating SWGs. Though the cost and adoption issues are critical, potential benefits of SWGs such as efficient water conservation and distribution sustain the development of SWGs around the world. The review finds that the keys to success are the new regulations concerning data access and ownership to solve problems of security and privacy; consumer literacy to accept and use SWGs; active private sector involvement to coordinate SWG development; government-funded pilot projects and trial centers; and integration with sustainable water management.
Partners | Integrated Energy Solutions | NREL
NREL has teamed with partners in Africa to develop a Quality Assurance Framework for isolated mini-grids, expanding off-grid energy access and enhancing energy resiliency.
Virtual patient simulator for distributed collaborative medical education.
Caudell, Thomas P; Summers, Kenneth L; Holten, Jim; Hakamata, Takeshi; Mowafi, Moad; Jacobs, Joshua; Lozanoff, Beth K; Lozanoff, Scott; Wilks, David; Keep, Marcus F; Saiki, Stanley; Alverson, Dale
2003-01-01
Project TOUCH (Telehealth Outreach for Unified Community Health; http://hsc.unm.edu/touch) investigates the feasibility of using advanced technologies to enhance education in an innovative problem-based learning format currently being used in medical school curricula, applying specific clinical case models, and deploying to remote sites/workstations. The University of New Mexico's School of Medicine and the John A. Burns School of Medicine at the University of Hawai'i face similar health care challenges in providing and delivering services and training to remote and rural areas. Recognizing that health care needs are local and require local solutions, both states are committed to improving health care delivery to their unique populations by sharing information and experiences through emerging telehealth technologies by using high-performance computing and communications resources. The purpose of this study is to describe the deployment of a problem-based learning case distributed over the National Computational Science Alliance's Access Grid. Emphasis is placed on the underlying technical components of the TOUCH project, including the virtual reality development tool Flatland, the artificial intelligence-based simulation engine, the Access Grid, high-performance computing platforms, and the software that connects them all. In addition, educational and technical challenges for Project TOUCH are identified. Copyright 2003 Wiley-Liss, Inc.
Exploring New Methods of Displaying Bit-Level Quality and Other Flags for MODIS Data
NASA Technical Reports Server (NTRS)
Khalsa, Siri Jodha Singh; Weaver, Ron
2003-01-01
The NASA Distributed Active Archive Center (DAAC) at the National Snow and Ice Data Center (NSIDC) archives and distributes snow and sea ice products derived from the MODerate resolution Imaging Spectroradiometer (MODIS) on board NASA's Terra and Aqua satellites. All MODIS standard products are in the Earth Observing System version of the Hierarchical Data Format (HDF-EOS). The MODIS science team has packed a wealth of information into each HDF-EOS file. In addition to the science data arrays containing the geophysical product, there are often pixel-level Quality Assurance arrays which are important for understanding and interpreting the science data. Currently, researchers are limited in their ability to access and decode information stored as individual bits in many of the MODIS science products. Commercial and public domain utilities give users access, in varying degrees, to the elements inside MODIS HDF-EOS files. However, when attempting to visualize the data, users are confronted with the fact that many of the elements actually represent eight different 1-bit arrays packed into a single byte array. This project addressed the need for researchers to access bit-level information inside MODIS data files. In a previous NASA-funded project (ESDIS Prototype ID 50.0) we developed a visualization tool tailored to polar gridded HDF-EOS data sets. This tool, called PHDIS, allows researchers to access, geolocate, visualize, and subset data that originate from different sources and have different spatial resolutions but which are placed on a common polar grid. The bit-level visualization function developed under this project was added to PHDIS, resulting in a versatile tool that serves a variety of needs. We call this the EOS Imaging Tool.
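The bit-packing described above (eight 1-bit arrays in a single byte array) can be unpacked with simple shift-and-mask arithmetic. A minimal sketch, using an invented byte array in place of a real QA field (reading actual HDF-EOS files with pyhdf or GDAL is out of scope here):

```python
import numpy as np

# A small synthetic "QA" byte array standing in for a MODIS pixel-level
# Quality Assurance field; the values are invented for illustration.
qa = np.array([0b00000001, 0b00000110, 0b11110000], dtype=np.uint8)

def unpack_bits(arr, bit):
    """Extract one 1-bit flag array from a packed uint8 array."""
    return (arr >> bit) & 1

# Bit 0 is treated here as a summary quality flag (an assumption; the
# meaning of each bit is defined per-product in the MODIS documentation).
overall = unpack_bits(qa, 0)
print(overall)  # [1 0 0]
```

Each call returns a full-resolution 0/1 array that can be visualized or used as a mask, which is the kind of bit-level access the project set out to provide.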
Spaceflight Operations Services Grid (SOSG)
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Thigpen, William W.
2004-01-01
In an effort to adapt existing space flight operations services to new emerging Grid technologies we are developing a Grid-based prototype space flight operations Grid. This prototype is based on the operational services being provided to the International Space Station's Payload operations located at the Marshall Space Flight Center, Alabama. The prototype services will be Grid or Web enabled and provided to four user communities through portal technology. Users will have the opportunity to assess the value and feasibility of Grid technologies to their specific areas or disciplines. This presentation describes the prototype development, User-based services, Grid-based services and the status of the project. Expected benefits, findings and observations (if any) to date will also be discussed. The focus of the presentation will be on the project in general, status to date and future plans. The end-user services to be included in the prototype are voice, video, telemetry, commanding, collaboration tools and visualization among others. Security is addressed throughout the project and is being designed into the Grid technologies and standards development. The project is divided into three phases. Phase One establishes the baseline User-based services required for space flight operations listed above. Phase Two involves applying Grid/web technologies to the User-based services and development of portals for access by users. Phase Three will allow NASA and end users to evaluate the services and determine the future of the technology as applied to space flight operational services. Although Phase One, which includes the development of the quasi-operational User-based services of the prototype, will be completed by March 2004, the application of Grid technologies to these services will have just begun. We will provide the status of applying Grid technologies to the individual User-based services. 
This effort will result in an extensible environment that incorporates existing and new spaceflight services into a standards-based framework providing current and future NASA programs with cost savings and new and evolvable methods to conduct science. This project will demonstrate how the use of new programming paradigms such as web and grid services can provide three significant benefits to the cost-effective delivery of spaceflight services. They will enable applications to operate more efficiently by being able to utilize pooled resources. They will also permit the reuse of common services to rapidly construct new and more powerful applications. Finally they will permit easy and secure access to services via a combination of grid and portal technology by a distributed user community consisting of NASA operations centers, scientists, the educational community and even the general population as outreach. The approach will be to deploy existing mission support applications such as the Telescience Resource Kit (TReK) and new applications under development, such as the Grid Video Distribution System (GViDS), together with existing grid applications and services such as high-performance computing and visualization services provided by NASA's Information Power Grid (IPG) in the MSFC's Payload Operations Integration Center (POIC) HOSC Annex. Once the initial applications have been moved to the grid, a process will begin to apply the new programming paradigms to integrate them where possible. For example, with GViDS, instead of viewing the Distribution service as an application that must run on a single node, the new approach is to build it such that it can be dispatched across a pool of resources in response to dynamic loads. To make this a reality, reusable services will be critical, such as a brokering service to locate appropriate resources within the pool. This brokering service can then be used by other applications such as the TReK. 
To expand further, if the GViDS application is constructed using a services-based model, then other applications such as the Video Auditorium can then use GViDS as a service to easily incorporate these video streams into a collaborative conference. Finally, as these applications are re-factored into this new services-based paradigm, the construction of portals to integrate them will be a simple process. As a result, portals can be tailored to meet the requirements of specific user communities.
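The reusable brokering service mentioned above can be sketched in a few lines: given a pool of resources with dynamic loads, pick the one with the most spare capacity. All names (Resource, Broker, the node labels) are illustrative, not part of any SOSG, GViDS or TReK API:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    load: float  # fraction of capacity in use, 0.0-1.0

class Broker:
    """Toy brokering service: select the least-loaded pool member."""
    def __init__(self, pool):
        self.pool = list(pool)

    def select(self):
        return min(self.pool, key=lambda r: r.load)

pool = [Resource("node-a", 0.9), Resource("node-b", 0.2), Resource("node-c", 0.5)]
broker = Broker(pool)
print(broker.select().name)  # node-b
```

Because the selection logic lives in one shared service, any application (a video distribution service one day, TReK the next) can reuse it rather than hard-coding a host.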
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-08
... turbines and associated facilities and access roads, maintenance of the wind turbines and associated... Area). The Plan Area is adjacent to existing energy-producing facilities, most notably wind turbine.../California Independent System Operator power grid. Up to 59 wind turbines would be built in the Plan Area...
Federated data storage system prototype for LHC experiments and data intensive science
NASA Astrophysics Data System (ADS)
Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.
2017-10-01
Rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and universities' clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and changes in computing style, for instance how a bioinformatics program running on supercomputers can read/write data from the federated storage.
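One ingredient of such a federation is replica selection: a logical file may exist at several sites, and a client should be directed to the cheapest one to reach. A minimal sketch, with invented file names, sites and cost figures (a real system would measure WAN latency or use site topology):

```python
# Map a logical file name to its replica sites, then pick the site with
# the lowest access cost. All data below is hypothetical.
replicas = {
    "/atlas/data/run123.root": ["Moscow", "Dubna", "Geneva"],
}
site_cost_ms = {"Moscow": 5, "Dubna": 12, "Geneva": 48}  # assumed latencies

def choose_replica(lfn):
    """Return the replica site with the lowest assumed access cost."""
    return min(replicas[lfn], key=lambda s: site_cost_ms[s])

print(choose_replica("/atlas/data/run123.root"))  # Moscow
```

The point of the sketch is the decoupling: clients name data logically, and the federation decides which physical copy serves the request.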
Grids for Dummies: Featuring Earth Science Data Mining Application
NASA Technical Reports Server (NTRS)
Hinke, Thomas H.
2002-01-01
This viewgraph presentation discusses the concept and advantages of linking computers together into data grids, an emerging technology for managing information across institutions, and potential users of data grids. The logistics of access to a grid, including the use of the World Wide Web to access grids, and security concerns are also discussed. The potential usefulness of data grids to the earth science community is also discussed, as well as the Global Grid Forum, and other efforts to establish standards for data grids.
NASA Astrophysics Data System (ADS)
Pordes, Ruth; OSG Consortium; Petravick, Don; Kramer, Bill; Olson, Doug; Livny, Miron; Roy, Alain; Avery, Paul; Blackburn, Kent; Wenaus, Torre; Würthwein, Frank; Foster, Ian; Gardner, Rob; Wilde, Mike; Blatecky, Alan; McGee, John; Quick, Rob
2007-07-01
The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. OSG provides support for and evolution of the infrastructure through activities that cover operations, security, software, troubleshooting, addition of new capabilities, and support for existing and engagement with new communities. The OSG SciDAC-2 project provides specific activities to manage and evolve the distributed infrastructure and support its use. The innovative aspects of the project are the maintenance and performance of a collaborative (shared & common) petascale national facility over tens of autonomous computing sites, for many hundreds of users, transferring terabytes of data a day, executing tens of thousands of jobs a day, and providing robust and usable resources for scientific groups of all types and sizes. More information can be found at the OSG web site: www.opensciencegrid.org.
National Centers for Environmental Prediction
/NDAS Output Fields (contents, format, grid specs, output frequency, archive): the NWP model; the horizontal output grid; the vertical grid; access to fields (anonymous FTP access; permanent tape archive).
High resolution global gridded data for use in population studies
Lloyd, Christopher T.; Sorichetta, Alessandro; Tatem, Andrew J.
2017-01-01
Recent years have seen substantial growth in openly available satellite and other geospatial data layers, which represent a range of metrics relevant to global human population mapping at fine spatial scales. The specifications of such data differ widely and therefore the harmonisation of data layers is a prerequisite to constructing detailed and contemporary spatial datasets which accurately describe population distributions. Such datasets are vital to measure impacts of population growth, monitor change, and plan interventions. To this end, the WorldPop Project has produced an open access archive of 3 and 30 arc-second resolution gridded data. Four tiled raster datasets form the basis of the archive: (i) Viewfinder Panoramas topography clipped to Global ADMinistrative area (GADM) coastlines; (ii) a matching ISO 3166 country identification grid; (iii) country area; and (iv) slope. Further layers include transport networks, landcover, nightlights, precipitation, travel time to major cities, and waterways. Datasets and production methodology are here described. The archive can be downloaded both from the WorldPop Dataverse Repository and the WorldPop Project website. PMID:28140386
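Because the rasters are harmonised onto the same grid, layers can be combined cell-by-cell, e.g. aggregating a data layer by the country identification grid. A toy sketch with invented 2x3 arrays standing in for aligned rasters (real layers would be read from GeoTIFFs):

```python
import numpy as np

# Toy stand-ins for two aligned WorldPop-style rasters: a country
# identification grid and a per-pixel data layer (e.g. slope).
country_id = np.array([[1, 1, 2],
                       [1, 2, 2]])
slope = np.array([[0.1, 0.3, 0.9],
                  [0.2, 0.8, 1.0]])

def mean_by_country(ids, values):
    """Per-country mean of a data layer, keyed by country id."""
    return {int(c): float(values[ids == c].mean()) for c in np.unique(ids)}

print(mean_by_country(country_id, slope))
```

This kind of zonal aggregation is only valid because the archive guarantees matching extents and resolutions across layers, which is the harmonisation the abstract emphasises.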
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yocum, D.R.; Berman, E.; Canal, P.
2007-05-01
As one of the founding members of the Open Science Grid Consortium (OSG), Fermilab enables coherent access to its production resources through the Grid infrastructure system called FermiGrid. This system successfully provides for centrally managed grid services, opportunistic resource access, development of OSG Interfaces for Fermilab, and an interface to the Fermilab dCache system. FermiGrid supports virtual organizations (VOs) including high energy physics experiments (USCMS, MINOS, D0, CDF, ILC), astrophysics experiments (SDSS, Auger, DES), biology experiments (GADU, Nanohub) and educational activities.
Action Research to Improve Methods of Delivery and Feedback in an Access Grid Room Environment
ERIC Educational Resources Information Center
McArthur, Lynne C.; Klass, Lara; Eberhard, Andrew; Stacey, Andrew
2011-01-01
This article describes a qualitative study which was undertaken to improve the delivery methods and feedback opportunity in honours mathematics lectures which are delivered through Access Grid Rooms. Access Grid Rooms are facilities that provide two-way video and audio interactivity across multiple sites, with the inclusion of smart boards. The…
Surface debris inventory at White Wing Scrap Yard, Oak Ridge Reservation, Oak Ridge, Tennessee
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, R.E.; Tiner, P.F.; Williams, J.K.
1992-08-01
An inventory of surface debris in designated grid blocks at the White Wing Scrap Yard [Waste Area Grouping 11 (WAG 11)] was conducted intermittently from February through June 1992 by members of the Measurement Applications and Development Group, Health and Safety Research Division, Oak Ridge National Laboratory (ORNL) at the request of ORNL Environmental Restoration (ER) Program personnel. The objectives of this project are outlined in the following four phases: (1) estimate the amount (volume) and type (e.g., glass, metal and plastics) of surface waste material in 30 designated grid blocks (100- by 100-ft grids); (2) conduct limited air sampling for organic chemical pollutants at selected locations (e.g., near drums, in holes, or other potentially contaminated areas); (3) conduct a walkover gamma radiation scan extending outward (approximately 50 ft) beyond the proposed location of the WAG 11 perimeter fence; and (4) recommend one grid block as a waste staging area. This recommendation is based on location and accessibility for debris staging/transport activities and on low levels of gamma radiation in the grid block.
NASA Technical Reports Server (NTRS)
Moore, Reagan W.; Jagatheesan, Arun; Rajasekar, Arcot; Wan, Michael; Schroeder, Wayne
2004-01-01
The "Grid" is an emerging infrastructure for coordinating access across autonomous organizations to distributed, heterogeneous computation and data resources. Data grids are being built around the world as the next generation data handling systems for sharing, publishing, and preserving data residing on storage systems located in multiple administrative domains. A data grid provides logical namespaces for users, digital entities and storage resources to create persistent identifiers for controlling access, enabling discovery, and managing wide area latencies. This paper introduces data grids and describes data grid use cases. The relevance of data grids to digital libraries and persistent archives is demonstrated, and research issues in data grids and grid dataflow management systems are discussed.
A Review on Development Practice of Smart Grid Technology in China
NASA Astrophysics Data System (ADS)
Han, Liu; Chen, Wei; Zhuang, Bo; Shen, Hongming
2017-05-01
Smart grid has become an inexorable trend in energy and economic development worldwide. Since the development of the smart grid was put forward in China in 2009, abundant research results and practical experience have been obtained, along with extensive attention from the international community in this field. This paper systematically reviews the key technologies and demonstration projects on new energy connection forecasts; energy storage; smart substations; disaster prevention and reduction for power transmission lines; flexible DC transmission; distribution automation; distributed generation access and micro grid; smart power consumption; the comprehensive demonstration of power distribution and utilization; smart power dispatching and control systems; and the communication networks and information platforms of China, on the basis of five fields, i.e., renewable energy integration, smart power transmission and transformation, smart power distribution and consumption, smart power dispatching and control systems and information and communication platforms. Meanwhile, it also analyzes and compares with the developmental level of similar technologies abroad, providing an outlook on the future development trends of various technologies.
Making the most of cloud storage - a toolkit for exploitation by WLCG experiments
NASA Astrophysics Data System (ADS)
Alvarez Ayllon, Alejandro; Arsuaga Rios, Maria; Bitzes, Georgios; Furano, Fabrizio; Keeble, Oliver; Manzi, Andrea
2017-10-01
Understanding how cloud storage can be effectively used, either standalone or in support of its associated compute, is now an important consideration for WLCG. We report on a suite of extensions to familiar tools targeted at enabling the integration of cloud object stores into traditional grid infrastructures and workflows. Notable updates include support for a number of object store flavours in FTS3, Davix and gfal2, including mitigations for lack of vector reads; the extension of Dynafed to operate as a bridge between grid and cloud domains; protocol translation in FTS3; the implementation of extensions to DPM (also implemented by the dCache project) to allow third-party transfers over HTTP. The result is a toolkit which facilitates data movement and access between grid and cloud infrastructures, broadening the range of workflows suitable for cloud. We report on deployment scenarios and prototype experience, explaining how, for example, an Amazon S3 or Azure allocation can be exploited by grid workflows.
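One common mitigation for object stores that lack vector reads is to coalesce many small byte ranges into fewer HTTP Range requests. A standalone sketch of that idea (the gap threshold and the range merging policy are assumptions for illustration, not the FTS3/Davix/gfal2 implementation):

```python
def coalesce(ranges, max_gap=64):
    """Merge (offset, length) byte ranges whose gaps are <= max_gap bytes.

    Fewer, larger Range requests trade a little wasted transfer for far
    fewer round trips against a high-latency object store.
    """
    merged = []
    for off, length in sorted(ranges):
        if merged and off - (merged[-1][0] + merged[-1][1]) <= max_gap:
            last_off, last_len = merged[-1]
            merged[-1] = (last_off, max(last_len, off + length - last_off))
        else:
            merged.append((off, length))
    return merged

print(coalesce([(0, 100), (120, 50), (1000, 10)]))  # [(0, 170), (1000, 10)]
```

Tuning `max_gap` is a latency-versus-bandwidth trade-off: a larger gap means fewer requests but more discarded bytes per request.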
Haidar, Ali N; Zasada, Stefan J; Coveney, Peter V; Abdallah, Ali E; Beckles, Bruce; Jones, Mike A S
2011-06-06
We present applications of audited credential delegation (ACD), a usable security solution for authentication, authorization and auditing in distributed virtual physiological human (VPH) project environments that removes the use of digital certificates from end-users' experience. Current security solutions are based on public key infrastructure (PKI). While PKI offers strong security for VPH projects, it suffers from serious usability shortcomings in terms of end-user acquisition and management of credentials which deter scientists from exploiting distributed VPH environments. By contrast, ACD supports the use of local credentials. Currently, a local ACD username-password combination can be used to access grid-based resources while Shibboleth support is underway. Moreover, ACD provides seamless and secure access to shared patient data, tools and infrastructure, thus supporting the provision of personalized medicine for patients, scientists and clinicians participating in e-health projects from a local to the widest international scale. PMID:22670214
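The core of the ACD idea can be sketched as: the user only ever sees a local username/password, while the service maps a successful local login to a held grid credential and records every decision for audit. The hashing scheme, data layout and proxy subject below are illustrative, not the VPH/ACD implementation (a real system would use a salted KDF, not bare SHA-256):

```python
import hashlib

# Invented local credential store and delegated grid credentials.
users = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
delegated = {"alice": "/O=Grid/CN=alice/proxy"}  # hypothetical proxy subject
audit_log = []

def grid_login(username, password):
    """Local login; on success, return the delegated grid credential."""
    if users.get(username) != hashlib.sha256(password.encode()).hexdigest():
        audit_log.append(("DENY", username))
        return None
    audit_log.append(("ALLOW", username))
    return delegated[username]   # used on the user's behalf, never shown

print(grid_login("alice", "s3cret"))
print(grid_login("alice", "wrong"))  # None
```

The usability gain is exactly this indirection: certificate acquisition and renewal happen behind the service boundary, and the audit log preserves accountability that naive credential sharing would lose.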
GEMSS: grid-infrastructure for medical service provision.
Benkner, S; Berti, G; Engelbrecht, G; Fingberg, J; Kohring, G; Middleton, S E; Schmidt, R
2005-01-01
The European GEMSS Project is concerned with the creation of medical Grid service prototypes and their evaluation in a secure service-oriented infrastructure for distributed on-demand supercomputing. Key aspects of the GEMSS Grid middleware include negotiable QoS support for time-critical service provision, flexible support for business models, and security at all levels in order to ensure privacy of patient data as well as compliance to EU law. The GEMSS Grid infrastructure is based on a service-oriented architecture and is being built on top of existing standard Grid and Web technologies. The GEMSS infrastructure offers a generic Grid service provision framework that hides the complexity of transforming existing applications into Grid services. For the development of client-side applications or portals, a pluggable component framework has been developed, providing developers with full control over business processes, service discovery, QoS negotiation, and workflow, while keeping their underlying implementation hidden from view. A first version of the GEMSS Grid infrastructure is operational and has been used for the set-up of a Grid test-bed deploying six medical Grid service prototypes including maxillo-facial surgery simulation, neuro-surgery support, radio-surgery planning, inhaled drug-delivery simulation, cardiovascular simulation and advanced image reconstruction. The GEMSS Grid infrastructure is based on standard Web Services technology with an anticipated future transition path towards the OGSA standard proposed by the Global Grid Forum. GEMSS demonstrates that the Grid can be used to provide medical practitioners and researchers with access to advanced simulation and image processing services for improved preoperative planning and near real-time surgical support.
Grid-enabled mammographic auditing and training system
NASA Astrophysics Data System (ADS)
Yap, M. H.; Gale, A. G.
2008-03-01
Effective use of new technologies to support healthcare initiatives is important and current research is moving towards implementing secure grid-enabled healthcare provision. In the UK, a large-scale collaborative research project (GIMI: Generic Infrastructures for Medical Informatics), which is concerned with the development of a secure IT infrastructure to support very widespread medical research across the country, is underway. In the UK, there are some 109 breast screening centers and a growing number of individuals (circa 650) nationally performing approximately 1.5 million screening examinations per year. At the same time, there is a serious, and ongoing, national workforce issue in screening which has seen a loss of consultant mammographers and a growth in specially trained technologists and other non-radiologists. Thus there is a need to offer effective and efficient mammographic training so as to maintain high levels of screening skills. Consequently, a grid-based system has been proposed which has the benefit of offering very large volumes of training cases that the mammographers can access anytime and anywhere. A database of screening cases, spread geographically across three university systems, is used as a test set of known cases. The GIMI mammography training system first audits these cases to ensure that they are appropriately described and annotated. Subsequently, the cases are utilized for training in a grid-based system which has been developed. This paper briefly reviews the background to the project and then details the ongoing research. In conclusion, we discuss the contributions, limitations, and future plans of such a grid-based approach.
Earth System Grid and EGI interoperability
NASA Astrophysics Data System (ADS)
Raciazek, J.; Petitdidier, M.; Gemuend, A.; Schwichtenberg, H.
2012-04-01
The Earth Science data centers have developed a data grid called the Earth Science Grid Federation (ESGF) to give the scientific community worldwide access to CMIP5 (Coupled Model Inter-comparison Project 5) climate data. The CMIP5 data will make it possible to evaluate the impact of climate change in various environmental and societal areas, such as regional climate, extreme events, agriculture, insurance… The ESGF grid provides services like searching, browsing and downloading of datasets. At the security level, ESGF data access is protected by an authentication mechanism. An ESGF-trusted X509 Short-Lived EEC certificate with the correct roles/attributes is required to get access to the data in a non-interactive way (e.g. from a worker node). To access ESGF from EGI (i.e. by earth science applications running on EGI infrastructure), the security incompatibility between the two grids is the challenge: the EGI proxy certificate is not ESGF-trusted, nor does it contain the correct roles/attributes. To solve this problem, we decided to use a Credential Translation Service (CTS) to translate the EGI X509 proxy certificate into the ESGF Short-Lived EEC certificate (the CTS will issue ESGF certificates based on EGI certificate authentication). From the end-user perspective, the main steps to use the CTS are: the user binds his two identities (EGI and ESGF) together in the CTS using the CTS web interface (this step has to be done only once) and then requests an ESGF Short-Lived EEC certificate whenever one is needed, using a command-line tool. The implementation of the CTS is ongoing. It is based on the open source MyProxy software stack, which is used in many grid infrastructures. On the client side, the "myproxy-logon" command-line tool is used to request the certificate translation. A new option has been added to "myproxy-logon" to select the original certificate (in our case, the EGI one). 
On the server side, the MyProxy server operates in Certificate Authority mode, with a new module to store and manage identity pairs. Many European teams are working on the impact of climate change and face a lack of computing resources for handling large data sets. This collaboration between the ES VRC in EGI-InSPIRE and ESGF will be important in facilitating the exploitation of the CMIP5 data on EGI.
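The server-side identity-pair module can be sketched roughly as follows. This is an illustrative toy, not the actual CTS code; all names and DNs are invented, and the real service delegates certificate issuance to MyProxy in CA mode:

```python
# Hypothetical sketch of the CTS identity-pair store: remember which
# ESGF identity is bound to an authenticated EGI certificate subject.
class IdentityPairStore:
    """Binds an EGI certificate subject (DN) to an ESGF identity."""

    def __init__(self):
        self._pairs = {}  # EGI DN -> ESGF username

    def bind(self, egi_dn, esgf_user):
        # Done once by the user through the CTS web interface.
        self._pairs[egi_dn] = esgf_user

    def translate(self, egi_dn):
        # Called on each myproxy-logon request: after authenticating the
        # EGI proxy, look up the bound ESGF identity; the real service
        # would then have the CA issue a Short-Lived EEC for it.
        if egi_dn not in self._pairs:
            raise KeyError("no ESGF identity bound to %s" % egi_dn)
        return self._pairs[egi_dn]

store = IdentityPairStore()
store.bind("/DC=EU/DC=EGI/CN=Jane Doe", "jdoe")
print(store.translate("/DC=EU/DC=EGI/CN=Jane Doe"))  # -> jdoe
```

The one-time web binding and the per-request lookup correspond to the two user-facing steps described in the abstract.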
NASA Astrophysics Data System (ADS)
Kershaw, Philip; Lawrence, Bryan; Lowe, Dominic; Norton, Peter; Pascoe, Stephen
2010-05-01
CEDA (Centre for Environmental Data Archival), based at STFC Rutherford Appleton Laboratory, is host to the BADC (British Atmospheric Data Centre) and NEODC (NERC Earth Observation Data Centre), with data holdings of over half a petabyte. In the coming months this figure is set to increase by over one petabyte through the BADC's role as one of three data centres hosting the CMIP5 (Coupled Model Intercomparison Project Phase 5) core archive of climate model data. Quite apart from the problem of managing the storage of such large volumes, there is the challenge of collating the data from the modelling centres around the world and enabling access to these data for the user community. An infrastructure to support this is being developed under the US Earth System Grid (ESG) and related projects, bringing participating organisations together in a federation. The ESG architecture defines Gateways, the web interfaces that enable users to access data, and data-serving applications organised into Data Nodes. The BADC has been working in collaboration with the US Earth System Grid team and other partners to develop a security system to restrict access to data. This provides single sign-on via both OpenID and PKI-based means and uses role-based authorisation facilitated by SAML and OpenID-based interfaces for attribute retrieval. This presentation will provide an overview of the access control architecture and look at how this has been implemented for CEDA. CEDA has developed expertise in data access and information services over several years through a number of projects to develop and enhance these capabilities. Participation in CMIP5 comes at a time when a number of other software development activities are coming to fruition. New services are in the process of being deployed alongside the services making up the system for ESG. The security system must apply access control across this heterogeneous environment of different data services and technologies. 
One strand of the development efforts within CEDA has been the NDG (NERC DataGrid) Security system. This system has been extended to interoperate with ESG, greatly assisted by the standards-based approach adopted for the ESG security architecture. Drawing on experience from previous projects, the decision was taken to refactor the NDG Security software into a component-based architecture to enable a separation of concerns between access control and the functionality of a given application being protected. Such an approach is only possible through a generic interface. At CEDA, this has been realised in the Python programming language using the WSGI (Web Server Gateway Interface) specification. A parallel Java filter-based implementation is also under development with our US partners for use with the THREDDS Data Server. Using such technologies, applications and middleware can be assembled into custom configurations to meet different requirements. In the case of access control, NDG Security middleware can be layered over the top of existing applications without the need to modify them. A RESTful approach to the application of authorisation policy has been key in this approach. We explore the practical implementation of such a scheme alongside the application of the ESG security architecture to CEDA's OGC web services implementation, COWS.
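The WSGI layering described above can be illustrated in a few lines. The policy function and names below are invented stand-ins, not the actual NDG Security interfaces; the point is that access control wraps an unmodified application:

```python
# A minimal sketch of access-control middleware layered over an
# unmodified WSGI application, in the spirit of the component-based
# approach described above. The authorisation policy is illustrative.
def protected_app(environ, start_response):
    # The existing data service, unaware of access control.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'dataset contents']

def access_control_middleware(app, authorize):
    """Wrap any WSGI app with an authorisation check."""
    def middleware(environ, start_response):
        user = environ.get('REMOTE_USER')
        if not authorize(user, environ.get('PATH_INFO', '/')):
            start_response('403 Forbidden', [('Content-Type', 'text/plain')])
            return [b'access denied']
        return app(environ, start_response)
    return middleware

# Invented policy: only 'alice' may read paths under /cmip5.
app = access_control_middleware(
    protected_app,
    lambda user, path: user == 'alice' or not path.startswith('/cmip5'))

def call(app, user, path):
    """Tiny test harness: invoke a WSGI app, return (status, body)."""
    status = []
    body = app({'REMOTE_USER': user, 'PATH_INFO': path},
               lambda s, headers: status.append(s))
    return status[0], b''.join(body)

print(call(app, 'bob', '/cmip5/tas')[0])  # 403 Forbidden
```

Because the middleware and the application only share the generic WSGI interface, the same access-control component can be reused over any service in the stack.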
Physicists Get INSPIREd: INSPIRE Project and Grid Applications
NASA Astrophysics Data System (ADS)
Klem, Jukka; Iwaszkiewicz, Jan
2011-12-01
INSPIRE is the new high-energy physics scientific information system developed by CERN, DESY, Fermilab and SLAC. INSPIRE combines the curated and trusted contents of the SPIRES database with Invenio digital library technology. INSPIRE contains the entire HEP literature, with about one million records, and in addition to becoming the reference HEP scientific information platform, it aims to provide new kinds of data mining services and metrics to assess the impact of articles and authors. Grid and cloud computing provide new opportunities to offer better services in areas that require large CPU and storage resources, including document Optical Character Recognition (OCR) processing, full-text indexing of articles and improved metrics. D4Science-II is a European project that develops and operates an e-Infrastructure supporting Virtual Research Environments (VREs). It develops an enabling technology (gCube) which implements a mechanism for facilitating the interoperation of its e-Infrastructure with other autonomously running data e-Infrastructures; as a result, this creates the core of an e-Infrastructure ecosystem. INSPIRE is one of the e-Infrastructures participating in the D4Science-II project. In the context of the D4Science-II project, the INSPIRE e-Infrastructure makes some of its resources and services available to other members of the resulting ecosystem. Moreover, it benefits from the ecosystem via a dedicated Virtual Organization giving access to an array of resources, ranging from computing and storage resources of grid infrastructures to data and services.
The Open Science Grid - Support for Multi-Disciplinary Team Science - the Adolescent Years
NASA Astrophysics Data System (ADS)
Bauerdick, Lothar; Ernst, Michael; Fraser, Dan; Livny, Miron; Pordes, Ruth; Sehgal, Chander; Würthwein, Frank; Open Science Grid
2012-12-01
As it enters adolescence, the Open Science Grid (OSG) is bringing a maturing fabric of Distributed High Throughput Computing (DHTC) services that supports an expanding HEP community to an increasingly diverse spectrum of domain scientists. Working closely with researchers on campuses throughout the US and in collaboration with national cyberinfrastructure initiatives, we transform their computing environment through new concepts, advanced tools and deep experience. We discuss examples of these, including: the pilot-job overlay concepts and technologies now in use throughout OSG and delivering 1.4 million CPU hours per day; the role of campus infrastructures, built out from concepts of sharing across multiple local faculty clusters (already put to good use by many of the HEP Tier-2 sites in the US); the work towards the use of clouds and access to high-throughput parallel (multi-core and GPU) compute resources; and the progress we are making towards meeting the data management and access needs of non-HEP communities with general tools derived from the experience of the parochial tools in HEP (integration of Globus Online, prototyping with iRODS, investigations into wide-area Lustre). We will also review our activities and experiences as an HTC Service Provider to the recently awarded NSF XD XSEDE project, the evolution of the US NSF TeraGrid project, and how we are extending the reach of HTC through this activity to the increasingly broad national cyberinfrastructure. We believe that a coordinated view of the HPC and HTC resources in the US will further expand their impact on scientific discovery.
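The pilot-job overlay idea mentioned above can be caricatured in a few lines: a generic pilot job lands in a batch slot and then pulls real user tasks from a central queue until it drains. This is a toy sketch of the concept, not glideinWMS code:

```python
# Toy pilot-job overlay: the pilot occupies a slot and repeatedly
# fetches user payloads from a central task queue until none remain.
from queue import Queue, Empty

def pilot(task_queue, results):
    while True:
        try:
            task = task_queue.get_nowait()
        except Empty:
            return  # queue drained: pilot exits, releasing the slot
        results.append(task())  # run the user payload inside the slot

tasks = Queue()
for n in (1, 2, 3):
    tasks.put(lambda n=n: n * n)  # stand-ins for user jobs

out = []
pilot(tasks, out)
print(out)  # [1, 4, 9]
```

The practical advantage, as in OSG, is that site heterogeneity is hidden behind the pilot: user tasks see a uniform environment regardless of which cluster the pilot landed on.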
FermiGrid - experience and future plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chadwick, K.; Berman, E.; Canal, P.
2007-09-01
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computing resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and supports movement of work (jobs, data) between the Campus Grid and National Grids such as the Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab-resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site-wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure--the successes and the problems.
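One of the core services listed above, grid user mapping, can be sketched as a lookup from an authenticated certificate subject plus VO membership to a local account. The VO names and pool accounts below are invented for illustration; the real service is far richer:

```python
# Illustrative grid user-mapping: an authorized VO member is mapped to
# a site-local pool account. Mappings here are invented examples.
VO_POOL_ACCOUNTS = {'cms': 'cmsusr', 'dzero': 'd0usr'}

def map_user(subject_dn, vo):
    """Map an authenticated (DN, VO) pair to a local account."""
    if vo not in VO_POOL_ACCOUNTS:
        raise PermissionError('VO %r not authorized at this site' % vo)
    return VO_POOL_ACCOUNTS[vo]

print(map_user('/DC=org/DC=cilogon/CN=Some User', 'cms'))  # cmsusr
```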
Emergency Response and the International Charter Space and Major Disasters
NASA Astrophysics Data System (ADS)
Jones, B.; Lamb, R.
2011-12-01
Responding to catastrophic natural disasters requires information. When the flow of information on the ground is interrupted by crises such as earthquakes, landslides, volcanoes, hurricanes, and floods, satellite imagery and aerial photographs become invaluable tools in revealing post-disaster conditions and in aiding disaster response and recovery efforts. USGS is a global clearinghouse for remotely sensed disaster imagery. It is also a source of innovative products derived from satellite imagery that can provide unique overviews as well as important details about the impacts of disasters. Repeatedly, USGS and its resources have proven their worth in assisting with disaster recovery activities in the United States and abroad. USGS has a well-established role in emergency response in the United States. It works closely with the Federal Emergency Management Agency (FEMA) by providing first responders with satellite and aerial images of disaster-impacted sites and products developed from those images. The combination of the USGS image archive, coupled with its global data transfer capability and on-site science staff, was instrumental in the USGS becoming a participating agency in the International Charter Space and Major Disasters. This participation provides the USGS with access to international members and their space agencies, to information on European and other global member methodology in disaster response, and to data from satellites operated by Charter member countries. Such access enhances the USGS' ability to respond to global emergencies and to disasters that occur in the United States (US). As one example, the Charter agencies provided imagery to the US for over 4 months in response to the Gulf oil spill. The International Charter mission is to provide a unified system of space data acquisition and delivery to those affected by natural or man-made disasters. 
Each member space agency has committed resources to support the provisions of the Charter and thus is helping to mitigate the effects of disasters on human life and property. The International Charter has been in formal operation since November 1, 2000. An Authorized User can call a single number to request the mobilization of satellite imagery and associated ground station support of the Charter's member agencies to obtain data and information on a disaster occurrence. The International Charter is supported by Argentinean, Canadian, European, Indian, Japanese, Chinese, Brazilian, Korean, Russian and U.S. satellite operators, as well as through U.S. and foreign commercial satellite firms and consortia. These operators can provide a wide variety of imagery and information under various environmental conditions (including, in some instances, through cloud cover and darkness). The Charter works in close cooperation with the intergovernmental Group on Earth Observations (GEO), and with United Nations bodies such as the UN Office of Outer Space Affairs (UN OOSA) and the UN Institute for Training and Research (UNITAR) Operational Satellite Applications Programme (UNOSAT). Both UN OOSA and UNOSAT are authorized to request data from Charter members in response to a UN emergency. Sentinel Asia is also allowed to request activations on behalf of its member states. These organizations play an important role in maximizing the Charter's use.
Quality Assurance Framework for Mini-Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baring-Gould, Ian; Burman, Kari; Singh, Mohit
Providing clean and affordable energy services to the more than 1 billion people globally who lack access to electricity is a critical driver for poverty reduction, economic development, and improved health and social outcomes. More than 84% of populations without electricity are located in rural areas where traditional grid extension may not be cost-effective; therefore, distributed energy solutions such as mini-grids are critical. To address some of the root challenges of providing safe, quality, and financially viable mini-grid power systems to remote customers, the U.S. Department of Energy (DOE) teamed with the National Renewable Energy Laboratory (NREL) to develop a Quality Assurance Framework (QAF) for isolated mini-grids. The QAF aims to address these challenges through two key components: (1) Levels of service: defines a standard set of tiers of end-user service and links them to technical parameters of power quality, power availability, and power reliability. These levels of service span the entire energy ladder, from basic energy service to high-quality, high-reliability, and high-availability service (often considered 'grid parity'). (2) Accountability and performance reporting framework: provides a clear process for validating power delivery by providing trusted information to customers, funders, and/or regulators. The performance reporting protocol can also serve as a robust monitoring and evaluation tool for mini-grid operators and funding organizations. The QAF will provide a flexible alternative to rigid top-down standards for mini-grids in energy access contexts, outlining tiers of end-user service and linking them to relevant technical parameters. 
In addition, data generated through implementation of the QAF will provide the foundation for comparisons across projects, assessment of impacts, and the greater confidence that will drive investment and scale-up in this sector. The QAF also defines a set of implementation guidelines that support the deployment of mini-grids on a regional or national scale, helping to ensure the successful, rapid deployment of these relatively new remote energy options. Note that the QAF is technology-agnostic, addressing both alternating current (AC) and direct current (DC) mini-grids, and is applicable to renewable, fossil-fuel, and hybrid systems.
ScyFlow: An Environment for the Visual Specification and Execution of Scientific Workflows
NASA Technical Reports Server (NTRS)
McCann, Karen M.; Yarrow, Maurice; DeVivo, Adrian; Mehrotra, Piyush
2004-01-01
With the advent of grid technologies, scientists and engineers are building more and more complex applications to utilize distributed grid resources. The core grid services provide a path for accessing and utilizing these resources in a secure and seamless fashion. However, what scientists need is an environment that will allow them to specify their application runs at a high organizational level, and then support efficient execution across any given set or sets of resources. We have been designing and implementing ScyFlow, a dual-interface architecture (both GUI and API) that addresses this problem. The scientist/user specifies the application tasks along with the necessary control and data flow, and monitors and manages the execution of the resulting workflow across the distributed resources. In this paper, we utilize two scenarios to provide the details of the two modules of the project, the visual editor and the runtime workflow engine.
NASA Astrophysics Data System (ADS)
Baru, C.; Lin, K.
2009-04-01
The Geosciences Network project (www.geongrid.org) has been developing cyberinfrastructure for data sharing in the Earth Science community based on a service-oriented architecture. The project defines a standard "software stack", which includes a standardized set of software modules and corresponding service interfaces. The system employs Grid certificates for distributed user authentication. The GEON Portal provides online access to these services via a set of portlets. This service-oriented approach has enabled the GEON network to easily expand to new sites and deploy the same infrastructure in new projects. To facilitate interoperation with other distributed geoinformatics environments, service standards are being defined and implemented for catalog services and federated search across distributed catalogs. The need arises because there may be multiple metadata catalogs in a distributed system, for example, one for each institution, agency, geographic region, and/or country. Ideally, a geoinformatics user should be able to search across all such catalogs by making a single search request. In this paper, we describe our implementation of such a search capability across federated metadata catalogs in the GEON service-oriented architecture. The GEON catalog can be searched using spatial, temporal, and other metadata-based search criteria. The search can be invoked as a Web service and, thus, can be embedded in any software application. 
The need for federated catalogs in GEON arises because (i) GEON collaborators at the University of Hyderabad, India have deployed their own catalog, as part of the iGEON-India effort, to register information about local resources for broader access across the network; (ii) GEON collaborators in the GEO Grid (Global Earth Observations Grid) project at AIST, Japan have implemented a catalog for their ASTER data products; and (iii) we have recently deployed a search service to access all data products from the EarthScope project in the US (http://es-portal.geongrid.org), which are distributed across data archives at IRIS in Seattle, Washington; UNAVCO in Boulder, Colorado; and the ICDP archives at GFZ, Potsdam, Germany. This service implements a "virtual" catalog--the actual "physical" catalogs and data are stored at each of the remote locations. A federated search across all these catalogs would enable GEON users to discover data across all of these environments with a single search request. Our objective is to implement this search service via the OGC Catalog Services for the Web (CS-W) standard by providing appropriate CS-W "wrappers" for each metadata catalog, as necessary. This paper will discuss technical issues in designing and deploying such a multi-catalog search service in GEON and describe an initial prototype of the federated search capability.
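A federated search of the kind described above amounts to fanning one request out to every catalog (each behind a CS-W-style wrapper) and merging the hits. The catalog contents and field names below are invented for illustration:

```python
# Toy federated search: one set of criteria is sent to every catalog
# and the results are merged, tagged with their catalog of origin.
def federated_search(catalogs, **criteria):
    """Query every catalog with the same criteria; merge the hits."""
    hits = []
    for name, search in catalogs.items():
        for record in search(criteria):
            hits.append(dict(record, catalog=name))
    return hits

def make_catalog(records):
    """Stand-in for a CS-W wrapper around one metadata catalog."""
    def search(criteria):
        return [r for r in records
                if all(r.get(k) == v for k, v in criteria.items())]
    return search

catalogs = {
    'iGEON-India': make_catalog([{'type': 'seismic', 'id': 'IN-1'}]),
    'GEO-Grid':    make_catalog([{'type': 'aster',   'id': 'JP-7'},
                                 {'type': 'seismic', 'id': 'JP-2'}]),
}
print([h['id'] for h in federated_search(catalogs, type='seismic')])
# ['IN-1', 'JP-2']
```

In the real architecture, each `search` callable would be an OGC CS-W request against a remote catalog rather than an in-memory filter.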
Context-aware access control for pervasive access to process-based healthcare systems.
Koufi, Vassiliki; Vassilacopoulos, George
2008-01-01
Healthcare is an increasingly collaborative enterprise involving a broad range of healthcare services provided by many individuals and organizations. Grid technology has been widely recognized as a means for integrating disparate computing resources in the healthcare field. Moreover, Grid portal applications can be developed on a wireless and mobile infrastructure to execute healthcare processes which, in turn, can provide remote access to Grid database services. Such an environment provides ubiquitous and pervasive access to integrated healthcare services at the point of care, thus improving healthcare quality. In such environments, the ability to provide an effective access control mechanism that meets the requirement of the least privilege principle is essential. Adherence to the least privilege principle requires continuous adjustments of user permissions in order to adapt to the current situation. This paper presents a context-aware access control mechanism for HDGPortal, a Grid portal application which provides access to workflow-based healthcare processes using wireless Personal Digital Assistants. The proposed mechanism builds upon and enhances security mechanisms provided by the Grid Security Infrastructure. It provides tight, just-in-time permissions so that authorized users get access to specific objects according to the current context. These permissions are subject to continuous adjustments triggered by the changing context. Thus, the risk of compromising information integrity during task executions is reduced.
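The just-in-time, context-dependent permissions described above can be modeled as a decision function over (role, action, context) that is re-evaluated whenever the context changes. The roles, actions, and context attributes below are invented for illustration, not HDGPortal's actual policy:

```python
# Toy context-aware access control: permissions are derived from the
# current context and revoked automatically when the context changes.
def permitted(role, action, context):
    # Least privilege: a physician may modify a record only for a
    # patient currently under their care, and only while on duty.
    if role == 'physician' and action == 'write':
        return bool(context.get('on_duty') and context.get('assigned_patient'))
    if action == 'read':
        return role in ('physician', 'nurse')
    return False

ctx = {'on_duty': True, 'assigned_patient': True}
print(permitted('physician', 'write', ctx))   # True
ctx['on_duty'] = False                        # context change revokes it
print(permitted('physician', 'write', ctx))   # False
```

Because the decision is recomputed from the live context on every access, no standing permission outlives the situation that justified it.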
The Grid as a healthcare provision tool.
Hernández, V; Blanquer, I
2005-01-01
This paper presents a survey of HealthGrid technologies, describing the current status of Grid and eHealth and analyzing them over the medium-term future. The objective is to analyze the key points, barriers and driving forces for the take-up of HealthGrids. The article considers the procedures from other Grid disciplines, such as high-energy physics or biomolecular engineering, and discusses the differences with respect to healthcare. It analyzes the status of the basic technology, the needs of the eHealth environment and the successes of current projects in health and other relevant disciplines. Information and communication technology (ICT) in healthcare is a promising area for the use of the Grid. There are many driving forces fostering the application of the secure, pervasive, ubiquitous and transparent access to information and computing resources that Grid technologies can provide. However, there are many barriers that must be overcome. Many technical problems that arise in eHealth (standardization of data, federation of databases, content-based knowledge extraction, and management of personal data ...) can be solved with Grid technologies. The article presents the development of successful and demonstrative applications as the key to the take-up of HealthGrids, where short-term future medical applications will surely be biocomputing-oriented, and the future of Grid technologies in medical imaging seems promising. Finally, exploitation of HealthGrids is analyzed considering the curve of adoption of ICT solutions and the definition of business models, which are far more complex than in other e-business technologies such as ASP.
Integration of Grid and Sensor Web for Flood Monitoring and Risk Assessment from Heterogeneous Data
NASA Astrophysics Data System (ADS)
Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii
2013-04-01
Over the last few decades we have witnessed an upward global trend in natural disaster occurrence. Hydrological and meteorological disasters such as floods are the main contributors to this pattern. In recent years flood management has shifted from protection against floods to managing the risks of floods (the European Floods Directive). In order to enable operational flood monitoring and assessment of flood risk, an infrastructure with standardized interfaces and services is required. Grid and Sensor Web technologies can meet these requirements. In this paper we present a general approach to flood monitoring and risk assessment based on heterogeneous geospatial data acquired from multiple sources. To enable operational flood risk assessment, an integration of the Grid and Sensor Web approaches is proposed [1]. A Grid represents a distributed environment that integrates heterogeneous computing and storage resources administered by multiple organizations. The Sensor Web is an emerging paradigm for integrating heterogeneous satellite and in situ sensors and data systems into a common informational infrastructure that produces products on demand. The basic Sensor Web functionality includes sensor discovery, the triggering of events by observed or predicted conditions, remote data access, and processing capabilities to generate and deliver data products. The Sensor Web is governed by a set of standards, called Sensor Web Enablement (SWE), developed by the Open Geospatial Consortium (OGC). Various practical issues regarding the integration of the Sensor Web with Grids are discussed in the study. We show how the Sensor Web can benefit from using Grids and vice versa. For example, Sensor Web services such as SOS, SPS and SAS can benefit from integration with a Grid platform like the Globus Toolkit. 
The proposed approach is implemented within the Sensor Web framework for flood monitoring and risk assessment, and a case study of exploiting this framework, namely the Namibia SensorWeb Pilot Project, is described. The project was created as a testbed for evaluating and prototyping key technologies for rapid acquisition and distribution of data products for decision support systems to monitor floods and enable flood risk assessment. The system provides access to real-time products on rainfall estimates and flood potential forecasts derived from the Tropical Rainfall Measuring Mission (TRMM) with a lag time of 6 h, alerts from the Global Disaster Alert and Coordination System (GDACS) with a lag time of 4 h, and the Coupled Routing and Excess STorage (CREST) model to generate alerts. These alerts are used to trigger satellite observations. With the deployed SPS service for NASA's EO-1 satellite it is possible to automatically task the sensor, with re-imaging in less than 8 h. Therefore, with the computational and storage services provided by Grid and cloud infrastructure it was possible to generate flood maps within 24-48 h after the trigger was raised. To enable interoperability between system components and services, OGC-compliant standards are utilized. [1] Hluchy L., Kussul N., Shelestov A., Skakun S., Kravchenko O., Gripich Y., Kopp P., Lupian E., "The Data Fusion Grid Infrastructure: Project Objectives and Achievements," Computing and Informatics, 2010, vol. 29, no. 2, pp. 319-334.
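The trigger chain described above (condition, alert, sensor tasking, map generation) can be sketched schematically. The thresholds, product names, and callables below are invented; the real pipeline uses TRMM/GDACS/CREST products and an OGC SPS request:

```python
# Schematic alert-triggered tasking pipeline: observed or forecast
# conditions raise alerts, alerts task a sensor, and the acquisition
# feeds flood-map generation on grid/cloud resources.
def check_alerts(rainfall_mm, flood_potential):
    alerts = []
    if rainfall_mm > 100:          # invented threshold
        alerts.append('heavy-rain')
    if flood_potential > 0.8:      # invented threshold
        alerts.append('flood-risk')
    return alerts

def run_pipeline(rainfall_mm, flood_potential, task_sensor, make_map):
    alerts = check_alerts(rainfall_mm, flood_potential)
    if not alerts:
        return None                # nothing to do, no tasking issued
    scene = task_sensor(alerts)    # e.g. an SPS tasking request
    return make_map(scene)         # processed on grid/cloud resources

flood_map = run_pipeline(
    rainfall_mm=140, flood_potential=0.9,
    task_sensor=lambda alerts: 'EO-1 scene for %s' % ','.join(alerts),
    make_map=lambda scene: 'flood map from ' + scene)
print(flood_map)
```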
Synchrophasor-Based Tracking Three-Phase State Estimator and Its Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phadke, A. G.; Thorp, James; Centeno, Virgilio
2013-08-31
Electric power infrastructure is one of the critical resources of the nation. Its reliability in the face of natural or man-made catastrophes is of paramount importance for the economic and public-health wellbeing of a modern society. Maintaining high levels of security for the high-voltage transmission backbone of the electric supply network is a task requiring access to modern monitoring tools. These tools have been made particularly effective with the advent of synchronized phasor measurement units (PMUs), which became available in the late 1990s and have now become indispensable for optimal monitoring, protection and control of the power grid. The present project was launched with the objective of demonstrating the value of a Wide Area Measurement System (WAMS) using PMUs and its applications on the Dominion Virginia Power high-voltage transmission grid. Virginia Tech is the birthplace of PMUs and was chosen to be the Principal Investigator of this project. In addition to Dominion Virginia Power, Quanta Technology of Raleigh, NC was selected as co-Principal Investigator of this project.
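The core measurement behind a PMU can be illustrated with a one-cycle, single-bin DFT phasor estimate. The sampling rate and test signal below are invented for the example; real PMUs add GPS time-tagging, filtering, and standards-compliant reporting:

```python
# Illustrative synchrophasor computation: estimate the phasor
# (RMS magnitude and phase) of one cycle of a sampled waveform
# via a single-bin DFT, cosine reference.
import cmath
import math

def phasor(samples):
    """One-cycle DFT phasor estimate: returns (rms_magnitude, phase_rad)."""
    n = len(samples)
    s = sum(x * cmath.exp(-2j * math.pi * k / n)
            for k, x in enumerate(samples))
    p = s * 2 / n                      # peak phasor, cosine reference
    return abs(p) / math.sqrt(2), cmath.phase(p)

# One cycle of 120*cos(wt + 30 deg), 32 samples per cycle (invented).
samples = [120 * math.cos(2 * math.pi * k / 32 + math.radians(30))
           for k in range(32)]
mag, ph = phasor(samples)
print(round(mag, 2), round(math.degrees(ph), 2))  # 84.85 30.0
```

A tracking state estimator then stacks such phasors from many buses into a measurement vector and solves for the system state at each reporting interval.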
Mongkolwat, Pattanasak; Channin, David S; Kleper, Vladimir; Rubin, Daniel L
2012-01-01
In a routine clinical environment or clinical trial, a case report form or structured reporting template can be used to quickly generate uniform and consistent reports. Annotation and Image Markup (AIM), a project supported by the National Cancer Institute's cancer Biomedical Informatics Grid, can be used to collect information for a case report form or structured reporting template. AIM is designed to store, in a single information source, (a) the description of pixel data with use of markups or graphical drawings placed on the image, (b) calculation results (which may or may not be directly related to the markups), and (c) supplemental information. To facilitate the creation of AIM annotations with data entry templates, an AIM template schema and an open-source template creation application were developed to assist clinicians, image researchers, and designers of clinical trials to quickly create a set of data collection items, thereby ultimately making image information more readily accessible.
Connecting Restricted, High-Availability, or Low-Latency Resources to a Seamless Global Pool for CMS
NASA Astrophysics Data System (ADS)
Balcas, J.; Bockelman, B.; Hufnagel, D.; Hurtado Anampa, K.; Jayatilaka, B.; Khan, F.; Larson, K.; Letts, J.; Mascheroni, M.; Mohapatra, A.; Marra Da Silva, J.; Mason, D.; Perez-Calero Yzquierdo, A.; Piperov, S.; Tiradani, A.; Verguilov, V.; CMS Collaboration
2017-10-01
The connection of diverse and sometimes non-Grid-enabled resource types to the CMS Global Pool, which is based on HTCondor and glideinWMS, has been a major goal of CMS. These resources range in type from a high-availability, low-latency facility at CERN for urgent calibration studies, called the CAF, to a local user facility at the Fermilab LPC, allocation-based computing resources at NERSC and SDSC, opportunistic resources provided through the Open Science Grid, commercial clouds, and others, as well as access to opportunistic cycles on the CMS High Level Trigger farm. In addition, we have provided the capability to give priority to local users on resources beyond those pledged to WLCG at CMS sites. Many of the solutions employed to bring these diverse resource types into the Global Pool have common elements, while some are very specific to a particular project. This paper details some of the strategies and solutions used to access these resources through the Global Pool in a seamless manner.
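The local-user priority mentioned above can be caricatured as a ranking rule applied when matching queued jobs to a site's beyond-pledge slots. The numbers and site names below are invented; the real Global Pool expresses such policy through HTCondor negotiation and glideinWMS configuration:

```python
# Toy priority rule: rank queued jobs for a given site's opportunistic
# slots, boosting jobs owned by that site's local users.
def rank(job, site):
    base = job['priority']
    if job['owner_site'] == site:
        base += 100            # invented local-user boost
    return base

def next_job(queue, site):
    """Pick the highest-ranked job for this site's free slot."""
    return max(queue, key=lambda job: rank(job, site))

queue = [
    {'name': 'remote-analysis', 'priority': 50, 'owner_site': 'T2_US_Other'},
    {'name': 'local-calib',     'priority': 10, 'owner_site': 'T2_US_LPC'},
]
print(next_job(queue, 'T2_US_LPC')['name'])  # local-calib
```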
Advances in Grid Computing for the FabrIc for Frontier Experiments Project at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herner, K.; Alba Hernandex, A. F.; Bhat, S.
The FabrIc for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back-end resources, including public clouds and high-performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using tools such as ElasticSearch and Grafana, to help experiments manage their large-scale production workflows. These production groups in turn require a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and to support troubleshooting and triage in case of problems. 
Recently a new certificate management infrastructure called Distributed Computing Access with Federated Identities (DCAFI) has been put in place that has eliminated our dependence on a Fermilab-specific third-party Certificate Authority service and better accommodates FIFE collaborators without a Fermilab Kerberos account. DCAFI integrates the existing InCommon federated identity infrastructure, the CILogon Basic CA, and a MyProxy service using a new general-purpose open source tool. We will discuss the general FIFE onboarding strategy, progress in expanding the FIFE experiments' presence on the Open Science Grid, new tools for job monitoring, the POMS service, and the DCAFI project.
Advances in Grid Computing for the Fabric for Frontier Experiments Project at Fermilab
NASA Astrophysics Data System (ADS)
Herner, K.; Alba Hernandez, A. F.; Bhat, S.; Box, D.; Boyd, J.; Di Benedetto, V.; Ding, P.; Dykstra, D.; Fattoruso, M.; Garzoglio, G.; Kirby, M.; Kreymer, A.; Levshina, T.; Mazzacane, A.; Mengel, M.; Mhashilkar, P.; Podstavkov, V.; Retzke, K.; Sharma, N.; Teheran, J.
2017-10-01
The Fabric for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back-end resources, including public clouds and high-performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using tools such as ElasticSearch and Grafana, to help experiments manage their large-scale production workflows. These production groups in turn require a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and to support troubleshooting and triage in case of problems. 
Recently a new certificate management infrastructure called Distributed Computing Access with Federated Identities (DCAFI) has been put in place that has eliminated our dependence on a Fermilab-specific third-party Certificate Authority service and better accommodates FIFE collaborators without a Fermilab Kerberos account. DCAFI integrates the existing InCommon federated identity infrastructure, the CILogon Basic CA, and a MyProxy service using a new general-purpose open source tool. We will discuss the general FIFE onboarding strategy, progress in expanding the FIFE experiments' presence on the Open Science Grid, new tools for job monitoring, the POMS service, and the DCAFI project.
The Climate-G testbed: towards a large scale data sharing environment for climate change
NASA Astrophysics Data System (ADS)
Aloisio, G.; Fiore, S.; Denvil, S.; Petitdidier, M.; Fox, P.; Schwichtenberg, H.; Blower, J.; Barbera, R.
2009-04-01
The Climate-G testbed provides an experimental large-scale data environment for climate change addressing challenging data and metadata management issues. The main scope of Climate-G is to allow scientists to carry out geographical and cross-institutional climate data discovery, access, visualization and sharing. Climate-G is a multidisciplinary collaboration involving both climate and computer scientists, and it currently involves several partners: Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), Institut Pierre-Simon Laplace (IPSL), Fraunhofer Institut für Algorithmen und Wissenschaftliches Rechnen (SCAI), National Center for Atmospheric Research (NCAR), University of Reading, University of Catania and University of Salento. To perform distributed metadata search and discovery, we adopted a CMCC metadata solution (which provides a high level of scalability, transparency, fault tolerance and autonomy) leveraging both P2P and grid technologies (the GRelC Data Access and Integration Service). Moreover, data are available through OPeNDAP/THREDDS services, the Live Access Server, and the OGC-compliant Web Map Service, and they can be downloaded, visualized, and accessed within the proposed environment through the Climate-G Data Distribution Centre (DDC), the web gateway to the Climate-G digital library. The DDC is a data-grid portal allowing users to easily, securely and transparently perform search/discovery, metadata management, data access, data visualization, etc. Godiva2 (integrated into the DDC) displays 2D maps (and animations) and also exports maps for display on the Google Earth virtual globe. Presently, Climate-G publishes (through the DDC) about 2 TB of data related to the ENSEMBLES project (also including distributed replicas of data) as well as to the IPCC AR4. 
The main results of the proposed work are: a wide data access/sharing environment for climate change; a P2P/grid metadata approach; a production-level Climate-G DDC; high-quality tools for data visualization; metadata search/discovery across several countries and institutions; and an open environment for climate change data sharing.
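Since the testbed exposes data through OPeNDAP/THREDDS services, a client typically subsets a variable by index ranges appended to the request URL as a constraint expression. A hypothetical Python sketch (the host path, variable name, and index ranges are invented; real constraints may also include a stride):

```python
# Build a DODS-style subsetting URL: base.dods?var[t0:t1][y0:y1][x0:x1]
def opendap_subset_url(base, variable, t, lat, lon):
    """Compose an OPeNDAP constraint expression from index ranges."""
    constraint = "".join(f"[{lo}:{hi}]" for lo, hi in (t, lat, lon))
    return f"{base}.dods?{variable}{constraint}"

url = opendap_subset_url(
    "http://example.org/thredds/dodsC/ensembles/tas", "tas",
    t=(0, 11), lat=(40, 80), lon=(100, 140))
print(url)
```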
FermiGrid—experience and future plans
NASA Astrophysics Data System (ADS)
Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Sharma, N.; Timm, S.; Yocum, D. R.
2008-07-01
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and supports movement of work (jobs, data) between the Campus Grid and National Grids such as the Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab-resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site-wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.
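One of the core services mentioned, grid user mapping, boils down to translating an authenticated certificate subject plus VO into a local account. A toy Python sketch of that lookup (DNs, VO names, and accounts are all invented; real sites use a dedicated mapping service rather than a static table):

```python
# Hypothetical grid-mapfile: (certificate subject DN, VO) -> local account.
GRID_MAPFILE = {
    ("/DC=org/DC=cilogon/CN=Alice Adams", "cms"): "cmsuser01",
    ("/DC=org/DC=cilogon/CN=Bob Brown", "dzero"): "d0user07",
}

def map_user(subject_dn: str, vo: str) -> str:
    """Return the local account for an authenticated subject, or refuse."""
    try:
        return GRID_MAPFILE[(subject_dn, vo)]
    except KeyError:
        raise PermissionError(f"no mapping for {subject_dn} in VO {vo}")

print(map_user("/DC=org/DC=cilogon/CN=Alice Adams", "cms"))  # cmsuser01
```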
Action research to improve methods of delivery and feedback in an Access Grid Room environment
NASA Astrophysics Data System (ADS)
McArthur, Lynne C.; Klass, Lara; Eberhard, Andrew; Stacey, Andrew
2011-12-01
This article describes a qualitative study which was undertaken to improve the delivery methods and feedback opportunities in honours mathematics lectures which are delivered through Access Grid Rooms. Access Grid Rooms are facilities that provide two-way video and audio interactivity across multiple sites, with the inclusion of smart boards. The principal aim was to improve the student learning experience, given the new environment. The specific aspects of the course delivery that the study focused on included presentation of materials and provision of opportunities for interaction between the students and between students and lecturers. The practical considerations in the delivery of distance learning are well documented in the literature, and similar problems arise in the Access Grid Room environment; in particular, those of limited access to face-to-face interaction and the reduction in peer support. The nature of the Access Grid Room classes implies that students studying the same course can be physically situated in different cities, and possibly in different countries. When studying, it is important that students have the opportunity to discuss new concepts with others, particularly their peers and their lecturer. The Access Grid Room environment also presents new challenges for the lecturer, who must learn new skills in the delivery of materials. Access Grid Room technology was developed in response to a need to interact with complex data, other students and the instructor in real time, at a distance and from multiple sites, and its unique nature offers unprecedented opportunity for effective course delivery and positive outcomes for students. This is a relatively new technology, and as yet there have been few or no studies specifically addressing the use and misuse of the technology. The study found that the correct placement of cameras and the use of printed material and smart boards were all crucial to the student experience. 
In addition, the inclusion of special tutorial-type sessions was necessary to provide students with opportunities for one-on-one discussion with both the lecturer and other students. This study contributes to the broader understanding of distance education in general and future Access Grid Room course delivery in particular.
Sharing Data and Analytical Resources Securely in a Biomedical Research Grid Environment
Langella, Stephen; Hastings, Shannon; Oster, Scott; Pan, Tony; Sharma, Ashish; Permar, Justin; Ervin, David; Cambazoglu, B. Barla; Kurc, Tahsin; Saltz, Joel
2008-01-01
Objectives To develop a security infrastructure to support controlled and secure access to data and analytical resources in a biomedical research Grid environment, while facilitating resource sharing among collaborators. Design A Grid security infrastructure, called Grid Authentication and Authorization with Reliably Distributed Services (GAARDS), is developed as a key architecture component of the NCI-funded cancer Biomedical Informatics Grid (caBIG™). GAARDS is designed to support, in a distributed environment: 1) efficient provisioning and federation of user identities and credentials; 2) group-based access control, with which resource providers can enforce policies based on community-accepted groups and local groups; and 3) management of a trust fabric so that policies can be enforced based on required levels of assurance. Measurements GAARDS is implemented as a suite of Grid services and administrative tools. It provides three core services: Dorian for management and federation of user identities, the Grid Trust Service for maintaining and provisioning a federated trust fabric within the Grid environment, and Grid Grouper for enforcing authorization policies based on both local and Grid-level groups. Results The GAARDS infrastructure is available as a stand-alone system and as a component of the caGrid infrastructure. More information about GAARDS can be accessed at http://www.cagrid.org. Conclusions GAARDS provides a comprehensive system to address the security challenges associated with environments in which resources may be located at different sites, requests to access the resources may cross institutional boundaries, and user credentials are created, managed, and revoked dynamically in a decentralized manner. PMID:18308979
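The Grid Grouper idea of enforcing policy over both Grid-level and local groups reduces to a membership check across two group registries. A hypothetical Python sketch (group names, members, and the all-groups-required policy are invented for illustration, not taken from GAARDS):

```python
# Toy group-based authorization over Grid-level and locally defined groups.
GRID_GROUPS = {"caBIG:researchers": {"alice", "carol"}}   # Grid-level registry
LOCAL_GROUPS = {"osu:imaging": {"carol", "dave"}}         # site-local registry

def in_group(user: str, group: str) -> bool:
    """Membership in either registry counts."""
    return user in GRID_GROUPS.get(group, set()) | LOCAL_GROUPS.get(group, set())

def authorize(user: str, required_groups) -> bool:
    """Grant access only if the user belongs to every required group."""
    return all(in_group(user, g) for g in required_groups)

print(authorize("carol", ["caBIG:researchers", "osu:imaging"]))  # True
print(authorize("alice", ["caBIG:researchers", "osu:imaging"]))  # False
```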
Evaluation of grid generation technologies from an applied perspective
NASA Technical Reports Server (NTRS)
Hufford, Gary S.; Harrand, Vincent J.; Patel, Bhavin C.; Mitchell, Curtis R.
1995-01-01
An analysis of the grid generation process from the point of view of an applied CFD engineer is given. Issues addressed include geometric modeling, structured grid generation, unstructured grid generation, hybrid grid generation, and the use of virtual parts libraries in large parametric analysis projects. The analysis is geared towards comparing the effective turnaround time for specific grid generation and CFD projects. The conclusion is that a single grid generation methodology is not universally suited to all CFD applications, due to limitations in both grid generation and flow solver technology. A new geometric modeling and grid generation tool, CFD-GEOM, is introduced to effectively integrate the geometric modeling process with the various grid generation methodologies, including structured, unstructured, and hybrid procedures. The full integration of geometric modeling and grid generation allows the implementation of extremely efficient updating procedures, a necessary requirement for large parametric analysis projects. The concept of using virtual parts libraries in conjunction with hybrid grids for large parametric analysis projects is also introduced to improve the efficiency of the applied CFD engineer.
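As a concrete instance of the structured grid generation discussed above, the simplest algebraic method maps the unit square onto a quadrilateral with bilinear (degenerate transfinite) interpolation. A minimal Python sketch, with corner layout and resolution chosen arbitrarily for illustration:

```python
# Algebraic structured grid: bilinear map of the unit square onto a quad.
def structured_grid(corners, ni, nj):
    """corners = (p00, p10, p01, p11) as (x, y); returns nj rows of ni nodes."""
    (x00, y00), (x10, y10), (x01, y01), (x11, y11) = corners
    grid = []
    for j in range(nj):
        v = j / (nj - 1)                       # parametric coordinate in [0, 1]
        row = []
        for i in range(ni):
            u = i / (ni - 1)
            # Bilinear blend of the four corner points.
            x = (1-u)*(1-v)*x00 + u*(1-v)*x10 + (1-u)*v*x01 + u*v*x11
            y = (1-u)*(1-v)*y00 + u*(1-v)*y10 + (1-u)*v*y01 + u*v*y11
            row.append((x, y))
        grid.append(row)
    return grid

g = structured_grid(((0, 0), (2, 0), (0, 1), (2, 1)), ni=5, nj=3)
print(g[0][0], g[2][4])  # (0.0, 0.0) (2.0, 1.0)
```

Curved boundaries require full transfinite interpolation, which blends edge curves rather than only corners; the bilinear case above is its simplest special case.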
Sustainable Data Evolution Technology for Power Grid Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
The SDET Tool is used to create open-access power grid data sets and facilitate updates of these data sets by the community. Pacific Northwest National Laboratory (PNNL) and its power industry and software vendor partners are developing an innovative sustainable data evolution technology (SDET) to create open-access power grid datasets and facilitate updates to these datasets by the power grid community. The objective is to make this a sustained effort within and beyond the ARPA-E GRID DATA program so that the datasets can evolve over time and meet current and future needs for power grid optimization, and potentially other applications in power grid operation and planning.
Final Report for DOE Project: Portal Web Services: Support of DOE SciDAC Collaboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mary Thomas, PI; Geoffrey Fox, Co-PI; Gannon, D
2007-10-01
Grid portals provide the scientific community with familiar and simplified interfaces to the Grid and Grid services, and it is important to deploy grid portals onto the SciDAC grids and collaboratories. The goal of this project is the research, development and deployment of interoperable portal and web services that can be used on SciDAC National Collaboratory grids. This project has four primary task areas: development of portal systems; management of data collections; DOE science application integration; and development of web and grid services in support of the above activities.
NASA Astrophysics Data System (ADS)
Duffy, D.; Maxwell, T. P.; Doutriaux, C.; Williams, D. N.; Chaudhary, A.; Ames, S.
2015-12-01
As the size of remote sensing observations and model output data grows, the volume of the data has become overwhelming, even to many scientific experts. As societies are forced to better understand, mitigate, and adapt to climate changes, the combination of Earth observation data and global climate model projections is crucial not only to scientists but also to policy makers, downstream applications, and even the public. Scientific progress on understanding climate is critically dependent on the availability of a reliable infrastructure that promotes data access, management, and provenance. The Earth System Grid Federation (ESGF) has created such an environment for the Intergovernmental Panel on Climate Change (IPCC). ESGF provides a federated global cyber infrastructure for data access and management of model outputs generated for the IPCC Assessment Reports (AR). The current generation of the ESGF federated grid allows consumers of the data to find and download data with limited capabilities for server-side processing. Since the amount of data for future AR is expected to grow dramatically, ESGF is working on integrating server-side analytics throughout the federation. The ESGF Compute Working Team (CWT) has created a Web Processing Service (WPS) Application Programming Interface (API) to enable access to scalable computational resources. The API is the exposure point to high performance computing resources across the federation. Specifically, the API allows users to execute simple operations, such as maximum, minimum, average, and anomalies, on ESGF data without having to download the data. These operations are executed at the ESGF data node site with access to large amounts of parallel computing capabilities. This presentation will highlight the WPS API, its capabilities, provide implementation details, and discuss future developments.
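The server-side operations named in the abstract (maximum, minimum, average, anomalies) are simple reductions over gridded data. A toy Python sketch of the dispatch idea (operation names follow the abstract, but the dispatch shape and data are invented; the real CWT API operates on multidimensional ESGF datasets at the data node, not on flat lists at the client):

```python
# Toy server-side compute dispatch: name an operation, get the reduction.
def execute(operation: str, series):
    """Apply a named reduction to a 1-D series without the client downloading it."""
    ops = {
        "max": max,
        "min": min,
        "average": lambda s: sum(s) / len(s),
    }
    if operation == "anomalies":
        # Anomalies: deviation of each value from the series mean.
        mean = sum(series) / len(series)
        return [x - mean for x in series]
    return ops[operation](series)

temps = [14.0, 15.0, 16.0, 15.0]
print(execute("average", temps))    # 15.0
print(execute("anomalies", temps))  # [-1.0, 0.0, 1.0, 0.0]
```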
Power Grid Construction Project Portfolio Optimization Based on Bi-level programming model
NASA Astrophysics Data System (ADS)
Zhao, Erdong; Li, Shangqi
2017-08-01
As the main body of power grid operation, county-level power supply enterprises undertake the important mission of guaranteeing the security of power grid operation and safeguarding the social order of power use. The optimization of grid construction projects is a key determinant of the power supply capacity and service level of grid enterprises. According to the actual situation of power grid construction project optimization at county-level power enterprises, and on the basis of qualitative analysis of the projects, this paper builds a bi-level programming model based on quantitative analysis. The upper level of the model is the target restriction of the optimal portfolio; the lower level is the enterprise's financial restriction on the size of the project portfolio. Finally, a real example illustrates the operation and optimization results of the model. Through qualitative and quantitative analysis, the bi-level programming model improves the accuracy and standardization of power grid enterprises' project portfolios.
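The structure of the model, an upper-level portfolio objective constrained by a lower-level financial feasibility check, can be illustrated by brute-force enumeration on a toy instance. A hypothetical Python sketch (project names, costs, benefit scores, and the budget are invented; the paper's actual model and solution method are not reproduced here):

```python
# Toy bi-level portfolio selection: lower level enforces the budget,
# upper level maximizes total benefit over feasible portfolios.
from itertools import combinations

projects = {  # name: (cost, benefit score) -- all numbers invented
    "substation": (40, 9), "feeder": (25, 6), "metering": (20, 4), "scada": (15, 5),
}

def feasible(portfolio, budget=60):
    """Lower level: financial restriction on portfolio size."""
    return sum(projects[p][0] for p in portfolio) <= budget

def best_portfolio(budget=60):
    """Upper level: pick the feasible portfolio with maximum benefit."""
    candidates = (c for r in range(1, len(projects) + 1)
                  for c in combinations(sorted(projects), r))
    feasible_sets = [c for c in candidates if feasible(c, budget)]
    return max(feasible_sets, key=lambda c: sum(projects[p][1] for p in c))

print(best_portfolio())  # ('feeder', 'metering', 'scada')
```

Real instances are solved with dedicated bi-level or integer-programming techniques rather than enumeration; the sketch only shows the two nested decision levels.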
From Pixels to Population Stress: Global Multispectral Remote Sensing for Vulnerable Communities
NASA Astrophysics Data System (ADS)
Prashad, L.; Kaplan, E.; Letouze, E.; Kirkpatrick, R.; Luengo-Oroz, M.; Christensen, P. R.
2011-12-01
The Arizona State University (ASU) School of Earth and Space Exploration's Mars Space Flight Facility (MSFF) and 100 Cities Project, in collaboration with the United Nations Global Pulse initiative, are utilizing NASA multispectral satellite data to visualize and analyze socioeconomic characteristics and human activity in Uganda. The Global Pulse initiative is exploring how new kinds of real-time data and innovative technologies can be leveraged to detect early social impacts of slow-onset crises and global shocks. Global Pulse is developing a framework for real-time monitoring, assembling an open-source toolkit for analyzing new kinds of data and establishing a global network of country-level "Pulse Labs" where governments, UN agencies, academia and the private sector learn together how to harness the new world of "big data" to protect the vulnerable with targeted and agile policy responses. The ASU MSFF and 100 Cities Project are coordinating with the Global Pulse team to utilize NASA remote sensing data in this effort. Human behavior and socioeconomic parameters have been successfully studied via proxy through remote sensing of the physical environment by measuring the growth of city boundaries and transportation networks, crop health, soil moisture, and slum development from visible and infrared imagery. The NASA/NOAA image of Earth's "Lights at Night" is routinely used to estimate economic development and population density. There are many examples of the conventional uses of remote sensing in humanitarian-related projects, including the Famine Early Warning System Network (FEWS NET) and the UN's operational satellite applications programme (UNOSAT), which provides remote sensing for humanitarian and disaster relief. 
Since the Global Pulse project is focusing on new, innovative uses of technology for early crisis detection, we are focusing on three non-conventional uses of satellite remote sensing to understand what role NASA multispectral satellites can play in monitoring underlying socioeconomic and human parameters. These are: 1) measuring and visualizing changes in agriculture and fertilizer use in Ugandan villages in order to assist policymakers in designing land use policies and evaluating the impact of fertilizer use on smallholder farmers in developing countries; 2) monitoring the size and composition of large scale rubbish dumps to determine correlation with changes in policy and economic growth; 3) measuring the size and shape of open air markets, or proxies related to the markets, to determine if changes can be detected that correspond to fluctuations in economic activity. The ASU MSFF open source geographical information systems (GIS) platform, J-Earth, will be used to provide easy access to and analytical tools for the data and imagery resulting from this project. J-Earth is a part of the Java Mission-planning and Analysis for Remote Sensing (JMARS) suite of software first developed for targeting NASA instruments on planetary missions.
Earth Science Data Grid System
NASA Astrophysics Data System (ADS)
Chi, Y.; Yang, R.; Kafatos, M.
2004-05-01
The Earth Science Data Grid System (ESDGS) is a software system supporting earth science data storage and access. It is built upon Storage Resource Broker (SRB) data grid technology. We have developed a complete data grid system consisting of an SRB server, which provides users uniform access to diverse storage resources in a heterogeneous computing environment, and a metadata catalog server (MCAT), which manages the metadata associated with data sets, users, and resources. We have also developed earth science application metadata; geospatial, temporal, and content-based indexing; and other tools. In this paper, we describe the software architecture and components of the data grid system, and use a practical example, the storage and access of rainfall data from the Tropical Rainfall Measuring Mission (TRMM), to illustrate its functionality and features.
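The MCAT server's role, answering discovery queries over dataset metadata, can be sketched with a toy catalogue combining temporal and geospatial filters. A hypothetical Python example (dataset names, bounding latitudes, and years are invented, not actual TRMM holdings):

```python
# Toy metadata catalogue: each record carries geospatial and temporal metadata.
CATALOG = [
    {"name": "trmm_3b42_1998", "bbox": (-50, 50), "year": 1998},
    {"name": "trmm_3b42_2002", "bbox": (-50, 50), "year": 2002},
    {"name": "gpcp_v2_2002",   "bbox": (-90, 90), "year": 2002},
]

def discover(year=None, lat=None):
    """Filter the catalogue on temporal and latitude-coverage criteria."""
    hits = CATALOG
    if year is not None:
        hits = [d for d in hits if d["year"] == year]
    if lat is not None:
        hits = [d for d in hits if d["bbox"][0] <= lat <= d["bbox"][1]]
    return [d["name"] for d in hits]

print(discover(year=2002, lat=60.0))  # ['gpcp_v2_2002']
```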
ProAtlantic - The Atlantic Checkpoint - Data Availability and Adequacy in the Atlantic Basin
NASA Astrophysics Data System (ADS)
McGrath, F.
2017-12-01
DG MARE's Atlantic Checkpoint is a basin-scale monitoring-system assessment activity based upon targeted end-user applications. It is designed to be a benchmark for the assessment of hydrographic, geological, habitat, climate and fisheries data existence and availability in the Atlantic basin. DG MARE's Atlantic Checkpoint service will be delivered by the ProAtlantic project. The objective of this project is to investigate, through appropriate methodologies in the framework of 11 key marine challenges, how current international and national data providers - e.g. EMODnet, Copernicus - meet the requirements of stakeholders and deliver fit-for-purpose data. By so doing, the main thematic and geographic gaps will be readily identified in the Atlantic basin for future consideration by DG MARE. For each challenge, specific web products in the form of maps, metadata, spreadsheets and reports will be delivered. These products are not an end in themselves but rather a means of showing whether data were available, let alone accessible. For example, the Fisheries Impact Challenge outputs include data grids (VMS/seabed) and data adequacy reports. Production of gridded data layers to show the extent of fisheries impact on the seafloor involved the identification, acquisition and collation of data sources for the required data types (VMS, seabed and habitats data) in the Atlantic basin. The resulting spatial coverage of these grids indicates the relatively low level of data availability and adequacy across the Atlantic basin. Aside from the data delivered by programmes such as EMODnet and Copernicus, there are many initiatives by regional bodies such as OSPAR and ICES that assemble and disseminate data to address specific issues. 
Several international projects have delivered research, data collection, and networking around several of the Atlantic Checkpoint challenge topics, namely MPAs, renewable energy assessment, seabed mapping, oil spill mitigation or climate monitoring, leading to comprehensive data sets (e.g. the INTERREG MAIA MPA network). The results of the ProAtlantic project indicate that there is a requirement internationally for improving access to, and visibility of these data sets. This is a continuous process and will impact current and future initiatives in the Atlantic basin.
Grid enablement of OpenGeospatial Web Services: the G-OWS Working Group
NASA Astrophysics Data System (ADS)
Mazzetti, Paolo
2010-05-01
In recent decades two main paradigms for resource sharing emerged and reached maturity: the Web and the Grid. Both have proved suitable for building Distributed Computing Infrastructures (DCIs) supporting the coordinated sharing of resources (i.e. data, information, services, etc.) on the Internet. Grid and Web DCIs have much in common as a result of their underlying Internet technology (protocols, models and specifications). However, being based on different requirements and architectural approaches, they show some differences as well. The Web's "major goal was to be a shared information space through which people and machines could communicate" [Berners-Lee 1996]. The success of the Web, and its consequent pervasiveness, made it appealing for building specialized systems such as Spatial Data Infrastructures (SDIs). In these systems the introduction of Web-based geo-information technologies enables specialized services for geospatial data sharing and processing. The Grid was born to achieve "flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources" [Foster 2001]. It specifically focuses on large-scale resource sharing, innovative applications, and, in some cases, high-performance orientation. In the Earth and Space Sciences (ESS) most of the information handled is geo-referenced (geo-information), since spatial and temporal meta-information is of primary importance in many application domains: Earth Sciences, Disasters Management, Environmental Sciences, etc. On the other hand, several application areas need to run complex models which require the large processing and storage capabilities that Grids are able to provide. Therefore the integration of geo-information and Grid technologies might be a valuable approach to enable advanced ESS applications. 
Currently both geo-information and Grid technologies have reached a high level of maturity, allowing such an integration to be built on existing solutions. More specifically, the Open Geospatial Consortium (OGC) Web Services (OWS) specifications play a fundamental role in geospatial information sharing (e.g. in the INSPIRE Implementing Rules, the GEOSS architecture, GMES Services, etc.). On the Grid side, the gLite middleware, developed in the European EGEE (Enabling Grids for E-sciencE) projects, is widely deployed in Europe and beyond, has proved highly scalable, and is one of the middleware stacks chosen for the future European Grid Infrastructure (EGI) initiative. Therefore convergence between OWS and gLite technologies would be desirable for seamless access to Grid capabilities through OWS-compliant systems. However, there are obstacles to overcome to achieve this harmonization. Firstly, a semantic mismatch must be addressed: gLite handles low-level (i.e. close to the machine) concepts like "file", "data", "instruments", "job", etc., while geo-information services handle higher-level (closer to the human) concepts like "coverage", "observation", "measurement", "model", etc. Secondly, an architectural mismatch must be addressed: OWS implements a Web Service-Oriented Architecture which is stateless, synchronous and without embedded security (which is delegated to other specifications), while gLite implements the Grid paradigm in an architecture which is stateful, asynchronous (though not fully event-based) and with strong embedded security (based on the VO paradigm). In recent years many initiatives and projects have worked out possible approaches for implementing Grid-enabled OWSs. 
To mention some: (i) in 2007 the OGC signed a Memorandum of Understanding with the Open Grid Forum, "a community of users, developers, and vendors leading the global standardization effort for grid computing"; (ii) the OGC identified "WPS Profiles - Conflation; and Grid processing" as one of the tasks in the Geo Processing Workflow theme of OWS Phase 6 (OWS-6); (iii) several national, European and international projects investigated different aspects of this integration, developing demonstrators and proofs-of-concept. In this context, "gLite enablement of OpenGeospatial Web Services" (G-OWS) is an initiative started in 2008 by the European CYCLOPS, GENESI-DR, and DORII project consortia in order to collect and coordinate experiences on the enablement of OWS on top of the gLite middleware [GOWS]. Currently G-OWS counts ten member organizations from Europe and beyond, with four European projects involved. It has broadened its scope to the development of Spatial Data and Information Infrastructures (SDI and SII) based on Grid/Cloud capacity in order to enable Earth Science applications and tools. Its operational objectives are the following: i) to contribute to the OGC-OGF initiative; ii) to release a reference implementation as standard gLite APIs (under the gLite software license); iii) to release a reference model (including procedures and guidelines) for OWS Grid-ification, as far as gLite is concerned; iv) to foster and promote the formation of consortia for participation in projects and initiatives aimed at building Grid-enabled SDIs. To achieve these objectives G-OWS bases its activities on two main guiding principles: a) the adoption of a service-oriented architecture based on the information modelling approach, and b) standardization as a means of achieving interoperability (i.e. adoption of standards from ISO TC211, OGC OWS, OGF). 
In its first year of activity G-OWS designed a general architectural framework stemming from the FP6 CYCLOPS studies and enriched by the outcomes of the other projects and initiatives involved (e.g. FP7 GENESI-DR, FP7 DORII, AIST GeoGrid, etc.). Several proofs of concept have been developed to demonstrate the flexibility and scalability of this architectural framework. The G-OWS WG developed implementations of a gLite-enabled Web Coverage Service (WCS) and Web Processing Service (WPS), and an implementation of Shibboleth authentication for gLite-enabled OWS in order to evaluate the possible integration of Web and Grid security models. The presentation will communicate the G-OWS organization, activities, future plans and means of involving the ESSI community. References: [Berners-Lee 1996] T. Berners-Lee, "WWW: Past, present, and future", IEEE Computer, 29(10), Oct. 1996, pp. 69-77. [Foster 2001] I. Foster, C. Kesselman and S. Tuecke, "The Anatomy of the Grid", The International Journal of High Performance Computing Applications, 15(3):200-222, Fall 2001. [GOWS] G-OWS WG, https://www.g-ows.org/, accessed: 15 January 2010.
National Fusion Collaboratory: Grid Computing for Simulations and Experiments
NASA Astrophysics Data System (ADS)
Greenwald, Martin
2004-05-01
The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory, and modeling, and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network-available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure for secure authentication and resource authorization, which allows stakeholders to control their own resources such as computers, data, and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from the implementation details so that transparency and ease-of-use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring, and management have been developed and shared among a wide user base. This approach saves user sites from the laborious effort of maintaining such a large and complex code while at the same time reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid (a highly capable multi-point remote conferencing tool) and capabilities for sharing displays and analysis tools over local and wide-area networks.
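The resource-authorization idea above — stakeholders retaining control over their own computers, data, and codes — can be sketched as a simple grant check. The data model and function names below are illustrative only and do not reflect FusionGrid's actual security infrastructure.

```python
# Hedged sketch: each resource owner declares who may use their
# resource, and every service request is checked against those grants.
# Resource and user names are invented for illustration.

acl = {
    "transp":        {"alice", "bob"},   # code owner controls its use
    "experiment_db": {"alice"},          # data owner controls access
}

def authorize(user, resource):
    """Return True only if the resource's stakeholder granted access."""
    return user in acl.get(resource, set())

print(authorize("alice", "transp"))          # True
print(authorize("carol", "experiment_db"))   # False
```

In a real deployment this decision would sit behind authenticated identities (e.g. certificates) rather than bare usernames.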
Deployed Communications in an Austere Environment: A Delphi Study
2013-12-01
gateways to access the Global Information Grid (GIG) will escalate dramatically. The ability simply to "deploy" a unit similar to the RF-SATCOM network...experts had divergent views on how deployed communications systems would link back to the GIG. The scenario uses both projected technologies. First...the self-configuring RF-SATCOM network link acts as a gateway to the GIG, providing wireless RF connectivity to authorized devices within the area
The Fabric for Frontier Experiments Project at Fermilab
NASA Astrophysics Data System (ADS)
Kirby, Michael
2014-06-01
The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy to use job submission services for processing physics tasks on the Open Science Grid and elsewhere; 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data; 3) custom and generic database applications for calibrations, beam information, and other purposes; 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.
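The data-management functions listed above (cataloging, querying, and tracking the use of data) can be sketched with a tiny in-memory catalog. The class and method names below are hypothetical illustrations, not the actual FIFE interfaces.

```python
# Hedged sketch of a dataset catalog: declare files with metadata,
# query by metadata, and log every access for usage tracking.
# All names and the data model are invented for illustration.

class Catalog:
    def __init__(self):
        self.files = {}          # file name -> metadata dict
        self.access_log = []     # record of every fetch, for tracking

    def declare(self, name, **meta):
        self.files[name] = meta

    def query(self, **criteria):
        return [n for n, m in self.files.items()
                if all(m.get(k) == v for k, v in criteria.items())]

    def fetch(self, name):
        self.access_log.append(name)   # track use of the data
        return self.files[name]

cat = Catalog()
cat.declare("run001.root", run=1, tier="raw")
cat.declare("run002.root", run=2, tier="raw")
print(cat.query(tier="raw"))   # ['run001.root', 'run002.root']
```

A production system would of course back this with a database and manage local and remote caches, as the abstract describes.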
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudgins, Andrew P.; Sparn, Bethany F.; Jin, Xin
This document is the final report of a two-year development, test, and demonstration project entitled 'Cohesive Application of Standards-Based Connected Devices to Enable Clean Energy Technologies.' The project was part of the National Renewable Energy Laboratory's (NREL) Integrated Network Test-bed for Energy Grid Research and Technology (INTEGRATE) initiative. The Electric Power Research Institute (EPRI) and a team of partners were selected by NREL to carry out a project to develop and test how smart, connected consumer devices can act to enable the use of more clean energy technologies on the electric power grid. The project team includes a set of leading companies that produce key products in relation to achieving this vision: thermostats, water heaters, pool pumps, solar inverters, electric vehicle supply equipment, and battery storage systems. A key requirement of the project was open access at the device level - a feature seen as foundational to achieving a future of widespread distributed generation and storage. The internal intelligence, standard functionality and communication interfaces utilized in this project result in the ability to integrate devices at any level, to work collectively at the level of the home/business, microgrid, community, distribution circuit or other. Collectively, the set of products serve as a platform on which a wide range of control strategies may be developed and deployed.
GLIDE: a grid-based light-weight infrastructure for data-intensive environments
NASA Technical Reports Server (NTRS)
Mattmann, Chris A.; Malek, Sam; Beckman, Nels; Mikic-Rakic, Marija; Medvidovic, Nenad; Chrichton, Daniel J.
2005-01-01
The promise of the grid is that it will enable public access and sharing of immense amounts of computational and data resources among dynamic coalitions of individuals and institutions. However, the current grid solutions make several limiting assumptions that curtail their widespread adoption. To address these limitations, we present GLIDE, a prototype light-weight, data-intensive middleware infrastructure that enables access to the robust data and computational power of the grid on DREAM platforms.
15 MW Hardware-in-the-Loop Grid Simulation Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rigas, Nikolaos; Fox, John Curtiss; Collins, Randy
2014-10-31
The 15 MW Hardware-in-the-loop (HIL) Grid Simulator project was to (1) design, (2) construct and (3) commission a state-of-the-art grid integration testing facility for testing of multi-megawatt devices through a 'shared facility' model open to all innovators to promote the rapid introduction of new technology in the energy market to lower the cost of energy delivered. The 15 MW HIL Grid Simulator project now serves as the cornerstone of the Duke Energy Electric Grid Research, Innovation and Development (eGRID) Center. This project leveraged the 24 kV utility interconnection and electrical infrastructure of the US DOE EERE funded WTDTF project at the Clemson University Restoration Institute in North Charleston, SC. Additionally, the project has spurred interest from other technology sectors, including large PV inverter and energy storage testing and several leading edge research proposals dealing with smart grid technologies, grid modernization and grid cyber security. The key components of the project are the power amplifier units capable of providing up to 20 MW of defined power to the research grid. The project has also developed a one-of-a-kind solution to performing fault ride-through testing by combining a reactive divider network and a large power converter into a hybrid method. This unique hybrid method of performing fault ride-through analysis will allow the research team at the eGRID Center to investigate the complex differences between the alternative methods of performing fault ride-through evaluations and will ultimately further the science behind this testing. With the final goal of being able to perform HIL experiments and demonstration projects, the eGRID team undertook a significant challenge with respect to developing a control system that is capable of communicating with several different pieces of equipment with different communication protocols in real-time. 
The eGRID team developed a custom fiber optical network that is based upon FPGA hardware that allows for communication between the key real-time interfaces and reduces the latency between these interfaces to acceptable levels for HIL experiments.
Benefits Analysis of Smart Grid Projects. White paper, 2014-2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marnay, Chris; Liu, Liping; Yu, JianCheng
Smart grids are rolling out internationally, with the United States (U.S.) nearing completion of a significant USD 4-plus-billion federal program funded under the American Recovery and Reinvestment Act (ARRA-2009). The emergence of smart grids is widespread across developed countries. Multiple approaches to analyzing the benefits of smart grids have emerged. The goals of this white paper are to review these approaches and analyze examples of each to highlight their differences, advantages, and disadvantages. This work was conducted under the auspices of a joint U.S.-China research effort, the Climate Change Working Group (CCWG) Implementation Plan, Smart Grid. We present comparative benefits assessments (BAs) of smart grid demonstrations in the U.S. and China along with a BA of a pilot project in Europe. In the U.S., we assess projects at two sites: (1) the University of California, Irvine campus (UCI), which consists of two distinct demonstrations: Southern California Edison’s (SCE) Irvine Smart Grid Demonstration Project (ISGD) and the UCI campus itself; and (2) the Navy Yard (TNY) area in Philadelphia, which has been repurposed as a mixed commercial-industrial, and possibly residential, development. In China, we cover several smart-grid aspects of the Sino-Singapore Tianjin Eco-city (TEC) and the Shenzhen Bay Technology and Ecology City (B-TEC). In Europe, we look at a BA of a pilot smart grid project in the Malagrotta area west of Rome, Italy, contributed by the Joint Research Centre (JRC) of the European Commission. The Irvine sub-project BAs use the U.S. Department of Energy (U.S. DOE) Smart Grid Computational Tool (SGCT), which is built on methods developed by the Electric Power Research Institute (EPRI). The TEC sub-project BAs apply Smart Grid Multi-Criteria Analysis (SG-MCA) developed by the State Grid Corporation of China (SGCC) based on the analytic hierarchy process (AHP) with fuzzy logic. 
The B-TEC and TNY sub-project BAs are evaluated using new approaches developed by those project teams. JRC has adopted an approach similar to EPRI’s but tailored to the Malagrotta distribution grid.
Distinction of Concept and Discussion on Construction Idea of Smart Water Grid Project
NASA Astrophysics Data System (ADS)
Ye, Y.; Yizi, S., Sr.; Lili, L., Sr.; Sang, X.; Zhai, J.
2016-12-01
The smart water grid project comprises the construction of a physical water grid consisting of various flow-regulating infrastructures, the construction of a water information grid in line with the trend toward intelligent technology, and the construction of a water management grid characterized by institutional development and systematized regulation decision-making. It is the integrated platform and comprehensive carrier for water conservancy practices. Although construction ideas for smart water grid engineering are still disputed, the smart water grid represents the future development trend of water management and is receiving increasing emphasis. Based on a distinction between the concepts of the water grid and water grid engineering, the paper explains the concept of water grid intelligentization, actively probes into construction ideas for the smart water grid project in our country, and presents the scientific problems to be solved as well as the core technologies to be mastered for smart water grid construction.
AstroGrid-D: Grid technology for astronomical science
NASA Astrophysics Data System (ADS)
Enke, Harry; Steinmetz, Matthias; Adorf, Hans-Martin; Beck-Ratzka, Alexander; Breitling, Frank; Brüsemeister, Thomas; Carlson, Arthur; Ensslin, Torsten; Högqvist, Mikael; Nickelt, Iliya; Radke, Thomas; Reinefeld, Alexander; Reiser, Angelika; Scholl, Tobias; Spurzem, Rainer; Steinacker, Jürgen; Voges, Wolfgang; Wambsganß, Joachim; White, Steve
2011-02-01
We present the status and results of AstroGrid-D, a joint effort of astrophysicists and computer scientists to employ grid technology for scientific applications. AstroGrid-D provides access to a network of distributed machines through a set of commands as well as software interfaces. It allows simple use of compute and storage facilities, and the scheduling and monitoring of compute tasks and data management. It is based on the Globus Toolkit middleware (GT4). Chapter 1 describes the context which led to the demand for advanced software solutions in astrophysics, and states the goals of the project. We then present characteristic astrophysical applications that have been implemented on AstroGrid-D in Chapter 2. We describe simulations of different complexity, compute-intensive calculations running on multiple sites (Section 2.1), and advanced applications for specific scientific purposes (Section 2.2), such as a connection to robotic telescopes (Section 2.2.3). These examples show how grid execution improves, for example, the scientific workflow. Chapter 3 explains the software tools and services that we adapted or newly developed. Section 3.1 focuses on the administrative aspects of the infrastructure, managing users and monitoring activity. Section 3.2 characterises the central components of our architecture: the AstroGrid-D information service, which collects and stores metadata; a file management system; the data management system; and a job manager for automatic submission of compute tasks. We summarise the successfully established infrastructure in Chapter 4, concluding with our future plans to establish AstroGrid-D as a platform of modern e-Astronomy.
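The job-manager role described above — automatic submission of compute tasks, then scheduling and monitoring — can be sketched as a small state machine. The states and method names are illustrative only, not the actual AstroGrid-D or GT4 interfaces.

```python
# Hedged sketch of a grid job manager: submit a task, advance it
# through scheduler states, and monitor its progress. The state
# model and all names are invented for illustration.

class JobManager:
    STATES = ["PENDING", "ACTIVE", "DONE"]

    def __init__(self):
        self.jobs = {}

    def submit(self, job_id, command):
        self.jobs[job_id] = {"command": command, "state": "PENDING"}

    def advance(self, job_id):
        # Stand-in for the real scheduler moving the job along.
        job = self.jobs[job_id]
        i = self.STATES.index(job["state"])
        job["state"] = self.STATES[min(i + 1, len(self.STATES) - 1)]

    def monitor(self, job_id):
        return self.jobs[job_id]["state"]

jm = JobManager()
jm.submit("nbody-1", "run_simulation --steps 1000")
jm.advance("nbody-1")
jm.advance("nbody-1")
print(jm.monitor("nbody-1"))   # DONE
```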
Accessing Wind Tunnels From NASA's Information Power Grid
NASA Technical Reports Server (NTRS)
Becker, Jeff; Biegel, Bryan (Technical Monitor)
2002-01-01
The NASA Ames wind tunnel customers are one of the first users of the Information Power Grid (IPG) storage system at the NASA Advanced Supercomputing Division. We wanted to be able to store their data on the IPG so that it could be accessed remotely in a secure but timely fashion. In addition, incorporation into the IPG allows future use of grid computational resources, e.g., for post-processing of data, or to do side-by-side CFD validation. In this paper, we describe the integration of grid data access mechanisms with the existing DARWIN web-based system that is used to access wind tunnel test data. We also show that the combined system has reasonable performance: wind tunnel data may be retrieved at 50 Mbit/s over a 100Base-T network connected to the IPG storage server.
Information Power Grid Posters
NASA Technical Reports Server (NTRS)
Vaziri, Arsi
2003-01-01
This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provide seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of large-scale computing nodes into a distributed environment, remote access to high data rate instruments, and an exploratory grid environment.
Climate simulations and services on HPC, Cloud and Grid infrastructures
NASA Astrophysics Data System (ADS)
Cofino, Antonio S.; Blanco, Carlos; Minondo Tshuma, Antonio
2017-04-01
Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially the climate community. These paradigms are modifying the way climate applications are executed. By using these technologies the number, variety and complexity of experiments and resources are increasing substantially. But although computational capacity is increasing, the traditional applications and tools used by the community are not sufficient to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of running climate simulations and services on Grid, Cloud and HPC infrastructures and how to tackle them. The Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. To address these challenges, solutions using the DRM4G framework will be shown. DRM4G provides a good framework to manage a large volume and variety of computing resources for climate experiments. This work has been supported by the Spanish National R&D Plan under projects WRF4G (CGL2011-28864), INSIGNIA (CGL2016-79210-R) and MULTI-SDM (CGL2015-66583-R); the IS-ENES2 project from the 7FP of the European Commission (grant agreement no. 312979); the European Regional Development Fund (ERDF); and the Programa de Personal Investigador en Formación Predoctoral from the Universidad de Cantabria and the Government of Cantabria.
International Charter `Space and Major Disasters' Collaborations
NASA Astrophysics Data System (ADS)
Jones, B. K.
2017-12-01
The International Charter aims to provide a unified system of space data acquisition and delivery to national disaster authorities of countries affected by natural or man-made disasters. Each of the sixteen Member Agencies has committed resources to support the objectives of the Charter, thus helping to mitigate the effects of disasters on human life and property by getting critical information into the hands of disaster responders so that they can make informed decisions in the wake of a disaster. The Charter Member Agencies work together to provide remotely sensed imagery to any requesting country that is experiencing a natural or man-made disaster. The Space Agencies contribute priority satellite taskings, archive retrievals, and map production, as well as imagery of the affected areas. The imagery is provided at no cost to the affected country and is made available for the immediate response phase of the disaster. The Charter also has agreements with Sentinel Asia to submit activation requests on behalf of its 30+ member countries, and with the United Nations Office for Outer Space Affairs (UN OOSA) and the United Nations Institute for Training and Research (UNITAR)/United Nations Operational Satellite Applications Programme (UNOSAT) to submit activations on behalf of United Nations relief agencies such as UNICEF and UNOCHA. To further expand accessibility to the Charter Member Agency resources, the Charter has implemented the Universal Access initiative, which allows any country's disaster management authority to submit an application, attend a brief training session, and, after successful completion, become an Authorized User able to submit activation requests without assistance from Member Agencies. The data provided by the Charter is used for many purposes including damage assessments, reference maps, evacuation route planning, search and rescue operations, decision maker briefings, scientific evaluations, and other response activities.
Grid infrastructure for automatic processing of SAR data for flood applications
NASA Astrophysics Data System (ADS)
Kussul, Natalia; Skakun, Serhiy; Shelestov, Andrii
2010-05-01
More and more geosciences applications are being put onto Grids. Geosciences applications are complex: they involve elaborate workflows, computationally intensive environmental models, and the management and integration of heterogeneous data sets; Grids offer solutions to these problems. Many geosciences applications, especially those related to disaster management and mitigation, require geospatial services to be delivered in a timely manner. For example, information on flooded areas should be provided to the relevant organizations (local authorities, civil protection agencies, UN agencies, etc.) within 24 h so that the resources required to mitigate the disaster can be allocated effectively. Therefore, providing an infrastructure and services that enable the automatic generation of products based on the integration of heterogeneous data is a task of great importance. In this paper we present a Grid infrastructure for the automatic processing of synthetic-aperture radar (SAR) satellite images to derive flood products. In particular, we use SAR data acquired by ESA's ENVISAT satellite, and neural networks to derive flood extent. The data are provided in operational mode from the ESA rolling archive (within an ESA Category-1 grant). We developed a portal that is based on the OpenLayers framework and provides an access point to the developed services. Through the portal the user can define a geographical region and search for the required data. Upon selection of data sets, a workflow is automatically generated and executed on the resources of the Grid infrastructure. For workflow execution and management we use the Karajan language. The workflow of SAR data processing consists of the following steps: image calibration, image orthorectification, image processing with neural networks, topographic effects removal, geocoding and transformation to lat/long projection, and visualisation. 
These steps are performed by different software packages and can be executed on different resources of the Grid system. The resulting geospatial services are available through various OGC standards such as KML and WMS. Currently, the Grid infrastructure integrates the resources of several geographically distributed organizations, in particular: the Space Research Institute NASU-NSAU (Ukraine), with deployed computational and storage nodes based on Globus Toolkit 4 (http://www.globus.org) and gLite 3 (http://glite.web.cern.ch) middleware, access to geospatial data, and a Grid portal; the Institute of Cybernetics of NASU (Ukraine), with deployed computational and storage nodes (SCIT-1/2/3 clusters) based on Globus Toolkit 4 middleware and access to computational resources (approximately 500 processors); and the Center for Earth Observation and Digital Earth, Chinese Academy of Sciences (CEODE-CAS, China), with deployed computational nodes based on Globus Toolkit 4 middleware and access to geospatial data (approximately 16 processors). We are currently adding new geospatial services based on optical satellite data, namely MODIS. This work is carried out jointly with CEODE-CAS. Using the workflow patterns developed for SAR data processing, we are building new workflows for optical data processing.
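The workflow pattern described above — an ordered chain of processing steps, each of which may run on a different Grid node — can be sketched as function composition over a product. The step functions below are trivial placeholders for the real calibration, orthorectification, and neural-network codes.

```python
# Hedged sketch of a SAR processing workflow: each step consumes the
# previous step's product. Step bodies are placeholders; only the
# chaining pattern reflects the abstract's description.

def calibrate(product):       return product + ["calibrated"]
def orthorectify(product):    return product + ["orthorectified"]
def classify_flood(product):  return product + ["flood-mask"]
def geocode(product):         return product + ["latlong"]

workflow = [calibrate, orthorectify, classify_flood, geocode]

def run_workflow(product, steps):
    for step in steps:   # each step could be dispatched to a different node
        product = step(product)
    return product

result = run_workflow(["ENVISAT-scene"], workflow)
print(result[-1])   # latlong
```

In the real system this chaining is expressed in the Karajan workflow language rather than in-process Python calls.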
A Global Repository for Planet-Sized Experiments and Observations
NASA Technical Reports Server (NTRS)
Williams, Dean; Balaji, V.; Cinquini, Luca; Denvil, Sebastien; Duffy, Daniel; Evans, Ben; Ferraro, Robert D.; Hansen, Rose; Lautenschlager, Michael; Trenham, Claire
2016-01-01
Working across U.S. federal agencies, international agencies, and multiple worldwide data centers, and spanning seven international network organizations, the Earth System Grid Federation (ESGF) allows users to access, analyze, and visualize data using a globally federated collection of networks, computers, and software. Its architecture employs a system of geographically distributed peer nodes that are independently administered yet united by common federation protocols and application programming interfaces (APIs). The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP) output used by the Intergovernmental Panel on Climate Change assessment reports. Data served by ESGF not only include model output (i.e., CMIP simulation runs) but also include observational data from satellites and instruments, reanalyses, and generated images. Metadata summarize basic information about the data for fast and easy data discovery.
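The federation architecture above — independently administered peer nodes united by common APIs — can be sketched as a federated search that merges answers from nodes speaking the same protocol. The node contents and the API shape are invented for illustration and are not the actual ESGF search API.

```python
# Hedged sketch of federated discovery: every peer node answers the
# same query interface; a federated search merges their results.
# Node names and dataset records are invented for illustration.

nodes = {
    "node-a": [{"project": "CMIP", "variable": "tas"}],
    "node-b": [{"project": "CMIP", "variable": "pr"},
               {"project": "obs",  "variable": "tas"}],
}

def federated_search(**criteria):
    hits = []
    for node, datasets in sorted(nodes.items()):   # same protocol everywhere
        hits += [{**d, "node": node} for d in datasets
                 if all(d.get(k) == v for k, v in criteria.items())]
    return hits

print(len(federated_search(project="CMIP")))   # 2
```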
Pohjonen, Hanna; Ross, Peeter; Blickman, Johan G; Kamman, Richard
2007-01-01
Emerging technologies are transforming the workflows in healthcare enterprises. Computing grids and handheld mobile/wireless devices are providing clinicians with enterprise-wide access to all patient data and analysis tools on a pervasive basis. In this paper, emerging technologies are presented that provide computing grids and streaming-based access to image and data management functions, and system architectures that enable pervasive computing on a cost-effective basis. Finally, the implications of such technologies are investigated regarding the positive impacts on clinical workflows.
Global Multi-Resolution Topography (GMRT) Synthesis - Recent Updates and Developments
NASA Astrophysics Data System (ADS)
Ferrini, V. L.; Morton, J. J.; Celnick, M.; McLain, K.; Nitsche, F. O.; Carbotte, S. M.; O'hara, S. H.
2017-12-01
The Global Multi-Resolution Topography (GMRT, http://gmrt.marine-geo.org) synthesis is a multi-resolution compilation of elevation data that is maintained in Mercator, South Polar, and North Polar projections. GMRT consists of four independently curated elevation components: (1) quality-controlled multibeam data (~100 m res.), (2) contributed high-resolution gridded bathymetric data (0.5-200 m res.), (3) ocean basemap data (~500 m res.), and (4) variable resolution land elevation data (to 10-30 m res. in places). Each component is managed and updated as new content becomes available, with two scheduled releases each year. The ocean basemap content for GMRT includes the International Bathymetric Chart of the Arctic Ocean (IBCAO), the International Bathymetric Chart of the Southern Ocean (IBCSO), and GEBCO 2014. Most curatorial effort for GMRT is focused on the swath bathymetry component, with an emphasis on data from the US Academic Research Fleet. As of July 2017, GMRT includes data processed and curated by the GMRT Team from 974 research cruises, covering over 29 million square kilometers (~8%) of the seafloor at 100 m resolution. The curated swath bathymetry data from GMRT is routinely contributed to international data synthesis efforts including GEBCO and IBCSO. Additional curatorial effort is associated with gridded data contributions from the international community and ensures that these data are well blended in the synthesis. Significant new additions to the gridded data component this year include the recently released data from the search for MH370 (Geoscience Australia) as well as a large high-resolution grid from the Gulf of Mexico derived from 3D seismic data (US Bureau of Ocean Energy Management). Recent developments in functionality include the deployment of a new Polar GMRT MapTool which enables users to export custom grids and map images in polar projection for their selected area of interest at the resolution of their choosing. 
Available for both the south and north polar regions, grids can be exported from GMRT in a variety of formats including ASCII, GeoTIFF and NetCDF to support use in common mapping software applications such as ArcGIS, GMT, Matlab, and Python. New web services have also been developed to enable programmatic access to grids and images in north and south polar projections.
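The polar export capability above rests on projecting geographic coordinates into a polar plane. As a minimal sketch, the textbook spherical south-polar stereographic formula is shown below; GMRT's actual projection code is not public in this abstract and very likely uses an ellipsoidal datum, so this is an assumption-laden illustration only.

```python
# Hedged sketch: spherical south-polar stereographic projection.
# R is the mean Earth radius under a spherical assumption; real
# polar products would use an ellipsoid (e.g. WGS84).

import math

R = 6371000.0  # metres, spherical Earth assumption

def south_polar_stereographic(lat_deg, lon_deg):
    # Distance from the pole grows with tan of half the co-latitude.
    rho = 2.0 * R * math.tan(math.radians(90.0 + lat_deg) / 2.0)
    lam = math.radians(lon_deg)
    return rho * math.sin(lam), rho * math.cos(lam)

x, y = south_polar_stereographic(-90.0, 0.0)
print(x, y)   # the South Pole maps to the origin: 0.0 0.0
```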
The GeoDataPortal: A Standards-based Environmental Modeling Data Access and Manipulation Toolkit
NASA Astrophysics Data System (ADS)
Blodgett, D. L.; Kunicki, T.; Booth, N.; Suftin, I.; Zoerb, R.; Walker, J.
2010-12-01
Environmental modelers from fields of study such as climatology, hydrology, geology, and ecology rely on many data sources and processing methods that are common across these disciplines. Interest in inter-disciplinary, loosely coupled modeling and data sharing is increasing among scientists from the USGS, other agencies, and academia. For example, hydrologic modelers need downscaled climate change scenarios and land cover data summarized for the watersheds they are modeling. Subsequently, ecological modelers are interested in soil moisture information for a particular habitat type as predicted by the hydrologic modeler. The USGS Center for Integrated Data Analytics Geo Data Portal (GDP) project seeks to facilitate this loosely coupled modeling and data sharing through broadly applicable open-source web processing services. These services simplify and streamline the time-consuming and resource-intensive tasks that are barriers to inter-disciplinary collaboration. The GDP framework includes a catalog describing projects, models, data, processes, and how they relate. Using newly introduced data, or sources already known to the catalog, the GDP facilitates access to subsets and common derivatives of data in numerous formats on disparate web servers. The GDP performs many of the critical functions needed to summarize data sources into modeling units regardless of scale or volume. A user can specify their analysis zones or modeling units as an Open Geospatial Consortium (OGC) standard Web Feature Service (WFS). Utilities to cache Shapefiles and other common GIS input formats have been developed to aid in making the geometry available for processing via WFS. Dataset access in the GDP relies primarily on the Unidata NetCDF-Java library’s common data model. Data transfer relies on methods provided by Unidata’s Thematic Real-time Environmental Data Distribution System Data Server (TDS). 
TDS services of interest include the Open-source Project for a Network Data Access Protocol (OPeNDAP) standard for gridded time series, the OGC’s Web Coverage Service for high-density static gridded data, and Unidata’s CDM-remote for point time series. OGC WFS and Sensor Observation Service (SOS) are being explored as mechanisms to serve and access static or time series data attributed to vector geometry. A set of standardized XML-based output formats allows easy transformation into a wide variety of “model-ready” formats. Interested users will have the option of submitting custom transformations to the GDP or transforming the XML output as a post-process. The GDP project aims to support simple, rapid development of thin user interfaces (like web portals) to commonly needed environmental modeling-related data access and manipulation tools. Standalone, service-oriented components of the GDP framework provide the metadata cataloging, data subset access, and spatial-statistics calculations needed to support interdisciplinary environmental modeling.
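The core operation the GDP performs — summarizing a gridded data source over a user-supplied analysis zone — can be sketched as an area-weighted zonal mean. Here the "zone" is reduced to a per-cell weight mask; the real service additionally handles geometry, projections, and remote data access, none of which is shown.

```python
# Hedged sketch of zonal statistics: average grid values weighted by
# the fraction of each cell falling inside the analysis zone.
# The grid and weights are toy data invented for illustration.

grid = [[1.0, 2.0],
        [3.0, 4.0]]
zone_weight = [[1.0, 0.5],      # fraction of each cell inside the zone
               [0.0, 0.0]]

def zonal_mean(values, weights):
    num = sum(v * w for row_v, row_w in zip(values, weights)
              for v, w in zip(row_v, row_w))
    den = sum(w for row in weights for w in row)
    return num / den

print(zonal_mean(grid, zone_weight))   # (1.0*1 + 2.0*0.5) / 1.5 ≈ 1.333
```

For real watersheds the weights would come from intersecting WFS polygon geometry with the grid's cell boundaries.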
NASA Astrophysics Data System (ADS)
Buendía, M.; Salvador, R.; Cibrián, R.; Laguia, M.; Sotoca, J. M.
1999-01-01
The projection of structured light is a technique frequently used to determine the surface shape of an object. In this paper, a new procedure is described that efficiently resolves the correspondence between the knots of the projected grid and those obtained on the object when the projection is made. The method is based on the use of three images of the projected grid. In two of them the grid is projected over a flat surface placed, respectively, before and behind the object; both images are used for calibration. In the third image the grid is projected over the object. The method does not rely on accurate determination of the positions of the camera and projector relative to the grid and object. Once the method is calibrated, we can obtain the surface function by just analysing the projected grid on the object. The procedure is especially suitable for the study of objects without discontinuities or large depth gradients. It can be employed for determining, in a non-invasive way, the patient's back surface function. Symmetry differences permit a quantitative diagnosis of spinal deformities such as scoliosis.
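The plane-to-plane calibration idea can be sketched as follows: if a knot's image displacement varies linearly with depth between the two calibration planes, its depth follows by interpolation. This is a simplified, one-dimensional stand-in for the paper's procedure; the function name and the linearity assumption are ours.

```python
def knot_depth(x_obs, x_front, x_back, z_front, z_back):
    """Depth at one grid knot, interpolated between the two flat
    calibration planes (one in front of, one behind the object).

    x_obs   - knot's image coordinate when projected on the object
    x_front - same knot's image coordinate on the front plane
    x_back  - same knot's image coordinate on the back plane
    z_front, z_back - the known depths of the two planes

    Assumes the knot's image displacement varies linearly with depth.
    """
    t = (x_obs - x_front) / (x_back - x_front)
    return z_front + t * (z_back - z_front)

# knot observed halfway between its two calibration positions
print(knot_depth(5.0, 0.0, 10.0, 0.0, 20.0))  # 10.0
```

Repeating this for every knot yields the sampled surface function, without ever needing the camera-projector geometry explicitly.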
NASA Astrophysics Data System (ADS)
Heilmann, B. Z.; Vallenilla Ferrara, A. M.
2009-04-01
The constant growth of contaminated sites, the unsustainable use of natural resources, and, last but not least, the hydrological risk related to extreme meteorological events and increased climate variability are major environmental issues of today. Finding solutions for these complex problems requires an integrated cross-disciplinary approach, providing a unified basis for environmental science and engineering. In computer science, grid computing is emerging worldwide as a formidable tool allowing distributed computation and data management with administratively-distant resources. Utilizing these modern High Performance Computing (HPC) technologies, the GRIDA3 project bundles several applications from different fields of geoscience aiming to support decision making for reasonable and responsible land use and resource management. In this abstract we present a geophysical application called EIAGRID that uses grid computing facilities to perform real-time subsurface imaging by on-the-fly processing of seismic field data and fast optimization of the processing workflow. Even though seismic reflection profiling has a broad application range, spanning from shallow targets at a few meters depth to targets at depths of several kilometers, it is primarily used by the hydrocarbon industry and rarely for environmental purposes. The complexity of data acquisition and processing poses severe problems for environmental and geotechnical engineering: Professional seismic processing software is expensive to buy and demands large experience from the user. In-field processing equipment needed for real-time data Quality Control (QC) and immediate optimization of the acquisition parameters is often not available for this kind of study. As a result, the data quality will be suboptimal. In the worst case, a crucial parameter such as receiver spacing, maximum offset, or recording time turns out later to be inappropriate and the complete acquisition campaign has to be repeated.
The EIAGRID portal provides an innovative solution to this problem, combining state-of-the-art data processing methods and modern remote grid computing technology. In-field processing equipment is replaced by remote access to high performance grid computing facilities. The latter can be ubiquitously controlled by a user-friendly web-browser interface accessed from the field by any mobile computer using wireless data transmission technology such as UMTS (Universal Mobile Telecommunications System) or HSUPA/HSDPA (High-Speed Uplink/Downlink Packet Access). The complexity of data manipulation and processing, and thus also the time-demanding user interaction, is minimized by a data-driven, highly automated velocity analysis and imaging approach based on the Common-Reflection-Surface (CRS) stack. Furthermore, the huge computing power provided by the grid deployment allows parallel testing of alternative processing sequences and parameter settings, a feature which considerably reduces the turn-around times. A shared data storage using georeferencing tools and data grid technology is under current development. It will allow already accomplished projects to be published, making results, processing workflows and parameter settings available in a transparent and reproducible way. Creating a unified database shared by all users will facilitate complex studies and enable the use of data-crossing techniques to incorporate results of other environmental applications hosted on the GRIDA3 portal.
The Earth System Grid Federation: An Open Infrastructure for Access to Distributed Geospatial Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ananthakrishnan, Rachana; Bell, Gavin; Cinquini, Luca
2013-01-01
The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).
The Earth System Grid Federation: An Open Infrastructure for Access to Distributed Geo-Spatial Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cinquini, Luca; Crichton, Daniel; Miller, Neill
2012-01-01
The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).
Interoperating Cloud-based Virtual Farms
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Colamaria, F.; Colella, D.; Casula, E.; Elia, D.; Franco, A.; Lusso, S.; Luparello, G.; Masera, M.; Miniello, G.; Mura, D.; Piano, S.; Vallero, S.; Venaruzzo, M.; Vino, G.
2015-12-01
The present work aims at optimizing the use of computing resources available at the grid Italian Tier-2 sites of the ALICE experiment at CERN LHC by making them accessible to interactive distributed analysis, thanks to modern solutions based on cloud computing. The scalability and elasticity of the computing resources via dynamic (“on-demand”) provisioning is essentially limited by the size of the computing site, reaching the theoretical optimum only in the asymptotic case of infinite resources. The main challenge of the project is to overcome this limitation by federating different sites through a distributed cloud facility. Storage capacities of the participating sites are seen as a single federated storage area, obviating the need to mirror data across them: high data access efficiency is guaranteed by location-aware analysis software and storage interfaces, in a transparent way from an end-user perspective. Moreover, the interactive analysis on the federated cloud reduces the execution time with respect to grid batch jobs. The tests of the investigated solutions for both cloud computing and distributed storage on a wide area network will be presented.
The Earth System Grid Federation : an Open Infrastructure for Access to Distributed Geospatial Data
NASA Technical Reports Server (NTRS)
Cinquini, Luca; Crichton, Daniel; Mattmann, Chris; Harney, John; Shipman, Galen; Wang, Feiyi; Ananthakrishnan, Rachana; Miller, Neill; Denvil, Sebastian; Morgan, Mark;
2012-01-01
The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-12
... DEPARTMENT OF ENERGY Notice of Availability of Report on Data Access and Privacy Issues Related to... report entitled, ``Data Access and Privacy Issues Related to Smart Grid Technologies.'' In this report... meeting conducted during the preparation of the report. This report responds to recommendations for DOE...
Nbody Simulations and Weak Gravitational Lensing using new HPC-Grid resources: the PI2S2 project
NASA Astrophysics Data System (ADS)
Becciani, U.; Antonuccio-Delogu, V.; Costa, A.; Comparato, M.
2008-08-01
We present the main project of the new grid infrastructure and the research activities that have already started in Sicily and will be completed by next year. The PI2S2 project of the COMETA consortium is funded by the Italian Ministry of University and Research and will be completed in 2009. Funds are from the European Union Structural Funds for Objective 1 regions. The project, together with a similar project called Trinacria GRID Virtual Laboratory (Trigrid VL), aims to create in Sicily a computational grid for e-science and e-commerce applications with the main goal of increasing the technological innovation of local enterprises and their competition on the global market. The PI2S2 project aims to build and develop an e-Infrastructure in Sicily, based on the grid paradigm, mainly for research activity using the grid environment and High Performance Computer systems. As an example, we present the first results of a new grid version of FLY, a tree N-body code developed by the INAF Astrophysical Observatory of Catania and already published in the CPC Program Library, which will be used in the weak gravitational lensing field.
Information Power Grid (IPG) Tutorial 2003
NASA Technical Reports Server (NTRS)
Meyers, George
2003-01-01
For NASA and the general community today, Grid middleware: a) provides tools to access and use data sources (databases, instruments, ...); b) provides tools to access computing resources (both unique and generic); c) is an enabler of large-scale collaboration. Dynamically responding to needs is a key selling point of a grid: independent resources can be joined as appropriate to solve a problem. The middleware provides tools that enable the building of frameworks for applications, and provides value-added services to the NASA user base for utilizing resources on the grid in new and more efficient ways.
Leveraging Open Standards and Technologies to Enhance Community Access to Earth Science Lidar Data
NASA Astrophysics Data System (ADS)
Crosby, C. J.; Nandigam, V.; Krishnan, S.; Cowart, C.; Baru, C.; Arrowsmith, R.
2011-12-01
Lidar (Light Detection and Ranging) data, collected from space, airborne and terrestrial platforms, have emerged as an invaluable tool for a variety of Earth science applications ranging from ice sheet monitoring to modeling of earth surface processes. However, lidar data present a unique suite of challenges from the perspective of building cyberinfrastructure systems that enable the scientific community to access these valuable research datasets. Lidar data are typically characterized by millions to billions of individual measurements of x,y,z position plus attributes; these "raw" data are also often accompanied by derived raster products and are frequently terabytes in size. As a relatively new and rapidly evolving data collection technology, relevant open data standards and software projects are immature compared to those for other remote sensing platforms. The NSF-funded OpenTopography Facility project has developed an online lidar data access and processing system that co-locates data with on-demand processing tools to enable users to access both raw point cloud data as well as custom derived products and visualizations. OpenTopography is built on a Service Oriented Architecture (SOA) in which applications and data resources are deployed as standards-compliant (XML and SOAP) Web services with the open source Opal Toolkit. To develop the underlying applications for data access, filtering and conversion, and various processing tasks, OpenTopography has heavily leveraged existing open source software efforts for both lidar and raster data. Operating on the de facto LAS binary point cloud format (maintained by ASPRS), the open source libLAS and LASlib libraries provide OpenTopography data ingestion, query and translation capabilities. Similarly, raster data manipulation is performed through a suite of services built on the Geospatial Data Abstraction Library (GDAL).
OpenTopography has also developed its own algorithm for high-performance gridding of lidar point cloud data, Points2Grid, and has released the code as an open-source project. An emerging conversation that the lidar community and OpenTopography are actively engaged in is the need for open, community-supported standards and metadata for both full waveform and terrestrial (waveform and discrete return) lidar data. Further, given the immature nature of many lidar data archives and limited online access to public domain data, there is an opportunity to develop interoperable data catalogs based on an open standard such as the OGC CSW specification to facilitate discovery and access to Earth science oriented lidar data.
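The core idea behind point-to-grid tools like Points2Grid can be sketched as binning returns into cells and reducing each cell to a statistic. The sketch below is a naive illustration, not the project's actual implementation; the function name and the mean-based cell statistic are our assumptions.

```python
import numpy as np

def grid_points(x, y, z, cell, nx, ny):
    """Naive point-to-grid binning: average the elevation (z) of all
    lidar returns falling in each cell of an nx-by-ny grid with the
    given cell size (grid origin at (0, 0)).  Empty cells are NaN."""
    sums = np.zeros((ny, nx))
    counts = np.zeros((ny, nx))
    cols = (np.asarray(x) / cell).astype(int)
    rows = (np.asarray(y) / cell).astype(int)
    for r, c, v in zip(rows, cols, z):
        if 0 <= r < ny and 0 <= c < nx:  # ignore out-of-bounds returns
            sums[r, c] += v
            counts[r, c] += 1
    # max(counts, 1) only guards the division; empty cells become NaN
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

# three returns, two falling in the first cell of a 2x1 grid
dem = grid_points([0.5, 0.6, 1.5], [0.5, 0.5, 0.5],
                  [10.0, 12.0, 20.0], 1.0, 2, 1)
print(dem)  # [[11. 20.]]
```

Production gridders add alternative cell statistics (min, max, inverse-distance weighting) and out-of-core processing so that billion-point clouds need not fit in memory.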
Biography of a technology: North America's power grid through the twentieth century
NASA Astrophysics Data System (ADS)
Cohn, Julie A.
North Americans are among the world's most intense consumers of electricity. The vast majority in the United States and Canada access power from a network of transmission lines that stretch from the East Coast to the West Coast and from Canada to the Mexican Baja. This network, known as the largest interconnected machine in the world, evolved during the first two thirds of the twentieth century. With the very first link-ups occurring at the end of the 1890s, a wide variety of public and private utilities extended power lines to reach markets, access and manage energy resources, balance loads, realize economies of scale, provide backup power, and achieve economic stability. In 1967, utility managers and the Bureau of Reclamation connected the expansive eastern and western power pools to create the North American grid. Unlike other power grids around the world, built by single, centrally controlled entities, this large technological system emerged as the result of multiple decisions across eighty-five years of development, and negotiations for control at the economic, political, and technological levels. This dissertation describes the process of building the North American grid and the paradoxes the resulting system represents. While the grid functions as a single machine moving electricity across the continent, it is owned by many independent entities. Smooth operations suggest that the grid is a unified system; however, it operates under shared management and divided authority. In addition, although a single power network seems the logical outcome of electrification, in fact it was assembled through aggregation, not planning. Interconnections intentionally increase the robustness of individual sub-networks, yet the system itself is fragile, as demonstrated by major cascading power outages. 
Finally, the transmission network facilitates increased use of energy resources and consumption of power, but at certain points in the past, it also served as a technology of conservation. While this project explores the history of how and why North America has a huge interconnected power system, it also offers insights into the challenges the grid poses for our energy future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abe Lederman
This report contains the comprehensive summary of the work performed on the SBIR Phase II project (“Distributed Relevance Ranking in Heterogeneous Document Collections”) at Deep Web Technologies (http://www.deepwebtech.com). We have successfully completed all of the tasks defined in our SBIR Proposal work plan (See Table 1 - Phase II Tasks Status). The project was completed on schedule and we have successfully deployed an initial production release of the software architecture at DOE-OSTI for the Science.gov Alliance's search portal (http://www.science.gov). We have implemented a set of grid services that supports the extraction, filtering, aggregation, and presentation of search results from numerous heterogeneous document collections. Illustration 3 depicts the services required to perform QuickRank™ filtering of content as defined in our architecture documentation. Functionality that has been implemented is indicated by the services highlighted in green. We have successfully tested our implementation in a multi-node grid deployment both within the Deep Web Technologies offices, and in a heterogeneous geographically distributed grid environment. We have performed a series of load tests in which we successfully simulated 100 concurrent users submitting search requests to the system. This testing was performed on deployments of one, two, and three node grids with services distributed in a number of different configurations. The preliminary results from these tests indicate that our architecture will scale well across multi-node grid deployments, but more work will be needed, beyond the scope of this project, to perform testing and experimentation to determine scalability and resiliency requirements. We are pleased to report that a production quality version (1.4) of the science.gov Alliance's search portal based on our grid architecture was released in June of 2006. This demonstration portal is currently available at http://science.gov/search30.
The portal allows the user to select from a number of collections grouped by category and enter a query expression (See Illustration 1 - Science.gov 3.0 Search Page). After the user clicks “search”, a results page is displayed that provides a list of results from the selected collections, ordered by relevance based on the query expression the user provided. Our grid-based solution to deep web search and document ranking has already gained attention within DOE, other government agencies and a Fortune 50 company. We are committed to the continued development of grid-based solutions to large-scale data access, filtering, and presentation problems within the domain of Information Retrieval and the more general categories of content management, data mining and data analysis.
Cost benefit analysis for smart grid projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karali, Nihan; He, Gang; Mauzey, J
The U.S. is unusual in that a definition of the term “smart grid” was written into legislation, appearing in the Energy Independence and Security Act (2007). When the recession called for stimulus spending and the American Recovery and Reinvestment Act (ARRA, 2009) was passed, a framework already existed for identification of smart grid projects. About $4.5B of the U.S. Department of Energy’s (U.S. DOE’s) $37B allocation from ARRA was directed to smart grid projects of two types, investment grants and demonstrations. Matching funds from other sources more than doubled the total value of ARRA-funded smart grid projects. The Smart Grid Investment Grant Program (SGIG) consumed all but $620M of the ARRA funds, which was available for the 32 projects in the Smart Grid Demonstration Program (SGDP, or demonstrations). Given the economic potential of these projects and the substantial investments required, there was keen interest in estimating the benefits of the projects (i.e., quantifying and monetizing the performance of smart grid technologies). Common method development and application, data collection, and analysis to calculate and publicize the benefits were central objectives of the program. For this purpose standard methods and a software tool, the Smart Grid Computational Tool (SGCT), were developed by U.S. DOE and a spreadsheet model was made freely available to grantees and other analysts. The methodology was intended to define smart grid technologies or assets, the mechanisms by which they generate functions, their impacts and, ultimately, their benefits. The SGCT and its application to the Demonstration Projects are described, and actual projects in Southern California and in China are selected to test and illustrate the tool. The usefulness of the methodology and tool for international analyses is then assessed.
Investigating Time-Varying Drivers of Grid Project Emissions Impacts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, Emily L.; Thayer, Brandon L.; Pal, Seemita
The emissions consequences of smart grid technologies depend heavily on their context and vary not only by geographical location, but by time of year. The same technology operated to meet the same objective may increase the emissions associated with energy generation for part of the year and decrease emissions during other times. The Grid Project Impact Quantification (GridPIQ) tool provides the ability to estimate these seasonal variations and garner insight into the time-varying drivers of grid project emissions impacts. This work leverages GridPIQ to examine the emissions implications across years and seasons of adding energy storage technology to reduce daily peak demand in California and New York.
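The seasonal sign flip described above can be illustrated with a back-of-the-envelope calculation: peak-shaving storage trades discharged on-peak energy against a larger amount of off-peak charging energy, so the net emissions impact hinges on the marginal emission rates in each period. The numbers and function below are illustrative only, not GridPIQ's actual model.

```python
def storage_emissions_delta(discharged_mwh, on_peak_rate, off_peak_rate,
                            efficiency=0.85):
    """Emissions change (tons CO2) from shifting peak demand into
    off-peak hours with storage.  The rates are the marginal CO2
    intensities (tons/MWh) of the generation displaced on peak and
    added off peak; round-trip losses mean more energy is charged
    than discharged.  Positive result = emissions increase."""
    charged_mwh = discharged_mwh / efficiency
    return charged_mwh * off_peak_rate - discharged_mwh * on_peak_rate

# season where off-peak marginal generation is dirtier (e.g. coal)
print(storage_emissions_delta(100, 0.4, 0.9) > 0)  # True: emissions rise
# season where off-peak marginal generation is cleaner
print(storage_emissions_delta(100, 0.4, 0.3) > 0)  # False: emissions fall
```

The same device, operated identically, thus flips from an emissions liability to an emissions benefit as the marginal generation mix changes through the year.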
Policy-based Distributed Data Management
NASA Astrophysics Data System (ADS)
Moore, R. W.
2009-12-01
The analysis and understanding of climate variability and change builds upon access to massive collections of observational and simulation data. The analyses involve distributed computing, both at the storage systems (which support data subsetting) and at compute engines (for assimilation of observational data into simulations). The integrated Rule Oriented Data System (iRODS) organizes the distributed data into collections to facilitate enforcement of management policies, support remote data processing, and enable development of reference collections. Currently at RENCI, the iRODS data grid is being used to manage ortho-photos and lidar data for the State of North Carolina, provide a unifying storage environment for engagement centers across the state, support distributed access to visualizations of weather data, and is being explored to manage and disseminate collections of ensembles of meteorological and hydrological model results. In collaboration with the National Climatic Data Center, an iRODS data grid is being established to support data transmission from NCDC to ORNL, and to integrate NCDC archives with ORNL compute services. To manage the massive data transfers, parallel I/O streams are used between High Performance Storage System tape archives and the supercomputers at ORNL. Further, we are exploring the movement and management of large RADAR and in situ datasets to be used for data mining between RENCI and NCDC, and for the distributed creation of decision support and climate analysis tools. The iRODS data grid supports all phases of the scientific data life cycle, from management of data products for a project, to sharing of data between research institutions, to publication of data in a digital library, to preservation of data for use in future research projects. Each phase is characterized by a broader user community, with higher expectations for more detailed descriptions and analysis mechanisms for manipulating the data. 
The higher usage requirements are enforced by management policies that define the required metadata, the required data formats, and the required analysis tools. The iRODS policy based data management system automates the creation of the community chosen data products, validates integrity and authenticity assessment criteria, and enforces management policies across all accesses of the system.
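The flavor of such a management policy can be sketched as a pre-ingest check. This is a toy Python stand-in: real iRODS policies are expressed as server-side rules, and the metadata fields and format list here are hypothetical.

```python
REQUIRED_METADATA = {"creator", "data_format", "checksum"}

def admit_to_collection(metadata, allowed_formats=("netCDF", "HDF5")):
    """Toy pre-ingest policy check: every object entering the
    reference collection must carry the required metadata and use
    an approved data format.  Returns (admitted, reason)."""
    missing = REQUIRED_METADATA - metadata.keys()
    if missing:
        return False, "missing metadata: " + ", ".join(sorted(missing))
    if metadata["data_format"] not in allowed_formats:
        return False, "format not allowed: " + metadata["data_format"]
    return True, "ok"

print(admit_to_collection({"creator": "NCDC", "data_format": "netCDF",
                           "checksum": "abc123"}))  # (True, 'ok')
```

Because the policy runs inside the data management system rather than in each client, every access path, such as ingest, replication, or publication, is subject to the same validation.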
Work step indication with grid-pattern projection for demented senior people.
Uranishi, Yuki; Yamamoto, Goshiro; Asghar, Zeeshan; Pulli, Petri; Kato, Hirokazu; Oshiro, Osamu
2013-01-01
This paper proposes a work step indication method for supporting daily work with a grid-pattern projection. To support an independent life for demented senior people, it is desirable that an instruction be easy to understand visually and not complicated. The proposed method uses a range image sensor and a camera in addition to a projector. The 3D geometry of a target scene is measured by the range image sensor, and the grid pattern is projected directly onto the scene. Direct projection of the work step is more easily associated with the target objects around the assisted person, and the grid pattern provides a way to convey spatial instructions. A prototype has been implemented and has demonstrated that the proposed grid-pattern projection presents work steps in an easily understandable way.
A Flexible Component based Access Control Architecture for OPeNDAP Services
NASA Astrophysics Data System (ADS)
Kershaw, Philip; Ananthakrishnan, Rachana; Cinquini, Luca; Lawrence, Bryan; Pascoe, Stephen; Siebenlist, Frank
2010-05-01
Network data access services such as OPeNDAP enable widespread access to data across user communities. However, without ready means to restrict access to data for such services, data providers and data owners are constrained from making their data more widely available. Even with such capability, the range of different security technologies available can make interoperability between services and user client tools a challenge. OPeNDAP is a key data access service in the infrastructure under development to support CMIP5 (the Coupled Model Intercomparison Project Phase 5). The work is being carried out as part of an international collaboration including the US Earth System Grid and Curator projects and the EU funded IS-ENES and Metafor projects. This infrastructure will bring together Petabytes of climate model data and associated metadata from over twenty modelling centres around the world in a federation with a core archive mirrored at three data centres. A security system is needed to meet the requirements of organisations responsible for model data including the ability to restrict data access to registered users, keep them up to date with changes to data and services, audit access and protect finite computing resources. Individual organisations have existing tools and services such as OPeNDAP with which users in the climate research community are already familiar. The security system should overlay access control in a way which maintains the usability and ease of access to these services. The BADC (British Atmospheric Data Centre) has been working in collaboration with the Earth System Grid development team and partner organisations to develop the security architecture. OpenID and MyProxy were selected at an early stage in the ESG project to provide single sign-on capability across the federation of participating organisations. Building on the existing OPeNDAP specification, an architecture based on pluggable server-side components has been developed at the BADC.
These components filter requests to the service they protect and apply the required authentication and authorisation schemes. Filters have been developed for OpenID and SSL client-based authentication; the latter enables access with MyProxy-issued credentials. By preserving a clear separation between the security and application functionality, multiple authentication technologies may be supported without the need for modification to the underlying OPeNDAP application. The software has been developed in the Python programming language, securing the Python-based OPeNDAP implementation, PyDAP. It utilises the Python WSGI (Web Server Gateway Interface) specification to create distinct security filter components. Work is also currently underway to develop a parallel Java-based filter implementation to secure the THREDDS Data Server. Whilst the ability to apply this flexible approach to the server-side security layer is important, the development of compatible client software is vital to the take-up of these services across a wide user base. To date PyDAP and wget based clients have been tested, and work is planned to integrate the required security interface into the netCDF API. This forms part of ongoing collaboration with the OPeNDAP user and development community to ensure interoperability.
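The pluggable-filter idea maps naturally onto WSGI middleware: a filter inspects the request before delegating to the wrapped application. The sketch below is a minimal illustration, not the BADC implementation; the header-based check stands in for the OpenID/SSL authentication schemes described above.

```python
def auth_filter(app, is_authenticated):
    """Wrap a WSGI application with an authentication check, in the
    spirit of the pluggable server-side security filters described
    above.  `is_authenticated` stands in for OpenID/SSL logic."""
    def middleware(environ, start_response):
        if not is_authenticated(environ):
            start_response("401 Unauthorized",
                           [("Content-Type", "text/plain")])
            return [b"authentication required"]
        # request passed the filter: delegate to the protected app
        return app(environ, start_response)
    return middleware

def dataset_app(environ, start_response):
    """Stand-in for the protected OPeNDAP-style data service."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"dataset payload"]

# hypothetical check: a trusted header identifies the user
secured = auth_filter(dataset_app, lambda env: "HTTP_X_USER" in env)
```

Because the filter and the application only share the WSGI interface, a different authentication scheme is swapped in by composing a different filter, with no change to the data service itself.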
Scientific Grid activities and PKI deployment in the Cybermedia Center, Osaka University.
Akiyama, Toyokazu; Teranishi, Yuuichi; Nozaki, Kazunori; Kato, Seiichi; Shimojo, Shinji; Peltier, Steven T; Lin, Abel; Molina, Tomas; Yang, George; Lee, David; Ellisman, Mark; Naito, Sei; Koike, Atsushi; Matsumoto, Shuichi; Yoshida, Kiyokazu; Mori, Hirotaro
2005-10-01
The Cybermedia Center (CMC), Osaka University, is a research institution that offers knowledge and technology resources obtained from advanced research in the areas of large-scale computation, information and communication, multimedia content and education. Currently, CMC is involved in Japanese national Grid projects such as JGN II (Japan Gigabit Network), NAREGI and BioGrid. Not limited to Japan, CMC also actively takes part in international activities such as PRAGMA. In these projects and international collaborations, CMC has developed a Grid system that allows scientists to perform their analysis by remote-controlling the world's largest ultra-high voltage electron microscope, located in Osaka University. In another undertaking, CMC has assumed a leadership role in BioGrid by sharing its experiences and knowledge on system development for the area of biology. In this paper, we will give an overview of the BioGrid project and introduce the progress of the Telescience unit, which collaborates with the Telescience Project led by the National Center for Microscopy and Imaging Research (NCMIR). Furthermore, CMC collaborates with seven Computing Centers in Japan, NAREGI and the National Institute of Informatics to deploy a PKI-based authentication infrastructure. The current status of this project and future collaboration with Grid projects will be delineated in this paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seal, Brian; Huque, Aminul; Rogers, Lindsey
In 2011, EPRI began a four-year effort under the Department of Energy (DOE) SunShot Initiative Solar Energy Grid Integration Systems - Advanced Concepts (SEGIS-AC) to demonstrate smart grid ready inverters with utility communication. The objective of the project was to successfully implement and demonstrate effective utilization of inverters with grid support functionality to capture the full value of distributed photovoltaic (PV). The project leveraged ongoing investments and expanded PV inverter capabilities, to enable grid operators to better utilize these grid assets. Developing and implementing key elements of PV inverter grid support capabilities will increase the distribution system’s capacity for higher penetration levels of PV, while reducing the cost. The project team included EPRI, Yaskawa-Solectria Solar, Spirae, BPL Global, DTE Energy, National Grid, Pepco, EDD, NPPT and NREL. The project was divided into three phases: development, deployment, and demonstration. Within each phase, the key areas included: head-end communications for Distributed Energy Resources (DER) at the utility operations center; methods for coordinating DER with existing distribution equipment; back-end PV plant master controller; and inverters with smart-grid functionality. Four demonstration sites were chosen in three regions of the United States with different types of utility operating systems and implementations of utility-scale PV inverters. This report summarizes the project and findings from field demonstration at three utility sites.
Determining economic benefits of satellite data in short-range forecasting
NASA Technical Reports Server (NTRS)
Suchman, D.; Auvine, B.; Hinton, B.
1981-01-01
The potential for enhanced short-term weather prediction, and the economic benefits from the use of GOES satellite data, were examined. Comparing the results of a meteorological consulting firm before and after the introduction of GOES data was chosen as the method, with monetary benefits as the measure. Services were provided to road and street departments, commodities dealers, and marine clients of the consulting firm. The Man-computer Interactive Data Access System (McIDAS) was employed to furnish half-hourly visible or IR imagery for remote access. The commodities clients reconnected to the GOES real-time imagery once the study was completed, while the consulting firm, which was personnel- rather than equipment-intensive, did not. Further development of flexible access to the GOES data and improvements in the projected grids are indicated.
Unlocking data: federated identity with LSDMA and dCache
NASA Astrophysics Data System (ADS)
Millar, AP; Behrmann, G.; Bernardt, C.; Fuhrmann, P.; Hardt, M.; Hayrapetyan, A.; Litvintsev, D.; Mkrtchyan, T.; Rossi, A.; Schwank, K.
2015-12-01
X.509, the dominant identity system in grid computing, has proved unpopular with many user communities. More popular alternatives generally assume the user is interacting via a web browser. Such alternatives allow a user to authenticate to many services with the same credentials (user-name and password), and allow users from different organisations to form collaborations quickly and simply. Scientists, however, generally require that their custom analysis software have direct access to the data. Such direct access is not currently supported by the alternatives to X.509, as they require the use of a web browser. Various approaches to solving this issue are being investigated as part of the Large Scale Data Management and Analysis (LSDMA) project, a German-funded national R&D project. These involve dynamic credential translation (creating an X.509 credential) to provide backwards compatibility, in addition to direct SAML- and OpenID Connect-based authentication. We present a summary of the current state of the art and the status of the federated-identity work funded by the LSDMA project, along with the future road map.
Grid computing enhances standards-compatible geospatial catalogue service
NASA Astrophysics Data System (ADS)
Chen, Aijun; Di, Liping; Bai, Yuqi; Wei, Yaxing; Liu, Yang
2010-04-01
A catalogue service facilitates sharing, discovery, retrieval, management of, and access to large volumes of distributed geospatial resources, for example data, services, applications, and their replicas on the Internet. Grid computing provides an infrastructure for effective use of computing, storage, and other resources available online. The Open Geospatial Consortium has proposed a catalogue service specification and a series of profiles for promoting the interoperability of geospatial resources. By referring to the Catalogue Service for the Web profile, an innovative information model of a catalogue service is proposed to offer Grid-enabled registry, management, retrieval of and access to geospatial resources and their replicas. This information model extends the e-business registry information model by adopting several geospatial data and service metadata standards: the International Organization for Standardization (ISO) 19115/19119 standards and the US Federal Geographic Data Committee (FGDC) and US National Aeronautics and Space Administration (NASA) metadata standards for describing and indexing geospatial resources. In order to select the optimal geospatial resources and their replicas managed by the Grid, the Grid data management service and information service from the Globus Toolkit are closely integrated with the extended catalogue information model. Based on this new model, a catalogue service is implemented first as a Web service. Then, the catalogue service is further developed as a Grid service conforming to Grid service specifications. The catalogue service can be deployed in both Web and Grid environments and accessed by standard Web services or authorized Grid services, respectively. The catalogue service has been implemented at the George Mason University/Center for Spatial Information Science and Systems (GMU/CSISS), managing more than 17 TB of geospatial data and geospatial Grid services.
This service makes it easy to share and interoperate geospatial resources by using Grid technology and extends Grid technology into the geoscience communities.
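The registry-and-query core of such a catalogue service can be sketched in a few lines. This is a toy in-memory model for illustration only; the real service extends the OGC ebRIM registry model with ISO 19115/19119 metadata, and the field names here are assumptions.

```python
# Minimal sketch of a catalogue-service registry: records carry a few
# metadata fields (id, title, keywords, bounding box) and can be queried
# by keyword and/or spatial intersection. Illustrative only.

class Catalogue:
    def __init__(self):
        self.records = []

    def register(self, identifier, title, keywords, bbox):
        # bbox = (min_lon, min_lat, max_lon, max_lat)
        self.records.append({"id": identifier, "title": title,
                             "keywords": set(keywords), "bbox": bbox})

    def search(self, keyword=None, bbox=None):
        """Return ids of records matching a keyword and/or intersecting a bbox."""
        hits = []
        for r in self.records:
            if keyword and keyword not in r["keywords"]:
                continue
            if bbox and not self._intersects(r["bbox"], bbox):
                continue
            hits.append(r["id"])
        return hits

    @staticmethod
    def _intersects(a, b):
        # Two boxes overlap unless one lies entirely to a side of the other.
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])
```

A production catalogue would add replica selection, harvesting, and standard query protocols on top of this basic register/search pattern.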
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, H; Kong, V; Jin, J
Purpose: A synchronized moving grid (SMOG) has been proposed to reduce scatter and lag artifacts in cone beam computed tomography (CBCT). However, information is missing from each projection because certain areas are blocked by the grid. A previous solution to this issue is to acquire two complementary projections at each position, which increases scanning time. This study reports our first results using an inter-projection sensor fusion (IPSF) method to estimate the missing projection data in our prototype SMOG-based CBCT system. Methods: An in-house SMOG assembly with a 1:1 grid of 3 mm gap has been installed in a CBCT benchtop. The grid moves back and forth with a 3-mm amplitude at up to 20 Hz. A control program in LabView synchronizes the grid motion with the platform rotation and x-ray firing so that the grid patterns of any two neighboring projections are complementary. A Catphan phantom was scanned with 360 projections. After scatter correction, the IPSF algorithm was applied to estimate the missing signal in each projection using the information from the two neighboring projections. The Feldkamp-Davis-Kress (FDK) algorithm was applied to reconstruct CBCT images. The CBCTs were compared to those reconstructed using normal projections without the SMOG system. Results: The SMOG-IPSF method may reduce the imaging dose by half due to the radiation blocked by the grid. The method almost completely removed scatter-related artifacts, such as cupping artifacts. Evaluation of the line-pair patterns in the Catphan suggested that spatial-resolution degradation was minimal. Conclusion: SMOG-IPSF is promising for reducing scatter artifacts and improving image quality while reducing radiation dose.
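The core of the fusion step, estimating pixels blocked by the grid from the two neighbouring projections whose complementary pattern leaves those pixels exposed, can be illustrated with a one-dimensional toy. This is not the paper's IPSF algorithm (which is not specified in the abstract); simple averaging of the two neighbours is an assumed stand-in.

```python
# Toy 1-D sketch of inter-projection fusion: pixels blocked by the grid
# (marked None) in the current projection are estimated from the two
# neighbouring projections, which expose those pixels because their grid
# pattern is complementary. Averaging is an illustrative choice only.

def fuse(prev_proj, cur_proj, next_proj):
    fused = []
    for p, c, n in zip(prev_proj, cur_proj, next_proj):
        if c is None:                    # blocked by the grid stripe
            fused.append((p + n) / 2.0)  # estimate from the two neighbours
        else:
            fused.append(c)              # measured directly: keep it
    return fused
```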
The Fabric for Frontier Experiments Project at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirby, Michael
2014-01-01
The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy to use job submission services for processing physics tasks on the Open Science Grid and elsewhere, 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data, 3) custom and generic database applications for calibrations, beam information, and other purposes, 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.
Self-adaptive Fault-Tolerance of HLA-Based Simulations in the Grid Environment
NASA Astrophysics Data System (ADS)
Huang, Jijie; Chai, Xudong; Zhang, Lin; Li, Bo Hu
The objects of an HLA-based simulation can access model services to update their attributes. However, a grid server may become overloaded and refuse to let its model service handle object accesses. Because these objects accessed the model service during the last simulation loop and their intermediate states are stored on that server, such a refusal may terminate the simulation. A fault-tolerance mechanism must therefore be introduced into simulations. Traditional fault-tolerance methods cannot meet this need because the transmission latency between a federate and the RTI in a grid environment varies from several hundred milliseconds to several seconds. By adding model service URLs to the OMT and extending the HLA services and model services with additional interfaces, this paper proposes a self-adaptive fault-tolerance mechanism for simulations based on the characteristics of federates' access to model services. Benchmark experiments indicate that the extended HLA/RTI allows simulations to run self-adaptively in the grid environment.
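The failover idea, a federate trying the model-service URLs carried in the OMT in turn when a server refuses an access, can be sketched as below. This is a hedged illustration of the general pattern, not the paper's mechanism; the exception type and call interface are assumptions.

```python
# Sketch of URL-based failover for model-service access: try each replica
# URL in turn, falling back when a server is overloaded and refuses the
# access. Names and the exception type are illustrative assumptions.

class OverloadedError(Exception):
    """Raised when a grid server refuses a model-service access."""


def access_model(service_urls, call):
    """Try each replica of a model service until one accepts the request."""
    last_error = None
    for url in service_urls:
        try:
            return call(url)             # e.g. an attribute-update request
        except OverloadedError as e:     # server refused: try the next URL
            last_error = e
    raise last_error                     # every replica refused
```

A self-adaptive variant would additionally reorder or weight the URLs using observed latencies, which is where the varying federate-to-RTI latency the paper highlights comes in.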
Bringing Federated Identity to Grid Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teheran, Jeny
The Fermi National Accelerator Laboratory (FNAL) is facing the challenge of providing scientific data access and grid submission to scientific collaborations that span the globe but are hosted at FNAL. Users in these collaborations are currently required to register as an FNAL user and obtain FNAL credentials to access grid resources to perform their scientific computations. These requirements burden researchers with managing additional authentication credentials, and put additional load on FNAL for managing user identities. Our design integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and MyProxy with the FNAL grid submission system to provide secure access for users from diverse experiments and collaborations without requiring each user to have authentication credentials from FNAL. The design automates the handling of certificates so users do not need to manage them manually. Although the initial implementation is for FNAL's grid submission system, the design and the core of the implementation are general and could be applied to other distributed computing systems.
NASA Astrophysics Data System (ADS)
Alpert, J. C.; Rutledge, G.; Wang, J.; Freeman, P.; Kang, C. Y.
2009-05-01
The NOAA Operational Model Archive and Distribution System (NOMADS) is now delivering high-availability services as part of NOAA's official real-time data dissemination at its Web Operations Center (WOC). The WOC is a web service used by all organizational units in NOAA and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value-added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The services used to access the operational model output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing, and area-subsetting the large matrix of real-time model data holdings. This approach ensures efficient use of computer resources because users transmit and receive only the data necessary for their tasks, including metadata. Data sets served in this way by a high-availability server offer vast possibilities for the creation of new products for value-added retailers and the scientific community. New applications for accessing data and observations for verification of gridded model output, and progress toward integrated access to conventional and non-conventional observations, will be discussed. We will demonstrate how users can employ NOMADS services to extract area subsets, either by repackaging GRIB2 files or by selecting values by ensemble component, (forecast) time, vertical level, global horizontal location, and variable: virtually a 6-dimensional analysis service across the internet.
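The subsetting idea, transmitting only the slice of the model-output matrix a user asks for, can be sketched with a toy data layout. This is an illustration of server-side subsetting in the NOMADS spirit, not NOMADS code; the grid structure here is an assumed stand-in.

```python
# Illustrative server-side subsetting: the client asks for one variable at
# chosen time steps, vertical levels, and a lat/lon window, and only that
# slice is returned. The toy layout grid[var][t][z] = {(lat, lon): value}
# stands in for the real multidimensional model-output holdings.

def subset(grid, variable, times, levels, lat_range, lon_range):
    """Return {(t, z, lat, lon): value} for the requested hyper-slab."""
    out = {}
    (lat0, lat1), (lon0, lon1) = lat_range, lon_range
    for t in times:
        for z in levels:
            for (lat, lon), v in grid[variable][t][z].items():
                if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
                    out[(t, z, lat, lon)] = v
    return out
```

The saving is exactly what the abstract claims: the client receives the selected values instead of the full data matrix.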
NASA Astrophysics Data System (ADS)
Fiore, Sandro; Williams, Dean; Aloisio, Giovanni
2016-04-01
In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and ecosystems where petabytes (PB) of data can be available and data can be distributed and/or replicated (e.g., the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5)). Most of the tools currently available for scientific data analysis in the climate domain fail at large scale since they: (1) are desktop based and need the data locally; (2) are sequential, so do not benefit from available multicore/parallel machines; (3) do not provide declarative languages to express scientific data analysis tasks; (4) are domain-specific, which ties their adoption to a specific domain; and (5) do not provide workflow support, to enable the definition of complex "experiments". The Ophidia project aims at facing most of the challenges highlighted above by providing a big data analytics framework for eScience. Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes ("datacubes"). The project relies on a strong background of high performance database management and OLAP systems to manage large scientific data sets. It also provides native workflow management support, to define processing chains and workflows with tens to hundreds of data analytics operators to build real scientific use cases.
With regard to interoperability aspects, the talk will present the contribution provided both to the RDA Working Group on Array Databases, and the Earth System Grid Federation (ESGF) Compute Working Team. Also highlighted will be the results of large scale climate model intercomparison data analysis experiments, for example: (1) defined in the context of the EU H2020 INDIGO-DataCloud project; (2) implemented in a real geographically distributed environment involving CMCC (Italy) and LLNL (US) sites; (3) exploiting Ophidia as server-side, parallel analytics engine; and (4) applied on real CMIP5 data sets available through ESGF.
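A flavour of the datacube operations such a framework offers can be given with a minimal reduction over the time dimension. This is a toy illustration in the spirit of Ophidia's server-side operators, not its actual API; the dict-based cube layout is an assumption.

```python
# Minimal "datacube" reduction: average a variable over the time dimension
# for every spatial cell. A real framework (e.g. Ophidia) runs this kind of
# operator server-side and in parallel; this sketch shows only the logic.

def reduce_time_mean(cube):
    """cube: {(time, lat, lon): value} -> {(lat, lon): mean over time}."""
    sums, counts = {}, {}
    for (t, lat, lon), v in cube.items():
        key = (lat, lon)
        sums[key] = sums.get(key, 0.0) + v
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}
```

Chaining tens of such operators into a processing pipeline is what the workflow support described above manages.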
A Scalable proxy cache for Grid Data Access
NASA Astrophysics Data System (ADS)
Cristian Cirstea, Traian; Just Keijser, Jan; Koeroo, Oscar Arthur; Starink, Ronald; Templon, Jeffrey Alan
2012-12-01
We describe a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future https-based Content Delivery Network for grid infrastructures. Two goals drove the project: firstly to provide a “native view” of the grid for desktop-type users, and secondly to improve performance for physics-analysis type use cases, where multiple passes are made over the same set of data (residing on the grid). We further constrained the design by requiring that the system should be made of standard components wherever possible. The prototype that emerged from this exercise is a horizontally-scalable, cooperating system of web server / cache nodes, fronted by a customized webDAV server. The webDAV server is custom only in the sense that it supports http redirects (providing horizontal scaling) and that the authentication module has, as back end, a proxy delegation chain that can be used by the cache nodes to retrieve files from the grid. The prototype was deployed at Nikhef and tested at a scale of several terabytes of data and approximately one hundred fast cores of computing. Both small and large files were tested, in a number of scenarios, and with various numbers of cache nodes, in order to understand the scaling properties of the system. For properly-dimensioned cache-node hardware, the system showed speedup of several integer factors for the analysis-type use cases. These results and others are presented and discussed.
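The horizontal-scaling trick described above, a front-end that answers each request with an HTTP redirect to one of the cache nodes, can be sketched as a deterministic node chooser. This is an illustrative sketch, not the Nikhef implementation; the node URLs and hashing choice are assumptions.

```python
# Sketch of redirect-based horizontal scaling: the front-end WebDAV server
# answers each file request with an HTTP 302 redirect to a cache node,
# chosen deterministically from the path so that repeated requests for the
# same file always land on the same node's cache. Node names are made up.

import hashlib

def redirect_target(path, cache_nodes):
    """Pick a cache node for `path` (stable across requests)."""
    digest = hashlib.sha256(path.encode()).digest()
    index = digest[0] % len(cache_nodes)         # deterministic choice
    return "302 Found", cache_nodes[index] + path

nodes = ["http://cache1.example.org", "http://cache2.example.org"]
```

Stability of the mapping is the point: it keeps each file's working copy on one node, so the per-node cache hit rate stays high as nodes are added.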
x509-free access to WLCG resources
NASA Astrophysics Data System (ADS)
Short, H.; Manzi, A.; De Notaris, V.; Keeble, O.; Kiryanov, A.; Mikkonen, H.; Tedesco, P.; Wartel, R.
2017-10-01
Access to WLCG resources is authenticated using an x509 and PKI infrastructure. Even though HEP users have always been exposed to certificates directly, the development of modern Web Applications by the LHC experiments calls for simplified authentication processes keeping the underlying software unmodified. In this work we will show a solution with the goal of providing access to WLCG resources using the user’s home organisations credentials, without the need for user-acquired x509 certificates. In particular, we focus on identity providers within eduGAIN, which interconnects research and education organisations worldwide, and enables the trustworthy exchange of identity-related information. eduGAIN has been integrated at CERN in the SSO infrastructure so that users can authenticate without the need of a CERN account. This solution achieves x509-free access to Grid resources with the help of two services: STS and an online CA. The STS (Security Token Service) allows credential translation from the SAML2 format used by Identity Federations to the VOMS-enabled x509 used by most of the Grid. The IOTA CA (Identifier-Only Trust Assurance Certification Authority) is responsible for the automatic issuing of short-lived x509 certificates. The IOTA CA deployed at CERN has been accepted by EUGridPMA as the CERN LCG IOTA CA, included in the IGTF trust anchor distribution and installed by the sites in WLCG. We will also describe the first pilot projects which are integrating the solution.
DREAM: Distributed Resources for the Earth System Grid Federation (ESGF) Advanced Management
NASA Astrophysics Data System (ADS)
Williams, D. N.
2015-12-01
The data associated with climate research is often generated, accessed, stored, and analyzed on a mix of unique platforms. The volume, variety, velocity, and veracity of this data creates unique challenges as climate research attempts to move beyond stand-alone platforms to a system that truly integrates dispersed resources. Today, sharing data across multiple facilities is often a challenge due to the large variance in supporting infrastructures. This results in data being accessed and downloaded many times, which requires significant amounts of resources, places a heavy analytic development burden on the end users, and leads to mismanaged resources. Working across U.S. federal agencies, international agencies, and multiple worldwide data centers, and spanning seven international network organizations, the Earth System Grid Federation (ESGF) has begun to solve this problem. Its architecture employs a system of geographically distributed peer nodes that are independently administered yet united by common federation protocols and application programming interfaces. However, significant challenges remain, including workflow provenance, modular and flexible deployment, scalability of a diverse set of computational resources, and more. Expanding on the existing ESGF, the Distributed Resources for the Earth System Grid Federation Advanced Management (DREAM) will ensure that the access, storage, movement, and analysis of the large quantities of data that are processed and produced by diverse science projects can be dynamically distributed with proper resource management. This system will enable data from an infinite number of diverse sources to be organized and accessed from anywhere on any device (including mobile platforms). The approach offers a powerful roadmap for the creation and integration of a unified knowledge base of an entire ecosystem, including its many geophysical, geographical, social, political, agricultural, energy, transportation, and cyber aspects.
The resulting aggregation of data combined with analytics services has the potential to generate an informational universe and knowledge system of unprecedented size and value to the scientific community, downstream applications, decision makers, and the public.
Earth Science Data Grid System
NASA Astrophysics Data System (ADS)
Chi, Y.; Yang, R.; Kafatos, M.
2004-12-01
The Earth Science Data Grid System (ESDGS) is a software system supporting earth science data storage and access. It is built upon the Storage Resource Broker (SRB) data grid technology. We have developed a complete data grid system consisting of an SRB server, which gives users uniform access to diverse storage resources in a heterogeneous computing environment, and a metadata catalog server (MCAT), which manages the metadata associated with data sets, users, and resources. We are also developing additional services for 1) metadata management, 2) geospatial, temporal, and content-based indexing, and 3) near/on-site data processing, in response to the unique needs of Earth science applications. In this paper, we describe the software architecture and components of the system, and use a practical example, the storage of and access to rainfall data from the Tropical Rainfall Measuring Mission (TRMM), to illustrate its functionality and features.
A computing method for spatial accessibility based on grid partition
NASA Astrophysics Data System (ADS)
Ma, Linbing; Zhang, Xinchang
2007-06-01
An accessibility computing method and process based on grid partition was put forward in this paper. Density of the road network and relative spatial resistance for different land uses, two important factors affecting traffic, were integrated into the traffic cost computed for each grid cell. An A* algorithm was introduced to search for the optimal traffic-cost path over the grid cells; a detailed search process and the definition of the heuristic evaluation function are described in the paper. The method can therefore be implemented simply, and its source data are easy to obtain. Moreover, by changing the heuristic search information, more reasonable results can be computed. To validate the research, a software package was developed in the C# language under the ArcEngine 9 environment. Applying the computing method, a case study on the accessibility of business districts in Guangzhou city was carried out.
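The search described above, A* over grid cells whose per-cell cost encodes road density and land-use resistance, can be sketched as follows. The cost values and the Manhattan-distance heuristic are illustrative assumptions; the paper's own heuristic evaluation function is not reproduced here.

```python
# Sketch of A* path search over a cost grid: each cell carries a traffic
# cost (in the paper, derived from road density and land-use resistance),
# and the search returns the minimum total cost between two cells using
# 4-neighbour moves. Manhattan distance scaled by the cheapest cell cost
# is an admissible heuristic, so the result is optimal.

import heapq

def a_star(cost, start, goal):
    """cost: 2-D list of per-cell entry costs; returns minimal path cost."""
    rows, cols = len(cost), len(cost[0])
    min_cost = min(min(row) for row in cost)

    def h(cell):  # admissible heuristic: never overestimates remaining cost
        return (abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])) * min_cost

    frontier = [(h(start), 0, start)]   # (f = g + h, g, cell)
    best = {start: 0}
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g
        if g > best.get(cell, float("inf")):
            continue                    # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + cost[nr][nc]   # pay the cost of entering the cell
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                         # unreachable
```

Changing `h` is exactly the "heuristic search information" lever the abstract mentions: a better-informed admissible heuristic expands fewer cells without changing the optimal result.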
NASA Astrophysics Data System (ADS)
Tian, Zhang; Yanfeng, Gong
2017-05-01
In order to resolve the mismatch between the demand for and the distribution of primary energy resources, Ultra High Voltage (UHV) power grids should be developed rapidly to support the development of energy bases and the grid access of large-scale renewable energy. This paper reviews the latest research on AC/DC transmission technologies, summarizes the characteristics of AC/DC power grids, and concludes that China's power grids have entered a new period of large-scale hybrid UHV AC/DC operation in which the characteristic of "strong DC and weak AC" becomes increasingly prominent. Possible problems in the operation of AC/DC power grids are discussed, and the interactions between the AC and DC grids are studied intensively. To address these problems, a preliminary scheme is summarized as follows: strengthening backbone structures, enhancing AC/DC transmission technologies, promoting protection measures for clean-energy grid access, and taking actions to solve stability problems of voltage and frequency. This is valuable for adapting hybrid UHV AC/DC power grids to the operating mode of large power grids, thus guaranteeing the security and stability of the power system.
Universal access to electricity in Burkina Faso: scaling-up renewable energy technologies
NASA Astrophysics Data System (ADS)
Moner-Girona, M.; Bódis, K.; Huld, T.; Kougias, I.; Szabó, S.
2016-08-01
This paper describes the status quo of the power sector in Burkina Faso and its limitations, and develops a new methodology that, through spatial analysis, aims to provide a possible pathway to universal electricity access. Following the SE4All initiative approach, it recommends more extensive use of distributed renewable energy systems to increase access to electricity on an accelerated timeline. Less than 5% of the rural population in Burkina Faso currently has access to electricity, and supply is lacking at many social structures such as schools and hospitals. Energy access achievements in Burkina Faso are still very modest: according to the latest SE4All Global Tracking Framework (2015), the annual growth rate of access to electricity in Burkina Faso from 2010 to 2012 was 0%. The rural electrification strategy for Burkina Faso is scattered across several electricity-sector development policies: a concrete action plan needs to be defined. Planning and coordination between grid extension and the off-grid electrification programme are essential to reach a long-term sustainable energy model and prevent avoidable infrastructure investments. This paper details the methodology and findings of the Geographic Information Systems tool that was developed. The aim of the dynamic planning tool is to support the national government and development partners in defining an alternative electrification plan. Burkina Faso proves to be a paradigm case for the methodology, as its national electrification policy is still dominated by grid extension and government subsidies for fossil-fuel electricity production. However, the results of our analysis suggest that continued grid extension is becoming inefficient and unsustainable for reaching the national energy-access targets. The results also suggest that Burkina Faso's rural electrification strategy should be driven by local renewable resources powering distributed mini-grids.
We find that this approach would connect more people to power more quickly, and would reduce fossil fuel use that would otherwise be necessary for grid extension options.
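The least-cost comparison at the heart of such a planning tool can be sketched for a single settlement. All cost parameters below are made-up placeholders, not the paper's model: grid-extension cost is assumed to grow with distance to the existing network, while a mini-grid's cost is assumed to scale mainly with local demand.

```python
# Illustrative (not the paper's actual model) least-cost choice between
# grid extension and a distributed mini-grid for one settlement. Every
# parameter value is a hypothetical placeholder.

def cheaper_option(distance_km, demand_kw,
                   grid_cost_per_km=20000.0, grid_fixed=5000.0,
                   minigrid_cost_per_kw=3000.0, minigrid_fixed=10000.0):
    """Return which electrification option is cheaper for this settlement."""
    grid = grid_fixed + grid_cost_per_km * distance_km       # line to the network
    minigrid = minigrid_fixed + minigrid_cost_per_kw * demand_kw  # local PV system
    return "grid-extension" if grid <= minigrid else "mini-grid"
```

A GIS-based tool applies a comparison of this kind cell by cell over the whole country, which is how remote, low-demand settlements end up assigned to mini-grids.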
Wireless Sensor Network for Electric Transmission Line Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alphenaar, Bruce
Generally, federal agencies tasked to oversee power grid reliability are dependent on data from grid infrastructure owners and operators in order to obtain a basic level of situational awareness. Since there are many owners and operators involved in the day-to-day functioning of the power grid, the task of accessing, aggregating and analyzing grid information from these sources is not a trivial one. Seemingly basic tasks such as synchronizing data timestamps between many different data providers and sources can be difficult as evidenced during the post-event analysis of the August 2003 blackout. In this project we investigate the efficacy and cost effectiveness of deploying a network of wireless power line monitoring devices as a method of independently monitoring key parts of the power grid as a complement to the data which is currently available to federal agencies from grid system operators. Such a network is modeled on proprietary power line monitoring technologies and networks invented, developed and deployed by Genscape, a Louisville, Kentucky based real-time energy information provider. Genscape measures transmission line power flow using measurements of electromagnetic fields under overhead high voltage transmission power lines in the United States and Europe. Opportunities for optimization of the commercial power line monitoring technology were investigated in this project to enable lower power consumption, lower cost and improvements to measurement methodologies. These optimizations were performed in order to better enable the use of wireless transmission line monitors in large network deployments (perhaps covering several thousand power lines) for federal situational awareness needs. Power consumption and cost reduction were addressed by developing a power line monitor using a low power, low cost wireless telemetry platform known as the ''Mote''. Motes were first developed as smart sensor nodes in wireless mesh networking applications.
On such a platform, it has been demonstrated in this project that wireless monitoring units can effectively deliver real-time transmission line power flow information for less than $500 per monitor. The data delivered by such a monitor has during the course of the project been integrated with a national grid situational awareness visualization platform developed by Oak Ridge National Laboratory. Novel vibration energy scavenging methods based on piezoelectric cantilevers were also developed as a proposed method to power such monitors, with a goal of further cost reduction and large-scale deployment. Scavenging methods developed during the project resulted in 50% greater power output than conventional cantilever-based vibrational energy scavenging devices typically used to power smart sensor nodes. Lastly, enhanced and new methods for electromagnetic field sensing using multi-axis magnetometers and infrared reflectometry were investigated for potential monitoring applications in situations with a high density of power lines or high levels of background 60 Hz noise in order to isolate power lines of interest from other power lines in close proximity. The goal of this project was to investigate and demonstrate the feasibility of using small form factor, highly optimized, low cost, low power, non-contact, wireless electric transmission line monitors for delivery of real-time, independent power line monitoring for the US power grid. 
The project was divided into three main types of activity as follows: (1) research into expanding the range of applications for non-contact power line monitoring to enable large scale, low cost sensor network deployments (Tasks 1, 2); (2) optimization of individual sensor hardware components to reduce size, cost and power consumption, and testing in a pilot field study (Tasks 3, 5); and (3) demonstration of the feasibility of using the data from the network of power line monitors, via a range of custom developed alerting and data visualization applications, to deliver real-time information to federal agencies and others tasked with grid reliability (Tasks 6, 8).
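The physical principle behind the monitors, inferring line current from the magnetic field measured beneath the conductor, can be illustrated with the single-conductor case. For one long straight wire, Ampère's law gives B = μ₀I/(2πr), which inverts directly to a current estimate; this is a textbook toy model, since real multi-conductor lines need the multi-axis methods discussed above.

```python
# Toy single-conductor model of non-contact current sensing: invert
# Ampere's law B = mu0 * I / (2 * pi * r) to estimate the line current
# from a magnetic-field reading at a known distance below the conductor.

import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def line_current(b_tesla, distance_m):
    """Estimate current (A) from the field magnitude at distance_m."""
    return 2 * math.pi * distance_m * b_tesla / MU0
```

For example, a 1000 A line produces a field of about 20 µT at 10 m, well within the range of inexpensive magnetometers, which is what makes low-cost wide-area deployments plausible.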
The Use of Proxy Caches for File Access in a Multi-Tier Grid Environment
NASA Astrophysics Data System (ADS)
Brun, R.; Duellmann, D.; Ganis, G.; Hanushevsky, A.; Janyst, L.; Peters, A. J.; Rademakers, F.; Sindrilaru, E.
2011-12-01
The use of proxy caches has been extensively studied in the HEP environment for efficient access to database data, and has shown significant performance benefits with only very moderate operational effort at higher grid tiers (T2, T3). In this contribution we propose to apply the same concept to the area of file access and analyse the possible performance gains, operational impact on site services and applicability to different HEP use cases. Based on proof-of-concept studies with a modified XROOT proxy server, we review the cache efficiency and overheads for access patterns of typical ROOT-based analysis programs. We conclude with a discussion of the potential role of this new component at the different tiers of a distributed computing grid.
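The read-through caching idea the abstract studies can be illustrated with a minimal sketch. This is a hypothetical stand-in, not the modified XROOT proxy server itself; the class name and the `fetch` callable are invented for illustration:

```python
import os
import tempfile

class ReadThroughFileCache:
    """Serve a file from local disk if already cached; otherwise fetch
    it once from remote storage and keep the copy for later reads."""

    def __init__(self, cache_dir, fetch):
        self.cache_dir = cache_dir   # local disk area at the T2/T3 site
        self.fetch = fetch           # callable: name -> bytes (remote read)
        self.hits = self.misses = 0
        os.makedirs(cache_dir, exist_ok=True)

    def read(self, name):
        path = os.path.join(self.cache_dir, name)
        if os.path.exists(path):
            self.hits += 1           # served locally, no wide-area traffic
        else:
            self.misses += 1
            data = self.fetch(name)  # single remote read per file
            with open(path, "wb") as f:
                f.write(data)
        with open(path, "rb") as f:
            return f.read()
```

Repeated analysis passes over the same dataset then hit the local copy, which is the effect the authors measure for ROOT-based access patterns.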
How to deal with petabytes of data: the LHC Grid project
NASA Astrophysics Data System (ADS)
Britton, D.; Lloyd, S. L.
2014-06-01
We review the Grid computing system developed by the international community to deal with the petabytes of data coming from the Large Hadron Collider at CERN in Geneva, with particular emphasis on the ATLAS experiment and the UK Grid project, GridPP. Although these developments were started over a decade ago, this article explains their continued relevance as part of the ‘Big Data’ problem and how the Grid has been a forerunner of today's cloud computing.
NREL: International Activities - Country Programs
NREL advises on the use of mini-grid quality assurance and design standards and on mini-grid business models, supports communities of practice and technical collaboration across countries on mini-grid development, modeling and interconnection standards and procedures, and helps strengthen mini-grids and energy access programs.
NASA Technical Reports Server (NTRS)
Hinke, Thomas H.
2004-01-01
Grid technology consists of middleware that permits distributed computations, data and sensors to be seamlessly integrated into a secure, single-sign-on processing environment. In this environment, a user has to identify and authenticate himself once to the grid middleware, and can then utilize any of the distributed resources to which he has been granted access. Grid technology allows resources that exist in enterprises under different administrative control to be securely integrated into a single processing environment. The grid community has adopted commercial web services technology as a means for implementing persistent, re-usable grid services that sit on top of the basic distributed processing environment that grids provide. These grid services can then form building blocks for even more complex grid services. Each grid service is characterized using the Web Service Description Language, which provides a description of the interface and how other applications can access it. The emerging Semantic Grid work seeks to associate sufficient semantic information with each grid service such that applications will be able to automatically select, compose and, if necessary, substitute available equivalent services in order to assemble collections of services that are most appropriate for a particular application. Grid technology has been used to provide limited support to various Earth and space science applications. Looking to the future, this emerging grid service technology can provide a cyberinfrastructure for both the Earth and space science communities. Groups within these communities could transform those applications that have community-wide applicability into persistent grid services that are made widely available to their respective communities.
In concert with grid-enabled data archives, users could easily create complex workflows that extract desired data from one or more archives and process it through an appropriate set of widely distributed grid services discovered using semantic grid technology. As required, high-end computational resources could be drawn from available grid resource pools. Using grid technology, this confluence of data, services and computational resources could easily be harnessed to transform data from many different sources into a desired product that is delivered to a user's workstation or to a web portal through which it could be accessed by its intended audience.
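The workflow idea described above, chaining archive extraction through a series of grid services, can be sketched with plain callables standing in for the services. All names and values here are hypothetical:

```python
def run_workflow(steps, data):
    """Apply a chain of 'grid services' (plain callables here) in order,
    passing each service's output to the next, as a workflow engine would."""
    for step in steps:
        data = step(data)
    return data

# Hypothetical stand-ins for archive extraction and processing services
extract = lambda region: [3.1, 2.9, 3.4]                 # pull values for a region
calibrate = lambda xs: [round(x * 1.02, 2) for x in xs]  # apply a correction
summarize = lambda xs: sum(xs) / len(xs)                 # reduce to a product
```

A real deployment would replace each callable with a remote service invocation discovered via its WSDL description.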
A grid-enabled web service for low-resolution crystal structure refinement.
O'Donovan, Daniel J; Stokes-Rees, Ian; Nam, Yunsun; Blacklow, Stephen C; Schröder, Gunnar F; Brunger, Axel T; Sliz, Piotr
2012-03-01
Deformable elastic network (DEN) restraints have proved to be a powerful tool for refining structures from low-resolution X-ray crystallographic data sets. Unfortunately, optimal refinement using DEN restraints requires extensive calculations and is often hindered by a lack of access to sufficient computational resources. The DEN web service presented here intends to provide structural biologists with access to resources for running computationally intensive DEN refinements in parallel on the Open Science Grid, the US cyberinfrastructure. Access to the grid is provided through a simple and intuitive web interface integrated into the SBGrid Science Portal. Using this portal, refinements combined with full parameter optimization that would take many thousands of hours on standard computational resources can now be completed in several hours. An example of the successful application of DEN restraints to the human Notch1 transcriptional complex using the grid resource, and summaries of all submitted refinements, are presented as justification.
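The "full parameter optimization" the portal parallelizes is, in essence, a sweep over DEN parameter pairs with the best refinement kept. The sketch below illustrates that selection step only; the toy R-free surface and parameter values are invented, and a real run would submit each trial as a separate Open Science Grid job:

```python
from itertools import product

def best_den_parameters(refine, gammas, weights):
    """Exhaustive DEN parameter sweep: run one refinement per
    (gamma, wDEN) pair and keep the pair with the lowest R-free.
    `refine` stands in for a refinement job dispatched to the grid."""
    trials = [(g, w, refine(g, w)) for g, w in product(gammas, weights)]
    return min(trials, key=lambda t: t[2])

# Toy R-free surface with its minimum at gamma=0.5, wDEN=10 (illustrative only)
toy_rfree = lambda g, w: 0.25 + (g - 0.5) ** 2 + 0.001 * (w - 10) ** 2
```

Because the trials are independent, the wall-clock time collapses when they run concurrently, which is the portal's main benefit.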
Fungal genome resources at NCBI.
Robbertse, B; Tatusova, T
2011-09-01
The National Center for Biotechnology Information (NCBI) is well known for the nucleotide sequence archive, GenBank, and the sequence analysis tool BLAST. However, NCBI integrates many types of biomolecular data from a variety of sources and makes them available to the scientific community as interactive web resources as well as organized releases of bulk data. These tools are available to explore and compare fungal genomes. Searching all databases with Fungi [organism] at http://www.ncbi.nlm.nih.gov/ is the quickest way to find resources of interest with fungal entries. Some tools, though, are resource specific and can be accessed indirectly from a particular database in the Entrez system. These include graphical viewers and comparative analysis tools such as TaxPlot, TaxMap and UniGene DDD (found via the UniGene Homepage). Gene and BioProject pages also serve as portals to external data such as community annotation websites, BioGrid and UniProt. There are many different ways of accessing genomic data at NCBI. Depending on the focus and goal of research projects or the level of interest, a user would select a particular route for accessing genomic databases and resources. This review article describes methods of accessing fungal genome data and provides examples that illustrate the use of analysis tools.
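One programmatic route into the Entrez system described above is NCBI's E-utilities. The sketch below only composes an `esearch` request URL for a Fungi [organism] query against the public E-utilities endpoint; it deliberately makes no network call, so the helper function and defaults are illustrative:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(db, term, retmax=20):
    """Compose an NCBI E-utilities esearch request for the given
    Entrez database and query term (no network call is made here)."""
    return EUTILS + "?" + urlencode({"db": db, "term": term, "retmax": retmax})
```

Fetching the composed URL returns an XML list of matching record IDs, which can then be passed to `efetch` for the records themselves.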
Operating a transmission company under open access: The basic requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, S.; Shuttleworth, G.
1993-03-01
In both Europe and North America, technical and legal changes are increasing the opportunities for electricity traders to use transmission lines and grids that are owned by other companies. This article discusses the view that transmission may be a service potentially separable from the production and retailing of electricity, and that transmission should be freely available at an appropriate price. Grid operators are wary of proposals to open access to transmission. European legislators want grid operators to become Transmission System Operators (TSO), moving energy around the network for others. Also discussed in this article are the powers that the TSO should be allowed to exercise if access to transmission is made available.
Advances in a distributed approach for ocean model data interoperability
Signell, Richard P.; Snowden, Derrick P.
2014-01-01
An infrastructure for earth science data is emerging across the globe based on common data models and web services. As we evolve from custom file formats and web sites to standards-based web services and tools, data is becoming easier to distribute, find and retrieve, leaving more time for science. We describe recent advances that make it easier for ocean model providers to share their data, and for users to search, access, analyze and visualize ocean data using MATLAB® and Python®. These include a technique for modelers to create aggregated, Climate and Forecast (CF) metadata convention datasets from collections of non-standard Network Common Data Form (NetCDF) output files, the capability to remotely access data from CF-1.6-compliant NetCDF files using the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), a metadata standard for unstructured grid model output (UGRID), and tools that utilize both CF and UGRID standards to allow interoperable data search, browse and access. We use examples from the U.S. Integrated Ocean Observing System (IOOS®) Coastal and Ocean Modeling Testbed, a project in which modelers using both structured and unstructured grid model output needed to share their results, to compare their results with other models, and to compare models with observed data. The same techniques used here for ocean modeling output can be applied to atmospheric and climate model output, remote sensing data, digital terrain and bathymetric data.
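The aggregation technique mentioned above, joining many per-file model outputs into one logical dataset along the record dimension, can be illustrated with a simplified stand-in. This is not the actual NcML/CF tooling; it is a pure-Python sketch of a "joinExisting"-style aggregation over (time, value) pairs:

```python
def aggregate_time(datasets):
    """Join per-file model output along the record (time) dimension,
    as an NcML-style 'joinExisting' aggregation would, checking that
    the files are supplied in increasing time order."""
    times, values = [], []
    for t, v in datasets:              # each item: (time list, value list)
        if times and t[0] <= times[-1]:
            raise ValueError("files out of time order")
        times.extend(t)
        values.extend(v)
    return times, values
```

A client then sees one continuous time series even though the provider wrote one file per forecast cycle, which is exactly what makes the aggregated dataset easy to search and subset remotely.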
Output Control Technologies for a Large-scale PV System Considering Impacts on a Power Grid
NASA Astrophysics Data System (ADS)
Kuwayama, Akira
The mega-solar demonstration project named “Verification of Grid Stabilization with Large-scale PV Power Generation systems” was completed in March 2011 at Wakkanai, the northernmost city of Japan. The major objectives of this project were to evaluate adverse impacts of large-scale PV power generation systems connected to the power grid and to develop output control technologies with an integrated battery storage system. This paper describes the outline and results of this project. The results show the effectiveness of the battery storage system and of the proposed output control methods for a large-scale PV system in ensuring stable operation of power grids. NEDO, the New Energy and Industrial Technology Development Organization of Japan, conducted this project, and HEPCO, Hokkaido Electric Power Co., Inc., managed the overall project.
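One common output control method of the kind evaluated in such projects is smoothing: the grid sees a moving average of PV output, and the battery supplies or absorbs the difference. The following is a toy sketch of that idea, not the project's actual control algorithm; the window length and sign convention are assumptions:

```python
def smooth_pv_output(pv, window=3):
    """Trailing moving average of PV output; the battery covers the
    difference between smoothed (grid-side) and raw (panel-side) power."""
    grid, battery = [], []
    for i in range(len(pv)):
        lo = max(0, i - window + 1)
        avg = sum(pv[lo:i + 1]) / (i + 1 - lo)
        grid.append(avg)
        battery.append(avg - pv[i])   # >0: battery discharges; <0: charges
    return grid, battery
```

When a cloud passage drops the panel output suddenly, the grid-side power ramps gently instead, which is the stabilization effect the demonstration measured.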
Smart Grid Communications Security Project, U.S. Department of Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, Frank
Four groups worked on this project in different areas related to Smart Grids and security; they included faculty and students from electrical, computer and energy engineering, law, business and sociology. The results of the work are summarized in a variety of reports, papers and theses. A major report to the Governor of Colorado's energy office, with contributions from all the groups working on this project, is given below: Smart Grid Deployment in Colorado: Challenges and Opportunities, Report to Colorado Governor's Energy Office and Colorado Smart Grid Task Force (2010) (Kevin Doran, Frank Barnes, and Puneet Pasrich, eds.). This report includes information on the state of grid cyber security, privacy, energy storage and grid stability, workforce development, consumer behavior with respect to the smart grid, and safety issues.
Grid-Scale Energy Storage Demonstration of Ancillary Services Using the UltraBattery Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seasholtz, Jeff
2015-08-20
The collaboration described in this document is being done as part of a cooperative research agreement under the Department of Energy’s Smart Grid Demonstration Program. This document represents the Final Technical Performance Report, from July 2012 through April 2015, for the East Penn Manufacturing Smart Grid Program demonstration project. This Smart Grid Demonstration project demonstrates Distributed Energy Storage for Grid Support, in particular the economic and technical viability of a grid-scale, advanced energy storage system using UltraBattery® technology for frequency regulation ancillary services and demand management services. The project entailed the construction of a dedicated facility on the East Penn campus in Lyon Station, PA that is being used as a working demonstration to provide regulation ancillary services to PJM and demand management services to Metropolitan Edison (Met-Ed).
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-14
... Smart Grid: Data Access, Third Party Use, and Privacy AGENCY: Department of Energy. ACTION: Notice of... information from smart meters, historical consumption data, and pricing and billing information. DOE will hold... electronic form--including real-time information from smart meters, historical consumption data, and pricing...
Optimising LAN access to grid enabled storage elements
NASA Astrophysics Data System (ADS)
Stewart, G. A.; Cowan, G. A.; Dunne, B.; Elwell, A.; Millar, A. P.
2008-07-01
When operational, the Large Hadron Collider experiments at CERN will collect tens of petabytes of physics data per year. The worldwide LHC computing grid (WLCG) will distribute this data to over two hundred Tier-1 and Tier-2 computing centres, enabling particle physicists around the globe to access the data for analysis. Although different middleware solutions exist for effective management of storage systems at collaborating institutes, the patterns of access envisaged for Tier-2s fall into two distinct categories. The first involves bulk transfer of data between different Grid storage elements using protocols such as GridFTP. This data movement will principally involve writing ESD and AOD files into Tier-2 storage. Secondly, once datasets are stored at a Tier-2, physics analysis jobs will read the data from the local SE. Such jobs require a POSIX-like interface to the storage so that individual physics events can be extracted. In this paper we consider the performance of POSIX-like access to files held in Disk Pool Manager (DPM) storage elements, a popular lightweight SRM storage manager from EGEE.
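The distinction drawn above between bulk GridFTP transfers and POSIX-like analysis access comes down to whether a job can read just the byte range holding one physics event. A minimal sketch of the latter pattern, with an in-memory buffer standing in for a file on a DPM storage element (the offsets are invented):

```python
import io

def read_event(f, offset, size):
    """POSIX-like access: seek to one event's byte range instead of
    transferring the whole file, as a Tier-2 analysis job would."""
    f.seek(offset)
    return f.read(size)
```

Over thousands of jobs each touching a small fraction of a file, this access pattern stresses the SE very differently from whole-file GridFTP writes, which is why the paper measures the two separately.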
Enabling Object Storage via shims for Grid Middleware
NASA Astrophysics Data System (ADS)
Cadellin Skipsey, Samuel; De Witt, Shaun; Dewhurst, Alastair; Britton, David; Roy, Gareth; Crooks, David
2015-12-01
The Object Store model has quickly become the basis of most commercially successful mass storage infrastructure, backing so-called “Cloud” storage such as Amazon S3, but also underlying the implementation of most parallel distributed storage systems. Many of the assumptions in Object Store design are similar, but not identical, to concepts in the design of Grid Storage Elements, although the requirement for “POSIX-like” filesystem structures on top of SEs makes the disjunction seem larger. As modern Object Stores provide many features that most Grid SEs do not (block level striping, parallel access, automatic file repair, etc.), it is of interest to see how easily we can provide interfaces to typical Object Stores via plugins and shims for Grid tools, and how well experiments can adapt their data models to them. We present evaluation of, and first-deployment experiences with, (for example) Xrootd-Ceph interfaces for direct object-store access, as part of an initiative within GridPP[1] hosted at RAL. Additionally, we discuss the tradeoffs and experience of developing plugins for the currently-popular Ceph parallel distributed filesystem for the GFAL2 access layer, at Glasgow.
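The essence of such a shim, presenting POSIX-like calls over a flat key-to-object namespace, can be sketched in a few lines. A plain dict stands in for the object store here (the real targets are Ceph via Xrootd or GFAL2 plugins), and the directory listing shows why "directories" on an object store are only a prefix convention:

```python
class PosixShim:
    """Minimal shim presenting open/read/list-style calls on top of a
    flat object store (a dict of key -> bytes stands in for e.g. Ceph)."""

    def __init__(self, store):
        self.store = store

    def write(self, path, data):
        self.store[path] = data          # one object per 'file'

    def read(self, path, offset=0, size=None):
        data = self.store[path][offset:] # object stores support ranged reads
        return data if size is None else data[:size]

    def listdir(self, prefix):
        # Directories are an illusion: filter object keys by prefix
        return sorted(k for k in self.store if k.startswith(prefix))
```

Features like striping and automatic repair live below this interface in the object store itself, which is precisely what makes the shim approach attractive.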
Grid-based platform for training in Earth Observation
NASA Astrophysics Data System (ADS)
Petcu, Dana; Zaharie, Daniela; Panica, Silviu; Frincu, Marc; Neagul, Marian; Gorgan, Dorian; Stefanut, Teodor
2010-05-01
The GiSHEO platform [1], providing on-demand services for training and higher education in Earth Observation, is being developed, in the frame of an ESA-funded project through its PECS programme, to respond to the need for powerful education resources in the remote sensing field. It is intended to be a Grid-based platform whose potential for experimentation and extensibility are the key benefits compared with a desktop software solution. Near-real-time applications requiring simultaneous multiple short-time-response data-intensive tasks, as in the case of a short training event, have proved to be ideal for this platform. The platform is based on Globus Toolkit 4 facilities for security and process management, and on the clusters of the four academic institutions involved in the project. Authorization uses a VOMS service. The main public services are the following: the EO processing services (represented through special WSRF-type services); the workflow service exposing a particular workflow engine; the data indexing and discovery service for accessing the data management mechanisms; and the processing services, a collection allowing easy access to the processing platform. The WSRF-type services for basic satellite image processing reuse free image processing tools, OpenCV and GDAL. New algorithms and workflows were developed to tackle challenging problems like detecting the underground remains of old fortifications, walls or houses. More details can be found in [2]. Composed services can be specified through workflows and are easy to deploy. The workflow engine, OSyRIS (Orchestration System using a Rule based Inference Solution), is based on DROOLS, and a new rule-based workflow language, SILK (SImple Language for worKflow), has been built. Workflow creation in SILK can be done with or without a visual design tool. The basics of SILK are the tasks and the relations (rules) between them.
It is similar to the SCUFL language, but does not rely on XML, in order to allow the introduction of more workflow-specific issues. Moreover, an event-condition-action (ECA) approach allows greater flexibility when expressing data and task dependencies, as well as the creation of adaptive workflows which can react to changes in the configuration of the Grid or in the workflow itself. Changes inside the grid are handled by creating specific rules which allow resource selection based on various task scheduling criteria. Modifications of the workflow are usually accomplished either by inserting or retracting rules belonging to it at runtime, or by modifying the executor of a task in case a better one is found. The former implies changes in the workflow's structure, while the latter does not necessarily mean a change of resource but, more precisely, a change of the algorithm used for solving the task. More details can be found in [3]. Another important platform component is the data indexing and storage service, GDIS, providing features for data storage, indexing data using a specialized RDBMS, finding data by various conditions, querying external services and keeping track of temporary data generated by other components. The data storage component of GDIS is responsible for storing the data using available storage backends such as local disk file systems (ext3), local cluster storage (GFS) or distributed file systems (HDFS). A front-end GridFTP service is capable of interacting with the storage domains on behalf of the clients in a uniform way, and also enforces the security restrictions provided by other specialized services related to data access. The data indexing is performed by PostGIS. An advanced and flexible interface for searching the project's geographical repository is built around a custom query language (LLQL - Lisp Like Query Language) designed to provide fine-grained access to the data in the repository and to query external services (e.g.
for exploiting the connection with the GENESI-DR catalog). More details can be found in [4]. The Workload Management System (WMS) provides two types of resource managers. The first is based on Condor HTC and uses Condor as a job manager for task dispatching and working nodes (for development purposes), while the second uses GT4 GRAM (for production purposes). The WMS main component, the Grid Task Dispatcher (GTD), is responsible for the interaction with other internal services, such as the composition engine, in order to facilitate access to the processing platform. Its main responsibilities are to receive tasks from the workflow engine or directly from the user interface, to use a task description language (the ClassAd meta-language in the case of Condor HTC) for job units, to submit and check the status of jobs inside the workload management system, and to retrieve job logs for debugging purposes. More details can be found in [4]. A particular component of the platform is eGLE, the eLearning environment. It provides the functionalities necessary to create the visual appearance of lessons through the usage of visual containers like tools, patterns and templates. The teacher uses the platform for testing already created lessons, as well as for developing new lesson resources, such as new images and workflows describing graph-based processing. The students execute the lessons or describe and experiment with new workflows or different data. The eGLE database includes several workflow-based lesson descriptions, teaching materials and lesson resources, and selected satellite and spatial data. More details can be found in [5]. A first training event using the platform was organized in September 2009 during the 11th SYNASC symposium (links to the demos, testing interface, and exercises are available on the project site [1]). The eGLE component was presented at the 4th GPC conference in May 2009.
Moreover, the functionality of the platform will be presented as a demo in April 2010 at the 5th EGEE User Forum. References: [1] GiSHEO consortium, Project site, http://gisheo.info.uvt.ro [2] D. Petcu, D. Zaharie, M. Neagul, S. Panica, M. Frincu, D. Gorgan, T. Stefanut, V. Bacu, Remote Sensed Image Processing on Grids for Training in Earth Observation. In Image Processing, V. Kordic (ed.), In-Tech, January 2010. [3] M. Neagul, S. Panica, D. Petcu, D. Zaharie, D. Gorgan, Web and Grid Services for Training in Earth Observation, IDAACS 2009, IEEE Computer Press, 241-246. [4] M. Frincu, S. Panica, M. Neagul, D. Petcu, GiSHEO: On Demand Grid Service Based Platform for EO Data Processing. HiperGrid 2009, Politehnica Press, 415-422. [5] D. Gorgan, T. Stefanut, V. Bacu, Grid Based Training Environment for Earth Observation, GPC 2009, LNCS 5529, 98-109.
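The event-condition-action workflow model used by OSyRIS above can be sketched as a tiny forward-chaining rule engine: rules fire while their conditions hold, and firing adds new facts until a fixed point is reached. This is an illustrative toy, not OSyRIS/DROOLS, and the fact names are invented:

```python
def run_eca(rules, facts):
    """Tiny event-condition-action engine: each rule fires when its
    condition holds on the fact set, and its action contributes new
    facts. Iterate until no rule produces anything new (a fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(facts):
                new = action(facts) - facts
                if new:
                    facts |= new
                    changed = True
    return facts

# Hypothetical EO workflow: georeference after ingest, then detect walls
rules = [
    (lambda f: "image_ingested" in f, lambda f: {"georeferenced"}),
    (lambda f: "georeferenced" in f,  lambda f: {"walls_detected"}),
]
```

Adaptivity then amounts to inserting or retracting rules at runtime, exactly as the abstract describes for reacting to changes in the Grid configuration.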
Uniformity on the grid via a configuration framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Igor V Terekhov et al.
2003-03-11
As Grid computing permeates modern computing, Grid solutions continue to emerge and take shape. Grid development projects continue to provide higher-level services that evolve in functionality and operate with application-level concepts which are often specific to the virtual organizations that use them. Physically, however, grids are composed of sites whose resources are diverse and seldom project readily onto a grid's set of concepts. In practice, this also creates problems for the site administrators who actually instantiate grid services. In this paper, we present a flexible, uniform framework to configure a grid site and its facilities, and otherwise describe the resources and services it offers. We start from a site configuration and instantiate services for resource advertisement, monitoring and data handling; we also apply our framework to hosting environment creation. We use our ideas in the Information Management part of the SAM-Grid project, a grid system which will deliver petabyte-scale data to hundreds of users. Our users are High Energy Physics experimenters who are scattered worldwide across dozens of institutions and always use facilities that are shared with other experiments as well as other grids. Our implementation represents information in the XML format and includes tools written in XQuery and XSLT.
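The "one site configuration, many instantiated services" idea can be sketched by parsing an XML site description and emitting one service configuration per declared service. The XML schema below is hypothetical (the paper's actual format is not reproduced here), and Python's `xml.etree` stands in for the XQuery/XSLT tooling:

```python
import xml.etree.ElementTree as ET

# Hypothetical site description; the real SAM-Grid schema differs
SITE_XML = """
<site name="ExampleT2">
  <service type="monitoring" port="8080"/>
  <service type="data-handling" port="2811"/>
</site>
"""

def services_from_config(xml_text):
    """Map a single XML site description onto per-service configs,
    as the framework instantiates advertisement, monitoring and
    data-handling services from one site configuration."""
    root = ET.fromstring(xml_text)
    site = root.get("name")
    return [{"site": site, "type": s.get("type"), "port": int(s.get("port"))}
            for s in root.findall("service")]
```

Site administrators then maintain one document per site, and every grid service derives its own configuration from it.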
GMLC Hawaii Regional Partnership: Distributed Inverter-Based Grid Frequency Support
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Austin; Hoke, Andy
This presentation is part of a panel session at the IEEE ISGT conference on Grid Modernization Initiative projects. This segment of the panel session provides a brief overview of a Hawaii Regional Partnership project focusing on grid frequency support from distributed resources on the fastest time scales.
Integrated Network Testbed for Energy Grid Research and Technology
Under the Integrated Network Testbed for Energy Grid Research and Technology Experimentation (INTEGRATE) project, NREL and partners completed five successful technology demonstrations at the ESIF. INTEGRATE is a $6.5-million, cost-shared project.
Recovery Act-SmartGrid regional demonstration transmission and distribution (T&D) Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hedges, Edward T.
This document represents the Final Technical Report for the Kansas City Power & Light Company (KCP&L) Green Impact Zone SmartGrid Demonstration Project (SGDP). The KCP&L project is partially funded by Department of Energy (DOE) Regional Smart Grid Demonstration Project cooperative agreement DE-OE0000221 in the Transmission and Distribution Infrastructure application area. This Final Technical Report summarizes the KCP&L SGDP as of April 30, 2015 and includes summaries of the project design, implementation, operations, and analysis performed as of that date.
Setting Up a Grid-CERT: Experiences of an Academic CSIRT
ERIC Educational Resources Information Center
Moller, Klaus
2007-01-01
Purpose: Grid computing has often been heralded as the next logical step after the worldwide web. Users of grids can access dynamic resources such as computer storage and use the computing resources of computers under the umbrella of a virtual organisation. Although grid computing is often compared to the worldwide web, it is vastly more complex…
NASA Astrophysics Data System (ADS)
Mentis, Dimitrios; Howells, Mark; Rogner, Holger; Korkovelos, Alexandros; Arderne, Christopher; Zepeda, Eduardo; Siyal, Shahid; Taliotis, Costantinos; Bazilian, Morgan; de Roo, Ad; Tanvez, Yann; Oudalov, Alexandre; Scholtz, Ernst
2017-08-01
In September 2015, the United Nations General Assembly adopted Agenda 2030, which comprises a set of 17 Sustainable Development Goals (SDGs) defined by 169 targets. ‘Ensuring access to affordable, reliable, sustainable and modern energy for all by 2030’ is the seventh goal (SDG7). While access to energy refers to more than electricity, the latter is the central focus of this work. According to the World Bank’s 2015 Global Tracking Framework, roughly 15% of the world’s population (or 1.1 billion people) lack access to electricity, and many more rely on poor quality electricity services. The majority of those without access (87%) reside in rural areas. This paper presents results of a geographic information systems approach coupled with open access data. We present least-cost electrification strategies on a country-by-country basis for Sub-Saharan Africa. The electrification options include grid extension, mini-grid and stand-alone systems for rural, peri-urban, and urban contexts across the economy. At low levels of electricity demand there is a strong penetration of stand-alone technologies; higher electricity demand levels move the favourable electrification option from stand-alone systems to mini-grids and then to grid extension.
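The least-cost comparison the paper performs per settlement can be illustrated with a toy cost model: each option's lifetime cost is computed and the cheapest kept. All cost figures below are invented for illustration, not the paper's calibrated values:

```python
def least_cost_option(demand_kwh, km_to_grid, years=20,
                      grid_per_km=10000.0, grid_var=0.05,
                      mg_fixed=5000.0, mg_var=0.25, sa_var=0.60):
    """Pick the cheapest electrification option for one settlement by
    comparing illustrative lifetime costs of grid extension, a
    mini-grid, and stand-alone supply (all $ figures are assumptions)."""
    lifetime = demand_kwh * years          # total kWh over the horizon
    costs = {
        "grid": grid_per_km * km_to_grid + grid_var * lifetime,
        "mini-grid": mg_fixed + mg_var * lifetime,
        "stand-alone": sa_var * lifetime,
    }
    return min(costs, key=costs.get)
```

Even with invented numbers, the model reproduces the paper's qualitative finding: low demand far from the grid favours stand-alone systems, rising demand favours mini-grids, and high demand near the grid favours extension.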
Methodological Approaches for Estimating the Benefits and Costs of Smart Grid Demonstration Projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Russell
This report presents a comprehensive framework for estimating the benefits and costs of Smart Grid projects and a step-by-step approach for making these estimates. The framework identifies the basic categories of benefits, the beneficiaries of these benefits, and the Smart Grid functionalities that lead to different benefits and proposes ways to estimate these benefits, including their monetization. The report covers cost-effectiveness evaluation, uncertainty, and issues in estimating baseline conditions against which a project would be compared. The report also suggests metrics suitable for describing principal characteristics of a modern Smart Grid to which a project can contribute. This first section of the report presents background information on the motivation for the report and its purpose. Section 2 introduces the methodological framework, focusing on the definition of benefits and a sequential, logical process for estimating them. Beginning with the Smart Grid technologies and functions of a project, it maps these functions to the benefits they produce. Section 3 provides a hypothetical example to illustrate the approach. Section 4 describes each of the 10 steps in the approach. Section 5 covers issues related to estimating benefits of the Smart Grid. Section 6 summarizes the next steps. The methods developed in this study will help improve future estimates - both retrospective and prospective - of the benefits of Smart Grid investments. These benefits, including those to consumers, society in general, and utilities, can then be weighed against the investments. Such methods would be useful in total resource cost tests and in societal versions of such tests. As such, the report will be of interest not only to electric utilities, but also to a broad constituency of stakeholders. Significant aspects of the methodology were used by the U.S.
Department of Energy (DOE) to develop its methods for estimating the benefits and costs of its renewable and distributed systems integration demonstration projects as well as its Smart Grid Investment Grant projects and demonstration projects funded under the American Recovery and Reinvestment Act (ARRA). The goal of this report, which was cofunded by the Electric Power Research Institute (EPRI) and DOE, is to present a comprehensive set of methods for estimating the benefits and costs of Smart Grid projects. By publishing this report, EPRI seeks to contribute to the development of methods that will establish the benefits associated with investments in Smart Grid technologies. EPRI does not endorse the contents of this report or make any representations as to the accuracy and appropriateness of its contents. The purpose of this report is to present a methodological framework that will provide a standardized approach for estimating the benefits and costs of Smart Grid demonstration projects. The framework also has broader application to larger projects, such as those funded under the ARRA. Moreover, with additional development, it will provide the means for extrapolating the results of pilots and trials to at-scale investments in Smart Grid technologies. The framework was developed by a panel whose members provided a broad range of expertise.
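The framework's core step, mapping a project's Smart Grid functions to monetized benefits and comparing the total with cost, can be sketched numerically. The function names and dollar figures below are entirely hypothetical, and discounting is omitted for brevity (the actual framework treats it explicitly):

```python
# Hypothetical mapping from Smart Grid functions to annual benefits ($)
FUNCTION_BENEFITS = {
    "fault_location": {"reduced_outage_minutes": 120000.0},
    "volt_var_control": {"energy_savings": 80000.0,
                         "deferred_capacity": 30000.0},
}

def project_benefit_cost(functions, capital_cost, years=10):
    """Sum the monetized benefits mapped to each deployed function,
    scale over the evaluation horizon, and compare with project cost
    (no discounting, for brevity)."""
    annual = sum(v for f in functions for v in FUNCTION_BENEFITS[f].values())
    total = annual * years
    return {"total_benefits": total, "net": total - capital_cost,
            "benefit_cost_ratio": total / capital_cost}
```

The resulting benefit-cost ratio is the kind of figure that feeds total resource cost tests and their societal variants mentioned above.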
Collaborative Science Using Web Services and the SciFlo Grid Dataflow Engine
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Xing, Z.; Yunck, T.
2006-12-01
The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources.
In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. Once an analysis has been specified for a chunk or day of data, it can be easily repeated with different control parameters or over months of data. Recently, the Earth Science Information Partners (ESIP) Federation sponsored a collaborative activity in which several ESIP members advertised their respective WMS/WCS and SOAP services, developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. For several scenarios, the same collaborative workflow was executed in three ways: using hand-coded scripts, by executing a SciFlo document, and by executing a BPEL workflow document. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, and further collaborations that are being pursued.
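The "one-line URL" style of service call that SciFlo choreographs can be illustrated with a short sketch. The endpoint, coverage name, and parameter values below are hypothetical stand-ins, not an actual GENESIS or OGC service; real WMS/WCS servers define their own request vocabularies:

```python
# Sketch of building a REST ("one-line URL") request of the kind a SciFlo
# operator tree would chain together. The base URL and parameters are
# hypothetical; real WCS services define the exact request grammar.
from urllib.parse import urlencode

def rest_url(base, **params):
    """Build a one-line request URL for a hypothetical subsetting service."""
    return base + "?" + urlencode(sorted(params.items()))

# One node of a dataflow: subset a coverage by bounding box and time.
subset = rest_url("http://example.org/wcs", service="WCS", request="GetCoverage",
                  coverage="AIRS_Temperature", bbox="-125,30,-115,40",
                  time="2006-07-01")
print(subset)
```

In an engine like SciFlo or BPEL, the response of one such call becomes the input of the next operator in the tree.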
NASA Astrophysics Data System (ADS)
Lee, Jasper C.; Ma, Kevin C.; Liu, Brent J.
2008-03-01
A Data Grid for medical images has been developed at the Image Processing and Informatics Laboratory (IPILab), USC, to provide distribution and fault-tolerant storage of medical imaging studies across Internet2 and the public domain. Although back-up policies and grid certificates guarantee privacy and authenticity of grid-access-points, there is still no method to guarantee that sensitive DICOM images have not been altered or corrupted during transmission across a public domain. This paper takes steps toward achieving full image transfer security within the Data Grid by utilizing DICOM image authentication and a HIPAA-compliant auditing system. The 3-D lossless digital signature embedding procedure, also developed at the IPILab, involves a private 64-byte signature that is embedded into each original DICOM image volume; on the receiving end, the signature can be extracted and verified following the DICOM transmission. The HIPAA-Compliant Auditing System (H-CAS) is required to monitor embedding and verification events, and allows monitoring of other grid activity as well. The H-CAS system federates the logs of transmission and authentication events at each grid-access-point and stores them in a HIPAA-compliant database. The auditing toolkit is installed at the local grid-access-point and utilizes Syslog [1], a client-server standard for log messaging over an IP network, to send messages to the H-CAS centralized database. By integrating digital image signatures and centralized logging capabilities, DICOM image integrity within the Medical Imaging and Informatics Data Grid can be monitored and guaranteed without any loss of image quality.
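The sign-then-verify pattern described above can be sketched with a detached HMAC-SHA512 digest, which happens to be 64 bytes like the signature mentioned. This is only a conceptual sketch: the actual IPILab procedure embeds the signature losslessly inside the DICOM volume rather than keeping it detached, and the shared key here is an assumption:

```python
# Minimal sign-then-verify sketch for image data, using HMAC-SHA512 (a
# 64-byte digest, matching the signature size mentioned above). The real
# system embeds the signature losslessly inside the DICOM volume; here it
# is kept detached for simplicity, and the key is a made-up assumption.
import hmac, hashlib

KEY = b"shared-grid-secret"  # hypothetical key shared by grid-access-points

def sign(pixel_data: bytes) -> bytes:
    return hmac.new(KEY, pixel_data, hashlib.sha512).digest()  # 64 bytes

def verify(pixel_data: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(pixel_data), signature)

volume = b"\x00\x01\x02\x03"  # stand-in for DICOM pixel data
sig = sign(volume)
assert len(sig) == 64
assert verify(volume, sig)
assert not verify(volume + b"tampered", sig)  # corruption is detected
```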
Generic Divide and Conquer Internet-Based Computing
NASA Technical Reports Server (NTRS)
Radenski, Atanas; Follen, Gregory J. (Technical Monitor)
2001-01-01
The rapid growth of internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of new, internet-oriented software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this research project is to contribute to a better understanding of the transition to internet-based high-performance computing and to develop solutions for some of the difficulties of this transition. More specifically, our goal is to design an architecture for generic divide and conquer internet-based computing, to develop a portable implementation of this architecture, to create an example library of high-performance divide-and-conquer computing agents that run on top of this architecture, and to evaluate the performance of these agents. We have been designing an architecture that incorporates a master task-pool server and utilizes satellite computational servers that operate on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. Our designed architecture is intended to be complementary to and accessible from computational grids such as Globus, Legion, and Condor. Grids provide remote access to existing high-end computing resources; in contrast, our goal is to utilize idle processor time of lower-end internet nodes. Our project is focused on a generic divide-and-conquer paradigm and its applications that operate on a loose and ever-changing pool of lower-end internet nodes.
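The generic divide-and-conquer pattern with a master task pool can be sketched serially: the pool holds subproblems, and each "satellite" step either solves a small task directly or splits it and returns the pieces to the pool. The threshold and the summation task are illustrative only, not the project's agent library:

```python
# Serial sketch of generic divide-and-conquer over a master task-pool.
# In the described architecture each pop would be done by a volunteer
# satellite server; here one loop plays all roles. Threshold is arbitrary.
from collections import deque

def divide_and_conquer(data, threshold=4):
    pool = deque([data])          # master task-pool of pending subproblems
    results = []
    while pool:
        task = pool.popleft()     # a "satellite node" claims a task
        if len(task) <= threshold:
            results.append(sum(task))    # conquer: small enough to solve
        else:
            mid = len(task) // 2
            pool.append(task[:mid])      # divide: push subtasks back
            pool.append(task[mid:])
    return sum(results)           # combine partial results

print(divide_and_conquer(list(range(100))))  # → 4950
```

Because subtasks are independent until the combine step, idle lower-end nodes can claim them in any order, which is what makes the paradigm a good fit for a volunteer pool.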
A Development of Lightweight Grid Interface
NASA Astrophysics Data System (ADS)
Iwai, G.; Kawai, Y.; Sasaki, T.; Watase, Y.
2011-12-01
In order to help the rapid development of Grid/Cloud-aware applications, we have developed an API to abstract distributed computing infrastructures, based on SAGA (A Simple API for Grid Applications). SAGA, which is standardized in the OGF (Open Grid Forum), defines API specifications for access to distributed computing infrastructures, such as Grid, Cloud and local computing resources. The Universal Grid API (UGAPI), a set of command line interfaces (CLIs) and APIs, aims to offer a simpler API that combines several SAGA interfaces with richer functionality. The UGAPI CLIs offer the typical functionality required by end users for job management and file access on the different distributed computing infrastructures as well as local computing resources. We have also built a web interface for particle therapy simulation and demonstrated a large-scale calculation using the different infrastructures at the same time. In this paper, we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources over the different infrastructures, with technical details and practical experiences.
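The job-management pattern that a SAGA-style API abstracts - describe a job, submit it to some backend, inspect its state - can be sketched as follows. The class and method names are hypothetical stand-ins, not actual UGAPI or SAGA signatures; a real implementation would dispatch to Grid, Cloud, or local backends behind the same interface:

```python
# Sketch of the submit-and-inspect pattern a SAGA-style API abstracts.
# JobService and its methods are hypothetical stand-ins, not UGAPI/SAGA
# signatures; this one simply runs the job on the local machine.
import subprocess

class JobService:
    """Stand-in for a job service bound to one computing infrastructure."""
    def submit(self, executable, arguments):
        proc = subprocess.run([executable, *arguments],
                              capture_output=True, text=True)
        return {"state": "Done" if proc.returncode == 0 else "Failed",
                "output": proc.stdout.strip()}

# The same job description could be routed to a Grid or Cloud backend.
job = JobService().submit("echo", ["hello", "grid"])
print(job["state"], job["output"])  # → Done hello grid
```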
NASA Astrophysics Data System (ADS)
Kardan, Farshid; Cheng, Wai-Chi; Baverel, Olivier; Porté-Agel, Fernando
2016-04-01
Understanding, analyzing and predicting meteorological phenomena related to urban planning and the built environment are becoming more essential than ever to architectural and urban projects. Recently, various versions of RANS models have been established, but more validation cases are required to confirm their capability for wind flows. In the present study, the performance of recently developed RANS models, including the RNG k-ε, SST BSL k-ω and SST γ-Reθ, has been evaluated for the flow past a single block (which represents the idealized architectural scale). For validation purposes, the velocity streamlines and the vertical profiles of the mean velocities and variances were compared with published LES and wind tunnel experiment results. Furthermore, additional CFD simulations were performed to analyze the impact of regular/irregular mesh structures and grid resolutions for the selected turbulence models in order to assess grid independence. Three grid resolutions (fine, medium and coarse) of Nx × Ny × Nz = 320 × 80 × 320, 160 × 40 × 160 and 80 × 20 × 80 for the computational domain, corresponding to nx × nz = 26 × 32, 13 × 16 and 6 × 8 grid points on the block edges, were chosen and tested. It can be concluded that among all simulated RANS models, the SST γ-Reθ model performed best and agreed fairly well with the LES simulation and experimental results. It can also be concluded that the SST γ-Reθ model provides very satisfactory results in terms of grid dependence at the fine and medium grid resolutions in both regular and irregular structured meshes. On the other hand, despite a very good performance of the RNG k-ε model at the fine resolution and on regular structured grids, its disappointing performance at the coarse and medium grid resolutions indicates that the RNG k-ε model is highly dependent on grid structure and grid resolution.
These quantitative validations are essential to assess the accuracy of RANS models for the simulation of flow in urban environments.
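A grid-independence study of this kind is often quantified by the observed order of convergence computed from three systematically refined grids (here each direction is refined by a factor of 2, e.g. 80 → 160 → 320 points). The solution values below are made-up numbers for illustration, not results from the study:

```python
# Observed order of convergence from three solutions on systematically
# refined grids (refinement ratio r = 2). The solution values f_* are
# illustrative placeholders, not data from the study above.
import math

r = 2.0
f_fine, f_medium, f_coarse = 1.000, 1.008, 1.040  # some integral quantity

# Richardson-style estimate: p = ln[(f_c - f_m)/(f_m - f_f)] / ln(r)
p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
print(round(p, 2))  # → 2.0
```

An observed order close to the scheme's formal order, and a small change from medium to fine, are the usual evidence that the fine-grid solution is effectively grid-independent.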
A coarse-grid-projection acceleration method for finite-element incompressible flow computations
NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne; FiN Lab Team
2015-11-01
Coarse grid projection (CGP) methodology provides a framework for accelerating computations by performing part of the computation on a coarsened grid. We apply CGP to pressure projection methods for finite element-based incompressible flow simulations. In this approach, the predicted velocity field data are restricted to a coarsened grid, the pressure is determined by solving the Poisson equation on the coarse grid, and the resulting data are prolonged back to the fine grid. The contributions of the CGP method to the pressure correction technique are twofold: first, it substantially lessens the computational cost devoted to the Poisson equation, which is the most time-consuming part of the simulation process; second, it preserves the accuracy of the velocity field. The velocity and pressure spaces are approximated by Galerkin spectral elements using piecewise linear basis functions. A restriction operator is designed so that fine data are directly injected into the coarse grid. The Laplacian and divergence matrices are derived by taking inner products of coarse-grid shape functions. Linear interpolation is implemented to construct a prolongation operator. A study of the data accuracy and the CPU time for the CGP-based versus non-CGP computations is presented. (FiN Lab: Laboratory for Fluid Dynamics in Nature.)
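The restrict-solve-prolong cycle can be sketched in 1D with direct injection and linear interpolation, mirroring the operators described above. The grid sizes and the finite-difference Poisson setup are illustrative only; the paper's solver is finite-element based and multi-dimensional:

```python
# 1D sketch of coarse grid projection: inject fine-grid data onto a coarse
# grid, solve the cheap elliptic problem there, and interpolate back.
# Grid sizes and the finite-difference Poisson setup are illustrative.
import numpy as np

n_fine, n_coarse = 63, 32          # nested grids: every other fine node
rhs_fine = np.sin(np.linspace(0, np.pi, n_fine))

# Restriction: direct injection of every other fine node
rhs_coarse = rhs_fine[::2]

# Coarse Poisson solve: -u'' = f, homogeneous Dirichlet ends, 2nd-order stencil
h = 1.0 / (n_coarse - 1)
A = (np.diag(np.full(n_coarse - 2, 2.0)) +
     np.diag(np.full(n_coarse - 3, -1.0), 1) +
     np.diag(np.full(n_coarse - 3, -1.0), -1)) / h**2
u_coarse = np.zeros(n_coarse)
u_coarse[1:-1] = np.linalg.solve(A, rhs_coarse[1:-1])

# Prolongation: linear interpolation back to the fine grid
x_c = np.linspace(0, 1, n_coarse)
x_f = np.linspace(0, 1, n_fine)
u_fine = np.interp(x_f, x_c, u_coarse)
print(u_fine.shape)  # → (63,)
```

The speedup comes from the linear solve scaling with the coarse size while the nonlinear velocity update stays on the fine grid.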
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Dean N.
2015-01-27
The climate and weather data science community met December 9–11, 2014, in Livermore, California, for the fourth annual Earth System Grid Federation (ESGF) and Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT) Face-to-Face (F2F) Conference, hosted by the Department of Energy, the National Aeronautics and Space Administration, the National Oceanic and Atmospheric Administration, the European Infrastructure for the European Network of Earth System Modelling, and the Australian Department of Education. Both ESGF and UV-CDAT remain global collaborations committed to developing a new generation of open-source software infrastructure that provides distributed access and analysis to simulated and observed data from the climate and weather communities. The tools and infrastructure created under these international multi-agency collaborations are critical to understanding extreme weather conditions and long-term climate change. In addition, the F2F conference fosters a stronger climate and weather data science community and facilitates a stronger federated software infrastructure. The 2014 F2F conference detailed the progress of ESGF, UV-CDAT, and other community efforts over the year and set new priorities and requirements for existing and impending national and international community projects, such as the Coupled Model Intercomparison Project Phase Six. Specifically discussed at the conference were project capabilities and enhancement needs for data distribution, analysis, visualization, hardware and network infrastructure, standards, and resources.
NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne
2016-11-01
Coarse grid projection (CGP) methodology is a novel multigrid method for systems involving decoupled nonlinear evolution equations and linear elliptic equations. The nonlinear equations are solved on a fine grid and the linear equations are solved on a corresponding coarsened grid. Mapping functions transfer data between the two grids. Here we propose a version of CGP for incompressible flow computations using incremental pressure correction methods, called IFEi-CGP (implicit-time-integration, finite-element, incremental coarse grid projection). Incremental pressure correction schemes solve Poisson's equation for an intermediate variable and not the pressure itself. This fact contributes to IFEi-CGP's efficiency in two ways. First, IFEi-CGP preserves the velocity field accuracy even for a high level of pressure field grid coarsening and thus significant speedup is achieved. Second, because incremental schemes reduce the errors that arise from boundaries with artificial homogenous Neumann conditions, CGP generates undamped flows for simulations with velocity Dirichlet boundary conditions. Comparisons of the data accuracy and CPU times for the incremental-CGP versus non-incremental-CGP computations are presented.
HEP Data Grid Applications in Korea
NASA Astrophysics Data System (ADS)
Cho, Kihyeon; Oh, Youngdo; Son, Dongchul; Kim, Bockjoo; Lee, Sangsan
2003-04-01
We will introduce the national HEP Data Grid applications in Korea. Through a five-year HEP Data Grid project (2002-2006) for the CMS, AMS, CDF, PHENIX, K2K and Belle experiments in Korea, the Center for High Energy Physics at Kyungpook National University will construct a 1,000-node PC cluster and a related storage system, such as a 1,200-TByte RAID disk system. The project includes a master plan to construct an Asia Regional Data Center by 2006 for the CMS and AMS experiments and a DCAF (DeCentralized Analysis Farm) for the CDF experiment. During the first year of the project, we have constructed a cluster of around 200 CPUs with 50 TBytes of storage. We will present our first year's experience with the software and hardware applications for the HEP Data Grid on EDG and SAM Grid testbeds.
Enhancement of Local Climate Analysis Tool
NASA Astrophysics Data System (ADS)
Horsfall, F. M.; Timofeyeva, M. M.; Dutton, J.
2012-12-01
The National Oceanic and Atmospheric Administration (NOAA) National Weather Service (NWS) will enhance its Local Climate Analysis Tool (LCAT) to incorporate specific capabilities to meet the needs of various users, including the energy, health, and other communities. LCAT is an online interactive tool that provides quick and easy access to climate data and allows users to conduct analyses at the local level, such as time series analysis, trend analysis, compositing, and correlation and regression techniques, with others to be incorporated as needed. LCAT uses principles of artificial intelligence to connect human and computer perceptions of how data and scientific techniques are applied while multiprocessing simultaneous users' tasks. Future development includes expanding the types of data currently imported by LCAT (historical data at stations and climate divisions) to gridded reanalysis and General Circulation Model (GCM) data, which are available on global grids and thus will allow climate studies to be conducted at international locations. We will describe ongoing activities to incorporate NOAA Climate Forecast System Reanalysis (CFSR) data and NOAA model output data, including output from the North American Multi-Model Ensemble (NMME) prediction system and longer-term projection models. We will also describe plans to integrate LCAT into the Earth System Grid Federation (ESGF) and its protocols for accessing model output and observational data, to ensure there is no redundancy in the development of tools that facilitate scientific advancements and the use of climate model information in applications. Validation and inter-comparison of forecast models will be included as part of the enhancement to LCAT. To ensure sustained development, we will investigate options for open-sourcing LCAT development, in particular through the University Corporation for Atmospheric Research (UCAR).
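One LCAT-style local analysis, fitting a linear trend to a station time series, can be sketched in a few lines. The temperature series here is synthetic, generated with an assumed warming rate plus noise:

```python
# Sketch of a local trend analysis of the kind LCAT offers: fit a linear
# trend to a station temperature time series. The data are synthetic
# (an assumed 0.2 deg/decade trend plus seeded random noise).
import numpy as np

years = np.arange(1980, 2020)
temps = (12.0 + 0.02 * (years - 1980)
         + np.random.default_rng(0).normal(0, 0.3, years.size))

slope, intercept = np.polyfit(years, temps, 1)
print(f"trend: {slope * 10:.2f} deg per decade")
```

Compositing, correlation, and regression analyses mentioned above follow the same shape: select the local series, then apply a standard statistical operation.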
New Multibeam Bathymetry Mosaic at NOAA/NCEI
NASA Astrophysics Data System (ADS)
Varner, J. D.; Cartwright, J.; Rosenberg, A. M.; Amante, C.; Sutherland, M.; Jencks, J. H.
2017-12-01
NOAA's National Centers for Environmental Information (NCEI) maintains an ever-growing archive of multibeam bathymetric data acquired from U.S. and international government and academic sources. The data are partitioned in the individual survey files in which they were originally received, and are stored in various formats not directly accessible by popular analysis and visualization tools. In order to improve the discoverability and accessibility of the data, NCEI created a new Multibeam Bathymetry Mosaic. Each survey was gridded at 3 arcsecond cell size and organized in an ArcGIS mosaic dataset, which was published as a set of standards-based web services usable in desktop GIS and web clients. In addition to providing a "seamless" grid of all surveys, a filter can be applied to isolate individual surveys. Both depth values in meters and shaded relief visualizations are available. The product represents the current state of the archive; no QA/QC was performed on the data before being incorporated, and the mosaic will be updated incrementally as new surveys are added to the archive. We expect the mosaic will address customer needs for visualization/extraction that existing tools (e.g. NCEI's AutoGrid) are unable to meet, and also assist data managers in identifying problem surveys, missing data, quality control issues, etc. This project complements existing efforts such as the Global Multi-Resolution Topography Data Synthesis (GMRT) at LDEO. Comprehensive visual displays of bathymetric data holdings are invaluable tools for seafloor mapping initiatives, such as Seabed 2030, that will aid in minimizing data collection redundancies and ensuring that valuable data are made available to the broadest community.
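The basic gridding step, binning scattered soundings into fixed-size cells and averaging the depths in each cell, can be sketched as follows. The 3 arc-second cell size matches the mosaic described above, but the soundings are synthetic and the averaging rule is an assumption (production gridding pipelines apply more sophisticated weighting and QC):

```python
# Sketch of gridding scattered multibeam soundings into 3 arc-second cells
# by averaging depths per cell. Soundings are synthetic; real pipelines
# apply weighting and quality control rather than a plain mean.
import numpy as np

cell = 3.0 / 3600.0  # 3 arc-seconds in degrees
lon = np.array([-70.0001, -70.0002, -70.0030])
lat = np.array([40.0001, 40.0002, 40.0030])
depth = np.array([-1000.0, -1010.0, -1200.0])

# Integer cell indices for each sounding, relative to the survey corner
ix = np.floor((lon - lon.min()) / cell).astype(int)
iy = np.floor((lat - lat.min()) / cell).astype(int)

# Average all soundings that fall in the same cell
cells = {}
for i, j, d in zip(ix, iy, depth):
    cells.setdefault((i, j), []).append(d)
grid = {k: sum(v) / len(v) for k, v in cells.items()}
print(grid)
```

Gridding every survey to a common cell size is what makes a "seamless" mosaic possible, while keeping per-survey grids allows the filter-by-survey behavior described above.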
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hambrick, J.
2012-01-01
Although implementing Smart Grid projects at the distribution level provides many advantages and opportunities for advanced operation and control, a number of significant challenges must be overcome to maintain the high level of safety and reliability that the modern grid must provide. For example, while distributed generation (DG) promises to provide opportunities to increase reliability and efficiency and may provide grid support services such as volt/var control, the presence of DG can impact distribution operation and protection schemes. Additionally, the intermittent nature of many DG energy sources such as photovoltaics (PV) can present a number of challenges to voltage regulation, etc. This presentation provides an overview of a number of Smart Grid projects being performed by the National Renewable Energy Laboratory (NREL) along with utility, industry, and academic partners. These projects include modeling and analysis of high-penetration PV scenarios (with and without energy storage), development and testing of interconnection and microgrid equipment, as well as the development and implementation of advanced instrumentation and data acquisition used to analyze the impacts of intermittent renewable resources. Additionally, standards development associated with DG interconnection and analysis as well as Smart Grid interoperability will be discussed.
GRID Alternatives: Solar Programs in Underserved Communities
Introduces GRID Alternatives: Solar Programs in Underserved Communities, a program that partners with a variety of organizations to help low-income communities access the benefits of solar technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z J
2012-12-06
The overriding objective for this project is to develop an efficient and accurate method for capturing strong discontinuities and fine smooth flow structures of disparate length scales with unstructured grids, and demonstrate its potential for problems relevant to DOE. More specifically, we plan to achieve the following objectives: 1. Extend the SV method to three dimensions, and develop a fourth-order accurate SV scheme for tetrahedral grids. Optimize the SV partition by minimizing a form of the Lebesgue constant. Verify the order of accuracy using scalar conservation laws with an analytical solution; 2. Extend the SV method to the Navier-Stokes equations for the simulation of viscous flow problems. Two promising approaches to compute the viscous fluxes will be tested and analyzed; 3. Parallelize the 3D viscous SV flow solver using domain decomposition and message passing. Optimize the cache performance of the flow solver by designing data structures that minimize data access times; 4. Demonstrate the SV method on a wide range of flow problems including both discontinuities and complex smooth structures. The objectives remain the same as those outlined in the original proposal. We anticipate no technical obstacles in meeting these objectives.
Using Computing and Data Grids for Large-Scale Science and Engineering
NASA Technical Reports Server (NTRS)
Johnston, William E.
2001-01-01
We use the term "Grid" to refer to a software system that provides uniform and location independent access to geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. These emerging data and computing Grids promise to provide a highly capable and scalable environment for addressing large-scale science problems. We describe the requirements for science Grids, the resulting services and architecture of NASA's Information Power Grid (IPG) and DOE's Science Grid, and some of the scaling issues that have come up in their implementation.
NASA Astrophysics Data System (ADS)
Mentis, Dimitrios; Howells, Mark; Rogner, Holger; Korkovelos, Alexandros; Arderne, Christopher; Siyal, Shahid; Zepeda, Eduardo; Taliotis, Constantinos; Bazilian, Morgan; de Roo, Ad; Tanvez, Yann; Oudalov, Alexandre; Scholtz, Ernst
2017-04-01
In September 2015, the United Nations General Assembly adopted Agenda 2030, which comprises a set of 17 Sustainable Development Goals (SDGs) defined by 169 targets. "Ensuring access to affordable, reliable, sustainable and modern energy for all by 2030" is the seventh goal (SDG7). While access to energy refers to more than electricity, the latter is the central focus of this work. According to the World Bank's 2015 Global Tracking Framework, roughly 15% of the world population (or 1.1 billion people) lack access to electricity, and many more rely on poor-quality electricity services. The majority of those without access (87%) reside in rural areas. This paper presents results of a Geographic Information Systems (GIS) approach coupled with open access data and linked to the Electricity Model Base for Africa (TEMBA), a model that represents each continental African country's electricity supply system. We present least-cost electrification strategies on a country-by-country basis for Sub-Saharan Africa. The electrification options include grid extension, mini-grid and stand-alone systems for rural, peri-urban, and urban contexts across the economy. At low levels of electricity demand there is a strong penetration of stand-alone technologies. However, higher electricity demand levels shift the favourable electrification option from stand-alone systems to mini-grids and grid extensions.
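The per-settlement least-cost choice among the three electrification options can be sketched with a toy cost model: each option has a fixed cost, a per-kWh cost, and (for grid extension) a per-km line cost. All figures below are illustrative assumptions, not TEMBA parameters:

```python
# Toy least-cost electrification choice per settlement. The cost structure
# (fixed + per-kWh + per-km of grid line, amortized over 20 years) and all
# numbers are illustrative assumptions, not TEMBA model parameters.
def levelized_cost(option, annual_demand_kwh, distance_to_grid_km):
    fixed, per_kwh, per_km = {
        "grid":        (500.0, 0.05, 2000.0),   # cheap energy, costly lines
        "mini-grid":   (1500.0, 0.15, 0.0),
        "stand-alone": (800.0, 0.40, 0.0),
    }[option]
    capex = fixed + per_km * distance_to_grid_km
    return (capex / 20 + per_kwh * annual_demand_kwh) / annual_demand_kwh

def best_option(annual_demand_kwh, distance_to_grid_km):
    return min(["grid", "mini-grid", "stand-alone"],
               key=lambda o: levelized_cost(o, annual_demand_kwh, distance_to_grid_km))

print(best_option(100, 50))   # low demand, remote → stand-alone
print(best_option(5000, 2))   # high demand, near the grid → grid
```

Even this toy model reproduces the qualitative result above: low demand favours stand-alone systems, while higher demand tips the balance toward mini-grids and grid extension.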
CERN@school: bringing CERN into the classroom
NASA Astrophysics Data System (ADS)
Whyntie, T.; Cook, J.; Coupe, A.; Fickling, R. L.; Parker, B.; Shearer, N.
2016-04-01
CERN@school brings technology from CERN into the classroom to aid with the teaching of particle physics. It also aims to inspire the next generation of physicists and engineers by giving participants the opportunity to be part of a national collaboration of students, teachers and academics, analysing data obtained from detectors based on the ground and in space to make new, curiosity-driven discoveries at school. CERN@school is based around the Timepix hybrid silicon pixel detector developed by the Medipix 2 Collaboration, which features a 300 μm thick silicon sensor bump-bonded to a Timepix readout ASIC. This defines a 256-by-256 grid of pixels with a pitch of 55 μm, the data from which can be used to visualise ionising radiation in a very accessible way. Broadly speaking, CERN@school consists of: a web portal that allows access to data collected by the Langton Ultimate Cosmic ray Intensity Detector (LUCID) experiment in space and the student-operated Timepix detectors on the ground; a number of Timepix detector kits for ground-based experiments, to be made available to schools for both teaching and research purposes; and educational resources for teachers to use with LUCID data and detector kits in the classroom. By providing access to cutting-edge research equipment and raw data from ground- and space-based experiments, CERN@school hopes to provide the foundation for a programme that meets many of the aims and objectives of CERN and the project's supporting academic and industrial partners. The work presented here provides an update on the status of the programme as supported by the UK Science and Technology Facilities Council (STFC) and the Royal Commission for the Exhibition of 1851. This includes recent results from work with the GridPP Collaboration on using grid resources with schools to run GEANT4 simulations of CERN@school experiments.
netCDF Operators for Rapid Analysis of Measured and Modeled Swath-like Data
NASA Astrophysics Data System (ADS)
Zender, C. S.
2015-12-01
Swath-like data (hereafter SLD) are defined by non-rectangular and/or time-varying spatial grids in which one or more coordinates are multi-dimensional. It is often challenging and time-consuming to work with SLD, including all Level 2 satellite-retrieved data, non-rectangular subsets of Level 3 data, and model data on curvilinear grids. Researchers and data centers want user-friendly, fast, and powerful methods to specify, extract, serve, manipulate, and thus analyze, SLD. To meet these needs, large research-oriented agencies and modeling centers such as NASA, DOE, and NOAA increasingly employ the netCDF Operators (NCO), an open-source scientific data analysis software package applicable to netCDF and HDF data. NCO includes extensive, fast, parallelized regridding features to facilitate analysis and intercomparison of SLD and model data. The remote sensing, weather, and climate modeling and analysis communities face similar problems in handling SLD, including how to easily: 1. Specify and mask irregular regions such as ocean basins and political boundaries in SLD (and rectangular) grids. 2. Bin, interpolate, average, or re-map SLD to regular grids. 3. Derive secondary data from given quality levels of SLD. These common tasks require a data extraction and analysis toolkit that is SLD-friendly and, like NCO, familiar in all these communities. With NCO users can: 1. Quickly project SLD onto the most useful regular grids for intercomparison. 2. Access sophisticated statistical and regridding functions that are robust to missing data and allow easy specification of quality control metrics. These capabilities improve interoperability and software reuse, and, because they apply to SLD, minimize transmission, storage, and handling of unwanted data. While SLD analysis still poses many challenges compared to regularly gridded, rectangular data, the custom analysis scripts SLD once required are now shorter, more powerful, and user-friendly.
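The first task above, masking SLD points inside an irregular region, can be sketched with a simple even-odd ray-casting point-in-polygon test. The polygon and points are made up, and NCO implements such masking with its own internal machinery rather than this code:

```python
# Sketch of masking points inside an irregular region (task 1 above) via
# even-odd ray casting. The "basin" polygon and points are illustrative;
# NCO's masking features implement this kind of test internally.
def in_polygon(x, y, poly):
    """Even-odd ray casting; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

basin = [(0, 0), (4, 0), (4, 3), (0, 3)]  # toy "ocean basin" polygon
points = [(1, 1), (5, 1), (2, 2.5)]       # e.g. swath pixel centers
mask = [in_polygon(px, py, basin) for px, py in points]
print(mask)  # → [True, False, True]
```

The same test works on multi-dimensional coordinate arrays because it only needs each point's (lon, lat) pair, which is exactly what SLD provides.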
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor)
1992-01-01
A relatively small and low-cost system is provided for projecting a large and bright television image onto a screen. A miniature liquid crystal array is driven by video circuitry to produce a pattern of transparencies in the array corresponding to a television image. Light is directed against the rear surface of the array to illuminate it, while a projection lens lies in front of the array to project the image of the array onto a large screen. Grid lines in the liquid crystal array are eliminated by a spatial filter which comprises a negative of the Fourier transform of the grid.
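The grid-line removal principle can be sketched digitally: a periodic grid concentrates its energy in isolated peaks in the Fourier plane, so notching those frequencies (the numerical analogue of the optical negative of the grid's Fourier transform) suppresses the lines. The image and grid period below are synthetic, and the notch is exaggerated to keep the sketch short:

```python
# Numerical sketch of Fourier-plane grid-line removal. A periodic grid maps
# to isolated harmonic peaks in frequency space; blocking them removes the
# lines. Here the test image contains ONLY grid lines on a flat field, so
# keeping just the DC term suffices; a real filter would pass image content.
import numpy as np

n, period = 64, 8
img = np.ones((n, n))
img[::period, :] = 0.0   # dark horizontal grid lines
img[:, ::period] = 0.0   # dark vertical grid lines

F = np.fft.fft2(img)
mask = np.zeros((n, n))
mask[0, 0] = 1.0         # notch filter reduced to its DC pass-band
clean = np.fft.ifft2(F * mask).real

assert np.allclose(clean, clean[0, 0])  # grid lines gone: uniform field
print(round(clean[0, 0], 4))  # → 0.7656 (the mean gray level)
```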
NASA Astrophysics Data System (ADS)
Maechling, P. J.; Jordan, T. H.; Minster, B.; Moore, R.; Kesselman, C.; SCEC ITR Collaboration
2004-12-01
The Southern California Earthquake Center (SCEC), in collaboration with the San Diego Supercomputer Center, the USC Information Sciences Institute, the Incorporated Research Institutions for Seismology, and the U.S. Geological Survey, is developing the Southern California Earthquake Center Community Modeling Environment (CME) under a five-year grant from the National Science Foundation's Information Technology Research (ITR) Program, jointly funded by the Geosciences and Computer and Information Science & Engineering Directorates. The CME system is an integrated geophysical simulation modeling framework that automates the process of selecting, configuring, and executing models of earthquake systems. During the Project's first three years, we have performed fundamental geophysical and information technology research and have also developed substantial system capabilities, software tools, and data collections that can help scientists perform systems-level earthquake science. The CME system provides collaborative tools to facilitate distributed research and development. These collaborative tools are primarily communication tools, providing researchers with access to information in ways that are convenient and useful. The CME system provides collaborators with access to significant computing and storage resources. The computing resources of the Project include in-house servers, Project allocations on the USC High Performance Computing Linux Cluster, as well as allocations on NPACI supercomputers and the TeraGrid. The CME system provides access to SCEC community geophysical models such as the Community Velocity Model, Community Fault Model, Community Crustal Motion Model, and the Community Block Model. The organizations that develop these models often provide access to them, so it is not necessary to use the CME system to access these models.
However, in some cases, the CME system supplements the SCEC community models with utility codes that make it easier to use or access these models. In some cases, the CME system also provides alternatives to the SCEC community models. The CME system hosts a collection of community geophysical software codes. These codes include seismic hazard analysis (SHA) programs developed by the SCEC/USGS OpenSHA group. The CME system also hosts anelastic wave propagation codes, including Kim Olsen's Finite Difference code and Carnegie Mellon's Hercules Finite Element tool chain. The CME system can execute a workflow, that is, a series of geophysical computations using the output of one processing step as the input to a subsequent step. Our workflow capability utilizes grid-based computing software that can submit calculations to a pool of computing resources, as well as data management tools that help us maintain an association between data files and metadata descriptions of those files. The CME system maintains, and provides access to, a collection of valuable geophysical data sets. The current CME Digital Library holdings include a collection of 60 ground motion simulation results calculated by a SCEC/PEER working group and a collection of Green's functions calculated for 33 TriNet broadband receiver sites in the Los Angeles area.
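The chained-computation idea described above (each step's output feeding the next, with every product tracked by metadata) can be sketched in a few lines. This is a toy illustration, not the CME system's actual code; the step names, dict-based "files", and threshold are all hypothetical stand-ins.

```python
# Hypothetical sketch of a chained geophysical workflow: each step consumes
# the previous step's output (plain dicts stand in for data files), and every
# product is recorded with metadata, mirroring the CME system's association
# between data files and their descriptions.

def run_workflow(steps, initial_input):
    """Run steps in order, feeding each output to the next step."""
    catalog = []          # metadata records, one per generated product
    data = initial_input
    for name, func in steps:
        data = func(data)
        catalog.append({"step": name, "product": data})
    return data, catalog

# Toy stand-ins for geophysical codes (e.g. a wave-propagation run followed
# by a hazard computation); the real CME codes are external programs.
def simulate(cfg):
    return {"peak_velocity": cfg["magnitude"] * 0.1}

def hazard(sim):
    return {"exceeds_threshold": sim["peak_velocity"] > 0.5}

result, catalog = run_workflow(
    [("wave_propagation", simulate), ("hazard_analysis", hazard)],
    {"magnitude": 7.0},
)
```

In a real grid workflow each `func` would submit a job to a remote resource and the catalog entries would be persisted to a metadata service rather than kept in memory.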
Data publication with the structural biology data grid supports live analysis
Meyer, Peter A.; Socias, Stephanie; Key, Jason; ...
2016-03-07
Access to experimental X-ray diffraction image data is fundamental for validation and reproduction of macromolecular models and indispensable for development of structural biology processing methods. Here, we established a diffraction data publication and dissemination system, Structural Biology Data Grid (SBDG; data.sbgrid.org), to preserve primary experimental data sets that support scientific publications. Data sets are accessible to researchers through a community driven data grid, which facilitates global data access. Our analysis of a pilot collection of crystallographic data sets demonstrates that the information archived by SBDG is sufficient to reprocess data to statistics that meet or exceed the quality of the original published structures. SBDG has extended its services to the entire community and is used to develop support for other types of biomedical data sets. In conclusion, it is anticipated that access to the experimental data sets will enhance the paradigm shift in the community towards a much more dynamic body of continuously improving data analysis.
Aklin, Michaël; Bayer, Patrick; Harish, S P; Urpelainen, Johannes
2017-05-01
This article assesses the socioeconomic effects of solar microgrids. The lack of access to electricity is a major obstacle to the socioeconomic development of more than a billion people. Off-grid solar technologies hold potential as an affordable and clean solution to satisfy basic electricity needs. We conducted a randomized field experiment in India to estimate the causal effect of off-grid solar power on electricity access and broader socioeconomic development of 1281 rural households. Within a year, electrification rates in the treatment group increased by 29 to 36 percentage points. Daily hours of access to electricity increased only by 0.99 to 1.42 hours, and the confidence intervals are wide. Kerosene expenditure on the black market decreased by 47 to 49 rupees per month. Despite these strong electrification and expenditure effects, we found no systematic evidence for changes in savings, spending, business creation, time spent working or studying, or other broader indicators of socioeconomic development.
Aklin, Michaël; Bayer, Patrick; Harish, S. P.; Urpelainen, Johannes
2017-01-01
This article assesses the socioeconomic effects of solar microgrids. The lack of access to electricity is a major obstacle to the socioeconomic development of more than a billion people. Off-grid solar technologies hold potential as an affordable and clean solution to satisfy basic electricity needs. We conducted a randomized field experiment in India to estimate the causal effect of off-grid solar power on electricity access and broader socioeconomic development of 1281 rural households. Within a year, electrification rates in the treatment group increased by 29 to 36 percentage points. Daily hours of access to electricity increased only by 0.99 to 1.42 hours, and the confidence intervals are wide. Kerosene expenditure on the black market decreased by 47 to 49 rupees per month. Despite these strong electrification and expenditure effects, we found no systematic evidence for changes in savings, spending, business creation, time spent working or studying, or other broader indicators of socioeconomic development. PMID:28560328
Data publication with the structural biology data grid supports live analysis.
Meyer, Peter A; Socias, Stephanie; Key, Jason; Ransey, Elizabeth; Tjon, Emily C; Buschiazzo, Alejandro; Lei, Ming; Botka, Chris; Withrow, James; Neau, David; Rajashankar, Kanagalaghatta; Anderson, Karen S; Baxter, Richard H; Blacklow, Stephen C; Boggon, Titus J; Bonvin, Alexandre M J J; Borek, Dominika; Brett, Tom J; Caflisch, Amedeo; Chang, Chung-I; Chazin, Walter J; Corbett, Kevin D; Cosgrove, Michael S; Crosson, Sean; Dhe-Paganon, Sirano; Di Cera, Enrico; Drennan, Catherine L; Eck, Michael J; Eichman, Brandt F; Fan, Qing R; Ferré-D'Amaré, Adrian R; Fromme, J Christopher; Garcia, K Christopher; Gaudet, Rachelle; Gong, Peng; Harrison, Stephen C; Heldwein, Ekaterina E; Jia, Zongchao; Keenan, Robert J; Kruse, Andrew C; Kvansakul, Marc; McLellan, Jason S; Modis, Yorgo; Nam, Yunsun; Otwinowski, Zbyszek; Pai, Emil F; Pereira, Pedro José Barbosa; Petosa, Carlo; Raman, C S; Rapoport, Tom A; Roll-Mecak, Antonina; Rosen, Michael K; Rudenko, Gabby; Schlessinger, Joseph; Schwartz, Thomas U; Shamoo, Yousif; Sondermann, Holger; Tao, Yizhi J; Tolia, Niraj H; Tsodikov, Oleg V; Westover, Kenneth D; Wu, Hao; Foster, Ian; Fraser, James S; Maia, Filipe R N C; Gonen, Tamir; Kirchhausen, Tom; Diederichs, Kay; Crosas, Mercè; Sliz, Piotr
2016-03-07
Access to experimental X-ray diffraction image data is fundamental for validation and reproduction of macromolecular models and indispensable for development of structural biology processing methods. Here, we established a diffraction data publication and dissemination system, Structural Biology Data Grid (SBDG; data.sbgrid.org), to preserve primary experimental data sets that support scientific publications. Data sets are accessible to researchers through a community driven data grid, which facilitates global data access. Our analysis of a pilot collection of crystallographic data sets demonstrates that the information archived by SBDG is sufficient to reprocess data to statistics that meet or exceed the quality of the original published structures. SBDG has extended its services to the entire community and is used to develop support for other types of biomedical data sets. It is anticipated that access to the experimental data sets will enhance the paradigm shift in the community towards a much more dynamic body of continuously improving data analysis.
Data publication with the structural biology data grid supports live analysis
Meyer, Peter A.; Socias, Stephanie; Key, Jason; Ransey, Elizabeth; Tjon, Emily C.; Buschiazzo, Alejandro; Lei, Ming; Botka, Chris; Withrow, James; Neau, David; Rajashankar, Kanagalaghatta; Anderson, Karen S.; Baxter, Richard H.; Blacklow, Stephen C.; Boggon, Titus J.; Bonvin, Alexandre M. J. J.; Borek, Dominika; Brett, Tom J.; Caflisch, Amedeo; Chang, Chung-I; Chazin, Walter J.; Corbett, Kevin D.; Cosgrove, Michael S.; Crosson, Sean; Dhe-Paganon, Sirano; Di Cera, Enrico; Drennan, Catherine L.; Eck, Michael J.; Eichman, Brandt F.; Fan, Qing R.; Ferré-D'Amaré, Adrian R.; Christopher Fromme, J.; Garcia, K. Christopher; Gaudet, Rachelle; Gong, Peng; Harrison, Stephen C.; Heldwein, Ekaterina E.; Jia, Zongchao; Keenan, Robert J.; Kruse, Andrew C.; Kvansakul, Marc; McLellan, Jason S.; Modis, Yorgo; Nam, Yunsun; Otwinowski, Zbyszek; Pai, Emil F.; Pereira, Pedro José Barbosa; Petosa, Carlo; Raman, C. S.; Rapoport, Tom A.; Roll-Mecak, Antonina; Rosen, Michael K.; Rudenko, Gabby; Schlessinger, Joseph; Schwartz, Thomas U.; Shamoo, Yousif; Sondermann, Holger; Tao, Yizhi J.; Tolia, Niraj H.; Tsodikov, Oleg V.; Westover, Kenneth D.; Wu, Hao; Foster, Ian; Fraser, James S.; Maia, Filipe R. N C.; Gonen, Tamir; Kirchhausen, Tom; Diederichs, Kay; Crosas, Mercè; Sliz, Piotr
2016-01-01
Access to experimental X-ray diffraction image data is fundamental for validation and reproduction of macromolecular models and indispensable for development of structural biology processing methods. Here, we established a diffraction data publication and dissemination system, Structural Biology Data Grid (SBDG; data.sbgrid.org), to preserve primary experimental data sets that support scientific publications. Data sets are accessible to researchers through a community driven data grid, which facilitates global data access. Our analysis of a pilot collection of crystallographic data sets demonstrates that the information archived by SBDG is sufficient to reprocess data to statistics that meet or exceed the quality of the original published structures. SBDG has extended its services to the entire community and is used to develop support for other types of biomedical data sets. It is anticipated that access to the experimental data sets will enhance the paradigm shift in the community towards a much more dynamic body of continuously improving data analysis. PMID:26947396
Ford Plug-In Project: Bringing PHEVs to Market Demonstration and Validation Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Annunzio, Julie; Slezak, Lee; Conley, John Jason
2014-03-26
This project is in support of our national goal to reduce our dependence on fossil fuels. By supporting efforts that contribute toward the successful mass production of plug-in hybrid electric vehicles, our nation’s transportation-related fuel consumption can be offset with energy from the grid. Over four and a half years ago, when this project was originally initiated, plug-in electric vehicles were not readily available in the mass marketplace. Through the creation of a 21-unit plug-in hybrid vehicle fleet, this program was designed to demonstrate the feasibility of the technology and to help build cross-industry familiarity with the technology and its interface with the grid. Since then, however, plug-in vehicles have become increasingly commonplace in the market. Ford, itself, now offers an all-electric vehicle and two plug-in hybrid vehicles in North America and has announced a third plug-in vehicle offering for Europe. Lessons learned from this project have helped in these production vehicle launches and are mentioned throughout this report. While the technology of plugging in a vehicle to charge a high voltage battery with energy from the grid is now in production, the ability for vehicle-to-grid or bi-directional energy flow was farther away than originally expected. Several technical, regulatory and potential safety issues prevented progress on the vehicle-to-grid energy flow (V2G) demonstration and, after a review with the DOE, V2G was removed from this demonstration project. Also proving challenging were communications between a plug-in vehicle and the grid or smart meter. While this project successfully demonstrated the vehicle to smart meter interface, cross-industry and regulatory work is still needed to define the vehicle-to-grid communication interface.
HTTP as a Data Access Protocol: Trials with XrootD in CMS’s AAA Project
NASA Astrophysics Data System (ADS)
Balcas, J.; Bockelman, B. P.; Kcira, D.; Newman, H.; Vlimant, J.; Hendricks, T. W.
2017-10-01
The main goal of the project is to demonstrate the ability to use HTTP data federations in a manner analogous to the existing AAA infrastructure of the CMS experiment. An initial testbed at Caltech has been built, and changes in the CMS software (CMSSW) are being implemented in order to improve HTTP support. The testbed consists of a set of machines at the Caltech Tier2 that improve the support infrastructure for data federations at CMS. As a first step, we are building systems that produce and ingest network data transfers of up to 80 Gbps. In collaboration with AAA, HTTP support is enabled at the US redirector and the Caltech testbed. A plugin for CMSSW is being developed for HTTP access based on the DaviX software. It will replace the present fork/exec of curl for HTTP access. In addition, extensions to the XRootD HTTP implementation are being developed to add functionality to it, such as client-based monitoring identifiers. In the future, patches will be developed to better integrate HTTP-over-XRootD with the Open Science Grid (OSG) distribution. First results of the transfer tests using HTTP are presented in this paper together with details about the initial setup.
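The kind of access such an HTTP plugin performs is partial-file reads: physics files are read in small chunks, so the client issues HTTP Range requests rather than downloading whole files. A minimal stdlib sketch (the URL is a made-up placeholder, and this is not the DaviX plugin itself):

```python
# Sketch of the partial-file read pattern behind HTTP-based data access:
# request `length` bytes starting at `offset` via an HTTP Range header.
import urllib.request

def range_request(url, offset, length):
    """Build a GET request for a byte range [offset, offset + length)."""
    req = urllib.request.Request(url)
    # HTTP byte ranges are inclusive on both ends.
    req.add_header("Range", "bytes=%d-%d" % (offset, offset + length - 1))
    return req

# Hypothetical file URL; a real client would then call urlopen(req)
# and read the returned chunk.
req = range_request("https://example.org/store/data/file.root", 1024, 4096)
```

A server that honors the header replies with status 206 (Partial Content) and only the requested bytes, which is what makes sparse ROOT-file access over HTTP efficient.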
Grids and clouds in the Czech NGI
NASA Astrophysics Data System (ADS)
Kundrát, Jan; Adam, Martin; Adamová, Dagmar; Chudoba, Jiří; Kouba, Tomáš; Lokajíček, Miloš; Mikula, Alexandr; Říkal, Václav; Švec, Jan; Vohnout, Rudolf
2016-09-01
There are several infrastructure operators within the Czech Republic NGI (National Grid Initiative) which provide users with access to high-performance computing facilities over a grid and cloud interface. This article focuses on those where the primary author has personal first-hand experience. We cover some operational issues as well as the history of these facilities.
A Schema for Extraction of Indoor Pedestrian Navigation Grid Network from Floor Plans
NASA Astrophysics Data System (ADS)
Niu, Lei; Song, Yiquan
2016-06-01
Indoor navigation tasks such as emergency evacuation call for efficient solutions for handling data sources. Therefore, the extraction of navigation grids from existing floor plans draws attention. To this end, we have to thoroughly analyse the source data, such as AutoCAD DXF files. We can then establish a sound navigation solution, which first complements the basic navigation rectangle boundaries, then subdivides these rectangles, and finally generates accessible networks from these refined rectangles. Test files are introduced to validate the whole workflow and evaluate the solution's performance. In conclusion, we have achieved the preliminary step of forming an accessible network from the navigation grids.
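The final step described above (linking refined rectangles into an accessible network) can be sketched with axis-aligned rectangles: two rectangles are connected when they share a boundary segment. This is an illustrative simplification of the paper's schema, with made-up room coordinates:

```python
# Hypothetical sketch of building an accessible network from navigation
# rectangles: rectangles that abut along a shared edge become graph neighbors.

def touches(a, b):
    """True if axis-aligned rectangles (x1, y1, x2, y2) share an edge segment."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    share_x = ax1 < bx2 and bx1 < ax2          # intervals overlap in x
    share_y = ay1 < by2 and by1 < ay2          # intervals overlap in y
    meet_x = ax2 == bx1 or bx2 == ax1          # abut along a vertical edge
    meet_y = ay2 == by1 or by2 == ay1          # abut along a horizontal edge
    return (meet_x and share_y) or (meet_y and share_x)

def build_network(rects):
    """Edges between indices of rectangles that share a boundary segment."""
    return [(i, j) for i in range(len(rects))
            for j in range(i + 1, len(rects)) if touches(rects[i], rects[j])]

# Three toy rooms: 0 and 1 share a vertical wall, 0 and 2 a horizontal one.
rooms = [(0, 0, 2, 2), (2, 0, 4, 2), (0, 2, 2, 4)]
edges = build_network(rooms)
```

A pedestrian route query then reduces to a shortest-path search over these edges, e.g. between rectangle centroids.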
Energy storage at the threshold: Smart mobility and the grid of the future
NASA Astrophysics Data System (ADS)
Crabtree, George
2018-01-01
Energy storage is poised to drive transformations in transportation and the electricity grid that personalize access to mobility and energy services, not unlike the transformation of smart phones that personalized access to people and information. Storage will work with other emerging technologies such as electric vehicles, ride-sharing, self-driving and connected cars in transportation and with renewable generation, distributed energy resources and smart energy management on the grid to create mobility and electricity as services matched to customer needs replacing the conventional one-size-fits-all approach. This survey outlines the prospects, challenges and impacts of the coming mobility and electricity transformations.
The LHCb Grid Simulation: Proof of Concept
NASA Astrophysics Data System (ADS)
Hushchyn, M.; Ustyuzhanin, A.; Arzymatov, K.; Roiser, S.; Baranov, A.
2017-10-01
The Worldwide LHC Computing Grid provides researchers in different geographical locations with access to data and the computational resources to analyze it. The grid has a hierarchical topology, with multiple sites distributed over the world with varying numbers of CPUs, amounts of disk storage and connection bandwidth. Job scheduling and data distribution strategy are key elements of grid performance. Optimizing algorithms for those tasks requires testing them on a real grid, which is hard to achieve. Having a grid simulator might simplify this task and therefore lead to more optimal scheduling and data placement algorithms. In this paper we demonstrate a grid simulator for the LHCb distributed computing software.
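The scheduling question such a simulator explores can be shown with a toy model: jobs of varying CPU cost are placed on sites of varying capacity, and the finish times depend on the placement strategy. This is not the LHCb simulator; the greedy strategy, job costs and site sizes below are illustrative:

```python
# Toy grid-scheduling sketch: place each job on the currently least-loaded
# site, largest jobs first, and report per-site finish times.
import heapq

def schedule(jobs, site_cpus):
    """Greedy placement of job costs onto sites with given CPU counts.

    Returns the finish time of each site (accumulated cost / CPU count)."""
    heap = [(0.0, s) for s in range(len(site_cpus))]   # (finish_time, site)
    heapq.heapify(heap)
    finish = [0.0] * len(site_cpus)
    for cost in sorted(jobs, reverse=True):            # largest jobs first
        t, s = heapq.heappop(heap)
        finish[s] = t + cost / site_cpus[s]
        heapq.heappush(heap, (finish[s], s))
    return finish

# Two sites (2 CPUs and 1 CPU) and three jobs of cost 4, 4 and 2.
finish_times = schedule([4, 4, 2], site_cpus=[2, 1])
```

A simulator lets one swap this greedy policy for alternatives (data-locality-aware, bandwidth-aware) and compare makespans without touching the production grid.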
Distributed Accounting on the Grid
NASA Technical Reports Server (NTRS)
Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.
2001-01-01
By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high performance computational and data storage resources together into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the Computational Grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.
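The core of distributed usage accounting is merging per-site records into consistent per-user totals so allocations can be charged grid-wide. A minimal sketch (the record fields are illustrative, not the paper's actual schema):

```python
# Minimal distributed-accounting sketch: each grid site emits local usage
# records, and an aggregator merges them into per-user CPU-hour totals.
from collections import defaultdict

def aggregate(site_records):
    """Merge per-site usage records into total CPU-hours per user."""
    totals = defaultdict(float)
    for records in site_records:               # one record list per site
        for rec in records:
            totals[rec["user"]] += rec["cpu_hours"]
    return dict(totals)

# Two sites reporting independently; "alice" used resources at both.
usage = aggregate([
    [{"user": "alice", "cpu_hours": 10.0}, {"user": "bob", "cpu_hours": 2.5}],
    [{"user": "alice", "cpu_hours": 5.0}],
])
```

In a fully distributed design each site would publish such records to its peers rather than to a central collector, but the merge step is the same.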
Blast2GO goes grid: developing a grid-enabled prototype for functional genomics analysis.
Aparicio, G; Götz, S; Conesa, A; Segrelles, D; Blanquer, I; García, J M; Hernandez, V; Robles, M; Talon, M
2006-01-01
The vast amount and complexity of data generated in genomic research implies that new dedicated and powerful computational tools need to be developed to meet their analysis requirements. Blast2GO (B2G) is a bioinformatics tool for Gene Ontology-based DNA or protein sequence annotation and function-based data mining. The application has been developed with the aim of offering an easy-to-use tool for functional genomics research. Typical B2G users are middle-size genomics labs carrying out sequencing, EST and microarray projects, handling datasets of up to several thousand sequences. In the current version of B2G, the power and analytical potential of both annotation and function data mining is somewhat restricted by the computational power behind each particular installation. In order to offer the possibility of an enhanced computational capacity within this bioinformatics application, a Grid component is being developed. A prototype has been conceived for the particular problem of speeding up the Blast searches to obtain fast results for large datasets. Many efforts have been made in the literature concerning the speeding up of Blast searches, but few of them deal with the use of large heterogeneous production Grid infrastructures. These are the infrastructures that could reach the largest number of resources and the best load balancing for data access. The Grid Service under development will analyse requests based on the number of sequences, splitting them according to the available resources. Lower-level computation will be performed through MPIBLAST. The software architecture is based on the WSRF standard.
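The splitting step described above (dividing a query set according to the available resources, so each chunk can be searched independently) can be sketched as a simple round-robin partition. This is an illustrative stand-in, not the actual Grid Service's splitter:

```python
# Hypothetical sketch of splitting a BLAST query set across grid resources:
# sequences are dealt round-robin into one chunk per available resource,
# and each chunk would then be submitted as an independent search job.

def split_queries(sequences, n_resources):
    """Partition sequence IDs into n_resources balanced chunks."""
    chunks = [[] for _ in range(n_resources)]
    for i, seq in enumerate(sequences):
        chunks[i % n_resources].append(seq)
    return chunks

# Seven toy sequence IDs split across three grid resources.
chunks = split_queries(["seq%d" % i for i in range(7)], n_resources=3)
```

A production splitter would weight chunk sizes by each resource's capacity rather than splitting evenly, and merge the per-chunk BLAST reports afterwards.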
NASA Technical Reports Server (NTRS)
Milesi, Cristina; Costa-Cabral, Mariza; Rath, John; Mills, William; Roy, Sujoy; Thrasher, Bridget; Wang, Weile; Chiang, Felicia; Loewenstein, Max; Podolske, James
2014-01-01
Water resource managers planning for the adaptation to future events of extreme precipitation now have access to high-resolution downscaled daily projections derived from statistical bias correction and constructed analogs. We also show that along the Pacific Coast the Northern Oscillation Index (NOI) is a reliable predictor of storm likelihood, and therefore a predictor of seasonal precipitation totals and of the likelihood of extremely intense precipitation. Such time series can be used to project intensity-duration curves into the future or as input to stormwater models. However, few climate projection studies have explored the impact of the type of downscaling method used on the range and uncertainty of predictions for local flood protection studies. Here we present a study of future climate flood risk at NASA Ames Research Center, located in the South Bay Area, by comparing the range of predictions of extreme precipitation events calculated from three sets of time series downscaled from CMIP5 data: 1) the Bias Correction Constructed Analogs method dataset downscaled to a 1/8 degree (12 km) grid; 2) the Bias Correction Spatial Disaggregation method downscaled to a 1 km grid; 3) a statistical model of extreme daily precipitation events and projected NOI from CMIP5 models. In addition, predicted years of extreme precipitation are used to estimate the risk of overtopping of the retention pond located on the site through simulations of the EPA SWMM hydrologic model. Preliminary results indicate that the intensity of extreme precipitation events is expected to increase and flood the NASA Ames retention pond. The results from these estimations will assist flood protection managers in planning for infrastructure adaptations.
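One elementary analysis step implied above is estimating return periods of extreme daily precipitation from annual maxima. A minimal sketch using the Weibull plotting position (the rainfall values are made up; the study's actual fitting is more sophisticated):

```python
# Simple return-period sketch: rank a series of annual precipitation maxima
# and assign each the empirical Weibull return period T = (n + 1) / rank.

def return_periods(annual_maxima):
    """Map each annual maximum (largest first) to its return period in years."""
    ranked = sorted(annual_maxima, reverse=True)
    n = len(ranked)
    return [(value, (n + 1) / rank)
            for rank, value in enumerate(ranked, start=1)]

# Four toy annual maxima in mm/day.
pairs = return_periods([80.0, 95.0, 60.0, 120.0])
```

Comparing such curves computed from differently downscaled series is one way to expose the sensitivity of flood-risk estimates to the downscaling method.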
Daymet: Daily Surface Weather Data on a 1-km Grid for North America, Version 2.
NASA Astrophysics Data System (ADS)
Devarakonda, R.
2014-12-01
Daymet: Daily Surface Weather Data and Climatological Summaries provides gridded estimates of daily weather parameters for North America, including daily continuous surfaces of minimum and maximum temperature, precipitation occurrence and amount, humidity, shortwave radiation, snow water equivalent, and day length. The current data product (Version 2) covers the period January 1, 1980 to December 31, 2013 [1]. Data are available on a daily time step at a 1-km x 1-km spatial resolution in Lambert Conformal Conic projection, with a spatial extent that covers North America as meteorological station density allows. Daymet data can be downloaded from 1) the ORNL Distributed Active Archive Center (DAAC) search and order tools (http://daac.ornl.gov/cgi-bin/cart/add2cart.pl?add=1219) or directly from the DAAC FTP site (http://daac.ornl.gov/cgi-bin/dsviewer.pl?ds_id=1219) and 2) the Single Pixel Tool (http://daymet.ornl.gov/singlepixel.html) and the THREDDS (Thematic Real-time Environmental Data Services) Data Server (TDS) (http://daymet.ornl.gov/thredds_mosaics.html). The Single Pixel Data Extraction Tool [2] allows users to enter a single geographic point by latitude and longitude in decimal degrees. A routine is executed that translates the (lon, lat) coordinates into projected Daymet (x, y) coordinates. These coordinates are used to access the Daymet database of daily-interpolated surface weather variables. The Single Pixel Data Extraction Tool also provides the option to download multiple coordinates programmatically. The ORNL DAAC's TDS provides customized visualization and access to Daymet time series of North American mosaics. Users can subset and download Daymet data via a variety of community standards, including OPeNDAP, the NetCDF Subset service, and the Open Geospatial Consortium (OGC) Web Map/Coverage Service. References: [1] Thornton, P. E., Thornton, M. M., Mayer, B. W., Wilhelmi, N., Wei, Y., Devarakonda, R., & Cook, R. (2012). 
"Daymet: Daily surface weather on a 1 km grid for North America, 1980-2008". Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center for Biogeochemical Dynamics (DAAC). [2] Devarakonda R., et al. 2012. Daymet: Single Pixel Data Extraction Tool. Available [http://daymet.ornl.gov/singlepixel.html].
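The (lon, lat) to Daymet (x, y) translation performed by the Single Pixel tool can be sketched as a Lambert Conformal Conic forward projection. The parameters below match Daymet's published projection (standard parallels 25 N and 60 N, origin at 42.5 N, 100 W), but the spherical formulas here are a simplification of the ellipsoidal ones production code would use:

```python
# Spherical Lambert Conformal Conic forward projection, as a sketch of the
# lon/lat -> projected x/y step used to index the Daymet grid.
import math

R = 6378137.0                                   # sphere radius, metres
P1, P2 = math.radians(25), math.radians(60)     # standard parallels
LAT0, LON0 = math.radians(42.5), math.radians(-100)   # projection origin

def _t(phi):
    return math.tan(math.pi / 4 + phi / 2)

# Cone constant, scale factor and origin radius (standard LCC formulas).
N = math.log(math.cos(P1) / math.cos(P2)) / math.log(_t(P2) / _t(P1))
F = math.cos(P1) * _t(P1) ** N / N
RHO0 = R * F / _t(LAT0) ** N

def lcc_forward(lon_deg, lat_deg):
    """Project geographic degrees to metres east/north of the origin."""
    lam, phi = math.radians(lon_deg), math.radians(lat_deg)
    rho = R * F / _t(phi) ** N
    x = rho * math.sin(N * (lam - LON0))
    y = RHO0 - rho * math.cos(N * (lam - LON0))
    return x, y

x0, y0 = lcc_forward(-100.0, 42.5)   # the projection origin maps to (0, 0)
```

The projected coordinates are then snapped to the nearest 1-km grid cell to look up the interpolated daily surfaces.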
GridAPPS-D Conceptual Design v1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melton, Ronald B.; Schneider, Kevin P.; McDermott, Thomas E.
2017-05-31
The purpose of this document is to provide a conceptual design of the distribution system application development platform being developed for the U.S. Department of Energy’s Advanced Distribution Management System (ADMS) Program by the Grid Modernization Laboratory Consortium project GM0063. The platform will be referred to as GridAPPS-D. This document provides a high level, conceptual view of the platform and provides related background and contextual information. This document is intended to both educate readers about the technical work of the project and to serve as a point of reference for the project team. The document will be updated as the project progresses.
The Climate-G Portal: a Grid Enabled Scientific Gateway for Climate Change
NASA Astrophysics Data System (ADS)
Fiore, Sandro; Negro, Alessandro; Aloisio, Giovanni
2010-05-01
Grid portals are web gateways aiming at concealing the underlying infrastructure through pervasive, transparent, user-friendly, ubiquitous and seamless access to heterogeneous and geographically spread resources (i.e. storage, computational facilities, services, sensors, networks, databases). Ultimately, they provide an enhanced problem-solving environment able to deal with modern, large-scale scientific and engineering problems. Scientific gateways can introduce a revolution in the way scientists and researchers organize and carry out their activities. Access to distributed resources, complex workflow capabilities, and community-oriented functionalities are just some of the features that can be provided by such a web-based environment. In the context of the EGEE NA4 Earth Science Cluster, Climate-G is a distributed testbed focusing on climate change research topics. The Euro-Mediterranean Center for Climate Change (CMCC) is actively participating in the testbed, providing the scientific gateway (Climate-G Portal) used to access the entire infrastructure. The Climate-G Portal has to face important and critical challenges and satisfy key requirements. In the following, the most relevant ones are presented and discussed. Transparency: the portal has to provide transparent access to the underlying infrastructure, shielding users from low-level details and the complexity of a distributed grid environment. Security: users must be authenticated and authorized on the portal to access and exploit portal functionalities. A wide set of roles is needed to clearly assign the proper one to each user. Access to the computational grid must be completely secured, since the target infrastructure for running jobs is a production grid environment. A security infrastructure (based on X.509v3 digital certificates) is strongly needed. Pervasiveness and ubiquity: access to the system must be pervasive and ubiquitous. 
This follows naturally from the web-based approach. Usability and simplicity: the portal has to provide simple, high-level and user-friendly interfaces to ease the access and exploitation of the entire system. Coexistence of general-purpose and domain-oriented services: along with general-purpose services (file transfer, job submission, etc.), the portal has to provide domain-based services and functionalities. Subsetting of data, visualization of 2D maps around a virtual globe, and delivery of maps through OGC-compliant interfaces (i.e. Web Map Service - WMS) are just some examples. Since April 2009, about 70 users (85% coming from the climate change community) have been granted access to the portal. A key challenge of this work is the idea of providing users with an integrated working environment, that is, a place where scientists can find huge amounts of data, complete metadata support, a wide set of data access services, data visualization and analysis tools, easy access to the underlying grid infrastructure and advanced monitoring interfaces.
The Earth System Grid Federation (ESGF) Project
NASA Astrophysics Data System (ADS)
Carenton-Madiec, Nicolas; Denvil, Sébastien; Greenslade, Mark
2015-04-01
The Earth System Grid Federation (ESGF) Peer-to-Peer (P2P) enterprise system is a collaboration that develops, deploys and maintains software infrastructure for the management, dissemination, and analysis of model output and observational data. ESGF's primary goal is to facilitate advancements in Earth System Science. It is an interagency and international effort led by the US Department of Energy (DOE), and co-funded by the National Aeronautics and Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA), the National Science Foundation (NSF), the Infrastructure for the European Network of Earth System Modelling (IS-ENES) and international laboratories such as the Max Planck Institute for Meteorology (MPI-M), the German Climate Computing Centre (DKRZ), the Australian National University (ANU) National Computational Infrastructure (NCI), the Institut Pierre-Simon Laplace (IPSL), and the British Atmospheric Data Center (BADC). Its main mission is to support current CMIP5 activities and prepare for future assessments. The ESGF architecture is based on a system of autonomous and distributed nodes, which interoperate through common acceptance of federation protocols and trust agreements. Data are stored at multiple nodes around the world and served through local data and metadata services. Nodes exchange information about their data holdings and services, and trust each other to register users and establish access control decisions. The net result is that a user can use a web browser, connect to any node, and seamlessly find and access data throughout the federation. This collaborative working organization and distributed architecture highlighted the need to define integration and testing processes to ensure the quality of software releases and interoperability. This presentation will introduce the ESGF project and demonstrate the range of tools and processes that have been set up to support release management activities.
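The "connect to any node, find data throughout the federation" model can be illustrated by how a client composes a search query: any node exposes a search endpoint and facet-style key=value constraints go into the query string. This is a hedged sketch only; the endpoint path, node hostname and facet names below are assumptions for illustration, not authoritative API details:

```python
# Sketch of composing a federated-search URL against an arbitrary node.
# The "/esg-search/search" path and the facet names are illustrative.
from urllib.parse import urlencode

def build_search_url(node, **facets):
    """Compose a facet-constrained search URL against any federation node."""
    facets.setdefault("format", "application/solr+json")
    query = urlencode(sorted(facets.items()))
    return "https://%s/esg-search/search?%s" % (node, query)

# Hypothetical node and facets: surface air temperature from CMIP5.
url = build_search_url("esgf-node.example.org", project="CMIP5",
                       variable="tas")
```

Because every node indexes the federation's shared catalog, the same query issued to a different node should return the same holdings.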
Access to Emissions Distributions and Related Ancillary Data through the ECCAD database
NASA Astrophysics Data System (ADS)
Darras, Sabine; Granier, Claire; Liousse, Catherine; De Graaf, Erica; Enriquez, Edgar; Boulanger, Damien; Brissebrat, Guillaume
2017-04-01
The ECCAD database (Emissions of atmospheric Compounds and Compilation of Ancillary Data) provides user-friendly access to global and regional surface emissions for a large set of chemical compounds and to ancillary data (land use, active fires, burned areas, population, etc.). The emission inventories are gridded time series at spatial resolutions from 1x1 to 0.1x0.1 degrees. ECCAD is the emissions database of the GEIA (Global Emissions InitiAtive) project and a sub-project of the French Atmospheric Data Center AERIS (http://www.aeris-data.fr). ECCAD currently has more than 2200 users originating from more than 80 countries. The project benefits from this large international community of users to expand the number of emission datasets made available. ECCAD provides detailed metadata for each of the datasets and various tools for data visualization, for computing global and regional totals and for interactive spatial and temporal analysis. The data can be downloaded as interoperable NetCDF CF-compliant files, i.e. the data are compatible with many other client interfaces. The presentation will provide information on the datasets available within ECCAD, as well as examples of the analysis work that can be done online through the website: http://eccad.aeris-data.fr.
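A "regional total" of the kind ECCAD computes online amounts to summing a gridded emission flux over a latitude/longitude box, weighting each cell by its area. An illustrative stdlib sketch (the grid, flux values and box are made up, and this is not ECCAD's own code):

```python
# Sketch of a regional-total computation over a gridded emission flux:
# sum flux (kg m-2 s-1) times cell area over cells inside a lat/lon box.
import math

def cell_area(lat_deg, dlat, dlon, radius=6.371e6):
    """Area in m^2 of a dlat x dlon cell centred at latitude lat_deg."""
    lat = math.radians(lat_deg)
    return (radius ** 2 * math.radians(dlon)
            * (math.sin(lat + math.radians(dlat) / 2)
               - math.sin(lat - math.radians(dlat) / 2)))

def regional_total(flux, lats, lons, box, dlat=1.0, dlon=1.0):
    """Total emission rate (kg s-1) over cells whose centre lies in box."""
    lat_min, lat_max, lon_min, lon_max = box
    total = 0.0
    for i, lat in enumerate(lats):
        for j, lon in enumerate(lons):
            if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
                total += flux[i][j] * cell_area(lat, dlat, dlon)
    return total

# Toy 2x2 one-degree grid near the equator with a uniform flux.
lats, lons = [0.5, 1.5], [10.5, 11.5]
flux = [[1e-12, 1e-12], [1e-12, 1e-12]]
total = regional_total(flux, lats, lons, box=(0, 2, 10, 12))   # kg s-1
```

In practice the flux and coordinate arrays would come straight out of the CF-compliant NetCDF files ECCAD distributes, which carry the units and grid metadata needed for this computation.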
Access to Emissions Distributions and Related Ancillary Data through the ECCAD database
NASA Astrophysics Data System (ADS)
Darras, Sabine; Enriquez, Edgar; Granier, Claire; Liousse, Catherine; Boulanger, Damien; Fontaine, Alain
2016-04-01
The ECCAD database (Emissions of atmospheric Compounds and Compilation of Ancillary Data) provides user-friendly access to global and regional surface emissions for a large set of chemical compounds and to ancillary data (land use, active fires, burned areas, population, etc.). The emissions inventories are time-series gridded data at spatial resolutions from 1x1 to 0.1x0.1 degrees. ECCAD is the emissions database of the GEIA (Global Emissions InitiAtive) project and a sub-project of the French Atmospheric Data Center AERIS (http://www.aeris-data.fr). ECCAD currently has more than 2200 users originating from more than 80 countries. The project benefits from this large international community of users to expand the number of emission datasets made available. ECCAD provides detailed metadata for each of the datasets and various tools for data visualization, for computing global and regional totals, and for interactive spatial and temporal analysis. The data can be downloaded as interoperable, CF-compliant NetCDF files, i.e. the data are compatible with many other client interfaces. The presentation will provide information on the datasets available within ECCAD, as well as examples of the analysis work that can be done online through the website: http://eccad.aeris-data.fr.
FY2017 Electrification Annual Progress Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
During fiscal year 2017 (FY 2017), the U.S. Department of Energy (DOE) Vehicle Technologies Office (VTO) funded early stage research & development (R&D) projects that address Batteries and Electrification of the U.S. transportation sector. The VTO Electrification Sub-Program is composed of Electric Drive Technologies and Grid Integration activities. The Electric Drive Technologies group conducts R&D projects that advance Electric Motors and Power Electronics technologies. The Grid and Charging Infrastructure group conducts R&D projects that advance Grid Modernization and Electric Vehicle Charging technologies. This document presents a brief overview of the Electrification Sub-Program and progress reports for its R&D projects. Each of the progress reports provides a project overview and highlights of the technical results that were accomplished in FY 2017.
Atlasmaker: A Grid-based Implementation of the Hyperatlas
NASA Astrophysics Data System (ADS)
Williams, R.; Djorgovski, S. G.; Feldmann, M. T.; Jacob, J.
2004-07-01
The Atlasmaker project is using Grid technology, in combination with NVO interoperability, to create new knowledge resources in astronomy. The product is a multi-faceted, multi-dimensional, scientifically trusted image atlas of the sky, made by federating many different surveys at different wavelengths, times, resolutions, polarizations, etc. The Atlasmaker software does resampling and mosaicking of image collections, and is well-suited to operate with the Hyperatlas standard. Requests can be satisfied via on-demand computations or by accessing a data cache. Computed data is stored in a distributed virtual file system, such as the Storage Resource Broker (SRB). We expect these atlases to be a new and powerful paradigm for knowledge extraction in astronomy, as well as a magnificent way to build educational resources. The system is being incorporated into the data analysis pipeline of the Palomar-Quest synoptic survey, and is being used to generate all-sky atlases from the 2MASS, SDSS, and DPOSS surveys for joint object detection.
NASA Astrophysics Data System (ADS)
Elangovan, D.; Archana, R.; Jayadeep, V. J.; Nithin, M.; Arunkumar, G.
2017-11-01
More than fifty percent of the Indian population does not have access to electricity in daily life. The distance between power generating stations and distribution centers is one of the main reasons for the lack of electrification in rural and remote areas. Here lies the importance of decentralizing power generation through renewable energy resources. In the present world, electricity is predominantly delivered as alternating current, but many day-to-day devices, such as LED lamps, computers and electric vehicles, run on DC power. By directly supplying DC to these loads, the number of power conversion stages is reduced and overall system efficiency increases. Replacing the existing AC network with DC is an enormous task, but with power electronic techniques, this project intends to implement a DC grid at the household level in remote and rural areas. The proposed system was designed and simulated successfully for loads amounting to 250 W through appropriate power electronic converters. Maximum utilization of renewable sources for domestic and commercial applications was achieved with the proposed DC topology.
Advanced technologies for scalable ATLAS conditions database access on the grid
NASA Astrophysics Data System (ADS)
Basset, R.; Canali, L.; Dimitrov, G.; Girone, M.; Hawkings, R.; Nevski, P.; Valassi, A.; Vaniachine, A.; Viegas, F.; Walker, R.; Wong, A.
2010-04-01
During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating a realistic workflow, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of the required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads that can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This was achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect the scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions DB data access is limited by the disk I/O throughput. An unacceptable side-effect of disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions DB data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library first sends a pilot query to the database server.
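The pilot-query idea lends itself to a short sketch. The following is a hypothetical illustration only (the function names, thresholds, and backoff policy are invented, not taken from the ATLAS utility library): a cheap probe is issued first, and the real query is sent only once the measured server latency falls below a threshold.

```python
import random
import time

def pilot_query_throttle(run_query, ping_query, max_latency_s=1.0,
                         max_backoff_s=60.0):
    """Send a cheap pilot query first; back off while the server is slow.

    `ping_query` is a lightweight probe whose latency stands in for
    server load; `run_query` performs the real (expensive) work.
    """
    backoff = 1.0
    while True:
        start = time.monotonic()
        ping_query()                      # lightweight probe, not the real work
        latency = time.monotonic() - start
        if latency <= max_latency_s:
            return run_query()            # server responsive: run the real query
        # server overloaded: wait with jittered exponential backoff
        time.sleep(min(backoff, max_backoff_s) * random.uniform(0.5, 1.5))
        backoff *= 2.0

# toy stand-ins for a real conditions-database client
result = pilot_query_throttle(lambda: "payload", lambda: None)
```

The jittered backoff spreads out retries from many concurrent Grid jobs, which is the point of the peak-load-avoidance scheme: the expensive queries never pile up on an already saturated server.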
Integration of external metadata into the Earth System Grid Federation (ESGF)
NASA Astrophysics Data System (ADS)
Berger, Katharina; Levavasseur, Guillaume; Stockhause, Martina; Lautenschlager, Michael
2015-04-01
International projects with high data volumes usually disseminate their data in a federated data infrastructure, e.g. the Earth System Grid Federation (ESGF). The ESGF aims to make geographically distributed data seamlessly discoverable and accessible. Additional data-related information is currently collected and stored in separate repositories by each data provider. This scattered but useful information is not, or only partly, available to ESGF users. Examples of such additional information systems are ES-DOC/Metafor for model and simulation information, IPSL's versioning information, CHARMe for user annotations, DKRZ's quality information, and data citation information. The ESGF Quality Control working team (esgf-qcwt) aims to integrate these valuable pieces of additional information into the ESGF in order to make them available to users and data archive managers by (i) integrating external information into the ESGF portal, (ii) integrating links to external information objects into the ESGF metadata index, e.g. through the use of PIDs (Persistent IDentifiers), and (iii) automating the collection of external information during the ESGF data publication process. For the sixth phase of CMIP (Coupled Model Intercomparison Project), the ESGF metadata index is to be enriched with additional information on data citation, file version, etc. This information will support users directly and can be automatically exploited by higher-level services (human and machine readability).
PERSIANN-CDR Daily Precipitation Dataset for Hydrologic Applications and Climate Studies.
NASA Astrophysics Data System (ADS)
Sorooshian, S.; Hsu, K. L.; Ashouri, H.; Braithwaite, D.; Nguyen, P.; Thorstensen, A. R.
2015-12-01
Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks - Climate Data Record (PERSIANN-CDR) is a newly developed and released dataset covering more than three decades (01/01/1983 - 03/31/2015 to date) of daily precipitation estimates at 0.25° resolution for the 60°S-60°N latitude band. PERSIANN-CDR is processed using the archive of the Gridded Satellite IRWIN CDR (GridSat-B1) from the International Satellite Cloud Climatology Project (ISCCP), with the Global Precipitation Climatology Project (GPCP) 2.5° monthly product used for bias correction. The dataset has been released and made available for public access through NOAA's National Centers for Environmental Information (NCEI) (http://www1.ncdc.noaa.gov/pub/data/sds/cdr/CDRs/PERSIANN/Overview.pdf). PERSIANN-CDR has already shown its usefulness for a wide range of applications, including climate variability and change monitoring, hydrologic applications, and water resources system planning and management. This precipitation CDR has also been used to study the behavior of historical extreme precipitation events. A demonstration of PERSIANN-CDR data in detecting trends and variability of precipitation over the past 30 years, as well as the potential usefulness of the dataset for evaluating climate model performance relevant to precipitation in retrospective mode, will be presented.
Accelerating Cancer Systems Biology Research through Semantic Web Technology
Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S.
2012-01-01
Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute’s caBIG®, so users can not only interact with the DMR through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers’ intellectual property. PMID:23188758
Accelerating cancer systems biology research through Semantic Web technology.
Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S
2013-01-01
Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute's caBIG, so users can interact with the DMR not only through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers' intellectual property. Copyright © 2012 Wiley Periodicals, Inc.
Clemson University Wind Turbine Drivetrain Test Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuten, James Maner; Haque, Imtiaz; Rigas, Nikolaos
In November of 2009, Clemson University was awarded a competitive grant from the U.S. Department of Energy to design, build and operate a facility for full-scale, highly accelerated mechanical testing of next-generation wind turbine drivetrain technologies. The primary goal of the project was to design, construct, commission, and operate a state-of-the-art sustainable facility that permits full-scale highly accelerated testing of advanced drivetrain systems for large wind turbines. The secondary goal was to meet the objectives of the American Recovery and Reinvestment Act of 2009, especially in job creation, and provide a positive impact on economically distressed areas in the United States, and preservation and economic recovery in an expeditious manner. The project was executed according to a managed cooperative agreement with the Department of Energy and was an extraordinary success. The resultant new facility is located in North Charleston, SC, providing easy transportation access by rail, road or ship and operates on an open access model such that it is available to the U.S. Wind Industry for research, analysis, and evaluation activities. The 72 m by 97 m facility features two mechanical dynamometer test bays for evaluating the torque and blade dynamic forces experienced by the rotors of wind turbine drivetrains. The dynamometers are rated at 7.5 MW and 15 MW of low speed shaft power and are configured as independent test areas capable of simultaneous operation. All six degrees of freedom, three linear and three rotational, for blade and rotor dynamics are replicated through the combination of a drive motor, speed reduction gearbox and a controllable hydraulic load application unit (LAU). This new LAU setup readily supports accelerated lifetime mechanical testing and load analysis for the entire drivetrain system of the nacelle and easily simulates a wide variety of realistic operating scenarios in a controlled laboratory environment. 
The development of these two dynamometer test rigs is the first significant achievement for the project. These test rigs embody a new manner of testing due to the system configuration and a completely new design with a free-floating loading hub in the LAU. This project provided the catalyst for the advancement to this new test rig configuration, which has been adopted by every significant wind turbine test rig constructed since the inception of this project. There are currently two different vendors supplying these new systems. Catalyzing this new design is the second major success of the project. With the increased market penetration of wind energy over the past decade, many regions and countries have developed specific electrical grid specifications and performance codes for large wind farms to ensure operational reliability and stability. These grid codes provide requirements for interconnection with the grid during low or high voltage phenomena, typically encountered during and after system fault events. Given the installed infrastructure of the Wind Turbine Drivetrain Testing Facility (WTDTF), a natural expansion of facility capability was to include the necessary equipment for performing fault ride-through evaluations of wind turbines to the Low Voltage Ride Through (LVRT) codes. Once the decision was made to expand the scope of the original grant into fault ride-through testing, it was clear that there are several markets, not just wind, that could benefit from this type of test, and that simple fault ride-through testing could be extended into a broader scope of electrical testing capabilities. It was at this point that Clemson University was awarded a second grant to build a 15 MW Hardware-In-the-Loop (HIL) Grid Simulator in order to establish world-class electrical testing capabilities to complement the mechanical testing at the WTDTF. 
This third significant achievement resulted in the 15 MW HIL Grid Simulator as the cornerstone of the Duke Energy eGRID Center, co-located with the WTDTF in the SCE&G Energy Innovation Center at the new facility in North Charleston, SC, USA. Before the eGRID was completed, it was recognized that the ability to test solar farm equipment was but a small step away through the addition of enhanced equipment to provide for DC testing. In yet another expansion and success, a 2.5 MW rectifier system was designed and implemented by Clemson staff to enhance the Center's capabilities. The program required over 250,000 man-hours of on-site construction labor reworking the brownfield facility on the former Navy base, clearly satisfying one of the major goals of the Reinvestment Act. This was done while winning numerous awards for design and construction of the facility, including the Top US Project for 2014 from the trade journal Engineering News-Record. The project was a major collaborative developmental activity managed by Clemson University staff that involved the DOE and many partners and organizations.
Power Systems Integration Laboratory | Energy Systems Integration Facility
Key infrastructure: grid simulator, load bank, Opal-RT, battery, inverter mounting racks, PV simulator, house power, and data acquisition; supports inverter testing for frequency-watt response and grid anomaly ride-through.
A gating grid driver for time projection chambers
NASA Astrophysics Data System (ADS)
Tangwancharoen, S.; Lynch, W. G.; Barney, J.; Estee, J.; Shane, R.; Tsang, M. B.; Zhang, Y.; Isobe, T.; Kurata-Nishimura, M.; Murakami, T.; Xiao, Z. G.; Zhang, Y. F.; SπRIT Collaboration
2017-05-01
A simple but novel driver system has been developed to operate the wire gating grid of a Time Projection Chamber (TPC). This system connects the wires of the gating grid to its driver via low impedance transmission lines. When the gating grid is open, all wires have the same voltage allowing drift electrons, produced by the ionization of the detector gas molecules, to pass through to the anode wires. When the grid is closed, the wires have alternating higher and lower voltages causing the drift electrons to terminate at the more positive wires. Rapid opening of the gating grid with low pickup noise is achieved by quickly shorting the positive and negative wires to attain the average bias potential with N-type and P-type MOSFET switches. The circuit analysis and simulation software SPICE shows that the driver restores the gating grid voltage to 90% of the opening voltage in less than 0.20 μs, for small values of the termination resistors. When tested in the experimental environment of a time projection chamber larger termination resistors were chosen so that the driver opens the gating grid in 0.35 μs. In each case, opening time is basically characterized by the RC constant given by the resistance of the switches and terminating resistors and the capacitance of the gating grid and its transmission line. By adding a second pair of N-type and P-type MOSFET switches, the gating grid is closed by restoring 99% of the original charges to the wires within 3 μs.
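The RC characterization of the opening time described above can be made concrete with a few lines of arithmetic. The component values below are hypothetical (the abstract does not give the actual switch resistances or grid capacitance): an RC step response reaches a fraction f of its final value at t = -RC ln(1 - f).

```python
import math

def time_to_fraction(r_ohms, c_farads, fraction=0.9):
    """Time for an RC step response to reach `fraction` of its final value:
    V(t)/V0 = 1 - exp(-t/RC)  =>  t = -RC * ln(1 - fraction)."""
    return -r_ohms * c_farads * math.log(1.0 - fraction)

# hypothetical values: 10 ohm total switch-plus-termination resistance,
# 10 nF effective gating-grid-plus-transmission-line capacitance
t90 = time_to_fraction(10.0, 10e-9)          # seconds
print(f"90% opening time: {t90 * 1e6:.2f} us")
```

This makes the trade-off in the abstract explicit: larger termination resistors increase RC and hence the opening time, which is why the in-beam configuration opens in 0.35 μs rather than 0.20 μs.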
Dynamic federation of grid and cloud storage
NASA Astrophysics Data System (ADS)
Furano, Fabrizio; Keeble, Oliver; Field, Laurence
2016-09-01
The Dynamic Federations project ("Dynafed") enables the deployment of scalable, distributed storage systems composed of independent storage endpoints. While the Uniform Generic Redirector at the heart of the project is protocol-agnostic, we have focused our effort on HTTP-based protocols, including S3 and WebDAV. The system has been deployed on testbeds covering the majority of the ATLAS and LHCb data, and supports geography-aware replica selection. The work done exploits the federation potential of HTTP to build systems that offer uniform, scalable, catalogue-less access to the storage and metadata ensemble and the possibility of seamless integration of other compatible resources such as those from cloud providers. Dynafed can exploit the potential of the S3 delegation scheme, effectively federating on the fly any number of S3 buckets from different providers and applying a uniform authorization to them. This feature has been used to deploy in production the BOINC Data Bridge, which uses the Uniform Generic Redirector with S3 buckets to harmonize the BOINC authorization scheme with the Grid/X509. The Data Bridge has been deployed in production with good results. We believe that the features of a loosely coupled federation of open-protocol-based storage elements open many possibilities of smoothly evolving the current computing models and of supporting new scientific computing projects that rely on massive distribution of data and that would appreciate systems that can more easily be interfaced with commercial providers and can work natively with Web browsers and clients.
Environmental applications based on GIS and GRID technologies
NASA Astrophysics Data System (ADS)
Demontis, R.; Lorrai, E.; Marrone, V. A.; Muscas, L.; Spanu, V.; Vacca, A.; Valera, P.
2009-04-01
In the last decades, the collection and use of environmental data have increased enormously in a wide range of applications. Simultaneously, the explosive development of information technology and ever wider data accessibility have made it possible to store and manipulate huge quantities of data. In this context, the GRID approach is emerging worldwide as a tool that allows a computational task to be provisioned with administratively distant resources. The aim of this paper is to present three environmental applications (Land Suitability, Desertification Risk Assessment, Georesources and Environmental Geochemistry) foreseen within the AGISGRID (Access and query of a distributed GIS/Database within the GRID infrastructure, http://grida3.crs4.it/enginframe/agisgrid/index.xml) activities of the GRIDA3 (Administrator of sharing resources for data analysis and environmental applications, http://grida3.crs4.it) project. This project, co-funded by the Italian Ministry of Research, is based on the use of shared environmental data through GRID technologies, accessible via a web interface and aimed at public and private users in the field of environmental management and land use planning. The technologies used for AGISGRID include: - the client-server middleware iRODS (Integrated Rule-Oriented Data System) (https://irods.org); - the EnginFrame system (http://www.nice-italy.com/main/index.php?id=32), the grid portal that supplies a frame to make the developed GRID applications available via Intranet/Internet; - the GIS software GRASS (Geographic Resources Analysis Support System) (http://grass.itc.it); - the relational database PostgreSQL (http://www.postgresql.org) and the spatial database extension PostGIS; - the open source multiplatform MapServer (http://mapserver.gis.umn.edu), used to represent geospatial data through typical web GIS functionalities. 
Three GRID nodes are directly involved in the applications: the application workflow is implemented at CRS4 (Pula, southern Sardinia, Italy), the soil database is managed at the DISTER node (Cagliari, southern Sardinia, Italy), and the geochemical database is managed at the DIGITA node (Cagliari, southern Sardinia, Italy). The input data are files (raster ASCII format) and database tables. The raster files have been zipped and stored in iRODS. The tables are imported into a PostgreSQL database and accessed through the Rule-oriented Database Access (RDA) system available for PostgreSQL in iRODS 1.1. From the EnginFrame portal it is possible to view and use the applications through three services: "Upload Data", "View Data and Metadata", and "Execute Application". The Land Suitability application, based on the FAO framework for land evaluation, produces suitability maps (at the scale 1:10,000) for 11 different possible alternative uses. The maps, in ASCII raster format, are downloadable by the user and viewable through MapServer. This application has been implemented in an area of southern Sardinia (Monastir) and may be useful to direct municipal urban planning towards a rational land use. The Desertification Risk Assessment application produces, by means of biophysical and socioeconomic key indicators, a final combined map showing critical, fragile, and potential Environmentally Sensitive Areas to desertification. This application has been implemented in an area of south-west Sardinia (Muravera). The final sensitivity index is obtained as the geometric mean of four parameters: SQI (Soil Quality Index), CQI (Climate Quality Index), VQI (Vegetation Quality Index) and MQI (Management Quality Index). The final result (ESAI = (SQI * CQI * VQI * MQI)^(1/4)) is a map at the scale 1:50,000, in ASCII raster format, downloadable by the user and viewable through MapServer. This type of map may be useful to direct land planning at the catchment basin level. 
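The sensitivity index described above is simply a geometric mean of the four quality indices, which can be computed directly. The example values below are illustrative only, not taken from the Muravera study (in the MEDALUS-style methodology each index is typically scored between 1 and 2):

```python
from math import prod

def esa_index(sqi, cqi, vqi, mqi):
    """Environmentally Sensitive Area index: geometric mean of the four
    quality indices, ESAI = (SQI * CQI * VQI * MQI) ** (1/4)."""
    return prod((sqi, cqi, vqi, mqi)) ** 0.25

# illustrative index values for one grid cell
print(esa_index(1.4, 1.6, 1.3, 1.5))
```

In a raster workflow this function would be applied cell by cell to the four input maps to produce the final 1:50,000 sensitivity map.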
The Georesources and Environmental Geochemistry application, whose testing is in progress in the area of Muravera (south-west Sardinia) through stream sediment sampling, aims at producing maps that define, with high precision, areas (hydrographic basins) where the values of a given element exceed the lithological background (i.e. are geochemically anomalous). Such a product has a double purpose. First, it identifies releasing sources and may be useful for the necessary remediation actions if they lie in areas historically subject to more or less intense anthropic activity. On the other hand, if these sources are of natural origin, they could also be interpreted as ore mineral occurrences. In the latter case, the study of these occurrences could lead to the discovery of economic ore bodies of small-to-medium size (at least in the present target area) and consequently to the revival of a local mining industry.
OVERGRID: A Unified Overset Grid Generation Graphical Interface
NASA Technical Reports Server (NTRS)
Chan, William M.; Akien, Edwin W. (Technical Monitor)
1999-01-01
This paper presents a unified graphical interface and gridding strategy for performing overset grid generation. The interface, called OVERGRID, has been specifically designed to follow an efficient overset gridding strategy, and contains general grid manipulation capabilities as well as modules that are specifically suited for overset grids. General grid utilities include functions for grid redistribution, smoothing, concatenation, extraction, extrapolation, projection, and many others. Modules specially tailored for overset grids include a seam curve extractor, hyperbolic and algebraic surface grid generators, a hyperbolic volume grid generator, and a Cartesian box grid generator. Grid visualization is achieved using OpenGL, while widgets are constructed with Tcl/Tk. The software is portable between various platforms, from UNIX workstations to personal computers.
Studies of lightning data in conjunction with geostationary satellite data
NASA Technical Reports Server (NTRS)
Auvine, B.; Martin, D.
1985-01-01
Since January, work has been proceeding on the first phase of this project: the creation of an extensive real-time lightning database accessible via the Space Science and Engineering Center McIDAS system. The purpose of this endeavor is two-fold: to enhance the availability and ease of access to lightning data among the various networks, governmental and research agencies; and to test the feasibility and desirability of such efforts in succeeding years. The final steps in the creation of the necessary communications links, hardware, and software are being completed. Operational ground rules for access among the various users have been discussed and are being refined. While the research planned for the last year of the project will rely for the most part on archived, quality-controlled data from the various networks, the real-time data will provide a valuable first look at potentially interesting case studies. For this purpose, tools are being developed on McIDAS for display and analysis of the data as they become available. In conjunction with concurrent GOES real-time imagery, strike locations can be plotted, gridded and contoured, or displayed in various statistical formats including frequency distributions, histograms, and scatter plots. The user may also perform these functions in relation to arbitrarily defined areas on the satellite image. By mid-May these preparations for the access and analysis of real-time lightning data are expected to be complete.
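Gridding strike locations into counts per cell, as in the frequency displays mentioned above, is a simple binning operation. This sketch uses made-up coordinates; the cell size, origin, and data are illustrative, not from the project:

```python
from collections import Counter

def grid_counts(strikes, lat0, lon0, cell_deg):
    """Bin (lat, lon) strike locations into a regular grid anchored at
    (lat0, lon0) and count strikes per cell. Returns {(row, col): count}."""
    counts = Counter()
    for lat, lon in strikes:
        row = int((lat - lat0) // cell_deg)
        col = int((lon - lon0) // cell_deg)
        counts[(row, col)] += 1
    return counts

# hypothetical strike fixes binned on a 1-degree grid
strikes = [(43.2, -89.4), (43.3, -89.1), (44.6, -88.9)]
print(grid_counts(strikes, lat0=43.0, lon0=-90.0, cell_deg=1.0))
```

The resulting per-cell counts are what would then be contoured or overlaid on the concurrent GOES imagery.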
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xu; Marnay, Chris; Feng, Wei
The Chinese government has paid growing attention to renewable energy development and has set ambitious goals for carbon dioxide (CO2) emissions reduction and energy savings. Smart grid (SG) technologies have been regarded as emerging ways to integrate renewable energy and to help achieve these climate and energy goals. This report first reviews completed SG demonstrations under the U.S. American Recovery and Reinvestment Act (ARRA), especially two key programs: the SG Investment Grant (SGIG) and the SG Demonstration Project (SGDP). Under the SGIG, the larger of the two programs, over $3.4 billion was used to help industry deploy existing advanced SG technologies and tools to improve grid performance and reduce costs. Including industry investment, a total of $8 billion was spent on 99 cost-shared projects, which involved more than 200 participating electric utilities and other organizations. These projects aimed to modernize the electric grid, strengthen cyber security, improve interoperability, and collect comprehensive data on SG operations and benefits.
NASA Technical Reports Server (NTRS)
Phillips, J. R.
1996-01-01
In this paper we derive error bounds for a collocation-grid-projection scheme tuned for use in multilevel methods for solving boundary-element discretizations of potential integral equations. The grid-projection scheme is then combined with a precorrected-FFT style multilevel method for solving potential integral equations with 1/r and e^(ikr)/r kernels. A complexity analysis of this combined method is given to show that for homogeneous problems, the method is O(n log n) nearly independent of the kernel. In addition, it is shown analytically and experimentally that for an inhomogeneity generated by a very finely discretized surface, the combined method slows to O(n^(4/3)). Finally, examples are given to show that the collocation-based grid-projection plus precorrected-FFT scheme is competitive with fast-multipole algorithms when considering realistic problems and 1/r kernels, but can be used over a range of spatial frequencies with only a small performance penalty.
Inverting x,y grid coordinates to obtain latitude and longitude in the van der Grinten projection
NASA Technical Reports Server (NTRS)
Rubincam, D. P.
1980-01-01
The latitude and longitude of a point on the Earth's surface are found from its x,y grid coordinates in the van der Grinten projection. The latitude is a solution of a cubic equation and the longitude a solution of a quadratic equation. Also, the x,y grid coordinates of a point on the Earth's surface can be found if its latitude and longitude are known by solving two simultaneous quadratic equations.
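The inversion described above can be sketched numerically. The spherical forward equations below follow Snyder's standard formulation of the van der Grinten projection; a finite-difference Newton solver then stands in for the paper's closed-form cubic and quadratic solutions, which are not reproduced here. Function names and the restriction to the positive quadrant are illustrative assumptions, not the paper's method.

```python
import math

def vdg_forward(lat, lon, R=1.0):
    """Spherical van der Grinten projection (Snyder's formulation); radians in.

    Restricted here to the open positive quadrant 0 < lat < pi/2, 0 < lon < pi.
    """
    theta = math.asin(2.0 * lat / math.pi)
    A = 0.5 * abs(math.pi / lon - lon / math.pi)
    G = math.cos(theta) / (math.sin(theta) + math.cos(theta) - 1.0)
    P = G * (2.0 / math.sin(theta) - 1.0)
    Q = A * A + G
    denom = P * P + A * A
    x = math.pi * R * (A * (G - P * P)
                       + math.sqrt(A * A * (G - P * P) ** 2
                                   - denom * (G * G - P * P))) / denom
    y = math.pi * R * (P * Q
                       - A * math.sqrt((A * A + 1.0) * denom - Q * Q)) / denom
    return x, y

def vdg_inverse(x, y, R=1.0, tol=1e-10, max_iter=100):
    """Recover (lat, lon) from x,y grid coordinates by Newton iteration on the
    forward map, using a finite-difference Jacobian."""
    lat, lon = 0.5, 0.5                      # initial guess inside the quadrant
    h = 1e-7                                 # finite-difference step
    for _ in range(max_iter):
        fx, fy = vdg_forward(lat, lon, R)
        rx, ry = fx - x, fy - y              # residuals
        if abs(rx) < tol and abs(ry) < tol:
            break
        fxa, fya = vdg_forward(lat + h, lon, R)
        fxo, fyo = vdg_forward(lat, lon + h, R)
        j00, j10 = (fxa - fx) / h, (fya - fy) / h   # derivatives w.r.t. lat
        j01, j11 = (fxo - fx) / h, (fyo - fy) / h   # derivatives w.r.t. lon
        det = j00 * j11 - j01 * j10
        # Newton step, clamped so the iterate stays inside the quadrant
        lat = min(max(lat - (j11 * rx - j01 * ry) / det, 1e-6),
                  math.pi / 2 - 1e-6)
        lon = min(max(lon - (j00 * ry - j10 * rx) / det, 1e-6),
                  math.pi - 1e-6)
    return lat, lon
```

Round-tripping a point, e.g. `vdg_inverse(*vdg_forward(0.7, 1.0))`, recovers approximately (0.7, 1.0).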
NASA Astrophysics Data System (ADS)
Tudose, Alexandru; Terstyansky, Gabor; Kacsuk, Peter; Winter, Stephen
Grid Application Repositories vary greatly in terms of access interface, security system, implementation technology, communication protocols and repository model. This diversity has become a significant limitation in terms of interoperability and inter-repository access. This paper presents the Grid Application Meta-Repository System (GAMRS) as a solution that offers better options for the management of Grid applications. GAMRS proposes a generic repository architecture which allows any Grid Application Repository (GAR) to be connected to the system independent of its underlying technology. It also presents applications in a uniform manner and makes applications from all connected repositories visible to web search engines, OGSI/WSRF Grid Services and other OAI (Open Archive Initiative)-compliant repositories. GAMRS can also function as a repository in its own right and can store applications under a new repository model. With the help of this model, applications can be presented as embedded in virtual machines (VM) and therefore can be run in their native environments and easily deployed on virtualized infrastructures, allowing interoperability with new-generation technologies such as cloud computing, application-on-demand, automatic service/application deployment and automatic VM generation.
NASA Astrophysics Data System (ADS)
Arcorace, Mauro; Silvestro, Francesco; Rudari, Roberto; Boni, Giorgio; Dell'Oro, Luca; Bjorgo, Einar
2016-04-01
Most flood-prone areas of the globe are located in developing countries, where making communities more flood resilient is a priority. Although various flood forecasting initiatives are now available from academia and research centers, what is often missing is the connection between timely hazard detection and the community response to warnings. In order to bridge the gap between science and decision makers, UN agencies play a key role in the dissemination of information in the field and in capacity-building for local governments. In this context, having a reliable global early warning system in the UN would concretely improve existing in-house capacities for Humanitarian Response and Disaster Risk Reduction. For those reasons, UNITAR-UNOSAT has developed, together with USGS and CIMA Foundation, a Global Flood EWS called "Flood-FINDER". The Flood-FINDER system is a modelling chain which includes meteorological, hydrological and hydraulic models that are accurately linked to enable the production of warnings and forecast inundation scenarios up to three weeks in advance. The system is forced with global satellite-derived precipitation products and Numerical Weather Prediction outputs. The modelling chain is based on the "Continuum" hydrological model and risk assessments produced for GAR2015. In combination with existing hydraulically reconditioned SRTM data and 1D hydraulic models, flood scenarios are derived at multiple scales and resolutions. Climate and flood data are shared through an integrated Web GIS platform. A first validation of the modelling chain has been conducted through a flood hindcasting test case over the Chao Phraya river basin in Thailand, using multi-temporal satellite-based analysis derived for the exceptional flood event of 2011.
In terms of humanitarian relief operations, EO-based rush-mode flood mapping services generally suffer from delays caused by the time required for their activation, programming, acquisitions and image processing. Flood-FINDER aims to pre-empt this process and to provide preliminary analyses where no field data are available. In early 2015, the Flood-FINDER forecast along the Shire River was used to guide rapid mapping activities in Southern Malawi and Northern Mozambique. It provided efficient support, supplying timely information about the evolution of the flood event over an area lacking field data. Regarding in-country capacity building, Flood-FINDER allowed UNOSAT to set up, in mid-2015, a flood early warning system in Chad along the Chari River basin in collaboration with the Chadian Ministry of Hydraulics and Livestock. Weekly flood bulletins have been shared with local authorities and UN agencies over the entire rainy season. Finally, an experimental version of the global web alerting platform has recently been developed to support El Nino flood preparedness in the Horn of Africa. Flood-FINDER's mission is to support decision makers throughout the disaster management cycle with flood alerts, modelled scenarios, EO-based impact assessments and direct support at country level to implement disaster mitigation strategies. The aim for the future is to seek funding to make the global system fully operational using CERN's supercomputing facilities and to establish new in-country projects with local authorities.
The National Grid Project: A system overview
NASA Technical Reports Server (NTRS)
Gaither, Adam; Gaither, Kelly; Jean, Brian; Remotigue, Michael; Whitmire, John; Soni, Bharat; Thompson, Joe; Dannenhoffer, John; Weatherill, Nigel
1995-01-01
The National Grid Project (NGP) is a comprehensive numerical grid generation software system that is being developed at the National Science Foundation (NSF) Engineering Research Center (ERC) for Computational Field Simulation (CFS) at Mississippi State University (MSU). NGP is supported by a coalition of U.S. industries and federal laboratories. The objective of the NGP is to significantly decrease the amount of time it takes to generate a numerical grid for complex geometries and to increase the quality of these grids to enable computational field simulations for applications in industry. A geometric configuration can be discretized into grids (or meshes) that have two fundamental forms: structured and unstructured. Structured grids are formed by intersecting curvilinear coordinate lines and are composed of quadrilateral (2D) and hexahedral (3D) logically rectangular cells. The connectivity of a structured grid provides for trivial identification of neighboring points by incrementing coordinate indices. Unstructured grids are composed of cells of any shape (commonly triangles, quadrilaterals, tetrahedra and hexahedra), but do not have trivial identification of neighbors by incrementing an index. For unstructured grids, a set of points and an associated connectivity table is generated to define unstructured cell shapes and neighboring points. Hybrid grids are a combination of structured grids and unstructured grids. Chimera (overset) grids are intersecting or overlapping structured grids. The NGP system currently provides a user interface that integrates both 2D and 3D structured and unstructured grid generation, a solid modeling topology data management system, an internal Computer Aided Design (CAD) system based on Non-Uniform Rational B-Splines (NURBS), a journaling language, and a grid/solution visualization system.
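The distinction the abstract draws between structured and unstructured grids can be illustrated with a minimal sketch: structured-grid neighbors come from incrementing coordinate indices, while unstructured-grid neighbors must be recovered from a connectivity table. Function names are illustrative, not NGP's API.

```python
def structured_neighbors(i, j, ni, nj):
    """4-neighbors of cell (i, j) on an ni-by-nj logically rectangular grid.

    Neighbor identification is trivial: just increment/decrement an index.
    """
    candidates = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [(a, b) for a, b in candidates if 0 <= a < ni and 0 <= b < nj]

def unstructured_neighbors(cells, point):
    """Points sharing a cell with `point`, given a cell-to-point connectivity table.

    Each cell is a tuple of point indices (e.g. a triangle is a 3-tuple);
    neighbors must be found by scanning the connectivity, not by index math.
    """
    neighbors = set()
    for cell in cells:
        if point in cell:
            neighbors.update(cell)
    neighbors.discard(point)
    return sorted(neighbors)
```

For two triangles (0, 1, 2) and (1, 2, 3), the unstructured neighbors of point 1 are [0, 2, 3]; on a 3x3 structured grid, cell (0, 0) has neighbors (1, 0) and (0, 1).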
NeuroLOG: a community-driven middleware design.
Montagnat, Johan; Gaignard, Alban; Lingrand, Diane; Rojas Balderrama, Javier; Collet, Philippe; Lahire, Philippe
2008-01-01
The NeuroLOG project designs an ambitious neuroscience middleware, gaining from many existing components and learning from past project experiences. It targets a focused application area and adopts a user-centric perspective to meet neuroscientists' expectations. It aims at fostering the adoption of HealthGrids in a pre-clinical community. This paper details the project's design study and the methodology proposed to achieve the integration of heterogeneous site data schemas and the definition of a site-centric policy. The NeuroLOG middleware will bridge HealthGrid and local resources to match users' desire to control their own resources and provide a transitional model towards HealthGrids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Dean N.
The climate and weather data science community gathered December 3–5, 2013, at Lawrence Livermore National Laboratory, in Livermore, California, for the third annual Earth System Grid Federation (ESGF) and Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) Face-to-Face (F2F) Meeting, which was hosted by the Department of Energy, National Aeronautics and Space Administration, National Oceanic and Atmospheric Administration, the European Infrastructure for the European Network of Earth System Modelling, and the Australian Department of Education. Both ESGF and UV-CDAT are global collaborations designed to develop a new generation of open-source software infrastructure that provides distributed access and analysis to observed and simulated data from the climate and weather communities. The tools and infrastructure developed under these international multi-agency collaborations are critical to understanding extreme weather conditions and long-term climate change, while the F2F meetings help to build a stronger climate and weather data science community and stronger federated software infrastructure. The 2013 F2F meeting determined requirements for existing and impending national and international community projects; enhancements needed for data distribution, analysis, and visualization infrastructure; and standards and resources needed for better collaborations.
Web-based visualization of gridded datasets using OceanBrowser
NASA Astrophysics Data System (ADS)
Barth, Alexander; Watelet, Sylvain; Troupin, Charles; Beckers, Jean-Marie
2015-04-01
OceanBrowser is a web-based visualization tool for gridded oceanographic data sets. Those data sets are typically four-dimensional (longitude, latitude, depth and time). OceanBrowser allows one to visualize horizontal sections at a given depth and time to examine the horizontal distribution of a given variable. It also offers the possibility to display the results on an arbitrary vertical section. To study the evolution of the variable in time, the horizontal and vertical sections can also be animated. Vertical sections can be generated using a fixed distance from the coast or a fixed ocean depth. The user can customize the plot by changing the color map, the range of the color bar, and the type of the plot (linearly interpolated color, simple contours, filled contours), and can download the current view as a simple image or as a Keyhole Markup Language (KML) file for visualization in applications such as Google Earth. The data products can also be accessed as NetCDF files and through OPeNDAP. Third-party layers from a web map service can also be integrated. OceanBrowser is used in the frame of the SeaDataNet project (http://gher-diva.phys.ulg.ac.be/web-vis/) and EMODNET Chemistry (http://oceanbrowser.net/emodnet/) to distribute gridded data sets interpolated from in situ observations using DIVA (Data-Interpolating Variational Analysis).
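The horizontal and vertical sections described above amount to slicing a four-dimensional field along different axes. A minimal sketch, with the field stored as nested lists indexed [time][depth][lat][lon] (an illustrative layout, not OceanBrowser's internal representation):

```python
def horizontal_section(data, t, k):
    """Horizontal (lat x lon) slice of a field indexed data[time][depth][lat][lon]."""
    return data[t][k]

def vertical_section(data, t, j):
    """Vertical (depth x lon) slice along a fixed latitude row j."""
    return [data[t][k][j] for k in range(len(data[t]))]

# Tiny synthetic field: each value encodes depth*100 + lat*10 + lon.
field = [[[[d * 100 + j * 10 + i for i in range(2)]
           for j in range(2)]
          for d in range(2)]
         for t in range(1)]
```

Here `horizontal_section(field, 0, 1)` yields the 2x2 map at the second depth level, and `vertical_section(field, 0, 0)` stacks the first latitude row over depth.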
Quality Assurance Framework for Mini-Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Esterly, Sean; Baring-Gould, Ian; Booth, Samuel
To address the root challenges of providing quality power to remote consumers through financially viable mini-grids, the Global Lighting and Energy Access Partnership (Global LEAP) initiative of the Clean Energy Ministerial and the U.S. Department of Energy teamed with the National Renewable Energy Laboratory (NREL) and Power Africa to develop a Quality Assurance Framework (QAF) for isolated mini-grids. The framework addresses both alternating current (AC) and direct current (DC) mini-grids, and is applicable to renewable, fossil-fuel, and hybrid systems.
GLAD: a system for developing and deploying large-scale bioinformatics grid.
Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong
2005-03-01
Grid computing is used to solve large-scale bioinformatics problems with gigabyte-scale databases by distributing the computation across multiple platforms. Until now, in developing bioinformatics grid applications it has been extremely tedious to design and implement the component algorithms and parallelization techniques for different classes of problems, and to access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware which exploits task-based parallelism. Two benchmark bioinformatics applications, distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.
Zhang, Hong; Ren, Lei; Kong, Vic; Giles, William; Zhang, You; Jin, Jian-Yue
2016-01-01
A preobject grid can reduce and correct scatter in cone beam computed tomography (CBCT). However, half of the signal in each projection is blocked by the grid. A synchronized moving grid (SMOG) has been proposed to acquire two complementary projections at each gantry position and merge them into one complete projection. That approach, however, suffers from increased scanning time and the technical difficulty of accurately merging the two projections per gantry angle. Herein, the authors present a new SMOG approach which acquires a single projection per gantry angle, with complementary grid patterns for any two adjacent projections, and uses an interprojection sensor fusion (IPSF) technique to estimate the blocked signal in each projection. The method may have the additional benefit of reduced imaging dose due to the grid blocking half of the incident radiation. The IPSF considers multiple paired observations from two adjacent gantry angles as approximations of the blocked signal and uses a weighted least squares regression of these observations to finally determine the blocked signal. The method was first tested with a simulated SMOG on a head phantom. The signal-to-noise ratio (SNR), which represents the difference of the recovered CBCT image from the original image without the SMOG, was used to evaluate the ability of the IPSF to recover the missing signal. The IPSF approach was then tested using a Catphan phantom on a prototype SMOG assembly installed in a benchtop CBCT system. In the simulated SMOG experiment, the SNRs were increased from 15.1 and 12.7 dB to 35.6 and 28.9 dB, compared with a conventional interpolation (inpainting) method, for a projection and the reconstructed 3D image, respectively, suggesting that IPSF successfully recovered most of the blocked signal. In the prototype SMOG experiment, the authors successfully reconstructed a CBCT image using the IPSF-SMOG approach.
The detailed geometric features in the Catphan phantom were mostly recovered according to visual evaluation. The scatter related artifacts, such as cupping artifacts, were almost completely removed. The IPSF-SMOG is promising in reducing scatter artifacts and improving image quality while reducing radiation dose.
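The core IPSF estimator described above, a weighted least-squares fit over multiple approximate observations of the same blocked signal, reduces to a weighted mean for a scalar per-pixel value: minimizing sum_i w_i (s - o_i)^2 over s gives s = sum(w_i o_i) / sum(w_i). A minimal sketch, with the function name and weights illustrative rather than the authors' implementation:

```python
def ipsf_estimate(observations, weights):
    """Weighted least-squares estimate of a blocked pixel value.

    Minimizing sum_i w_i * (s - o_i)**2 over s yields the weighted mean
    s = sum(w_i * o_i) / sum(w_i), which this function returns.
    """
    if len(observations) != len(weights):
        raise ValueError("need one weight per observation")
    total_w = sum(weights)
    return sum(w * o for w, o in zip(weights, observations)) / total_w
```

With equal weights this is a plain average; unequal weights let better-aligned paired observations from the adjacent gantry angle dominate the estimate.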
Economic performance and sustainability of HealthGrids: evidence from two case studies.
Dobrev, Alexander; Scholz, Stefan; Zegners, Dainis; Stroetmann, Karl A; Semler, Sebastian C
2009-01-01
Financial sustainability is not a driving force of HealthGrids today, as a previous desk research survey of 22 international HealthGrid projects has shown. The majority of applications are project-based, which places a time limit not only on funding but also on goals and objectives. Given this situation, we analysed two initiatives, WISDOM and MammoGrid, from an economic, cost-benefit perspective, and evaluated the potential for these initiatives to be brought to market as self-financing, sustainable services. We conclude that the topic of HealthGrids should be pursued further because of the substantial potential for net gains to society at large. The most significant hurdle to sustainability - the discrepancy between social benefits and private incentives - can be solved by sound business models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McJunkin, Timothy; Epiney, Aaron; Rabiti, Cristian
2017-06-01
This report provides a summary of the effort in the Nuclear-Renewable Hybrid Energy System (N-R HES) project on the level 4 milestone, which considers integrating existing electric grid models, at shorter time intervals than those models currently resolve, into the Risk Analysis Virtual Environment (RAVEN) and Modelica [1] optimizations and economic analysis that are the focus of the project to date.
Packet spacing: an enabling mechanism for delivering multimedia content in computational grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, A. C.; Feng, W. C.; Belford, Geneva G.
2001-01-01
Streaming multimedia with UDP has become increasingly popular over distributed systems like the Internet. Scientific applications that stream multimedia include remote computational steering of visualization data and video-on-demand teleconferencing over the Access Grid. However, UDP does not possess a self-regulating, congestion-control mechanism, while most best-effort traffic is served by congestion-controlled TCP. Consequently, UDP steals bandwidth from TCP such that TCP flows starve for network resources. With the volume of Internet traffic continuing to increase, the perpetuation of UDP-based streaming will cause the Internet to collapse as it did in the mid-1980s due to the use of non-congestion-controlled TCP. To address this problem, we introduce the counterintuitive notion of inter-packet spacing with control feedback to enable UDP-based applications to perform well in the next-generation Internet and computational grids. When compared with traditional UDP-based streaming, we illustrate that our approach can reduce packet loss by over 50% without adversely affecting delivered throughput. Keywords: network protocol, multimedia, packet spacing, streaming, TCP, UDP, rate-adjusting congestion control, computational grid, Access Grid.
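The basic idea of inter-packet spacing, inserting a delay between datagrams so the stream conforms to a target rate, can be sketched as follows. This is a simple open-loop pacer, not the paper's feedback control law; names and rates are illustrative.

```python
import time

def packet_interval(packet_bytes, rate_bps):
    """Inter-packet gap (in seconds) that paces packets of the given size
    so the stream averages rate_bps bits per second."""
    return packet_bytes * 8.0 / rate_bps

def paced_send(send, packets, rate_bps, sleep=time.sleep):
    """Send packets with the computed spacing inserted between them.

    `send` is any callable taking one packet (e.g. a UDP socket's sendto
    wrapped in a lambda); `sleep` is injectable so the pacing can be tested.
    """
    for i, pkt in enumerate(packets):
        send(pkt)
        if i < len(packets) - 1:
            sleep(packet_interval(len(pkt), rate_bps))
```

For a 1500-byte packet paced at 12 kbit/s the gap is one second; a closed-loop scheme as in the paper would additionally adjust `rate_bps` from receiver feedback.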
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, X; Driewer, J; Lei, Y
2015-06-15
Purpose: Grid therapy has promising applications in the radiation treatment of bulky and large tumors. However, research on and applications of grid therapy are limited by the accessibility of the specialized blocks that produce the grid of pencil-like radiation beams. In this study, a Cerrobend grid block was fabricated using a 3D printing technique. Methods: A grid block mold was designed with divergent tubes following beam central rays. The mold was printed using a resin with a working temperature below 230 °C. The melted Cerrobend liquid at 120 °C was cast into the resin mold to yield a block with a thickness of 7.4 cm. The grid had a hexagonal pattern, with a pencil beam diameter of 1.4 cm at the isocenter plane; the distance between the beam centers was 2 cm. The dosimetric properties of the grid block were studied using radiographic film and small field dosimeters. Results: The grid block was fabricated to be mounted at the third accessory mount of a Siemens Oncor linear accelerator. Fabricating a grid block using 3D printing is similar to making cutouts for traditional radiotherapy photon blocks, with the difference being that the mold was created by a 3D printer rather than foam. In this study, the valley-to-peak ratio for a 6 MV photon grid beam was 20% at dmax and 30% at 10 cm depth. Conclusion: We have demonstrated a novel process for implementing grid radiotherapy using 3D printing techniques. Compared to existing approaches, our technique combines reduced cost, accessibility, and flexibility in customization with efficient delivery. This lays the groundwork for future studies to improve our understanding of the efficacy of grid therapy and apply it to improve cancer treatment.
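The valley-to-peak ratio quoted above is simply the minimum over the maximum of the dose profile measured across the grid pattern. A minimal sketch (the profile values below are illustrative, not the study's measurements):

```python
def valley_to_peak_ratio(profile):
    """Valley-to-peak ratio (min/max) of a 1D dose profile across a grid field."""
    return min(profile) / max(profile)
```

For an illustrative profile [100, 20, 100] the ratio is 0.2, i.e. the 20% figure of the kind reported at dmax.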
Energy Systems Integration: Demonstrating Distribution Feeder Voltage Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-01-01
Overview fact sheet about the Smarter Grid Solutions Integrated Network Testbed for Energy Grid Research and Technology Experimentation (INTEGRATE) project at the Energy Systems Integration Facility. INTEGRATE is part of the U.S. Department of Energy's Grid Modernization Initiative.
Energy Systems Integration: Demonstrating Distributed Grid-Edge Control Hierarchy
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-01-01
Overview fact sheet about the OMNETRIC Group Integrated Network Testbed for Energy Grid Research and Technology Experimentation (INTEGRATE) project at the Energy Systems Integration Facility. INTEGRATE is part of the U.S. Department of Energy's Grid Modernization Initiative.
Power Market Design | Grid Modernization | NREL
NREL researchers, in a team spanning power systems and economics, are developing a modeling platform to test a commercial electricity production simulation model and FESTIV (an NREL-developed flexible energy scheduling tool) as part of the Grid Market Design Project.
Grid Standards and Codes | Grid Modernization | NREL
Simulations that take advantage of advanced concepts such as hardware-in-the-loop testing support the validation of methods and solutions. Under the Accelerating Systems Integration Standards project, the goal is to develop streamlined and accurate methods for New York utilities.
Data Management System for the National Energy-Water System (NEWS) Assessment Framework
NASA Astrophysics Data System (ADS)
Corsi, F.; Prousevitch, A.; Glidden, S.; Piasecki, M.; Celicourt, P.; Miara, A.; Fekete, B. M.; Vorosmarty, C. J.; Macknick, J.; Cohen, S. M.
2015-12-01
Aiming at providing a comprehensive assessment of the water-energy nexus, the National Energy-Water System (NEWS) project requires the integration of data to support a modeling framework that links climate, hydrological, power production, transmission, and economic models. Large amounts of georeferenced data have to be streamed to the components of the inter-disciplinary model to explore future challenges and tradeoffs in US power production, based on climate scenarios, power plant locations and technologies, available water resources, ecosystem sustainability, and economic demand. We used open-source and in-house-built software components to build a system that addresses two major data challenges: on-the-fly re-projection, re-gridding, interpolation, extrapolation, nodata patching, merging, and temporal and spatial aggregation of static and time-series datasets, in virtually any file format and file structure and over any geographic extent, for the models' I/O directly at run time; and comprehensive data management based on metadata cataloguing and discovery in repositories utilizing the MAGIC Table (Manipulation and Geographic Inquiry Control database). This innovative concept allows models to access data on-the-fly by data ID, irrespective of file path, file structure, and file format, and regardless of its GIS specifications. In addition, a web-based information and computational system is being developed to control the I/O of spatially distributed Earth system, climate, hydrological, power grid, and economic data flow within the NEWS framework. The system allows scenario building, data exploration, visualization, querying, and manipulation of any loaded gridded, point, or vector polygon dataset. The system has demonstrated its potential for applications in other fields of Earth science modeling, education, and outreach. Over time, this implementation of the system will provide near real-time assessment of various current and future scenarios of the water-energy nexus.
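On-the-fly re-gridding of the kind described can be illustrated with bilinear interpolation from a source grid onto a target grid covering the same extent. This is a generic sketch in pure Python, not the NEWS implementation, and it assumes unit cell spacing on the source grid.

```python
import math

def bilinear(grid, x, y):
    """Bilinear interpolation on a 2D grid indexed grid[row][col] at the
    fractional location (x, y), where x is the column and y the row."""
    j0, i0 = int(math.floor(y)), int(math.floor(x))
    j1 = min(j0 + 1, len(grid) - 1)      # clamp at the grid edge
    i1 = min(i0 + 1, len(grid[0]) - 1)
    ty, tx = y - j0, x - i0
    top = grid[j0][i0] * (1 - tx) + grid[j0][i1] * tx
    bot = grid[j1][i0] * (1 - tx) + grid[j1][i1] * tx
    return top * (1 - ty) + bot * ty

def regrid(src, nrows, ncols):
    """Resample `src` onto an nrows-by-ncols target grid spanning the same extent."""
    sy = (len(src) - 1) / (nrows - 1)
    sx = (len(src[0]) - 1) / (ncols - 1)
    return [[bilinear(src, i * sx, j * sy) for i in range(ncols)]
            for j in range(nrows)]
```

For the 2x2 source [[0, 1], [2, 3]], the center of the upsampled 3x3 grid is the mean of the four corners, 1.5.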
Spaceflight Operations Services Grid Prototype
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Mehrotra, Piyush; Lisotta, Anthony
2004-01-01
NASA over the years has developed many types of technologies and conducted various types of science, resulting in numerous variations of operations, data and applications. For example, operations range from deep space projects managed by JPL, Saturn and Shuttle operations managed from JSC and KSC, ISS science operations managed from MSFC, and numerous low earth orbit satellites managed from GSFC, which are varied and intrinsically different but require many of the same types of services to fulfill their missions. Also, large data sets (databases) of Shuttle flight data, solar system projects and earth observing data exist which, because of their varied and sometimes outdated technologies, have not been fully examined for additional information and knowledge. Many of the applications/systems supporting operational services (e.g. voice, video, telemetry and commanding) are outdated and obsolete. The vast amounts of data are located in various formats, at various locations, and range over many years. The ability to conduct unified space operations, access disparate data sets and develop systems and services that can provide operational services does not currently exist in any useful form. In addition, adding new services to existing operations is generally expensive and, with the current budget constraints, not feasible on any broad level of implementation. To understand these services, a discussion of each follows. The Spaceflight User-based Services are those services required to conduct space flight operations. Grid Services are those Grid services that will be used to overcome, through middleware software, some or all of the problems that currently exist. In addition, Network Services will be discussed briefly. Network Services are crucial to any type of remedy and are evolving adequately to support any technology currently in development.
Interoperability Is Key to Smart Grid Success - Continuum Magazine | NREL
Ever wonder what makes it possible to withdraw money securely from another bank's ATM, or place a phone call across networks? Standards. Just as standardized communication allows access to money and phone calls nationwide, the Smart Grid, an automated electric power network, depends on interoperability standards.
76 FR 70435 - Combined Notice of Filings #1
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-14
... Agreement No. 1743 between National Grid, NYPA, PEP, and MI to be effective 1/1/2012. Filed Date: 11/02/2011...
The QUANTGRID Project (RO)—Quantum Security in GRID Computing Applications
NASA Astrophysics Data System (ADS)
Dima, M.; Dulea, M.; Petre, M.; Petre, C.; Mitrica, B.; Stoica, M.; Udrea, M.; Sterian, R.; Sterian, P.
2010-01-01
The QUANTGRID Project, financed through the National Center for Programme Management (CNMP-Romania), is the first attempt at using Quantum Crypted Communications (QCC) in large-scale operations, such as GRID Computing, and conceivably in the years ahead in the banking sector and other security-tight communications. In relation with the GRID activities of the Center for Computing & Communications (Nat.'l Inst. Nucl. Phys.—IFIN-HH), the Quantum Optics Lab (Nat.'l Inst. Plasma and Lasers—INFLPR) and the Physics Dept. (University Polytechnica—UPB), the project will build a demonstrator infrastructure for this technology. The status of the project in its incipient phase is reported, featuring tests of communications in classical security mode: socket-level communications under AES (Advanced Encryption Standard), both in proprietary C++ code. An outline of the planned undertaking of the project is communicated, highlighting its impact on quantum physics, coherent optics and information technology.
NASA Astrophysics Data System (ADS)
Hou, C. Y.; Dattore, R.; Peng, G. S.
2014-12-01
The National Center for Atmospheric Research's Global Climate Four-Dimensional Data Assimilation (CFDDA) Hourly 40km Reanalysis dataset is a dynamically downscaled dataset with high temporal and spatial resolution. The dataset contains three-dimensional hourly analyses in netCDF format for the global atmospheric state from 1985 to 2005 on a 40 km horizontal grid (0.4° grid increment) with 28 vertical levels, providing good representation of local forcing and diurnal variation of processes in the planetary boundary layer. This project aimed to make the dataset publicly available, accessible, and usable in order to provide a unique resource to allow and promote studies of new climate characteristics. When the curation project started, it had been five years since the data files were generated. Also, although the Principal Investigator (PI) had generated a user document at the end of the project in 2009, the document had not been maintained. Furthermore, the PI had moved to a new institution, and the remaining team members were reassigned to other projects. These factors made data curation especially challenging in the areas of verifying data quality, harvesting metadata descriptions, and documenting provenance information. As a result, the project's curation process found that: the data curator's skill and knowledge helped make decisions, such as on file format, structure, and workflow documentation, that had a significant, positive impact on the ease of the dataset's management and long-term preservation; use of data curation tools, such as the Data Curation Profiles Toolkit's guidelines, revealed important information for promoting the data's usability and enhancing preservation planning; and involving data curators during each stage of the data curation life cycle, instead of only at the end, could improve the curation process's efficiency.
Overall, the project showed that proper resources invested in the curation process would give datasets the best chance to fulfill their potential to help with new climate pattern discovery.
Distributed data analysis in ATLAS
NASA Astrophysics Data System (ADS)
Nilsson, Paul; Atlas Collaboration
2012-12-01
Data analysis using grid resources is one of the fundamental challenges to be addressed before the start of LHC data taking. The ATLAS detector will produce petabytes of data per year, and roughly one thousand users will need to run physics analyses on this data. Appropriate user interfaces and helper applications have been made available to ensure that the grid resources can be used without requiring expertise in grid technology. These tools enlarge the number of grid users from a few production administrators to potentially all participating physicists. ATLAS makes use of three grid infrastructures for distributed analysis: the EGEE sites, the Open Science Grid, and NorduGrid. These grids are managed by the gLite workload management system, the PanDA workload management system, and ARC middleware; many sites can be accessed via both the gLite WMS and PanDA. Users can choose between two front-end tools to access the distributed resources. Ganga is a tool co-developed with LHCb to provide a common interface to the multitude of execution backends (local, batch, and grid). The PanDA workload management system provides a set of utilities called PanDA Client; with these tools users can easily submit Athena analysis jobs to the PanDA-managed resources. Distributed data is managed by Don Quijote 2 (DQ2), a system developed by ATLAS; DQ2 is used to replicate datasets according to the data distribution policies and maintains a central catalog of file locations. The operation of the grid resources is continually monitored by the GangaRobot functional testing system, and infrequent site stress tests are performed using the HammerCloud system. In addition, the DAST shift team is a group of power users who take shifts to provide distributed analysis user support; this team has effectively relieved the burden of support from the developers.
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Dasilva, Arindo M.
2012-01-01
Reanalyses have become important sources of data in weather and climate research. While observations are the most crucial component of these systems, few research projects consider carefully the multitudes of assimilated observations and their impact on the results. This is partly due to the diversity of observations and their individual complexity, but also due to the unfriendly nature of the data formats. Here, we discuss the NASA Modern-Era Retrospective analysis for Research and Applications (MERRA) and a companion dataset, the Gridded Innovations and Observations (GIO). GIO is simply a post-processing of the assimilated observations and their innovations (forecast error and analysis error) to a common spatio-temporal grid, following that of the MERRA analysis fields. This data includes in situ, retrieved, and radiance observations that are assimilated and used in the reanalysis. While all these disparate observations and statistics are in a uniform, easily accessible format, there are some limitations. Similar observations are binned to the grid, so that multiple observations are combined in the gridding process; the data is thus implicitly thinned. Some details in the metadata may also be lost (e.g., aircraft or station ID). Nonetheless, the gridded observations should provide easy access to all the observations input to the reanalysis. As an example of the GIO data, a case study evaluating observing systems and statistics over the United States is presented, demonstrating the evaluation of the observations and the data assimilation. The GIO data is used to collocate 200 mb radiosonde and aircraft temperature measurements from 1979 to 2009. A known warm bias of the aircraft measurements is apparent compared to the radiosonde data. However, when larger quantities of aircraft data are available, they dominate the analysis and the radiosonde data become biased against the forecast.
When AMSU radiances become available the radiosonde and aircraft analysis and forecast error take on an annual cycle. While this supports results of previous work that recommend bias corrections for the aircraft measurements, the interactions with AMSU radiances will also require further investigation. This also provides an example for reanalysis users in examining the available observations and their impact on the analysis. GIO data is presently available alongside the MERRA reanalysis.
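The gridding step GIO performs can be pictured with a minimal sketch (an illustration only, not the GIO processing code): point observations falling in the same latitude-longitude cell are averaged, which is exactly the implicit thinning the abstract notes. Cell sizes and values below are made up.

```python
# Minimal sketch (not the GIO code): bin point observations onto a regular
# latitude-longitude grid by averaging everything that falls in a cell.
from collections import defaultdict

def grid_observations(obs, dlat=2.0, dlon=2.5):
    """obs: iterable of (lat, lon, value); returns {(i, j): mean value},
    where (i, j) indexes the cell containing the observation."""
    sums = defaultdict(lambda: [0.0, 0])
    for lat, lon, val in obs:
        cell = (int(lat // dlat), int(lon // dlon))
        sums[cell][0] += val
        sums[cell][1] += 1
    return {cell: s / n for cell, (s, n) in sums.items()}

obs = [(40.1, -100.2, 220.0),   # two reports landing in the same cell...
       (40.9, -100.9, 222.0),
       (46.0, -100.2, 218.0)]   # ...and one in a different cell
gridded = grid_observations(obs)
print(gridded)  # two cells; the first holds the 221.0 average
```

The averaging is the "implicit thinning": the two collocated reports survive only as one gridded mean, and per-report metadata such as station ID is lost.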
Grid point extraction and coding for structured light system
NASA Astrophysics Data System (ADS)
Song, Zhan; Chung, Ronald
2011-09-01
A structured light system simplifies three-dimensional reconstruction by projecting a specially designed pattern onto the target object, thereby generating a distinct texture on it for imaging and further processing. Success of the system hinges upon what features are to be coded in the projected pattern, extracted in the captured image, and matched between the projector's display panel and the camera's image plane. The codes have to be such that they are largely preserved in the image data upon illumination from the projector, reflection from the target object, and projective distortion in the imaging process. The features also need to be reliably extracted in the image domain. In this article, a two-dimensional pseudorandom pattern consisting of rhombic color elements is proposed, and the grid points between the pattern elements are chosen as the feature points. We describe how a type classification of the grid points plus the pseudorandomness of the projected pattern can equip each grid point with a unique label that is preserved in the captured image. We also present a grid point detector that extracts the grid points without the need of segmenting the pattern elements, and that localizes the grid points with subpixel accuracy. Extensive experiments illustrate that, with the proposed pattern feature definition and feature detector, more feature points can be reconstructed, with higher accuracy, in comparison with existing pseudorandomly encoded structured light systems.
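The labelling idea can be illustrated with a toy sketch: if every k-by-k window of pattern symbols is unique (a "perfect map" style construction), then the window surrounding a grid point serves as its label. The 4x4 binary array below is a hand-made example whose nine 2x2 windows are all distinct; it is not the rhombic color pattern of the paper.

```python
# Illustrative sketch of pseudorandom pattern labelling: when every k-by-k
# window is unique, a grid point is identified by the window around it.
def windows(a, k=2):
    """Enumerate every k-by-k window of 2D array `a` as a flat tuple."""
    rows, cols = len(a), len(a[0])
    return [tuple(a[r + i][c + j] for i in range(k) for j in range(k))
            for r in range(rows - k + 1) for c in range(cols - k + 1)]

pattern = [[0, 0, 1, 1],        # hand-made toy pattern, not the paper's
           [0, 1, 0, 1],
           [1, 1, 0, 0],
           [1, 0, 1, 0]]
labels = windows(pattern)
print(len(labels), len(set(labels)))  # 9 9 -> every window is a unique label
```

In the paper the symbols are colored rhombi and the label also folds in a type classification of the grid point, which makes the decoding robust to imaging distortion.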
Service engineering for grid services in medicine and life science.
Weisbecker, Anette; Falkner, Jürgen
2009-01-01
Clearly defined services with appropriate business models are necessary in order to exploit the benefit of grid computing for industrial and academic users in medicine and life sciences. In the project Services@MediGRID the service engineering approach is used to develop those clearly defined grid services and to provide sustainable business models for their usage.
NREL Research Proves Wind Can Provide Ancillary Grid Fault Response | News
NREL's controllable grid interface (CGI) test facility simulates the real-time conditions of a utility-scale power grid. It launched an ongoing, Energy Department-funded research effort to test wind turbine equipment under any possible grid fault condition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
2014-09-30
The Maui Smart Grid Project (MSGP) is under the leadership of the Hawaii Natural Energy Institute (HNEI) of the University of Hawaii at Manoa. The project team includes Maui Electric Company, Ltd. (MECO), Hawaiian Electric Company, Inc. (HECO), Sentech (a division of SRA International, Inc.), Silver Spring Networks (SSN), Alstom Grid, Maui Economic Development Board (MEDB), University of Hawaii-Maui College (UHMC), and the County of Maui. MSGP was supported by the U.S. Department of Energy (DOE) under Cooperative Agreement Number DE-FC26-08NT02871, with approximately 50% co-funding supplied by MECO. The project was designed to develop and demonstrate an integrated monitoring, communications, database, applications, and decision support solution that aggregates renewable energy (RE), other distributed generation (DG), energy storage, and demand response technologies in a distribution system to achieve both distribution and transmission-level benefits. The application of these new technologies and procedures will increase MECO's visibility into system conditions, with the expected benefits of enabling more renewable energy resources to be integrated into the grid, improving service quality, increasing overall reliability of the power system, and ultimately reducing costs to both MECO and its customers.
NASA Technical Reports Server (NTRS)
Chan, William M.
1993-01-01
An enhanced grid system for the Space Shuttle Orbiter was built by integrating CAD definitions from several sources and then generating the surface and volume grids. The new grid system contains geometric components not modeled previously, plus significant enhancements to geometry already modeled in the old grid system. The new orbiter grids were then integrated with new grids for the rest of the launch vehicle. Enhancements were made to the hyperbolic grid generator HYPGEN, and new tools were developed for grid projection, manipulation, and modification; Cartesian box and far-field grid generation; and post-processing of flow solver data.
A VO-Driven Astronomical Data Grid in China
NASA Astrophysics Data System (ADS)
Cui, C.; He, B.; Yang, Y.; Zhao, Y.
2010-12-01
With the implementation of many ambitious observation projects, including LAMOST, FAST, and the Antarctic observatory at Dome A, observational astronomy in China is stepping into a brand new era with an emerging data avalanche. In the era of e-Science, both these cutting-edge projects and traditional astronomy research need much more powerful data management, sharing, and interoperability. Based on the data-grid concept and taking advantage of IVOA interoperability technologies, China-VO is developing a VO-driven astronomical data grid environment to enable multi-wavelength science and large database science. In this paper, the latest progress and data flow of LAMOST, the architecture of the data grid, and its support for the VO are discussed.
NASA Technical Reports Server (NTRS)
Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.
1993-01-01
We consider the problem of image reconstruction from a finite number of projections over the space L^1(Ω), where Ω is a compact subset of R^2. We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
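For intuition, maximum-entropy reconstruction from two orthogonal projections can be sketched with iterative proportional fitting, whose fixed point is the maximum-entropy image matching the given row and column sums. This is a toy illustration of the variational principle only, not the paper's Fenchel-dual multilevel method.

```python
def ipf(rows, cols, iters=50):
    """Iterative proportional fitting: alternately rescale the rows and
    columns of a positive image until both sets of projection sums match.
    The fixed point is the maximum-entropy image with those projections."""
    n, m = len(rows), len(cols)
    x = [[1.0] * m for _ in range(n)]      # flat (max entropy) start
    for _ in range(iters):
        for i in range(n):                 # match row sums
            s = sum(x[i])
            x[i] = [v * rows[i] / s for v in x[i]]
        for j in range(m):                 # match column sums
            s = sum(x[i][j] for i in range(n))
            for i in range(n):
                x[i][j] *= cols[j] / s
    return x

img = ipf(rows=[3.0, 1.0], cols=[2.0, 2.0])
print([round(sum(r), 6) for r in img])  # row sums -> [3.0, 1.0]
```

The solution here is [[1.5, 1.5], [0.5, 0.5]]: among all non-negative images with these projections, the entropy maximizer has the product form that IPF converges to.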
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-11
... good example of an enabling Smart Grid technology that can empower both utilities and consumers to... Information and Communication Technologies (ICT) sector by integrating broadband into the developing Smart...
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Xing, Z.
2007-12-01
The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources.
In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. SciFlo also publishes its own SOAP services for space/time query and subsetting of Earth Science datasets, and automated access to large datasets via lists of (FTP, HTTP, or DAP) URLs which point to on-line HDF or netCDF files. Typical distributed workflows obtain datasets by calling standard WMS/WCS servers or discovering and fetching data granules from ftp sites; invoke remote analysis operators available as SOAP services (interface described by a WSDL document); and merge results into binary containers (netCDF or HDF files) for further analysis using local executable operators. Naming conventions (HDFEOS and CF-1.0 for netCDF) are exploited to automatically understand and read on-line datasets. More interoperable conventions, and broader adoption of existing conventions, are vital if we are to "scale up" automated choreography of Web Services beyond toy applications. Recently, the ESIP Federation sponsored a collaborative activity in which several ESIP members developed collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, the benefits of doing collaborative science analysis at the "touch of a button" once services are connected, and further collaborations that are being pursued.
ARIANNA: A research environment for neuroimaging studies in autism spectrum disorders.
Retico, Alessandra; Arezzini, Silvia; Bosco, Paolo; Calderoni, Sara; Ciampa, Alberto; Coscetti, Simone; Cuomo, Stefano; De Santis, Luca; Fabiani, Dario; Fantacci, Maria Evelina; Giuliano, Alessia; Mazzoni, Enrico; Mercatali, Pietro; Miscali, Giovanni; Pardini, Massimiliano; Prosperi, Margherita; Romano, Francesco; Tamburini, Elena; Tosetti, Michela; Muratori, Filippo
2017-08-01
The complexity and heterogeneity of Autism Spectrum Disorders (ASD) require the implementation of dedicated analysis techniques to extract the most from the interrelationships among the many variables that describe affected individuals, spanning from clinical phenotypic characterization and genetic profile to structural and functional brain images. The ARIANNA project has developed a collaborative interdisciplinary research environment that is easily accessible to the community of researchers working on ASD (https://arianna.pi.infn.it). The main goals of the project are: to analyze neuroimaging data acquired in multiple sites with multivariate approaches based on machine learning; to detect structural and functional brain characteristics that distinguish individuals with ASD from control subjects; and to identify neuroimaging-based criteria to stratify the population with ASD to support the future development of personalized treatments. Secure data handling and storage are guaranteed within the project, as well as access to fast grid/cloud-based computational resources. This paper outlines the web-based architecture, the computing infrastructure, and the collaborative analysis workflows at the basis of the ARIANNA interdisciplinary working environment. It also demonstrates the full functionality of the research platform. The availability of this innovative working environment for analyzing clinical and neuroimaging information of individuals with ASD is expected to support researchers in disentangling complex data, thus facilitating their interpretation.
CILogon-HA. Higher Assurance Federated Identities for DOE Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basney, James
The CILogon-HA project extended the existing open source CILogon service (initially developed with funding from the National Science Foundation) to provide credentials at multiple levels of assurance to users of DOE facilities for collaborative science. CILogon translates mechanism and policy across higher education and grid trust federations, bridging from the InCommon identity federation (which federates university and DOE lab identities) to the Interoperable Global Trust Federation (which defines standards across the Worldwide LHC Computing Grid, the Open Science Grid, and other cyberinfrastructure). The CILogon-HA project expanded the CILogon service to support over 160 identity providers (including 6 DOE facilities) and 3 internationally accredited certification authorities. To provide continuity of operations upon the end of the CILogon-HA project period, project staff transitioned the CILogon service to operation by XSEDE.
Dordek, Yedidyah; Soudry, Daniel; Meir, Ron; Derdikman, Dori
2016-01-01
Many recent models study the downstream projection from grid cells to place cells, while recent data have pointed out the importance of the feedback projection. We thus asked how grid cells are affected by the nature of the input from the place cells. We propose a single-layer neural network with feedforward weights connecting place-like input cells to grid cell outputs. Place-to-grid weights are learned via a generalized Hebbian rule. The architecture of this network closely resembles neural networks used to perform Principal Component Analysis (PCA). Both numerical results and analytic considerations indicate that if the components of the feedforward neural network are non-negative, the output converges to a hexagonal lattice. Without the non-negativity constraint, the output converges to a square lattice. Consistent with experiments, the grid spacing ratio between the first two consecutive modules is ~1.4. Our results suggest a possible link between place-cell-to-grid-cell interactions and PCA. DOI: http://dx.doi.org/10.7554/eLife.10094.001 PMID:26952211
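The PCA connection can be illustrated with a single-output sketch: Oja's Hebbian rule drives a feedforward weight vector toward the first principal component of its inputs. This is a toy stand-in for the place-to-grid learning described above, not the paper's network (which applies a generalized Hebbian rule across many outputs and a non-negativity constraint).

```python
# Illustrative sketch: Oja's rule as single-neuron Hebbian PCA.
import random

random.seed(0)

def oja(steps=20000, lr=0.005):
    """Hebbian learning with Oja's normalisation:
        y = w . x;  w += lr * y * (x - y * w)
    which converges to the leading principal component of the inputs."""
    w = [0.5, -0.3]                    # arbitrary initial weights
    for _ in range(steps):
        s = random.gauss(0, 1.0)       # strong direction: (1, 1)/sqrt(2)
        n = random.gauss(0, 0.2)       # weak direction:   (1,-1)/sqrt(2)
        x = (s + n, s - n)
        y = w[0] * x[0] + w[1] * x[1]
        w = [w[0] + lr * y * (x[0] - y * w[0]),
             w[1] + lr * y * (x[1] - y * w[1])]
    return w

w = oja()
# cosine between w and the top eigenvector (1, 1)/sqrt(2)
alignment = abs(w[0] + w[1]) / (2 ** 0.5 * (w[0]**2 + w[1]**2) ** 0.5)
print(round(alignment, 2))  # ~1.0: w aligns with the top principal component
```

The "x - y*w" decay term is what keeps the weight vector normalized; without it a plain Hebbian update would grow without bound.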
GANGA: A tool for computational-task management and easy access to Grid resources
NASA Astrophysics Data System (ADS)
Mościcki, J. T.; Brochu, F.; Ebke, J.; Egede, U.; Elmsheuser, J.; Harrison, K.; Jones, R. W. L.; Lee, H. C.; Liko, D.; Maier, A.; Muraru, A.; Patrick, G. N.; Pajchel, K.; Reece, W.; Samset, B. H.; Slater, M. W.; Soroko, A.; Tan, C. L.; van der Ster, D. C.; Williams, M.
2009-11-01
In this paper, we present the computational task-management tool GANGA, which allows for the specification, submission, bookkeeping and post-processing of computational tasks on a wide set of distributed resources. GANGA has been developed to solve a problem increasingly common in scientific projects, which is that researchers must regularly switch between different processing systems, each with its own command set, to complete their computational tasks. GANGA provides a homogeneous environment for processing data on heterogeneous resources. We give examples from High Energy Physics, demonstrating how an analysis can be developed on a local system and then transparently moved to a Grid system for processing of all available data. GANGA has an API that can be used via an interactive interface, in scripts, or through a GUI. Specific knowledge about types of tasks or computational resources is provided at run-time through a plugin system, making new developments easy to integrate. We give an overview of the GANGA architecture, give examples of current use, and demonstrate how GANGA can be used in many different areas of science. Catalogue identifier: AEEN_v1_0 Program summary URL:
Smart Grid Development: Multinational Demo Project Analysis
NASA Astrophysics Data System (ADS)
Oleinikova, I.; Mutule, A.; Obushevs, A.; Antoskovs, N.
2016-12-01
This paper analyses demand side management (DSM) projects and stakeholders' experience with the aim of developing, promoting, and adapting smart grid technologies in Latvia. The research aims at identifying possible system services, including demand response (DR), and determining the appropriate market design for such services to be implemented at the Baltic power system level, with the cooperation of the distribution system operator (DSO) and transmission system operator (TSO). This paper is prepared as an extract from global smart grid best practices, smart solutions, and business models.
Spaceflight Operations Services Grid (SOSG) Prototype Implementation and Feasibility Study
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Thigpen, William W.; Lisotta, Anthony J.; Redman, Sandra
2004-01-01
The Spaceflight Operations Services Grid (SOSG) project is building a prototype grid-based environment that incorporates existing and new spaceflight services to provide current and future NASA programs with cost savings and new, evolvable methods to conduct science in a distributed environment. SOSG will provide a distributed environment for widely disparate organizations to conduct their systems and processes in a more efficient and cost-effective manner. These organizations include those that: 1) engage in space-based science and operations, 2) develop space-based systems and processes, and 3) conduct scientific research, bringing together disparate scientific disciplines like geology and oceanography to create new information. In addition, educational outreach will be significantly enhanced by providing to schools the same tools used by NASA, with the ability of the schools to actively participate on many levels in the science generated by NASA from space and on the ground. The services range from voice, video, and telemetry processing and display to data mining, high-level processing, and visualization tools, all accessible from a single portal. In this environment, users would not require high-end systems or processes at their home locations to use these services. Also, the user would need to know minimal details about the applications in order to utilize the services. In addition, security at all levels is an underlying goal of the project. The project will focus on four tools that are currently used by the ISS Payload community along with nine more that are new to the community. Under the prototype, four Grid virtual organizations (VOs) will be developed to represent four types of users: a Payload (experimenters) VO, a Flight Controllers VO, an Engineering and Science Collaborators VO, and an Education and Public Outreach VO.
The User-based services will be implemented to replicate the operational voice, video, telemetry, and commanding systems. Once the User-based services are in place, they will be analyzed to establish feasibility for Grid enabling; if feasible, each User-based service will be Grid enabled. The remaining non-Grid services, if not already Web-enabled, will be so enabled. In the end, four portals will be developed, one for each VO. Each portal will contain the appropriate User-based services required for that VO to operate.
Research on the effects of wind power grid to the distribution network of Henan province
NASA Astrophysics Data System (ADS)
Liu, Yunfeng; Zhang, Jian
2018-04-01
As traditional energy sources are depleted, regions across the nation are implementing policies, supported at the national level, to develop new energy sources for electricity generation. Wind is pollution-free and renewable, and it has become the most popular of the new energy sources for power generation. The development of wind power in Henan province started relatively late but has been rapid, and it has broad prospects. However, wind power is volatile and random: connecting it to the grid affects the stability and power quality of the distribution network, and some areas have had to curtail (abandon) wind generation. Studying grid-connected wind power and identifying improvement measures is therefore urgent. Energy storage, which can shift energy in time and space, can stabilize grid operation and improve power quality.
Incompressible flow simulations on regularized moving meshfree grids
NASA Astrophysics Data System (ADS)
Vasyliv, Yaroslav; Alexeev, Alexander
2017-11-01
A moving grid meshfree solver for incompressible flows is presented. To solve for the flow field, a semi-implicit approximate projection method is directly discretized on meshfree grids using General Finite Differences (GFD) with sharp interface stencil modifications. To maintain a regular grid, an explicit shift is used to relax compressed pseudosprings connecting a star node to its cloud of neighbors. The following test cases are used for validation: the Taylor-Green vortex decay, the analytic and modified lid-driven cavities, and an oscillating cylinder enclosed in a container for a range of Reynolds number values. We demonstrate that 1) the grid regularization does not impede the second order spatial convergence rate, 2) the Courant condition can be used for time marching but the projection splitting error reduces the convergence rate to first order, and 3) moving boundaries and arbitrary grid distortions can readily be handled. Financial support provided by the National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.
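The pseudospring regularization step can be illustrated in one dimension (a sketch under simplified assumptions, not the paper's 2D scheme): springs shorter than their rest length push their endpoint nodes apart, so clustered nodes spread toward uniform spacing. The rest length, stiffness, and node positions below are made up.

```python
# Illustrative 1D sketch of pseudospring grid regularization: compressed
# springs between neighbouring nodes push the nodes apart until the
# spacing evens out. Endpoints are held fixed.
def relax(nodes, rest=1.0, k=0.4, sweeps=200):
    """Shift interior nodes along compressed pseudosprings: each spring
    shorter than `rest` pushes its two endpoints apart by k * compression."""
    x = list(nodes)
    for _ in range(sweeps):
        f = [0.0] * len(x)
        for i in range(len(x) - 1):
            c = rest - (x[i + 1] - x[i])     # compression of spring i
            if c > 0:                        # only compressed springs act
                f[i] -= 0.5 * k * c
                f[i + 1] += 0.5 * k * c
        for i in range(1, len(x) - 1):       # endpoints stay fixed
            x[i] += f[i]
    return x

x = relax([0.0, 0.2, 0.5, 2.1, 3.0])         # nodes clustered near the left
gaps = [round(b - a, 2) for a, b in zip(x, x[1:])]
print(gaps)  # spacing far more uniform than the initial 0.2, 0.3, 1.6, 0.9
```

In the actual solver the shift is applied in 2D between a star node and its cloud of neighbors, and the flow field is then corrected for the grid motion.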
Experiences of engineering Grid-based medical software.
Estrella, F; Hauer, T; McClatchey, R; Odeh, M; Rogulin, D; Solomonides, T
2007-08-01
Grid-based technologies are emerging as potential solutions for managing distributed resources and supporting collaboration in the biomedical domain. Few examples exist, however, of successful implementations of Grid-enabled medical systems, and even fewer have been deployed for evaluation in practice. The objective of this paper is to evaluate the use in clinical practice of a Grid-based imaging prototype and to establish directions for engineering future medical Grid developments and their subsequent deployment. The MammoGrid project has deployed a prototype system for clinicians using the Grid as its information infrastructure. To assist in the specification of the system requirements (and for the first time in healthgrid applications), use-case modelling has been carried out in close collaboration with clinicians and radiologists who had no prior experience of this modelling technique. A critical qualitative and, where possible, quantitative analysis of the MammoGrid prototype is presented, leading to a set of recommendations from the delivery of the first deployed Grid-based medical imaging application. We report critically on the application of software engineering techniques in the specification and implementation of the MammoGrid project and show that use-case modelling is a suitable vehicle for representing medical requirements and for communicating effectively with the clinical community. This paper also discusses the practical advantages and limitations of applying the Grid to real-life clinical applications and presents the consequent lessons learned. The work presented in this paper demonstrates that, given suitable commitment from collaborating radiologists, it is practical to deploy medical imaging analysis applications on the Grid in practice, but that standardization and stability of the Grid software are a necessary prerequisite for successful healthgrids.
The MammoGrid prototype has therefore paved the way for further advanced Grid-based deployments in the medical and biomedical domains.
NASA Technical Reports Server (NTRS)
Chow, Edward T.; Stewart, Helen; Korsmeyer, David (Technical Monitor)
2003-01-01
The biggest users of GRID technologies have come from the science and technology communities, spanning government, industry, and academia (national and international). The NASA GRID is moving into a higher technology readiness level (TRL) today; and as a joint effort among these leaders within government, academia, and industry, the NASA GRID plans to extend availability to enable scientists and engineers to collaborate across geographical boundaries to solve important problems facing the world in the 21st century. In order to enable NASA programs and missions to use IPG resources for program and mission design, the IPG capabilities need to be accessible from inside the NASA center networks. However, because different NASA centers maintain different security domains, GRID penetration across different firewalls is a concern for center security personnel. This is the reason why some IPG resources have been separated from the NASA center network. Also, because of center network security and ITAR concerns, the NASA IPG resource owner may not have full control over who can access remotely from outside the NASA center. In order to obtain organizational approval for secured remote access, the IPG infrastructure needs to be adapted to work with the NASA business process. Improvements need to be made before the IPG can be used for NASA program and mission development. The Secured Advanced Federated Environment (SAFE) technology is designed to provide federated security across NASA center and NASA partner security domains. Instead of one giant center firewall, which can be difficult to modify for different GRID applications, the SAFE "micro security domain" provides a large number of professionally managed "micro firewalls" that allow NASA centers to accept remote IPG access without the worry of damaging other center resources.
The SAFE policy-driven capability-based federated security mechanism can enable joint organizational and resource owner approved remote access from outside of NASA centers. A SAFE enabled IPG can enable IPG capabilities to be available to NASA mission design teams across different NASA center and partner company firewalls. This paper will first discuss some of the potential security issues for IPG to work across NASA center firewalls. We will then present the SAFE federated security model. Finally we will present the concept of the architecture of a SAFE enabled IPG and how it can benefit NASA mission development.
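The policy-driven, capability-based check can be pictured with a toy sketch: a micro security domain admits a remote request only when both the organization's policy and the resource owner's grant cover the requested capability. All names, domains, and policy entries below are hypothetical illustrations, not the SAFE implementation.

```python
# Illustrative sketch only (not the SAFE implementation): a "micro security
# domain" admits a remote request only when the organisation's policy AND
# the resource owner's grant both cover the requested capability.
ORG_POLICY = {("ames.ipg", "run-job"),        # hypothetical policy entries
              ("ames.ipg", "read-data")}
OWNER_GRANTS = {"alice@jpl": {("ames.ipg", "run-job")}}   # hypothetical user

def admit(user, domain, action):
    """Joint check: organisational approval and owner approval must agree."""
    capability = (domain, action)
    return (capability in ORG_POLICY
            and capability in OWNER_GRANTS.get(user, set()))

print(admit("alice@jpl", "ames.ipg", "run-job"))    # True: both approve
print(admit("alice@jpl", "ames.ipg", "read-data"))  # False: no owner grant
```

The point of the two-sided check is the one the paper makes: the resource owner can open a narrow capability without the center having to loosen a monolithic firewall.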
NASA Astrophysics Data System (ADS)
Konopko, Joanna
2015-12-01
A decentralized energy system is a relatively new approach in the power industry. Decentralized energy systems provide promising opportunities for deploying renewable energy sources locally available as well as for expanding access to clean energy services to remote communities. The electricity system of the future must produce and distribute electricity that is reliable and affordable. To accomplish these goals, both the electricity grid and the existing regulatory system must be smarter. In this paper, the major issues and challenges in distributed systems for smart grid are discussed and future trends are presented. The smart grid technologies and distributed generation systems are explored. A general overview of the comparison of the traditional grid and smart grid is also included.
NASA Astrophysics Data System (ADS)
O'Kuinghttons, Ryan; Koziol, Benjamin; Oehmke, Robert; DeLuca, Cecelia; Theurich, Gerhard; Li, Peggy; Jacob, Joseph
2016-04-01
The Earth System Modeling Framework (ESMF) Python interface (ESMPy) supports analysis and visualization in Earth system modeling codes by providing access to a variety of tools for data manipulation. ESMPy started as a Python interface to the ESMF grid remapping package, which provides mature, robust, high-performance, and scalable grid remapping between 2D and 3D logically rectangular and unstructured grids and sets of unconnected data. ESMPy now also interfaces with OpenClimateGIS (OCGIS), a package that performs subsetting, reformatting, and computational operations on climate datasets. ESMPy exposes a subset of ESMF grid remapping utilities. This includes bilinear, finite element patch recovery, first-order conservative, and nearest neighbor grid remapping methods. There are also options to ignore unmapped destination points, mask points on source and destination grids, and provide grid structure in the polar regions. Grid remapping on the sphere takes place in 3D Cartesian space, so the pole problem is not an issue as it can be with other grid remapping software. Remapping can be done between any combination of 2D and 3D logically rectangular and unstructured grids with overlapping domains. Grid pairs where one side of the regridding is represented by an appropriate set of unconnected data points, as is commonly found with observational data streams, are also supported. There is a developing interoperability layer between ESMPy and OCGIS. OCGIS is a pure Python, open source package designed for geospatial manipulation, subsetting, and computation on climate datasets stored in local NetCDF files or accessible remotely via the OPeNDAP protocol. Interfacing with OCGIS has brought GIS-like functionality to ESMPy (i.e., subsetting, coordinate transformations) as well as additional file output formats (i.e., CSV, ESRI Shapefile). ESMPy is distinguished by its strong emphasis on open source, community governance, and distributed development.
The user base has grown quickly, and the package is integrating with several other software tools and frameworks. These include the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT), Iris, PyFerret, cf-python, and the Community Surface Dynamics Modeling System (CSDMS). ESMPy's minimum requirements are Python 2.6, NumPy 1.6.1 and an ESMF installation. Optional dependencies include NetCDF and the OCGIS-related dependencies GDAL, Shapely, and Fiona. ESMPy is regression tested nightly and supported on Darwin, Linux and Cray systems with the GNU compiler suite and MPI communications. OCGIS is supported on Linux and also undergoes nightly regression testing. Both packages are installable from Anaconda channels. Upcoming development plans for ESMPy involve a higher-order conservative grid remapping method. Future OCGIS development will focus on mesh and location stream interoperability and streamlined access to ESMPy's MPI implementation.
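To make concrete what the simplest of these remapping methods does, the following is a minimal plain-Python sketch of bilinear remapping from a small logically rectangular source grid onto unconnected destination points. This is a conceptual illustration, not the ESMPy API; the grid coordinates and field values are made up.

```python
# Conceptual sketch of bilinear grid remapping (not the ESMPy API):
# each destination point receives a distance-weighted combination of the
# four surrounding source-grid values.

def bilinear_remap(xs, ys, field, dst_pts):
    """Interpolate field[i][j], defined at source coords (xs[i], ys[j]),
    onto a list of (x, y) destination points."""
    out = []
    for x, y in dst_pts:
        # locate the source cell containing the destination point
        i = max(k for k in range(len(xs) - 1) if xs[k] <= x)
        j = max(k for k in range(len(ys) - 1) if ys[k] <= y)
        tx = (x - xs[i]) / (xs[i + 1] - xs[i])   # fractional position in cell
        ty = (y - ys[j]) / (ys[j + 1] - ys[j])
        # weighted combination of the four corner values
        v = (field[i][j] * (1 - tx) * (1 - ty)
             + field[i + 1][j] * tx * (1 - ty)
             + field[i][j + 1] * (1 - tx) * ty
             + field[i + 1][j + 1] * tx * ty)
        out.append(v)
    return out

xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0]
field = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]        # f(x, y) = x
print(bilinear_remap(xs, ys, field, [(0.5, 0.5), (1.5, 0.25)]))  # [0.5, 1.5]
```

Conservative and patch-recovery methods replace the four-corner weights with area-overlap or patch-fit weights, but the remap-as-weighted-sum structure is the same.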
NASA Astrophysics Data System (ADS)
Aktas, Mehmet; Aydin, Galip; Donnellan, Andrea; Fox, Geoffrey; Granat, Robert; Grant, Lisa; Lyzenga, Greg; McLeod, Dennis; Pallickara, Shrideep; Parker, Jay; Pierce, Marlon; Rundle, John; Sayar, Ahmet; Tullis, Terry
2006-12-01
We describe the goals and initial implementation of the International Solid Earth Virtual Observatory (iSERVO). This system is built using a Web Services approach to Grid computing infrastructure and is accessed via a component-based Web portal user interface. We describe our implementations of services used by this system, including Geographical Information System (GIS)-based data grid services for accessing remote data repositories and job management services for controlling multiple execution steps. iSERVO is an example of a larger trend to build globally scalable scientific computing infrastructures using the Service Oriented Architecture approach. Adoption of this approach raises a number of research challenges in millisecond-latency message systems suitable for internet-enabled scientific applications. We review our research in these areas.
LTE-advanced random access mechanism for M2M communication: A review
NASA Astrophysics Data System (ADS)
Mustafa, Rashid; Sarowa, Sandeep; Jaglan, Reena Rathee; Khan, Mohammad Junaid; Agrawal, Sunil
2016-03-01
Machine Type Communication (MTC) enables one or more self-sufficient machines to communicate directly with one another without human intervention. MTC applications include the smart grid, security, e-Health and intelligent automation systems. To support huge numbers of MTC devices, one of the challenging issues is to provide an efficient way for them to access the network while minimizing network overload. In this article, the overload control mechanisms for random access are reviewed, which aim to avoid congestion on the random access channel (RACH) caused by MTC devices. Past and present wireless technologies have been engineered for Human-to-Human (H2H) communication, in particular for the transmission of voice. Consequently, Long Term Evolution (LTE)-Advanced is expected to play a central role in Machine-to-Machine (M2M) communication while remaining well suited to H2H communication. The distinct characteristics of M2M communication create challenges different from those of H2H communication. In this article, we investigate the impact of massive numbers of M2M terminals attempting random access to LTE-Advanced all at once, and we discuss and review the solutions proposed by the Third Generation Partnership Project (3GPP) to alleviate the overload problem. Finally, we evaluate and compare solutions that can effectively eliminate congestion on the random access channel for M2M communication without affecting H2H communication.
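One of the 3GPP overload controls reviewed in this literature is access class barring (ACB): before transmitting a RACH preamble, each device draws a random number and proceeds only if it falls below a broadcast barring factor. A toy sketch of one ACB round (device counts and the barring factor are illustrative, not from any 3GPP specification):

```python
# Toy sketch of access class barring (ACB) for RACH overload control:
# each device draws a uniform random number; only devices whose draw is
# below the barring factor may attempt random access this round.
import random

def acb_round(n_devices, barring_factor, rng):
    """Return the indices of devices allowed to attempt random access."""
    return [d for d in range(n_devices) if rng.random() < barring_factor]

rng = random.Random(42)            # seeded for reproducibility
allowed = acb_round(1000, 0.3, rng)
print(len(allowed))                # roughly 30% of the 1000 devices
```

Lowering the barring factor during congestion thins out simultaneous access attempts at the cost of added access delay for the barred devices.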
Recent development of the Multi-Grid detector for large area neutron scattering instruments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerard, Bruno
2015-07-01
Most neutron scattering facilities are committed to a continuous program of modernization of their instruments, requiring large-area, high-performance thermal neutron detectors. Besides scintillator detectors, ³He detectors, like linear PSDs (Position Sensitive Detectors) and MWPCs (Multi-Wire Proportional Chambers), are the most widely used techniques today. Time-of-flight instruments use ³He PSDs mounted side by side to cover tens of m². As a result of the so-called '³He shortage crisis', the volume of ³He needed to build one of these instruments is no longer accessible. The development of alternative techniques requiring no ³He has been given high priority to secure the future of neutron scattering instrumentation. This is particularly important in the context where the future ESS (European Spallation Source) will start its operation in 2019-2020. Improved scintillators represent one of the alternative techniques. Another one is the Multi-Grid detector, introduced at the ILL in 2009. A Multi-Grid detector is composed of several independent modules of typically 0.8 m x 3 m sensitive area, mounted side by side in air or in a vacuum TOF chamber. One module is composed of segmented boron-lined proportional counters mounted in a gas vessel; the counters, of square section, are assembled from aluminium grids that are electrically insulated and stacked together. This design provides two advantages: first, magnetron sputtering techniques can be used to coat B₄C films on planar substrates, and second, the neutron position along the anode wires can be measured by reading out the grid signals individually with fast shaping amplifiers followed by comparators.
Unlike the charge-division localisation used in linear PSDs, the individual readout of the grids allows operating the Multi-Grid at a low amplification gain; hence this detector is tolerant of mechanical defects and its production is accessible to laboratories with standard equipment. Prototypes of different configurations and sizes have been developed and tested. A demonstrator with a sensitive area of 0.8 m x 3 m has been studied during the CRISP European project; it contains 1024 grids and a surface of isotopically enriched B₄C film close to 80 m². Its size represented a challenge in terms of fabrication and mounting of the detection elements. Another challenge was to make the gas chamber mechanically compatible with operation in a vacuum TOF chamber. Optimal working conditions for this detector were achieved by flushing Ar-CO₂ at a pressure of 50 mbar and applying 400 V on the anodes. This unusual gas pressure greatly simplifies the mechanics of the gas vessel in vacuum. The detection efficiency has been measured with high precision for different film thicknesses: 52% was measured at 2.5 Angstrom, in good agreement with the MC simulation. A high position resolution has been achieved by centre-of-gravity measurement of the TOT (Time-Over-Threshold) signals between neighbouring grids. These results, as well as other detection parameters, including gamma sensitivity and spatial uniformity, will be presented. (author)
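The centre-of-gravity position estimate mentioned above reduces to a TOT-weighted mean of the positions of the grids that fired. A minimal sketch, with illustrative grid positions and TOT values (not measured data):

```python
# Sketch of centre-of-gravity localisation between neighbouring grids:
# the hit position is the mean of the grid positions weighted by each
# grid's time-over-threshold (TOT) signal.

def cog_position(grid_positions_mm, tot_us):
    """TOT-weighted mean of the positions of the grids that fired."""
    total = sum(tot_us)
    return sum(p * t for p, t in zip(grid_positions_mm, tot_us)) / total

# a conversion between grids at 20 mm and 24 mm, sharing signal 3:1
print(cog_position([20.0, 24.0], [3.0, 1.0]))   # 21.0 (mm)
```

Because the weighting interpolates between grids, the resolution along this axis can be finer than the grid pitch itself.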
NASA Astrophysics Data System (ADS)
Kershaw, Philip; Jensen, Jens; Stephens, Ag; van Engen, Willem
2013-04-01
We explore an application of OAuth to enable user delegation for OGC-based services and the evolution of this solution to form part of a wider Federation-as-a-Service offering for federated identity management. OAuth has established itself in the commercial sector as a means for users to delegate access to secured resources under their control to third parties. It has also found its way into the academic and research domains as a solution for user delegation. Notable examples include the CILogon project for TeraGrid in the US and, closer to the Earth Sciences, the OGC Web Services, Phase 6 Testbed. Both are examples of OAuth 1.0 implementations. Version 2.0 made significant, and not uncontroversial, changes to the original specification, but it arguably provides a greater degree of flexibility in how it can be applied and in the use cases that it can address. At CEDA (Centre for Environmental Data Archival, STFC), a Python implementation of OAuth 2.0 was made to explore these capabilities, with a focus on providing a solution for user delegation for data access, processing and visualisation services for the Earth Observation and Climate sciences domains. The initial goal was to provide a means of delegating short-lived user credentials to trusted services, along the same lines as the established approach of proxy certificates widely used in Grid computing. For the OGC and other HTTP-based services employed by CEDA, OAuth is a natural fit for this role, integrating with minimal impact on existing interfaces. Working implementations have been made for CEDA's COWS Web Processing Service and Web Map Service. Packaging the software and making it available in open source repositories, together with the generic nature of the solution, have made it readily exploitable in other application domains.
At the Max Planck Institute for Psycholinguistics (Nijmegen, The Netherlands), the software will be used to integrate some tools in the CLARIN infrastructure*. Enhancements have been fedback to the package through this activity. Collaboration with STFC's Scientific Computing department has also seen this solution expand and evolve to support a more demanding set of use cases required to meet the needs for Contrail, an EU Framework 7 project. The goal of Contrail is to develop an Open Source solution for federating resources from multiple Cloud providers. Bringing the solution developed with OAuth together with technologies such as SAML and OpenID it has been possible to develop a generic suite of services to support federated access and identity management, a Federation-as-a-Service package. This is showing promise with trials with the EUDAT project. A deployment of the Contrail software is also planned for CEMS (the facility for Climate and Environmental Monitoring from Space), a new joint academic-industry led facility based at the STFC Harwell site providing access to large-volume Earth Observation and Climate datasets through a Cloud-based service model. * This work is part of the programme of BiG Grid, the Dutch e-Science Grid, which is financially supported by the Netherlands Organisation for Scientific Research, NWO.
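For readers unfamiliar with the delegation flow being discussed, the first leg of an OAuth 2.0 authorization-code exchange (RFC 6749) is just a redirect to an authorization endpoint with a handful of query parameters. A minimal sketch follows; the endpoint URL and client identifiers are hypothetical, not CEDA's actual deployment.

```python
# Minimal sketch of building an OAuth 2.0 (RFC 6749) authorization request,
# the first leg of the delegation flow described above. Endpoint and client
# names are hypothetical placeholders.
import secrets
from urllib.parse import urlencode, urlparse, parse_qs

AUTHORIZE_ENDPOINT = "https://auth.example.org/oauth/authorize"  # hypothetical

def authorization_url(client_id, redirect_uri, scope):
    state = secrets.token_urlsafe(16)    # anti-CSRF token, checked on callback
    params = {
        "response_type": "code",         # authorization-code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,
    }
    return AUTHORIZE_ENDPOINT + "?" + urlencode(params), state

url, state = authorization_url("wps-client", "https://wps.example.org/cb",
                               "wps.execute")
# the service later verifies that the callback echoes the same state value
assert parse_qs(urlparse(url).query)["state"] == [state]
```

The short-lived access token returned at the end of the flow plays the role the abstract attributes to proxy certificates: a delegated, time-limited credential the trusted service presents on the user's behalf.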
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hong; Kong, Vic; Ren, Lei
2016-01-15
Purpose: A preobject grid can reduce and correct scatter in cone beam computed tomography (CBCT). However, half of the signal in each projection is blocked by the grid. A synchronized moving grid (SMOG) has been proposed to acquire two complementary projections at each gantry position and merge them into one complete projection. That approach, however, suffers from increased scanning time and the technical difficulty of accurately merging the two projections per gantry angle. Herein, the authors present a new SMOG approach which acquires a single projection per gantry angle, with complementary grid patterns for any two adjacent projections, and uses an interprojection sensor fusion (IPSF) technique to estimate the blocked signal in each projection. The method may have the additional benefit of reduced imaging dose because the grid blocks half of the incident radiation. Methods: The IPSF treats multiple paired observations from two adjacent gantry angles as approximations of the blocked signal and uses a weighted least-squares regression of these observations to determine the blocked signal. The method was first tested with a simulated SMOG on a head phantom. The signal-to-noise ratio (SNR), which quantifies the difference between the recovered CBCT image and the original image without the SMOG, was used to evaluate the ability of the IPSF to recover the missing signal. The IPSF approach was then tested using a Catphan phantom on a prototype SMOG assembly installed in a bench-top CBCT system. Results: In the simulated SMOG experiment, the SNRs were increased from 15.1 and 12.7 dB to 35.6 and 28.9 dB, compared with a conventional interpolation (inpainting) method, for a projection and the reconstructed 3D image, respectively, suggesting that IPSF successfully recovered most of the blocked signal. In the prototype SMOG experiment, the authors successfully reconstructed a CBCT image using the IPSF-SMOG approach.
The detailed geometric features in the Catphan phantom were mostly recovered according to visual evaluation. The scatter-related artifacts, such as cupping artifacts, were almost completely removed. Conclusions: The IPSF-SMOG approach is promising for reducing scatter artifacts and improving image quality while reducing radiation dose.
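The weighted least-squares step at the heart of IPSF has a simple closed form: minimizing the sum of w_i (x - o_i)^2 over candidate observations o_i yields the weighted mean. A sketch with illustrative numbers (the paper's actual weighting scheme is not reproduced here):

```python
# Sketch of the weighted least-squares estimate behind IPSF: several
# approximations of a blocked pixel value, taken from the adjacent gantry
# angle, are fused by minimizing sum_i w_i * (x - o_i)**2. The minimizer
# is the weighted mean. Observations and weights are illustrative.

def ipsf_estimate(observations, weights):
    """Closed-form WLS estimate of a blocked signal value."""
    num = sum(w * o for w, o in zip(weights, observations))
    return num / sum(weights)

obs = [102.0, 98.0, 110.0]   # candidate values from the adjacent projection
w = [1.0, 1.0, 0.5]          # down-weight the least similar observation
print(ipsf_estimate(obs, w))  # 102.0
```

Compared with inpainting from the same projection, this fuses information across gantry angles, which is why the reported SNR gains are so large.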
Integrated Distribution Management System for Alabama
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schatz, Joe
2013-03-31
Southern Company Services, under contract with the Department of Energy, along with Alabama Power, Alstom Grid (formerly AREVA T&D) and others moved the work product developed in the first phase of the Integrated Distribution Management System (IDMS) from “Proof of Concept” to true deployment through the activity described in this Final Report. This Project – Integrated Distribution Management Systems in Alabama – advanced earlier proof-of-concept activities into actual implementation and completed additional requirements to fully realize the benefits of an IDMS. These tasks include the development and implementation of a Distribution System based Model that enables data access and enterprise application integration.
Sorichetta, Alessandro; Hornby, Graeme M.; Stevens, Forrest R.; Gaughan, Andrea E.; Linard, Catherine; Tatem, Andrew J.
2015-01-01
The Latin America and the Caribbean region is one of the most urbanized regions in the world, with a total population of around 630 million that is expected to increase by 25% by 2050. In this context, detailed and contemporary datasets accurately describing the distribution of residential population in the region are required for measuring the impacts of population growth, monitoring changes, supporting environmental and health applications, and planning interventions. To support these needs, an open access archive of high-resolution gridded population datasets was created through disaggregation of the most recent official population count data available for 28 countries located in the region. These datasets are described here along with the approach and methods used to create and validate them. For each country, population distribution datasets, having a resolution of 3 arc seconds (approximately 100 m at the equator), were produced for the population count year, as well as for 2010, 2015, and 2020. All these products are available both through the WorldPop Project website and the WorldPop Dataverse Repository. PMID:26347245
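The disaggregation step described above spreads an administrative unit's census count over its grid cells in proportion to a per-cell weight layer. A minimal sketch (WorldPop derives its weights from a statistical model; the weights below are illustrative only):

```python
# Sketch of dasymetric disaggregation: an official population count for an
# administrative unit is distributed over its grid cells in proportion to
# per-cell weights. Weights here are illustrative stand-ins for the
# model-derived layer used by WorldPop.

def disaggregate(unit_count, cell_weights):
    """Split unit_count across cells proportionally to cell_weights."""
    total = sum(cell_weights)
    return [unit_count * w / total for w in cell_weights]

cells = disaggregate(1000, [0.0, 2.0, 3.0, 5.0])
print(cells)                 # [0.0, 200.0, 300.0, 500.0]
assert sum(cells) == 1000    # the census total is preserved by construction
```

Preserving the unit total is what makes the gridded product consistent with the official count data it was derived from.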
Accessing and visualizing scientific spatiotemporal data
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Bergou, Attila; Berriman, G. Bruce; Block, Gary L.; Collier, Jim; Curkendall, David W.; Good, John; Husman, Laura; Jacob, Joseph C.; Laity, Anastasia;
2004-01-01
This paper discusses work done by JPL's Parallel Applications Technologies Group in helping scientists access and visualize very large data sets through the use of multiple computing resources, such as parallel supercomputers, clusters, and grids.
A projection method for coupling two-phase VOF and fluid structure interaction simulations
NASA Astrophysics Data System (ADS)
Cerroni, Daniele; Da Vià, Roberto; Manservisi, Sandro
2018-02-01
The study of Multiphase Fluid Structure Interaction (MFSI) is of growing interest in many engineering applications. In this work we propose a new algorithm for coupling an FSI problem to a multiphase interface advection problem. An unstructured computational grid and a Cartesian mesh are used for the FSI and the VOF problem, respectively. The coupling between these two different grids is obtained by interpolating the velocity field onto the Cartesian grid through a projection operator that can take into account the natural movement of the FSI domain. The piecewise color function is interpolated back onto the unstructured grid with a Galerkin interpolation to obtain a point-wise function which allows the direct computation of the surface tension forces.
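The key operation in this coupling is transferring a field known at unstructured (scattered) nodes onto Cartesian grid points. The paper uses a projection operator; the sketch below substitutes simple inverse-distance weighting purely to make the grid-to-grid transfer concrete, and is not the authors' scheme.

```python
# Illustrative stand-in for the velocity transfer step: evaluating a field
# known at scattered (unstructured-grid) nodes at a Cartesian grid point,
# here via inverse-distance weighting rather than the paper's projection
# operator.

def idw(nodes, values, x, y, eps=1e-12):
    """Inverse-distance-weighted value at (x, y) from scattered nodes."""
    wsum = vsum = 0.0
    for (nx, ny), v in zip(nodes, values):
        d2 = (x - nx) ** 2 + (y - ny) ** 2
        if d2 < eps:            # query point coincides with a node
            return v
        w = 1.0 / d2
        wsum += w
        vsum += w * v
    return vsum / wsum

nodes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
vel_u = [0.0, 1.0, 0.0, 1.0]            # u-velocity: f(x, y) = x at corners
print(idw(nodes, vel_u, 0.5, 0.5))      # 0.5 by symmetry
```

A true projection (as in the paper) additionally accounts for the motion of the FSI domain, so the interpolation stencil follows the deforming unstructured mesh.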
How to engage end-users in smart energy behaviour?
NASA Astrophysics Data System (ADS)
Valkering, Pieter; Laes, Erik; Kessels, Kris; Uyterlinde, Matthijs; Straver, Koen
2014-12-01
End users will play a crucial role in upcoming smart grids that aim to link end-users and energy providers in a better-balanced and more efficient electricity system. Within this context, this paper aims to deliver a coherent view of current good practice in end-user engagement in smart grid projects. It draws on a recent review of theoretical insights from sustainable consumption behaviour, social marketing and innovation systems, and on empirical insights from recent smart grid projects, to create an inventory of common motivators, enablers and barriers of behavioural change, and the end-user engagement principles that can be derived from them. We conclude by identifying current research challenges as input for a research agenda on end-user engagement in smart grids.
VisIVO: A Library and Integrated Tools for Large Astrophysical Dataset Exploration
NASA Astrophysics Data System (ADS)
Becciani, U.; Costa, A.; Ersotelos, N.; Krokos, M.; Massimino, P.; Petta, C.; Vitello, F.
2012-09-01
VisIVO provides an integrated suite of tools and services that can be used in many scientific fields. VisIVO development started within the Virtual Observatory framework. VisIVO allows users to meaningfully visualize highly complex, large-scale datasets and create movies of these visualizations based on distributed infrastructures. VisIVO supports high-performance, multi-dimensional visualization of large-scale astrophysical datasets. Users can rapidly obtain meaningful visualizations while preserving full and intuitive control of the relevant parameters. VisIVO consists of VisIVO Desktop - a stand-alone application for interactive visualization on standard PCs, VisIVO Server - a platform for high performance visualization, VisIVO Web - a custom designed web portal, VisIVOSmartphone - an application to exploit the VisIVO Server functionality, and the latest VisIVO feature: the VisIVO Library, which allows a job running on a computational system (grid, HPC, etc.) to produce movies directly from the code's internal data arrays without the need to produce intermediate files. This is particularly important when running on large computational facilities, where the user wants to look at the results during the data production phase. For example, in grid computing facilities, images can be produced directly in the grid catalogue while the user code is running in a system that cannot be directly accessed by the user (a worker node). The deployment of VisIVO on the DG and gLite is carried out with the support of the EDGI and EGI-Inspire projects. Depending on the structure and size of the datasets under consideration, the data exploration process could take several hours of CPU time for creating customized views, and the production of movies could potentially last several days. For this reason an MPI parallel version of VisIVO could play a fundamental role in increasing performance, e.g. it could be automatically deployed on nodes that are MPI aware.
A central concept in our development is thus to produce unified code that can run either on serial nodes or in parallel on HPC-oriented grid nodes. Another important aspect in obtaining the highest possible performance is the integration of VisIVO processes with grid nodes where GPUs are available. We have selected CUDA for implementing a range of computationally heavy modules. VisIVO is supported by the EGI-Inspire, EDGI and SCI-BUS projects.
Global Gridded Crop Model Evaluation: Benchmarking, Skills, Deficiencies and Implications.
NASA Technical Reports Server (NTRS)
Muller, Christoph; Elliott, Joshua; Chryssanthacopoulos, James; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Folberth, Christian; Glotter, Michael; Hoek, Steven;
2017-01-01
Crop models are increasingly used to simulate crop yields at the global scale, but so far there is no general framework on how to assess model performance. Here we evaluate the simulation results of 14 global gridded crop modeling groups that have contributed historic crop yield simulations for maize, wheat, rice and soybean to the Global Gridded Crop Model Intercomparison (GGCMI) of the Agricultural Model Intercomparison and Improvement Project (AgMIP). Simulation results are compared to reference data at global, national and grid cell scales and we evaluate model performance with respect to time series correlation, spatial correlation and mean bias. We find that global gridded crop models (GGCMs) show mixed skill in reproducing time series correlations or spatial patterns at the different spatial scales. Generally, maize, wheat and soybean simulations of many GGCMs are capable of reproducing larger parts of observed temporal variability (time series correlation coefficients (r) of up to 0.888 for maize, 0.673 for wheat and 0.643 for soybean at the global scale) but rice yield variability cannot be well reproduced by most models. Yield variability can be well reproduced for most major producing countries by many GGCMs and for all countries by at least some. A comparison with gridded yield data and a statistical analysis of the effects of weather variability on yield variability shows that the ensemble of GGCMs can explain more of the yield variability than an ensemble of regression models for maize and soybean, but not for wheat and rice. We identify future research needs in global gridded crop modeling and for all individual crop modeling groups. In the absence of a purely observation-based benchmark for model evaluation, we propose that the best performing crop model per crop and region establishes the benchmark for all others, and modelers are encouraged to investigate how crop model performance can be increased. 
We make our evaluation system accessible to all crop modelers so that other modeling groups can also test their model performance against the reference data and the GGCMI benchmark.
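Two of the evaluation metrics named above, time-series correlation and mean bias, can be computed directly from paired simulated and reference yield series. A minimal sketch with illustrative yield values (not GGCMI data):

```python
# Sketch of two GGCMI-style evaluation metrics: Pearson correlation between
# simulated and reference yield time series, and mean bias. The yield
# values below are illustrative, not from the intercomparison.

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def mean_bias(sim, ref):
    """Average of simulated minus reference values."""
    return sum(s - r for s, r in zip(sim, ref)) / len(sim)

sim = [4.1, 4.5, 3.9, 5.0]   # simulated yields, t/ha (illustrative)
ref = [4.0, 4.6, 3.7, 4.9]   # reference yields, t/ha
print(round(pearson_r(sim, ref), 3), round(mean_bias(sim, ref), 3))
```

Computing these per grid cell, per country, and globally is what lets the study separate skill in temporal variability from skill in spatial patterns.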
FANS-3D Users Guide (ESTEP Project ER 201031)
2016-08-01
governing laminar and turbulent flows in body-fitted curvilinear grids. The code employs multi-block overset (chimera) grids, including fully matched... governing incompressible flow in body-fitted grids. The code allows for multi-block overset (chimera) grids, which can be fully matched, arbitrarily... interested reader may consult the Chimera Overset Structured Mesh-Interpolation Code (COSMIC) Users' Manual (Chen, 2009). The input file used for
DOE Office of Scientific and Technical Information (OSTI.GOV)
Troia, Matthew J.; McManamay, Ryan A.
Primary biodiversity data constitute observations of particular species at given points in time and space. Open-access electronic databases provide unprecedented access to these data, but their usefulness in characterizing species distributions and patterns in biodiversity depends on how complete species inventories are at a given survey location and how uniformly distributed survey locations are along dimensions of time, space, and environment. Our aim was to compare completeness and coverage among three open-access databases representing ten taxonomic groups (amphibians, birds, freshwater bivalves, crayfish, freshwater fish, fungi, insects, mammals, plants, and reptiles) in the contiguous United States. We compiled occurrence records from the Global Biodiversity Information Facility (GBIF), the North American Breeding Bird Survey (BBS), and federally administered fish surveys (FFS). In this study, we aggregated occurrence records by 0.1° × 0.1° grid cells and computed three completeness metrics to classify each grid cell as well-surveyed or not. Next, we compared frequency distributions of surveyed grid cells to background environmental conditions in a GIS and performed Kolmogorov–Smirnov tests to quantify coverage through time, along two spatial gradients, and along eight environmental gradients. The three databases contributed >13.6 million reliable occurrence records distributed among >190,000 grid cells. The percent of well-surveyed grid cells was substantially lower for GBIF (5.2%) than for systematic surveys (BBS and FFS; 82.5%). Still, the large number of GBIF occurrence records produced at least 250 well-surveyed grid cells for six of nine taxonomic groups. Coverages of systematic surveys were less biased across spatial and environmental dimensions but were more biased in temporal coverage compared to GBIF data.
GBIF coverages also varied among taxonomic groups, consistent with commonly recognized geographic, environmental, and institutional sampling biases. Lastly, this comprehensive assessment of biodiversity data across the contiguous United States provides a prioritization scheme to fill in the gaps by contributing existing occurrence records to the public domain and planning future surveys.
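The aggregation step described above amounts to binning each occurrence record into a 0.1° grid cell and then scoring each cell. A minimal sketch follows; the toy records and the species-richness score are illustrative stand-ins (the study used three formal completeness metrics, not reproduced here).

```python
# Sketch of binning occurrence records into 0.1-degree grid cells and
# computing a simple per-cell score (number of distinct species). The
# records are made up; the study's actual completeness metrics differ.
from collections import defaultdict

def bin_records(records, cell=0.1):
    """records: iterable of (lat, lon, species) -> {cell index: species set}."""
    cells = defaultdict(set)
    for lat, lon, sp in records:
        key = (int(lat // cell), int(lon // cell))   # floor to cell indices
        cells[key].add(sp)
    return cells

recs = [(35.01, -83.98, "sp. A"), (35.02, -83.91, "sp. B"),
        (35.19, -83.98, "sp. A")]
cells = bin_records(recs)
print({k: len(v) for k, v in cells.items()})
```

Classifying each cell as well-surveyed or not then reduces to thresholding such per-cell scores before the spatial and environmental coverage comparisons.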
GENESI-DR: Discovery, Access and on-Demand Processing in Federated Repositories
NASA Astrophysics Data System (ADS)
Cossu, Roberto; Pacini, Fabrizio; Parrini, Andrea; Santi, Eliana Li; Fusco, Luigi
2010-05-01
GENESI-DR (Ground European Network for Earth Science Interoperations - Digital Repositories) is a European Commission (EC)-funded project, kicked off early in 2008 and led by ESA; partners include space agencies (DLR, ASI, CNES), both space and non-space data providers such as ENEA (I), Infoterra (UK), K-SAT (N), NILU (N) and JRC (EU), and industry such as Elsag Datamat (I), CS (F) and TERRADUE (I). GENESI-DR intends to meet the challenge of reducing the 'time to science' for different Earth Science disciplines in the discovery, access and use (combining, integrating, processing, …) of historical and recent Earth-related data from space, airborne and in-situ sensors, which are archived in large distributed repositories. A common dedicated infrastructure such as GENESI-DR permits the Earth Science communities to derive objective information and to share knowledge in all environmentally sensitive domains over a continuum of time and a variety of geographical scales, thereby addressing urgent challenges such as Global Change. GENESI-DR federates data, information and knowledge for the management of our fragile planet, in line with the major goals of international environmental programmes such as GMES and GEO/GEOSS. As of today, 12 different Digital Repositories hosting more than 60 heterogeneous dataset series are federated in GENESI-DR. Series include satellite data, in situ data, images acquired by airborne sensors, digital elevation models and model outputs. ESA has started providing access to: Category-1 data systematically available on the Internet; level 3 data (e.g., the GlobCover map, MERIS Global Vegetation Index); and ASAR products available in the ESA Virtual Archive and related to the Supersites initiatives. In all cases, existing data policies and security constraints are fully respected. GENESI-DR also gives access to Grid and Cloud computing resources, allowing authorized users to run a number of different processing services on the available data.
The GENESI-DR operational platform is currently being validated against several applications from different domains, such as: automatic orthorectification of SPOT data; SAR interferometry; GlobModel results visualization and verification by comparison with satellite observations; ozone estimation from ERS-GOME products and comparison with in-situ LIDAR measures; and access to ocean-related heterogeneous data and on-the-fly generated products. The project is adopting ISO 19115, ISO 19139 and OGC standards for geospatial metadata discovery and processing, is compliant with the basis of the INSPIRE Implementing Rules for Metadata and Discovery, and uses the OpenSearch protocol with Geo extensions for data and services discovery. OpenSearch is now considered by OGC a mass-market standard for providing a machine-accessible search interface to data repositories. GENESI-DR is gaining momentum in the Earth Science community thanks to its active participation in the GEO task force "Data Integration and Analysis Systems" and to several collaborations with EC projects. It is now extending international cooperation agreements, specifically with NASA (Goddard Earth Sciences Data Information Services), with CEODE (the Center of Earth Observation for Digital Earth, Beijing), with the APN (Asia-Pacific Network), and with the University of Tokyo (Japanese GeoGrid and Data Integration and Analysis System).
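An OpenSearch query with the Geo extension, as adopted here, is just a templated URL carrying search terms, a bounding box and a time window. A minimal sketch of building such a discovery request; the catalogue endpoint is hypothetical, and the parameter names follow the OpenSearch Geo/Time extension conventions rather than any specific GENESI-DR template.

```python
# Sketch of an OpenSearch data-discovery request with Geo/Time parameters.
# The endpoint is a hypothetical placeholder; real clients read the
# parameter template from the service's OpenSearch description document.
from urllib.parse import urlencode

OSDD_ENDPOINT = "https://catalogue.example.org/opensearch"   # hypothetical

def discovery_url(search_terms, bbox, start, end, count=20):
    """bbox is (west, south, east, north) in decimal degrees."""
    params = {
        "searchTerms": search_terms,                    # OpenSearch core
        "bbox": ",".join(str(c) for c in bbox),         # geo:box
        "start": start,                                 # time:start
        "end": end,                                     # time:end
        "count": count,
    }
    return OSDD_ENDPOINT + "?" + urlencode(params)

url = discovery_url("ASAR", (5.0, 40.0, 20.0, 48.0),
                    "2009-04-01", "2009-04-30")
print(url)
```

Because the whole query is a URL, the same interface serves both interactive portals and machine-to-machine federation across the repositories.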
The GENIUS Grid Portal and robot certificates: a new tool for e-Science
Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Milanesi, Luciano; Maggi, Giorgio Pietro; Vicario, Saverio
2009-01-01
Background Grid technology is the computing model which allows users to share a wide plethora of distributed computational resources regardless of their geographical location. Up to now, the strict security policies required to access distributed computing resources have been a rather significant limiting factor when trying to broaden the usage of Grids to a wide community of users. Grid security is indeed based on the Public Key Infrastructure (PKI) of X.509 certificates, and the procedure to obtain and manage those certificates is unfortunately not straightforward. A first step to make Grids more appealing for new users has recently been achieved with the adoption of robot certificates. Methods Robot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful, for instance, to automate grid service monitoring, production data processing, and distributed data collection systems. Basically, these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by everyone interested in running the related applications in a Grid environment through a user-friendly graphical interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended in order to support the new authentication based on these robot certificates. Results The work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can thus be extended to users belonging to several scientific domains, providing an asset for raising Grid awareness among a wide number of potential users.
Conclusion The adoption of Grid portals extended with robot certificates can really contribute to creating transparent access to the computational resources of Grid infrastructures, enhancing the spread of this new paradigm in researchers' working lives to address new global scientific challenges. The evaluated solution can of course be extended to other portals, applications and scientific communities. PMID:19534747
NASA Astrophysics Data System (ADS)
Wang, Yongli; Wang, Gang; Zuo, Yi; Fan, Lisha; Ling, Yunpeng
2017-03-01
On March 15, 2015, the Central Office issued the "Opinions on Further Deepening the Reform of the Electric Power System" (Zhong Fa No. 9). This policy marked the central government's official launch of a new round of electricity reform. As a programmatic document for comprehensively promoting power system reform under the new situation, Document No. 9 identifies the separate approval of transmission and distribution electricity prices as the first task of the reform. Grid tariff reform involves not only the separate approval of transmission and distribution prices but also deep adjustments to grid companies' input-output relationships and many other aspects. Against the background of transmission and distribution price reform, the main factors affecting the input-output relationship, such as the core business, electricity pricing, investment approval and financial accounting, have changed significantly. This paper designs a comprehensive evaluation index system for the investment benefits of power grid projects under transmission and distribution price reform, in order to improve the investment efficiency of power grid projects after the power reform in China.
Energy Systems Integration: Demonstrating the Grid Benefits of Connected Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Overview fact sheet about the Electric Power Research Institute (EPRI) and the University of Delaware Integrated Network Testbed for Energy Grid Research and Technology Experimentation (INTEGRATE) project at the Energy Systems Integration Facility. INTEGRATE is part of the U.S. Department of Energy's Grid Modernization Initiative.
Smart Grid Legislative and Regulatory Policies and Case Studies
2011-01-01
In recent years, a number of U.S. states have adopted or are considering smart grid related laws, regulations, and voluntary or mandatory requirements. At the same time, the number of smart grid pilot projects has been increasing rapidly. The Energy Information Administration (EIA) commissioned SAIC to research the development of smart grid in the United States and abroad. The research produced several documents that will help guide EIA as it considers how best to track smart grid developments.
Coupling mechanism of electric vehicle and grid under the background of smart grid
NASA Astrophysics Data System (ADS)
Dong, Mingyu; Li, Dezhi; Chen, Rongjun; Shu, Han; He, Yongxiu
2018-02-01
With the development of smart distribution technology in the future, electric vehicle users will not only be able to charge reasonably based on peak-valley prices, they will also be able to discharge electricity into the power grid when necessary to realize an economic benefit and thus promote peak load shifting. Given the characteristic that future electric vehicles can discharge, this paper studies the interaction between electric vehicles and the grid based on a TOU (time-of-use) price strategy. In this paper, four scenarios are used to compare the change in grid load after implementing the TOU price strategy. The results show that wide access of electric vehicles to the grid can effectively reduce the peak-valley difference.
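The peak-shaving interaction studied above can be sketched with illustrative numbers: a fleet charges in the lowest-load (valley) hours and discharges in the highest-load (peak) hours under a TOU signal, and we compare the peak-valley difference before and after. The load profile, fleet energy, and hour counts below are hypothetical, not the paper's four scenarios:

```python
def apply_tou_schedule(load, ev_energy, n_shift):
    """Charge ev_energy MWh in the n_shift lowest-load hours and
    discharge the same energy in the n_shift highest-load hours."""
    hours = sorted(range(len(load)), key=lambda h: load[h])
    valley, peak = hours[:n_shift], hours[-n_shift:]
    shifted = list(load)
    for h in valley:
        shifted[h] += ev_energy / n_shift   # charging adds load
    for h in peak:
        shifted[h] -= ev_energy / n_shift   # discharging serves load
    return shifted

base = [40, 35, 33, 32, 34, 45, 60, 75, 80, 78, 70, 65,
        64, 63, 66, 70, 78, 85, 88, 82, 70, 58, 50, 44]  # MW over 24 h
after = apply_tou_schedule(base, ev_energy=48, n_shift=4)

# Peak-valley difference shrinks while total energy is conserved.
print(max(base) - min(base), max(after) - min(after))
```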
NPSS on NASA's IPG: Using CORBA and Globus to Coordinate Multidisciplinary Aeroscience Applications
NASA Technical Reports Server (NTRS)
Lopez, Isaac; Follen, Gregory J.; Gutierrez, Richard; Naiman, Cynthia G.; Foster, Ian; Ginsburg, Brian; Larsson, Olle; Martin, Stuart; Tuecke, Steven; Woodford, David
2000-01-01
Within NASA's High Performance Computing and Communication (HPCC) program, the NASA Glenn Research Center is developing an environment for the analysis/design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. To this end, NPSS integrates multiple disciplines such as aerodynamics, structures, and heat transfer and supports "numerical zooming" between 0-dimensional and 1-, 2-, and 3-dimensional component engine codes. In order to facilitate the timely and cost-effective capture of complex physical processes, NPSS uses object-oriented technologies such as C++ objects to encapsulate individual engine components and CORBA ORBs for object communication and deployment across heterogeneous computing platforms. Recently, the HPCC program has initiated a concept called the Information Power Grid (IPG), a virtual computing environment that integrates computers and other resources at different sites. IPG implements a range of Grid services such as resource discovery, scheduling, security, instrumentation, and data access, many of which are provided by the Globus toolkit. IPG facilities have the potential to benefit NPSS considerably. For example, NPSS should in principle be able to use Grid services to discover dynamically and then co-schedule the resources required for a particular engine simulation, rather than relying on manual placement of ORBs as at present. Grid services can also be used to initiate simulation components on parallel computers (MPPs) and to address inter-site security issues that currently hinder the coupling of components across multiple sites. These considerations led NASA Glenn and Globus project personnel to formulate a collaborative project designed to evaluate whether and how benefits such as those just listed can be achieved in practice.
This project involves, first, development of the basic techniques required to achieve co-existence of commodity object technologies and Grid technologies and, second, the evaluation of these techniques in the context of NPSS-oriented challenge problems. The work on basic techniques seeks to understand how "commodity" technologies (CORBA, DCOM, Excel, etc.) can be used in concert with specialized "Grid" technologies (for security, MPP scheduling, etc.). In principle, this coordinated use should be straightforward because of the Globus and IPG philosophy of providing low-level Grid mechanisms that can be used to implement a wide variety of application-level programming models. (Globus technologies have previously been used to implement Grid-enabled message-passing libraries, collaborative environments, and parameter study tools, among others.) Results obtained to date are encouraging: we have successfully demonstrated a CORBA to Globus resource manager gateway that allows the use of CORBA RPCs to control submission and execution of programs on workstations and MPPs; a gateway from the CORBA Trader service to the Grid information service; and a preliminary integration of CORBA and Grid security mechanisms. The two challenge problems that we consider are the following: 1) Desktop-controlled parameter study. Here, an Excel spreadsheet is used to define and control a CFD parameter study, via a CORBA interface to a high throughput broker that runs individual cases on different IPG resources. 2) Aviation safety. Here, about 100 near-real-time NPSS jobs must be submitted and run, with data returned in near real time. Evaluation will address such issues as time to port, execution time, potential scalability of simulation, and reliability of resources. The full paper will present the following information: 1. A detailed analysis of the requirements that NPSS applications place on IPG. 2.
A description of the techniques used to meet these requirements via the coordinated use of CORBA and Globus. 3. A description of results obtained to date in the first two challenge problems.
75 FR 42727 - Implementing the National Broadband Plan; Comment Period Extension
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-22
..., state, and private entities seek to develop Smart Grid technologies. The second RFI requested information on the evolving needs of electric utilities as Smart Grid technologies are more broadly deployed... accept reply comments, data, and information regarding the National Broadband Plan RFI: Data Access and...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melton, Ron
The Pacific Northwest Smart Grid Demonstration (PNWSGD), a $179 million project that was co-funded by the U.S. Department of Energy (DOE) in late 2009, was one of the largest and most comprehensive demonstrations of electricity grid modernization ever completed. The project was one of 16 regional smart grid demonstrations funded by the American Recovery and Reinvestment Act. It was the only demonstration that included multiple states and cooperation from multiple electric utilities, including rural electric co-ops, investor-owned, municipal, and other public utilities. No fewer than 55 unique instantiations of distinct smart grid systems were demonstrated at the project's sites. The local objectives for these systems included improved reliability, energy conservation, improved efficiency, and demand responsiveness. The demonstration developed and deployed an innovative transactive system, unique in the world, that coordinated many of the project's distributed energy resources and demand-responsive components. With the transactive system, additional regional objectives were also addressed, including the mitigation of renewable energy intermittency and the flattening of system load. Using the transactive system, the project coordinated a regional response across the 11 utilities. This region-wide connection from the transmission system down to individual premises equipment was one of the major successes of the project. The project showed that this can be done and that assets at the end points can respond dynamically on a wide scale. In principle, a transactive system of this type might eventually help coordinate electricity supply, transmission, distribution, and end uses by distributing mostly automated control responsibilities among the many distributed smart grid domain members and their smart devices.
CUAHSI Hydrologic Information Systems
NASA Astrophysics Data System (ADS)
Maidment, D.; Zaslavsky, I.; Tarboton, D.; Piasecki, M.; Goodall, J.
2006-12-01
The Consortium of Universities for the Advancement of Hydrologic Science, Inc (CUAHSI) has a Hydrologic Information System (HIS) project, which is supported by NSF to develop infrastructure and services to support the advance of hydrologic science in the United States. This paper provides an overview of the HIS project. A set of web services called WaterOneFlow is being developed to provide better access to water observations data (point measurements of streamflow, water quality, climate and groundwater levels) from government agencies and individual investigator projects. Successful partnerships have been created with the USGS National Water Information System, EPA Storet and the NCDC Climate Data Online. Observations catalogs have been created for stations in the measurement networks of each of these data systems so that they can be queried in a uniform manner through CUAHSI HIS, and data delivered from them directly to the user via web services. A CUAHSI Observations Data Model has been designed for storing individual investigator data and an equivalent set of web services created for that so that individual investigators can publish their data onto the internet in the same format CUAHSI is providing for the federal agency data. These data will be accessed through HIS Servers hosted at the national level by CUAHSI and also by research centers and academic departments for regional application of HIS. An individual user application called HIS Analyst will enable individual hydrologic scientists to access the information from the network of HIS Servers. The present focus is on water observations data but later development of this system will include weather and climate grid information, GIS data, remote sensing data and linkages between data and hydrologic simulation models.
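A minimal sketch of consuming a WaterOneFlow-style response is shown below. The XML here is a simplified stand-in for illustration only; real WaterML responses are namespaced and carry far richer site, variable, and quality-control metadata:

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for a GetValues-style time-series response.
doc = """
<timeSeriesResponse>
  <timeSeries>
    <variable>Streamflow</variable>
    <values>
      <value dateTime="2006-10-01T00:00:00">12.4</value>
      <value dateTime="2006-10-02T00:00:00">11.8</value>
    </values>
  </timeSeries>
</timeSeriesResponse>
"""

root = ET.fromstring(doc)
# Map each observation timestamp to its numeric value.
series = {
    v.get("dateTime"): float(v.text)
    for v in root.iterfind("./timeSeries/values/value")
}
print(series)
```

The same parsing pattern applies whether the document comes from a national HIS Server or an individual investigator's service, which is the point of publishing all sources in one format.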
NASA Technical Reports Server (NTRS)
Chan, William M.; Rogers, Stuart E.; Nash, Steven M.; Buning, Pieter G.; Meakin, Robert
2005-01-01
Chimera Grid Tools (CGT) is a software package for performing computational fluid dynamics (CFD) analysis utilizing the Chimera-overset-grid method. For modeling flows with viscosity about geometrically complex bodies in relative motion, the Chimera-overset-grid method is among the most computationally cost-effective methods for obtaining accurate aerodynamic results. CGT contains a large collection of tools for generating overset grids, preparing inputs for computer programs that solve equations of flow on the grids, and post-processing of flow-solution data. The tools in CGT include grid editing tools, surface-grid-generation tools, volume-grid-generation tools, utility scripts, configuration scripts, and tools for post-processing (including generation of animated images of flows and calculating forces and moments exerted on affected bodies). One of the tools, denoted OVERGRID, is a graphical user interface (GUI) that serves to visualize the grids and flow solutions and provides central access to many other tools. The GUI facilitates the generation of grids for a new flow-field configuration. Scripts that follow the grid generation process can then be constructed to mostly automate grid generation for similar configurations. CGT is designed for use in conjunction with a computer-aided-design program that provides the geometry description of the bodies, and a flow-solver program.
Transformation of two and three-dimensional regions by elliptic systems
NASA Technical Reports Server (NTRS)
Mastin, C. Wayne
1993-01-01
During this contract period, our work has focused on improvements to elliptic grid generation methods. There are two principal objectives in this project. One objective is to make the elliptic methods more reliable and efficient, and the other is to construct a modular code that can be incorporated into the National Grid Project (NGP), or any other grid generation code. Progress has been made in meeting both of these objectives. The two objectives are actually complementary. As the code development for the NGP progresses, we see many areas where improvements in algorithms can be made.
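The elliptic approach can be illustrated with a toy sketch: interior grid nodes are relaxed toward the solution of the discrete Laplace equations (each node moves to the average of its four neighbours) while boundary nodes stay fixed. This is a minimal Gauss-Seidel version for illustration, not the contract's actual code, which solves the full elliptic system with control functions:

```python
def laplace_smooth(x, y, iters=500):
    """Gauss-Seidel relaxation of interior node coordinates toward the
    discrete Laplace solution; boundary nodes are left untouched."""
    ni, nj = len(x), len(x[0])
    for _ in range(iters):
        for i in range(1, ni - 1):
            for j in range(1, nj - 1):
                x[i][j] = 0.25 * (x[i+1][j] + x[i-1][j] + x[i][j+1] + x[i][j-1])
                y[i][j] = 0.25 * (y[i+1][j] + y[i-1][j] + y[i][j+1] + y[i][j-1])
    return x, y

n = 5  # 5x5 grid on the unit square
x = [[i / (n - 1) for j in range(n)] for i in range(n)]
y = [[j / (n - 1) for j in range(n)] for i in range(n)]
x[2][2] += 0.2  # distort one interior node; smoothing relaxes it away
x, y = laplace_smooth(x, y)
```

For this square domain with a uniform boundary distribution, the relaxed interior converges back to the uniform grid, illustrating the smoothing property that makes elliptic methods robust.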
NASA Astrophysics Data System (ADS)
Yun, S. H.; Hudnut, K. W.; Owen, S. E.; Webb, F.; Simons, M.; Macdonald, A.; Sacco, P.; Gurrola, E. M.; Manipon, G.; Liang, C.; Fielding, E. J.; Milillo, P.; Hua, H.; Coletta, A.
2015-12-01
The April 25, 2015 M7.8 Gorkha earthquake caused more than 8,000 fatalities and widespread building damage in central Nepal. Four days after the earthquake, the Italian Space Agency's (ASI's) COSMO-SkyMed Synthetic Aperture Radar (SAR) satellite acquired data over the Kathmandu area. Nine days after the earthquake, the Japan Aerospace Exploration Agency's (JAXA's) ALOS-2 SAR satellite covered a larger area. Using these radar observations, we rapidly produced damage proxy maps derived from temporal changes in Interferometric SAR (InSAR) coherence. These maps were qualitatively validated through comparison with independent damage analyses by the National Geospatial-Intelligence Agency (NGA) and UNITAR's (United Nations Institute for Training and Research) Operational Satellite Applications Programme (UNOSAT), and based on our own visual inspection of DigitalGlobe's WorldView pre- vs. post-event optical imagery. Our maps were quickly released to responding agencies and the public, and used for damage assessment, determining inspection/imaging priorities, and reconnaissance fieldwork.
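The damage-proxy idea reduces to flagging pixels whose interferometric coherence drops between a pre-event pair and a co-event pair. A toy sketch with illustrative coherence values and an illustrative threshold (the actual processing involves calibrated coherence estimation and histogram-based thresholding):

```python
def damage_proxy(pre, co, threshold=0.3):
    """Flag pixels whose coherence dropped by more than `threshold`
    between the pre-event and co-event interferometric pairs."""
    return [
        [(p - c) > threshold for p, c in zip(row_pre, row_co)]
        for row_pre, row_co in zip(pre, co)
    ]

pre_coherence = [[0.9, 0.8], [0.85, 0.9]]   # coherent before the event
co_coherence  = [[0.88, 0.4], [0.8, 0.2]]   # two pixels decorrelate
flags = damage_proxy(pre_coherence, co_coherence)
print(flags)  # [[False, True], [False, True]]
```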
Design and implementation of a fault-tolerant and dynamic metadata database for clinical trials
NASA Astrophysics Data System (ADS)
Lee, J.; Zhou, Z.; Talini, E.; Documet, J.; Liu, B.
2007-03-01
In recent imaging-based clinical trials, quantitative image analysis (QIA) and computer-aided diagnosis (CAD) methods are increasing in productivity due to higher resolution imaging capabilities. A radiology core performing clinical trials has been analyzing more treatment methods, and there is a growing quantity of metadata that needs to be stored and managed. These radiology centers are also collaborating with many off-site imaging field sites and need a way to communicate metadata with one another in a secure infrastructure. Our solution is to implement a data storage grid with a fault-tolerant and dynamic metadata database design to unify metadata from different clinical trial experiments and field sites. Although metadata from images follow the DICOM standard, clinical trials also produce metadata specific to regions-of-interest and quantitative image analysis. We have implemented a data access and integration (DAI) server layer where multiple field sites can access multiple metadata databases in the data grid through a single web-based grid service. The centralization of metadata database management simplifies the task of adding new databases into the grid and also decreases the risk of configuration errors seen in peer-to-peer grids. In this paper, we address the design and implementation of a data grid metadata storage that has fault-tolerance and dynamic integration for imaging-based clinical trials.
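The DAI layer's role can be sketched as a single entry point that fans a metadata query out to several per-site stores and merges the results. Everything below (site names, record fields, the dict-backed stores) is illustrative, not the paper's actual schema or grid service:

```python
def federated_query(databases, predicate):
    """Run `predicate` against every registered metadata store and
    return matching records tagged with their site of origin."""
    results = []
    for site, records in databases.items():
        for rec in records:
            if predicate(rec):
                results.append({"site": site, **rec})
    return results

# Hypothetical per-site metadata stores, keyed by site name.
databases = {
    "site_a": [{"trial": "CT-1", "modality": "MRI"},
               {"trial": "CT-2", "modality": "CT"}],
    "site_b": [{"trial": "CT-1", "modality": "CT"}],
}
hits = federated_query(databases, lambda r: r["trial"] == "CT-1")
print(len(hits))  # 2
```

Centralizing this fan-out in one service is what lets new databases join the grid by registration rather than by reconfiguring every peer.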
Orchestrating Bulk Data Movement in Grid Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vazhkudai, SS
2005-01-25
Data Grids provide a convenient environment for researchers to manage and access massively distributed bulk data by addressing several system and transfer challenges inherent to these environments. This work addresses issues involved in the efficient selection and access of replicated data in Grid environments in the context of the Globus Toolkit, building middleware that (1) selects datasets in highly replicated environments, enabling efficient scheduling of data transfer requests; (2) predicts transfer times of bulk wide-area data transfers using extensive statistical analysis; and (3) co-allocates bulk data transfer requests, enabling parallel downloads from mirrored sites. These efforts have demonstrated a decentralized data scheduling architecture, a set of forecasting tools that predict bandwidth availability within 15% error, a co-allocation architecture, and heuristics that expedite data downloads by up to 2 times.
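The co-allocation heuristic can be sketched as a bandwidth-weighted partition of a file's byte range across mirrors, so faster mirrors serve proportionally larger chunks. Mirror names and bandwidth predictions below are hypothetical placeholders for the forecasts the middleware would supply:

```python
def coallocate(file_size, predicted_bw):
    """Return {mirror: (start, end)} byte ranges, proportional to each
    mirror's predicted bandwidth, covering [0, file_size)."""
    total = sum(predicted_bw.values())
    plan, offset = {}, 0
    mirrors = list(predicted_bw)
    for k, m in enumerate(mirrors):
        if k == len(mirrors) - 1:
            size = file_size - offset          # last mirror takes the rest
        else:
            size = file_size * predicted_bw[m] // total
        plan[m] = (offset, offset + size)
        offset += size
    return plan

plan = coallocate(1000, {"mirror_a": 30, "mirror_b": 10})  # MB/s estimates
print(plan)  # {'mirror_a': (0, 750), 'mirror_b': (750, 1000)}
```

A production co-allocator would also re-balance chunks as observed bandwidth diverges from the forecast, which is where the 15%-error prediction tools come in.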
A Simple XML Producer-Consumer Protocol
NASA Technical Reports Server (NTRS)
Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)
2001-01-01
There are many different projects from government, academia, and industry that provide services for delivering events in distributed environments. The problem with these event services is that they are not general enough to support all uses and they speak different protocols so that they cannot interoperate. We require such interoperability when we, for example, wish to analyze the performance of an application in a distributed environment. Such an analysis might require performance information from the application, computer systems, networks, and scientific instruments. In this work we propose and evaluate a standard XML-based protocol for the transmission of events in distributed systems. One recent trend in government and academic research is the development and deployment of computational grids. Computational grids are large-scale distributed systems that typically consist of high-performance compute, storage, and networking resources. Examples of such computational grids are the DOE Science Grid, the NASA Information Power Grid (IPG), and the NSF Partnerships for Advanced Computing Infrastructure (PACIs). The major effort to deploy these grids is in the area of developing the software services to allow users to execute applications on these large and diverse sets of resources. These services include security, execution of remote applications, managing remote data, access to information about resources and services, and so on. There are several toolkits for providing these services such as Globus, Legion, and Condor. As part of these efforts to develop computational grids, the Global Grid Forum is working to standardize the protocols and APIs used by various grid services. This standardization will allow interoperability between the client and server software of the toolkits that are providing the grid services. 
The goal of the Performance Working Group of the Grid Forum is to standardize protocols and representations related to the storage and distribution of performance data. These standard protocols and representations must support tasks such as profiling parallel applications, monitoring the status of computers and networks, and monitoring the performance of services provided by a computational grid. This paper describes a proposed protocol and data representation for the exchange of events in a distributed system. The protocol exchanges messages formatted in XML and it can be layered atop any low-level communication protocol such as TCP or UDP. Further, we describe Java and C++ implementations of this protocol and discuss their performance. The next section will provide some further background information. Section 3 describes the main communication patterns of our protocol. Section 4 describes how we represent events and related information using XML. Section 5 describes our protocol and Section 6 discusses the performance of two implementations of the protocol. Finally, an appendix provides the XML Schema definition of our protocol and event information.
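A hedged sketch of such an XML event message is shown below: a producer serializes a monitoring event, a consumer parses it back. The element and attribute names (and the hostname) are illustrative, not those the working group standardized:

```python
import xml.etree.ElementTree as ET

def make_event(source, name, value, timestamp):
    """Producer side: serialize one monitoring event to XML text."""
    ev = ET.Element("event", {"source": source, "timestamp": str(timestamp)})
    metric = ET.SubElement(ev, "metric", {"name": name})
    metric.text = str(value)
    return ET.tostring(ev, encoding="unicode")

def parse_event(xml_text):
    """Consumer side: recover the event fields from the XML text."""
    ev = ET.fromstring(xml_text)
    metric = ev.find("metric")
    return {
        "source": ev.get("source"),
        "timestamp": float(ev.get("timestamp")),
        metric.get("name"): float(metric.text),
    }

# Hypothetical host and metric names, for illustration only.
msg = make_event("node42.example.gov", "cpu_load", 0.73, 997310400.0)
print(parse_event(msg))
```

Because the message is plain XML text, it can be carried over TCP, UDP, or any other transport, which is the layering property the protocol relies on.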
Griffith Energy Project Final Environmental Impact Statement
DOE Office of Scientific and Technical Information (OSTI.GOV)
N /A
1999-04-02
Griffith Energy Limited Liability Corporation (Griffith) proposes to construct and operate the Griffith Energy Project (Project), a natural-gas-fueled, combined-cycle power plant, on private lands south of Kingman, Ariz. The Project would be a "merchant plant," which means that it is not owned by a utility and there is currently no long-term commitment or obligation by any utility to purchase the capacity and energy generated by the power plant. Griffith applied to interconnect its proposed power plant with the Western Area Power Administration's (Western) Pacific Northwest-Pacific Southwest Intertie and Parker-Davis transmission systems. Western, as a major transmission system owner, needs to provide access to its transmission system when it is requested by an eligible organization per existing policies, regulations and laws. The proposed interconnection would integrate the power generated by the Project into the regional transmission grid and would allow Griffith to supply its power to the competitive electric wholesale market. Based on the application, Western's proposed action is to enter into an interconnection and construction agreement with Griffith for the requested interconnections. The proposed action includes the power plant, water wells and transmission line, natural gas pipelines, new electrical transmission lines and a substation, upgrade of an existing transmission line, and an access road to the power plant. Construction of segments of the transmission lines and a proposed natural gas pipeline also requires a grant of right-of-way across Federal lands administered by the Bureau of Land Management. Public comments on the Draft EIS are addressed in the Final EIS, including addenda and modifications made as a result of the comments and/or new information.
NASA Astrophysics Data System (ADS)
Abad-Mota, S.; Guenni, L.; Salcedo, A.; Cardinale, Y.
2006-05-01
Climate variability, environmental degradation and poor livelihood conditions of an important proportion of the population are all key factors determining the high vulnerability of the population to natural disasters and to vector-borne diseases such as malaria and dengue in most tropical Latin American countries. It is not uncommon that the basic bio-geophysical and hydro-meteorological data required for understanding the vulnerability and risk of the population to these environmental hazards, at present and retrospectively, are dispersed, of limited quality and not easily accessible. In Venezuela, for example, hydrometeorological data from ground-based networks are collected by different agencies for specific purposes and applications ranging from aviation, agriculture and hydropower generation to general public needs. In order to improve accessibility, visibility and output products, two public universities in Venezuela, Universidad Simón Bolívar (USB) and Universidad Central de Venezuela (UCV), have designed a data management project to integrate all these historical point data holdings, together with the metadata relating to their origin, in a single data repository with facilities for storage, manipulation, extraction and dissemination. Several statistical analyses of the data will be presented as client-tailored products for specific applications oriented to environmental and epidemiological risk assessments. The project has two main phases: modeling of the hydroclimatic data and its metadata, and development of the web site through which services will be provided. We have collected historical data from different sources in the country. These sources use different formats and hold their data at different levels of granularity. Our data model should be general enough to accommodate all these differences, annotated with the appropriate metadata. The quality of these data will be evaluated, statistically and semantically.
The modeled data will be stored in a database, so that queries are allowed. A web site specially designed for this project will provide an interface for querying the data, analyzing the data statistically and visualizing it in maps and images. A special module will be built to allow the execution of different applications and decision-making procedures. In this module we plan to implement a scientific workflow facility which should simplify the construction of new applications over the existing data. In a final stage we will explore running some of these applications on a grid and interacting with the Grid Venezuela Project being developed in our country by other groups of researchers. The development of this data project includes facilities to incorporate real-time data from a newer generation of measurement devices to assure an ongoing data integration activity in the near future.
NASA Astrophysics Data System (ADS)
Lamy, Julian V.
Increasing the percentage of wind power in the United States electricity generation mix would facilitate the transition towards a more sustainable, low-pollution, and environmentally-conscious electricity grid. However, this effort is not without cost. Wind power generation is time-variable and typically not synchronized with electricity demand (i.e., load). In addition, the highest-output wind resources are often located in remote locations, necessitating transmission investment between generation sites and load. Furthermore, negative public perceptions of wind projects could prevent widespread wind development, especially for projects close to densely-populated communities. The work presented in my dissertation seeks to understand where it's best to locate wind energy projects while considering these various factors. First, in Chapter 2, I examine whether energy storage technologies, such as grid-scale batteries, could help reduce the transmission upgrade costs incurred when siting wind projects in distant locations. For a case study of a hypothetical 200 MW wind project in North Dakota that delivers power to Illinois, I present an optimization model that estimates the optimal size of transmission and energy storage capacity that yields the lowest average cost of generation and transmission ($/MWh). I find that for this application of storage to be economical, energy storage costs would have to be $100/kWh or lower, which is well below current costs for available technologies. I conclude that there are likely better ways to use energy storage than for accessing distant wind projects. Following from this work, in Chapter 3, I present an optimization model to estimate the economics of accessing high quality wind resources in remote areas to comply with renewable energy policy targets. I include temporal aspects of wind power (variability costs and correlation to market prices) as well as total wind power produced from different farms.
I assess the goal of providing 40 TWh of new wind generation in the Midwestern transmission system (MISO) while minimizing system costs. Results show that building wind farms in North/South Dakota (windiest states) compared to Illinois (less windy, but close to population centers) would only be economical if the incremental transmission costs to access them were below $360/kW of wind capacity (break-even value). Historically, the incremental transmission costs for wind development in North/South Dakota compared to in Illinois are about twice this value. However, the break-even incremental transmission cost for wind farms in Minnesota/Iowa (also windy states) is $250/kW, which is consistent with historical costs. I conclude that for the case in MISO, building wind projects in more distant locations (i.e., Minnesota/Iowa) is most economical. My two final chapters use semi-structured interviews (Chapter 4) and conjoint-based surveys (Chapter 5) to understand public perceptions and preferences for different wind project siting characteristics such as the distance between the project and a person's home (i.e., "not-in-my-backyard" or NIMBY) and offshore vs. onshore locations. The semi-structured interviews, conducted with members of a community in Massachusetts, revealed that economic benefit to the community is the most important factor driving perceptions about projects, along with aesthetics, noise impacts, environmental benefits, hazard to wildlife, and safety concerns. In Chapter 5, I show the results from the conjoint survey. The study's sample included participants from a coastal community in Massachusetts and a U.S.-wide sample from Amazon's Mechanical Turk. Results show that participants in the U.S.-wide sample perceived a small reduction in utility, equivalent to $1 per month, for living within 1 mile of a project. Surprisingly, I find no evidence of this effect for participants in the coastal community. 
The most important characteristic to both samples was the economic benefit from the project - both to their community through increased tax revenue, and to individuals through reduced monthly energy bills. Further, participants in both samples preferred onshore to offshore projects, but that preference was much stronger in the coastal community. I also find that participants from the coastal community preferred expanding an existing wind project rather than building an entirely new one, whereas those in the U.S.-wide sample were indifferent between the two options. These differences are likely driven by the coastal community's prior positive experience with an existing onshore wind project, as well as its strong cultural identity that favors ocean views. I conclude that the preference for increased distance from a wind project (NIMBY) is likely small or non-existent, and that offshore wind projects within 5 miles of shore could cause large welfare losses to coastal communities. Finally, in Chapter 6, I provide a discussion and policy recommendations from my work. Importantly, I recommend that future research combine the various topics treated in my chapters (i.e., transmission requirements, hourly power production, variability impacts to the grid, and public preferences) into a comprehensive model that identifies optimal locations for wind projects across the United States.
ESIF 2016: Modernizing Our Grid and Energy System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Becelaere, Kimberly
This 2016 annual report highlights work conducted at the Energy Systems Integration Facility (ESIF) in FY 2016, including grid modernization, high-performance computing and visualization, and INTEGRATE projects.
Generic Divide and Conquer Internet-Based Computing
NASA Technical Reports Server (NTRS)
Follen, Gregory J. (Technical Monitor); Radenski, Atanas
2003-01-01
The growth of Internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of peer-to-peer (P2P) software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this project is to achieve a better understanding of the transition to Internet-based high-performance computing and to develop solutions for some of the technical challenges of this transition. In particular, we are interested in creating long-term motivation for end users to provide their idle processor time to support computationally intensive tasks. We believe that a practical P2P architecture should provide useful service to both clients with high-performance computing needs and contributors of lower-end computing resources. To achieve this, we are designing a dual-service architecture for P2P high-performance divide-and-conquer computing; we are also experimenting with a prototype implementation. Our proposed architecture incorporates a master server, utilizes dual satellite servers, and operates on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. A dual satellite server comprises a high-performance computing engine and a lower-end contributor service engine. The computing engine provides generic support for divide-and-conquer computations. The service engine is intended to provide free, useful HTTP-based services to contributors of lower-end computing resources. Our proposed architecture is complementary to and accessible from computational grids, such as Globus, Legion, and Condor. 
Grids provide remote access to existing higher-end computing resources; in contrast, our goal is to utilize the idle processor time of lower-end Internet nodes. Our project is focused on a generic divide-and-conquer paradigm and on mobile applications of this paradigm that can operate on a loose and ever-changing pool of lower-end Internet nodes.
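The generic divide-and-conquer paradigm at the heart of the project can be sketched as a higher-order function (an illustrative sketch only, not the project's actual interface). On a P2P grid, the recursive calls for independent subproblems would be dispatched to volunteer nodes rather than executed locally:

```python
# Generic divide-and-conquer skeleton: split a problem until a base case
# holds, solve the base cases directly, and merge the partial results.

def divide_and_conquer(problem, is_base, base_solve, divide, combine):
    if is_base(problem):
        return base_solve(problem)
    subresults = [divide_and_conquer(p, is_base, base_solve, divide, combine)
                  for p in divide(problem)]
    return combine(subresults)

def merge(parts):
    """Combine step for the merge-sort instance below."""
    left, right = parts
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

# Example instance: merge sort expressed through the generic skeleton.
sorted_list = divide_and_conquer(
    [5, 2, 9, 1, 7],
    is_base=lambda xs: len(xs) <= 1,
    base_solve=lambda xs: xs,
    divide=lambda xs: (xs[:len(xs) // 2], xs[len(xs) // 2:]),
    combine=merge,
)
# sorted_list == [1, 2, 5, 7, 9]
```

Because `divide`, `base_solve`, and `combine` are supplied by the application, the same skeleton serves any computation with independent subproblems, which is what makes the paradigm attractive for a volatile pool of volunteer nodes.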
OxfordGrid: a web interface for pairwise comparative map views.
Yang, Hongyu; Gingle, Alan R
2005-12-01
OxfordGrid is a web application and database schema for storing and interactively displaying genetic map data in a comparative, dot-plot fashion. Its display is composed of a matrix of cells, each representing a pairwise comparison of mapped probe data for two linkage groups or chromosomes. These are arranged along the axes, with one forming the grid columns and the other the grid rows; the degree and pattern of synteny/colinearity between the two linkage groups is manifested in each cell's dot density and structure. A mouse click over a grid cell launches an image map-based display for the selected cell. Both individual and linear groups of mapped probes can be selected and displayed. Also, configurable links can be used to access other web resources for mapped probe information. OxfordGrid is implemented in C#/ASP.NET and the package, including MySQL schema creation scripts, is available at ftp://cggc.agtec.uga.edu/OxfordGrid/.
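The idea behind each cell can be illustrated with a short sketch (a hypothetical data model, not the package's actual schema): probes mapped on both linkage groups contribute one dot at their two map positions, and the dots are binned onto a small raster so that density and pattern reflect synteny:

```python
# Minimal sketch of one OxfordGrid-style cell (illustrative data model).
# Probes present on both linkage groups become dots; collinear regions
# concentrate counts along the diagonal of the cell.

def dot_plot_cell(probes_a, probes_b, bins=4, length_a=100.0, length_b=100.0):
    """probes_a / probes_b: dicts mapping probe name -> map position (cM).
    Returns a bins x bins count matrix for probes mapped on both groups."""
    cell = [[0] * bins for _ in range(bins)]
    for name, pos_a in probes_a.items():
        if name in probes_b:
            col = min(bins - 1, int(pos_a / length_a * bins))
            row = min(bins - 1, int(probes_b[name] / length_b * bins))
            cell[row][col] += 1
    return cell

a = {"p1": 10.0, "p2": 30.0, "p3": 80.0}   # positions on linkage group A
b = {"p1": 12.0, "p3": 75.0, "p4": 50.0}   # positions on linkage group B
cell = dot_plot_cell(a, b)
# shared probes p1 and p3 produce two dots near the diagonal
```

A full display is then just this computation repeated for every pair of linkage groups, one cell per matrix position.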
NASA Technical Reports Server (NTRS)
Johnston, William E.; Gannon, Dennis; Nitzberg, Bill
2000-01-01
We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. 
Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3) Coupling large-scale computing and data systems to scientific and engineering instruments (e.g., real-time interaction with experiments through real-time data analysis and interpretation presented to the experimentalist in ways that allow direct interaction with the experiment, instead of just with instrument control); (4) Highly interactive, augmented reality and virtual reality remote collaborations (e.g., an Ames / Boeing Remote Help Desk providing field maintenance use of coupled video and NDI to a remote, on-line airframe structures expert, who uses this data to index into detailed design databases and returns 3D internal aircraft geometry to the field); (5) Single computational problems too large for any single system (e.g., the rotorcraft reference calculation). Grids also have the potential to provide pools of resources that could be called on in extraordinary / rapid-response situations (such as disaster response) because they can provide common interfaces and access mechanisms, standardized management, and uniform user authentication and authorization for large collections of distributed resources (whether or not they normally function in concert). IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focused primarily on two types of users: the scientist / design engineer whose primary interest is problem solving (e.g. 
determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user is the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. The results of the analysis of the needs of these two types of users provide a broad set of requirements that gives rise to a general set of required capabilities. The IPG project is intended to address all of these requirements. In some cases the required computing technology exists, and in some cases it must be researched and developed. The project is using available technology to provide a prototype set of capabilities in a persistent distributed computing testbed. Beyond this, there are required capabilities that are not immediately available, and whose development spans the range from near-term engineering development (one to two years) to much longer term R&D (three to six years). Additional information is contained in the original.
Renewable Energy on the Grid: Redefining What's Possible | Energy Analysis
Faces of the Recovery Act: The Impact of Smart Grid
President Obama
2017-12-09
On October 27th, Baltimore Gas & Electric was selected to receive $200 million for Smart Grid innovation projects under the Recovery Act. Watch as members of their team, along with President Obama, explain how building a smarter grid will help consumers cut their utility bills, battle climate change and create jobs.
CILogon: An Integrated Identity and Access Management Platform for Science
NASA Astrophysics Data System (ADS)
Basney, J.
2016-12-01
When scientists work together, they use web sites and other software to share their ideas and data. To ensure the integrity of their work, these systems require the scientists to log in and verify that they are part of the team working on a particular science problem. Too often, the identity and access verification process is a stumbling block for the scientists. Scientific research projects are forced to invest time and effort into developing and supporting Identity and Access Management (IAM) services, distracting them from the core goals of their research collaboration. CILogon provides an IAM platform that enables scientists to work together to meet their IAM needs more effectively so they can allocate more time and effort to their core mission of scientific research. The CILogon platform enables federated identity management and collaborative organization management. Federated identity management enables researchers to use their home organization identities to access cyberinfrastructure, rather than requiring yet another username and password to log on. Collaborative organization management enables research projects to define user groups for authorization to collaboration platforms (e.g., wikis, mailing lists, and domain applications). CILogon's IAM platform serves the unique needs of research collaborations, namely the need to dynamically form collaboration groups across organizations and countries, sharing access to data, instruments, compute clusters, and other resources to enable scientific discovery. CILogon provides a software-as-a-service platform to ease integration with cyberinfrastructure, while making all software components publicly available under open source licenses to enable re-use. Figure 1 illustrates the components and interfaces of this platform. 
CILogon has been operational since 2010 and has been used by over 7,000 researchers from more than 170 identity providers to access cyberinfrastructure including Globus, LIGO, Open Science Grid, SeedMe, and XSEDE. The "CILogon 2.0" platform, launched in 2016, adds support for virtual organization (VO) membership management, identity linking, international collaborations, and standard integration protocols, through integration with the Internet2 COmanage collaboration software.
Electric Power Generation from Low to Intermediate Temperature Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gosnold, William; Mann, Michael; Salehfar, Hossein
The UND-CLR Binary Geothermal Power Plant was a collaborative effort of the U.S. Department of Energy (DOE), Continental Resources, Inc. (CLR), Slope Electric Cooperative (SEC), Access Energy, LLC (AE), Basin Electric Cooperative (BEC), Olson Construction, the North Dakota Industrial Commission Renewable Energy Council (NDIC-REC), the North Dakota Department of Commerce Centers of Excellence Program (NDDC-COE), and the University of North Dakota (UND). The primary objective of the project was to demonstrate and test the technical and economic feasibility of generating electricity from non-conventional, low-temperature (90 °C to 150 °C) geothermal resources using binary technology. CLR provided access to 98 °C water flowing at 51 L/s at the Davis Water Injection Plant in Bowman County, ND. Funding for the project came from DOE-GTO, NDIC-REC, NDDC-COE, and BEC. Logistics, on-site construction, and power grid access were facilitated by Slope Electric Cooperative and Olson Construction. Access Energy supplied prototype organic Rankine cycle (ORC) engines for the project. The potential power output from this project is 250 kW at a cost of $3,400 per kW. A key factor in the economics of this project is a significant advance in binary power technology by Access Energy, LLC. Other commercially available ORC engines have efficiencies of 8 to 10 percent and produce 50 to 250 kW per unit. The AE ORC units are designed to generate 125 kW with efficiencies up to 14 percent, and they can be installed in arrays of tens of units to produce several MW of power where geothermal waters are available. This demonstration project is small, but the potential for large-scale development in deeper, hotter formations is promising. The UND team's analysis of the entire Williston Basin, using data on porosity, formation thicknesses, and fluid temperatures, reveals that 4.0 × 10^19 joules of energy is available and that 1.36 × 10^9 MWh of electricity could be produced using ORC binary power plants. 
Much of the infrastructure necessary to develop extensive geothermal power in the Williston Basin exists as abandoned oil and gas wells. Re-completing wells for water production could provide local power throughout the basin, thus reducing power loss through transmission over long distances. Water production in normal oil and gas operations is relatively low by design, but it could be one to two orders of magnitude greater in wells completed and pumped for water production. A promising method for geothermal power production recognized in this project is drilling horizontal open-hole wells in the permeable carbonate aquifers. Horizontal drilling in the aquifers increases borehole exposure to the resource and consequently increases the capacity for fluid production by up to an order of magnitude.
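The basin-wide figures can be cross-checked with a simple unit conversion. The conversion efficiency below is inferred from the reported figures, not stated in the report; it happens to land inside the 8-14% ORC range the abstract cites:

```python
# Cross-check of the reported Williston Basin resource figures.
# 1 MWh = 3.6e9 J. The ~12% efficiency here is an inferred assumption.
thermal_j = 4.0e19                   # in-place thermal energy, joules
thermal_mwh = thermal_j / 3.6e9      # ~1.11e10 MWh of thermal energy
electric_mwh = thermal_mwh * 0.12    # ~1.33e9 MWh at 12% conversion

# The report's 1.36e9 MWh figure implies an effective efficiency of ~12%:
implied_eff = 1.36e9 / thermal_mwh   # ~0.122
```

So the reported producible electricity is consistent with the in-place energy under a conversion efficiency near the middle of the stated ORC range.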
Providing traceability for neuroimaging analyses.
McClatchey, Richard; Branson, Andrew; Anjum, Ashiq; Bloodsworth, Peter; Habib, Irfan; Munir, Kamran; Shamdasani, Jetendr; Soomro, Kamran
2013-09-01
With the increasingly digital nature of biomedical data and as the complexity of analyses in medical research increases, the need for accurate information capture, traceability and accessibility has become crucial to medical researchers in the pursuance of their research goals. Grid- or Cloud-based technologies, often based on so-called Service Oriented Architectures (SOA), are increasingly being seen as viable solutions for managing distributed data and algorithms in the bio-medical domain. For neuroscientific analyses, especially those centred on complex image analysis, traceability of processes and datasets is essential, but up to now this has not been captured in a manner that facilitates collaborative study. Few examples exist of deployed medical systems based on Grids that provide the traceability of research data needed to facilitate complex analyses, and none have been evaluated in practice. Over the past decade, we have been working with mammographers, paediatricians and neuroscientists in three generations of projects to provide the data management and provenance services now required for 21st century medical research. This paper outlines the findings of a requirements study and a resulting system architecture for the production of services to support neuroscientific studies of biomarkers for Alzheimer's disease. The paper proposes a software infrastructure and services that provide the foundation for such support. It introduces the use of the CRISTAL software to provide provenance management as one of a number of services delivered on a SOA, deployed to manage neuroimaging projects that have been studying biomarkers for Alzheimer's disease. In the neuGRID and N4U projects a Provenance Service has been delivered that captures and reconstructs the workflow information needed to facilitate researchers in conducting neuroimaging analyses. The software enables neuroscientists to track the evolution of workflows and datasets. 
It also tracks the outcomes of various analyses and provides provenance traceability throughout the lifecycle of their studies. As the Provenance Service has been designed to be generic it can be applied across the medical domain as a reusable tool for supporting medical researchers thus providing communities of researchers for the first time with the necessary tools to conduct widely distributed collaborative programmes of medical analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Budde, M. E.; Galu, G.; Funk, C. C.; Verdin, J. P.; Rowland, J.
2014-12-01
The Planning for Resilience in East Africa through Policy, Adaptation, Research, and Economic Development (PREPARED) project is a multi-organizational effort aimed at mainstreaming climate-resilient development planning and program implementation into the East African Community (EAC). The Famine Early Warning Systems Network (FEWS NET) has partnered with the PREPARED project to address three key development challenges for the EAC: 1) increasing resiliency to climate change, 2) managing trans-boundary freshwater biodiversity and conservation, and 3) improving access to drinking water supply and sanitation services. USGS FEWS NET has been instrumental in the development of gridded climate data sets that are the fundamental building blocks for climate change adaptation studies in the region. Tools such as the Geospatial Climate Tool (GeoCLIM) have been developed to interpolate time-series grids of precipitation and temperature values from station observations and associated satellite imagery, elevation data, and other spatially continuous fields. The GeoCLIM tool also allows the identification of anomalies and assessments of both their frequency of occurrence and directional trends. A major effort has been put forth to build the capacity of local and regional institutions to use GeoCLIM to integrate their station data (which are not typically available to the public) into improved national and regional gridded climate data sets. In addition to the improvements and capacity-building activities related to geospatial analysis tools, FEWS NET will assist in two other areas: 1) downscaling of climate change scenarios and 2) vulnerability impact assessments. FEWS NET will provide expertise in statistical downscaling of Global Climate Model output fields and work with regional institutions to assess the results of other downscaling methods. Completion of a vulnerability impact assessment (VIA) involves the examination of sectoral consequences in identified climate "hot spots". 
FEWS NET will lead the VIA for the agriculture and food security sector, but will also provide key geospatial layers needed by multiple sectors in the areas of exposure, sensitivity, and adaptive capacity. Project implementation will strengthen regional coordination in policy-making, planning, and response to climate change issues.
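The basic station-to-grid interpolation step that tools like GeoCLIM perform can be illustrated with a minimal inverse-distance-weighting sketch. GeoCLIM's actual algorithm also blends satellite, elevation, and other background fields; this shows only the elementary idea, with invented data:

```python
# Toy inverse-distance-weighting (IDW) interpolation of station rainfall
# onto a regular grid. Real climate gridding blends additional fields;
# this is only the basic station-to-grid step.
import math

def idw_grid(stations, nx, ny, power=2.0):
    """stations: list of (x, y, value) tuples in grid coordinates.
    Returns an ny x nx grid of interpolated values."""
    grid = []
    for j in range(ny):
        row = []
        for i in range(nx):
            num = den = 0.0
            exact = None
            for (sx, sy, v) in stations:
                d = math.hypot(i - sx, j - sy)
                if d == 0.0:
                    exact = v          # grid point coincides with a station
                    break
                w = 1.0 / d ** power   # closer stations weigh more
                num += w * v
                den += w
            row.append(exact if exact is not None else num / den)
        grid.append(row)
    return grid

obs = [(0, 0, 10.0), (3, 3, 30.0)]     # two rain gauges (mm)
field = idw_grid(obs, nx=4, ny=4)
# field[0][0] == 10.0 at the first gauge; interior values fall between
# 10 and 30, shaped by distance to each station
```

Integrating additional station data, as the capacity-building effort described above aims to do, directly tightens this interpolation wherever new gauges fill spatial gaps.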
NASA Astrophysics Data System (ADS)
Cofino, A. S.; Fernández Quiruelas, V.; Blanco Real, J. C.; García Díez, M.; Fernández, J.
2013-12-01
Nowadays grid computing is a powerful computational tool that is ready to be used by the scientific community in different areas (such as biomedicine, astrophysics, climate, etc.). However, the use of these distributed computing infrastructures (DCIs) is not yet common practice in climate research, and only a few teams and applications in this area take advantage of them. Thus, the WRF4G project's objective is to popularize the use of this technology in the atmospheric sciences. To achieve this objective, one of the most widely used applications has been taken (WRF, a limited-area model and successor of the MM5 model), which has a user community of more than 8000 researchers worldwide. This community develops its research activity in different areas and could benefit from the advantages of Grid resources (case-study simulations, regional hind-casts/forecasts, sensitivity studies, etc.). The WRF model is used by many groups in the climate research community to carry out downscaling simulations, so this community will also benefit. However, Grid infrastructures have some drawbacks for the execution of applications that make intensive use of CPU and memory for long periods of time. This makes it necessary to develop a specific framework (middleware) that encapsulates the application and provides appropriate services for the monitoring and management of the simulations and their data. Thus, another objective of the WRF4G project is the development of a generic adaptation of WRF to DCIs. It should simplify access to the DCIs for researchers and free them from the technical and computational aspects of using these DCIs. 
Finally, in order to demonstrate the ability of WRF4G to solve actual scientific challenges of interest and relevance to climate science (implying a high computational cost), we will show results from different kinds of downscaling experiments, such as ERA-Interim re-analysis, CMIP5 models, or seasonal forecasts. WRF4G is being used to run WRF simulations that contribute to the CORDEX initiative and other projects such as SPECS and EUPORIAS. This work has been partially funded by the European Regional Development Fund (ERDF) and the Spanish National R&D Plan 2008-2011 (CGL2011-28864)
Integrated geometry and grid generation system for complex configurations
NASA Technical Reports Server (NTRS)
Akdag, Vedat; Wulf, Armin
1992-01-01
A grid generation system was developed that enables grid generation for complex configurations. The system, called ICEM/CFD, is described and its role in computational fluid dynamics (CFD) applications is presented. The capabilities of the system include full computer-aided design (CAD), grid generation on the actual CAD geometry definition using robust surface projection algorithms, easy interfacing with known CAD packages through common file formats for geometry transfer, grid quality evaluation of the volume grid, coupling of boundary condition set-up for block faces with grid topology generation, multi-block grid generation with or without point continuity and block-to-block interface requirements, and generation of grid files directly compatible with known flow solvers. The interactive and integrated approach to the problem of computational grid generation not only substantially reduces the manpower time required but also increases the flexibility of later grid modifications and enhancements, which is essential in an environment where CFD is integrated into a product design cycle.
Resilient Energy Systems | Integrated Energy Solutions | NREL
Augmenting the access grid using augmented reality
NASA Astrophysics Data System (ADS)
Li, Ying
2012-01-01
The Access Grid (AG) targets an advanced collaboration environment with which groups of people at remote sites can collaborate over high-performance networks. However, the current AG still employs VIC (Video Conferencing Tool) to offer only pure video for remote communication, while most AG users expect to collaboratively reference and manipulate the 3D geometric models of grid services' results in the live video of an AG session. Augmented Reality (AR) techniques can overcome these deficiencies with their characteristics of combining the virtual and the real, real-time interaction, and 3D registration, so it is natural for the AG to utilize AR to better support the advanced collaboration environment. This paper introduces an effort to augment the AG by adding support for AR capability, encapsulated in the node service infrastructure as the Augmented Reality Service (ARS). The ARS can merge the 3D geometric models of grid services' results and the real video scene of the AG into one AR environment, and provide the opportunity for distributed AG users to interactively and collaboratively participate in the AR environment with a better experience.
An Offload NIC for NASA, NLR, and Grid Computing
NASA Technical Reports Server (NTRS)
Awrach, James
2013-01-01
This work addresses distributed data management and dynamically configurable high-speed access to data distributed and shared over wide-area high-speed network environments. An offload engine NIC (network interface card) is proposed that scales in nX10-Gbps increments through 100-Gbps full duplex. The Globus de facto standard was used in projects requiring secure, robust, high-speed bulk data transport. Novel extension mechanisms were derived that will combine these technologies for use by GridFTP, bandwidth management resources, and host CPU (central processing unit) acceleration. The result will be wire-rate encrypted Globus grid data transactions through offload for splintering, encryption, and compression. As the need for greater network bandwidth increases, there is an inherent need for faster CPUs. The best way to accelerate CPUs is through a network acceleration engine. Grid computing data transfers with the Globus tool set did not have wire-rate encryption or compression. Existing technology cannot keep pace with the greater bandwidths of backplane and network connections. Present offload engines with ports to Ethernet are 32 to 40 Gbps full duplex at best. The best of the ultra-high-speed offload engines use expensive ASICs (application-specific integrated circuits) or NPUs (network processing units). The present state of the art also includes bonding and the use of multiple NICs, with portability to ASICs and software planned to accommodate data rates at 100 Gbps. The remaining industry solutions are for carrier-grade equipment manufacturers, with costly line cards having multiples of 10-Gbps ports, or 100-Gbps ports such as CFP modules that interface to costly ASICs and related circuitry. All of the existing solutions vary in configuration based on requirements of the host, motherboard, or carrier-grade equipment. 
The purpose of the innovation is to eliminate data bottlenecks within cluster, grid, and cloud computing systems, and to add several more capabilities while reducing space consumption and cost. Provisions were designed for interoperability with systems used in the NASA HEC (High-End Computing) program. The new acceleration engine consists of state-of-the-art FPGA (field-programmable gate array) core IP, C, and Verilog code; a novel communication protocol; and extensions to the Globus structure. The engine provides the functions of network acceleration, encryption, compression, packet ordering, and security added to Globus grid or cloud data transfer. This system is scalable in nX10-Gbps increments through 100-Gbps full duplex. It can be interfaced to industry-standard system-side or network-side devices or core IP in increments of 10 GigE, scaling to provide IEEE 40/100 GigE compliance.
Vision-Based Navigation and Parallel Computing
1990-08-01
Behzad Kamgar-Parsi and Behrooz Kamgar-Parsi, "On Problem Solving with Hopfield Neural Networks", CAR-TR-462, CS-TR... Second, the hypercube connections support logarithmic implementations of fundamental parallel algorithms, such as grid permutations and scan... the pose space. It also uses a set of virtual processors to represent an orthogonal projection grid, and projections of the six-dimensional pose space
NASA Astrophysics Data System (ADS)
Mohd Sakri, F.; Mat Ali, M. S.; Sheikh Salim, S. A. Z.
2016-10-01
The fluid physics of a liquid draining inside a tank is easily studied using numerical simulation. However, numerical simulation becomes expensive when the draining involves multi-phase flow. Since an accurate numerical simulation can only be obtained if a proper method for error estimation is applied, this paper provides a systematic assessment of the error due to grid convergence using OpenFOAM. OpenFOAM is an open-source CFD toolbox that is well known among researchers and institutions because it is free and ready to use. In this study, three grid resolutions are used: coarse, medium, and fine. The Grid Convergence Index (GCI) is applied to estimate the error due to grid sensitivity. A monotonic convergence condition is obtained in this study, showing that the grid convergence error has been progressively reduced. The fine grid has a GCI value below 1%. The value extrapolated by Richardson extrapolation is within the range of the GCI obtained.
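The three-grid GCI procedure referenced above can be sketched as follows, using the standard Roache-style formulation with a safety factor of 1.25. The numbers are illustrative, not the paper's data:

```python
# Grid Convergence Index (GCI) for three systematically refined grids.
# f1 = fine-grid solution, f2 = medium, f3 = coarse; r = refinement ratio.
import math

def gci_study(f1, f2, f3, r=2.0, Fs=1.25):
    # Observed order of convergence from the three solutions.
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)
    # Richardson extrapolation to zero grid spacing.
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)
    # Fine-grid GCI as a percentage of the fine-grid solution.
    e21 = abs((f2 - f1) / f1)
    gci_fine = Fs * e21 / (r**p - 1.0) * 100.0
    return p, f_exact, gci_fine

# Monotonically converging values (e.g., a drain time) on three grids:
p, f_exact, gci = gci_study(f1=10.1, f2=10.4, f3=11.0, r=2.0)
# p ≈ 1.0, f_exact ≈ 9.8, GCI ≈ 3.7% on the fine grid
```

A monotonic sequence like this (differences shrinking by a constant factor) is what allows the observed order `p` and the extrapolated value to be computed at all; oscillatory convergence would require a different treatment.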
RXIO: Design and implementation of high performance RDMA-capable GridFTP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Yuan; Yu, Weikuan; Vetter, Jeffrey S.
2011-12-21
For its low latency, high bandwidth, and low CPU utilization, Remote Direct Memory Access (RDMA) has established itself as an effective data movement technology in many networking environments. However, the transport protocols of grid run-time systems, such as GridFTP in Globus, are not yet capable of utilizing RDMA. In this study, we examine the architecture of GridFTP for the feasibility of enabling RDMA. An RDMA-capable XIO (RXIO) framework is designed and implemented to extend its XIO system and match the characteristics of RDMA. Our experimental results demonstrate that RDMA can significantly improve the performance of GridFTP, reducing the latency by 32% and increasing the bandwidth by more than three times. In achieving such performance improvements, RDMA dramatically cuts down the CPU utilization of GridFTP clients and servers. In conclusion, these results demonstrate that RXIO can effectively exploit the benefits of RDMA for GridFTP. It offers a good prototype for further leveraging GridFTP on wide-area RDMA networks.
Fault tolerance in computational grids: perspectives, challenges, and issues.
Haider, Sajjad; Nazir, Babar
2016-01-01
Computational grids are established with the intention of providing shared access to hardware- and software-based resources, with special reference to increased computational capabilities. Fault tolerance is one of the most important issues faced by computational grids. The main contribution of this survey is the creation of an extended classification of problems that occur in computational grid environments. The proposed classification will help researchers, developers, and maintainers of grids to understand the types of issues to be anticipated. Moreover, different types of problems, such as omission, interaction, and timing-related problems, have been identified that need to be handled on various layers of the computational grid. In this survey, fault tolerance and fault detection mechanisms are also analyzed and examined. Our conclusion is that a dependable and reliable grid can only be established when greater emphasis is placed on fault identification. Moreover, our survey reveals that adaptive and intelligent fault identification and tolerance techniques can improve the dependability of grid working environments.
Grid2: A Program for Rapid Estimation of the Jovian Radiation Environment
NASA Technical Reports Server (NTRS)
Evans, R. W.; Brinza, D. E.
2014-01-01
Grid2 is a program that utilizes the Galileo Interim Radiation Electron model 2 (GIRE2) Jovian radiation model to compute fluences and doses for Jupiter missions. (Note: the successive versions of these two software packages have been GIRE and GIRE2, and likewise Grid and Grid2.) While GIRE2 is an important improvement over the original GIRE radiation model, it can take a day or more to compute these quantities for a complete mission. Grid2 fits the results of the detailed GIRE2 code with a set of grids in local time and position, thereby greatly speeding up execution of the model: minutes as opposed to days. The Grid2 model covers the time period from 1971 to 2050 and distances of 1.03 to 30 Jovian radii (Rj). It is available as a direct-access database through a FORTRAN interface program. The new database is only slightly larger than the original grid version: 1.5 gigabytes (GB) versus 1.2 GB.
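The speed-up comes from replacing the physics model with interpolated table lookups. A minimal sketch of the idea, assuming a hypothetical two-dimensional (local time x radial distance) table; the actual Grid2 grids, dimensions, and interpolation scheme are more elaborate than this:

```python
from bisect import bisect_right

def bilinear_lookup(times, radii, table, t, r):
    """Interpolate a precomputed dose/fluence table at query point (t, r).
    times and radii are sorted grid-node coordinates; table[i][j] holds
    the precomputed value at (times[i], radii[j])."""
    # locate the grid cell containing (t, r), clamping to the table edges
    i = min(max(bisect_right(times, t) - 1, 0), len(times) - 2)
    j = min(max(bisect_right(radii, r) - 1, 0), len(radii) - 2)
    # fractional position inside the cell
    u = (t - times[i]) / (times[i + 1] - times[i])
    v = (r - radii[j]) / (radii[j + 1] - radii[j])
    # bilinear blend of the four surrounding precomputed values
    return ((1 - u) * (1 - v) * table[i][j]
            + u * (1 - v) * table[i + 1][j]
            + (1 - u) * v * table[i][j + 1]
            + u * v * table[i + 1][j + 1])
```

Each query costs two binary searches and four multiply-adds regardless of how expensive the underlying model was to evaluate, which is what turns days of computation into minutes.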
Grid Modernization Laboratory Consortium - Testing and Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroposki, Benjamin; Skare, Paul; Pratt, Rob
This paper highlights some of the unique testing capabilities and projects being performed at several national laboratories as part of the U.S. Department of Energy Grid Modernization Laboratory Consortium. As part of this effort, the Grid Modernization Laboratory Consortium Testing Network is being developed to accelerate grid modernization by enabling access to a comprehensive testing infrastructure and creating a repository of validated models and simulation tools that will be publicly available. This work is key to accelerating the development, validation, standardization, adoption, and deployment of new grid technologies to help meet U.S. energy goals.
Irvine Smart Grid Demonstration, a Regional Smart Grid Demonstration Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yinger, Robert; Irwin, Mark
ISGD was a comprehensive demonstration that spanned the electricity delivery system and extended into customer homes. The project used phasor measurement technology to enable substation-level situational awareness, and demonstrated SCE’s next-generation substation automation system. It extended beyond the substation to evaluate the latest generation of distribution automation technologies, including looped 12-kV distribution circuit topology using URCIs. The project team used DVVC capabilities to demonstrate CVR. In customer homes, the project evaluated HAN devices such as smart appliances, programmable communicating thermostats, and home energy management components. The homes were also equipped with energy storage, solar PV systems, and a number of energy efficiency measures (EEMs). The team used one block of homes to evaluate strategies and technologies for achieving ZNE. A home achieves ZNE when it produces at least as much renewable energy as the amount of energy it consumes annually. The project also assessed the impact of device-specific demand response (DR), as well as load management capabilities involving energy storage devices and plug-in electric vehicle charging equipment. In addition, the ISGD project sought to better understand the impact of ZNE homes on the electric grid. ISGD’s SENet enabled end-to-end interoperability between multiple vendors’ systems and devices, while also providing a level of cybersecurity that is essential to smart grid development and adoption across the nation. The ISGD project includes a series of sub-projects grouped into four logical technology domains: Smart Energy Customer Solutions, Next-Generation Distribution System, Interoperability and Cybersecurity, and Workforce of the Future. Section 2.3 provides a more detailed overview of these domains.
Contributing opportunistic resources to the grid with HTCondor-CE-Bosco
NASA Astrophysics Data System (ADS)
Weitzel, Derek; Bockelman, Brian
2017-10-01
The HTCondor-CE [1] is the primary Compute Element (CE) software for the Open Science Grid. While it offers many advantages for large sites, for smaller WLCG Tier-3 sites or opportunistic clusters it can be difficult to install, configure, and maintain. Installing a CE typically involves understanding several pieces of software, installing hundreds of packages on a dedicated node, updating several configuration files, and implementing grid authentication mechanisms. On the other hand, accessing remote clusters from personal computers has been dramatically improved with Bosco: site admins only need to set up SSH public-key authentication and appropriate accounts on a login host. In this paper, we take a new approach with the HTCondor-CE-Bosco, a CE that combines the flexibility and reliability of the HTCondor-CE with the easy-to-install Bosco. The administrators of the opportunistic resource are not required to install any software: only SSH access and a user account are required from the host site. The OSG can then run the grid-specific portions from a central location. This provides a new, more centralized model for running grid services, which complements the traditional distributed model. We will show the architecture of an HTCondor-CE-Bosco enabled site, as well as feedback from multiple sites that have deployed it.
Colling, D.; Britton, D.; Gordon, J.; Lloyd, S.; Doyle, A.; Gronbech, P.; Coles, J.; Sansum, A.; Patrick, G.; Jones, R.; Middleton, R.; Kelsey, D.; Cass, A.; Geddes, N.; Clark, P.; Barnby, L.
2013-01-01
The Large Hadron Collider (LHC) is one of the greatest scientific endeavours to date. The construction of the collider itself and the experiments that collect data from it represent a huge investment, both financially and in terms of human effort, in our hope to understand the way the Universe works at a deeper level. Yet the volumes of data produced are so large that they cannot be analysed at any single computing centre. Instead, the experiments have all adopted distributed computing models based on the LHC Computing Grid. Without the correct functioning of this grid infrastructure the experiments would not be able to understand the data that they have collected. Within the UK, the Grid infrastructure needed by the experiments is provided by the GridPP project. We report on the operations, performance and contributions made to the experiments by the GridPP project during the years of 2010 and 2011—the first two significant years of the running of the LHC. PMID:23230163
NASA Astrophysics Data System (ADS)
Hanisch, R. J.
2014-11-01
The concept of the Virtual Observatory arose more-or-less simultaneously in the United States and Europe circa 2000. Ten pages of Astronomy and Astrophysics in the New Millennium: Panel Reports (National Academy Press, Washington, 2001), that is, the detailed recommendations of the Panel on Theory, Computation, and Data Exploration of the 2000 Decadal Survey in Astronomy, are dedicated to describing the motivation for, scientific value of, and major components required in implementing the National Virtual Observatory. European initiatives included the Astrophysical Virtual Observatory at the European Southern Observatory, the AstroGrid project in the United Kingdom, and the Euro-VO (sponsored by the European Union). Organizational/conceptual meetings were held in the US at the California Institute of Technology (Virtual Observatories of the Future, June 13-16, 2000) and at ESO Headquarters in Garching, Germany (Mining the Sky, July 31-August 4, 2000; Toward an International Virtual Observatory, June 10-14, 2002). The nascent US, UK, and European VO projects formed the International Virtual Observatory Alliance (IVOA) at the June 2002 meeting in Garching, with yours truly as the first chair. The IVOA has grown to a membership of twenty-one national projects and programs on six continents, and has developed a broad suite of data access protocols and standards that have been widely implemented. Astronomers can now discover, access, and compare data from hundreds of telescopes and facilities, hosted at hundreds of organizations worldwide, stored in thousands of databases, all with a single query.
Space-based Science Operations Grid Prototype
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Welch, Clara L.; Redman, Sandra
2004-01-01
Grid technology is an emerging capability that enables widely disparate services to be offered to users economically and conveniently, services not previously available on a wide basis. Under the Grid concept, disparate organizations, generally defined as "virtual organizations", can share services, i.e., the discipline-specific computer applications required to accomplish specific scientific and engineering organizational goals and objectives. Grid technology has been enabled by the evolution of increasingly high-speed networking; without that evolution it would not have emerged. NASA/Marshall Space Flight Center's (MSFC) Flight Projects Directorate, Ground Systems Department is developing a Space-based Science Operations Grid prototype to provide scientists and engineers the tools necessary to operate space-based science payloads/experiments and to conduct public and educational outreach. In addition, Grid technology can provide new services not currently available to users, including mission voice and video, application sharing, telemetry management and display, payload and experiment commanding, data mining, high-order data processing, discipline-specific application sharing, and data storage, all from a single Grid portal. The prototype will provide most of these services in a first-step demonstration of integrated Grid and space-based science operations technologies. It will initially be based on the International Space Station science operational services located at the Payload Operations Integration Center at MSFC, but can be applied to many NASA projects, including free-flying satellites and future projects.
The Prototype will use the Internet2 Abilene Research and Education Network, currently a 10-Gb backbone network, to reach the University of Alabama in Huntsville and several other, as yet unidentified, Space Station science experimenters. There is an international aspect to the Grid, involving the AmericasPath (AMPath) network, the Chilean REUNA Research and Education Network, and the University of Chile in Santiago, that will further demonstrate how extensively these services can be used. From the user's perspective, the Prototype will provide a single interface and logon to these varied services without the complexity of knowing the wheres and hows of each service. There is a separate and deliberate emphasis on security, which will be addressed by specifically outlining the different approaches and tools used. Grid technology, unlike the Internet, is being designed with security in mind. In addition, we will show the locations, configurations, and network paths associated with each service and virtual organization. We will discuss the separate virtual organizations that we define for the varied user communities. These will include certain, as yet undetermined, space-based science functions and/or processes, and will include specific virtual organizations required for public and educational outreach and for science and engineering collaboration. We will also discuss the Grid Prototype's performance and the potential for further Grid applications in both space-based and ground-based projects and processes. In this paper and presentation we will detail each service and how the services are integrated using Grid technology.
Snow and Ice Products from the Moderate Resolution Imaging Spectroradiometer
NASA Technical Reports Server (NTRS)
Hall, Dorothy K.; Salomonson, Vincent V.; Riggs, George A.; Klein, Andrew G.
2003-01-01
Snow and sea ice products, derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument flown on the Terra and Aqua satellites, are or will be available through the National Snow and Ice Data Center Distributed Active Archive Center (DAAC). The algorithms that produce the products are automated, thus providing a consistent global data set that is suitable for climate studies. The suite of MODIS snow products begins with a 500-m resolution, 2330-km swath snow-cover map that is then projected onto a sinusoidal grid to produce daily and 8-day composite tile products. The sequence proceeds to daily and 8-day composite climate-modeling grid (CMG) products at 0.05° resolution. A daily snow albedo product will be available in early 2003 as a beta test product. The sequence of sea ice products begins with a swath product at 1-km resolution that provides sea ice extent and ice-surface temperature (IST). The sea ice swath products are then mapped onto the Lambert azimuthal equal-area or EASE-Grid projection to create daily and 8-day composite sea ice tile products, also at 1-km resolution. Climate-modeling grid (CMG) sea ice products in the EASE-Grid projection at 4-km resolution are planned for early 2003.
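For reference, the sinusoidal projection onto which the swath snow maps are gridded is a simple equal-area mapping. A sketch, assuming the authalic sphere radius commonly used for MODIS land grids; the tile/row/column bookkeeping of the actual products is omitted:

```python
import math

R_SPHERE = 6371007.181  # sphere radius used for MODIS sinusoidal grids, metres

def latlon_to_sinusoidal(lat_deg, lon_deg, lon0_deg=0.0):
    """Map a (lat, lon) point to sinusoidal-projection x/y in metres,
    relative to the central meridian lon0_deg."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg - lon0_deg)
    x = R_SPHERE * lon * math.cos(lat)  # meridians converge toward the poles
    y = R_SPHERE * lat                  # parallels are equally spaced
    return x, y
```

The cos(lat) factor shrinks east-west distances toward the poles, which is what makes each grid cell cover (approximately) equal area on the sphere.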
NASA Astrophysics Data System (ADS)
Xiang, Min; Qu, Qinqin; Chen, Cheng; Tian, Li; Zeng, Lingkang
2017-11-01
To improve the reliability of communication services in the smart distribution grid (SDG), an access selection algorithm based on dynamic network status and differing service types is proposed for heterogeneous wireless networks. Network performance index values are obtained in real time by a multimode terminal, and the trend of the index values is analyzed with a growth matrix. Index weights are calculated by the entropy-weight method and then modified by rough-set theory to obtain the final weights. Grey relational analysis is then used to rank the candidate networks, and the optimum communication network is selected. Simulation results show that the proposed algorithm can effectively perform dynamic access selection in the heterogeneous wireless networks of an SDG and reduce the network blocking probability.
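The weighting and ranking steps described above can be sketched as follows. This is a generic illustration of the entropy-weight method and grey relational analysis, not the authors' code; the growth-matrix and rough-set adjustments are omitted, and rho = 0.5 is the customary distinguishing coefficient:

```python
import math

def entropy_weights(matrix):
    """Entropy-weight method: criteria whose values vary more across
    candidates carry more weight. matrix[i][j] is candidate i's
    (benefit-type, positive) score on criterion j."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    raw = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        s = sum(col)
        e = -k * sum((v / s) * math.log(v / s) for v in col if v > 0)
        raw.append(1.0 - e)  # divergence: low entropy -> high weight
    total = sum(raw)
    return [w / total for w in raw]

def grey_relational_grades(matrix, weights, rho=0.5):
    """Grey relational analysis: compare each candidate to an ideal
    reference series (per-criterion maximum) and score by the weighted
    relational grade; higher is better."""
    m, n = len(matrix), len(matrix[0])
    ref = [max(matrix[i][j] for i in range(m)) for j in range(n)]
    deltas = [[abs(matrix[i][j] - ref[j]) for j in range(n)] for i in range(m)]
    dmin = min(min(row) for row in deltas)
    dmax = max(max(row) for row in deltas)
    grades = []
    for i in range(m):
        coeffs = [(dmin + rho * dmax) / (deltas[i][j] + rho * dmax)
                  for j in range(n)]
        grades.append(sum(w * c for w, c in zip(weights, coeffs)))
    return grades
```

A candidate that dominates on every criterion matches the reference series exactly and receives the maximum grade of 1.0.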
TEP Power Partners Project [Tucson Electric Power
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
2014-02-06
The Arizona Governor’s Office of Energy Policy, in partnership with Tucson Electric Power (TEP), Tendril, and Next Phase Energy (NPE), formed the TEP Power Partners pilot project to demonstrate how residential customers could access their energy usage data and third-party applications using data obtained from an Automatic Meter Reading (AMR) network. The project applied for and was awarded a Smart Grid Data Access grant through the U.S. Department of Energy. The project participants’ goal for Phase I was to actively engage 1,700 residential customers to demonstrate sustained participation, reduction in energy usage (kWh) and cost ($), and to measure related aspects of customer satisfaction. This demonstration report presents a summary of the findings, effectiveness, and customer satisfaction with the 15-month TEP Power Partners pilot project. The objective of the program is to provide residential customers with energy consumption data from AMR metering and empower these participants to better manage their electricity use. The pilot recruitment goals included migrating 700 existing customers from the completed Power Partners Demand Response Load Control Project (DRLC) and enrolling 1,000 new participants. Upon conclusion of the project on November 19, 2013: 1,390 Home Area Networks (HANs) were registered; 797 new participants installed a HAN; survey respondents were satisfied with the program and found value in a variety of specific program components; survey respondents reported feeling greater control over their energy usage and reported taking energy-saving actions in their homes after participating in the program; on average, 43% of the participants returned to the web portal monthly and 15% returned weekly; and an impact evaluation, completed by Opinion Dynamics, found average participant savings for the treatment period to be 2.3% of household use during this period. In total, the program saved 163 MWh in the treatment period of 2013.
BOREAS Regional Soils Data in Raster Format and AEAC Projection
NASA Technical Reports Server (NTRS)
Monette, Bryan; Knapp, David; Hall, Forrest G. (Editor); Nickeson, Jaime (Editor)
2000-01-01
This data set was gridded by BOREAS Information System (BORIS) Staff from a vector data set received from the Canadian Soil Information System (CanSIS). The original data came in two parts that covered Saskatchewan and Manitoba. The data were gridded and merged into one data set of 84 files covering the BOREAS region. The data were gridded into the AEAC projection. Because the mapping of the two provinces was done separately in the original vector data, there may be discontinuities in some of the soil layers because of different interpretations of certain soil properties. The data are stored in binary, image format files.
Tools and Techniques for Measuring and Improving Grid Performance
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Frumkin, M.; Smith, W.; VanderWijngaart, R.; Wong, P.; Biegel, Bryan (Technical Monitor)
2001-01-01
This viewgraph presentation provides information on NASA's geographically dispersed computing resources, and the various methods by which the disparate technologies are integrated within a nationwide computational grid. Many large-scale science and engineering projects are accomplished through the interaction of people, heterogeneous computing resources, information systems and instruments at different locations. The overall goal is to facilitate the routine interactions of these resources to reduce the time spent in design cycles, particularly for NASA's mission critical projects. The IPG (Information Power Grid) seeks to implement NASA's diverse computing resources in a fashion similar to the way in which electric power is made available.
caGrid 1.0 : an enterprise Grid infrastructure for biomedical research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oster, S.; Langella, S.; Hastings, S.
To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design: An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG(TM)) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including (1) discovery, (2) integrated and large-scale data analysis, and (3) coordinated study. Measurements: The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community-provided services, and application programming interfaces for building client applications. Results: caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components, and the caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL:
Random access actuation of nanowire grid metamaterial
NASA Astrophysics Data System (ADS)
Cencillo-Abad, Pablo; Ou, Jun-Yu; Plum, Eric; Valente, João; Zheludev, Nikolay I.
2016-12-01
While metamaterials offer engineered static optical properties, future artificial media with dynamic random-access control over shape and position of meta-molecules will provide arbitrary control of light propagation. The simplest example of such a reconfigurable metamaterial is a nanowire grid metasurface with subwavelength wire spacing. Recently we demonstrated computationally that such a metadevice with individually controlled wire positions could be used as dynamic diffraction grating, beam steering module and tunable focusing element. Here we report on the nanomembrane realization of such a nanowire grid metasurface constructed from individually addressable plasmonic chevron nanowires with a 230 nm × 100 nm cross-section, which consist of gold and silicon nitride. The active structure of the metadevice consists of 15 nanowires each 18 μm long and is fabricated by a combination of electron beam lithography and ion beam milling. It is packaged as a microchip device where the nanowires can be individually actuated by control currents via differential thermal expansion.
Virtualizing access to scientific applications with the Application Hosting Environment
NASA Astrophysics Data System (ADS)
Zasada, S. J.; Coveney, P. V.
2009-12-01
The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion.
Program summary
Program title: Application Hosting Environment 2.0
Catalogue identifier: AEEJ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public Licence, Version 2
No. of lines in distributed program, including test data, etc.: not applicable
No. of bytes in distributed program, including test data, etc.: 1 685 603 766
Distribution format: tar.gz
Programming language: Perl (server), Java (client)
Computer: x86
Operating system: Linux (server); Linux/Windows/MacOS (client)
RAM: 134 217 728 bytes (server), 67 108 864 bytes (client)
Classification: 6.5
External routines: VirtualBox (server), Java (client)
Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2].
Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid.
Solution method: To address the above problem, we have developed AHE, a lightweight interface designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications.
Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox (http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide.
Running time: Not applicable
References:
[1] J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf.
[2] P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406-418.
Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation
NASA Astrophysics Data System (ADS)
Anisenkov, A. V.
2018-03-01
In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the Worldwide LHC Computing Grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and to provide a high degree of data accessibility (hundreds of petabytes). The paper considers the ATLAS Grid Information System (AGIS), used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all the parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in developing a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing within a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).
Distributed and grid computing projects with research focus in human health.
Diomidous, Marianna; Zikos, Dimitrios
2012-01-01
Distributed systems and grid computing systems are used to connect several computers to obtain a higher level of performance in order to solve a problem. During the last decade, projects have used the World Wide Web to aggregate individuals' CPU power for research purposes. This paper presents the existing active large-scale distributed and grid computing projects with a research focus in human health. Eleven active projects with more than 2000 Processing Units (PUs) each were found and are presented. The research focus for most of them is molecular biology, specifically understanding or predicting protein structure through simulation, comparing proteins, genomic analysis for disease-provoking genes, and drug design. Though not in all cases explicitly stated, common target diseases include HIV, dengue, Duchenne dystrophy, Parkinson's disease, various types of cancer, and influenza. Other diseases include malaria, anthrax, and Alzheimer's disease. The need for national initiatives and European collaboration for larger-scale projects is stressed, to raise citizens' awareness and participation and to create a culture of Internet volunteering and altruism.
Oregon Magnetic and Gravity Maps and Data: A Web Site for Distribution of Data
Roberts, Carter W.; Kucks, Robert P.; Hill, Patricia L.
2008-01-01
This web site gives the results of a USGS project to acquire the best available public-domain aeromagnetic and gravity data in the United States and merge these data into uniform composite grids for each State. The results for the State of Oregon are presented on this site; files of the aeromagnetic and gravity grids and images are available for downloading. In Oregon, 49 magnetic surveys have been knit together to form a single digital grid and map. Also, a complete Bouguer gravity anomaly grid and map was generated from 40,665 gravity station measurements in and adjacent to Oregon. In addition, a map shows the locations of the aeromagnetic surveys, color-coded by survey flight-line spacing. This project was supported by the Mineral Resource Program of the USGS.
SLGRID: spectral synthesis software in the grid
NASA Astrophysics Data System (ADS)
Sabater, J.; Sánchez, S.; Verdes-Montenegro, L.
2011-11-01
SLGRID (http://www.e-ciencia.es/wiki/index.php/Slgrid) is a pilot project proposed by the e-Science Initiative of Andalusia (eCA) and supported by the Spanish e-Science Network in the frame of the European Grid Initiative (EGI). The aim of the project was to adapt the spectral synthesis software Starlight (Cid-Fernandes et al. 2005) to the Grid infrastructure. Starlight is used to estimate the underlying stellar populations (their ages and metallicities) from an optical spectrum; hence, it is possible to obtain a clean nebular spectrum that can be used to diagnose the presence of an Active Galactic Nucleus (Sabater et al. 2008, 2009). The typically serial execution of the code for big samples of galaxies made it ideal for integration into the Grid. We obtain an improvement in computational time of order N, where N is the number of nodes available in the Grid. In a real case we obtained our results in 3 hours with SLGRID instead of the 60 days spent using Starlight on a PC. The code has already been ported to the Grid. The first tests were made within the e-CA infrastructure and, later, it was tested and improved in collaboration with the CETA-CIEMAT. The SLGRID project has recently been renewed. In the future it is planned to adapt the code for the reduction of data from Integral Field Units, where each dataset is composed of hundreds of spectra. Electronic version of the poster at http://www.iaa.es/~jsm/SEA2010
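The order-N speed-up follows from the fits being mutually independent. A toy sketch of the dispatch pattern, with hypothetical names; the real system submits Starlight jobs to Grid nodes rather than local workers:

```python
from concurrent.futures import ThreadPoolExecutor

def fit_spectrum(spectrum_id):
    """Placeholder for one independent Starlight-style fit; in reality
    each call would run the synthesis code on a single optical spectrum
    (an external, CPU-bound job)."""
    return spectrum_id, sum(range(1000))  # stand-in for the heavy work

def fit_sample(spectrum_ids, n_workers=4):
    # Each spectrum is fitted independently of the others, so the whole
    # sample parallelises trivially: with N nodes (or workers) the
    # wall-clock time drops roughly as 1/N.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return dict(pool.map(fit_spectrum, spectrum_ids))
```

Because there is no communication between fits, the 3-hour versus 60-day figure quoted above is close to the ideal serial-time/N scaling.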
Design and evaluation of a grid reciprocation scheme for use in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Patel, Tushita; Sporkin, Helen; Peppard, Heather; Williams, Mark B.
2016-03-01
This work describes a methodology for efficient removal of scatter radiation during digital breast tomosynthesis (DBT). The goal of this approach is to enable grid image obscuration without a large increase in radiation dose by minimizing misalignment of the grid focal point (GFP) and x-ray focal spot (XFS) during grid reciprocation. Hardware for the motion scheme was built and tested on the dual modality breast tomosynthesis (DMT) scanner, which combines DBT and molecular breast tomosynthesis (MBT) on a single gantry. The DMT scanner uses fully isocentric rotation of tube and x-ray detector for maintaining a fixed tube-detector alignment during DBT imaging. A cellular focused copper prototype grid with 80 cm focal length, 3.85 mm height, 0.1 mm thick lamellae, and 1.1 mm hole pitch was tested. Primary transmission of the grid at 28 kV tube voltage was on average 74% with the grid stationary and aligned for maximum transmission. It fell to 72% during grid reciprocation by the proposed method. Residual grid line artifacts (GLAs) in projection views and reconstructed DBT images are characterized and methods for reducing the visibility of GLAs in the reconstructed volume through projection image flat-field correction and spatial frequency-based filtering of the DBT slices are described and evaluated. The software correction methods reduce the visibility of these artifacts in the reconstructed volume, making them imperceptible both in the reconstructed DBT images and their Fourier transforms.
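One of the software corrections mentioned, projection image flat-field correction, cancels stationary grid-line shadows by dividing each projection by a gain image containing the same fixed pattern. A toy numpy sketch (the ~74% lamella transmission figure comes from the abstract; the geometry and numbers are invented for illustration, and the real method also uses frequency-domain filtering of the DBT slices):

```python
import numpy as np

def flat_field_correct(projection, flat):
    # Dividing by the normalized flat (gain) image removes any fixed pattern,
    # including residual grid-line artifacts, that appears in both images.
    gain = flat / flat.mean()
    return projection / gain

# Synthetic uniform exposure shadowed by vertical grid lines (~74% transmission).
lines = np.where(np.arange(64) % 8 == 0, 0.74, 1.0)
proj = 100.0 * np.ones((64, 64)) * lines[None, :]
flat = np.ones((64, 64)) * lines[None, :]
corrected = flat_field_correct(proj, flat)  # grid lines cancel out
```

After correction the synthetic image is uniform; in practice the grid reciprocates between exposures, which is why residual lines remain and motivate the additional spatial-frequency filtering described above.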
NASA Astrophysics Data System (ADS)
Niedzielski, T.; Włosińska, M.; Miziński, B.; Hewelt, M.; Migoń, P.; Kosek, W.; Priede, I. G.
2012-04-01
The poster aims to provide a broad scientific audience with a general overview of a project on sea level change modelling and prediction that has just commenced at the University of Wrocław, Poland. The initiative into which the project fits, called the Homing Plus programme, is organised by the Foundation for Polish Science and financially supported by the European Union through the European Regional Development Fund and the Innovative Economy Programme. The project has two key research objectives that complement each other. First, emphasis is put on modern satellite altimetric gridded time series from the Archiving, Validation and Interpretation of Satellite Oceanographic data (AVISO) repository. Daily sea level anomaly maps, available in near-real time courtesy of AVISO, are downloaded every day to our local server in Wrocław, Poland. These data will be processed within a general framework of modelling and prediction of sea level change in the short, medium and long term. Secondly, sea level change over geological time is scrutinised in order to cover very long time scales that go far beyond the history of altimetric and tide-gauge measurements. The aforementioned approaches comprise a few tasks that aim to solve the following detailed problems. Within the first approach, our objective is to seek spatio-temporal dependencies in the gridded sea level anomaly time series. Subsequently, predictions that make use of such cross-correlations will be derived, and a near-real time service for automatic updating with validation will be implemented. Concurrently (i.e. apart from spatio-temporal dependencies and their use in forecasting variable sea level topography), threshold models will be utilised for predicting the El Niño/Southern Oscillation (ENSO) signal that is normally present in sea level anomaly time series of the equatorial Pacific. Within the second approach, however, entirely different methods are proposed.
Links between sea floor topography and sea level change will be quantified, with particular emphasis placed on the hypsometric curve and its semi-empirical modelling. Very long-term projections of sea level change will be based on statistical hypothesis testing and trend analyses, but input data will be calculated from theoretical models. Somewhat apart from this topic is the notion of nonlinearity, which was earlier shown to be present in gridded sea level anomaly time series. Thus, the list of intermediate tasks concludes with the need for a comprehensive interpretation of such irregularities.
Using virtualization to protect the proprietary material science applications in volunteer computing
NASA Astrophysics Data System (ADS)
Khrapov, Nikolay P.; Rozen, Valery V.; Samtsevich, Artem I.; Posypkin, Mikhail A.; Sukhomlin, Vladimir A.; Oganov, Artem R.
2018-04-01
USPEX is a world-leading software package for computational materials design. In essence, USPEX splits a simulation into a large number of workunits that can be processed independently. This scheme ideally fits the desktop grid architecture. Workunit processing is done by a simulation package aimed at energy minimization. Many such packages are proprietary and must be protected from unauthorized access when running on a volunteer PC. In this paper we present an original approach based on virtualization. In a nutshell, the proprietary code and input files are stored in an encrypted folder and run inside a virtual machine image that is also password protected. The paper describes this approach in detail and discusses its application in the USPEX@home volunteer project.
Analysis of the World Experience of Smart Grid Deployment: Economic Effectiveness Issues
NASA Astrophysics Data System (ADS)
Ratner, S. V.; Nizhegorodtsev, R. M.
2018-06-01
Despite the positive dynamics in the growth of RES-based power production in the electric power systems of many countries, the further development of commercially mature wind and solar generation technologies is often constrained by the existing grid infrastructure and conventional energy supply practices. The integration of large wind and solar power plants into a single power grid and the development of microgeneration require the widespread introduction of a new smart grid technology cluster (smart power grids), whose technical advantages over conventional grids have been fairly well studied, while questions of their economic effectiveness remain open. Estimating and forecasting the potential economic effects of introducing innovative technologies in the power sector during the stage preceding commercial development is a methodologically difficult task that requires knowledge from different sciences. This paper analyses smart grid project implementation in Europe and the United States. Interval estimates are obtained for the projects' basic economic parameters. It was revealed that the majority of implemented smart grid projects are not yet commercially effective, since their positive externalities are usually not recognized on the revenue side due to the lack of universal methods for monetizing public benefits. The results of the research can be used in modernization and development planning for the existing grid infrastructure, both at the federal level and at the level of particular regions and territories.
NASA Astrophysics Data System (ADS)
Reerink, Thomas J.; van de Berg, Willem Jan; van de Wal, Roderik S. W.
2016-11-01
This paper accompanies the second OBLIMAP open-source release. The package is developed to map climate fields between a general circulation model (GCM) and an ice sheet model (ISM) in both directions by using optimally aligned oblique projections, which minimize distortions. The curvature of the surfaces of the GCM and ISM grids differs, both grids may be irregularly spaced, and their resolutions are allowed to differ greatly. OBLIMAP's stand-alone version is able to map data sets that differ in various aspects onto the same ISM grid. Each grid may coincide with the surface of a sphere, an ellipsoid or a flat plane, and the grid types may differ. Re-projection of, for example, ISM data sets is also facilitated. This is demonstrated by relevant applications concerning the major ice caps. As the stand-alone version also applies to the reverse mapping direction, it can be used as an offline coupler. Furthermore, OBLIMAP 2.0 is an embeddable GCM-ISM coupler, suited for high-frequency online coupled experiments. A new fast scan method is presented for structured grids as an alternative to the former time-consuming grid search strategy, realising a performance gain of several orders of magnitude and enabling the mapping of high-resolution data sets with a much larger number of grid nodes. Further, a highly flexible masked mapping option is added. The limitation of the fast scan method with respect to unstructured and adaptive grids is discussed, together with a possible future parallel Message Passing Interface (MPI) implementation.
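The idea of an oblique projection is to rotate the projection centre onto the region of interest before projecting, so distortion stays small over the ice sheet. A minimal sketch of an oblique stereographic map projection in this spirit (spherical Earth, generic textbook formula; this is not OBLIMAP's exact implementation, and the Greenland-like centre coordinates are illustrative):

```python
import numpy as np

def oblique_stereographic(lon, lat, lon0, lat0, radius=6371.0):
    # Stereographic projection centred on (lon0, lat0): aligning the centre
    # with the mapped region (e.g. an ice sheet) minimizes distortion there.
    lam = np.radians(lon - lon0)
    phi, phi0 = np.radians(lat), np.radians(lat0)
    cos_c = np.sin(phi0) * np.sin(phi) + np.cos(phi0) * np.cos(phi) * np.cos(lam)
    k = 2.0 * radius / (1.0 + cos_c)  # stereographic scale factor
    x = k * np.cos(phi) * np.sin(lam)
    y = k * (np.cos(phi0) * np.sin(phi) - np.sin(phi0) * np.cos(phi) * np.cos(lam))
    return x, y

x0, y0 = oblique_stereographic(-45.0, 72.0, -45.0, 72.0)  # Greenland-like centre
# (x0, y0) is (0, 0): the projection centre maps to the grid origin
```

Points east of the centre land at positive x, and distances near the centre are nearly undistorted, which is the property the "optimal alignment" exploits.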
Smart Grid Information Clearinghouse (SGIC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rahman, Saifur
Since the Energy Independence and Security Act of 2007 was enacted, a large number of websites discussing the smart grid and relevant information have appeared, including those from government, academia, industry, the private sector and regulators. These websites collect information independently, so smart grid information was quite scattered and dispersed. The objective of this work was to develop, populate, manage and maintain the public Smart Grid Information Clearinghouse (SGIC) web portal. The information on the SGIC website is comprehensive, including smart grid information, research & development, demonstration projects, technical standards, cost & benefit analyses, business cases, legislation, policy & regulation, and other information on lessons learned and best practices. The content on the SGIC website is logically grouped to allow easy browsing, searching and sorting. In addition to the browse and search features, the SGIC web portal also allows users to share their smart grid information with others through our online content submission platform. The Clearinghouse web portal therefore serves as a first-stop shop for smart grid information that collects smart grid information in a non-biased, non-promotional manner and can provide the missing link from information sources to end users and better serve users' needs. The web portal is available at www.sgiclearinghouse.org. This report summarizes the work performed during the course of the project (September 2009 – August 2014). Section 2.0 lists SGIC Advisory Committee and User Group members. Section 3.0 discusses the SGIC information architecture and web-based database application functionalities. Section 4.0 summarizes SGIC features and functionalities, including its search, browse and sort capabilities, web portal social networking, online content submission platform and the security measures implemented.
Section 5.0 discusses SGIC web portal contents, including smart grid 101, smart grid projects, deployment experience (i.e., use cases, lessons learned, cost-benefit analyses and business cases), in-depth information (i.e., standards, technology, cyber security, legislation, education and training, and demand response), as well as international information. Section 6.0 summarizes SGIC statistics from the launch of the portal on July 07, 2010 to August 31, 2014. Section 7.0 summarizes the publicly available information resulting from this work.
Grid Computing at GSI for ALICE and FAIR - present and future
NASA Astrophysics Data System (ADS)
Schwarz, Kilian; Uhlig, Florian; Karabowicz, Radoslaw; Montiel-Gonzalez, Almudena; Zynovyev, Mykhaylo; Preuss, Carsten
2012-12-01
The future FAIR experiments CBM and PANDA have computing requirements that fall into a category that cannot currently be satisfied by a single computing centre. A larger, distributed computing infrastructure is needed to cope with the amount of data to be simulated and analysed. Since 2002, GSI has operated a tier2 centre for ALICE@CERN. The central component of the GSI computing facility, and hence the core of the ALICE tier2 centre, is an LSF/SGE batch farm, currently split into three subclusters with a total of 15000 CPU cores shared by the participating experiments, accessible both locally and soon also completely via Grid. In terms of data storage, a 5.5 PB Lustre file system, directly accessible from all worker nodes, is maintained, as well as a 300 TB xrootd-based Grid storage element. Based on this existing expertise, and utilising ALICE's middleware 'AliEn', the Grid infrastructure for PANDA and CBM is being built. Besides a tier0 centre at GSI, the computing Grids of the two FAIR collaborations now encompass more than 17 sites in 11 countries and are constantly expanding. The operation of the distributed FAIR computing infrastructure benefits significantly from the experience gained with the ALICE tier2 centre. A close collaboration between ALICE Offline and FAIR provides mutual advantages. The employment of a common Grid middleware as well as compatible simulation and analysis software frameworks ensures significant synergy effects.
A Community-Based Approach to Leading the Nation in Smart Energy Use
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
2013-12-31
Project Objectives The AEP Ohio gridSMART® Demonstration Project (Project) achieved the following objectives: • Built a secure, interoperable, and integrated smart grid infrastructure in northeast central Ohio that demonstrated the ability to maximize distribution system efficiency and reliability and consumer use of demand response programs that reduced energy consumption, peak demand, and fossil fuel emissions. • Actively attracted, educated, enlisted, and retained consumers in innovative business models that provided tools and information reducing consumption and peak demand. • Provided the U.S. Department of Energy (DOE) information to evaluate technologies and preferred smart grid business models to be extended nationally. Project Description Ohio Power Company (the surviving company of a merger with Columbus Southern Power Company), doing business as AEP Ohio (AEP Ohio), took a community-based approach and incorporated a full suite of advanced smart grid technologies for 110,000 consumers in an area selected for its concentration and diversity of distribution infrastructure and consumers. It was organized and aligned around: • Technology, implementation, and operations • Consumer and stakeholder acceptance • Data management and benefit assessment Combined, these functional areas served as the foundation of the Project to integrate commercially available products, innovative technologies, and new consumer products and services within a secure two-way communication network between the utility and consumers. The Project included Advanced Metering Infrastructure (AMI), Distribution Management System (DMS), Distribution Automation Circuit Reconfiguration (DACR), Volt VAR Optimization (VVO), and Consumer Programs (CP). These technologies were combined with two-way consumer communication and information sharing, demand response, dynamic pricing, and consumer products, such as plug-in electric vehicles and smart appliances.
In addition, the Project incorporated comprehensive cyber security capabilities, interoperability, and a data assessment that, with grid simulation capabilities, made the demonstration results an adaptable, integrated solution for AEP Ohio and the nation.
Thematic mapper-derived mineral distribution maps of Idaho, Nevada, and western Montana
Raines, Gary L.
2006-01-01
This report provides mineral distribution maps based on TM spectral information of minerals commonly associated with hydrothermal alteration in Nevada, Idaho, and western Montana. The product of the processing is provided as four ESRI GRID files with 30 m resolution by state. The UTM Zone 11 projection is used for Nevada (grid clsnv) and western Idaho (grid clsid); UTM Zone 12 is used for eastern Idaho and western Montana (grid clsid_mt). A fourth grid with a special Albers projection is used for the Headwaters project covering Idaho and western Montana (grid crccls_hs). Symbolization for all four grids is stored in the ESRI layer (LYR) files and color (CLR) files. The objectives of the analyses were to cover a large area very quickly and to provide data that could be used at a scale of 1:100,000 or smaller. Thus, the image processing was standardized for speed while still achieving the desired 1:100,000-scale level of detail. Consequently, some subtle features of mineralogy may be missed. The hydrothermal alteration data were not field checked to separate mineral occurrences due to hydrothermal alteration from those due to other natural occurrences. The data were evaluated by overlaying the results with 1:100,000-scale topographic maps to confirm correlation with known mineralized areas. The data were also tested in the Battle Mountain area of north-central Nevada by a weights-of-evidence correlation analysis with metallic mineral sites from the USGS Mineral Resources Data System and were found to have significant spatial correlation. On the basis of these analyses, the data are considered useful for regional studies at scales of 1:100,000.
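The weights-of-evidence test mentioned above scores the spatial association between an evidential pattern (here, the mapped alteration class) and known mineral sites; a positive weight W+ means deposits occur inside the pattern more often than chance. A hedged sketch of the standard formula with made-up counts (the numbers are not from the report):

```python
import math

def positive_weight(dep_in, dep_out, cells_in, cells_out):
    # W+ = ln[ P(pattern | deposit) / P(pattern | no deposit) ], estimated
    # from deposit counts and unit-cell counts inside/outside the pattern.
    p = dep_in / (dep_in + dep_out)  # P(pattern | deposit)
    q = (cells_in - dep_in) / ((cells_in - dep_in) + (cells_out - dep_out))
    return math.log(p / q)

w_plus = positive_weight(dep_in=40, dep_out=10, cells_in=1000, cells_out=9000)
# w_plus is about 2.1 here: a strongly positive spatial association
```

A significantly positive W+ is what "significant spatial correlation" with the Mineral Resources Data System sites amounts to in this framework.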
Changing from computing grid to knowledge grid in life-science grid.
Talukdar, Veera; Konar, Amit; Datta, Ayan; Choudhury, Anamika Roy
2009-09-01
Grid computing has great potential to become a standard cyberinfrastructure for the life sciences, which often require high-performance computing and large-scale data handling that exceed the computing capacity of a single institution. Grid computing applies the resources of many computers in a network to a single problem at the same time. It is useful for scientific problems that require a great number of computer processing cycles or access to large amounts of data. As biologists, we are constantly discovering millions of genes and genome features, which are assembled in libraries and distributed on computers around the world. This means that new, innovative methods must be developed that exploit the resources available for extensive calculations, for example grid computing. This survey reviews the latest grid technologies from the viewpoints of the computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput, real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for community formation for sharing tacit knowledge within a community. By extending the concept of the grid from computing grid to knowledge grid, it is possible to use a grid not only as sharable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community.
Alternatives to steel grid decks - phase II.
DOT National Transportation Integrated Search
2012-09-01
The primary objective of this research project was to investigate alternatives to open grid steel decks for movable bridges. Three alternative deck systems, including aluminum deck, ultra-high performance concrete (UHPC)-high-strength steel (HSS) dec...
NASA Astrophysics Data System (ADS)
Baudel, S.; Blanc, F.; Jolibois, T.; Rosmorduc, V.
2004-12-01
The Products and Services (P&S) department in the Space Oceanography Division at CLS is in charge of distributing and promoting altimetry and operational oceanography data. P&S is thus involved in the Aviso satellite altimetry project, in the Mercator ocean operational forecasting system, and in the European Godae/Mersea ocean portal. Aiming at standardisation and a common vision and management of all these ocean data, these projects led to the implementation of several OPeNDAP/LAS Internet servers. OPeNDAP allows the user to extract, via client software (such as IDL, Matlab or Ferret), only the data of interest, avoiding the download of full data files. OPeNDAP supports extraction by geographic area, time period, oceanic variable, and output format. LAS is an OPeNDAP data access web server whose special feature is the ability to unify, in a single view, access to multiple types of data from distributed data sources. The LAS can make requests to different remote OPeNDAP servers, which makes it possible to compute comparisons or statistics over several different data types. Aviso is the CNES/CLS service which has distributed altimetry products since 1993. The Aviso LAS distributes several Ssalto/Duacs altimetry products such as delayed-time and near-real-time mean sea level anomaly, absolute dynamic topography, absolute geostrophic velocities, gridded significant wave height and gridded wind speed modulus. Mercator-Ocean is a French operational oceanography centre which distributes its products by several means, among them LAS/OPeNDAP servers, as part of the Mercator Mersea-strand1 contribution. 3D ocean descriptions (temperature, salinity, current and other oceanic variables) of the North Atlantic and Mediterranean are available in real time and updated weekly. The LAS's ability to make requests to several remote data centres with the same OPeNDAP configuration is particularly well suited to the Mersea strand-1 context.
This European project (June 2003 to June 2004), sponsored by the European Commission, was the first experience of an integrated operational oceanography project. The objective was the assessment of several existing operational in situ and satellite monitoring and numerical forecasting systems for the future elaboration (Mersea Integrated Project, 2004-2008) of an integrated system able to deliver, operationally, information products (physical, chemical, biological) to end users in several domains related to environment, security and safety. Five forecasting ocean models with data assimilation, fed by operational in situ or satellite data centres, were intercompared. The main difficulty of this LAS implementation lay in the definition of ocean model metrics and the adoption of a common file format, which required the model teams to produce the same datasets in the same formats (NetCDF, COARDS/CF convention). Note that this was a pioneering approach and that it has been adopted by Godae standards (see F. Blanc's paper in this session). Building on the implementation of these web technologies and moving towards a more user-oriented service, the next step is the implementation of a Map Server, an open-source GIS server that will communicate with the OPeNDAP server. The Map Server will be able to manipulate raster and vector multidisciplinary remote data simultaneously. The aim is to construct a complete web-based oceanic data distribution service, and the projects in which we are involved allow us to progress towards that goal.
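The subsetting OPeNDAP offers works through constraint expressions: the client maps its geographic and temporal window onto index ranges, and only that hyperslab crosses the network. A minimal sketch of building such a constraint for a regular lat/lon grid (the variable name `sla` and the axes are illustrative; a real client appends the expression to the dataset URL):

```python
def opendap_constraint(var, lats, lons, lat_range, lon_range, t_range):
    # Convert a geographic box and a time-step window into index ranges,
    # then build an OPeNDAP-style hyperslab constraint such as
    # "sla[t0:t1][i0:i1][j0:j1]" so only that subset is transferred.
    def span(axis, lo, hi):
        idx = [i for i, v in enumerate(axis) if lo <= v <= hi]
        return idx[0], idx[-1]
    i0, i1 = span(lats, *lat_range)
    j0, j1 = span(lons, *lon_range)
    return f"{var}[{t_range[0]}:{t_range[1]}][{i0}:{i1}][{j0}:{j1}]"

lats = [float(v) for v in range(-90, 91)]  # 1-degree global grid
lons = [float(v) for v in range(0, 360)]
expr = opendap_constraint("sla", lats, lons, (30.0, 45.0), (300.0, 320.0), (0, 6))
# expr == "sla[0:6][120:135][300:320]": a North Atlantic box, one week of maps
```

This is what lets a Ferret or Matlab user pull a small regional box out of a global sea level anomaly archive instead of downloading the full files.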
Interoperability of GADU in using heterogeneous Grid resources for bioinformatics applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sulakhe, D.; Rodriguez, A.; Wilde, M.
2008-03-01
Bioinformatics tools used for efficient and computationally intensive analysis of genetic sequences require large-scale computational resources to accommodate the growing data. Grid computational resources such as the Open Science Grid and TeraGrid have proved useful for scientific discovery. The genome analysis and database update system (GADU) is a high-throughput computational system developed to automate the steps involved in accessing Grid resources for running bioinformatics applications. This paper describes the requirements for building an automated, scalable system such as GADU that can run jobs on different Grids. The paper describes the resource-independent configuration of GADU using the Pegasus-based virtual data system that makes high-throughput computational tools interoperable on heterogeneous Grid resources. The paper also highlights the features implemented to make GADU a gateway to computationally intensive bioinformatics applications on the Grid. The paper will not go into the details of the problems involved or the lessons learned in using individual Grid resources, as these have already been published in our paper on the genome analysis research environment (GNARE); it focuses primarily on the architecture that makes GADU resource independent and interoperable across heterogeneous Grid resources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basso, T.; DeBlasio, R.
The IEEE American National Standards smart grid publications and standards development projects IEEE 2030™, which addresses smart grid interoperability, and IEEE 1547™, which addresses interconnection of distributed resources with the grid, have made substantial progress since 2009. The IEEE 2030™ and 1547™ standards series focus on systems-level aspects and cover many of the technical integration issues involved in a mature smart grid. The status and highlights of these two IEEE series of standards, which are sponsored by IEEE Standards Coordinating Committee 21 (SCC21), are provided in this paper.
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne
2011-11-01
We present a coarse-grid projection (CGP) algorithm for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. Here, we investigate a particular CGP method for the vorticity-stream function formulation that uses the full weighting operation for mapping from fine to coarse grids, the third-order Runge-Kutta method for time stepping, and finite differences for the spatial discretization. After solving the Poisson equation on a coarsened grid, bilinear interpolation is used to obtain the fine data for subsequent time stepping on the full grid. We compute several benchmark flows: the Taylor-Green vortex, vortex pair merging, a double shear layer, decaying turbulence, and the Taylor-Green vortex on a distorted grid. In all cases we use either FFT-based or V-cycle multigrid linear-cost Poisson solvers. Reducing the number of degrees of freedom of the Poisson solver by powers of two accelerates these computations while, for the first level of coarsening, retaining the same level of accuracy in the fine-resolution vorticity field.
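The two grid-transfer operators named in the abstract, full-weighting restriction (fine to coarse) and bilinear interpolation (coarse to fine), can be sketched as follows. This is a generic stencil implementation on a (2N+1)x(2N+1) node grid, not the authors' code:

```python
import numpy as np

def full_weighting(f):
    # Fine (2N+1, 2N+1) -> coarse (N+1, N+1): inject boundary nodes, and
    # average interior nodes with the standard 1/4, 1/8, 1/16 stencil.
    c = f[::2, ::2].copy()
    c[1:-1, 1:-1] = (
        4 * f[2:-2:2, 2:-2:2]
        + 2 * (f[1:-3:2, 2:-2:2] + f[3:-1:2, 2:-2:2]
               + f[2:-2:2, 1:-3:2] + f[2:-2:2, 3:-1:2])
        + f[1:-3:2, 1:-3:2] + f[1:-3:2, 3:-1:2]
        + f[3:-1:2, 1:-3:2] + f[3:-1:2, 3:-1:2]
    ) / 16.0
    return c

def bilinear_prolong(c):
    # Coarse (N+1, N+1) -> fine (2N+1, 2N+1) by bilinear interpolation,
    # used after the coarse Poisson solve to recover fine-grid data.
    n = c.shape[0] - 1
    f = np.zeros((2 * n + 1, 2 * n + 1))
    f[::2, ::2] = c
    f[1::2, ::2] = 0.5 * (c[:-1, :] + c[1:, :])
    f[::2, 1::2] = 0.5 * (c[:, :-1] + c[:, 1:])
    f[1::2, 1::2] = 0.25 * (c[:-1, :-1] + c[:-1, 1:] + c[1:, :-1] + c[1:, 1:])
    return f

x = np.arange(9, dtype=float)
u = x[:, None] + 2.0 * x[None, :]  # a linear field on a 9x9 grid
roundtrip = bilinear_prolong(full_weighting(u))
# both operators are exact for linear fields, so roundtrip equals u
```

In the CGP method, the Poisson right-hand side is restricted, the coarse solution is computed, and the result is prolonged back; accuracy is retained at the first coarsening level because the smooth stream function is well represented on the coarse grid.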
Evaluating Connectivity between Marine Protected Areas Using CODAR High-Frequency Radar
2010-06-01
SMCA/SMR, (6) Big Creek SMCA/SMR, (7) Piedras Blancas SMCA/SMR, (8) Cambria SMCA/White Rock SMCA, (9) Pt. Buchon SMCA/SMR, and (10) Vandenberg SMR ... 52 grid-points, (7) Piedras Blancas 47 grid-points, (8) Cambria 20 grid-points, (9) Pt. Buchon 45 grid-points, and (10) the Vandenberg MPA had 62 ... Back-projected from (sorted north-to-south): Año Nuevo, Soquel Canyon, Portuguese Ledge, Point Lobos, Point Sur, Big Creek, Piedras Blancas ...
Improving collaboration between Primary Care Research Networks using Access Grid technology.
Nagykaldi, Zsolt; Fox, Chester; Gallo, Steve; Stone, Joseph; Fontaine, Patricia; Peterson, Kevin; Arvanitis, Theodoros
2008-01-01
Access Grid (AG) is an Internet2-driven, high performance audio-visual conferencing technology used worldwide by academic and government organisations to enhance communication, human interaction and group collaboration. AG technology is particularly promising for improving academic multi-centre research collaborations. This manuscript describes how the AG technology was utilised by the electronic Primary Care Research Network (ePCRN) that is part of the National Institutes of Health (NIH) Roadmap initiative to improve primary care research and collaboration among practice-based research networks (PBRNs) in the USA. It discusses the design, installation and use of AG implementations, potential future applications, barriers to adoption, and suggested solutions.
Can developing countries leapfrog the centralized electrification paradigm?
Levin, Todd; Thomas, Valerie M.
2016-02-04
Due to the rapidly decreasing costs of small renewable electricity generation systems, centralized power systems are no longer a necessary condition for universal access to modern energy services. Developing countries, where centralized electricity infrastructures are less developed, may be able to adopt these new technologies more quickly. We first review the costs of grid extension and distributed solar home systems (SHSs) as reported by a number of different studies. We then present a general analytic framework for analyzing the choice between extending the grid and implementing distributed solar home systems. Drawing upon reported grid expansion cost data for three specific regions, we demonstrate this framework by determining the electricity consumption levels at which the costs of provision through centralized and decentralized approaches are equivalent in these regions. We then calculate the SHS capital costs necessary for these technologies to provide each of the five tiers of energy access defined by the United Nations Sustainable Energy for All initiative. Our results suggest that solar home systems can play an important role in achieving universal access to basic energy services. The extent of this role depends on three primary factors: SHS costs, grid expansion costs, and centralized generation costs. Given current technology costs, centralized systems will still be required to enable higher levels of consumption; however, cost reduction trends have the potential to disrupt this paradigm. Furthermore, by looking ahead rather than replicating older infrastructure styles, developing countries can leapfrog to a more distributed electricity service model.
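The equivalence point in the framework can be illustrated with a toy cost model: grid extension carries a fixed annualized connection cost plus a low per-kWh cost, while an SHS sized to demand scales roughly linearly with consumption. A sketch with invented figures (not the study's data):

```python
def breakeven_kwh(grid_fixed_annual, grid_per_kwh, shs_per_kwh):
    # Annual consumption E* where annualized costs are equal:
    #   grid_fixed_annual + grid_per_kwh * E  =  shs_per_kwh * E
    assert shs_per_kwh > grid_per_kwh, "otherwise the grid never breaks even"
    return grid_fixed_annual / (shs_per_kwh - grid_per_kwh)

# Illustrative assumptions: $120/yr annualized connection cost, $0.10/kWh
# grid energy, $0.60/kWh effective SHS cost.
e_star = breakeven_kwh(120.0, 0.10, 0.60)  # 240.0 kWh/yr
# Below e_star the SHS is cheaper; above it, grid extension wins, which is
# why higher consumption tiers still favour centralized systems.
```

Falling SHS costs lower `shs_per_kwh` and push the break-even consumption upward, which is the disruption trend the abstract points to.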
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudgins, Andrew P.; Carrillo, Ismael M.; Jin, Xin
This document is the final report of a two-year development, test, and demonstration project, 'Cohesive Application of Standards-Based Connected Devices to Enable Clean Energy Technologies.' The project was part of the National Renewable Energy Laboratory's (NREL's) Integrated Network Testbed for Energy Grid Research and Technology (INTEGRATE) initiative hosted at the Energy Systems Integration Facility (ESIF). This project demonstrated techniques to control distribution grid events using the coordination of traditional distribution grid devices with high-penetration renewable resources and demand response. Using standard communication protocols and semantic standards, the project examined the use cases of high/low distribution voltage, requests for volt-ampere reactive (VAR) power support, and transactive energy strategies using Volttron. Open-source software, written by EPRI to control distributed energy resources (DER) and demand response (DR), was used by an advanced distribution management system (ADMS) to abstract the reporting resources into a collection of capabilities rather than needing to know specific resource types. This architecture allows for scaling both horizontally and vertically. Several new technologies were developed and tested. Messages from the ADMS based on the common information model (CIM) were developed to control the DER and DR management systems. The OpenADR standard was used to help manage grid events by turning loads off and on. Volttron technology was used to simulate a homeowner choosing the price at which to enter the demand response market. Finally, the ADMS used newly developed algorithms to coordinate these resources with a capacitor bank and voltage regulator to respond to grid events.
Distributed Information System for Dynamic Ocean Data in Indonesia
NASA Astrophysics Data System (ADS)
Romero, Laia; Sala, Joan; Polo, Isabel; Cases, Oscar; López, Alejandro; Jolibois, Tony; Carbou, Jérome
2014-05-01
Information systems are widely used to enable access to scientific data by different user communities. The MyOcean information system is a good example of such applications in Europe. The present work describes a distributed information system for Ocean Numerical Model (ONM) data in the scope of the INDESO project, a project focused on Infrastructure Development of Space Oceanography in Indonesia. INDESO, as part of the Blue Revolution policy conducted by the Indonesian government for the sustainable development of fisheries and aquaculture, presents challenging requirements in terms of service performance, reliability, security and overall usability. Following state-of-the-art technologies for scientific data networks, this robust information system provides a high level of interoperability of services to discover, view and access INDESO dynamic ONM scientific data. The entire system is automatically updated four times a day, including dataset metadata, taking into account every new file available in the data repositories. The INDESO system architecture has been designed in great part around the extension and integration of open-source, flexible and mature technologies. It involves three separate modules: web portal, dissemination gateway, and user administration. Supporting different gridded and non-gridded data, the INDESO information system features search-based data discovery, data access by temporal and spatial subset extraction, direct download and FTP, and multiple-layer visualization of datasets. A fine-grained authorization system has been designed and applied throughout all components, in order to enable service authorization at the dataset level, according to the different user profiles stated in the data policy. Finally, a web portal has been developed as the single entry point and standardized interface to all data services (discover, view, and access).
Apache SOLR has been implemented as the search server, allowing faceted browsing among ocean data products and the connection to an external catalogue of metadata records. ncWMS and Godiva2 have been the basis of the viewing server and client technologies developed. MOTU has been used for data subsetting and intelligent management of data queues, and has allowed the deployment of a centralised download interface applicable to all ONM products. Unidata's Thredds server has been employed to provide file metadata and remote access to ONM data. CAS has been used as the single sign-on protocol for all data services. The user management application developed has been based on GOSA2. Joomla and Bootstrap have been the technologies used for the web portal, compatible with mobile phone and tablet devices. The resulting INDESO information system is scalable and easy to use, operate and maintain. This will facilitate the extensive use of ocean numerical model data by the scientific community in Indonesia. Built mostly from open-source solutions, the system is able to meet strict operational requirements and carry out complex functions. It is feasible to adapt this architecture to different static and dynamic oceanographic data sources and large data volumes, in an accessible, fast, and comprehensive manner.
Three-dimensional elliptic grid generation for an F-16
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.
1988-01-01
A case history depicting the effort to generate a computational grid for the simulation of transonic flow about an F-16 aircraft at realistic flight conditions is presented. The flow solver for which this grid is designed is a zonal one, using the Reynolds averaged Navier-Stokes equations near the surface of the aircraft, and the Euler equations in regions removed from the aircraft. A body conforming global grid, suitable for the Euler equation, is first generated using 3-D Poisson equations having inhomogeneous terms modeled after the 2-D GRAPE code. Regions of the global grid are then designated for zonal refinement as appropriate to accurately model the flow physics. Grid spacing suitable for solution of the Navier-Stokes equations is generated in the refinement zones by simple subdivision of the given coarse grid intervals. That grid generation project is described, with particular emphasis on the global coarse grid.
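The elliptic generation step has a simple core that can be shown in two dimensions. The sketch below solves Laplace equations for the grid coordinates by Jacobi iteration, a stripped-down cousin of the Poisson (GRAPE-style) approach described above; the inhomogeneous control terms that cluster points near the body, and the 3-D extension, are omitted:

```python
# Minimal 2-D Laplace grid smoothing: a simplified relative of the
# Poisson-based elliptic generation used for the F-16 grid, without
# the GRAPE-style inhomogeneous terms that control point spacing.
import numpy as np

ni, nj = 21, 11
# Start from an intentionally distorted algebraic grid on the unit square:
# the eta coordinate is squared, bunching points near one side.
x, y = np.meshgrid(np.linspace(0, 1, ni), np.linspace(0, 1, nj) ** 2,
                   indexing="ij")

for _ in range(500):                      # Jacobi sweeps over interior points
    x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] +
                            x[1:-1, 2:] + x[1:-1, :-2])
    y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] +
                            y[1:-1, 2:] + y[1:-1, :-2])

# Boundary points stay fixed; interior spacing relaxes toward smoothness.
print("center point:", x[10, 5], y[10, 5])
```

In the real code the Laplace right-hand sides become Poisson source terms chosen to enforce spacing and orthogonality at the body surface.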
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dean Schneider; Michael Martin; Renee Berry
2012-07-31
This report describes the results of the final implementation and testing of a hybrid micro-grid system designed for off-grid applications in underserved Colonias along the Texas/Mexico border. The project is a federally funded follow-on to a project funded by the Texas State Energy Conservation Office in 2007 that developed and demonstrated initial prototype hybrid generation systems consisting of a proprietary energy storage technology, high-efficiency charging and inverting systems, photovoltaic cells, a wind turbine, and bio-diesel generators. This combination of technologies provided continuous power to dwellings that are not grid connected, with a significant savings in fuel by allowing power generation at highly efficient operating conditions. The objective of this project was to complete development of the prototype systems and to finalize the engineering design; to install and operate the systems in the intended environment; and to evaluate the technical and economic effectiveness of the systems. The objectives of this project were met. This report documents the final design that was achieved and includes the engineering design documents for the system. The system operated as designed, with the system availability limited by maintenance requirements of the diesel gensets. Overall, the system achieved a 96% availability over the operation of the three deployed systems. Capital costs of the systems were dependent upon both the size of the generation system and the scope of the distribution grid, but, in this instance, the systems averaged $0.72/kWh delivered. This cost would decrease significantly as utilization of the system increased. The system with the highest utilization achieved an amortized capital cost of $0.34/kWh produced. The average amortized fuel and maintenance cost was $0.48/kWh, which depended upon the amount of maintenance required by the diesel generator.
Economically, the system is difficult to justify as an alternative to grid power. However, the operational costs are reasonable if grid power is unavailable, e.g. in a remote area or in a disaster recovery situation. In fact, avoided fuel costs for the smaller of the systems in use during this project would pay back the capital costs of that system in 2.3 years, far short of the effective system life.
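The two cost metrics the report quotes, amortized capital cost per kWh and simple payback from avoided fuel, are easy to reproduce in outline. The capital cost, discount rate, lifetime, and delivered energy below are hypothetical inputs chosen only to show the arithmetic; they are not the report's figures:

```python
# Back-of-envelope amortization of micro-grid capital into $/kWh and a
# simple payback calculation. All inputs are hypothetical illustrations,
# not the cost inputs from this report.

def capital_cost_per_kwh(capex, kwh_per_day, rate=0.07, years=10):
    """Annualized capital cost divided by annual energy delivered."""
    crf = rate / (1 - (1 + rate) ** -years)   # capital recovery factor
    return capex * crf / (kwh_per_day * 365)

def simple_payback_years(capex, avoided_fuel_cost_per_year):
    """Years for avoided fuel purchases to repay the capital cost."""
    return capex / avoided_fuel_cost_per_year

print(round(capital_cost_per_kwh(50000, 60), 2), "$/kWh")
print(round(simple_payback_years(50000, 21700), 1), "years")
```

Note how strongly the $/kWh figure depends on utilization: doubling `kwh_per_day` halves the amortized capital cost, which is the report's point about costs falling as utilization rises.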
An Extensible Information Grid for Risk Management
NASA Technical Reports Server (NTRS)
Maluf, David A.; Bell, David G.
2003-01-01
This paper describes recent work on developing an extensible information grid for risk management at NASA - a RISK INFORMATION GRID. This grid is being developed by integrating information grid technology with risk management processes for a variety of risk related applications. To date, RISK GRID applications are being developed for three main NASA processes: risk management - a closed-loop iterative process for explicit risk management, program/project management - a proactive process that includes risk management, and mishap management - a feedback loop for learning from historical risks that escaped other processes. This is enabled through an architecture involving an extensible database, structuring information with XML, schemaless mapping of XML, and secure server-mediated communication using standard protocols.
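The "schemaless mapping of XML" mentioned above can be illustrated by flattening an arbitrary XML record into (path, value) rows that a generic table can store without a fixed schema. The risk record and paths below are invented for illustration, not taken from the NASA RISK GRID:

```python
# Tiny illustration of schemaless XML mapping: any record, whatever its
# element structure, becomes (path, value) rows a generic table can hold.
# The sample record is hypothetical, not from the RISK INFORMATION GRID.
import xml.etree.ElementTree as ET

doc = """<risk id="R-17">
  <title>Valve contamination</title>
  <likelihood>3</likelihood>
  <consequence><safety>4</safety><cost>2</cost></consequence>
</risk>"""

def flatten(elem, path=""):
    """Recursively map an element tree to (path, value) pairs."""
    path = f"{path}/{elem.tag}"
    rows = [(f"{path}/@{k}", v) for k, v in elem.attrib.items()]
    if elem.text and elem.text.strip():
        rows.append((path, elem.text.strip()))
    for child in elem:
        rows.extend(flatten(child, path))
    return rows

rows = flatten(ET.fromstring(doc))
```

Because the paths are data rather than schema, new risk attributes can appear in documents without any database migration, which is the extensibility property the paper emphasizes.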
Illinois, Indiana, and Ohio Magnetic and Gravity Maps and Data: A Website for Distribution of Data
Daniels, David L.; Kucks, Robert P.; Hill, Patricia L.
2008-01-01
This web site gives the results of a USGS project to acquire the best available, public-domain aeromagnetic and gravity data in the United States and merge these data into uniform, composite grids for each state. The results for three states, Illinois, Indiana, and Ohio, are presented here in one site. Files of aeromagnetic and gravity grids and images are available for these states for downloading. In Illinois, Indiana, and Ohio, 19 magnetic surveys have been knit together to form a single digital grid and map, and a complete Bouguer gravity anomaly grid and map was generated from 128,227 gravity station measurements in and adjacent to Illinois, Indiana, and Ohio. In addition, a map shows the location of the aeromagnetic surveys, color-coded by survey flight-line spacing. This project was supported by the Mineral Resources Program of the USGS.
Technical Analysis Feasibility Study on Smart Microgrid System in Sekolah Tinggi Teknik PLN
NASA Astrophysics Data System (ADS)
Suyanto, Heri
2018-02-01
Nowadays, the application of new and renewable energy as a main resource for power plants has greatly increased. High penetration of renewable energy into the grid will influence the quality and reliability of the electricity system, due to the intermittent characteristic of new and renewable energy resources. Smart grid or microgrid technology has the ability to deal with this intermittency, especially when renewable energy resources are integrated into the grid at large scale, and so can improve the reliability and efficiency of the grid. We plan to implement a smart microgrid system at Sekolah Tinggi Teknik PLN as a pilot project. Before the pilot project starts, a feasibility study must be conducted. In this feasibility study, the renewable energy resources and the load characteristics at the site are measured, and the technical aspects are then analyzed. This paper presents the analysis of this feasibility study.
IGI (the Italian Grid initiative) and its impact on the Astrophysics community
NASA Astrophysics Data System (ADS)
Pasian, F.; Vuerli, C.; Taffoni, G.
IGI - the Association for the Italian Grid Infrastructure - has been established as a consortium of 14 different national institutions to provide long-term sustainability to the Italian Grid. Its formal predecessor, the Grid.it project, came to a close in 2006; to extend the benefits of this project, IGI has taken over and acts as the national coordinator for the different sectors of the Italian e-Infrastructure present in EGEE. IGI plans to support activities in a vast range of scientific disciplines - e.g. Physics, Astrophysics, Biology, Health, Chemistry, Geophysics, Economy, Finance - and possible extensions to other sectors such as Civil Protection, e-Learning, and dissemination in Universities and secondary schools. Among these, the Astrophysics community is active as a user, by porting applications of various kinds, but also as a resource provider in terms of computing power and storage, and as a middleware developer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karali, Nihan; Flego, Gianluca; Yu, Jiancheng
Given the substantial investments required, there has been keen interest in conducting benefits analysis, i.e., quantifying, and often monetizing, the performance of smart grid technologies. In this study, we compare two different approaches: (1) the Electric Power Research Institute (EPRI) benefits analysis method and its adaptation to European contexts by the European Commission's Joint Research Centre (JRC), and (2) the Analytic Hierarchy Process (AHP) and fuzzy logic decision-making method. These are applied to three demonstration projects executed in three different countries: the U.S., China, and Italy, considering uncertainty in each case. This work is conducted under the U.S.-China Climate Change Working Group on smart grid, with an additional major contribution by the European Commission. The following is a brief description of the three demonstration projects.
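The AHP half of the comparison has a compact core step: deriving criterion weights from a pairwise comparison matrix. The sketch below uses the common geometric-mean approximation to the principal eigenvector; the three criteria and the judgment values are invented purely to illustrate the method, and the fuzzy-logic extension is not shown:

```python
# Minimal AHP step: criterion weights from a pairwise comparison matrix,
# via the geometric-mean approximation. Criteria and judgments are
# hypothetical, not from the three demonstration projects.
import numpy as np

criteria = ["reliability", "emissions", "cost"]
# A[i, j] = how much more important criterion i is than j (Saaty 1-9 scale);
# the matrix is reciprocal: A[j, i] = 1 / A[i, j].
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

weights = np.prod(A, axis=1) ** (1 / A.shape[0])   # row geometric means
weights /= weights.sum()                            # normalize to sum to 1
print(dict(zip(criteria, np.round(weights, 3))))
```

In a full AHP benefits analysis the same procedure is applied one level down to score each project against each criterion, and the weighted scores are then aggregated.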
NASA Astrophysics Data System (ADS)
Ramage, K.; Desbois, M.; Eymard, L.
2004-12-01
The African Monsoon Multidisciplinary Analysis (AMMA) project is a French initiative which aims at identifying and analysing in detail the multidisciplinary and multi-scale processes that lead to a better understanding of the physical mechanisms linked to the African Monsoon. The main components of the African Monsoon are: Atmospheric Dynamics, the Continental Water Cycle, Atmospheric Chemistry, and Oceanic and Continental Surface Conditions. Satellites contribute to various objectives of the project, both for process analysis and for large-scale, long-term studies: some series of satellites (METEOSAT, NOAA, etc.) have been flown for more than 20 years, ensuring good-quality monitoring of some of the West African atmosphere and surface characteristics. Moreover, several recent missions and several planned projects will strongly improve and complement this survey. The AMMA project offers an opportunity to develop the exploitation of satellite data and to foster collaboration between specialist and non-specialist users. For this purpose, databases are being developed to collect all past and future satellite data related to the African Monsoon. It will then be possible to compare different types of data at different resolutions, and to validate satellite data with in situ measurements or numerical simulations. The AMMA-SAT database's main goal is to offer easy access to satellite data to the AMMA scientific community. The database contains geophysical products estimated from operational or research algorithms and covering the different components of the AMMA project. Nevertheless, the choice has been made to group data by pertinent scale rather than by theme. For this purpose, five regions of interest were defined to extract the data: an area covering the tropical Atlantic and Africa for large-scale studies, an area covering West Africa for mesoscale studies, and three local areas surrounding sites of in situ observations.
Within each of these regions satellite data are projected on a regular grid with a spatial resolution compatible with the spatial variability of the geophysical parameter. Data are stored in NetCDF files to facilitate their use. Satellite products can be selected using several spatial and temporal criteria and ordered through a web interface developed in PHP-MySQL. More common means of access are also available such as direct FTP or NFS access for identified users. A Live Access Server allows quick visualization of the data. A meta-data catalogue based on the Directory Interchange Format manages the documentation of each satellite product. The database is currently under development, but some products are already available. The database will be complete by the end of 2005.
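The projection onto a regular grid can be sketched as a cell-averaging (binning) step over scattered satellite samples. The grid bounds, resolution, and the synthetic field below are illustrative assumptions, not AMMA-SAT's actual product grids, and writing the result to NetCDF is omitted:

```python
# Sketch of projecting scattered (swath-like) satellite samples onto a
# regular lat/lon grid by cell averaging, the kind of regridding done
# before writing products to NetCDF. Bounds and resolution are invented.
import numpy as np

def bin_to_grid(lons, lats, values, lon0=-35.0, lat0=-10.0,
                nlon=70, nlat=50, res=1.0):
    """Average all samples falling into each cell of a regular grid."""
    grid = np.zeros((nlat, nlon))
    count = np.zeros((nlat, nlon))
    i = ((lats - lat0) / res).astype(int)     # row index of each sample
    j = ((lons - lon0) / res).astype(int)     # column index of each sample
    ok = (i >= 0) & (i < nlat) & (j >= 0) & (j < nlon)
    np.add.at(grid, (i[ok], j[ok]), values[ok])   # unbuffered accumulation
    np.add.at(count, (i[ok], j[ok]), 1)
    return np.where(count > 0, grid / np.maximum(count, 1), np.nan)

rng = np.random.default_rng(0)
lons = rng.uniform(-35, 35, 1000)
lats = rng.uniform(-10, 40, 1000)
sst = 25 + 0.1 * lats                         # synthetic geophysical field
gridded = bin_to_grid(lons, lats, sst)
```

Cells with no samples are left as NaN, which maps naturally onto a NetCDF fill value when the gridded product is written out.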
Applications of Optimal Building Energy System Selection and Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marnay, Chris; Stadler, Michael; Siddiqui, Afzal
2011-04-01
Berkeley Lab has been developing the Distributed Energy Resources Customer Adoption Model (DER-CAM) for several years. Given load curves for energy-service requirements in a building microgrid (µgrid), fuel costs and other economic inputs, and a menu of available technologies, DER-CAM finds the optimum equipment fleet and its optimum operating schedule using a mixed-integer linear programming approach. This capability is being applied using a software-as-a-service (SaaS) model. Optimisation problems are set up on a Berkeley Lab server and clients can execute their jobs as needed, typically daily. The evolution of this approach is demonstrated by describing three ongoing projects. The first is a public-access web site focused on solar photovoltaic generation and battery viability at large commercial and industrial customer sites. The second is a building CO2 emissions reduction operations problem for a University of California, Davis student dining hall, for which potential investments are also considered. The third is both a battery selection problem and a rolling operating schedule problem for a large county jail. Together these examples show that optimization of building µgrid design and operation can be effectively achieved using SaaS.
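The shape of the DER-CAM decision can be shown with a toy version: pick the equipment mix minimizing annualized capital plus operating cost while meeting a peak load. DER-CAM solves this as a full mixed-integer linear program with an operating schedule; the brute-force enumeration and crude cheapest-first dispatch below, with invented technology data, only illustrate the structure:

```python
# Toy technology-fleet selection in the spirit of DER-CAM. The technology
# menu, costs, and the flat 30% capacity factor are invented; DER-CAM
# itself solves a proper MILP with hourly dispatch.
from itertools import product

techs = {  # name: (capacity kW, annualized capex $, operating $/kWh)
    "pv":      (50,  4000, 0.00),
    "battery": (25,  2500, 0.02),
    "genset":  (100, 3000, 0.25),
}
peak_kw, annual_kwh = 120, 300_000

best = None
for counts in product(range(4), repeat=len(techs)):   # 0-3 units of each
    cap = sum(n * techs[t][0] for n, t in zip(counts, techs))
    if cap < peak_kw:
        continue                      # infeasible: cannot cover the peak
    capex = sum(n * techs[t][1] for n, t in zip(counts, techs))
    # Cheapest-first dispatch of the annual energy across chosen units.
    opex, remaining = 0.0, annual_kwh
    for t, n in sorted(zip(techs, counts), key=lambda p: techs[p[0]][2]):
        served = min(remaining, n * techs[t][0] * 8760 * 0.3)  # 30% cap. factor
        opex += served * techs[t][2]
        remaining -= served
    if remaining > 0:
        continue                      # cannot serve the annual energy
    cost = capex + opex
    if best is None or cost < best[0]:
        best = (cost, counts)

print(best)
```

The real model replaces the enumeration with integer decision variables and the dispatch heuristic with hourly energy-balance constraints, which is why a MILP solver is needed.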
NASA Astrophysics Data System (ADS)
Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander
2012-02-01
Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast-Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g., confinement, interface adsorption) in which small spatial regions might require finer resolution than most of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims at marrying the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
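The pseudo-spectral method being contrasted with AMR here can be shown in one dimension. The modified diffusion equation dq/ds = ∇²q − w(r)q is advanced with the standard split-operator scheme, applying the potential in real space and the Laplacian in Fourier space; the field w below is an illustrative stand-in, not a self-consistent SCFT field:

```python
# One-dimensional pseudo-spectral integration of the SCFT modified
# diffusion equation  dq/ds = lap(q) - w(x) q  with the split-operator
# scheme. The field w is illustrative, not a converged SCFT field.
import numpy as np

n, L, ds = 64, 10.0, 0.01
x = np.linspace(0, L, n, endpoint=False)          # periodic uniform grid
k2 = (2 * np.pi * np.fft.fftfreq(n, d=L / n)) ** 2
w = np.cos(2 * np.pi * x / L)                     # smooth test potential
q = np.ones(n)                                    # propagator q(x, s=0)

def step(q):
    q = np.exp(-0.5 * ds * w) * q                 # half step: potential
    q = np.fft.ifft(np.exp(-ds * k2) * np.fft.fft(q)).real  # full diffusion
    return np.exp(-0.5 * ds * w) * q              # half step: potential

for _ in range(100):                              # march from s=0 to s=1
    q = step(q)
```

Each step costs two FFTs on a uniform grid, which is exactly the efficiency that is hard to retain once the grid becomes locally refined.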
Off-Grid Electricity Access and its Impact on Micro-Enterprises: Evidence from Rural Uganda
NASA Astrophysics Data System (ADS)
Muhoro, Peter N.
The history of development shows convincingly that no country has substantially reduced poverty without massively increasing the use of electricity. The development of micro-enterprises in rural areas of Uganda is linked with increased access to and use of electricity services. In this study, I combine quantitative and qualitative methods, including informal surveys, intra-business energy allocation studies and historical analysis, to analyze off-grid electricity access among micro-enterprises in rural western Uganda. I explore the linkages between off-grid electricity access and the influence it has on micro-enterprises. Data is obtained from 56 micro-enterprises located in 11 village-towns within 3 districts in Uganda. In studying the micro-enterprises, the focus is on the services that are provided by electricity from modern energy carriers. The type of equipment used, forms of transportation, technical support, the level of understanding and education of the entrepreneur, financing for energy equipment, and the role of donors are discussed in this thesis. Qualitative methods are used to allow new insights and prioritization of concepts to emerge from the field rather than from theory. Micro-enterprises in rural Uganda create income for the poor; they are resources for poverty reduction. With price adjustments, it becomes possible for those who live below the poverty line, nominally less than $1 a day, to afford the products and services, thereby mitigating the vicious cycle of poverty. Energy consumption among the micro-enterprises averages 0.13 kWh/day. The cost of accessing this amount of electricity accounts for about 50% of total revenue. I find that the "practices" used in off-grid electricity access lead to situations where the entrepreneurs have to evaluate pricing and output of products and services to generate higher profits. Such numbers indicate the need for appropriate technologies and supportive policies to be implemented.
The data indicates that without subsidies, credit-based sales and better financing options, it is unlikely that access to electricity will increase beyond the levels established in the existing cash market. Concerns about equity and other social issues indicate a need for careful attention to the implications of policy choices and the processes that influence the use of technology.
NASA Astrophysics Data System (ADS)
Feng, Jun-shu; Jin, Yan-ming; Hao, Wei-hua
2017-01-01
Based on modelling the environmental influence index of power transmission and transformation projects and the energy-saving and emission-reducing index of the source-grid-load of the power system, this paper establishes an objective decision model for power grid environmental protection, with constraints that power grid environmental protection objectives be legal and economical, and considering both positive and negative influences of the grid on the environment over the whole grid life cycle. This model can be used to guide the programming of power grid environmental protection. A numerical simulation of the objective decision model for Jiangsu province's power grid environmental protection has been run, and the results show that the maximum goal of energy-saving and emission-reducing benefits is reached first as investment increases, followed by the minimum goal of environmental influence.
A Grid Infrastructure for Supporting Space-based Science Operations
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)
2002-01-01
Emerging technologies for computational grid infrastructures have the potential to revolutionize the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing at less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.
Methods and apparatus of analyzing electrical power grid data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Critchlow, Terence J.; Gibson, Tara D.
Apparatus and methods of processing large-scale data regarding an electrical power grid are described. According to one aspect, a method of processing large-scale data regarding an electrical power grid includes accessing a large-scale data set comprising information regarding an electrical power grid; processing data of the large-scale data set to identify a filter which is configured to remove erroneous data from the large-scale data set; using the filter, removing erroneous data from the large-scale data set; and, after the removing, processing data of the large-scale data set to identify an event detector which is configured to identify events of interest in the large-scale data set.
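The two-stage pipeline claimed here, filter erroneous samples first, then run an event detector on the cleaned stream, can be sketched on synthetic data. The range-check filter, rolling-median detector, thresholds, and the synthetic frequency trace below are all illustrative choices, not the patent's specific filter or detector:

```python
# Sketch of the filter-then-detect pipeline on a synthetic PMU-like
# frequency trace. The specific filter, detector, and thresholds are
# illustrative, not those of the patented method.
import numpy as np

rng = np.random.default_rng(1)
freq = 60.0 + 0.005 * rng.standard_normal(2000)   # nominal 60 Hz + noise
freq[500] = 0.0                                   # a dropout (bad sample)
freq[1200:1260] -= 0.05                           # a generation-loss event

def remove_erroneous(x, lo=59.0, hi=61.0):
    """Replace physically impossible readings with the previous good value."""
    x = x.copy()
    for i in np.flatnonzero((x < lo) | (x > hi)):
        x[i] = x[i - 1]
    return x

def detect_events(x, window=50, thresh=0.03):
    """Flag samples deviating from a trailing median by more than thresh."""
    med = np.array([np.median(x[max(0, i - window):i + 1])
                    for i in range(len(x))])
    return np.flatnonzero(np.abs(x - med) > thresh)

clean = remove_erroneous(freq)
events = detect_events(clean)
```

Running the detector on the raw trace instead would flag the dropout at index 500 as an "event", which is precisely why the claim orders the filtering before the detection.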
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob; Yan, Jerry C. (Technical Monitor)
2000-01-01
The creation of parameter study suites has recently become a more challenging problem as the parameter studies have now become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are now seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers great resource opportunity but at the expense of great difficulty of use. We present an approach to this problem which stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.
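A multi-tiered parameter study of the kind described has a simple computational skeleton: the cross product of parameter values defines the first-stage runs, and a later stage is seeded from earlier results. Everything below (the parameter names, the stand-in model, the refinement rule) is an invented illustration of that pattern, not the paper's tool:

```python
# Minimal two-tier parameter study: a cross-product sweep, then a
# refinement stage around the best first-stage result. Parameters and
# the stand-in "solver" are hypothetical illustrations.
from itertools import product

stage1 = {"mach": [0.6, 0.8], "alpha": [0.0, 2.0, 4.0]}
runs = [dict(zip(stage1, vals)) for vals in product(*stage1.values())]

def cheap_model(p):                    # stand-in for a real solver run
    return (p["mach"] - 0.8) ** 2 + (p["alpha"] - 2.0) ** 2

# Second tier: refine only around the best first-stage result.
best = min(runs, key=cheap_model)
stage2 = [{**best, "alpha": best["alpha"] + d} for d in (-0.5, 0.0, 0.5)]
print(len(runs), best, len(stage2))
```

In the grid setting each entry of `runs` (and later `stage2`) becomes an independent job submission, which is what makes the bookkeeping, and hence the visual tooling, worthwhile.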
SEE-GRID eInfrastructure for Regional eScience
NASA Astrophysics Data System (ADS)
Prnjat, Ognjen; Balaz, Antun; Vudragovic, Dusan; Liabotis, Ioannis; Sener, Cevat; Marovic, Branko; Kozlovszky, Miklos; Neagu, Gabriel
In the past 6 years, a number of targeted initiatives, funded by the European Commission via its information society and RTD programmes and by Greek infrastructure development actions, have articulated successful regional development actions in South East Europe that can be used as a role model for other international developments. The SEEREN (South-East European Research and Education Networking initiative) project, through its two phases, established the SEE segment of the pan-European GÉANT network and successfully connected the research and scientific communities in the region. Currently, the SEE-LIGHT project is working towards establishing a dark-fiber backbone that will interconnect most national Research and Education networks in the region. On the distributed computing and storage provisioning (i.e., Grid) plane, the SEE-GRID (South-East European GRID e-Infrastructure Development) project, similarly through its two phases, has established a strong human network in the area of scientific computing, has set up a powerful regional Grid infrastructure, and has attracted a number of applications from different fields from countries throughout South-East Europe. The current SEE-GRID-SCI project, ending in April 2010, empowers the regional user communities from the fields of meteorology, seismology and environmental protection in the common use and sharing of the regional e-Infrastructure. Current technical initiatives in formulation are focusing on a set of coordinated actions in the area of HPC and on application fields making use of HPC initiatives. Finally, the current SEERA-EI project brings together policy makers - programme managers from 10 countries in the region. The project aims to establish a communication platform between programme managers, pave the way towards a common e-Infrastructure strategy and vision, and implement concrete actions for common funding of electronic infrastructures at the regional level.
The regional vision of establishing an e-Infrastructure compatible with European developments, and empowering the scientists in the region to participate equally in the use of pan-European infrastructures, is materializing through the above initiatives. This model has a number of concrete operational and organizational guidelines which can be adapted to help e-Infrastructure developments in other world regions. In this paper we review the most important developments and contributions by the SEE-GRID-SCI project.
NASA Technical Reports Server (NTRS)
Evans, R. W.; Brinza, D. E.
2014-01-01
Grid2 is a program that utilizes the Galileo Interim Radiation Electron model 2 (GIRE2) Jovian radiation model to compute fluences and doses for Jupiter missions. (Note: the iterations of these two codes have been GIRE and GIRE2; likewise Grid and Grid2.) While GIRE2 is an important improvement over the original GIRE radiation model, the GIRE2 model can take a day or more to compute these quantities for a complete mission. Grid2 fits the results of the detailed GIRE2 code with a set of grids in local time and position, thereby greatly speeding up the execution of the model: minutes as opposed to days. The Grid2 model covers the time period from 1971 to 2050 and distances of 1.03 to 30 Jovian radii (Rj). It is available as a direct-access database through a FORTRAN interface program. The new database is only slightly larger than the original grid version: 1.5 gigabytes (GB) versus 1.2 GB.
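The precompute-then-interpolate idea behind Grid2 is easy to show in miniature: run the expensive model once on a grid of inputs, then answer queries by table interpolation. The placeholder function below is not GIRE2, and the one-dimensional table stands in for Grid2's multidimensional grids in local time and position:

```python
# The Grid2 idea in miniature: tabulate an expensive model once, then
# answer queries by interpolation. expensive_model is a stand-in, not
# the GIRE2 physics code.
import numpy as np

def expensive_model(r):                  # placeholder for the slow model
    return np.exp(-r / 5.0) / r**2

r_grid = np.linspace(1.03, 30.0, 200)    # the distance range Grid2 covers
table = expensive_model(r_grid)          # one-time precomputation

def fast_lookup(r):
    """Linear interpolation into the precomputed table."""
    return np.interp(r, r_grid, table)

r = 7.3
err = abs(fast_lookup(r) - expensive_model(r)) / expensive_model(r)
```

The tradeoff is exactly the one the abstract quantifies: a modest, fixed storage cost (here a 200-entry table; for Grid2, 1.5 GB) buys orders-of-magnitude faster evaluation at small interpolation error.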
FERC examines transmission access pricing policies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burkhart, L.A.
1993-02-15
The Federal Energy Regulatory Commission (FERC) has approved two orders dealing with transmission pricing issues that evolved from its March 1992 Pennsylvania Electric Co. (Penelec) decision on cost recovery of transmission plant expansion. Commissioner Betsy Moler said the cases represented the first time the FERC has used incremental rates for wheeling transactions. In the first new order, Public Service Electric & Gas Co. proposed charging for third-party transmission services to recover the sum of its standard embedded-cost rate and the net incremental cost of making upgrades to its integrated transmission system. Public Service sought to provide service to EEA I Limited, which proposed building and operating a cogeneration facility at a United States Gypsum Co. plant in New Jersey. Consolidated Edison Company of New York Inc. had agreed to purchase the project's output. In rejecting Public Service's proposal, the FERC ruled the utility must charge a transmission rate that is the higher of either its embedded-cost rate or its incremental-cost rate of accelerated expansion. (In this instance, the incremental rate was higher.) In the second new order, Public Service Co. of Colorado (PSCC) filed a proposed open-access transmission service tariff related to its acquisition of facilities from Colorado-Ute Electric Association Inc. The FERC rejected PSCC's tariff request that would have required a new transmission customer (in the event PSCC modified its integrated transmission grid) to pay the sum of PSCC's standard transmission rate (reflecting the average cost of the grid facilities) plus the expansion cost (reflecting the incremental facility cost).
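The pricing rule the FERC applied in both orders, the customer pays the higher of the embedded-cost rate or the incremental-cost rate rather than their sum, reduces to one comparison. The rates below are illustrative numbers, not figures from the orders:

```python
# The "higher of" transmission pricing rule from these orders, with
# illustrative rates (the orders' actual rates are not reproduced here).

def transmission_rate(embedded, incremental):
    """Charge the higher of the two rates, never their sum."""
    return max(embedded, incremental)

# In the Public Service case the incremental rate was the higher one:
print(transmission_rate(embedded=2.10, incremental=3.40))
# When expansion is cheap, the embedded rate governs instead:
print(transmission_rate(embedded=2.10, incremental=1.20))
```

The contrast with both rejected proposals is that each utility sought embedded *plus* incremental, i.e. `embedded + incremental`, which would recover the expansion cost twice.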
The SuperCOSMOS Science Archive
NASA Astrophysics Data System (ADS)
Hambly, N.; Read, M.; Mann, R.; Sutorius, E.; Bond, I.; MacGillivray, H.; Williams, P.; Lawrence, A.
2004-07-01
The SuperCOSMOS Sky Survey (SSS {http://www-wfau.roe.ac.uk/sss}; Hambly et al., 2001) consists of digitised scans of Schmidt photographic survey material in a multi-colour (BRI), multi-epoch, uniformly calibrated product. It covers the whole southern hemisphere, with an extension into the north currently underway. Public online access to the 2 Tbytes of SSS pixel data and object catalogues has been available for some time; data are being downloaded at a rate of several gigabytes per week, and many new science results are emerging from community use of the data. In this poster we describe the terabyte-scale SuperCOSMOS Science Archive {http://thoth.roe.ac.uk/ssa} (SSA), which is a recasting of the SSS object catalogue system from flat files into an RDBMS, with an enhanced user interface. We describe some aspects of the hardware and schema design of the SSA, which aims to produce a high-performance, VO-compatible database, suitable for data mining by `power users', while maintaining the ease of use praised in the old SSS system. Initially, the SSA will allow access through web forms and a flexible SQL interface. It acts as the prototype for the next generation of survey archives to be hosted by the University of Edinburgh's Wide Field Astronomy Unit, such as the WFCAM Science Archive of infrared sky survey data, as well as being a scalability testbed for use by AstroGrid, the UK's Virtual Observatory project. As a result of these roles, it will subsequently display expanding functionality as web - and, later, Grid - services are deployed on it.
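The flat-file-to-RDBMS recasting can be illustrated in miniature with an in-memory database: catalogue rows go into a relational table, and a sky-region search becomes a declarative SQL query of the kind the SSA's SQL interface exposes. The table name, columns, and rows below are invented, not the SSA schema:

```python
# Miniature of the SSA recasting: catalogue rows in a relational table,
# queried with SQL. Table, columns, and data are hypothetical, not the
# actual SSA schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE source (ra REAL, dec REAL, b_mag REAL)")
con.executemany("INSERT INTO source VALUES (?, ?, ?)",
                [(150.1, -30.2, 18.4), (150.3, -30.1, 21.0),
                 (200.0, 10.0, 17.2)])

# A typical archive query: bright objects inside a small sky region.
rows = con.execute(
    "SELECT ra, dec, b_mag FROM source "
    "WHERE ra BETWEEN 150.0 AND 150.5 "
    "AND dec BETWEEN -30.5 AND -30.0 AND b_mag < 20.0").fetchall()
```

The gain over flat files is that such constraints are planned against indexes by the database rather than scanned row by row in client code, which is what makes terabyte-scale data mining by power users feasible.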
Solution for Data Security Challenges Faced by Smart Grid Evolution - Video
Grid computing technology for hydrological applications
NASA Astrophysics Data System (ADS)
Lecca, G.; Petitdidier, M.; Hluchy, L.; Ivanovic, M.; Kussul, N.; Ray, N.; Thieron, V.
2011-06-01
Advances in e-Infrastructure promise to revolutionize sensing systems and the way in which data are collected and assimilated, and complex water systems are simulated and visualized. According to the EU Infrastructure 2010 work-programme, data and compute infrastructures and their underlying technologies, whether oriented to tackling scientific challenges or to complex problem solving in engineering, are expected to converge into so-called knowledge infrastructures, leading to more effective research, education and innovation in the next decade and beyond. Grid technology is recognized as a fundamental component of e-Infrastructures. Nevertheless, this emerging paradigm highlights several topics, including data management, algorithm optimization, security, performance (speed, throughput, bandwidth, etc.), and scientific cooperation and collaboration issues, that require further examination to fully exploit it and to better inform future research policies. The paper illustrates the results of six different surface and subsurface hydrology applications that have been deployed on the Grid. All the applications aim to answer strong requirements from civil society at large relating to natural and anthropogenic risks. Grid technology has been successfully tested to improve flood prediction, groundwater resources management and the Black Sea hydrological survey, by providing large computing resources. It is also shown that Grid technology facilitates e-cooperation among partners by means of services for authentication and authorization, seamless access to distributed data sources, data protection and access rights, and standardization.
omniClassifier: a Desktop Grid Computing System for Big Data Prediction Modeling
Phan, John H.; Kothari, Sonal; Wang, May D.
2016-01-01
Robust prediction models are important for numerous science, engineering, and biomedical applications. However, best-practice procedures for optimizing prediction models can be computationally complex, especially when choosing models from among hundreds or thousands of parameter choices. Computational complexity has further increased with the growth of data in these fields, concurrent with the era of “Big Data”. Grid computing is a potential solution to the computational challenges of Big Data. Desktop grid computing, which uses idle CPU cycles of commodity desktop machines, coupled with commercial cloud computing resources, can enable research labs to gain easier and more cost-effective access to vast computing resources. We have developed omniClassifier, a multi-purpose prediction modeling application that provides researchers with a tool for conducting machine learning research within the guidelines of recommended best practices. omniClassifier is implemented as a desktop grid computing system using the Berkeley Open Infrastructure for Network Computing (BOINC) middleware. In addition to describing implementation details, we use various gene expression datasets to demonstrate the potential scalability of omniClassifier for efficient and robust Big Data prediction modeling. A prototype of omniClassifier can be accessed at http://omniclassifier.bme.gatech.edu/. PMID:27532062
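The omniClassifier abstract describes distributing model selection over hundreds or thousands of parameter choices via a desktop grid. As a minimal sketch of that idea (not omniClassifier's actual code; the parameter names are invented), a master process might enumerate the full parameter grid and chunk it into BOINC-style work units:

```python
from itertools import product

def make_work_units(param_grid, chunk_size):
    """Enumerate every parameter combination in the grid and split the
    list into fixed-size work units a desktop-grid master could hand
    out to idle client machines."""
    names = sorted(param_grid)
    combos = [dict(zip(names, values))
              for values in product(*(param_grid[n] for n in names))]
    return [combos[i:i + chunk_size] for i in range(0, len(combos), chunk_size)]

# Hypothetical model-selection grid: 3 * 2 * 4 = 24 combinations.
grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"],
        "n_features": [10, 50, 100, 500]}
units = make_work_units(grid, chunk_size=5)
print(len(units))  # -> 5 work units (four of 5 combos, one of 4)
```

Each unit is then an independent job, which is what makes the search embarrassingly parallel across donated CPU cycles.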
SMART-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hodge, Bri-Mathias
This presentation provides a SMART-DS project overview and status update for the ARPA-E GRID DATA program meeting 2017, including distribution systems, models, and scenarios, as well as opportunities for GRID DATA collaborations.
Transformation of two and three-dimensional regions by elliptic systems
NASA Technical Reports Server (NTRS)
Mastin, C. Wayne
1994-01-01
Several reports are attached to this document which contain the results of our research at the end of this contract period. Three of the reports deal with our work on generating surface grids. One is a preprint of a paper which will appear in the journal Applied Mathematics and Computation. Another is the abstract from a dissertation which has been prepared by Ahmed Khamayseh, a graduate student who has been supported by this grant for the last two years. The last report on surface grids is the extended abstract of a paper to be presented at the 14th IMACS World Congress in July. This report contains results on conformal mappings of surfaces, which are closely related to elliptic methods for surface grid generation. A preliminary report is included on new methods for dealing with block interfaces in multiblock grid systems. The development work is complete and the methods will eventually be incorporated into the National Grid Project (NGP) grid generation code. Thus, the attached report contains only a simple grid system which was used to test the algorithms to prove that the concepts are sound. These developments will greatly aid grid control when using elliptic systems and prevent unwanted grid movement. The last report is a brief summary of some timings that were obtained when the multiblock grid generation code was run on the Intel iPSC/860 hypercube. Since most of the data in a grid code is local to a particular block, only a small fraction of the total data must be passed between processors. The data is also distributed among the processors so that the total size of the grid can be increased along with the number of processors. This work is only in a preliminary stage. However, one of the ERC graduate students has taken an interest in the project and is presently extending these results as a part of his master's thesis.
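The elliptic grid generation described above, in its simplest form, amounts to solving Laplace's equation for the node coordinates with the boundary held fixed. A minimal sketch (a Jacobi relaxation on a single block, not the NGP code) looks like:

```python
import numpy as np

def laplace_smooth(x, y, iters=500):
    """Simplest elliptic grid generator: repeatedly replace each
    interior node by the average of its four neighbours (a Jacobi
    sweep of Laplace's equation), holding boundary nodes fixed."""
    for _ in range(iters):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                + y[1:-1, 2:] + y[1:-1, :-2])
    return x, y

# Perturb the interior of a uniform grid, then let the elliptic
# smoother pull the nodes back toward a smooth distribution.
n = 9
xg, yg = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n))
rng = np.random.default_rng(0)
xg[1:-1, 1:-1] += 0.02 * rng.standard_normal((n - 2, n - 2))
xs, ys = laplace_smooth(xg.copy(), yg.copy())
```

Because the boundary values here are linear, the harmonic solution is the uniform grid itself, so the smoother removes the perturbation entirely; real codes add control functions (Poisson systems) to steer node clustering instead.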
Long Island Smart Energy Corridor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mui, Ming
The Long Island Power Authority (LIPA) has teamed with Stony Brook University (Stony Brook or SBU) and Farmingdale State College (Farmingdale or FSC), two branches of the State University of New York (SUNY), to create a “Smart Energy Corridor.” The project, located along the Route 110 business corridor on Long Island, New York, demonstrated the integration of a suite of Smart Grid technologies from substations to end-use loads. The Smart Energy Corridor Project included the following key features: -TECHNOLOGY: Demonstrated a full range of smart energy technologies, including substations and distribution feeder automation, fiber and radio communications backbone, advanced metering infrastructure (AMI), meter data management (MDM) system (which LIPA implemented outside of this project), field tools automation, customer-level energy management including automated energy management systems, and integration with distributed generation and plug-in hybrid electric vehicles. -MARKETING: A rigorous market test that identified customer response to an alternative time-of-use pricing plan and varying levels of information and analytical support. -CYBER SECURITY: Tested cyber security vulnerabilities in Smart Grid hardware, network, and application layers. Developed recommendations for policies, procedures, and technical controls to prevent or foil cyber-attacks and to harden the Smart Grid infrastructure. -RELIABILITY: Leveraged new Smart Grid-enabled data to increase system efficiency and reliability. Developed enhanced load forecasting, phase balancing, and voltage control techniques designed to work hand-in-hand with the Smart Grid technologies. -OUTREACH: Implemented public outreach and educational initiatives that were linked directly to the demonstration of Smart Grid technologies, tools, techniques, and system configurations.
This included creation of full-scale operating models demonstrating application of Smart Grid technologies in business and residential settings. Farmingdale State College held three international conferences on energy and sustainability and Smart Grid related technologies and policies. These conferences, in addition to public seminars, increased understanding and acceptance of Smart Grid transformation by the general public, business, industry, and municipalities in the Long Island and greater New York region. -JOB CREATION: Provided training for the Smart Grid and clean energy jobs of the future at both Farmingdale and Stony Brook. Stony Brook focused its “Cradle to Fortune 500” suite of economic development resources on the opportunities emerging from the project, helping to create new technologies, new businesses, and new jobs. To achieve these features, LIPA and its sub-recipients, FSC and SBU, each had separate but complementary objectives. For LIPA, the Smart Energy Corridor meant (1) validating Smart Grid technologies; (2) quantifying Smart Grid costs and benefits; and (3) providing insights into how Smart Grid applications can be better implemented, readily adapted, and replicated in individual homes and businesses. LIPA installed 2,550 AMI meters (exceeding the 500 AMI meters in the original plan), created three “smart” substations serving the Corridor, and installed additional distribution automation elements including two-way communications and digital controls over various feeders and capacitor banks. It gathered and analyzed customer behavior information on how they responded to a new “smart” TOU rate and to various levels of information and analytical tools.
Grid scale drives the scale and long-term stability of place maps
Mallory, Caitlin S; Hardcastle, Kiah; Bant, Jason S; Giocomo, Lisa M
2018-01-01
Medial entorhinal cortex (MEC) grid cells fire at regular spatial intervals and project to the hippocampus, where place cells are active in spatially restricted locations. One feature of the grid population is the increase in grid spatial scale along the dorsal-ventral MEC axis. However, the difficulty in perturbing grid scale without impacting the properties of other functionally-defined MEC cell types has obscured how grid scale influences hippocampal coding and spatial memory. Here, we use a targeted viral approach to knock out HCN1 channels selectively in MEC, causing grid scale to expand while leaving other MEC spatial and velocity signals intact. Grid scale expansion resulted in place scale expansion in fields located far from environmental boundaries, reduced long-term place field stability and impaired spatial learning. These observations, combined with simulations of a grid-to-place cell model and position decoding of place cells, illuminate how grid scale impacts place coding and spatial memory. PMID:29335607
The research on multi-projection correction based on color coding grid array
NASA Astrophysics Data System (ADS)
Yang, Fan; Han, Cheng; Bai, Baoxing; Zhang, Chao; Zhao, Yunxiu
2017-10-01
Multi-channel projection systems suffer from several disadvantages, such as low timeliness and heavy manual intervention. To solve these problems, this paper proposes a multi-projector correction technique based on a color-coded grid array. First, a color structured-light stripe pattern is generated using De Bruijn sequences, and the feature information of the stripe image is meshed. Taking each meshed colored grid intersection as a circle centre, a white solid circle is constructed to form the feature sample set of the projected images; this gives the sample set both perceptual localization and good noise immunity. Second, we establish a subpixel geometric mapping between the projection screen and the individual projectors using structured-light encoding and decoding based on the color array, and this mapping is used to solve the homography matrix of each projector. Lastly, the brightness inconsistency in the overlap area of the multi-channel projection interferes seriously with the result, so that the corrected image does not fit the observer's visual needs; we therefore obtain a visually consistent projected image using a luminance fusion correction algorithm. The experimental results show that this method not only effectively solves the distortion of the multi-projection screen and the luminance interference in overlapping regions, but also improves the calibration efficiency of the multi-channel projection system and reduces the maintenance cost of intelligent multi-projection systems.
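The De Bruijn sequences mentioned above are what make each local window of the color stripe pattern uniquely identifiable: in a B(k, n) sequence over k colors, every length-n window appears exactly once per cycle. A small sketch using the standard Lyndon-word (FKM) construction, with an invented three-color palette:

```python
def de_bruijn(k, n):
    """B(k, n) de Bruijn sequence via the classic FKM (Lyndon-word)
    construction: the cyclic sequence of length k**n in which every
    length-n word over k symbols occurs exactly once."""
    a = [0] * (k * n)
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence

colors = "RGB"  # hypothetical stripe palette, not the paper's actual coding
seq = de_bruijn(len(colors), 3)
stripe = "".join(colors[s] for s in seq)
print(len(stripe))  # -> 27, i.e. 3**3
```

Decoding then only needs to observe any 3 consecutive stripe colors to recover an absolute position along the pattern, which is what enables the subpixel screen-to-projector correspondence.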
Thundercloud: Domain specific information security training for the smart grid
NASA Astrophysics Data System (ADS)
Stites, Joseph
In this paper, we describe a cloud-based virtual smart grid test bed: ThunderCloud, which is intended to be used for domain-specific security training applicable to the smart grid environment. The test bed consists of virtual machines connected using a virtual internal network. ThunderCloud is remotely accessible, allowing students to undergo educational exercises online. We also describe a series of practical exercises that we have developed for providing the domain-specific training using ThunderCloud. The training exercises and attacks are designed to be realistic and to reflect known vulnerabilities and attacks reported in the smart grid environment. We were able to use ThunderCloud to offer practical domain-specific security training for the smart grid environment to computer science students at little or no cost to the department and no risk to any real networks or systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corbus, David A; Jacobson, Mark D; Tan, Jin
As the deployment of wind and solar technologies increases at an unprecedented rate across the United States and in many world markets, the variability of power output from these technologies expands the need for increased power system flexibility. Energy storage can play an important role in the transition to a more flexible power system that can accommodate high penetrations of variable renewable technologies. This project focuses on how ternary pumped storage hydropower (T-PSH) coupled with dynamic transmission can help this transition by defining the system-wide benefits of deploying this technology in specific U.S. markets. T-PSH technology is the fastest-responding pumped hydro equipment available today for grid services. T-PSH efficiencies are competitive with lithium-ion (Li-ion) batteries, and T-PSH can provide increased storage capacity with minimal degradation during a 50-year lifetime. This project evaluates T-PSH for grid services ranging from fast frequency response (FFR) for power system contingency events and enhanced power system stability to longer time periods for power system flexibility to accommodate ramping from wind and solar variability and energy arbitrage. In summary, this project: Compares power grid services and costs, including ancillary services and essential reliability services, for T-PSH and conventional pumped storage hydropower (PSH) - Evaluates the dynamic response of T-PSH and PSH technologies and their contribution to essential reliability services for grid stability by developing new power system model representations for T-PSH and performing simulations in the Western Interconnection - Evaluates production costs, operational impacts, and energy storage revenue streams for future power system scenarios with T-PSH focusing on time frames of 5 minutes and more - Assesses the electricity market-transforming capabilities of T-PSH technology coupled with transmission monitoring and dynamic control.
This paper presents an overview of the methodology and initial, first-year preliminary findings of a 2-year in-depth study into how advanced PSH and dynamic transmission contribute to the transformation and modernization of the U.S. electric grid. This project is part of the HydroNEXT Initiative funded by the U.S. Department of Energy (DOE) that is focused on the development of innovative technologies to advance nonpowered dams and PSH. The project team consists of the National Renewable Energy Laboratory (project lead), Absaroka Energy, LLC (Montana-based PSH project developer), GE Renewable Energy (PSH pump/turbine equipment supplier), Grid Dynamics, and Auburn University (lead for NREL/Auburn dynamic modeling team).
e-Science and its implications.
Hey, Tony; Trefethen, Anne
2003-08-15
After a definition of e-science and the Grid, the paper begins with an overview of the technological context of Grid developments. NASA's Information Power Grid is described as an early example of a 'prototype production Grid'. The discussion of e-science and the Grid is then set in the context of the UK e-Science Programme and is illustrated with reference to some UK e-science projects in science, engineering and medicine. The Open Standards approach to Grid middleware adopted by the community in the Global Grid Forum is described and compared with community-based standardization processes used for the Internet, MPI, Linux and the Web. Some implications of the imminent data deluge that will arise from the new generation of e-science experiments in terms of archiving and curation are then considered. The paper concludes with remarks about social and technological issues posed by Grid-enabled 'collaboratories' in both scientific and commercial contexts.
7 CFR 1709.109 - Eligible projects.
Code of Federal Regulations, 2010 CFR
2010-01-01
... through on-grid and off-grid renewable energy technologies, energy efficiency, and energy conservation... improvement of: (a) Electric generation, transmission, and distribution facilities, equipment, and services... electric power generation, water or space heating, or process heating and power for the eligible community...
Electric Vehicle Grid Experiments and Analysis
DOT National Transportation Integrated Search
2018-02-02
This project developed a low-cost building energy management system (EMS) and conducted vehicle-to-grid (V2G) experiments on a commercial office building. The V2G effort included the installation and operation of a Princeton Power System CA-30 bi-dire...
Conservative Overset Grids for Overflow For The Sonic Wave Atmospheric Propagation Project
NASA Technical Reports Server (NTRS)
Onufer, Jeff T.; Cummings, Russell M.
1999-01-01
Methods are presented that can be used to make multiple, overset grids communicate in a conservative manner. The methods are developed for use with the Chimera overset method using the PEGSUS code and the OVERFLOW solver.
Smart Grid Cybersecurity: Job Performance Model Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Neil, Lori Ross; Assante, Michael; Tobey, David
2012-08-01
This is the project report to DOE OE-30 for the completion of Phase 1 of a 3 phase report. This report outlines the work done to develop a smart grid cybersecurity certification. This work is being done with the subcontractor NBISE.
7 CFR 1709.109 - Eligible projects.
Code of Federal Regulations, 2011 CFR
2011-01-01
... through on-grid and off-grid renewable energy technologies, energy efficiency, and energy conservation... improvement of: (a) Electric generation, transmission, and distribution facilities, equipment, and services... electric power generation, water or space heating, or process heating and power for the eligible community...
7 CFR 1709.109 - Eligible projects.
Code of Federal Regulations, 2012 CFR
2012-01-01
... through on-grid and off-grid renewable energy technologies, energy efficiency, and energy conservation... improvement of: (a) Electric generation, transmission, and distribution facilities, equipment, and services... electric power generation, water or space heating, or process heating and power for the eligible community...
7 CFR 1709.109 - Eligible projects.
Code of Federal Regulations, 2013 CFR
2013-01-01
... through on-grid and off-grid renewable energy technologies, energy efficiency, and energy conservation... improvement of: (a) Electric generation, transmission, and distribution facilities, equipment, and services... electric power generation, water or space heating, or process heating and power for the eligible community...
7 CFR 1709.109 - Eligible projects.
Code of Federal Regulations, 2014 CFR
2014-01-01
... through on-grid and off-grid renewable energy technologies, energy efficiency, and energy conservation... improvement of: (a) Electric generation, transmission, and distribution facilities, equipment, and services... electric power generation, water or space heating, or process heating and power for the eligible community...
NASA Astrophysics Data System (ADS)
Tamkin, G.; Schnase, J. L.; Duffy, D.; Li, J.; Strong, S.; Thompson, J. H.
2017-12-01
NASA's efforts to advance climate analytics-as-a-service are making new capabilities available to the research community: (1) A full-featured Reanalysis Ensemble Service (RES) comprising monthly means data from multiple reanalysis data sets, accessible through an enhanced set of extraction, analytic, arithmetic, and intercomparison operations. The operations are made accessible through NASA's climate data analytics Web services and our client-side Climate Data Services Python library, CDSlib; (2) A cloud-based, high-performance Virtual Real-Time Analytics Testbed supporting a select set of climate variables. This near real-time capability enables advanced technologies like Spark and Hadoop-based MapReduce analytics over native NetCDF files; and (3) A WPS-compliant Web service interface to our climate data analytics service that will enable greater interoperability with next-generation systems such as ESGF. The Reanalysis Ensemble Service includes the following: - A new API that supports full temporal, spatial, and grid-based resolution services with sample queries - A Docker-ready RES application to deploy across platforms - Extended capabilities that enable single- and multiple-reanalysis area average, vertical average, re-gridding, standard deviation, and ensemble averages - Convenient, one-stop shopping for commonly used data products from multiple reanalyses including basic sub-setting and arithmetic operations (e.g., avg, sum, max, min, var, count, anomaly) - Full support for the MERRA-2 reanalysis dataset in addition to ECMWF ERA-Interim, NCEP CFSR, JMA JRA-55 and NOAA/ESRL 20CR… - A Jupyter notebook-based distribution mechanism designed for client use cases that combines CDSlib documentation with interactive scenarios and personalized project management - Supporting analytic services for NASA GMAO Forward Processing datasets - Basic uncertainty quantification services that combine heterogeneous ensemble products with comparative observational products (e.g.,
reanalysis, observational, visualization) - The ability to compute and visualize multiple reanalysis for ease of inter-comparisons - Automated tools to retrieve and prepare data collections for analytic processing
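An operation like the "area average" listed in the RES capabilities is conventionally defined on a regular lat-lon grid with cosine-of-latitude weighting, so that shrinking polar cells do not dominate the mean. A generic illustration of that definition (not the RES implementation, whose internals the abstract does not describe):

```python
import numpy as np

def area_average(field, lats_deg):
    """Cos-latitude-weighted mean of a (lat, lon) field on a regular
    grid: each row is weighted by cos(latitude), the usual proxy for
    grid-cell area on a sphere."""
    w = np.cos(np.deg2rad(np.asarray(lats_deg)))[:, np.newaxis]
    w = np.broadcast_to(w, field.shape)
    return float((field * w).sum() / w.sum())

# A constant 288 K field on a 1-degree grid: its area average must be 288 K.
lats = np.linspace(-89.5, 89.5, 180)
field = np.full((180, 360), 288.0)
print(area_average(field, lats))  # -> 288.0
```

The same weighting carries over to variance, standard deviation, and anomaly operations; only the reduction applied to the weighted values changes.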
2015-09-01
AGENDA: 1. Non-Tactical Vehicle-to-Grid (V2G) Projects • Smart Power... Vehicle Technology Expo and the Battery Show Conference, Novi, MI, 15-17 Sep 2015 • For the Nation: help stabilize the smart grid and generate revenue... demonstration of a smart, aggregated, ad-hoc-capable vehicle-to-grid (V2G) and vehicle-to-vehicle (V2V) capable fleet power system to support
Job optimization in ATLAS TAG-based distributed analysis
NASA Astrophysics Data System (ADS)
Mambelli, M.; Cranshaw, J.; Gardner, R.; Maeno, T.; Malon, D.; Novak, M.
2010-04-01
The ATLAS experiment is projected to collect over one billion events/year during the first few years of operation. The efficient selection of events for various physics analyses across all appropriate samples presents a significant technical challenge. ATLAS computing infrastructure leverages the Grid to tackle the analysis across large samples by organizing data into a hierarchical structure and exploiting distributed computing to churn through the computations. This includes events at different stages of processing: RAW, ESD (Event Summary Data), AOD (Analysis Object Data), DPD (Derived Physics Data). Event Level Metadata Tags (TAGs) contain information about each event stored using multiple technologies accessible by POOL and various web services. This allows users to apply selection cuts on quantities of interest across the entire sample to compile a subset of events that are appropriate for their analysis. This paper describes new methods for organizing jobs using the TAGs criteria to analyze ATLAS data. It further compares different access patterns to the event data and explores ways to partition the workload for event selection and analysis. Here analysis is defined as a broader set of event processing tasks including event selection and reduction operations ("skimming", "slimming" and "thinning") as well as DPD making. Specifically it compares analysis with direct access to the events (AOD and ESD data) to access mediated by different TAG-based event selections. We then compare different ways of splitting the processing to maximize performance.
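The TAG-mediated selection described above boils down to two steps: apply cuts on event-level metadata, then partition the surviving events into job-sized chunks. A toy sketch of that workflow (the TAG variable names and cut values are invented; real ATLAS TAGs are queried through POOL and web services, not Python dicts):

```python
def select_and_partition(tags, cuts, events_per_job):
    """Apply inclusive window cuts on event-level metadata (TAGs) and
    split the surviving event IDs into job-sized chunks, mimicking
    TAG-based skimming before distributed analysis."""
    passing = [t["event_id"] for t in tags
               if all(lo <= t[var] <= hi for var, (lo, hi) in cuts.items())]
    return [passing[i:i + events_per_job]
            for i in range(0, len(passing), events_per_job)]

# 100 synthetic events with two hypothetical TAG quantities.
tags = [{"event_id": i, "n_jets": i % 5, "met": 10.0 * i} for i in range(100)]
jobs = select_and_partition(tags, {"n_jets": (2, 4), "met": (0.0, 500.0)},
                            events_per_job=10)
print(len(jobs))  # -> 3 jobs of 10 selected events each
```

The paper's comparison is essentially between this pattern (touch only selected events) and direct access, where every AOD/ESD event is read and the cuts are applied inline; the partitioning step is what lets the selected workload be spread across Grid sites.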
NASA Astrophysics Data System (ADS)
Alstone, Peter Michael
This work explores the intersections of information technology and off-grid electricity deployment in the developing world with a focus on a key instance: the emergence of pay-as-you-go (PAYG) solar household-scale energy systems. It is grounded in a detailed field study by my research team in Kenya between 2013 and 2014 that included primary data collection across the solar supply chain from global businesses through national and local distribution and to the end-users. We supplement the information with business process and national survey data to develop a detailed view of the markets, technology systems, and individuals who interact within those frameworks. The findings are presented in this dissertation as a series of four chapters with introductory, bridging, and synthesis material between them. The first chapter, Decentralized Energy Systems for Clean Electricity Access, presents a global view of the emerging off-grid power sector. Long-run trends in technology create "a unique moment in history" for closing the gap between global population and access to electricity, which has stubbornly held at 1-2 billion people without power since the initiation of the electric utility business model in the late 1800s. We show the potential for widespread near-term adoption of off-grid solar, which could lead to ten times less inequality in access and also ten times lower household-level climate impacts. Decentralized power systems that replace fuel-based incumbent lighting can advance the causes of climate stabilization, economic and social freedom and human health. Chapters two and three are focused on market and institutional dynamics present circa 2014 in off-grid solar, with a focus on the Kenyan market. Chapter 2, "Off-grid Power and Connectivity", presents our findings related to the widespread influence of information technology across the supply chain for solar and in PAYG approaches.
Using digital financing and embedded payment verification technology, PAYG businesses can help overcome key barriers to adoption of off-grid energy systems. The framework provides financing (or energy service payment structures) for users of off-grid solar, and, we show, is also instrumental for building trust in off-grid solar technology, facilitating supply chain coordination, and creating mechanisms and incentives for after-sales service. Chapter 3, Quality Communication, delves into detail on the information channels (both incumbent and ICT-based) that link retailers with regional and global markets for solar goods. In it we uncover the linked structure of physical distribution networks and the pathway for information about product characteristics (including, critically, the quality of products). The work shows that a few key decisions about product purchasing at the wholesale level, in places like Nairobi (the capital city of Kenya), create the bulk of the choice set for retail buyers, and shows how targeting those wholesale purchasers is critically important for ensuring good-quality products are available. Chapter 4, the last in this dissertation, is titled Off-grid solar energy services enabled and evaluated through information technology and presents an analytic framework for using remote monitoring data from PAYG systems to assess the joint technological and behavioral drivers for energy access through solar home systems. Using large-scale (n ~ 1,000) data from a large PAYG business in Kenya (M-KOPA), we show that people tend to co-optimize between the quantity and reliability of service, using 55% of the energy technically possible but with only 5% system downtime. Half of the users move their solar panel frequently (in response to concerns about theft, for the most part), and these users experienced 20% lower energy service quantities.
The findings illustrate the implications of key trends for off-grid power: evolving system component technology architectures, opportunities for improved support to markets, and the use of background data from business and technology systems. (Abstract shortened by ProQuest.).
NASA Astrophysics Data System (ADS)
Paget, A. C.; Brodzik, M. J.; Long, D. G.; Hardman, M.
2016-02-01
The historical record of satellite-derived passive microwave brightness temperatures comprises data from multiple imaging radiometers (SMMR, SSM/I-SSMIS, AMSR-E), spanning nearly 40 years of Earth observations from 1978 to the present. Passive microwave data are used to monitor time series of many climatological variables, including ocean wind speeds, cloud liquid water and sea ice concentrations and ice velocity. Gridded versions of passive microwave data have been produced using various map projections (polar stereographic, Lambert azimuthal equal-area, cylindrical equal-area, quarter-degree Plate Carrée) and data formats (flat binary, HDF). However, none of the currently available versions can be rendered in the common visualization standard, geoTIFF, without requiring cartographic reprojection. Furthermore, the reprojection details are complicated and often require expert knowledge of obscure software package options. We are producing a consistently calibrated, completely reprocessed data set of this valuable multi-sensor satellite record, using EASE-Grid 2.0, an improved equal-area projection definition that will require no reprojection for translation into geoTIFF. Our approach has been twofold: 1) define the projection ellipsoid to match the reference datum of the satellite data, and 2) include required file-level metadata for standard projection software to correctly render the data in the geoTIFF standard. The Calibrated, Enhanced Resolution Brightness Temperature (CETB) Earth System Data Record (ESDR) leverages image reconstruction techniques to enhance gridded spatial resolution to 3 km and uses newly available intersensor calibrations to improve the quality of derived geophysical products. We expect that our attention to easy geoTIFF compatibility will foster higher-quality analysis with the CETB product by enabling easy and correct intercomparison with other gridded and in situ data.
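The global grid of EASE-Grid 2.0 belongs to the cylindrical equal-area projection family mentioned in the abstract. As a sketch of the underlying mapping only: this is the spherical form of the projection with a 30-degree standard parallel, whereas EASE-Grid 2.0 proper is defined on the WGS84 ellipsoid (EPSG:6933), and the sphere radius here is illustrative:

```python
import math

R = 6371228.0               # illustrative spherical Earth radius, metres
PHI_S = math.radians(30.0)  # standard parallel of the global grid

def cea_forward(lon_deg, lat_deg):
    """Spherical cylindrical equal-area forward projection:
    x = R * lambda * cos(phi_s),  y = R * sin(phi) / cos(phi_s).
    Equal stretching in x and compression in y preserves cell area."""
    lam, phi = math.radians(lon_deg), math.radians(lat_deg)
    return R * lam * math.cos(PHI_S), R * math.sin(phi) / math.cos(PHI_S)
```

Because the mapping from (x, y) to grid column/row is a simple affine transform, the projection parameters plus that transform are exactly the "required file-level metadata" that lets geoTIFF-aware software render the grid without reprojection.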
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perelmutov, T.; Bakken, J.; Petravick, D.
Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared storage components on the Grid[1,2]. SRMs support protocol negotiation and a reliable replication mechanism. The SRM standard supports independent SRM implementations, allowing for uniform access to heterogeneous storage elements. SRMs allow site-specific policies at each location. Resource reservations made through SRMs have limited lifetimes and allow for automatic collection of unused resources, thus preventing clogging of storage systems with 'orphan' files. At Fermilab, data handling systems use the SRM management interface to the dCache Distributed Disk Cache [5,6] and the Enstore Tape Storage System [15] as key components to satisfy current and future user requests [4]. The SAM project offers the SRM interface for its internal caches as well.
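The limited-lifetime reservations described above are the mechanism that keeps 'orphan' files from clogging storage. A toy model of that idea (invented class and field names, not the SRM protocol itself):

```python
class SpaceReservation:
    """Toy SRM-style space reservation: it holds a number of bytes and
    carries an expiry time, after which its space may be reclaimed."""
    def __init__(self, size_bytes, lifetime_s, now):
        self.size_bytes = size_bytes
        self.expires_at = now + lifetime_s

    def expired(self, now):
        return now >= self.expires_at

def reclaim_expired(reservations, now):
    """Separate live reservations from expired ones; return the
    surviving list and the number of bytes automatically reclaimed."""
    live = [r for r in reservations if not r.expired(now)]
    freed = sum(r.size_bytes for r in reservations if r.expired(now))
    return live, freed

pool = [SpaceReservation(10 * 2**30, lifetime_s=3600, now=0),
        SpaceReservation(5 * 2**30, lifetime_s=60, now=0)]
live, freed = reclaim_expired(pool, now=600)
print(len(live), freed)  # -> 1 5368709120  (one live; 5 GiB reclaimed)
```

A real SRM additionally negotiates transfer protocols and enforces per-site policy before granting the reservation; the lifetime bookkeeping sketched here is only the garbage-collection piece.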
Scalable global grid catalogue for Run3 and beyond
NASA Astrophysics Data System (ADS)
Martinez Pedreira, M.; Grigoras, C.;
2017-10-01
The AliEn (ALICE Environment) file catalogue is a global unique namespace providing mapping between a UNIX-like logical name structure and the corresponding physical files distributed over 80 storage elements worldwide. Powerful search tools and hierarchical metadata information are integral parts of the system and are used by the Grid jobs as well as local users to store and access all files on the Grid storage elements. The catalogue has been in production since 2005 and over the past 11 years has grown to more than 2 billion logical file names. The backend is a set of distributed relational databases, ensuring smooth growth and fast access. Due to the anticipated fast future growth, we are looking for ways to enhance the performance and scalability by simplifying the catalogue schema while keeping the functionality intact. We investigated different backend solutions, such as distributed key value stores, as replacement for the relational database. This contribution covers the architectural changes in the system, together with the technology evaluation, benchmark results and conclusions.
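The core catalogue operation described above, mapping a UNIX-like logical name to physical replicas plus metadata search, can be sketched as a key-value lookup, which is why the authors consider distributed key-value stores as a backend replacement. All file names, storage-element hosts, and metadata fields below are invented for illustration; this is not AliEn code.

```python
# Minimal sketch of a logical-to-physical file mapping of the kind the
# abstract describes, backed by a plain key-value structure (a dict here;
# a distributed store in practice).  All names and hosts are invented.
class FileCatalogue:
    def __init__(self):
        self._replicas = {}   # logical file name -> list of physical URLs
        self._metadata = {}   # logical file name -> metadata dict

    def register(self, lfn, pfn, **meta):
        """Record one physical replica (and optional metadata) for an LFN."""
        self._replicas.setdefault(lfn, []).append(pfn)
        self._metadata.setdefault(lfn, {}).update(meta)

    def lookup(self, lfn):
        """Return all known physical replicas of a logical file name."""
        return self._replicas.get(lfn, [])

    def find(self, **query):
        """Naive metadata search: every key/value pair must match."""
        return [lfn for lfn, md in self._metadata.items()
                if all(md.get(k) == v for k, v in query.items())]

cat = FileCatalogue()
cat.register("/alice/data/run1/f.root", "root://se1.example.org//f.root",
             run=1, type="raw")
cat.register("/alice/data/run1/f.root", "root://se2.example.org//f.root")
```

The point of the sketch is that `lookup` and `register` need only key-based access, while hierarchical browsing and rich search are what keep relational or hybrid schemas in the picture.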
ICASE Workshop on Programming Computational Grids
2001-09-01
ICASE Workshop on Programming Computational Grids. Thomas M. Eidson and Merrell L. Patrick, ICASE, NASA Langley Research Center, Hampton, Virginia. From the report: it was clear that neither group fully understood the ideas and problems of the other. It was also clear that neither group is given the time and support to
ERIC Educational Resources Information Center
Noah, Philip D., Jr.
2013-01-01
The purpose of this research project was to explore the core factors that play a role in the development of the smart grid. This research study examined the Energy Independence and Security Act (EISA) of 2007 as it pertains to the smart grid, the economic and security effects of the smart grid, and key factors for its success. The…
Army Communicator. Volume 37, Number 2, Summer 2012
2012-01-01
A solution will have to meet four criteria: FIPS 140-2 validated crypto; approved data-at-rest; Common Access Card enablement; and enterprise management. Common Access Cards, Federal Information Processing Standard 140-2 certifications, and software compliance are just a few of the… (Acronyms: CAC - Common Access Card; FIPS - Federal Information Processing Standard; GIG - Global Information Grid.)
ERIC Educational Resources Information Center
Agbemabiese, Lawrence
2009-01-01
Advances in energy access in developing countries over the past 25 years have been remarkable with more than 1 billion unserved people gaining access to electricity and modern fuels. However, as impressive as this may sound, large gaps remain: 1.6 billion people still lack access to electricity and another 2.5 billion continue to rely on…
NASA Astrophysics Data System (ADS)
Descoqs, Benoit; Bhattacharyya, Subhes
2018-02-01
With more than one billion people lacking access to electricity in the world, ensuring universal access to electricity by 2030 remains a major challenge which cannot be left to the government initiatives alone. Access to local information and identification of potential areas for investment can be a challenge for investors. This paper provides a tool for preliminary assessment of potential markets for off-grid electrification in developing countries and applies this to Ghana to demonstrate its applicability. A multi-criteria approach is used to rank the districts according to the overall potential and the best markets and least favourable areas for investment are identified. The tool offers flexibility to include new inputs to the analysis and the factor weights can be adjusted as appropriate. The case study shows that the tool can effectively identify potential areas from a list of candidates and offers support to analysts.
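The multi-criteria ranking step described above can be sketched as a weighted sum over min-max-normalised criteria with adjustable weights. The district names, criteria, and weight values below are invented placeholders, not data from the Ghana case study.

```python
# Hedged sketch of a multi-criteria weighted-sum ranking of the kind the
# paper describes.  Districts, criteria, and weights are all invented.
def rank_districts(scores, weights):
    """scores: {district: {criterion: value}}, higher value = better market.
    Each criterion is min-max normalised, then combined with its weight."""
    criteria = list(weights)
    lo = {c: min(s[c] for s in scores.values()) for c in criteria}
    hi = {c: max(s[c] for s in scores.values()) for c in criteria}
    def norm(c, v):
        return 0.0 if hi[c] == lo[c] else (v - lo[c]) / (hi[c] - lo[c])
    totals = {d: sum(weights[c] * norm(c, s[c]) for c in criteria)
              for d, s in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

districts = {
    "A": {"population_density": 120, "grid_distance_km": 45, "income": 0.6},
    "B": {"population_density": 300, "grid_distance_km": 10, "income": 0.9},
    "C": {"population_density": 80,  "grid_distance_km": 70, "income": 0.4},
}
weights = {"population_density": 0.4, "grid_distance_km": 0.3, "income": 0.3}
ranking = rank_districts(districts, weights)
```

Because the weights are plain inputs, adjusting them (or adding a new criterion column) re-ranks the candidates without changing the method, which is the flexibility the paper emphasises.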
NASA Astrophysics Data System (ADS)
Li, Xingfeng; Gan, Chaoqin; Liu, Zongkang; Yan, Yuqi; Qiao, HuBao
2018-01-01
In this paper, a novel architecture of hybrid PON for the smart grid is proposed by introducing a wavelength-routing module (WRM). Using conventional passive optical components, a WRM with M ports is designed. The symmetry and passivity of the WRM make it easy to integrate and inexpensive in practice. Via the WRM, two types of network based on different ONU interconnection schemes can realize online access. Using optical switches and interconnecting fibers, full fiber-fault protection and dynamic bandwidth allocation are realized in these networks. With the help of amplitude modulation, DPSK modulation and RSOA technology, wavelength triple-reuse is achieved. By injecting signals into the left and right branches of the access ring simultaneously, the transmission delay is decreased. Finally, performance analysis and simulation of the network verify the feasibility of the proposed architecture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dykstra, Dave; Garzoglio, Gabriele; Kim, Hyunwoo
As of 2012, a number of US Department of Energy (DOE) National Laboratories have access to a 100 Gb/s wide-area network backbone. The ESnet Advanced Networking Initiative (ANI) project is intended to develop a prototype network based on emerging 100 Gb/s Ethernet technology. The ANI network will support DOE's science research programs. A 100 Gb/s network test bed is a key component of the ANI project. The test bed offers the opportunity for early evaluation of 100 Gb/s network infrastructure for supporting the high-impact data movement typical of science collaborations and experiments. In order to make effective use of this advanced infrastructure, the applications and middleware currently used by the distributed computing systems of large-scale science need to be adapted and tested within the new environment, with gaps in functionality identified and corrected. As a user of the ANI test bed, Fermilab aims to study the issues related to end-to-end integration and use of 100 Gb/s networks for the event simulation and analysis applications of physics experiments. In this paper we discuss our findings from evaluating existing HEP physics middleware and application components, including GridFTP, Globus Online, etc., in the high-speed environment. These include possible recommendations to system administrators and to application and middleware developers on changes that would enable production use of 100 Gb/s networks, including data storage, caching and wide-area access.
NASA Astrophysics Data System (ADS)
Ferrini, V. L.; Morton, J. J.; Carbotte, S. M.
2016-02-01
The Marine Geoscience Data System (MGDS: www.marine-geo.org) provides a suite of tools and services for free public access to data acquired throughout the global oceans, including maps, grids, near-bottom photos, and geologic interpretations that are essential for habitat characterization and marine spatial planning. Users can explore, discover, and download data through a combination of APIs and front-end interfaces that include dynamic service-driven maps, a geospatially enabled search engine, and an easy-to-navigate user interface for browsing and discovering related data. MGDS offers domain-specific data curation with a team of scientists and data specialists who utilize a suite of back-end tools for introspection of data files and metadata assembly to verify data quality and ensure that data are well-documented for long-term preservation and re-use. Funded by the NSF as part of the multi-disciplinary IEDA Data Facility, MGDS also offers Data DOI registration and links between data and scientific publications. MGDS produces and curates the Global Multi-Resolution Topography Synthesis (GMRT: gmrt.marine-geo.org), a continuously updated Digital Elevation Model that seamlessly integrates multi-resolution elevation data from a variety of sources, including the GEBCO 2014 (~1 km resolution) and International Bathymetric Chart of the Southern Ocean (~500 m) compilations. A significant component of GMRT includes ship-based multibeam sonar data, publicly available through NOAA's National Centers for Environmental Information, that are cleaned and quality controlled by the MGDS Team and gridded at their full spatial resolution (typically 100 m in the deep sea). Additional components include gridded bathymetry products contributed by individual scientists (up to meter-scale resolution in places), publicly accessible regional bathymetry, and high-resolution terrestrial elevation data.
New data are added to GMRT on an ongoing basis, with two scheduled releases per year. GMRT is available as both gridded data and images that can be viewed and downloaded directly through the Java application GeoMapApp (www.geomapapp.org) and the web-based GMRT MapTool. In addition, the GMRT GridServer API provides programmatic access to grids, imagery, profiles, and single point elevation values.
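A programmatic point-elevation query of the kind the GridServer API offers can be sketched by building the request URL. The endpoint path and parameter names below are assumptions for illustration, not the documented GMRT interface; consult gmrt.marine-geo.org for the real one.

```python
from urllib.parse import urlencode

# Sketch of a single-point elevation query against a GridServer-style API.
# The endpoint path and parameter names are assumed, not GMRT-documented.
BASE = "https://www.gmrt.org/services/PointServer"

def point_query_url(lat, lon, fmt="text/plain"):
    """Build a single-point elevation request URL."""
    params = {"latitude": f"{lat:.6f}", "longitude": f"{lon:.6f}",
              "format": fmt}
    return f"{BASE}?{urlencode(params)}"

url = point_query_url(-17.5, -149.8)   # an illustrative point near Tahiti
# the response could then be fetched with urllib.request.urlopen(url)
```

Keeping URL construction separate from the fetch makes scripted access to grids, profiles, and point values easy to batch and to test offline.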
A Grid Metadata Service for Earth and Environmental Sciences
NASA Astrophysics Data System (ADS)
Fiore, Sandro; Negro, Alessandro; Aloisio, Giovanni
2010-05-01
Critical challenges for climate modeling researchers are strongly connected with increasingly complex simulation models and the huge quantities of datasets they produce. Future trends in climate modeling will only increase computational and storage requirements. For this reason, the ability to transparently access both computational and data resources for large-scale complex climate simulations must be considered a key requirement for Earth Science and Environmental distributed systems. From the data management perspective, (i) the quantity of data will continuously increase, (ii) data will become more and more distributed and widespread, (iii) data sharing/federation will represent a key challenge among different sites distributed worldwide, and (iv) the potential community of users (large and heterogeneous) will be interested in discovering experimental results, searching metadata, browsing collections of files, comparing different results, displaying output, etc. A key element for carrying out data search and discovery, and for managing and accessing huge and distributed amounts of data, is the metadata handling framework. What we propose for the management of distributed datasets is the GRelC service (a data grid solution focusing on metadata management). Unlike classical approaches, the proposed data-grid solution is able to address scalability, transparency, security, efficiency, and interoperability. The GRelC service we propose provides access to metadata stored in different and widespread data sources (relational databases running on top of MySQL, Oracle, DB2, etc., leveraging SQL as the query language, as well as XML databases - XIndice, eXist, and libxml2-based documents - adopting either XPath or XQuery), providing a strong data virtualization layer in a grid environment.
Such a technological solution for distributed metadata management (i) leverages well-known, widely adopted standards (W3C, OASIS, etc.); (ii) supports role-based management (based on VOMS), which increases flexibility and scalability; (iii) provides full support for the Grid Security Infrastructure (authorization, mutual authentication, data integrity, data confidentiality and delegation); (iv) is compatible with existing grid middleware such as gLite and Globus; and finally (v) is currently adopted at the Euro-Mediterranean Centre for Climate Change (CMCC, Italy) to manage the entire CMCC data production activity, as well as in the international Climate-G testbed.
Web service module for access to g-Lite
NASA Astrophysics Data System (ADS)
Goranova, R.; Goranov, G.
2012-10-01
gLite is a lightweight grid middleware for grid computing installed on all clusters of the European Grid Infrastructure (EGI). The middleware is only partially service-oriented and does not provide well-defined Web services for job management. The existing Web services in the environment cannot be directly used by grid users for building service compositions in the EGI. In this article we present a module of well-defined Web services for job management in the EGI. We describe the architecture of the module and the design of the developed Web services. The presented Web services are composable and can participate in service compositions (workflows). An example of using the module with tools for service composition in gLite is shown.
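The shape of a composable job-management service can be sketched as a thin client whose operations (submit, query status) are chainable by a workflow engine. The operation names, the job-state model, and the in-memory backend below are invented for illustration; the article's actual module defines its own WSDL operations.

```python
# Illustrative shape of "well-defined Web services for job management":
# a thin client whose operations compose into workflows.  Operation names
# and the job-state model are invented; an in-memory dict stands in for
# the grid middleware behind the service.
class JobService:
    def __init__(self):
        self._jobs = {}     # job id -> state
        self._next = 0

    def submit(self, jdl):
        """Submit a job description; return an opaque job identifier."""
        self._next += 1
        jid = f"job-{self._next}"
        self._jobs[jid] = "SUBMITTED"
        return jid

    def status(self, jid):
        """Return the current state of a previously submitted job."""
        return self._jobs[jid]

def run_workflow(svc, jdls):
    """Tiny composition: submit a batch of job descriptions and collect
    their identifiers, as a workflow engine invoking the service might."""
    return [svc.submit(j) for j in jdls]
```

The design point is that each operation is stateless from the caller's perspective and returns plain identifiers, which is what makes the services usable inside generic composition tools.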
Stoker, Jason M.; Tyler, Dean J.; Turnipseed, D. Phil; Van Wilson, K.; Oimoen, Michael J.
2009-01-01
Hurricane Katrina was one of the largest natural disasters in U.S. history. Due to the sheer size of the affected areas, an unprecedented regional analysis at very high resolution and accuracy was needed to properly quantify and understand the effects of the hurricane and the storm tide. Many disparate sources of lidar data were acquired and processed for varying environmental reasons by pre- and post-Katrina projects. The datasets were in several formats and projections and were processed to varying phases of completion, and as a result the task of producing a seamless digital elevation dataset required a high level of coordination, research, and revision. To create a seamless digital elevation dataset, many technical issues had to be resolved before producing the desired 1/9-arc-second (3-meter) grid needed as the map base for projecting the Katrina peak storm tide throughout the affected coastal region. This report presents the methodology that was developed to construct seamless digital elevation datasets from multipurpose, multi-use, and disparate lidar datasets, and describes an easily accessible Web application for viewing the maximum storm tide caused by Hurricane Katrina in southeastern Louisiana, Mississippi, and Alabama.
Image-guided laser projection for port placement in minimally invasive surgery.
Marmurek, Jonathan; Wedlake, Chris; Pardasani, Utsav; Eagleson, Roy; Peters, Terry
2006-01-01
We present an application of an augmented reality laser projection system in which procedure-specific optimal incision sites, computed from pre-operative image acquisition, are superimposed on a patient to guide port placement in minimally invasive surgery. Tests were conducted to evaluate the fidelity of computed and measured port configurations, and to validate the accuracy with which a surgical tool-tip can be placed at an identified virtual target. A high resolution volumetric image of a thorax phantom was acquired using helical computed tomography imaging. Oriented within the thorax, a phantom organ with marked targets was visualized in a virtual environment. A graphical interface enabled marking the locations of target anatomy, and calculation of a grid of potential port locations along the intercostal rib lines. Optimal configurations of port positions and tool orientations were determined by an objective measure reflecting image-based indices of surgical dexterity, hand-eye alignment, and collision detection. Intra-operative registration of the computed virtual model and the phantom anatomy was performed using an optical tracking system. Initial trials demonstrated that computed and projected port placement provided direct access to target anatomy with an accuracy of 2 mm.
LANDFIRE: A nationally consistent vegetation, wildland fire, and fuel assessment
Rollins, Matthew G.
2009-01-01
LANDFIRE is a 5-year, multipartner project producing consistent and comprehensive maps and data describing vegetation, wildland fuel, fire regimes and ecological departure from historical conditions across the United States. It is a shared project between the wildland fire management and research and development programs of the US Department of Agriculture Forest Service and US Department of the Interior. LANDFIRE meets agency and partner needs for comprehensive, integrated data to support landscape-level fire management planning and prioritization, community and firefighter protection, effective resource allocation, and collaboration between agencies and the public. The LANDFIRE data production framework is interdisciplinary, science-based and fully repeatable, and integrates many geospatial technologies including biophysical gradient analyses, remote sensing, vegetation modelling, ecological simulation, and landscape disturbance and successional modelling. LANDFIRE data products are created as 30-m raster grids and are available over the internet at www.landfire.gov, accessed 22 April 2009. The data products are produced at scales that may be useful for prioritizing and planning individual hazardous fuel reduction and ecosystem restoration projects; however, the applicability of data products varies by location and specific use, and products may need to be adjusted by local users.
Can developing countries leapfrog the centralized electrification paradigm?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levin, Todd; Thomas, Valerie M.
Due to the rapidly decreasing costs of small renewable electricity generation systems, centralized power systems are no longer a necessary condition of universal access to modern energy services. Developing countries, where centralized electricity infrastructures are less developed, may be able to adopt these new technologies more quickly. We first review the costs of grid extension and distributed solar home systems (SHSs) as reported by a number of different studies. We then present a general analytic framework for analyzing the choice between extending the grid and implementing distributed solar home systems. Drawing upon reported grid expansion cost data for three specific regions, we demonstrate this framework by determining the electricity consumption levels at which the costs of provision through centralized and decentralized approaches are equivalent in these regions. We then calculate the SHS capital costs that are necessary for these technologies to provide each of five tiers of energy access, as defined by the United Nations Sustainable Energy for All initiative. Our results suggest that solar home systems can play an important role in achieving universal access to basic energy services. The extent of this role depends on three primary factors: SHS costs, grid expansion costs, and centralized generation costs. Given current technology costs, centralized systems will still be required to enable higher levels of consumption; however, cost reduction trends have the potential to disrupt this paradigm. By looking ahead rather than replicating older infrastructure styles, developing countries can leapfrog to a more distributed electricity service model. (C) 2016 International Energy Initiative. Published by Elsevier Inc. All rights reserved.
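The break-even comparison described above can be sketched as a one-line equation: annualised grid-extension cost plus marginal generation cost per kWh equals the per-kWh cost of SHS provision. All cost figures and the functional form below are illustrative placeholders, not values or the exact model from the study.

```python
# Hedged sketch of the break-even consumption calculation the abstract
# describes.  All numbers are invented placeholders, not study results.
def annualise(capital, rate, years):
    """Capital recovery factor: spread a capital cost over a lifetime."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return capital * crf

def breakeven_kwh(grid_ext_cost, gen_cost, shs_lcoe, rate=0.10, life=30):
    """Annual consumption E* at which
         annualise(grid_ext_cost) + E * gen_cost  ==  E * shs_lcoe,
    i.e. E* = annualise(grid_ext_cost) / (shs_lcoe - gen_cost).
    Below E*, the solar home system is the cheaper provision option."""
    fixed = annualise(grid_ext_cost, rate, life)
    return fixed / (shs_lcoe - gen_cost)

# e.g. $1500/household grid extension, $0.10/kWh central generation,
# $0.60/kWh SHS levelised cost (all hypothetical):
e_star = breakeven_kwh(1500.0, 0.10, 0.60)
```

The framework's qualitative conclusion falls out directly: raising grid-extension cost or lowering SHS cost pushes the break-even point up, widening the consumption range where decentralized provision wins.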
caGrid 1.0: An Enterprise Grid Infrastructure for Biomedical Research
Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel
2008-01-01
Objective To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. Measurements The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. Conclusions While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community. PMID:18096909
Energy Systems Integration: Demonstrating Distributed Resource Communications
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-01-01
Overview fact sheet about the Electric Power Research Institute (EPRI) and Schneider Electric Integrated Network Testbed for Energy Grid Research and Technology Experimentation (INTEGRATE) project at the Energy Systems Integration Facility. INTEGRATE is part of the U.S. Department of Energy's Grid Modernization Initiative.
DOE Office of Scientific and Technical Information (OSTI.GOV)
President Obama
On October 27th, Baltimore Gas & Electric was selected to receive $200 million for Smart Grid innovation projects under the Recovery Act. Watch as members of their team, along with President Obama, explain how building a smarter grid will help consumers cut their utility bills, battle climate change and create jobs.
Grid today, clouds on the horizon
NASA Astrophysics Data System (ADS)
Shiers, Jamie
2009-04-01
By the time of CCP 2008, the largest scientific machine in the world - the Large Hadron Collider - had been cooled down as scheduled to its operational temperature of below 2 kelvin, and injection tests were starting. Collisions of proton beams at 5+5 TeV were expected within one to two months of the initial tests, with data taking at design energy (7+7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our "Higgs in one basket" - that of Grid computing [The Worldwide LHC Computing Grid (WLCG), in: Proceedings of the Conference on Computational Physics 2006 (CCP 2006), vol. 177, 2007, pp. 219-223]. After many years of preparation, 2008 saw a final "Common Computing Readiness Challenge" (CCRC'08) - aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change - as always - is on the horizon. The current funding model for Grids - which in Europe has been through three generations of EGEE projects, together with related projects in other parts of the world, including South America - is evolving towards a long-term, sustainable e-infrastructure, such as the European Grid Initiative (EGI) [The European Grid Initiative Design Study, website at http://web.eu-egi.eu/]. At the same time, potentially new paradigms, such as that of "Cloud Computing", are emerging. This paper summarizes the results of CCRC'08 and discusses the potential impact of future Grid funding on both regional and international application communities. It contrasts Grid and Cloud computing models from both technical and sociological points of view. Finally, it discusses the requirements of production application communities, in terms of stability and continuity in the medium to long term.
The Czech National Grid Infrastructure
NASA Astrophysics Data System (ADS)
Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.
2017-10-01
The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a network fast enough to provide transparent access to all resources. We describe in more detail the computing infrastructure, which is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest share of CPUs is still accessible via distributed TORQUE servers, providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided to the EGI FedCloud environment with a cloud interface, and a Hadoop cluster is provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard two-CPU servers to large SMP machines with up to 6 TB of RAM, or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFSv4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics is given.
Probabilistic Learning by Rodent Grid Cells
Cheung, Allen
2016-01-01
Mounting evidence shows mammalian brains are probabilistic computers, but the specific cells involved remain elusive. Parallel research suggests that grid cells of the mammalian hippocampal formation are fundamental to spatial cognition, but their diverse response properties still defy explanation. No plausible model exists that explains stable grids in darkness for twenty minutes or longer, despite this being one of the first results ever published on grid cells. Similarly, no current explanation can tie together grid fragmentation and grid rescaling, which show very different forms of flexibility in grid responses when the environment is varied. Other properties, such as attractor dynamics and grid anisotropy, seem to be at odds with one another unless additional properties are assumed, such as a varying velocity gain. Modelling efforts have largely ignored the breadth of response patterns, while also failing to account for the disastrous effects of sensory noise during spatial learning and recall, especially in darkness. Here, published electrophysiological evidence from a range of experiments is reinterpreted using a novel probabilistic learning model, which shows that grid cell responses are accurately predicted by a probabilistic learning process. Diverse response properties of probabilistic grid cells are statistically indistinguishable from rat grid cells across key manipulations. A simple coherent set of probabilistic computations explains stable grid fields in darkness, partial grid rescaling in resized arenas, low-dimensional attractor grid cell dynamics, and grid fragmentation in hairpin mazes. The same computations also reconcile oscillatory dynamics at the single-cell level with attractor dynamics at the cell-ensemble level. Additionally, a clear functional role for boundary cells is proposed for spatial learning.
These findings provide a parsimonious and unified explanation of grid cell function, and implicate grid cells as an accessible neuronal population readout of a set of probabilistic spatial computations. PMID:27792723
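The style of probabilistic spatial computation the paper attributes to grid cells, a motion (predict) step followed by a sensory (update) step, can be illustrated with a minimal one-dimensional Bayes filter over a discretised circular track. This is a generic textbook sketch, not the author's model; the bin count, motion kernel, and landmark likelihood are invented.

```python
# Minimal 1-D Bayesian position filter on a circular track, purely to
# illustrate the predict/update style of probabilistic spatial computation
# discussed above.  It is not the paper's model; all numbers are invented.
def predict(belief, move_kernel):
    """Motion update: convolve the belief with a movement kernel
    (the track is circular, so indices wrap around)."""
    n = len(belief)
    out = [0.0] * n
    for i, p in enumerate(belief):
        for dj, k in move_kernel:
            out[(i + dj) % n] += p * k
    return out

def update(belief, likelihood):
    """Sensory update: multiply by the likelihood and renormalise."""
    post = [b * l for b, l in zip(belief, likelihood)]
    z = sum(post)
    return [p / z for p in post]

n = 10
belief = [1.0 / n] * n                    # uniform prior (e.g. darkness)
kernel = [(1, 0.8), (0, 0.1), (2, 0.1)]   # noisy one-step forward motion
belief = predict(belief, kernel)
landmark = [0.05] * n
landmark[3] = 1.0                         # boundary/landmark cue at bin 3
belief = update(belief, landmark)
```

Without sensory cues, repeated `predict` steps diffuse the belief (position uncertainty grows, as in darkness); a single boundary cue sharply re-anchors it, which is the functional role the paper proposes for boundary cells.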
Semantic web data warehousing for caGrid.
McCusker, James P; Phillips, Joshua A; González Beltrán, Alejandra; Finkelstein, Anthony; Krauthammer, Michael
2009-10-01
The National Cancer Institute (NCI) is developing caGrid as a means for sharing cancer-related data and services. As more data sets become available on caGrid, we need effective ways of accessing and integrating this information. Although the data models exposed on caGrid are semantically well annotated, it is currently up to the caGrid client to infer relationships between the different models and their classes. In this paper, we present a Semantic Web-based data warehouse (Corvus) for creating relationships among caGrid models. This is accomplished through the transformation of semantically-annotated caBIG Unified Modeling Language (UML) information models into Web Ontology Language (OWL) ontologies that preserve those semantics. We demonstrate the validity of the approach by Semantic Extraction, Transformation and Loading (SETL) of data from two caGrid data sources, caTissue and caArray, as well as alignment and query of those sources in Corvus. We argue that semantic integration is necessary for integration of data from distributed web services and that Corvus is a useful way of accomplishing this. Our approach is generalizable and of broad utility to researchers facing similar integration challenges.
PVUSA: The value of photovoltaics in the distribution system. The Kerman Grid-Support Project
NASA Astrophysics Data System (ADS)
Wenger, Howard J.; Hoff, Thomas E.
1995-05-01
As part of the Photovoltaics for Utility Scale Applications (PVUSA) Project, Pacific Gas and Electric Company (PG&E) built the Kerman 500-kW photovoltaic power plant. Located near the end of a distribution feeder in a rural section of Fresno County, the plant was built not so much to demonstrate PV technology as to evaluate its interaction with the local distribution grid and quantify available nontraditional grid-support benefits (those other than energy and capacity). As demand for new generation began to languish in the 1980s, and siting and permitting of power plants and transmission lines became more involved, utilities began considering smaller, distributed power sources. Potential benefits include shorter construction lead time, less capital outlay, and better utilization of existing assets. The results of a 1990/1991 PG&E study of the benefits of a PV system to the distribution grid prompted the PVUSA Project to construct a plant at Kerman. Completed in 1993, the plant is believed to be the first built specifically to evaluate the multiple benefits to the grid of a strategically sited plant. Each of nine discrete benefits was evaluated in detail by first establishing the technical impact, then translating the results into present economic value. Benefits span the entire system from distribution feeder to the generation fleet. This work breaks new ground in the evaluation of distributed resources, and suggests that resource planning practices be expanded to account for these nontraditional benefits.
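The valuation step described above, translating each quantified benefit into present economic value, can be sketched as discounting annual benefit streams. The benefit names and every dollar figure below are invented placeholders, not results from the Kerman study.

```python
# Sketch of benefit valuation by discounting annual streams to present
# value.  Benefit names and figures are invented, not Kerman results.
def present_value(annual, rate, years):
    """Present value of a constant annual benefit stream."""
    return sum(annual / (1 + rate) ** t for t in range(1, years + 1))

benefits = {                     # $/year, all hypothetical
    "energy": 40000.0,
    "capacity": 12000.0,
    "line-loss savings": 3000.0,
    "voltage support": 1500.0,
}
total_pv = sum(present_value(a, 0.08, 20) for a in benefits.values())
```

Summing per-benefit present values (rather than pricing energy alone) is what allows the "nontraditional" feeder-level benefits to show up in a resource-planning comparison at all.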
SoilInfo App: global soil information on your palm
NASA Astrophysics Data System (ADS)
Hengl, Tomislav; Mendes de Jesus, Jorge
2015-04-01
In 2014, ISRIC - World Soil Information released an app for mobile devices called 'SoilInfo' (http://soilinfo-app.org), which aims to provide free access to global soil data. The SoilInfo App (available for Android v.4.0 Ice Cream Sandwich or higher, and Apple iOS v.6.x and v.7.x) currently serves the SoilGrids1km data, a stack of soil property and class maps at six standard depths at a resolution of 1 km (30 arc seconds), predicted using automated geostatistical mapping and global soil data models. The served soil data include: soil organic carbon, soil pH, sand, silt and clay fractions (%), bulk density (kg/m3), cation exchange capacity of the fine earth fraction (cmol+/kg), coarse fragments (%), World Reference Base soil groups, and USDA Soil Taxonomy suborders (DOI: 10.1371/journal.pone.0105992). New soil properties and classes will be continuously added to the system. SoilGrids1km data are available for download under a Creative Commons non-commercial license via http://soilgrids.org. They are also accessible via a Representational State Transfer (REST) API service (http://rest.soilgrids.org). The SoilInfo App mimics common weather apps, but is also largely inspired by crowdsourcing systems such as OpenStreetMap and Geo-wiki. Two development aspects of the SoilInfo App and SoilGrids are constantly being worked on: data quality, in terms of the accuracy of spatial predictions and derived information, and data usability, in terms of ease of access and ease of use (i.e. the flexibility of the cyberinfrastructure and functionalities such as the REST SoilGrids API and the SoilInfo App). The development focus in 2015 is on improving the thematic and spatial accuracy of SoilGrids predictions, primarily by using finer-resolution covariates (250 m) and machine learning algorithms (such as random forests).
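The abstract mentions a REST service at http://rest.soilgrids.org. As an illustration of how a client might compose a point query, the sketch below builds a query URL; the endpoint path and parameter names are assumptions for illustration, not the documented interface:

```python
# Sketch: composing a query to the SoilGrids REST service for a point location.
# The "/query" path and the lat/lon/attributes parameter names are illustrative
# assumptions; consult http://rest.soilgrids.org for the actual interface.
from urllib.parse import urlencode

def soilgrids_query_url(lat, lon, attributes):
    base = "http://rest.soilgrids.org/query"  # hypothetical endpoint path
    params = {"lat": lat, "lon": lon, "attributes": ",".join(attributes)}
    return base + "?" + urlencode(params)

# hypothetical attribute codes for organic carbon and pH
url = soilgrids_query_url(52.0, 5.1, ["ORCDRC", "PHIHOX"])
```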
Considerations for Future Climate Data Stewardship
NASA Astrophysics Data System (ADS)
Halem, M.; Nguyen, P. T.; Chapman, D. R.
2009-12-01
In this talk, we will describe the lessons learned from processing and generating a decade of gridded AIRS and MODIS IR sounding data. We describe the challenges faced in accessing and sharing very large data sets, maintaining data provenance under evolving technologies, obtaining access to legacy calibration data, and the permanent preservation of Earth science data records for on-demand services. These lessons suggest that a new approach to data stewardship will be required for the next decade of hyperspectral instruments combined with cloud-resolving models. It will not be sufficient for stewards of future data centers to just provide the public with access to archived data; our experience indicates that data need to reside close to computers with ultra-large disc farms and tens of thousands of processors to deliver complex services on demand over very high speed networks, much like the offerings of search engines today. Over the first decade of the 21st century, petabyte data records were acquired from the AIRS instrument on Aqua and the MODIS instrument on Aqua and Terra. NOAA data centers also maintain petabytes of operational IR sounder data collected over the past four decades. The UMBC Multicore Computational Center (MC2) developed a Service Oriented Atmospheric Radiance gridding system (SOAR) to allow users to select IR sounding instruments from multiple archives and choose space-time-spectral periods of Level 1B data to download, grid, visualize and analyze on demand. Providing this service requires high-bandwidth access to the online disks at Goddard. After 10 years, cost-effective disk storage technology finally caught up with the MODIS data volume, making it possible for Level 1B MODIS data to be available online. However, 10Ge fiber optic networks to access large volumes of data are still not available from GSFC to serve the broader community. Data transfer rates are well below 10MB/s, limiting their usefulness for climate studies. 
During this decade, processor performance hit a power wall, leading computer vendors to design multicore processor chips. High performance computer systems obtained petaflop performance by clustering tens of thousands of multicore processor chips. Thus, power consumption and autonomic recovery from processor and disc failures have become major cost and technical considerations for future data archives. To address these new architecture requirements, a transparent parallel programming paradigm, the Hadoop MapReduce cloud computing system, became available as an open-source software system. In addition, the Hadoop File System manages the distribution of data to these processors and backs up the processing in the event of any processor or disc failure. However, to employ this paradigm, the data need to be stored on the computer system. We conclude this talk with a climate data preservation approach that addresses the scalability crisis posed by exabyte data requirements for the next decade, based on projections of processor, disc data density and bandwidth doubling rates.
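The MapReduce paradigm mentioned above can be sketched in miniature for a gridding task like SOAR's: map each observation to a grid-cell key, then reduce by averaging per cell. This is an illustration of the pattern only, not UMBC's actual code; a real Hadoop job would express the same two phases over HDFS-resident data.

```python
# Sketch of the MapReduce pattern applied to gridding radiance observations.
from collections import defaultdict

def map_phase(obs, cell_deg=1.0):
    """Map phase: emit (grid-cell key, value) pairs for each (lat, lon, value)."""
    for lat, lon, value in obs:
        key = (int(lat // cell_deg), int(lon // cell_deg))
        yield key, value

def reduce_phase(pairs):
    """Reduce phase: average all values falling into each grid cell."""
    sums = defaultdict(lambda: [0.0, 0])
    for key, value in pairs:
        sums[key][0] += value
        sums[key][1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

obs = [(10.2, 20.1, 1.0), (10.8, 20.9, 3.0), (11.5, 20.2, 5.0)]  # synthetic
gridded = reduce_phase(map_phase(obs))
```

Hadoop's value, as the abstract notes, is running exactly this structure transparently across thousands of processors with automatic recovery from failures.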
Development of a Web Based Simulating System for Earthquake Modeling on the Grid
NASA Astrophysics Data System (ADS)
Seber, D.; Youn, C.; Kaiser, T.
2007-12-01
Existing cyberinfrastructure-based information, data and computational networks now allow the development of state-of-the-art, user-friendly simulation environments that democratize access to high-end computational resources and provide new research opportunities for many research and educational communities. Within the Geosciences cyberinfrastructure network, GEON, we have developed the SYNSEIS (SYNthetic SEISmogram) toolkit to enable efficient computation of 2D and 3D seismic waveforms for a variety of research purposes, especially to help analyze EarthScope's USArray seismic data quickly and efficiently. The underlying simulation software in SYNSEIS is a finite difference code, E3D, developed by LLNL (S. Larsen). The code is embedded within the SYNSEIS portlet environment and is used by our toolkit to simulate seismic waveforms of earthquakes at regional distances (<1000 km). Architecturally, SYNSEIS uses both Web Service and Grid computing resources in a portal-based work environment and has a built-in access mechanism to connect to national supercomputer centers as well as to a dedicated, small-scale compute cluster for its runs. Even though Grid computing is well established in many computing communities, its use among domain scientists is still not trivial because of the multiple levels of complexity encountered. We grid-enabled E3D using our own XML input dialect, which includes geological models that are accessible through standard Web services within the GEON network. The XML inputs for this application contain structural geometries, source parameters, seismic velocity, density, attenuation values, the number of time steps to compute, and the number of stations. By enabling portal-based access to such a computational environment, coupled with its dynamic user interface, we enable a large user community to take advantage of such high-end calculations in their research and educational activities. 
Our system can be used to promote an efficient and effective modeling environment to help scientists as well as educators in their daily activities and speed up the scientific discovery process.
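To make the description of SYNSEIS's XML inputs concrete, the following sketch assembles a run description of the kind listed above (source parameters, velocity model, time steps, stations). All element and attribute names are invented for illustration; the actual SYNSEIS dialect is not given in this abstract.

```python
# Sketch: assembling an XML run description of the kind SYNSEIS feeds to E3D.
# Every element/attribute name below is a hypothetical stand-in.
import xml.etree.ElementTree as ET

run = ET.Element("e3d_run")
ET.SubElement(run, "source", {"lat": "34.0", "lon": "-118.0", "depth_km": "10"})
model = ET.SubElement(run, "velocity_model")
ET.SubElement(model, "layer",
              {"vp": "6.0", "vs": "3.5", "rho": "2700", "thickness_km": "30"})
ET.SubElement(run, "time_steps").text = "4000"
ET.SubElement(run, "stations").text = "25"

xml_text = ET.tostring(run, encoding="unicode")
```

Wrapping the simulator's native input in XML like this is what lets the portal validate, store and route run requests through standard Web services.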
Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldhaber, Steve; Holland, Marika
The major goal of this project was to contribute improvements to the infrastructure of an Earth System Model in order to support research in the Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System project. In support of this, the NCAR team accomplished two main tasks: improving input/output performance of the model and improving atmospheric model simulation quality. Improvement of the performance and scalability of data input and diagnostic output within the model required a new infrastructure which can efficiently handle the unstructured grids common in multiscale simulations. This allows for a more computationally efficient model, enabling more years of Earth System simulation. The quality of the model simulations was improved by reducing grid-point noise in the spectral element version of the Community Atmosphere Model (CAM-SE). This was achieved by running the physics of the model using grid-cell data on a finite-volume grid.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agalgaonkar, Yashodhan P.; Hammerstrom, Donald J.
The Pacific Northwest Smart Grid Demonstration (PNWSGD) was a smart grid technology performance evaluation project that included multiple U.S. states and cooperation from multiple electric utilities in the northwest region. One of the local objectives for the project was to achieve improved distribution system reliability. Toward this end, some PNWSGD utilities automated their distribution systems, including the application of fault detection, isolation, and restoration and advanced metering infrastructure. In light of this investment, a major challenge was to establish a correlation between implementation of these smart grid technologies and actual improvements of distribution system reliability. This paper proposes using Welch's t-test to objectively determine and quantify whether distribution system reliability is improving over time. The proposed methodology is generic, and it can be implemented by any utility after calculation of the standard reliability indices. The effectiveness of the proposed hypothesis testing approach is demonstrated through comprehensive practical results. It is believed that wider adoption of the proposed approach can help utilities to evaluate a realistic long-term performance of smart grid technologies.
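Welch's t-test, the method proposed above, compares two samples without assuming equal variances, which suits reliability indices whose spread may change after automation. A minimal sketch with made-up index values (not PNWSGD data):

```python
# Sketch: Welch's t-test on annual reliability indices (e.g. SAIDI) before
# and after smart-grid deployment. The index values below are invented.
from statistics import mean, variance
from math import sqrt

def welch_t(sample_a, sample_b):
    """Return Welch's t statistic and Welch-Satterthwaite degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a) / na, variance(sample_b) / nb
    t = (mean(sample_a) - mean(sample_b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (na - 1) + vb ** 2 / (nb - 1))
    return t, df

saidi_before = [120.0, 135.0, 128.0, 140.0]   # minutes/customer/year
saidi_after = [95.0, 102.0, 99.0, 105.0]
t, df = welch_t(saidi_before, saidi_after)    # t > 0 suggests improvement
```

The statistic and degrees of freedom would then be compared against a t distribution to decide, at a chosen significance level, whether reliability genuinely improved.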
Sedimentary Geothermal Feasibility Study: October 2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Augustine, Chad; Zerpa, Luis
The objective of this project is to analyze the feasibility of commercial geothermal projects using numerical reservoir simulation, considering a sedimentary reservoir with low permeability that requires productivity enhancement. A commercial thermal reservoir simulator (STARS, from Computer Modeling Group, CMG) is used in this work for numerical modeling. In the first stage of this project (FY14), a hypothetical numerical reservoir model was developed, and validated against an analytical solution. The following model parameters were considered to obtain an acceptable match between the numerical and analytical solutions: grid block size, time step and reservoir areal dimensions; the latter related to boundary effects on the numerical solution. Systematic model runs showed that insufficient grid sizing generates numerical dispersion that causes the numerical model to underestimate the thermal breakthrough time compared to the analytic model. As grid sizing is decreased, the model results converge on a solution. Likewise, insufficient reservoir model area introduces boundary effects in the numerical solution that cause the model results to differ from the analytical solution.
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne E.
2013-01-01
We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.
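The CGP workflow, restricting to a coarse grid, solving there, and interpolating back to the fine grid, can be illustrated with a 1D sketch of the restriction and interpolation operators alone (the Poisson solve itself is treated as a black box, as in the paper; the data here are synthetic):

```python
# Structural sketch of the coarse-grid projection (CGP) idea in 1D:
# restrict fine-grid data to a coarser grid, hand it to a black-box solver,
# then interpolate the result back onto the fine grid.

def restrict(fine):
    """Injection: keep every other fine-grid point."""
    return fine[::2]

def interpolate(coarse):
    """Linear interpolation of coarse values back onto the fine grid."""
    fine = []
    for a, b in zip(coarse, coarse[1:]):
        fine += [a, 0.5 * (a + b)]
    fine.append(coarse[-1])
    return fine

f_fine = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
f_coarse = restrict(f_fine)     # coarse-grid data for the black-box solve
u_fine = interpolate(f_coarse)  # solution mapped back to the fine grid
```

The method's savings come from performing the expensive elliptic solve on the coarse grid while the cheap advection-diffusion step stays on the fine grid; the simple interpolation above is the only coupling required.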
NASA Astrophysics Data System (ADS)
Parodi, A.; Craig, G. C.; Clematis, A.; Kranzlmueller, D.; Schiffers, M.; Morando, M.; Rebora, N.; Trasforini, E.; D'Agostino, D.; Keil, K.
2010-09-01
Hydrometeorological science has made strong progress over the last decade at the European and worldwide level: new modeling tools, post-processing methodologies, observational data and corresponding ICT (Information and Communication Technology) technologies are available. Recent European efforts in developing platforms for e-Science, such as EGEE (Enabling Grids for E-sciencE), SEEGRID-SCI (South East Europe GRID e-Infrastructure for regional e-Science), and the German C3-Grid, have demonstrated their ability to provide an ideal basis for the sharing of complex hydrometeorological data sets and tools. Despite these early initiatives, however, awareness of the potential of Grid technology as a catalyst for future hydrometeorological research is still low, and both adoption and exploitation have been surprisingly slow, not only within individual EC member states but also on a European scale. With this background in mind, and given that European ICT infrastructures are in the process of transitioning to a sustainable and permanent service utility, as underlined by the European Grid Initiative (EGI) and the Partnership for Advanced Computing in Europe (PRACE), the Distributed Research Infrastructure for Hydro-Meteorology Study (DRIHMS, co-funded by the EC under the 7th Framework Programme) project has been initiated. The goal of DRIHMS is the promotion of Grids in particular and e-Infrastructures in general within the European hydrometeorological research (HMR) community through the diffusion of a Grid platform for e-collaboration in this earth science sector: the idea is to further boost European research excellence and competitiveness in the fields of hydrometeorological and Grid research by bridging the gaps between these two scientific communities. 
Furthermore, the project is intended to transfer the results to areas beyond hydrometeorological science proper, supporting the assessment of the effects of extreme hydrometeorological events on society and the development of tools improving society's adaptation and resilience to the challenges of climate change. This paper provides an overview of the DRIHMS ideas and presents the results of the DRIHMS HMR and ICT surveys.
Deployment of 802.15.4 Sensor Networks for C4ISR Operations
2006-06-01
[List-of-figures fragments recovered from the document: Figure 20, MSP410CA Dense Grid Monitoring (Crossbow User's Manual, 2005); Figure 21(a), MICA2; Figure 26, Deployment of Sensor Grid (COASTS OPORD, 2006); Figure 27, Topology View of Two Nodes and Base Station; Figure 28, Nodes Employing Multi-hop; abbreviations include TCP/IP (Transmission Control Protocol/Internet Protocol), TinyOS (Tiny Micro Threading Operating System), and UARTs.]
NASA Astrophysics Data System (ADS)
Williams, D. N.
2015-12-01
Progress in understanding and predicting climate change requires advanced tools to securely store, manage, access, process, analyze, and visualize enormous and distributed data sets. Only then can climate researchers understand the effects of climate change across all scales and use this information to inform policy decisions. With the advent of major international climate modeling intercomparisons, a need emerged within the climate-change research community to develop efficient, community-based tools to obtain relevant meteorological and other observational data, develop custom computational models, and export analysis tools for climate-change simulations. While many nascent efforts to fill these gaps appeared, they were not integrated and therefore did not benefit from collaborative development. Sharing huge data sets was difficult, and the lack of data standards prevented the merger of output data from different modeling groups. Thus began one of the largest-ever collaborative data efforts in climate science, resulting in the Earth System Grid Federation (ESGF), which is now used to disseminate model, observational, and reanalysis data for research assessed by the Intergovernmental Panel on Climate Change (IPCC). Today, ESGF is an open-source petabyte-level data storage and dissemination operational code-base that manages secure resources essential for climate change study. It is designed to remain robust even as data volumes grow exponentially. The internationally distributed, peer-to-peer ESGF "data cloud" archive represents the culmination of an effort that began in the late 1990s. ESGF portals are gateways to scientific data collections hosted at sites around the globe that allow the user to register and potentially access the entire ESGF network of data and services. The growing international interest in ESGF development efforts has attracted many others who want to make their data more widely available and easy to use. 
For example, the World Climate Research Program, which provides governance for CMIP, has now endorsed the ESGF software foundation to be used for ~70 other model intercomparison projects (MIPs), such as obs4MIPs, TAMIP, CFMIP, and GeoMIP. At present, more than 40 projects disseminate their data via ESGF.
NASA Astrophysics Data System (ADS)
Lee, J.; Waliser, D. E.; Lee, H.; Loikith, P. C.; Kunkel, K.
2017-12-01
Monitoring temporal changes in key climate variables, such as surface air temperature and precipitation, is an integral part of the ongoing efforts of the United States National Climate Assessment (NCA). Climate models participating in CMIP5 provide future trends for four different emissions scenarios. In order to have confidence in the future projections of surface air temperature and precipitation, it is crucial to evaluate the ability of CMIP5 models to reproduce observed trends over three different time periods (1895-1939, 1940-1979, and 1980-2005). Toward this goal, trends in surface air temperature and precipitation obtained from the NOAA nClimGrid 5 km gridded station observation-based product are compared, during all three time periods, to the 206 CMIP5 historical simulations from 48 unique GCMs and their multi-model ensemble (MME) for NCA-defined climate regions during summer (JJA) and winter (DJF). This evaluation quantitatively examines the biases of simulated trends in spatially averaged temperature and precipitation over the NCA climate regions. The CMIP5 MME reproduces historical surface air temperature trends for JJA for all time periods and all regions, except the Northern Great Plains during 1895-1939 and the Southeast during 1980-2005. Likewise, for DJF, the MME reproduces historical surface air temperature trends across all time periods over all regions except the Southeast during 1895-1939 and the Midwest during 1940-1979. The Regional Climate Model Evaluation System (RCMES), an analysis tool which supports the NCA by providing access to data and tools for regional climate model validation, facilitates the comparisons between the models and observations. The RCMES Toolkit is designed to assist in the analysis of climate variables and in the evaluation of climate projection models to support decision-making processes. 
This tool is used in conjunction with the above analysis and results will be presented to demonstrate its capability to access observation and model datasets, calculate evaluation metrics, and visualize the results. Several other examples of the RCMES capabilities can be found at https://rcmes.jpl.nasa.gov.
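The core of the trend evaluation described above is fitting least-squares trends to regionally averaged series and differencing model against observation. A minimal sketch with synthetic values (not nClimGrid or CMIP5 data):

```python
# Sketch: ordinary least-squares trend of a regionally averaged series,
# and the model-minus-observation trend difference used as the bias.

def linear_trend(series):
    """OLS slope per time step for equally spaced data."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

observed = [14.0, 14.1, 14.3, 14.2, 14.5]  # synthetic regional means (deg C)
modeled = [14.0, 14.2, 14.4, 14.6, 14.8]
bias = linear_trend(modeled) - linear_trend(observed)
```

In the actual evaluation, such trend biases are computed per region, season, period and simulation; RCMES automates the data access and metric calculation.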
NASA Astrophysics Data System (ADS)
Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.
2017-12-01
The increasing resolution of comprehensive Earth System Models is rapidly leading to very large climate simulation output that poses significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data from multiple climate model simulations as well as scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall, it provides a new "tool" for climate scientists to run multi-model experiments. 
At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
NASA Astrophysics Data System (ADS)
Hamlet, A. F.; Chiu, C. M.; Sharma, A.; Byun, K.; Hanson, Z.
2016-12-01
Physically based hydrologic models of surface and groundwater resources that can be flexibly and efficiently applied to support water resources policy, planning and management decisions at a wide range of spatial and temporal scales are greatly needed in the Midwest, where stakeholder access to such tools is currently a fundamental barrier to basic climate change assessment and adaptation efforts, as well as to the co-production of useful products to support detailed decision making. Based on earlier pilot studies in the Pacific Northwest Region, we are currently assembling a suite of end-to-end tools and resources to support various kinds of water resources planning and management applications across the region. A key aspect of these integrated tools is that the user community can access gridded products at any point along the end-to-end chain of models, looking backwards in time about 100 years (1915-2015) and forwards about 85 years using CMIP5 climate model projections. The integrated model is composed of historical and projected future meteorological data based on station observations and on statistically and dynamically downscaled climate model output, respectively. These gridded meteorological data sets serve as forcing data for the macro-scale VIC hydrologic model implemented over the Midwest at 1/16 degree resolution. High-resolution climate model (4 km WRF) output provides inputs for analyses of urban impacts, hydrologic extremes, agricultural impacts, and impacts to the Great Lakes. Groundwater recharge estimated by the surface water model provides input data for the fine-scale and macro-scale groundwater models needed for specific applications. 
To highlight the multi-scale use of the integrated models in support of co-production of scientific information for decision making, we briefly describe three current case studies addressing different spatial scales of analysis: 1) Effects of climate change on the water balance of the Great Lakes, 2) Future hydropower resources in the St. Joseph River basin, 3) Effects of climate change on carbon cycling in small lakes in the Northern Highland Lakes District.
Abdoulhadi, Dalia; Chevalet, Pascal; Moret, Leila; Fix, Marie-Hélène; Gégu, Marine; Jaulin, Philippe; Berrut, Gilles; de Decker, Laure
2015-03-01
The patient population staying in nursing homes is increasingly vulnerable and dependent and should benefit from direct access to an acute care geriatric unit. Nevertheless, the ease of access by a simple phone call from the general practitioner to the geriatrician, as well as the lack of triage of these patients by emergency units, might lead to inappropriate admissions. This work studied the appropriateness of the direct admission of 40 nursing home residents to an acute care geriatric unit. Based on the AEPf assessment grid, 82.5% of these admissions were considered appropriate (52.5%) or justified (30%, based on an expert panel decision), and 17.5% were inappropriate. In conclusion, the process of direct admission does not seem to increase the rate of inappropriate admissions. Some actions could decrease this rate: implementation of geriatric or psychogeriatric mobile teams intervening in nursing homes, better and more suitable use of ambulatory structures, and better information for general practitioners. In order to reduce the need for the expert panel's intervention, an adaptation of the AEPf assessment grid to these geriatric patients has been proposed. The "AEPg" assessment grid should benefit from a validation study.
The International Symposium on Grids and Clouds and the Open Grid Forum
NASA Astrophysics Data System (ADS)
The International Symposium on Grids and Clouds 2011 was held at Academia Sinica in Taipei, Taiwan from 19th to 25th March 2011. A series of workshops and tutorials preceded the symposium. The aim of ISGC is to promote the use of grid and cloud computing in the Asia Pacific region. Over the 9 years that ISGC has been running, the programme has evolved to become more user-community focused, with subjects reaching out to a larger population. Research communities are making widespread use of distributed computing facilities. Linking together data centers, production grids, desktop systems or public clouds, many researchers are able to do more research and produce results more quickly. They could do much more if the computing infrastructures they use worked together more effectively. Changes in the way we approach distributed computing, and new services from commercial providers, mean that boundaries are starting to blur. This opens the way for hybrid solutions that make it easier for researchers to get their job done. Consequently, the theme for ISGC 2011 was the opportunities that better integrated computing infrastructures can bring, and the steps needed to achieve the vision of a seamless global research infrastructure. 2011 is a year of firsts for ISGC. First, the title: while the acronym remains the same, its meaning has changed to reflect the evolution of computing: The International Symposium on Grids and Clouds. Secondly, the programming: ISGC has always included topical workshops and tutorials, but 2011 is the first year that ISGC has been held in conjunction with the Open Grid Forum, which held its 31st meeting with a series of working group sessions. The ISGC plenary session included keynote speakers from OGF who highlighted the relevance of standards for the research community. ISGC, with its focus on applications and operational aspects, complemented well OGF's focus on standards development. 
ISGC brought to OGF real-life use cases and needs to be addressed, while OGF presented the state of current developments and the issues to be resolved if commonalities are to be exploited. Another first concerns the Proceedings: for 2011, an open-access online publishing scheme will ensure that the Proceedings appear more quickly and reach more people, providing a long-term online archive of the event. The symposium attracted more than 212 participants from 29 countries spanning Asia, Europe and the Americas. Coming so soon after the earthquake and tsunami in Japan, the participation of our Japanese colleagues was particularly appreciated. Keynotes by invited speakers highlighted the impact of distributed computing infrastructures in the social sciences and humanities, high energy physics, and the earth and life sciences. Plenary sessions entitled Grid Activities in Asia Pacific surveyed the state of grid deployment across 11 Asian countries. Through the parallel sessions, the impact of distributed computing infrastructures in a range of research disciplines was highlighted. Operational procedures, middleware and security aspects were addressed in dedicated sessions. The symposium was covered online in real time by the GridCast team from the GridTalk project, with a running blog including summaries of specific sessions, video interviews with keynote speakers and personalities, and photos. As in all regions of the world, grid and cloud computing has to prove that it adds value to researchers if it is to be accepted by them, and demonstrate its impact on society as a whole if it is to be supported by national governments, funding agencies and the general public. ISGC has helped foster the emergence of a strong regional interest in the earth and life sciences, notably for natural disaster mitigation and bioinformatics studies. Prof. Simon C. 
Lin organised an intense social programme with a gastronomic tour of Taipei culminating with a banquet for all the symposium's participants at the hotel Palais de Chine. I would like to thank all the members of the programme committee, the participants and above all our hosts, Prof. Simon C. Lin and his excellent support team at Academia Sinica. Dr. Bob Jones Programme Chair 1 http://event.twgrid.org/isgc2011/ 2 http://www.gridforum.org/
NASA Astrophysics Data System (ADS)
Sorteberg, Hilleborg K.
2010-05-01
In the hydropower industry, it is important to have precise information about snow deposits at all times, to allow for effective planning and optimal use of the water. In Norway, it is common to measure snow density using a manual method, i.e. the depth and weight of the snow are measured. In recent years, radar measurements have been taken from snowmobiles; however, few energy supply companies use this method operationally - it has mostly been used in connection with research projects. Agder Energi is the first Norwegian power producer to use radar technology from helicopters to monitor mountain snow levels. Measurement accuracy is crucial when obtaining input data for snow reservoir estimates. Radar screening by helicopter makes remote areas more easily accessible and provides larger quantities of data than traditional ground-level measurement methods. In drawing up a snow survey system, it is assumed that the snow distribution is influenced by vegetation, climate and topography. In order to take these factors into consideration, a snow survey system for fields in high mountain areas has been designed in which the data collection is carried out by following the lines of a grid system. The lines of this grid system are placed so as to effectively capture the distribution of elevation, x-coordinates, y-coordinates, aspect, slope and curvature in the field. Variations in climatic conditions are also captured better when using a grid, and dominant weather patterns will largely be captured by this measurement system.
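Two of the terrain attributes the grid is designed to sample, slope and aspect, can be derived from a gridded elevation model by central differences. A sketch with a made-up 3x3 elevation patch (conventions and values are illustrative only):

```python
# Sketch: slope and aspect at an interior cell of a gridded elevation model,
# via central differences. The DEM values and cell size below are invented.
from math import atan2, degrees, hypot

def slope_aspect(dem, i, j, cell):
    """Slope (degrees) and downslope aspect (degrees, illustrative convention)."""
    dz_dx = (dem[i][j + 1] - dem[i][j - 1]) / (2 * cell)
    dz_dy = (dem[i + 1][j] - dem[i - 1][j]) / (2 * cell)
    slope = degrees(atan2(hypot(dz_dx, dz_dy), 1.0))
    aspect = (degrees(atan2(dz_dx, dz_dy)) + 180.0) % 360.0
    return slope, aspect

dem = [[100.0, 101.0, 102.0],
       [100.0, 101.0, 102.0],
       [100.0, 101.0, 102.0]]
s, a = slope_aspect(dem, 1, 1, cell=10.0)  # terrain rising uniformly eastward
```

In a survey design, such attributes would be computed over the whole field so the grid lines can be placed to span their observed distributions.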
Model Data Interoperability for the United States Integrated Ocean Observing System (IOOS)
NASA Astrophysics Data System (ADS)
Signell, Richard P.
2010-05-01
Model data interoperability for the United States Integrated Ocean Observing System (IOOS) was initiated with a focused one year project. The problem was that there were many regional and national providers of oceanographic model data; each had unique file conventions, distribution techniques and analysis tools that made it difficult to compare model results and observational data. To solve this problem, a distributed system was built utilizing a customized middleware layer and a common data model. This allowed each model data provider to keep their existing model and data files unchanged, yet deliver model data via web services in a common form. With standards-based applications that used these web services, end users then had a common way to access data from any of the models. These applications included: (1) 2D mapping and animation in a web browser, (2) advanced 3D visualization and animation in a desktop application, and (3) a toolkit for a common scientific analysis environment. Due to the flexibility and low impact of the approach on providers, rapid progress was made. The system was implemented in all eleven US IOOS regions and at the NOAA National Coastal Data Development Center, allowing common delivery of regional and national oceanographic model forecast and archived results that cover all US waters. The system, based heavily on software technology from the NSF-sponsored Unidata Program Center, is applicable to any structured gridded data, not just oceanographic model data. There is a clear pathway to expand the system to include unstructured grid (e.g. triangular grid) data.
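The core idea above - each provider keeps its native file conventions while a middleware layer exposes everything under one common data model - can be sketched as a small adapter. The class, the native variable names, and the CF-style standard name are illustrative assumptions, not the project's actual API.

```python
class ModelAdapter:
    """Middleware wrapper: exposes a provider's native model output
    under a shared vocabulary of standard variable names."""

    def __init__(self, data, name_map):
        self._data = data      # provider's native variables, unchanged
        self._map = name_map   # standard name -> provider-specific name

    def get(self, standard_name):
        """Fetch a variable by its standard name, whatever the provider calls it."""
        return self._data[self._map[standard_name]]

# Two hypothetical providers with different native conventions.
roms = ModelAdapter({"temp": [12.1, 12.3]},
                    {"sea_water_temperature": "temp"})
ncom = ModelAdapter({"water_temp": [11.8, 12.0]},
                    {"sea_water_temperature": "water_temp"})

# End-user applications query both models the same way.
surface_temps = [m.get("sea_water_temperature")[0] for m in (roms, ncom)]
```

Because only the thin adapter differs per provider, client applications (mapping, 3D visualization, analysis toolkits) need to be written once against the common vocabulary - the low-impact property the abstract credits for the project's rapid progress.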
Architectural Aspects of Grid Computing and its Global Prospects for E-Science Community
NASA Astrophysics Data System (ADS)
Ahmad, Mushtaq
2008-05-01
The paper reviews the architectural aspects of Grid Computing for the e-Science community, covering scientific research and business/commercial collaboration beyond physical boundaries. Grid Computing provides the facilities needed for collaboration on research projects around the globe: hardware, software, communication interfaces, high-speed internet, safe authentication and a secure environment. It offers a very fast compute engine for scientific and engineering research projects and business/commercial applications that are heavily compute-intensive and/or require enormous amounts of data. It also enables the use of advanced methodologies, simulation models, expert systems and the wealth of knowledge available around the globe under the umbrella of knowledge sharing, helping realize the dream of a global village for the benefit of the e-Science community.
Enhanced Cyberspace Defense through Covert Publish-Subscribe Broker Pattern Communications
2008-06-01
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-22
...: Antrim Micro-Hydropower Project. f. Location: The proposed Antrim Micro-Hydropower Project will be..., protests, and/or motions filed. k. Description of Project: The proposed Antrim Micro-Hydropower Project... and the project will not be connected to an interstate grid. When a Declaration of Intention is filed...
78 FR 24190 - Transcontinental Gas Pipe Line Company, LLC; Notice of Application
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-24
... Northeast Connector Project (Project) in New York. The Project is an expansion of Transco's existing... Rockaway Delivery Lateral. The Project will include compressor unit modifications and the net addition of... required. In addition to the firm service to be provided by the Project, National Grid NY can use its...
Violi, Ianina L; Perez, M Dolores; Fuertes, M Cecilia; Soler-Illia, Galo J A A
2012-08-01
Highly porous (V(mesopore) = 25-50%) and ordered mesoporous titania thin films (MTTF) were prepared on ITO (indium tin oxide)-covered glass by a fast two-step method. The effects of substrate surface modification and thermal treatment on pore order, accessibility and crystallinity of the MTTF were systematically studied for MTTF deposited onto bare and titania-modified ITO. MTTF exposed briefly to 550 °C resulted in highly ordered films with grid-like structures, enlarged pore size, and increased accessible pore volume when prepared on the modified ITO substrate. Mesostructure collapse and no significant change in pore volume were observed for MTTF deposited on bare ITO substrates. Highly crystalline anatase was obtained for MTTF prepared on modified ITO treated at high temperatures, establishing the relationship between grid-like structures and titania crystallization. Photocatalytic activity was maximized for samples with increased crystallization and high accessible pore volume. In this manner, a simple way of designing materials with optimized characteristics for optoelectronic applications was achieved through the modification of the ITO surface and a controlled thermal treatment.
33 CFR 3.01-1 - General description.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Administration (NOAA) using the NAD 1983 coordinate system and projected to the WGS 1984 grid system. Both coordinate systems are geocentric and similar such that they are Global Positioning System (GPS) compatible... based upon boundaries and points located using the WGS 1984 world grid system. When referenced, the...