Sample records for core grid services

  1. FermiGrid - experience and future plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chadwick, K.; Berman, E.; Canal, P.

    2007-09-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site-wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure: the successes and the problems.

  2. FermiGrid—experience and future plans

    NASA Astrophysics Data System (ADS)

    Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Sharma, N.; Timm, S.; Yocum, D. R.

    2008-07-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site-wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure: the successes and the problems.

  3. caGrid 1.0: an enterprise Grid infrastructure for biomedical research.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oster, S.; Langella, S.; Hastings, S.

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design: An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including (1) discovery, (2) integrated and large-scale data analysis, and (3) coordinated study. Measurements: The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results: The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: .

  4. Grids, virtualization, and clouds at Fermilab

    DOE PAGES

    Timm, S.; Chadwick, K.; Garzoglio, G.; ...

    2014-06-11

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). Lastly, this work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  5. Grids, virtualization, and clouds at Fermilab

    NASA Astrophysics Data System (ADS)

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  6. caGrid 1.0: An Enterprise Grid Infrastructure for Biomedical Research

    PubMed Central

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    Objective To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. Measurements The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. Conclusions While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community. PMID:18096909

  7. caGrid 1.0: an enterprise Grid infrastructure for biomedical research.

    PubMed

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community.

  8. Grid Technology as a Cyber Infrastructure for Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Hinke, Thomas H.

    2004-01-01

    This paper describes how grids and grid service technologies can be used to develop an infrastructure for the Earth Science community. This cyberinfrastructure would be populated with a hierarchy of services, including discipline-specific services such as those needed by the Earth Science community, as well as a set of core services that are needed by most applications. This core would include data-oriented services used for accessing and moving data as well as computer-oriented services used to broker access to resources and control the execution of tasks on the grid. The availability of such an Earth Science cyberinfrastructure would ease the development of Earth Science applications. With such a cyberinfrastructure, application workflows could be created to extract data from one or more of the Earth Science archives and then process it by passing it through various persistent services that are part of the cyberinfrastructure, such as services to perform subsetting, reformatting, data mining and map projections.
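    The workflow idea sketched in this abstract, chaining persistent services such as subsetting and reformatting, can be illustrated as a simple service pipeline. The stage names and data layout below are hypothetical illustrations, not part of the described cyberinfrastructure:

```python
# Hypothetical sketch of a service-pipeline workflow in the style described
# above: each stage stands in for an independent "grid service" applied to
# the output of the previous one. Stage names (subset, reformat) are
# illustrative only.

def subset(data, bbox):
    """Keep only points inside a (lon_min, lat_min, lon_max, lat_max) box."""
    x0, y0, x1, y1 = bbox
    return [p for p in data if x0 <= p["lon"] <= x1 and y0 <= p["lat"] <= y1]

def reformat(data):
    """Flatten records to (lon, lat, value) tuples."""
    return [(p["lon"], p["lat"], p["value"]) for p in data]

def run_workflow(data, stages):
    """Apply each service stage in sequence, like a chained workflow."""
    for stage in stages:
        data = stage(data)
    return data

points = [
    {"lon": 10.0, "lat": 50.0, "value": 1.5},
    {"lon": 90.0, "lat": 10.0, "value": 2.5},
]
result = run_workflow(points, [
    lambda d: subset(d, (0.0, 40.0, 20.0, 60.0)),  # only the first point fits
    reformat,
])
print(result)
```

    In a real grid deployment each stage would be a remote, persistent service invocation rather than a local function call; the composition pattern is the same.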

  9. Integrating Grid Services into the Cray XT4 Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NERSC; Cholia, Shreyas; Lin, Hwa-Chun Wendy

    2009-05-01

    The 38,640-core Cray XT4 "Franklin" system at the National Energy Research Scientific Computing Center (NERSC) is a massively parallel resource available to Department of Energy researchers that also provides on-demand grid computing to the Open Science Grid. The integration of grid services on Franklin presented various challenges, including fundamental differences between the interactive and compute nodes, a stripped-down compute-node operating system without dynamic library support, a shared-root environment and idiosyncratic application launching. In our work, we describe how we resolved these challenges on a running, general-purpose production system to provide on-demand compute, storage, accounting and monitoring services through generic grid interfaces that mask the underlying system-specific details for the end user.

  10. Grid Enabled Geospatial Catalogue Web Service

    NASA Technical Reports Server (NTRS)

    Chen, Ai-Jun; Di, Li-Ping; Wei, Ya-Xing; Liu, Yang; Bui, Yu-Qi; Hu, Chau-Min; Mehrotra, Piyush

    2004-01-01

    Geospatial Catalogue Web Service is a vital service for sharing and interoperating volumes of distributed heterogeneous geospatial resources, such as data, services, applications, and their replicas over the web. Based on Grid technology and the Open Geospatial Consortium (OGC) Catalogue Service - Web Information Model, this paper proposes a new information model for Geospatial Catalogue Web Service, named GCWS, which can securely provide Grid-based publishing, managing and querying of geospatial data and services, and transparent access to replica data and related services under the Grid environment. This information model integrates the information model of the Grid Replica Location Service (RLS)/Monitoring & Discovery Service (MDS) with the information model of the OGC Catalogue Service (CSW), and refers to the geospatial data metadata standards from ISO 19115, FGDC and the NASA EOS Core System, and the service metadata standards from ISO 19119, to extend itself for expressing geospatial resources. Using GCWS, any valid geospatial user who belongs to an authorized Virtual Organization (VO) can securely publish and manage geospatial resources, and in particular query on-demand data in the virtual community and retrieve it through data-related services that provide functions such as subsetting, reformatting, reprojection, etc. This work facilitates geospatial resource sharing and interoperation under the Grid environment, making geospatial resources Grid-enabled and Grid technologies geospatially enabled. It also allows researchers to focus on science, and not on issues with computing capacity, data location, processing and management. GCWS is also a key component for workflow-based virtual geospatial data production.

  11. A data grid for imaging-based clinical trials

    NASA Astrophysics Data System (ADS)

    Zhou, Zheng; Chao, Sander S.; Lee, Jasper; Liu, Brent; Documet, Jorge; Huang, H. K.

    2007-03-01

    Clinical trials play a crucial role in testing new drugs or devices in modern medicine. Medical imaging has also become an important tool in clinical trials because images provide a unique and fast diagnosis with visual observation and quantitative assessment. A typical imaging-based clinical trial consists of: 1) a well-defined rigorous clinical trial protocol; 2) a radiology core that has a quality control mechanism, a biostatistics component, and a server for storing and distributing data and analysis results; and 3) many field sites that generate and send image studies to the radiology core. As the number of clinical trials increases, it becomes a challenge for a radiology core servicing multiple trials to have a server robust enough to administrate and quickly distribute information to participating radiologists/clinicians worldwide. The Data Grid can satisfy the aforementioned requirements of imaging-based clinical trials. In this paper, we present a Data Grid architecture for imaging-based clinical trials. A Data Grid prototype has been implemented in the Image Processing and Informatics (IPI) Laboratory at the University of Southern California to test and evaluate performance in storing trial images and analysis results for a clinical trial. The implementation methodology and evaluation protocol of the Data Grid are presented.

  12. Sharing Data and Analytical Resources Securely in a Biomedical Research Grid Environment

    PubMed Central

    Langella, Stephen; Hastings, Shannon; Oster, Scott; Pan, Tony; Sharma, Ashish; Permar, Justin; Ervin, David; Cambazoglu, B. Barla; Kurc, Tahsin; Saltz, Joel

    2008-01-01

    Objectives To develop a security infrastructure to support controlled and secure access to data and analytical resources in a biomedical research Grid environment, while facilitating resource sharing among collaborators. Design A Grid security infrastructure, called Grid Authentication and Authorization with Reliably Distributed Services (GAARDS), is developed as a key architecture component of the NCI-funded cancer Biomedical Informatics Grid (caBIG™). The GAARDS is designed to support in a distributed environment 1) efficient provisioning and federation of user identities and credentials; 2) group-based access control support with which resource providers can enforce policies based on community accepted groups and local groups; and 3) management of a trust fabric so that policies can be enforced based on required levels of assurance. Measurements GAARDS is implemented as a suite of Grid services and administrative tools. It provides three core services: Dorian for management and federation of user identities, Grid Trust Service for maintaining and provisioning a federated trust fabric within the Grid environment, and Grid Grouper for enforcing authorization policies based on both local and Grid-level groups. Results The GAARDS infrastructure is available as a stand-alone system and as a component of the caGrid infrastructure. More information about GAARDS can be accessed at http://www.cagrid.org. Conclusions GAARDS provides a comprehensive system to address the security challenges associated with environments in which resources may be located at different sites, requests to access the resources may cross institutional boundaries, and user credentials are created, managed, and revoked dynamically in a de-centralized manner. PMID:18308979
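    The group-based access control this abstract attributes to Grid Grouper can be sketched in a few lines: a resource policy names the groups whose members may access it. The group names, policy, and users below are hypothetical illustrations, not GAARDS data or its API:

```python
# Minimal sketch of group-based authorization in the spirit described
# above: membership is resolved through groups, never per-user policies.
# All names here are hypothetical.

GROUPS = {
    "caBIG:researchers": {"alice", "bob"},
    "local:admins": {"carol"},
}

POLICY = {  # resource -> set of groups allowed to access it
    "imaging-data": {"caBIG:researchers", "local:admins"},
}

def can_access(user, resource):
    """True if the user belongs to any group the policy allows."""
    allowed = POLICY.get(resource, set())
    return any(user in GROUPS.get(g, set()) for g in allowed)

print(can_access("alice", "imaging-data"))    # member of an allowed group
print(can_access("mallory", "imaging-data"))  # in no allowed group
```

    The design point is that resource providers enforce policy against community-accepted and local groups, so adding a collaborator means updating one group membership rather than touching every resource's policy.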

  13. Grid-wide neuroimaging data federation in the context of the NeuroLOG project

    PubMed Central

    Michel, Franck; Gaignard, Alban; Ahmad, Farooq; Barillot, Christian; Batrancourt, Bénédicte; Dojat, Michel; Gibaud, Bernard; Girard, Pascal; Godard, David; Kassel, Gilles; Lingrand, Diane; Malandain, Grégoire; Montagnat, Johan; Pélégrini-Issac, Mélanie; Pennec, Xavier; Rojas Balderrama, Javier; Wali, Bacem

    2010-01-01

    Grid technologies are appealing to deal with the challenges raised by computational neurosciences and support multi-centric brain studies. However, core grid middleware hardly copes with the complex neuroimaging data representation and multi-layer data federation needs. Moreover, legacy neuroscience environments need to be preserved and cannot simply be superseded by grid services. This paper describes the NeuroLOG platform design and implementation, shedding light on its Data Management Layer. It addresses the integration of brain image files, associated relational metadata and neuroscience semantic data in a heterogeneous distributed environment, integrating legacy data managers through a mediation layer. PMID:20543431

  14. The caCORE Software Development Kit: streamlining construction of interoperable biomedical information services.

    PubMed

    Phillips, Joshua; Chilukuri, Ram; Fragoso, Gilberto; Warzel, Denise; Covitz, Peter A

    2006-01-06

    Robust, programmatically accessible biomedical information services that syntactically and semantically interoperate with other resources are challenging to construct. Such systems require the adoption of common information models, data representations and terminology standards as well as documented application programming interfaces (APIs). The National Cancer Institute (NCI) developed the cancer common ontologic representation environment (caCORE) to provide the infrastructure necessary to achieve interoperability across the systems it develops or sponsors. The caCORE Software Development Kit (SDK) was designed to provide developers both within and outside the NCI with the tools needed to construct such interoperable software systems. The caCORE SDK requires a Unified Modeling Language (UML) tool to begin the development workflow with the construction of a domain information model in the form of a UML Class Diagram. Models are annotated with concepts and definitions from a description logic terminology source using the Semantic Connector component. The annotated model is registered in the Cancer Data Standards Repository (caDSR) using the UML Loader component. System software is automatically generated using the Codegen component, which produces middleware that runs on an application server. The caCORE SDK was initially tested and validated using a seven-class UML model, and has been used to generate the caCORE production system, which includes models with dozens of classes. The deployed system supports access through object-oriented APIs with consistent syntax for retrieval of any type of data object across all classes in the original UML model. The caCORE SDK is currently being used by several development teams, including by participants in the cancer biomedical informatics grid (caBIG) program, to create compatible data services. 
caBIG compatibility standards are based upon caCORE resources, and thus the caCORE SDK has emerged as a key enabling technology for caBIG. The caCORE SDK substantially lowers the barrier to implementing systems that are syntactically and semantically interoperable by providing workflow and automation tools that standardize and expedite modeling, development, and deployment. It has gained acceptance among developers in the caBIG program, and is expected to provide a common mechanism for creating data service nodes on the data grid that is under development.

  15. Grid Task Execution

    NASA Technical Reports Server (NTRS)

    Hu, Chaumin

    2007-01-01

    IPG Execution Service is a framework that reliably executes complex jobs on a computational grid, and is part of the IPG service architecture designed to support location-independent computing. The new grid service enables users to describe the platform on which they need a job to run, which allows the service to locate the desired platform, configure it for the required application, and execute the job. After a job is submitted, users can monitor it through periodic notifications, or through queries. Each job consists of a set of tasks that performs actions such as executing applications and managing data. Each task is executed based on a starting condition that is an expression of the states of other tasks. This formulation allows tasks to be executed in parallel, and also allows a user to specify tasks to execute when other tasks succeed, fail, or are canceled. The two core components of the Execution Service are the Task Database, which stores tasks that have been submitted for execution, and the Task Manager, which executes tasks in the proper order, based on the user-specified starting conditions, and avoids overloading local and remote resources while executing tasks.
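    The "starting condition" mechanism described above, where each task runs once an expression over the states of other tasks becomes true, can be sketched as a small state-driven runner. The task names, state values, and condition format below are hypothetical illustrations, not the IPG Execution Service's actual interfaces:

```python
# Hypothetical sketch of starting-condition task execution as described
# above: a task stays pending until its condition over the other tasks'
# states is satisfied, which naturally expresses run-on-success,
# run-on-failure, and parallel execution.

def run_tasks(tasks):
    """tasks maps name -> (condition, action); condition takes the current
    states dict, action returns 'succeeded' or 'failed'."""
    states = {name: "pending" for name in tasks}
    progress = True
    while progress:  # keep sweeping until no task becomes runnable
        progress = False
        for name, (condition, action) in tasks.items():
            if states[name] == "pending" and condition(states):
                states[name] = action()
                progress = True
    return states

states = run_tasks({
    "stage_data": (lambda s: True, lambda: "succeeded"),
    "run_app": (lambda s: s["stage_data"] == "succeeded",
                lambda: "succeeded"),
    # cleanup starts whether run_app succeeded or failed
    "cleanup": (lambda s: s["run_app"] in ("succeeded", "failed"),
                lambda: "succeeded"),
})
print(states)
```

    A production service would also persist tasks (the Task Database role) and throttle submissions to avoid overloading resources (the Task Manager role); the sketch covers only the ordering logic.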

  16. Design and implementation of spatial knowledge grid for integrated spatial analysis

    NASA Astrophysics Data System (ADS)

    Liu, Xiangnan; Guan, Li; Wang, Ping

    2006-10-01

    Supported by a spatial information grid (SIG), the spatial knowledge grid (SKG) for integrated spatial analysis uses middleware technology to construct the spatial information grid computation environment and spatial information service system, develops spatial-entity-oriented spatial data organization technology, and carries out deep computation of spatial structure and spatial process patterns on the basis of the Grid GIS infrastructure, the spatial data grid and the spatial information grid (in the specialized sense). At the same time, it realizes complex spatial pattern expression and spatial function process simulation by taking the spatial intelligent agent as the core of proactive spatial computation. Moreover, through the establishment of a virtual geographical environment with man-machine interactivity and blending, complex spatial modeling, networked cooperative work and knowledge-driven spatial community decision-making are achieved. The framework of SKG is discussed systematically in this paper, and its implementation flow and key technologies are presented with examples of overlay analysis.

  17. Outlook for grid service technologies within the @neurIST eHealth environment.

    PubMed

    Arbona, A; Benkner, S; Fingberg, J; Frangi, A F; Hofmann, M; Hose, D R; Lonsdale, G; Ruefenacht, D; Viceconti, M

    2006-01-01

    The aim of the @neurIST project is to create an IT infrastructure for the management of all processes linked to research, diagnosis and treatment development for complex and multi-factorial diseases. The IT infrastructure will be developed for one such disease, cerebral aneurysm and subarachnoid haemorrhage, but its core technologies will be transferable to meet the needs of other medical areas. Since the IT infrastructure for @neurIST will need to encompass data repositories, computational analysis services and information systems handling multi-scale, multi-modal information at distributed sites, the natural basis for the IT infrastructure is a Grid Service middleware. The project will adopt a service-oriented architecture because it aims to provide a system addressing the needs of medical researchers, clinicians and health care specialists (and their IT providers/systems) and medical supplier/consulting industries.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sparn, Bethany; Hunsberger, Randolph

    Water and wastewater treatment plants and distribution systems use significant amounts of energy, around 2-4% of the total electricity used in the US, and their energy use is projected to increase as populations increase and regulations become more stringent. Water and wastewater systems have largely been disconnected from the electric utilities' efforts to improve energy efficiency and provide grid services, likely because their core mission is to provide clean water and treated wastewater. Energy efficiency has slowly crept into the water and wastewater industry as the economic benefit has become more apparent, but there is still potential for significant improvement. Some of the larger, more progressive water utilities are starting to consider providing grid services; however, it remains a foreign concept to many. This report explores intrinsic mechanisms by which the water and wastewater industries can provide exchangeable services, the benefit to the parties involved, and the barriers to implementation. It also highlights relevant case studies and next steps. Although opportunities for increasing process efficiencies are certainly available, this report focuses on the exchangeable services that water and wastewater loads can provide to help maintain grid reliability, keep overall costs down, and increase the penetration of distributed renewables on the electric grid. These services have potential to provide water utilities additional value streams, using existing equipment with modest or negligible upgrade cost.

  19. The caCORE Software Development Kit: Streamlining construction of interoperable biomedical information services

    PubMed Central

    Phillips, Joshua; Chilukuri, Ram; Fragoso, Gilberto; Warzel, Denise; Covitz, Peter A

    2006-01-01

    Background Robust, programmatically accessible biomedical information services that syntactically and semantically interoperate with other resources are challenging to construct. Such systems require the adoption of common information models, data representations and terminology standards as well as documented application programming interfaces (APIs). The National Cancer Institute (NCI) developed the cancer common ontologic representation environment (caCORE) to provide the infrastructure necessary to achieve interoperability across the systems it develops or sponsors. The caCORE Software Development Kit (SDK) was designed to provide developers both within and outside the NCI with the tools needed to construct such interoperable software systems. Results The caCORE SDK requires a Unified Modeling Language (UML) tool to begin the development workflow with the construction of a domain information model in the form of a UML Class Diagram. Models are annotated with concepts and definitions from a description logic terminology source using the Semantic Connector component. The annotated model is registered in the Cancer Data Standards Repository (caDSR) using the UML Loader component. System software is automatically generated using the Codegen component, which produces middleware that runs on an application server. The caCORE SDK was initially tested and validated using a seven-class UML model, and has been used to generate the caCORE production system, which includes models with dozens of classes. The deployed system supports access through object-oriented APIs with consistent syntax for retrieval of any type of data object across all classes in the original UML model. The caCORE SDK is currently being used by several development teams, including by participants in the cancer biomedical informatics grid (caBIG) program, to create compatible data services. 
caBIG compatibility standards are based upon caCORE resources, and thus the caCORE SDK has emerged as a key enabling technology for caBIG. Conclusion The caCORE SDK substantially lowers the barrier to implementing systems that are syntactically and semantically interoperable by providing workflow and automation tools that standardize and expedite modeling, development, and deployment. It has gained acceptance among developers in the caBIG program, and is expected to provide a common mechanism for creating data service nodes on the data grid that is under development. PMID:16398930

  20. Big Geo Data Services: From More Bytes to More Barrels

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Baumann, Peter

    2016-04-01

    The data deluge is affecting the oil and gas industry just as much as many other industries. Aside from sheer volume, however, there is the challenge of data variety: regular and irregular grids, multi-dimensional space/time grids, point clouds, and TINs and other meshes. A uniform conceptualization for modelling and serving them could save substantial effort, such as the proverbial "department of reformatting". The notion of a coverage can accomplish this. Its abstract model in ISO 19123, together with the concrete, interoperable OGC Coverage Implementation Schema (CIS), currently under adoption as ISO 19123-2, provides a common platform for representing any n-D grid type, point clouds, and general meshes. This is paired with the OGC Web Coverage Service (WCS) and its datacube analytics language, the OGC Web Coverage Processing Service (WCPS). The OGC WCS Core Reference Implementation, rasdaman, relies on Array Database technology, i.e. a NewSQL/NoSQL approach. It supports the grid part of coverages, with installations of 100+ TB known and single queries parallelized across 1,000+ cloud nodes. Recent research attempts to address the point cloud and mesh part through a unified query model. The envisioned Holy Grail is that these approaches can eventually be merged into a single service interface. We present both grid and point cloud / mesh approaches and discuss status, implementation, standardization, and research perspectives, including a live demo.
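
    As a rough illustration of the WCS/WCPS interface described above, the sketch below assembles a `ProcessCoverages` request carrying a WCPS query. The endpoint URL, coverage name, and axis labels are hypothetical placeholders, not taken from any deployed rasdaman server; the query shape follows the OGC WCPS "for ... return ..." pattern.

```python
from urllib.parse import urlencode

# Hypothetical service endpoint; a real deployment would substitute its own URL.
ENDPOINT = "https://example.org/rasdaman/ows"

def build_wcps_request(coverage, lat, lon, func="avg"):
    """Build a WCS ProcessCoverages request URL carrying a WCPS query.

    The WCPS expression aggregates a coverage over a lat/long subset.
    Coverage and axis names here are illustrative assumptions.
    """
    query = (
        f"for $c in ({coverage}) "
        f"return {func}($c[Lat({lat[0]}:{lat[1]}), Long({lon[0]}:{lon[1]})])"
    )
    params = {
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",
        "query": query,
    }
    return ENDPOINT + "?" + urlencode(params)

url = build_wcps_request("SeismicAmplitudeCube", (40.0, 42.0), (10.0, 12.0))
```

    The resulting URL could then be fetched with `urllib.request.urlopen`; only the request construction is shown here, since the server is fictitious.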

  1. Whooping crane stopover site use intensity within the Great Plains

    USGS Publications Warehouse

    Pearse, Aaron T.; Brandt, David A.; Harrell, Wade C.; Metzger, Kristine L.; Baasch, David M.; Hefley, Trevor J.

    2015-09-23

    Whooping cranes (Grus americana) of the Aransas-Wood Buffalo population migrate twice each year through the Great Plains in North America. Recovery activities for this endangered species include providing adequate places to stop and rest during migration, which are generally referred to as stopover sites. To assist in recovery efforts, initial estimates of stopover site use intensity are presented, which provide opportunity to identify areas across the migration range used more intensively by whooping cranes. We used location data acquired from 58 unique individuals fitted with platform transmitting terminals that collected global position system locations. Radio-tagged birds provided 2,158 stopover sites over 10 migrations and 5 years (2010–14). Using a grid-based approach, we identified 1,095 20-square-kilometer grid cells that contained stopover sites. We categorized occupied grid cells based on density of stopover sites and the amount of time cranes spent in the area. This assessment resulted in four categories of stopover site use: unoccupied, low intensity, core intensity, and extended-use core intensity. Although provisional, this evaluation of stopover site use intensity offers the U.S. Fish and Wildlife Service and partners a tool to identify landscapes that may be of greater conservation significance to migrating whooping cranes. Initially, the tool will be used by the U.S. Fish and Wildlife Service and other interested parties in evaluating the Great Plains Wind Energy Habitat Conservation Plan.
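
    The grid-based approach in this record can be sketched as a binning-and-classification step. The code below is a simplified illustration, not the USGS method: the coordinate handling assumes a planar projection in kilometers, and the category thresholds are made-up placeholders (the report derives its categories from both stopover-site density and time spent in the area).

```python
import math
from collections import Counter

CELL_AREA_KM2 = 20.0                     # cell size used in the assessment
CELL_SIDE_KM = math.sqrt(CELL_AREA_KM2)  # ~4.47 km for a square cell

def bin_stopovers(points_km):
    """Count stopover sites per square grid cell.

    points_km: iterable of (x, y) coordinates in kilometers on a planar
    projection. Returns a Counter mapping (col, row) cell indices to counts.
    """
    counts = Counter()
    for x, y in points_km:
        counts[(int(x // CELL_SIDE_KM), int(y // CELL_SIDE_KM))] += 1
    return counts

def categorize(count, low_max=2, core_max=5):
    # Thresholds are illustrative only, not the report's actual criteria.
    if count == 0:
        return "unoccupied"
    if count <= low_max:
        return "low intensity"
    if count <= core_max:
        return "core intensity"
    return "extended-use core intensity"
```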

  2. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability.

    PubMed

    Komatsoulis, George A; Warzel, Denise B; Hartel, Francis W; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; Coronado, Sherri de; Reeves, Dianne M; Hadfield, Jillaine B; Ludet, Christophe; Covitz, Peter A

    2008-02-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service-Oriented Architecture (SSOA) for cancer research by the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG).

  3. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability

    PubMed Central

    Komatsoulis, George A.; Warzel, Denise B.; Hartel, Frank W.; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; de Coronado, Sherri; Reeves, Dianne M.; Hadfield, Jillaine B.; Ludet, Christophe; Covitz, Peter A.

    2008-01-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service Oriented Architecture (SSOA) for cancer research by the National Cancer Institute’s cancer Biomedical Informatics Grid (caBIG™). PMID:17512259

  4. The StratusLab cloud distribution: Use-cases and support for scientific applications

    NASA Astrophysics Data System (ADS)

    Floros, E.

    2012-04-01

    The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions such as KVM, the OpenNebula virtual machine manager, the Claudia service manager and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely computing (life-cycle management of virtual machines), storage, appliance management and networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites; a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities such as elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. Towards this direction, we have developed and currently provide support for setting up general-purpose computing solutions such as Hadoop, MPI and Torque clusters. Regarding scientific applications, the project is collaborating closely with the bioinformatics community in order to prepare VM appliances and deploy optimized services for bioinformatics applications. In a similar manner, additional scientific disciplines such as Earth Science can take advantage of StratusLab cloud solutions. Interested users are welcome to join StratusLab's user community by getting access to the reference cloud services deployed by the project and offered to the public.

  5. A comparative analysis of dynamic grids vs. virtual grids using the A3pviGrid framework.

    PubMed

    Shankaranarayanan, Avinas; Amaldas, Christine

    2010-11-01

    With the proliferation of quad/multi-core microprocessors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service-oriented business broker, now termed cloud computing, could deploy image-based virtualization platforms enabling agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consumption of high-performance grid services. Spurious and exponential increases in the size of datasets are constant concerns in the medical and pharmaceutical industries due to the constant discovery and publication of large sequence databases. Traditional algorithms are not modeled to handle large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results with running the same test dataset on a virtual Grid platform using virtual machines (virtualization). The implemented architecture, A3pviGrid, utilizes game-theoretic optimization and agent-based team formation (coalition) algorithms to improve scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work), all interactions were made local within a team transparently. This paper is a proof of concept of an experimental mini-Grid test-bed compared to running the platform on local virtual machines on a local test cluster. This was done to give every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze the performance and scalability of BLAST in a multiple-virtual-node setup and present our findings. This paper is an extension of our previous research on improving the BLAST application framework using dynamic Grids on virtualization platforms such as VirtualBox.

  6. CMS Connect

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Bockelman, B.; Gardner, R., Jr.; Hurtado Anampa, K.; Jayatilaka, B.; Aftab Khan, F.; Lannon, K.; Larson, K.; Letts, J.; Marra Da Silva, J.; Mascheroni, M.; Mason, D.; Perez-Calero Yzquierdo, A.; Tiradani, A.

    2017-10-01

    The CMS experiment collects and analyzes large amounts of data coming from high energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled in batch-oriented platforms. The CMS Global Pool of computing resources provides more than 100K dedicated CPU cores, plus another 50K to 100K CPU cores from opportunistic resources, for these kinds of tasks. Even though production and event-processing analysis workflows are already managed by existing tools, there is still a lack of support for submitting final-stage Condor-like analysis jobs, familiar to Tier-3 or local computing facility users, into these distributed resources in a way that is user-friendly and integrated with other CMS services. CMS Connect is a set of computing tools and services designed to augment existing services in the CMS physics community, focusing on these Condor analysis jobs. It is based on the CI-Connect platform developed by the Open Science Grid and uses the CMS GlideInWMS infrastructure to transparently plug CMS global grid resources into a virtual pool accessed via a single submission machine. This paper describes the specific developments and deployment of CMS Connect beyond the CI-Connect platform to integrate the service with CMS-specific needs, including site-specific submission, job accounting, and automated reporting to standard CMS monitoring resources, in a way that is effortless for users.

  7. Spatial services grid

    NASA Astrophysics Data System (ADS)

    Cao, Jian; Li, Qi; Cheng, Jicheng

    2005-10-01

    This paper discusses the concept, key technologies and main applications of the Spatial Services Grid. The technologies of Grid computing and Web services are playing a revolutionary role in the study of spatial information services. The concept of the SSG (Spatial Services Grid) is put forward based on the SIG (Spatial Information Grid) and OGSA (Open Grid Services Architecture). Firstly, grid computing is reviewed, and the key technologies of the SIG and their main applications are reviewed. Secondly, grid computing and three kinds of SIG in the broad sense (the SDG, or spatial data grid; the SIG, or spatial information grid; and the SSG, or spatial services grid) and their relationships are proposed. Thirdly, the key technologies of the SSG are put forward. Finally, three representative applications of the SSG are discussed. The first is an urban location-based services grid, a typical spatial services grid that can be constructed on OGSA and a digital city platform. The second is a regional sustainable development grid, which is key to urban development. The third is a regional disaster and emergency management services grid.

  8. Nuclear reactor spacer grid and ductless core component

    DOEpatents

    Christiansen, David W.; Karnesky, Richard A.

    1989-01-01

    The invention relates to a nuclear reactor spacer grid member for use in a liquid cooled nuclear reactor and to a ductless core component employing a plurality of these spacer grid members. The spacer grid member is of the egg-shell type and is constructed so that the walls of the cell members of the grid member are formed of a single thickness of metal to avoid tolerance problems. Within each cell member is a hydraulic spring which laterally constrains the nuclear material bearing rod which passes through each cell member against a hardstop in response to coolant flow through the cell member. This hydraulic spring is also suitable for use in a water cooled nuclear reactor. A core component constructed of, among other components, a plurality of these spacer grid members, avoids the use of a full length duct by providing spacer sleeves about the sodium tubes passing through the spacer grid members at locations between the grid members, thereby maintaining a predetermined space between adjacent grid members.

  9. The International Solid Earth Research Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Fox, G.; Pierce, M.; Rundle, J.; Donnellan, A.; Parker, J.; Granat, R.; Lyzenga, G.; McLeod, D.; Grant, L.

    2004-12-01

    We describe the architecture and initial implementation of the International Solid Earth Research Virtual Observatory (iSERVO). This has been prototyped within the USA as SERVOGrid, and expansion is planned to Australia, China, Japan and other countries. We base our design on a globally scalable distributed "cyber-infrastructure", or Grid, built around a Web Services-based approach consistent with the extended Web Service Interoperability approach. The Solid Earth Science Working Group of NASA has identified several challenges for Earth Science research. In order to investigate these, we need to couple numerical simulation codes and data mining tools to observational data sets. These observational data are now available online in internet-accessible forms, and the quantity of data is expected to grow explosively over the next decade. We architect iSERVO as a loosely federated Grid of Grids, with each country involved supporting a national Solid Earth Research Grid. The national Grid operations, possibly with dedicated control centers, are linked together to support iSERVO, where an international Grid control center may eventually be necessary. We address the difficult multi-administrative-domain security and ownership issues by exposing capabilities as services for which the risk of abuse is minimized. We support large-scale simulations within a single domain using service-hosted tools (mesh generation, data repository and sensor access, GIS, visualization). Simulations typically involve sequential or parallel machines in a single domain supported by cross-continent services. We use Web Services to implement a Service-Oriented Architecture (SOA), using WSDL for service description and SOAP for message formats. These are augmented by UDDI, WS-Security, WS-Notification/Eventing and WS-ReliableMessaging in the WS-I+ approach. Support for the latter two capabilities will be available over the next 6 months from the NaradaBrokering messaging system. 
We augment these specifications with the powerful portlet architecture using WSRP and JSR 168, supported by portal containers such as uPortal, WebSphere, and Apache JetSpeed2. The latter portal aggregates component user interfaces for each iSERVO service, allowing flexible customization of the user interface. We exploit the portlets produced by the NSF NMI (Middleware Initiative) OGCE activity. iSERVO also uses specifications from the Open Geographical Information Systems (GIS) Consortium (OGC), which defines a number of standards for modeling earth surface feature data and services for interacting with these data. The data models are expressed in the XML-based Geography Markup Language (GML), and the OGC service framework is being adapted to use the Web Service model. The SERVO prototype includes a GIS Grid that currently includes the core WMS and WFS (Map and Feature) services. We will follow best practices in the Grid and Web Service field and will adapt our technology as appropriate. For example, we expect to support services built on WS-RF when it is finalized, and to make use of the OGSA-DAI database interfaces and their WS-I+ versions. Finally, we review advances in Web Service scripting (such as HPSearch) and workflow systems (such as GCF) and their applications to iSERVO.

  10. A high throughput geocomputing system for remote sensing quantitative retrieval and a case study

    NASA Astrophysics Data System (ADS)

    Xue, Yong; Chen, Ziqiang; Xu, Hui; Ai, Jianwen; Jiang, Shuzheng; Li, Yingjie; Wang, Ying; Guang, Jie; Mei, Linlu; Jiao, Xijuan; He, Xingwei; Hou, Tingting

    2011-12-01

    The quality and accuracy of remote sensing instruments have improved significantly; however, rapid processing of large-scale remote sensing data has become the bottleneck for remote sensing quantitative retrieval applications. Remote sensing quantitative retrieval is a data-intensive computation application, and is one of the research issues of high-throughput computation. The remote sensing quantitative retrieval Grid workflow is a high-level core component of the remote sensing Grid, used to support the modeling, reconstruction and implementation of large-scale complex applications of remote sensing science. In this paper, we study a middleware component of the remote sensing Grid: the dynamic Grid workflow, based on the remote sensing quantitative retrieval application on a Grid platform. We designed a novel architecture for the remote sensing Grid workflow. According to this architecture, we constructed the Remote Sensing Information Service Grid Node (RSSN) with Condor. We developed graphical user interface (GUI) tools to compose remote sensing processing Grid workflows, taking aerosol optical depth (AOD) retrieval as an example. The case study showed that significant improvement in system performance could be achieved with this implementation. The results also give a perspective on the potential of applying Grid workflow practices to remote sensing quantitative retrieval problems using commodity-class PCs.

  11. Integrating DICOM structure reporting (SR) into the medical imaging informatics data grid

    NASA Astrophysics Data System (ADS)

    Lee, Jasper; Le, Anh; Liu, Brent

    2008-03-01

    The Medical Imaging Informatics (MI2) Data Grid developed at the USC Image Processing and Informatics Laboratory enables medical images to be shared securely between multiple imaging centers. Current applications include an imaging-based clinical trial setting where multiple field sites perform image acquisition and a centralized radiology core performs image analysis, often using computer-aided diagnosis (CAD) tools that generate a DICOM-SR to report their findings and measurements. As more and more CAD tools are developed in the radiology field, the generated DICOM Structured Reports (SR), holding key radiological findings and measurements that are not part of the DICOM image, need to be integrated into the existing Medical Imaging Informatics Data Grid with the corresponding imaging studies. We will discuss the significance of, and method involved in, adapting DICOM-SR into the Medical Imaging Informatics Data Grid. The result is a MI2 Data Grid repository from which users can send and receive DICOM-SR objects based on the imaging-based clinical trial application. The services required to extract and categorize information from the structured reports will be discussed, and the workflow to store and retrieve a DICOM-SR file in the existing MI2 Data Grid will be shown.

  12. E-Science and Grids in Europe

    NASA Astrophysics Data System (ADS)

    Hey, Tony

    2002-08-01

    After defining what is meant by the term 'e-Science', this talk will survey activity on e-Science and Grids in Europe. The two largest initiatives in Europe are the European Commission's portfolio of Grid projects and the UK e-Science program. The EU, under its R&D Framework Program, is funding nearly twenty Grid projects in a wide variety of application areas. These projects are in varying stages of maturity, and this talk will focus on the subset that has made the most significant progress. These include the EU DataGrid project led by CERN and two projects, EuroGrid and Grip, that evolved from the German national Unicore project. A summary of the other EU Grid projects will be included. The UK e-Science initiative is a 180M program entirely focused on e-Science applications requiring resource sharing, virtual organizations and a Grid infrastructure. The UK program is unique for three reasons: (1) the program covers all areas of science and engineering; (2) all of the funding is devoted to Grid application and middleware development and not to funding major hardware platforms; and (3) there is an explicit connection with industry to produce robust and secure industrial-strength versions of Grid middleware that could be used in business-critical applications. A part of the funding, around 50M, but requiring an additional 'matching' $30M from industry in collaborative projects, forms the UK e-Science 'Core Program'. It is the responsibility of the Core Program to identify and support a set of generic middleware requirements that have emerged from a requirements analysis of the e-Science application projects. This has led to a much more data-centric vision for 'the Grid' in the UK, in which access to HPC facilities forms only one element. More important for the UK projects are issues such as enabling access to, and federation of, scientific data held in files, relational databases and other archives. 
Automatic annotation of data generated by high throughput experiments with XML-based metadata is seen as a key step towards developing higher-level Grid services for information retrieval and knowledge discovery. The talk will conclude with a survey of other Grid initiatives across Europe and look at possible future European projects.

  13. Air-core grid for scattered x-ray rejection

    DOEpatents

    Logan, C.M.; Lane, S.M.

    1995-10-03

    The invention is directed to a grid used in x-ray imaging applications to block scattered radiation while allowing the desired imaging radiation to pass through, and to a process for making the grid. The grid is composed of glass containing lead oxide and eliminates the spacer material used in prior known grids; it is therefore an air-core grid. The glass is arranged in a pattern so that a large fraction of the area is open, allowing the imaging radiation to pass through. A small pore size is used, and the grid has a thickness chosen to provide high scatter rejection. For example, the grid may be produced with a 200 µm pore size, 80% open area, and 4 mm thickness. 2 figs.
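
    The quoted figures imply a strongly directional pore geometry. The sketch below computes the grid ratio and the acceptance half-angle under a straight-pore approximation; this simplification is ours, not part of the patent's analysis.

```python
import math

def grid_geometry(pore_um, thickness_mm):
    """Straight-pore geometry of an air-core anti-scatter grid.

    Returns (grid_ratio, half_angle_deg): the thickness-to-pore ratio and
    the largest angle from the pore axis at which a ray can traverse the
    grid unblocked. A simplified model for illustration only.
    """
    pore_mm = pore_um / 1000.0
    grid_ratio = thickness_mm / pore_mm
    half_angle_deg = math.degrees(math.atan(pore_mm / thickness_mm))
    return grid_ratio, half_angle_deg

# Figures quoted in the abstract: 200 micrometer pores, 4 mm thickness.
ratio, angle = grid_geometry(200, 4.0)
```

    With the quoted dimensions this gives a 20:1 grid ratio and an acceptance half-angle of roughly 3 degrees, which is why scattered (oblique) photons are strongly rejected.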

  14. Air-core grid for scattered x-ray rejection

    DOEpatents

    Logan, Clinton M.; Lane, Stephen M.

    1995-01-01

    The invention is directed to a grid used in x-ray imaging applications to block scattered radiation while allowing the desired imaging radiation to pass through, and to a process for making the grid. The grid is composed of glass containing lead oxide and eliminates the spacer material used in prior known grids; it is therefore an air-core grid. The glass is arranged in a pattern so that a large fraction of the area is open, allowing the imaging radiation to pass through. A small pore size is used, and the grid has a thickness chosen to provide high scatter rejection. For example, the grid may be produced with a 200 µm pore size, 80% open area, and 4 mm thickness.

  15. Gridded snow water equivalent reconstruction for Utah using Forest Inventory and Analysis tree-ring data

    Treesearch

    Daniel Barandiaran; S.-Y. Simon Wang; R. Justin DeRose

    2017-01-01

    Snowpack observations in the Intermountain West are sparse and short, making them difficult for use in depicting past variability and extremes. This study presents a reconstruction of April 1 snow water equivalent (SWE) for the period of 1850–1989 using increment cores collected by the U.S. Forest Service, Interior West Forest Inventory and Analysis program (FIA). In...

  16. ScyFlow: An Environment for the Visual Specification and Execution of Scientific Workflows

    NASA Technical Reports Server (NTRS)

    McCann, Karen M.; Yarrow, Maurice; DeVivo, Adrian; Mehrotra, Piyush

    2004-01-01

    With the advent of grid technologies, scientists and engineers are building more and more complex applications to utilize distributed grid resources. The core grid services provide a path for accessing and utilizing these resources in a secure and seamless fashion. However, what scientists need is an environment that will allow them to specify their application runs at a high organizational level, and then support efficient execution across any given set or sets of resources. We have been designing and implementing ScyFlow, a dual-interface architecture (both GUI and API) that addresses this problem. The scientist/user specifies the application tasks along with the necessary control and data flow, and monitors and manages the execution of the resulting workflow across the distributed resources. In this paper, we utilize two scenarios to provide the details of the two modules of the project, the visual editor and the runtime workflow engine.
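
    The task-plus-dependency specification described here can be sketched as a tiny sequential runner (Python 3.9+). The task names and the runner itself are hypothetical illustrations, not ScyFlow's API; ScyFlow targets distributed grid resources, which this sketch does not attempt to model.

```python
from graphlib import TopologicalSorter

def run_workflow(tasks, deps):
    """Run callables in dependency order and return the completion order.

    tasks: name -> zero-argument callable.
    deps:  name -> set of prerequisite task names.
    """
    order = list(TopologicalSorter(deps).static_order())
    for name in order:
        tasks[name]()
    return order

# Hypothetical three-stage run: mesh generation, solver, visualization.
log = []
tasks = {name: (lambda name=name: log.append(name))
         for name in ("mesh", "solve", "viz")}
order = run_workflow(tasks, {"solve": {"mesh"}, "viz": {"solve"}})
```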

  17. Grid computing enhances standards-compatible geospatial catalogue service

    NASA Astrophysics Data System (ADS)

    Chen, Aijun; Di, Liping; Bai, Yuqi; Wei, Yaxing; Liu, Yang

    2010-04-01

    A catalogue service facilitates sharing, discovery, retrieval, management of, and access to large volumes of distributed geospatial resources, for example data, services, applications, and their replicas on the Internet. Grid computing provides an infrastructure for effective use of computing, storage, and other resources available online. The Open Geospatial Consortium has proposed a catalogue service specification and a series of profiles for promoting the interoperability of geospatial resources. By referring to the catalogue service profile for the Web, an innovative information model of a catalogue service is proposed to offer Grid-enabled registry, management, retrieval of and access to geospatial resources and their replicas. This information model extends the e-business registry information model by adopting several geospatial data and service metadata standards: the International Organization for Standardization (ISO) 19115/19119 standards and the US Federal Geographic Data Committee (FGDC) and US National Aeronautics and Space Administration (NASA) metadata standards for describing and indexing geospatial resources. In order to select the optimal geospatial resources and replicas managed by the Grid, the Grid data management and information services from the Globus Toolkit are closely integrated with the extended catalogue information model. Based on this new model, a catalogue service is implemented first as a Web service. Then, the catalogue service is further developed as a Grid service conforming to Grid service specifications. The catalogue service can be deployed in both Web and Grid environments and accessed by standard Web services or authorized Grid services, respectively. The catalogue service has been implemented at the George Mason University/Center for Spatial Information Science and Systems (GMU/CSISS), managing more than 17 TB of geospatial data and geospatial Grid services. 
This service makes it easy to share and interoperate geospatial resources by using Grid technology and extends Grid technology into the geoscience communities.

  18. A New Dynamical Core Based on the Prediction of the Curl of the Horizontal Vorticity

    NASA Astrophysics Data System (ADS)

    Konor, C. S.; Randall, D. A.; Heikes, R. P.

    2015-12-01

    The Vector-Vorticity Dynamical core (VVM) developed by Jung and Arakawa (2008) has important advantages for use with the anelastic and unified systems of equations. The VVM predicts the horizontal vorticity vector (HVV) at each interface and the vertical vorticity at the top layer of the model. To guarantee that the three-dimensional vorticity is nondivergent, the vertical vorticity at the interior layers is diagnosed from the horizontal divergence of the HVV through a vertical integral from the top down. To our knowledge, this is the only dynamical core that guarantees the nondivergence of the three-dimensional vorticity. The VVM uses a C-type horizontal grid, which allows a computational mode. While the computational mode does not seem to be serious in Cartesian-grid applications, it may be serious in icosahedral-grid applications because of the extra degree of freedom in such grids. Although there are special filters to minimize the effects of this computational mode, we prefer to eliminate it altogether. We have developed a new dynamical core that uses a Z-grid to avoid the computational mode mentioned above. The dynamical core predicts the curl of the HVV and diagnoses the horizontal divergence of the HVV from the predicted vertical vorticity. The three-dimensional vorticity is guaranteed to be nondivergent, as in the VVM. In this presentation, we will introduce the new dynamical core and show results obtained using Cartesian and hexagonal grids. We will also compare the solutions to those obtained by the VVM.
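
    The top-down diagnosis described above follows from nondivergence of the 3-D vorticity: d(zeta)/dz = -(horizontal divergence of the HVV), so integrating downward from the predicted top-layer value recovers zeta in the interior. The sketch below is a first-order scheme on uniform layers, our own simplification rather than the VVM's staggered discretization.

```python
def diagnose_zeta(zeta_top, div_h, dz):
    """Diagnose vertical vorticity zeta downward from the model top.

    div_h[k] is the horizontal divergence of the horizontal vorticity
    vector in layer k (k = 0 nearest the top), and dz is the uniform
    layer thickness. Since d(zeta)/dz = -div_h, each downward step of
    size dz adds div_h * dz to zeta.
    """
    zeta = [zeta_top]
    for d in div_h:
        zeta.append(zeta[-1] + d * dz)
    return zeta
```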

  19. Physicists Get INSPIREd: INSPIRE Project and Grid Applications

    NASA Astrophysics Data System (ADS)

    Klem, Jukka; Iwaszkiewicz, Jan

    2011-12-01

    INSPIRE is the new high-energy physics scientific information system developed by CERN, DESY, Fermilab and SLAC. INSPIRE combines the curated and trusted content of the SPIRES database with Invenio digital library technology. INSPIRE contains the entire HEP literature, about one million records, and in addition to becoming the reference HEP scientific information platform, it aims to provide new kinds of data mining services and metrics to assess the impact of articles and authors. Grid and cloud computing provide new opportunities to offer better services in areas that require large CPU and storage resources, including document Optical Character Recognition (OCR) processing, full-text indexing of articles and improved metrics. D4Science-II is a European project that develops and operates an e-Infrastructure supporting Virtual Research Environments (VREs). It develops an enabling technology (gCube) which implements a mechanism for facilitating the interoperation of its e-Infrastructure with other autonomously running data e-Infrastructures. As a result, this creates the core of an e-Infrastructure ecosystem. INSPIRE is one of the e-Infrastructures participating in the D4Science-II project. In the context of the D4Science-II project, the INSPIRE e-Infrastructure makes available some of its resources and services to other members of the resulting ecosystem. Moreover, it benefits from the ecosystem via a dedicated Virtual Organization giving access to an array of resources ranging from computing and storage resources of grid infrastructures to data and services.

  20. Grid Technology as a Cyberinfrastructure for Delivering High-End Services to the Earth and Space Science Community

    NASA Technical Reports Server (NTRS)

    Hinke, Thomas H.

    2004-01-01

    Grid technology consists of middleware that permits distributed computations, data and sensors to be seamlessly integrated into a secure, single-sign-on processing environment. In this environment, a user has to identify and authenticate himself once to the grid middleware, and then can utilize any of the distributed resources to which he has been granted access. Grid technology allows resources that exist in enterprises under different administrative control to be securely integrated into a single processing environment. The grid community has adopted commercial web services technology as a means for implementing persistent, re-usable grid services that sit on top of the basic distributed processing environment that grids provide. These grid services can then form building blocks for even more complex grid services. Each grid service is characterized using the Web Service Description Language, which provides a description of the interface and how other applications can access it. The emerging Semantic Grid work seeks to associate sufficient semantic information with each grid service such that applications will be able to automatically select, compose and, if necessary, substitute available equivalent services in order to assemble collections of services that are most appropriate for a particular application. Grid technology has been used to provide limited support to various Earth and space science applications. Looking to the future, this emerging grid service technology can provide a cyberinfrastructure for both the Earth and space science communities. Groups within these communities could transform those applications that have community-wide applicability into persistent grid services that are made widely available to their respective communities.
In concert with grid-enabled data archives, users could easily create complex workflows that extract desired data from one or more archives and process it through an appropriate set of widely distributed grid services discovered using semantic grid technology. As required, high-end computational resources could be drawn from available grid resource pools. Using grid technology, this confluence of data, services and computational resources could easily be harnessed to transform data from many different sources into a desired product that is delivered to a user's workstation or to a web portal through which it could be accessed by its intended audience.

  1. Efficient computation of hashes

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Hobson, Peter R.

    2014-06-01

    The sequential computation of hashes, which lies at the core of many distributed storage systems and is found, for example, in grid services, can hinder efficiency and quality of service and can even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgård engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
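
    The contrast the paper draws can be illustrated with a small example: in a Merkle-Damgård chain each block's hash depends on the previous one, so the computation is inherently serial, whereas in a hash tree the leaves are independent and can be hashed in parallel before being reduced pairwise to a root. This is a minimal sketch of the tree mode, not the paper's Keccak prototype; the chunk size, the choice of SHA3-256, and the leaf/node domain-separation prefixes are assumptions for the example.

```python
# Minimal Merkle-tree hashing sketch: leaves are hashed in parallel,
# then reduced pairwise to a single root digest.
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 16  # 64 KiB leaves (arbitrary choice for the example)

def _leaf(chunk: bytes) -> bytes:
    # 0x00 prefix domain-separates leaves from interior nodes
    return hashlib.sha3_256(b"\x00" + chunk).digest()

def _node(left: bytes, right: bytes) -> bytes:
    return hashlib.sha3_256(b"\x01" + left + right).digest()

def merkle_root(data: bytes) -> bytes:
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] or [b""]
    with ThreadPoolExecutor() as pool:          # leaves hash independently
        level = list(pool.map(_leaf, chunks))
    while len(level) > 1:                       # reduce pairwise to the root
        if len(level) % 2:                      # duplicate an odd last node
            level.append(level[-1])
        level = [_node(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

    Because each leaf digest depends only on its own chunk, the expensive first pass scales with the number of workers, and a modified chunk can be re-hashed without reprocessing the whole stream, two of the efficiency properties that motivate tree modes.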

  2. Production experience with the ATLAS Event Service

    NASA Astrophysics Data System (ADS)

    Benjamin, D.; Calafiura, P.; Childers, T.; De, K.; Guan, W.; Maeno, T.; Nilsson, P.; Tsulaia, V.; Van Gemmeren, P.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The ATLAS Event Service (AES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the AES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Compute Engine, and a growing number of HPC platforms. After briefly reviewing the concept and the architecture of the Event Service, we will report the status and experience gained in AES commissioning and production operations on supercomputers, and our plans for extending ES application beyond Geant4 simulation to other workflows, such as reconstruction and data analysis.

  3. GEMSS: grid-infrastructure for medical service provision.

    PubMed

    Benkner, S; Berti, G; Engelbrecht, G; Fingberg, J; Kohring, G; Middleton, S E; Schmidt, R

    2005-01-01

    The European GEMSS Project is concerned with the creation of medical Grid service prototypes and their evaluation in a secure service-oriented infrastructure for distributed on-demand supercomputing. Key aspects of the GEMSS Grid middleware include negotiable QoS support for time-critical service provision, flexible support for business models, and security at all levels in order to ensure privacy of patient data as well as compliance with EU law. The GEMSS Grid infrastructure is based on a service-oriented architecture and is being built on top of existing standard Grid and Web technologies. The GEMSS infrastructure offers a generic Grid service provision framework that hides the complexity of transforming existing applications into Grid services. For the development of client-side applications or portals, a pluggable component framework has been developed, providing developers with full control over business processes, service discovery, QoS negotiation, and workflow, while keeping their underlying implementation hidden from view. A first version of the GEMSS Grid infrastructure is operational and has been used for the set-up of a Grid test-bed deploying six medical Grid service prototypes including maxillo-facial surgery simulation, neuro-surgery support, radio-surgery planning, inhaled drug-delivery simulation, cardiovascular simulation and advanced image reconstruction. The GEMSS Grid infrastructure is based on standard Web Services technology with an anticipated future transition path towards the OGSA standard proposed by the Global Grid Forum. GEMSS demonstrates that the Grid can be used to provide medical practitioners and researchers with access to advanced simulation and image processing services for improved preoperative planning and near real-time surgical support.

  4. Preparing CAM-SE for Multi-Tracer Applications: CAM-SE-Cslam

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Taylor, M.; Goldhaber, S.

    2014-12-01

    The NCAR-DOE spectral element (SE) dynamical core comes from HOMME (the High-Order Modeling Environment; Dennis et al., 2012) and is available in CAM. The CAM-SE dynamical core is designed with intrinsic mimetic properties guaranteeing total energy conservation (to time-truncation errors) and mass conservation, and has demonstrated excellent scalability on massively parallel compute platforms (Taylor, 2011). For applications involving many tracers, such as chemistry and biochemistry modeling, CAM-SE has been found to be significantly more computationally costly than the current "workhorse" model CAM-FV (Finite-Volume; Lin 2004). Hence a multi-tracer-efficient scheme, CSLAM (the Conservative Semi-Lagrangian Multi-tracer scheme; Lauritzen et al., 2011), has been implemented in HOMME (Erath et al., 2012). The CSLAM scheme has recently been cast in flux form in HOMME so that it can be coupled to the SE dynamical core through conventional flux-coupling methods, where the SE dynamical core provides background air mass fluxes to CSLAM. Since the CSLAM scheme makes use of a finite-volume gnomonic cubed-sphere grid and hence does not operate on the SE quadrature grid, the capability of running tracer advection, the physical parameterization suite and dynamics on separate grids has been implemented in CAM-SE. The default CAM-SE-CSLAM setup is to run physics on the quasi-equal-area CSLAM grid. The capability of running physics on a different grid than the SE dynamical core may provide a more consistent coupling, since the physics grid option operates with quasi-equal-area cell average values rather than non-equidistant grid-point (SE quadrature point) values. Preliminary results on the performance of CAM-SE-CSLAM will be presented.

  5. Analysis of the ORNL/TSF GCFR Grid-Plate Shield Design Confirmation Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slater, C.O.; Cramer, S.N.; Ingersoll, D.T.

    1979-08-01

    The results of the analysis of the GCFR Grid-Plate Shield Design Confirmation Experiment are presented. The experiment, performed at the ORNL Tower Shielding Facility, was designed to test the adequacy of methods and data used in the analysis of the GCFR design. In particular, the experiment tested the adequacy of methods to calculate: (1) axial neutron streaming in the GCFR core and axial blanket, (2) the amount and location of the maximum fast-neutron exposure to the grid plate, and (3) the neutron source leaving the top of the grid plate and entering the upper plenum. Other objectives of the experiment were to verify the grid-plate shielding effectiveness and to assess the effects of fuel-pin and subassembly spacing on radiation levels in the GCFR. The experimental mockups contained regions representing the GCFR core/blanket region, the grid-plate shield section, and the grid plate. Most core design options were covered by allowing: (1) three different spacings between fuel subassemblies, (2) two different void fractions within a subassembly by variation of the number of fuel pins, and (3) a mockup of a control-rod channel.

  6. Service engineering for grid services in medicine and life science.

    PubMed

    Weisbecker, Anette; Falkner, Jürgen

    2009-01-01

    Clearly defined services with appropriate business models are necessary in order to exploit the benefit of grid computing for industrial and academic users in medicine and life sciences. In the project Services@MediGRID the service engineering approach is used to develop those clearly defined grid services and to provide sustainable business models for their usage.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unneberg, L.

    The main features of the 16 core grids (top guides) designed by ABB ATOM AB are briefly described and the evolution of the design is discussed. One important characteristic of the first nine grids is the existence of bolts securing guide bars to the core grid plates. These bolts are made of precipitation-hardened or solution-annealed stainless steel. During operation, bolts in all nine grids have cracked. The failure analyses indicate that intergranular stress corrosion cracking (IGSCC), possibly accelerated by crevice conditions and/or irradiation, was the cause of failure. Fast neutron fluences approaching or exceeding the levels considered as critical for irradiation-assisted stress corrosion cracking (IASCC) will be reached in a few cases only. Temporary measures were taken immediately after the discovery of the cracking. For five of the nine reactors affected, it was decided to replace the complete grids. Two of these replacements have been successfully carried out to date. IASCC as a potential future problem is discussed, and it is pointed out that, during their lifetimes, the ABB ATOM core grids will be exposed to sufficiently high fast neutron fluences to cause some concern.

  8. Terminology representation guidelines for biomedical ontologies in the semantic web notations.

    PubMed

    Tao, Cui; Pathak, Jyotishman; Solbrig, Harold R; Wei, Wei-Qi; Chute, Christopher G

    2013-02-01

    Terminologies and ontologies are increasingly prevalent in healthcare and biomedicine. However, they suffer from inconsistent renderings, distribution formats, and syntax that make applications built on common terminology services challenging. To address the problem, one could posit a shared representation syntax, associated schema, and tags. We identified a set of commonly used elements in biomedical ontologies and terminologies based on our experience with the Common Terminology Services 2 (CTS2) Specification as well as the Lexical Grid (LexGrid) project. We propose guidelines for precisely such a shared terminology model, and recommend tags assembled from SKOS, OWL, Dublin Core, RDF Schema, and DCMI meta-terms. We divide these guidelines into lexical information (e.g., synonyms and definitions) and semantic information (e.g., hierarchies). The latter we distinguish for use by informal terminologies vs. formal ontologies. We then evaluate the guidelines with a spectrum of widely used terminologies and ontologies to examine how the lexical guidelines are implemented, and whether our proposed guidelines would enhance interoperability. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Unstructured Grids for Sonic Boom Analysis and Design

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Nayani, Sudheer N.

    2015-01-01

    An evaluation of two methods for improving the process for generating unstructured CFD grids for sonic boom analysis and design has been conducted. The process involves two steps: the generation of an inner core grid using a conventional unstructured grid generator such as VGRID, followed by the extrusion of a sheared and stretched collar grid through the outer boundary of the core grid. The first method evaluated, known as COB, automatically creates a cylindrical outer boundary definition for use in VGRID that makes the extrusion process more robust. The second method, BG, generates the collar grid by extrusion in a very efficient manner. Parametric studies have been carried out and new options evaluated for each of these codes with the goal of establishing guidelines for best practices for maintaining boom signature accuracy with as small a grid as possible. In addition, a preliminary investigation examining the use of the CDISC design method for reducing sonic boom utilizing these grids was conducted, with initial results confirming the feasibility of a new remote design approach.

  10. caGrid 1.0: a Grid enterprise architecture for cancer research.

    PubMed

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2007-10-11

    caGrid is the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. The current release, caGrid version 1.0, is developed as the production Grid software infrastructure of caBIG. Based on feedback from adopters of the previous version (caGrid 0.5), it has been significantly enhanced with new features and improvements to existing components. This paper presents an overview of caGrid 1.0, its main components, and enhancements over caGrid 0.5.

  11. Final Report for DOE Project: Portal Web Services: Support of DOE SciDAC Collaboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mary Thomas, PI; Geoffrey Fox, Co-PI; Gannon, D

    2007-10-01

    Grid portals provide the scientific community with familiar and simplified interfaces to the Grid and Grid services, and it is important to deploy grid portals onto the SciDAC grids and collaboratories. The goal of this project is the research, development and deployment of interoperable portal and web services that can be used on SciDAC National Collaboratory grids. This project has four primary task areas: development of portal systems; management of data collections; DOE science application integration; and development of web and grid services in support of the above activities.

  12. Fuel Cell Backup Power System for Grid Service and Micro-Grid in Telecommunication Applications: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Zhiwen; Eichman, Joshua D; Kurtz, Jennifer M

    This paper presents the feasibility and economics of using fuel cell backup power systems in telecommunication cell towers to provide grid services (e.g., ancillary services, demand response). The fuel cells are able to provide power for the cell tower during emergency conditions. This study evaluates the strategic integration of clean, efficient, and reliable fuel cell systems with the grid for improved economic benefits. The backup systems have potential as enhanced capability through information exchanges with the power grid to add value as grid services that depend on location and time. The economic analysis has been focused on the potential revenue for distributed telecommunications fuel cell backup units to provide value-added power supply. This paper shows case studies on current fuel cell backup power locations and regional grid service programs. The grid service benefits and system configurations for different operation modes provide opportunities for expanding backup fuel cell applications responsive to grid needs.

  13. Quantifying electric vehicle battery degradation from driving vs. vehicle-to-grid services

    NASA Astrophysics Data System (ADS)

    Wang, Dai; Coignard, Jonathan; Zeng, Teng; Zhang, Cong; Saxena, Samveg

    2016-11-01

    The risk of accelerated electric vehicle battery degradation is commonly cited as a concern inhibiting the implementation of vehicle-to-grid (V2G) technology. However, little quantitative evidence exists in prior literature to refute or substantiate these concerns for different grid services that vehicles may offer. In this paper, a methodology is proposed to quantify electric vehicle (EV) battery degradation from driving only vs. driving and several vehicle-grid services, based on a semi-empirical lithium-ion battery capacity fade model. A detailed EV battery pack thermal model and EV powertrain model are utilized to capture the time-varying battery temperature and working parameters including current, internal resistance and state-of-charge (SOC), while an EV is driving and offering various grid services. We use the proposed method to simulate the battery degradation impacts from multiple vehicle-grid services including peak load shaving, frequency regulation and net load shaping. The degradation impact of these grid services is compared against baseline cases for driving and uncontrolled charging only, for several different cases of vehicle itineraries, driving distances, and climate conditions. Over the lifetime of a vehicle, our results show that battery wear is indeed increased when vehicles offer V2G grid services. However, the increased wear from V2G is inconsequential compared with naturally occurring battery wear (i.e. from driving and calendar ageing) when V2G services are offered only on days of the greatest grid need (20 days/year in our study). In the case of frequency regulation and peak load shaving V2G grid services offered 2 hours each day, battery wear remains minimal even if this grid service is offered every day over the vehicle lifetime. Our results suggest that an attractive tradeoff exists where vehicles can offer grid services on the highest value days for the grid with minimal impact on vehicle battery life.
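
    The shape of such a semi-empirical capacity-fade model can be sketched in a few lines: a common published form combines an Arrhenius temperature dependence with a power law in cumulative charge throughput. The coefficients below are illustrative placeholders, not the paper's fitted values, and the function ignores the SOC, current, and calendar-ageing terms a full model would include.

```python
# Illustrative semi-empirical capacity-fade sketch:
# Q_loss(%) = B * exp(-Ea / (R*T)) * (Ah throughput)^z
import math

R = 8.314  # J/(mol*K), universal gas constant

def capacity_loss_pct(ah_throughput: float, temp_k: float,
                      b: float = 3.0e4,    # pre-exponential factor (placeholder)
                      ea: float = 3.1e4,   # activation energy, J/mol (placeholder)
                      z: float = 0.55      # power-law exponent (placeholder)
                      ) -> float:
    """Percent capacity loss after ah_throughput amp-hours at temp_k kelvin."""
    return b * math.exp(-ea / (R * temp_k)) * ah_throughput ** z
```

    The sub-linear exponent (z < 1) is one way to see the paper's qualitative finding: extra throughput from occasional V2G service adds proportionally less wear than the same throughput would on a fresh battery, so V2G offered only on high-value days has a small marginal impact.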

  14. Thermo-Physics Technical Note No. 60: thermal analysis of SNAP 10A reactor core during atmospheric reentry and resulting core disintegration and fuel element separation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mouradian, E.M.

    1966-02-16

    A thermal analysis is carried out to determine the temperature distribution throughout a SNAP 10A reactor core, particularly in the vicinity of the grid plates, during atmospheric reentry. The transient temperature distribution of the grid plate indicates when sufficient melting occurs so that fuel elements are free to be released and continue their descent individually.

  15. Design and implementation of a fault-tolerant and dynamic metadata database for clinical trials

    NASA Astrophysics Data System (ADS)

    Lee, J.; Zhou, Z.; Talini, E.; Documet, J.; Liu, B.

    2007-03-01

    In recent imaging-based clinical trials, quantitative image analysis (QIA) and computer-aided diagnosis (CAD) methods are increasing in productivity due to higher-resolution imaging capabilities. A radiology core doing clinical trials has been analyzing more treatment methods, and a growing quantity of metadata needs to be stored and managed. These radiology centers are also collaborating with many off-site imaging field sites and need a way to communicate metadata between one another in a secure infrastructure. Our solution is to implement a data storage grid with a fault-tolerant and dynamic metadata database design to unify metadata from different clinical trial experiments and field sites. Although metadata from images follow the DICOM standard, clinical trials also produce metadata specific to regions-of-interest and quantitative image analysis. We have implemented a data access and integration (DAI) server layer where multiple field sites can access multiple metadata databases in the data grid through a single web-based grid service. The centralization of metadata database management simplifies the task of adding new databases into the grid and also decreases the risk of configuration errors seen in peer-to-peer grids. In this paper, we address the design and implementation of a data grid metadata storage that has fault tolerance and dynamic integration for imaging-based clinical trials.

  16. Final Technical Report for Contract No. DE-EE0006332, "Integrated Simulation Development and Decision Support Tool-Set for Utility Market and Distributed Solar Power Generation"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cormier, Dallas; Edra, Sherwin; Espinoza, Michael

    This project enables utilities to develop long-term strategic plans that integrate high levels of renewable energy generation, and to better plan power system operations under high renewable penetration. The program developed forecast data streams for decision support and effective integration of centralized and distributed solar power generation in utility operations. This toolset focused on real-time simulation of distributed power generation within utility grids, with the emphasis on potential applications in day-ahead (market) and real-time (reliability) utility operations. The project team developed and demonstrated methodologies for quantifying the impact of distributed solar generation on core utility operations, identified protocols for internal data communication requirements, and worked with utility personnel to adapt the new distributed generation (DG) forecasts seamlessly within existing Load and Generation procedures through a sophisticated DMS. This project supported the objectives of the SunShot Initiative and SUNRISE by enabling core utility operations to enhance their simulation capability to analyze and prepare for the impacts of high penetrations of solar on the power grid. The impact of high-penetration solar PV on utility operations is not limited to control centers, but extends across many core operations. Benefits of an enhanced DMS using state-of-the-art solar forecast data were demonstrated within this project and have had an immediate direct operational cost savings for Energy Marketing for Day Ahead generation commitments, Real Time Operations, Load Forecasting (at an aggregate system level for Day Ahead), Demand Response, Long Term Planning (asset management), Distribution Operations, and core ancillary services as required for balancing and reliability. This provided power system operators with the necessary tools and processes to operate the grid in a reliable manner under high renewable penetration.

  17. caGrid 1.0: A Grid Enterprise Architecture for Cancer Research

    PubMed Central

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2007-01-01

    caGrid is the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. The current release, caGrid version 1.0, is developed as the production Grid software infrastructure of caBIG™. Based on feedback from adopters of the previous version (caGrid 0.5), it has been significantly enhanced with new features and improvements to existing components. This paper presents an overview of caGrid 1.0, its main components, and enhancements over caGrid 0.5. PMID:18693901

  18. Grid site availability evaluation and monitoring at CMS

    DOE PAGES

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; ...

    2017-10-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute with resources from hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection/reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  19. Grid site availability evaluation and monitoring at CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute with resources from hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection/reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  20. Grid site availability evaluation and monitoring at CMS

    NASA Astrophysics Data System (ADS)

    Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; Lammel, Stephan; Sciabà, Andrea

    2017-10-01

    The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute with resources from hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection/reaction to failures and a more dynamic handling of computing resources. Enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.

  1. Evaluation of Counter-Based Dynamic Load Balancing Schemes for Massive Contingency Analysis on Over 10,000 Cores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Rice, Mark J.

    Contingency analysis studies are necessary to assess the impact of possible power system component failures. The results of the contingency analysis are used to ensure grid reliability, and in power market operation for the feasibility test of market solutions. Currently, these studies are performed in real time based on the current operating conditions of the grid with a pre-selected contingency list, which might result in overlooking some critical contingencies caused by variable system status. To have a complete picture of a power grid, more contingencies need to be studied to improve grid reliability. High-performance computing techniques hold the promise of being able to perform the analysis for more contingency cases within a much shorter time frame. This paper evaluates the performance of counter-based dynamic load balancing schemes for a massive contingency analysis program on 10,000+ cores. One million N-2 contingency analysis cases with a Western Electricity Coordinating Council power grid model have been used to demonstrate the performance. Speedups of 3964 on 4096 cores and 7877 on 10,240 cores were obtained. This paper reports the performance of the load balancing scheme with a single counter and two counters, describes disk I/O issues, and discusses other potential techniques for further improving the performance.
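
    The idea behind counter-based dynamic load balancing can be sketched compactly: instead of pre-assigning an equal slice of cases to each worker, every worker atomically increments a shared counter to claim the next unprocessed case, so faster workers naturally take more cases. The sketch below uses threads and a lock in place of the paper's MPI setup, and the power-flow "analysis" is a placeholder stub; both are assumptions for illustration only.

```python
# Counter-based dynamic load balancing sketch: workers claim case indices
# from a shared counter under a lock (a fetch-and-increment), so work is
# distributed dynamically rather than by static partitioning.
import threading

def run_contingencies(n_cases: int, n_workers: int, analyze) -> list:
    counter = {"next": 0}
    lock = threading.Lock()
    results = [None] * n_cases

    def worker():
        while True:
            with lock:                      # atomic fetch-and-increment
                case = counter["next"]
                counter["next"] += 1
            if case >= n_cases:             # no cases left: worker exits
                return
            results[case] = analyze(case)   # each case is claimed exactly once

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

    At the scale the paper studies, contention on a single counter becomes the bottleneck, which is one motivation for the two-counter variant it evaluates.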

  2. [caCORE: core architecture of bioinformation on cancer research in America].

    PubMed

    Gao, Qin; Zhang, Yan-lei; Xie, Zhi-yun; Zhang, Qi-peng; Hu, Zhang-zhi

    2006-04-18

    A critical factor in the advancement of biomedical research is the ease with which data can be integrated, redistributed and analyzed both within and across domains. This paper summarizes the Biomedical Information Core Infrastructure built by the National Cancer Institute Center for Bioinformatics (NCICB) in the United States. The main product of the Core Infrastructure is caCORE, the cancer Common Ontologic Reference Environment, which is the infrastructure backbone supporting data management and application development at NCICB. The paper explains the structure and function of caCORE: (1) Enterprise Vocabulary Services (EVS), which provide controlled vocabulary, dictionary and thesaurus services and produce the NCI Thesaurus and the NCI Metathesaurus; (2) the Cancer Data Standards Repository (caDSR), which provides a metadata registry for common data elements; (3) Cancer Bioinformatics Infrastructure Objects (caBIO), which provide Java, Simple Object Access Protocol and HTTP-XML application programming interfaces. The vision for caCORE is to provide a common data management framework that will support the consistency, clarity, and comparability of biomedical research data and information. In addition to providing facilities for data management and redistribution, caCORE helps solve problems of data integration. All NCICB-developed caCORE components are distributed under open-source licenses that support unrestricted usage by both non-profit and commercial entities, and caCORE has laid the foundation for a number of scientific and clinical applications. The paper then briefly describes caCORE-based applications in several NCI projects, among them CMAP (the Cancer Molecular Analysis Project) and caBIG (the Cancer Biomedical Informatics Grid). Finally, the paper outlines the prospects of caCORE: while it was born out of the needs of the cancer research community, it is intended to serve as a general resource, and cancer research has historically contributed to many areas beyond tumor biology. The paper also makes some suggestions for current biomedical informatics research in China.

  3. Semantics-enabled service discovery framework in the SIMDAT pharma grid.

    PubMed

    Qu, Cangtao; Zimmermann, Falk; Kumpf, Kai; Kamuzinzi, Richard; Ledent, Valérie; Herzog, Robert

    2008-03-01

    We present the design and implementation of a semantics-enabled service discovery framework in the SIMDAT (data Grids for process and product development using numerical simulation and knowledge discovery) Pharma Grid, an industry-oriented Grid environment for integrating thousands of Grid-enabled biological data services and analysis services. The framework consists of three major components: a biological domain ontology based on the Web Ontology Language (OWL) and description logic (DL), service annotation based on the OWL Web service ontology (OWL-S), and a semantic matchmaker based on ontology reasoning. Built upon the framework, workflow technologies are extensively exploited in SIMDAT to assist biologists in (semi-)automatically performing in silico experiments. We present a typical usage scenario through the case study of a biological workflow: IXodus.

  4. Web service module for access to g-Lite

    NASA Astrophysics Data System (ADS)

    Goranova, R.; Goranov, G.

    2012-10-01

    gLite is a lightweight grid middleware for grid computing installed on all clusters of the European Grid Infrastructure (EGI). The middleware is partially service-oriented and does not provide well-defined Web services for job management. The existing Web services in the environment cannot be directly used by grid users for building service compositions in the EGI. In this article we present a module of well-defined Web services for job management in the EGI. We describe the architecture of the module and the design of the developed Web services. The presented Web services are composable and can participate in service compositions (workflows). An example of using the module with tools for service composition in gLite is shown.

  5. NAVAJO ELECTRIFICATION DEMONSTRATION PROJECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terry W. Battiest

    2008-06-11

    The Navajo Electrification Demonstration Project (NEDP) is a multi-year project which addresses the electricity needs of the unserved and underserved Navajo Nation, the largest American Indian tribe in the United States. The program cumulatively provides off-grid electricity for families living away from the electricity infrastructure, line extensions for unserved families living nearby (less than 1/2 mile from the infrastructure), and, under the current project called NEDP-4, the construction of a substation to increase the capacity and improve the quality of service into the central core region of the Navajo Nation.

  6. Grid workflow job execution service 'Pilot'

    NASA Astrophysics Data System (ADS)

    Shamardin, Lev; Kryukov, Alexander; Demichev, Andrey; Ilyin, Vyacheslav

    2011-12-01

    'Pilot' is a grid job execution service for workflow jobs. The main goal of the service is to automate multi-stage computations, since these can be expressed as simple workflows. Each job is a directed acyclic graph of tasks, and each task is an execution of something on a grid resource (or 'computing element'). Tasks may be submitted to any WS-GRAM (Globus Toolkit 4) service. The target resources for task execution are selected by the Pilot service from the set of available resources which match the specific requirements of the task and/or job definition. Some simple conditional execution logic is also provided. The 'Pilot' service is built on REST concepts and provides a simple API through authenticated HTTPS. The service is deployed and used in production in the Russian national grid project GridNNN.
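
    The job model described above, a directed acyclic graph of tasks, can be illustrated with a short sketch. The names and API here are hypothetical, not the actual Pilot service interface; the point is that a task runs only after all of its prerequisites have completed.

```python
# Illustrative sketch of executing a workflow job expressed as a DAG of
# tasks (names are hypothetical, not the Pilot service API).
from graphlib import TopologicalSorter

def execute_job(tasks, deps, run):
    """tasks: iterable of task names; deps: {task: set of prerequisites};
    run: callable that executes one task on some grid resource."""
    ts = TopologicalSorter({t: deps.get(t, set()) for t in tasks})
    order = []
    for task in ts.static_order():   # respects every dependency edge
        run(task)
        order.append(task)
    return order

# Two-stage workflow: "prepare" feeds two analyses whose outputs are merged.
deps = {"analyze_a": {"prepare"}, "analyze_b": {"prepare"},
        "merge": {"analyze_a", "analyze_b"}}
order = execute_job(["prepare", "analyze_a", "analyze_b", "merge"], deps,
                    run=lambda t: None)
```

    A real service would dispatch ready tasks concurrently to matching computing elements instead of running them one by one, but the dependency ordering is the same.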

  7. Grid-Scale Energy Storage Demonstration of Ancillary Services Using the UltraBattery Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seasholtz, Jeff

    2015-08-20

    The collaboration described in this document is being done as part of a cooperative research agreement under the Department of Energy's Smart Grid Demonstration Program. This document represents the Final Technical Performance Report, covering July 2012 through April 2015, for the East Penn Manufacturing Smart Grid Program demonstration project. The project demonstrates Distributed Energy Storage for Grid Support, in particular the economic and technical viability of a grid-scale, advanced energy storage system using UltraBattery® technology for frequency regulation ancillary services and demand management services. The project entailed the construction of a dedicated facility on the East Penn campus in Lyon Station, PA that is being used as a working demonstration to provide regulation ancillary services to PJM and demand management services to Metropolitan Edison (Met-Ed).

  8. The GridEcon Platform: A Business Scenario Testbed for Commercial Cloud Services

    NASA Astrophysics Data System (ADS)

    Risch, Marcel; Altmann, Jörn; Guo, Li; Fleming, Alan; Courcoubetis, Costas

    In this paper, we present the GridEcon Platform, a testbed for designing and evaluating economics-aware services in a commercial Cloud computing setting. The Platform is based on the idea that the exact working of such services is difficult to predict in the context of a market; therefore, an environment for evaluating their behavior in an emulated market is needed. To identify the components of the GridEcon Platform, a number of economics-aware services and their interactions have been envisioned. The two most important components of the platform are the Marketplace and the Workflow Engine. The Workflow Engine allows the simple composition of a market environment by describing the service interactions between economics-aware services. The Marketplace allows trading goods using different market mechanisms. The capabilities of these components in conjunction with the economics-aware services are described in detail in this paper. The validation of an implemented market mechanism and a capacity planning service also demonstrated the usefulness of the GridEcon Platform.

  9. Operational flash flood forecasting platform based on grid technology

    NASA Astrophysics Data System (ADS)

    Thierion, V.; Ayral, P.-A.; Angelini, V.; Sauvagnargues-Lesage, S.; Nativi, S.; Payrastre, O.

    2009-04-01

    Flash flood events in the south of France, such as those of 8-9 September 2002 in the Grand Delta territory, caused severe economic and human damage. Following this catastrophic hydrological situation, a reform of the flood warning services was initiated (taking effect in 2006). This political reform transformed the 52 existing flood warning services (SAC) into 22 flood forecasting services (SPC), assigning them more hydrologically consistent territories and a new, effective hydrological forecasting mission. Furthermore, a national central service (SCHAPI) was created to ease this transformation and support the local services in their new objectives. New operational requirements were identified: - the SPCs and SCHAPI carry the responsibility to clearly disseminate to public organisms, civil protection actors and the population the crucial hydrological information needed to better anticipate a potentially dramatic flood event; - an effective hydrological forecasting mission for these flood forecasting services seems essential, particularly for flash flood phenomena. Thus, model improvement and optimization was one of the most critical requirements. Initially dedicated to supporting forecasters in their monitoring mission through measuring stations and rainfall radar image analysis, hydrological models have to become more efficient in their capacity to anticipate the hydrological situation. Understanding the natural phenomena occurring during flash floods is the main focus of present hydrological research. Rather than trying to explain such complex processes, the research presented here tries to address the well-known need of these services for computational power and data storage capacity. In recent years, Grid technology has emerged as a technological revolution in high-performance computing (HPC), allowing large-scale resource sharing, use of computational power, and collaboration across networks.
    Nowadays, the EGEE (Enabling Grids for E-science in Europe) project represents the most important effort in terms of grid technology development. This paper presents an operational flash flood forecasting platform which has been developed in the framework of the CYCLOPS European project, which provides one of the virtual organizations of the EGEE project. The platform has been designed to enable multi-simulation processes to ease forecasting operations for several supervised watersheds in the Grand Delta (SPC-GD) territory. The Grid infrastructure, by providing multiple remote computing elements, enables the processing of multiple rainfall scenarios, derived from the original meteorological forecast transmitted by Météo-France, and their respective hydrological simulations. First results show that from one forecast scenario, this approach permits the simulation of more than 200 different scenarios to support forecasters in their aforementioned mission, and it appears to be an efficient hydrological decision-making tool. Although the system seems operational, model validity has to be confirmed, so further research is necessary to improve the model core in terms of hydrological aspects. Finally, the platform could be an efficient tool for developing other modelling aspects such as calibration or data assimilation in real-time processing.

  10. mantisGRID: a grid platform for DICOM medical images management in Colombia and Latin America.

    PubMed

    Garcia Ruiz, Manuel; Garcia Chaves, Alvin; Ruiz Ibañez, Carlos; Gutierrez Mazo, Jorge Mario; Ramirez Giraldo, Juan Carlos; Pelaez Echavarria, Alejandro; Valencia Diaz, Edison; Pelaez Restrepo, Gustavo; Montoya Munera, Edwin Nelson; Garcia Loaiza, Bernardo; Gomez Gonzalez, Sebastian

    2011-04-01

    This paper presents the mantisGRID project, an interinstitutional initiative of Colombian medical and academic centers aiming to provide medical grid services for Colombia and Latin America. mantisGRID is a grid platform, based on open-source grid infrastructure, that provides the necessary services to access and exchange medical images and associated information following the digital imaging and communications in medicine (DICOM) and Health Level 7 standards. The paper focuses first on the data abstraction architecture, which is achieved via Open Grid Services Architecture Data Access and Integration (OGSA-DAI) services and supported by the Globus Toolkit. The grid currently uses 30 Mb of bandwidth on the Colombian High Technology Academic Network, RENATA, connected to Internet2. The paper also includes a discussion of the relational database created to handle the DICOM objects, which were represented using Extensible Markup Language Schema documents, as well as other features implemented such as data security, user authentication, and patient confidentiality. Grid performance was tested using the three currently operative nodes, and the results demonstrated comparable query times between the mantisGRID (OGSA-DAI) and distributed MySQL databases, especially for a large number of records.

  11. Spaceflight Operations Services Grid (SOSG)

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Thigpen, William W.

    2004-01-01

    In an effort to adapt existing space flight operations services to emerging Grid technologies, we are developing a prototype space flight operations Grid. This prototype is based on the operational services being provided to the International Space Station's payload operations located at the Marshall Space Flight Center, Alabama. The prototype services will be Grid- or Web-enabled and provided to four user communities through portal technology. Users will have the opportunity to assess the value and feasibility of Grid technologies for their specific areas or disciplines. In this presentation, descriptions of the prototype development, User-based services, Grid-based services and the status of the project will be presented. Expected benefits, findings and observations (if any) to date will also be discussed. The focus of the presentation will be on the project in general, status to date and future plans. The end-user services to be included in the prototype are voice, video, telemetry, commanding, collaboration tools and visualization, among others. Security is addressed throughout the project and is being designed into the Grid technologies and standards development. The project is divided into three phases. Phase One establishes the baseline User-based services required for space flight operations listed above. Phase Two involves applying Grid/web technologies to the User-based services and the development of portals for access by users. Phase Three will allow NASA and end users to evaluate the services and determine the future of the technology as applied to space flight operational services. Although Phase One, the development of the quasi-operational User-based services of the prototype, will be completed by March 2004, the application of Grid technologies to these services will have just begun. We will provide the status of the Grid technologies for the individual User-based services.
    This effort will result in an extensible environment that incorporates existing and new spaceflight services into a standards-based framework, providing current and future NASA programs with cost savings and new, evolvable methods to conduct science. This project will demonstrate how the use of new programming paradigms such as web and grid services can provide three significant benefits to the cost-effective delivery of spaceflight services. They will enable applications to operate more efficiently by utilizing pooled resources. They will also permit the reuse of common services to rapidly construct new and more powerful applications. Finally, they will permit easy and secure access to services, via a combination of grid and portal technology, by a distributed user community consisting of NASA operations centers, scientists, the educational community and even the general population as outreach. The approach will be to deploy existing mission support applications such as the Telescience Resource Kit (TReK), and new applications under development such as the Grid Video Distribution System (GViDS), together with existing grid applications and services such as the high-performance computing and visualization services provided by NASA's Information Power Grid (IPG), in the MSFC Payload Operations Integration Center (POIC) HOSC Annex. Once the initial applications have been moved to the grid, a process will begin to apply the new programming paradigms to integrate them where possible. For example, with GViDS, instead of viewing the distribution service as an application that must run on a single node, the new approach is to build it such that it can be dispatched across a pool of resources in response to dynamic loads. To make this a reality, reusable services will be critical, such as a brokering service to locate appropriate resources within the pool. This brokering service can then be used by other applications such as TReK.
    To expand further, if the GViDS application is constructed using a services-based model, then other applications such as the Video Auditorium can use GViDS as a service to easily incorporate these video streams into a collaborative conference. Finally, as these applications are re-factored into this new services-based paradigm, the construction of portals to integrate them will be a simple process. As a result, portals can be tailored to meet the requirements of specific user communities.

  12. OGC and Grid Interoperability in enviroGRIDS Project

    NASA Astrophysics Data System (ADS)

    Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas

    2010-05-01

    EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure for the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science oriented applications, as does the Grid-oriented technology that is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is interoperability between geospatial and Grid infrastructures, providing the basic and extended features of both technologies. Geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields, especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the additional issues they introduce (data management, secure data transfer, data distribution and data computation), the need for an infrastructure capable of managing all those problems becomes an important aspect. The Grid promotes and facilitates secure interoperation of heterogeneous distributed geospatial data within a distributed environment, supports the creation and management of large distributed computational jobs, and assures a security level for communication and transfer of messages based on certificates. This presentation analyses and discusses the most significant use cases for enabling OGC Web service interoperability with the Grid environment, and focuses on the description and implementation of the most promising one.
    In these use cases we give special attention to issues such as: the relations between the computational grid and the OGC Web service protocols; the advantages offered by Grid technology, such as providing secure interoperability between distributed geospatial resources; and the issues introduced by the integration of distributed geospatial data in a secure environment: data and service discovery, management, access and computation. The enviroGRIDS project proposes a new architecture which allows a flexible and scalable approach to integrating the geospatial domain, represented by the OGC Web services, with the Grid domain, represented by the gLite middleware. The parallelism offered by the Grid technology is discussed and explored at the data level, management level and computation level. The analysis is carried out for OGC Web service interoperability in general, but specific details are emphasized for the Web Map Service (WMS), Web Feature Service (WFS), Web Coverage Service (WCS), Web Processing Service (WPS) and Catalog Service for the Web (CSW). Issues regarding the mapping and interoperability between the OGC and Grid standards and protocols are analyzed, as they are the basis for solving the communication problems between the two environments: grid and geospatial. The presentation mainly highlights how the Grid environment and Grid application capabilities can be extended and utilized in geospatial interoperability. Interoperability between geospatial and Grid infrastructures provides features such as the specific complex geospatial functionality and the high computational power and security of the Grid, high spatial model resolution and geographical area coverage, and flexible combination and interoperability of the geographical models.
    In accordance with Service Oriented Architecture concepts and the requirements of interoperability between geospatial and Grid infrastructures, each of the main functionalities is visible from the enviroGRIDS Portal and, consequently, from end-user applications such as Decision Maker/Citizen oriented applications. The enviroGRIDS portal is the user's single entry point into the system, and the portal presents a uniform graphical user interface. Main reference for further information: [1] enviroGRIDS Project, http://www.envirogrids.net/

  13. The Impact of Varying the Physics Grid Resolution Relative to the Dynamical Core Resolution in CAM-SE-CSLAM

    NASA Astrophysics Data System (ADS)

    Herrington, A. R.; Lauritzen, P. H.; Reed, K. A.

    2017-12-01

    The spectral element dynamical core of the Community Atmosphere Model (CAM) has recently been coupled to an approximately isotropic, finite-volume grid through the implementation of the conservative semi-Lagrangian multi-tracer transport scheme (CAM-SE-CSLAM; Lauritzen et al. 2017). In this framework, the semi-Lagrangian transport of tracers is computed on the finite-volume grid, while the adiabatic dynamics are solved using the spectral element grid. The physical parameterizations are evaluated on the finite-volume grid, as opposed to the unevenly spaced Gauss-Lobatto-Legendre nodes of the spectral element grid. Computing the physics on the finite-volume grid reduces numerical artifacts such as grid imprinting, possibly because the forcing terms are no longer computed at element boundaries, where the resolved dynamics are least smooth. The separation of the physics grid and the dynamics grid provides a unique opportunity to understand the resolution sensitivity in CAM-SE-CSLAM. The observed large sensitivity of CAM to horizontal resolution is a poorly understood impediment to improved simulations of regional climate using global, variable-resolution grids. Here, a series of idealized moist simulations is presented in which the finite-volume grid resolution is varied relative to the spectral element grid resolution in CAM-SE-CSLAM. The simulations are carried out at multiple spectral element grid resolutions, in part to provide a companion set of simulations in which the spectral element grid resolution is varied relative to the finite-volume grid resolution, but more generally to understand whether the sensitivity to the finite-volume grid resolution is consistent across a wider spectrum of resolved scales. Results are interpreted in the context of prior ideas regarding the resolution sensitivity of global atmospheric models.

  14. Spaceflight Operations Services Grid (SOSG) Project

    NASA Technical Reports Server (NTRS)

    Bradford, Robert; Lisotta, Anthony

    2004-01-01

    The motivation, goals, and objectives of the Space Operations Services Grid Project (SOSG) are covered in this viewgraph presentation. The goals and objectives of SOSG include: 1) Developing a grid-enabled prototype providing Space-based ground operations end user services through a collaborative effort between NASA, academia, and industry to assess the technical and cost feasibility of implementation of Grid technologies in the Space Operations arena; 2) Provide to space operations organizations and processes, through a single secure portal(s), access to all the information technology (Grid and Web based) services necessary for program/project development, operations and the ultimate creation of new processes, information and knowledge.

  15. Towards Dynamic Service Level Agreement Negotiation:An Approach Based on WS-Agreement

    NASA Astrophysics Data System (ADS)

    Pichot, Antoine; Wäldrich, Oliver; Ziegler, Wolfgang; Wieder, Philipp

    In Grid, e-Science and e-Business environments, Service Level Agreements are often used to establish frameworks for the delivery of services between service providers and the organisations hosting the researchers. While these high-level SLAs define the overall quality of the services, it is desirable for the end-user to have dedicated service quality also for individual services, like the orchestration of resources necessary for composed services. Grid-level scheduling services are typically responsible for the orchestration and co-ordination of resources in the Grid. Co-allocation, for example, requires the Grid-level scheduler to co-ordinate resource management systems located in different domains. As site autonomy has to be respected, negotiation is the only way to achieve the intended co-ordination. SLAs have emerged as a new way to negotiate and manage usage of resources in the Grid and are already adopted by a number of management systems. Therefore, it is natural to look for ways to adopt SLAs for Grid-level scheduling. In order to do this, efficient and flexible protocols are needed which support dynamic negotiation and creation of SLAs. In this paper we propose and discuss extensions to the WS-Agreement protocol addressing these issues.
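
    The dynamic negotiation pattern discussed above can be sketched in miniature. This is an illustrative simplification, not WS-Agreement-conformant messaging; the term names and the counter-offer policy are assumptions made for the example.

```python
# Illustrative (non-WS-Agreement-conformant) sketch of dynamic SLA
# negotiation: the consumer offers resource terms, and the provider
# counters by shifting the start slot until capacity suffices or a
# round limit is hit. All names and terms are hypothetical.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Offer:
    cpus: int          # co-allocated cores requested
    start_hour: int    # earliest acceptable start slot

def negotiate(offer, provider_free_cpus, max_rounds=5):
    """Counter-offer loop: accept when the provider has enough free
    cores at the requested hour, otherwise propose the next hour."""
    for _ in range(max_rounds):
        if provider_free_cpus.get(offer.start_hour, 0) >= offer.cpus:
            return offer               # agreement reached
        offer = replace(offer, start_hour=offer.start_hour + 1)
    return None                        # negotiation failed

free = {9: 16, 10: 64, 11: 128}        # free cores per hour slot
agreed = negotiate(Offer(cpus=48, start_hour=9), free)
```

    In WS-Agreement terms, each loop iteration corresponds roughly to an offer/counter-offer exchange of agreement templates; the paper's extensions concern making such exchanges dynamic rather than one-shot.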

  16. Resilient Core Networks for Energy Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuntze, Nicolai; Rudolph, Carsten; Leivesley, Sally

    2014-07-28

    Substations and their control are crucial for the availability of electricity in today's energy distribution. Advanced energy grids with Distributed Energy Resources require higher complexity in substations, distributed functionality, and communication between devices inside substations and between substations. Substations also include more and more intelligent devices and ICT-based systems. All these devices are connected to other systems by different types of communication links or are situated in uncontrolled environments. Therefore, the risk of ICT-based attacks on energy grids is growing. Consequently, security measures to counter these risks need to be an intrinsic part of energy grids. This paper introduces the concept of a Resilient Core Network to interconnect substations. This core network provides essential security features, enables fast detection of attacks and allows for a distributed and autonomous mitigation of ICT-based risks.

  17. Global Precipitation Measurement (GPM) Mission: Precipitation Processing System (PPS) GPM Mission Gridded Text Products Provide Surface Precipitation Retrievals

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz; Kelley, O.; Kummerow, C.; Huffman, G.; Olson, W.; Kwiatkowski, J.

    2015-01-01

    In February 2015, the Global Precipitation Measurement (GPM) mission core satellite will complete its first year in space. The core satellite carries a conically scanning microwave imager called the GPM Microwave Imager (GMI), which also has 166 GHz and 183 GHz frequency channels. The GPM core satellite also carries a dual-frequency radar (DPR) which operates at Ku frequency, similar to the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar, and a new Ka frequency. The Precipitation Processing System (PPS) is producing swath-based instantaneous precipitation retrievals from GMI, from both radars including a dual-frequency product, and as a combined GMI/DPR precipitation retrieval. These level 2 products are written in the HDF5 format and have many additional parameters beyond surface precipitation that are organized into appropriate groups. While these retrieval algorithms were developed prior to launch and are not optimal, they are producing very credible retrievals. It is appropriate for a wide group of users to have access to the GPM retrievals. However, for researchers requiring only surface precipitation, these L2 swath products can appear very intimidating, and they certainly do contain many more variables than the average researcher needs. Some researchers desire only surface retrievals stored in a simple, easily accessible format. In response, PPS has begun to produce gridded text-based products that contain just the most widely used variables for each instrument (surface rainfall rate, fraction liquid, fraction convective) in a single line for each grid box that contains one or more observations. This paper will describe the gridded data products that are being produced and provide an overview of their content.
    Currently two types of gridded products are being produced: (1) surface precipitation retrievals from the core satellite instruments GMI, DPR, and combined GMI/DPR; (2) surface precipitation retrievals for the partner constellation satellites. Both of these gridded products are generated on a 0.25 degree x 0.25 degree hourly grid and are packaged into daily ASCII (American Standard Code for Information Interchange) files that can be downloaded from the PPS FTP (File Transfer Protocol) site. To reduce the download size, the files are compressed using the gzip utility. This paper will focus on presenting high-level details about the gridded text product being generated from the instruments on the GPM core satellite, but summary information will also be presented about the partner radiometer gridded product. All retrievals for the partner radiometers are done using the GPROF2014 algorithm, using as input the PPS-generated inter-calibrated 1C product for each radiometer.
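
    Reading one of the daily ASCII files described above might look like the following sketch. The actual PPS column layout is not given here, so the five whitespace-separated fields (latitude, longitude, surface rain rate, fraction liquid, fraction convective) are an assumption made for illustration, as is the file handling.

```python
# Hedged sketch of reading a gzip-compressed daily gridded text product.
# The real PPS file layout may differ; this assumes one grid box per
# line as "lat lon rain_rate_mm_h fraction_liquid fraction_convective".
import gzip

def parse_grid_line(line):
    lat, lon, rate, f_liquid, f_conv = line.split()
    return {"lat": float(lat), "lon": float(lon),
            "rain_mm_h": float(rate),
            "fraction_liquid": float(f_liquid),
            "fraction_convective": float(f_conv)}

def read_gridded_product(raw_gz_bytes):
    """Decompress the gzip payload, then parse each non-empty line as
    one observed 0.25-degree grid box."""
    text = gzip.decompress(raw_gz_bytes).decode("ascii")
    return [parse_grid_line(l) for l in text.splitlines() if l.strip()]

# One hypothetical grid box over the southwestern United States.
sample = gzip.compress(b"36.25 -115.75 1.8 0.95 0.40\n")
boxes = read_gridded_product(sample)
```

    Only grid boxes with at least one observation appear in the file, so the list of parsed boxes is sparse rather than a full global grid.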

  18. GPM Mission Gridded Text Products Providing Surface Precipitation Retrievals

    NASA Astrophysics Data System (ADS)

    Stocker, Erich Franz; Kelley, Owen; Huffman, George; Kummerow, Christian

    2015-04-01

    In February 2015, the Global Precipitation Measurement (GPM) mission core satellite will complete its first year in space. The core satellite carries a conically scanning microwave imager called the GPM Microwave Imager (GMI), which also has 166 GHz and 183 GHz frequency channels. The GPM core satellite also carries a dual-frequency radar (DPR) which operates at Ku frequency, similar to the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar, and a new Ka frequency. The Precipitation Processing System (PPS) is producing swath-based instantaneous precipitation retrievals from GMI, from both radars including a dual-frequency product, and as a combined GMI/DPR precipitation retrieval. These level 2 products are written in the HDF5 format and have many additional parameters beyond surface precipitation that are organized into appropriate groups. While these retrieval algorithms were developed prior to launch and are not optimal, they are producing very credible retrievals. It is appropriate for a wide group of users to have access to the GPM retrievals. However, for researchers requiring only surface precipitation, these L2 swath products can appear very intimidating, and they certainly do contain many more variables than the average researcher needs. Some researchers desire only surface retrievals stored in a simple, easily accessible format. In response, PPS has begun to produce gridded text-based products that contain just the most widely used variables for each instrument (surface rainfall rate, fraction liquid, fraction convective) in a single line for each grid box that contains one or more observations. This paper will describe the gridded data products that are being produced and provide an overview of their content.
Currently two types of gridded products are being produced: (1) surface precipitation retrievals from the core satellite instruments - GMI, DPR, and combined GMI/DPR; (2) surface precipitation retrievals for the partner constellation satellites. Both of these gridded products are generated on a 0.25 degree x 0.25 degree hourly grid and are packaged into daily ASCII files that can be downloaded from the PPS FTP site. To reduce the download size, the files are compressed using the gzip utility. This paper will focus on presenting high-level details about the gridded text product being generated from the instruments on the GPM core satellite, but summary information will also be presented about the partner radiometer gridded product. All retrievals for the partner radiometers are done using the GPROF2014 algorithm, using as input the PPS-generated inter-calibrated 1C product for each radiometer.
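The daily file layout described above (one line per 0.25-degree grid box, packaged as gzip-compressed ASCII) can be sketched as a small parser. The column order and field names below are invented for illustration; the actual PPS product documentation defines the real record layout.

```python
import gzip
import io

# Hypothetical record layout: hour, lat, lon, rain rate (mm/h),
# fraction liquid, fraction convective.  The abstract does not give
# the exact column order, so this sample is an illustrative guess.
sample = (
    "00 35.125 -97.375 2.40 0.95 0.60\n"
    "00 35.125 -97.125 0.10 1.00 0.00\n"
)

def read_gridded_text(stream):
    """Parse one daily gridded text product into a list of dicts."""
    records = []
    for line in stream:
        hour, lat, lon, rate, liquid, convective = line.split()
        records.append({
            "hour": int(hour),
            "lat": float(lat),
            "lon": float(lon),
            "rain_mm_per_h": float(rate),
            "frac_liquid": float(liquid),
            "frac_convective": float(convective),
        })
    return records

# Files on the PPS FTP site are gzip-compressed, so decompress first.
compressed = gzip.compress(sample.encode())
records = read_gridded_text(io.StringIO(gzip.decompress(compressed).decode()))
print(len(records), records[0]["rain_mm_per_h"])
```

Because each grid box occupies a single line, simple line-oriented tools (grep, awk, or a loop like the one above) suffice, which is the point of publishing a text product alongside the HDF5 swaths.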

  19. Nuclear reactor I

    DOEpatents

    Ference, Edward W.; Houtman, John L.; Waldby, Robert N.

    1977-01-01

    A nuclear reactor, particularly a liquid-metal breeder reactor, whose upper internals include provision for channeling the liquid metal flowing from the core-component assemblies to the outlet plenum in vertical paths in a direction generally along the direction of the respective assemblies. The metal is channeled by chimneys, each secured to, and extending from, a grid through whose openings the metal emitted by a plurality of core-component assemblies encompassed by the grid flows. To reduce the stresses resulting from structural interaction, or the transmission of thermal strains due to large temperature differences in the liquid metal emitted from neighboring core-component assemblies, throughout the chimneys and the other components of the upper internals, the grids and the chimneys are supported from the head plate and the core barrel by support columns (double portal support) which are secured to the head plate at the top and to a member, which supports the grids and is keyed to the core barrel, at the bottom. In addition to being restrained from lateral flow by the chimneys, the liquid metal is also restrained from flowing laterally by a peripheral seal around the top of the core. This seal limits the flow rate of liquid metal, which may be sharply cooled during a scram, to the outlet nozzles. The chimneys and the grids are formed of a highly-refractory, high corrosion-resistant nickel-chromium-iron alloy which can withstand the stresses produced by temperature differences in the liquid metal. The chimneys are supported by pairs of plates, each pair held together by hollow stubs coaxial with, and encircling, the chimneys. The plates and stubs are a welded structure but, in the interest of economy, are composed of stainless steel which is not weld compatible with the refractory metal. 
The chimneys and stubs are secured together by shells of another nickel-chromium-iron alloy which is weld compatible with, and is welded to, the stubs and has about the same coefficient of expansion as the highly-refractory, high corrosion-resistant alloy.

  20. Global Gridded Data from the Goddard Earth Observing System Data Assimilation System (GEOS-DAS)

    NASA Technical Reports Server (NTRS)

    2001-01-01

    The Goddard Earth Observing System Data Assimilation System (GEOS-DAS) timeseries is a globally gridded atmospheric data set for use in climate research. This near real-time data set is produced by the Data Assimilation Office (DAO) at the NASA Goddard Space Flight Center in direct support of the operational EOS instrument product generation from the Terra (12/1999 launch), Aqua (05/2002 launch), and Aura (01/2004 launch) spacecraft. The data is archived in the EOS Core System (ECS) at the Goddard Earth Sciences Data and Information Services Center/Distributed Active Archive Center (GES DISC DAAC). The data is only a selection of the products available from the GEOS-DAS, organized chronologically in timeseries format to facilitate the computation of statistics. GEOS-DAS data will be available for the time period January 1, 2000, through the present.

  1. Optimal Sizing Tool for Battery Storage in Grid Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-09-24

    The battery storage sizing tool developed at Pacific Northwest National Laboratory can be used to evaluate economic performance and determine the optimal size of battery storage in different use cases, considering multiple power system applications. The considered use cases include (i) utility-owned battery storage and (ii) battery storage behind the customer meter. The power system applications for energy storage include energy arbitrage, balancing services, T&D deferral, outage mitigation, demand charge reduction, etc. Most existing solutions consider only one or two grid services simultaneously, such as balancing service and energy arbitrage. ES-Select, developed by Sandia and KEMA, is able to consider multiple grid services, but it stacks the grid services based on priorities instead of co-optimizing them. This tool is the first to provide a co-optimization across system and local grid services.
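The difference between priority stacking and co-optimization of services sharing one battery's capacity can be made concrete with a toy two-service example. All prices and the 10 kW rating below are invented; a real sizing tool would also model state of charge, round-trip efficiency, and degradation.

```python
# Toy comparison of priority stacking vs. co-optimization of two grid
# services sharing a single battery's power rating.
POWER_KW = 10
arbitrage_price = [5, 1]   # $/kW in each hour (invented)
balancing_price = [2, 4]   # $/kW in each hour (invented)

# Priority stacking: the full rating always goes to the priority
# service (here arbitrage), regardless of hourly prices.
stacked = sum(POWER_KW * p for p in arbitrage_price)

# Co-optimization: each hour, capacity goes to the more valuable
# service, so the schedule adapts to the price signals.
co_optimized = sum(POWER_KW * max(a, b)
                   for a, b in zip(arbitrage_price, balancing_price))

print(stacked, co_optimized)  # co-optimization never earns less
```

In this sketch stacking earns 60 while co-optimization earns 90, because the second hour is worth more to balancing than to arbitrage; a priority-ordered tool cannot capture that value.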

  2. High-Performance Secure Database Access Technologies for HEP Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research, where worldwide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that “Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc., for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. 
We believe that an innovative database architecture where the secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems’ security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine, resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory’s (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security-enabled version of the ATLAS project’s current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.

  3. Axially shaped channel and integral flow trippers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowther, R.L.; Johansson, E.B.; Matzner, B.

    1988-06-07

    A fuel assembly is described comprising fuel rods positioned in spaced array by upper and lower tie-plates, an open ended flow channel surrounding the array for conducting coolant upward from a lower support plate having coolant communicated thereto to an upper support grid having a steam/water outlet communicated thereto. The flow channel surrounds the array for conducting coolant about the fuel rods. The open ended channel has a polygon shaped cross section, with the channel constituting a closed conduit with flat side sections connected at corners to form the enclosed conduit; means separate from the channel connect the upper and lower tie-plates together and maintain the fuel rods in spaced array independent of the flow channel. The improvement in the flow channel comprises tapered side walls. The tapered side walls extend from an average thick cross section adjacent the lower support plate to an average thin cross section adjacent the upper core grid, whereby the channel is reduced in thickness adjacent the upper core grid to correspond with the reduced pressure adjacent the upper core grid.

  4. Axially shaped channel and integral flow trippers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowther, R.L. Jr.; Johansson, E.B.; Matzner, B.

    1992-02-11

    This patent describes a fuel assembly. It comprises: fuel rods positioned in spaced array by upper and lower tie-plates, and an open ended flow channel surrounding the array for conducting coolant upward from a lower support plate having coolant communicated thereto to an upper support grid having a steam/water outlet communicated thereto. The flow channel surrounds the array for conducting coolant about the fuel rods; the open ended channel has a polygon shaped cross section, with the channel constituting a closed conduit with flat side sections connected at corners to form the enclosed conduit; means separate from the channel connect the upper and lower tie-plates together and maintain the fuel rods in spaced array independent of the flow channel. The improvement in the flow channel comprises tapered side walls, the tapered side walls extending from an average thick cross section adjacent the lower support plate to an average thin cross section adjacent the upper core grid, whereby the channel is reduced in thickness adjacent the upper core grid to correspond with the reduced pressure adjacent the upper core grid.

  5. 76 FR 44323 - National Grid Transmission Services Corporation; Bangor Hydro Electric Company; Notice of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-25

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. EL11-49-000] National Grid Transmission Services Corporation; Bangor Hydro Electric Company; Notice of Petition for Declaratory Order Take..., 18 CFR 385.207, National Grid Transmission Services Corporation and Bangor Hydro Electric Company...

  6. Dynamical Core in Atmospheric Model Does Matter in the Simulation of Arctic Climate

    NASA Astrophysics Data System (ADS)

    Jun, Sang-Yoon; Choi, Suk-Jin; Kim, Baek-Min

    2018-03-01

    Climate models using different dynamical cores can simulate significantly different winter Arctic climates even if equipped with virtually the same physics schemes. The current climate simulated by a global climate model using a cubed-sphere grid with the spectral element method (SE core) exhibited significantly warmer Arctic surface air temperatures than that using a latitude-longitude grid with a finite-volume method core. Compared to the finite-volume method core, the SE core simulated additional adiabatic warming in the Arctic lower atmosphere, and this was consistent with the eddy-forced secondary circulation. Downward longwave radiation further enhanced Arctic near-surface warming, yielding a surface air temperature higher by about 1.9 K. Furthermore, in the atmospheric response to reduced sea ice conditions with the same physical settings, only the SE core showed a robust cooling response over North America. We emphasize that special attention is needed in selecting the dynamical core of climate models in the simulation of the Arctic climate and associated teleconnection patterns.

  7. Uniformity on the grid via a configuration framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Igor V Terekhov et al.

    2003-03-11

    As Grid permeates modern computing, Grid solutions continue to emerge and take shape. The actual Grid development projects continue to provide higher-level services that evolve in functionality and operate with application-level concepts which are often specific to the virtual organizations that use them. Physically, however, grids are comprised of sites whose resources are diverse and seldom project readily onto a grid's set of concepts. In practice, this also creates problems for site administrators who actually instantiate grid services. In this paper, we present a flexible, uniform framework to configure a grid site and its facilities, and otherwise describe the resources and services it offers. We start from a site configuration and instantiate services for resource advertisement, monitoring, and data handling; we also apply our framework to hosting environment creation. We use our ideas in the Information Management part of the SAM-Grid project, a grid system which will deliver petabyte-scale data to hundreds of users. Our users are High Energy Physics experimenters who are scattered worldwide across dozens of institutions and always use facilities that are shared with other experiments as well as other grids. Our implementation represents information in the XML format and includes tools written in XQuery and XSLT.
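The configuration idea (start from one declarative site description, then instantiate service descriptors from it) can be sketched briefly. The XML element and attribute names below are invented for illustration and are not the actual SAM-Grid schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical site description in the spirit of the XML-based
# framework described above; names and endpoints are invented.
SITE_XML = """
<site name="example-site">
  <service type="advertisement" endpoint="ldap://example.org:2135"/>
  <service type="monitoring" endpoint="http://example.org/mon"/>
  <service type="data-handling" endpoint="gsiftp://example.org/data"/>
</site>
"""

def instantiate_services(xml_text):
    """Read one site configuration and return a descriptor per service."""
    root = ET.fromstring(xml_text)
    return [
        {"site": root.get("name"),
         "type": svc.get("type"),
         "endpoint": svc.get("endpoint")}
        for svc in root.findall("service")
    ]

services = instantiate_services(SITE_XML)
for svc in services:
    print(svc["type"], "->", svc["endpoint"])
```

Because every service descriptor is derived from the same document, the site administrator edits one file and the advertisement, monitoring, and data-handling services stay mutually consistent, which is the uniformity the paper argues for.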

  8. Smart grid as a service: a discussion on design issues.

    PubMed

    Chao, Hung-Lin; Tsai, Chen-Chou; Hsiung, Pao-Ann; Chou, I-Hsin

    2014-01-01

    Smart grid allows the integration of distributed renewable energy resources into the conventional electricity distribution power grid such that the goals of reduction in power cost and in environment pollution can be met through an intelligent and efficient matching between power generators and power loads. Currently, this rapidly developing infrastructure is not as "smart" as it should be because of the lack of a flexible, scalable, and adaptive structure. As a solution, this work proposes smart grid as a service (SGaaS), which not only allows a smart grid to be composed out of basic services, but also allows power users to choose between different services based on their own requirements. The two important issues of service-level agreements and composition of services are also addressed in this work. Finally, we give the details of how SGaaS can be implemented using a FIPA-compliant JADE multiagent system.

  9. Smart Grid as a Service: A Discussion on Design Issues

    PubMed Central

    Tsai, Chen-Chou; Chou, I-Hsin

    2014-01-01

    Smart grid allows the integration of distributed renewable energy resources into the conventional electricity distribution power grid such that the goals of reduction in power cost and in environment pollution can be met through an intelligent and efficient matching between power generators and power loads. Currently, this rapidly developing infrastructure is not as “smart” as it should be because of the lack of a flexible, scalable, and adaptive structure. As a solution, this work proposes smart grid as a service (SGaaS), which not only allows a smart grid to be composed out of basic services, but also allows power users to choose between different services based on their own requirements. The two important issues of service-level agreements and composition of services are also addressed in this work. Finally, we give the details of how SGaaS can be implemented using a FIPA-compliant JADE multiagent system. PMID:25243214

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jay Tillay

    For three years, Sandia National Laboratories, Georgia Institute of Technology, and the University of Illinois at Urbana-Champaign investigated a smart grid vision in which renewable-centric Virtual Power Plants (VPPs) provided ancillary services with interoperable distributed energy resources (DER). This team researched, designed, built, and evaluated real-time VPP designs incorporating DER forecasting, stochastic optimization, controls, and cyber security to construct a system capable of delivering reliable ancillary services, which have traditionally been provided by large power plants or other dedicated equipment. VPPs have become possible through an evolving landscape of state and national interconnection standards, which now require DER to include grid-support functionality and communications capabilities. This makes it possible for third party aggregators to provide a range of critical grid services such as voltage regulation, frequency regulation, and contingency reserves to grid operators. This paradigm (a) enables renewable energy, demand response, and energy storage to participate in grid operations and provide grid services, (b) improves grid reliability by providing additional operating reserves for utilities, independent system operators (ISOs), and regional transmission organizations (RTOs), and (c) removes renewable energy high-penetration barriers by providing services with photovoltaic and wind resources that traditionally were the jobs of thermal generators. Therefore, it is believed VPP deployment will have far-reaching positive consequences for grid operations and may provide a robust pathway to high penetrations of renewables on US power systems. In this report, we design VPPs to provide a range of grid-support services and demonstrate one VPP which simultaneously provides bulk-system energy and ancillary reserves.
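The aggregation role of a VPP can be sketched as a dispatcher that splits a reserve request across the headroom of its member DER. The device names, ratings, and the proportional allocation rule below are invented for illustration; a real VPP would use the forecasting and stochastic optimization described above.

```python
# Minimal VPP dispatch sketch: allocate an ancillary-services request
# proportionally to each DER's available headroom.  All data invented.
ders = {
    "pv_plant":    {"headroom_kw": 120.0},
    "battery":     {"headroom_kw": 250.0},
    "demand_resp": {"headroom_kw":  80.0},
}

def dispatch(request_kw, ders):
    """Return per-DER setpoints and the total amount actually served."""
    total = sum(d["headroom_kw"] for d in ders.values())
    served = min(request_kw, total)  # cannot serve beyond aggregate headroom
    setpoints = {name: served * d["headroom_kw"] / total
                 for name, d in ders.items()}
    return setpoints, served

setpoints, served = dispatch(300.0, ders)
print(served, {k: round(v, 1) for k, v in setpoints.items()})
```

The point of the sketch is that the grid operator sees one controllable 450 kW resource, while the VPP internally shares the burden among devices with very different characteristics.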

  11. Application of Intel Many Integrated Core (MIC) architecture to the Yonsei University planetary boundary layer scheme in Weather Research and Forecasting model

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.

    2014-10-01

    The Weather Research and Forecasting (WRF) model provides operational services worldwide in many areas and is linked to our daily activities, in particular during severe weather events. The Yonsei University (YSU) scheme is one of the planetary boundary layer (PBL) schemes in WRF. The PBL scheme is responsible for vertical sub-grid-scale fluxes due to eddy transports in the whole atmospheric column, determines the flux profiles within the well-mixed boundary layer and the stable layer, and thus provides atmospheric tendencies of temperature, moisture (including clouds), and horizontal momentum in the entire atmospheric column. The YSU scheme is very suitable for massively parallel computation as there are no interactions among horizontal grid points. To accelerate the computation of the YSU scheme, we employ the Intel Many Integrated Core (MIC) architecture, a many-core coprocessor design whose merits are efficient parallelization and vectorization. Our results show that the MIC-based optimization improved the performance of the first version of the multi-threaded code on the Xeon Phi 5110P by a factor of 2.4x. Furthermore, the same CPU-based optimizations improved the performance on an Intel Xeon E5-2603 by a factor of 1.6x compared to the first version of the multi-threaded code.
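The column independence that makes the YSU scheme parallelize and vectorize well can be illustrated with a toy tendency update applied to all columns at once. The vertical-diffusion "physics" below is a stand-in, not the YSU scheme itself, and the grid spacing, diffusivity, and time step are invented.

```python
import numpy as np

def column_tendency(T, dz=250.0, K=5.0, dt=60.0):
    """Explicit vertical diffusion of temperature, applied to every
    column simultaneously.  T has shape (levels, points); because no
    term couples different columns, the points axis vectorizes freely,
    which mirrors why PBL schemes map well onto MIC/SIMD hardware."""
    dTdt = np.zeros_like(T)
    dTdt[1:-1] = K * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz**2
    return T + dt * dTdt

rng = np.random.default_rng(0)
T = 280.0 + rng.normal(0.0, 1.0, size=(40, 10_000))  # 10,000 columns
T_new = column_tendency(T)
print(T_new.shape)
```

Each of the 10,000 columns could equally be handed to a separate thread with no communication, which is the property the paper exploits on the Xeon Phi.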

  12. Laminated grid and web magnetic cores

    DOEpatents

    Sefko, John; Pavlik, Norman M.

    1984-01-01

    A laminated magnetic core characterized by an electromagnetic core having core legs which comprise elongated apertures and edge notches disposed transversely to the longitudinal axis of the legs, such as high reluctance cores with linear magnetization characteristics for high voltage shunt reactors. In one embodiment the apertures include compact bodies of microlaminations for more flexibility and control in adjusting permeability and/or core reluctance.

  13. Space-based Science Operations Grid Prototype

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Welch, Clara L.; Redman, Sandra

    2004-01-01

    Grid technology is an emerging technology that enables widely disparate services to be offered to users economically, in an easy-to-use form, and on a scale not previously available. Under the Grid concept, disparate organizations, generally defined as "virtual organizations," can share services, i.e., the discipline-specific computer applications required to accomplish specific scientific and engineering organizational goals and objectives. Grids are emerging as the technology of the future, enabled by the evolution of increasingly high-speed networking; without that evolution, Grid technology would not have emerged. NASA/Marshall Space Flight Center's (MSFC) Flight Projects Directorate, Ground Systems Department is developing a Space-based Science Operations Grid prototype to provide to scientists and engineers the tools necessary to operate space-based science payloads/experiments and for scientists to conduct public and educational outreach. In addition, Grid technology can provide new services not currently available to users. These services include mission voice and video, application sharing, telemetry management and display, payload and experiment commanding, data mining, high-order data processing, discipline-specific application sharing, and data storage, all from a single grid portal. The Prototype will provide most of these services in a first-step demonstration of integrated Grid and space-based science operations technologies. It will initially be based on the International Space Station science operational services located at the Payload Operations Integration Center at MSFC, but can be applied to many NASA projects, including free-flying satellites and future projects. 
The Prototype will use the Internet2 Abilene Research and Education Network, currently a 10 Gb/s backbone network, to reach the University of Alabama at Huntsville and several other, as yet unidentified, Space Station based science experimenters. There is an international aspect to the Grid involving the America's Pathway (AMPath) network, the Chilean REUNA Research and Education Network, and the University of Chile in Santiago that will further demonstrate how widely these services can be used. From the user's perspective, the Prototype will provide a single interface and logon to these varied services without the complexity of knowing the where's and how's of each service. There is a separate and deliberate emphasis on security. Security will be addressed by specifically outlining the different approaches and tools used. Grid technology, unlike the Internet, is being designed with security in mind. In addition, we will show the locations, configurations, and network paths associated with each service and virtual organization. We will discuss the separate virtual organizations that we define for the varied user communities. These will include certain, as yet undetermined, space-based science functions and/or processes and will include specific virtual organizations required for public and educational outreach and for science and engineering collaboration. We will also discuss the Grid Prototype performance and the potential for further Grid applications in both space-based and ground-based projects and processes. In this paper and presentation we will detail each service and how they are integrated using Grid technology.

  14. Spaceflight Operations Services Grid (SOSG) Prototype Implementation and Feasibility Study

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Thigpen, William W.; Lisotta, Anthony J.; Redman, Sandra

    2004-01-01

    Science Operations Services Grid is focusing on building a prototype grid-based environment that incorporates existing and new spaceflight services to enable current and future NASA programs with cost savings and new and evolvable methods to conduct science in a distributed environment. The Science Operations Services Grid (SOSG) will provide a distributed environment for widely disparate organizations to conduct their systems and processes in a more efficient and cost effective manner. These organizations include those that: 1) engage in space-based science and operations, 2) develop space-based systems and processes, and 3) conduct scientific research, bringing together disparate scientific disciplines like geology and oceanography to create new information. In addition, educational outreach will be significantly enhanced by providing to schools the same tools used by NASA, with the ability of the schools to actively participate on many levels in the science generated by NASA from space and on the ground. The services range from voice, video, and telemetry processing and display to data mining, high-level processing, and visualization tools, all accessible from a single portal. In this environment, users would not require high-end systems or processes at their home locations to use these services. Also, the user would need to know minimal details about the applications in order to utilize the services. In addition, security at all levels is an underlying goal of the project. The Science Operations Services Grid will focus on four tools that are currently used by the ISS Payload community, along with nine more that are new to the community. Under the prototype, four Grid virtual organizations (VOs) will be developed to represent four types of users. They are a Payload (experimenters) VO, a Flight Controllers VO, an Engineering and Science Collaborators VO, and an Education and Public Outreach VO. 
The User-based services will be implemented to replicate the operational voice, video, telemetry, and commanding systems. Once the User-based services are in place, they will be analyzed to establish feasibility for Grid enabling. If feasible, each User-based service will be Grid enabled. The remaining non-Grid services, if not already Web enabled, will be so enabled. In the end, four portals will be developed, one for each VO. Each portal will contain the appropriate User-based services required for that VO to operate.

  15. An infrastructure for the integration of geoscience instruments and sensors on the Grid

    NASA Astrophysics Data System (ADS)

    Pugliese, R.; Prica, M.; Kourousias, G.; Del Linz, A.; Curri, A.

    2009-04-01

    The Grid, as a computing paradigm, has long been in the attention of both academia and industry [1]. The distributed and expandable nature of its general architecture results in scalability and more efficient utilisation of the computing infrastructures. The scientific community, including that of geosciences, often handles problems with very high requirements in data processing, transfer, and storage [2,3]. This has raised interest in Grid technologies, but these are often viewed solely as an access gateway to HPC. Suitable Grid infrastructures could provide the geoscience community with additional benefits like those of sharing, remote access, and control of scientific systems. These systems can be scientific instruments, sensors, robots, cameras, and any other device used in geosciences. The solution for practical, general, and feasible Grid-enabling of such devices requires non-intrusive extensions on core parts of the current Grid architecture. We propose an extended version of an architecture [4] that can serve as the solution to the problem. The solution we propose is called the Grid Instrument Element (IE) [5]. It is an addition to the existing core Grid parts: the Computing Element (CE) and the Storage Element (SE), which serve the purposes that their names suggest. The IE that we will be referring to, and the related technologies, have been developed in the EU project on the Deployment of Remote Instrumentation Infrastructure (DORII). In DORII, partners of various scientific communities, including those of earthquake, environmental, and experimental science, have adopted the technology of the Instrument Element in order to integrate their devices into the Grid. 
The Oceanographic and coastal observation and modelling Mediterranean Ocean Observing Network (OGS), a DORII partner, is in the process of deploying the above-mentioned Grid technologies on two types of observational modules: Argo profiling floats and a novel Autonomous Underwater Vehicle (AUV). In this paper (i) we define the need for integration of instrumentation in the Grid, (ii) we introduce the solution of the Instrument Element, (iii) we demonstrate a suitable end-user web portal for accessing Grid resources, and (iv) we describe, from the Grid-technological point of view, the process of integrating two advanced environmental monitoring devices into the Grid.
References
[1] M. Surridge, S. Taylor, D. De Roure, and E. Zaluska, "Experiences with GRIA—Industrial Applications on a Web Services Grid," First International Conference on e-Science and Grid Computing, 2005, pp. 98-105.
[2] A. Chervenak, I. Foster, C. Kesselman, C. Salisbury, and S. Tuecke, "The data grid: Towards an architecture for the distributed management and analysis of large scientific datasets," Journal of Network and Computer Applications, vol. 23, 2000, pp. 187-200.
[3] B. Allcock, J. Bester, J. Bresnahan, A.L. Chervenak, I. Foster, C. Kesselman, S. Meder, V. Nefedova, D. Quesnel, and S. Tuecke, "Data management and transfer in high-performance computational grid environments," Parallel Computing, vol. 28, 2002, pp. 749-771.
[4] E. Frizziero, M. Gulmini, F. Lelli, G. Maron, A. Oh, S. Orlando, A. Petrucci, S. Squizzato, and S. Traldi, "Instrument Element: A New Grid component that Enables the Control of Remote Instrumentation," Proceedings of the Sixth IEEE International Symposium on Cluster Computing and the Grid (CCGRID'06), IEEE Computer Society, Washington, DC, USA, 2006.
[5] R. Ranon, L. De Marco, A. Senerchia, S. Gabrielli, L. Chittaro, R. Pugliese, L. Del Cano, F. Asnicar, and M. Prica, "A Web-based Tool for Collaborative Access to Scientific Instruments in Cyberinfrastructures."
Footnotes: (1) The DORII project is supported by the European Commission within the 7th Framework Programme (FP7/2007-2013) under grant agreement no. RI-213110. URL: http://www.dorii.eu (2) Istituto Nazionale di Oceanografia e di Geofisica Sperimentale. URL: http://www.ogs.trieste.it
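The element taxonomy described in this record, where the Instrument Element joins the Computing Element and Storage Element as a first-class, remotely controllable Grid resource, can be sketched as a small class hierarchy. The interfaces below are illustrative only, not the actual DORII APIs.

```python
# Illustrative sketch of the Grid element taxonomy: CE runs jobs, SE
# stores data, and the new IE wraps a physical device (profiling
# float, AUV, sensor) behind a Grid-style interface.  All method
# names are invented for this sketch.
class GridElement:
    def __init__(self, name):
        self.name = name

class ComputingElement(GridElement):
    def submit(self, job):
        return f"{self.name}: running {job}"

class StorageElement(GridElement):
    def store(self, dataset):
        return f"{self.name}: stored {dataset}"

class InstrumentElement(GridElement):
    """A device exposed to the Grid as a peer of CE and SE."""
    def command(self, action):
        return f"{self.name}: executed {action}"

ie = InstrumentElement("argo-float-42")
ce = ComputingElement("ce-1")
print(ie.command("start_profile"))
print(ce.submit("profile-analysis"))
```

The design point is that instruments become addressable through the same middleware layer as computing and storage, rather than through an out-of-band control channel.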

  16. A Security Architecture for Grid-enabling OGC Web Services

    NASA Astrophysics Data System (ADS)

    Angelini, Valerio; Petronzio, Luca

    2010-05-01

    In the proposed presentation we describe an architectural solution for enabling secure access to Grids, and possibly other large-scale on-demand processing infrastructures, through OGC (Open Geospatial Consortium) Web Services (OWS). This work has been carried out in the context of the security thread of the G-OWS Working Group. G-OWS (gLite enablement of OGC Web Services) is an international open initiative started in 2008 by the European CYCLOPS, GENESI-DR, and DORII Project Consortia in order to collect and coordinate experiences in the enablement of OWS's on top of the gLite Grid middleware. G-OWS investigates the problem of the development of Spatial Data and Information Infrastructures (SDI and SII) based on the Grid/Cloud capacity in order to enable Earth Science applications and tools. Concerning security issues, the integration of OWS-compliant infrastructures and gLite Grids needs to address relevant challenges, due to their respective design principles. In fact, OWS's are part of a Web-based architecture that delegates security aspects to other specifications, whereas the gLite middleware implements the Grid paradigm with a strong security model (the gLite Grid Security Infrastructure: GSI). In our work we propose a Security Architectural Framework allowing the seamless use of Grid-enabled OGC Web Services through the federation of existing security systems (mostly web based) with the gLite GSI. This is made possible by mediating between different security realms, whose mutual trust is established in advance during the deployment of the system itself. Our architecture is composed of three different security tiers: the user's security system, a specific G-OWS security system, and the gLite Grid Security Infrastructure. 
Applying the separation-of-concerns principle, each of these tiers is responsible for controlling access to a well-defined resource set, respectively: the user's organization resources, the geospatial resources and services, and the Grid resources. While the gLite middleware is tied to a consolidated security approach based on X.509 certificates, our system is able to support different kinds of user security infrastructure. Our central component, the G-OWS Security Framework, is based on the OASIS WS-Trust specifications and on the OGC GeoRM architectural framework. This allows the system to satisfy advanced requirements such as the enforcement of specific geospatial policies and complex secure web service chained requests. The typical use case is represented by a scientist belonging to a given organization who issues a request to a G-OWS Grid-enabled Web Service. The system initially asks the user to authenticate to his/her organization's security system and, after verification of the user's security credentials, it translates the user's digital identity into a G-OWS identity. This identity is linked to a set of attributes describing the user's access rights to the G-OWS services and resources. Inside the G-OWS Security system, access restrictions are applied using the enhanced geospatial capabilities specified by OGC GeoXACML. If the required action needs to make use of the Grid environment, the system checks whether the user is entitled to access a Grid infrastructure. In that case, his/her identity is translated to a temporary Grid security token using the Short Lived Credential Services (IGTF standard). In our case, for the specific gLite Grid infrastructure, some information (VOMS attributes) is plugged into the Grid security token to grant access to the user's Virtual Organization Grid resources. The resulting token is used to submit the request to the Grid and also by the various gLite middleware elements to verify the user's grants.
Building on the presented framework, the G-OWS Security Working Group developed a prototype enabling the execution of OGC Web Services on the EGEE Production Grid through federation with a Shibboleth-based security infrastructure. Future plans aim to integrate other web authentication services such as OpenID, Kerberos and WS-Federation.
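    The three-tier identity translation described above can be sketched as a chain of credential exchanges. The sketch below is a minimal illustration under assumed names (`authenticate_home_org`, `translate_to_gows_identity`, `issue_grid_token`); the real system uses WS-Trust message exchanges, GeoXACML policy enforcement, and SLCS-issued X.509 proxies carrying VOMS attributes.

```python
# Illustrative sketch of the three-tier G-OWS credential chain.
# All function and field names are hypothetical.

def authenticate_home_org(user, password, org_idp):
    """Tier 1: the user's home organization verifies the credentials."""
    if org_idp.get(user) != password:
        raise PermissionError("home-organization authentication failed")
    return {"subject": user, "issuer": "home-org"}

def translate_to_gows_identity(org_token, attribute_registry):
    """Tier 2: map the organizational identity to a G-OWS identity
    carrying geospatial access-rights attributes."""
    attrs = attribute_registry.get(org_token["subject"], [])
    return {"subject": org_token["subject"], "attributes": attrs}

def issue_grid_token(gows_identity, vo):
    """Tier 3: issue a short-lived Grid credential (a stand-in for an
    SLCS-issued X.509 proxy with VOMS attributes plugged in)."""
    if "grid-access" not in gows_identity["attributes"]:
        raise PermissionError("user not entitled to access the Grid")
    return {"subject": gows_identity["subject"], "vo": vo, "lifetime_s": 43200}

idp = {"alice": "s3cret"}                          # organizational identity provider
registry = {"alice": ["wms:read", "grid-access"]}  # G-OWS attribute store
org_tok = authenticate_home_org("alice", "s3cret", idp)
gows_id = translate_to_gows_identity(org_tok, registry)
grid_tok = issue_grid_token(gows_id, vo="earth-science")
print(grid_tok["vo"])  # → earth-science
```

    Each tier only trusts tokens minted by the previous one, mirroring the mutual trust established at deployment time.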

  17. Selection of battery technology to support grid-integrated renewable electricity

    NASA Astrophysics Data System (ADS)

    Leadbetter, Jason; Swan, Lukas G.

    2012-10-01

    Operation of the electricity grid has traditionally relied on slow-responding base and intermediate load generators, with fast-responding peak load generators capturing the chaotic behavior of end-use demands. Many modern electricity grids are implementing intermittent non-dispatchable renewable energy resources. As a result, the existing support services are becoming inadequate and technological innovation in grid support services is necessary. Support services fall into short (seconds to minutes), medium (minutes to hours), and long duration (several hours) categories. Energy storage offers a method of providing these services and can enable increased penetration rates of renewable energy generators. Many energy storage technologies exist. Of these, batteries span a significant range of required storage capacity and power output. By assessing the energy-to-power ratio of electricity grid services, suitable battery technologies were selected. These include lead-acid, lithium-ion, sodium-sulfur, and vanadium-redox. Findings show that the variety of grid services requires different battery technologies, and that batteries are capable of meeting the short, medium, and long duration categories. A brief review of each battery technology and its present state of development, commercial implementation, and research frontiers is presented to support these classifications.
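    The selection logic, matching a service's energy-to-power ratio to candidate chemistries, can be sketched as below; the duration bands follow the paper's short/medium/long categories, but the numeric thresholds and technology groupings here are illustrative assumptions, not the paper's results.

```python
# Illustrative mapping from a grid service's required storage duration
# (its energy-to-power ratio, in hours) to candidate battery technologies.
# Thresholds and groupings are assumptions for the sketch.

def candidate_batteries(energy_to_power_hours):
    if energy_to_power_hours < 0.25:      # short duration: seconds to minutes
        return ["lithium-ion", "lead-acid"]
    elif energy_to_power_hours < 4.0:     # medium duration: minutes to hours
        return ["lithium-ion", "lead-acid", "sodium-sulfur"]
    else:                                 # long duration: several hours
        return ["sodium-sulfur", "vanadium-redox"]

# A frequency-regulation service cycling on the order of a minute:
print(candidate_batteries(1 / 60))  # → ['lithium-ion', 'lead-acid']
```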

  18. The Particle Physics Data Grid. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron

    2002-08-16

    The main objective of the Particle Physics Data Grid (PPDG) project has been to implement and evaluate distributed (Grid-enabled) data access and management technology for current and future particle and nuclear physics experiments. The specific goals of PPDG have been to design, implement, and deploy a Grid-based software infrastructure capable of supporting the data generation, processing and analysis needs common to the physics experiments represented by the participants, and to adapt experiment-specific software to operate in the Grid environment and to exploit this infrastructure. To accomplish these goals, the PPDG focused on the implementation and deployment of several critical services: reliable and efficient file replication service, high-speed data transfer services, multisite file caching and staging service, and reliable and recoverable job management services. The focus of the activity was the job management services and the interplay between these services and distributed data access in a Grid environment. Software was developed to study the interaction between HENP applications and distributed data storage fabric. One key conclusion was the need for a reliable and recoverable tool for managing large collections of interdependent jobs. An attached document provides an overview of the current status of the Directed Acyclic Graph Manager (DAGMan) with its main features and capabilities.
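    The "reliable and recoverable tool for managing large collections of interdependent jobs" can be illustrated with a minimal DAG executor: run each job only after its parents succeed, and retry failures a bounded number of times. This is a hypothetical sketch in the spirit of DAGMan, not its actual implementation.

```python
# Minimal sketch of managing interdependent jobs: topological execution
# with bounded retries. (Hypothetical stand-in for DAGMan-style recovery.)

def run_dag(jobs, deps, execute, retries=2):
    """jobs: list of names; deps: {job: [parent jobs]};
    execute: job -> bool (True on success)."""
    done, order = set(), []
    while len(done) < len(jobs):
        ready = [j for j in jobs if j not in done
                 and all(p in done for p in deps.get(j, []))]
        if not ready:
            raise RuntimeError("cycle or unsatisfiable dependency")
        for job in ready:
            for attempt in range(retries + 1):
                if execute(job):          # retry a failed job...
                    done.add(job)
                    order.append(job)
                    break
            else:                         # ...but give up after the budget
                raise RuntimeError(f"job {job} failed after retries")
    return order

order = run_dag(["transfer", "stage", "analyze"],
                {"stage": ["transfer"], "analyze": ["stage"]},
                execute=lambda job: True)
print(order)  # → ['transfer', 'stage', 'analyze']
```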

  19. An Execution Service for Grid Computing

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Hu, Chaumin

    2004-01-01

    This paper describes the design and implementation of the IPG Execution Service that reliably executes complex jobs on a computational grid. Our Execution Service is part of the IPG service architecture whose goal is to support location-independent computing. In such an environment, once a user ports an application to one or more hardware/software platforms, the user can describe this environment to the grid; the grid can locate instances of this platform, configure the platform as required for the application, and then execute the application. Our Execution Service runs jobs that set up such environments for applications and executes them. These jobs consist of a set of tasks for executing applications and managing data. The tasks have user-defined starting conditions that allow users to specify complex dependencies, including tasks to execute when tasks fail (a frequent occurrence in a large distributed system) or are cancelled. The execution task provided by our service also configures the application environment exactly as specified by the user and captures the exit code of the application, features that many grid execution services do not support due to difficulties interfacing to local scheduling systems.
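    The user-defined starting conditions, including tasks triggered by another task's failure, can be sketched as follows; `run_with_conditions` and its tuple format are illustrative assumptions, not the IPG service's actual interface.

```python
# Sketch of user-defined task starting conditions, including tasks that run
# when another task fails. Names and the condition format are illustrative.

def run_with_conditions(tasks):
    """tasks: list of (name, action, start_on) where start_on is
    ('success', parent), ('failure', parent), or None for an initial task."""
    status = {}
    for name, action, start_on in tasks:
        if start_on is not None:
            kind, parent = start_on
            if status.get(parent) != kind:   # condition not met: skip
                status[name] = "skipped"
                continue
        try:
            action()
            status[name] = "success"
        except Exception:
            status[name] = "failure"
    return status

def failing_app():
    raise RuntimeError("application crashed")

status = run_with_conditions([
    ("stage_data", lambda: None, None),
    ("run_app", failing_app, ("success", "stage_data")),
    ("cleanup_on_fail", lambda: None, ("failure", "run_app")),
])
print(status)
```

    Here the cleanup task starts precisely because `run_app` failed, the failure-handling pattern the abstract describes.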

  20. On-site fuel cell field test support program

    NASA Astrophysics Data System (ADS)

    Staniunas, J. W.; Merten, G. P.

    1982-01-01

    In order to assess the impact of grid connection on the potential market for fuel cell service, applications studies were conducted to identify the fuel cell operating modes and corresponding fuel cell sizing criteria which offer the most potential for initial commercial service. The market for grid-connected fuel cell service was quantified using United's market analysis program and computerized building data base. Electric and gas consumption data for 268 buildings were added to our surveyed building data file, bringing the total to 407 buildings. These buildings were analyzed for grid-isolated and grid-connected fuel cell service. The results of the analyses indicated that the nursing home, restaurant and health club building sectors offer significant potential for fuel cell service.

  1. NCAR global model topography generation software for unstructured grids

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Bacmeister, J. T.; Callaghan, P. F.; Taylor, M. A.

    2015-06-01

    It is the purpose of this paper to document the NCAR global model topography generation software for unstructured grids. Given a model grid, the software computes the fraction of the grid box covered by land, the grid-box mean elevation, and the associated sub-grid-scale variances commonly used for gravity wave and turbulent mountain stress parameterizations. The software supports regular latitude-longitude grids as well as unstructured grids, e.g., icosahedral, Voronoi, cubed-sphere and variable-resolution grids. As an example application, and in the spirit of documenting model development, exploratory simulations illustrating the impacts of topographic smoothing with the NCAR-DOE CESM (Community Earth System Model) CAM5.2-SE (Community Atmosphere Model version 5.2 - Spectral Elements dynamical core) are shown.
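    The core aggregation step, computing a grid-box mean elevation and the sub-grid variance from high-resolution samples, can be sketched as follows; this pure-Python illustration omits the land-fraction computation and unstructured-grid geometry that the real software handles.

```python
# Sketch of the per-grid-box statistics: given high-resolution elevation
# samples falling inside one model grid box, compute the grid-box mean
# elevation and the sub-grid variance used by gravity-wave and turbulent
# mountain stress parameterizations.

def gridbox_stats(elevations):
    n = len(elevations)
    mean = sum(elevations) / n
    variance = sum((z - mean) ** 2 for z in elevations) / n
    return mean, variance

samples = [120.0, 180.0, 150.0, 150.0]   # hypothetical high-res elevations (m)
mean, var = gridbox_stats(samples)
print(mean, var)  # → 150.0 450.0
```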

  2. Grist : grid-based data mining for astronomy

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Katz, Daniel S.; Miller, Craig D.; Walia, Harshpreet; Williams, Roy; Djorgovski, S. George; Graham, Matthew J.; Mahabal, Ashish; Babu, Jogesh; Berk, Daniel E. Vanden

    2004-01-01

    The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the 'hyperatlas' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.

  3. Grist: Grid-based Data Mining for Astronomy

    NASA Astrophysics Data System (ADS)

    Jacob, J. C.; Katz, D. S.; Miller, C. D.; Walia, H.; Williams, R. D.; Djorgovski, S. G.; Graham, M. J.; Mahabal, A. A.; Babu, G. J.; vanden Berk, D. E.; Nichol, R.

    2005-12-01

    The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the ``hyperatlas'' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.

  4. Quality Assurance Framework for Mini-Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baring-Gould, Ian; Burman, Kari; Singh, Mohit

    Providing clean and affordable energy services to the more than 1 billion people globally who lack access to electricity is a critical driver for poverty reduction, economic development, improved health, and social outcomes. More than 84% of populations without electricity are located in rural areas where traditional grid extension may not be cost-effective; therefore, distributed energy solutions such as mini-grids are critical. To address some of the root challenges of providing safe, quality, and financially viable mini-grid power systems to remote customers, the U.S. Department of Energy (DOE) teamed with the National Renewable Energy Laboratory (NREL) to develop a Quality Assurance Framework (QAF) for isolated mini-grids. The QAF for mini-grids aims to address some root challenges of providing safe, quality, and affordable power to remote customers via financially viable mini-grids through two key components: (1) Levels of service: Defines a standard set of tiers of end-user service and links them to technical parameters of power quality, power availability, and power reliability. These levels of service span the entire energy ladder, from basic energy service to high-quality, high-reliability, and high-availability service (often considered 'grid parity'); (2) Accountability and performance reporting framework: Provides a clear process of validating power delivery by providing trusted information to customers, funders, and/or regulators. The performance reporting protocol can also serve as a robust monitoring and evaluation tool for mini-grid operators and funding organizations. The QAF will provide a flexible alternative to rigid top-down standards for mini-grids in energy access contexts, outlining tiers of end-user service and linking them to relevant technical parameters.
In addition, data generated through implementation of the QAF will provide the foundation for comparisons across projects, assessment of impacts, and greater confidence that will drive investment and scale-up in this sector. The QAF implementation process also defines a set of implementation guidelines that support the deployment of mini-grids at regional or national scale, helping to ensure the successful rapid rollout of these relatively new remote energy options. Note that the QAF is technology agnostic, addressing both alternating current (AC) and direct current (DC) mini-grids, and is also applicable to renewable, fossil-fuel, and hybrid systems.
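    The levels-of-service idea, linking measured technical parameters to a tier of end-user service, can be sketched as a simple classifier; the tier names and numeric thresholds below are illustrative assumptions, not the published QAF values.

```python
# Sketch of mapping measured power availability and reliability to a tier
# of end-user service, in the spirit of the QAF's levels-of-service
# component. Thresholds and tier labels are illustrative.

def service_tier(availability_pct, outages_per_month):
    if availability_pct >= 99.9 and outages_per_month <= 1:
        return "Tier 5 (grid parity)"
    if availability_pct >= 95.0 and outages_per_month <= 10:
        return "Tier 3 (reliable daytime and evening service)"
    if availability_pct >= 50.0:
        return "Tier 1 (basic energy service)"
    return "Tier 0 (below basic service)"

print(service_tier(99.95, 0))  # → Tier 5 (grid parity)
```

    A performance-reporting protocol would then periodically publish these measured parameters so customers, funders, and regulators can verify the tier actually delivered.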

  5. Distinction of Concept and Discussion on Construction Idea of Smart Water Grid Project

    NASA Astrophysics Data System (ADS)

    Ye, Y.; Yizi, S., Sr.; Lili, L., Sr.; Sang, X.; Zhai, J.

    2016-12-01

    A smart water grid project comprises construction of a physical water grid consisting of various flow-regulating infrastructures, construction of a water information grid in line with the trend toward intelligent technology, and construction of a water management grid characterized by institutional development and systematized regulation decision-making. It is the integrated platform and comprehensive carrier for water conservancy practices. Although dispute remains over the engineering construction concept of the smart water grid, it represents the future development trend of water management and is receiving increasing emphasis. Based on a distinction between the concepts of the water grid and water grid engineering, the paper explains the concept of water grid intelligentization, actively probes into construction ideas for smart water grid projects in China, and presents the scientific problems to be solved as well as the core technologies to be mastered for smart water grid construction.

  6. The architecture of a virtual grid GIS server

    NASA Astrophysics Data System (ADS)

    Wu, Pengfei; Fang, Yu; Chen, Bin; Wu, Xi; Tian, Xiaoting

    2008-10-01

    The grid computing technology provides the service-oriented architecture for distributed applications. The virtual Grid GIS server is a distributed, interoperable enterprise GIS architecture running in the grid environment that integrates heterogeneous GIS platforms. All sorts of legacy GIS platforms join the grid as members of a GIS virtual organization. Based on a microkernel, we design the ESB and the portal GIS service layer, which together compose Microkernel GIS. Through web portals, portal GIS services, and the mediation of the service bus, following the principle of separation of concerns (SoC), we separate business logic from implementation logic. Microkernel GIS greatly reduces the coupling between applications and GIS platforms: enterprise applications become independent of particular GIS platforms, allowing application developers to concentrate on business logic. Via configuration and orchestration of a set of fine-grained services, the system creates a GIS Business, which acts as a whole WebGIS request when activated. In this way, the system satisfies a business workflow directly and simply, with little or no new code.

  7. A new service-oriented grid-based method for AIoT application and implementation

    NASA Astrophysics Data System (ADS)

    Zou, Yiqin; Quan, Li

    2017-07-01

    The traditional three-layer Internet of Things (IoT) model, comprising a physical perception layer, an information transfer layer and a service application layer, cannot fully express the complexity and diversity of the agricultural engineering area. It is hard to categorize, organize and manage agricultural things with these three layers. Based on these requirements, we propose a new service-oriented grid-based method to set up and build the agricultural IoT. Considering the heterogeneity, resource limitations, transparency and layered structure of agricultural things, we propose an abstract model for all agricultural resources. This model is service-oriented and expressed with the Open Grid Services Architecture (OGSA). Information and data of agricultural things are described and encapsulated using XML in this model. Each agricultural engineering application provides service by enabling one application node in this service-oriented grid. A description of the Web Services Resource Framework (WSRF)-based Agricultural Internet of Things (AIoT) and the encapsulation method used for resource management in this model are also discussed in this paper.

  8. Optimal Coordination of Building Loads and Energy Storage for Power Grid and End User Services

    DOE PAGES

    Hao, He; Wu, Di; Lian, Jianming; ...

    2017-01-18

    Demand response and energy storage play a profound role in the smart grid. The focus of this study is to evaluate benefits of coordinating flexible loads and energy storage to provide power grid and end user services. We present a Generalized Battery Model (GBM) to describe the flexibility of building loads and energy storage. An optimization-based approach is proposed to characterize the parameters (power and energy limits) of the GBM for flexible building loads. We then develop optimal coordination algorithms to provide power grid and end user services such as energy arbitrage, frequency regulation, spinning reserve, as well as energy cost and demand charge reduction. Several case studies have been performed to demonstrate the efficacy of the GBM and coordination algorithms, and evaluate the benefits of using their flexibility for power grid and end user services. We show that optimal coordination yields significant cost savings and revenue. Moreover, the best option for power grid services is to provide energy arbitrage and frequency regulation. Furthermore, when coordinating flexible loads with energy storage to provide end user services, it is recommended to consider demand charge in addition to time-of-use price in order to flatten the aggregate power profile.
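    The Generalized Battery Model can be sketched as a feasibility check: a dispatch profile is admissible exactly when it respects the model's power limits at every step and its integrated energy stays within the energy limits. The parameter values below are illustrative, not the paper's.

```python
# Sketch of the Generalized Battery Model (GBM): a flexible load or storage
# device is abstracted by power limits [p_min, p_max] and energy limits
# [e_min, e_max]; a profile is feasible iff it stays inside both envelopes.

def gbm_feasible(power_profile, dt_h, p_min, p_max, e_min, e_max, e0=0.0):
    """power_profile in kW (positive = charging), dt_h = step length in hours."""
    energy = e0
    for p in power_profile:
        if not (p_min <= p <= p_max):     # instantaneous power limit
            return False
        energy += p * dt_h                # integrate state of charge
        if not (e_min <= energy <= e_max):  # cumulative energy limit
            return False
    return True

# A 4-hour charge/discharge cycle within a +/-5 kW, 0-10 kWh envelope:
print(gbm_feasible([5, 5, -5, -5], dt_h=1.0,
                   p_min=-5, p_max=5, e_min=0, e_max=10))  # → True
```

    With this abstraction, a coordinator can treat a building's flexible loads and a physical battery identically and optimize their combined profile against prices or regulation signals.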

  9. A variable resolution nonhydrostatic global atmospheric semi-implicit semi-Lagrangian model

    NASA Astrophysics Data System (ADS)

    Pouliot, George Antoine

    2000-10-01

    The objective of this project is to develop a variable-resolution finite difference adiabatic global nonhydrostatic semi-implicit semi-Lagrangian (SISL) model based on the fully compressible nonhydrostatic atmospheric equations. To achieve this goal, a three-dimensional variable resolution dynamical core was developed and tested. The main characteristics of the dynamical core can be summarized as follows: Spherical coordinates were used in a global domain. A hydrostatic/nonhydrostatic switch was incorporated into the dynamical equations to use the fully compressible atmospheric equations. A generalized horizontal variable resolution grid was developed and incorporated into the model. For a variable resolution grid, in contrast to a uniform resolution grid, the order of accuracy of finite difference approximations is formally lost but remains close to the order of accuracy associated with the uniform resolution grid provided the grid stretching is not too significant. The SISL numerical scheme was implemented for the fully compressible set of equations. In addition, the generalized minimum residual (GMRES) method with restart and preconditioner was used to solve the three-dimensional elliptic equation derived from the discretized system of equations. The three-dimensional momentum equation was integrated in vector-form to incorporate the metric terms in the calculations of the trajectories. Using global re-analysis data for a specific test case, the model was compared to similar SISL models previously developed. Reasonable agreement between the model and the other independently developed models was obtained. The Held-Suarez test for dynamical cores was used for a long integration and the model was successfully integrated for up to 1200 days. Idealized topography was used to test the variable resolution component of the model. Nonhydrostatic effects were simulated at grid spacings of 400 meters with idealized topography and uniform flow. 
Using a high-resolution topographic data set and the variable resolution grid, sets of experiments with increasing resolution were performed over specific regions of interest. Using realistic initial conditions derived from re-analysis fields, nonhydrostatic effects were significant for grid spacings on the order of 0.1 degrees with orographic forcing. If the model code was adapted for use in a message passing interface (MPI) on a parallel supercomputer today, it was estimated that a global grid spacing of 0.1 degrees would be achievable for a global model. In this case, nonhydrostatic effects would be significant for most areas. A variable resolution grid in a global model provides a unified and flexible approach to many climate and numerical weather prediction problems. The ability to configure the model from very fine to very coarse resolutions allows for the simulation of atmospheric phenomena at different scales using the same code. We have developed a dynamical core illustrating the feasibility of using a variable resolution in a global model.
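    The variable-resolution idea, clustering grid points over a region of interest with a smooth stretching so that neighboring spacings stay close and the formal loss of finite-difference accuracy stays small, can be sketched in one dimension; the sinh-based mapping below is an illustrative choice, not the model's actual grid generator.

```python
import math

# 1-D variable-resolution grid: map uniform points s in [-1, 1] through
# x = sinh(a*s)/sinh(a), which clusters points near x = 0 (the region of
# interest) while the spacing grows smoothly toward the domain edges.

def stretched_grid(n, a=2.5):
    return [math.sinh(a * (-1.0 + 2.0 * i / (n - 1))) / math.sinh(a)
            for i in range(n)]

x = stretched_grid(21)
spacings = [b - c for c, b in zip(x, x[1:])]
print(round(min(spacings), 4), round(max(spacings), 4))
```

    Because adjacent spacings change gradually (their ratio stays near one), the discretization remains close to the accuracy of the uniform grid, as the abstract notes.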

  10. A Simple XML Producer-Consumer Protocol

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)

    2001-01-01

    There are many different projects from government, academia, and industry that provide services for delivering events in distributed environments. The problem with these event services is that they are not general enough to support all uses and they speak different protocols so that they cannot interoperate. We require such interoperability when we, for example, wish to analyze the performance of an application in a distributed environment. Such an analysis might require performance information from the application, computer systems, networks, and scientific instruments. In this work we propose and evaluate a standard XML-based protocol for the transmission of events in distributed systems. One recent trend in government and academic research is the development and deployment of computational grids. Computational grids are large-scale distributed systems that typically consist of high-performance compute, storage, and networking resources. Examples of such computational grids are the DOE Science Grid, the NASA Information Power Grid (IPG), and the NSF Partnerships for Advanced Computing Infrastructure (PACIs). The major effort to deploy these grids is in the area of developing the software services to allow users to execute applications on these large and diverse sets of resources. These services include security, execution of remote applications, managing remote data, access to information about resources and services, and so on. There are several toolkits for providing these services such as Globus, Legion, and Condor. As part of these efforts to develop computational grids, the Global Grid Forum is working to standardize the protocols and APIs used by various grid services. This standardization will allow interoperability between the client and server software of the toolkits that are providing the grid services. 
The goal of the Performance Working Group of the Grid Forum is to standardize protocols and representations related to the storage and distribution of performance data. These standard protocols and representations must support tasks such as profiling parallel applications, monitoring the status of computers and networks, and monitoring the performance of services provided by a computational grid. This paper describes a proposed protocol and data representation for the exchange of events in a distributed system. The protocol exchanges messages formatted in XML and it can be layered atop any low-level communication protocol such as TCP or UDP. Further, we describe Java and C++ implementations of this protocol and discuss their performance. The next section will provide some further background information. Section 3 describes the main communication patterns of our protocol. Section 4 describes how we represent events and related information using XML. Section 5 describes our protocol and Section 6 discusses the performance of two implementations of the protocol. Finally, an appendix provides the XML Schema definition of our protocol and event information.
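    The producer-consumer exchange of XML-formatted events can be sketched with the standard library: a producer serializes a performance event, a consumer parses it back. The element and attribute names below are illustrative assumptions; the actual schema is the one defined in the paper's appendix.

```python
# Sketch of an XML-formatted event of the kind the proposed protocol
# exchanges. Element and attribute names are illustrative.

import xml.etree.ElementTree as ET

def make_event(source, event_type, value, timestamp):
    """Producer side: serialize one performance event to XML text."""
    event = ET.Element("event", type=event_type, source=source)
    ET.SubElement(event, "timestamp").text = str(timestamp)
    ET.SubElement(event, "value").text = str(value)
    return ET.tostring(event, encoding="unicode")

def parse_event(xml_text):
    """Consumer side: recover the event fields from the XML text."""
    event = ET.fromstring(xml_text)
    return {"source": event.get("source"),
            "type": event.get("type"),
            "value": float(event.find("value").text)}

msg = make_event("compute-node-17", "cpu.load", 0.82, 988120800)
print(parse_event(msg)["value"])  # → 0.82
```

    Because the payload is self-describing XML, the same message can travel over TCP, UDP, or any other transport the implementations layer it on.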

  11. The Open Science Grid - Support for Multi-Disciplinary Team Science - the Adolescent Years

    NASA Astrophysics Data System (ADS)

    Bauerdick, Lothar; Ernst, Michael; Fraser, Dan; Livny, Miron; Pordes, Ruth; Sehgal, Chander; Würthwein, Frank; Open Science Grid

    2012-12-01

    As it enters adolescence the Open Science Grid (OSG) is bringing a maturing fabric of Distributed High Throughput Computing (DHTC) services that supports an expanding HEP community to an increasingly diverse spectrum of domain scientists. Working closely with researchers on campuses throughout the US and in collaboration with national cyberinfrastructure initiatives, we transform their computing environment through new concepts, advanced tools and deep experience. We discuss examples of these including: the pilot-job overlay concepts and technologies now in use throughout OSG and delivering 1.4 million CPU hours/day; the role of campus infrastructures, built out from concepts of sharing across multiple local faculty clusters (already put to good use by many of the HEP Tier-2 sites in the US); the work towards the use of clouds and access to high-throughput parallel (multi-core and GPU) compute resources; and the progress we are making towards meeting the data management and access needs of non-HEP communities with general tools derived from the experience of the parochial tools in HEP (integration of Globus Online, prototyping with iRODS, investigations into wide-area Lustre). We will also review our activities and experiences as an HTC service provider to the recently awarded NSF XD XSEDE project, the evolution of the US NSF TeraGrid project, and how we are extending the reach of HTC through this activity to the increasingly broad national cyberinfrastructure. We believe that a coordinated view of the HPC and HTC resources in the US will further expand their impact on scientific discovery.

  12. FORTRAN programs to process Magsat data for lithospheric, external field, and residual core components

    NASA Technical Reports Server (NTRS)

    Alsdorf, Douglas E.; Vonfrese, Ralph R. B.

    1994-01-01

    The FORTRAN programs supplied in this document provide a complete processing package for statistically extracting residual core, external field and lithospheric components in Magsat observations. To process the individual passes: (1) orbits are separated into dawn and dusk local times and by altitude, (2) passes are selected based on the variance of the magnetic field observations after a least-squares fit of the core field is removed from each pass over the study area, and (3) spatially adjacent passes are processed with a Fourier correlation coefficient filter to separate coherent and non-coherent features between neighboring tracks. In the second stage of map processing: (1) data from the passes are normalized to a common altitude and gridded into dawn and dusk maps with least squares collocation, (2) dawn and dusk maps are correlated with a Fourier correlation coefficient filter to separate coherent and non-coherent features; the coherent features are averaged to produce a total field grid, (3) total field grids from all altitudes are continued to a common altitude, correlation filtered for coherent anomaly features, and subsequently averaged to produce the final total field grid for the study region, and (4) the total field map is differentially reduced to the pole.

  13. A System for Monitoring and Management of Computational Grids

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Biegel, Bryan (Technical Monitor)

    2002-01-01

    As organizations begin to deploy large computational grids, it has become apparent that systems for observation and control of the resources, services, and applications that make up such grids are needed. Administrators must observe the operation of resources and services to ensure that they are operating correctly and they must control the resources and services to ensure that their operation meets the needs of users. Users are also interested in the operation of resources and services so that they can choose the most appropriate ones to use. In this paper we describe a prototype system to monitor and manage computational grids and describe the general software framework for control and observation in distributed environments that it is based on.

  14. Soil Sampling Techniques For Alabama Grain Fields

    NASA Technical Reports Server (NTRS)

    Thompson, A. N.; Shaw, J. N.; Mask, P. L.; Touchton, J. T.; Rickman, D.

    2003-01-01

    Characterizing the spatial variability of nutrients facilitates precision soil sampling. Questions exist regarding the best technique for directed soil sampling based on a priori knowledge of soil and crop patterns. The objective of this study was to evaluate zone delineation techniques for Alabama grain fields to determine which method best minimized the soil test variability. Site one (25.8 ha) and site three (20.0 ha) were located in the Tennessee Valley region, and site two (24.2 ha) was located in the Coastal Plain region of Alabama. Tennessee Valley soils ranged from well drained Rhodic and Typic Paleudults to somewhat poorly drained Aquic Paleudults and Fluventic Dystrudepts. Coastal Plain soils ranged from coarse-loamy Rhodic Kandiudults to loamy Arenic Kandiudults. Soils were sampled by grid soil sampling methods (grid sizes of 0.40 ha and 1 ha) consisting of: 1) twenty composited cores collected randomly throughout each grid (grid-cell sampling) and, 2) six composited cores collected randomly from a 3x3 m area at the center of each grid (grid-point sampling). Zones were established from 1) an Order 1 Soil Survey, 2) corn (Zea mays L.) yield maps, and 3) airborne remote sensing images. All soil properties were moderately to strongly spatially dependent as per semivariogram analyses. Differences in grid-point and grid-cell soil test values suggested grid-point sampling does not accurately represent grid values. Zones created by soil survey, yield data, and remote sensing images displayed lower coefficients of variation (%CV) for soil test values than overall field values, suggesting these techniques group soil test variability. However, few differences were observed between the three zone delineation techniques. Results suggest directed sampling using zone delineation techniques outlined in this paper would result in more efficient soil sampling for these Alabama grain fields.

  15. Robust and efficient overset grid assembly for partitioned unstructured meshes

    NASA Astrophysics Data System (ADS)

    Roget, Beatrice; Sitaraman, Jayanarayanan

    2014-03-01

    This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning.
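
    The donor-search step described above can be illustrated with a naive 2-D version; the triangle cells and linear scan below are a hypothetical stand-in for the paper's accelerated search over partitioned unstructured meshes:

```python
def contains(tri, p):
    """Barycentric-style point-in-triangle test: p lies inside tri (a 2-D
    cell) if the three signed areas do not have mixed signs."""
    (ax, ay), (bx, by), (cx, cy) = tri
    px, py = p
    d1 = (px - bx) * (ay - by) - (ax - bx) * (py - by)
    d2 = (px - cx) * (by - cy) - (bx - cx) * (py - cy)
    d3 = (px - ax) * (cy - ay) - (cx - ax) * (py - ay)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def donor_search(cells, point):
    """Naive donor search: index of the cell containing point, or None
    (a candidate hole/orphan point). Real assemblers accelerate this
    with spatial trees or bounding-box filters instead of a full scan."""
    for i, cell in enumerate(cells):
        if contains(cell, point):
            return i
    return None

cells = [((0, 0), (1, 0), (0, 1)), ((1, 0), (1, 1), (0, 1))]
print(donor_search(cells, (0.2, 0.2)))  # found in the first triangle
print(donor_search(cells, (2.0, 2.0)))  # outside every cell: no donor
```

    Points with a donor become receptor candidates; points with none are flagged for hole cutting or orphan treatment.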

  16. Self-adaptive Fault-Tolerance of HLA-Based Simulations in the Grid Environment

    NASA Astrophysics Data System (ADS)

    Huang, Jijie; Chai, Xudong; Zhang, Lin; Li, Bo Hu

    The objects of a HLA-based simulation can access model services to update their attributes. However, the grid server may be overloaded and refuse the model service to handle objects accesses. Because these objects have been accessed this model service during last simulation loop and their medium state are stored in this server, this may terminate the simulation. A fault-tolerance mechanism must be introduced into simulations. But the traditional fault-tolerance methods cannot meet the above needs because the transmission latency between a federate and the RTI in grid environment varies from several hundred milliseconds to several seconds. By adding model service URLs to the OMT and expanding the HLA services and model services with some interfaces, this paper proposes a self-adaptive fault-tolerance mechanism of simulations according to the characteristics of federates accessing model services. Benchmark experiments indicate that the expanded HLA/RTI can make simulations self-adaptively run in the grid environment.

  17. A Qualitative Meta-Analysis of the Diffusion of Mandated and Subsidized Technology: United States Energy Security and Independence

    ERIC Educational Resources Information Center

    Noah, Philip D., Jr.

    2013-01-01

    The purpose of this research project was to explore what the core factors are that play a role in the development of the smart-grid. This research study examined The Energy Independence and Security Act (EISA) of 2007 as it pertains to the smart-grid, the economic and security effects of the smart grid, and key factors for its success. The…

  18. Vorticity-divergence semi-Lagrangian global atmospheric model SL-AV20: dynamical core

    NASA Astrophysics Data System (ADS)

    Tolstykh, Mikhail; Shashkin, Vladimir; Fadeev, Rostislav; Goyman, Gordey

    2017-05-01

    SL-AV (semi-Lagrangian, based on the absolute vorticity equation) is a global hydrostatic atmospheric model. Its latest version, SL-AV20, provides the global operational medium-range weather forecast with 20 km resolution over Russia. The lower-resolution configurations of SL-AV20 are being tested for seasonal prediction and climate modeling. The article presents the model dynamical core. Its main features are a vorticity-divergence formulation on an unstaggered grid, high-order finite-difference approximations, semi-Lagrangian semi-implicit discretization and a reduced latitude-longitude grid with variable resolution in latitude. The accuracy of SL-AV20 numerical solutions using the reduced lat-lon grid and variable resolution in latitude is tested with two idealized test cases. Accuracy and stability of SL-AV20 in the presence of orography forcing are tested using the mountain-induced Rossby wave test case. The results of all three tests are in good agreement with other published model solutions. It is shown that the use of the reduced grid does not significantly affect the accuracy up to a 25% reduction in the number of grid points with respect to the regular grid. Variable resolution in latitude allows us to improve the accuracy of the solution in the region of interest.
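
    The point savings of a reduced lat-lon grid can be sketched by shrinking the number of longitudes per latitude row toward the poles; the cosine rule below is an illustrative choice, not the actual SL-AV20 reduction algorithm:

```python
import math

def reduced_grid_points(nlat, nlon_equator):
    """Points in an illustrative reduced lat-lon grid: the number of
    longitudes per latitude row shrinks toward the poles roughly as
    cos(latitude), keeping the physical grid spacing near-uniform."""
    total = 0
    for j in range(nlat):
        lat = math.pi * (j + 0.5) / nlat - math.pi / 2  # row latitude
        nlon = max(4, int(nlon_equator * math.cos(lat)))
        total += nlon
    return total

regular = 180 * 360                          # regular 1-degree grid
reduced = reduced_grid_points(180, 360)      # cosine-reduced counterpart
print(1 - reduced / regular)                 # fraction of points saved
```

    A full cosine taper saves more than the 25% reduction the abstract reports as accuracy-neutral; operational models typically reduce more conservatively.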

  19. NAS Grid Benchmarks: A Tool for Grid Space Exploration

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We present an approach for benchmarking services provided by computational Grids. It is based on the NAS Parallel Benchmarks (NPB) and is called the NAS Grid Benchmark (NGB) in this paper. We present NGB as a data flow graph encapsulating an instance of an NPB code in each graph node, which communicates with other nodes by sending/receiving initialization data. These nodes may be mapped to the same or different Grid machines. Like NPB, NGB will specify several different classes (problem sizes). NGB also specifies the generic Grid services sufficient for running the benchmark. The implementor has the freedom to choose any specific Grid environment. However, we describe a reference implementation in Java, and present some scenarios for using NGB.
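
    The data-flow-graph idea can be sketched as a topological walk over benchmark nodes, each consuming its predecessors' outputs; the node names and the `run` stub below are hypothetical, not the actual NGB graphs:

```python
from graphlib import TopologicalSorter

# Each key maps a node to the set of nodes it receives data from.
# NPB-like kernel names used purely for illustration.
graph = {"BT": {"SP"}, "SP": {"LU"}, "LU": set(), "MG": {"LU"}}

def run(node, inputs):
    """Stand-in for launching an NPB kernel on some Grid machine;
    here it just records the chain of data it was fed."""
    return f"{node}({','.join(sorted(inputs)) or 'init'})"

results = {}
for node in TopologicalSorter(graph).static_order():
    results[node] = run(node, [results[p] for p in graph[node]])
print(results["BT"])
```

    In NGB proper, each node would be an NPB solver instance and the edges would carry initialization data between (possibly different) Grid machines.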

  20. A TEM analysis of nanoparticulates in a Polar ice core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esquivel, E.V.; Murr, L.E.

    2004-03-15

    This paper explores the prospect of analyzing nanoparticulates in age-dated ice cores representing times in antiquity to establish a historical reference for atmospheric particulate regimes. Analytical transmission electron microscope (TEM) techniques were utilized to observe representative ice-melt water drops dried down on carbon/formvar or similar coated grids. A 10,000-year-old Greenland ice core was melted, and representative water drops were transferred to coated grids in a clean room environment. Essentially all particulates observed were aggregates, either crystalline or complex mixtures of nanocrystals. Especially notable was the observation of carbon nanotubes and related fullerene-like nanocrystal forms. These observations are similar to some aspects of contemporary airborne particulates, including carbon nanotubes and complex nanocrystal aggregates.

  1. Modelling noise propagation using Grid Resources. Progress within GDI-Grid

    NASA Astrophysics Data System (ADS)

    Kiehle, Christian; Mayer, Christian; Padberg, Alexander; Stapelfeld, Hartmut

    2010-05-01

    GDI-Grid (English: SDI-Grid) is a research project funded by the German Federal Ministry of Education and Research (BMBF). It aims at bridging the gaps between OGC Web Services (OWS) and Grid infrastructures, and at identifying the potential of utilizing the superior storage capacities and computational power of grid infrastructures for geospatial applications while keeping the well-known service interfaces specified by the OGC. The project considers all major OGC web service interfaces: Web Mapping (WMS), feature access (Web Feature Service), coverage access (Web Coverage Service) and processing (Web Processing Service). The major challenge within GDI-Grid is the harmonization of diverging standards as defined by standardization bodies for Grid computing and spatial information exchange. The project started in 2007 and will continue until June 2010. The concept for the gridification of OWS developed by lat/lon GmbH and the Department of Geography of the University of Bonn is applied to three real-world scenarios in order to check its practicability: a flood simulation, a scenario for emergency routing and a noise propagation simulation. The latter scenario is addressed by Stapelfeldt Ingenieurgesellschaft mbH, located in Dortmund, which is adapting its LimA software to utilize grid resources. Noise mapping of, e.g., traffic noise in urban agglomerations and along major trunk roads is a recurring demand of the EU Noise Directive. Input data comprise the road network and traffic volumes, terrain, buildings and noise protection screens, as well as population distribution. Noise impact levels are generally calculated on a 10 m grid and along relevant building facades. For each receiver position, sources within a typical range of 2000 m are split into small segments, depending on local geometry.
For each of these segments, the propagation analysis includes diffraction effects caused by all obstacles on the path of sound propagation. This computationally intensive calculation needs to be performed for a major part of the European landscape. A Linux version of the commercial LimA software for noise mapping analysis has been implemented on a test cluster within the German D-Grid computer network. Results and performance indicators will be presented. The presentation is an extension of last year's presentation "Spatial Data Infrastructures and Grid Computing: the GDI-Grid project", which described the gridification concept developed in the GDI-Grid project and provided an overview of the conceptual gaps between Grid Computing and Spatial Data Infrastructures. Results from the GDI-Grid project are incorporated in the OGC-OGF (Open Grid Forum) collaboration efforts as well as the OGC WPS 2.0 standards working group developing the next major version of the WPS specification.
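
    The per-receiver aggregation described above (select source segments within a search radius, then sum their contributions) can be sketched as follows; the simple point-source divergence formula stands in for LimA's full diffraction analysis, and the numbers are hypothetical:

```python
import math

def receiver_level(receiver, segments, search_radius=2000.0):
    """Illustrative per-receiver noise aggregation (not the LimA
    algorithm): keep only source segments within the search radius,
    attenuate each by geometric divergence, and sum energetically."""
    rx, ry = receiver
    total_energy = 0.0
    for (sx, sy, lw) in segments:   # lw: segment source level in dB
        d = math.hypot(sx - rx, sy - ry)
        if d == 0 or d > search_radius:
            continue                # outside the typical 2000 m range
        level = lw - 20 * math.log10(d) - 11  # point-source divergence
        total_energy += 10 ** (level / 10)
    return 10 * math.log10(total_energy) if total_energy else float("-inf")

# Two in-range segments and one beyond the search radius.
segments = [(100, 0, 95), (500, 0, 95), (5000, 0, 95)]
print(round(receiver_level((0, 0), segments), 1))
```

    Repeating this for every point of a 10 m receiver grid over a large area is what makes the workload a natural fit for grid resources.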

  2. A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0

    PubMed Central

    Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.

    2014-01-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072

  3. The QuakeSim Project: Web Services for Managing Geophysical Data and Applications

    NASA Astrophysics Data System (ADS)

    Pierce, Marlon E.; Fox, Geoffrey C.; Aktas, Mehmet S.; Aydin, Galip; Gadgil, Harshawardhan; Qi, Zhigang; Sayar, Ahmet

    2008-04-01

    We describe our distributed systems research efforts to build the “cyberinfrastructure” components that constitute a geophysical Grid, or more accurately, a Grid of Grids. Service-oriented computing principles are used to build a distributed infrastructure of Web accessible components for accessing data and scientific applications. Our data services fall into two major categories: Archival, database-backed services based around Geographical Information System (GIS) standards from the Open Geospatial Consortium, and streaming services that can be used to filter and route real-time data sources such as Global Positioning System data streams. Execution support services include application execution management services and services for transferring remote files. These data and execution service families are bound together through metadata information and workflow services for service orchestration. Users may access the system through the QuakeSim scientific Web portal, which is built using a portlet component approach.

  4. CAM-SE: A scalable spectral element dynamical core for the Community Atmosphere Model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis, John; Edwards, Jim; Evans, Kate J

    2012-01-01

    The Community Atmosphere Model (CAM) version 5 includes a spectral element dynamical core option from NCAR's High-Order Method Modeling Environment. It is a continuous Galerkin spectral finite element method designed for fully unstructured quadrilateral meshes. The current configurations in CAM are based on the cubed-sphere grid. The main motivation for including a spectral element dynamical core is to improve the scalability of CAM by allowing quasi-uniform grids for the sphere that do not require polar filters. In addition, the approach provides other state-of-the-art capabilities such as improved conservation properties. Spectral elements are used for the horizontal discretization, while most other aspects of the dynamical core are a hybrid of well-tested techniques from CAM's finite volume and global spectral dynamical core options. Here we first give an overview of the spectral element dynamical core as used in CAM. We then give scalability and performance results from CAM running with three different dynamical core options within the Community Earth System Model, using a pre-industrial time-slice configuration. We focus on high resolution simulations of 1/4 degree, 1/8 degree, and T340 spectral truncation.

  5. Corium protection assembly

    DOEpatents

    Gou, Perng-Fei; Townsend, Harold E.; Barbanti, Giancarlo

    1994-01-01

    A corium protection assembly includes a perforated base grid disposed below a pressure vessel containing a nuclear reactor core and spaced vertically above a containment vessel floor to define a sump therebetween. A plurality of layers of protective blocks are disposed on the grid for protecting the containment vessel floor from the corium.

  6. HappyFace as a generic monitoring tool for HEP experiments

    NASA Astrophysics Data System (ADS)

    Kawamura, Gen; Magradze, Erekle; Musheghyan, Haykuhi; Quadt, Arnulf; Rzehorz, Gerhard

    2015-12-01

    The importance of monitoring on HEP grid computing systems is growing due to a significant increase in their complexity. Computer scientists and administrators have been studying and building effective ways to gather information on, and clarify the status of, each local grid infrastructure. The HappyFace project aims at making the above-mentioned workflow possible. It aggregates, processes and stores the information and status of different HEP monitoring resources in the common database of HappyFace, and displays them through a single interface. However, this model of HappyFace relied on monitoring resources which are always under development in the HEP experiments. Consequently, HappyFace needed direct access methods to the grid application and grid service layers in the different HEP grid systems. To cope with this issue, we use a reliable HEP software repository, the CernVM File System. We propose a new implementation and architecture of HappyFace, the so-called grid-enabled HappyFace. It allows the basic framework to connect directly to the grid user applications and the grid collective services, without involving the monitoring resources in the HEP grid systems. This approach gives HappyFace several advantages: portability, to provide an independent and generic monitoring system among the HEP grid systems; functionality, to allow users to run various diagnostic tools in the individual HEP grid systems and grid sites; and flexibility, to make HappyFace beneficial and open for various distributed grid computing environments. Different grid-enabled modules, to connect to the Ganga job monitoring system and to check the performance of grid transfers among the grid sites, have been implemented.
The new HappyFace system has been successfully integrated and now it displays the information and the status of both the monitoring resources and the direct access to the grid user applications and the grid collective services.

  7. Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid.

    PubMed

    Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul

    2017-02-01

    Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, one typical application supported by IoT and a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of the power cumulative cost and the service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, which has been traditionally used for carrying out traffic engineering in computer networks, to derive the bounds of both power supply and user demand to achieve a high service reliability to users. Through an extensive performance evaluation, our data shows that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide a sustainable service reliability to users in the power grid.
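
    The network-calculus bounds mentioned above can be illustrated with the classic token-bucket/rate-latency pair; the power-grid reading and the numbers below are hypothetical, but the bound itself is the standard one:

```python
def backlog_bound(sigma, rho, rate, latency):
    """Network-calculus backlog bound, read here by analogy to power:
    demand is (sigma, rho)-constrained (burst sigma, long-run rate rho)
    and supply follows a rate-latency curve beta(t) = rate * (t - latency)
    for t >= latency. If rate >= rho, the maximum unserved demand is
    bounded by sigma + rho * latency."""
    if rate < rho:
        raise ValueError("supply rate below long-run demand: no finite bound")
    return sigma + rho * latency

# Hypothetical numbers: 50 kWh burst, 10 kW sustained demand,
# 12 kW supply that ramps up within 2 hours.
print(backlog_bound(sigma=50, rho=10, rate=12, latency=2))
```

    Keeping this bound below the available storage capacity is one way to translate the paper's supply/demand curves into a service-reliability guarantee.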

  8. Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid

    PubMed Central

    Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul

    2017-01-01

    Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, one typical application supported by IoT and a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of the power cumulative cost and the service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, which has been traditionally used for carrying out traffic engineering in computer networks, to derive the bounds of both power supply and user demand to achieve a high service reliability to users. Through an extensive performance evaluation, our data shows that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide a sustainable service reliability to users in the power grid. PMID:29354654

  9. Service-Oriented Architecture for NVO and TeraGrid Computing

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Miller, Craig; Williams, Roy; Steenberg, Conrad; Graham, Matthew

    2008-01-01

    The National Virtual Observatory (NVO) Extensible Secure Scalable Service Infrastructure (NESSSI) is a Web service architecture and software framework that enables Web-based astronomical data publishing and processing on grid computers such as the National Science Foundation's TeraGrid. Characteristics of this architecture include the following: (1) Services are created, managed, and upgraded by their developers, who are trusted users of computing platforms on which the services are deployed. (2) Service jobs can be initiated by means of Java or Python client programs run on a command line or with Web portals. (3) Access is granted within a graduated security scheme in which the size of a job that can be initiated depends on the level of authentication of the user.
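
    The graduated security scheme in item (3) amounts to a mapping from authentication strength to a job-size cap; a minimal sketch with hypothetical tier names and limits (not NESSSI's actual policy):

```python
def max_job_size(auth_level):
    """Hedged sketch of a graduated security scheme: the job size a user
    may launch grows with authentication strength. Tier names and the
    core-count limits are invented for illustration."""
    tiers = {
        "anonymous": 1,          # tiny trial jobs only
        "portal_account": 16,    # authenticated portal users
        "grid_certificate": 1024 # fully credentialed grid users
    }
    return tiers.get(auth_level, 0)  # unknown levels get no access

print(max_job_size("portal_account"))
```

    A service front end would consult such a mapping before dispatching a job to TeraGrid-class resources.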

  10. Woven-grid sealed quasi-bipolar lead-acid battery construction and fabricating method

    NASA Technical Reports Server (NTRS)

    Rippel, Wally E. (Inventor)

    1989-01-01

    A quasi-bipolar lead-acid battery construction includes a plurality of bipolar cells disposed in side-by-side relation to form a stack, and a pair of monopolar plates at opposite ends of the stack, the cell stack and monopolar plates being contained within a housing of the battery. Each bipolar cell is loaded with an electrolyte and composed of a bipolar electrode plate and a pair of separator plates disposed on opposite sides of the electrode plate and peripherally sealed thereto. Each bipolar electrode plate is composed of a partition sheet and two bipolar electrode elements folded into a hairpin configuration and applied over opposite edges of the partition sheet so as to cover the opposite surfaces of the opposite halves thereof. Each bipolar electrode element is comprised of a woven grid with a hot-melt strip applied to a central longitudinal region of the grid along which the grid is folded into the hairpin configuration, and layers of negative and positive active material pastes applied to opposite halves of the grid on opposite sides of the central hot-melt strip. The grid is made up of strands of conductive and non-conductive yarns composing the respective transverse and longitudinal weaves of the grid. The conductive yarn has a multi-stranded glass core surrounded and covered by a lead sheath, whereas the non-conductive yarn has a multi-stranded glass core surrounded and covered by a thermally activated sizing.

  11. A dual communicator and dual grid-resolution algorithm for petascale simulations of turbulent mixing at high Schmidt number

    NASA Astrophysics Data System (ADS)

    Clay, M. P.; Buaria, D.; Gotoh, T.; Yeung, P. K.

    2017-10-01

    A new dual-communicator algorithm with very favorable performance characteristics has been developed for direct numerical simulation (DNS) of turbulent mixing of a passive scalar governed by an advection-diffusion equation. We focus on the regime of high Schmidt number (Sc), where because of low molecular diffusivity the grid-resolution requirements for the scalar field are stricter than those for the velocity field by a factor √Sc. Computational throughput is improved by simulating the velocity field on a coarse grid of Nv³ points with a Fourier pseudo-spectral (FPS) method, while the passive scalar is simulated on a fine grid of Nθ³ points with a combined compact finite difference (CCD) scheme which computes first and second derivatives at eighth-order accuracy. A static three-dimensional domain decomposition and a parallel solution algorithm for the CCD scheme are used to avoid the heavy communication cost of memory transposes. A kernel is used to evaluate several approaches to optimize the performance of the CCD routines, which account for 60% of the overall simulation cost. On the petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign, scalability is improved substantially with a hybrid MPI-OpenMP approach in which a dedicated thread per NUMA domain overlaps communication calls with computational tasks performed by a separate team of threads spawned using OpenMP nested parallelism. At a target production problem size of 8192³ (0.5 trillion) grid points on 262,144 cores, CCD timings are reduced by 34% compared to a pure-MPI implementation. Timings for 16384³ (4 trillion) grid points on 524,288 cores encouragingly maintain scalability greater than 90%, although the wall clock time is too high for production runs at this size. Performance monitoring with CrayPat for problem sizes up to 4096³ shows that the CCD routines can achieve nearly 6% of the peak flop rate. The new DNS code is built upon two existing FPS and CCD codes.
With the grid ratio Nθ/Nv = 8, the disparity in the computational requirements for the velocity and scalar problems is addressed by splitting the global communicator MPI_COMM_WORLD into disjoint communicators for the velocity and scalar fields, respectively. Inter-communicator transfer of the velocity field from the velocity communicator to the scalar communicator is handled with discrete send and non-blocking receive calls, which are overlapped with other operations on the scalar communicator. For production simulations at Nθ = 8192 and Nv = 1024 on 262,144 cores for the scalar field, the DNS code achieves 94% strong scaling relative to 65,536 cores and 92% weak scaling relative to Nθ = 1024 and Nv = 128 on 512 cores.
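
    The load balance behind the communicator split can be checked with simple arithmetic: with the grid ratio of 8, the velocity problem has 8³ = 512 times fewer points, so a velocity communicator roughly 512x smaller than the scalar one (an illustrative sizing, not taken from the paper) equalizes per-core work:

```python
def points_per_rank(n_grid, ranks):
    """Grid points per core for an n_grid**3 cube spread over `ranks`
    cores (assumes a perfectly even 3-D domain decomposition)."""
    return n_grid ** 3 // ranks

# Production case from the text: scalar grid 8192^3 on 262,144 cores.
scalar_load = points_per_rank(8192, 262144)
# Velocity grid 1024^3 on a 512x smaller (hypothetical) sub-communicator.
velocity_load = points_per_rank(1024, 262144 // 512)
print(scalar_load, velocity_load)
```

    Running both fields on the full communicator would instead leave the velocity ranks with 512x less work per step, which is the imbalance the dual-communicator design removes.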

  12. DE-FG02-04ER25606 Identity Federation and Policy Management Guide: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphrey, Marty, A

    The goal of this 3-year project was to facilitate a more productive dynamic matching between resource providers and resource consumers in Grid environments by explicitly specifying policies. Broadly, two problems were addressed by this project. First, there was a lack of an Open Grid Services Architecture (OGSA)-compliant mechanism for expressing, storing and retrieving user policies and Virtual Organization (VO) policies. Second, there was a lack of tools to resolve and enforce policies in the Open Grid Services Architecture. To address these problems, our overall approach was to make all policies explicit (e.g., virtual organization policies, resource provider policies, resource consumer policies), thereby facilitating policy matching and policy negotiation. Policies defined on a per-user basis were created, held, and updated in MyPolMan, enabling a Grid user to centralize (where appropriate) and manage his/her policies. The corresponding organizational service was VOPolMan, in which the policies of the Virtual Organization are expressed, managed, and dynamically consulted. Overall, we successfully defined, prototyped, and evaluated policy-based resource management and access control for OGSA-based Grids. This DOE project partially supported 17 peer-reviewed publications on a number of different topics: general security for Grids, credential management, Web services/OGSA/OGSI, policy-based grid authorization (for remote execution and for access to information), policy-directed Grid data movement/placement, policies for large-scale virtual organizations, and large-scale policy-aware grid architectures. In addition to supporting the PI, this project partially supported the training of 5 PhD students.

  13. Fast and accurate 3D tensor calculation of the Fock operator in a general basis

    NASA Astrophysics Data System (ADS)

    Khoromskaia, V.; Andrae, D.; Khoromskij, B. N.

    2012-11-01

    The present paper contributes to the construction of a “black-box” 3D solver for the Hartree-Fock equation by grid-based tensor-structured methods. It focuses on the calculation of the Galerkin matrices for the Laplace and nuclear potential operators by tensor operations using a generic set of basis functions with low separation rank, discretized on a fine N×N×N Cartesian grid. We prove a Ch² error estimate in terms of the mesh parameter h = O(1/N), which allows one to gain a guaranteed accuracy of the core Hamiltonian part of the Fock operator as h→0. However, the commonly used problem-adapted basis functions have low regularity, yielding a considerable increase of the constant C and hence demanding a rather large grid size N of about several tens of thousands to ensure high resolution. Modern tensor-formatted arithmetics of complexity O(N), or even O(log N), practically relax the limitations on the grid size. Our tensor-based approach allows us to improve significantly the standard basis sets in quantum chemistry by including simple combinations of Slater-type, local finite element and other basis functions. Numerical experiments for moderate-size organic molecules show the efficiency and accuracy of grid-based calculations of the core Hamiltonian in the range of grid parameter N³ ~ 10¹⁵.
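
    The Ch² estimate predicts second-order convergence in the mesh parameter h = 1/N: doubling N should cut the error by a factor of about four. A toy check with a hypothetical error constant C standing in for the true Galerkin-matrix error:

```python
# Illustrative second-order convergence: if the error behaves like C*h^2
# with h = 1/N, then doubling N divides the error by exactly 4.
C = 3.0  # hypothetical constant; large C is what forces large N in practice

errors = {n: C * (1.0 / n) ** 2 for n in (1024, 2048, 4096)}
ratios = [errors[1024] / errors[2048], errors[2048] / errors[4096]]
print(ratios)
```

    Observing ratios near 4 in successive grid refinements is the usual numerical confirmation of an O(h²) estimate.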

  14. Implementation of a SOA-Based Service Deployment Platform with Portal

    NASA Astrophysics Data System (ADS)

    Yang, Chao-Tung; Yu, Shih-Chi; Lai, Chung-Che; Liu, Jung-Chun; Chu, William C.

    In this paper we propose a Service Oriented Architecture (SOA) to provide a flexible and serviceable environment. SOA arose from commercial requirements; over more than ten years it has integrated many techniques to bridge different platforms, programming languages and users. SOA provides the connection, via a protocol, between service providers and service users. We then review the performance and reliability problems. Finally, we apply SOA to our Grid and Hadoop platforms. A service acts as an interface in front of the Resource Broker in the Grid, and the Resource Broker is middleware that provides functions for developers. Hadoop has a file replication feature to ensure file reliability. Services provided on the Grid and Hadoop are centralized. We design a portal in which users can use services directly or register a service through the service provider. The portal also offers a service workflow function so that users can customize services according to the needs of their jobs.

  15. Development of a Grid-Independent Geos-Chem Chemical Transport Model (v9-02) as an Atmospheric Chemistry Module for Earth System Models

    NASA Technical Reports Server (NTRS)

    Long, M. S.; Yantosca, R.; Nielsen, J. E.; Keller, C. A.; Da Silva, A.; Sulprizio, M. P.; Pawson, S.; Jacob, D. J.

    2015-01-01

    The GEOS-Chem global chemical transport model (CTM), used by a large atmospheric chemistry research community, has been re-engineered to also serve as an atmospheric chemistry module for Earth system models (ESMs). This was done using an Earth System Modeling Framework (ESMF) interface that operates independently of the GEOS-Chem scientific code, permitting the exact same GEOS-Chem code to be used as an ESM module or as a standalone CTM. In this manner, the continual stream of updates contributed by the CTM user community is automatically passed on to the ESM module, which remains state of science and referenced to the latest version of the standard GEOS-Chem CTM. A major step in this re-engineering was to make GEOS-Chem grid independent, i.e., capable of using any geophysical grid specified at run time. GEOS-Chem data sockets were also created for communication between modules and with external ESM code. The grid-independent, ESMF-compatible GEOS-Chem is now the standard version of the GEOS-Chem CTM. It has been implemented as an atmospheric chemistry module into the NASA GEOS-5 ESM. The coupled GEOS-5-GEOS-Chem system was tested for scalability and performance with a tropospheric oxidant-aerosol simulation (120 coupled species, 66 transported tracers) using 48-240 cores and message-passing interface (MPI) distributed-memory parallelization. Numerical experiments demonstrate that the GEOS-Chem chemistry module scales efficiently for the number of cores tested, with no degradation as the number of cores increases. Although inclusion of atmospheric chemistry in ESMs is computationally expensive, the excellent scalability of the chemistry module means that the relative cost goes down with increasing number of cores in a massively parallel environment.

  16. iSERVO: Implementing the International Solid Earth Research Virtual Observatory by Integrating Computational Grid and Geographical Information Web Services

    NASA Astrophysics Data System (ADS)

    Aktas, Mehmet; Aydin, Galip; Donnellan, Andrea; Fox, Geoffrey; Granat, Robert; Grant, Lisa; Lyzenga, Greg; McLeod, Dennis; Pallickara, Shrideep; Parker, Jay; Pierce, Marlon; Rundle, John; Sayar, Ahmet; Tullis, Terry

    2006-12-01

    We describe the goals and initial implementation of the International Solid Earth Virtual Observatory (iSERVO). This system is built using a Web Services approach to Grid computing infrastructure and is accessed via a component-based Web portal user interface. We describe our implementations of services used by this system, including Geographical Information System (GIS)-based data grid services for accessing remote data repositories and job management services for controlling multiple execution steps. iSERVO is an example of a larger trend to build globally scalable scientific computing infrastructures using the Service Oriented Architecture approach. Adoption of this approach raises a number of research challenges in millisecond-latency message systems suitable for internet-enabled scientific applications. We review our research in these areas.

  17. AGIS: The ATLAS Grid Information System

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Belov, Sergey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-12-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet the ATLAS requirements of petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate the configuration and status information about resources, services and topology of the whole ATLAS Grid needed by ATLAS Distributed Computing applications and services.
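    A toy sketch of the information-system idea (the schema and method names are invented for illustration; the real AGIS is a service with its own API and database backend): configuration and status records are registered once, then queried by attribute by distributed-computing clients.

```python
# Illustrative in-memory information system: integrate resource/service
# records from many sites and answer attribute-filtered queries.

class GridInfoSystem:
    def __init__(self):
        self.records = []

    def register(self, site, service, status):
        """Record one service at one site with its current status."""
        self.records.append({"site": site, "service": service, "status": status})

    def query(self, **filters):
        """Return all records matching every given attribute filter."""
        return [r for r in self.records
                if all(r.get(k) == v for k, v in filters.items())]

agis = GridInfoSystem()
agis.register("CERN-PROD", "storage", "online")
agis.register("BNL", "storage", "offline")
agis.register("CERN-PROD", "compute", "online")

# A workload-management client asks only for usable storage:
online_storage = agis.query(service="storage", status="online")
```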

  18. Performance Evaluation of a SLA Negotiation Control Protocol for Grid Networks

    NASA Astrophysics Data System (ADS)

    Cergol, Igor; Mirchandani, Vinod; Verchere, Dominique

    A framework for an autonomous negotiation control protocol for service delivery is crucial to support the heterogeneous service level agreements (SLAs) that will exist in distributed environments. We first give a gist of our augmented service negotiation protocol, which supports distinct service elements. The augmentations also encompass the related composition of services and negotiation with several service providers simultaneously. Together, these augmentations make it possible to consolidate service negotiation operations for telecom networks, which are evolving towards Grid networks. Furthermore, our autonomous negotiation protocol is based on a distributed multi-agent framework that creates an open market for Grid services. Second, we concisely present key simulation results of our work in progress. The results exhibit the usefulness of our negotiation protocol in realistic scenarios that involve different background traffic loads, message sizes and traffic flow asymmetry between background and negotiation traffic.
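    The simultaneous multi-provider negotiation can be sketched as follows (the provider names, offer fields and accept-cheapest rule are illustrative assumptions, not the paper's protocol): the consumer gathers offers from every provider, discards those violating its SLA constraint, and accepts the best of the rest.

```python
# Toy multi-provider SLA negotiation: collect offers, filter by the
# latency constraint in the request, accept the cheapest acceptable one.

def negotiate(request, providers):
    offers = []
    for name, quote in providers.items():
        offer = quote(request)   # in a real system, answered in parallel
        if offer["latency_ms"] <= request["max_latency_ms"]:
            offers.append((name, offer))
    if not offers:
        return None              # no acceptable SLA; renegotiate or abort
    return min(offers, key=lambda o: o[1]["price"])

providers = {
    "provA": lambda req: {"latency_ms": 20, "price": 8.0},
    "provB": lambda req: {"latency_ms": 50, "price": 3.0},  # too slow
    "provC": lambda req: {"latency_ms": 30, "price": 5.0},
}
best = negotiate({"max_latency_ms": 40}, providers)
```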

  19. [Research on tumor information grid framework].

    PubMed

    Zhang, Haowei; Qin, Zhu; Liu, Ying; Tan, Jianghao; Cao, Haitao; Chen, Youping; Zhang, Ke; Ding, Yuqing

    2013-10-01

    In order to realize tumor disease information sharing and unified management, we utilized grid technology to effectively integrate the data and software resources distributed across various medical institutions, so that the heterogeneous resources become consistent and interoperable in both semantics and syntax. This article describes the tumor grid framework, in which the services are described in the Web Service Description Language (WSDL) and XML Schema Definition (XSD), and the client uses the serialized documents to operate on the distributed resources. The service objects can be built with the Unified Modeling Language (UML) as middleware to create application programming interfaces. All of the grid resources are registered in the index and released in the form of Web Services based on the Web Services Resource Framework (WSRF). Using the system we can build a multi-center, large-sample, networked tumor disease resource sharing framework to improve the level of development of medical scientific research institutions and the patients' quality of life.

  20. A data colocation grid framework for big data medical image processing: backend design

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-03-01

    When processing large medical imaging studies, adopting high-performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend application programming interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented using the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a secure, shared university web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Results from three empirical experiments are presented and discussed: (1) a 1.5-fold load balancer wall-time improvement compared with a framework with a built-in data allocation strategy; (2) a summary statistic model empirically verified on the grid framework and compared with the cluster deployed with a standard Sun Grid Engine (SGE), yielding an 8-fold reduction in wall-clock time and a 14-fold reduction in resource time; and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.
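    The row-key idea behind such an HBase table scheme can be sketched as follows (the key layout is an assumption for illustration, not the paper's exact scheme): leading the key with project and subject keeps one study's images contiguous in the sorted key space, so a cheap prefix range scan can replace a full MapReduce pass when the dataset is small.

```python
# Illustrative HBase-style row-key design: fixed-width, left-padded
# fields make lexicographic order match logical order, so one subject's
# images are contiguous and retrievable with a prefix scan.

def row_key(project, subject, session, image_id):
    return f"{project:>8}|{subject:>8}|{session:>4}|{image_id:>6}"

def range_scan(table, prefix):
    """Emulate an HBase prefix scan over a sorted key space."""
    return [v for k, v in sorted(table.items()) if k.startswith(prefix)]

table = {
    row_key("proj1", "subj01", "s1", "img001"): "T1-a",
    row_key("proj1", "subj01", "s2", "img002"): "T1-b",
    row_key("proj1", "subj02", "s1", "img003"): "T1-c",
}
one_subject = range_scan(table, f"{'proj1':>8}|{'subj01':>8}")
```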

  1. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design.

    PubMed

    Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A

    2018-03-01

    When processing large medical imaging studies, adopting high-performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend application programming interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented using the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a secure, shared university web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Results from three empirical experiments are presented and discussed: (1) a 1.5-fold load balancer wall-time improvement compared with a framework with a built-in data allocation strategy; (2) a summary statistic model empirically verified on the grid framework and compared with the cluster deployed with a standard Sun Grid Engine (SGE), yielding an 8-fold reduction in wall-clock time and a 14-fold reduction in resource time; and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.

  2. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design

    PubMed Central

    Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-01-01

    When processing large medical imaging studies, adopting high-performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend application programming interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented using the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a secure, shared university web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Results from three empirical experiments are presented and discussed: (1) a 1.5-fold load balancer wall-time improvement compared with a framework with a built-in data allocation strategy; (2) a summary statistic model empirically verified on the grid framework and compared with the cluster deployed with a standard Sun Grid Engine (SGE), yielding an 8-fold reduction in wall-clock time and a 14-fold reduction in resource time; and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available. PMID:29887668

  3. Evaluation of Service Level Agreement Approaches for Portfolio Management in the Financial Industry

    NASA Astrophysics Data System (ADS)

    Pontz, Tobias; Grauer, Manfred; Kuebert, Roland; Tenschert, Axel; Koller, Bastian

    The idea of service-oriented Grid computing seems to have the potential for a fundamental paradigm change and a new architectural alignment in the design of IT infrastructures. A wide range of technical approaches from scientific communities describes basic infrastructures and middleware for integrating Grid resources, such that Grid applications are by now technically realizable. Hence, Grid computing needs viable business models and enhanced infrastructures to move from academic to commercial application. For commercial usage of these developments, service level agreements are needed. The approaches developed so far are primarily of academic interest and have mostly not been put into practice. Based on a business use case from the financial industry, five service level agreement approaches are evaluated in this paper. Based on the evaluation, a management architecture has been designed and implemented as a prototype.

  4. Using CREAM and CEMonitor for job submission and management in the gLite middleware

    NASA Astrophysics Data System (ADS)

    Aiftimiei, C.; Andreetto, P.; Bertocco, S.; Dalla Fina, S.; Dorigo, A.; Frizziero, E.; Gianelle, A.; Marzolla, M.; Mazzucato, M.; Mendez Lorenzo, P.; Miccio, V.; Sgaravatto, M.; Traldi, S.; Zangrando, L.

    2010-04-01

    In this paper we describe the use of the CREAM and CEMonitor services for job submission and management within the gLite Grid middleware. Both CREAM and CEMonitor address one of the most fundamental operations of a Grid middleware, namely job submission and management. Specifically, CREAM is a job management service used for submitting, managing and monitoring computational jobs. CEMonitor is an event notification framework, which can be coupled with CREAM to provide users with asynchronous job status change notifications. Both components have been integrated into the gLite Workload Management System by means of ICE (Interface to CREAM Environment). These software components have been released for production in the EGEE Grid infrastructure and, in the case of CEMonitor, also in the OSG Grid. In this paper we report the current status of these services, the results achieved, and the issues that still have to be addressed.
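    The CREAM/CEMonitor split can be illustrated with a minimal publish-subscribe sketch (class and method names are invented; the real services expose Web-Service interfaces): the job service mutates job state, while a separate notifier pushes each status change to subscribers such as ICE, so consumers need not poll.

```python
# Toy separation of job management (CREAM-like role) from asynchronous
# status notification (CEMonitor-like role).

class Notifier:                        # CEMonitor-like role
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, job_id, status):
        for cb in self.subscribers:    # push, instead of clients polling
            cb(job_id, status)

class JobService:                      # CREAM-like role
    def __init__(self, notifier):
        self.jobs, self.notifier = {}, notifier

    def submit(self, job_id):
        self.jobs[job_id] = "IDLE"
        self.notifier.publish(job_id, "IDLE")

    def set_status(self, job_id, status):
        self.jobs[job_id] = status
        self.notifier.publish(job_id, status)

events = []
notifier = Notifier()
notifier.subscribe(lambda j, s: events.append((j, s)))  # ICE-like consumer
svc = JobService(notifier)
svc.submit("job-1")
svc.set_status("job-1", "RUNNING")
svc.set_status("job-1", "DONE")
```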

  5. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blyth, Taylor S.; Avramova, Maria

    The research described in this PhD thesis contributes to the development of efficient methods for the utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named the Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects introduced by the presence of spacer grids in light water reactor (LWR) cores are dissected into four basic building-block processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.
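    In its simplest form, the spacer grid pressure loss model named above reduces to a form-loss relation dp = K * rho * v^2 / 2, where the loss coefficient K is the quantity the CFD runs inform. A minimal sketch with illustrative values (not taken from the thesis):

```python
# Form-loss pressure drop across one spacer grid; the CFD-informed part
# is the loss coefficient k_loss, which depends on the grid design.

def spacer_grid_pressure_loss(k_loss, rho, velocity):
    """Irreversible pressure drop (Pa) across one spacer grid."""
    return k_loss * rho * velocity**2 / 2.0

# Illustrative PWR-like conditions: K from a CFD-informed lookup,
# water density ~700 kg/m^3 at operating temperature, 5 m/s axial flow.
dp = spacer_grid_pressure_loss(k_loss=1.0, rho=700.0, velocity=5.0)
```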

  6. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    NASA Astrophysics Data System (ADS)

    Blyth, Taylor S.

    The research described in this PhD thesis contributes to the development of efficient methods for the utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named the Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects introduced by the presence of spacer grids in light water reactor (LWR) cores are dissected into four basic building-block processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.

  7. Using the GlideinWMS System as a Common Resource Provisioning Layer in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balcas, J.; Belforte, S.; Bockelman, B.

    2015-12-23

    CMS will require access to more than 125k processor cores at the beginning of Run 2 in 2015 to carry out its ambitious physics program with more numerous and more complex events. During Run 1 these resources were predominantly provided by a mix of grid sites and local batch resources. During the long shutdown, cloud infrastructures, diverse opportunistic resources and HPC supercomputing centers were made available to CMS, which further complicated the operation of the submission infrastructure. In this presentation we discuss the CMS effort to adopt and deploy the glideinWMS system as a common resource provisioning layer across grid, cloud, local batch, and opportunistic resources and sites. We address the challenges associated with integrating the various types of resources and the efficiency gains and simplifications associated with using a common resource provisioning layer, and discuss the solutions found. We finish with an outlook on future plans for how CMS is moving forward on resource provisioning for more heterogeneous architectures and services.

  8. Development of stable Grid service at the next generation system of KEKCC

    NASA Astrophysics Data System (ADS)

    Nakamura, T.; Iwai, G.; Matsunaga, H.; Murakami, K.; Sasaki, T.; Suzuki, S.; Takase, W.

    2017-10-01

    Many experiments in the field of accelerator-based science are actively running at the High Energy Accelerator Research Organization (KEK), using the SuperKEKB and J-PARC accelerators in Japan. The computing demand at KEK from the various experiments for data processing, analysis and MC simulation is monotonically increasing. This is not only the case for high-energy experiments: the computing requirements of the hadron and neutrino experiments and of some astro-particle physics projects are also rapidly increasing due to very high precision measurements. Under these circumstances, several projects supported by KEK (the Belle II, T2K, ILC and KAGRA experiments) are going to utilize the Grid computing infrastructure as their main computing resource. The Grid system and services at KEK, already in production, are being upgraded for further stable operation at the same time as a full-scale hardware replacement of the KEK Central Computer System (KEKCC). The next-generation KEKCC system starts operation at the beginning of September 2016. The basic Grid services, e.g. BDII, VOMS, LFC, the CREAM computing element and the StoRM storage element, are deployed on a more robust hardware configuration. Since raw data transfer is one of the most important tasks of the KEKCC, two redundant GridFTP servers are attached to the StoRM service instances with 40 Gbps network bandwidth on the LHCONE routing. These are dedicated to Belle II raw data transfer to other sites, separate from the servers handling data transfer for the other VOs. Additionally, we prepared a redundant configuration for database-oriented services such as LFC and AMGA using LifeKeeper. The LFC service for the Belle II experiment comprises two read/write servers and two read-only servers, each with an individual database for load balancing. The FTS3 service is newly deployed for Belle II data distribution. A CVMFS stratum-0 service has been started for the Belle II software repository, and a stratum-1 service is provided for the other VOs. In this way, there are many upgrades in the production Grid infrastructure at the KEK Computing Research Center. In this paper, we introduce the detailed hardware configuration of the Grid instances and several mechanisms for constructing a robust Grid system in the next generation of KEKCC.

  9. Research on the architecture and key technologies of SIG

    NASA Astrophysics Data System (ADS)

    Fu, Zhongliang; Meng, Qingxiang; Huang, Yan; Liu, Shufan

    2007-06-01

    Along with the development of computer networks, the Grid has become one of the hottest topics in research on the sharing of, and cooperation among, Internet resources throughout the world. This paper presents a new five-layer architecture for SIG (comprising a Data Collecting Layer, Grid Layer, Service Layer, Application Layer and Client Layer), extending the traditional three-layer architecture (resource layer, service layer and client layer). The authors propose a new mixed network mode for the Spatial Information Grid that integrates CAG (Certificate Authority of Grid) and P2P (Peer-to-Peer) networking in the Grid Layer; in addition, they discuss some key technologies of SIG and analyze their functions.

  10. The Internet of things and Smart Grid

    NASA Astrophysics Data System (ADS)

    Li, Biao; Lv, Sen; Pan, Qing

    2018-02-01

    The Internet of Things and the smart grid are at the frontier of the information and power industries. Combining the Internet of Things with the smart grid will greatly enhance the smart grid's information and communication support capabilities. Key technologies of the Internet of Things will be applied to the smart grid, and a grid operation and management information perception service centre will be built to support the leading edge of the world's smart grids.

  11. Design and implementation of GRID-based PACS in a hospital with multiple imaging departments

    NASA Astrophysics Data System (ADS)

    Yang, Yuanyuan; Jin, Jin; Sun, Jianyong; Zhang, Jianguo

    2008-03-01

    Usually there are multiple clinical departments providing imaging-enabled healthcare services in an enterprise healthcare environment, such as radiology, oncology, pathology and cardiology. The picture archiving and communication system (PACS) is now required not only to support radiology-based image display and workflow and data flow management, but also to provide more specialized imaging processing and management tools for other departments offering imaging-guided diagnosis and therapy, and there is an urgent demand to integrate the multiple PACSs to provide patient-oriented imaging services for enterprise collaborative healthcare. In this paper we give the design method and implementation strategy for developing a grid-based PACS (Grid-PACS) for a hospital with multiple imaging departments or centers. The Grid-PACS functions as middleware between the traditional PACS archiving servers and the workstations or image-viewing clients, and provides DICOM image communication and WADO services to end users. The images can be stored on multiple distributed archiving servers but managed in a centralized mode. The grid-based PACS has automatic image backup and disaster recovery services and can provide the best image retrieval path to requesters based on optimal-path algorithms. The designed grid-based PACS has been implemented in Shanghai Huadong Hospital and has been running smoothly for two years.
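    The best-retrieval-path idea can be sketched minimally (the server names and costs are invented): each image may be replicated on several archive servers, and the middleware returns the replica whose measured path cost to the requester is lowest.

```python
# Toy replica selection: given the servers holding an image and a
# per-server path-cost estimate (latency, bandwidth, load...), return
# the cheapest source for the requesting client.

def best_replica(replicas, path_cost):
    """Pick the archive server with the lowest path cost."""
    return min(replicas, key=lambda server: path_cost[server])

replica_index = {"study-42": ["archive-A", "archive-B", "archive-C"]}
path_cost = {"archive-A": 12.0, "archive-B": 3.5, "archive-C": 7.1}

chosen = best_replica(replica_index["study-42"], path_cost)
```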

  12. Implementing Production Grids

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Ziobarth, John (Technical Monitor)

    2002-01-01

    We have presented the essence of experience gained in building two production Grids, and provided some of the global context for this work. As the reader might imagine, there were a lot of false starts, refinements to the approaches and to the software, and several substantial integration projects (SRB and Condor integrated with Globus) to get where we are today. However, the point of this paper is to try to make it substantially easier for others to get to the point where the Information Power Grid (IPG) and the DOE Science Grids are today. This is what is needed in order to move us toward the vision of a common cyber infrastructure for science. The author would also like to remind readers that this paper primarily represents the actual experiences that resulted from specific architectural and software choices during the design and implementation of these two Grids. The choices made were dictated by the criteria laid out in section 1. There is a lot more Grid software available today than there was four years ago, and several of these packages are being integrated into IPG and the DOE Grids. However, the foundation choices of Globus, SRB, and Condor would not be significantly different today than they were four years ago. Nonetheless, if the GGF is successful in its work - and we have every reason to believe that it will be - then in a few years we will see that the 28 functions provided by these packages will be defined in terms of protocols and MIS, and there will be several robust implementations available for each of the basic components, especially the Grid Common Services. The impact of the emerging Web Grid Services work is not yet clear. It will likely have a substantial impact on building higher-level services; however, it is the opinion of the author that this will in no way obviate the need for the Grid Common Services. These are the foundation of Grids, and the focus of almost all of the operational and persistent infrastructure aspects of Grids.

  13. Development and implementation of a geographical area categorisation method with targeted performance indicators for nationwide EMS in Finland.

    PubMed

    Pappinen, Jukka; Laukkanen-Nevala, Päivi; Mäntyselkä, Pekka; Kurola, Jouni

    2018-05-15

    In Finland, hospital districts (HD) are required by law to determine the level and availability of Emergency Medical Services (EMS) for each 1-km² area (cell) within their administrative area. The cells are currently categorised into five risk categories based on the predicted number of missions. Methodological defects and insufficient instructions have led to incomparability between EMS services. The aim of this study was to describe a new, nationwide method for categorising the cells, analyse EMS response time data and describe possible differences in mission profiles between the new risk category areas. National databases of EMS missions, population and buildings were combined with an existing nationwide 1-km² hexagon-shaped cell grid. The cells were categorised into four groups, based on the Finnish Environment Institute's (FEI) national definition of urban and rural areas, population and historical EMS mission density within each cell. The EMS mission profiles of the cell categories were compared using risk ratios with confidence intervals in 12 mission groups. In total, 87.3% of the population lives and 87.5% of missions took place in core or other urban areas, which covered only 4.7% of the HDs' surface area. Trauma mission incidence per 1000 inhabitants was higher in core urban areas (42.2) than in other urban (24.2) or dispersed settlement areas (24.6). The results were similar for non-trauma missions (134.8, 93.2 and 92.2, respectively). Each cell category had a characteristic mission profile. High-energy trauma missions and cardiac problems were more common in rural and uninhabited cells, while violence, intoxication and non-specific problems dominated in urban areas. The proposed area categories and grid-based data collection appear to be a useful method for evaluating EMS demand and availability in different parts of the country for statistical purposes.
Due to a similar rural/urban area definition, the method might also be usable for comparison between the Nordic countries.
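    The risk-ratio comparison between cell categories can be reproduced with a standard log-scale Wald interval (the counts below are illustrative, not the study's data):

```python
import math

# Risk ratio of an exposure group (a events in n1) versus a reference
# group (b events in n2), with an approximate 95% confidence interval
# computed on the log scale.

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)   # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# e.g. 120 high-energy trauma missions across 10_000 rural cells versus
# 60 across 10_000 urban cells (invented numbers):
rr, lo, hi = risk_ratio_ci(120, 10_000, 60, 10_000)
```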

  14. Frequency Regulation Services from Connected Residential Devices: Short Paper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Jin, Xin; Vaidhynathan, Deepthi

    In this paper, we demonstrate the potential benefits that residential buildings can provide for frequency regulation services in the electric power grid. In a hardware-in-the-loop (HIL) implementation, simulated homes along with a physical laboratory home are coordinated via a grid aggregator, and it is shown that their aggregate response has the potential to follow the regulation signal on a timescale of seconds. Connected (communication-enabled) devices in the National Renewable Energy Laboratory's (NREL's) Energy Systems Integration Facility (ESIF) received demand response (DR) requests from a grid aggregator, and the devices responded accordingly to meet the signal while satisfying user comfort bounds and physical hardware limitations. Future research will address the issues of cybersecurity threats, participation rates, and reducing equipment wear-and-tear while providing grid services.
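    The aggregation idea can be sketched as a toy dispatch loop (the even-split rule and per-home flexibility model are invented for illustration): the aggregator splits the regulation signal across homes, and each home clips its share to the flexibility its comfort bounds allow.

```python
# Toy aggregator dispatch: split a regulation request evenly across
# homes, clip each share to the home's available flexibility (the room
# left inside its user-comfort bounds), and sum the actual responses.

def dispatch(regulation_kw, homes):
    share = regulation_kw / len(homes)
    responses = []
    for flex_kw in homes:              # symmetric flexibility, in kW
        responses.append(max(-flex_kw, min(flex_kw, share)))
    return responses

homes = [1.0, 0.2, 2.0, 0.5]           # per-home flexibility in kW
responses = dispatch(3.0, homes)       # aggregator requests +3 kW
aggregate = sum(responses)             # what the fleet actually delivers
```

    Because two homes are fully saturated here, the fleet under-delivers; a real aggregator would redistribute the shortfall to homes with remaining headroom.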

  15. A policy system for Grid Management and Monitoring

    NASA Astrophysics Data System (ADS)

    Stagni, Federico; Santinelli, Roberto; LHCb Collaboration

    2011-12-01

    Organizations using a Grid computing model are faced with non-traditional administrative challenges: the heterogeneous nature of the underlying resources requires professionals acting as Grid administrators. Members of a Virtual Organization (VO) can use a subset of the available resources and services in the grid infrastructure, and in an ideal world, the more resources are exploited the better. In the real world, the fewer faulty services, the better: experienced Grid administrators apply procedures for adding and removing services based on their status, as reported by an ever-growing set of monitoring tools. When a procedure is agreed upon and well-exercised, a formal policy can be derived from it. For this reason, using the DIRAC framework in the LHCb collaboration, we developed a policy system that can enforce management and operational policies in a VO-specific fashion. A single policy makes an assessment of the status of a subject, relative to one or more pieces of monitoring information. Subjects of the policies are monitored entities of an established Grid ontology. The status of the same entity is evaluated against a number of policies, whose results are then combined by a Policy Decision Point. Such results are enforced by a Policy Enforcing Point, which provides plug-ins for actions such as raising alarms, sending notifications, and automatic addition and removal of services and resources from the Grid mask. Policy results are shown in the web portal, and site-specific views are also provided. This innovative system provides advantages in terms of procedure automation, information aggregation and problem solving.
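    The decision/enforcement pipeline can be sketched minimally (the status names and the worst-wins combination rule are illustrative assumptions, not DIRAC's actual implementation): each policy assesses an entity, the Policy Decision Point combines the assessments, and the Policy Enforcing Point maps the combined status to an action.

```python
# Toy policy pipeline: combine per-policy statuses for one entity
# (worst-wins), then map the combined status to an enforcement action.

SEVERITY = {"Active": 0, "Degraded": 1, "Banned": 2}

def decide(policy_results):
    """Policy Decision Point: the worst individual assessment wins."""
    return max(policy_results, key=lambda s: SEVERITY[s])

def enforce(entity, status):
    """Policy Enforcing Point: map combined status to an action."""
    actions = {"Active": "keep in mask",
               "Degraded": "send notification",
               "Banned": "remove from mask"}
    return f"{entity}: {actions[status]}"

combined = decide(["Active", "Degraded", "Active"])
action = enforce("CE.site.example", combined)
```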

  16. Grid Computing at GSI for ALICE and FAIR - present and future

    NASA Astrophysics Data System (ADS)

    Schwarz, Kilian; Uhlig, Florian; Karabowicz, Radoslaw; Montiel-Gonzalez, Almudena; Zynovyev, Mykhaylo; Preuss, Carsten

    2012-12-01

    The future FAIR experiments CBM and PANDA have computing requirements that fall in a category that could currently not be satisfied by any single computing centre. A larger, distributed computing infrastructure is needed to cope with the amount of data to be simulated and analysed. Since 2002, GSI has operated a tier2 centre for ALICE@CERN. The central component of the GSI computing facility, and hence the core of the ALICE tier2 centre, is an LSF/SGE batch farm, currently split into three subclusters with a total of 15000 CPU cores shared by the participating experiments and accessible both locally and, soon, completely via Grid. In terms of data storage, a 5.5 PB Lustre file system, directly accessible from all worker nodes, is maintained, as well as a 300 TB xrootd-based Grid storage element. Based on this existing expertise, and utilising ALICE's middleware ‘AliEn’, the Grid infrastructure for PANDA and CBM is being built. Besides a tier0 centre at GSI, the computing Grids of the two FAIR collaborations now encompass more than 17 sites in 11 countries and are constantly expanding. The operation of the distributed FAIR computing infrastructure benefits significantly from the experience gained with the ALICE tier2 centre. A close collaboration between ALICE Offline and FAIR provides mutual advantages. The employment of a common Grid middleware as well as compatible simulation and analysis software frameworks ensures significant synergy effects.

  17. A cross-domain communication resource scheduling method for grid-enabled communication networks

    NASA Astrophysics Data System (ADS)

    Zheng, Xiangquan; Wen, Xiang; Zhang, Yongding

    2011-10-01

    To support a wide range of grid applications in environments where various heterogeneous communication networks coexist, it is important to enable advanced capabilities for on-demand, dynamic integration and efficient co-sharing of cross-domain heterogeneous communication resources, thus providing communication services that no single communication resource could afford on its own. Based on plug-and-play co-sharing and soft integration of communication resources, a grid-enabled communication network can be flexibly built up to provide on-demand communication services for grid applications with various quality-of-service requirements. Based on an analysis of joint job and communication resource scheduling in grid-enabled communication networks (GECNs), this paper presents a cross-domain cooperative communication resource scheduling method and describes its main processes, such as traffic requirement resolution for communication services, cross-domain negotiation on communication resources, and on-demand communication resource scheduling. The presented method affords communication service capability for cross-domain traffic delivery in GECNs. Further research toward validation and implementation of the presented method is outlined at the end.

  18. Using fleets of electric-drive vehicles for grid support

    NASA Astrophysics Data System (ADS)

    Tomić, Jasna; Kempton, Willett

    Electric-drive vehicles can provide power to the electric grid when they are parked (vehicle-to-grid power). We evaluated the economic potential of two utility-owned fleets of battery-electric vehicles to provide power for a specific electricity market, regulation, in four US regional regulation services markets. The two battery-electric fleet cases are: (a) 100 Th!nk City vehicles and (b) 252 Toyota RAV4 vehicles. Important variables are: (a) the market value of regulation services, (b) the power capacity (kW) of the electrical connections and wiring, and (c) the energy capacity (kWh) of the vehicle's battery. With a few exceptions when the annual market value of regulation was low, we find that vehicle-to-grid power for regulation services is profitable across all four markets analyzed. Assuming no more than current Level 2 charging infrastructure (6.6 kW), the annual net profit for the Th!nk City fleet ranges from US$7,000 to $70,000 providing regulation down only. For the RAV4 fleet, the annual net profit ranges from US$24,000 to $260,000 providing regulation down and up. Vehicle-to-grid power could provide a significant revenue stream that would improve the economics of grid-connected electric-drive vehicles and further encourage their adoption. It would also improve the stability of the electrical grid.
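The structure of such a fleet-revenue estimate can be sketched in a back-of-envelope form: regulation capacity payments scale with the contracted power and the hours the fleet is plugged in. The price, plug-in hours, and fleet size below are illustrative assumptions, not the paper's market data, and the formula omits energy payments and battery-wear costs that a full analysis would include.

```python
# Back-of-envelope annual capacity revenue for a V2G fleet selling
# regulation. All numbers are illustrative assumptions; a complete
# net-profit calculation would subtract wiring, wear, and energy costs.

def annual_regulation_revenue(n_vehicles, power_kw, price_per_mw_h,
                              plugged_hours_per_day, days=365):
    """Capacity revenue = price ($/MW-h) x contracted MW x hours on line."""
    contracted_mw = n_vehicles * power_kw / 1000.0
    hours = plugged_hours_per_day * days
    return price_per_mw_h * contracted_mw * hours

# e.g. 100 vehicles at 6.6 kW (Level 2), $20/MW-h, plugged in 18 h/day
rev = annual_regulation_revenue(100, 6.6, 20.0, 18)
print(f"${rev:,.0f}")  # -> $86,724
```

This makes visible why the power capacity of the connection (variable (b) in the abstract) dominates the revenue side: it multiplies directly into the contracted MW.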

  19. Validation of SMAP surface soil moisture products with core validation sites

    USDA-ARS?s Scientific Manuscript database

    The NASA Soil Moisture Active Passive (SMAP) mission has utilized a set of core validation sites as the primary methodology in assessing the soil moisture retrieval algorithm performance. Those sites provide well-calibrated in situ soil moisture measurements within SMAP product grid pixels for diver...

  20. A Framework for Control and Observation in Distributed Environments

    NASA Technical Reports Server (NTRS)

    Smith, Warren

    2001-01-01

    As organizations begin to deploy large computational grids, it has become apparent that systems for observation and control of the resources, services, and applications that make up such grids are needed. Administrators must observe the operation of resources and services to ensure that they are operating correctly and they must control the resources and services to ensure that their operation meets the needs of users. Further, users need to observe the performance of their applications so that this performance can be improved and control how their applications execute in a dynamic grid environment. In this paper we describe our software framework for control and observation of resources, services, and applications that supports such uses and we provide examples of how our framework can be used.

  1. Integrating Clinical Trial Imaging Data Resources Using Service-Oriented Architecture and Grid Computing

    PubMed Central

    Cladé, Thierry; Snyder, Joshua C.

    2010-01-01

    Clinical trials which use imaging typically require data management and workflow integration across several parties. We identify opportunities for all parties involved to realize benefits with a modular interoperability model based on service-oriented architecture and grid computing principles. We discuss middleware products for implementation of this model, and propose caGrid as an ideal candidate due to its healthcare focus; free, open source license; and mature developer tools and support. PMID:20449775

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babun, Leonardo; Aksu, Hidayet; Uluagac, A. Selcuk

    The core vision of the smart grid concept is the realization of reliable two-way communications between smart devices (e.g., IEDs, PLCs, PMUs). The benefits of the smart grid also come with tremendous security risks and new challenges in protecting smart grid systems from cyber threats. In particular, the use of untrusted counterfeit smart grid devices represents a real problem. The consequences of propagating false or malicious data, as well as stealing valuable user or smart grid state information through counterfeit devices, are costly. Hence, early detection of counterfeit devices is critical for protecting the smart grid's components and users. To address these concerns, in this poster, we introduce our initial design of a configurable framework that utilizes system call tracing, library interposition, and statistical techniques for monitoring and detection of counterfeit smart grid devices. In our framework, we consider six different counterfeit device scenarios with different smart grid devices and adversarial settings. Our initial results on a realistic testbed utilizing actual smart grid GOOSE messages with the IEC 61850 communication protocol are very promising. Our framework shows excellent rates of detection of counterfeit smart grid devices.
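The statistical part of such a detection scheme can be illustrated with a toy example: compare a device's system-call frequency profile against a baseline taken from a known-genuine device. This is a hypothetical sketch of the general technique, not the authors' framework; the syscall names, the chi-square-style distance, and the threshold are all assumptions.

```python
# Toy counterfeit-device check: compare syscall frequency profiles
# against a genuine-device baseline. Names and threshold are assumed.

from collections import Counter

def syscall_profile(trace):
    """Relative frequency of each syscall in a trace."""
    counts = Counter(trace)
    total = sum(counts.values())
    return {name: c / total for name, c in counts.items()}

def profile_distance(baseline, observed):
    """Chi-square-style distance between two frequency profiles."""
    keys = set(baseline) | set(observed)
    return sum((baseline.get(k, 0.0) - observed.get(k, 0.0)) ** 2
               / (baseline.get(k, 0.0) + observed.get(k, 0.0))
               for k in keys)

genuine = ["read", "write", "read", "ioctl", "read", "write"]
counterfeit = ["read", "socket", "connect", "sendto", "socket", "sendto"]
baseline = syscall_profile(genuine)

THRESHOLD = 0.5  # assumed decision threshold
for trace in (genuine, counterfeit):
    d = profile_distance(baseline, syscall_profile(trace))
    print("counterfeit" if d > THRESHOLD else "genuine", round(d, 3))
```

A real deployment would trace syscalls via ptrace or library interposition and calibrate the threshold on labeled devices; here the counterfeit trace's unexpected network calls push its distance well above the threshold.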

  3. Breaking CFD Bottlenecks in Gas-Turbine Flow-Path Design

    NASA Technical Reports Server (NTRS)

    Davis, Roger L.; Dannenhoffer, John F., III; Clark, John P.

    2010-01-01

    New ideas are forthcoming to break existing bottlenecks in using CFD during design: CAD-based automated grid generation; multi-disciplinary use of embedded, overset grids to eliminate complex gridding problems; use of time-averaged detached-eddy simulations as the norm instead of "steady" RANS to include effects of self-excited unsteadiness; and combined GPU/core parallel computing to provide over an order of magnitude increase in performance/price ratio. Gas-turbine applications are shown here, but these ideas can be used for other Air Force, Navy, and NASA applications.

  4. An Eulerian/Lagrangian method for computing blade/vortex impingement

    NASA Technical Reports Server (NTRS)

    Steinhoff, John; Senge, Heinrich; Yonghu, Wenren

    1991-01-01

    A combined Eulerian/Lagrangian approach to calculating helicopter rotor flows with concentrated vortices is described. The method computes a general evolving vorticity distribution without any significant numerical diffusion. Concentrated vortices can be accurately propagated over long distances on relatively coarse grids with cores only several grid cells wide. The method is demonstrated for a blade/vortex impingement case in 2D and 3D where a vortex is cut by a rotor blade, and the results are compared to previous 2D calculations involving a fifth-order Navier-Stokes solver on a finer grid.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dias, F. G.; Luo, Y.; Mohanpurkar, M.

    Since the modern-day introduction of plug-in electric vehicles (PEVs), scientists have proposed leveraging PEV battery packs as distributed energy resources for the electric grid. PEV charging can be controlled not only to provide energy for transportation but also to provide grid services and to facilitate the integration of renewable energy generation. With renewable generation increasing at an unprecedented rate, most of which is non-dispatchable and intermittent, the concept of using PEVs as controllable loads is appealing to electric utilities. This additional functionality could also provide value to PEV owners and drive PEV adoption. It has been widely proposed that PEVs can provide valuable grid services, such as load shifting and voltage regulation. The objective of this work is to address the degree to which PEVs can provide grid services and mutually benefit electric utilities, PEV owners, and auto manufacturers.

  6. Short Paper: Frequency Regulation Services from Connected Residential Devices: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Jin, Xin; Vaidhynathan, Deepthi

    In this paper, we demonstrate the potential benefits that residential buildings can provide for frequency regulation services in the electric power grid. In a hardware-in-the-loop (HIL) implementation, simulated homes along with a physical laboratory home are coordinated via a grid aggregator, and it is shown that their aggregate response has the potential to follow the regulation signal on a timescale of seconds. Connected (communication-enabled) devices in the National Renewable Energy Laboratory's (NREL's) Energy Systems Integration Facility (ESIF) received demand response (DR) requests from a grid aggregator, and the devices responded accordingly to meet the signal while satisfying user comfort bounds and physical hardware limitations. Future research will address the issues of cybersecurity threats, participation rates, and reducing equipment wear-and-tear while providing grid services.

  7. Towards an advanced e-Infrastructure for Civil Protection applications: Research Strategies and Innovation Guidelines

    NASA Astrophysics Data System (ADS)

    Mazzetti, P.; Nativi, S.; Verlato, M.; Angelini, V.

    2009-04-01

    In the context of the EU co-funded project CYCLOPS (http://www.cyclops-project.eu), the problem of designing an advanced e-Infrastructure for Civil Protection (CP) applications has been addressed. As a preliminary step, studies of European CP systems and operational applications were performed in order to define their specific system requirements. At a higher level, it was verified that CP applications are usually conceived to map CP business processes involving different levels of processing, including data access, data processing, and output visualization. At their core they usually run one or more Earth Science models for information extraction. The traditional approach, based on the development of monolithic applications, presents limitations related to flexibility (e.g. the possibility of running the same models with different input data sources, or different models with the same data sources) and scalability (e.g. launching several runs for different scenarios, or implementing more accurate and computing-demanding models). Flexibility can be addressed by adopting a modular design based on an SOA and standard services and models, such as OWS and ISO for geospatial services. Distributed computing and storage solutions can improve scalability. Based on these considerations, an architectural framework has been defined. It consists of a Web Service layer providing advanced services for CP applications (e.g. standard geospatial data sharing and processing services) working on an underlying Grid platform. This framework has been tested through the development of prototypes as proof of concept. These theoretical studies and proofs of concept demonstrated that although Grid and geospatial technologies would be able to provide significant benefits to CP applications in terms of scalability and flexibility, current platforms are designed around requirements different from those of CP. In particular, CP applications have strict requirements in terms of: a) real-time capabilities, privileging time of response over accuracy; b) security services to support complex data policies and trust relationships; and c) interoperability with existing or planned infrastructures (e.g. e-Government, INSPIRE-compliant, etc.). These requirements are, in fact, the main reason why CP applications differ from Earth Science applications. Therefore further research is required to design and implement an advanced e-Infrastructure satisfying those specific requirements. Five themes requiring further research were identified: Grid Infrastructure Enhancement, Advanced Middleware for CP Applications, Security and Data Policies, CP Applications Enablement, and Interoperability. For each theme, several research topics were proposed and detailed. They are targeted at solving specific problems for the implementation of an effective operational European e-Infrastructure for CP applications.

  8. The Design of Distributed Micro Grid Energy Storage System

    NASA Astrophysics Data System (ADS)

    Liang, Ya-feng; Wang, Yan-ping

    2018-03-01

    When a distributed micro-grid runs in island mode, the energy storage system is the core of maintaining stable micro-grid operation. The existing fixed-connection energy storage structure is difficult to adjust during operation and can easily cause micro-grid volatility. In this paper, an array-type energy storage structure is proposed, and its structure and working principle are analyzed. Finally, a model of the array-type energy storage structure is established in MATLAB; the simulation results show that the array-type energy storage system has great flexibility, which can maximize the utilization of the energy storage system, guarantee the reliable operation of the distributed micro-grid, and achieve peak clipping and valley filling.

  9. Testing and evaluation of a slot and tab construction technique for light-weight wood-fiber-based structural panels under bending

    Treesearch

    Jinghao Li; John F. Hunt; Shaoqin Gong; Zhiyong Cai

    2015-01-01

    This paper presents construction details and strain distributions for light-weight wood-fiber-based structural panels with a tri-grid core made from phenolic-impregnated laminated paper composites under bending. A new fastening configuration of slots in the faces and tabs on the core was applied to the face/core interfaces of the sandwich panel in addition to epoxy resin. Both...

  10. Advanced Wireless Integrated Navy Network - AWINN

    DTIC Science & Technology

    2005-09-30

    progress report No. 3 on AWINN hardware and software configurations of smart, wideband, multi-function antennas, secure configurable platform, close-in...results to the host PC via a UART soft core. The UART core used is a proprietary Xilinx core which incorporates features described in National...current software uses wheel odometry and visual landmarks to create a map and estimate position on an internal x, y grid. The wheel odometry provides a

  11. Design of Energy Storage Management System Based on FPGA in Micro-Grid

    NASA Astrophysics Data System (ADS)

    Liang, Yafeng; Wang, Yanping; Han, Dexiao

    2018-01-01

    The energy storage system is the core of maintaining the stable operation of a smart micro-grid. To address the existing problems of energy storage management systems in the micro-grid, such as low fault tolerance and a tendency to cause micro-grid fluctuations, a new intelligent battery management system based on a field-programmable gate array (FPGA) is proposed, taking advantage of the FPGA to combine the battery management system with the intelligent micro-grid control strategy. Finally, because inaccurate initialization of weights and thresholds leads to large errors when a neural network estimates battery state of charge, a genetic algorithm is proposed to optimize the neural network, and an experimental simulation is carried out. The experimental results show that the algorithm has high precision and provides a guarantee for the stable operation of the micro-grid.
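The genetic-algorithm idea in this abstract, searching for model parameters instead of relying on a poor random initialization, can be sketched compactly. For brevity the "network" below is a single neuron estimating state of charge (SOC) from a voltage reading; the synthetic data, population size, and mutation rate are illustrative assumptions, not the paper's configuration.

```python
# Minimal GA sketch: evolve (weight, bias) of a one-neuron SOC estimator
# to minimize mean squared error on synthetic calibration data.
# All data and hyperparameters are illustrative assumptions.

import random

random.seed(0)
# synthetic calibration data: SOC roughly linear in cell voltage
data = [(v, 0.9 * v - 2.0) for v in [3.0, 3.2, 3.4, 3.6, 3.8, 4.0]]

def mse(weights):
    w, b = weights
    return sum((w * v + b - soc) ** 2 for v, soc in data) / len(data)

def evolve(pop_size=30, generations=60, mut=0.1):
    pop = [(random.uniform(-5, 5), random.uniform(-5, 5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mse)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            (w1, b1), (w2, b2) = random.sample(parents, 2)
            children.append(((w1 + w2) / 2 + random.gauss(0, mut),  # crossover
                             (b1 + b2) / 2 + random.gauss(0, mut))) # + mutation
            # averaging crossover plus Gaussian mutation
        pop = parents + children
    return min(pop, key=mse)

best = evolve()
print(round(best[0], 2), round(best[1], 2), round(mse(best), 4))
```

In the paper's setting the chromosome would encode the initial weights and thresholds of a full neural network rather than a single neuron, but the selection/crossover/mutation loop has the same shape.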

  12. Capturing Multiscale Phenomena via Adaptive Mesh Refinement (AMR) in 2D and 3D Atmospheric Flows

    NASA Astrophysics Data System (ADS)

    Ferguson, J. O.; Jablonowski, C.; Johansen, H.; McCorquodale, P.; Ullrich, P. A.; Langhans, W.; Collins, W. D.

    2017-12-01

    Extreme atmospheric events such as tropical cyclones are inherently complex multiscale phenomena. Such phenomena are a challenge to simulate in conventional atmosphere models, which typically use rather coarse uniform-grid resolutions. To enable the study of these systems, Adaptive Mesh Refinement (AMR) can provide sufficient local resolution by dynamically placing high-resolution grid patches selectively over user-defined features of interest, such as a developing cyclone, while avoiding the computational burden of requiring such high resolution globally. This work explores the use of AMR with a high-order, non-hydrostatic, finite-volume dynamical core, which uses the Chombo AMR library to implement refinement in both space and time on a cubed-sphere grid. The characteristics of the AMR approach are demonstrated via a series of idealized 2D and 3D test cases designed to mimic atmospheric dynamics and multiscale flows. In particular, new shallow-water test cases with forcing mechanisms are introduced to mimic the strengthening of tropical cyclone-like vortices and to include simplified moisture and convection processes. The forced shallow-water experiments quantify the improvements gained from AMR grids, assess how well transient features are preserved across grid boundaries, and determine effective refinement criteria. In addition, results from idealized 3D test cases are shown to characterize the accuracy and stability of the non-hydrostatic 3D AMR dynamical core.

  13. FermiGrid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yocum, D.R.; Berman, E.; Canal, P.

    2007-05-01

    As one of the founding members of the Open Science Grid Consortium (OSG), Fermilab enables coherent access to its production resources through the Grid infrastructure system called FermiGrid. This system successfully provides for centrally managed grid services, opportunistic resource access, development of OSG Interfaces for Fermilab, and an interface to the Fermilab dCache system. FermiGrid supports virtual organizations (VOs) including high energy physics experiments (USCMS, MINOS, D0, CDF, ILC), astrophysics experiments (SDSS, Auger, DES), biology experiments (GADU, Nanohub) and educational activities.

  14. Medical applications for high-performance computers in SKIF-GRID network.

    PubMed

    Zhuchkov, Alexey; Tverdokhlebov, Nikolay

    2009-01-01

    The paper presents a set of software services for massive mammography image processing by using high-performance parallel computers of SKIF-family which are linked into a service-oriented grid-network. An experience of a prototype system implementation in two medical institutions is also described.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Zhiwen; Eichman, Josh; Kurtz, Jennifer

    This National Renewable Energy Laboratory industry-inspired Laboratory Directed Research and Development project evaluates the feasibility and economics of using fuel cell backup power systems in cell towers to provide grid services (e.g., balancing, ancillary services, demand response). The work is intended to evaluate the integration of thousands of under-utilized, clean, efficient, and reliable fuel cell systems that are already installed in cell towers for potential grid and ancillary services.

  16. Design of Grid Portal System Based on RIA

    NASA Astrophysics Data System (ADS)

    Cao, Caifeng; Luo, Jianguo; Qiu, Zhixin

    Grid portals are an important branch of grid research. To address the weak expressiveness, poor interactivity, low operating efficiency, and other shortcomings of the first and second generations of grid portal systems, RIA technology was introduced. A new portal architecture was designed based on RIA and Web services. A concrete implementation scheme for the portal system, using Adobe Flex/Flash technology, is presented, forming a new design pattern. In terms of system architecture, the design pattern combines the advantages of B/S and C/S, balances the server and its client side, optimizes system performance, and achieves platform independence. In terms of system function, the design pattern realizes grid service calls, provides a client interface with a rich user experience, and integrates local resources by using FABridge, LCDS, Flash Player, and other components.

  17. XGC developments for a more efficient XGC-GENE code coupling

    NASA Astrophysics Data System (ADS)

    Dominski, Julien; Hager, Robert; Ku, Seung-Hoe; Chang, Cs

    2017-10-01

    In the Exascale Computing Program, the High-Fidelity Whole Device Modeling project initially aims at delivering a tightly coupled simulation of plasma neoclassical and turbulence dynamics from the core to the edge of the tokamak. To permit such simulations, the gyrokinetic codes GENE and XGC will be coupled together. Numerical efforts are being made to improve the agreement of the numerical schemes in the coupling region. One of the difficulties of coupling these codes is the incompatibility of their grids: GENE is a continuum grid-based code, while XGC is a particle-in-cell code using an unstructured triangular mesh. A field-aligned filter has therefore been implemented in XGC. Even though XGC originally had an approximately field-following mesh, this field-aligned filter permits a perturbation discretization closer to the one solved in the field-aligned code GENE. Additionally, new XGC gyro-averaging matrices are implemented on a velocity grid adapted to the plasma properties, thus ensuring the same accuracy from the core to the edge regions.

  18. The State of NASA's Information Power Grid

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Vaziri, Arsi; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation transfers the concept of the power grid to information sharing in the NASA community. An information grid of this sort would be characterized as comprising tools, middleware, and services for the facilitation of interoperability, distribution of new technologies, human collaboration, and data management. While a grid would increase the ability of information sharing, it would not necessitate it. The onus of utilizing the grid would rest with the users.

  19. Geometry Laboratory (GEOLAB) surface modeling and grid generation technology and services

    NASA Technical Reports Server (NTRS)

    Kerr, Patricia A.; Smith, Robert E.; Posenau, Mary-Anne K.

    1995-01-01

    The facilities and services of the GEOmetry LABoratory (GEOLAB) at the NASA Langley Research Center are described. Included in this description are the laboratory functions, the surface modeling and grid generation technologies used in the laboratory, and examples of the tasks performed in the laboratory.

  20. Gridding Global δ 18Owater and Interpreting Core Top δ 18Oforam

    NASA Astrophysics Data System (ADS)

    Legrande, A. N.; Schmidt, G.

    2004-05-01

    Estimations of the oxygen isotope ratio in seawater (δ 18O water) have traditionally relied on regional δ 18O water to salinity relationships to convert seawater salinity into δ 18O water. This indirect method of determining δ 18O water is necessary since δ 18O water measurements are relatively sparse. We improve upon this process by constructing local δ 18O water to salinity curves using the Schmidt et al. (1999) global database of δ 18O water and salinity. We calculate local δ 18O water to salinity relationships on a 1x1 grid based on the closest database points to each grid box. Each ocean basin is analyzed separately, and each curve is processed to exclude outliers. These local relationships, in combination with seawater salinity (Levitus, 1994), allow us to construct a global map of δ 18O water on a 1x1 grid. We combine seawater temperature (Levitus, 1994) with this dataset to predict δ 18O calcite on a 1x1 grid. These predicted values are then compared to previous compilations of core top δ 18O foram data for individual species of foraminifera. This comparison provides insight into the calcification habitats (as inferred from seawater temperature and salinity) of these species. Additionally, we compare the 1x1 grid of δ 18O water to preliminary output from the latest GISS coupled atmosphere/ocean GCM that tracks water isotopes through the hydrologic cycle. This comparison provides insight into possible applications of the model as a tool to aid in interpreting paleo-isotope data.
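The per-grid-box step of this procedure is a small linear regression: fit a local δ 18O water to salinity line from the nearest database points, then convert the gridded salinity for that box into δ 18O water. The sketch below uses synthetic, exactly linear points for one hypothetical box; the real procedure draws on the global seawater isotope database with per-basin outlier screening.

```python
# One grid box of the gridding procedure: ordinary least-squares fit of
# d18O-water vs. salinity from nearby database points, then prediction
# at the box's gridded salinity. Data values are synthetic assumptions.

def fit_line(pairs):
    """OLS slope and intercept for (salinity, d18O) pairs."""
    n = len(pairs)
    sx = sum(s for s, _ in pairs)
    sy = sum(d for _, d in pairs)
    sxx = sum(s * s for s, _ in pairs)
    sxy = sum(s * d for s, d in pairs)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

# nearest database points for one 1x1 box: (salinity, d18O of seawater)
nearby = [(34.0, -0.30), (34.5, -0.05), (35.0, 0.20), (35.5, 0.45)]
slope, intercept = fit_line(nearby)

box_salinity = 34.8           # gridded climatological salinity for the box
d18o_water = slope * box_salinity + intercept
print(round(slope, 3), round(d18o_water, 3))  # -> 0.5 0.1
```

Repeating this fit box-by-box, basin-by-basin, with outlier rejection, yields the global 1x1 δ 18O water field described above.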

  1. Reducing numerical costs for core wide nuclear reactor CFD simulations by the Coarse-Grid-CFD

    NASA Astrophysics Data System (ADS)

    Viellieber, Mathias; Class, Andreas G.

    2013-11-01

    Traditionally, complete nuclear reactor core simulations are performed with subchannel analysis codes that rely on experimental and empirical input. The Coarse-Grid-CFD (CGCFD) intends to replace the experimental or empirical input with CFD data. The reactor core consists of repetitive flow patterns, allowing the general approach of creating a parametrized model for one segment and composing many of those to obtain the entire reactor simulation. The method is based on a detailed and well-resolved CFD simulation of one representative segment. From this simulation we extract so-called parametrized volumetric forces which close an otherwise strongly under-resolved, coarsely meshed model of a complete reactor setup. While the formulation so far accounts for forces created internally in the fluid, others, e.g. obstruction and flow deviation caused by spacers and wire wraps, still need to be accounted for if the geometric details are not represented in the coarse mesh. These are modelled with an Anisotropic Porosity Formulation (APF). This work focuses on the application of the CGCFD to a complete reactor core setup and the accomplishment of the parametrization of the volumetric forces.

  2. A Unified Framework for Periodic, On-Demand, and User-Specified Software Information

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.

    2004-01-01

    Although grid computing can increase the number of resources available to a user, not all resources on the grid may have a software environment suitable for running a given application. To provide users with the necessary assistance for selecting resources with compatible software environments and/or for automatically establishing such environments, it is necessary to have an accurate source of information about the software installed across the grid. This paper presents a new OGSI-compliant software information service that has been implemented as part of NASA's Information Power Grid project. This service is built on top of a general framework for reconciling information from periodic, on-demand, and user-specified sources. Information is retrieved using standard XPath queries over a single unified namespace, independent of the information's source. Two consumers of the provided software information, the IPG Resource Broker and the IPG Neutralization Service, are briefly described.
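The XPath-query pattern described in this abstract can be illustrated with a small example. The XML layout, host names, and package entries below are invented for demonstration, not the IPG service's actual schema; Python's `xml.etree.ElementTree` supports only a subset of XPath, which suffices here.

```python
# Hypothetical software-information catalog queried with XPath-style
# expressions; the schema and package data are illustrative assumptions.

import xml.etree.ElementTree as ET

catalog = ET.fromstring("""
<resources>
  <host name="nodeA">
    <package name="mpich" version="1.2.7"/>
    <package name="gcc" version="3.3"/>
  </host>
  <host name="nodeB">
    <package name="gcc" version="2.95"/>
  </host>
</resources>
""")

# find every host that has gcc installed, along with its version
for host in catalog.findall("host"):
    gcc = host.find("package[@name='gcc']")
    if gcc is not None:
        print(host.get("name"), gcc.get("version"))
```

A resource broker could run queries like this to select only hosts whose installed software matches an application's requirements, regardless of whether the entry came from a periodic scan, an on-demand probe, or user input.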

  3. Integrating existing software toolkits into VO system

    NASA Astrophysics Data System (ADS)

    Cui, Chenzhou; Zhao, Yong-Heng; Wang, Xiaoqian; Sang, Jian; Luo, Ze

    2004-09-01

    A Virtual Observatory (VO) is a collection of interoperating data archives and software tools. Taking advantage of the latest information technologies, it aims to provide a data-intensive online research environment for astronomers around the world. A large number of high-quality astronomical software packages and libraries are powerful and easy to use, and have been widely used by astronomers for many years. Integrating those toolkits into the VO system is a necessary and important task for VO developers. The VO architecture greatly depends on Grid and Web services; consequently, the general VO integration route is "Java Ready - Grid Ready - VO Ready". In this paper, we discuss the importance of VO integration for existing toolkits and discuss possible solutions. We introduce two efforts in this field from the China-VO project, "gImageMagick" and "Galactic abundance gradients statistical research under grid environment". We also discuss what additional work should be done to convert a Grid service to a VO service.

  4. Grid and Cloud for Developing Countries

    NASA Astrophysics Data System (ADS)

    Petitdidier, Monique

    2014-05-01

    The European Grid e-infrastructure has shown the capacity to connect geographically distributed, heterogeneous compute resources in a secure way, taking advantage of a robust and fast REN (Research and Education Network). In many countries, as in Africa, the first step has been to implement a REN, with regional organizations like Ubuntunet, WACREN, or ASREN coordinating the development and improvement of the network and its interconnection. Internet connections are still expanding rapidly in those countries. The second step has been to meet the compute needs of the scientists. Even though many of them have their own laptops, multi-core or not, for more and more applications this is not enough, because they face intensive computing due to the large amount of data to be processed and/or complex codes. So far, one solution has been to go abroad, to Europe or America, to run large applications, or simply not to participate in international communities. The Grid is very attractive for connecting geographically distributed heterogeneous resources, aggregating new ones, and creating new sites on the REN with secure access. All users have the same services even if they have no resources in their own institute. With faster and more robust internet, they will be able to take advantage of the European Grid. There are different initiatives to provide resources and training, like the UNESCO/HP Brain Gain initiative and EUMEDGrid. Nowadays, Clouds have become very attractive, and they are starting to be developed in some countries. In this talk, the challenges for those countries in implementing such e-infrastructures, and in developing in parallel scientific and technical research and education in the new technologies, will be presented and illustrated by examples.

  5. Experimental Study of Two Phase Flow Behavior Past BWR Spacer Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ratnayake, Ruwan K.; Hochreiter, L.E.; Ivanov, K.N.

    2002-07-01

    Performance of best estimate codes used in the nuclear industry can be significantly improved by reducing the empiricism embedded in their constitutive models. Spacer grids have been found to have an important impact on the maximum allowable Critical Heat Flux within the fuel assembly of a nuclear reactor core. Therefore, incorporation of suitable spacer grid models can improve the critical heat flux prediction capability of best estimate codes. Realistic modeling of the entrainment behavior of spacer grids requires understanding the different mechanisms that are involved. Since visual information pertaining to the entrainment behavior of spacer grids cannot be obtained from operating nuclear reactors, experiments have to be designed and conducted for this specific purpose. Most of the spacer grid experiments available in the literature have been designed to obtain quantitative data for the purpose of developing or modifying empirical formulations for heat transfer, critical heat flux or pressure drop. Very few experiments have been designed to provide the fundamental information needed to understand spacer grid effects and the phenomena involved in two-phase flow. Air-water experiments were conducted to obtain visual information on the two-phase flow behavior both upstream and downstream of Boiling Water Reactor (BWR) spacer grids. The test section was designed and constructed using prototypic dimensions such as the channel cross-section, rod diameter and other spacer grid configurations of a typical BWR fuel assembly. The test section models the flow behavior in two adjacent subchannels in the BWR core. A portion of a prototypic BWR spacer grid accounting for two adjacent channels was used with industrial mild steel rods to represent the channel internals. Symmetry was preserved in this arrangement, so that the channel walls could effectively be considered as the channel boundaries.
Thin films were established on the rod surfaces by injecting water through a set of perforations at the bottom ends of the rods, ensuring that the flow upstream of the bottom-most spacer grid was predominantly annular. The flow conditions were regulated such that they represent typical BWR operating conditions. Photographs taken during the experiments show that film entrainment increases significantly at the spacer grids, since the points of contact between the rods and the grids cause large portions of the liquid film to be peeled off the rod surfaces. Decreasing the water flow resulted in eventual dryout, beginning at positions immediately upstream of the spacer grids. (authors)

  6. Monitoring of services with non-relational databases and map-reduce framework

    NASA Astrophysics Data System (ADS)

    Babik, M.; Souto, F.

    2012-12-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
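    The availability computation mentioned above maps naturally onto a map-reduce pass over raw test results. The sketch below is a minimal illustration of that idea in plain Python; the record layout and status values are assumptions for illustration, not the actual SAM schema or its Cassandra/HBase/MongoDB implementations.

```python
from collections import defaultdict

# Hypothetical raw SAM-style test results: (service, timestamp, status).
# The status vocabulary here is an assumption, not the real SAM one.
OK_STATUSES = {"OK"}

def map_phase(records):
    """Emit (service, (1_if_ok, 1)) pairs for each raw measurement."""
    for service, _ts, status in records:
        yield service, (1 if status in OK_STATUSES else 0, 1)

def reduce_phase(pairs):
    """Sum per-service OK counts and totals, then derive availability."""
    acc = defaultdict(lambda: [0, 0])
    for service, (ok, total) in pairs:
        acc[service][0] += ok
        acc[service][1] += total
    return {svc: ok / total for svc, (ok, total) in acc.items()}

records = [
    ("CE", 1, "OK"), ("CE", 2, "OK"), ("CE", 3, "CRITICAL"),
    ("SRM", 1, "OK"),
]
availability = reduce_phase(map_phase(records))
```

    Because both phases are associative per service, the reduce step can be distributed across nodes, which is what makes re-processing stored raw data attractive.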

  7. OOI CyberInfrastructure - Next Generation Oceanographic Research

    NASA Astrophysics Data System (ADS)

    Farcas, C.; Fox, P.; Arrott, M.; Farcas, E.; Klacansky, I.; Krueger, I.; Meisinger, M.; Orcutt, J.

    2008-12-01

    Software has become a key enabling technology for scientific discovery, observation, modeling, and exploitation of natural phenomena. New value emerges from the integration of individual subsystems into networked federations of capabilities exposed to the scientific community. Such data-intensive interoperability networks are crucial for future scientific collaborative research, as they open up new ways of fusing data from different sources and across various domains, and of performing analyses over wide geographic areas. The recently established NSF OOI program, through its CyberInfrastructure component, addresses this challenge by providing broad access, from sensor networks for data acquisition up to computational grids for massive computations, together with binding infrastructure facilitating policy management and governance of the emerging system-of-scientific-systems. We provide insight into the integration core of this effort, namely a hierarchical service-oriented architecture for a robust, performant, and maintainable implementation. We first discuss the relationship between data management and CI crosscutting concerns such as identity management, policy, and governance, which define the organizational contexts for data access and usage. Next, we detail critical services including data ingestion, transformation, preservation, inventory, and presentation. To address interoperability issues between data represented in various formats, we employ a semantic framework derived from the Earth System Grid technology, a canonical representation for scientific data based on DAP/OPeNDAP, and related data publishers such as ERDDAP. Finally, we briefly present the underlying transport, based on a messaging infrastructure over the AMQP protocol, and the preservation, based on a distributed file system through SDSC iRODS.

  8. A bilateral integrative health-care knowledge service mechanism based on 'MedGrid'.

    PubMed

    Liu, Chao; Jiang, Zuhua; Zhen, Lu; Su, Hai

    2008-04-01

    Current health-care organizations are encountering a paucity of medical knowledge. This paper classifies medical knowledge from new perspectives. The discovery of the health-care 'knowledge flow' motivates a bilateral integrative health-care knowledge service: we make medical knowledge 'flow' around and gain comprehensive effectiveness through six operations (such as knowledge refreshing...). Responding to the pressing demands of Chinese health-care reform, this paper presents 'MedGrid', a platform with a medical ontology and knowledge content services. Each level and its detailed contents are described in the MedGrid info-structure. Moreover, a new diagnosis and treatment mechanism is formed by technical connection with electronic health-care records (EHRs).

  9. 20 CFR 663.155 - How are core services delivered?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false How are core services delivered? 663.155... Worker Services Through the One-Stop Delivery System § 663.155 How are core services delivered? Core services must be provided through the One-Stop delivery system. Core services may be provided directly by...

  10. 20 CFR 663.155 - How are core services delivered?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false How are core services delivered? 663.155... Worker Services Through the One-Stop Delivery System § 663.155 How are core services delivered? Core services must be provided through the One-Stop delivery system. Core services may be provided directly by...

  11. JAliEn - A new interface between the AliEn jobs and the central services

    NASA Astrophysics Data System (ADS)

    Grigoras, A. G.; Grigoras, C.; Pedreira, M. M.; Saiz, P.; Schreiner, S.

    2014-06-01

    Since the ALICE experiment began data taking in early 2010, the amount of end-user jobs on the AliEn Grid has increased significantly. Presently 1/3 of the 40K CPU cores available to ALICE are occupied by jobs submitted by about 400 distinct users, individually or in organized analysis trains. The overall stability of the AliEn middleware has been excellent throughout the 3 years of running, but the massive amount of end-user analysis, with its specific requirements and load, has revealed a few components which can be improved. One of them is the interface between users and the central AliEn services (catalogue, job submission system), which we are currently re-implementing in Java. The interface provides a persistent connection with enhanced data and job submission authenticity. In this paper we describe the architecture of the new interface, the ROOT binding which enables the use of a single interface in addition to the standard UNIX-like access shell, and the new security-related features.

  12. Open Science Grid (OSG) Ticket Synchronization: Keeping Your Home Field Advantage In A Distributed Environment

    NASA Astrophysics Data System (ADS)

    Gross, Kyle; Hayashi, Soichi; Teige, Scott; Quick, Robert

    2012-12-01

    Large distributed computing collaborations, such as the Worldwide LHC Computing Grid (WLCG), face many issues when it comes to providing a working grid environment for their users. One of these is exchanging tickets between the various ticketing systems in use by grid collaborations. Ticket systems such as Footprints, RT, Remedy, and ServiceNow all have different schemas that must be addressed in order to provide a reliable exchange of information between support entities and users in different grid environments. To combat this problem, OSG Operations has created a ticket synchronization interface called GOC-TX that relies on web services instead of the error-prone email parsing methods of the past. Synchronizing tickets between different ticketing systems allows any user or support entity to work on a ticket in their home environment, thus providing a familiar and comfortable place to provide updates without having to learn another ticketing system. The interface is built generically enough that it can be customized for nearly any ticketing system with a web-service interface with only minor changes. This allows us to be flexible and to rapidly bring new ticket synchronizations online. Synchronization can be triggered by different methods including mail, a web services interface, and active messaging. GOC-TX currently interfaces with Global Grid User Support (GGUS) for WLCG, Remedy at Brookhaven National Lab (BNL), and Request Tracker (RT) at the Virtual Data Toolkit (VDT). Work is progressing on the Fermi National Accelerator Laboratory (FNAL) ServiceNow synchronization. This paper will explain the problems faced by OSG and how they led OSG to create and implement this ticket synchronization system, along with the technical details that allow synchronization to be performed at a production level.
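    The schema translation at the heart of such a synchronizer can be sketched as a per-system field map applied in both directions. The field names below are invented for illustration and do not reflect the actual GGUS, RT, or ServiceNow schemas.

```python
# Hypothetical field maps from a common internal ticket schema to two
# external ticketing systems; real GGUS/RT/ServiceNow schemas differ.
FIELD_MAPS = {
    "GGUS": {"subject": "ticket_subject", "status": "ticket_status"},
    "RT":   {"subject": "Subject",        "status": "Status"},
}

def to_external(system, internal_ticket):
    """Rename internal fields to the target system's schema."""
    fmap = FIELD_MAPS[system]
    return {fmap[k]: v for k, v in internal_ticket.items() if k in fmap}

def to_internal(system, external_ticket):
    """Inverse mapping: external schema back to the internal one."""
    inv = {v: k for k, v in FIELD_MAPS[system].items()}
    return {inv[k]: v for k, v in external_ticket.items() if k in inv}

ticket = {"subject": "CE down at site X", "status": "open"}
ggus_view = to_external("GGUS", ticket)
round_trip = to_internal("GGUS", ggus_view)
```

    Keeping the maps as data, rather than code, is what makes adding a new ticketing system a matter of configuration plus a thin web-service client.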

  13. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technology infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will come in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources such as computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  14. Post-Test Inspection of NASA's Evolutionary Xenon Thruster Long Duration Test Hardware: Ion Optics

    NASA Technical Reports Server (NTRS)

    Soulas, George C.; Shastry, Rohit

    2016-01-01

    A Long Duration Test (LDT) was initiated in June 2005 as a part of NASA's Evolutionary Xenon Thruster (NEXT) service life validation approach. Testing was voluntarily terminated in February 2014, with the thruster accumulating 51,184 hours of operation, processing 918 kg of xenon propellant, and delivering 35.5 MN-s of total impulse. The post-test inspection objectives for the ion optics were derived from the original NEXT LDT test objectives, such as service life model validation, and expanded to encompass other goals that included verification of in situ measurements, test issue root causes, and past design changes. The ion optics cold grid gap had decreased by an average of only 7% of the pretest center grid gap, so efforts to stabilize the NEXT grid gap were largely successful. The upstream screen grid surface exhibited a chamfered erosion pattern. Screen grid thicknesses were ≥ 86% of the estimated pretest thickness, indicating that the screen grid has substantial service life remaining. Deposition was found on the screen aperture walls and downstream surfaces that was primarily composed of grid material and back-sputtered carbon, and this deposition likely caused the minor decreases in screen grid ion transparency during the test. Groove depths had eroded through up to 35% of the accelerator grid thickness. Minimum accelerator aperture diameters increased only by about 5-7% of the pretest values, and downstream surface diameters increased by about 24-33% of the pretest diameters. These results suggest that increasing the accelerator aperture diameters, improving manufacturing tolerances, and masking down the perforated diameter to 36 cm were successful in reducing the degree of accelerator aperture erosion at larger radii.

  15. Performance of the Widely-Used CFD Code OVERFLOW on the Pleiades Supercomputer

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2017-01-01

    Computational performance studies were made for NASA's widely used Computational Fluid Dynamics code OVERFLOW on the Pleiades Supercomputer. Two test cases were considered: a full launch vehicle with a grid of 286 million points and a full rotorcraft model with a grid of 614 million points. Computations using up to 8000 cores were run on Sandy Bridge and Ivy Bridge nodes. Performance was monitored using times reported in the day files from the Portable Batch System utility. Results for two grid topologies are presented and compared in detail. Observations and suggestions for future work are made.
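    The timing data mentioned above come from PBS day files. A minimal way to turn such walltimes into a strong-scaling efficiency figure is sketched below; the walltime values and core counts are made up for illustration, not actual OVERFLOW results.

```python
def walltime_to_seconds(hms):
    """Parse a PBS-style HH:MM:SS walltime string into seconds."""
    h, m, s = (int(x) for x in hms.split(":"))
    return 3600 * h + 60 * m + s

def scaling_efficiency(base_cores, base_time, cores, time):
    """Strong-scaling efficiency of a run relative to the base run."""
    speedup = base_time / time
    ideal = cores / base_cores
    return speedup / ideal

t1 = walltime_to_seconds("10:00:00")   # hypothetical 1000-core run
t8 = walltime_to_seconds("01:40:00")   # hypothetical 8000-core run
eff = scaling_efficiency(1000, t1, 8000, t8)  # 6x speedup / 8x cores
```

    An efficiency near 1.0 indicates near-ideal scaling; values well below 1.0 usually point to communication or load-balance overheads at high core counts.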

  16. Enhancing the AliEn Web Service Authentication

    NASA Astrophysics Data System (ADS)

    Zhu, Jianlin; Saiz, Pablo; Carminati, Federico; Betev, Latchezar; Zhou, Daicui; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Grigoras, Costin; Furano, Fabrizio; Schreiner, Steffen; Vladimirovna Datskova, Olga; Sankar Banerjee, Subho; Zhang, Guoping

    2011-12-01

    Web Services are an XML-based technology that allows applications to communicate with each other across disparate systems. Web Services are becoming the de facto standard that enables interoperability between heterogeneous processes and systems. AliEn2 is a grid environment based on web services. The AliEn2 services can be divided into three categories: central services, deployed once per organization; site services, deployed at each of the participating centers; and Job Agents, running automatically on the worker nodes. A security model to protect these services is essential for the whole system. Current implementations of web servers, such as Apache, are not directly suitable for use within the grid environment: Apache with mod_ssl and OpenSSL supports only X.509 certificates, but in the grid environment the common credential is the proxy certificate, used for the purpose of providing restricted proxying and delegation. An authentication framework was developed for the AliEn2 web services to add to the Apache web server the ability to accept X.509 certificates and proxy certificates from the client side. The authentication framework also allows the generation of access control policies to limit access to the AliEn2 web services.
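    What distinguishes a proxy certificate chain from a plain X.509 chain can be illustrated with the legacy naming convention, where each proxy's subject is its issuer's subject extended with a proxy CN. This is only a toy structural check under that assumed convention; real verification (as in RFC 3820 implementations) must also validate signatures, validity periods, and path-length constraints.

```python
def is_legacy_proxy_chain(chain):
    """chain: list of (subject, issuer) tuples ordered from the
    end-entity proxy up to the user certificate. Checks only the
    legacy naming rule subject == issuer + '/CN=proxy'; cryptographic
    verification is deliberately out of scope here."""
    for subject, issuer in chain[:-1]:
        if subject != issuer + "/CN=proxy":
            return False
    return True

user = "/DC=ch/DC=cern/CN=alice user"   # hypothetical subject name
chain = [
    (user + "/CN=proxy/CN=proxy", user + "/CN=proxy"),  # second-level proxy
    (user + "/CN=proxy", user),                          # first-level proxy
    (user, "/DC=ch/DC=cern/CN=CERN CA"),                 # user cert from CA
]
ok = is_legacy_proxy_chain(chain)
```

    A server-side framework of the kind described would apply such a structural rule after the TLS layer has verified the chain cryptographically, which is exactly the step stock Apache lacks.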

  17. Challenges and Opportunities in Modeling of the Global Atmosphere

    NASA Astrophysics Data System (ADS)

    Janjic, Zavisa; Djurdjevic, Vladimir; Vasic, Ratko

    2016-04-01

    Modeling paradigms on global scales may need to be reconsidered in order to better utilize the power of massively parallel processing. For high computational efficiency with distributed memory, each core should work on a small subdomain of the full integration domain and exchange only a few rows of halo data with the neighbouring cores. Note that the described scenario strongly favors horizontally local discretizations. This is relatively easy to achieve in regional models. However, the spherical geometry complicates the problem. The latitude-longitude grid with differencing that is local in space and explicit in time was an early choice and has remained in use ever since. The problem with this method is that the grid size in the longitudinal direction tends to zero as the poles are approached. So, in addition to the resolution being unnecessarily high near the poles, polar filtering has to be applied in order to use a time step of a reasonable size. However, the polar filtering requires transpositions involving extra communications as well as more computations. The spectral transform method and the semi-implicit semi-Lagrangian schemes opened the way for application of spectral representation. With some variations, such techniques currently dominate in global models. Unfortunately, horizontal non-locality is inherent to the spectral representation and implicit time differencing, which inhibits scaling on a large number of cores. In this respect the lat-lon grid with polar filtering is a step in the right direction, particularly at high resolutions where the Legendre transforms become increasingly expensive. Other grids with reduced variability of grid distances, such as various versions of the cubed sphere and the hexagonal/pentagonal ("soccer ball") grids, were proposed almost fifty years ago. However, on these grids, large-scale (wavenumber 4 and 5) fictitious solutions ("grid imprinting") with significant amplitudes can develop.
Due to their large scales, which are comparable to the scales of the dominant Rossby waves, such fictitious solutions are hard to identify and remove. Another new challenge on the global scale is that the limit of validity of the hydrostatic approximation is rapidly being approached. Relaxing the hydrostatic approximation requires careful reformulation of the model dynamics and more computations and communications. The unified Non-hydrostatic Multi-scale Model (NMMB) will be briefly discussed as an example. The non-hydrostatic dynamics were designed in such a way as to avoid over-specification. The global version is run on the latitude-longitude grid, and the polar filter selectively slows down the waves that would otherwise be unstable, without modifying their amplitudes. The model has been successfully tested on various scales. The skill of the medium range forecasts produced by the NMMB is comparable to that of other major medium range models, and its computational efficiency on parallel computers is good.
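    The shrinking zonal grid spacing that motivates polar filtering follows directly from spherical geometry. The sketch below computes the local east-west grid distance on a 1° lat-lon grid; the Earth radius is the only physical input, and the latitudes chosen are illustrative.

```python
import math

R_EARTH = 6.371e6  # mean Earth radius in metres

def zonal_spacing(lat_deg, dlon_deg):
    """East-west grid distance dx = R * cos(lat) * dlon on a lat-lon grid."""
    return R_EARTH * math.cos(math.radians(lat_deg)) * math.radians(dlon_deg)

dx_equator = zonal_spacing(0.0, 1.0)     # roughly 111 km at the equator
dx_near_pole = zonal_spacing(89.0, 1.0)  # roughly 2 km one degree from the pole
shrink = dx_near_pole / dx_equator       # equals cos(89 deg), about 1/57
```

    Since the maximum stable time step of an explicit scheme shrinks by the same factor, an unfiltered lat-lon model would be forced to a time step ~57 times smaller than the equatorial flow requires, which is precisely what polar filtering avoids.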

  18. Augmenting the access grid using augmented reality

    NASA Astrophysics Data System (ADS)

    Li, Ying

    2012-01-01

    The Access Grid (AG) targets an advanced collaboration environment with which groups of people at multiple remote sites can collaborate over high-performance networks. However, the current AG still employs VIC (Video Conferencing Tool) to offer only plain video for remote communication, while most AG users expect to collaboratively view and manipulate 3D geometric models of grid services' results within the live video of an AG session. Augmented Reality (AR) techniques can overcome these deficiencies through their characteristic combination of the virtual and the real, real-time interaction, and 3D registration, so it is worthwhile for the AG to use AR to better support the advanced collaboration environment. This paper introduces an effort to augment the AG by adding support for AR capability, encapsulated in the node service infrastructure as the Augmented Reality Service (ARS). The ARS can merge 3D geometric models of grid services' results with the real video scene of the AG into one AR environment, and give distributed AG users the opportunity to participate interactively and collaboratively in that AR environment with a better experience.

  19. Semantic web data warehousing for caGrid.

    PubMed

    McCusker, James P; Phillips, Joshua A; González Beltrán, Alejandra; Finkelstein, Anthony; Krauthammer, Michael

    2009-10-01

    The National Cancer Institute (NCI) is developing caGrid as a means for sharing cancer-related data and services. As more data sets become available on caGrid, we need effective ways of accessing and integrating this information. Although the data models exposed on caGrid are semantically well annotated, it is currently up to the caGrid client to infer relationships between the different models and their classes. In this paper, we present a Semantic Web-based data warehouse (Corvus) for creating relationships among caGrid models. This is accomplished through the transformation of semantically-annotated caBIG Unified Modeling Language (UML) information models into Web Ontology Language (OWL) ontologies that preserve those semantics. We demonstrate the validity of the approach by Semantic Extraction, Transformation and Loading (SETL) of data from two caGrid data sources, caTissue and caArray, as well as alignment and query of those sources in Corvus. We argue that semantic integration is necessary for integration of data from distributed web services and that Corvus is a useful way of accomplishing this. Our approach is generalizable and of broad utility to researchers facing similar integration challenges.
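    The UML-to-OWL transformation described above can be sketched as emitting subject-predicate-object triples for classes and their attributes. The class names and namespace below are invented for illustration, and a real implementation would use an RDF library and the actual caBIG semantic annotations rather than plain tuples.

```python
NS = "http://example.org/caBIG#"  # hypothetical namespace, not the real one

def uml_to_triples(model):
    """Turn a {class: [attributes]} UML summary into OWL-style triples."""
    triples = []
    for cls, attrs in model.items():
        triples.append((NS + cls, "rdf:type", "owl:Class"))
        for attr in attrs:
            prop = NS + cls + "." + attr
            triples.append((prop, "rdf:type", "owl:DatatypeProperty"))
            triples.append((prop, "rdfs:domain", NS + cls))
    return triples

# Toy stand-ins for two caGrid data models such as caTissue and caArray.
model = {"Specimen": ["id", "tissueSite"], "Array": ["barcode"]}
triples = uml_to_triples(model)
```

    Once both source models live in one triple store, alignment becomes a matter of asserting equivalences between their concepts, which is the integration step Corvus performs.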

  20. Autonomous system for Web-based microarray image analysis.

    PubMed

    Bozinov, Daniel

    2003-12-01

    Software-based feature extraction from DNA microarray images still requires human intervention at various levels. Manual adjustment of grid and metagrid parameters, precise alignment of superimposed grid templates and gene spots, or simply identification of large-scale artifacts have to be performed beforehand to reliably analyze DNA signals and correctly quantify their expression values. Ideally, a Web-based system whose input is confined solely to a single microarray image and whose output is a data table containing measurements for all gene spots would directly transform raw image data into abstracted gene expression tables. Sophisticated algorithms with advanced procedures for iterative correction can overcome inherent challenges in image processing. Herein an integrated software system is introduced with a Java-based interface on the client side that allows for decentralized access and, furthermore, enables the scientist to instantly use the most up-to-date software version at any given time. This software tool extends PixClust, as used in Extractiff, with Java Web Start deployment technology. Ultimately, this setup is intended for high-throughput pipelines in genome-wide medical diagnostics labs or microarray core facilities aimed at providing a fully automated service to their users.
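    The automatic gridding step described above ultimately reduces to placing a template of spot centres from a few grid parameters, which an alignment step then refines against the image. The sketch below generates the ideal centre coordinates for one grid block; the parameter names are illustrative, not the tool's actual API.

```python
def spot_centers(origin_x, origin_y, pitch_x, pitch_y, n_cols, n_rows):
    """Ideal spot-centre coordinates for one grid block, row-major order.

    origin_*: position of the top-left spot; pitch_*: spot-to-spot
    spacing; a metagrid would repeat this block with a larger pitch.
    """
    return [
        (origin_x + c * pitch_x, origin_y + r * pitch_y)
        for r in range(n_rows)
        for c in range(n_cols)
    ]

# A toy 4x3 block of spots starting at (10, 20) with 15-pixel pitch.
centers = spot_centers(10.0, 20.0, 15.0, 15.0, n_cols=4, n_rows=3)
```

    An iterative correction procedure would then nudge each ideal centre toward the local intensity maximum in the image, flagging spots where no maximum is found.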

  1. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    NASA Astrophysics Data System (ADS)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
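    The dynamic Torque extension described above can be caricatured as a loop that boots VMs and registers them as worker nodes. The sketch below only simulates the bookkeeping with dictionaries; in the real system the launch step would go through the OpenStack API and Puppet would configure each VM, and none of the names here reflect the actual scripts.

```python
# Simulated bookkeeping for a dynamically sized Torque cluster.
# Real launches would call OpenStack (e.g. python-novaclient) and the
# registration would use Torque's qmgr; both are stubbed out here.

def launch_vm(cloud, vm_id):
    """Pretend to boot a VM from the base Scientific Linux image."""
    cloud[vm_id] = {"image": "SL-base", "state": "ACTIVE"}
    return vm_id

def register_node(cluster, cloud, vm_id):
    """Add a successfully booted VM to the Torque node list."""
    if cloud.get(vm_id, {}).get("state") == "ACTIVE":
        cluster.add(vm_id)

cloud, cluster = {}, set()
for i in range(3):
    register_node(cluster, cloud, launch_vm(cloud, f"vm-{i}"))
```

    The point of the design is that the cluster's node list is derived from cloud state rather than fixed hardware, so capacity can grow and shrink with demand.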

  2. GEM-AC, a stratospheric-tropospheric global and regional model for air quality and climate change: evaluation of gas phase properties

    NASA Astrophysics Data System (ADS)

    Kaminski, J. W.; Semeniuk, K.; McConnell, J. C.; Lupu, A.; Mamun, A.

    2012-12-01

    The Global Environmental Multiscale model for Air Quality and climate change (GEM-AC) is a global general circulation model based on the GEM model developed by the Meteorological Service of Canada for operational weather forecasting. It can be run with a global uniform (GU) grid or a global variable (GV) grid, where the core has uniform grid spacing and the exterior grid expands. With a GV grid, high-resolution regional runs can be accomplished without concern for boundary conditions. The work described here uses GEM version 3.3.2. The gas-phase chemistry consists of detailed reactions of Ox, NOx, HOx, CO, CH4, NMVOCs, halocarbons, ClOx and BrO. We have recently added elements of the Global Modal-aerosol eXtension (GMXe) scheme to address aerosol microphysics and gas-aerosol partitioning. The evaluation of the MESSY GMXe aerosol scheme is addressed in another poster. The Canadian aerosol module (CAM) is also available. Tracers are advected using the semi-Lagrangian scheme native to GEM. The vertical transport includes parameterized subgrid-scale turbulence and large-scale convection. Dry deposition is implemented as a flux boundary condition in the vertical diffusion equation. For climate runs the GHGs CO2, CH4, N2O and CFCs in the radiation scheme are adjusted to the scenario considered. In GV regional mode at high resolutions a lake model, FLAKE, is also included. Wet removal comprises both in-cloud and below-cloud scavenging. With the gas-phase chemistry the model has been run for a series of ten-year time slices on a 3°×3° global grid with 77 hybrid levels from the surface to 0.15 hPa. The tropospheric and stratospheric gas-phase results are compared with satellite measurements including ACE, MIPAS, MOPITT, and OSIRIS. Current evaluations of the ozone field and other stratospheric fields are encouraging, and tropospheric lifetimes for CH4 and CH3CCl3 are in reasonable accord with tropospheric models.
We will present results for current and future climate conditions forced by SST for 2050.

  3. Privacy protection for HealthGrid applications.

    PubMed

    Claerhout, B; De Moor, G J E

    2005-01-01

    This contribution aims at introducing the problem of privacy protection in e-Health and at describing a number of existing privacy-enhancing techniques (PETs). The recognition that privacy constitutes a fundamental right is gradually entering public awareness. Because healthcare-related data are susceptible to being abused for many obvious reasons, public apprehension about privacy has focused on medical data. Public authorities have become convinced of the need to enforce privacy protection and are making considerable efforts, through privacy protection legislation, to promote the deployment of PETs. Based on a study of the specific features of Grid technology, ways in which PET services could be integrated into the HealthGrid are analyzed. Grid technology aims at removing barriers between local and remote resources. The privacy and legal issues raised by the HealthGrid are caused by the transparent interchange and processing of sensitive medical information. PET technology has already proven its usefulness for privacy protection in health-related marketing and research data collection. While this paper does not describe market-ready solutions for privacy protection in the HealthGrid, it puts forward several cases in which the Grid may benefit from PETs. Early integration of privacy protection services into the HealthGrid can lead to a synergy that is beneficial for the development of the HealthGrid itself.

  4. 21 CFR 892.1910 - Radiographic grid.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Radiographic grid. 892.1910 Section 892.1910 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1910 Radiographic grid. (a) Identification. A...

  5. A modified ant colony optimization for the grid job scheduling problem with QoS requirements

    NASA Astrophysics Data System (ADS)

    Pu, Xun; Lu, XianLiang

    2011-10-01

    Job scheduling with customers' quality of service (QoS) requirements is challenging in a grid environment. In this paper, we present a modified ant colony optimization (MACO) for the job scheduling problem in grids. Instead of using the conventional construction approach to build feasible schedules, the proposed algorithm employs a decomposition method to satisfy the customer's deadline and cost requirements. In addition, a new mechanism for updating the state of service instances is embedded to improve the convergence of MACO. Experiments demonstrate the effectiveness of the proposed algorithm.
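    The deadline/cost idea can be illustrated with a generic ant-colony-style loop: pheromone-weighted random choice of a service instance per job, keeping only candidate schedules that respect the customer's deadline and budget. This is a toy illustration of the general technique, not the paper's MACO algorithm, and all numbers are invented.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

# Toy service instances: (time, cost) contribution per job.
INSTANCES = [(4.0, 1.0), (2.0, 3.0), (1.0, 5.0)]

def build_schedule(n_jobs, pheromone):
    """Pick one instance per job, biased by current pheromone levels."""
    picks = random.choices(range(len(INSTANCES)), weights=pheromone, k=n_jobs)
    time = sum(INSTANCES[i][0] for i in picks)
    cost = sum(INSTANCES[i][1] for i in picks)
    return picks, time, cost

def aco_schedule(n_jobs, deadline, budget, iters=200):
    """Keep the best feasible schedule found; reinforce its choices."""
    pheromone = [1.0] * len(INSTANCES)
    best = None
    for _ in range(iters):
        picks, time, cost = build_schedule(n_jobs, pheromone)
        if time <= deadline and cost <= budget:
            if best is None or time + cost < best[1] + best[2]:
                best = (picks, time, cost)
            for i in picks:          # deposit pheromone on feasible paths
                pheromone[i] += 0.1
    return best

best = aco_schedule(n_jobs=5, deadline=16.0, budget=16.0)
```

    A real MACO variant would also evaporate pheromone and, as the abstract notes, refresh service-instance state between iterations so stale load information does not mislead the ants.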

  6. Smart Grid Development: Multinational Demo Project Analysis

    NASA Astrophysics Data System (ADS)

    Oleinikova, I.; Mutule, A.; Obushevs, A.; Antoskovs, N.

    2016-12-01

    This paper analyses demand-side management (DSM) projects and stakeholders' experience with the aim of developing, promoting and adapting smart grid technologies in Latvia. The research aims at identifying possible system services, including demand response (DR), and at determining the appropriate market design for such services to be implemented at the Baltic power system level, in cooperation between the distribution system operator (DSO) and the transmission system operator (TSO). The paper is prepared as an extract from global smart grid best practices, smart solutions and business models.

  7. AGIS: The ATLAS Grid Information System

    NASA Astrophysics Data System (ADS)

    Anisenkov, A.; Di Girolamo, A.; Klimentov, A.; Oleynik, D.; Petrosyan, A.; Atlas Collaboration

    2014-06-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization, relying on computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we describe the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  8. WebGIS based on semantic grid model and web services

    NASA Astrophysics Data System (ADS)

    Zhang, WangFei; Yue, CaiRong; Gao, JianGuo

    2009-10-01

    As the meeting point of network technology and GIS technology, WebGIS has developed rapidly in recent years. Constrained by the Web and by the characteristics of GIS, however, traditional WebGIS suffers from several prominent problems: it cannot achieve interoperability between heterogeneous spatial databases, and it cannot provide cross-platform data access. The appearance of Web Services and Grid technology has changed the WebGIS field considerably. Web Services provide interfaces that give different sites the ability to share data and intercommunicate. The goal of Grid technology is to turn the Internet into a large supercomputer through which computing resources, storage resources, data, information, knowledge and expertise can be shared efficiently. For WebGIS, however, this only achieves the physical connection of data and information, which is far from enough. Because experts in different fields understand the world differently and follow different professional regulations, policies and habits, they reach different conclusions when observing the same geographic phenomenon, and semantic heterogeneity arises: the same concept can mean very different things in different fields. A WebGIS that ignores this semantic heterogeneity will answer users' questions wrongly, or not at all. To solve this problem, this paper puts forward and evaluates an effective method of combining the semantic grid and Web Services technology to develop WebGIS. We study how to construct an ontology and how to combine Grid technology with Web Services, and, with a detailed analysis of the computing characteristics and the application model of distributed data, we design an ontology-driven WebGIS query system based on Grid technology and Web Services.

  9. Climatology of tracked persistent maxima of 500-hPa geopotential height

    NASA Astrophysics Data System (ADS)

    Liu, Ping; Zhu, Yuejian; Zhang, Qin; Gottschalck, Jon; Zhang, Minghua; Melhauser, Christopher; Li, Wei; Guan, Hong; Zhou, Xiaqiong; Hou, Dingchen; Peña, Malaquias; Wu, Guoxiong; Liu, Yimin; Zhou, Linjiong; He, Bian; Hu, Wenting; Sukhdeo, Raymond

    2017-10-01

    Persistent open ridges and blocking highs (maxima) of 500-hPa geopotential height (Z500; PMZ) adjacent in space and time are identified and tracked as one event with a Lagrangian objective approach to derive their climatological statistics with some dynamical reasoning. A PMZ starts with a core that contains a local eddy maximum of Z500 and its neighboring grid points whose eddy values decrease radially to about 20 geopotential meters (GPMs) below the maximum. It connects two consecutive cores that share at least one grid point and are within 10° of longitude of each other using an intensity-weighted location. The PMZ ends at the core without a successor. On each day, the PMZ impacts an area of grid points contiguous to the core, with eddy values decreasing radially to 100 GPMs. The PMZs identified and tracked consist of persistent ridges, omega blockings and blocked anticyclones, either connected or as individual events. For example, the PMZ during 2-13 August 2003 corresponds to the persistent open ridges that caused the extreme heatwave in Western Europe. Climatological statistics based on PMZs longer than 3 days generally agree with those of blockings. In the Northern Hemisphere, more PMZs occur in the DJF season than in JJA, and their durations in both seasons exhibit a log-linear distribution. Because more omega-shaped blocking highs and open ridges are counted, PMZs occur more frequently over the Northeast Pacific than over the Atlantic-European sector during cool seasons. Similar results are obtained using the 200-hPa geopotential height (in place of Z500), indicating the quasi-barotropic nature of the PMZ.
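
The identification-and-tracking procedure can be sketched in simplified form. The code below is illustrative only, not the authors' implementation: cores are grown from local eddy maxima while values stay within 20 GPM of the maximum, and cores on consecutive days are linked into one event when they share a grid point (the intensity-weighted locations and the 10°-of-longitude test are omitted):

```python
import numpy as np

def neighbors(i, j, ny, nx):
    """4-neighbors on a lat-lon grid, wrapping in longitude."""
    out = []
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        i2, j2 = i + di, (j + dj) % nx
        if 0 <= i2 < ny:
            out.append((i2, j2))
    return out

def find_cores(eddy, delta=20.0):
    """Each core is a strict local maximum of the 2-D eddy-height field
    plus the contiguous neighbors whose values stay within `delta` GPM
    of that maximum (grown by flood fill)."""
    ny, nx = eddy.shape
    cores = []
    for i in range(ny):
        for j in range(nx):
            nb = [eddy[p] for p in neighbors(i, j, ny, nx)]
            if eddy[i, j] > max(nb):          # strict local maximum
                core, seen, stack = set(), {(i, j)}, [(i, j)]
                while stack:
                    p = stack.pop()
                    core.add(p)
                    for q in neighbors(*p, ny, nx):
                        if q not in seen and eddy[q] >= eddy[i, j] - delta:
                            seen.add(q)
                            stack.append(q)
                cores.append(core)
    return cores

def track(cores_by_day):
    """Link cores on consecutive days into one event if they share a
    grid point; an event ends at the core without a successor."""
    events, open_events = [], []   # open_events: (cores_so_far, last_core)
    for day_cores in cores_by_day:
        nxt = []
        for ev, last in open_events:
            succ = next((c for c in day_cores if c & last), None)
            if succ is not None:
                nxt.append((ev + [succ], succ))
            else:
                events.append(ev)              # no successor: event ends
        matched = {id(c) for _, c in nxt}
        for c in day_cores:
            if id(c) not in matched:
                nxt.append(([c], c))           # new event starts
        open_events = nxt
    events.extend(ev for ev, _ in open_events)
    return events
```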

  10. Efficient parallelization for AMR MHD multiphysics calculations; implementation in AstroBEAR

    NASA Astrophysics Data System (ADS)

    Carroll-Nellenback, Jonathan J.; Shroyer, Brandon; Frank, Adam; Ding, Chen

    2013-03-01

    Current adaptive mesh refinement (AMR) simulations require algorithms that are highly parallelized and manage memory efficiently. As compute engines grow larger, AMR simulations will require algorithms that achieve new levels of efficient parallelization and memory management. We have attempted to employ new techniques to achieve both of these goals. Patch or grid based AMR often employs ghost cells to decouple the hyperbolic advances of each grid on a given refinement level. This decoupling allows each grid to be advanced independently. In AstroBEAR we utilize this independence by threading the grid advances on each level with preference going to the finer level grids. This allows for global load balancing instead of level by level load balancing and allows for greater parallelization across both physical space and AMR level. Threading of level advances can also improve performance by interleaving communication with computation, especially in deep simulations with many levels of refinement. While we see improvements of up to 30% on deep simulations run on a few cores, the speedup is typically more modest (5-20%) for larger scale simulations. To improve memory management we have employed a distributed tree algorithm that requires processors to only store and communicate local sections of the AMR tree structure with neighboring processors. Using this distributed approach we are able to get reasonable scaling efficiency (>80%) out to 12288 cores and up to 8 levels of AMR - independent of the use of threading.
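
The ghost-cell decoupling that makes per-grid advances independent, and the threading of those advances, can be illustrated with a minimal sketch. This is not AstroBEAR code: a 1-D upwind update stands in for the hyperbolic advance, a Python thread pool stands in for the threaded level advance, and grids are assumed to be passed already ordered finest-first:

```python
from concurrent.futures import ThreadPoolExecutor

def step(cells, left_ghost, nu=0.5):
    """First-order upwind advection step on one grid.  The ghost value
    decouples this grid from its left neighbor for the duration of the
    step, so each grid can be advanced independently."""
    out = [cells[0] - nu * (cells[0] - left_ghost)]
    out += [cells[i] - nu * (cells[i] - cells[i - 1])
            for i in range(1, len(cells))]
    return out

def advance_level_threaded(grids, ghosts, workers=4):
    """Advance all grids concurrently.  Ghost cells were exchanged
    before the step, so the advances are independent; submitting
    finer-level grids first approximates the preference described in
    the abstract (illustrative sketch only)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(step, g, gh) for g, gh in zip(grids, ghosts)]
        return [f.result() for f in futures]
```

Because each grid's update reads only its own cells plus the pre-exchanged ghost value, the threaded result is identical to a serial sweep.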

  11. Using Computing and Data Grids for Large-Scale Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2001-01-01

    We use the term "Grid" to refer to a software system that provides uniform and location independent access to geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. These emerging data and computing Grids promise to provide a highly capable and scalable environment for addressing large-scale science problems. We describe the requirements for science Grids, the resulting services and architecture of NASA's Information Power Grid (IPG) and DOE's Science Grid, and some of the scaling issues that have come up in their implementation.

  12. Grid enablement of OpenGeospatial Web Services: the G-OWS Working Group

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo

    2010-05-01

    In recent decades two main paradigms for resource sharing have emerged and reached maturity: the Web and the Grid. Both have proven suitable for building Distributed Computing Infrastructures (DCIs) that support the coordinated sharing of resources (data, information, services, etc.) on the Internet. Grid and Web DCIs have much in common as a result of their underlying Internet technology (protocols, models and specifications); however, being based on different requirements and architectural approaches, they show some differences as well. The Web's "major goal was to be a shared information space through which people and machines could communicate" [Berners-Lee 1996]. The success of the Web, and its consequent pervasiveness, made it appealing for building specialized systems such as Spatial Data Infrastructures (SDIs). In these systems the introduction of Web-based geo-information technologies enables specialized services for geospatial data sharing and processing. The Grid was born to achieve "flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources" [Foster 2001]. It specifically focuses on large-scale resource sharing, innovative applications and, in some cases, high-performance orientation. In the Earth and Space Sciences (ESS), most of the information handled is geo-referenced, since spatial and temporal meta-information is of primary importance in many application domains: Earth sciences, disaster management, environmental sciences, etc. On the other hand, several application areas need to run complex models that require the large processing and storage capabilities that Grids are able to provide. The integration of geo-information and Grid technologies is therefore a valuable approach for enabling advanced ESS applications. 
Both geo-information and Grid technologies have now reached a high level of maturity, making it possible to build such an integration on existing solutions. More specifically, the Open Geospatial Consortium (OGC) Web Services (OWS) specifications play a fundamental role in geospatial information sharing (e.g. in the INSPIRE Implementing Rules, the GEOSS architecture, GMES services, etc.). On the Grid side, the gLite middleware, developed in the European EGEE (Enabling Grids for E-sciencE) projects, is widespread in Europe and beyond, has proven highly scalable, and is one of the middleware stacks chosen for the future European Grid Infrastructure (EGI) initiative. Convergence between OWS and gLite technologies would therefore be desirable for seamless access to Grid capabilities through OWS-compliant systems. To achieve this harmonization, however, some obstacles must be overcome. First, a semantic mismatch must be addressed: gLite handles low-level (close to the machine) concepts such as "file", "data", "instrument" and "job", while geo-information services handle higher-level (closer to the human) concepts such as "coverage", "observation", "measurement" and "model". Second, an architectural mismatch must be addressed: OWS implements a Web service-oriented architecture that is stateless, synchronous and without embedded security (which is delegated to other specifications), while gLite implements the Grid paradigm in an architecture that is stateful, asynchronous (though not fully event-based) and with strong embedded security (based on the VO paradigm). In recent years many initiatives and projects have worked out possible approaches for implementing Grid-enabled OWSs. 
To mention some: (i) in 2007 the OGC signed a Memorandum of Understanding with the Open Grid Forum, "a community of users, developers, and vendors leading the global standardization effort for grid computing"; (ii) the OGC identified "WPS Profiles - Conflation; and Grid processing" as one of the tasks in the Geo Processing Workflow theme of OWS Phase 6 (OWS-6); (iii) several national, European and international projects investigated different aspects of this integration, developing demonstrators and proofs of concept. In this context, "gLite enablement of OpenGeospatial Web Services" (G-OWS) is an initiative started in 2008 by the European CYCLOPS, GENESI-DR and DORII project consortia in order to collect and coordinate experiences on the enablement of OWS on top of the gLite middleware [GOWS]. Currently G-OWS counts ten member organizations from Europe and beyond, with four European projects involved. It has broadened its scope to the development of Spatial Data and Information Infrastructures (SDI and SII) based on Grid/Cloud capacity in order to enable Earth science applications and tools. Its operational objectives are: i) to contribute to the OGC-OGF initiative; ii) to release a reference implementation as standard gLite APIs (under the gLite software license); iii) to release a reference model (including procedures and guidelines) for OWS Grid-ification, as far as gLite is concerned; iv) to foster and promote the formation of consortia for participation in projects and initiatives aimed at building Grid-enabled SDIs. To achieve these objectives, G-OWS bases its activities on two guiding principles: a) the adoption of a service-oriented architecture based on the information-modelling approach, and b) standardization as a means of achieving interoperability (i.e. adoption of standards from ISO TC211, OGC OWS and OGF). 
In the first year of activity G-OWS has designed a general architectural framework stemming from the FP6 CYCLOPS studies and enriched by the outcomes of other projects and initiatives involved (i.e. FP7 GENESI-DR, FP7 DORII, AIST GeoGrid, etc.). Some proofs of concept have been developed to demonstrate the flexibility and scalability of this architectural framework. The G-OWS WG developed implementations of a gLite-enabled Web Coverage Service (WCS) and Web Processing Service (WPS), and an implementation of Shibboleth authentication for gLite-enabled OWS in order to evaluate the possible integration of Web and Grid security models. The presentation will aim to communicate the G-OWS organization, activities, future plans and means to involve the ESSI community. References [Berners-Lee 1996] T. Berners-Lee, "WWW: Past, present, and future". IEEE Computer, 29(10), Oct. 1996, pp. 69-77. [Foster 2001] I. Foster, C. Kesselman and S. Tuecke, "The Anatomy of the Grid", The International Journal of High Performance Computing Applications, 15(3):200-222, Fall 2001. [GOWS] G-OWS WG, https://www.g-ows.org/, accessed: 15 January 2010

  13. 20 CFR 652.208 - How are core services and intensive services related to the methods of service delivery described...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false How are core services and intensive services... § 652.208 How are core services and intensive services related to the methods of service delivery described in § 652.207(b)(2)? Core services and intensive services may be delivered through any of the...

  14. 20 CFR 652.208 - How are core services and intensive services related to the methods of service delivery described...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false How are core services and intensive services... § 652.208 How are core services and intensive services related to the methods of service delivery described in § 652.207(b)(2)? Core services and intensive services may be delivered through any of the...

  15. The Geoland2 BioPar burned area product

    NASA Astrophysics Data System (ADS)

    Tansey, K.; Bradley, A.; Smets, B.; van Best, C.; Lacaze, R.

    2012-04-01

    The European Commission Geoland2 project intends to constitute a major step toward the implementation of the GMES Land Monitoring Core Service (LMCS). The Bio-geophysical Parameters (BioPar) Core Monitoring Service aims at setting up pre-operational infrastructures for providing regional, European and global bio-geophysical variables, both in near real time and in off-line mode, describing the vegetation state, the radiation budget at the surface, and the water cycle. The burned area product is part of the BioPar portfolio. It builds on the experience of the Global Burned Area (GBA2000) and L3JRC projects. In the GBA2000 project, several algorithms were developed for different geographical regions of the world and applied to a 1-year time series (the year 2000) of SPOT-VEGETATION data. In the L3JRC project, a single algorithm was improved and applied to a 7-year global dataset of SPOT-VEGETATION data. Since the conception of the Geoland2 project, work has been undertaken to improve the L3JRC algorithm, mainly based on user comments and feedback. Furthermore, the Geoland2 burned area product specification has been developed to meet the requirements of the Core Information Service, specifically LandCarbon and Natural Resource Monitoring in Africa (Narma). The Geoland2 burned area product has the following improvements over the L3JRC product: • It resolves issues with users extracting statistics and burned area estimates for time periods considered to be outside the main seasons for burning. Specifically, this deals with issues in northern-latitude winters. • The number of pre-processing steps has been shortened, reducing processing time. • An improved land-water mask has been used. This resolves a problem around the coastlines of land masses, which were frequently being detected as burned. • A season metric calculation is performed over a 1x1 degree grid. 
For each grid cell, a date is logged for the start of the fire season, the peak of the fire season and the end of the fire season. Once a fire season has been confirmed as finished, the region effectively resets itself, so the land surface can burn again when the next fire season starts. This automated season-reset feature enables multiple fire seasons to be analysed. • It provides easy-to-interpret seasonality tables every 10 days (the reporting period of the product). The product is intended to be validated using CEOS-approved protocols and data sets currently being developed through the European Space Agency Fire-CCI project. In this paper, initial results produced operationally will be presented, along with examples highlighting the performance of the seasonality metric.

  16. Health and well-being of movers in rural and urban areas--a grid-based analysis of northern Finland birth cohort 1966.

    PubMed

    Lankila, Tiina; Näyhä, Simo; Rautio, Arja; Koiranen, Markku; Rusanen, Jarmo; Taanila, Anja

    2013-01-01

    We examined the association of health and well-being with moving using a detailed geographical scale. 7845 men and women born in northern Finland in 1966 were surveyed by postal questionnaire in 1997 and linked to 1 km² geographical grids based on each subject's home address in 1997-2000. Population density was used to classify each grid as rural (1-100 inhabitants/km²) or urban (>100 inhabitants/km²) type. Moving was treated as a three-class response variable (not moved; moved to a different type of grid; moved to a similar type of grid). Moving was regressed on five explanatory factors (life satisfaction, self-reported health, lifetime morbidity, activity-limiting illness and use of health services), adjusting for factors potentially associated with health and moving (gender, marital status, having children, housing tenure, education, employment status and previous move). The results were expressed as odds ratios (OR) and their 95% confidence intervals (CI). Moves from rural to urban grids were associated with dissatisfaction with current life (adjusted OR 2.01; 95% CI 1.26-3.22) and having somatic (OR 1.66; 1.07-2.59) or psychiatric (OR 2.37; 1.21-4.63) morbidities, the corresponding ORs for moves from rural to other rural grids being 1.71 (0.98-2.98), 1.63 (0.95-2.78) and 2.09 (0.93-4.70), respectively. Among urban dwellers, only frequent use of health services (≥ 21 times/year) was associated with moving, the adjusted ORs being 1.65 (1.05-2.57) for moves from urban to rural grids and 1.30 (1.03-1.64) for urban to other urban grids. 
We conclude that dissatisfaction with life and history of diseases and injuries, especially psychiatric morbidity, may increase the propensity to move from rural to urbanised environments, while availability of health services may contribute to moves within urban areas and also to moves from urban areas to the countryside, where high-level health services enable a good quality of life for those attracted by the pastoral environment. Copyright © 2012 Elsevier Ltd. All rights reserved.
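
The odds ratios and confidence intervals quoted above come from logistic regression; the standard back-transformation from a coefficient and its standard error to an OR with a 95% CI (a generic formula with illustrative numbers, not the study's data) is:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Turn a logistic-regression coefficient (log-odds) and its
    standard error into an odds ratio with a z-based confidence
    interval: OR = e^beta, CI = e^(beta ± z*se)."""
    return (math.exp(beta),           # point estimate
            math.exp(beta - z * se),  # lower bound
            math.exp(beta + z * se))  # upper bound
```

For example, a coefficient of 0 gives OR 1.0 (no association), and an adjusted OR of about 2.01 corresponds to a coefficient near ln 2.01 ≈ 0.70.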

  17. Investigation of Advanced Counterrotation Blade Configuration Concepts for High Speed Turboprop Systems. Task 3: Advanced Fan Section Grid Generator Final Report and Computer Program User's Manual

    NASA Technical Reports Server (NTRS)

    Crook, Andrew J.; Delaney, Robert A.

    1991-01-01

    A procedure is studied for generating three-dimensional grids for advanced turbofan engine fan section geometries. The procedure constructs a discrete mesh about engine sections containing the fan stage, an arbitrary number of axisymmetric radial flow splitters, a booster stage, and a bifurcated core/bypass flow duct with guide vanes. The mesh is an h-type grid system whose points are distributed by a transfinite interpolation scheme, with user-specified axial and radial spacing. Elliptic smoothing of the grid in the meridional plane is a post-processing option. The grid generation scheme is consistent with aerodynamic analyses utilizing the average-passage equation system developed by Dr. John Adamczyk of NASA Lewis. This flow solution scheme requires a series of blade-specific grids, each having a common axisymmetric mesh but varying in the circumferential direction according to the geometry of the specific blade row.
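
A minimal 2-D version of transfinite interpolation (the standard Coons-patch formula, a generic sketch rather than the program's actual scheme) fills interior points from the four boundary curves, with the spacing inherited from the boundary point distributions:

```python
def tfi_grid(bottom, top, left, right):
    """2-D transfinite interpolation (Coons patch).  bottom/top are
    lists of (x, y) points with ni entries, left/right have nj entries;
    the corner points of adjacent curves must agree.  Interior points
    are a blend of the boundary curves minus the doubly counted corner
    contribution."""
    ni, nj = len(bottom), len(left)
    grid = [[None] * nj for _ in range(ni)]
    for i in range(ni):
        xi = i / (ni - 1)
        for j in range(nj):
            eta = j / (nj - 1)
            grid[i][j] = tuple(
                (1 - eta) * bottom[i][k] + eta * top[i][k]
                + (1 - xi) * left[j][k] + xi * right[j][k]
                - ((1 - xi) * (1 - eta) * bottom[0][k]   # corner (0,0)
                   + xi * eta * top[-1][k]               # corner (1,1)
                   + xi * (1 - eta) * bottom[-1][k]      # corner (1,0)
                   + (1 - xi) * eta * top[0][k])         # corner (0,1)
                for k in (0, 1))
    return grid
```

On a unit square with uniformly spaced boundaries this reproduces a uniform mesh; in practice the boundary curves carry the user-specified axial and radial spacing.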

  18. Navier-Stokes calculations for 3D gaseous fuel injection with data comparisons

    NASA Technical Reports Server (NTRS)

    Fuller, E. J.; Walters, R. W.

    1991-01-01

    Results from a computational study and experiments designed to further expand the knowledge of gaseous injection into supersonic cross-flows are presented. Experiments performed at Mach 6 included several cases of gaseous helium injection with low transverse angles and injection with low transverse angles coupled with a low yaw angle. Both experimental and computational data confirm that injector yaw has an adverse effect on the helium core decay rate. An array of injectors is found to give higher penetration into the freestream without loss of core injectant decay as compared to a single injector. Lateral diffusion plays a major role in lateral plume spreading, eddy viscosity, injectant plume, and injectant-freestream mixing. Grid refinement makes it possible to capture the gradients in the streamwise direction accurately and to vastly improve the data comparisons. Computational results for a refined grid are found to compare favorably with experimental data on injectant overall and core penetration provided laminar lateral diffusion was taken into account using the modified Baldwin-Lomax turbulence model.

  19. GaiaGrid : Its Implications and Implementation

    NASA Astrophysics Data System (ADS)

    Ansari, S. G.; Lammers, U.; Ter Linden, M.

    2005-12-01

    Gaia is an ESA space mission to determine the positions of one billion objects in the Galaxy at micro-arcsecond precision. The data analysis and processing requirements of the mission involve about 20 institutes across Europe, each providing algorithms for specific tasks ranging from relativistic effects on positional determination to classification, astrometric binary star detection, photometric analysis and spectroscopic analysis. In an initial phase, a study has been ongoing over the past three years to determine the complexity of Gaia's data processing. Two processing categories have materialised: core and shell. While core covers routine data processing, shell tasks are data analysis algorithms that involve the Gaia community at large. For this latter category, we are currently experimenting with Grid paradigms to allow access to the core data and to augment processing power to simulate and analyse the data in preparation for the actual mission. We present preliminary results and discuss the sociological impact of distributing the tasks amongst the community.

  20. Near real-time traffic routing

    NASA Technical Reports Server (NTRS)

    Yang, Chaowei (Inventor); Xie, Jibo (Inventor); Zhou, Bin (Inventor); Cao, Ying (Inventor)

    2012-01-01

    A near real-time physical transportation network routing system comprising a traffic simulation computing grid and a dynamic traffic routing service computing grid. The traffic simulator produces travel-time predictions for a physical transportation network using a traffic simulation model and common input data. The physical transportation network is divided into multiple sections, each with a primary zone and a buffer zone. The traffic simulation computing grid includes multiple traffic simulation computing nodes. The common input data include static network characteristics, an origin-destination data table, dynamic traffic information data and historical traffic data. The dynamic traffic routing service computing grid includes multiple dynamic traffic routing computing nodes and generates traffic route(s) using the travel-time predictions.
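
The routing step itself can be sketched as a plain shortest-path search over the simulator's predicted link travel times (generic Dijkstra; the patent's section/zone partitioning and the distribution over computing nodes are omitted, and all names here are invented for illustration):

```python
import heapq

def route(pred_times, src, dst):
    """Shortest-travel-time route.  pred_times maps each node to a dict
    of {neighbor: predicted_travel_time}.  Returns (path, total_time)."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, t in pred_times.get(u, {}).items():
            nd = d + t
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # reconstruct the path by walking predecessors back from dst
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]
```

In the patented system the edge weights would be refreshed continuously from the simulation grid's predictions rather than fixed.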

  1. Smart cities, healthy kids: the association between neighbourhood design and children's physical activity and time spent sedentary.

    PubMed

    Esliger, Dale W; Sherar, Lauren B; Muhajarine, Nazeem

    2012-07-26

    To determine whether, and to what extent, a relation exists between neighbourhood design and children's physical activity and sedentary behaviours in Saskatoon. Three neighbourhood designs were assessed: 1) core neighbourhoods developed before 1930 that follow a grid pattern, 2) fractured-grid pattern neighbourhoods that were developed between the 1930s and mid-1960s, and 3) curvilinear-pattern neighbourhoods that were developed between the mid-1960s through to 1998. Children aged 10-14 years (N=455; mean age 11.7 years), grouped by the neighbourhoods they resided in, had their physical activity and sedentary behaviour objectively measured by accelerometry for 7 days. ANCOVA and MANCOVA (multivariate analysis of covariance) models were used to assess group differences (p<0.05). Group differences were apparent on weekdays but not on weekend days. When age, sex and family income had been controlled for, children living in fractured-grid neighbourhoods had, on average, 83 and 55 fewer accelerometer counts per minute on weekdays than the children in the core and curvilinear-pattern neighbourhoods, respectively. Further analyses showed that the children in the fractured-grid neighbourhoods accumulated 15 and 9 fewer minutes of moderate-to-vigorous physical activity per day and had a greater time spent in sedentary behaviour (23 and 17 minutes) than those in core and curvilinear-pattern neighbourhoods, respectively. These data suggest that in Saskatoon there is a relation between neighbourhood design and children's physical activity and sedentary behaviours. Further work is needed to tease out which features of the built environments have the greatest impact on these important lifestyle behaviours. This information, offered in the context of ongoing development of neighbourhoods, as we see in Saskatoon, is critical to an evidence-informed approach to urban development and planning.

  2. Framework for Service Composition in G-Lite

    NASA Astrophysics Data System (ADS)

    Goranova, R.

    2011-11-01

    G-Lite is a Grid middleware, currently the main middleware installed on all clusters in Bulgaria. The middleware is used by scientists for solving problems that require a large amount of storage and computational resources. On the other hand, scientists work with complex processes in which job execution on the Grid is just one step. That is why it is strategically important for g-Lite to provide a mechanism for service composition and business process management. No such mechanism has been specified yet. In this article we propose a framework for service composition in g-Lite. We discuss business process modeling, deployment and execution in this Grid environment. The examples used to demonstrate the concept are based on IBM products.

  3. Preparing for Exascale: Towards convection-permitting, global atmospheric simulations with the Model for Prediction Across Scales (MPAS)

    NASA Astrophysics Data System (ADS)

    Heinzeller, Dominikus; Duda, Michael G.; Kunstmann, Harald

    2017-04-01

    With strong financial and political support from national and international initiatives, exascale computing is projected for the end of this decade. Energy requirements and physical limitations imply the use of accelerators and scaling out to orders of magnitude more cores than today to reach this milestone. In order to fully exploit the capabilities of these exascale computing systems, existing applications need to undergo significant development. The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components consisting of an atmospheric core, an ocean core, a land-ice core and a sea-ice core. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address the shortcomings, with respect to parallel scalability, numerical accuracy and physical consistency, of global models on regular grids and of limited-area models nested in a forcing data set. Here we present work towards the application of the atmospheric core (MPAS-A) on current and future high-performance computing systems for problems at extreme scale. In particular, we address the issue of massively parallel I/O by extending the model to support the highly scalable SIONlib library. Using global uniform meshes with a convection-permitting resolution of 2-3 km, we demonstrate the ability of MPAS-A to scale out to half a million cores while maintaining high parallel efficiency. We also demonstrate the potential benefit of a hybrid parallelisation (MPI/OpenMP) of the code on the latest generation of Intel's Many Integrated Core architecture, the Intel Xeon Phi Knights Landing.

  4. 20 CFR 661.310 - Under what limited conditions may a Local Board directly be a provider of core services...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Board directly be a provider of core services, intensive services, or training services, or act as a One... Board directly be a provider of core services, intensive services, or training services, or act as a One-Stop Operator? (a) A Local Board may not directly provide core services, or intensive services, or be...

  5. 20 CFR 661.310 - Under what limited conditions may a Local Board directly be a provider of core services...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Board directly be a provider of core services, intensive services, or training services, or act as a One... Board directly be a provider of core services, intensive services, or training services, or act as a One-Stop Operator? (a) A Local Board may not directly provide core services, or intensive services, or be...

  6. Current Grid operation and future role of the Grid

    NASA Astrophysics Data System (ADS)

    Smirnova, O.

    2012-12-01

    Grid-like technologies and approaches became an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the greater the burden on operations, and the other way around. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. The reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider, and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us.
If no standardisation and convergence efforts take place, the Grid will become limited to HEP; if, however, the current multitude of Grid-like systems converges to a generic, modular and extensible solution, the Grid will become true to its name.

  7. Evaluating the Information Power Grid using the NAS Grid Benchmarks

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Frumkin, Michael A.

    2004-01-01

    The NAS Grid Benchmarks (NGB) are a collection of synthetic distributed applications designed to rate the performance and functionality of computational grids. We compare several implementations of the NGB to determine programmability and efficiency of NASA's Information Power Grid (IPG), whose services are mostly based on the Globus Toolkit. We report on the overheads involved in porting existing NGB reference implementations to the IPG. No changes were made to the component tasks of the NGB themselves; the porting overheads indicate that the usability and efficiency of the IPG can still be improved.

  8. Towards a Global Service Registry for the World-Wide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Field, Laurence; Alandes Pradillo, Maria; Di Girolamo, Alessandro

    2014-06-01

    The World-Wide LHC Computing Grid encompasses a set of heterogeneous information systems; from central portals such as the Open Science Grid's Information Management System and the Grid Operations Centre Database, to the WLCG information system, where the information sources are the Grid services themselves. Providing a consistent view of the information, which involves synchronising all these information systems, is a challenging activity that has led the LHC virtual organisations to create their own configuration databases. This experience, whereby each virtual organisation's configuration database interfaces with multiple information systems, has resulted in the duplication of effort, especially relating to the use of manual checks for the handling of inconsistencies. The Global Service Registry aims to address this issue by providing a centralised service that aggregates information from multiple information systems. It shows both information on registered resources (i.e. what should be there) and available resources (i.e. what is there). The main purpose is to simplify the synchronisation of the virtual organisations' own configuration databases, which are used for job submission and data management, through the provision of a single interface for obtaining all the information. By centralising the information, automated consistency and validation checks can be performed to improve the overall quality of information provided. Although internally the GLUE 2.0 information model is used for the purpose of integration, the Global Service Registry is not dependent on any particular information model for ingestion or dissemination. The intention is to allow the virtual organisations' configuration databases to be decoupled from the underlying information systems in a transparent way and hence simplify any possible future migration due to the evolution of those systems.
This paper presents the Global Service Registry architecture, its advantages compared to the current situation and how it can support the evolution of information systems.
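The registered-versus-available distinction described above reduces, at its simplest, to a set comparison. A minimal sketch with invented service names (not the registry's actual data model):

```python
# Hypothetical consistency check in the spirit of the Global Service Registry:
# compare what *should* be there (registered) with what *is* there (available).
registered = {"CE-cern-01", "SE-bnl-02", "CE-fnal-03"}
available = {"CE-cern-01", "SE-bnl-02", "SE-ral-09"}

missing = registered - available       # registered but not responding
unregistered = available - registered  # running but never registered

print(sorted(missing))
print(sorted(unregistered))
```

Automating exactly this kind of check centrally is what replaces the per-VO manual consistency handling the abstract mentions.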

  9. XML-based data model and architecture for a knowledge-based grid-enabled problem-solving environment for high-throughput biological imaging.

    PubMed

    Ahmed, Wamiq M; Lenz, Dominik; Liu, Jia; Paul Robinson, J; Ghafoor, Arif

    2008-03-01

    High-throughput biological imaging uses automated imaging devices to collect a large number of microscopic images for analysis of biological systems and validation of scientific hypotheses. Efficient manipulation of these datasets for knowledge discovery requires high-performance computational resources, efficient storage, and automated tools for extracting and sharing such knowledge among different research sites. Newly emerging grid technologies provide powerful means for exploiting the full potential of these imaging techniques. Efficient utilization of grid resources requires the development of knowledge-based tools and services that combine domain knowledge with analysis algorithms. In this paper, we first investigate how grid infrastructure can facilitate high-throughput biological imaging research, and present an architecture for providing knowledge-based grid services for this field. We identify two levels of knowledge-based services: the first level provides tools for extracting spatiotemporal knowledge from image sets, and the second level provides high-level knowledge management and reasoning services. We then present the Cellular Imaging Markup Language, an XML-based language for the modeling of biological images and representation of spatiotemporal knowledge. This scheme can be used for spatiotemporal event composition, matching, and automated knowledge extraction and representation for large biological imaging datasets. We demonstrate the expressive power of this formalism by means of different examples and extensive experimental results.
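To make the idea of an XML representation of spatiotemporal imaging events concrete, here is a sketch; the element and attribute names are invented for illustration and are not the actual Cellular Imaging Markup Language schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment in the spirit of an XML spatiotemporal event model;
# tag names ("cimlEvent", "object", "location") are invented, not CIML's.
doc = """
<cimlEvent type="mitosis" start="t12" end="t19">
  <object id="cell-7" class="lymphocyte">
    <location t="t12" x="104" y="88"/>
    <location t="t19" x="121" y="90"/>
  </object>
</cimlEvent>
"""
event = ET.fromstring(doc)
locs = event.findall("./object/location")
print(event.get("type"), len(locs))
```

An event-matching service would query such documents (e.g. all "mitosis" events within a time window) rather than re-analysing raw pixels.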

  10. NaradaBrokering as Middleware Fabric for Grid-based Remote Visualization Services

    NASA Astrophysics Data System (ADS)

    Pallickara, S.; Erlebacher, G.; Yuen, D.; Fox, G.; Pierce, M.

    2003-12-01

    Remote Visualization Services (RVS) have tended to rely on approaches based on the client-server paradigm. The simplicity of these approaches is offset by problems such as single points of failure, scaling and availability. Furthermore, as the complexity, scale and scope of the services hosted on this paradigm increase, the approach becomes increasingly unsuitable. We propose a scheme built on top of a distributed brokering infrastructure, NaradaBrokering, which comprises a distributed network of broker nodes. These broker nodes are organized in a cluster-based architecture that can scale to very large sizes. The broker network is resilient to broker failures and efficiently routes interactions to entities that expressed an interest in them. In our approach to RVS, services advertise their capabilities to the broker network, which manages these service advertisements. Among the services considered within our system are those that perform graphic transformations, mediate access to specialized datasets and finally those that manage the execution of specified tasks. There could be multiple instances of each of these services, and the system ensures that load for a given service is distributed efficiently over these service instances. Among the features provided in our approach are efficient discovery of services and asynchronous interactions between services and service requestors (which could themselves be other services). Entities need not be online during the execution of the service request. The system also ensures that entities can be notified about task executions, partial results and failures that might have taken place during service execution. The system also facilitates specification of task overrides, distribution of execution results to alternate devices (which were not used to originally request service execution) and to multiple users.
These RVS services could of course be either OGSA (Open Grid Services Architecture) based Grid services or traditional Web services. The brokering infrastructure will manage the service advertisements and the invocation of these services. This scheme ensures that the fundamental Grid computing concept is met: provide the computing capabilities of those that are willing to offer them to those that seek them. [1] The NaradaBrokering Project: http://www.naradabrokering.org
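The routing model described, where interactions reach only entities that expressed interest, is topic-based publish/subscribe. A minimal single-process sketch of that idea (not NaradaBrokering's actual API):

```python
from collections import defaultdict

# Minimal topic-based publish/subscribe broker, illustrating how service
# requests are routed to interested entities; a sketch, not NaradaBrokering.
class Broker:
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register interest in a topic."""
        self.subs[topic].append(handler)

    def publish(self, topic, message):
        """Deliver a message to every subscriber of the topic."""
        for handler in self.subs[topic]:
            handler(message)

broker = Broker()
received = []
broker.subscribe("rvs/transform", received.append)
broker.publish("rvs/transform", {"task": "isosurface", "dataset": "mantle-01"})
```

In the distributed version, broker nodes forward messages to peer brokers that hold matching subscriptions, which is what removes the single point of failure.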

  11. A gateway for phylogenetic analysis powered by grid computing featuring GARLI 2.0.

    PubMed

    Bazinet, Adam L; Zwickl, Derrick J; Cummings, Michael P

    2014-09-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
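The workload described, thousands of independent tree searches whose best-scoring result is retained, is embarrassingly parallel. A sketch of that pattern with a stand-in scoring function (the real service dispatches GARLI runs to distributed grid resources, not threads):

```python
from concurrent.futures import ThreadPoolExecutor
import random

# Stand-in for a single GARLI-style tree search: returns (score, seed).
# The hypothetical "log-likelihood" here is random; real searches optimize it.
def tree_search(seed):
    rng = random.Random(seed)
    return (-10000 - rng.random() * 50, seed)

# Launch many independent searches in parallel and keep the best result.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(tree_search, range(100)))

best_score, best_seed = max(results)
print(round(best_score, 2), best_seed)
```

Because searches share no state, throughput scales with the number of workers, which is exactly what makes grid execution attractive for this problem.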

  12. Evaluation of out-of-core computer programs for the solution of symmetric banded linear equations. [simultaneous equations

    NASA Technical Reports Server (NTRS)

    Dunham, R. S.

    1976-01-01

    FORTRAN-coded out-of-core equation solvers that use direct methods to solve symmetric banded systems of simultaneous algebraic equations are evaluated. Banded, frontal and column (skyline) solvers were studied, as well as solvers that can partition the working area and thus fit into any available core. Comparison timings are presented for several typical two-dimensional and three-dimensional continuum-type grids of elements with and without midside nodes. Extensive conclusions are also given.
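The simplest member of the banded-solver family compared in this study is the symmetric tridiagonal case (half-bandwidth 1), where direct elimination touches only the band. A small in-core sketch of that algorithm:

```python
# Direct solve of a symmetric tridiagonal system T x = b, where T has
# diagonal d and sub/super-diagonal e. An in-core sketch of the banded
# elimination idea; out-of-core variants stream panels of the band from disk.
def solve_sym_tridiag(d, e, b):
    n = len(d)
    d, e, b = list(d), list(e), list(b)
    for i in range(1, n):                  # forward elimination along the band
        m = e[i - 1] / d[i - 1]
        d[i] -= m * e[i - 1]
        b[i] -= m * b[i - 1]
    x = [0.0] * n
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = (b[i] - e[i] * x[i + 1]) / d[i]
    return x

# 1-D Laplacian-like system tridiag(-1, 2, -1); exact solution is all ones.
x = solve_sym_tridiag([2.0, 2.0, 2.0], [-1.0, -1.0], [1.0, 0.0, 1.0])
print([round(v, 6) for v in x])
```

For half-bandwidth w the same elimination costs O(n·w²) instead of O(n³), which is why banded and skyline storage mattered so much on core-limited machines.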

  13. 20 CFR 662.250 - Where and to what extent must required One-Stop partners make core services available?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...-Stop partners make core services available? 662.250 Section 662.250 Employees' Benefits EMPLOYMENT AND... extent must required One-Stop partners make core services available? (a) At a minimum, the core services... worker program partners are required to make all of the core services listed in § 662.240 available at...

  14. 20 CFR 662.250 - Where and to what extent must required One-Stop partners make core services available?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...-Stop partners make core services available? 662.250 Section 662.250 Employees' Benefits EMPLOYMENT AND... extent must required One-Stop partners make core services available? (a) At a minimum, the core services... worker program partners are required to make all of the core services listed in § 662.240 available at...

  15. Euler solutions for an unbladed jet engine configuration

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.

    1991-01-01

    An Euler solution for an axisymmetric jet engine configuration without blade effects is presented. The Euler equations are solved on a multiblock grid which covers a domain including the inlet, bypass duct, core passage, nozzle, and the far field surrounding the engine. The simulation is verified by considering five theoretical properties of the solution. The solution demonstrates both multiblock grid generation techniques and a foundation for a full jet engine throughflow calculation.

  16. Grid-Connected Distributed Generation: Compensation Mechanism Basics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aznar, Alexandra Y; Zinaman, Owen R

    2017-10-02

    This short report defines compensation mechanisms for grid-connected, behind-the-meter distributed generation (DG) systems as instruments that comprise three core elements: (1) metering and billing arrangements, (2) sell rate design, and (3) retail rate design. This report describes metering and billing arrangements, with some limited discussion of sell rate design. We detail the three possible arrangements for metering and billing of DG: net energy metering (NEM); buy all, sell all; and net billing.
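The billing arrangements named above differ in how exports are valued. A hedged sketch of two of the three arrangements, with invented rates (retail 0.20 $/kWh, sell rate 0.08 $/kWh):

```python
# Illustrative monthly-bill calculations for two DG compensation mechanisms.
# Rates are invented for the example, not taken from any tariff.
RETAIL, SELL = 0.20, 0.08  # $/kWh

def net_energy_metering(imported_kwh, exported_kwh):
    # NEM: exports offset imports kWh-for-kWh, so net usage is billed at retail.
    return (imported_kwh - exported_kwh) * RETAIL

def buy_all_sell_all(consumed_kwh, generated_kwh):
    # Buy all, sell all: all consumption billed at retail; all generation
    # compensated at the (typically lower) sell rate.
    return consumed_kwh * RETAIL - generated_kwh * SELL

print(round(net_energy_metering(500, 300), 2))
print(round(buy_all_sell_all(500, 300), 2))
```

Net billing sits between the two: instantaneous self-consumption offsets retail purchases, and only the residual export is paid the sell rate.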

  17. Optimizing the Betts-Miller-Janjic cumulus parameterization with Intel Many Integrated Core (MIC) architecture

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.-L.

    2015-10-01

    The schemes of cumulus parameterization are responsible for the sub-grid-scale effects of convective and/or shallow clouds, and are intended to represent vertical fluxes due to unresolved updrafts and downdrafts and compensating motion outside the clouds. Some schemes additionally provide cloud and precipitation field tendencies in the convective column, and momentum tendencies due to convective transport of momentum. The schemes all provide the convective component of surface rainfall. Betts-Miller-Janjic (BMJ) is one scheme that fulfils such purposes in the Weather Research and Forecasting (WRF) model. The National Centers for Environmental Prediction (NCEP) has tried to optimize the BMJ scheme for operational application. As there are no interactions among horizontal grid points, this scheme is very suitable for parallel computation. The Intel Xeon Phi Many Integrated Core (MIC) architecture, with its efficient parallelization and vectorization support, allows us to optimize the BMJ scheme. Compared to the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, the MIC-based optimization of this scheme running on a Xeon Phi 7120P coprocessor improves performance by 2.4x and 17.0x, respectively.

  18. A methodology toward manufacturing grid-based virtual enterprise operation platform

    NASA Astrophysics Data System (ADS)

    Tan, Wenan; Xu, Yicheng; Xu, Wei; Xu, Lida; Zhao, Xianhua; Wang, Li; Fu, Liuliu

    2010-08-01

    Virtual enterprises (VEs) have become one of the main types of organisation in the manufacturing sector, through which consortium companies organise their manufacturing activities. To be competitive, a VE relies on the complementary core competences among members through resource sharing and agile manufacturing capacity. Manufacturing grid (M-Grid) is a platform in which production resources can be shared. In this article, an M-Grid-based VE operation platform (MGVEOP) is presented, as it enables the sharing of production resources among geographically distributed enterprises. The performance management system of the MGVEOP is based on the balanced scorecard and has the capacity for self-learning. The study shows that an MGVEOP can make a semi-automated process possible for a VE, and the proposed MGVEOP is efficient and agile.

  19. Earth Observation oriented teaching materials development based on OGC Web services and Bashyt generated reports

    NASA Astrophysics Data System (ADS)

    Stefanut, T.; Gorgan, D.; Giuliani, G.; Cau, P.

    2012-04-01

    Creating e-Learning materials in the Earth Observation domain is a difficult task, especially for non-technical specialists who have to deal with distributed repositories, large amounts of information and intensive processing requirements. Furthermore, due to the lack of specialized applications for developing teaching resources, technical knowledge is also required for defining data presentation structures or in the development and customization of user interaction techniques for better teaching results. As a response to these issues, during the GiSHEO FP7 project [1] and later in the EnviroGRIDS FP7 project [2], we have developed the eGLE e-Learning Platform [3], a tool-based application that provides dedicated functionalities to Earth Observation specialists for developing teaching materials. The proposed architecture is built around a client-server design that provides the core functionalities (e.g. user management, tools integration, teaching materials settings, etc.) and has been extended with a distributed component implemented through the tools that are integrated into the platform, as described further. Our approach to dealing with multiple transfer protocol types, heterogeneous data formats or various user interaction techniques involves the development and integration of very specialized elements (tools) that can be customized by the trainers in a visual manner through simple user interfaces. In our concept each tool is dedicated to a specific data type, implementing optimized mechanisms for searching, retrieving, visualizing and interacting with it. At the same time, any number of tools can be integrated into each learning resource through drag-and-drop interaction, allowing the teacher to retrieve pieces of data of various types (e.g. images, charts, tables, text, videos, etc.) from different sources (e.g. OGC web services, charts created through the Bashyt application, etc.) through different protocols (e.g. WMS, BASHYT API, FTP, HTTP, etc.) and to display them all together in a unitary manner using the same visual structure [4]. Addressing the high-performance computing requirements that are met while processing environmental data, our platform can be easily extended through tools that connect to GRID infrastructures, WCS web services, the Bashyt API (for creating specialized hydrological reports) or any other specialized services (e.g. graphics cluster visualization) that can be reached over the Internet. At run time, on the trainee's computer each tool is launched in an asynchronous running mode and connects to the data source that has been established by the teacher, retrieving and displaying the information to the user. The data transfer is accomplished directly between the trainee's computer and the corresponding services (e.g. OGC, Bashyt API, etc.) without passing through the core server platform. In this manner, the eGLE application can provide better and more responsive connections to a large number of users.

  20. Demonstration of Essential Reliability Services by a 300-MW Solar Photovoltaic Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loutan, Clyde; Klauer, Peter; Chowdhury, Sirajul

    The California Independent System Operator (CAISO), First Solar, and the National Renewable Energy Laboratory (NREL) conducted a demonstration project on a large utility-scale photovoltaic (PV) power plant in California to test its ability to provide essential ancillary services to the electric grid. With increasing shares of solar- and wind-generated energy on the electric grid, traditional generation resources equipped with automatic generation control (AGC) and automatic voltage regulation controls -- specifically, fossil thermal -- are being displaced. The deployment of utility-scale, grid-friendly PV power plants that incorporate advanced capabilities to support grid stability and reliability is essential for the large-scale integration of PV generation into the electric power grid, among other technical requirements. A typical PV power plant consists of multiple power electronic inverters and can contribute to grid stability and reliability through sophisticated 'grid-friendly' controls. In this way, PV power plants can be used to mitigate the impact of variability on the grid, a role typically reserved for conventional generators. In August 2016, testing was completed on First Solar's 300-MW PV power plant, and a large amount of test data was produced and analyzed that demonstrates the ability of PV power plants to use grid-friendly controls to provide essential reliability services. These data showed how the development of advanced power controls can enable PV to become a provider of a wide range of grid services, ranging from spinning reserves, load following, voltage support, ramping, frequency response, variability smoothing, and frequency regulation to power quality. Specifically, the tests conducted included various forms of active power control such as AGC and frequency regulation; droop response; and reactive power, voltage, and power factor controls.
This project demonstrated that advanced power electronics and solar generation can be controlled to contribute to system-wide reliability. It was shown that the First Solar plant can provide essential reliability services related to different forms of active and reactive power controls, including plant participation in AGC, primary frequency control, ramp rate control, and voltage regulation. For AGC participation in particular, by comparing the PV plant testing results to the typical performance of individual conventional technologies, we showed that regulation accuracy by the PV plant is 24-30 points better than fast gas turbine technologies. The plant's ability to provide volt-ampere reactive control during periods of extremely low power generation was demonstrated as well. The project team developed a pioneering demonstration concept and test plan to show how various types of active and reactive power controls can elevate PV generation from a simple variable energy resource to a resource that provides a wide range of ancillary services. With this project's approach to a holistic demonstration on an actual, large, utility-scale, operational PV power plant and dissemination of the obtained results, the team sought to close some gaps in perspectives that exist among various stakeholders in California and nationwide by providing real test data.
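Among the controls tested, droop response has a simple closed form: a plant with droop R changes active power in proportion to the per-unit frequency deviation, P = P0 − (Δf / f0) / R · P_rated. A sketch with illustrative settings (these are not the First Solar plant's actual parameters):

```python
# Frequency-droop active power response, with an over/under-frequency deadband.
# Droop of 5% and deadband of 0.017 Hz are illustrative values only.
F0 = 60.0        # nominal frequency, Hz
P_RATED = 300.0  # plant rating, MW

def droop_response(p0_mw, f_hz, droop=0.05, deadband_hz=0.017):
    df = f_hz - F0
    if abs(df) <= deadband_hz:
        return p0_mw                       # no action inside the deadband
    df -= deadband_hz if df > 0 else -deadband_hz
    p = p0_mw - (df / F0) / droop * P_RATED
    return min(max(p, 0.0), P_RATED)       # clamp to plant limits

print(round(droop_response(240.0, 60.0), 1))   # inside deadband: hold output
print(round(droop_response(240.0, 60.2), 1))   # over-frequency: curtail
```

Providing this response requires operating with headroom (curtailed below available power), which is part of the cost of using PV for reserves.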

  1. Semantic web data warehousing for caGrid

    PubMed Central

    McCusker, James P; Phillips, Joshua A; Beltrán, Alejandra González; Finkelstein, Anthony; Krauthammer, Michael

    2009-01-01

    The National Cancer Institute (NCI) is developing caGrid as a means for sharing cancer-related data and services. As more data sets become available on caGrid, we need effective ways of accessing and integrating this information. Although the data models exposed on caGrid are semantically well annotated, it is currently up to the caGrid client to infer relationships between the different models and their classes. In this paper, we present a Semantic Web-based data warehouse (Corvus) for creating relationships among caGrid models. This is accomplished through the transformation of semantically-annotated caBIG® Unified Modeling Language (UML) information models into Web Ontology Language (OWL) ontologies that preserve those semantics. We demonstrate the validity of the approach by Semantic Extraction, Transformation and Loading (SETL) of data from two caGrid data sources, caTissue and caArray, as well as alignment and query of those sources in Corvus. We argue that semantic integration is necessary for integration of data from distributed web services and that Corvus is a useful way of accomplishing this. Our approach is generalizable and of broad utility to researchers facing similar integration challenges. PMID:19796399

  2. Emergency heat removal system for a nuclear reactor

    DOEpatents

    Dunckel, Thomas L.

    1976-01-01

    A heat removal system for nuclear reactors serving as a supplement to an Emergency Core Cooling System (ECCS) during a Loss of Coolant Accident (LOCA) comprises a plurality of heat pipes having one end in heat transfer relationship with either the reactor pressure vessel, the core support grid structure or other in-core components and the opposite end located in heat transfer relationship with a heat exchanger having heat transfer fluid therein. The heat exchanger is located external to the pressure vessel whereby excessive core heat is transferred from the above reactor components and dissipated within the heat exchanger fluid.

  3. Experience with Multi-Tier Grid MySQL Database Service Resiliency at BNL

    NASA Astrophysics Data System (ADS)

    Wlodek, Tomasz; Ernst, Michael; Hover, John; Katramatos, Dimitrios; Packard, Jay; Smirnov, Yuri; Yu, Dantong

    2011-12-01

    We describe the use of F5's BIG-IP smart switch technology (3600 Series and Local Traffic Manager v9.0) to provide load balancing and automatic fail-over to multiple Grid services (GUMS, VOMS) and their associated back-end MySQL databases. This resiliency is introduced in front of the external application servers and also for the back-end database systems, which is what makes it "multi-tier". The combination of solutions chosen to ensure high availability of the services, in particular the database replication and fail-over mechanism, is discussed in detail. The paper explains the design and configuration of the overall system, including virtual servers, machine pools, and health monitors (which govern routing), as well as the master-slave database scheme and fail-over policies and procedures. Pre-deployment planning and stress testing will be outlined. Integration of the systems with our Nagios-based facility monitoring and alerting is also described. Application characteristics of GUMS and VOMS that enable effective clustering will also be explained. We then summarize our practical experiences and real-world scenarios resulting from operating a major US Grid center, and assess the applicability of our approach to other Grid services in the future.
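The health-monitor-governed routing described above can be reduced to a simple rule: send traffic to the first pool member whose health check currently passes. A sketch of that selection logic (host names are hypothetical; this is not the BIG-IP configuration language):

```python
# Health-monitor-driven backend selection for a load-balanced service pool.
# Pool order encodes preference (master first, then slave); names invented.
def pick_backend(pool, health):
    """pool: ordered backend names; health: name -> bool from the monitor."""
    for backend in pool:
        if health.get(backend, False):
            return backend
    raise RuntimeError("no healthy backend in pool")

pool = ["gums-master.example.bnl.gov", "gums-slave.example.bnl.gov"]
print(pick_backend(pool, {"gums-master.example.bnl.gov": True,
                          "gums-slave.example.bnl.gov": True}))
print(pick_backend(pool, {"gums-master.example.bnl.gov": False,
                          "gums-slave.example.bnl.gov": True}))
```

The "multi-tier" aspect is that the same pattern is applied twice: once in front of the application servers and again between the applications and the replicated MySQL back ends.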

  4. ETICS: the international software engineering service for the grid

    NASA Astrophysics Data System (ADS)

    Meglio, A. D.; Bégin, M.-E.; Couvares, P.; Ronchieri, E.; Takacs, E.

    2008-07-01

    The ETICS system is a distributed software configuration, build and test system designed to fulfil the needs of improving the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system allows taking into account complex dependencies among applications and middleware components and provides a rich environment to perform static and dynamic analysis of the software and execute deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects are using the software engineering services to build, validate and distribute their software. Finally a number of significant use and test cases will be described to show how ETICS can be used in particular to perform interoperability tests of grid middleware using the grid itself.

  5. Valuation of Electric Power System Services and Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kintner-Meyer, Michael C. W.; Homer, Juliet S.; Balducci, Patrick J.

    Accurate valuation of existing and new technologies and grid services has been recognized to be important to stimulate investment in grid modernization. Clear, transparent, and accepted methods for estimating the total value (i.e., total benefits minus cost) of grid technologies and services are necessary for decision makers to make informed decisions. This applies to home owners interested in distributed energy technologies, as well as to service providers offering new demand response services, and utility executives evaluating best investment strategies to meet their service obligation. However, current valuation methods lack consistency, methodological rigor, and often the capabilities to identify and quantify multiple benefits of grid assets or new and innovative services. Distributed grid assets often have multiple benefits that are difficult to quantify because of the locational context in which they operate. The value is temporally, operationally, and spatially specific. It varies widely by distribution system, transmission network topology, and the composition of the generation mix. The Electric Power Research Institute (EPRI) recently established a benefit-cost framework that proposes a process for estimating multiple benefits of distributed energy resources (DERs) and the associated cost. This document proposes an extension of this endeavor that offers a generalizable framework for valuation that quantifies the broad set of values for a wide range of technologies (including energy efficiency options, distributed resources, transmission, and generation) as well as policy options that affect all aspects of the entire generation and delivery system of the electricity infrastructure. The extension includes a comprehensive valuation framework of monetizable and non-monetizable benefits of new technologies and services beyond the traditional reliability objectives.
The benefits are characterized into the following categories: sustainability, affordability, security, flexibility, and resilience. This document defines the elements of a generic valuation framework and process as well as system properties and metrics by which value streams can be derived. The valuation process can be applied to determine the value on the margin of incremental system changes. This process is typically performed when estimating the value of a particular project (e.g., the value of a merchant generator, or a distributed photovoltaic (PV) rooftop installation). Alternatively, the framework can be used when a widespread change in grid operation, generation mix, or transmission topology is to be valued. In this case a comprehensive system analysis is required.
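For the monetizable portion, "total value = total benefits minus cost" typically means discounting each benefit stream and netting against capital cost. A sketch with invented streams and rates (not from the report):

```python
# Net present value of multiple monetizable benefit streams minus capital
# cost. Stream names, amounts, and the 7% discount rate are illustrative.
def npv(cash_flows, rate):
    """Discounted sum of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

benefit_streams = {
    "energy_arbitrage": [0, 120_000, 120_000, 120_000],
    "deferred_upgrade": [0, 0, 0, 250_000],
}
capital_cost = 400_000
total_benefit = sum(npv(cfs, 0.07) for cfs in benefit_streams.values())
net_value = total_benefit - capital_cost
print(round(net_value))
```

The harder part the framework addresses is upstream of this arithmetic: deciding which location- and time-specific streams an asset actually earns, and how to treat the non-monetizable categories.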

  6. 20 CFR 641.210 - What services, in addition to the applicable core services, must SCSEP grantees and sub...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... core services, must SCSEP grantees and sub-recipients provide through the One-Stop delivery system? 641... Investment Act § 641.210 What services, in addition to the applicable core services, must SCSEP grantees and sub-recipients provide through the One-Stop delivery system? In addition to providing core services...

  7. 20 CFR 641.210 - What services, in addition to the applicable core services, must SCSEP grantees provide through...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... core services, must SCSEP grantees provide through the One-Stop Delivery System? 641.210 Section 641... § 641.210 What services, in addition to the applicable core services, must SCSEP grantees provide through the One-Stop Delivery System? In addition to providing core services, SCSEP grantees must make...

  8. Kwf-Grid workflow management system for Earth science applications

    NASA Astrophysics Data System (ADS)

    Tran, V.; Hluchy, L.

    2009-04-01

    In this paper, we present a workflow management tool for Earth science applications in EGEE. The tool was originally developed within the K-wf Grid project for GT4 middleware and has many advanced features, such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting the tool to gLite middleware for Earth science applications. The K-wf Grid workflow management system was developed within the "Knowledge-based Workflow System for Grid Applications" project under the 6th Framework Programme. The workflow management system is intended to: - semi-automatically compose a workflow of Grid services, - execute the composed workflow application in a Grid computing environment, - monitor the performance of the Grid infrastructure and the Grid applications, - analyze the resulting monitoring information, - capture the knowledge contained in that information by means of intelligent agents, - and finally reuse the joint knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. The K-wf Grid workflow engines can support different types of jobs (e.g., GRAM jobs, web services) in a workflow. A new class of gLite job has been added to the system, allowing it to manage and execute gLite jobs in the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite allows EGEE users to use the system and benefit from its advanced features. The system is primarily tested and evaluated with applications from the ES cluster.
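    At its core, executing a composed workflow of heterogeneous jobs means running a directed acyclic graph in dependency order. A minimal sketch of that pattern; the job names and the stand-in dispatch function are assumptions, not K-wf Grid internals:

```python
# Sketch: run a workflow DAG of mixed job types in topological order.
from graphlib import TopologicalSorter

workflow = {                          # node -> set of prerequisite nodes
    "stage_input": set(),
    "gram_compute": {"stage_input"},
    "glite_compute": {"stage_input"},
    "merge_results": {"gram_compute", "glite_compute"},
}

def run_job(name):
    # Stand-in for submitting a GRAM, web-service, or gLite job.
    return f"done:{name}"

results = [run_job(job) for job in TopologicalSorter(workflow).static_order()]
```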

  9. Regulatory Incentives and Disincentives for Utility Investments in Grid Modernization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kihm, Steve; Beecher, Janice; Lehr, Ronald L.

    Electric power is America's most capital-intensive industry, with more than $100 billion invested each year in energy infrastructure. Investment needs are likely to grow as electric utilities make power systems more reliable and resilient, deploy advanced digital technologies, and facilitate new services to meet some consumers' expectations for greater choice and control. But do current regulatory approaches provide the appropriate incentives for grid modernization investments? This report presents three perspectives: -Financial analyst Steve Kihm begins by explaining that any major investor-owned electric utility that wants to raise capital today can do so at a reasonable cost. The question is whether utility managers want to raise capital for grid modernization. Specifically, they look for investments that create the most value for their existing shareholders. In cases where grid modernization investments are not the best choice in terms of shareholder value, Kihm describes shareholder incentive mechanisms that regulators could consider to encourage such investments when they are in the public interest. -From an institutional perspective, Dr. Janice Beecher finds that the traditional rate-base/rate-of-return regulatory model provides powerful incentives for utilities to pursue investments, cost control, efficiency, and even innovation, and it is well suited to the policy objectives of grid modernization. Prudence of grid modernization investments (fair returns) depends on careful evaluation of the specific asset, and any special incentives (bonus returns) should be used only if they promote economic efficiency consistent with the core goals of economic regulation. According to Beecher, realizing the promises of grid modernization depends on effective implementation of the traditional regulatory model and ratemaking tools to serve the public interest.
-Conversely, former commissioner and clean energy consultant Ron Lehr says that rapid electric industry changes require a better alignment of utility investment incentives with the changes challenging the electricity sector, emerging grid modernization options and benefits, and public policies. For example, investor-owned utilities typically have an incentive to make capital investments, but rarely to employ expense-based solutions, since utilities do not earn profits on expenses. Further, Lehr cites a variety of factors that stand in the way of creating well-targeted and well-aligned utility incentives, including litigated regulatory processes. These may be a poor choice for finding the right balance among competing interests, establishing rules of prospective application, justifying demonstrations of new technologies and approaches to meeting emerging consumer demands, and keeping pace with rapid change.

  10. 75 FR 33611 - Implementing the National Broadband Plan by Studying the Communications Requirements of Electric...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-14

    ... Requirements of Electric Utilities To Inform Federal Smart Grid Policy AGENCY: Department of Energy. ACTION..., but not limited to, the requirements of the Smart Grid (75 FR 26206). DOE also sought to collect... the types of networks and communications services that may be used for grid modernization...

  11. Instant provisioning of wavelength service using quasi-circuit optical burst switching

    NASA Astrophysics Data System (ADS)

    Xie, Hongyi; Li, Yanhe; Zheng, Xiaoping; Zhang, Hanyi

    2006-09-01

    Due to the recent outstanding advancement of optical networking technology, pervasive Grid computing will be a feasible option in the near future. As Grid infrastructure, optical networks must be able to handle different Grid traffic patterns with various traffic characteristics as well as different QoS requirements. With current optical switching technology, optical circuit switching (OCS) is suitable for data-intensive Grid applications, while optical burst switching (OBS) is suitable for submitting small Grid jobs. However, some emerging Grid applications, such as multimedia editing, generate high-bandwidth, short-lived traffic. This kind of traffic cannot be well supported by either OCS or conventional OBS, because of the considerable path setup delay and bandwidth waste in OCS and the inherent loss in OBS. Quasi-Circuit OBS (QCOBS) is proposed in this paper to address this challenge, providing a one-way reserved, nearly lossless, instantly provisioned wavelength service in OBS networks. Simulation results show that QCOBS achieves lossless transmission at low and moderate loads, and very low loss probability at high loads with proper guard time configuration.
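    The role of the guard time mentioned in the last sentence can be illustrated with a toy one-way reservation check: a burst start that falls inside an existing reservation or its trailing guard interval is rejected. The interface and numbers below are illustrative assumptions, not the paper's actual QCOBS signalling:

```python
# Toy sketch of a guard-time reservation check (illustrative, not from the paper).

def can_start(new_start, reservations, guard=2.0):
    """Reject a start time that falls inside an existing reservation
    or within its trailing guard interval."""
    return all(
        not (start <= new_start < end + guard) for start, end in reservations
    )

res = [(0.0, 10.0)]     # one wavelength reserved on [0, 10)
early = can_start(11.0, res)   # inside the 2-unit guard interval -> rejected
late = can_start(12.5, res)    # past the guard interval -> accepted
```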

  12. Using the Bem and Klein Grid Scores to Predict Health Services Usage by Men

    PubMed Central

    Reynolds, Grace L.; Fisher, Dennis G.; Dyo, Melissa; Huckabay, Loucine M.

    2016-01-01

    We examined the association between scores on the Bem Sex Roles Inventory (BSRI), Klein Sexual Orientation Grid (KSOG) and utilization of hospital inpatient services, emergency departments, and outpatient clinic visits in the past 12 months among 53 men (mean age 39 years). The femininity subscale score on the BSRI, ever having had gonorrhea and age were the three variables identified in a multivariate linear regression significantly predicting use of total health services. This supports the hypothesis that sex roles can assist our understanding of men’s use of health services. PMID:27337618
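    The analysis form reported above, a multivariate linear regression of total health service use on the femininity score, gonorrhea history, and age, can be sketched with ordinary least squares. Only the model structure mirrors the study; the data below are randomly generated for illustration:

```python
# Sketch: OLS regression with three predictors, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 53                                         # study sample size
femininity = rng.normal(5.0, 1.0, n)           # BSRI femininity subscale (synthetic)
gonorrhea = rng.integers(0, 2, n).astype(float)  # ever had gonorrhea (0/1, synthetic)
age = rng.normal(39.0, 10.0, n)                # mean age 39 as in the abstract
visits = 1.5 * femininity + 2.0 * gonorrhea + 0.1 * age + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), femininity, gonorrhea, age])
coef, *_ = np.linalg.lstsq(X, visits, rcond=None)  # [intercept, b_fem, b_gon, b_age]
```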

  13. An Evaluation of Alternative Designs for a Grid Information Service

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Waheed, Abdul; Meyers, David; Yan, Jerry; Kwak, Dochan (Technical Monitor)

    2001-01-01

    The Globus information service wasn't working well. There were many updates of data from Globus daemons which saturated the single server and users couldn't retrieve information. We created a second server for NASA and Alliance. Things were great on that server, but a bit slow on the other server. We needed to know exactly how the information service was being used. What were the best servers and configurations? This viewgraph presentation gives an overview of the evaluation of alternative designs for a Grid Information Service. Details are given on the workload characterization, methodology used, and the performance evaluation.

  14. 20 CFR 663.160 - Are there particular core services an individual must receive before receiving intensive services...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Are there particular core services an... Worker Services Through the One-Stop Delivery System § 663.160 Are there particular core services an... minimum, an individual must receive at least one core service, such as an initial assessment or job search...

  15. 20 CFR 663.160 - Are there particular core services an individual must receive before receiving intensive services...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Are there particular core services an... Worker Services Through the One-Stop Delivery System § 663.160 Are there particular core services an... minimum, an individual must receive at least one core service, such as an initial assessment or job search...

  16. Nuclear reactor

    DOEpatents

    Yant, Howard W.; Stinebiser, Karl W.; Anzur, Gregory C.

    1977-01-01

    A nuclear reactor, particularly a liquid-metal breeder reactor, whose upper internals include outlet modules for channeling the liquid-metal coolant from selected areas of the outlet of the core vertically to the outlet plenum. The modules are composed of a highly-refractory, high corrosion-resistant alloy, for example, INCONEL-718. Each module is disposed to confine and channel generally vertically the coolant emitted from a subplurality of core-component assemblies. Each module has a grid with openings, each opening disposed to receive the coolant from an assembly of the subplurality. The grid in addition serves as a holdown for the assemblies of the corresponding subplurality preventing their excessive ejection upwardly from the core. In the region directly over the core the outlet modules are of such peripheral form that they nest forming a continuum over the core-component assemblies whose outlet coolant they confine. Each subassembly includes a chimney which confines the coolant emitted by its corresponding subassemblies to generally vertical flow between the outlet of the core and the outlet plenum. Each subplurality of assemblies whose emitted coolant is confined by an outlet module includes assemblies which emit lower-temperature coolant, for example, a control-rod assembly, or fertile assemblies, and assemblies which emit coolant of substantially higher temperature, for example, fuel-rod assemblies. The coolants of different temperatures are mixed in the chimneys reducing the effect of stripping (hot-cold temperature fluctuations) on the remainder of the upper internals which are composed typically of AISI-304 or AISI-316 stainless steel.

  17. In Search of an Identity: Air Force Core Competencies

    DTIC Science & Technology

    1997-06-01

    for connecting core competencies to both inside and outside the service. Core competencies have become a decision-making framework for the Air Force.

  18. 42 CFR 418.70 - Condition of participation: Furnishing of non-core services.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 3 2011-10-01 2011-10-01 false Condition of participation: Furnishing of non-core services. 418.70 Section 418.70 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF... Care Non-Core Services § 418.70 Condition of participation: Furnishing of non-core services. A hospice...

  19. 20 CFR 669.340 - What core services are available to eligible MSFW's?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false What core services are available to eligible... Farmworker Jobs Program Customers and Available Program Services § 669.340 What core services are available to eligible MSFW's? The core services identified in WIA section 134(d)(2) are available to eligible...

  20. 42 CFR 418.70 - Condition of participation: Furnishing of non-core services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Condition of participation: Furnishing of non-core services. 418.70 Section 418.70 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF... Care Non-Core Services § 418.70 Condition of participation: Furnishing of non-core services. A hospice...

  1. 20 CFR 669.340 - What core services are available to eligible MSFW's?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false What core services are available to eligible... Farmworker Jobs Program Customers and Available Program Services § 669.340 What core services are available to eligible MSFW's? The core services identified in WIA section 134(d)(2) are available to eligible...

  2. Challenges in Modeling of the Global Atmosphere

    NASA Astrophysics Data System (ADS)

    Janjic, Zavisa; Djurdjevic, Vladimir; Vasic, Ratko; Black, Tom

    2015-04-01

    Massively parallel computer architectures require that some widely adopted modeling paradigms be reconsidered in order to utilize the power of parallel processing more productively. For high computational efficiency with distributed memory, each core should work on a small subdomain of the full integration domain and exchange only a few rows of halo data with the neighbouring cores. However, this scenario implies that the discretization used in the model is horizontally local. The spherical geometry further complicates the problem. Various grid topologies will be discussed and examples will be shown. The latitude-longitude grid with differencing that is local in space and explicit in time was an early choice and has remained in use ever since. The problem with this method is that the grid size in the longitudinal direction tends to zero as the poles are approached. So, in addition to having unnecessarily high resolution near the poles, polar filtering has to be applied in order to use a time step of decent size. However, the polar filtering requires transpositions involving extra communications. The spectral transform method and the semi-implicit semi-Lagrangian schemes opened the way for a wide application of the spectral representation. With some variations, these techniques are used in most major centers. However, horizontal non-locality is inherent to the spectral representation and implicit time differencing, which inhibits scaling to a large number of cores. In this respect the lat-lon grid with a fast Fourier transform represents a significant step in the right direction, particularly at high resolutions where the Legendre transforms become increasingly expensive. Other grids with reduced variability of grid distances, such as various versions of the cubed sphere and the hexagonal/pentagonal ("soccer ball") grids, were proposed almost fifty years ago.
However, on these grids, large-scale (wavenumber 4 and 5) fictitious solutions ("grid imprinting") with significant amplitudes can develop. Due to their large scales, that are comparable to the scales of the dominant Rossby waves, such fictitious solutions are hard to identify and remove. Another new challenge on the global scale is that the limit of validity of the hydrostatic approximation is rapidly being approached. Having in mind the sensitivity of extended deterministic forecasts to small disturbances, we may need global non-hydrostatic models sooner than we think. The unified Non-hydrostatic Multi-scale Model (NMMB) that is being developed at the National Centers for Environmental Prediction (NCEP) as a part of the new NOAA Environmental Modeling System (NEMS) will be discussed as an example. The non-hydrostatic dynamics were designed in such a way as to avoid over-specification. The global version is run on the latitude-longitude grid, and the polar filter selectively slows down the waves that would otherwise be unstable. The model formulation has been successfully tested on various scales. A global forecasting system based on the NMMB has been run in order to test and tune the model. The skill of the medium range forecasts produced by the NMMB is comparable to that of other major medium range models. The computational efficiency of the global NMMB on parallel computers is good.
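    The pole problem described above, the longitudinal grid size tending to zero, follows directly from the cos(latitude) factor in the zonal spacing. A quick sketch for a 1-degree grid (the grid resolution is an assumption for illustration):

```python
# Zonal grid spacing on a lat-lon grid shrinks as cos(latitude).
import math

R = 6.371e6                       # Earth radius, m
dlon = math.radians(1.0)          # 1-degree longitudinal increment

def zonal_spacing(lat_deg):
    """Physical east-west distance between adjacent grid points, in meters."""
    return R * math.cos(math.radians(lat_deg)) * dlon

spacings = {lat: zonal_spacing(lat) for lat in (0, 60, 89)}
# At 60N the spacing is half the equatorial value; at 89N only ~2 km remain,
# which is why the explicit time step collapses without polar filtering.
```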

  3. GRACC: New generation of the OSG accounting

    NASA Astrophysics Data System (ADS)

    Retzke, K.; Weitzel, D.; Bhat, S.; Levshina, T.; Bockelman, B.; Jayatilaka, B.; Sehgal, C.; Quick, R.; Wuerthwein, F.

    2017-10-01

    Throughout the last decade the Open Science Grid (OSG) has been fielding requests from user communities, resource owners, and funding agencies to provide information about the utilization of OSG resources. Requested data include traditional accounting - core-hours utilized - as well as users' certificate Distinguished Names, their affiliations, and fields of science. The OSG accounting service, Gratia, developed in 2006, is able to provide this information and much more. However, with the rapid expansion and transformation of the OSG resources and access to them, we are faced with several challenges in adapting and maintaining the current accounting service. The newest changes include, but are not limited to, acceptance of users from numerous university campuses whose jobs are flocking to OSG resources, expansion into new types of resources (public and private clouds, allocation-based HPC resources, and GPU farms), migration to pilot-based systems, and migration to multicore environments. In order to have a scalable, sustainable, and expandable accounting service for the next few years, we are embarking on the development of the next-generation OSG accounting service, GRACC, which will be based on open-source technology and will be compatible with the existing system. It will consist of swappable, independent components, such as Logstash, Elasticsearch, Grafana, and RabbitMQ, that communicate through a data exchange. GRACC will continue to interface with the EGI and XSEDE accounting services and provide information in accordance with existing agreements. We will present the current architecture and a working prototype.

  4. Core services and priority-setting: the New Zealand experience.

    PubMed

    Cumming, J

    1994-01-01

    Like people in other countries, New Zealanders have been struggling with the issue of how to decide which health services should be delivered and to whom. The government has established a Core Services Committee to advise on core services, that is, those health care and disability support services to be made available on affordable terms and without unreasonable waiting time. Such a core has a similar role to a standard package of benefits within a managed competition framework. Services not in the core would be left to individuals' own responsibility. Specific objectives for a core are to promote accountability of purchasers, to make explicit the services that are core and those that are not, to promote an efficient and equitable allocation of resources, to limit government expenditure on health care and to involve the public in decision-making. A number of different options for defining a core are identified, and the work undertaken so far is discussed. The original concept of a core has not been implemented in New Zealand. The Core Services Committee has established broad priorities and facilitated a series of consensus development conferences to provide advice on the effectiveness of services. Some of the committee's recommendations have been incorporated into policy guidelines, which set out what the government expects of purchasers. These guidelines include priority areas for health gains, service obligations and principles for purchasing. Service obligations are not sufficiently detailed to meet the specific objectives of a core and do not meet equity objectives, as they allow in effect each of the four purchasers to develop their own core of services. The key issue for the government now is to decide whether to allow RHAs flexibility in determining their own priorities or whether a national approach to efficiency and equity is to be preferred.

  5. A temporal assessment of vehicle use patterns and their impact on the provision of vehicle-to-grid services

    NASA Astrophysics Data System (ADS)

    Harris, Chioke B.; Webber, Michael E.

    2012-09-01

    With the emerging nationwide availability of battery electric vehicles (BEVs) at prices attainable for many consumers, electric utilities, system operators and researchers have been investigating the impact of this new source of energy demand. The presence of BEVs on the electric grid might offer benefits equivalent to dedicated utility-scale energy storage systems by leveraging vehicles’ grid-connected energy storage through vehicle-to-grid (V2G) enabled infrastructure. It is, however, unclear whether BEVs will be available to provide needed grid services when those services are in highest demand. In this work, a set of GPS vehicle travel data from the Puget Sound Regional Council (PSRC) is analyzed to assess temporal patterns in vehicle use. These results show that vehicle use does not vary significantly across months, but differs noticeably between weekdays and weekends, such that averaging the data together could lead to erroneous V2G modeling results. Combination of these trends with wind generation and electricity demand data from the Electric Reliability Council of Texas (ERCOT) indicates that BEV availability does not align well with electricity demand and wind generation during the summer months, limiting the quantity of ancillary services that could be provided with V2G. Vehicle availability aligns best between the hours of 9 pm and 8 am during cooler months of the year, when electricity demand is bimodal and brackets the hours of highest vehicle use.
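    The pitfall the study highlights, that pooling weekday and weekend profiles can mask the difference in vehicle availability, can be sketched with hourly availability fractions. None of these numbers come from the PSRC data; they are illustrative assumptions:

```python
# Sketch: why weekday/weekend V2G availability should not be averaged together.
weekday_avail = [0.9] * 7 + [0.4] * 13 + [0.9] * 4   # high overnight, low 7am-8pm
weekend_avail = [0.8] * 9 + [0.6] * 11 + [0.8] * 4   # flatter weekend profile

def mean(xs):
    return sum(xs) / len(xs)

# Pooled average (5 weekdays + 2 weekend days) hides the weekday daytime dip
# that limits ancillary-service capacity exactly when demand peaks.
pooled = mean([5 * w + 2 * e for w, e in zip(weekday_avail, weekend_avail)]) / 7
```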

  6. Pervasive access to MRI bias artifact suppression service on a grid.

    PubMed

    Ardizzone, Edoardo; Gambino, Orazio; Genco, Alessandro; Pirrone, Roberto; Sorce, Salvatore

    2009-01-01

    Bias artifact corrupts MRIs in such a way that the image is afflicted by illumination variations. Some of the authors proposed the exponential entropy-driven homomorphic unsharp masking (E(2)D-HUM) algorithm, which corrects this artifact without any a priori hypothesis about the tissues or the MRI modality. Moreover, E(2)D-HUM is independent of the body part under examination and does not require any particular training task. People who want to use this algorithm, which is Matlab-based, have to set up their own computers in order to execute it. Furthermore, they have to be Matlab-skilled to exploit all the features of the algorithm. In this paper, we propose to make this algorithm available as a service on a grid infrastructure, so that people can use it almost from everywhere, in a pervasive fashion, by means of a suitable user interface running on smartphones. The proposed solution allows physicians to use the E(2)D-HUM algorithm (or any other kind of algorithm, provided it is available as a service on the grid), which is executed remotely somewhere in the grid, with the results sent back to the user's device. This way, physicians do not need to know Matlab to process their images. The pervasive service provision for medical image enhancement is presented, along with some experimental results obtained using smartphones connected to an existing Globus-based grid infrastructure.
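    The homomorphic unsharp masking idea underlying E(2)D-HUM can be sketched as estimating the slowly varying bias field with a heavy low-pass filter in the log domain and dividing it out. The separable box filter and kernel size below are assumptions for illustration, not the entropy-driven variant the authors propose:

```python
# Sketch of homomorphic unsharp masking for MRI bias correction.
import numpy as np

def hum_correct(img, kernel=31):
    """Divide out a low-passed log-domain estimate of the bias field."""
    log_img = np.log(img + 1e-6)              # homomorphic step: work in log space
    k = np.ones(kernel) / kernel              # separable box blur as the low-pass
    bias = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, log_img)
    bias = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, bias)
    # Subtract the bias estimate, restore the global mean, return to intensities.
    return np.exp(log_img - bias + bias.mean())
```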

  7. Nuclear reactor melt-retention structure to mitigate direct containment heating

    DOEpatents

    Tutu, Narinder K.; Ginsberg, Theodore; Klages, John R.

    1991-01-01

    A light water nuclear reactor melt-retention structure to mitigate the extent of direct containment heating of the reactor containment building. The structure includes a retention chamber for retaining molten core material away from the upper regions of the reactor containment building when a severe accident causes the bottom of the pressure vessel of the reactor to fail and discharge such molten material under high pressure through the reactor cavity into the retention chamber. In combination with the melt-retention chamber there is provided a passageway that includes molten core droplet deflector vanes and has gas vent means in its upper surface, which means are operable to deflect molten core droplets into the retention chamber while allowing high pressure steam and gases to be vented into the upper regions of the containment building. A plurality of platforms are mounted within the passageway and the melt-retention structure to direct the flow of molten core material and help retain it within the melt-retention chamber. In addition, ribs are mounted at spaced positions on the floor of the melt-retention chamber, and grid means are positioned at the entrance side of the retention chamber. The grid means develop gas back pressure that helps separate the molten core droplets from discharged high pressure steam and gases, thereby forcing the steam and gases to vent into the upper regions of the reactor containment building.

  8. 20 CFR 669.350 - How are core services delivered to MSFW's?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false How are core services delivered to MSFW's... Farmworker Jobs Program Customers and Available Program Services § 669.350 How are core services delivered to MSFW's? (a) The full range of core services are available to MSFW's, as well as other individuals, at...

  9. 20 CFR 669.350 - How are core services delivered to MSFW's?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false How are core services delivered to MSFW's... Farmworker Jobs Program Customers and Available Program Services § 669.350 How are core services delivered to MSFW's? (a) The full range of core services are available to MSFW's, as well as other individuals, at...

  10. Application of numerical grid generation for improved CFD analysis of multiphase screw machines

    NASA Astrophysics Data System (ADS)

    Rane, S.; Kovačević, A.

    2017-08-01

    Algebraic grid generation is widely used for discretization of the working domain of twin screw machines. It is fast and gives good control over the placement of grid nodes. However, the grid qualities needed to handle multiphase flows, such as oil injection, may be difficult to achieve. In order to obtain fast solutions for multiphase screw machines, it is important to further improve the quality and robustness of the computational grid. In this paper, a deforming grid of a twin screw machine is generated using algebraic transfinite interpolation to produce an initial mesh, upon which an elliptic partial differential equation (PDE) of Poisson form is solved numerically to produce a smooth final computational mesh. The quality of the numerical cells and their distribution obtained by the differential method is greatly improved. In addition, a similar procedure was introduced to fully smooth the transition of the partitioning rack curve between the rotors, thus improving the continuous movement of grid nodes and in turn the robustness and speed of the Computational Fluid Dynamics (CFD) solver. An analysis of an oil-injected twin screw compressor is presented to compare the improvements in grid quality factors in the regions of importance, such as the interlobe space, the radial tip, and the core of the rotor. The proposed method, which combines algebraic and differential grid generation, offers significant improvements in grid quality and robustness of the numerical solution.
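    The two-stage generation described above, algebraic transfinite interpolation (TFI) for the initial mesh followed by elliptic smoothing, can be sketched on a toy 2D domain. The curved lower boundary is an assumption standing in for the screw-machine geometry, and a homogeneous Laplace equation is used in place of the paper's Poisson system with control terms:

```python
# Sketch: TFI initial mesh on a quad with one curved edge, then Jacobi
# iterations toward the Laplace equation to smooth interior nodes.
import numpy as np

n = 21
u = np.linspace(0.0, 1.0, n)

# Boundary curves: the bottom edge is bowed, the other three are straight.
bottom = np.stack([u, 0.15 * np.sin(np.pi * u)], axis=1)
top = np.stack([u, np.ones(n)], axis=1)
left = np.stack([np.zeros(n), u], axis=1)
right = np.stack([np.ones(n), u], axis=1)

# Transfinite interpolation for the interior nodes.
grid = np.zeros((n, n, 2))
for i, s in enumerate(u):
    for j, t in enumerate(u):
        grid[i, j] = ((1 - t) * bottom[i] + t * top[i]
                      + (1 - s) * left[j] + s * right[j]
                      - ((1 - s) * (1 - t) * bottom[0] + s * t * top[-1]
                         + s * (1 - t) * bottom[-1] + (1 - s) * t * top[0]))

# Elliptic smoothing: Jacobi sweeps of the Laplace equation, boundaries fixed.
for _ in range(200):
    grid[1:-1, 1:-1] = 0.25 * (grid[2:, 1:-1] + grid[:-2, 1:-1]
                               + grid[1:-1, 2:] + grid[1:-1, :-2])
```

The boundary rows are never updated, so the smoothed mesh still conforms to the curved edge while interior node distribution relaxes toward the elliptic solution.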

  11. Glossary of AWS Acrinabs. Acronyms, Initialisms, and Abbreviations Commonly Used in Air Weather Service

    DTIC Science & Technology

    1991-01-01

    Sample entries include: FYDP (Five Year Defense Plan), FSI (Fog Stability Index), GOES-TAP (GOES imagery processing and dissemination system), GCS (grid course), GOFS (Global Ocean Flux Study), GRID (Global Resource Information Data-Base), and GEMAG (geomagnetic).

  12. Context-aware access control for pervasive access to process-based healthcare systems.

    PubMed

    Koufi, Vassiliki; Vassilacopoulos, George

    2008-01-01

    Healthcare is an increasingly collaborative enterprise involving a broad range of healthcare services provided by many individuals and organizations. Grid technology has been widely recognized as a means for integrating disparate computing resources in the healthcare field. Moreover, Grid portal applications can be developed on a wireless and mobile infrastructure to execute healthcare processes which, in turn, can provide remote access to Grid database services. Such an environment provides ubiquitous and pervasive access to integrated healthcare services at the point of care, thus improving healthcare quality. In such environments, the ability to provide an effective access control mechanism that meets the requirement of the least privilege principle is essential. Adherence to the least privilege principle requires continuous adjustments of user permissions in order to adapt to the current situation. This paper presents a context-aware access control mechanism for HDGPortal, a Grid portal application which provides access to workflow-based healthcare processes using wireless Personal Digital Assistants. The proposed mechanism builds upon and enhances security mechanisms provided by the Grid Security Infrastructure. It provides tight, just-in-time permissions so that authorized users get access to specific objects according to the current context. These permissions are subject to continuous adjustments triggered by the changing context. Thus, the risk of compromising information integrity during task executions is reduced.
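    The just-in-time, context-dependent permissions described above can be sketched as a rule table evaluated against the current context at each access attempt, so that rights change as the workflow advances. The roles, actions, and context fields below are hypothetical, not taken from HDGPortal:

```python
# Sketch: least-privilege, context-aware permission check (hypothetical rules).

def permitted(user_role, action, context):
    """Grant a permission only if the current context calls for it."""
    rules = {
        ("physician", "read_record"):  lambda c: c["on_duty"] and c["task_step"] == "review",
        ("physician", "write_record"): lambda c: c["on_duty"] and c["task_step"] == "update",
    }
    check = rules.get((user_role, action))
    return bool(check and check(context))

ctx = {"on_duty": True, "task_step": "review"}
# The same physician may read during "review" but cannot write until the
# workflow context advances to "update".
```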

  13. Coarse Grid CFD for underresolved simulation

    NASA Astrophysics Data System (ADS)

    Class, Andreas G.; Viellieber, Mathias O.; Himmel, Steffen R.

    2010-11-01

    CFD simulation of the complete reactor core of a nuclear power plant requires exceedingly large computational resources, so this brute-force approach has not been pursued yet. The traditional approach is 1D subchannel analysis employing calibrated transport models. Coarse grid CFD is an attractive alternative technique based on strongly under-resolved CFD and the inviscid Euler equations. Obviously, using inviscid equations and coarse grids does not resolve all the physics, requiring additional volumetric source terms modelling viscosity and other sub-grid effects. The source terms are implemented via correlations derived from fully resolved representative simulations, which can be tabulated or computed on the fly. The technique is demonstrated for a Carnot diffusor and a wire-wrap fuel assembly [1]. [1] Himmel, S.R., PhD thesis, Stuttgart University, Germany, 2009, http://bibliothek.fzk.de/zb/berichte/FZKA7468.pdf
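    The closure idea, an under-resolved inviscid solver plus tabulated volumetric source terms calibrated from resolved simulations, can be sketched as a table lookup applied during the coarse-cell update. The correlation values and the friction-like form of the sink are illustrative assumptions:

```python
# Sketch: tabulated sub-grid momentum sink applied in a coarse-cell update.
import numpy as np

# Calibration table (bulk velocity -> momentum sink per unit volume), as it
# might be extracted from a fully resolved reference simulation. Illustrative.
u_table = np.array([0.0, 1.0, 2.0, 4.0])
s_table = np.array([0.0, 0.5, 1.8, 6.5])

def momentum_source(u):
    """Look up the sub-grid momentum sink for bulk velocity u."""
    return np.interp(u, u_table, s_table)

# One explicit update of a coarse cell's velocity including the sub-grid sink.
rho, u, dt = 1000.0, 1.5, 1e-3
u_new = u - dt * momentum_source(u) / rho
```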

  14. gProcess and ESIP Platforms for Satellite Imagery Processing over the Grid

    NASA Astrophysics Data System (ADS)

    Bacu, Victor; Gorgan, Dorian; Rodila, Denisa; Pop, Florin; Neagu, Gabriel; Petcu, Dana

    2010-05-01

    The Environment oriented Satellite Data Processing Platform (ESIP) is developed through the SEE-GRID-SCI (SEE-GRID eInfrastructure for regional eScience) project, co-funded by the European Commission through FP7 [1]. The gProcess platform [2] is a set of tools and services supporting the development and execution over the Grid of workflow-based processing, particularly satellite imagery processing. ESIP [3], [4] is built on top of the gProcess platform by adding a set of satellite image processing software modules and meteorological algorithms. Satellite images can reveal and supply important information on earth surface parameters, climate data, pollution levels and weather conditions that can be used in different research areas. Generally, a satellite image processing algorithm can be decomposed into a set of modules that form a graph representation of the processing workflow. Two types of workflows can be defined in the gProcess platform: the abstract workflow (PDG - Process Description Graph), in which the user defines the algorithm conceptually, and the instantiated workflow (iPDG - instantiated PDG), which is the mapping of the PDG pattern onto particular satellite image and meteorological data [5]. The gProcess platform allows the definition of complex workflows by combining data resources, operators, services and sub-graphs. The gProcess platform is developed for the gLite middleware that is available in the EGEE and SEE-GRID infrastructures [6]. gProcess exposes its functionality through web services [7]. The Editor Web Service retrieves information on the resources available for developing complex workflows (available operators, sub-graphs, services, supported resources, etc.). The Manager Web Service deals with resource management (uploading new resources such as workflows, operators, services, data, etc.) and also retrieves information on workflows. 
The Executor Web Service manages the execution of instantiated workflows on the Grid infrastructure. In addition, this web service monitors the execution and generates statistical data that are important for evaluating performance and optimizing execution. The Viewer Web Service provides access to input and output data. The GreenView and GreenLand applications were developed to prove and validate the utility of the gProcess and ESIP platforms. The GreenView functionality includes the refinement of meteorological data such as temperature, and the calibration of satellite images based on field measurements. The GreenLand application performs the classification of satellite images using a set of vegetation indices. The gProcess and ESIP platforms are also used in the GiSHEO project [8] to support the processing of Earth Observation data over the Grid in eGLE (the GiSHEO eLearning Environment). Performance assessment experiments revealed that workflow-based execution can improve the execution time of a satellite image processing algorithm [9]. However, executing every workflow node on a different machine is not always efficient: some nodes take longer than others and slow down the overall execution, so the workflow nodes must be balanced carefully. Based on an optimization strategy, the workflow nodes can be grouped horizontally, vertically, or in a hybrid approach; grouped operators are then executed on one machine and the data transfer between workflow nodes is reduced. The dynamic nature of the Grid infrastructure makes it more exposed to failures, which can occur at worker nodes, services, storage elements, etc. 
Currently gProcess supports basic error prevention and error management solutions; more advanced solutions will be integrated into the platform in the future. References: [1] SEE-GRID-SCI Project, http://www.see-grid-sci.eu/ [2] Bacu V., Stefanut T., Rodila D., Gorgan D., Process Description Graph Composition by gProcess Platform. HiPerGRID - 3rd International Workshop on High Performance Grid Middleware, 28 May, Bucharest. Proceedings of CSCS-17 Conference, Vol. 2, ISSN 2066-4451, pp. 423-430 (2009). [3] ESIP Platform, http://wiki.egee-see.org/index.php/JRA1_Commonalities [4] Gorgan D., Bacu V., Rodila D., Pop Fl., Petcu D., Experiments on ESIP - Environment oriented Satellite Data Processing Platform. SEE-GRID-SCI User Forum, 9-10 Dec 2009, Bogazici University, Istanbul, Turkey, ISBN: 978-975-403-510-0, pp. 157-166 (2009). [5] Radu, A., Bacu, V., Gorgan, D., Diagrammatic Description of Satellite Image Processing Workflow. Workshop on Grid Computing Applications Development (GridCAD) at the SYNASC Symposium, 28 September 2007, Timisoara, IEEE Computer Press, ISBN 0-7695-3078-8, pp. 341-348 (2007). [6] Gorgan D., Bacu V., Stefanut T., Rodila D., Mihon D., Grid based Satellite Image Processing Platform for Earth Observation Applications Development. IDAACS'2009 - IEEE Fifth International Workshop on "Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications", 21-23 September, Cosenza, Italy, IEEE Computer Press, pp. 247-252 (2009). [7] Rodila D., Bacu V., Gorgan D., Integration of Satellite Image Operators as Workflows in the gProcess Application. Proceedings of ICCP2009 - IEEE 5th International Conference on Intelligent Computer Communication and Processing, 27-29 Aug 2009, Cluj-Napoca, ISBN: 978-1-4244-5007-7, pp. 355-358 (2009). 
[8] GiSHEO consortium, Project site, http://gisheo.info.uvt.ro [9] Bacu V., Gorgan D., Graph Based Evaluation of Satellite Imagery Processing over Grid. ISPDC 2008 - 7th International Symposium on Parallel and Distributed Computing, July 1-5, 2008, Krakow, Poland. IEEE Computer Society 2008, ISBN: 978-0-7695-3472-5, pp. 147-154.
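
    The horizontal grouping of workflow nodes mentioned above can be sketched as a topological levelling of the PDG (a hedged illustration with invented operator names, not gProcess code):

```python
# Hedged sketch of the workflow idea behind gProcess (operator names invented):
# an abstract workflow (PDG) is a DAG of operators; nodes in the same
# topological level are independent, so they can be grouped "horizontally"
# onto one machine, reducing inter-node data transfer.
from collections import defaultdict

# Edges: operator -> operators that consume its output
PDG = {
    "load_image": ["calibrate"],
    "load_field": ["calibrate"],
    "calibrate":  ["ndvi", "evi"],
    "ndvi":       ["classify"],
    "evi":        ["classify"],
    "classify":   [],
}

def levels(dag):
    """Topological levels of a DAG: each returned list holds mutually
    independent nodes that can be grouped or run in parallel."""
    indeg = defaultdict(int)
    for src, dsts in dag.items():
        indeg.setdefault(src, 0)
        for d in dsts:
            indeg[d] += 1
    out, ready = [], sorted(n for n, k in indeg.items() if k == 0)
    while ready:
        out.append(ready)
        nxt = []
        for n in ready:
            for d in dag[n]:
                indeg[d] -= 1
                if indeg[d] == 0:
                    nxt.append(d)
        ready = sorted(nxt)
    return out

print(levels(PDG))
# → [['load_field', 'load_image'], ['calibrate'], ['evi', 'ndvi'], ['classify']]
```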

  15. Energy storage at the threshold: Smart mobility and the grid of the future

    NASA Astrophysics Data System (ADS)

    Crabtree, George

    2018-01-01

    Energy storage is poised to drive transformations in transportation and the electricity grid that personalize access to mobility and energy services, not unlike the transformation of smart phones that personalized access to people and information. Storage will work with other emerging technologies such as electric vehicles, ride-sharing, self-driving and connected cars in transportation and with renewable generation, distributed energy resources and smart energy management on the grid to create mobility and electricity as services matched to customer needs replacing the conventional one-size-fits-all approach. This survey outlines the prospects, challenges and impacts of the coming mobility and electricity transformations.

  16. Connection technology of HPTO type WECs and DC nano grid in island

    NASA Astrophysics Data System (ADS)

    Wang, Kun-lin; Tian, Lian-fang; You, Ya-ge; Wang, Xiao-hong; Sheng, Song-wei; Zhang, Ya-qun; Ye, Yin

    2016-07-01

    The large fluctuations of wave energy endanger the security of the power grid, especially an island micro grid. A DC nano grid supported by batteries is proposed to smooth the output power of wave energy converters (WECs); connecting renewable energy converters to a DC grid is thus a new subject. The characteristics of WECs are very important to the connection technology between HPTO type WECs and a DC nano grid. The hydraulic power take-off (HPTO) system is the core unit of the largest category of WECs, with the functions of supplying suitable damping for a WEC to absorb wave energy and converting the captured wave energy to electricity. The HPTO is divided into a hydraulic energy storage system (HESS) and a hydraulic power generation system (HPGS). A primary numerical model of the HPGS is established in this paper, and three important basic characteristics of the HPGS are deduced, which reveal how the generator load determines the HPGS rotation rate. Therefore, the connector between an HPTO type WEC and a DC nano grid can be either an uncontrollable rectifier with high reliability or a controllable power converter with high efficiency, such as an interleaved boost converter (IBC). The research shows that connecting WECs to a DC nano grid is very flexible, but bypass resistance loads are indispensable for the security of the WECs.

  17. The GridPP DIRAC project - DIRAC for non-LHC communities

    NASA Astrophysics Data System (ADS)

    Bauer, D.; Colling, D.; Currie, R.; Fayer, S.; Huffman, A.; Martyniak, J.; Rand, D.; Richards, A.

    2015-12-01

    The GridPP consortium in the UK is currently testing a multi-VO DIRAC service aimed at non-LHC VOs. These VOs (Virtual Organisations) are typically small and generally do not have a dedicated computing support post. The majority of these represent particle physics experiments (e.g. NA62 and COMET), although the scope of the DIRAC service is not limited to this field. A few VOs have designed bespoke tools around the EMI-WMS & LFC, while others have so far eschewed distributed resources as they perceive the overhead for accessing them to be too high. The aim of the GridPP DIRAC project is to provide an easily adaptable toolkit for such VOs in order to lower the threshold for access to distributed resources such as Grid and cloud computing. As well as hosting a centrally run DIRAC service, we will also publish our changes and additions to the upstream DIRAC codebase under an open-source license. We report on the current status of this project and show increasing adoption of DIRAC within the non-LHC communities.

  18. Lifecycle comparison of selected Li-ion battery chemistries under grid and electric vehicle duty cycle combinations

    NASA Astrophysics Data System (ADS)

    Crawford, Alasdair J.; Huang, Qian; Kintner-Meyer, Michael C. W.; Zhang, Ji-Guang; Reed, David M.; Sprenkle, Vincent L.; Viswanathan, Vilayanur V.; Choi, Daiwon

    2018-03-01

    Li-ion batteries are expected to play a vital role in stabilizing the electrical grid as solar and wind generation capacity becomes increasingly integrated into the electric infrastructure. This article describes how two different commercial Li-ion batteries based on LiNi0.8Co0.15Al0.05O2 (NCA) and LiFePO4 (LFP) chemistries were tested under grid duty cycles recently developed for two specific grid services: (1) frequency regulation (FR) and (2) peak shaving (PS) with and without being subjected to electric vehicle (EV) drive cycles. The lifecycle comparison derived from the capacity, round-trip efficiency (RTE), resistance, charge/discharge energy, and total used energy of the two battery chemistries are discussed. The LFP chemistry shows better stability for the energy-intensive PS service, while the NCA chemistry is more conducive to the FR service under the operating regimes investigated. The results can be used as a guideline for selection, deployment, operation, and cost analyses of Li-ion batteries used for different applications.

  19. A 3-D Finite-Volume Non-hydrostatic Icosahedral Model (NIM)

    NASA Astrophysics Data System (ADS)

    Lee, Jin

    2014-05-01

    The Nonhydrostatic Icosahedral Model (NIM) formulates the latest numerical innovations of the three-dimensional finite-volume control volume on a quasi-uniform icosahedral grid suitable for ultra-high resolution simulations. NIM's modeling goal is to improve numerical accuracy for weather and climate simulations and to utilize state-of-the-art computing architectures, such as massively parallel CPUs and GPUs, to deliver routine high-resolution forecasts in a timely manner. NIM dynamical core innovations include: a local coordinate system remapped from the spherical surface to a plane for numerical accuracy (Lee and MacDonald, 2009); grid points in a table-driven horizontal loop that allow any horizontal point sequence (MacDonald et al., 2010); Flux-Corrected Transport formulated on finite-volume operators to maintain conservative, positive-definite transport (Lee et al., 2010); icosahedral grid optimization (Wang and Lee, 2011); and all differentials evaluated as three-dimensional finite-volume integrals around the control volume. The three-dimensional finite-volume solver in NIM is designed to improve pressure gradient calculation and orographic precipitation over complex terrain. The NIM dynamical core has been successfully verified with various non-hydrostatic benchmark test cases, such as internal gravity waves and mountain waves, in the Dynamical Core Model Intercomparison Project (DCMIP). Physical parameterizations suitable for NWP have been incorporated into the NIM dynamical core and successfully tested with multi-month aqua-planet simulations. Recently, NIM has started real-data simulations using GFS initial conditions. Results from the idealized tests as well as real-data simulations will be shown at the conference.
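
    The finite-volume principle underlying the solver, in which cell averages are updated only through fluxes across cell faces so that conservation holds by construction, can be illustrated in one dimension (a generic sketch, not NIM code):

```python
# Minimal 1D illustration (not NIM code) of the finite-volume principle:
# cell averages change only through fluxes across cell faces, so the total
# over the domain is conserved by construction.

def fv_advect(u, c, dt, dx):
    """One upwind finite-volume step for du/dt + c du/dx = 0 (c > 0),
    periodic boundaries. Each cell loses its outgoing face flux and
    gains its upwind neighbour's."""
    n = len(u)
    flux = [c * u[i] for i in range(n)]  # upwind flux through each right face
    return [u[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]

u = [0.0, 0.0, 1.0, 0.0, 0.0]
u1 = fv_advect(u, c=1.0, dt=0.5, dx=1.0)
# The fluxes telescope, so the total sum(u) is unchanged by the update.
```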

  20. Computational and Experimental Investigations of the Coolant Flow in the Cassette Fissile Core of a KLT-40S Reactor

    NASA Astrophysics Data System (ADS)

    Dmitriev, S. M.; Varentsov, A. V.; Dobrov, A. A.; Doronkov, D. V.; Pronin, A. N.; Sorokin, V. D.; Khrobostov, A. E.

    2017-07-01

    Experimental investigations are presented of the local hydrodynamic and mass-exchange characteristics of the coolant flowing through the cells in the characteristic zones of a fuel assembly of a KLT-40S reactor plant downstream of a plate-type spacer grid, using the method of gas-tracer diffusion in the coolant flow, with the velocity measured by a five-channel pneumometric probe. Analysis of the tracer concentration distribution in the coolant flow downstream of the plate-type spacer grid, together with the velocity field, made it possible to obtain a detailed pattern of the flow and to determine its main mechanisms and features. Measurements of the hydraulic-resistance coefficient of the plate-type spacer grid as a function of the Reynolds number are also presented. On the basis of the experimental data, recommendations were developed for improving the method of calculating the coolant flow rate in the cells of the fissile core of the KLT-40S reactor. The results of these investigations were accepted for estimating the thermal and technical reliability of KLT-40S fissile cores and were included in the database for verification of computational fluid dynamics (CFD) codes.

  1. CILogon-HA. Higher Assurance Federated Identities for DOE Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basney, James

    The CILogon-HA project extended the existing open source CILogon service (initially developed with funding from the National Science Foundation) to provide credentials at multiple levels of assurance to users of DOE facilities for collaborative science. CILogon translates mechanism and policy across higher education and grid trust federations, bridging from the InCommon identity federation (which federates university and DOE lab identities) to the Interoperable Global Trust Federation (which defines standards across the Worldwide LHC Computing Grid, the Open Science Grid, and other cyberinfrastructure). The CILogon-HA project expanded the CILogon service to support over 160 identity providers (including 6 DOE facilities) and 3 internationally accredited certification authorities. To provide continuity of operations upon the end of the CILogon-HA project period, project staff transitioned the CILogon service to operation by XSEDE.

  2. Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

    NASA Astrophysics Data System (ADS)

    Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.

    In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the platforms most widely available to scientists are clusters, grids, and cloud systems. Such infrastructures are currently undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability, together with issues including data distribution, software heterogeneity, and ad hoc hardware availability, commonly forces scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.

  3. Failure probability analysis of optical grid

    NASA Astrophysics Data System (ADS)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    Optical grid, the integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems are widely applied to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based analysis method for the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the effectiveness of different backup strategies in reducing it can be compared, so that the different requirements of different clients can be satisfied. When a DAG-based (directed acyclic graph) application is executed under different backup strategies, the application failure probability and the application completion time differ. This paper therefore proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm guarantees the required failure probability, improves network resource utilization, and realizes a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in the optical grid.
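
    The task-based failure analysis can be sketched as follows (a hedged illustration assuming independent task failures, not the paper's MDSA algorithm):

```python
# Hedged sketch (not the paper's MDSA algorithm) of task-based failure
# probability for a DAG application: the application succeeds only if every
# task succeeds, and a replicated (backup) task succeeds if at least one of
# its independent copies does.

def task_failure(p_fail, replicas=1):
    """Failure probability of one task run as independent replicas."""
    return p_fail ** replicas

def app_failure(tasks):
    """tasks: list of (p_fail, replicas) pairs, assumed independent."""
    p_ok = 1.0
    for p_fail, replicas in tasks:
        p_ok *= 1.0 - task_failure(p_fail, replicas)
    return 1.0 - p_ok

# Four tasks, each with 5% failure probability, with and without one backup:
no_backup   = app_failure([(0.05, 1)] * 4)
with_backup = app_failure([(0.05, 2)] * 4)
assert with_backup < no_backup  # backups reduce application failure probability
```

The backup strategy trades resources for reliability: the comparison above is the kind of quantitative trade-off a differentiated-services scheduler can offer different clients.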

  4. DIRAC distributed secure framework

    NASA Astrophysics Data System (ADS)

    Casajus, A.; Graciani, R.; LHCb DIRAC Team

    2010-04-01

    DIRAC, the LHCb community Grid solution, provides access to a vast amount of computing and storage resources for a large number of users. In DIRAC users are organized in groups with different needs and permissions. In order to ensure that only allowed users can access the resources and to enforce that there are no abuses, security is mandatory. All DIRAC services and clients use secure connections that are authenticated using certificates and grid proxies. Once a client has been authenticated, authorization rules are applied to the requested action based on the presented credentials. These authorization rules and the list of users and groups are centrally managed in the DIRAC Configuration Service. Users submit jobs to DIRAC using their local credentials. From then on, DIRAC has to interact with different Grid services on behalf of this user. DIRAC has a proxy management service to which users upload short-lived proxies to be used when DIRAC needs to act on their behalf. Long-duration proxies are uploaded by users to a MyProxy service, and DIRAC retrieves new short delegated proxies when necessary. This contribution discusses the details of the implementation of this security infrastructure in DIRAC.
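
    The authorization step can be sketched as follows (an illustration with invented rule and group names, not DIRAC source code):

```python
# Illustrative sketch (not DIRAC source code) of the authorization step:
# once a client is authenticated, the requested action is checked against
# centrally managed rules keyed on the group carried by the credentials.

AUTH_RULES = {
    # action -> groups allowed to perform it (hypothetical names)
    "JobManager.submit": {"lhcb_user", "lhcb_prod"},
    "JobManager.kill":   {"lhcb_prod", "lhcb_admin"},
    "Config.write":      {"lhcb_admin"},
}

def is_authorized(credentials, action):
    """credentials: dict holding the group extracted from the grid proxy."""
    return credentials.get("group") in AUTH_RULES.get(action, set())

user = {"dn": "/DC=org/CN=alice", "group": "lhcb_user"}
assert is_authorized(user, "JobManager.submit")
assert not is_authorized(user, "Config.write")
```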

  5. Bringing Federated Identity to Grid Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teheran, Jeny

    The Fermi National Accelerator Laboratory (FNAL) is facing the challenge of providing scientific data access and grid submission to scientific collaborations that span the globe but are hosted at FNAL. Users in these collaborations are currently required to register as an FNAL user and obtain FNAL credentials to access grid resources to perform their scientific computations. These requirements burden researchers with managing additional authentication credentials, and put additional load on FNAL for managing user identities. Our design integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and MyProxy with the FNAL grid submission system to provide secure access for users from diverse experiments and collaborations without requiring each user to have authentication credentials from FNAL. The design automates the handling of certificates so users do not need to manage them manually. Although the initial implementation is for FNAL's grid submission system, the design and the core of the implementation are general and could be applied to other distributed computing systems.

  6. Grid accounting service: state and future development

    NASA Astrophysics Data System (ADS)

    Levshina, T.; Sehgal, C.; Bockelman, B.; Weitzel, D.; Guru, A.

    2014-06-01

    During the last decade, large-scale federated distributed infrastructures have been continually developed and expanded. One of the crucial components of a cyber-infrastructure is an accounting service that collects data related to resource utilization and the identity of users using resources. The accounting service is important for verifying pledged resource allocation for particular groups and users, providing reports for funding agencies and resource providers, and understanding hardware provisioning requirements. It can also be used for end-to-end troubleshooting as well as billing purposes. In this work we describe Gratia, a federated accounting service jointly developed at Fermilab and the Holland Computing Center (HCC) at the University of Nebraska-Lincoln. The Open Science Grid, Fermilab, HCC, and several other institutions have used Gratia in production for several years. The current development activities include expanding Virtual Machine provisioning information, XSEDE allocation usage accounting, and Campus Grids resource utilization. We also identify the direction of future work: improvement and expansion of Cloud accounting, persistent and elastic storage space allocation, and the incorporation of WAN and LAN network metrics.

  7. Telemedical applications and grid technology

    NASA Astrophysics Data System (ADS)

    Graschew, Georgi; Roelofs, Theo A.; Rakowsky, Stefan; Schlag, Peter M.; Kaiser, Silvan; Albayrak, Sahin

    2005-11-01

    Building on experience from the exploitation of previous European telemedicine projects, an open Euro-Mediterranean consortium proposes the Virtual Euro-Mediterranean Hospital (VEMH) initiative. Providing the same advanced technologies to European and Mediterranean countries should contribute to a better dialogue for integration. VEMH aims to facilitate the interconnection of various services through real integration that takes into account the social, human and cultural dimensions. VEMH will provide a platform consisting of satellite and terrestrial links for medical e-learning, real-time telemedicine and medical assistance. The methodologies for the VEMH are medical-needs-driven instead of technology-driven: they supply new management tools for virtual medical communities and allow management of clinical outcomes for the implementation of evidence-based medicine. Due to the distributed character of the VEMH, Grid technology becomes indispensable for successful deployment of the services. Existing Grid engines provide the basic computing power needed by today's medical analysis tasks but lack other capabilities needed for the envisioned communication and knowledge-sharing services. For heterogeneous systems shared by different institutions, the high-level system management areas in particular are still unsupported. Therefore a Metagrid Engine is needed that provides a superset of functionalities across different Grid engines and manages strong privacy and Quality of Service constraints at this comprehensive level.

  8. A Survey on Next-generation Power Grid Data Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Shutang; Zhu, Dr. Lin; Liu, Yong

    2015-01-01

    The operation and control of power grids will increasingly rely on data. A high-speed, reliable, flexible and secure data architecture is a prerequisite of the next-generation power grid. This paper summarizes the challenges in collecting and utilizing power grid data, and then provides a reference data architecture for future power grids. Based on the data architecture deployment, related research is reviewed and summarized in several categories, including data measurement/actuation, data transmission, the data service layer, and data utilization, as well as two cross-cutting issues: interoperability and cyber security. Research gaps and future work are also presented.

  9. Research of the application of the Low Power Wide Area Network in power grid

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Sui, Hong; Li, Jia; Yao, Jian

    2018-03-01

    Low Power Wide Area Network (LPWAN) technologies have developed rapidly in recent years, but they have not yet seen large-scale application in the various application scenarios of the power grid. LoRa is a mainstream LPWAN technology. This paper presents a comparison test of the signal coverage of LoRa and other traditional wireless communication technologies in a typical power grid signal environment. Based on the test results, the paper gives suggestions for applying LoRa to power grid services, which can guide the planning and construction of LPWANs in the power grid.

  10. 50 CFR Figure 13 to Part 223 - Single Grid Hard TED Escape Opening

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 10 2014-10-01 2014-10-01 false Single Grid Hard TED Escape Opening 13 Figure 13 to Part 223 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND.... 223, Fig. 13 Figure 13 to Part 223—Single Grid Hard TED Escape Opening EC01JY91.060 [60 FR 15520, Mar...

  11. 50 CFR Figure 13 to Part 223 - Single Grid Hard TED Escape Opening

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 10 2013-10-01 2013-10-01 false Single Grid Hard TED Escape Opening 13 Figure 13 to Part 223 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND.... 223, Fig. 13 Figure 13 to Part 223—Single Grid Hard TED Escape Opening EC01JY91.060 [60 FR 15520, Mar...

  12. Phase 2 and phase 3 presentation grids

    Treesearch

    Joseph McCollum; Jamie K. Cochran

    2009-01-01

    Many forest inventory and analysis (FIA) analysts, other researchers, and FIA Spatial Data Services personnel have expressed their desire to use the FIA Phase 2 (P2) and Phase 3 (P3), and Forest Health Monitoring (FHM) grids in presentations and other analytical reports. Such uses have been prohibited due to the necessity of keeping the actual P2, P3, and FHM grids...

  13. 20 CFR 662.260 - What services, in addition to the applicable core services, are to be provided by One-Stop...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... core services, are to be provided by One-Stop partners through the One-Stop delivery system? 662.260... Responsibilities of Partners § 662.260 What services, in addition to the applicable core services, are to be provided by One-Stop partners through the One-Stop delivery system? In addition to the provision of core...

  14. 20 CFR 662.260 - What services, in addition to the applicable core services, are to be provided by One-Stop...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... core services, are to be provided by One-Stop partners through the One-Stop delivery system? 662.260... Responsibilities of Partners § 662.260 What services, in addition to the applicable core services, are to be provided by One-Stop partners through the One-Stop delivery system? In addition to the provision of core...

  15. MaGate Simulator: A Simulation Environment for a Decentralized Grid Scheduler

    NASA Astrophysics Data System (ADS)

    Huang, Ye; Brocco, Amos; Courant, Michele; Hirsbrunner, Beat; Kuonen, Pierre

    This paper presents a simulator for a decentralized modular grid scheduler named MaGate. MaGate's design emphasizes scheduler interoperability by providing intelligent scheduling that serves the grid community as a whole. Each MaGate scheduler instance is able to deal with dynamic scheduling conditions and continuously arriving grid jobs: received jobs are either allocated on local resources or delegated to other MaGates for remote execution. The proposed MaGate simulator is based on the GridSim toolkit and the Alea simulator, and abstracts the features and behaviors of the complex fundamental grid elements, such as grid jobs, grid resources, and grid users. Simulation of scheduling tasks is supported by a grid network overlay simulator executing distributed ant-based swarm intelligence algorithms to provide services such as group communication and resource discovery. For evaluation, a comparison of the behaviors of different collaborative policies among a community of MaGates is provided. The results support the use of the proposed approach as a functional, ready-to-use grid scheduler simulator.
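
    The local-or-delegate decision each MaGate instance faces can be sketched with an invented policy (not the MaGate implementation):

```python
# Hedged sketch (invented policy, not MaGate code) of the decision each
# scheduler instance faces for every arriving job: run it on local resources
# or delegate it to a remote MaGate discovered through the overlay.

def place_job(job_cpus, local_free_cpus, remote_offers):
    """remote_offers: {magate_id: free_cpus} from resource discovery.
    Returns 'local', the id of the chosen remote MaGate, or None."""
    if job_cpus <= local_free_cpus:
        return "local"
    # Delegate to the remote MaGate with the most free capacity that fits.
    fitting = {m: free for m, free in remote_offers.items() if free >= job_cpus}
    if fitting:
        return max(fitting, key=fitting.get)
    return None  # no placement possible; queue the job until conditions change

assert place_job(4, 8, {}) == "local"
assert place_job(16, 8, {"mg-a": 12, "mg-b": 32}) == "mg-b"
assert place_job(64, 8, {"mg-a": 12}) is None
```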

  16. Cloud Computing for the Grid: GridControl: A Software Platform to Support the Smart Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    GENI Project: Cornell University is creating a new software platform for grid operators called GridControl that will utilize cloud computing to more efficiently control the grid. In a cloud computing system, there are minimal hardware and software demands on users. The user can tap into a network of computers that is housed elsewhere (the cloud) and the network runs computer applications for the user. The user only needs interface software to access all of the cloud's data resources, which can be as simple as a web browser. Cloud computing can reduce costs, facilitate innovation through sharing, empower users, and improve the overall reliability of a dispersed system. Cornell's GridControl will focus on 4 elements: delivering the state of the grid to users quickly and reliably; building networked, scalable grid-control software; tailoring services to emerging smart grid uses; and simulating smart grid behavior under various conditions.

  17. Radiosurgery planning supported by the GEMSS grid.

    PubMed

    Fenner, J W; Mehrem, R A; Ganesan, V; Riley, S; Middleton, S E; Potter, K; Walton, L

    2005-01-01

    GEMSS (Grid Enabled Medical Simulation Services IST-2001-37153) is an EU project funded to provide a test bed for Grid-enabled health applications. Its purpose is evaluation of Grid computing in the health sector. The health context imposes particular constraints on Grid infrastructure design, and it is this that has driven the feature set of the middleware. In addition to security, the time critical nature of health applications is accommodated by a Quality of Service component, and support for a well defined business model is also included. This paper documents experience of a GEMSS compliant radiosurgery application running within the Medical Physics department at the Royal Hallamshire Hospital in the UK. An outline of the Grid-enabled RAPT radiosurgery application is presented and preliminary experience of its use in the hospital environment is reported. The performance of the software is compared against GammaPlan (an industry standard) and advantages/disadvantages are highlighted. The RAPT software relies on features of the GEMSS middleware that are integral to the success of this application, and together they provide a glimpse of an enabling technology that can impact upon patient management in the 21st century.

  18. Demonstration of Active Power Controls by Utility-Scale PV Power Plant in an Island Grid: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gevorgian, Vahan; O'Neill, Barbara

    The National Renewable Energy Laboratory (NREL), AES, and the Puerto Rico Electric Power Authority conducted a demonstration project on a utility-scale photovoltaic (PV) plant to test the viability of providing important ancillary services from this facility. As solar generation increases globally, there is a need for innovation and increased operational flexibility. A typical PV power plant consists of multiple power electronic inverters and can contribute to grid stability and reliability through sophisticated 'grid-friendly' controls. In this way, it may mitigate the impact of its variability on the grid and contribute to important system requirements more like traditional generators. In 2015, testing was completed on a 20-MW AES plant in Puerto Rico, and a large amount of test data was produced and analyzed that demonstrates the ability of PV power plants to provide various types of new grid-friendly controls. This data showed how active power controls can elevate PV from a simple intermittent energy resource to a provider of additional ancillary services for an isolated island grid. Specifically, the tests conducted included PV plant participation in automatic generation control, provision of droop response, and fast frequency response.
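The droop response exercised in these tests follows the standard proportional droop law used by conventional governors: active power is adjusted in proportion to the per-unit frequency deviation. A minimal sketch, assuming illustrative settings (60 Hz nominal frequency, 5% droop, 20 MW rating) rather than the actual plant configuration:

```python
def droop_setpoint(f_meas, p_set, p_avail,
                   f_nom=60.0, droop=0.05, p_rated=20.0):
    """Proportional (droop) active-power response to a frequency deviation.

    f_meas  : measured grid frequency in Hz
    p_set   : current active-power setpoint in MW
    p_avail : power currently available from the PV resource in MW
    droop   : 5% droop => a 5% frequency deviation commands the full rating
    """
    df_pu = (f_meas - f_nom) / f_nom   # per-unit frequency deviation
    dp = -(df_pu / droop) * p_rated    # MW adjustment: raise on under-frequency
    return max(0.0, min(p_avail, p_set + dp))

# Over-frequency event: the plant curtails output.
print(droop_setpoint(60.3, p_set=15.0, p_avail=18.0))   # 13.0 MW
```

Note that a PV plant can only raise output if it is curtailed below its available power, which is why the setpoint is clipped to the currently available power.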

  19. Grid Application Meta-Repository System: Repository Interconnectivity and Cross-domain Application Usage in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Tudose, Alexandru; Terstyansky, Gabor; Kacsuk, Peter; Winter, Stephen

    Grid Application Repositories vary greatly in terms of access interface, security system, implementation technology, communication protocols and repository model. This diversity has become a significant limitation in terms of interoperability and inter-repository access. This paper presents the Grid Application Meta-Repository System (GAMRS) as a solution that offers better options for the management of Grid applications. GAMRS proposes a generic repository architecture, which allows any Grid Application Repository (GAR) to be connected to the system independently of its underlying technology. It also presents applications in a uniform manner and makes applications from all connected repositories visible to web search engines, OGSI/WSRF Grid Services and other OAI (Open Archive Initiative)-compliant repositories. GAMRS can also function as a repository in its own right and can store applications under a new repository model. With the help of this model, applications can be presented as embedded in virtual machines (VM) and therefore they can be run in their native environments and can easily be deployed on virtualized infrastructures, allowing interoperability with new-generation technologies such as cloud computing, application-on-demand, automatic service/application deployments and automatic VM generation.

  20. Distributed data mining on grids: services, tools, and applications.

    PubMed

    Cannataro, Mario; Congiusta, Antonio; Pugliese, Andrea; Talia, Domenico; Trunfio, Paolo

    2004-12-01

    Data mining algorithms are widely used today for the analysis of large corporate and scientific datasets stored in databases and data archives. Industry, science, and commerce fields often need to analyze very large datasets maintained over geographically distributed sites by using the computational power of distributed and parallel systems. The grid can play a significant role in providing an effective computational support for distributed knowledge discovery applications. For the development of data mining applications on grids we designed a system called Knowledge Grid. This paper describes the Knowledge Grid framework and presents the toolset provided by the Knowledge Grid for implementing distributed knowledge discovery. The paper discusses how to design and implement data mining applications by using the Knowledge Grid tools starting from searching grid resources, composing software and data components, and executing the resulting data mining process on a grid. Some performance results are also discussed.

  1. The ATLAS Software Installation System v2: a highly available system to install and validate Grid and Cloud sites via Panda

    NASA Astrophysics Data System (ADS)

    De Salvo, A.; Kataoka, M.; Sanchez Pineda, A.; Smirnov, Y.

    2015-12-01

    The ATLAS Installation System v2 is the evolution of the original system, used since 2003. The original tool has been completely re-designed in terms of database backend and components, adding support for submission to multiple backends, including the original Workload Management Service (WMS) and the new PanDA modules. The database engine has been changed from plain MySQL to Galera/Percona and the table structure has been optimized to allow a full High-Availability (HA) solution over a Wide Area Network. The servlets, running on each frontend, have also been decoupled from local settings, to allow easy scalability of the system, including the possibility of an HA system with multiple sites. The clients can also be run in multiple copies and in different geographical locations, and take care of sending the installation and validation jobs to the target Grid or Cloud sites. Moreover, the Installation Database is used as the source of parameters by the automatic agents running in CVMFS, in order to install the software and distribute it to the sites. The system has been in production for ATLAS since 2013, with the INFN Roma Tier 2 and the CERN Agile Infrastructure as its main HA sites. The Light Job Submission Framework for Installation (LJSFi) v2 engine interfaces directly with PanDA for job management, the Atlas Grid Information System (AGIS) for the site parameter configurations, and CVMFS for both core components and the installation of the software itself. LJSFi2 is also able to use other plugins, and is essentially Virtual Organization (VO) agnostic, so it can be directly used and extended to cope with the requirements of any Grid- or Cloud-enabled VO. In this work we will present the architecture, performance, status and possible evolutions of the system for the LHC Run2 and beyond.

  2. Grid Computing and Collaboration Technology in Support of Fusion Energy Sciences

    NASA Astrophysics Data System (ADS)

    Schissel, D. P.

    2004-11-01

    The SciDAC Initiative is creating a computational grid designed to advance scientific understanding in fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling, and allowing more efficient use of experimental facilities. The philosophy is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as easy-to-use, network-available services. Access to services is stressed rather than portability. Services share the same basic security infrastructure so that stakeholders can control their own resources, which also helps ensure fair use of those resources. The collaborative control room is being developed using the open-source Access Grid software that enables secure group-to-group collaboration with capabilities beyond teleconferencing, including application sharing and control. The ability to effectively integrate off-site scientists into a dynamic control room will be critical to the success of future international projects like ITER. Grid computing, the secure integration of computer systems over high-speed networks to provide on-demand access to data analysis capabilities and related functions, is being deployed as an alternative to traditional resource sharing among institutions. The first grid computational service deployed was the transport code TRANSP and included tools for run preparation, submission, monitoring and management. This approach saves user sites from the laborious effort of maintaining a complex code while at the same time reducing the burden on developers by avoiding the support of a large number of heterogeneous installations. This tutorial will present the philosophy behind an advanced collaborative environment, give specific examples, and discuss its usage beyond FES.

  3. A grid-enabled web service for low-resolution crystal structure refinement.

    PubMed

    O'Donovan, Daniel J; Stokes-Rees, Ian; Nam, Yunsun; Blacklow, Stephen C; Schröder, Gunnar F; Brunger, Axel T; Sliz, Piotr

    2012-03-01

    Deformable elastic network (DEN) restraints have proved to be a powerful tool for refining structures from low-resolution X-ray crystallographic data sets. Unfortunately, optimal refinement using DEN restraints requires extensive calculations and is often hindered by a lack of access to sufficient computational resources. The DEN web service presented here intends to provide structural biologists with access to resources for running computationally intensive DEN refinements in parallel on the Open Science Grid, the US cyberinfrastructure. Access to the grid is provided through a simple and intuitive web interface integrated into the SBGrid Science Portal. Using this portal, refinements combined with full parameter optimization that would take many thousands of hours on standard computational resources can now be completed in several hours. An example of the successful application of DEN restraints to the human Notch1 transcriptional complex using the grid resource, and summaries of all submitted refinements, are presented as justification.

  4. 20 CFR 663.220 - Who may receive intensive services?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... one core service and are unable to obtain employment through core services, and are determined by a... dislocated workers who are employed, have received at least one core service, and are determined by a One...

  5. 20 CFR 663.220 - Who may receive intensive services?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... one core service and are unable to obtain employment through core services, and are determined by a... dislocated workers who are employed, have received at least one core service, and are determined by a One...

  6. Arbitrary Lagrangian-Eulerian Method with Local Structured Adaptive Mesh Refinement for Modeling Shock Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, R W; Pember, R B; Elliott, N S

    2001-10-22

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems currently at and beyond the boundary of what is soluble by traditional ALE methods by focusing computational resources where they are required through dynamic adaptation. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.

  7. A Semantic Grid Oriented to E-Tourism

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao Ming

    With the increasing complexity of tourism business models and tasks, there is a clear need for the next generation of e-Tourism infrastructure to support flexible automation, integration, computation, storage, and collaboration. Several enabling technologies such as the semantic Web, Web services, agents and grid computing have been applied in different e-Tourism applications; however, there is no unified framework able to integrate all of them. This paper therefore presents a promising e-Tourism framework based on the emerging semantic grid, in which a number of key design issues are discussed, including architecture, ontology structure, semantic reconciliation, service and resource discovery, role-based authorization and intelligent agents. The paper finally describes an implementation of the framework.

  8. Auspice: Automatic Service Planning in Cloud/Grid Environments

    NASA Astrophysics Data System (ADS)

    Chiu, David; Agrawal, Gagan

    Recent scientific advances have fostered a growing number of services and data sets available for utilization. These resources, though scattered across disparate locations, are often loosely coupled both semantically and operationally. This loosely coupled relationship implies the possibility of linking together operations and data sets to answer queries. This task, generally known as automatic service composition, therefore abstracts the process of complex scientific workflow planning from the user. We have been exploring a metadata-driven approach toward automatic service workflow composition, among other enabling mechanisms, in our system, Auspice: Automatic Service Planning in Cloud/Grid Environments. In this paper, we present a complete overview of our system's unique features and outlooks for future deployment as the Cloud computing paradigm becomes increasingly prominent in enabling scientific computing.
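Metadata-driven service composition of this kind can be viewed as a search over service signatures: chain together services whose declared outputs satisfy another service's inputs until the requested product is derivable. A toy sketch under that interpretation; the catalog, service names, and data types below are hypothetical, not Auspice's actual metadata model:

```python
from collections import deque

# Hypothetical service catalog: name -> (required input types, produced output types).
SERVICES = {
    "fetch_dem":     ({"region"}, {"elevation_grid"}),
    "compute_slope": ({"elevation_grid"}, {"slope_map"}),
    "render_map":    ({"slope_map"}, {"map_image"}),
}

def compose(available, goal):
    """Breadth-first search for a shortest service chain that derives `goal`
    from the initially available data types; returns None if impossible."""
    start = frozenset(available)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        have, plan = queue.popleft()
        if goal in have:
            return plan
        for name, (needs, gives) in SERVICES.items():
            if needs <= have:                      # all inputs satisfied
                nxt = frozenset(have | gives)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, plan + [name]))
    return None

print(compose({"region"}, "map_image"))   # ['fetch_dem', 'compute_slope', 'render_map']
```

Because the search is breadth-first, the first plan found is also a shortest one, which mirrors why planners of this style prefer minimal workflows.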

  9. Web-HLA and Service-Enabled RTI in the Simulation Grid

    NASA Astrophysics Data System (ADS)

    Huang, Jijie; Li, Bo Hu; Chai, Xudong; Zhang, Lin

    HLA-based simulations in a grid environment have now become a main research hotspot in the M&S community, but the current HLA has many shortcomings when running in a grid environment. This paper analyzes the analogies between HLA and OGSA from the software architecture point of view, and points out that the service-oriented method should be introduced into the three components of HLA to overcome its shortcomings. This paper proposes an expanded running architecture that can integrate the HLA with OGSA and realizes a service-enabled RTI (SE-RTI). In addition, to address the bottleneck of efficiently realizing the HLA time-management mechanism, this paper proposes a centralized approach in which the CRC of the SE-RTI takes charge of the time management and the dispatching of TSO events of each federate. Benchmark experiments indicate that the running speed of simulations over the Internet or a WAN is noticeably improved.

  10. AstroGrid-D: Grid technology for astronomical science

    NASA Astrophysics Data System (ADS)

    Enke, Harry; Steinmetz, Matthias; Adorf, Hans-Martin; Beck-Ratzka, Alexander; Breitling, Frank; Brüsemeister, Thomas; Carlson, Arthur; Ensslin, Torsten; Högqvist, Mikael; Nickelt, Iliya; Radke, Thomas; Reinefeld, Alexander; Reiser, Angelika; Scholl, Tobias; Spurzem, Rainer; Steinacker, Jürgen; Voges, Wolfgang; Wambsganß, Joachim; White, Steve

    2011-02-01

    We present status and results of AstroGrid-D, a joint effort of astrophysicists and computer scientists to employ grid technology for scientific applications. AstroGrid-D provides access to a network of distributed machines with a set of commands as well as software interfaces. It allows simple use of compute and storage facilities, and the scheduling and monitoring of compute tasks and data management. It is based on the Globus Toolkit middleware (GT4). Chapter 1 describes the context which led to the demand for advanced software solutions in Astrophysics, and we state the goals of the project. We then present characteristic astrophysical applications that have been implemented on AstroGrid-D in chapter 2. We describe simulations of different complexity, compute-intensive calculations running on multiple sites (Section 2.1), and advanced applications for specific scientific purposes (Section 2.2), such as a connection to robotic telescopes (Section 2.2.3). These examples show how grid execution improves, for example, the scientific workflow. Chapter 3 explains the software tools and services that we adapted or newly developed. Section 3.1 is focused on the administrative aspects of the infrastructure, to manage users and monitor activity. Section 3.2 characterises the central components of our architecture: the AstroGrid-D information service to collect and store metadata, a file management system, the data management system, and a job manager for automatic submission of compute tasks. We summarise the successfully established infrastructure in chapter 4, concluding with our future plans to establish AstroGrid-D as a platform of modern e-Astronomy.

  11. Parallel high-performance grid computing: capabilities and opportunities of a novel demanding service and business class allowing highest resource efficiency.

    PubMed

    Kepper, Nick; Ettig, Ramona; Dickmann, Frank; Stehr, Rene; Grosveld, Frank G; Wedemann, Gero; Knoch, Tobias A

    2010-01-01

    Especially in the life-science and health-care sectors, IT requirements are immense due to the large and complex systems to be analysed and simulated. Grid infrastructures play a rapidly increasing role here for research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for number crunching of trivially parallelizable problems, parallel high-performance computing is increasingly required. Here we show, for the prime example of molecular dynamics simulations, how the presence of large grid clusters with very fast network interconnects within grid infrastructures now allows efficient parallel high-performance grid computing and thus combines the benefits of dedicated supercomputing centres and grid infrastructures. The demands for this service class are the highest, since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects are all needed in different combinations and must be considered in a highly dedicated manner to reach the highest performance efficiency. Beyond this, advanced and dedicated i) user interaction, ii) job management, iii) accounting, and iv) billing not only combine classic with parallel high-performance grid usage but, more importantly, can also increase the efficiency of IT resource providers. Consequently, the mere "yes-we-can" becomes a huge opportunity for sectors such as life science and health care, as well as for grid infrastructures, by reaching a higher level of resource efficiency.

  12. JTS and its Application in Environmental Protection Applications

    NASA Astrophysics Data System (ADS)

    Atanassov, Emanouil; Gurov, Todor; Slavov, Dimitar; Ivanovska, Sofiya; Karaivanova, Aneta

    2010-05-01

    Environmental protection was identified as a domain of high interest for South East Europe, addressing practical problems related to security and quality of life. The gridification of the Bulgarian applications MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies, which aims to develop an efficient Grid implementation of a sensitivity analysis of the Danish Eulerian Model), MSACM (Multi-Scale Atmospheric Composition Modeling, which aims to produce an integrated, multi-scale, Balkan-region-oriented modelling system able to interface the scales of the problem from emissions on the urban scale to their transport and transformation on the local and regional scales) and MSERRHSA (Modeling System for Emergency Response to the Release of Harmful Substances in the Atmosphere, which aims to develop and deploy a modeling system for emergency response to the release of harmful substances in the atmosphere, targeted at the SEE and more specifically the Balkan region) faces several challenges. These applications are resource intensive, in terms of both CPU utilization and data transfers and storage. The use of the applications for operational purposes poses requirements for availability of resources, which are difficult to meet in a dynamically changing Grid environment. The validation of the applications is resource intensive and time consuming. The successful resolution of these problems requires collaborative work and support on the part of the infrastructure operators. However, the infrastructure operators are interested in avoiding underutilization of resources. That is why we developed the Job Track Service and tested it during the development of the grid implementations of MCSAES, MSACM and MSERRHSA. The Job Track Service (JTS) is a grid middleware component which facilitates the provision of Quality of Service in grid infrastructures using gLite middleware, such as EGEE and SEEGRID. The service is based on messaging middleware and uses standard protocols such as AMQP (Advanced Message Queuing Protocol) and XMPP (eXtensible Messaging and Presence Protocol) for real-time communication, while its security model is based on GSI authentication. It enables resource owners to provide the most popular types of QoS of execution to some of their users, using a standardized model. The first version of the service offered services to individual users. In this work we describe a new version of the Job Track Service offering application-specific functionality, geared towards the specific needs of the environmental modelling and protection applications and oriented towards collaborative usage by groups and subgroups of users. We used the modular design of the JTS to implement plugins enabling smoother interaction of the users with the Grid environment. Our experience shows improved response times and a decreased failure rate for executions of the applications. In this work we present such observations from the use of the South East European Grid infrastructure.

  13. Heuristic Scheduling in Grid Environments: Reducing the Operational Energy Demand

    NASA Astrophysics Data System (ADS)

    Bodenstein, Christian

    In a world where more and more businesses seem to trade in an online market, the supply of online services to the ever-growing demand could quickly reach its capacity limits. Online service providers may find themselves maxed out at peak operation levels during high-traffic timeslots but facing too little demand during low-traffic timeslots, although the latter are becoming less frequent. At this point, deciding which user is allocated what level of service becomes essential. The concept of Grid computing could offer a meaningful alternative to conventional super-computing centres. Not only can Grids reach the same computing speeds as some of the fastest supercomputers, but distributed computing harbors a great energy-saving potential. When scheduling projects in such a Grid environment, however, simply assigning processes to systems becomes so computationally complex that schedules are often computed too late to execute, rendering their optimizations useless. Current schedulers attempt to maximize utility, given some sort of constraint, often reverting to heuristics. This optimization often comes at the cost of environmental impact, in this case CO2 emissions. This work proposes an alternate model of energy-efficient scheduling while keeping a respectable amount of economic incentives untouched. Using this model, it is possible to reduce the total energy consumed by a Grid environment using 'just-in-time' flowtime management, paired with ranking nodes by efficiency.
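One way to read the proposed idea of ranking nodes by efficiency is as a greedy heuristic: order nodes by work per joule and give each job to the most efficient node that can still finish by a deadline. A minimal sketch with hypothetical node and job parameters; it is not the paper's actual algorithm:

```python
def schedule(jobs, nodes, deadline):
    """Greedy energy-aware placement: rank nodes by work-per-joule and give
    each job to the most efficient node that can still finish by `deadline`."""
    ranked = sorted(nodes, key=lambda n: n["speed"] / n["power"], reverse=True)
    busy = {n["name"]: 0.0 for n in nodes}          # committed runtime per node
    plan, energy = {}, 0.0
    for job, work in sorted(jobs.items(), key=lambda kv: -kv[1]):  # big jobs first
        for n in ranked:
            runtime = work / n["speed"]
            if busy[n["name"]] + runtime <= deadline:
                busy[n["name"]] += runtime
                plan[job] = n["name"]
                energy += runtime * n["power"]      # joules = seconds * watts
                break
        else:
            raise ValueError(f"no node can finish {job} by the deadline")
    return plan, energy

nodes = [{"name": "A", "speed": 10.0, "power": 50.0},    # 0.2 units/J
         {"name": "B", "speed": 10.0, "power": 100.0}]   # 0.1 units/J
plan, energy = schedule({"j1": 60.0, "j2": 60.0}, nodes, deadline=10.0)
print(plan, energy)   # {'j1': 'A', 'j2': 'B'} 900.0
```

The deadline is what forces work off the most efficient node; without it, a pure energy minimizer would serialize everything onto one machine.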

  14. A Prototype Nonhydrostatic Regional-to-Global Nested-Grid Atmosphere Model for Medium-range Weather Forecasting

    NASA Astrophysics Data System (ADS)

    Harris, L.; Lin, S. J.; Zhou, L.; Chen, J. H.; Benson, R.; Rees, S.

    2016-12-01

    Limited-area convection-permitting models have proven useful for short-range NWP, but are unable to interact with the larger scales needed for longer lead-time skill. A new global forecast model, fvGFS, has been designed combining a modern nonhydrostatic dynamical core, the GFDL Finite-Volume Cubed-Sphere dynamical core (FV3), with operational GFS physics and initial conditions, and has been shown to provide excellent global skill while improving representation of small-scale phenomena. The nested-grid capability of FV3 allows us to build a regional-to-global variable-resolution model to efficiently refine to 3-km grid spacing over the Continental US. The use of two-way grid nesting allows us to reach these resolutions very efficiently, with the operational requirement easily attainable on current supercomputing systems. Even without a boundary-layer or advanced microphysical scheme appropriate for convection-permitting resolutions, the effectiveness of fvGFS can be demonstrated for a variety of weather events. We demonstrate successful proof-of-concept simulations of a variety of phenomena. We show the capability to develop intense hurricanes with realistic fine-scale eyewalls and rainbands. The new model also produces skillful predictions of severe weather outbreaks and of organized mesoscale convective systems. Fine-scale orographic and boundary-layer phenomena are also simulated with excellent fidelity by fvGFS. Further expected improvements are discussed, including the introduction of more sophisticated microphysics and of scale-aware convection schemes.

  15. An approach to secure weather and climate models against hardware faults

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; Dawson, Andrew

    2017-03-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelization to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. In this paper, we present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform model simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13% for the shallow water model.
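The detect-and-restore cycle described above can be sketched with a 2x2 block-averaged backup grid. This is a toy version only: the check compares the coarsened model field against the stored backup, and a "restore" simply broadcasts each coarse cell onto its fine block, whereas the actual scheme would map values back to the model grid more carefully:

```python
import numpy as np

def coarsen(field):
    """2x2 block average: fine model grid -> coarse backup grid."""
    return 0.25 * (field[::2, ::2] + field[1::2, ::2]
                   + field[::2, 1::2] + field[1::2, 1::2])

def check_and_restore(field, backup, tol):
    """Compare the coarsened model field against the backup copy; on a large
    mismatch (a suspected hardware fault), rebuild the field from the backup."""
    if np.max(np.abs(coarsen(field) - backup)) > tol:
        # Toy restore: broadcast each coarse cell onto its 2x2 fine block.
        return np.repeat(np.repeat(backup, 2, axis=0), 2, axis=1), True
    return field, False

field = np.ones((4, 4))
backup = coarsen(field)          # taken while the field was known-good
field[0, 0] = 1e6                # simulate a bit-flip-like corruption
field, hit = check_and_restore(field, backup, tol=1.0)
print(hit, field[0, 0])          # True 1.0
```

The tolerance is the crucial tuning knob: it must be loose enough to ignore legitimate fine-scale evolution between checks, yet tight enough to catch the large excursions a severe fault produces.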

  16. An approach to secure weather and climate models against hardware faults

    NASA Astrophysics Data System (ADS)

    Düben, Peter; Dawson, Andrew

    2017-04-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelisation to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. We present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13% for the shallow water model.

  17. Designing Wind and Solar Power Purchase Agreements to Support Grid Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Neill, Barbara; Chernyakhovskiy, Ilya

    Power purchase agreements (PPAs) represent one of many institutional tools that power systems can use to improve grid services from variable renewable energy (VRE) generators. This fact sheet introduces the concept of PPAs for VRE generators and provides a brief summary of key PPA components that can help VRE generators enhance grid stability and serve as a source of power system flexibility.

  18. A Computational and Experimental Investigation of a Delta Wing with Vertical Tails

    NASA Technical Reports Server (NTRS)

    Krist, Sherrie L.; Washburn, Anthony E.; Visser, Kenneth D.

    2004-01-01

    The flow over an aspect ratio 1 delta wing with twin vertical tails is studied in a combined computational and experimental investigation. This research is conducted in an effort to understand the vortex and fin interaction process. The computational algorithm used solves both the thin-layer Navier-Stokes and the inviscid Euler equations and utilizes a chimera grid-overlapping technique. The results are compared with data obtained from a detailed experimental investigation. The laminar case presented is for an angle of attack of 20 deg and a Reynolds number of 500,000. Good agreement is observed for the physics of the flow field, as evidenced by comparisons of computational pressure contours with experimental flow-visualization images, as well as by comparisons of vortex-core trajectories. While comparisons of the vorticity magnitudes indicate that the computations underpredict the magnitude in the wing primary-vortex-core region, grid embedding improves the computational prediction.

  19. A computational and experimental investigation of a delta wing with vertical tails

    NASA Technical Reports Server (NTRS)

    Krist, Sherrie L.; Washburn, Anthony E.; Visser, Kenneth D.

    1993-01-01

    The flow over an aspect ratio 1 delta wing with twin vertical tails is studied in a combined computational and experimental investigation. This research is conducted in an effort to understand the vortex and fin interaction process. The computational algorithm used solves both the thin-layer Navier-Stokes and the inviscid Euler equations and utilizes a chimera grid-overlapping technique. The results are compared with data obtained from a detailed experimental investigation. The laminar case presented is for an angle of attack of 20 deg and a Reynolds number of 500,000. Good agreement is observed for the physics of the flow field, as evidenced by comparisons of computational pressure contours with experimental flow-visualization images, as well as by comparisons of vortex-core trajectories. While comparisons of the vorticity magnitudes indicate that the computations underpredict the magnitude in the wing primary-vortex-core region, grid embedding improves the computational prediction.

  20. Data distribution service-based interoperability framework for smart grid testbed infrastructure

    DOE PAGES

    Youssef, Tarek A.; Elsayed, Ahmed T.; Mohammed, Osama A.

    2016-03-02

    This study presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurements and control network. The advantages of utilizing the data-centric over the message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamically participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programming interface for the testbed infrastructure were developed in order to facilitate interoperability and remote access to the testbed. This interface allows control, monitoring, and performing of experiments remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).
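
    The data-centric pattern the abstract contrasts with message-centric designs can be illustrated with a toy publish/subscribe bus. This sketch is illustrative only (the class and topic names are invented here); real DDS implementations following the OMG standard additionally provide typed topics, QoS policies, and automatic peer discovery:

```python
from collections import defaultdict

class DataBus:
    """Toy data-centric bus: participants share named topics rather than
    peer addresses, so nodes can join and leave without reconfiguration."""
    def __init__(self):
        self._subs = defaultdict(list)
        self._last = {}          # last published sample per topic

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)
        if topic in self._last:  # late joiners receive the latest sample
            callback(self._last[topic])

    def publish(self, topic, sample):
        self._last[topic] = sample
        for cb in self._subs[topic]:
            cb(sample)

bus = DataBus()
readings = []
bus.subscribe("feeder1/voltage", readings.append)
bus.publish("feeder1/voltage", 230.1)
```

    Because subscribers bind to the topic name, a failed publisher can be replaced without touching any subscriber, which is the "no single point of failure" property the abstract emphasizes.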

  1. NPSS on NASA's Information Power Grid: Using CORBA and Globus to Coordinate Multidisciplinary Aeroscience Applications

    NASA Technical Reports Server (NTRS)

    Lopez, Isaac; Follen, Gregory J.; Gutierrez, Richard; Foster, Ian; Ginsburg, Brian; Larsson, Olle; Martin, Stuart; Tuecke, Steven; Woodford, David

    2000-01-01

    This paper describes a project to evaluate the feasibility of combining Grid and Numerical Propulsion System Simulation (NPSS) technologies, with a view to leveraging the numerous advantages of commodity technologies in a high-performance Grid environment. A team from the NASA Glenn Research Center and Argonne National Laboratory has been studying three problems: a desktop-controlled parameter study using Excel (Microsoft Corporation); a multicomponent application using ADPAC, NPSS, and a controller program; and an aviation safety application running about 100 jobs in near real time. The team has successfully demonstrated (1) a Common Object Request Broker Architecture (CORBA)-to-Globus resource manager gateway that allows CORBA remote procedure calls to be used to control the submission and execution of programs on workstations and massively parallel computers, (2) a gateway from the CORBA Trader service to the Grid information service, and (3) a preliminary integration of CORBA and Grid security mechanisms. We have applied these technologies to two applications related to NPSS, namely a parameter study and a multicomponent simulation.

  2. Expanding access to off-grid rural electrification in Africa: An analysis of community-based micro-grids in Kenya

    NASA Astrophysics Data System (ADS)

    Kirubi, Charles Gathu

    Community micro-grids have played a central role in increasing access to off-grid rural electrification (RE) in many regions of the developing world, notably South Asia. However, the promise of community micro-grids in sub-Saharan Africa remains largely unexplored. My study explores the potential and limits of community micro-grids as options for increasing access to off-grid RE in sub-Saharan Africa. Contextualized in five community micro-grids in rural Kenya, my study is framed through theories of collective action and combines qualitative and quantitative methods, including household surveys, electronic data logging and regression analysis. The main contribution of my research is demonstrating the circumstances under which community micro-grids can contribute to rural development and the conditions under which individuals are likely to initiate and participate in such projects collectively. With regard to rural development, I demonstrate that access to electricity enables the use of electric equipment and tools by small and micro-enterprises, resulting in significant improvement in productivity per worker (100--200%, depending on the task at hand) and a corresponding growth in income levels on the order of 20--70%, depending on the product made. Access to electricity simultaneously enables and improves delivery of social and business services from a wide range of village-level infrastructure (e.g. schools, markets, water pumps) while improving the productivity of agricultural activities. Moreover, when local electricity users have the ability to charge and enforce cost-reflective tariffs and electricity consumption is closely linked to productive uses that generate incomes, cost recovery is feasible. By their nature---a new technology delivering services highly valued by elites and other members, limited local experience and expertise, high capital costs---community micro-grids are good candidates for elite domination. Even so, elite control does not necessarily lead to elite capture. Experiences from different micro-grid settings illustrate the manner in which a coincidence of interest between the elites and the rest of the members, together with access to external support, can create incentives and mechanisms to enable community-wide access to scarce services, hence mitigating elite capture. Moreover, access to external support was found to increase the likelihood of participation by relatively poor households. The policy-relevant message from this research is two-fold. In rural areas with suitable sites for micro-hydro power, the potential for community micro-grids appears considerable, to the extent that this option would seem to represent "the road not taken" as far as policies and initiatives aimed at expanding RE are concerned in Kenya and other African countries with comparable settings. However, local participatory initiatives not complemented by external technical assistance run a considerable risk of locking rural households into relatively more costly and poor-quality services. By taking advantage of existing local organizations and/or building a dense network of them, including micro-finance agencies, the government and development partners can make available to local communities the necessary support---financial, technical or regulatory---essential for efficient design of micro-grids, in addition to facilitating equitable distribution of electricity benefits.

  3. 34 CFR 365.21 - What funds may the State use to provide the IL core services?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 2 2010-07-01 2010-07-01 false What funds may the State use to provide the IL core... may the State use to provide the IL core services? (a) In providing IL services as required under... directly, or through grants or contracts, the following IL core services: (1) Information and referral...

  4. Analysis of the beam halo in negative ion sources by using 3D3V PIC code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyamoto, K., E-mail: kmiyamot@naruto-u.ac.jp; Nishioka, S.; Goto, I.

    The physical mechanism of the formation of the negative ion beam halo and the heat loads of the multi-stage acceleration grids are investigated with a 3D PIC (particle-in-cell) simulation. The following physical mechanism of the beam halo formation is verified: the beam core and the halo consist of the negative ions extracted from the center and the periphery of the meniscus, respectively. This difference in negative ion extraction location results in a geometrical aberration. Furthermore, it is shown that the heat loads on the first acceleration grid and the second acceleration grid are quantitatively improved compared with those for the 2D PIC simulation result.

  5. Assessing Option Grid® practicability and feasibility for facilitating shared decision making: An exploratory study.

    PubMed

    Tsulukidze, Maka; Grande, Stuart W; Gionfriddo, Michael R

    2015-07-01

    To assess the feasibility of Option Grids® for facilitating shared decision making (SDM) in simulated clinical consultations and explore clinicians' views on their practicability. We used a mixed methods approach to analyze clinical consultations using the Observer OPTION instrument and thematic analysis for follow-up interviews with clinicians. Clinicians achieved high scores on information sharing and low scores on preference elicitation and integration. Four themes were identified: (1) Barriers affect practicability of Option Grids®; (2) Option Grids® facilitate the SDM process; (3) Clinicians are aware of the gaps in their practice of SDM; (4) Training and ongoing feedback on the optimal use of Option Grids® are necessary. Use of Option Grids® by clinicians with background knowledge in SDM did not facilitate optimal levels of competency on the SDM core concepts of preference elicitation and integration. Future research must evaluate the impact of training on the use of Option Grids®, and explore how best to help clinicians bridge the gap between knowledge and action. Clinicians proficiently imparting information in simulations struggled to elicit and integrate patient preferences - understanding this gap and developing strategies to close it are the next steps for implementing SDM into clinical practice. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Exploiting the Potential of Data Centers in the Smart Grid

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoying; Zhang, Yu-An; Liu, Xiaojing; Cao, Tengfei

    As the number of cloud computing data centers has grown rapidly in recent years, from the perspective of the smart grid they represent a large and noticeable electric load. In this paper, we focus on the important role and the potential of data centers as controllable loads in the smart grid. We review relevant research in the area of letting data centers participate in the ancillary services market and demand response programs of the grid, and further investigate the possibility of exploiting the impact of data center placement on the grid. Various opportunities and challenges are summarized, which could provide more chances for researchers to explore this field.

  7. Future evolution of distributed systems for smart grid - The challenges and opportunities to using decentralized energy system

    NASA Astrophysics Data System (ADS)

    Konopko, Joanna

    2015-12-01

    A decentralized energy system is a relatively new approach in the power industry. Decentralized energy systems provide promising opportunities for deploying renewable energy sources locally available as well as for expanding access to clean energy services to remote communities. The electricity system of the future must produce and distribute electricity that is reliable and affordable. To accomplish these goals, both the electricity grid and the existing regulatory system must be smarter. In this paper, the major issues and challenges in distributed systems for smart grid are discussed and future trends are presented. The smart grid technologies and distributed generation systems are explored. A general overview of the comparison of the traditional grid and smart grid is also included.

  8. Collaboration Services: Enabling Chat in Disadvantaged Grids

    DTIC Science & Technology

    2014-06-01

    grids in the tactical domain" [2]. The main focus of this group is to identify what we call tactical SOA foundation services. By this we mean which...Here, only IPv4 is supported, as differences relating to IPv4 and IPv6 addressing meant that this functionality was not easily extended to use IPv6 ...multicast groups. Our IPv4 implementation is fully compliant with the specification, whereas the IPv6 implementation uses our own interpretation of

  9. Distributed hierarchical control architecture for integrating smart grid assets during normal and disrupted operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalsi, Karan; Fuller, Jason C.; Somani, Abhishek

    Disclosed herein are representative embodiments of methods, apparatus, and systems for facilitating operation and control of a resource distribution system (such as a power grid). Among the disclosed embodiments is a distributed hierarchical control architecture (DHCA) that enables smart grid assets to effectively contribute to grid operations in a controllable manner, while helping to ensure system stability and equitably rewarding their contribution. Embodiments of the disclosed architecture can help unify the dispatch of these resources to provide both market-based and balancing services.

  10. Microgrid and Plug in Electric Vehicle (PEV) with Vehicle to Grid (V2G) Power Services Capability (Briefing Charts)

    DTIC Science & Technology

    2015-09-01

    unclassified Standard Form 298 (Rev. 8-98) Prescribed by ANSI Std Z39-18 AGENDA 1. Non-Tactical Vehicle-to-Grid (V2G) Projects • Smart Power...Vehicle Technology Expo and the Battery Show Conference Novi, MI, 15-17 Sep 2015 2 For the Nation • Help stabilize smart grid and can generate revenue...demonstration of a smart , aggregated, ad-hoc capable, vehicle to grid (V2G) and Vehicle to Vehicle (V2V) capable fleet power system to support

  11. Information Power Grid (IPG) Tutorial 2003

    NASA Technical Reports Server (NTRS)

    Meyers, George

    2003-01-01

    For NASA and the general community today, Grid middleware: a) provides tools to access and use data sources (databases, instruments, ...); b) provides tools to access computing (unique and generic); c) is an enabler of large-scale collaboration. Dynamically responding to needs is a key selling point of a grid: independent resources can be joined as appropriate to solve a problem. The IPG provides tools to enable the building of frameworks for applications, and provides value-added services to the NASA user base for utilizing resources on the grid in new and more efficient ways.

  12. 20 CFR 663.165 - How long must an individual be in core services in order to be eligible for intensive services?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false How long must an individual be in core...-Stop Delivery System § 663.165 How long must an individual be in core services in order to be eligible for intensive services? There is no Federally-required minimum time period for participation in core...

  13. 20 CFR 663.165 - How long must an individual be in core services in order to be eligible for intensive services?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false How long must an individual be in core...-Stop Delivery System § 663.165 How long must an individual be in core services in order to be eligible for intensive services? There is no Federally-required minimum time period for participation in core...

  14. 20 CFR 663.150 - What core services must be provided to adults and dislocated workers?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false What core services must be provided to adults....150 What core services must be provided to adults and dislocated workers? (a) At a minimum, all of the core services described in WIA section 134(d)(2) and 20 CFR 662.240 must be provided in each local area...

  15. 7 CFR 1709.109 - Eligible projects.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... through on-grid and off-grid renewable energy technologies, energy efficiency, and energy conservation... improvement of: (a) Electric generation, transmission, and distribution facilities, equipment, and services... electric power generation, water or space heating, or process heating and power for the eligible community...

  16. 7 CFR 1709.109 - Eligible projects.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... through on-grid and off-grid renewable energy technologies, energy efficiency, and energy conservation... improvement of: (a) Electric generation, transmission, and distribution facilities, equipment, and services... electric power generation, water or space heating, or process heating and power for the eligible community...

  17. 7 CFR 1709.109 - Eligible projects.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... through on-grid and off-grid renewable energy technologies, energy efficiency, and energy conservation... improvement of: (a) Electric generation, transmission, and distribution facilities, equipment, and services... electric power generation, water or space heating, or process heating and power for the eligible community...

  18. 7 CFR 1709.109 - Eligible projects.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... through on-grid and off-grid renewable energy technologies, energy efficiency, and energy conservation... improvement of: (a) Electric generation, transmission, and distribution facilities, equipment, and services... electric power generation, water or space heating, or process heating and power for the eligible community...

  19. 7 CFR 1709.109 - Eligible projects.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... through on-grid and off-grid renewable energy technologies, energy efficiency, and energy conservation... improvement of: (a) Electric generation, transmission, and distribution facilities, equipment, and services... electric power generation, water or space heating, or process heating and power for the eligible community...

  20. [Tumor Data Interacted System Design Based on Grid Platform].

    PubMed

    Liu, Ying; Cao, Jiaji; Zhang, Haowei; Zhang, Ke

    2016-06-01

    In order to satisfy demands of massive and heterogeneous tumor clinical data processing and the multi-center collaborative diagnosis and treatment of tumor diseases, a Tumor Data Interacted System (TDIS) was established based on a grid platform, so that a virtualization platform for tumor diagnosis services was realized, sharing tumor information in real time and carrying out standardized management. The system adopts Globus Toolkit 4.0 tools to build the open grid service framework and encapsulates data resources based on the Web Services Resource Framework (WSRF). The system uses middleware technology to provide a unified access interface for heterogeneous data interaction, which can optimize the interactive process with virtualized services to query and call tumor information resources flexibly. For massive amounts of heterogeneous tumor data, a federated storage and multiple-authorization mode is selected as the security services mechanism, with real-time monitoring and load balancing. The system can cooperatively manage multi-center heterogeneous tumor data to realize tumor patient data query, sharing and analysis, and can compare and match resources in a typical clinical database or clinical information databases in other service nodes; thus it can assist doctors in consulting similar cases and making up multidisciplinary treatment plans for tumors. Consequently, the system can improve the efficiency of tumor diagnosis and treatment, and promote the development of the collaborative tumor diagnosis model.

  1. The WLCG Messaging Service and its Future

    NASA Astrophysics Data System (ADS)

    Cons, Lionel; Paladin, Massimo

    2012-12-01

    Enterprise messaging is seen as an attractive mechanism to simplify and extend several portions of the Grid middleware, from low-level monitoring to experiment dashboards. The production messaging service currently used by WLCG includes four tightly coupled brokers operated by EGI (running Apache ActiveMQ and designed to host Grid operational tools such as SAM) as well as two dedicated services for ATLAS-DDM and experiment dashboards (currently also running Apache ActiveMQ). In the future, this service is expected to grow in the number of supported applications, brokers, and technologies. The WLCG Messaging Roadmap identified three areas with room for improvement (security, scalability and availability/reliability) as well as ten practical recommendations to address them. This paper describes a messaging service architecture that is in line with these recommendations as well as a software architecture based on reusable components that ease interactions with the messaging service. These two architectures will support the growth of the WLCG messaging service.

  2. GRACC: New generation of the OSG accounting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Retzke, K.; Weitzel, D.; Bhat, S.

    2016-10-14

    Throughout the last decade the Open Science Grid (OSG) has been fielding requests from user communities, resource owners, and funding agencies to provide information about utilization of OSG resources. Requested data include traditional accounting - core-hours utilized - as well as users' certificate Distinguished Name, their affiliations, and field of science. The OSG accounting service, Gratia, developed in 2006, is able to provide this information and much more. However, with the rapid expansion and transformation of the OSG resources and access to them, we are faced with several challenges in adapting and maintaining the current accounting service. The newest changes include, but are not limited to, acceptance of users from numerous university campuses, whose jobs are flocking to OSG resources, expansion into new types of resources (public and private clouds, allocation-based HPC resources, and GPU farms), migration to pilot-based systems, and migration to multicore environments. In order to have a scalable, sustainable and expandable accounting service for the next few years, we are embarking on the development of the next-generation OSG accounting service, GRACC, which will be based on open-source technology and will be compatible with the existing system. It will consist of swappable, independent components, such as Logstash, Elasticsearch, Grafana, and RabbitMQ, that communicate through a data exchange. GRACC will continue to interface with the EGI and XSEDE accounting services and provide information in accordance with existing agreements. Lastly, we will present the current architecture and working prototype.

  3. Grid accounting service: state and future development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levshina, T.; Sehgal, C.; Bockelman, B.

    2014-01-01

    During the last decade, large-scale federated distributed infrastructures have been continually developed and expanded. One of the crucial components of a cyber-infrastructure is an accounting service that collects data related to resource utilization and the identity of users using resources. The accounting service is important for verifying pledged resource allocation per particular groups and users, providing reports for funding agencies and resource providers, and understanding hardware provisioning requirements. It can also be used for end-to-end troubleshooting as well as billing purposes. In this work we describe Gratia, a federated accounting service jointly developed at Fermilab and the Holland Computing Center (HCC) at the University of Nebraska-Lincoln. The Open Science Grid, Fermilab, HCC, and several other institutions have used Gratia in production for several years. The current development activities include expanding Virtual Machines provisioning information, XSEDE allocation usage accounting, and Campus Grids resource utilization. We also identify the direction of future work: improvement and expansion of Cloud accounting, persistent and elastic storage space allocation, and the incorporation of WAN and LAN network metrics.

  4. Service differentiated and adaptive CSMA/CA over IEEE 802.15.4 for Cyber-Physical Systems.

    PubMed

    Xia, Feng; Li, Jie; Hao, Ruonan; Kong, Xiangjie; Gao, Ruixia

    2013-01-01

    Cyber-Physical Systems (CPS) that collect, exchange, manage information, and coordinate actions are an integral part of the Smart Grid. In addition, Quality of Service (QoS) provisioning in CPS, especially in the wireless sensor/actuator networks, plays an essential role in Smart Grid applications. IEEE 802.15.4, which is one of the most widely used communication protocols in this area, still needs to be improved to meet multiple QoS requirements. This is because IEEE 802.15.4 slotted Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) employs static parameter configuration without supporting differentiated services and network self-adaptivity. To address this issue, this paper proposes a priority-based Service Differentiated and Adaptive CSMA/CA (SDA-CSMA/CA) algorithm to provide differentiated QoS for various Smart Grid applications as well as dynamically initialize backoff exponent according to traffic conditions. Simulation results demonstrate that the proposed SDA-CSMA/CA scheme significantly outperforms the IEEE 802.15.4 slotted CSMA/CA in terms of effective data rate, packet loss rate, and average delay.
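
    As a rough illustration of the priority-differentiated, adaptive backoff idea described above (the class names, exponent values, and adaptation threshold below are illustrative assumptions, not parameters from the paper):

```python
import random

# Illustrative priority classes mapped to initial backoff exponents (BE).
# Higher-priority traffic starts with a smaller BE, so its expected
# random backoff (uniform over 0 .. 2**BE - 1 slots) is shorter.
PRIORITY_BE = {"high": 2, "medium": 3, "low": 4}
MAX_BE = 5

def initial_backoff_slots(priority, collision_rate):
    """Draw an initial backoff, in slots, for a frame of a given priority."""
    be = PRIORITY_BE[priority]
    # Adaptive part: under heavy observed collisions, grow BE (capped),
    # trading latency for a lower collision probability.
    if collision_rate > 0.5:
        be = min(be + 1, MAX_BE)
    return random.randint(0, 2 ** be - 1)
```

    In standard slotted CSMA/CA all nodes share one static initial BE; differentiating it per traffic class is what gives delay-sensitive smart grid messages statistically earlier channel access.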

  5. Service Differentiated and Adaptive CSMA/CA over IEEE 802.15.4 for Cyber-Physical Systems

    PubMed Central

    Gao, Ruixia

    2013-01-01

    Cyber-Physical Systems (CPS) that collect, exchange, manage information, and coordinate actions are an integral part of the Smart Grid. In addition, Quality of Service (QoS) provisioning in CPS, especially in the wireless sensor/actuator networks, plays an essential role in Smart Grid applications. IEEE 802.15.4, which is one of the most widely used communication protocols in this area, still needs to be improved to meet multiple QoS requirements. This is because IEEE 802.15.4 slotted Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) employs static parameter configuration without supporting differentiated services and network self-adaptivity. To address this issue, this paper proposes a priority-based Service Differentiated and Adaptive CSMA/CA (SDA-CSMA/CA) algorithm to provide differentiated QoS for various Smart Grid applications as well as dynamically initialize backoff exponent according to traffic conditions. Simulation results demonstrate that the proposed SDA-CSMA/CA scheme significantly outperforms the IEEE 802.15.4 slotted CSMA/CA in terms of effective data rate, packet loss rate, and average delay. PMID:24260021

  6. National Fusion Collaboratory: Grid Computing for Simulations and Experiments

    NASA Astrophysics Data System (ADS)

    Greenwald, Martin

    2004-05-01

    The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure that allows for secure authentication and resource authorization which allows stakeholders to control their own resources such as computers, data and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from the implementation details such that transparency and ease-of-use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring and management have been developed and shared among a wide user base. This approach saves user sites from the laborious effort of maintaining such a large and complex code while at the same time reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid, a highly capable multi-point remote conferencing tool and capabilities for sharing displays and analysis tools over local and wide-area networks.

  7. Guidelines for Implementing Advanced Distribution Management Systems-Requirements for DMS Integration with DERMS and Microgrids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jianhui; Chen, Chen; Lu, Xiaonan

    2015-08-01

    This guideline focuses on the integration of DMS with DERMS and microgrids connected to the distribution grid by defining generic and fundamental design and implementation principles and strategies. It starts by addressing the current status, objectives, and core functionalities of each system, and then discusses the new challenges and the common principles of DMS design and implementation for integration with DERMS and microgrids to realize enhanced grid operation reliability and quality power delivery to consumers while also achieving the maximum energy economics from the DER and microgrid connections.

  8. Global Static Indexing for Real-Time Exploration of Very Large Regular Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pascucci, V; Frank, R

    2001-07-23

    In this paper we introduce a new indexing scheme for progressive traversal and visualization of large regular grids. We demonstrate the potential of our approach by providing a tool that displays at interactive rates planar slices of scalar field data with very modest computing resources. We obtain unprecedented results both in terms of absolute performance and, more importantly, in terms of scalability. On a laptop computer we provide real-time interaction with a 2048^3 grid (8 giga-nodes) using only 20 MB of memory. On an SGI Onyx we slice interactively an 8192^3 grid (1/2 tera-nodes) using only 60 MB of memory. The scheme relies simply on the determination of an appropriate reordering of the rectilinear grid data and a progressive construction of the output slice. The reordering minimizes the amount of I/O performed during the out-of-core computation. The progressive and asynchronous computation of the output provides flexible quality/speed tradeoffs and a time-critical and interruptible user interface.
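
    The abstract does not spell out the reordering; a common choice for I/O-friendly linearization of regular grids is a hierarchical Z-order (Morton) index, sketched here under that assumption:

```python
def morton3(x, y, z, bits=11):
    """Interleave the bits of (x, y, z) into a single Z-order index."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

# Reordering a small grid by Morton index keeps spatially close nodes
# close together on disk, which reduces I/O during out-of-core slicing.
n = 4
order = sorted(
    ((x, y, z) for x in range(n) for y in range(n) for z in range(n)),
    key=lambda p: morton3(*p),
)
```

    With 11 bits per axis this index covers a 2048^3 grid; the nodes of any planar slice then fall into relatively few contiguous runs of the linearized file instead of being scattered across it.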

  9. A Detailed Examination of the GPM Core Satellite Gridded Text Product

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz; Kelley, Owen A.; Kummerow, C.; Huffman, George; Olson, William S.; Kwiatowski, John M.

    2015-01-01

    The Global Precipitation Measurement (GPM) mission quarter-degree gridded-text product has a similar file format and a similar purpose as the Tropical Rainfall Measuring Mission (TRMM) 3G68 quarter-degree product. The GPM text-grid format is an hourly summary of surface precipitation retrievals from various GPM instruments and combinations of GPM instruments. The Goddard Profiling (GPROF) retrieval provides the widest swath (800 km) and performs the retrieval using the GPM Microwave Imager (GMI). The Ku radar provides the widest radar swath (250 km) and also provides continuity with the TRMM Ku Precipitation Radar. GPM's Ku+Ka band matched swath (125 km) provides a dual-frequency precipitation retrieval. The "combined" retrieval (125 km swath) provides a multi-instrument precipitation retrieval based on the GMI, the DPR Ku radar, and the DPR Ka radar. While the data are reported in hourly grids, all hours for a day are packaged into a single text file that is gzipped to reduce file size and to speed up downloading. The data are reported on a 0.25 deg x 0.25 deg grid.
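
    As a minimal worked example of the 0.25 deg x 0.25 deg gridding, mapping a latitude/longitude to its grid cell is simple arithmetic (the row/column origin convention below is an assumption for illustration, not taken from the product specification):

```python
def grid_cell(lat, lon, res=0.25):
    """Return (row, col) of the res-degree cell containing (lat, lon).
    Assumed convention: row 0 starts at -90 deg latitude,
    column 0 at -180 deg longitude."""
    row = int((lat + 90.0) // res)
    col = int((lon + 180.0) // res)
    return row, col
```

    A global 0.25-degree grid under this convention has 720 rows by 1440 columns, with each hourly file summarizing the retrievals that fell into each cell.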

  10. The GENIUS Grid Portal and robot certificates: a new tool for e-Science

    PubMed Central

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Milanesi, Luciano; Maggi, Giorgio Pietro; Vicario, Saverio

    2009-01-01

    Background: Grid technology is the computing model that allows users to share a plethora of distributed computational resources regardless of their geographical location. Up to now, the strict security policies required to access distributed computing resources have been a major limiting factor in broadening the usage of Grids to a wide community of users. Grid security is based on the Public Key Infrastructure (PKI) of X.509 certificates, and the procedure to obtain and manage those certificates is unfortunately not straightforward. A first step toward making Grids more appealing for new users has recently been achieved with the adoption of robot certificates.

    Methods: Robot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful, for instance, to automate grid service monitoring, data processing production, and distributed data collection systems. Basically, these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by anyone interested in running the related applications in a Grid environment through a user-friendly graphical interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended to support authentication based on these robot certificates.

    Results: The work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can thus be extended to users belonging to several scientific domains, providing an asset in raising Grid awareness among a wide number of potential users.

    Conclusion: The adoption of Grid portals extended with robot certificates can really contribute to creating transparent access to the computational resources of Grid infrastructures, enhancing the spread of this new paradigm in researchers' working lives to address new global scientific challenges. The evaluated solution can of course be extended to other portals, applications, and scientific communities. PMID:19534747

  11. The GENIUS Grid Portal and robot certificates: a new tool for e-Science.

    PubMed

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Milanesi, Luciano; Maggi, Giorgio Pietro; Vicario, Saverio

    2009-06-16

    Grid technology is the computing model that allows users to share a plethora of distributed computational resources regardless of their geographical location. Up to now, the strict security policies required to access distributed computing resources have been a major limiting factor in broadening the usage of Grids to a wide community of users. Grid security is based on the Public Key Infrastructure (PKI) of X.509 certificates, and the procedure to obtain and manage those certificates is unfortunately not straightforward. A first step toward making Grids more appealing for new users has recently been achieved with the adoption of robot certificates. Robot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful, for instance, to automate grid service monitoring, data processing production, and distributed data collection systems. Basically, these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by anyone interested in running the related applications in a Grid environment through a user-friendly graphical interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended to support authentication based on these robot certificates. The work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can thus be extended to users belonging to several scientific domains, providing an asset in raising Grid awareness among a wide number of potential users. The adoption of Grid portals extended with robot certificates can really contribute to creating transparent access to the computational resources of Grid infrastructures, enhancing the spread of this new paradigm in researchers' working lives to address new global scientific challenges. The evaluated solution can of course be extended to other portals, applications, and scientific communities.

  12. Spaceflight Operations Services Grid Prototype

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Mehrotra, Piyush; Lisotta, Anthony

    2004-01-01

    Over the years, NASA has developed many types of technologies and conducted many types of science, resulting in numerous variations of operations, data, and applications. For example, operations range from deep-space projects managed by JPL, Saturn and Shuttle operations managed from JSC and KSC, and ISS science operations managed from MSFC, to numerous low-Earth-orbit satellites managed from GSFC; these are varied and intrinsically different but require many of the same types of services to fulfill their missions. Large data sets (databases) of Shuttle flight data, solar system projects, and Earth-observing data also exist which, because of their varied and sometimes outdated technologies, have not been fully examined for additional information and knowledge. Many of the applications and systems supporting operational services (e.g., voice, video, telemetry, and commanding) are outdated and obsolete. The vast amounts of data are stored in various formats, at various locations, and range over many years. The ability to conduct unified space operations, access disparate data sets, and develop systems and services that can provide operational services does not currently exist in any useful form. In addition, adding new services to existing operations is generally expensive and, under current budget constraints, not feasible on any broad level of implementation. A discussion of each of these services follows. The Spaceflight User-based Services are those services required to conduct space flight operations. Grid Services are those services that will be used, through middleware software, to overcome some or all of the problems that currently exist. Network Services, which are discussed briefly, are crucial to any type of remedy and are evolving adequately to support any technology currently in development.

  13. Parallel grid library for rapid and flexible simulation development

    NASA Astrophysics Data System (ADS)

    Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.

    2013-04-01

    We present an easy-to-use and flexible grid library for developing highly scalable parallel simulations. The distributed Cartesian cell-refinable grid (dccrg) supports adaptive mesh refinement and allows an arbitrary C++ class to be used as cell data. The amount of data in grid cells can vary both in space and time, allowing dccrg to be used in very different types of simulations, for example in fluid and particle codes. Dccrg transfers the data between neighboring cells on different processes transparently and asynchronously, allowing one to overlap computation and communication. This enables excellent scalability up to at least 32768 cores in magnetohydrodynamic tests, depending on the problem and hardware. In the version of dccrg presented here, part of the mesh metadata is replicated between MPI processes, reducing the scalability of adaptive mesh refinement (AMR) to between 200 and 600 processes. Dccrg is free software that anyone can use, study, and modify, and is available at https://gitorious.org/dccrg. Users are also kindly requested to cite this work when publishing results obtained with dccrg.

    Catalogue identifier: AEOM_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOM_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: GNU Lesser General Public License version 3
    No. of lines in distributed program, including test data, etc.: 54975
    No. of bytes in distributed program, including test data, etc.: 974015
    Distribution format: tar.gz
    Programming language: C++
    Computer: PC, cluster, supercomputer
    Operating system: POSIX. The code has been parallelized using MPI and tested with 1-32768 processes.
    RAM: 10 MB-10 GB per process
    Classification: 4.12, 4.14, 6.5, 19.3, 19.10, 20
    External routines: MPI-2 [1], boost [2], Zoltan [3], sfc++ [4]
    Nature of problem: Grid library supporting arbitrary data in grid cells, parallel adaptive mesh refinement, transparent remote neighbor data updates, and load balancing.
    Solution method: The simulation grid is represented by an adjacency list (graph) with vertices stored in a hash table and edges in contiguous arrays. The Message Passing Interface standard is used for parallelization. Cell data is given as a template parameter when instantiating the grid.
    Restrictions: Logically Cartesian grid.
    Running time: Depends on the hardware, problem, and solution method. Small problems can be solved in under a minute; very large problems can take weeks. The examples and tests provided with the package take less than about one minute using default options. In the version of dccrg presented here, the speed of adaptive mesh refinement is at most of the order of 10^6 total created cells per second.
    [1] http://www.mpi-forum.org/
    [2] http://www.boost.org/
    [3] K. Devine, E. Boman, R. Heaphy, B. Hendrickson, C. Vaughan, Zoltan data management services for parallel dynamic applications, Comput. Sci. Eng. 4 (2002) 90-97. http://dx.doi.org/10.1109/5992.988653
    [4] https://gitorious.org/sfc++

  14. A Fast Full Tensor Gravity computation algorithm for High Resolution 3D Geologic Interpretations

    NASA Astrophysics Data System (ADS)

    Jayaram, V.; Crain, K.; Keller, G. R.

    2011-12-01

    We present an algorithm to rapidly calculate the vertical gravity and full tensor gravity (FTG) values due to a 3-D geologic model. This algorithm can be implemented on single-core and multi-core CPU and graphical processing unit (GPU) architectures. Our technique is based on the line-element approximation with a constant density within each grid cell. This type of parameterization is well suited for high-resolution elevation datasets with grid sizes typically in the range of 1 m to 30 m. For the large high-resolution data grids in our studies we employ a pre-filtered mipmap-pyramid representation of the grid data known as the geometry clipmap. The clipmap was first introduced by Microsoft Research in 2004 for fly-through terrain visualization. This method caches nested rectangular extents of down-sampled data layers in the pyramid to create a view-dependent calculation scheme. Together with the simple grid structure, this allows the gravity to be computed conveniently on the fly, or stored in a highly compressed format. Neither of these capabilities has previously been available. Our approach can perform rapid calculations on large topographies, including crustal-scale models derived from complex geologic interpretations. For example, we used a 1 km sphere model consisting of 105,000 cells at 10 m resolution with 100,000 gravity stations. The line-element approach took less than 90 seconds to compute the FTG and vertical gravity on an Intel Core i7 CPU at 3.07 GHz utilizing just a single core. Also, unlike traditional gravity computational algorithms, the line-element approach can calculate gravity effects at locations interior or exterior to the model. The only condition that must be met is that the observation point cannot be located directly above the line element; therefore, we perform a location test and then apply the appropriate formulation to those data points. We will present and compare the computational performance of the traditional prism method versus the line-element approach on different CPU-GPU system configurations. The algorithm calculates the expected gravity at station locations where the observed gravity and FTG data were acquired, and can be used for fast forward-model calculations of 3-D geologic interpretations for data from airborne, space, and submarine gravity and FTG instrumentation.
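
    The authors' exact formulation is not reproduced in this abstract. As a sketch of the general idea, the closed-form vertical attraction of a single vertical line element is a standard result, used here purely to illustrate why a per-cell line element is cheap to evaluate; the function and its parameterization are assumptions, not the paper's code:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def line_element_gz(r: float, z1: float, z2: float, lam: float) -> float:
    """Vertical gravity at horizontal distance r from a vertical line mass.

    The line extends from depth z1 to depth z2 (z2 > z1 >= 0) below the
    observation level and has linear density lam (kg/m).  Integrating
    G*lam*z/(r^2 + z^2)^(3/2) dz from z1 to z2 gives the closed form
    below; note r must be nonzero when z1 = 0, matching the restriction
    that the observation point cannot sit directly above the element.
    """
    return G * lam * (1.0 / (r**2 + z1**2) ** 0.5 - 1.0 / (r**2 + z2**2) ** 0.5)
```

    Summing this closed form over every grid cell's line element gives the model's vertical gravity at a station without any volume integration.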

  15. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    USGS Publications Warehouse

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

    We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating near-shore tsunami waves from the Tohoku 2011 event, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11, the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and the Fukushima nuclear power plants, over which a finest grid spacing of 20 meters is achieved through 4-level AMR. This simulation yields quite good predictions of the wave heights and travel time of the tsunami waves. © 2011 IEEE.

  16. Out-of-Core Streamline Visualization on Large Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Ueng, Shyh-Kuang; Sikorski, K.; Ma, Kwan-Liu

    1997-01-01

    It is advantageous for computational scientists to have the capability to perform interactive visualization on their desktop workstations. For data on large unstructured meshes, this capability is not generally available. In particular, particle tracing on unstructured grids can result in a high percentage of non-contiguous memory accesses and therefore may perform very poorly with virtual-memory paging schemes. The alternative of visualizing a lower resolution of the data degrades the original high-resolution calculations. This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored in disk files for fast data retrieval. A memory-management policy tailored to the streamline calculations is used, such that during streamline construction only a very small amount of data is brought into main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from disk is significantly reduced and good memory performance results. This out-of-core algorithm makes interactive streamline visualization of large unstructured-grid data sets possible on a single mid-range workstation with relatively low main-memory capacity: 5-20 megabytes. Our test results also show that this approach is much more efficient than relying on virtual memory and the operating system's paging algorithms.
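
    The octree partitioning and scheduling are not detailed in this abstract. As an illustrative sketch only, the on-demand memory-management policy can be modeled as a bounded LRU cache over disk-resident blocks, where the loader callable stands in for reading an octree subset's file:

```python
from collections import OrderedDict

class BlockCache:
    """Minimal LRU cache for out-of-core mesh blocks (illustrative sketch).

    Keeps a small, bounded set of recently used blocks in RAM and fetches
    others from disk on demand, mirroring the idea of a memory budget far
    below the data-set size; class and parameter names are hypothetical.
    """
    def __init__(self, loader, max_blocks: int = 8):
        self.loader = loader          # callable: block_id -> block data
        self.max_blocks = max_blocks
        self.blocks = OrderedDict()
        self.disk_reads = 0

    def get(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # mark most recently used
            return self.blocks[block_id]
        data = self.loader(block_id)            # simulated disk fetch
        self.disk_reads += 1
        self.blocks[block_id] = data
        if len(self.blocks) > self.max_blocks:
            self.blocks.popitem(last=False)     # evict least recently used
        return data
```

    A streamline tracer would call `get` with the octree cell containing the current particle position, so consecutive integration steps mostly hit the cache.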

  17. Economic models for management of resources in peer-to-peer and grid computing

    NASA Astrophysics Data System (ADS)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

    The accelerated development of Peer-to-Peer (P2P) and Grid computing has positioned them as promising next-generation computing platforms. They enable the creation of Virtual Enterprises (VEs) for sharing resources distributed across the world. However, resource management, application development, and usage models in these environments are a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, there exist various economic models for setting the price of goods based on supply and demand and their value to the user; they include commodity markets, posted prices, tenders, and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the infrastructure necessary to realize them. In addition to the normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed, which contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
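
    As an illustrative sketch only (not the Nimrod/G implementation), a deadline- and cost-based broker over posted prices might greedily buy the cheapest capacity that still meets the deadline; the data shapes and function name are assumptions:

```python
def schedule(resources, jobs, deadline):
    """Greedy cost-minimizing broker sketch, Nimrod/G-inspired.

    resources: list of (name, price_per_job, jobs_per_hour) posted offers.
    Returns ({name: jobs_assigned}, total_cost), or None if the deadline
    cannot be met even when every resource is used.
    """
    plan, cost, remaining = {}, 0.0, jobs
    for name, price, rate in sorted(resources, key=lambda r: r[1]):
        if remaining <= 0:
            break
        take = min(remaining, int(rate * deadline))  # capacity within deadline
        if take > 0:
            plan[name] = take
            cost += price * take
            remaining -= take
    return (plan, cost) if remaining <= 0 else None
```

    A time-optimization strategy would instead sort by rate and stop once the deadline constraint binds; both fit the same posted-price interaction model.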

  18. A Roadmap for caGrid, an Enterprise Grid Architecture for Biomedical Research

    PubMed Central

    Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Hong, Neil Chue

    2012-01-01

    caGrid is a middleware system that combines the Grid computing, service-oriented architecture, and model-driven architecture paradigms to support the development of interoperable data and analytical resources and the federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG™) program. This program was established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research, with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from the biomedical research, Grid computing, and high-performance computing communities. PMID:18560123

  19. A roadmap for caGrid, an enterprise Grid architecture for biomedical research.

    PubMed

    Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Chue Hong, Neil

    2008-01-01

    caGrid is a middleware system that combines the Grid computing, service-oriented architecture, and model-driven architecture paradigms to support the development of interoperable data and analytical resources and the federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG) program. This program was established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research, with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from the biomedical research, Grid computing, and high-performance computing communities.

  20. A sustainability model based on cloud infrastructures for core and downstream Copernicus services

    NASA Astrophysics Data System (ADS)

    Manunta, Michele; Calò, Fabiana; De Luca, Claudio; Elefante, Stefano; Farres, Jordi; Guzzetti, Fausto; Imperatore, Pasquale; Lanari, Riccardo; Lengert, Wolfgang; Zinno, Ivana; Casu, Francesco

    2014-05-01

    The upcoming Sentinel missions have been designed to be the first remote sensing satellite system devoted to operational services. In particular, the Synthetic Aperture Radar (SAR) Sentinel-1 sensor, dedicated to acquiring globally over land in interferometric mode, guarantees an unprecedented capability to investigate and monitor Earth-surface deformations related to natural and man-made hazards. Thanks to its global coverage strategy and 12-day revisit time, together with a free and open data policy, such a system will allow extensive application of Differential Interferometric SAR (DInSAR) techniques. In this framework, the European Commission has been funding several projects through the GMES and Copernicus programs aimed at preparing the user community for the operational and extensive use of Sentinel-1 products for risk mitigation and management purposes. Among them, the FP7-DORIS project (www.doris-project.eu), an advanced GMES downstream service coordinated by the Italian National Research Council (CNR), is based on the full exploitation of advanced DInSAR products in landslide and subsidence contexts. In particular, DORIS has developed innovative scientific techniques and methodologies to support Civil Protection Authorities (CPA) during the pre-event, event, and post-event phases of the risk management cycle. Nonetheless, the huge data stream expected from the Sentinel-1 satellite may jeopardize the effective use of such data in emergency response and security scenarios. This potential bottleneck can be overcome through the development of modern infrastructures able to efficiently provide computing resources as well as advanced services for big data management, processing, and dissemination.

    In this framework, CNR and ESA have established a cooperation to foster the use of GRID and cloud computing platforms for remote sensing data processing, and to make advanced and innovative tools for DInSAR product generation and exploitation available to a large audience. In particular, CNR is porting the multi-temporal DInSAR technique referred to as the Small Baseline Subset (SBAS) onto the ESA G-POD (Grid Processing On Demand) and CIOP (Cloud Computing Operational Pilot) platforms (Elefante et al., 2013) within the SuperSites Exploitation Platform (SSEP) project, whose aim is to contribute to the development of an ecosystem for big geo-data processing and dissemination. This work focuses on presenting the main results achieved by the DORIS project concerning the use of advanced DInSAR products to support CPA during the risk management cycle. Furthermore, based on the DORIS experience, a sustainability model for Core and Downstream Copernicus services based on the effective exploitation of cloud platforms is proposed. In this framework, the remote sensing community, both service providers and users, can significantly benefit from the Helix Nebula - The Science Cloud initiative, created by European scientific institutions, agencies, SMEs, and enterprises to pave the way for the development and exploitation of a cloud computing infrastructure for science. REFERENCES: Elefante, S., Imperatore, P., Zinno, I., Manunta, M., Mathot, E., Brito, F., Farres, J., Lengert, W., Lanari, R., Casu, F., 2013, "SBAS-DINSAR time series generation on cloud computing platforms", IEEE IGARSS Conference, Melbourne (AU), July 2013.

  1. Formation of Virtual Organizations in Grids: A Game-Theoretic Approach

    NASA Astrophysics Data System (ADS)

    Carroll, Thomas E.; Grosu, Daniel

    The execution of large scale grid applications requires the use of several computational resources owned by various Grid Service Providers (GSPs). GSPs must form Virtual Organizations (VOs) to be able to provide the composite resource to these applications. We consider grids as self-organizing systems composed of autonomous, self-interested GSPs that will organize themselves into VOs with every GSP having the objective of maximizing its profit. We formulate the resource composition among GSPs as a coalition formation problem and propose a game-theoretic framework based on cooperation structures to model it. Using this framework, we design a resource management system that supports the VO formation among GSPs in a grid computing system.
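
    A minimal sketch of the coalition-formation objective, assuming a user-supplied characteristic function `value` that maps a set of GSPs to the coalition's total profit (the paper's cooperation-structure framework is richer than this brute-force search, which is illustrative only):

```python
from itertools import combinations

def best_coalition(gsps, value):
    """Exhaustively find the profit-maximizing coalition of GSPs.

    value: callable taking a frozenset of GSP names and returning the
    coalition's profit.  Exponential in len(gsps), so this is only a
    sketch of the objective, not a scalable formation mechanism.
    """
    best, best_v = frozenset(), 0.0
    for k in range(1, len(gsps) + 1):
        for combo in combinations(gsps, k):
            v = value(frozenset(combo))
            if v > best_v:
                best, best_v = frozenset(combo), v
    return best, best_v
```

    Self-interested GSPs would only stay in such a coalition if their share of `best_v` exceeds what they earn alone, which is the stability question the game-theoretic framework addresses.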

  2. The Language Grid: supporting intercultural collaboration

    NASA Astrophysics Data System (ADS)

    Ishida, T.

    2018-03-01

    A variety of language resources already exist online. Unfortunately, since many language resources have usage restrictions, it is virtually impossible for each user to negotiate with every language resource provider when combining several resources to achieve the intended purpose. To increase the accessibility and usability of language resources (dictionaries, parallel texts, part-of-speech taggers, machine translators, etc.), we proposed the Language Grid [1]; it wraps existing language resources as atomic services and enables users to create new services by combining the atomic services, and reduces the negotiation costs related to intellectual property rights [4]. Our slogan is “language services from language resources.” We believe that modularization with recombination is the key to creating a full range of customized language environments for various user communities.
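
    The "language services from language resources" idea can be sketched as function composition, with the caveat that the real Language Grid composes remote web services with rights management, not local callables; the helper below is purely illustrative:

```python
def compose(*services):
    """Chain atomic language services into one composite service (sketch).

    Each service is modeled as a callable str -> str, standing in for a
    wrapped language resource (tokenizer, translator, etc.).
    """
    def composite(text: str) -> str:
        for svc in services:
            text = svc(text)
        return text
    return composite
```

    A new service is then just a recombination of existing atomic ones, which is the modularization-with-recombination point the abstract makes.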

  3. A Security-façade Library for Virtual-observatory Software

    NASA Astrophysics Data System (ADS)

    Rixon, G.

    2009-09-01

    The security-façade library implements, for Java, IVOA's security standards. It supports the authentication mechanisms for SOAP and REST web-services, the sign-on mechanisms (with MyProxy, AstroGrid Accounts protocol or local credential-caches), the delegation protocol, and RFC3820-enabled HTTPS for Apache Tomcat. Using the façade, a developer who is not a security specialist can easily add access control to a virtual-observatory service and call secured services from an application. The library has been an internal part of AstroGrid software for some time and it is now offered for use by other developers.

  4. Device Access Abstractions for Resilient Information Architecture Platform for Smart Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubey, Abhishek; Karsai, Gabor; Volgyesi, Peter

    An open application platform distributes intelligence and control capability to local endpoints (nodes), reducing total network traffic, improving the speed of local actions by avoiding latency, and improving reliability by reducing dependencies on numerous devices and communication interfaces. The platform must be multi-tasking and able to host multiple applications running simultaneously. Given such a system, the core functions of power grid control systems (grid state determination, low-level control, fault intelligence and reconfiguration, outage intelligence, power quality measurement, remote asset monitoring, configuration management, and power and energy management, including local distributed energy resources such as wind, solar, and energy storage) can eventually be distributed. However, making this move requires extensive regression testing of systems to prove out new technologies, such as phasor measurement units (PMUs). Additionally, as the complexity of the systems increases with the inclusion of new functionality (especially at the distribution and consumer levels), hidden coupling issues become a challenge, with possible N-way interactions both known and unknown to device and application developers. Therefore, it is very important to provide core abstractions that ensure uniform operational semantics across such interactions. In this paper, we describe the pattern for abstracting device interactions that we have developed for the RIAPS platform, in the context of a microgrid control application we have developed.
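
    The device-abstraction pattern might be sketched as a uniform read/write contract behind which concrete devices (and test doubles for regression testing) are interchangeable. The class names and interface below are hypothetical illustrations, not the actual RIAPS API:

```python
from abc import ABC, abstractmethod

class Device(ABC):
    """Uniform device-access abstraction (illustrative of the pattern only).

    Applications talk to every endpoint through the same read/write
    contract, so interactions between applications and devices share one
    operational semantics regardless of the concrete hardware behind it.
    """
    @abstractmethod
    def read(self, point: str) -> float: ...

    @abstractmethod
    def write(self, point: str, value: float) -> None: ...

class SimulatedPMU(Device):
    """A fake phasor measurement unit for regression-testing applications."""
    def __init__(self):
        self.points = {"frequency": 60.0, "voltage": 1.0}

    def read(self, point: str) -> float:
        return self.points[point]

    def write(self, point: str, value: float) -> None:
        self.points[point] = value
```

    Swapping `SimulatedPMU` for a driver-backed implementation requires no change in application code, which is what makes extensive regression testing of new device types tractable.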

  5. Device Access Abstractions for Resilient Information Architecture Platform for Smart Grid

    DOE PAGES

    Dubey, Abhishek; Karsai, Gabor; Volgyesi, Peter; ...

    2018-06-12

    An open application platform distributes intelligence and control capability to local endpoints (nodes), reducing total network traffic, improving the speed of local actions by avoiding latency, and improving reliability by reducing dependencies on numerous devices and communication interfaces. The platform must be multi-tasking and able to host multiple applications running simultaneously. Given such a system, the core functions of power grid control systems (grid state determination, low-level control, fault intelligence and reconfiguration, outage intelligence, power quality measurement, remote asset monitoring, configuration management, and power and energy management, including local distributed energy resources such as wind, solar, and energy storage) can eventually be distributed. However, making this move requires extensive regression testing of systems to prove out new technologies, such as phasor measurement units (PMUs). Additionally, as the complexity of the systems increases with the inclusion of new functionality (especially at the distribution and consumer levels), hidden coupling issues become a challenge, with possible N-way interactions both known and unknown to device and application developers. Therefore, it is very important to provide core abstractions that ensure uniform operational semantics across such interactions. In this paper, we describe the pattern for abstracting device interactions that we have developed for the RIAPS platform, in the context of a microgrid control application we have developed.

  6. The value of plug-in hybrid electric vehicles as grid resources

    DOE PAGES

    Sioshansi, Ramteen; Denholm, Paul

    2010-07-01

    Plug-in hybrid electric vehicles (PHEVs) can become valuable resources for an electric power system by providing vehicle-to-grid (V2G) services, such as energy storage and ancillary services. We use a unit commitment model of the Texas power system to simulate system operations with different-sized PHEV fleets that do and do not provide V2G services, to estimate the value of those services. We demonstrate that a PHEV fleet can provide benefits to the system, mainly through the provision of ancillary services, reducing the need to reserve conventional generator capacity. Moreover, our analysis shows that PHEV owners are made better off by providing V2G services, and we demonstrate that these benefits can reduce the time it takes to recover the higher upfront capital cost of a PHEV compared with other vehicle types.

  7. DCMIP2016: a review of non-hydrostatic dynamical core design and intercomparison of participating models

    NASA Astrophysics Data System (ADS)

    Ullrich, Paul A.; Jablonowski, Christiane; Kent, James; Lauritzen, Peter H.; Nair, Ramachandran; Reed, Kevin A.; Zarzycki, Colin M.; Hall, David M.; Dazlich, Don; Heikes, Ross; Konor, Celal; Randall, David; Dubos, Thomas; Meurdesoif, Yann; Chen, Xi; Harris, Lucas; Kühnlein, Christian; Lee, Vivian; Qaddouri, Abdessamad; Girard, Claude; Giorgetta, Marco; Reinert, Daniel; Klemp, Joseph; Park, Sang-Hun; Skamarock, William; Miura, Hiroaki; Ohno, Tomoki; Yoshida, Ryuji; Walko, Robert; Reinecke, Alex; Viner, Kevin

    2017-12-01

    Atmospheric dynamical cores are a fundamental component of global atmospheric modeling systems and are responsible for capturing the dynamical behavior of the Earth's atmosphere via numerical integration of the Navier-Stokes equations. These systems have existed in one form or another for over half a century, with the earliest discretizations having now evolved into a complex ecosystem of algorithms and computational strategies. In essence, no two dynamical cores are alike, and their individual successes suggest that no perfect model exists. To better understand modern dynamical cores, this paper aims to provide a comprehensive review of 11 non-hydrostatic dynamical cores, drawn from modeling centers and groups that participated in the 2016 Dynamical Core Model Intercomparison Project (DCMIP) workshop and summer school. This review covers each system's choice of model grid, variable placement, vertical coordinate, prognostic equations, and temporal discretization, as well as the diffusion, stabilization, filters, and fixers it employs.

  8. 20 CFR 652.206 - May a State use funds authorized under the Act to provide “core services” and “intensive services...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... § 652.206 May a State use funds authorized under the Act to provide “core services” and “intensive... core services, as defined at section 134(d)(2) of WIA and discussed at 20 CFR 663.150, and may be used.... Funds authorized under section 7(b) of the Act may be used to provide core or intensive services. Core...

  9. 20 CFR 652.206 - May a State use funds authorized under the Act to provide “core services” and “intensive services...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... § 652.206 May a State use funds authorized under the Act to provide “core services” and “intensive... core services, as defined at section 134(d)(2) of WIA and discussed at 20 CFR 663.150, and may be used.... Funds authorized under section 7(b) of the Act may be used to provide core or intensive services. Core...

  10. Emissions impacts and benefits of plug-in hybrid electric vehicles and vehicle-to-grid services

    DOE PAGES

    Sioshansi, Ramteen; Denholm, Paul

    2009-01-22

    Plug-in hybrid electric vehicles (PHEVs) have been promoted as a potential technology to reduce emissions of greenhouse gases and other pollutants by using electricity instead of petroleum, and by improving electric system efficiency by providing vehicle-to-grid (V2G) services. We use an electric power system model to explicitly evaluate the change in generator dispatches resulting from PHEV deployment in the Texas grid, and apply fixed and non-parametric estimates of generator emissions rates, to estimate the resulting changes in generation emissions. Here, we find that by using the flexibility of when vehicles may be charged, generator efficiency can be increased substantially. By changing generator dispatch, a PHEV fleet of up to 15% of light-duty vehicles can actually decrease net generator NOx emissions during the ozone season, despite the additional charging load. By adding V2G services, such as spinning reserves and energy storage, CO2, SO2, and NOx emissions can be reduced even further.

  11. Emissions impacts and benefits of plug-in hybrid electric vehicles and vehicle-to-grid services.

    PubMed

    Sioshansi, Ramteen; Denholm, Paul

    2009-02-15

    Plug-in hybrid electric vehicles (PHEVs) have been promoted as a potential technology to reduce emissions of greenhouse gases and other pollutants by using electricity instead of petroleum, and by improving electric system efficiency by providing vehicle-to-grid (V2G) services. We use an electric power system model to explicitly evaluate the change in generator dispatches resulting from PHEV deployment in the Texas grid, and apply fixed and non-parametric estimates of generator emissions rates, to estimate the resulting changes in generation emissions. We find that by using the flexibility of when vehicles may be charged, generator efficiency can be increased substantially. By changing generator dispatch, a PHEV fleet of up to 15% of light-duty vehicles can actually decrease net generator NOx emissions during the ozone season, despite the additional charging load. By adding V2G services, such as spinning reserves and energy storage, CO2, SO2, and NOx emissions can be reduced even further.

  12. The HEPCloud Facility: elastic computing for High Energy Physics - The NOvA Use Case

    NASA Astrophysics Data System (ADS)

    Fuess, S.; Garzoglio, G.; Holzman, B.; Kennedy, R.; Norman, A.; Timm, S.; Tiradani, A.

    2017-10-01

    The need for computing in the HEP community follows cycles of peaks and valleys mainly driven by conference dates, accelerator shutdowns, holiday schedules, and other factors. Because of this, the classical method of provisioning these resources at providing facilities has drawbacks such as potential overprovisioning. As the appetite for computing increases, however, so does the need to maximize cost efficiency by developing a model for dynamically provisioning resources only when needed. To address this issue, the HEPCloud project was launched by the Fermilab Scientific Computing Division in June 2015. Its goal is to develop a facility that provides a common interface to a variety of resources, including local clusters, grids, high performance computers, and community and commercial Clouds. Initially targeted experiments include CMS and NOvA, as well as other Fermilab stakeholders. In its first phase, the project has demonstrated the use of the “elastic” provisioning model offered by commercial clouds, such as Amazon Web Services. In this model, resources are rented and provisioned automatically over the Internet upon request. In January 2016, the project demonstrated the ability to increase the total amount of global CMS resources by 58,000 cores from 150,000 cores - a 38 percent increase - in preparation for the Rencontres de Moriond. In March 2016, the NOvA experiment also demonstrated resource burst capabilities with an additional 7,300 cores, achieving a scale almost four times as large as the locally allocated resources and utilizing AWS S3 storage to optimize data handling operations and costs. NOvA used the same familiar services as for local computations, such as data handling and job submission, in preparation for the Neutrino 2016 conference. In both cases, the cost was contained by the use of the Amazon Spot Instance Market and the Decision Engine, a HEPCloud component that aims at minimizing cost and job interruption.
This paper describes the Fermilab HEPCloud Facility and the challenges overcome for the CMS and NOvA communities.

  13. Grid workflow validation using ontology-based tacit knowledge: A case study for quantitative remote sensing applications

    NASA Astrophysics Data System (ADS)

    Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi

    2017-01-01

    Workflow for remote sensing quantitative retrieval is the 'bridge' between Grid services and Grid-enabled applications of remote sensing quantitative retrieval. Workflow averts low-level implementation details of the Grid and hence enables users to focus on higher levels of application. The workflow for remote sensing quantitative retrieval plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction and implementation of large-scale complicated applications of remote sensing science. The validation of workflow is important in order to support large-scale sophisticated scientific computation processes with enhanced performance and to minimize potential waste of time and resources. To research the semantic correctness of user-defined workflows, in this paper, we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing the knowledge with ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data source consistency error validation and parameters matching error validation.
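
    The data source consistency check described above can be sketched with a plain dictionary standing in for the Protégé ontology: each workflow step declares what it consumes and produces, and domain knowledge says which types are compatible. All type and step names below are invented for illustration.

```python
# A plain dict stands in for the domain ontology: for each consumer input
# type, the set of producer output types it can accept.
ONTOLOGY = {
    "reflectance": {"radiance_calibrated", "reflectance"},
    "aod": {"reflectance"},
}

def validate(workflow):
    """workflow: list of (step_name, input_type, output_type) in pipeline order.
    Returns a list of data source consistency errors (empty if consistent)."""
    errors = []
    for (prev, _, out), (step, inp, _) in zip(workflow, workflow[1:]):
        # A type is always compatible with itself unless the ontology narrows it.
        if out not in ONTOLOGY.get(inp, {inp}):
            errors.append(f"{step}: expects {inp}, got {out} from {prev}")
    return errors

wf = [("calibrate", "raw", "radiance_calibrated"),
      ("to_reflectance", "reflectance", "reflectance"),
      ("retrieve_aod", "aod", "aod_map")]
print(validate(wf))  # [] -> the chain is consistent
```

    Dropping the `to_reflectance` step would make the chain fail validation, which is exactly the class of user error the ontology-based check is meant to catch before the workflow runs on the Grid.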

  14. Preface: Workshop on Off-Grid Technology Systems

    NASA Astrophysics Data System (ADS)

    Alonso-Marroquin, Fernando

    2017-06-01

    Off-grid houses are dwellings that do not rely on water supply, sewer, or the electrical power grid, and are able to operate independently of all public utility services. These houses are ideal for remote communities or populations suffering natural or human-made disasters. Our aim is to develop compact and affordable off-grid technologies by integrating high-end nano-engineering with systems that imitate natural biological processes. The key areas of focus in the workshop were: solar energy harvesting using nanotechnology, wind energy harvesting from vertical-axis wind turbines, supercapacitor energy storage systems, treatment of greywater, and green roofs to achieve air comfort.

  15. The Optimization dispatching of Micro Grid Considering Load Control

    NASA Astrophysics Data System (ADS)

    Zhang, Pengfei; Xie, Jiqiang; Yang, Xiu; He, Hongli

    2018-01-01

    This paper proposes an economic operation model for the optimized control of a micro-grid system. It coordinates new energy sources and storage operation with diesel generator output to achieve economic operation of the micro-grid. The economic operation model is transformed into a mixed-integer programming problem and solved with mature commercial software. The new model is shown to be economical, and the load control strategy reduces the number of charge and discharge cycles of the energy storage devices, extending their service life to a certain extent.
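
    The dispatch idea can be conveyed with a simple merit-order rule. The paper itself solves a mixed-integer program with commercial software; this greedy sketch only illustrates the intuition that storage is cycled only when cheaper sources fall short, which is how the load control strategy reduces charge/discharge cycles.

```python
def dispatch(load_kw, renewables_kw, storage_kw, diesel_kw):
    """Meet load from renewables first, then storage, then diesel.
    Arguments are available capacities in kW; returns the dispatch plan."""
    plan = {}
    for name, cap in [("renewables", renewables_kw),
                      ("storage", storage_kw),
                      ("diesel", diesel_kw)]:
        used = min(load_kw, cap)   # take as much as this source can give
        plan[name] = used
        load_kw -= used
    plan["unserved"] = load_kw     # anything left over goes unserved
    return plan

print(dispatch(load_kw=120, renewables_kw=80, storage_kw=30, diesel_kw=50))
# {'renewables': 80, 'storage': 30, 'diesel': 10, 'unserved': 0}
```

    A real economic dispatch would additionally weigh fuel and cycling costs per time step, which is what the mixed-integer formulation captures.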

  16. Grid Connected Functionality

    DOE Data Explorer

    Baker, Kyri; Jin, Xin; Vaidynathan, Deepthi; Jones, Wesley; Christensen, Dane; Sparn, Bethany; Woods, Jason; Sorensen, Harry; Lunacek, Monte

    2016-08-04

    Dataset demonstrating the potential benefits that residential buildings can provide for frequency regulation services in the electric power grid. In a hardware-in-the-loop (HIL) implementation, simulated homes along with a physical laboratory home are coordinated via a grid aggregator, and it is shown that their aggregate response has the potential to follow the regulation signal on a timescale of seconds. Connected (communication-enabled) devices in the National Renewable Energy Laboratory's (NREL's) Energy Systems Integration Facility (ESIF) received demand response (DR) requests from a grid aggregator, and the devices responded accordingly to meet the signal while satisfying user comfort bounds and physical hardware limitations.
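
    A minimal sketch of the aggregator's role: split a regulation request across homes in proportion to each home's comfort-limited flexibility, never asking any home for more than it offered. All names and numbers below are invented for illustration.

```python
def allocate(signal_kw, homes):
    """Split a regulation request across homes in proportion to each home's
    comfort-limited flexibility (hypothetical aggregator sketch)."""
    total_flex = sum(h["flex_kw"] for h in homes)
    served = min(signal_kw, total_flex)  # cannot serve more than offered
    return {h["id"]: served * h["flex_kw"] / total_flex for h in homes}

homes = [{"id": "lab", "flex_kw": 2.0},    # the physical laboratory home
         {"id": "sim1", "flex_kw": 1.0},   # simulated homes
         {"id": "sim2", "flex_kw": 1.0}]
print(allocate(3.0, homes))  # {'lab': 1.5, 'sim1': 0.75, 'sim2': 0.75}
```

    In the HIL setup, each per-home share would then be tracked by the home's actual devices subject to their hardware limits.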

  17. MERCHANT MARINE SHIP REACTOR

    DOEpatents

    Mumm, J.F.; North, D.C. Jr.; Rock, H.R.; Geston, D.K.

    1961-05-01

    A nuclear reactor is described for use in a merchant marine ship. The reactor is of pressurized light water cooled and moderated design in which three passes of the water through the core in successive regions of low, intermediate, and high heat generation and downflow in a fuel region are made. The foregoing design makes a compact reactor construction with extended core life. The core has an egg-crate lattice containing the fuel elements confined between a lower flow baffle and upper grid plate, with the latter serving also as part of a turn-around manifold from which the entire coolant is distributed into the outer fuel elements for the second pass through the core. The inner fuel elements are cooled in the third pass.

  18. Merchant Marine Ship Reactor

    DOEpatents

    Sankovich, M. F.; Mumm, J. F.; North, Jr, D. C.; Rock, H. R.; Gestson, D. K.

    1961-05-01

    A nuclear reactor for use in a merchant marine ship is described. The reactor is of pressurized, light water cooled and moderated design in which three passes of the water through the core in successive regions of low, intermediate, and high heat generation and downflow in a fuel region are made. The design makes a compact reactor construction with extended core life. The core has an egg-crate lattice containing the fuel elements that are confined between a lower flow baffle and upper grid plate, with the latter serving also as part of a turn-around manifold from which the entire coolant is distributed into the outer fuel elements for the second pass through the core. The inner fuel elements are cooled in the third pass. (AEC)

  19. GPM V05 Gridded Text Products

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz; Kelley, Owen

    2017-01-01

    This presentation will summarize the changes in the products for the GPM V05 reprocessing cycle. It will concentrate on the gridded text product from the core satellite retrievals; however, all aspects of the GPROF GMI changes in this product are equally applicable to the other two gridded text products. The GPM mission reprocessed its products in May 2017 as part of a continuing improvement of precipitation retrievals. This led to important improvements in the retrievals and therefore also necessitated reprocessing the gridded text products. The V05 GPROF changes not only improved the retrievals but substantially altered the format, and this compelled changes to the gridded text products. Especially important in this regard is the GPROF2017 (used in V05) change away from reporting the fraction of the total precipitation rate that occurs as convection or in the liquid phase. Instead, GPROF2017, and therefore the V05 gridded text products, report the rate of convective precipitation in mm/hr. The GPROF2017 algorithm likewise reports the frozen precipitation rate in mm/hr rather than the fraction of total precipitation that is liquid. Because the aim of the gridded text product is to remain simple, the radar and combined results will also change in V05 to reflect this change in the GMI retrieval. The presentation provides an analysis of these changes as well as a comparison with the swath products from which the hourly text grids were derived.
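
    The format change described above (reporting a fraction of the total rate versus reporting the rate directly in mm/hr) amounts to a one-line conversion; a sketch with made-up sample values:

```python
# V04-style products reported the convective (or liquid) *fraction* of the
# total precipitation rate; V05 (GPROF2017) reports the convective *rate*
# directly in mm/hr. The two representations are related by a simple product.

def convective_rate(total_mm_hr, frac):
    """V05-style rate in mm/hr, given a V04-style fraction of the total."""
    return total_mm_hr * frac

def convective_frac(total_mm_hr, rate_mm_hr):
    """V04-style fraction, given a V05-style rate in mm/hr."""
    return rate_mm_hr / total_mm_hr if total_mm_hr else 0.0

print(convective_rate(4.0, 0.25))   # 1.0 (mm/hr)
print(convective_frac(4.0, 1.0))    # 0.25
```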

  20. Implementation of data node in spatial information grid based on WS resource framework and WS notification

    NASA Astrophysics Data System (ADS)

    Zhang, Dengrong; Yu, Le

    2006-10-01

    An approach to constructing a data node in a spatial information grid (SIG) based on the Web Service Resource Framework (WSRF) and Web Service Notification (WSN) is described in this paper. Attention is paid to constructing and implementing the SIG's resource layer, which is the most important part. A study of this layer found that it is impossible to maintain persistent interaction with the clients of the services in a common SIG architecture, because Web Services are inherently 'stateless' and 'not persistent'. A WSRF/WSN-based data node is designed to overcome these shortcomings. Three different access modes are employed to test the availability of this node. Experimental results demonstrate that this service node can successfully respond to standard OGC requests and return specific spatial data in different network environments, and that it is stateful, dynamic and persistent.
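
    The 'stateless' limitation and the WSRF remedy can be sketched as follows: the service front-end stays stateless, but each client holds an endpoint reference to an explicitly addressed piece of state (a WS-Resource) that persists between calls. The class and method names here are illustrative, not the actual WSRF/WSN API.

```python
import uuid

class DataNodeService:
    """Stateless front-end plus explicitly addressed per-client state,
    mimicking the WS-Resource pattern (illustrative sketch only)."""

    def __init__(self):
        self._resources = {}  # endpoint reference -> per-client state

    def create_resource(self):
        epr = str(uuid.uuid4())  # acts as the WS-Resource endpoint reference
        self._resources[epr] = {"requests": 0}
        return epr

    def get_data(self, epr, bbox):
        state = self._resources[epr]  # state persists between invocations
        state["requests"] += 1
        return {"bbox": bbox, "request_no": state["requests"]}

svc = DataNodeService()
epr = svc.create_resource()
svc.get_data(epr, bbox=(0, 0, 10, 10))
print(svc.get_data(epr, bbox=(0, 0, 10, 10))["request_no"])  # 2
```

    A plain stateless service would have no way to know this was the client's second request; the resource keyed by the endpoint reference is what makes the node stateful and persistent.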

  1. Synchrotron Imaging Computations on the Grid without the Computing Element

    NASA Astrophysics Data System (ADS)

    Curri, A.; Pugliese, R.; Borghes, R.; Kourousias, G.

    2011-12-01

    Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm, computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a neighbouring concept to that of traditional Control Systems. As a further extension, we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces a Control System that manages a series of computationally demanding Scientific Imaging tasks in an online manner. Instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid's middleware (gLite) due to the various latencies that often occur during the job submission and queuing phases. Moreover, the required deployment of a set of TANGO devices could not have been done on a standard gLite WN. Despite the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.

  2. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation on 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points, or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
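
    The quoted grid sizes and speedups are internally consistent, as a quick check shows (the efficiency figure in the last comment is derived from the abstract's own numbers, not stated in it):

```python
# Checking the grid-size arithmetic quoted above:
points_per_block = 1033 * 1033          # grid points in one domain block
total_points = 13 * points_per_block    # 13 blocks in the densest grid
print(points_per_block)  # 1067089, i.e. ~1.07 million per block
print(total_points)      # 13872157, i.e. ~13.87 million in total

# 8 GPUs vs. 8 CPU cores gave ~6.0x, while 8 GPUs vs. 1 CPU core gave ~39.5x,
# implying the 8 CPU cores themselves scaled by roughly:
cpu_scaling = 39.5 / 6.0
print(round(cpu_scaling, 1))  # 6.6, i.e. ~82% parallel efficiency on 8 cores
```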

  3. Nomadic migration : a service environment for autonomic computing on the Grid

    NASA Astrophysics Data System (ADS)

    Lanfermann, Gerd

    2003-06-01

    In recent years, there has been a dramatic increase in available compute capacities. However, these “Grid resources” are rarely accessible in a continuous stream, but rather appear scattered across various machine types, platforms and operating systems, which are coupled by networks of fluctuating bandwidth. It becomes increasingly difficult for scientists to exploit available resources for their applications. We believe that intelligent, self-governing applications should be able to select resources in a dynamic and heterogeneous environment: Migrating applications determine a new resource when old capacities are used up. Spawning simulations launch algorithms on external machines to speed up the main execution. Applications are restarted as soon as a failure is detected. All these actions can be taken without human interaction. A distributed compute environment possesses an intrinsic unreliability. Any application that interacts with such an environment must be able to cope with its failing components: deteriorating networks, crashing machines, failing software. We construct a reliable service infrastructure by endowing a service environment with a peer-to-peer topology. This “Grid Peer Services” infrastructure accommodates high-level services like migration and spawning, as well as fundamental services for application launching, file transfer and resource selection. It utilizes existing Grid technology wherever possible to accomplish its tasks. An Application Information Server acts as a generic information registry for all participants in a service environment. The service environment that we developed allows applications, e.g., to send a relocation request to a migration server. The server selects a new computer based on the transmitted resource requirements. It transfers the application's checkpoint and binary to the new host and resumes the simulation.
Although the Grid's underlying resource substrate is not continuous, we achieve persistent computations on Grids by relocating the application. We show with our real-world examples that a traditional genome analysis program can be easily modified to perform self-determined migrations in this service environment.

  4. GEMSS: privacy and security for a medical Grid.

    PubMed

    Middleton, S E; Herveg, J A M; Crazzolara, F; Marvin, D; Poullet, Y

    2005-01-01

    The GEMSS project is developing a secure Grid infrastructure through which six medical simulation services can be invoked. We examine the legal and security framework within which GEMSS operates. We provide a legal qualification to the operations performed upon patient data, in view of EU Directive 95/46, when using medical applications on the GEMSS Grid. We identify appropriate measures to ensure security and describe the legal rationale behind our choice of security technology. Our legal analysis demonstrates there must be an identified controller (typically a hospital) of patient data. The controller must then choose a processor (in this context a Grid service provider) that provides sufficient guarantees with respect to the security of their technical and organizational data processing procedures. These guarantees must ensure a level of security appropriate to the risks, with due regard to the state of the art and the cost of their implementation. Our security solutions are based on a public key infrastructure (PKI), transport-level security and end-to-end security mechanisms in line with the web service security specifications (WS-Security, WS-Trust and WS-SecureConversation). The GEMSS infrastructure ensures a degree of protection of patient data that is appropriate for the health care sector, and is in line with the European directives. We hope that GEMSS will become synonymous with high-security data processing, providing a framework by which GEMSS service providers can provide the security guarantees required by hospitals with regard to the processing of patient data.

  5. European grid services for global earth science

    NASA Astrophysics Data System (ADS)

    Brewer, S.; Sipos, G.

    2012-04-01

    This presentation will provide an overview of the distributed computing services that the European Grid Infrastructure (EGI) offers to the Earth Sciences community and also explain the processes whereby Earth Science users can engage with the infrastructure. One of the main overarching goals for EGI over the coming year is to diversify its user-base. EGI therefore - through the National Grid Initiatives (NGIs) that provide the bulk of resources that make up the infrastructure - offers a number of routes whereby users, either individually or as communities, can make use of its services. At one level there are two approaches to working with EGI: either users can make use of existing resources and contribute to their evolution and configuration; or alternatively they can work with EGI, and hence the NGIs, to incorporate their own resources into the infrastructure to take advantage of EGI's monitoring, networking and managing services. Adopting this approach does not imply a loss of ownership of the resources. Both of these approaches are entirely applicable to the Earth Sciences community. The former because researchers within this field have been involved with EGI (and previously EGEE) as a Heavy User Community and the latter because they have very specific needs, such as incorporating HPC services into their workflows, and these will require multi-skilled interventions to fully provide such services. In addition to the technical support services that EGI has been offering for the last year or so - the applications database, the training marketplace and the Virtual Organisation services - there now exists a dynamic short-term project framework that can be utilised to establish and operate services for Earth Science users. 
During this talk we will present a summary of various on-going projects that will be of interest to Earth Science users, with the intention that suggestions for future projects will emerge from the subsequent discussions: • The Federated Cloud Task Force is already providing a cloud infrastructure through a few committed NGIs. This is being made available to research communities participating in the Task Force, and the long-term aim is to integrate these national clouds into a pan-European infrastructure for scientific communities. • The MPI group provides support for application developers to port and scale up parallel applications to the global European Grid Infrastructure. • A lively portal developer and provider community that is able to set up and operate custom, application- and/or community-specific portals for members of the Earth Science community to interact with EGI. • A project to assess the possibilities for federated identity management in EGI and the readiness of EGI member states for federated authentication and authorisation mechanisms. • Operating resources and user support services to process data with new types of services and infrastructures, such as desktop grids, map-reduce frameworks, GPU clusters.

  6. Extending Climate Analytics as a Service to the Earth System Grid Federation Progress Report on the Reanalysis Ensemble Service

    NASA Astrophysics Data System (ADS)

    Tamkin, G.; Schnase, J. L.; Duffy, D.; Li, J.; Strong, S.; Thompson, J. H.

    2016-12-01

    We are extending climate analytics-as-a-service, including: (1) A high-performance Virtual Real-Time Analytics Testbed supporting six major reanalysis data sets using advanced technologies like Cloudera Impala-based SQL and Hadoop-based MapReduce analytics over native NetCDF files. (2) A Reanalysis Ensemble Service (RES) that offers a basic set of commonly used operations over the reanalysis collections that are accessible through NASA's climate data analytics Web services and our client-side Climate Data Services Python library, CDSlib. (3) An Open Geospatial Consortium (OGC) WPS-compliant Web service interface to CDSlib to accommodate ESGF's Web service endpoints. This presentation will report on the overall progress of this effort, with special attention to recent enhancements that have been made to the Reanalysis Ensemble Service, including the following: - A CDSlib Python library that supports full temporal, spatial, and grid-based resolution services - A new reanalysis collections reference model to enable operator design and implementation - An enhanced library of sample queries to demonstrate and develop use case scenarios - Extended operators that enable single- and multiple-reanalysis area average, vertical average, re-gridding, and trend, climatology, and anomaly computations - Full support for the MERRA-2 reanalysis and the initial integration of two additional reanalyses - A prototype Jupyter notebook-based distribution mechanism that combines CDSlib documentation with interactive use case scenarios and personalized project management - Prototyped uncertainty quantification services that combine ensemble products with comparative observational products - Convenient, one-stop shopping for commonly used data products from multiple reanalyses, including basic subsetting and arithmetic operations over the data and extractions of trends, climatologies, and anomalies - The ability to compute and visualize multiple reanalysis intercomparisons

  7. CopperCore Service Integration

    ERIC Educational Resources Information Center

    Vogten, Hubert; Martens, Harrie; Nadolski, Rob; Tattersall, Colin; van Rosmalen, Peter; Koper, Rob

    2007-01-01

    In an e-learning environment there is a need to integrate various e-learning services like assessment services, collaboration services, learning design services and communication services. In this article we present the design and implementation of a generic integrative service framework, called CopperCore Service Integration (CCSI). We will…

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abe Lederman

    This report contains the comprehensive summary of the work performed on the SBIR Phase II project (“Distributed Relevance Ranking in Heterogeneous Document Collections”) at Deep Web Technologies (http://www.deepwebtech.com). We have successfully completed all of the tasks defined in our SBIR Proposal work plan (See Table 1 - Phase II Tasks Status). The project was completed on schedule, and we have successfully deployed an initial production release of the software architecture at DOE-OSTI for the Science.gov Alliance's search portal (http://www.science.gov). We have implemented a set of grid services that supports the extraction, filtering, aggregation, and presentation of search results from numerous heterogeneous document collections. Illustration 3 depicts the services required to perform QuickRank™ filtering of content as defined in our architecture documentation. Functionality that has been implemented is indicated by the services highlighted in green. We have successfully tested our implementation in a multi-node grid deployment, both within the Deep Web Technologies offices and in a heterogeneous, geographically distributed grid environment. We have performed a series of load tests in which we successfully simulated 100 concurrent users submitting search requests to the system. This testing was performed on deployments of one-, two-, and three-node grids with services distributed in a number of different configurations. The preliminary results from these tests indicate that our architecture will scale well across multi-node grid deployments, but more work will be needed, beyond the scope of this project, to perform testing and experimentation to determine scalability and resiliency requirements. We are pleased to report that a production-quality version (1.4) of the Science.gov Alliance's search portal based on our grid architecture was released in June of 2006. This demonstration portal is currently available at http://science.gov/search30 .
The portal allows the user to select from a number of collections grouped by category and enter a query expression (See Illustration 1 - Science.gov 3.0 Search Page). After the user clicks “search”, a results page is displayed that provides a list of results from the selected collections, ordered by relevance based on the query expression the user provided. Our grid-based solution to deep web search and document ranking has already gained attention within DOE, other government agencies, and a Fortune 50 company. We are committed to the continued development of grid-based solutions to large-scale data access, filtering, and presentation problems within the domain of Information Retrieval and the more general categories of content management, data mining, and data analysis.

  9. Free-wake computation of helicopter rotor flowfields in forward flight

    NASA Technical Reports Server (NTRS)

    Ramachandran, K.; Schlechtriem, S.; Caradonna, F. X.; Steinhoff, John

    1993-01-01

    A new method has been developed for computing advancing rotor flows. This method uses the Vorticity Embedding technique, which has been developed and validated over the last several years for hovering rotor problems. In this work, the unsteady full potential equation is solved on an Eulerian grid with an embedded vortical velocity field. This vortical velocity accounts for the influence of the wake. Dynamic grid changes that are required to accommodate prescribed blade motion and deformation are included using a novel grid blending method. Free wake computations have been performed on a two-bladed AH-1G rotor at low advance ratios including blade motion. Computed results are compared with experimental data. The sudden variations in airloads due to blade-vortex interactions on the advancing and retreating sides are well captured. The sensitivity of the computed solution to various factors like core size, time step and grids has been investigated. Computed wake geometries and their influence on the aerodynamic loads at these advance ratios are also discussed.

  10. Sensitivity of selected geomagnetic properties to truncation level of spherical harmonic expansions

    NASA Technical Reports Server (NTRS)

    Benton, E. R.; Estes, R. H.; Langel, R. A.; Muth, L. A.

    1982-01-01

    The model dependence of Gauss coefficients associated with a lack of spherical harmonic orthogonality on a nonuniform Magsat data grid is shown to be minor, where the fitting level exceeds the harmonic order by a value of approximately four. The shape of the magnetic energy spectrum outside the core, and the sensitivity to truncation level of magnetic contour location and the number of their intersections on the core-mantle boundary, suggest that spherical harmonic expansions of the main geomagnetic field should be truncated at a truncation level value of not more than eight if they are to be extrapolated to the core.
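The truncation sensitivity arises because downward continuation to the core amplifies degree-n power; for the standard Lowes-Mauersberger spectrum the degree-n amplification at radius r is (a/r)^(2n+4). A small sketch of that growth factor (radii only; no real field coefficients are used):

```python
# Amplification of degree-n geomagnetic spectral power when extrapolating
# from Earth's surface (a = 6371.2 km) down to the core-mantle boundary
# (c = 3480 km). High-degree terms that are tiny at the surface dominate
# at the core, which is why extrapolated expansions are truncated near
# degree 8. Illustrative computation only, not tied to any field model.
a, c = 6371.2, 3480.0

def cmb_amplification(n):
    """Factor by which degree-n power grows at the core-mantle boundary."""
    return (a / c) ** (2 * n + 4)

for n in (1, 5, 9, 13):
    print(n, round(cmb_amplification(n)))
```

The factor grows by roughly (a/c)^2 ≈ 3.4 per degree, so noise in high-degree coefficients quickly swamps the signal at the core.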

  11. Technical report series on global modeling and data assimilation. Volume 5: Documentation of the ARIES/GEOS dynamical core, version 2

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Takacs, Lawrence L.

    1995-01-01

    A detailed description of the numerical formulation of Version 2 of the ARIES/GEOS 'dynamical core' is presented. This code is a nearly 'plug-compatible' dynamics for use in atmospheric general circulation models (GCMs). It is a finite difference model on a staggered latitude-longitude C-grid. It uses second-order differences for all terms except the advection of vorticity by the rotation part of the flow, which is done at fourth-order accuracy. This dynamical core is currently being used in the climate (ARIES) and data assimilation (GEOS) GCMs at Goddard.

  12. On the recovery of electric currents in the liquid core of the Earth

    NASA Astrophysics Data System (ADS)

    Kuslits, Lukács; Prácser, Ernő; Lemperger, István

    2017-04-01

    Inverse geodynamo modelling has become a standard method for obtaining a more accurate image of the processes within the outer core. In this poster, excerpts from the preliminary results of another approach are presented, which concerns the possibility of recovering the currents within the liquid core directly, using Main Magnetic Field data. The flow of charge can be approximated by systems of various geometries. Based on previous geodynamo simulations, current coils can furnish a good initial geometry for such an estimation. The presentation introduces our preliminary test results and a study of the reliability of the applied inversion algorithm for different numbers of coils, distributed in a grid symbolizing the domain between the inner-core and core-mantle boundaries. We shall also present inverted current structures using Main Field model data.

  13. Comparison of Models for Spacer Grid Pressure Loss in Nuclear Fuel Bundles for One and Two-Phase Flows

    NASA Astrophysics Data System (ADS)

    Maskal, Alan B.

    Spacer grids maintain the structural integrity of the fuel rods within fuel bundles of nuclear power plants. They can also improve flow characteristics within the nuclear reactor core. However, spacer grids add reactor coolant pressure losses, which require estimation and engineering into the design. Several mathematical models and computer codes were developed over decades to predict spacer grid pressure loss. Most models use generalized characteristics, measured by older, less precise equipment. The study of OECD/US-NRC BWR Full-Size Fine Mesh Bundle Tests (BFBT) provides updated and detailed experimental single and two-phase results, using technically advanced flow measurements for a wide range of boundary conditions. This thesis compares the predictions from the mathematical models to the BFBT experimental data by utilizing statistical formulae for accuracy and precision. This thesis also analyzes the effects of BFBT flow characteristics on spacer grids. No single model has been identified as valid for all flow conditions. However, some models' predictions perform better than others within a range of flow conditions, based on the accuracy and precision of the models' predictions. This study also demonstrates that pressure and flow quality have a significant effect on two-phase flow spacer grid models' biases.
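The accuracy/precision comparison described here reduces to simple statistics over prediction errors; a hedged sketch (the mean-error bias and sample standard deviation shown are common choices, not necessarily the thesis's exact formulae, and the values are illustrative, not BFBT measurements):

```python
import math

# Compare a spacer-grid pressure-loss model against experimental data:
# mean error (bias, i.e. accuracy) and sample standard deviation of the
# error (precision). Numbers below are illustrative placeholders.
def bias_and_precision(predicted, measured):
    errors = [p - m for p, m in zip(predicted, measured)]
    n = len(errors)
    bias = sum(errors) / n
    precision = math.sqrt(sum((e - bias) ** 2 for e in errors) / (n - 1))
    return bias, precision

predicted = [10.2, 11.8, 14.1, 15.9]   # model pressure loss (kPa)
measured  = [10.0, 12.0, 14.0, 16.0]   # experimental pressure loss (kPa)
print(bias_and_precision(predicted, measured))
```

Ranking several models by these two numbers over subsets of flow conditions is what lets one say a model "performs better within a range of flow conditions".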

  14. Transforming the U.S. Market with a New Application of Ternary-Type Pumped-Storage Hydropower Technology: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corbus, David A; Jacobson, Mark D; Tan, Jin

    As the deployment of wind and solar technologies increases at an unprecedented rate across the United States and in many world markets, the variability of power output from these technologies expands the need for increased power system flexibility. Energy storage can play an important role in the transition to a more flexible power system that can accommodate high penetrations of variable renewable technologies. This project focuses on how ternary pumped storage hydropower (T-PSH) coupled with dynamic transmission can help this transition by defining the system-wide benefits of deploying this technology in specific U.S. markets. T-PSH is the fastest-responding pumped hydro technology available today for grid services. T-PSH efficiencies are competitive with lithium-ion (Li-ion) batteries, and T-PSH can provide increased storage capacity with minimal degradation during a 50-year lifetime. This project evaluates T-PSH for grid services ranging from fast frequency response (FFR) for power system contingency events and enhanced power system stability to longer time periods for power system flexibility to accommodate ramping from wind and solar variability and energy arbitrage. In summary, this project:
    - Compares power grid services and costs, including ancillary services and essential reliability services, for T-PSH and conventional pumped storage hydropower (PSH)
    - Evaluates the dynamic response of T-PSH and PSH technologies and their contribution to essential reliability services for grid stability by developing new power system model representations for T-PSH and performing simulations in the Western Interconnection
    - Evaluates production costs, operational impacts, and energy storage revenue streams for future power system scenarios with T-PSH, focusing on time frames of 5 minutes and more
    - Assesses the electricity market-transforming capabilities of T-PSH technology coupled with transmission monitoring and dynamic control.
This paper presents an overview of the methodology and initial first-year preliminary findings of a 2-year in-depth study into how advanced PSH and dynamic transmission contribute to the transformation and modernization of the U.S. electric grid. This project is part of the HydroNEXT Initiative funded by the U.S. Department of Energy (DOE), which is focused on the development of innovative technologies to advance nonpowered dams and PSH. The project team consists of the National Renewable Energy Laboratory (project lead), Absaroka Energy, LLC (Montana-based PSH project developer), GE Renewable Energy (PSH pump/turbine equipment supplier), Grid Dynamics, and Auburn University (lead for the NREL/Auburn dynamic modeling team).

  15. UA-ICON - A non-hydrostatic global model for studying gravity waves from the troposphere to the thermosphere

    NASA Astrophysics Data System (ADS)

    Borchert, Sebastian; Zängl, Günther; Baldauf, Michael; Zhou, Guidi; Schmidt, Hauke; Manzini, Elisa

    2017-04-01

    In numerical weather prediction as well as climate simulations, there are ongoing efforts to raise the upper model lid, acknowledging the possible influence of middle and upper atmosphere dynamics on tropospheric weather and climate. As the momentum deposition of gravity waves (GWs) is responsible for key features of the large scale flow in the middle and upper atmosphere, the upward model extension has put GWs in the focus of atmospheric research needs. The Max Planck Institute for Meteorology (MPI-M) and the German Weather Service (DWD) have been developing jointly the non-hydrostatic global model ICON (Zängl et al, 2015) which features a new dynamical core based on an icosahedral grid. The extension of ICON beyond the mesosphere, where most GWs deposit their momentum, requires, e.g., relaxing the shallow-atmosphere and other traditional approximations as well as implementing additional physical processes that are important to the upper atmosphere. We would like to present aspects of the model development and its evaluation, and first results from a simulation of a period of the DEEPWAVE campaign in New Zealand in 2014 (Fritts et al, 2016) using grid nesting up to a horizontal mesh size of about 1.25 km. This work is part of the research unit: Multi-Scale Dynamics of Gravity Waves (MS-GWaves: sub-project GWING, https://ms-gwaves.iau.uni-frankfurt.de/index.php), funded by the German Research Foundation. Fritts, D.C. and Coauthors, 2016: "The Deep Propagating Gravity Wave Experiment (DEEPWAVE): An airborne and ground-based exploration of gravity wave propagation and effects from their sources throughout the lower and middle atmosphere". Bull. Amer. Meteor. Soc., 97, 425 - 453, doi:10.1175/BAMS-D-14-00269.1 Zängl, G., Reinert, D., Ripodas, P., Baldauf, M., 2015: "The ICON (ICOsahedral Non-hydrostatic) modelling framework of DWD and MPI-M: Description of the non-hydrostatic dynamical core". Quart. J. Roy. Met. Soc., 141, 563 - 579, doi:10.1002/qj.2378

  16. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    NASA Astrophysics Data System (ADS)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has evolved widely over the past years, and its capabilities have found their way even into business products; it is no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific open source or industrial grid products; rather, it comprises a set of capabilities embedded in virtually any kind of software to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active field of grid computing applications is the full virtualization of scientific instruments, in order to increase their availability and decrease operating and maintenance costs. Computational and information grids allow real-world objects to be managed in a service-oriented way using widespread industrial standards.

  17. Testing as a Service with HammerCloud

    NASA Astrophysics Data System (ADS)

    Medrano Llamas, Ramón; Barrand, Quentin; Elmsheuser, Johannes; Legger, Federica; Sciacca, Gianfranco; Sciabà, Andrea; van der Ster, Daniel

    2014-06-01

    HammerCloud was designed and born out of the needs of the grid community to test resources and automate operations from a user perspective. Recent developments in the IT space propose a shift to software-defined data centres, in which every layer of the infrastructure can be offered as a service. Testing and monitoring are an integral part of the development, validation and operations of big systems like the grid. This area is not escaping the paradigm shift, and we are starting to perceive as natural Testing as a Service (TaaS) offerings, which allow testing any infrastructure service, such as the Infrastructure as a Service (IaaS) platforms being deployed in many grid sites, from both the functional and stress perspectives. This work will review the recent developments in HammerCloud and its evolution towards a TaaS conception, in particular its deployment on the Agile Infrastructure platform at CERN and the testing of many IaaS providers across Europe in the context of experiment requirements. The first section will review the architectural changes that a service running in the cloud needs, such as an orchestration service or new storage requirements, in order to provide functional and stress testing. The second section will review the first tests of infrastructure providers, focusing on the challenges discovered from the architectural point of view. Finally, the third section will evaluate future requirements of scalability and features to increase testing productivity.

  18. Advanced Power Electronics and Smart Inverters | Grid Modernization | NREL

    Science.gov Websites

    … provide grid services such as voltage and frequency regulation, ride-through, dynamic current injection … impacts of smart inverters on distribution systems. These activities are focused on enabling high … combines high-voltage silicon carbide with revolutionary concepts such as additive manufacturing and multi…

  19. 21 CFR 886.1330 - Amsler grid.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Amsler grid. 886.1330 Section 886.1330 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES... the patient and intended to rapidly detect central and paracentral irregularities in the visual field...

  20. Systematic adaptation of data delivery

    DOEpatents

    Bakken, David Edward

    2016-02-02

    This disclosure describes, in part, a system management component for use in a power grid data network to systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency and a priority for each subscription and the system management component may adjust the data rates in real-time to ensure that the power grid data network does not become overloaded and/or fail. In one example, subscriptions with lower priorities may have their quality of service adjusted before subscriptions with higher priorities. In each instance, the quality of service may be maintained, even if reduced, to meet or exceed the minimum acceptable quality of service for the subscription.
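The priority-ordered degradation described here can be sketched as a simple load-shedding loop (field names and the rate-reduction policy are illustrative assumptions, not the patented implementation):

```python
# Sketch of priority-ordered quality-of-service adjustment: when the
# power grid data network is overloaded, reduce low-priority
# subscriptions toward their minimum acceptable data rate first, and
# never drop any subscription below its minimum. Field names and the
# policy details are illustrative assumptions, not the patent's design.

def shed_load(subscriptions, capacity):
    """subscriptions: dicts with name, desired, minimum, priority.
    Returns granted rates, degrading lowest-priority subscribers first."""
    granted = {s["name"]: s["desired"] for s in subscriptions}
    overload = sum(granted.values()) - capacity
    for s in sorted(subscriptions, key=lambda s: s["priority"]):
        if overload <= 0:
            break
        slack = granted[s["name"]] - s["minimum"]  # room above the minimum
        cut = min(slack, overload)
        granted[s["name"]] -= cut
        overload -= cut
    return granted

subs = [
    {"name": "protection", "desired": 60, "minimum": 50, "priority": 10},
    {"name": "metering",   "desired": 40, "minimum": 10, "priority": 1},
]
print(shed_load(subs, 80))  # metering is cut before protection
```

A real system management component would re-run this adjustment continuously as measured network load changes, rather than once.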

  1. Connections beyond the margins of the power grid Information technology and the evolution of off-grid solar electricity in the developing world

    NASA Astrophysics Data System (ADS)

    Alstone, Peter Michael

    This work explores the intersections of information technology and off-grid electricity deployment in the developing world, with focus on a key instance: the emergence of pay-as-you-go (PAYG) solar household-scale energy systems. It is grounded in a detailed field study by my research team in Kenya between 2013 and 2014 that included primary data collection across the solar supply chain, from global businesses through national and local distribution to the end-users. We supplement the information with business process and national survey data to develop a detailed view of the markets, technology systems, and individuals who interact within those frameworks. The findings are presented in this dissertation as a series of four chapters with introductory, bridging, and synthesis material between them. The first chapter, Decentralized Energy Systems for Clean Electricity Access, presents a global view of the emerging off-grid power sector. Long-run trends in technology create "a unique moment in history" for closing the gap between global population and access to electricity, which has stubbornly held at 1-2 billion people without power since the initiation of the electric utility business model in the late 1800s. We show the potential for widespread near-term adoption of off-grid solar, which could lead to ten times less inequality in access and also ten times lower household-level climate impacts. Decentralized power systems that replace fuel-based incumbent lighting can advance the causes of climate stabilization, economic and social freedom, and human health. Chapters two and three focus on market and institutional dynamics circa 2014 in off-grid solar, with a focus on the Kenya market. Chapter 2, "Off-grid Power and Connectivity", presents our findings related to the widespread influence of information technology across the supply chain for solar and in PAYG approaches.
Using digital financing and embedded payment verification technology, PAYG businesses can help overcome key barriers to adoption of off-grid energy systems. The framework provides financing (or energy service payment structures) for users of off-grid solar and, we show, is also instrumental in building trust in off-grid solar technology, facilitating supply chain coordination, and creating mechanisms and incentives for after-sales service. Chapter 3, Quality Communication, delves into detail on the information channels (both incumbent and ICT-based) that link retailers with regional and global markets for solar goods. In it we uncover the linked structure of physical distribution networks and the pathway for information about product characteristics (including, critically, the quality of products). The work shows that a few key decisions about product purchasing at the wholesale level, in places like Nairobi (the capital city of Kenya), create the bulk of the choice set for retail buyers, and shows how targeting those wholesale purchasers is critically important for ensuring that good-quality products are available. Chapter 4, the last in this dissertation, is titled Off-grid solar energy services enabled and evaluated through information technology and presents an analytic framework for using remote monitoring data from PAYG systems to assess the joint technological and behavioral drivers for energy access through solar home systems. Using large-scale (n ~ 1,000) data from a large PAYG business in Kenya (M-KOPA), we show that people tend to co-optimize between the quantity and reliability of service, using 55% of the energy technically possible but with only 5% system down time. Half of the users move their solar panel frequently (in response to concerns about theft, for the most part), and these users experienced 20% lower energy service quantities.
The findings illustrate the implications of key trends for off-grid power: evolving system component technology architectures, opportunities for improved support to markets, and the use of background data from business and technology systems. (Abstract shortened by ProQuest.)

  2. 20 CFR 663.110 - What are the eligibility criteria for core services for adults in the adult and dislocated worker...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false What are the eligibility criteria for core... the One-Stop Delivery System § 663.110 What are the eligibility criteria for core services for adults in the adult and dislocated worker programs? To be eligible to receive core services as an adult in...

  3. 20 CFR 663.115 - What are the eligibility criteria for core services for dislocated workers in the adult and...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false What are the eligibility criteria for core... Through the One-Stop Delivery System § 663.115 What are the eligibility criteria for core services for dislocated workers in the adult and dislocated worker programs? (a) To be eligible to receive core services...

  4. The method of a joint intraday security check system based on cloud computing

    NASA Astrophysics Data System (ADS)

    Dong, Wei; Feng, Changyou; Zhou, Caiqi; Cai, Zhi; Dan, Xu; Dai, Sai; Zhang, Chuancheng

    2017-01-01

    The intraday security check is the core application in the dispatching control system. The existing security check calculation uses only the dispatch center’s local model and data as the functional margin. This paper introduces the design of an all-grid intraday joint security check system based on cloud computing, and its implementation. To reduce the effect of subarea bad data on the all-grid security check, a new power flow algorithm based on comparison and adjustment against the inter-provincial tie-line plan is presented. A numerical example illustrates the effectiveness and feasibility of the proposed method.

  5. Gap Assessment (FY 13 Update)

    DOE Data Explorer

    Getman, Dan

    2013-09-30

    To help guide its future data collection efforts, the DOE GTO funded a data gap analysis in FY2012 to identify high-potential hydrothermal areas where critical data are needed. This analysis was updated in FY2013, and the resulting datasets are represented by this metadata. The original process was published in FY2012 and is available here: https://pangea.stanford.edu/ERE/db/GeoConf/papers/SGW/2013/Esposito.pdf Though there are many types of data that can be used for hydrothermal exploration, five types of exploration data were targeted for this analysis. These data types were selected for their regional reconnaissance potential, and include many of the primary exploration techniques currently used by the geothermal industry: (1) well data, (2) geologic maps, (3) fault maps, (4) geochemistry data, and (5) geophysical data. To determine data coverage, metadata for exploration data (including data type, data status, and coverage information) were collected and catalogued from nodes on the National Geothermal Data System (NGDS). It is the intention of this analysis that the data be updated from this source in a semi-automated fashion as new datasets are added to the NGDS nodes. In addition to this upload, an online tool was developed to allow all geothermal data providers to access this assessment, directly add metadata themselves, and view the results of the analysis via maps of data coverage in Geothermal Prospector (http://maps.nrel.gov/gt_prospector). A grid of the contiguous U.S. was created with 88,000 10-km by 10-km grid cells, and each cell was populated with the status of data availability corresponding to the five data types. Using these five data coverage maps and the USGS Resource Potential Map, sites were identified for future data collection efforts. These sites signify both that the USGS has indicated high favorability of occurrence of geothermal resources and that data gaps exist.
The uploaded data are contained in two data files for each data category. The first file contains the grid and is in the SHP (shapefile) format; each populated grid cell represents a 10-km by 10-km area within which data are known to exist. The second file is a CSV (comma-separated value) file that contains all of the individual layers that intersected with the grid. This CSV can be joined with the map to retrieve a list of datasets that are available at any given site. The attributes in the CSV include:
1. grid_id: The id of the grid cell that the data intersect with
2. title: The name of the WFS service that intersected with this grid cell
3. abstract: The description of the WFS service that intersected with this grid cell
4. gap_type: The category of data availability that these data fall within. As the current processing pulls data from NGDS, this category universally represents data that are available in the NGDS and ready for acquisition for analytic purposes.
5. proprietary_type: Whether the data are considered proprietary
6. service_type: The type of service
7. base_url: The service URL
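Joining the CSV back to the grid is an ordinary key join on grid_id; a minimal sketch using only the standard library (the rows and service titles are toy examples, not the actual NGDS export):

```python
import csv
import io

# Group the per-dataset CSV rows by grid_id so that each 10-km grid
# cell can list the datasets known to intersect it. The CSV content is
# a toy illustration, not the actual NGDS export.
csv_text = """grid_id,title,gap_type
1042,State geologic map WFS,available
1042,Borehole temperature WFS,available
2077,Fault trace WFS,available
"""

datasets_by_cell = {}
for row in csv.DictReader(io.StringIO(csv_text)):
    datasets_by_cell.setdefault(row["grid_id"], []).append(row["title"])

print(datasets_by_cell["1042"])  # datasets intersecting cell 1042
```

In practice the same grouping would be joined to the SHP grid (e.g. in a GIS) via the shared grid_id attribute.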

  6. Integration of Grid and Sensor Web for Flood Monitoring and Risk Assessment from Heterogeneous Data

    NASA Astrophysics Data System (ADS)

    Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii

    2013-04-01

    Over the last decades we have witnessed an upward global trend in natural disaster occurrence. Hydrological and meteorological disasters such as floods are the main contributors to this pattern. In recent years flood management has shifted from protection against floods to managing the risks of floods (the European Flood Risk Directive). In order to enable operational flood monitoring and assessment of flood risk, an infrastructure with standardized interfaces and services is required. Grid and Sensor Web can meet these requirements. In this paper we present a general approach to flood monitoring and risk assessment based on heterogeneous geospatial data acquired from multiple sources. To enable operational flood risk assessment, an integration of the Grid and Sensor Web approaches is proposed [1]. Grid represents a distributed environment that integrates heterogeneous computing and storage resources administrated by multiple organizations. Sensor Web is an emerging paradigm for integrating heterogeneous satellite and in situ sensors and data systems into a common informational infrastructure that produces products on demand. The basic Sensor Web functionality includes sensor discovery, the triggering of events by observed or predicted conditions, remote data access, and processing capabilities to generate and deliver data products. Sensor Web is governed by a set of standards, called Sensor Web Enablement (SWE), developed by the Open Geospatial Consortium (OGC). Different practical issues regarding the integration of Sensor Web with Grids are discussed in the study. We show how the Sensor Web can benefit from using Grids and vice versa. For example, Sensor Web services such as SOS, SPS and SAS can benefit from integration with a Grid platform like the Globus Toolkit.
The proposed approach is implemented within the Sensor Web framework for flood monitoring and risk assessment, and a case study of exploiting this framework, namely the Namibia SensorWeb Pilot Project, is described. The project was created as a testbed for evaluating and prototyping key technologies for rapid acquisition and distribution of data products for decision support systems to monitor floods and enable flood risk assessment. The system provides access to real-time products on rainfall estimates and flood potential forecasts derived from the Tropical Rainfall Measuring Mission (TRMM) with a lag time of 6 h, alerts from the Global Disaster Alert and Coordination System (GDACS) with a lag time of 4 h, and the Coupled Routing and Excess STorage (CREST) model to generate alerts. These alerts are used to trigger satellite observations. With the deployed SPS service for NASA's EO-1 satellite, it is possible to automatically task the sensor with a re-imaging response of less than 8 h. Therefore, with the computational and storage services provided by the Grid and cloud infrastructure, it was possible to generate flood maps within 24-48 h after an alert was triggered. To enable interoperability between system components and services, OGC-compliant standards are utilized. [1] Hluchy L., Kussul N., Shelestov A., Skakun S., Kravchenko O., Gripich Y., Kopp P., Lupian E., "The Data Fusion Grid Infrastructure: Project Objectives and Achievements," Computing and Informatics, 2010, vol. 29, no. 2, pp. 319-334.

  7. Numerical Simulations of Close and Contact Binary Systems Having Bipolytropic Equation of State

    NASA Astrophysics Data System (ADS)

    Kadam, Kundan; Clayton, Geoffrey C.; Motl, Patrick M.; Marcello, Dominic; Frank, Juhan

    2017-01-01

    I present the results of numerical simulations of mass transfer in close and contact binary systems in which both stars have a bipolytropic (composite polytropic) equation of state. The initial binary systems are obtained by modifying Hachisu’s self-consistent field technique. Both stars have fully resolved cores with a molecular weight jump at the core-envelope interface. The initial properties of these simulations are chosen such that they satisfy the mass-radius relation, composition, and period of a late W-type contact binary system. The simulations are carried out using two different Eulerian hydrocodes: Flow-ER, with a fixed cylindrical grid, and Octo-tiger, with an AMR-capable Cartesian grid. A detailed comparison of the simulations shows agreement between the results obtained from the two codes at different resolutions. This set of simulations can be treated as a benchmark, enabling us to reliably simulate mass transfer and merger scenarios of binary systems involving bipolytropic components.

  8. The Effects of Denial-of-Service Attacks on Secure Time-Critical Communications in the Smart Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Fengli; Li, QInghua; Mantooth, Homer Alan

    2016-04-02

    According to IEC 61850, many smart grid communications require messages to be delivered within a very short time: trip messages and sampled values at the transmission level within 3 ms, and interlocking messages at the distribution level within 10 ms. Such time-critical communications are vulnerable to denial-of-service (DoS) attacks, e.g., flooding attacks in which an attacker floods the target network or machine with messages. We conducted a systematic experimental study of how DoS attacks affect message delivery delays.

  9. Grid infrastructure for automatic processing of SAR data for flood applications

    NASA Astrophysics Data System (ADS)

    Kussul, Natalia; Skakun, Serhiy; Shelestov, Andrii

    2010-05-01

    More and more geosciences applications are being moved onto Grids. Given the complexity of geosciences applications, caused by complex workflows, the use of computationally intensive environmental models, and the need to manage and integrate heterogeneous data sets, the Grid offers solutions to tackle these problems. Many geosciences applications, especially those related to disaster management and mitigation, require geospatial services to be delivered in a timely manner. For example, information on flooded areas should be provided to the corresponding organizations (local authorities, civil protection agencies, UN agencies, etc.) within 24 h so that the resources required to mitigate the disaster can be allocated effectively. Therefore, providing an infrastructure and services that enable automatic generation of products based on the integration of heterogeneous data is a task of great importance. In this paper we present a Grid infrastructure for automatic processing of synthetic-aperture radar (SAR) satellite images to derive flood products. In particular, we use SAR data acquired by ESA's ENVISAT satellite, and neural networks to derive flood extent. The data are provided in operational mode from the ESA rolling archive (within an ESA Category-1 grant). We developed a portal that is based on the OpenLayers framework and provides an access point to the developed services. Through the portal the user can define a geographical region and search for the required data. Upon selection of data sets, a workflow is automatically generated and executed on the resources of the Grid infrastructure. For workflow execution and management we use the Karajan language. The workflow of SAR data processing consists of the following steps: image calibration, image orthorectification, image processing with neural networks, topographic effects removal, geocoding and transformation to lat/long projection, and visualisation.
These steps are executed by different software, and can be executed on different resources of the Grid system. The resulting geospatial services are available through various OGC standards such as KML and WMS. Currently, the Grid infrastructure integrates the resources of several geographically distributed organizations, in particular: Space Research Institute NASU-NSAU (Ukraine), with deployed computational and storage nodes based on Globus Toolkit 4 (http://www.globus.org) and gLite 3 (http://glite.web.cern.ch) middleware, access to geospatial data, and a Grid portal; Institute of Cybernetics of NASU (Ukraine), with deployed computational and storage nodes (SCIT-1/2/3 clusters) based on Globus Toolkit 4 middleware and access to computational resources (approximately 500 processors); and the Center of Earth Observation and Digital Earth, Chinese Academy of Sciences (CEODE-CAS, China), with deployed computational nodes based on Globus Toolkit 4 middleware and access to geospatial data (approximately 16 processors). We are currently adding new geospatial services based on optical satellite data, namely MODIS. This work is carried out jointly with CEODE-CAS. Using the workflow patterns developed for SAR data processing, we are building new workflows for optical data processing.
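    The sequential processing chain described above can be sketched as a simple pipeline. The step functions below are illustrative placeholders (all names are assumptions, not the project's actual code); in the real system each step runs as a separate job on Grid resources orchestrated by a Karajan workflow.

```python
# Sketch of the SAR flood-mapping chain; each step is a stand-in for
# a separate Grid job in the Karajan-orchestrated workflow.

def calibrate(scene):            # radiometric calibration
    return {**scene, "calibrated": True}

def orthorectify(scene):         # terrain-corrected geometry
    return {**scene, "ortho": True}

def classify_flood(scene):       # neural-network water/non-water mask
    return {**scene, "flood_mask": "mask"}

def remove_topo_effects(scene):  # shadow/layover correction
    return {**scene, "topo_corrected": True}

def geocode(scene):              # reproject to lat/long
    return {**scene, "projection": "EPSG:4326"}

def visualise(scene):            # export OGC-compatible layers
    return {**scene, "outputs": ["KML", "WMS"]}

PIPELINE = [calibrate, orthorectify, classify_flood,
            remove_topo_effects, geocode, visualise]

def run_pipeline(scene):
    for step in PIPELINE:        # each step could dispatch to a different node
        scene = step(scene)
    return scene

product = run_pipeline({"id": "ENVISAT-ASAR-001"})
```

    In the actual infrastructure the inter-step hand-off happens through Grid storage rather than in-memory dictionaries, which is what lets different steps run on different organizations' resources.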

  10. Structural analysis of an off-grid tiny house

    NASA Astrophysics Data System (ADS)

    Calluari, Karina Arias; Alonso-Marroquín, Fernando

    2017-06-01

    Off-grid technologies and the tiny house movement have experienced unprecedented growth in recent years. Putting the two together, we aim to achieve an economical and environmentally friendly response to the high cost of residential properties: the construction of off-grid tiny houses. This article presents a design for a small modular off-grid house made of pine timber. A numerical analysis of the proposed tiny house was performed to ensure its structural stability. The results were compared with the suggested serviceability limit state criteria contained in the Australian guidelines and standards, making this design reliable for construction.

  11. Towards More Nuanced Classification of NGOs and Their Services to Improve Integrated Planning across Disaster Phases

    PubMed Central

    Towe, Vivian L.; Acosta, Joie D.; Chandra, Anita

    2017-01-01

    Nongovernmental organizations (NGOs) are being integrated into U.S. strategies to expand the services that are available during health security threats like disasters. Identifying better ways to classify NGOs and their services could optimize disaster planning. We surveyed NGOs about the types of services they provided during different disaster phases. Survey responses were used to categorize NGO services as core—critical to fulfilling their organizational mission—or adaptive—services implemented during a disaster based on community need. We also classified NGOs as being core or adaptive types of organizations by calculating the percentage of each NGO’s services classified as core. Service types classified as core were mainly social services, while adaptive service types were those typically relied upon during disasters (e.g., warehousing, food services, etc.). In total, 120 NGOs were classified as core organizations, meaning they mainly provided the same services across disaster phases, while 100 NGOs were adaptive organizations, meaning their services changed. Adaptive NGOs were eight times more likely to report routinely participating in disaster planning as compared to core NGOs. One reason for this association may be that adaptive NGOs are more aware of the changing needs in their communities across disaster phases because of their involvement in disaster planning. PMID:29160810

  12. An Analysis of Security and Privacy Issues in Smart Grid Software Architectures on Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmhan, Yogesh; Kumbhare, Alok; Cao, Baohua

    2011-07-09

    Power utilities globally are increasingly upgrading to Smart Grids that use bi-directional communication with the consumer to enable an information-driven approach to distributed energy management. Clouds offer features well suited for Smart Grid software platforms and applications, such as elastic resources and shared services. However, the security and privacy concerns inherent in an information rich Smart Grid environment are further exacerbated by their deployment on Clouds. Here, we present an analysis of security and privacy issues in a Smart Grids software architecture operating on different Cloud environments, in the form of a taxonomy. We use the Los Angeles Smart Grid Project that is underway in the largest U.S. municipal utility to drive this analysis that will benefit both Cloud practitioners targeting Smart Grid applications, and Cloud researchers investigating security and privacy.

  13. The HEPiX Virtualisation Working Group: Towards a Grid of Clouds

    NASA Astrophysics Data System (ADS)

    Cass, Tony

    2012-12-01

    The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.

  14. Power Grid Construction Project Portfolio Optimization Based on Bi-level programming model

    NASA Astrophysics Data System (ADS)

    Zhao, Erdong; Li, Shangqi

    2017-08-01

    As the main bodies of power grid operation, county-level power supply enterprises undertake the important mission of guaranteeing the security of power grid operation and safeguarding the order of social power use. The optimization of grid construction projects has been a key issue for the power supply capacity and service level of grid enterprises. According to the actual situation of power grid construction project optimization in county-level power enterprises, and on the basis of qualitative analysis of the projects, this paper builds a bi-level programming model based on quantitative analysis. The upper level of the model captures the target constraints of the optimal portfolio; the lower level captures the enterprise's financial restrictions on the size of the project portfolio. Finally, a real example is used to illustrate the operation and the optimization results of the model. Through combined qualitative and quantitative analysis, the bi-level programming model improves the accuracy and standardization of power grid enterprises' project decisions.

  15. Porting plasma physics simulation codes to modern computing architectures using the libmrc framework

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Abbott, Stephen

    2015-11-01

    Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source libmrc framework that has been used to modularize and port three plasma physics codes: the extended MHD code MRCv3 with implicit time integration and curvilinear grids; the OpenGGCM global magnetosphere model; and the particle-in-cell code PSC. libmrc consolidates basic functionality needed for simulations based on structured grids (I/O, load balancing, time integrators), and also introduces a parallel object model that makes it possible to maintain multiple implementations of computational kernels, on e.g. conventional processors and GPUs. It handles data layout conversions and enables us to port performance-critical parts of a code to a new architecture step by step, while the rest of the code can remain unchanged. We will show examples of the performance gains and some physics applications.

  16. LIGHT WATER MODERATED NEUTRONIC REACTOR

    DOEpatents

    Christy, R.F.; Weinberg, A.M.

    1957-09-17

    A uranium fuel reactor designed to utilize light water as a moderator is described. The reactor core is in a tank at the bottom of a substantially cylindrical cross-section pit, the core being supported by an apertured grid member and comprised of hexagonal tubes each containing a plurality of fuel rods held in a geometrical arrangement between end caps of the tubes. The end caps are apertured to permit passage of the coolant water through the tubes, and the fuel elements are aluminum clad to prevent corrosion. The tubes are hexagonally arranged in the center of the tank, providing an annulus between the core and tank wall which is filled with water to serve as a reflector. In use, the entire pit and tank are filled with water, which is circulated during operation by entering at the bottom of the tank, passing upwardly through the grid member and fuel tubes, and being carried off near the top of the pit, thereby picking up the heat generated by the fuel elements during the fission thereof. With this particular design the light water coolant can also be used as the moderator when the uranium is enriched in the fissionable isotope U-235 to an abundance between 0.78% and 2%.

  17. 20 CFR 666.140 - Which individuals receiving services are included in the core indicators of performance?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... included in the core indicators of performance? 666.140 Section 666.140 Employees' Benefits EMPLOYMENT AND... the core indicators of performance? (a)(1) The core indicators of performance apply to all individuals... informational activities. (WIA sec. 136(b)(2)(A).) (2) Self-service and informational activities are those core...

  18. gLExec: gluing grid computing to the Unix world

    NASA Astrophysics Data System (ADS)

    Groep, D.; Koeroo, O.; Venekamp, G.

    2008-07-01

    The majority of compute resources in today's scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management are based around the concepts of a numeric 'user ID' and 'group ID' that are local to the resource. In contrast, grid concepts of user and group management are centered around globally assigned identifiers and VO membership, structures that are independent of any specific resource. At the fabric boundary, these 'grid identities' have to be translated to Unix user IDs. New job submission methodologies, such as job-execution web services, community-deployed local schedulers, and the late binding of user jobs in a grid-wide overlay network of 'pilot jobs', push this fabric boundary ever further down into the resource. gLExec, a light-weight (and thereby auditable) credential mapping and authorization system, addresses these issues. It can be run both on the fabric boundary, as part of an execution web service, and on the worker node in a late-binding scenario. In this contribution we describe the rationale for gLExec, how it interacts with site authorization and credential mapping frameworks such as LCAS, LCMAPS and GUMS, and how it can be used to improve site control and traceability in a pilot-job system.
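    The core idea of credential mapping can be illustrated with a toy lookup. Everything below (pool account names, the ban list, the round-robin policy) is a hypothetical sketch; the real gLExec is a C tool that delegates authorization and mapping decisions to LCAS/LCMAPS or GUMS rather than hard-coding them.

```python
# Toy sketch of grid-identity -> local Unix account mapping in the
# spirit of gLExec. All names and the policy itself are hypothetical.

POOL_ACCOUNTS = {"atlas": ["atlas001", "atlas002"], "cms": ["cms001"]}
BANNED_DNS = {"/DC=org/CN=revoked user"}    # LCAS-style ban list
_leases = {}                                # DN -> leased pool account
_counters = {}                              # VO  -> round-robin position

def map_credential(dn, vo):
    """Map a grid DN + VO to a local pool account (sticky per-DN)."""
    if dn in BANNED_DNS:
        raise PermissionError("authorization denied")
    if dn in _leases:                        # stable mapping for auditability
        return _leases[dn]
    pool = POOL_ACCOUNTS.get(vo)
    if not pool:
        raise LookupError(f"no pool accounts for VO {vo!r}")
    i = _counters.get(vo, 0)
    _counters[vo] = i + 1
    account = pool[i % len(pool)]            # round-robin lease
    _leases[dn] = account
    return account

acct = map_credential("/DC=org/CN=alice", "atlas")
```

    The sticky per-DN lease matters for traceability: a site can always answer "which human ran as atlas001?" from the lease table, which is the audit property that motivates a small, reviewable mapper.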

  19. Maui Smart Grid Demonstration Project Managing Distribution System Resources for Improved Service Quality and Reliability, Transmission Congestion Relief, and Grid Support Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    2014-09-30

    The Maui Smart Grid Project (MSGP) is under the leadership of the Hawaii Natural Energy Institute (HNEI) of the University of Hawaii at Manoa. The project team includes Maui Electric Company, Ltd. (MECO), Hawaiian Electric Company, Inc. (HECO), Sentech (a division of SRA International, Inc.), Silver Spring Networks (SSN), Alstom Grid, Maui Economic Development Board (MEDB), University of Hawaii-Maui College (UHMC), and the County of Maui. MSGP was supported by the U.S. Department of Energy (DOE) under Cooperative Agreement Number DE-FC26-08NT02871, with approximately 50% co-funding supplied by MECO. The project was designed to develop and demonstrate an integrated monitoring, communications, database, applications, and decision support solution that aggregates renewable energy (RE), other distributed generation (DG), energy storage, and demand response technologies in a distribution system to achieve both distribution and transmission-level benefits. The application of these new technologies and procedures will increase MECO’s visibility into system conditions, with the expected benefits of enabling more renewable energy resources to be integrated into the grid, improving service quality, increasing overall reliability of the power system, and ultimately reducing costs to both MECO and its customers.

  20. 50 CFR Figure 13 to Part 223 - Single Grid Hard TED Escape Opening

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 9 2011-10-01 2011-10-01 false Single Grid Hard TED Escape Opening 13 Figure 13 to Part 223 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE MARINE MAMMALS THREATENED MARINE AND ANADROMOUS SPECIES Pt...

  1. 50 CFR Figure 13 to Part 223 - Single Grid Hard TED Escape Opening

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 10 2012-10-01 2012-10-01 false Single Grid Hard TED Escape Opening 13 Figure 13 to Part 223 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE MARINE MAMMALS THREATENED MARINE AND ANADROMOUS SPECIES Pt...

  2. Using Python to generate AHPS-based precipitation simulations over CONUS using Amazon distributed computing

    NASA Astrophysics Data System (ADS)

    Machalek, P.; Kim, S. M.; Berry, R. D.; Liang, A.; Small, T.; Brevdo, E.; Kuznetsova, A.

    2012-12-01

    We describe how the Climate Corporation uses Python and Clojure, a language implemented on top of Java, to generate climatological forecasts for precipitation based on the Advanced Hydrologic Prediction Service (AHPS) radar-based daily precipitation measurements. A 2-year-long forecast is generated on each of the ~650,000 CONUS land-based 4-km AHPS grids by constructing 10,000 ensembles sampled from a 30-year reconstructed AHPS history for each grid. The spatial and temporal correlations between neighboring AHPS grids and the sampling of the analogues are handled by Python. The parallelization across all 650,000 CONUS grids is achieved by utilizing the MapReduce framework (http://code.google.com/edu/parallel/mapreduce-tutorial.html). Each full-scale computational run requires hundreds of nodes with up to 8 processors each on the Amazon Elastic MapReduce (http://aws.amazon.com/elasticmapreduce/) distributed computing service, resulting in 3-terabyte datasets. We further describe how we have put into production a monthly run of the simulation process at the full scale of the 4-km AHPS grids, and how the resulting terabyte-sized datasets are handled.
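    The per-grid ensemble construction described above can be sketched for a single hypothetical grid cell. The 30-year history and the idea of resampling analog years follow the abstract; the sampling scheme below is a deliberate simplification that ignores the spatial and temporal correlation handling the real system performs, and all names and sizes are illustrative.

```python
import random

def build_ensembles(history, n_members=100, horizon_years=2):
    """history: dict year -> list of daily precip values for one grid cell.
    Each ensemble member concatenates `horizon_years` randomly chosen
    analog years from the reconstructed history."""
    years = list(history)
    members = []
    for _ in range(n_members):
        picks = [random.choice(years) for _ in range(horizon_years)]
        members.append([day for y in picks for day in history[y]])
    return members

# Toy 30-year reconstructed history, 365 daily values per year.
random.seed(0)
hist = {1980 + y: [random.random() for _ in range(365)] for y in range(30)}
ens = build_ensembles(hist, n_members=50)
```

    In the production setup this per-cell sampling is the "map" step, repeated independently for each of the ~650,000 grid cells, which is what makes the problem a natural MapReduce fit.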

  3. Efficient Redundancy Techniques in Cloud and Desktop Grid Systems using MAP/G/c-type Queues

    NASA Astrophysics Data System (ADS)

    Chakravarthy, Srinivas R.; Rumyantsev, Alexander

    2018-03-01

    Cloud computing continues to prove its flexibility and versatility in helping industries, businesses, and academia by providing needed computing capacity. As an important alternative to cloud computing, desktop grids allow the idle computer resources of an enterprise or community to be utilized by means of a distributed computing system, providing a more secure and controllable environment with lower operational expenses. Further, both cloud computing and desktop grids are meant to optimize limited resources and at the same time decrease the expected latency for users. The crucial parameter for optimization in both cloud computing and desktop grids is the level of redundancy (replication) for service requests/workunits. In this paper we study optimal replication policies by considering three variations of fork-join systems in the context of a multi-server queueing system with a versatile point process for the arrivals. For service times we consider phase-type distributions as well as shifted exponential and Weibull distributions. We use both analytical and simulation approaches in our analysis and report some interesting qualitative results.
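    The latency side of the replication trade-off studied above can be illustrated with a small Monte-Carlo sketch: each request is sent to r independent servers and the earliest finished copy is kept (a cancel-on-completion fork-join policy). The exponential service times and this particular policy are simplifying assumptions for illustration, not the paper's MAP/PH-type model.

```python
import random

def mean_latency(replication, n_requests=20000, rate=1.0, seed=42):
    """Mean response time when each request runs on `replication`
    independent servers with Exp(rate) service times and only the
    first finished copy is kept."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_requests):
        total += min(rng.expovariate(rate) for _ in range(replication))
    return total / n_requests

# Min of r i.i.d. Exp(1) variables is Exp(r), so the mean should
# shrink roughly as 1/r (ignoring the extra load replication creates).
lat1 = mean_latency(1)
lat3 = mean_latency(3)
```

    The caveat in the comment is the heart of the optimization problem: replication cuts per-request latency but multiplies offered load, so the optimal replication level balances the two, which is what the queueing analysis quantifies.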

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knirsch, Fabian; Engel, Dominik; Neureiter, Christian

    In a smart grid, data and information are transported, transmitted, stored, and processed with various stakeholders having to cooperate effectively. Furthermore, personal data is the key to many smart grid applications and therefore privacy impacts have to be taken into account. For an effective smart grid, well integrated solutions are crucial and for achieving a high degree of customer acceptance, privacy should already be considered at design time of the system. To assist system engineers in early design phase, frameworks for the automated privacy evaluation of use cases are important. For evaluation, use cases for services and software architectures need to be formally captured in a standardized and commonly understood manner. In order to ensure this common understanding for all kinds of stakeholders, reference models have recently been developed. In this paper we present a model-driven approach for the automated assessment of such services and software architectures in the smart grid that builds on the standardized reference models. The focus of qualitative and quantitative evaluation is on privacy. For evaluation, the framework draws on use cases from the University of Southern California microgrid.

  5. Synergy Between Archives, VO, and the Grid at ESAC

    NASA Astrophysics Data System (ADS)

    Arviset, C.; Alvarez, R.; Gabriel, C.; Osuna, P.; Ott, S.

    2011-07-01

    Over the years, in support of the Science Operations Centers at ESAC, we have set up two Grid infrastructures. These have been built: 1) to facilitate daily research for scientists at ESAC, 2) to provide high computing capabilities for project data processing pipelines (e.g., Herschel), 3) to support science operations activities (e.g., calibration monitoring). Furthermore, closer collaboration between the science archives, the Virtual Observatory (VO) and data processing activities has led to another Grid use case: the Remote Interface to XMM-Newton SAS Analysis (RISA). This web service-based system allows users to launch SAS tasks transparently on the Grid, save results on http-based storage, and visualize them through VO tools. This paper presents real and operational use cases of Grid usage in these contexts.

  6. Economic performance and sustainability of HealthGrids: evidence from two case studies.

    PubMed

    Dobrev, Alexander; Scholz, Stefan; Zegners, Dainis; Stroetmann, Karl A; Semler, Sebastian C

    2009-01-01

    Financial sustainability is not a driving force of HealthGrids today, as a previous desk research survey of 22 international HealthGrid projects has shown. The majority of applications are project based, which puts a time limit not only on funding but also on goals and objectives. Given this situation, we analysed two initiatives, WISDOM and MammoGrid, from an economic, cost-benefit perspective, and evaluated the potential for these initiatives to be brought to market as self-financing, sustainable services. We conclude that the topic of HealthGrids should be pursued further because of the substantial potential for net gains to society at large. The most significant hurdle to sustainability - the discrepancy between social benefits and private incentives - can be solved by sound business models.

  7. An orthogonal ferromagnetically coupled tetracopper(II) 2 × 2 homoleptic grid supported by μ-O4 bridges and its DFT study.

    PubMed

    Roy, Somnath; Mandal, Tarak Nath; Barik, Anil Kumar; Pal, Sachindranath; Butcher, Ray J; El Fallah, Mohamed Salah; Tercero, Javier; Kar, Susanta Kumar

    2007-03-28

    A pyrazole based ditopic ligand (PzOAP), prepared by the reaction between 5-methylpyrazole-3-carbohydrazide and the methyl ester of imino picolinic acid, reacts with Cu(NO3)2.6H2O to form a self-assembled, ferromagnetically coupled, alkoxide bridged tetranuclear homoleptic Cu(II) square grid-complex [Cu4(PzOAP)4(NO3)2](NO3)2.4H2O (1) with a central Cu4[μ-O4] core, involving four ligand molecules. In the Cu4[μ-O4] core, two of the four copper centers are penta-coordinated and the remaining two are hexa-coordinated. In each case of hexa-coordination, the sixth position is occupied by the nitrate ion. Complex 1 has been characterized structurally and magnetically. Although the Cu-O-Cu bridge angles are large (138-141 degrees) and the Cu-Cu distances short (4.043-4.131 Å), which would be expected to favor antiferromagnetic exchange interactions within the grid, an intramolecular ferromagnetic exchange (J = 5.38 cm(-1)) is present with an S = 4/2 magnetic ground state. This ferromagnetic interaction follows from the magnetic orbitals (d(x2-y2)) of the bridged metal centers lying almost orthogonally to one another. The exchange pathway parameters have been evaluated from density functional calculations.

  8. Development And Testing Of The Inertial Electrostatic Confinement Diffusion Thruster

    NASA Technical Reports Server (NTRS)

    Becnel, Mark D.; Polzin, Kurt A.

    2013-01-01

    The Inertial Electrostatic Confinement (IEC) diffusion thruster is an experiment in active development that takes advantage of a physical phenomenon that occurs during operation of an IEC device. The IEC device has been proposed as a fusion reactor design that relies on traditional electrostatic ion acceleration and is typically arranged in a spherical geometry. The design incorporates two radially-symmetric spherical electrodes. Often the inner electrode utilizes a grid of wire shaped in a sphere with a radius 15 to 50 percent of the radius of the outer electrode. The inner electrode traditionally has 90 percent or more transparency to allow particles (ions) to pass to the center of the spheres and collide/recombine in the dense plasma core at r=0. When operating the IEC, an unsteady plasma leak is typically observed passing out one of the gaps in the lattice grid of the inner electrode. The IEC diffusion thruster is based upon the idea that this plasma leak can be used for propulsive purposes. The IEC diffusion thruster utilizes the radial symmetry found in the IEC device. A cylindrical configuration is employed here as it will produce a dense core of plasma the length of the cylindrical grid while promoting the plasma leak to exhaust through an electromagnetic nozzle at one end of the apparatus. A proof-of-concept IEC diffusion thruster is operational and under testing using argon as propellant (Figure 1).

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdullayev, A. M.; Kulish, G. V.; Slyeptsov, O.

    The evaluation of WWER-1000 Westinghouse fuel performance was done using the results of post-irradiation examinations of six LTAs and the WFA reload batches that have operated normally in mixed cores at South-Ukraine NPP, Unit-3 and Unit-2. The data on WFA/LTA elongation, FR growth and bow, WFA bow and twist, RCCA drag force and drag work, RCCA drop time, FR cladding integrity, as well as the visual observation of fuel assemblies obtained during the 2006-2012 outages, was utilized. The analysis of the measured data showed that assembly growth, FR bow, irradiation growth, and Zr-1%Nb grid and ZIRLO cladding corrosion lie within the design limits. The RCCA drop time measured for the LTA/WFA is about 1.9 s at BOC and practically does not change at EOC. The measured WFA bow and twist, and data of drag work on RCCA insertion, showed that the WFA deformation in the mixed core is mostly controlled by the distortion of Russian FAs (TVSA) having the higher lateral stiffness. The visual inspection of WFAs carried out during the 2012 outages revealed some damage to the Zr-1%Nb grid outer strap for some WFAs during the loading sequence. The performed fundamental investigations allowed identifying the root cause of grid outer strap deformation and proposing WFA design modifications for preventing damage to the SG at a 225 kg handling trip limit.

  10. Energy Management and Optimization Methods for Grid Energy Storage Systems

    DOE PAGES

    Byrne, Raymond H.; Nguyen, Tu A.; Copp, David A.; ...

    2017-08-24

    Today, the stability of the electric power grid is maintained through real time balancing of generation and demand. Grid scale energy storage systems are increasingly being deployed to provide grid operators the flexibility needed to maintain this balance. Energy storage also imparts resiliency and robustness to the grid infrastructure. Over the last few years, there has been a significant increase in the deployment of large scale energy storage systems. This growth has been driven by improvements in the cost and performance of energy storage technologies and the need to accommodate distributed generation, as well as incentives and government mandates. Energy management systems (EMSs) and optimization methods are required to effectively and safely utilize energy storage as a flexible grid asset that can provide multiple grid services. The EMS needs to be able to accommodate a variety of use cases and regulatory environments. In this paper, we provide a brief history of grid-scale energy storage, an overview of EMS architectures, and a summary of the leading applications for storage. These serve as a foundation for a discussion of EMS optimization methods and design.
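    One of the simplest storage services an EMS can schedule is energy arbitrage. The sketch below computes an upper bound on one day's arbitrage profit by pairing the cheapest charge hours with the most expensive discharge hours; the prices, battery size, and efficiency are illustrative assumptions, and ignoring intra-day ordering makes this a bound rather than a feasible schedule (real EMSs use linear or dynamic programming formulations for that).

```python
def arbitrage_profit(prices, capacity_hours=4, efficiency=0.85):
    """Upper-bound daily arbitrage profit ($/MWh prices) for a 1 MW
    battery that charges for `capacity_hours` and discharges the same
    energy, with round-trip efficiency applied on discharge.
    Intra-day ordering constraints are ignored."""
    buy = sorted(prices)[:capacity_hours]                 # cheapest hours
    sell = sorted(prices, reverse=True)[:capacity_hours]  # priciest hours
    return efficiency * sum(sell) - sum(buy)

# Illustrative 24-hour day-ahead price curve ($/MWh).
hourly = [30, 28, 25, 24, 26, 32, 45, 60, 55, 48, 42, 40,
          38, 36, 37, 41, 50, 70, 85, 80, 65, 50, 40, 33]
profit = arbitrage_profit(hourly)
```

    Even this toy version shows why round-trip efficiency matters to the economics: the spread between charge and discharge prices must exceed the efficiency loss before any cycle is worth running.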

  11. Energy Management and Optimization Methods for Grid Energy Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrne, Raymond H.; Nguyen, Tu A.; Copp, David A.

    Today, the stability of the electric power grid is maintained through real time balancing of generation and demand. Grid scale energy storage systems are increasingly being deployed to provide grid operators the flexibility needed to maintain this balance. Energy storage also imparts resiliency and robustness to the grid infrastructure. Over the last few years, there has been a significant increase in the deployment of large scale energy storage systems. This growth has been driven by improvements in the cost and performance of energy storage technologies and the need to accommodate distributed generation, as well as incentives and government mandates. Energy management systems (EMSs) and optimization methods are required to effectively and safely utilize energy storage as a flexible grid asset that can provide multiple grid services. The EMS needs to be able to accommodate a variety of use cases and regulatory environments. In this paper, we provide a brief history of grid-scale energy storage, an overview of EMS architectures, and a summary of the leading applications for storage. These serve as a foundation for a discussion of EMS optimization methods and design.

  12. Deep learning for classification of islanding and grid disturbance based on multi-resolution singular spectrum entropy

    NASA Astrophysics Data System (ADS)

    Li, Tie; He, Xiaoyang; Tang, Junci; Zeng, Hui; Zhou, Chunying; Zhang, Nan; Liu, Hui; Lu, Zhuoxin; Kong, Xiangrui; Yan, Zheng

    2018-02-01

    Because the identification of islanding is easily confounded by grid disturbances, an island detection device may make misjudgments, with the consequence that the photovoltaic system is taken out of service. The detection device must therefore be able to distinguish islanding from grid disturbance. In this paper, the concept of deep learning is introduced into the classification of islanding and grid disturbance for the first time. A novel deep learning framework is proposed to detect and classify islanding or grid disturbance. The framework is a hybrid of wavelet transformation, multi-resolution singular spectrum entropy, and a deep learning architecture. As a signal processing method applied after the wavelet transformation, multi-resolution singular spectrum entropy combines multi-resolution analysis and spectrum analysis with entropy as the output, from which we can extract the intrinsic features that differ between islanding and grid disturbance. With the features extracted, deep learning is utilized to classify islanding and grid disturbance. Simulation results indicate that the method achieves its goal with high accuracy, so that mistaken withdrawal of the photovoltaic system from the power grid can be avoided.
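    The feature-extraction stage can be sketched with a numpy-only toy: singular spectrum entropy is the Shannon entropy of the normalized singular values of a signal's trajectory (Hankel) matrix, computed here at several resolutions. A simple Haar-style averaging stands in for the paper's wavelet transform, and all parameters (window length, number of levels, test signals) are illustrative assumptions.

```python
import numpy as np

def singular_spectrum_entropy(signal, window=8):
    """Shannon entropy of the normalized singular values of the
    trajectory (Hankel) matrix built from `signal`."""
    x = np.asarray(signal, dtype=float)
    n = len(x) - window + 1
    traj = np.array([x[i:i + window] for i in range(n)])
    s = np.linalg.svd(traj, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def haar_approx(x):
    """One-level pairwise averaging (a crude stand-in for a wavelet DWT)."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / 2.0

def multires_ss_entropy(signal, levels=3, window=8):
    """Entropy feature vector: one value per resolution level."""
    feats, x = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        feats.append(singular_spectrum_entropy(x, window))
        x = haar_approx(x)
    return feats

t = np.linspace(0, 1, 256, endpoint=False)
clean = np.sin(2 * np.pi * 50 * t)                              # narrowband tone
noisy = clean + np.random.default_rng(0).normal(0, 1, t.size)   # + broadband noise
f_clean = multires_ss_entropy(clean)
f_noisy = multires_ss_entropy(noisy)
```

    The intuition carried over from the paper is that an ordered, narrowband signal concentrates its singular spectrum in a few components (low entropy), while disturbance-like broadband content spreads it out (high entropy), giving a classifier separable features per resolution level.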

  13. The GENIUS grid portal and the robot certificates to perform phylogenetic analysis on large scale: a success story from the International LIBI project

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; Rocca, Giuseppe La; Maggi, Giorgio Pietro; Milanesi, Luciano; Vicario, Saverio

    This paper describes the solution proposed by INFN to allow users who do not own a personal digital certificate, and therefore do not belong to any specific Virtual Organization (VO), to access Grid infrastructures via the GENIUS Grid portal enabled with robot certificates. Robot certificates, also known as portal certificates, are associated with a specific application that the user wants to share with the whole Grid community and have recently been introduced by the EUGridPMA (European Policy Management Authority for Grid Authentication) to perform automated tasks on Grids on behalf of users. They have proven extremely useful for automating grid service monitoring, data processing production, distributed data collection systems, etc. In this paper, robot certificates are used to allow bioinformaticians involved in the Italian LIBI project to perform large-scale phylogenetic analyses. The distributed environment set up in this work strongly simplifies grid access for occasional users and represents a valuable step toward widening the community of users.

  14. Market Segmentation for Information Services.

    ERIC Educational Resources Information Center

    Halperin, Michael

    1981-01-01

    Discusses the advantages and limitations of market segmentation as strategy for the marketing of information services made available by nonprofit organizations, particularly libraries. Market segmentation is defined, a market grid for libraries is described, and the segmentation of information services is outlined. A 16-item reference list is…

  15. Post-Test Analysis of the Deep Space One Spare Flight Thruster Ion Optics

    NASA Technical Reports Server (NTRS)

    Anderson, John R.; Sengupta, Anita; Brophy, John R.

    2004-01-01

    The Deep Space 1 (DS1) spare flight thruster (FT2) was operated for 30,352 hours during the extended life test (ELT). The test was performed to validate the service life of the thruster, to study known life-limiting modes, and to identify unknown ones. Several of the known life-limiting modes involve the ion optics system. These include loss of structural integrity of either the screen grid or the accelerator grid due to sputter erosion from energetic ions striking the grid; sputter-erosion enlargement of the accelerator grid apertures to the point where the accelerator grid power supply can no longer prevent electron backstreaming; unclearable shorting between the grids caused by flakes of sputtered material; and rogue-hole formation due to flakes of material defocusing the ion beam. Grid-gap decrease, which increases the probability of electron backstreaming and of arcing between the grids, was identified as an additional life-limiting mechanism after the test. A combination of accelerator grid aperture enlargement and grid-gap decrease resulted in the inability to prevent electron backstreaming at full power at 26,000 hours of the ELT. Pits had eroded completely through the accelerator grid webbing, and grooves had penetrated through 45% of the grid thickness in the center of the grid. The upstream surface of the screen grid eroded in a chamfered pattern around the holes in the central portion of the grid. Sputter-deposited material from the accelerator grid adhered to the downstream surface of the screen grid and did not spall to form flakes. Although a small amount of sputter-deposited material protruded into the screen grid apertures, no rogue holes were found after the ELT.

  16. The GILDA t-Infrastructure: grid training activities in Africa and future opportunities

    NASA Astrophysics Data System (ADS)

    Ardizzone, V.; Barbera, R.; Ciuffo, L.; Giorgio, E.

    2009-04-01

    Scientists, educators, and students from many parts of the world are unable to take advantage of ICT because the digital divide is growing and prevents less developed countries from exploiting its benefits. Instead of becoming more empowered and involved in worldwide developments, they are becoming increasingly marginalised as the worlds of education and science become increasingly Internet-dependent. For almost five years, the Grid INFN Laboratory for Dissemination Activities (GILDA) has spread awareness of Grid technology to a large audience, training new communities and fostering new organisations to provide resources. The knowledge dissemination process guided by these training activities is a key factor in ensuring that all users can fully understand the characteristics of the Grid services offered by large existing e-Infrastructures. GILDA is becoming a "de facto" standard among training infrastructures (t-Infrastructures) and has been adopted by many grid projects worldwide. In this contribution we report on the latest status of GILDA services and on the training activities recently carried out in sub-Saharan Africa (Malawi and South Africa). Particular care is devoted to showing how GILDA can be "cloned" to satisfy both the education and research demands of African organisations. The opportunities to benefit from GILDA in the framework of the EPIKH project, as well as the plans of the European Commission on grid training and education for the 2010-2011 calls of its 7th Framework Programme, are presented and discussed.

  17. A regional analysis of cloudy mean spherical albedo over the marine stratocumulus region and the tropical Atlantic Ocean. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Ginger, Kathryn M.

    1993-01-01

    Since clouds are the largest variable in Earth's radiation budget, it is critical to determine both the spatial and temporal characteristics of their radiative properties. The relationships between cloud properties and cloud fraction are studied in order to supplement grid-scale parameterizations. The satellite data used are three-hourly ISCCP (International Satellite Cloud Climatology Project) and monthly ERBE (Earth Radiation Budget Experiment) data on a 2.5 deg x 2.5 deg latitude-longitude grid. Mean cloud spherical albedo, the mean optical depth distribution, and cloud fraction are examined and compared off the coast of California and in the mid-tropical Atlantic for July 1987 and 1988. Individual grid boxes and spatial averages over several grid boxes are compared to Coakley's theory of reflection for uniform and broken layered cloud and to Kedem et al.'s finding that rainfall volume and the fractional area of rain in convective systems are linearly related. Kedem's hypothesis can be expressed in terms of cloud properties: the total volume of liquid in a box is a linear function of cloud fraction. Results for the marine stratocumulus regime indicate that albedo is often invariant for cloud fractions of 20% to 80%. Coakley's satellite model of small and large clouds with cores (1 km) and edges (100 m) is consistent with this observation. The cores maintain high liquid water concentrations and large droplets, while the edges contain low liquid water concentrations and small droplets; large clouds are just a collection of cores. The mean optical depth (TAU) distributions support this observation, with TAU values of 3.55 to 9.38 favored across all cloud fractions. From these results, a method based upon Kedem et al.'s theory is proposed to separate the cloud fraction and liquid water path (LWP) calculations in a general circulation model (GCM). In terms of spatial averaging, a linear relationship between albedo and cloud fraction is observed.
For tropical locations outside the Intertropical Convergence Zone (ITCZ), the results of cloud fraction and albedo spatial averaging followed those of the stratus boxes containing few overcast scenes; both the ideas of Coakley and of Kedem et al. apply. Within the ITCZ, the grid boxes tended to have the same statistical properties as stratus boxes containing many overcast scenes. Because different dynamical forcing mechanisms are present, it is difficult to devise a method for determining subgrid-scale variations. Neither the theory proposed by Kedem et al. nor that of Coakley works well for the boxes with numerous overcast scenes.
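    The Kedem et al.-style linearity invoked above can be made concrete with a small numerical illustration: if the total liquid in a grid box is linear in cloud fraction with a roughly constant in-cloud liquid water path, a least-squares fit of box-total LWP against cloud fraction recovers that single in-cloud value. All numbers below are synthetic stand-ins, not ISCCP/ERBE data.

```python
import numpy as np

rng = np.random.default_rng(1)
cloud_fraction = rng.uniform(0.2, 0.8, size=50)     # the 20-80% regime
in_cloud_lwp = 120.0                                # g/m^2, assumed constant
# Box-total LWP is linear in cloud fraction, plus observational noise.
box_lwp = in_cloud_lwp * cloud_fraction + rng.normal(0, 2.0, size=50)

slope, intercept = np.polyfit(cloud_fraction, box_lwp, 1)
# The slope estimates the (invariant) in-cloud LWP; the intercept
# should be near zero if the linear hypothesis holds.
```

This separation of a fitted in-cloud value from cloud fraction is the essence of the proposed method for decoupling cloud-fraction and LWP calculations in a GCM.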

  18. A Regional Analysis of Cloudy Mean Spherical Albedo over the Marine Stratocumulus Region and the Tropical Atlantic Ocean

    NASA Technical Reports Server (NTRS)

    Ginger, Kathryn M.

    1993-01-01

    Since clouds are the largest variable in Earth's radiation budget, it is critical to determine both the spatial and temporal characteristics of their radiative properties. This study examines the relationships between cloud properties and cloud fraction in order to supplement grid-scale parameterizations. The satellite data used in this study are three-hourly ISCCP (International Satellite Cloud Climatology Project) and monthly ERBE (Earth Radiation Budget Experiment) data on a 2.5 deg x 2.5 deg latitude-longitude grid. Mean cloud spherical albedo, the mean optical depth distribution, and cloud fraction are examined and compared off the coast of California and in the mid-tropical Atlantic for July 1987 and 1988. Individual grid boxes and spatial averages over several grid boxes are compared to Coakley's (1991) theory of reflection for uniform and broken layered cloud and to Kedem et al.'s (1990) finding that rainfall volume and the fractional area of rain in convective systems are linearly related. Kedem's hypothesis can be expressed in terms of cloud properties: the total volume of liquid in a box is a linear function of cloud fraction. Results for the marine stratocumulus regime indicate that albedo is often invariant for cloud fractions of 20% to 80%. Coakley's satellite model of small and large clouds with cores (1 km) and edges (100 m) is consistent with this observation. The cores maintain high liquid water concentrations and large droplets, while the edges contain low liquid water concentrations and small droplets; large clouds are just a collection of cores. The mean optical depth (TAU) distributions support this observation, with TAU values of 3.55 to 9.38 favored across all cloud fractions. From these results, a method based upon Kedem et al.'s theory is proposed to separate the cloud fraction and liquid water path (LWP) calculations in a general circulation model (GCM).
In terms of spatial averaging, a linear relationship between albedo and cloud fraction is observed. For tropical locations outside the Intertropical Convergence Zone (ITCZ), the results of cloud fraction and albedo spatial averaging followed those of the stratus boxes containing few overcast scenes; both the ideas of Coakley and of Kedem et al. apply. Within the ITCZ, the grid boxes tended to have the same statistical properties as stratus boxes containing many overcast scenes. Because different dynamical forcing mechanisms are present, it is difficult to devise a method for determining subgrid-scale variations. Neither the theory proposed by Kedem et al. nor that of Coakley works well for the boxes with numerous overcast scenes.

  19. Robust Control of Wide Bandgap Power Electronics Device Enabled Smart Grid

    NASA Astrophysics Data System (ADS)

    Yao, Tong

    In recent years, wide bandgap (WBG) devices have enabled power converters with higher power density and higher efficiency. At the same time, smart grid technologies are maturing thanks to new battery and computer technology. In the near future, the two technologies will form the next generation of smart grid enabled by WBG devices. This dissertation deals with two applications: silicon carbide (SiC) devices used for a medium-voltage-level interface (7.2 kV to 240 V) and gallium nitride (GaN) devices used for a low-voltage-level interface (240 V/120 V). A 20 kW solid state transformer (SST) is designed with a 6 kHz switching-frequency SiC rectifier. Three robust control design methods are then proposed, one for each of its smart grid operation modes. In grid-connected mode, a new LCL filter design method is proposed that considers grid voltage THD, grid current THD, and robust stability of the current regulation loop with respect to changes in grid impedance. In grid-islanded mode, the μ-synthesis method combined with variable structure control is used to design a robust controller for grid voltage regulation. For grid emergency mode, a multivariable controller designed using the H-infinity synthesis method is proposed for accurate power sharing. A controller-hardware-in-the-loop (CHIL) testbed for a 7-SST system is set up with a Real Time Digital Simulator (RTDS). A real TMS320F28335 DSP and Spartan-6 FPGA control board is used to interface a switching-model SST in RTDS, and the proposed control methods are tested. For the low-voltage-level application, a 3.3 kW smart grid hardware platform is built with three GaN inverters. The inverters are designed with GaN devices characterized using the proposed multi-function double pulse tester. Each inverter is controlled by an onboard TMS320F28379D dual-core DSP with a 200 kHz sampling frequency, and is tested to process 2.2 kW of power with an overall efficiency of 96.5% at room temperature.
The smart grid monitoring system and fault interrupt devices (FID) based on the Arduino Mega2560 are built and tested. The smart grid cooperates with the GaN inverters through CAN bus communication. Finally, the three-GaN-inverter smart grid achieves smooth transition from grid-connected to islanded mode.

  20. Publications - GMC 367 | Alaska Division of Geological & Geophysical

    Science.gov Websites

    Bibliographic Reference: U.S. Minerals Management Service and Core Laboratories, 2009, Sidewall core analyses. Publication Date: Aug 2009. Publisher: Alaska Division of Geological & Geophysical Surveys.

  1. Making the case for high temperature low sag (htls) overhead transmission line conductors

    NASA Astrophysics Data System (ADS)

    Banerjee, Koustubh

    The future grid will face challenges in meeting increased power demand from consumers. Various solutions have been studied to address this issue. One alternative for realizing increased power flow in the grid is to use High Temperature Low Sag (HTLS) conductors, since they fulfill the essential criteria of low sag and good material performance with temperature. HTLS conductors like Aluminum Conductor Composite Reinforced (ACCR) and Aluminum Conductor Carbon Composite (ACCC) are expected to face high operating temperatures of 150-200 degrees Celsius in order to achieve the desired increase in power flow. It is therefore imperative to characterize the material performance of these conductors with temperature. The work presented in this thesis addresses the characterization of carbon-composite-core and metal-matrix-core HTLS conductors. The thesis focuses on the variation of the tensile strength of the carbon composite core with temperature and on the temperature rise of HTLS conductors due to fault currents cleared by backup protection. Dynamic Mechanical Analysis (DMA) was used to quantify the loss in storage modulus of carbon composite cores with temperature; it has previously been shown in the literature that storage modulus is correlated with the tensile strength of the composite. Current-temperature relationships of HTLS conductors were determined using the IEEE 738-2006 standard, and the temperature rise of these conductors due to fault currents was simulated. All simulations were performed using the Microsoft Visual C++ suite. Tensile testing of the metal matrix core was also performed. Results of DMA on carbon composite cores show that the storage modulus, and hence the tensile strength, decreases rapidly in the temperature range of intended use. DMA on composite cores subjected to heat treatment was conducted to investigate any changes in the variation of the storage modulus curves.
The experiments also indicate that carbon composite cores subjected to temperatures at or above 250 degrees Celsius can suffer permanent loss of mechanical properties, including tensile strength. The fault-current temperature analysis of carbon-composite-based conductors reveals that fault currents eventually cleared by backup protection, in the event of primary protection failure, can damage the fiber-matrix interface.
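    The current-temperature relationship at the core of the IEEE 738 calculation is a steady-state heat balance: Joule heating plus absorbed solar heating equals convective plus radiative cooling. The sketch below solves a heavily simplified version by bisection; the real IEEE 738-2006 convection and solar correlations are more involved, and every coefficient here (resistance, linearized convection, solar gain) is a hypothetical placeholder, not a value from the thesis or the standard.

```python
import math

def conductor_temperature(current_a, t_ambient=40.0):
    """Solve the steady-state balance I^2*R(T) + q_solar = q_conv + q_rad
    for conductor temperature T (deg C) by bisection. Coefficients are
    illustrative placeholders, not IEEE 738 correlation values."""
    r_ref, alpha = 8.0e-5, 0.0039        # ohm/m at 25 C, temp. coefficient
    q_solar = 15.0                       # W/m absorbed solar heating (assumed)
    h_conv = 0.8                         # W/(m*K), linearized convection (assumed)
    emissivity, diameter = 0.8, 0.028    # typical-looking values (assumed)
    sigma = 5.67e-8                      # Stefan-Boltzmann constant

    def imbalance(t):
        q_gen = current_a**2 * r_ref * (1 + alpha * (t - 25.0)) + q_solar
        q_conv = h_conv * (t - t_ambient)
        q_rad = emissivity * sigma * math.pi * diameter * (
            (t + 273.15)**4 - (t_ambient + 273.15)**4)
        return q_gen - q_conv - q_rad    # >0 means conductor still heating

    lo, hi = t_ambient, 500.0
    for _ in range(60):                  # bisection on the monotone imbalance
        mid = 0.5 * (lo + hi)
        if imbalance(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Even this toy balance reproduces the qualitative behavior the thesis relies on: temperature rises steeply with current, which is why fault currents held through a backup-protection clearing time can push a composite core past its damage threshold.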

  2. Optimal Padding for the Two-Dimensional Fast Fourier Transform

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.; Aronstein, David L.; Smith, Jeffrey S.

    2011-01-01

    One-dimensional Fast Fourier Transform (FFT) operations work fastest on grids whose size is a power of two. Because of this, padding grids that are not already sized to a power of two up to the next highest power of two can speed up operations. While this works well for one-dimensional grids, it does not work well for two-dimensional grids; for a two-dimensional grid, certain pad sizes work better than others. Therefore, the need exists for a generalized strategy for determining optimal pad sizes. There are three steps in the FFT algorithm: first, perform a one-dimensional transform on each row of the grid; second, transpose the resulting matrix; third, perform a one-dimensional transform on each row of the resulting grid. Steps one and three both benefit from padding each row to the next highest power of two, but the second step needs a novel approach. An algorithm was developed that strikes a balance between grid pad sizes with small prime factors (which are optimal for one-dimensional operations) and those with large prime factors (which are optimal for two-dimensional operations). The algorithm optimizes based on average run times and is not fine-tuned for any specific application. It increases how often processor-requested data is found in the set-associative processor cache; cache retrievals are 4-10 times faster than conventional memory retrievals. The tested implementation of the algorithm resulted in faster execution times on all platforms tested, but with varying grid sizes, because different computer architectures process commands differently. The test grid was 512x512. Using a 540x540 grid on a Pentium V processor, the code ran 30 percent faster. On a PowerPC, a 256x256 grid worked best. A Core2Duo computer preferred either a 1040x1040 (15 percent faster) or a 1008x1008 (30 percent faster) grid.
There are many industries that can benefit from this algorithm, including optics, image-processing, signal-processing, and engineering applications.
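    The padding idea can be sketched as a search for the smallest grid size at or above n whose prime factors are all small (here at most 5, a common choice for FFT libraries; this is a generic illustration, not the article's tuned algorithm, which also weighs cache behavior).

```python
def next_smooth_size(n):
    """Smallest integer >= n with no prime factor larger than 5
    (a '5-smooth' size, which FFT implementations handle efficiently)."""
    candidate = n
    while True:
        m = candidate
        for p in (2, 3, 5):
            while m % p == 0:
                m //= p
        if m == 1:          # fully factored into 2s, 3s, and 5s
            return candidate
        candidate += 1

# 512 is already a power of two, but for a 513-wide grid the next
# 5-smooth size is 540 -- the same pad size the article reports
# working well on the Pentium-class machine.
```

Note that 1008 = 2^4 * 3^2 * 7 is not 5-smooth, so the article's Core2Duo result shows that some platforms also profit from a factor of 7, which is exactly the kind of architecture dependence the article's run-time-based optimizer captures and this sketch does not.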

  3. The Determination of Jurisdiction in Grid and Cloud Service Level Agreements

    NASA Astrophysics Data System (ADS)

    Parrilli, Davide Maria

    Service Level Agreements in Grid and Cloud scenarios can be a source of disputes particularly in case of breach of the obligations arising under them. It is then important to determine where parties can litigate in relation with such agreements. The paper deals with this question in the peculiar context of the European Union, and so taking into consideration Regulation 44/2001. According to the rules on jurisdiction provided by the Regulation, two general distinctions are drawn in order to determine which (European) courts are competent to adjudicate disputes arising out of a Service Level Agreement. The former is between B2B and B2C transactions, and the latter regards contracts which provide a jurisdiction clause and contracts which do not.

  4. NREL Supercomputer Tackles Grid Challenges | News | NREL

    Science.gov Websites

    Describes how "big data" -- including imagery and large-scale simulation data -- exceeds traditional database processes, and how NREL's Peregrine supercomputer supports this work. Collaboration is key, and it is hard-wired into the ESIF's core. Photos by Dennis Schroeder, NREL.

  5. A Project-Based Cooperative Approach to Teaching Sustainable Energy Systems

    ERIC Educational Resources Information Center

    Verbic, Gregor; Keerthisinghe, Chanaka; Chapman, Archie C.

    2017-01-01

    Engineering education is undergoing a restructuring driven by the needs of an increasingly multidisciplinary engineering profession. At the same time, power systems are transitioning toward future smart grids that will require power engineers with skills outside of the core power engineering domain. Since including new topics in the existing…

  6. Data location-aware job scheduling in the grid. Application to the GridWay metascheduler

    NASA Astrophysics Data System (ADS)

    Delgado Peris, Antonio; Hernandez, Jose; Huedo, Eduardo; Llorente, Ignacio M.

    2010-04-01

    Grid infrastructures constitute nowadays the core of the computing facilities of the biggest LHC experiments. These experiments produce and manage petabytes of data per year and run thousands of computing jobs every day to process that data. It is the duty of metaschedulers to allocate the tasks to the most appropriate resources at the proper time. Our work reviews the policies that have been proposed for the scheduling of grid jobs in the context of very data-intensive applications. We indicate some of the practical problems that such models will face and describe what we consider essential characteristics of an optimum scheduling system: aim to minimise not only job turnaround time but also data replication, flexibility to support different virtual organisation requirements and capability to coordinate the tasks of data placement and job allocation while keeping their execution decoupled. These ideas have guided the development of an enhanced prototype for GridWay, a general purpose metascheduler, part of the Globus Toolkit and member of the EGEE's RESPECT program. Current GridWay's scheduling algorithm is unaware of data location. Our prototype makes it possible for job requests to set data needs not only as absolute requirements but also as functions for resource ranking. As our tests show, this makes it more flexible than currently used resource brokers to implement different data-aware scheduling algorithms.
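    The ranking idea described above, letting job requests express data needs not only as hard requirements but as a rank function over candidate resources, can be illustrated with a toy scorer. The attribute names and weights below are hypothetical, not GridWay's actual requirement/rank syntax.

```python
def rank_resources(resources, needed_files, w_data=10.0, w_slots=1.0):
    """Score each resource by how much required input data it already
    holds plus its free slots; return candidates sorted best-first."""
    def score(r):
        local = len(needed_files & r["cached_files"])  # inputs already on site
        return w_data * local + w_slots * r["free_slots"]
    return sorted(resources, key=score, reverse=True)

resources = [
    {"name": "siteA", "cached_files": {"f1", "f2"}, "free_slots": 3},
    {"name": "siteB", "cached_files": set(),        "free_slots": 12},
    {"name": "siteC", "cached_files": {"f1"},       "free_slots": 8},
]
ranked = rank_resources(resources, needed_files={"f1", "f2"})
# siteA ranks first despite having the fewest free slots, because both
# input files are already local, avoiding two replications.
```

Tuning the two weights trades job turnaround time against replication traffic, which is precisely the balance the prototype aims to make pluggable.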

  7. Investigating Anomalies in the Output Generated by the Weather Research and Forecasting (WRF) Model

    NASA Astrophysics Data System (ADS)

    Decicco, Nicholas; Trout, Joseph; Manson, J. Russell; Rios, Manny; King, David

    2015-04-01

    The Weather Research and Forecasting (WRF) model is an advanced mesoscale numerical weather prediction (NWP) model comprising two numerical cores: the Numerical Mesoscale Modeling (NMM) core and the Advanced Research WRF (ARW) core. An investigation was conducted to determine the source of erroneous output generated by the NMM core, in particular the appearance of zero values at regularly spaced grid cells in output fields and the NMM core's evident (mis)use of static geographic information at a resolution lower than the nesting level for which the core is performing computation. A brief discussion of the high-level modular architecture of the model is presented, as well as the methods used to identify the cause of these problems. Presented here are the initial results from a research grant, "A Pilot Project to Investigate Wake Vortex Patterns and Weather Patterns at the Atlantic City Airport by the Richard Stockton College of NJ and the FAA".

  8. Mediated definite delegation - Certified Grid jobs in ALICE and beyond

    NASA Astrophysics Data System (ADS)

    Schreiner, Steffen; Grigoras, Costin; Litmaath, Maarten; Betev, Latchezar; Buchmann, Johannes

    2012-12-01

    Grid computing infrastructures need to provide traceability and accounting of their users’ activity and protection against misuse and privilege escalation, where the delegation of privileges in the course of a job submission is a key concern. This work describes an improved handling of Multi-user Grid Jobs in the ALICE Grid Services. A security analysis of the ALICE Grid job model is presented with derived security objectives, followed by a discussion of existing approaches of unrestricted delegation based on X.509 proxy certificates and the Grid middleware gLExec. Unrestricted delegation has severe security consequences and limitations, most importantly allowing for identity theft and forgery of jobs and data. These limitations are discussed and formulated, both in general and with respect to an adoption in line with Multi-user Grid Jobs. A new general model of mediated definite delegation is developed, allowing a broker to dynamically process and assign Grid jobs to agents while providing strong accountability and long-term traceability. A prototype implementation allowing for fully certified Grid jobs is presented as well as a potential interaction with gLExec. The achieved improvements regarding system security, malicious job exploitation, identity protection, and accountability are emphasized, including a discussion of non-repudiation in the face of malicious Grid jobs.

  9. The vacuum platform

    NASA Astrophysics Data System (ADS)

    McNab, A.

    2017-10-01

    This paper describes GridPP’s Vacuum Platform for managing virtual machines (VMs), which has been used to run production workloads for WLCG and other HEP experiments. The platform provides a uniform interface between VMs and the sites they run at, whether the site is organised as an Infrastructure-as-a-Service cloud system such as OpenStack, or an Infrastructure-as-a-Client system such as Vac. The paper describes our experience in using this platform, in developing and operating VM lifecycle managers Vac and Vcycle, and in interacting with VMs provided by LHCb, ATLAS, ALICE, CMS, and the GridPP DIRAC service to run production workloads.

  10. FROG: Time Series Analysis for the Web Service Era

    NASA Astrophysics Data System (ADS)

    Allan, A.

    2005-12-01

    The FROG application is part of the next-generation Starlink{http://www.starlink.ac.uk} software work (Draper et al. 2005) and is released under the GNU Public License{http://www.gnu.org/copyleft/gpl.html} (GPL). Written in Java, it has been designed for the Web and Grid Service era as an extensible, pluggable tool for time series analysis and display. With an integrated SOAP server, the package's functionality is exposed to users for use in their own code and for remote use over the Grid as part of the Virtual Observatory (VO).

  11. Stochastic Characterization of Communication Network Latency for Wide Area Grid Control Applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameme, Dan Selorm Kwami; Guttromson, Ross

    This report characterizes communications network latency under various network topologies and qualities of service (QoS). The characterizations are probabilistic in nature, allowing deeper analysis of stability for Internet Protocol (IP) based feedback control systems used in grid applications. The work involves the use of Raspberry Pi computers as a proxy for a controlled resource, and an ns-3 network simulator on a Linux server to create an experimental platform (testbed) that can be used to model wide-area grid control network communications in the smart grid. The Modbus protocol is used for information transport, and the Routing Information Protocol is used for dynamic route selection within the simulated network.
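    A probabilistic latency characterization of the kind described boils down to estimating a distribution from measured round-trip samples and reading off the quantiles that a feedback-control stability analysis can consume. The samples below are synthetic stand-ins (a Gaussian base delay plus an occasional congestion tail), not data from the report.

```python
import random
import statistics

random.seed(0)
# Synthetic latency model: ~20 ms base delay, with a 5% chance of an
# additional 30-60 ms congestion-induced tail on any given message.
samples_ms = [random.gauss(20, 2) + (random.random() < 0.05) * random.uniform(30, 60)
              for _ in range(2000)]

def quantile(data, q):
    """Empirical quantile by sorting (sufficient for a sketch)."""
    s = sorted(data)
    return s[min(len(s) - 1, int(q * len(s)))]

p50 = quantile(samples_ms, 0.50)
p99 = quantile(samples_ms, 0.99)
jitter = statistics.pstdev(samples_ms)
# A control designer would check, e.g., that p99 stays below the loop's
# deadline rather than relying on the mean alone: the tail, not the
# median, is what destabilizes a networked feedback loop.
```

The large gap between the median and the 99th percentile in even this toy model is why the report's characterization is probabilistic rather than a single average latency figure.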

  12. A Grid Infrastructure for Supporting Space-based Science Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Emerging technologies for computational grid infrastructures have the potential to revolutionize the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing at less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists, and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.

  13. Grid-supported Medical Digital Library.

    PubMed

    Kosiedowski, Michal; Mazurek, Cezary; Stroinski, Maciej; Weglarz, Jan

    2007-01-01

    Storing and accessing digital medical data securely, flexibly, and efficiently is one of the key elements of delivering successful telemedical systems. To this end, the grid technologies designed and developed over recent years, and the grid infrastructures deployed with them, seem to provide an excellent opportunity to create a powerful environment capable of delivering tools and services for medical data storage, access, and processing. In this paper we present the early results of our work towards establishing a Medical Digital Library supported by grid technologies and discuss future directions of its development. These works are part of the "Telemedycyna Wielkopolska" project, which aims to develop a telemedical system for the support of regional healthcare.

  14. Controllable Grid Interface for Testing Ancillary Service Controls and Fault Performance of Utility-Scale Wind Power Generation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gevorgian, Vahan; Koralewicz, Przemyslaw; Wallen, Robb

    The rapid expansion of wind power has led many transmission system operators to demand that modern wind power plants comply with strict interconnection requirements. Such requirements involve various aspects of wind power plant operation, including fault ride-through and power quality performance as well as the provision of ancillary services to enhance grid reliability. During recent years, the National Renewable Energy Laboratory (NREL) of the U.S. Department of Energy has developed a new, groundbreaking testing apparatus and methodology to test and demonstrate many existing and future advanced controls for wind generation (and other renewable generation technologies) at the multimegawatt scale and medium-voltage levels. This paper describes the capabilities and control features of NREL's 7-MVA power electronic grid simulator (also called a controllable grid interface, or CGI), which enables testing many active and reactive power control features of modern wind turbine generators -- including inertial response, primary and secondary frequency response, and voltage regulation -- under a controlled, medium-voltage grid environment. In particular, this paper focuses on the specifics of testing the balanced and unbalanced fault ride-through characteristics of wind turbine generators under simulated strong and weak medium-voltage grid conditions. In addition, this paper provides insights on the power hardware-in-the-loop feature implemented in the CGI to emulate (in real time) the conditions that might exist in various types of electric power systems under normal operations and/or contingency scenarios. Using actual test examples and simulation results, this paper describes the value of the CGI as an ultimate model-validation tool for all types of 'grid-friendly' controls for wind generation.

  15. Balancing Conflicting Requirements for Grid and Particle Decomposition in Continuum-Lagrangian Solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, Hariswaran; Grout, Ray

    2015-10-30

    The load balancing strategies for hybrid solvers that couple a grid-based partial differential equation solution with particle tracking are presented in this paper. A typical Message Passing Interface (MPI) parallelization of grid-based solves uses a spatial domain decomposition, while particle tracking is primarily done using one of two techniques. The first is to distribute the particles to the MPI ranks that own the grid cells in which they reside; the other is to share the particles equally among all ranks, irrespective of their spatial location. The former technique provides spatial locality for field interpolation but cannot ensure load balance in terms of the number of particles, which is achieved by the latter. The two techniques are compared for a case of particle tracking in a homogeneous isotropic turbulence box as well as a turbulent jet case. We performed a strong scaling study on more than 32,000 cores, which results in particle densities representative of anticipated exascale machines. The use of alternative implementations of MPI collectives and efficient load equalization strategies are studied to reduce data communication overheads.
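    The trade-off between the two particle-distribution strategies can be shown schematically with the standard load-imbalance ratio, max load over mean load. The per-rank particle counts below are synthetic, merely evoking a flow (like a turbulent jet) that concentrates particles in a few ranks' subdomains.

```python
def imbalance(loads):
    """Standard load-imbalance metric: max load / mean load (1.0 is perfect)."""
    return max(loads) / (sum(loads) / len(loads))

# Particles counted in each rank's spatial subdomain (synthetic; a jet
# concentrates particles in the regions owned by a few ranks).
owner_loads = [900, 850, 120, 80, 30, 20]     # owner-rank strategy
n_ranks = len(owner_loads)
total = sum(owner_loads)
even_loads = [total // n_ranks + (1 if r < total % n_ranks else 0)
              for r in range(n_ranks)]        # equal-share strategy

# The owner strategy keeps field interpolation local but is badly
# imbalanced; the equal-share strategy balances particle counts at the
# cost of communicating remote field data for interpolation.
```

In this toy case the owner strategy's worst rank carries roughly 2.7x the mean load while the equal-share strategy is essentially perfectly balanced, which is exactly the tension the paper's strong-scaling study quantifies.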

  16. SLA-aware differentiated QoS in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Agrawal, Anuj; Vyas, Upama; Bhatia, Vimal; Prakash, Shashi

    2017-07-01

    The quality of service (QoS) offered by optical networks can be improved by accurate provisioning of the service level specifications (SLSs) included in the service level agreement (SLA). The large number of users coexisting in the network require different services. Thus, a pragmatic network needs to offer differentiated QoS to a variety of users according to the SLA contracted for different services at varying costs. In conventional wavelength division multiplexed (WDM) optical networks, service differentiation is feasible only for a limited number of users because of their fixed-grid structure. Newly introduced flex-grid based elastic optical networks (EONs) are more adaptive to traffic requirements than WDM networks because of the flexibility in their grid structure. Thus, we propose an efficient SLA provisioning algorithm with improved QoS for these flex-grid EONs empowered by optical orthogonal frequency division multiplexing (O-OFDM). The proposed algorithm, called SLA-aware differentiated QoS (SADQ), employs differentiation at the levels of routing, spectrum allocation, and connection survivability. SADQ aims to accurately provision the SLA using such multilevel differentiation, with the objective of improving spectrum utilization from the network operator's perspective. SADQ is evaluated for three different classes of service (CoSs) under various traffic demand patterns and for different ratios of the number of requests belonging to the three considered CoSs. We also propose two new SLA metrics for the improvement of functional QoS requirements, namely the security, confidentiality, and survivability of high-CoS traffic.
To the best of our knowledge, the proposed SADQ is the first scheme in optical networks to employ exhaustive differentiation at the levels of routing, spectrum allocation, and survivability in a single algorithm; we therefore first compare the performance of SADQ in EON and currently deployed WDM networks to assess their differentiation capability under such a differentiated service environment. The proposed SADQ is then compared with two existing benchmark routing and spectrum allocation (RSA) schemes also designed for EONs. Simulations indicate that the performance of SADQ is distinctly better in EON than in the WDM network under a differentiated QoS scenario. The comparative analysis of the proposed SADQ with the considered benchmark RSA strategies shows the improved performance of SADQ in the EON paradigm for offering differentiated services as per the SLA.
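    The spectrum allocation level of such differentiation can be illustrated with a first-fit search over a link's frequency-slot bitmap; the direction-of-search rule below is a common class-separation heuristic, not necessarily SADQ's exact policy:

```python
def first_fit(spectrum, n_slots, high_priority=True):
    """Find and claim a contiguous block of free slots on one link.

    As a simple differentiation heuristic, high-CoS requests search from
    the low end of the spectrum and low-CoS requests from the high end,
    which keeps the two classes from fragmenting each other's region.
    `spectrum` is a list of booleans: True means the slot is occupied.
    """
    indices = range(len(spectrum) - n_slots + 1)
    if not high_priority:
        indices = reversed(list(indices))
    for start in indices:
        if all(not spectrum[start + k] for k in range(n_slots)):
            for k in range(n_slots):
                spectrum[start + k] = True   # mark the slots occupied
            return start
    return None  # blocked: no contiguous gap wide enough
```

A request is blocked (returns None) only when no contiguous gap of the required width survives, which is the blocking-probability quantity such RSA studies typically report.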

  17. [Application of digital earth technology in research of traditional Chinese medicine resources].

    PubMed

    Liu, Jinxin; Liu, Xinxin; Gao, Lu; Wei, Yingqin; Meng, Fanyun; Wang, Yongyan

    2011-02-01

    This paper describes digital earth technology and its core component, "3S" integration technology. Advances in and the promotion of "3S" technology provide more favorable means and technical support for the survey, evaluation, and appropriate zoning of Chinese medicine resources. The grid is a mature and popular technology that can connect all kinds of information resources. The authors sum up the application of digital earth technology in research on traditional Chinese medicine resources in recent years, and propose a new method and technical route for the investigation of traditional Chinese medicine resources, traditional Chinese medicine zoning, and suitability assessment that combines digital earth technology and the grid.

  18. Managing competing elastic Grid and Cloud scientific computing applications using OpenNebula

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment, one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, other smaller applications can benefit from automatic elasticity; the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, will be described, and the very first operational experiences with a small number of strategies for the timely allocation and release of resources will be discussed.
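    Timely allocation and release of resources for the smaller applications reduces to a policy that maps queue depth to a target number of virtual machines. A minimal sketch of such a threshold policy (names and thresholds are illustrative, not the OpenNebula configuration the paper describes):

```python
def scale_decision(queued_jobs, running_vms, max_vms,
                   jobs_per_vm=4, min_vms=0):
    """Return how many VMs to add (positive) or release (negative).

    Grow toward one VM per `jobs_per_vm` queued jobs and shrink when
    capacity would sit idle; the `max_vms` cap keeps a small elastic
    application from starving the dominant, non-elastic tenant.
    """
    wanted = -(-queued_jobs // jobs_per_vm)  # ceiling division
    wanted = max(min_vms, min(max_vms, wanted))
    return wanted - running_vms
```

In a saturated centre the interesting tuning knob is how aggressively the negative branch releases VMs, since held-but-idle cores are exactly the headroom the paper says sites cannot afford.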

  19. 12 CFR 567.12 - Purchased credit card relationships, servicing assets, intangible assets (other than purchased...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and core capital. (b) Computation of core and tangible capital. (1) Purchased credit card relationships may be included (that is, not deducted) in computing core capital in accordance with the... restrictions in this section, mortgage servicing assets may be included in computing core and tangible capital...

  20. Pricing the Services of Scientific Cores. Part I: Charging Subsidized and Unsubsidized Users.

    ERIC Educational Resources Information Center

    Fife, Jerry; Forrester, Robert

    2002-01-01

    Explaining that scientific cores at research institutions support shared resources and facilities, discusses devising a method of charging users for core services and controlling and managing the rates. Proposes the concept of program-based management to cover sources of core support that are funding similar work. (EV)

  1. Statistical errors and systematic biases in the calibration of the convective core overshooting with eclipsing binaries. A case study: TZ Fornacis

    NASA Astrophysics Data System (ADS)

    Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.

    2017-04-01

    Context. Recently published work has made high-precision fundamental parameters available for the binary system TZ Fornacis, making it an ideal target for the calibration of stellar models. Aims: Relying on these observations, we attempt to constrain the initial helium abundance, the age and the efficiency of convective core overshooting. Our main aim is to point out the biases in the results that arise when some sources of uncertainty are not accounted for. Methods: We adopt the SCEPtER pipeline, a maximum likelihood technique based on fine grids of stellar models computed for various values of metallicity, initial helium abundance and overshooting efficiency by means of two independent stellar evolutionary codes, namely FRANEC and MESA. Results: Besides the degeneracy between the estimated age and overshooting efficiency, we find multiple independent groups of solutions. The best one suggests a system of age 1.10 ± 0.07 Gyr composed of a primary star in the central helium-burning stage and a secondary in the sub-giant branch (SGB). The resulting initial helium abundance is consistent with a helium-to-metal enrichment ratio of ΔY/ΔZ = 1; the core overshooting parameter is β = 0.15 ± 0.01 for FRANEC and fov = 0.013 ± 0.001 for MESA. The second class of solutions, characterised by a worse goodness-of-fit, still suggests a primary star in the central helium-burning stage but a secondary in the overall contraction phase, at the end of the main sequence (MS). In this case, the FRANEC grid provides an age of Gyr and a core overshooting parameter , while the MESA grid gives 1.23 ± 0.03 Gyr and fov = 0.025 ± 0.003. We analyse the impact on the results of a larger, but typical, mass uncertainty and of neglecting the uncertainty in the initial helium content of the system.
We show that very precise mass determinations, with uncertainties of a few thousandths of a solar mass, are required to obtain reliable determinations of stellar parameters, as mass errors larger than approximately 1% lead to estimates that are not only less precise but also biased. Moreover, we show that a fit obtained with a grid of models computed at a fixed ΔY/ΔZ, thus neglecting the current uncertainty in the initial helium content of the system, can provide severely biased age and overshooting estimates. The possibility of independent overshooting efficiencies for the two stars of the system is also explored. Conclusions: The present analysis confirms that constraining the core overshooting parameter by means of binary systems is a very difficult task, requiring an observational precision still rarely achieved and a robust statistical treatment of the error sources.
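    At its core, a grid-based maximum likelihood estimate of this kind selects the precomputed model whose predicted observables best match the measurements within their errors. A bare-bones chi-square version is shown below; the actual SCEPtER pipeline is far richer, handling parameter degeneracies and the multiple solution groups described above:

```python
def best_grid_model(models, observed, sigma):
    """Pick the grid point minimizing chi^2 against the observations.

    `models` maps a parameter tuple (e.g. age, initial helium,
    overshooting efficiency) to its predicted observables; `observed`
    and `sigma` are the measured values and their 1-sigma errors.
    """
    def chi2(pred):
        return sum(((p - o) / s) ** 2
                   for p, o, s in zip(pred, observed, sigma))
    return min(models, key=lambda params: chi2(models[params]))
```

The age-overshooting degeneracy the paper reports shows up here as multiple grid points with nearly equal chi-square, which is why a single best-fit answer can be misleading without a proper error treatment.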

  2. Demonstration of Essential Reliability Services by Utility-Scale Solar

    Science.gov Websites

    Demonstration of Essential Reliability Services by Utility-Scale Solar Photovoltaic Power Plant: Q&A Webinar Questions & Answers, April 27, 2017. Is photovoltaic (PV) generation required to provide grid supportive

  3. Grid-based platform for training in Earth Observation

    NASA Astrophysics Data System (ADS)

    Petcu, Dana; Zaharie, Daniela; Panica, Silviu; Frincu, Marc; Neagul, Marian; Gorgan, Dorian; Stefanut, Teodor

    2010-05-01

    The GiSHEO platform [1], which provides on-demand services for training and higher education in Earth Observation, has been developed in the frame of an ESA-funded project through its PECS programme, to respond to the need for powerful educational resources in the remote sensing field. It is intended to be a Grid-based platform whose potential for experimentation and extensibility are the key benefits compared with a desktop software solution. Near-real-time applications requiring multiple simultaneous short-response-time, data-intensive tasks, as in the case of a short training event, have proved to be ideal for this platform. The platform is based on Globus Toolkit 4 facilities for security and process management, and on the clusters of the four academic institutions involved in the project. Authorization uses a VOMS service. The main public services are the following: the EO processing services (represented through special WSRF-type services); the workflow service exposing a particular workflow engine; the data indexing and discovery service for accessing the data management mechanisms; and the processing services, a collection allowing easy access to the processing platform. The WSRF-type services for basic satellite image processing reuse free image processing tools, OpenCV and GDAL. New algorithms and workflows were developed to tackle challenging problems such as detecting the underground remains of old fortifications, walls or houses. More details can be found in [2]. Composed services can be specified through workflows and are easy to deploy. The workflow engine, OSyRIS (Orchestration System using a Rule based Inference Solution), is based on DROOLS, and a new rule-based workflow language, SILK (SImple Language for worKflow), has been built. Workflow creation in SILK can be done with or without a visual design tool. The basics of SILK are the tasks and the relations (rules) between them.
It is similar to the SCUFL language but does not rely on XML, in order to allow the introduction of more workflow-specific features. Moreover, an event-condition-action (ECA) approach allows greater flexibility when expressing data and task dependencies, as well as the creation of adaptive workflows which can react to changes in the configuration of the Grid or in the workflow itself. Changes inside the Grid are handled by creating specific rules which allow resource selection based on various task scheduling criteria. Modifications of the workflow are usually accomplished either by inserting or retracting rules belonging to it at runtime, or by modifying the executor of a task in case a better one is found. The former implies changes in the workflow's structure, while the latter does not necessarily mean a change of resource but rather a change of the algorithm used for solving the task. More details can be found in [3]. Another important platform component is the data indexing and storage service, GDIS, providing features for storing data, indexing data using a specialized RDBMS, finding data by various conditions, querying external services, and keeping track of temporary data generated by other components. The data storage component of GDIS is responsible for storing the data using available storage backends such as local disk file systems (ext3), local cluster storage (GFS) or distributed file systems (HDFS). A front-end GridFTP service is capable of interacting with the storage domains on behalf of the clients in a uniform way, and also enforces the security restrictions provided by other specialized services related to data access. The data indexing is performed by PostGIS. An advanced and flexible interface for searching the project's geographical repository is built around a custom query language (LLQL - Lisp Like Query Language) designed to provide fine-grained access to the data in the repository and to query external services (e.g.
for exploiting the connection with the GENESI-DR catalog). More details can be found in [4]. The Workload Management System (WMS) provides two types of resource managers. The first is based on Condor HTC and uses Condor as a job manager for task dispatching and worker nodes (for development purposes), while the second uses GT4 GRAM (for production purposes). The WMS main component, the Grid Task Dispatcher (GTD), is responsible for the interaction with other internal services, such as the composition engine, in order to facilitate access to the processing platform. Its main responsibilities are to receive tasks from the workflow engine or directly from the user interface, to use a task description language (the ClassAd meta language in the case of Condor HTC) for job units, to submit and check the status of jobs inside the workload management system, and to retrieve job logs for debugging purposes. More details can be found in [4]. A particular component of the platform is eGLE, the eLearning environment. It provides the functionalities necessary to create the visual appearance of lessons through the usage of visual containers like tools, patterns and templates. The teacher uses the platform for testing already created lessons, as well as for developing new lesson resources, such as new images and workflows describing graph-based processing. The students execute the lessons or describe and experiment with new workflows or different data. The eGLE database includes several workflow-based lesson descriptions, teaching materials and lesson resources, and selected satellite and spatial data. More details can be found in [5]. A first training event using the platform was organized in September 2009 during the 11th SYNASC symposium (links to the demos, testing interface, and exercises are available on the project site [1]). The eGLE component was presented at the 4th GPC conference in May 2009.
    Moreover, the functionality of the platform will be presented as a demo at the 5th EGEE User Forum in April 2010. References: [1] GiSHEO consortium, Project site, http://gisheo.info.uvt.ro [2] D. Petcu, D. Zaharie, M. Neagul, S. Panica, M. Frincu, D. Gorgan, T. Stefanut, V. Bacu, Remote Sensed Image Processing on Grids for Training in Earth Observation. In Image Processing, V. Kordic (ed.), In-Tech, January 2010. [3] M. Neagul, S. Panica, D. Petcu, D. Zaharie, D. Gorgan, Web and Grid Services for Training in Earth Observation, IDAACS 2009, IEEE Computer Press, 241-246. [4] M. Frincu, S. Panica, M. Neagul, D. Petcu, GiSHEO: On Demand Grid Service Based Platform for EO Data Processing, HiperGrid 2009, Politehnica Press, 415-422. [5] D. Gorgan, T. Stefanut, V. Bacu, Grid Based Training Environment for Earth Observation, GPC 2009, LNCS 5529, 98-109.

  4. Research and Experiments on a Unipolar Capacitive Voltage Sensor

    PubMed Central

    Zhou, Qiang; He, Wei; Li, Songnong; Hou, Xingzhe

    2015-01-01

    Voltage sensors are an important part of the electric system. In service, traditional voltage sensors need to directly contact a high-voltage charged body. Such sensors involve a large volume, complex insulation structures, and high design costs. Typically, an iron core structure is adopted; as a result, ferromagnetic resonance can occur easily in practical application. Moreover, owing to the multilevel capacitor divider, the sensor cannot reflect changes in the measured voltage in a timely manner. Based on the electric field coupling principle, this paper designs a new voltage sensor whose unipolar structure solves many problems of traditional voltage sensors, such as the great difficulty of insulation design and the high costs caused by grounding electrodes. A differential signal input structure is adopted for the detection circuit, which effectively restrains the influence of common-mode interference signals. Through sensor modeling, simulation and calculation, the structural design of the sensor electrode was optimized, miniaturization of the sensor was realized, the voltage division ratio of the sensor was enhanced, and the phase difference of sensor measurement was reduced. The voltage sensor was applied to a single-phase 10 kV class line for testing. According to the test results, the designed sensor is able to meet the requirements of accurate, real-time measurement of the voltage of a charged conductor, and provides a new method for electricity theft prevention and on-line monitoring of the power grid in an electric system. Therefore, it can satisfy the development demands of the smart power grid. PMID:26307992
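    The voltage division ratio at the heart of such a capacitive sensor follows from the two-capacitor divider relation V_out = V_in · C_high / (C_high + C_low): a small coupling capacitance against a large low-side capacitance yields a large step-down ratio. A small sketch with illustrative component values (not those of the sensor in the paper):

```python
def divider_ratio(c_high, c_low):
    """V_out / V_in for a two-capacitor divider: the high-side
    (coupling) capacitance C_high in series with the low-side
    capacitance C_low, sensed across C_low."""
    return c_high / (c_high + c_low)

# Illustrative values: a 10 pF coupling capacitance and a 100 nF
# low-side capacitance give roughly a 10,000:1 step-down, bringing a
# 10 kV line voltage down to about 1 V at the detection circuit.
ratio = divider_ratio(10e-12, 100e-9)
```
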

  5. A Testbed Environment for Buildings-to-Grid Cyber Resilience Research and Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sridhar, Siddharth; Ashok, Aditya; Mylrea, Michael E.

    The Smart Grid is characterized by the proliferation of advanced digital controllers at all levels of its operational hierarchy, from generation to end consumption. Such controllers within modern residential and commercial buildings enable grid operators to exercise fine-grained control over energy consumption through several emerging Buildings-to-Grid (B2G) applications. Though this capability promises significant benefits in terms of operational economics and improved reliability, cybersecurity weaknesses in the supporting infrastructure could be exploited to detrimental effect, and this necessitates focused research efforts on two fronts: first, understanding how, and to what extent, cyber attacks in the B2G space could impact grid reliability; second, the development and validation of cyber-physical, application-specific countermeasures that are complementary to traditional infrastructure cybersecurity mechanisms for enhanced cyber attack detection and mitigation. The PNNL B2G testbed is currently being developed to address these core research needs. Specifically, the B2G testbed combines high-fidelity building and grid simulators with industry-grade building automation and Supervisory Control and Data Acquisition (SCADA) systems in an integrated, realistic, and reconfigurable environment capable of supporting attack-impact-detection-mitigation experimentation. In this paper, we articulate the need for research testbeds to model various B2G applications broadly by looking at the end-to-end operational hierarchy of the Smart Grid. Finally, the paper not only describes the architecture of the B2G testbed in detail, but also addresses the broad spectrum of B2G resilience research it is capable of supporting based on the smart grid operational hierarchy identified earlier.

  6. The event notification and alarm system for the Open Science Grid operations center

    NASA Astrophysics Data System (ADS)

    Hayashi, S.; Teige, S.; Quick, R.

    2012-12-01

    The Open Science Grid (OSG) Operations Team operates a distributed set of services and tools that enable the utilization of the OSG by several HEP projects. Without these services, users of the OSG would not be able to run jobs, locate resources, obtain information about the status of systems, or generally use the OSG. For this reason these services must be highly available. This paper describes the automated monitoring and notification systems used to diagnose and report problems. Described here are the means used by OSG Operations to monitor systems such as physical facilities, network operations, server health, service availability and software error events. Once detected, an error condition generates a message sent to, for example, email, SMS, Twitter, or an instant-message server. The mechanism being developed to integrate these monitoring systems into a prioritized and configurable alarming system is emphasized.

  7. The Semantic Retrieval of Spatial Data Service Based on Ontology in SIG

    NASA Astrophysics Data System (ADS)

    Sun, S.; Liu, D.; Li, G.; Yu, W.

    2011-08-01

    Research on the SIG (Spatial Information Grid) mainly solves the problem of how to connect different computing resources, so that users can use all the resources in the Grid transparently and seamlessly. In SIG, spatial data services are described by several kinds of specifications, which use different meta-information for each kind of service. This kind of standardization cannot resolve the problem of semantic heterogeneity, which may prevent users from obtaining the required resources. This paper tries to solve two kinds of semantic heterogeneity (name heterogeneity and structure heterogeneity) in spatial data service retrieval based on ontology; in addition, based on the hierarchical subsumption relationships among concepts in the ontology, query terms can be expanded so that more resources can be matched and found for the user. These applications of ontology in spatial data resource retrieval help to improve the capability of keyword matching and to find more related resources.
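    The subsumption-based query expansion described above amounts to collecting every concept the query term subsumes in the ontology hierarchy, so that a search also matches services annotated with more specific concepts. A minimal sketch (the concept names are hypothetical, not from the paper's ontology):

```python
def expand_query(term, subclasses):
    """Expand a query term with everything it subsumes in the ontology.

    `subclasses` maps a concept to its direct children; a depth-first
    walk gathers the whole subtree, so a query for a broad concept also
    matches services described with any of its specializations.
    """
    found, stack = [], [term]
    while stack:
        concept = stack.pop()
        if concept not in found:
            found.append(concept)
            stack.extend(subclasses.get(concept, []))
    return found
```

For example, a query for a hypothetical "SpatialData" concept would also retrieve services annotated with its subclasses, rather than relying on an exact keyword match.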

  8. Utilizing data grid architecture for the backup and recovery of clinical image data.

    PubMed

    Liu, Brent J; Zhou, M Z; Documet, J

    2005-01-01

    Grid computing represents the latest and most exciting technology to evolve from the familiar realm of parallel, peer-to-peer and client-server models. However, there has been limited investigation into the impact of this emerging technology on medical imaging and informatics. In particular, PACS technology, an established clinical image repository system, while having matured significantly during the past ten years, still remains weak in the area of clinical image data backup. Current solutions are expensive or time consuming, and the technology is far from foolproof. Many large-scale PACS archive systems still encounter downtime for hours or days, which has the critical effect of crippling daily clinical operations. In this paper, a review of current backup solutions is presented along with a brief introduction to grid technology. Finally, research and development utilizing the grid architecture for the recovery of clinical image data, in particular PACS image data, is presented. The focus of this paper is centered on applying a grid computing architecture to a DICOM environment, since DICOM has become the standard for clinical image data and PACS utilizes this standard. A federation of PACS can be created, allowing a failed PACS archive to recover its image data from others in the federation in a seamless fashion. The design reflects the five-layer architecture of grid computing: the Fabric, Resource, Connectivity, Collective, and Application layers. The testbed Data Grid is composed of one research laboratory and two clinical sites. The Globus 3.0 Toolkit (co-developed by the Argonne National Laboratory and the Information Sciences Institute, USC) is used to develop the core and user-level middleware and to achieve grid connectivity.
The successful implementation and evaluation of data grid architecture for clinical PACS data backup and recovery will provide an understanding of the methodology for using a Data Grid for clinical image data backup in PACS, as well as establish benchmarks for performance from future grid technology improvements. In addition, the testbed can serve as a road map for expanded research into large enterprise- and federation-level data grids to guarantee CA (Continuous Availability, 99.999% uptime) in a variety of medical data archiving, retrieval, and distribution scenarios.

  9. Distributed Optimization of Sustainable Power Dispatch and Flexible Consumer Loads for Resilient Power Grid Operations

    NASA Astrophysics Data System (ADS)

    Srikantha, Pirathayini

    Today's electric grid is rapidly evolving to provision for heterogeneous system components (e.g. intermittent generation, electric vehicles, storage devices, etc.) while catering to diverse consumer power demand patterns. In order to accommodate this changing landscape, the widespread integration of cyber communication with physical components can be witnessed in all tiers of the modern power grid. This ubiquitous connectivity provides an elevated level of awareness and decision-making ability to system operators. Moreover, devices that were typically passive in the traditional grid are now `smarter', as they can respond to remote signals, learn about local conditions and even make their own actuation decisions if necessary. These advantages can be leveraged to reap unprecedented long-term benefits that include sustainable, efficient and economical power grid operations. Furthermore, challenges introduced by emerging trends in the grid, such as high penetration of distributed energy sources, rising power demands, deregulation and cyber-security concerns due to vulnerabilities in standard communication protocols, can be overcome by tapping into the active nature of modern power grid components. In this thesis, distributed constructs in optimization and game theory are utilized to design the seamless real-time integration of a large number of heterogeneous power components, such as distributed energy sources with highly fluctuating generation capacities and flexible power consumers with varying demand patterns, to achieve optimal operations across multiple levels of hierarchy in the power grid. Specifically, advanced data acquisition, cloud analytics (such as prediction), control and storage systems are leveraged to promote sustainable and economical grid operations while ensuring that physical network, generation and consumer comfort requirements are met.
Moreover, privacy and security considerations are incorporated into the core of the proposed designs and these serve to improve the resiliency of the future smart grid. It is demonstrated both theoretically and practically that the techniques proposed in this thesis are highly scalable and robust with superior convergence characteristics. These distributed and decentralized algorithms allow individual actuating nodes to execute self-healing and adaptive actions when exposed to changes in the grid so that the optimal operating state in the grid is maintained consistently.

  10. Can Clouds replace Grids? Will Clouds replace Grids?

    NASA Astrophysics Data System (ADS)

    Shiers, J. D.

    2010-04-01

    The world's largest scientific machine, comprising dual 27 km circular proton accelerators cooled to 1.9 K and located some 100 m underground, currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation the computing service has been declared "open" and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability (as seen by the experiments, as opposed to that measured by the official tools) still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common, and the operational and coordination effort to keep the overall service running is probably not sustainable at this level. Recently "Cloud Computing", in terms of pay-per-use fabric provisioning, has emerged as a potentially viable alternative, but with rather different strengths and no doubt weaknesses too. Based on the concrete needs of the LHC experiments, where the total data volume that will be acquired over the full lifetime of the project, including the additional data copies that are required by the Computing Models of the experiments, approaches 1 exabyte, we analyze the pros and cons of Grids versus Clouds. This analysis covers not only technical issues, such as those related to demanding database and data management needs, but also sociological aspects, which cannot be ignored, neither in terms of funding nor in the wider context of the essential but often overlooked role of science in society, education and economy.

  11. Advanced Photovoltaic Inverter Control Development and Validation in a Controller-Hardware-in-the-Loop Test Bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabakar, Kumaraguru; Shirazi, Mariko; Singh, Akanksha

    Penetration levels of solar photovoltaic (PV) generation on the electric grid have increased in recent years. In the past, most PV installations have not included grid-support functionalities, but today standards such as the upcoming revisions to IEEE 1547 recommend grid support and anti-islanding functions, including volt-var, frequency-watt, volt-watt, frequency/voltage ride-through, and other inverter functions. These functions allow for the standardized interconnection of distributed energy resources into the grid. This paper develops and tests low-level inverter current control and high-level grid-support functions. The controller was developed to integrate advanced inverter functions in a systematic approach, thus avoiding conflict among the different control objectives. The algorithms were then programmed on an off-the-shelf embedded controller with a dual-core computer processing unit and field-programmable gate array (FPGA). This programmed controller was tested in a controller-hardware-in-the-loop (CHIL) test bed setup using an FPGA-based real-time simulator. The CHIL was run at a time step of 500 ns to accommodate the 20-kHz switching frequency of the developed controller. The details of the advanced control functions and the CHIL test bed provided here will aid future researchers when designing, implementing, and testing advanced functions of PV inverters.
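    A volt-var function of the kind recommended by such standards is a piecewise-linear droop of reactive power against terminal voltage: inject vars when the voltage sags, absorb when it swells, with a flat deadband around nominal. A sketch with illustrative breakpoints (not the IEEE 1547 default curve):

```python
def volt_var(v_pu, q_max_pu=0.44, deadband=(0.98, 1.02),
             saturate=(0.92, 1.08)):
    """Piecewise-linear volt-var droop (voltages in per-unit).

    Returns reactive power in per-unit: positive = inject (support a
    sagging voltage), negative = absorb (pull a high voltage down).
    Output ramps linearly from the deadband edge to saturation.
    """
    lo_db, hi_db = deadband
    lo_sat, hi_sat = saturate
    if lo_db <= v_pu <= hi_db:
        return 0.0                       # flat region around nominal
    if v_pu < lo_db:
        frac = min(1.0, (lo_db - v_pu) / (lo_db - lo_sat))
        return q_max_pu * frac           # inject vars to support voltage
    frac = min(1.0, (v_pu - hi_db) / (hi_sat - hi_db))
    return -q_max_pu * frac              # absorb vars to lower voltage
```

In a CHIL setup, a curve like this runs on the embedded controller while the real-time simulator plays back the grid voltage, so the ramp and saturation regions can be exercised without a physical feeder.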

  12. Research and Deployment a Hospital Open Software Platform for e-Health on the Grid System at VAST/IAMI

    NASA Astrophysics Data System (ADS)

    van Tuyet, Dao; Tuan, Ngo Anh; van Lang, Tran

    Grid computing has been a growing topic in recent years, attracting the attention of many scientists from many fields. As a result, many Grid systems have been built to serve people's demands. At present, tools for developing Grid systems, such as Globus, gLite and Unicore, are still under continuous development. In particular, gLite, the Grid middleware, has been developed by the European scientific community in recent years. The constant growth of Grid technology has opened the way for new opportunities in terms of information and data exchange in a secure and collaborative context. These new opportunities can be exploited to offer physicians new telemedicine services in order to improve their collaborative capacities. Our platform gives physicians an easy-to-use telemedicine environment to manage and share patient information (such as electronic medical records and DICOM-formatted images) between remote locations. This paper presents the Grid infrastructure based on gLite; some main components of gLite; the challenge scenario in which new applications can be developed to improve collaborative work between scientists; and the process of deploying the Hospital Open software Platform for E-health (HOPE) on the Grid.

  13. Improvements to the gridding of precipitation data across Europe under the E-OBS scheme

    NASA Astrophysics Data System (ADS)

    Cornes, Richard; van den Besselaar, Else; Jones, Phil; van der Schrier, Gerard; Verver, Ge

    2016-04-01

    Gridded precipitation data are a valuable resource for analyzing past variations and trends in the hydroclimate. Such data also provide a reference against which model simulations may be driven, compared and/or adjusted. The E-OBS precipitation dataset is widely used for such analyses across Europe, and is particularly valuable because it provides a spatially complete daily field across the European domain. In this analysis, improvements to the E-OBS precipitation dataset are presented that aim to provide a more reliable estimate of grid-box precipitation values, particularly in mountainous areas and in regions with relatively sparse input station data. The established three-stage E-OBS gridding scheme is retained, whereby monthly precipitation totals are gridded using a thin-plate spline; daily anomalies are gridded using indicator kriging; and the final dataset is produced by multiplying the two grids. The current analysis focuses on improving the monthly thin-plate spline, which has overall control of the final daily dataset. The results from different techniques are compared, and the influence on the final daily data is assessed by comparing the data against gridded country-wide datasets produced by various National Meteorological Services.
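
    The final multiplication step of the three-stage scheme can be sketched as follows. This assumes the daily anomalies are expressed as fractions of the monthly total at each grid box, which is one common convention; the record does not spell out the exact anomaly definition.

```python
# Sketch of the E-OBS combination step: final daily grids are the product of
# one monthly-total grid and per-day anomaly grids (anomaly definition assumed).
def combine(monthly_total_grid, daily_anomaly_grids):
    """Multiply a 2-D monthly-total grid by each day's 2-D anomaly grid."""
    return [
        [[m * a for m, a in zip(row_m, row_a)]
         for row_m, row_a in zip(monthly_total_grid, anomaly_grid)]
        for anomaly_grid in daily_anomaly_grids
    ]

# One grid box with a 30 mm monthly total, two days with anomaly fractions:
daily = combine([[30.0, 10.0]], [[[0.1, 0.2]], [[0.0, 0.5]]])
```

    Because the monthly grid controls the product at every grid box, any bias in the thin-plate spline propagates directly into all daily values, which is why the analysis focuses on that stage.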

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    von Laszewski, G.; Gawor, J.; Lane, P.

    In this paper we report on the features of the Java Commodity Grid Kit (Java CoG Kit). The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to support the design of a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus Toolkit protocols, allowing the Java CoG Kit to also communicate with the services distributed as part of the C Globus Toolkit reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus Toolkit software. In this paper we also report on the efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure-Java resource management system that enables one to run Grid jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.

  15. 78 FR 63990 - HIV/AIDS Bureau; Ryan White HIV/AIDS Program Core Medical Services Waiver; Application Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-25

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Health Resources and Services Administration HIV/AIDS Bureau; Ryan White HIV/AIDS Program Core Medical Services Waiver; Application Requirements AGENCY: Health... Service Act, as amended by the Ryan White HIV/AIDS Treatment Extension Act of 2009 (Ryan White Program or...

  16. On Unified Mode in Grid Mounted Round Jets

    NASA Astrophysics Data System (ADS)

    Parimalanathan, Senthil Kumar; T, Sundararajan; v, Raghavan

    2015-11-01

    The turbulence evolution in a free round jet is strongly affected by its initial conditions. Since the transition to turbulence is moderated by instability modes, the initial conditions appear to play a major role in altering the dynamics of these modes. In the present investigation, grids of different configurations are placed at the jet nozzle exit and the flow field is characterized using a bi-component hot-wire anemometer. The instability modes have been obtained by analyzing the velocity spectral data. Free jets are characterized by the presence of two instability modes, viz., the preferred mode and the shear mode. The preferred mode corresponds to the most amplified oscillations along the jet centerline, while the shear modes are due to the dynamic evolution of vortical structures in the jet shear layer. The presence of a grid clearly alters the jet structure, and plays a major role in altering the shear-layer mode in particular. In fact, it is observed that close to the nozzle exit, the presence of grids deflects the streamlines inwards around the edge due to the momentum difference between the jet central core and the boundary-layer region near the wall. This results in a single unified mode, with no distinct preferred or shear mode. This phenomenon is more dominant for grids with a higher blockage ratio and small grid openings. In the present study, an investigation of the physics behind the evolution of the unified mode, and of how the grids affect the overall evolution of the turbulent flow field, is reported. Experimental Fluid Mechanics.

  17. Influence of Spanwise Boundary Conditions on Slat Noise Simulations

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Choudhari, Meelan M.; Buning, Pieter G.

    2015-01-01

    The slat noise from the 30P/30N high-lift system is being investigated through computational fluid dynamics simulations with the OVERFLOW code in conjunction with a Ffowcs Williams-Hawkings acoustics solver. In the present study, two different spanwise grids are being used to investigate the effect of the spanwise extent and periodicity on the near-field unsteady structures and radiated noise. The baseline grid with periodic boundary conditions has a short span equal to 1/9th of the stowed chord, whereas the other, longer span grid adds stretched grids on both sides of the core, baseline grid to allow inviscid surface boundary conditions at both ends. The results indicate that the near-field mean statistics obtained using the two grids are similar to each other, as are the directivity and spectral shapes of the radiated noise. However, periodicity forces all acoustic waves with less than one wavelength across the span to be two-dimensional, without any variation in the span. The spanwise coherence of the acoustic waves is what is needed to make estimates of the noise that would be radiated from realistic span lengths. Simulations with periodic conditions need spans of at least six slat chords to allow spanwise variation in the low-frequencies associated with the peak of broadband slat noise. Even then, the full influence of the periodicity is unclear, so employing grids with a fine, central region and highly stretched meshes that go to slip walls may be a more efficient means of capturing the spanwise decorrelation of low-frequency acoustic phenomena.

  18. 20 CFR 666.140 - Which individuals receiving services are included in the core indicators of performance?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Which individuals receiving services are included in the core indicators of performance? 666.140 Section 666.140 Employees' Benefits EMPLOYMENT AND... the core indicators of performance? (a)(1) The core indicators of performance apply to all individuals...

  19. Operating a production pilot factory serving several scientific domains

    NASA Astrophysics Data System (ADS)

    Sfiligoi, I.; Würthwein, F.; Andrews, W.; Dost, J. M.; MacNeill, I.; McCrea, A.; Sheripon, E.; Murphy, C. W.

    2011-12-01

    Pilot infrastructures are becoming prominent players in the Grid environment. One of their major advantages is the reduced effort required from the user communities (also known as Virtual Organizations, or VOs), due to the outsourcing of the Grid interfacing services, i.e. the pilot factory, to Grid experts. One such pilot factory, based on the glideinWMS pilot infrastructure, is being operated by the Open Science Grid at the University of California San Diego (UCSD). This pilot factory serves multiple VOs from several scientific domains. Currently the three major clients are the analysis operations of the HEP experiment CMS, the community VO HCC, which serves mostly math, biology and computer science users, and the structural biology VO NEBioGrid. The UCSD glidein factory allows the served VOs to use Grid resources distributed over 150 sites in North and South America, Europe, and Asia. This paper presents the steps taken to create a production-quality pilot factory, together with the challenges encountered along the road.

  20. Design and Implementation of Real-Time Off-Grid Detection Tool Based on FNET/GridEye

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Jiahui; Zhang, Ye; Liu, Yilu

    2014-01-01

    Real-time situational awareness tools are of critical importance to power system operators, especially during emergencies. The availability of electric power has become a linchpin of most post-disaster response efforts, as it is the primary dependency for public- and private-sector services, as well as for individuals. Knowledge of the scope and extent of the facilities impacted, as well as the duration of their dependence on backup power, enables emergency response officials to plan for contingencies and provide better overall response. Based on real-time data acquired by Frequency Disturbance Recorders (FDRs) deployed in the North American power grid, a real-time detection method is proposed. This method monitors critical electrical loads and detects the transition of these loads from an on-grid state, where the loads are fed by the power grid, to an off-grid state, where the loads are fed by an Uninterruptible Power Supply (UPS) or a backup generation system. The details of the proposed detection algorithm are presented, and some case studies and off-grid detection scenarios are also provided to verify its effectiveness and robustness. The algorithm has already been implemented based on the Grid Solutions Framework (GSF) and has effectively detected several off-grid situations.
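
    The core idea of frequency-based off-grid detection can be illustrated with a simple sketch: a load fed by the interconnected grid tracks the system-wide frequency, while a load on a UPS or local generator drifts away from it. This is not the paper's algorithm; the threshold and debounce count are illustrative assumptions.

```python
# Illustrative off-grid detection: flag a load as off-grid when its locally
# measured frequency diverges from the system-wide reference by more than a
# threshold for several consecutive samples (parameters are assumptions).
def detect_off_grid(local_freq_hz, system_freq_hz, threshold_hz=0.05, min_samples=3):
    run = 0
    for f_local, f_sys in zip(local_freq_hz, system_freq_hz):
        if abs(f_local - f_sys) > threshold_hz:
            run += 1                 # divergence persists
            if run >= min_samples:
                return True          # load is no longer tracking the grid
        else:
            run = 0                  # reset on any in-tolerance sample
    return False
```

    Requiring several consecutive out-of-tolerance samples avoids false alarms from transient frequency excursions that the whole interconnection experiences together.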

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalimunthe, Amty Ma’rufah Ardhiyah; Mindara, Jajat Yuda; Panatarani, Camellia

    Smart grids and distributed generation are a potential solution to global climate change and to the energy crisis arising from the dependence of electrical power generation on fossil fuels. To meet rising electrical power demand and increasing service-quality demands, as well as to reduce pollution, the existing power grid infrastructure should be developed into a smart grid with distributed power generation, which provides a great opportunity to address issues related to energy efficiency, energy security, power quality and aging infrastructure. The existing distributed generation system is conventionally an AC grid, whereas renewable resources require a DC grid system. This paper explores a model of a smart DC grid with stable power generation, using minimal, compact circuitry that can be implemented very cost-effectively with simple components. PC-based application software was developed to show the condition of the grid and to control the grid so that it becomes 'smart'. The model is then subjected to severe system perturbations, such as incremental changes in load, to test the stability of the system. It is concluded that the system is able to detect and control voltage stability, indicating the ability of the power system to maintain a steady voltage within permissible ranges under normal conditions.

  2. Can developing countries leapfrog the centralized electrification paradigm?

    DOE PAGES

    Levin, Todd; Thomas, Valerie M.

    2016-02-04

    Due to the rapidly decreasing costs of small renewable electricity generation systems, centralized power systems are no longer a necessary condition for universal access to modern energy services. Developing countries, where centralized electricity infrastructures are less developed, may be able to adopt these new technologies more quickly. We first review the costs of grid extension and of distributed solar home systems (SHSs) as reported by a number of different studies. We then present a general analytic framework for analyzing the choice between extending the grid and deploying distributed solar home systems. Drawing upon reported grid expansion cost data for three specific regions, we demonstrate this framework by determining the electricity consumption levels at which the costs of provision through centralized and decentralized approaches are equivalent in these regions. We then calculate the SHS capital costs necessary for these technologies to provide each of five tiers of energy access, as defined by the United Nations Sustainable Energy for All initiative. Our results suggest that solar home systems can play an important role in achieving universal access to basic energy services. The extent of this role depends on three primary factors: SHS costs, grid expansion costs, and centralized generation costs. Given current technology costs, centralized systems will still be required to enable higher levels of consumption; however, cost reduction trends have the potential to disrupt this paradigm. Furthermore, by looking ahead rather than replicating older infrastructure styles, developing countries can leapfrog to a more distributed electricity service model.
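
    The breakeven-consumption idea can be sketched with a stylized cost model. This is not the paper's framework or data: it assumes grid provision has an annualized fixed extension cost plus a per-kWh central generation cost, while SHS provision is treated as a purely per-kWh cost, and all numbers are hypothetical.

```python
# Stylized grid-vs-SHS breakeven sketch (hypothetical cost structure and values).
def crf(rate, years):
    """Capital recovery factor: annualizes an up-front capital cost."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def breakeven_kwh_per_year(ext_capital, shs_cost_per_kwh, gen_cost_per_kwh,
                           rate=0.10, years=20):
    """Annual consumption E at which both options cost the same:
    crf * ext_capital + E * gen_cost_per_kwh == E * shs_cost_per_kwh."""
    return crf(rate, years) * ext_capital / (shs_cost_per_kwh - gen_cost_per_kwh)

# e.g. $10,000 extension, SHS at $0.50/kWh, central generation at $0.10/kWh:
e_star = breakeven_kwh_per_year(10_000, 0.50, 0.10)
```

    Below the breakeven level the SHS is cheaper (the fixed extension cost dominates); above it the grid wins, which matches the paper's conclusion that centralized systems remain necessary for higher consumption tiers.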

  3. Sealife: a semantic grid browser for the life sciences applied to the study of infectious diseases.

    PubMed

    Schroeder, Michael; Burger, Albert; Kostkova, Patty; Stevens, Robert; Habermann, Bianca; Dieng-Kuntz, Rose

    2006-01-01

    The objective of Sealife is the conception and realisation of a semantic Grid browser for the life sciences, which will link the existing Web to the currently emerging eScience infrastructure. The SeaLife Browser will allow users to automatically link a host of Web servers and Web/Grid services to the Web content they are visiting. This will be accomplished using eScience's growing number of Web/Grid services and its XML-based standards and ontologies. The browser will identify terms in the pages being browsed through the background knowledge held in ontologies. Through the use of Semantic Hyperlinks, which link identified ontology terms to servers and services, the SeaLife Browser will offer a new dimension of context-based information integration. In this paper, we give an overview of the different components of the browser and their interplay. The SeaLife Browser will be demonstrated within three application scenarios in evidence-based medicine, literature & patent mining, and molecular biology, all relating to the study of infectious diseases. The three applications vertically integrate the molecule/cell, the tissue/organ and the patient/population levels by covering the analysis of high-throughput screening data for endocytosis (the molecular entry pathway into the cell), the expression of proteins in the spatial context of tissues and organs, and a high-level library on infectious diseases designed for clinicians and their patients. For more information see http://www.biote.ctu-dresden.de/sealife.

  4. Challenges in scaling NLO generators to leadership computers

    NASA Astrophysics Data System (ADS)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator called Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading-order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even using all 16 cores. We describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and reduce run times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  5. Towards a centralized Grid Speedometer

    NASA Astrophysics Data System (ADS)

    Dzhunov, I.; Andreeva, J.; Fajardo, E.; Gutsche, O.; Luyckx, S.; Saiz, P.

    2014-06-01

    Given the distributed nature of the Worldwide LHC Computing Grid and the way CPU resources are pledged and shared around the globe, Virtual Organizations (VOs) face the challenge of monitoring the use of these resources. For CMS and the operation of centralized workflows, monitoring how many production jobs are running and pending in the Glidein WMS production pools is very important. The Dashboard Site Status Board (SSB) provides a very flexible framework to collect, aggregate and visualize data. The CMS production monitoring team uses the SSB to define the metrics that have to be monitored and the alarms that have to be raised. During the integration of CMS production monitoring into the SSB, several enhancements to the core functionality of the SSB were required; they were implemented in a generic way, so that other VOs using the SSB can exploit them. Alongside these enhancements, there were a number of changes to the core of the SSB framework. This paper presents the details of the implementation and the advantages for current and future usage of the new features in SSB.

  6. Lights Out: Foreseeable Catastrophic Effects of Geomagnetic Storms on the North American Power Grid and How to Mitigate Them

    DTIC Science & Technology

    2011-08-21

    ...poultry, pork, beef, fish, and other meat products also are typically automated operations, done on electrically driven processing lines. ... Infrastructure ... Power Outage Impact on Consumables (Food, Water, Medication) ... transportation, consumables (food, water, and medication), and emergency services, are so highly dependent on reliable power supply from the grid, a...

  7. 50 CFR Figure 13 to Part 223 - Single Grid Hard TED Escape Opening

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false Single Grid Hard TED Escape Opening 13 Figure 13 to Part 223 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE MARINE MAMMALS THREATENED MARINE AND ANADROMOUS SPECIES Pt. 223, Fig. 13 Figure 13 to Part 223—Singl...

  8. SuperB Simulation Production System

    NASA Astrophysics Data System (ADS)

    Tomassetti, L.; Bianchi, F.; Ciaschini, V.; Corvo, M.; Del Prete, D.; Di Simone, A.; Donvito, G.; Fella, A.; Franchini, P.; Giacomini, F.; Gianoli, A.; Longo, S.; Luitz, S.; Luppi, E.; Manzali, M.; Pardi, S.; Paolini, A.; Perez, A.; Rama, M.; Russo, G.; Santeramo, B.; Stroili, R.

    2012-12-01

    The SuperB asymmetric e+e- collider and detector, to be built at the newly founded Nicola Cabibbo Lab, will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab^-1 and a peak luminosity of 10^36 cm^-2 s^-1. The SuperB Computing group is working on developing a simulation production framework capable of satisfying the experiment's needs. It provides access to distributed resources in order to support both the detector design definition and its performance evaluation studies. Over the last year the framework has evolved in terms of job workflow, Grid service interfaces and technology adoption. A complete code refactoring and sub-component language porting now permit the framework to sustain distributed production involving resources from two continents and multiple Grid flavors. In this paper we report a complete description of the current state of the production system, its evolution and its integration with Grid services; in particular, we focus on the utilization of new Grid component features, as in LB and WMS version 3. Results from the last official SuperB production cycle are reported.

  9. Grids: The Top Ten Questions

    DOE PAGES

    Schopf, Jennifer M.; Nitzberg, Bill

    2002-01-01

    The design and implementation of a national computing system and data grid has become a reachable goal from both the computer science and computational science points of view. A distributed infrastructure capable of sophisticated computational functions can bring many benefits to scientific work, but poses many challenges, both technical and socio-political. Technical challenges include having basic software tools, higher-level services, functioning and pervasive security, and standards, while socio-political issues include building a user community, adding incentives for sites to be part of a user-centric environment, and educating funding sources about the needs of this community. This paper details the areas relating to Grid research that we feel still need to be addressed to fully leverage the advantages of the Grid.

  10. NPSS on NASA's IPG: Using CORBA and Globus to Coordinate Multidisciplinary Aeroscience Applications

    NASA Technical Reports Server (NTRS)

    Lopez, Isaac; Follen, Gregory J.; Gutierrez, Richard; Naiman, Cynthia G.; Foster, Ian; Ginsburg, Brian; Larsson, Olle; Martin, Stuart; Tuecke, Steven; Woodford, David

    2000-01-01

    Within NASA's High Performance Computing and Communication (HPCC) program, the NASA Glenn Research Center is developing an environment for the analysis/design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. To this end, NPSS integrates multiple disciplines such as aerodynamics, structures, and heat transfer and supports "numerical zooming" between 0-dimensional and 1-, 2-, and 3-dimensional component engine codes. In order to facilitate the timely and cost-effective capture of complex physical processes, NPSS uses object-oriented technologies such as C++ objects to encapsulate individual engine components and CORBA ORBs for object communication and deployment across heterogeneous computing platforms. Recently, the HPCC program has initiated a concept called the Information Power Grid (IPG), a virtual computing environment that integrates computers and other resources at different sites. IPG implements a range of Grid services such as resource discovery, scheduling, security, instrumentation, and data access, many of which are provided by the Globus toolkit. IPG facilities have the potential to benefit NPSS considerably. For example, NPSS should in principle be able to use Grid services to discover dynamically and then co-schedule the resources required for a particular engine simulation, rather than relying on manual placement of ORBs as at present. Grid services can also be used to initiate simulation components on parallel computers (MPPs) and to address inter-site security issues that currently hinder the coupling of components across multiple sites. These considerations led NASA Glenn and Globus project personnel to formulate a collaborative project designed to evaluate whether and how benefits such as those just listed can be achieved in practice.
This project involves firstly development of the basic techniques required to achieve co-existence of commodity object technologies and Grid technologies; and secondly the evaluation of these techniques in the context of NPSS-oriented challenge problems. The work on basic techniques seeks to understand how "commodity" technologies (CORBA, DCOM, Excel, etc.) can be used in concert with specialized "Grid" technologies (for security, MPP scheduling, etc.). In principle, this coordinated use should be straightforward because of the Globus and IPG philosophy of providing low-level Grid mechanisms that can be used to implement a wide variety of application-level programming models. (Globus technologies have previously been used to implement Grid-enabled message-passing libraries, collaborative environments, and parameter study tools, among others.) Results obtained to date are encouraging: we have successfully demonstrated a CORBA to Globus resource manager gateway that allows the use of CORBA RPCs to control submission and execution of programs on workstations and MPPs; a gateway from the CORBA Trader service to the Grid information service; and a preliminary integration of CORBA and Grid security mechanisms. The two challenge problems that we consider are the following: 1) Desktop-controlled parameter study. Here, an Excel spreadsheet is used to define and control a CFD parameter study, via a CORBA interface to a high throughput broker that runs individual cases on different IPG resources. 2) Aviation safety. Here, about 100 near real time jobs running NPSS need to be submitted, run and data returned in near real time. Evaluation will address such issues as time to port, execution time, potential scalability of simulation, and reliability of resources. The full paper will present the following information: 1. A detailed analysis of the requirements that NPSS applications place on IPG. 2. 
A description of the techniques used to meet these requirements via the coordinated use of CORBA and Globus. 3. A description of results obtained to date in the first two challenge problems.

  11. A Control of a Mono and Multi Scale Measurement of a Grid

    NASA Astrophysics Data System (ADS)

    Elloumi, Imene; Ravelomanana, Sahobimaholy; Jelliti, Manel; Sibilla, Michelle; Desprats, Thierry

    The capacity to ensure seamless mobility with end-to-end Quality of Service (QoS) is a vital criterion for success in Grid use. In this paper we therefore propose a method of monitoring the interconnection network of the Grid (cluster, local grid and aggregated grids) in order to control its QoS. Such monitoring can guarantee persistent control of the system's state of health, along with diagnostics and optimization pertinent enough for better real-time exploitation. Better exploitation means identifying the networking problems that affect the application domain. This can be carried out by control measurements, both mono- and multi-scale, of metrics such as bandwidth, CPU speed and load. The proposed solution, a generic management solution independent of the underlying technologies, aims to automate human expertise and thereby provide more autonomy.

  12. Interconnection, Integration, and Interactive Impact Analysis of Microgrids and Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Ning; Wang, Jianhui; Singh, Ravindra

    2017-01-01

    Distribution management systems (DMSs) are increasingly used by distribution system operators (DSOs) to manage the distribution grid and to monitor the status of both power imported from the transmission grid and power generated locally by distributed energy resources (DERs), to ensure that power flows and voltages along the feeders are maintained within designed limits and that appropriate measures are taken to guarantee service continuity and energy security. When microgrids are deployed and interconnected to distribution grids, they will have an impact on the operation of the distribution grid. The challenge is to design this interconnection in such a way that it enhances the reliability and security of the distribution grid and of the loads embedded in the microgrid, while providing economic benefits to all stakeholders, including the microgrid owner and operator and the distribution system operator.

  13. Grid regulation services for energy storage devices based on grid frequency

    DOEpatents

    Pratt, Richard M; Hammerstrom, Donald J; Kintner-Meyer, Michael C.W.; Tuffner, Francis K

    2013-07-02

    Disclosed herein are representative embodiments of methods, apparatus, and systems for charging and discharging an energy storage device connected to an electrical power distribution system. In one exemplary embodiment, a controller monitors electrical characteristics of an electrical power distribution system and provides an output to a bi-directional charger causing the charger to charge or discharge an energy storage device (e.g., a battery in a plug-in hybrid electric vehicle (PHEV)). The controller can help stabilize the electrical power distribution system by increasing the charging rate when there is excess power in the electrical power distribution system (e.g., when the frequency of an AC power grid exceeds an average value), or by discharging power from the energy storage device to stabilize the grid when there is a shortage of power in the electrical power distribution system (e.g., when the frequency of an AC power grid is below an average value).
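
    The frequency-responsive logic described in this patent abstract can be sketched as a simple proportional controller: charge harder when grid frequency is above its average (excess power), discharge when below (power shortage). The gain and the charger power limit below are illustrative assumptions, not values from the patent.

```python
# Sketch of frequency-responsive charging for an energy storage device
# (e.g. a PHEV battery); gain and power limit are illustrative assumptions.
def charge_rate_kw(freq_hz, avg_freq_hz=60.0, gain_kw_per_hz=100.0, max_kw=6.6):
    """Positive = charging the storage device, negative = discharging to the grid."""
    rate = gain_kw_per_hz * (freq_hz - avg_freq_hz)   # proportional response
    return max(-max_kw, min(max_kw, rate))            # clamp to charger capability
```

    A fleet of such chargers acts as a distributed regulation resource: each unit absorbs power when frequency is high and injects power when it is low, helping to stabilize the grid without central dispatch.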

  14. Grid regulation services for energy storage devices based on grid frequency

    DOEpatents

    Pratt, Richard M.; Hammerstrom, Donald J.; Kintner-Meyer, Michael C. W.; Tuffner, Francis K.

    2017-09-05

    Disclosed herein are representative embodiments of methods, apparatus, and systems for charging and discharging an energy storage device connected to an electrical power distribution system. In one exemplary embodiment, a controller monitors electrical characteristics of an electrical power distribution system and provides an output to a bi-directional charger causing the charger to charge or discharge an energy storage device (e.g., a battery in a plug-in hybrid electric vehicle (PHEV)). The controller can help stabilize the electrical power distribution system by increasing the charging rate when there is excess power in the electrical power distribution system (e.g., when the frequency of an AC power grid exceeds an average value), or by discharging power from the energy storage device to stabilize the grid when there is a shortage of power in the electrical power distribution system (e.g., when the frequency of an AC power grid is below an average value).

  15. Grid regulation services for energy storage devices based on grid frequency

    DOEpatents

    Pratt, Richard M; Hammerstrom, Donald J; Kintner-Meyer, Michael C.W.; Tuffner, Francis K

    2014-04-15

    Disclosed herein are representative embodiments of methods, apparatus, and systems for charging and discharging an energy storage device connected to an electrical power distribution system. In one exemplary embodiment, a controller monitors electrical characteristics of an electrical power distribution system and provides an output to a bi-directional charger causing the charger to charge or discharge an energy storage device (e.g., a battery in a plug-in hybrid electric vehicle (PHEV)). The controller can help stabilize the electrical power distribution system by increasing the charging rate when there is excess power in the electrical power distribution system (e.g., when the frequency of an AC power grid exceeds an average value), or by discharging power from the energy storage device to stabilize the grid when there is a shortage of power in the electrical power distribution system (e.g., when the frequency of an AC power grid is below an average value).

  16. Earth Science community support in the EGI-Inspire Project

    NASA Astrophysics Data System (ADS)

    Schwichtenberg, H.

    2012-04-01

    For the past ten years the Earth Science Grid community has followed a strategy of propagating Grid technology to the ES disciplines, setting up interactive collaboration among the members of the community, and stimulating the interest of stakeholders at the political level. This strategy was described in a roadmap published in the Earth Science Informatics journal. It was applied through different European Grid projects and led to a large Grid Earth Science VRC that covers a variety of ES disciplines, all of which ultimately face the same kinds of ICT problems. The penetration of Grid in the ES community is indicated by the variety of applications, the number of countries in which ES applications are ported, the number of papers in international journals, and the number of related PhDs. Among the six virtual organisations belonging to ES, one, ESR, is generic. Three others (env.see-grid-sci.eu, meteo.see-grid-sci.eu and seismo.see-grid-sci.eu) are thematic and regional (South Eastern Europe), covering environment, meteorology and seismology. Another, EGEODE, serves the users of the Geocluster software. There are also ES users in national VOs or in VOs related to projects. The services for the ES task in EGI-Inspire concern the data that are a key part of any ES application. The ES community requires several interfaces to access data and metadata outside of the EGI infrastructure, e.g. by using grid-enabled database interfaces. The data centres have also developed service tools for basic research activities such as searching, browsing and downloading these datasets, but these tools are not accessible from applications executed on the Grid. The ES task in EGI-Inspire aims to make them accessible from the Grid.
In collaboration with GENESI-DR (Ground European Network for Earth Science Interoperations - Digital Repositories), this task is maintaining and evolving an interface, in response to new requirements, that will allow data in the GENESI-DR infrastructure to be accessed from EGI resources to enable future research activities by this HUC. The international climate community for the IPCC has created the Earth System Grid (ESG) to store and share climate data. There is a need to interface ESG with EGI for climate studies covering parametric, regional and impact aspects. Critical points concern the interoperability of the security mechanisms between both organisations, data protection policy, data transfer, data storage and data caching. Presenter: Horst Schwichtenberg. Co-authors: Monique Petitdidier (IPSL), Andre Gemünd (SCAI), Wim Som de Cerff (KNMI), Michael Schnell (SCAI)

  17. Fieldservers and Sensor Service Grid as Real-time Monitoring Infrastructure for Ubiquitous Sensor Networks

    PubMed Central

    Honda, Kiyoshi; Shrestha, Aadit; Witayangkurn, Apichon; Chinnachodteeranun, Rassarin; Shimamura, Hiroshi

    2009-01-01

    The fieldserver is an Internet-based observation robot that provides an outdoor solution for monitoring environmental parameters in real time. The data from its sensors can be collected on a central server infrastructure and published on the Internet. The information from the sensor network will contribute to monitoring and modeling of various environmental issues in Asia, including agriculture, food, pollution, disasters, and climate change. An initiative called Sensor Asia is developing an infrastructure called Sensor Service Grid (SSG), which integrates fieldservers and Web GIS to realize easy and low-cost installation and operation of ubiquitous field sensor networks. PMID:22574018

  18. A Scalable proxy cache for Grid Data Access

    NASA Astrophysics Data System (ADS)

    Cristian Cirstea, Traian; Just Keijser, Jan; Koeroo, Oscar Arthur; Starink, Ronald; Templon, Jeffrey Alan

    2012-12-01

    We describe a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future https-based Content Delivery Network for grid infrastructures. Two goals drove the project: firstly to provide a “native view” of the grid for desktop-type users, and secondly to improve performance for physics-analysis type use cases, where multiple passes are made over the same set of data (residing on the grid). We further constrained the design by requiring that the system should be made of standard components wherever possible. The prototype that emerged from this exercise is a horizontally-scalable, cooperating system of web server / cache nodes, fronted by a customized webDAV server. The webDAV server is custom only in the sense that it supports http redirects (providing horizontal scaling) and that the authentication module has, as back end, a proxy delegation chain that can be used by the cache nodes to retrieve files from the grid. The prototype was deployed at Nikhef and tested at a scale of several terabytes of data and approximately one hundred fast cores of computing. Both small and large files were tested, in a number of scenarios, and with various numbers of cache nodes, in order to understand the scaling properties of the system. For properly-dimensioned cache-node hardware, the system showed speedup of several integer factors for the analysis-type use cases. These results and others are presented and discussed.
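The horizontal scaling via HTTP redirects described above can be sketched as follows: a front-end server deterministically hashes each requested path to one of N cache nodes and answers with a 302 redirect, so repeated requests for the same file always land on the same node's cache. This is an illustrative sketch, not the Nikhef code; the node URLs and the hashing scheme are assumptions.

```python
# Illustrative sketch of redirect-based request routing across cache nodes.
# Node URLs and hash choice are assumptions, not taken from the prototype.
import hashlib

CACHE_NODES = [  # hypothetical cache-node endpoints
    "http://cache01.example.org:8080",
    "http://cache02.example.org:8080",
    "http://cache03.example.org:8080",
]


def pick_node(path: str) -> str:
    """Deterministically map a file path to one cache node."""
    digest = hashlib.sha256(path.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(CACHE_NODES)
    return CACHE_NODES[index]


def redirect_response(path: str):
    """Build the 302 redirect the front-end server would return."""
    return 302, {"Location": pick_node(path) + path}


status, headers = redirect_response("/grid/data/run42/events.root")
print(status, headers["Location"])
```

A simple modulo hash like this redistributes most paths when a node is added or removed; a production system would more likely use consistent hashing so that only a fraction of the cached files move between nodes.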

  19. 78 FR 31563 - Ryan White HIV/AIDS Program Core Medical Services Waiver; Application Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-24

    ... HIV/AIDS Program Core Medical Services Waiver; Application Requirements AGENCY: Health Resources and... Public Health Service Act, as amended by the Ryan White HIV/AIDS Treatment Extension Act of 2009 (Ryan... medical services, including antiretroviral drugs, for individuals with HIV/AIDS identified and eligible...

  20. A method for modeling finite-core vortices in wake-flow calculations

    NASA Technical Reports Server (NTRS)

    Stremel, P. M.

    1984-01-01

    A numerical method for computing nonplanar vortex wakes represented by finite-core vortices is presented. The approach solves for the velocity on an Eulerian grid, using standard finite-difference techniques; the vortex wake is tracked by Lagrangian methods. In this method, the distribution of continuous vorticity in the wake is replaced by a group of discrete vortices. An axially symmetric distribution of vorticity about the center of each discrete vortex is used to represent the finite-core model. Two distributions of vorticity, or core models, are investigated: a finite distribution of vorticity represented by a third-order polynomial, and a continuous distribution of vorticity throughout the wake. The method provides for a vortex-core model that is insensitive to the mesh spacing. Results for a simplified case are presented. Computed results for the roll-up of a vortex wake generated by wings with different spanwise load distributions are presented; contour plots of the flow-field velocities are included; and comparisons are made of the computed flow-field velocities with experimentally measured velocities.
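The motivation for a finite-core model is that an ideal point vortex has a tangential velocity that diverges as 1/r at its centre, which makes wake calculations sensitive to mesh spacing. The sketch below contrasts a point vortex with a simple algebraic (Scully-type) core that stays bounded at the centre; this is for illustration only and is not the third-order polynomial core used in the paper, and the core radius rc is an assumed parameter.

```python
# Illustrative comparison of a singular point vortex with a finite-core model.
# The algebraic core here is a stand-in, not the paper's polynomial core.
import math


def v_theta_point(gamma, r):
    """Tangential velocity of an ideal point vortex: diverges as r -> 0."""
    return gamma / (2.0 * math.pi * r)


def v_theta_core(gamma, r, rc=0.1):
    """Finite-core vortex: matches the point vortex for r >> rc,
    but remains bounded (and goes to zero) at the centre."""
    return gamma * r / (2.0 * math.pi * (r * r + rc * rc))


gamma = 1.0
for r in (0.01, 0.1, 1.0):
    print(f"r={r}: point={v_theta_point(gamma, r):.4f}  core={v_theta_core(gamma, r):.4f}")
```

Far from the core (r much larger than rc) the two models agree, so the far-field induced velocities are unchanged; only the near-field behaviour is regularized, which is what makes the scheme insensitive to mesh spacing.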
