Kwf-Grid workflow management system for Earth science applications
NASA Astrophysics Data System (ADS)
Tran, V.; Hluchy, L.
2009-04-01
In this paper, we present a workflow management tool for Earth science applications in EGEE. The tool was originally developed within the K-wf Grid project for the GT4 middleware and offers many advanced features, such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting it to the gLite middleware for Earth science applications. The K-wf Grid workflow management system was developed within the "Knowledge-based Workflow System for Grid Applications" project under the 6th Framework Programme. The system is intended to: semi-automatically compose a workflow of Grid services; execute the composed workflow application in a Grid computing environment; monitor the performance of the Grid infrastructure and the Grid applications; analyze the resulting monitoring information; capture the knowledge contained in that information by means of intelligent agents; and finally reuse the knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. K-wf Grid workflow engines can support different types of jobs (e.g. GRAM jobs, web services) in a workflow. A new class of gLite job has been added to the system, allowing it to manage and execute gLite jobs on the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite allows EGEE users to benefit from its advanced features. The system has been initially tested and evaluated with applications from the ES cluster.
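The abstract's point about supporting heterogeneous job classes (GRAM jobs, web services, and a newly added gLite class) can be illustrated with a small dispatch sketch. This is a hypothetical Python schematic, not the actual K-wf Grid implementation (which is a Grid service stack); all names are illustrative:

```python
# A minimal handler-registry sketch: each job class registers a handler,
# so plugging in a new middleware (as done for gLite) means adding one
# more handler without touching the engine loop.
HANDLERS = {}

def register(job_type):
    """Register a submission handler for one job class."""
    def deco(fn):
        HANDLERS[job_type] = fn
        return fn
    return deco

@register("gram")
def run_gram(job):
    # Stand-in for submission through a GT4 GRAM gatekeeper.
    return "GRAM ran " + job["name"]

@register("glite")
def run_glite(job):
    # Stand-in for submission through the gLite WMS.
    return "gLite ran " + job["name"]

def execute(workflow):
    """Execute jobs in listed order, dispatching on their type."""
    return [HANDLERS[job["type"]](job) for job in workflow]
```

Adding support for another middleware is then a purely local change: one new decorated function, with the workflow description format unchanged.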
Optimising LAN access to grid enabled storage elements
NASA Astrophysics Data System (ADS)
Stewart, G. A.; Cowan, G. A.; Dunne, B.; Elwell, A.; Millar, A. P.
2008-07-01
When operational, the Large Hadron Collider experiments at CERN will collect tens of petabytes of physics data per year. The worldwide LHC computing grid (WLCG) will distribute this data to over two hundred Tier-1 and Tier-2 computing centres, enabling particle physicists around the globe to access the data for analysis. Although different middleware solutions exist for effective management of storage systems at collaborating institutes, the patterns of access envisaged for Tier-2s fall into two distinct categories. The first involves bulk transfer of data between different Grid storage elements using protocols such as GridFTP. This data movement will principally involve writing ESD and AOD files into Tier-2 storage. Secondly, once datasets are stored at a Tier-2, physics analysis jobs will read the data from the local SE. Such jobs require a POSIX-like interface to the storage so that individual physics events can be extracted. In this paper we consider the performance of POSIX-like access to files held in Disk Pool Manager (DPM) storage elements, a popular lightweight SRM storage manager from EGEE.
gLExec and MyProxy integration in the ATLAS/OSG PanDA workload management system
NASA Astrophysics Data System (ADS)
Caballero, J.; Hover, J.; Litmaath, M.; Maeno, T.; Nilsson, P.; Potekhin, M.; Wenaus, T.; Zhao, X.
2010-04-01
Worker nodes on the grid exhibit great diversity, making it difficult to offer uniform processing resources. A pilot job architecture, which probes the environment on the remote worker node before pulling down a payload job, can help. Pilot jobs become smart wrappers, preparing an appropriate environment for job execution and providing logging and monitoring capabilities. PanDA (Production and Distributed Analysis), an ATLAS and OSG workload management system, follows this design. However, in the simplest (and most efficient) pilot submission approach of identical pilots carrying the same identifying grid proxy, end-user accounting by the site can only be done with application-level information (PanDA maintains its own end-user accounting), and end-user jobs run with the identity and privileges of the proxy carried by the pilots, which may be seen as a security risk. To address these issues, we have enabled PanDA to use gLExec, a tool provided by EGEE which runs payload jobs under an end-user's identity. End-user proxies are pre-staged in a credential caching service, MyProxy, and the information needed by the pilots to access them is stored in the PanDA DB. gLExec then extracts from the user's proxy the proper identity under which to run. We describe the deployment, installation, and configuration of gLExec, and how PanDA components have been augmented to use it. We describe how difficulties were overcome, and how security risks have been mitigated. Results are presented from OSG and EGEE Grid environments performing ATLAS analysis using PanDA and gLExec.
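The pilot-side flow described above (fetch the pre-staged user proxy from MyProxy, then hand the payload to gLExec so it runs under the user's identity) can be sketched as follows. `myproxy-logon` and `glexec` are the real tools and `GLEXEC_CLIENT_CERT` is a real gLExec environment variable, but the exact flags and variables used at a given site may differ; nothing is executed here, the sketch only assembles the command lines:

```python
def build_payload_launch(myproxy_server, proxy_name, payload_cmd,
                         user_proxy_path="/tmp/payload_proxy"):
    """Assemble (without executing) the two pilot-side steps:
    1. retrieve the end-user proxy pre-staged in MyProxy;
    2. wrap the payload command with gLExec."""
    # Step 1: fetch the cached end-user credential from MyProxy.
    fetch = ["myproxy-logon", "-s", myproxy_server,
             "-l", proxy_name, "-o", user_proxy_path, "-n"]
    # Step 2: point gLExec at the user's proxy, then wrap the payload
    # so it runs under the identity extracted from that proxy.
    env = {"GLEXEC_CLIENT_CERT": user_proxy_path,
           "GLEXEC_SOURCE_PROXY": user_proxy_path}
    launch = ["glexec"] + list(payload_cmd)
    return fetch, env, launch
```

In the real system, the pilot would look up `myproxy_server` and `proxy_name` in the PanDA DB before running these commands.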
VOMS/VOMRS utilization patterns and convergence plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ceccanti, A.; /INFN, CNAF; Ciaschini, V.
2010-01-01
The Grid community uses two well-established registration services, which allow users to be authenticated under the auspices of Virtual Organizations (VOs). The Virtual Organization Membership Service (VOMS), developed in the context of the Enabling Grids for E-sciencE (EGEE) project, is an Attribute Authority service that issues attributes expressing membership information of a subject within a VO. VOMS allows users to be partitioned into groups and assigned roles and free-form attributes, which are then used to drive authorization decisions. The VOMS administrative application, VOMS-Admin, manages and populates the VOMS database with membership information. The Virtual Organization Management Registration Service (VOMRS), developed at Fermilab, extends the basic registration and management functionalities present in VOMS-Admin. It implements a registration workflow that requires VO usage policy acceptance and membership approval by administrators. VOMRS supports the management of multiple grid certificates, the handling of users' requests for group and role assignments, and membership status. VOMRS is capable of interfacing to local systems with personnel information (e.g. the CERN Human Resource Database) and of pulling relevant member information from them. VOMRS synchronizes the relevant subset of information with VOMS. The recent development of new features in VOMS-Admin raises the possibility of rationalizing the support and converging on a single solution by continuing and extending existing collaborations between EGEE and OSG. Such a strategy is supported by WLCG, OSG, US CMS, US ATLAS, and other stakeholders worldwide. In this paper, we will analyze the features in use by major experiments and the use cases for registration addressed by the mature single solution.
VOMS/VOMRS utilization patterns and convergence plan
NASA Astrophysics Data System (ADS)
Ceccanti, A.; Ciaschini, V.; Dimou, M.; Garzoglio, G.; Levshina, T.; Traylen, S.; Venturi, V.
2010-04-01
The Grid community uses two well-established registration services, which allow users to be authenticated under the auspices of Virtual Organizations (VOs). The Virtual Organization Membership Service (VOMS), developed in the context of the Enabling Grids for E-sciencE (EGEE) project, is an Attribute Authority service that issues attributes expressing membership information of a subject within a VO. VOMS allows users to be partitioned into groups and assigned roles and free-form attributes, which are then used to drive authorization decisions. The VOMS administrative application, VOMS-Admin, manages and populates the VOMS database with membership information. The Virtual Organization Management Registration Service (VOMRS), developed at Fermilab, extends the basic registration and management functionalities present in VOMS-Admin. It implements a registration workflow that requires VO usage policy acceptance and membership approval by administrators. VOMRS supports the management of multiple grid certificates, the handling of users' requests for group and role assignments, and membership status. VOMRS is capable of interfacing to local systems with personnel information (e.g. the CERN Human Resource Database) and of pulling relevant member information from them. VOMRS synchronizes the relevant subset of information with VOMS. The recent development of new features in VOMS-Admin raises the possibility of rationalizing the support and converging on a single solution by continuing and extending existing collaborations between EGEE and OSG. Such a strategy is supported by WLCG, OSG, US CMS, US ATLAS, and other stakeholders worldwide. In this paper, we will analyze the features in use by major experiments and the use cases for registration addressed by the mature single solution.
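The group, role, and capability attributes that VOMS issues are conventionally expressed as Fully Qualified Attribute Names (FQANs) of the form `/vo/group/Role=.../Capability=...`. As an illustration of how an authorization layer might consume them, here is a small parser sketch (the FQAN syntax is real; the function and its return structure are illustrative):

```python
def parse_fqan(fqan):
    """Split a VOMS FQAN such as
    '/cms/higgs/Role=production/Capability=NULL'
    into VO, group hierarchy, role and capability."""
    groups, role, capability = [], None, None
    for part in fqan.strip("/").split("/"):
        if part.startswith("Role="):
            role = part.split("=", 1)[1]
        elif part.startswith("Capability="):
            capability = part.split("=", 1)[1]
        else:
            groups.append(part)  # VO name followed by subgroups
    return {"vo": groups[0], "groups": groups,
            "role": role, "capability": capability}
```

A service making authorization decisions would typically match the parsed groups and role against its local access policy.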
FermiGrid - experience and future plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chadwick, K.; Berman, E.; Canal, P.
2007-09-01
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.
FermiGrid—experience and future plans
NASA Astrophysics Data System (ADS)
Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Sharma, N.; Timm, S.; Yocum, D. R.
2008-07-01
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.
A Messaging Infrastructure for WLCG
NASA Astrophysics Data System (ADS)
Casey, James; Cons, Lionel; Lapka, Wojciech; Paladin, Massimo; Skaburskas, Konstantin
2011-12-01
During the EGEE-III project operational tools such as SAM, Nagios, Gridview, the regional Dashboard and GGUS moved to a communication architecture based on ActiveMQ, an open-source enterprise messaging solution. LHC experiments, in particular ATLAS, developed prototypes of systems using the same messaging infrastructure, validating the system for their use-cases. In this paper we describe the WLCG messaging use cases and outline an improved messaging architecture based on the experience gained during the EGEE-III period. We show how this provides a solid basis for many applications, including the grid middleware, to improve their resilience and reliability.
Operational flash flood forecasting platform based on grid technology
NASA Astrophysics Data System (ADS)
Thierion, V.; Ayral, P.-A.; Angelini, V.; Sauvagnargues-Lesage, S.; Nativi, S.; Payrastre, O.
2009-04-01
Flash flood events in the south of France, such as those of 8 and 9 September 2002 in the Grand Delta territory, caused significant economic and human damage. Following this catastrophic hydrological situation, a reform of the flood warning services was initiated (enacted in 2006). This political reform transformed the 52 existing flood warning services (SAC) into 22 flood forecasting services (SPC), assigning them more hydrologically consistent territories and a new, effective hydrological forecasting mission. Furthermore, a national central service (SCHAPI) was created to ease this transformation and support local services in their new objectives. New functioning requirements have been identified: - SPC and SCHAPI carry the responsibility to clearly disseminate to public bodies, civil protection actors and the population crucial hydrological information to better anticipate potentially dramatic flood events; - an effective hydrological forecasting mission for these flood forecasting services seems essential, particularly for flash flood phenomena. Thus, model improvement and optimization was one of the most critical requirements. Initially dedicated to supporting forecasters in their monitoring mission through the analysis of measuring stations and rainfall radar images, hydrological models have to become more efficient in their capacity to anticipate the hydrological situation. Understanding the natural phenomena occurring during flash floods is a main focus of current hydrological research. Rather than trying to explain such complex processes, the research presented here addresses the well-known need of these services for computational power and data storage capacity. In recent years, Grid technology has emerged as a technological revolution in high performance computing (HPC), allowing large-scale resource sharing, use of computational power, and collaboration across networks.
Nowadays, the EGEE (Enabling Grids for E-sciencE) project represents the most important effort in terms of grid technology development. This paper presents an operational flash flood forecasting platform which has been developed in the framework of the CYCLOPS European project, which provides one of the virtual organizations of the EGEE project. The platform has been designed to enable multi-simulation processes that ease forecasting operations over several supervised watersheds in the Grand Delta (SPC-GD) territory. The Grid infrastructure, by providing multiple remote computing elements, enables the processing of multiple rainfall scenarios, derived from the original meteorological forecast transmitted by Meteo-France, and of their respective hydrological simulations. First results show that, from one forecast scenario, this new approach permits the simulation of more than 200 different scenarios to support forecasters in their aforesaid mission, and appears to be an efficient hydrological decision-making tool. Although the system seems operational, model validity has to be confirmed, so further research is necessary to improve the model core in terms of hydrological aspects. Finally, this platform could be an efficient tool for developing other modelling aspects such as calibration or real-time data assimilation.
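The multi-scenario approach, deriving an ensemble of rainfall scenarios from a single forecast and running each as an independent grid job, can be sketched as follows. The perturbation scheme and all names here are hypothetical simplifications, not the actual CYCLOPS implementation:

```python
import random

def make_scenarios(base_rainfall, n=200, spread=0.3, seed=42):
    """Derive an ensemble of rainfall scenarios from one forecast by
    applying bounded multiplicative perturbations (a deliberately
    simplistic stand-in for a real downscaling/perturbation scheme)."""
    rng = random.Random(seed)
    return [[max(0.0, r * (1.0 + rng.uniform(-spread, spread)))
             for r in base_rainfall] for _ in range(n)]

def to_jobs(scenarios, model_exe="run_hydro_model"):
    """Turn each scenario into an independent job description that a
    grid broker could dispatch to a separate computing element."""
    return [{"executable": model_exe, "scenario_id": i, "rainfall": s}
            for i, s in enumerate(scenarios)]
```

Because the jobs are mutually independent, the ensemble parallelizes trivially across the grid's computing elements, which is what makes 200+ simulations per forecast feasible in operational time.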
From EGEE Operations Portal towards EGI Operations Portal
NASA Astrophysics Data System (ADS)
Cordier, Hélène; L'Orphelin, Cyril; Reynaud, Sylvain; Lequeux, Olivier; Loikkanen, Sinikka; Veyre, Pierre
Grid operators in EGEE have been using a dedicated dashboard as their central operational tool, stable and scalable over the last 5 years despite continuous upgrades driven by users, monitoring tools and data providers. In EGEE-III, the recent regionalisation of operations led the Operations Portal developers to conceive a standalone instance of this tool. We will see how the dashboard reorganization paved the way for the re-engineering of the portal itself. The outcome is an easily deployable package customized with relevant information sources and specific decentralized operational requirements. This package is composed of a generic and scalable data access mechanism, Lavoisier; a renowned PHP framework for configuration flexibility, Symfony; and a MySQL database. VO life cycle and operational information, EGEE broadcasts and downtime notifications are next in the major reorganization, until all other key features of the Operations Portal are migrated to the framework. Feature specifications will be sketched at the same time to adapt to EGI requirements and to upgrade. Future work on feature regionalisation, on new advanced features and on strategy planning will be tracked in EGI-InSPIRE through the Operations Tools Advisory Group (OTAG), where all users, customers and third parties of the Operations Portal are represented from January 2010.
Real Time Monitor of Grid job executions
NASA Astrophysics Data System (ADS)
Colling, D. J.; Martyniak, J.; McGough, A. S.; Křenek, A.; Sitera, J.; Mulač, M.; Dvořák, F.
2010-04-01
In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. This is arguably the most popular dissemination tool within the EGEE [1] Grid, having been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008. The RTM gathers information from EGEE sites hosting Logging and Bookkeeping (LB) services. Information is cached locally at a dedicated server at Imperial College London and made available for clients to use in near real time. The system consists of three main components: the RTM server, the enquirer and an Apache web server which is queried by clients. The RTM server queries the LB servers at fixed time intervals, collecting job related information and storing it in a local database. Job related data includes not only job state (i.e. Scheduled, Waiting, Running or Done) along with timing information, but also other attributes such as Virtual Organization and Computing Element (CE) queue, if known. The job data stored in the RTM database is read by the enquirer every minute and converted to an XML format which is stored on a web server. This decouples the RTM server database from the client, removing the bottleneck caused by many clients simultaneously accessing the database. This information can be visualized through either a 2D or 3D Java based client, with live job data either being overlaid on a two-dimensional map of the world or rendered in three dimensions over a globe using OpenGL.
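The enquirer's database-to-XML step, which decouples clients from the RTM database, can be sketched as below. The record fields and element names are illustrative, not the RTM's actual schema:

```python
import xml.etree.ElementTree as ET

def jobs_to_xml(rows):
    """Convert job records (as the RTM server might store them) into an
    XML snapshot that a web server can serve to visualization clients.
    Clients then read this static snapshot instead of hitting the
    database directly."""
    root = ET.Element("rtm_snapshot")
    for r in rows:
        job = ET.SubElement(root, "job", id=r["id"])
        # Emit a fixed set of attributes; missing ones (e.g. the CE
        # queue, which the abstract notes is not always known) are
        # marked explicitly rather than omitted.
        for field in ("state", "vo", "ce_queue"):
            ET.SubElement(job, field).text = str(r.get(field, "unknown"))
    return ET.tostring(root, encoding="unicode")
```

Regenerating such a snapshot once a minute bounds the database load at one reader, no matter how many 2D/3D clients poll the web server.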
A Security Architecture for Grid-enabling OGC Web Services
NASA Astrophysics Data System (ADS)
Angelini, Valerio; Petronzio, Luca
2010-05-01
In the proposed presentation we describe an architectural solution for enabling secure access to Grids, and possibly other large-scale on-demand processing infrastructures, through OGC (Open Geospatial Consortium) Web Services (OWS). This work has been carried out in the context of the security thread of the G-OWS Working Group. G-OWS (gLite enablement of OGC Web Services) is an international open initiative started in 2008 by the European CYCLOPS, GENESI-DR, and DORII Project Consortia in order to collect and coordinate experiences in the enablement of OWS's on top of the gLite Grid middleware. G-OWS investigates the development of Spatial Data and Information Infrastructures (SDI and SII) based on Grid/Cloud capacity in order to enable Earth Science applications and tools. Concerning security, the integration of OWS-compliant infrastructures and gLite Grids needs to address relevant challenges, due to their respective design principles. In fact, OWS's are part of a Web based architecture that delegates security aspects to other specifications, whereas the gLite middleware implements the Grid paradigm with a strong security model (the gLite Grid Security Infrastructure: GSI). In our work we propose a Security Architectural Framework allowing the seamless use of Grid-enabled OGC Web Services through the federation of existing (mostly web based) security systems with the gLite GSI. This is made possible by mediating between different security realms, whose mutual trust is established in advance during the deployment of the system itself. Our architecture is composed of three different security tiers: the user's security system, a specific G-OWS security system, and the gLite Grid Security Infrastructure.
Applying the separation-of-concerns principle, each of these tiers is responsible for controlling access to a well-defined resource set, respectively: the user's organization resources, the geospatial resources and services, and the Grid resources. While the gLite middleware is tied to a consolidated security approach based on X.509 certificates, our system is able to support different kinds of user security infrastructures. Our central component, the G-OWS Security Framework, is based on the OASIS WS-Trust specifications and on the OGC GeoRM architectural framework. This makes it possible to satisfy advanced requirements such as the enforcement of specific geospatial policies and complex secure web service chained requests. The typical use case is represented by a scientist belonging to a given organization who issues a request to a G-OWS Grid-enabled Web Service. The system initially asks the user to authenticate to his/her organization's security system and, after verification of the user's security credentials, it translates the user's digital identity into a G-OWS identity. This identity is linked to a set of attributes describing the user's access rights to the G-OWS services and resources. Inside the G-OWS Security system, access restrictions are applied making use of the enhanced geospatial capabilities specified by OGC GeoXACML. If the required action needs to make use of the Grid environment, the system checks whether the user is entitled to access a Grid infrastructure. In that case his/her identity is translated into a temporary Grid security token using the Short Lived Credential Services (IGTF standard). In our case, for the specific gLite Grid infrastructure, some information (VOMS attributes) is plugged into the Grid security token to grant access to the user's Virtual Organization Grid resources. The resulting token is used to submit the request to the Grid, and also by the various gLite middleware elements to verify the user's grants.
Based on the presented framework, the G-OWS Security Working Group developed a prototype enabling the execution of OGC Web Services on the EGEE Production Grid through federation with a Shibboleth based security infrastructure. Future plans aim to integrate other Web authentication services such as OpenID, Kerberos and WS-Federation.
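The chained identity translation in the use case above (home-organization identity, then G-OWS identity, then a short-lived grid token carrying VO attributes) can be sketched schematically. Every name, field and policy here is hypothetical; the real system uses WS-Trust, SLCS and VOMS rather than plain dictionaries:

```python
def translate_identity(user, trust_map):
    """Sketch of the three-step identity translation: reject users from
    non-federated organizations, mint a G-OWS identity with its access
    rights, and (only if entitled) mint a short-lived grid token with
    VO attributes attached."""
    if user["org"] not in trust_map:
        # Mutual trust is established at deployment time; unknown
        # organizations cannot be translated at all.
        raise PermissionError("organization not federated")
    gows_id = {"subject": "gows:%s:%s" % (user["org"], user["name"]),
               "rights": trust_map[user["org"]]["rights"]}
    if "grid" not in gows_id["rights"]:
        return gows_id, None  # geospatial access only, no grid token
    grid_token = {"dn": gows_id["subject"],
                  "voms_attrs": ["/%s/Role=member" % user["vo"]],
                  "lifetime_h": 12}  # short-lived, SLCS-style
    return gows_id, grid_token
```

The point of the sketch is the layering: each tier only inspects the credential minted by the previous one, never the original organizational login.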
Integrating TRENCADIS components in gLite to share DICOM medical images and structured reports.
Blanquer, Ignacio; Hernández, Vicente; Salavert, José; Segrelles, Damià
2010-01-01
The problem of sharing medical information among different centres has been tackled by many projects. Several of them target the specific problem of sharing DICOM images and structured reports (DICOM-SR), such as the TRENCADIS project. In this paper we propose sharing and organizing DICOM data and DICOM-SR metadata by benefiting from the existing deployed Grid infrastructures compliant with gLite, such as EGEE or the Spanish NGI. These infrastructures contribute a large amount of storage resources for creating knowledge databases and also provide metadata storage resources (such as AMGA) to semantically organize reports in a tree structure. We first present the extension of the TRENCADIS architecture to use gLite components (LFC, AMGA, SE) for the sake of increasing interoperability. Using the metadata from DICOM-SR, and maintaining its tree structure, enables federating different but compatible diagnostic structures and simplifies the definition of complex queries. This article describes how to do this in AMGA and shows an approach to efficiently code radiology reports to enable the multi-centre federation of data resources.
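The idea of keeping DICOM-SR metadata in a tree of collections, so that sub-trees can be queried and compatible structures federated, can be sketched with plain dictionaries. This is only an in-memory analogy for the directory-like organization AMGA provides; paths and fields are illustrative:

```python
def insert_report(tree, path, report_id, attrs):
    """File a report's metadata under a tree of collections, in the
    spirit of AMGA directories (e.g. /oncology/breast/biopsy)."""
    node = tree
    for part in path.strip("/").split("/"):
        node = node.setdefault(part, {})
    node.setdefault("_reports", []).append(dict(attrs, id=report_id))
    return tree

def _collect(node):
    """Gather report entries from a node and all its descendants."""
    out = list(node.get("_reports", []))
    for key, child in node.items():
        if key != "_reports":
            out.extend(_collect(child))
    return out

def query(tree, path):
    """Return all reports stored at or below a sub-tree, so a query at
    any level of the diagnostic hierarchy sees the federated content."""
    node = tree
    for part in path.strip("/").split("/"):
        node = node.get(part, {})
    return _collect(node)
```

Because queries operate on sub-trees, two centres whose report hierarchies share a common prefix can be federated simply by mounting them under the same path.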
Using CREAM and CEMonitor for job submission and management in the gLite middleware
NASA Astrophysics Data System (ADS)
Aiftimiei, C.; Andreetto, P.; Bertocco, S.; Dalla Fina, S.; Dorigo, A.; Frizziero, E.; Gianelle, A.; Marzolla, M.; Mazzucato, M.; Mendez Lorenzo, P.; Miccio, V.; Sgaravatto, M.; Traldi, S.; Zangrando, L.
2010-04-01
In this paper we describe the use of the CREAM and CEMonitor services for job submission and management within the gLite Grid middleware. Both CREAM and CEMonitor address one of the most fundamental operations of a Grid middleware, that is, job submission and management. Specifically, CREAM is a job management service used for submitting, managing and monitoring computational jobs. CEMonitor is an event notification framework, which can be coupled with CREAM to provide users with asynchronous job status change notifications. Both components have been integrated in the gLite Workload Management System by means of ICE (Interface to CREAM Environment). These software components have been released for production in the EGEE Grid infrastructure and, in the case of the CEMonitor service, also in the OSG Grid. In this paper we report the current status of these services, the results achieved, and the issues that still have to be addressed.
IGI (the Italian Grid initiative) and its impact on the Astrophysics community
NASA Astrophysics Data System (ADS)
Pasian, F.; Vuerli, C.; Taffoni, G.
IGI - the Association for the Italian Grid Infrastructure - has been established as a consortium of 14 different national institutions to provide long-term sustainability to the Italian Grid. Its formal predecessor, the Grid.it project, came to a close in 2006; to extend the benefits of this project, IGI has taken over and acts as the national coordinator for the different sectors of the Italian e-Infrastructure present in EGEE. IGI plans to support activities in a vast range of scientific disciplines - e.g. Physics, Astrophysics, Biology, Health, Chemistry, Geophysics, Economy, Finance - and possible extensions to other sectors such as Civil Protection, e-Learning, and dissemination in universities and secondary schools. Among these, the Astrophysics community is active as a user, by porting applications of various kinds, but also as a resource provider in terms of computing power and storage, and as a middleware developer.
Enhancement of HIV-1 VLP production using gene inhibition strategies.
Fuenmayor, Javier; Cervera, Laura; Rigau, Cristina; Gòdia, Francesc
2018-05-01
Gag polyprotein from HIV-1 is able to generate virus-like particles (VLPs) when recombinantly expressed in animal cell platforms. HIV-1 VLP production in HEK293 cells can be improved by the use of different strategies for increasing product titers. One of them is the so-called extended gene expression (EGE), based on repeated medium exchanges and retransfections of the cell culture to prolong the production phase. Another approach is the media supplementation with gene expression enhancers such as valproic acid and caffeine, despite their detrimental effect on cell viability. Valproic acid is a histone deacetylase inhibitor while caffeine has a phosphodiesterase inhibition effect. Here, the combination of the EGE protocol with additive supplementation to maximize VLP production is first tested. As an alternative to the direct additive supplementation, the replacement of these chemical additives by iRNA for obtaining the same inhibition action is also tested. The combination of the EGE protocol with caffeine and valproic acid supplementation resulted in a 1.5-fold improvement in HIV-1 VLP production compared with the EGE protocol alone, representing an overall 18-fold improvement over conventional batch cultivation. shRNAs encoded in the expression vector were tested to substitute valproic acid and caffeine. This novel strategy enhanced VLP production by 2.3 fold without any detrimental effect on cell viability (91.7%) compared with the batch cultivation (92.0%). Finally, the combination of shRNA with EGE resulted in more than 15.6-fold improvement compared with the batch standard protocol traditionally used. The methodology developed enables the production of high titers of HIV-1 VLPs avoiding the toxic effects of additives.
Grid-based International Network for Flu observation (g-INFO).
Doan, Trung-Tung; Bernard, Aurélien; Da-Costa, Ana Lucia; Bloch, Vincent; Le, Thanh-Hoa; Legre, Yannick; Maigne, Lydia; Salzemann, Jean; Sarramia, David; Nguyen, Hong-Quang; Breton, Vincent
2010-01-01
The 2009 H1N1 outbreak has demonstrated that continuing vigilance, planning, and strong public health research capability are essential defenses against emerging health threats. Molecular epidemiology of influenza virus strains provides scientists with clues about the temporal and geographic evolution of the virus. In the present paper, researchers from France and Vietnam propose a global surveillance network based on grid technology: the goal is to federate influenza data servers and automatically deploy molecular epidemiology studies. A first prototype based on AMGA and the WISDOM Production Environment extracts influenza H1N1 sequence data daily from NCBI; the data are processed through a phylogenetic analysis pipeline deployed on the EGEE and AuverGrid e-infrastructures. The analysis results are displayed on a web portal (http://g-info.healthgrid.org) for epidemiologists to monitor the H1N1 pandemic.
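The daily ingest step, selecting sequences deposited since the previous run and batching them into analysis jobs, can be sketched as follows. The record fields are illustrative, not the NCBI schema, and the batching policy is an assumption:

```python
def select_new_sequences(records, last_update):
    """Keep only sequences deposited since the previous daily run.
    ISO dates compare correctly as strings, so no date parsing needed."""
    return [r for r in records if r["date"] > last_update]

def build_analysis_batches(seqs, batch_size=50):
    """Group new sequences into fixed-size batches; each batch would
    become one alignment-plus-phylogeny job submitted to the grid."""
    return [seqs[i:i + batch_size] for i in range(0, len(seqs), batch_size)]
```

Running this pair once a day against the federated data servers yields an incremental workload, so the phylogenetic pipeline only recomputes what the latest deposits require.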
NASA Astrophysics Data System (ADS)
Licari, Daniele; Calzolari, Federico
2011-12-01
In this paper we introduce a new way to deal with Grid portals, referring to our implementation. L-GRID is a light portal to access the EGEE/EGI Grid infrastructure via the Web, allowing users to submit their jobs from a common Web browser in a few minutes, without any knowledge of the Grid infrastructure. It provides control over the complete lifecycle of a Grid job, from its submission and status monitoring to the output retrieval. The system, implemented as a client-server architecture, is based on the Globus Grid middleware. The client side application is based on a Java applet; the server relies on a Globus User Interface. There is no need for user registration on the server side, and the user needs only his own X.509 personal certificate. The system is user-friendly, secure (it uses the SSL protocol and mechanisms for dynamic delegation and identity creation in public key infrastructures), highly customizable, open source, and easy to install. The X.509 personal certificate never leaves the local machine. The portal reduces the time spent on job submission, granting at the same time higher efficiency and a better security level in proxy delegation and management.
NASA Astrophysics Data System (ADS)
Parodi, A.; Craig, G. C.; Clematis, A.; Kranzlmueller, D.; Schiffers, M.; Morando, M.; Rebora, N.; Trasforini, E.; D'Agostino, D.; Keil, K.
2010-09-01
Hydrometeorological science has made strong progress over the last decade at the European and worldwide level: new modeling tools, post-processing methodologies, observational data and the corresponding ICT (Information and Communication Technology) resources are available. Recent European efforts in developing platforms for e-Science, such as EGEE (Enabling Grids for E-sciencE), SEE-GRID-SCI (South East Europe GRID e-Infrastructure for regional e-Science) and the German C3-Grid, have demonstrated their ability to provide an ideal basis for the sharing of complex hydrometeorological data sets and tools. Despite these early initiatives, however, awareness of the potential of Grid technology as a catalyst for future hydrometeorological research is still low, and both adoption and exploitation have been astonishingly slow, not only within individual EC member states but also on a European scale. With this background in mind, and given that European ICT infrastructures are in the process of transitioning to a sustainable and permanent service utility, as underlined by the European Grid Initiative (EGI) and the Partnership for Advanced Computing in Europe (PRACE), the Distributed Research Infrastructure for Hydro-Meteorology Study (DRIHMS, co-funded by the EC under the 7th Framework Programme) project has been initiated. The goal of DRIHMS is the promotion of Grids in particular and e-Infrastructures in general within the European hydrometeorological research (HMR) community through the diffusion of a Grid platform for e-collaboration in this earth science sector: the idea is to further boost European research excellence and competitiveness in the fields of hydrometeorological research and Grid research by bridging the gaps between these two scientific communities.
Furthermore, the project intends to transfer its results beyond hydrometeorological science proper, supporting the assessment of the effects of extreme hydrometeorological events on society and the development of tools that improve society's adaptation and resilience to the challenges of climate change. This paper provides an overview of the DRIHMS ideas and presents the results of the DRIHMS HMR and ICT surveys.
ICT-based hydrometeorology science and natural disaster societal impact assessment
NASA Astrophysics Data System (ADS)
Parodi, A.; Clematis, A.; Craig, G. C.; Kranzlmueller, D.
2009-09-01
In the Lisbon strategy, the 2005 European Council identified knowledge and innovation as the engines of sustainable growth and stated that it is essential to build a fully inclusive information society. In parallel, the World Conference on Disaster Reduction (Hyogo, 2005) defined among its thematic priorities the improvement of international cooperation in hydrometeorological research activities. This was recently confirmed at the joint press conference of the Centre for Research on the Epidemiology of Disasters (CRED) and the United Nations International Strategy for Disaster Reduction (UNISDR) Secretariat, held in January 2009, where it was noted that floods and storms are among the natural disasters with the greatest impact on human life. Hydrometeorological science has made strong progress over the last decade at the European and worldwide level: new modelling tools, post-processing methodologies and observational data are available. Recent European efforts in developing platforms for e-science, such as EGEE (Enabling Grids for E-sciencE), SEE-GRID-SCI (South East Europe GRID e-Infrastructure for regional e-Science) and the German C3-Grid, provide an ideal basis for the sharing of complex hydrometeorological data sets and tools. Despite these early initiatives, however, awareness of the potential of Grid technology as a catalyst for future hydrometeorological research is still low, and both adoption and exploitation have been astonishingly slow, not only within individual EC member states but also on a European scale.
With this background in mind, the goal of the Distributed Research Infrastructure for Hydro-Meteorology Study (DRIHMS) project is to promote the Grid culture within the European hydrometeorological research community through the diffusion of a Grid platform for e-collaboration in this earth science sector: the idea is to further boost European research excellence and competitiveness in the fields of hydrometeorological research and Grid research by bridging the gaps between these two scientific communities. Furthermore, the project intends to transfer its results beyond hydrometeorological science proper, supporting the assessment of the effects of extreme hydrometeorological events on society and the development of tools that improve society's adaptation and resilience to the challenges of climate change.
Ogawa, Koki; Fuchigami, Yuki; Hagimori, Masayori; Fumoto, Shintaro; Miura, Yusuke; Kawakami, Shigeru
2018-01-01
We previously developed anionic ternary bubble lipopolyplexes, an ultrasound-responsive carrier, aiming at safe and efficient gene transfection. However, bubble lipopolyplexes have a low capacity for echo gas (C3F8) encapsulation (EGE) in nonionic solutions such as 5% glucose. On the other hand, we were able to prepare bubble lipopolyplexes by adding phosphate-buffered saline before C3F8 encapsulation. Surface charge regulation (SCR) by electrolytes stabilizes liposome/plasmid DNA (pDNA) complexes through accelerated membrane fusion. Considering these facts, we hypothesized that SCR by electrolytes such as NaCl would promote C3F8 encapsulation in bubble lipopolyplexes via accelerated membrane fusion. We defined this hypothesis as SCR-based EGE (SCR-EGE). Bubble lipopolyplexes prepared by the SCR-EGE method (SCR-EGE bubble lipopolyplexes) are expected to facilitate gene transfection because of their high C3F8 content. Therefore, we applied these methods to gene delivery to the brain and evaluated the characteristics of transgene expression there. First, we measured the encapsulation efficiency of C3F8 in SCR-EGE bubble lipopolyplexes. Next, we applied these bubble lipopolyplexes to the mouse brain and evaluated the transfection efficiency. Furthermore, the three-dimensional transgene distribution was observed using multicolor deep imaging. SCR-EGE bubble lipopolyplexes had a higher C3F8 content than conventional bubble lipopolyplexes. In terms of safety, SCR-EGE bubble lipopolyplexes possessed an anionic potential and showed no aggregation with erythrocytes. After applying SCR-EGE bubble lipopolyplexes to the brain, high transgene expression was observed in combination with ultrasound irradiation. Transgene expression mediated by SCR-EGE bubble lipopolyplexes was observed mainly on blood vessels and partially outside of blood vessels.
The SCR-EGE method may promote C3F8 encapsulation in bubble lipopolyplexes, and SCR-EGE bubble lipopolyplexes may be potent carriers for efficient and safe gene transfection in the brain, especially to the blood vessels.
Öztopuz, Özlem; Pekin, Gülseren; Park, Ro Dong; Eltem, Rengin
2018-05-03
Bacillus is an antagonistic bacterial genus that shows high effectiveness against different phytopathogenic fungi and produces various lytic enzymes, such as chitosanase, chitinase, protease, and glucanase. The aim of this study was to screen Bacillus spp. for lytic enzyme production and to evaluate the antifungal effects of the selected strains for biocontrol of mycotoxigenic and phytopathogenic fungi. A total of 92 endospore-forming bacterial isolates from 24 fig orchard soil samples were screened for chitosanase production; the six most chitosanolytic isolates were selected, assayed for chitinase, protease, and N-acetyl-β-hexosaminidase activity, and identified molecularly. The antagonistic activities of the six Bacillus strains against Aspergillus niger EGE-K-213, Aspergillus foetidus EGE-K-211, Aspergillus ochraceus EGE-K-217, and Fusarium solani KCTC 6328 were evaluated. Inhibition of fungal spore germination and of biomass was also measured against A. niger EGE-K-213. The results demonstrated that Bacillus mojavensis EGE-B-5.2i and Bacillus thuringiensis EGE-B-14.1i were the more efficient antifungal agents against A. niger EGE-K-213. B. mojavensis EGE-B-5.2i showed maximum inhibition of biomass (30.4%), and B. thuringiensis EGE-B-14.1i showed maximum inhibition of spore germination (33.1%) at 12 h. This is the first study reporting the potential of antagonistic Bacillus strains as biocontrol agents against mycotoxigenic fungi of fig orchards.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livny, Miron; Shank, James; Ernst, Michael
Under this SciDAC-2 grant the project's goal was to stimulate new discoveries by providing scientists with effective and dependable access to an unprecedented national distributed computational facility: the Open Science Grid (OSG). We proposed to achieve this through the work of the Open Science Grid Consortium: a unique hands-on multi-disciplinary collaboration of scientists, software developers and providers of computing resources. Together the stakeholders in this consortium sustain and use a shared distributed computing environment that transforms simulation and experimental science in the US. The OSG consortium is an open collaboration that actively engages new research communities. We operate an open facility that brings together a broad spectrum of compute, storage, and networking resources and interfaces to other cyberinfrastructures, including the US XSEDE (previously TeraGrid) and the European Enabling Grids for E-sciencE (EGEE), as well as campus and regional grids. We leverage middleware provided by computer science groups, facility IT support organizations, and computing programs of application communities for the benefit of consortium members and the US national CI.
Perforated duodenal ulcer: An unusual manifestation of allergic eosinophilic gastroenteritis.
Riggle, Kevin M; Wahbeh, Ghassan; Williams, Elizabeth M; Riehle, Kimberly J
2015-11-28
Spontaneous perforation of a duodenal ulcer secondary to allergic eosinophilic gastroenteritis (EGE) has not been previously reported. We report such a case in a teenager who presented with peritonitis. After exploration and operative repair of his ulcer, he continued to experience intermittent abdominal pain, and further evaluation revealed eosinophilic gastroenteritis in the setting of multiple food allergies. His EGE resolved after he adhered to a restrictive diet. Both duodenal ulcers and EGE are very rarely seen in pediatric patients. EGE has a variable presentation depending on the layer(s) of bowel wall affected and the segment of the gastrointestinal tract involved. Once diagnosed, it may respond to dietary changes in patients with recognized food allergies, or to steroids in patients in whom an underlying cause is not identified. Our case highlights the need to keep EGE in the differential diagnosis when treating pediatric patients with duodenal ulcers. The epidemiology, pathophysiology, and treatment of EGE are also discussed, along with a review of the current literature.
[Cytotoxicity induced by gasoline engine exhausts associated with oxidative stress].
Che, Wangjun; Zhang, Zunzhen; Wu, Mei; Wang, Ling
2008-09-01
To evaluate the relationship between the cytotoxic effects of extracts of condensate, particulates and semivolatile organic compounds from gasoline engine exhaust (EGE) and oxidative stress. After A549 cells were treated with various concentrations of EGE for 2 h, cell viability was examined by MTT assay. Meanwhile, the reactive oxygen species (ROS) induced by EGE in A549 cells were examined: 2',7'-dichlorodihydrofluorescein diacetate (DCFH-DA) was used to capture ROS, and its level was measured as pixel fluorescence intensity. Furthermore, A549 cells pretreated with different concentrations of glutathione (GSH) were exposed to various concentrations of EGE for 2 h, and cell viability was then examined. The viability of A549 cells significantly decreased in comparison to the solvent group when the concentration of EGE was more than 3.9 ml/ml (P < 0.05). There was a dose-response relationship between viability and the concentration of EGE (r = -0.81, P < 0.01). At concentrations of 31.3 ml/ml and 62.5 ml/ml, the pixel fluorescence intensities were (125.0 +/- 19.2) and (168.9 +/- 16.9), significantly higher than that of the control (8.5 +/- 1.4). In addition, the viability of cells pretreated with GSH gradually increased with increasing GSH concentration. There was also a significant difference between the pretreated and non-pretreated groups at concentrations of 0.5 mmol/L and 1.0 mmol/L. Oxidative stress could be one of the mechanisms of the cytotoxic effects of EGE.
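The dose-response figure quoted above (r = -0.81) is a Pearson correlation between EGE concentration and cell viability. As an illustration with invented numbers (not the study's data), the coefficient can be computed as:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical dose (exposure concentration) vs. viability (%) pairs:
dose = [3.9, 7.8, 15.6, 31.3, 62.5]
viability = [92.0, 85.0, 71.0, 52.0, 30.0]
r = pearson_r(dose, viability)  # negative: viability falls as dose rises
```

A strongly negative r, as here, is what the abstract's dose-response claim amounts to quantitatively.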
JTS and its Application in Environmental Protection Applications
NASA Astrophysics Data System (ADS)
Atanassov, Emanouil; Gurov, Todor; Slavov, Dimitar; Ivanovska, Sofiya; Karaivanova, Aneta
2010-05-01
Environmental protection has been identified as a domain of high interest for South East Europe, addressing practical problems related to security and quality of life. The gridification of the Bulgarian applications MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies, which aims to develop an efficient Grid implementation of a sensitivity analysis of the Danish Eulerian Model), MSACM (Multi-Scale Atmospheric Composition Modeling, which aims to produce an integrated, multi-scale, Balkan-region-oriented modelling system able to interface the scales of the problem from emissions on the urban scale to their transport and transformation on the local and regional scales) and MSERRHSA (Modeling System for Emergency Response to the Release of Harmful Substances in the Atmosphere, which aims to develop and deploy a modeling system for emergency response to the release of harmful substances in the atmosphere, targeted at the SEE and more specifically the Balkan region) faces several challenges. These applications are resource intensive, in terms of both CPU utilization and data transfers and storage. The use of the applications for operational purposes imposes requirements on the availability of resources which are difficult to meet in a dynamically changing Grid environment. The validation of the applications is resource intensive and time consuming. The successful resolution of these problems requires collaborative work and support on the part of the infrastructure operators, who are in turn interested in avoiding underutilization of resources. That is why we developed the Job Track Service and tested it during the development of the Grid implementations of MCSAES, MSACM and MSERRHSA. The Job Track Service (JTS) is a Grid middleware component which facilitates the provision of Quality of Service in Grid infrastructures using gLite middleware, such as EGEE and SEEGRID.
The service is based on messaging middleware and uses standard protocols such as AMQP (Advanced Message Queuing Protocol) and XMPP (eXtensible Messaging and Presence Protocol) for real-time communication, while its security model is based on GSI authentication. It enables resource owners to provide the most popular types of QoS of execution to some of their users, using a standardized model. The first version of the service offered services to individual users. In this work we describe a new version of the Job Track Service offering application-specific functionality, geared towards the specific needs of environmental modelling and protection applications and oriented towards collaborative usage by groups and subgroups of users. We used the modular design of the JTS to implement plugins enabling smoother interaction of the users with the Grid environment. Our experience shows improved response times and a decreased failure rate for executions of the applications. We also present such observations from the use of the South East European Grid infrastructure.
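The messaging backbone described above boils down to a publish/subscribe pattern: jobs publish status events to a broker, and monitoring components consume them in real time. A toy in-memory sketch of that pattern (standard library only; the real JTS speaks AMQP/XMPP, and none of these names come from its code) could be:

```python
import queue
from collections import defaultdict

class Broker:
    """Toy message broker: one queue per topic, much as an AMQP broker
    routes messages to bound queues."""
    def __init__(self):
        self.topics = defaultdict(queue.Queue)

    def publish(self, topic, message):
        self.topics[topic].put(message)

    def consume(self, topic):
        """Drain and return all pending messages for a topic."""
        q, out = self.topics[topic], []
        while not q.empty():
            out.append(q.get())
        return out

# A job reports its lifecycle; a QoS monitor consumes the events.
broker = Broker()
for state in ("SUBMITTED", "RUNNING", "DONE"):
    broker.publish("jobs/42", state)
events = broker.consume("jobs/42")
```

Decoupling producer and consumer through the broker is what lets a tracking service observe job state without the job and monitor ever connecting directly.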
ETICS: the international software engineering service for the grid
NASA Astrophysics Data System (ADS)
Meglio, A. D.; Bégin, M.-E.; Couvares, P.; Ronchieri, E.; Takacs, E.
2008-07-01
The ETICS system is a distributed software configuration, build and test system designed to meet the need to improve the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system allows complex dependencies among applications and middleware components to be taken into account and provides a rich environment for performing static and dynamic analysis of the software and executing deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects are using the software engineering services to build, validate and distribute their software. Finally, a number of significant use and test cases are described to show how ETICS can be used, in particular, to perform interoperability tests of grid middleware using the grid itself.
NASA Technical Reports Server (NTRS)
Gayen, S. K.; Wang, W. B.; Petricevic, V.; Yoo, K. M.; Alfano, R. R.
1987-01-01
Ti(3+)-doped Al2O3 has recently been demonstrated to be a tunable solid-state laser system with Ti(3+) as the laser-active ion. In this paper, the kinetics of vibrational transitions in the 2E(g)E(3/2) electronic state of Ti(3+):Al2O3 (crucial for characterizing new host materials for the Ti ion) were investigated. A 527-nm, 5-ps pulse was used to excite a band of higher vibrational levels of the 2E(g)E(3/2) state, and the subsequent growth of population in the zero and lower vibrational levels was monitored by a 3.9-micron picosecond probe pulse. The time-evolution curve in the excited 2E(g)E(3/2) state at room temperature was found to be characterized by a sharp rise followed by a long decay, the long decay reflecting the depopulation of the zero and lower vibrational levels of the 2E(g)E(3/2) state via radiative transitions. An upper limit of 3.5 ps was estimated for the intra-2E(g)E(3/2)-state vibrational relaxation time.
Bedell, Alyse; Taft, Tiffany; Craven, Meredith R; Guadagnoli, Livia; Hirano, Ikuo; Gonsalves, Nirmala
2018-05-01
Eosinophilic gastritis (EG) and eosinophilic gastroenteritis (EGE) are chronic immune-mediated conditions of the digestive tract, which affect the stomach only, or the stomach and small intestines, respectively. Though these disorders are uncommon, they are being increasingly recognized and diagnosed. While health-related quality of life (HRQOL) has been evaluated in other eosinophilic gastrointestinal diseases, this study is the first to describe HRQOL impacts unique to EG/EGE. This study aims to qualitatively describe experiences of adults diagnosed with EG and EGE. We aim to identify impacts on HRQOL in this population in order to inform clinical care and assessment. Seven patients diagnosed with EG or EGE participated in semi-structured interviews assessing common domains of HRQOL. Four distinct themes emerged from qualitative analyses, which represent impacts to HRQOL: the psychological impact of the diagnosis, impact on social relationships, financial impact, and impact on the body. These generally improved over time and with effective treatment. This study demonstrated that patients with EG/EGE experience impacts to HRQOL, some of which differ from HRQOL of other eosinophilic gastrointestinal diseases. These results support the development of a disease-specific measure, or adaptation of an existing measure, to assess HRQOL in EG/EGE.
Data location-aware job scheduling in the grid. Application to the GridWay metascheduler
NASA Astrophysics Data System (ADS)
Delgado Peris, Antonio; Hernandez, Jose; Huedo, Eduardo; Llorente, Ignacio M.
2010-04-01
Grid infrastructures nowadays constitute the core of the computing facilities of the biggest LHC experiments. These experiments produce and manage petabytes of data per year and run thousands of computing jobs every day to process those data. It is the duty of metaschedulers to allocate the tasks to the most appropriate resources at the proper time. Our work reviews the policies that have been proposed for the scheduling of grid jobs in the context of very data-intensive applications. We indicate some of the practical problems that such models will face and describe what we consider the essential characteristics of an optimum scheduling system: it should aim to minimise not only job turnaround time but also data replication, offer the flexibility to support different virtual organisation requirements, and be capable of coordinating data placement and job allocation while keeping their execution decoupled. These ideas have guided the development of an enhanced prototype for GridWay, a general-purpose metascheduler that is part of the Globus Toolkit and a member of EGEE's RESPECT program. GridWay's current scheduling algorithm is unaware of data location. Our prototype makes it possible for job requests to express data needs not only as absolute requirements but also as functions for resource ranking. As our tests show, this makes it more flexible than currently used resource brokers for implementing different data-aware scheduling algorithms.
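The core idea of the prototype, using data location as a ranking function over candidate resources rather than only as a hard requirement, can be sketched roughly as follows (illustrative Python, not GridWay's actual implementation; all names are invented):

```python
def schedule(job_datasets, sites):
    """Pick the site holding the largest fraction of the job's input data.

    `sites` maps site name -> set of datasets already replicated there.
    Data presence is used purely as a ranking criterion here, so a site
    holding no data is still eligible (rank 0) rather than excluded;
    a hard requirement would filter such sites out before ranking.
    """
    def rank(site):
        held = sites[site] & set(job_datasets)
        return len(held) / len(job_datasets)
    return max(sites, key=rank)

sites = {
    "site-a": {"ds1"},
    "site-b": {"ds1", "ds2", "ds3"},
    "site-c": set(),
}
best = schedule(["ds1", "ds2"], sites)  # site-b already holds both inputs
```

Favoring sites that already hold the inputs reduces data replication while still allowing any site to run the job, which is exactly the trade-off the abstract describes.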
Grid enablement of OpenGeospatial Web Services: the G-OWS Working Group
NASA Astrophysics Data System (ADS)
Mazzetti, Paolo
2010-05-01
In recent decades two main paradigms for resource sharing emerged and reached maturity: the Web and the Grid. Both have proved suitable for building Distributed Computing Infrastructures (DCIs) supporting the coordinated sharing of resources (i.e. data, information, services, etc.) on the Internet. Grid and Web DCIs have much in common as a result of their underlying Internet technology (protocols, models and specifications). However, being based on different requirements and architectural approaches, they show some differences as well. The Web's "major goal was to be a shared information space through which people and machines could communicate" [Berners-Lee 1996]. The success of the Web, and its consequent pervasiveness, made it appealing for building specialized systems such as Spatial Data Infrastructures (SDIs). In these systems the introduction of Web-based geo-information technologies enables specialized services for geospatial data sharing and processing. The Grid was born to achieve "flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources" [Foster 2001]. It specifically focuses on large-scale resource sharing, innovative applications and, in some cases, a high-performance orientation. In the Earth and Space Sciences (ESS) most of the information handled is geo-referenced, since spatial and temporal meta-information is of primary importance in many application domains: Earth sciences, disaster management, environmental sciences, etc. On the other hand, several application areas need to run complex models requiring the large processing and storage capabilities that Grids are able to provide. Therefore the integration of geo-information and Grid technologies might be a valuable approach to enabling advanced ESS applications.
Currently both geo-information and Grid technologies have reached a high level of maturity, allowing such an integration to be built on existing solutions. More specifically, the Open Geospatial Consortium (OGC) Web Services (OWS) specifications play a fundamental role in geospatial information sharing (e.g. in the INSPIRE Implementing Rules, the GEOSS architecture, GMES Services, etc.). On the Grid side, the gLite middleware, developed in the European EGEE (Enabling Grids for E-sciencE) projects, is widely deployed in Europe and beyond, has proved highly scalable, and is one of the middlewares chosen for the future European Grid Infrastructure (EGI) initiative. Convergence between OWS and gLite technologies would therefore be desirable for seamless access to Grid capabilities through OWS-compliant systems. To achieve this harmonization, however, some obstacles must be overcome. First, a semantic mismatch must be addressed: gLite handles low-level (i.e. close to the machine) concepts such as "file", "data", "instrument" and "job", while geo-information services handle higher-level (closer to the human) concepts such as "coverage", "observation", "measurement" and "model". Second, an architectural mismatch must be addressed: OWS implements a Web service-oriented architecture which is stateless, synchronous and with no embedded security (which is delegated to other specifications), while gLite implements the Grid paradigm in an architecture which is stateful, asynchronous (though not fully event-based) and with strong embedded security (based on the VO paradigm). In recent years many initiatives and projects have worked out possible approaches for implementing Grid-enabled OWSs.
To mention just a few: (i) in 2007 the OGC signed a Memorandum of Understanding with the Open Grid Forum, "a community of users, developers, and vendors leading the global standardization effort for grid computing"; (ii) the OGC identified "WPS Profiles - Conflation; and Grid processing" as one of the tasks in the Geo Processing Workflow theme of OWS Phase 6 (OWS-6); (iii) several national, European and international projects investigated different aspects of this integration, developing demonstrators and proofs of concept. In this context, "gLite enablement of OpenGeospatial Web Services" (G-OWS) is an initiative started in 2008 by the European CYCLOPS, GENESI-DR and DORII project consortia in order to collect and coordinate experiences on the enablement of OWS on top of the gLite middleware [GOWS]. Currently G-OWS counts ten member organizations from Europe and beyond, with four European projects involved. It has broadened its scope to the development of Spatial Data and Information Infrastructures (SDI and SII) based on Grid/Cloud capacity in order to enable Earth science applications and tools. Its operational objectives are the following: i) to contribute to the OGC-OGF initiative; ii) to release a reference implementation as standard gLite APIs (under the gLite software license); iii) to release a reference model (including procedures and guidelines) for OWS Grid-ification, as far as gLite is concerned; iv) to foster and promote the formation of consortia for participation in projects and initiatives aimed at building Grid-enabled SDIs. To achieve these objectives G-OWS bases its activities on two main guiding principles: a) the adoption of a service-oriented architecture based on the information modelling approach, and b) standardization as a means of achieving interoperability (i.e. adoption of standards from ISO TC211, OGC OWS and OGF).
In its first year of activity G-OWS designed a general architectural framework stemming from the FP6 CYCLOPS studies and enriched by the outcomes of the other projects and initiatives involved (i.e. FP7 GENESI-DR, FP7 DORII, AIST GeoGrid, etc.). Some proofs of concept have been developed to demonstrate the flexibility and scalability of this architectural framework. The G-OWS WG developed implementations of a gLite-enabled Web Coverage Service (WCS) and Web Processing Service (WPS), and an implementation of Shibboleth authentication for gLite-enabled OWS in order to evaluate the possible integration of the Web and Grid security models. The presentation aims to communicate the G-OWS organization, activities, future plans and the means by which the ESSI community can get involved. References: [Berners-Lee 1996] T. Berners-Lee, "WWW: Past, present, and future", IEEE Computer, 29(10), Oct. 1996, pp. 69-77. [Foster 2001] I. Foster, C. Kesselman and S. Tuecke, "The Anatomy of the Grid", The International Journal of High Performance Computing Applications, 15(3):200-222, Fall 2001. [GOWS] G-OWS WG, https://www.g-ows.org/, accessed 15 January 2010.
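The semantic mismatch discussed above, between low-level gLite concepts ("file", "job") and high-level OWS concepts ("coverage", "process"), is essentially a vocabulary-translation problem. A deliberately simplified sketch of such a translation layer (all names hypothetical, not from the G-OWS codebase) might be:

```python
# Hypothetical mapping between OWS-level and gLite-level vocabularies.
OWS_TO_GLITE = {
    "coverage":    "file",  # a WCS coverage is ultimately stored as grid files
    "observation": "data",
    "process":     "job",   # a WPS process execution becomes a grid job
}

def to_glite_request(ows_concept, identifier):
    """Rewrite a high-level OWS request into the low-level gLite
    vocabulary, bridging the semantic mismatch described above."""
    try:
        low = OWS_TO_GLITE[ows_concept]
    except KeyError:
        raise ValueError(f"no gLite mapping for OWS concept {ows_concept!r}")
    return {"resource": low, "id": identifier}
```

A real enablement layer must of course also translate behavior (statefulness, asynchrony, security), but the concept mapping is the first step any such adapter performs.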
Distributed data analysis in ATLAS
NASA Astrophysics Data System (ADS)
Nilsson, Paul; Atlas Collaboration
2012-12-01
Data analysis using grid resources is one of the fundamental challenges to be addressed before the start of LHC data taking. The ATLAS detector will produce petabytes of data per year, and roughly one thousand users will need to run physics analyses on these data. Appropriate user interfaces and helper applications have been made available to ensure that the grid resources can be used without requiring expertise in grid technology. These tools enlarge the number of grid users from a few production administrators to potentially all participating physicists. ATLAS makes use of three grid infrastructures for distributed analysis: the EGEE sites, the Open Science Grid, and NorduGrid. These grids are managed by the gLite workload management system, the PanDA workload management system, and the ARC middleware, respectively; many sites can be accessed via both the gLite WMS and PanDA. Users can choose between two front-end tools to access the distributed resources. Ganga is a tool co-developed with LHCb to provide a common interface to the multitude of execution backends (local, batch, and grid). The PanDA workload management system provides a set of utilities called PanDA Client; with these tools users can easily submit Athena analysis jobs to the PanDA-managed resources. Distributed data are managed by Don Quixote 2 (DQ2), a system developed by ATLAS; DQ2 replicates datasets according to the data distribution policies and maintains a central catalog of file locations. The operation of the grid resources is continually monitored by the GangaRobot functional testing system, and infrequent site stress tests are performed using the HammerCloud system. In addition, the DAST shift team is a group of power users who take shifts to provide distributed analysis user support; this team has effectively relieved the developers of the support burden.
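The role DQ2 plays above, a central catalog mapping each dataset to the sites holding replicas and applying distribution policies, can be illustrated with a toy catalog (the structure, method names and policy are invented for illustration, not the DQ2 API):

```python
class ReplicaCatalog:
    """Toy central catalog: dataset name -> set of sites with a replica."""
    def __init__(self):
        self.replicas = {}

    def register(self, dataset, site):
        self.replicas.setdefault(dataset, set()).add(site)

    def locate(self, dataset):
        """Return the sites holding a replica (empty set if unknown)."""
        return self.replicas.get(dataset, set())

    def replicate(self, dataset, target, policy_min=2):
        """Add a replica at `target` only while the dataset is held by
        fewer sites than the (hypothetical) distribution policy requires."""
        if len(self.locate(dataset)) < policy_min:
            self.register(dataset, target)

catalog = ReplicaCatalog()
catalog.register("data12_8TeV.AOD", "CERN")
catalog.replicate("data12_8TeV.AOD", "BNL")      # below policy: replicated
catalog.replicate("data12_8TeV.AOD", "TRIUMF")   # policy satisfied: no-op
sites = catalog.locate("data12_8TeV.AOD")
```

Keeping location lookups in one central catalog is what lets schedulers and analysis front-ends find replicas without querying every site.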
Integrating Xgrid into the HENP distributed computing model
NASA Astrophysics Data System (ADS)
Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.
2008-07-01
Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by Unix-based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper details the steps that can be taken to make such a cluster a viable resource for HENP research computing. We further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortless for those users already using the tool for traditional Grid or local cluster job submission. We discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.
Grid today, clouds on the horizon
NASA Astrophysics Data System (ADS)
Shiers, Jamie
2009-04-01
By the time of CCP 2008, the largest scientific machine in the world - the Large Hadron Collider - had been cooled down as scheduled to its operational temperature of below 2 kelvin, and injection tests were starting. Collisions of proton beams at 5+5 TeV were expected within one to two months of the initial tests, with data taking at design energy (7+7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our "Higgs in one basket" - that of Grid computing [The Worldwide LHC Computing Grid (WLCG), in: Proceedings of the Conference on Computational Physics 2006 (CCP 2006), vol. 177, 2007, pp. 219-223]. After many years of preparation, 2008 saw a final "Common Computing Readiness Challenge" (CCRC'08) - aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change - as always - is on the horizon. The current funding model for Grids - which in Europe has been through three generations of EGEE projects, together with related projects in other parts of the world, including South America - is evolving towards a long-term, sustainable e-infrastructure, such as the European Grid Initiative (EGI) [The European Grid Initiative Design Study, website at http://web.eu-egi.eu/]. At the same time, potentially new paradigms, such as "Cloud Computing", are emerging. This paper summarizes the results of CCRC'08 and discusses the potential impact of future Grid funding on both regional and international application communities. It contrasts Grid and Cloud computing models from both technical and sociological points of view. Finally, it discusses the requirements of production application communities, in terms of stability and continuity in the medium to long term.
WLCG scale testing during CMS data challenges
NASA Astrophysics Data System (ADS)
Gutsche, O.; Hajdu, C.
2008-07-01
The CMS computing model to process and analyze LHC collision data follows a data-location-driven approach and uses the WLCG infrastructure to provide access to GRID resources. As a preparation for data taking, CMS tests its computing model during dedicated data challenges. An important part of the challenges is the test of user analysis, which poses a special challenge for the infrastructure with its random, distributed access patterns. The CMS Remote Analysis Builder (CRAB) handles all interactions with the WLCG infrastructure transparently for the user. During the 2006 challenge, CMS set its goal to test the infrastructure at a scale of 50,000 user jobs per day using CRAB. Both direct submissions by individual users and automated submissions by robots were used to achieve this goal. A report will be given about the outcome of the user analysis part of the challenge using both the EGEE and OSG parts of the WLCG. In particular, the difference in submission between the two GRID middlewares (resource broker vs. direct submission) will be discussed. Finally, an outlook for the 2007 data challenge is given.
Eluri, Swathi; Book, Wendy M; Kodroff, Ellyn; Strobel, Mary Jo; Gebhart, Jessica H; Jones, Patricia D; Menard-Katcher, Paul; Ferris, Maria E; Dellon, Evan S
2017-07-01
A growing population of adolescents/young adults with eosinophilic esophagitis (EoE) and eosinophilic gastroenteritis (EGE) will need to transition from pediatric to adult health providers. Measuring health care transition (HCT) readiness is critical, but no studies have evaluated this process in EoE/EGE. We determined the scope and predictors of HCT knowledge in patients and parents with EoE/EGE and measured HCT readiness in adolescents/young adults. We conducted an online survey of patients 13 years or older and parents of patients with EoE/EGE who were diagnosed when 25 years or younger. Parents answered questions regarding their children and their own knowledge of HCT. HCT readiness was assessed in adolescents/young adults aged 13 to 25 years with the Self-Management and Transition to Adulthood with Rx Questionnaire, a 6-domain self-report tool with a score range of 0 to 90. Four hundred fifty participants completed the survey: 205 patients and 245 parents. Included in the analysis (those diagnosed with EoE/EGE at age 25 years or younger) were 75 of 205 patients and the children of the 245 parent respondents. Overall, 78% (n = 52) of the patients and 76% (n = 187) of parents had no HCT knowledge. The mean HCT readiness score in adolescents/young adults (n = 50) was 30.4 ± 11.3, with higher scores in the domains of provider communication and engagement during appointments. The mean parent-reported (n = 123) score was 35.6 ± 9.7, with higher scores in medication management and disease knowledge. There was a significant deficit in HCT knowledge, and HCT readiness scores were lower than in other chronic health conditions. HCT preparation and readiness assessments should become a priority for adolescents/young adults with EoE/EGE and their parents.
ERIC Educational Resources Information Center
Akçay, Recep Cengiz; Üzüm, Püren Akçay
2016-01-01
The main purpose of this study is to define the perceptions and attitudes of university students about the freedom of claiming their educational rights. The research was designed within the framework of phenomenology, one of the qualitative research designs. The study was conducted with 10 students from Ege University in the academic year of…
Hsieh, G-Y; Wang, J-D; Cheng, T-J; Chen, P-C
2005-08-01
It has been shown that female workers exposed to ethylene glycol ethers (EGEs) in the semiconductor industry have higher risks of spontaneous abortion, subfertility, menstrual disturbances, and a prolonged waiting time to pregnancy. To examine whether EGEs or other chemicals are associated with long menstrual cycles in female workers in the semiconductor manufacturing industry. Cross-sectional questionnaire survey during the annual health examination at a wafer manufacturing company in Taiwan in 1997. A three-tiered exposure-assessment strategy was used to analyse the risk. A short menstrual cycle was defined as a cycle of less than 24 days and a long cycle as one of more than 35 days. There were 606 valid questionnaires from 473 workers in fabrication jobs and 133 in non-fabrication areas. Long menstrual cycles were associated with work in fabrication areas compared to non-fabrication areas. Using workers in non-fabrication areas as referents, workers in photolithography and diffusion areas had higher risks of long menstrual cycles. Workers exposed to EGEs and isopropanol, and to hydrofluoric acid, isopropanol, and phosphorous compounds, also showed increased risks of a long menstrual cycle. Exposure to multiple chemicals, including EGEs in photolithography, might be associated with long menstrual cycles, and may play an important role in a prolonged time to pregnancy in the wafer manufacturing industry; however, the cross-sectional (prevalence) design, possible exposure misclassification, and chance should be considered.
Eosinophilic Gastroenteritis as a Rare Cause of Recurrent Epigastric Pain
Safari, Mohammad Taghi; Shahrokh, Shabnam; Miri, Mohammad Bagher; Ehsani Ardakani, Mohammad Javad
2016-01-01
Eosinophilic gastroenteritis (EGE) is a rare inflammatory disorder of the gastrointestinal tract characterized by eosinophilic infiltration of the bowel wall. It can mimic many gastrointestinal disorders due to its wide spectrum of presentations. Diagnosis is mostly based on excluding other disorders and a high index of suspicion. Here we report the case of a 26-year-old man with a history of severe epigastric pain followed by nausea and vomiting beginning a few days before admission, with a final diagnosis of EGE. PMID:27274524
The Climate-G Portal: a Grid Enabled Scientific Gateway for Climate Change
NASA Astrophysics Data System (ADS)
Fiore, Sandro; Negro, Alessandro; Aloisio, Giovanni
2010-05-01
Grid portals are web gateways aiming at concealing the underlying infrastructure through pervasive, transparent, user-friendly, ubiquitous and seamless access to heterogeneous and geographically spread resources (i.e. storage, computational facilities, services, sensors, networks, databases). In short, they provide an enhanced problem-solving environment able to deal with modern, large scale scientific and engineering problems. Scientific gateways can revolutionize the way scientists and researchers organize and carry out their activities. Access to distributed resources, complex workflow capabilities, and community-oriented functionalities are just some of the features that can be provided by such a web-based environment. In the context of the EGEE NA4 Earth Science Cluster, Climate-G is a distributed testbed focusing on climate change research topics. The Euro-Mediterranean Center for Climate Change (CMCC) is actively participating in the testbed, providing the scientific gateway (the Climate-G Portal) used to access the entire infrastructure. The Climate-G Portal has to face important and critical challenges and to satisfy key requirements. In the following, the most relevant ones are presented and discussed. Transparency: the portal has to provide transparent access to the underlying infrastructure, shielding users from low-level details and the complexity of a distributed grid environment. Security: users must be authenticated and authorized on the portal to access and exploit portal functionalities. A wide set of roles is needed to clearly assign the proper one to each user. Access to the computational grid must be completely secured, since the target infrastructure to run jobs is a production grid environment. A security infrastructure (based on X509v3 digital certificates) is strongly needed. Pervasiveness and ubiquity: access to the system must be pervasive and ubiquitous. 
This follows naturally from the web-based approach. Usability and simplicity: the portal has to provide simple, high-level and user-friendly interfaces to ease access to and exploitation of the entire system. Coexistence of general purpose and domain oriented services: along with general purpose services (file transfer, job submission, etc.), the portal has to provide domain-based services and functionalities. Subsetting of data, visualization of 2D maps around a virtual globe, and delivery of maps through OGC compliant interfaces (i.e. Web Map Service - WMS) are just some examples. Since April 2009, about 70 users (85% from the climate change community) have been granted access to the portal. A key challenge of this work is the idea of providing users with an integrated working environment, that is, a place where scientists can find huge amounts of data, complete metadata support, a wide set of data access services, data visualization and analysis tools, easy access to the underlying grid infrastructure, and advanced monitoring interfaces.
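As an illustration of the OGC-compliant map delivery mentioned above, a WMS 1.3.0 GetMap request is just an HTTP query string. The sketch below builds one in Python; the endpoint URL and layer name are hypothetical placeholders, not the actual Climate-G services.

```python
# Sketch: building an OGC WMS 1.3.0 GetMap request URL.
# The endpoint and layer name below are hypothetical placeholders.
from urllib.parse import urlencode

def getmap_url(endpoint, layer, bbox, width=800, height=400,
               crs="CRS:84", fmt="image/png"):
    """Assemble a GetMap URL; bbox is (minx, miny, maxx, maxy) in the CRS."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return endpoint + "?" + urlencode(params)

url = getmap_url("https://climate-g.example.org/wms", "tas_anomaly",
                 (-20.0, 30.0, 40.0, 60.0))
print(url)
```

Note that CRS:84 is used here because its axis order is lon/lat; with EPSG:4326, WMS 1.3.0 expects lat/lon order in BBOX.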
Low-cost wireless voltage & current grid monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hines, Jacqueline
This report describes the development and demonstration of a novel low-cost wireless power distribution line monitoring system. This system measures voltage, current, and relative phase on power lines of up to 35 kV-class. The line units operate without any batteries, and without harvesting energy from the power line. Thus, data on grid condition is provided even in outage conditions, when line current is zero. This enhances worker safety by detecting the presence of voltage and current that may appear from stray sources on nominally isolated lines. Availability of low-cost power line monitoring systems will enable widespread monitoring of the distribution grid. Real-time data on local grid operating conditions will enable grid operators to optimize grid operation, implement grid automation, and understand the impact of solar and other distributed sources on grid stability. The latter will enable utilities to implement energy storage and control systems to enable greater penetration of solar into the grid.
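To illustrate the voltage, current, and relative-phase quantities the line units report (not the report's own signal-processing method), the sketch below estimates RMS amplitude and phase shift from synthetic sampled waveforms, locating the lag that maximizes the circular cross-correlation.

```python
# Illustrative sketch: RMS and relative phase from sampled waveforms.
# Synthetic 50 Hz signals; not the monitoring system's actual algorithm.
import math

def rms(samples):
    """Root-mean-square amplitude of a sampled waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def phase_shift_deg(v, i, fs, f0):
    """Estimate the phase of current relative to voltage (degrees) by
    finding the lag that maximizes the circular cross-correlation."""
    n = len(v)
    max_lag = int(fs / f0)  # search within one full cycle
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag // 2, max_lag // 2 + 1):
        corr = sum(v[k] * i[(k + lag) % n] for k in range(n))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return 360.0 * f0 * best_lag / fs

# Synthetic signals: 50 Hz, current lagging voltage by 30 degrees,
# sampled at 5 kHz over exactly 10 cycles.
fs, f0, n = 5000, 50.0, 1000
v = [325.0 * math.sin(2 * math.pi * f0 * k / fs) for k in range(n)]
i = [10.0 * math.sin(2 * math.pi * f0 * k / fs - math.pi / 6) for k in range(n)]

print(round(rms(v), 1))                       # 229.8 (i.e. 325 / sqrt(2))
print(round(phase_shift_deg(v, i, fs, f0), 1))  # ~30, quantized to 3.6° lag steps
```

The one-sample lag resolution (3.6° at these rates) is why the estimate is approximate; a real meter would interpolate or use a phasor method.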
Ocean data management in OMP Data Service
NASA Astrophysics Data System (ADS)
Fleury, Laurence; André, François; Belmahfoud, Nizar; Boichard, Jean-Luc; Brissebrat, Guillaume; Ferré, Hélène; Mière, Arnaud
2014-05-01
The Observatoire Midi-Pyrénées Data Service (SEDOO) is a development team dedicated to setting up environmental data management and dissemination applications, in the framework of intensive field campaigns and long term observation networks. SEDOO has developed some applications dealing with ocean data only, but also generic databases that store and distribute multidisciplinary datasets. SEDOO is in charge of the in situ data management and the data portal for international and multidisciplinary programmes as large as African Monsoon Multidisciplinary Analyses (AMMA) and Mediterranean Integrated STudies at Regional And Local Scales (MISTRALS). The AMMA and MISTRALS databases are distributed, and the data portals provide access to datasets managed by other data centres (IPSL, CORIOLIS...) through interoperability protocols (OPeNDAP, xml requests...). AMMA and MISTRALS metadata (data descriptions) are standardized and comply with international standards (ISO 19115-19139; INSPIRE European Directive; Global Change Master Directory Thesaurus). Most of the AMMA and MISTRALS in situ ocean data sets are homogenized and inserted in a relational database, in order to enable accurate data selection and download of different data sets in a shared format. Data selection criteria are location, period, physical property name, physical property range... The data extraction procedure includes output format selection among CSV, NetCDF, NASA Ames... The AMMA database - http://database.amma-international.org/ - contains field campaign observations in the Gulf of Guinea (EGEE 2005-2007) and the tropical Atlantic Ocean (AEROSE-II 2006...), as well as long term monitoring data (PIRATA, ARGO...). Operational analyses (MERCATOR) and satellite products (TMI, SSMI...) are managed by the IPSL data centre and can be accessed too. They have been projected onto regular latitude-longitude grids and converted into the NetCDF format. 
The MISTRALS data portal - http://mistrals.sedoo.fr/ - provides access to ocean datasets produced by the contributing programmes: Hydrological cycle in the Mediterranean eXperiment (HyMeX), Chemistry-Aerosol Mediterranean eXperiment (ChArMEx), Marine Mediterranean eXperiment (MERMeX)... The programmes include many field campaigns from 2011 to 2015, collecting general and specific properties. Long term monitoring networks, like the Mediterranean Ocean Observing System on Environment (MOOSE) or the Mediterranean Eurocentre for Underwater Sciences and Technologies (MEUST-SE), contribute to the MISTRALS data portal as well. Relevant model outputs and satellite products managed by external data centres (IPSL, ENEA...) can be accessed too. SEDOO manages the SSS (Sea Surface Salinity) national observation service data: http://sss.sedoo.fr/. SSS aims at collecting, validating, archiving and distributing in situ SSS measurements derived from Voluntary Observing Ship programs. The SSS data user interface enables users to build multi-criteria data requests and download the relevant datasets. SEDOO contributes to the SOLWARA project, which aims at understanding the oceanic circulation in the Coral Sea and the Solomon Sea and its role in both the climate system and ocean chemistry. The research programme includes in situ measurements, numerical modelling and compiled analyses of past data. The website http://thredds.sedoo.fr/solwara/ allows users to access, visualize and download Solwara gridded data and model simulations, using the associated Thredds services (OPeNDAP, NCSS and WMS). In order to improve user-friendliness, the SSS and SOLWARA web interfaces are JEE applications built with the GWT Framework, and share many modules.
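The multi-criteria selection described above (location, period, property name, property range) can be sketched with a minimal in-memory filter. The record layout and field values below are hypothetical illustrations, not SEDOO's actual database schema.

```python
# Minimal sketch of multi-criteria selection over in situ ocean records.
# The fields (lat, lon, date, property, value) are illustrative only and
# do not reflect the actual AMMA/MISTRALS relational schema.
from datetime import date

records = [
    {"lat": 2.0,  "lon": -10.0, "date": date(2006, 6, 1),
     "property": "sea_surface_salinity", "value": 35.2},
    {"lat": 5.5,  "lon": -3.0,  "date": date(2006, 7, 15),
     "property": "sea_surface_temperature", "value": 28.9},
    {"lat": -1.0, "lon": 6.0,   "date": date(2005, 6, 20),
     "property": "sea_surface_salinity", "value": 34.1},
]

def select(records, bbox=None, period=None, prop=None, vrange=None):
    """Filter records on location box, time period, property name,
    and property value range; any criterion may be omitted."""
    out = []
    for r in records:
        if bbox and not (bbox[0] <= r["lat"] <= bbox[1]
                         and bbox[2] <= r["lon"] <= bbox[3]):
            continue
        if period and not (period[0] <= r["date"] <= period[1]):
            continue
        if prop and r["property"] != prop:
            continue
        if vrange and not (vrange[0] <= r["value"] <= vrange[1]):
            continue
        out.append(r)
    return out

hits = select(records,
              bbox=(-5.0, 10.0, -15.0, 0.0),  # (lat_min, lat_max, lon_min, lon_max)
              period=(date(2006, 1, 1), date(2006, 12, 31)),
              prop="sea_surface_salinity")
print(len(hits))  # 1
```

A production service would push these criteria into SQL WHERE clauses and then serialize the result in the requested output format (CSV, NetCDF, NASA Ames...).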
None
2018-01-24
The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing - from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. 
Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World. Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs, starting last decade with Seti@Home. Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations, with spectacular results. The talk discusses Grid Computing and its evolution into Application Virtualization, and how this is key to the next-generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. 
His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN. 3. Opportunities for gLite in finance and related industries. Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship with the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. 
He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications' first conference in computational finance. 4. From Monte Carlo to Wall Street. Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank. High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. From an HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification, which is particularly compelling these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. 
The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and ETH Zurich. He holds a PhD in Mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team engineer and operate high performance cluster applications for computationally intensive problems in financial risk management.
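The talk's point that basic Monte Carlo simulation is embarrassingly parallel can be sketched in a few lines: the paths are split into chunks that share no state, so each chunk could run on a separate node of a distributed-memory cluster. The loss model below (independent obligors with a fixed default probability) is a deliberately simple illustration, not the credit risk framework described in the talk.

```python
# Toy sketch of the "embarrassingly parallel" structure of basic Monte
# Carlo. The default model (independent obligors, fixed probability) is
# illustrative only, far simpler than a real economic capital framework.
import random

N_OBLIGORS = 100   # positions in the portfolio
EXPOSURE = 1.0     # loss per defaulted position
P_DEFAULT = 0.02   # per-obligor default probability

def simulate_chunk(seed, n_paths):
    """Run n_paths independent scenarios and return their summed loss.
    Chunks share no state, so each could run on a separate worker node."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        defaults = sum(rng.random() < P_DEFAULT for _ in range(N_OBLIGORS))
        total += defaults * EXPOSURE
    return total

def expected_loss(n_paths=20_000, n_chunks=4):
    """Split the paths into independent chunks and merge the results.
    The chunks run sequentially here; on a distributed-memory cluster
    each (seed, per_chunk) pair would be farmed out to a worker and
    only the scalar totals sent back."""
    per_chunk = n_paths // n_chunks
    totals = [simulate_chunk(seed, per_chunk) for seed in range(n_chunks)]
    return sum(totals) / (per_chunk * n_chunks)

# The true expected loss is N_OBLIGORS * P_DEFAULT * EXPOSURE = 2.0
print(round(expected_loss(), 2))
```

Because only per-chunk seeds go out and scalar totals come back, communication cost is negligible, which is exactly what makes the basic scheme scale on clusters; adaptive variance reduction breaks this independence and is where the difficulties mentioned in the abstract arise.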
None
2018-06-20
The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing â from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A; panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. 
Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. 
His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN.3. Opportunities for gLite in finance and related industriesAdam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd.gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance communities compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. 
He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK University and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications first conference in computational Finance.4. From Monte Carlo to Wall Street Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. From a HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated date becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification which is particularly compelling in these days. While Monte Carlo simulation is a very versatile tool it is not always the preferred solution for the pricing of complex products like multi asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. 
The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations, such as those provided by FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and ETH Zurich. He holds a PhD in Mathematics from the University of Fribourg, Switzerland. After his PhD he worked for a large Swiss insurance company in the area of asset and liability management, and then continued his professional career in the consulting industry: at KPMG and Arthur Andersen he advised international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank, where he was assigned to develop and implement credit portfolio risk and economic capital methodologies and built up a competence center for high-performance and cluster computing. Currently, Daniel Egloff heads the Financial Computing unit in the ZKB Financial Engineering division. He and his team engineer and operate high-performance cluster applications for computationally intensive problems in financial risk management.
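The "embarrassingly parallel" structure of basic Monte Carlo mentioned in the abstract is easy to sketch: each worker simulates an independent batch of portfolio scenarios with its own random stream, and only the per-batch tallies are combined at the end. The toy portfolio below (independent obligors with a flat default probability and a default-count loss threshold) is purely an illustrative assumption, not the credit risk framework described in the talk.

```python
# Illustrative sketch only: a toy credit portfolio of independent obligors
# with a flat default probability. All names and parameters are assumptions.
import random

N_OBLIGORS = 100      # positions in the portfolio
P_DEFAULT = 0.02      # per-obligor default probability
LOSS_THRESHOLD = 5    # this many defaults or more counts as a tail event

def tail_events(seed, n_paths):
    """Simulate one independent batch of scenarios; return the tail-event count."""
    rng = random.Random(seed)   # private RNG, so batches share no state
    hits = 0
    for _ in range(n_paths):
        defaults = sum(rng.random() < P_DEFAULT for _ in range(N_OBLIGORS))
        if defaults >= LOSS_THRESHOLD:
            hits += 1
    return hits

def estimate_tail_prob(total_paths=20_000, n_batches=4):
    # Each (seed, batch-size) job is fully independent, which is what makes
    # the method embarrassingly parallel: on a cluster these batches would be
    # farmed out to separate nodes (or to ProcessPoolExecutor.map on one
    # machine). Here they are mapped sequentially for clarity.
    per_batch = total_paths // n_batches
    hits = sum(tail_events(seed, per_batch) for seed in range(n_batches))
    return hits / (per_batch * n_batches)

if __name__ == "__main__":
    print(f"P(defaults >= {LOSS_THRESHOLD}) ~ {estimate_tail_prob():.4f}")
```

Note that the abstract's caveat applies here too: once adaptive variance reduction makes later batches depend on earlier results, this clean batch independence is lost and the parallel decomposition becomes harder.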
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followedmore » by a Q&A; panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. 
Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. 
His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN.3. Opportunities for gLite in finance and related industriesAdam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd.gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance communities compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. 
He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university, and he maintains links with academia through lectures, research, and the validation and steering of postgraduate courses. He is a chartered mathematician and was conference chair of the Institute of Mathematics and its Applications' first conference in computational finance.
4. From Monte Carlo to Wall Street
Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank
High-performance computing techniques provide new means to solve computationally hard problems in the financial services industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. From an HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed-memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes so huge that incremental processing algorithms are indispensable. We discuss the business value of advanced credit risk quantification, which is particularly compelling these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for pricing complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework.
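The "embarrassingly parallel" character of basic Monte Carlo mentioned in the abstract can be sketched in a few lines: each worker draws an independent stream of scenarios, and only small per-worker summaries are combined at the end, so the work scales across cores or cluster nodes with almost no communication. This is a minimal illustration under invented assumptions (a toy lognormal loss model with made-up parameters), not the speaker's actual credit-risk framework.

```python
import numpy as np
from multiprocessing import Pool

def simulate_chunk(args):
    """Simulate one independent chunk of loss scenarios.

    Each worker gets its own seed, so chunks are statistically
    independent. The lognormal loss model is purely illustrative.
    Only small summary statistics are returned, not the raw draws.
    """
    seed, n_scenarios = args
    rng = np.random.default_rng(seed)
    losses = rng.lognormal(mean=0.0, sigma=1.0, size=n_scenarios)
    return losses.sum(), (losses ** 2).sum(), n_scenarios

def parallel_monte_carlo(n_scenarios=1_000_000, n_workers=4):
    """Split the simulation across workers and merge the summaries."""
    chunk = n_scenarios // n_workers
    jobs = [(seed, chunk) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        results = pool.map(simulate_chunk, jobs)
    total, total_sq, n = map(sum, zip(*results))
    mean = total / n
    # The standard error of the estimator shrinks as 1/sqrt(n).
    stderr = np.sqrt((total_sq / n - mean ** 2) / n)
    return mean, stderr

if __name__ == "__main__":
    mean, stderr = parallel_monte_carlo()
    print(f"estimated expected loss: {mean:.4f} +/- {stderr:.4f}")
```

The adaptive variance-reduction and incremental-processing difficulties the abstract mentions arise precisely because this simple merge-at-the-end structure breaks down once workers must share information mid-run.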
The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires fast BLAS level-3 implementations, such as those provided by specialized FPGA or GPU boards.
Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and ETH Zurich. He holds a PhD in Mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry: at KPMG and Arthur Andersen he advised international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank, where he was assigned to develop and implement credit portfolio risk and economic capital methodologies and built up a competence center for high-performance and cluster computing. Currently, Daniel Egloff heads the Financial Computing unit in the ZKB Financial Engineering division. He and his team engineer and operate high-performance cluster applications for computationally intensive problems in financial risk management.
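Why operator methods reduce to level-3 BLAS can be seen without any special hardware: propagating a batch of payoff vectors through time with a discretized pricing operator is just a sequence of dense matrix-matrix products (GEMM), which NumPy dispatches to whatever optimized BLAS is linked in. The sketch below uses an invented diffusion-style operator and random payoffs purely to expose the computational kernel; it is not the ZKB pricing framework.

```python
import numpy as np

n = 512            # spatial grid points (illustrative size)
n_steps = 100      # time steps
n_payoffs = 64     # batch of instruments priced together

# A symmetric "generator-like" operator on the grid; in a real
# framework this would discretize the pricing PDE.
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
A = 0.5 * (A + A.T) / n

# One-step propagator: a crude series expansion of exp(dt * A).
# Forming A @ A is itself a level-3 BLAS (GEMM) call.
dt = 0.01
P = np.eye(n) + dt * A + 0.5 * dt**2 * (A @ A)

# Batch of terminal payoff vectors, one column per instrument.
V = rng.standard_normal((n, n_payoffs))

# Time stepping: each step is one dense (n x n) @ (n x n_payoffs)
# multiply -- exactly the GEMM kernel that FPGA/GPU boards accelerate.
for _ in range(n_steps):
    V = P @ V
```

Because each step amortizes the operator over a whole batch of instruments, throughput is governed by GEMM performance, which is why the talk ties scalability to specialized BLAS-3 hardware.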
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followedmore » by a Q&A; panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. 
Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. 
His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN.3. Opportunities for gLite in finance and related industriesAdam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd.gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance communities compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. 
He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK University and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications first conference in computational Finance. 4. From Monte Carlo to Wall Street Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. From a HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated date becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification which is particularly compelling in these days. While Monte Carlo simulation is a very versatile tool it is not always the preferred solution for the pricing of complex products like multi asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. 
The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team is engineering and operating high performance cluster applications for computationally intensive problems in financial risk management.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followedmore » by a Q&A; panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. 
Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. 
His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN.3. Opportunities for gLite in finance and related industriesAdam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd.gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance communities compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. 
He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK University and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications first conference in computational Finance.4. From Monte Carlo to Wall Street Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. From a HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated date becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification which is particularly compelling in these days. While Monte Carlo simulation is a very versatile tool it is not always the preferred solution for the pricing of complex products like multi asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. 
The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team is engineering and operating high performance cluster applications for computationally intensive problems in financial risk management.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followedmore » by a Q&A; panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. 
Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. 
His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN.3. Opportunities for gLite in finance and related industries Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd.gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance communities compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. 
He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK University and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications first conference in computational Finance.4. From Monte Carlo to Wall Street Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. From a HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated date becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification which is particularly compelling in these days. While Monte Carlo simulation is a very versatile tool it is not always the preferred solution for the pricing of complex products like multi asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. 
The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team is engineering and operating high performance cluster applications for computationally intensive problems in financial risk management.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followedmore » by a Q&A; panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. 
Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with Seti@Home. Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. 
His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN. 3. Opportunities for gLite in Finance and Related Industries. Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry: in its current form, gLite would be a business disabler, and other middleware tools solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open-source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship with the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third-party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial-markets professional services. 
He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university, and he maintains links with academia through lectures, research, and the validation and steering of postgraduate courses. He is a chartered mathematician and was conference chair of the Institute of Mathematics and its Applications' first conference in computational finance. 4. From Monte Carlo to Wall Street. Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank. High-performance computing techniques provide new means to solve computationally hard problems in the financial services industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. From an HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed-memory clusters. Additional difficulties arise for adaptive variance-reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes so huge that incremental processing algorithms are indispensable. We discuss the business value of advanced credit risk quantification, which is particularly compelling these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. 
The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and ETH Zurich. He holds a PhD in mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management, and he continued his professional career in the consulting industry: at KPMG and Arthur Andersen he consulted for international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank, where he was assigned to develop and implement credit portfolio risk and economic capital methodologies, and he built up a competence center for high-performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division, where he and his team engineer and operate high-performance cluster applications for computationally intensive problems in financial risk management.
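The talk's point that basic Monte Carlo is embarrassingly parallel can be illustrated with a minimal sketch: independent scenario chunks with per-chunk seeds, so the same code runs serially or fanned out across workers. The three-obligor portfolio, exposures and default probabilities below are invented for illustration and are not from the talk.

```python
import random
from statistics import mean

def simulate_chunk(args):
    """Simulate one independent chunk of portfolio-loss scenarios.
    Per-chunk seeding keeps results reproducible under any scheduling."""
    seed, n_scenarios, exposures, default_probs = args
    rng = random.Random(seed)
    losses = []
    for _ in range(n_scenarios):
        # Loss in a scenario = sum of exposures of obligors that default.
        loss = sum(e for e, p in zip(exposures, default_probs) if rng.random() < p)
        losses.append(loss)
    return losses

def expected_loss(exposures, default_probs, n_scenarios=10_000, n_chunks=4):
    chunks = [(seed, n_scenarios // n_chunks, exposures, default_probs)
              for seed in range(n_chunks)]
    # Chunks are fully independent, so plain map() can be swapped for
    # multiprocessing.Pool.map() on a cluster node without code changes.
    all_losses = [l for chunk in map(simulate_chunk, chunks) for l in chunk]
    return mean(all_losses)

# Invented 3-obligor portfolio: exposures and one-year default probabilities.
portfolio = [100.0, 250.0, 50.0]
pd_1y = [0.02, 0.01, 0.05]
print(round(expected_loss(portfolio, pd_1y), 2))  # close to the analytic 7.0
```

The analytic expected loss is 100·0.02 + 250·0.01 + 50·0.05 = 7.0, so the simulated mean should land near it.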
e-Science on Earthquake Disaster Mitigation by EUAsiaGrid
NASA Astrophysics Data System (ADS)
Yen, Eric; Lin, Simon; Chen, Hsin-Yen; Chao, Li; Huang, Bor-Shoh; Liang, Wen-Tzong
2010-05-01
Although earthquakes cannot yet be predicted, with the aid of accurate seismic wave propagation analysis we can simulate the potential hazards at all distances from possible fault sources by understanding the source rupture process during large earthquakes. By integrating a strong ground-motion sensor network, an earthquake data center and seismic wave propagation analysis over the gLite e-Science infrastructure, we can gain much better knowledge of the impact and vulnerability associated with potential earthquake hazards. This application also demonstrates the e-Science way to investigate unknown earth structure. Regional integration of earthquake sensor networks can aid fast event reporting and accurate event data collection; federation of earthquake data centers entails the consolidation and sharing of seismology and geology knowledge; and building capability in seismic wave propagation analysis makes potential hazard impacts predictable. With the gLite infrastructure and the EUAsiaGrid collaboration framework, earth scientists from Taiwan, Vietnam, the Philippines and Thailand are working together to alleviate potential seismic threats by making use of Grid technologies, and also to support seismology research through e-Science. A cross-continental e-infrastructure, based on EGEE and EUAsiaGrid, has been established for seismic wave forward simulation and risk estimation. Both the computing challenge of seismic wave analysis among five European and Asian partners and the data challenge of data center federation have been exercised and verified. A Seismogram-on-Demand service has also been developed for the automatic generation of seismograms at any sensor point for a specific epicenter. To ease access to all these services based on users' workflows, and to retain maximal flexibility, a Seismology Science Gateway integrating data, computation, workflows, services and user communities will be implemented based on typical use cases. 
In the future, extending the earthquake wave propagation work to tsunami mitigation will be feasible once user community support is in place.
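The seismic wave forward simulation mentioned above ultimately amounts to numerically solving a wave equation. The following is a minimal, illustrative 1D finite-difference sketch (invented grid spacing, velocity and source; not the EUAsiaGrid production code):

```python
import math

def propagate_1d(nx=200, nt=300, dx=10.0, dt=0.001, velocity=3000.0, src=100):
    """Second-order finite differences for the 1D acoustic wave equation
    u_tt = c^2 u_xx, with a point source and fixed (zero) boundaries."""
    c2 = (velocity * dt / dx) ** 2      # squared Courant number; must be <= 1
    assert c2 <= 1.0, "CFL stability condition violated"
    prev = [0.0] * nx
    curr = [0.0] * nx
    for step in range(nt):
        nxt = [0.0] * nx
        for i in range(1, nx - 1):
            # Leapfrog update: central differences in both time and space.
            nxt[i] = (2 * curr[i] - prev[i]
                      + c2 * (curr[i + 1] - 2 * curr[i] + curr[i - 1]))
        # Gaussian pulse injected at the (invented) source grid point.
        t = step * dt
        nxt[src] += math.exp(-((t - 0.05) / 0.01) ** 2)
        prev, curr = curr, nxt
    return curr

wave = propagate_1d()
print(max(abs(v) for v in wave) > 0)  # energy has spread out from the source
```

Production seismic codes use 3D heterogeneous velocity models and absorbing boundaries; only the core update stencil is shown here.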
NASA Technical Reports Server (NTRS)
Shyam, Vikram
2010-01-01
A preprocessor for the Computational Fluid Dynamics (CFD) code TURBO has been developed and tested. The preprocessor converts grids produced by GridPro (Program Development Company, PDC) into a format readable by TURBO and generates the necessary input files associated with the grid. The preprocessor also generates information that enables the user to decide how to allocate the computational load in a multiple-block-per-processor scenario.
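One simple way to make the block-to-processor allocation decision described above is a greedy longest-processing-time heuristic; this is a hypothetical illustration using invented cell counts, not the actual TURBO/GridPro logic:

```python
import heapq

def assign_blocks(block_sizes, n_procs):
    """Greedy longest-processing-time assignment of grid blocks to processors.
    Returns a list of block-index lists, one per processor."""
    heap = [(0, p) for p in range(n_procs)]   # (current load, processor id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_procs)]
    # Placing the largest blocks first gives the classic LPT balance guarantee.
    for idx in sorted(range(len(block_sizes)), key=lambda i: -block_sizes[i]):
        load, p = heapq.heappop(heap)         # least-loaded processor so far
        assignment[p].append(idx)
        heapq.heappush(heap, (load + block_sizes[idx], p))
    return assignment

# Invented cell counts for six grid blocks, spread over two processors.
cells = [120_000, 80_000, 60_000, 50_000, 30_000, 10_000]
plan = assign_blocks(cells, 2)
loads = [sum(cells[i] for i in procs) for procs in plan]
print(loads)  # → [180000, 170000]
```

With per-block cell counts as a proxy for work, the two processors end up within about 6% of each other, which is the kind of information the preprocessor's output lets the user reason about.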
gProcess and ESIP Platforms for Satellite Imagery Processing over the Grid
NASA Astrophysics Data System (ADS)
Bacu, Victor; Gorgan, Dorian; Rodila, Denisa; Pop, Florin; Neagu, Gabriel; Petcu, Dana
2010-05-01
The Environment oriented Satellite Data Processing Platform (ESIP) is developed through the SEE-GRID-SCI (SEE-GRID eInfrastructure for regional eScience) project, co-funded by the European Commission through FP7 [1]. The gProcess platform [2] is a set of tools and services supporting the development and execution over the Grid of workflow-based processing, particularly satellite imagery processing. ESIP [3], [4] is built on top of the gProcess platform by adding a set of satellite image processing software modules and meteorological algorithms. Satellite images can reveal and supply important information on earth surface parameters, climate data, pollution levels and weather conditions that can be used in different research areas. Generally, the processing algorithms for satellite images can be decomposed into a set of modules that form a graph representation of the processing workflow. Two types of workflows can be defined in the gProcess platform: the abstract workflow (PDG - Process Description Graph), in which the user defines the algorithm conceptually, and the instantiated workflow (iPDG - instantiated PDG), which is the mapping of the PDG pattern onto particular satellite images and meteorological data [5]. The gProcess platform allows the definition of complex workflows by combining data resources, operators, services and sub-graphs. The gProcess platform is developed for the gLite middleware that is available in the EGEE and SEE-GRID infrastructures [6]. gProcess exposes its functionality through web services [7]. The Editor Web Service retrieves information on the available resources that are used to develop complex workflows (available operators, sub-graphs, services, supported resources, etc.). The Manager Web Service deals with resource management (uploading new resources such as workflows, operators, services, data, etc.) and also retrieves information on workflows. 
The Executor Web Service manages the execution of the instantiated workflows on the Grid infrastructure; in addition, this web service monitors the execution and generates statistical data that are important for evaluating performance and optimizing execution. The Viewer Web Service allows access to input and output data. To prove and validate the utility of the gProcess and ESIP platforms, the GreenView and GreenLand applications were developed. The GreenView functionality includes the refinement of meteorological data such as temperature, and the calibration of satellite images based on field measurements. The GreenLand application performs the classification of satellite images by using a set of vegetation indices. The gProcess and ESIP platforms are also used in the GiSHEO project [8] to support the processing of Earth Observation data over the Grid in eGLE (the GiSHEO eLearning Environment). Performance-assessment experiments revealed that workflow-based execution can improve the execution time of a satellite image processing algorithm [9]. Executing every workflow node on a different machine is not always efficient: some nodes take much longer than others, and these slow nodes dominate the total execution time, so the workflow nodes must be balanced correctly. Based on an optimization strategy, the workflow nodes can be grouped horizontally, vertically, or in a hybrid approach; the grouped operators are then executed on one machine, which also lowers the data transfer between workflow nodes. The dynamic nature of the Grid infrastructure makes it more exposed to the occurrence of failures. These failures can occur at worker nodes, in service availability, at storage elements, and so on. 
Currently gProcess supports some basic error-prevention and error-management solutions; more advanced solutions will be integrated into the gProcess platform in the future. 
References: 
[1] SEE-GRID-SCI Project, http://www.see-grid-sci.eu/ 
[2] Bacu V., Stefanut T., Rodila D., Gorgan D., Process Description Graph Composition by gProcess Platform. HiPerGRID - 3rd International Workshop on High Performance Grid Middleware, 28 May 2009, Bucharest. Proceedings of the CSCS-17 Conference, Vol. 2, ISSN 2066-4451, pp. 423-430 (2009). 
[3] ESIP Platform, http://wiki.egee-see.org/index.php/JRA1_Commonalities 
[4] Gorgan D., Bacu V., Rodila D., Pop Fl., Petcu D., Experiments on ESIP - Environment oriented Satellite Data Processing Platform. SEE-GRID-SCI User Forum, 9-10 Dec 2009, Bogazici University, Istanbul, Turkey, ISBN 978-975-403-510-0, pp. 157-166 (2009). 
[5] Radu A., Bacu V., Gorgan D., Diagrammatic Description of Satellite Image Processing Workflow. Workshop on Grid Computing Applications Development (GridCAD) at the SYNASC Symposium, 28 September 2007, Timisoara, IEEE Computer Press, ISBN 0-7695-3078-8, pp. 341-348 (2007). 
[6] Gorgan D., Bacu V., Stefanut T., Rodila D., Mihon D., Grid based Satellite Image Processing Platform for Earth Observation Applications Development. IDAACS'2009 - IEEE Fifth International Workshop on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, 21-23 September 2009, Cosenza, Italy, IEEE Computer Press, pp. 247-252 (2009). 
[7] Rodila D., Bacu V., Gorgan D., Integration of Satellite Image Operators as Workflows in the gProcess Application. Proceedings of ICCP 2009 - IEEE 5th International Conference on Intelligent Computer Communication and Processing, 27-29 Aug 2009, Cluj-Napoca, ISBN 978-1-4244-5007-7, pp. 355-358 (2009). 
[8] GiSHEO consortium, project site, http://gisheo.info.uvt.ro 
[9] Bacu V., Gorgan D., Graph Based Evaluation of Satellite Imagery Processing over Grid. ISPDC 2008 - 7th International Symposium on Parallel and Distributed Computing, 1-5 July 2008, Krakow, Poland, IEEE Computer Society, ISBN 978-0-7695-3472-5, pp. 147-154 (2008).
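The PDG/iPDG distinction and the node-grouping optimization described above can be sketched in a few lines; the operator names, the linear workflow and the grouping rule below are invented simplifications, not the gProcess implementation:

```python
def instantiate(pdg, bindings):
    """Map an abstract PDG (ordered operator graph) onto concrete inputs,
    yielding an iPDG-like list of executable (operator, input) steps."""
    return [(op, bindings[op]) for op in pdg]

def group_vertically(pdg, deps):
    """Group chains of single-dependency nodes so each chain runs on one
    machine, reducing inter-node data transfer (one simple strategy)."""
    groups, used = [], set()
    for op in pdg:
        if op in used:
            continue
        chain = [op]
        # Extend the chain while exactly one op depends only on the last one.
        nxt = [o for o in pdg if deps.get(o) == [chain[-1]]]
        while len(nxt) == 1:
            chain.append(nxt[0])
            nxt = [o for o in pdg if deps.get(o) == [chain[-1]]]
        used.update(chain)
        groups.append(chain)
    return groups

# Invented linear satellite-processing workflow: calibrate -> ndvi -> classify
pdg = ["calibrate", "ndvi", "classify"]
deps = {"ndvi": ["calibrate"], "classify": ["ndvi"]}
print(group_vertically(pdg, deps))  # → [['calibrate', 'ndvi', 'classify']]
```

Here the whole linear chain collapses into one group, so all three operators would run on one machine with no intermediate transfer, which is exactly the effect vertical grouping aims for.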
Semantics-enabled service discovery framework in the SIMDAT pharma grid.
Qu, Cangtao; Zimmermann, Falk; Kumpf, Kai; Kamuzinzi, Richard; Ledent, Valérie; Herzog, Robert
2008-03-01
We present the design and implementation of a semantics-enabled service discovery framework in the SIMDAT (data Grids for process and product development using numerical simulation and knowledge discovery) Pharma Grid, an industry-oriented Grid environment for integrating thousands of Grid-enabled biological data services and analysis services. The framework consists of three major components: a biological domain ontology based on the Web Ontology Language (OWL) with description logic (OWL-DL), service annotation based on the OWL web service ontology (OWL-S), and a semantic matchmaker based on ontology reasoning. Built upon the framework, workflow technologies are extensively exploited in SIMDAT to assist biologists in (semi-)automatically performing in silico experiments. We present a typical usage scenario through the case study of a biological workflow: IXodus.
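The ontology-based matchmaking can be approximated, very loosely, by a subsumption check over a class hierarchy; the toy taxonomy and service names below are invented stand-ins for the OWL-DL ontology and OWL-S annotations:

```python
# Toy class hierarchy: child -> parent (stand-in for the OWL-DL ontology).
IS_A = {
    "BlastService": "SequenceAnalysis",
    "ClustalService": "SequenceAnalysis",
    "SequenceAnalysis": "BioAnalysis",
}

def subsumes(general, specific):
    """True if `specific` is the same class as, or a descendant of, `general`."""
    while specific is not None:
        if specific == general:
            return True
        specific = IS_A.get(specific)   # walk up the hierarchy
    return False

def match_services(requested_class, advertised):
    """Return advertised services whose annotated class the request subsumes,
    mimicking what a reasoning-based matchmaker decides via subsumption."""
    return [name for name, cls in advertised if subsumes(requested_class, cls)]

ads = [("blast-ebi", "BlastService"), ("clustal-w", "ClustalService"),
       ("image-tool", "ImageAnalysis")]
print(match_services("SequenceAnalysis", ads))  # → ['blast-ebi', 'clustal-w']
```

A real OWL-DL reasoner also handles property restrictions and multiple inheritance; this sketch captures only the subsumption idea at the core of semantic matchmaking.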
The use of electronic games in therapy: a review with clinical implications.
Horne-Moyer, H Lynn; Moyer, Brian H; Messer, Drew C; Messer, Elizabeth S
2014-12-01
Therapists and patients enjoy and benefit from interventions that use electronic games (EG) in health care and mental health settings, with a variety of diagnoses and therapeutic goals. We reviewed the use of electronic games designed specifically for a therapeutic purpose, electronic games for psychotherapy (EGP), also called serious games, and commercially produced games used as an adjunct to psychotherapy, electronic games for entertainment (EGE). Recent research on the benefits of EG in rehabilitation settings, EGP, and EGE indicates that electronic methods are often equivalent to more traditional treatments and may be more enjoyable or acceptable, at least to some consumers. Methodological concerns include the lack of randomized controlled trials (RCT) for many applications. Suggestions are offered for using EG in therapeutic practice.
Enabling campus grids with open science grid technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weitzel, Derek; Bockelman, Brian; Swanson, David
2011-01-01
The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.
Modernizing Electricity Delivery
Explains how modern grid, or smart grid, investments can enable grid operators to respond faster to changes in grid conditions and allow for two-way communication between utilities and electricity end-users.
GLIDE: a grid-based light-weight infrastructure for data-intensive environments
NASA Technical Reports Server (NTRS)
Mattmann, Chris A.; Malek, Sam; Beckman, Nels; Mikic-Rakic, Marija; Medvidovic, Nenad; Chrichton, Daniel J.
2005-01-01
The promise of the grid is that it will enable public access and sharing of immense amounts of computational and data resources among dynamic coalitions of individuals and institutions. However, the current grid solutions make several limiting assumptions that curtail their widespread adoption. To address these limitations, we present GLIDE, a prototype light-weight, data-intensive middleware infrastructure that enables access to the robust data and computational power of the grid on DREAM platforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-03-28
GridAPPS-D is an open-source, open-architecture, standards-based platform for the development of advanced electric power system planning and operations applications. GridAPPS-D provides a documented data abstraction for the application developer, enabling the creation of applications that can run on any compliant system or platform. This enables the development of applications that are platform- and vendor-independent, and of data-rich, data-driven applications based on the deployment of smart grid devices and systems.
NASA Astrophysics Data System (ADS)
Bower, Ward
2011-09-01
An overview is provided of the activities and progress made during the US DOE Solar Energy Grid Integration Systems (SEGIS) solicitation, which sought to advance grid integration while maintaining reliability and economics. The SEGIS R&D opened pathways for interconnecting PV systems to intelligent utility grids and micro-grids of the future. In addition to new capabilities, the designs offer "value added" features. The new hardware designs resulted in smaller, less material-intensive products that are being viewed by utilities as enabling dispatchable generation and not just unpredictable negative loads. The technical solutions enable "advanced integrated system" concepts and "smart grid" processes to move forward in a faster and more focused manner. The advanced integrated inverters/controllers can now incorporate energy management functionality, intelligent electrical grid support features and a multiplicity of communication technologies. Portals for energy flow and two-way communications have been implemented. SEGIS hardware was developed for the utility grid of today, which was designed for one-way power flow, for intermediate grid scenarios, AND for the grid of tomorrow, which will seamlessly accommodate managed two-way power flows as required by large-scale deployment of solar and other distributed generation. The SEGIS hardware and control developed for today meets existing standards and codes AND provides for future connections to a "smart grid" mode that enables utility control and optimized performance.
Grid computing in large pharmaceutical molecular modeling.
Claus, Brian L; Johnson, Stephen R
2008-07-01
Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.
Radiosurgery planning supported by the GEMSS grid.
Fenner, J W; Mehrem, R A; Ganesan, V; Riley, S; Middleton, S E; Potter, K; Walton, L
2005-01-01
GEMSS (Grid Enabled Medical Simulation Services IST-2001-37153) is an EU project funded to provide a test bed for Grid-enabled health applications. Its purpose is evaluation of Grid computing in the health sector. The health context imposes particular constraints on Grid infrastructure design, and it is this that has driven the feature set of the middleware. In addition to security, the time critical nature of health applications is accommodated by a Quality of Service component, and support for a well defined business model is also included. This paper documents experience of a GEMSS compliant radiosurgery application running within the Medical Physics department at the Royal Hallamshire Hospital in the UK. An outline of the Grid-enabled RAPT radiosurgery application is presented and preliminary experience of its use in the hospital environment is reported. The performance of the software is compared against GammaPlan (an industry standard) and advantages/disadvantages are highlighted. The RAPT software relies on features of the GEMSS middleware that are integral to the success of this application, and together they provide a glimpse of an enabling technology that can impact upon patient management in the 21st century.
Development of equations for predicting methane emissions from ruminants.
Ramin, M; Huhtanen, P
2013-04-01
Ruminants contribute to global warming by releasing methane (CH4) gas through enteric fermentation. This has increased interest among animal scientists in developing and improving equations predicting CH4 production. The objectives of the current study were to collect a data set from respiration studies and to evaluate the effects of dietary and animal factors on CH4 production from diets that can safely be fed to dairy cows, using a mixed model regression analysis. Therefore, diets containing more than 75% concentrate on a dry matter (DM) basis were excluded from the analysis. The final data set included a total of 298 treatment means from 52 published papers with 207 cattle and 91 sheep diets. Dry matter intake per kilogram of body weight (DMIBW), organic matter digestibility estimated at the maintenance level of feeding (OMDm), and dietary concentrations of neutral detergent fiber (NDF), nonfiber carbohydrates (NFC), and ether extract (EE) were the variables of the best-fit equation predicting CH4 energy (CH4-E) as a proportion of gross energy intake (GE): CH4-E/GE (kJ/MJ) = -0.6 (±12.76) - 0.70 (±0.072) × DMIBW (g/kg) + 0.076 (±0.0118) × OMDm (g/kg) - 0.13 (±0.020) × EE (g/kg of DM) + 0.046 (±0.0097) × NDF (g/kg of DM) + 0.044 (±0.0094) × NFC (g/kg of DM), resulting in the lowest root mean square error adjusted for random study effect (adj. RMSE = 3.26 kJ/MJ). Total CH4 production (L/d) in the cattle data set was closely related to DM intake. However, further inclusion of other variables improved the model: CH4 (L/d) = -64.0 (±35.0) + 26.0 (±1.02) × DM intake (kg/d) - 0.61 (±0.132) × DMI^2 (centered) + 0.25 (±0.051) × OMDm (g/kg) - 66.4 (±8.22) × EE intake (kg of DM/d) - 45.0 (±23.50) × NFC/(NDF + NFC), with adj. RMSE of 21.1 L/d. Cross-validation of the CH4-E/GE equation [observed CH4-E/GE = 0.96 (±0.103) × predicted CH4-E/GE + 2.3 (±7.05); R^2 = 0.85, adj. RMSE = 3.38 kJ/MJ] indicated that differences in CH4 production between the diets could be predicted accurately.
We conclude that feed intake is the main determinant of total CH4 production and that CH4-E/GE is negatively related to feeding level and dietary fat concentration and positively to diet digestibility, whereas dietary carbohydrate composition has only minor effects. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
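The best-fit CH4-E/GE regression quoted in the abstract can be applied directly once the dietary inputs are known. A minimal sketch in Python using the published coefficients (the function name and the example input values are illustrative, not from the paper):

```python
def ch4_energy_fraction(dmi_bw, omd_m, ee, ndf, nfc):
    """Predict CH4 energy as a proportion of gross energy intake (kJ/MJ)
    using the best-fit equation reported in the abstract.

    dmi_bw : dry matter intake per kg of body weight, DMIBW (g/kg)
    omd_m  : organic matter digestibility at maintenance, OMDm (g/kg)
    ee     : ether extract (g/kg of DM)
    ndf    : neutral detergent fiber (g/kg of DM)
    nfc    : nonfiber carbohydrates (g/kg of DM)
    """
    return (-0.6
            - 0.70 * dmi_bw
            + 0.076 * omd_m
            - 0.13 * ee
            + 0.046 * ndf
            + 0.044 * nfc)

# Plausible (made-up) dairy cow inputs: 21 kg/d DMI at 600 kg BW -> 35 g/kg
print(ch4_energy_fraction(dmi_bw=35, omd_m=720, ee=40, ndf=350, nfc=400))
```

With these inputs the equation predicts roughly 58 kJ of CH4 energy per MJ of gross energy intake, i.e. about 6% of GE, which is in the typical range for dairy cows.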
Emissions & Generation Resource Integrated Database (eGRID), eGRID2002 (with years 1996 - 2000 data)
The Emissions & Generation Resource Integrated Database (eGRID) is a comprehensive source of data on the environmental characteristics of almost all electric power generated in the United States. These environmental characteristics include air emissions for nitrogen oxides, sulfur dioxide, carbon dioxide, methane, nitrous oxide, and mercury; emissions rates; net generation; resource mix; and many other attributes. eGRID2002 (years 1996 through 2000 data) contains 16 Excel spreadsheets and the Technical Support Document, as well as the eGRID Data Browser, User's Manual, and Readme file. Archived eGRID data can be viewed as spreadsheets or by using the eGRID Data Browser. The eGRID spreadsheets can be manipulated by data users and enables users to view all the data underlying eGRID. The eGRID Data Browser enables users to view key data using powerful search features. Note that the eGRID Data Browser will not run on a Mac-based machine without Windows emulation.
Kölling, Katharina; George, Gavin M; Künzli, Roland; Flütsch, Patrick; Zeeman, Samuel C
2015-01-01
Photosynthetic assimilation of carbon is a defining feature of the plant kingdom. The fixation of large amounts of carbon dioxide supports the synthesis of carbohydrates, which make up the bulk of plant biomass. Exact measurements of carbon assimilation rates are therefore crucial due to their impact on the plant's metabolism, growth and reproductive success. Commercially available single-leaf cuvettes allow the detailed analysis of many photosynthetic parameters, including gas exchange, of a selected leaf area. However, these cuvettes can be difficult to use with small herbaceous plants such as Arabidopsis thaliana or plants having delicate or textured leaves. Furthermore, data from single leaves can be difficult to scale up for a plant shoot with a complex architecture and tissues in different physiological states. Therefore, we constructed a versatile system, EGES-1, to simultaneously measure gas exchange in the whole shoots of multiple individual plants. Our system was designed to record data continuously over several days. The EGES-1 system yielded comparable measurements for eight plants for up to 6 days under stable, physiologically realistic conditions. The chamber seals have negligible permeability to carbon dioxide and the system is designed to detect any bulk-flow air leaks. We show that the system can be used to monitor plant responses to changing environmental conditions, such as changes in illumination or stress treatments, and to compare plants with phenotypically severe mutations. By incorporating interchangeable lids, the system can be used to measure photosynthetic gas exchange in several genera, such as Arabidopsis, Nicotiana, Pisum, Lotus and Mesembryanthemum. EGES-1 can be introduced into a variety of growth facilities to measure gas exchange in the shoots of diverse plant species grown in different growth media.
It is ideal for comparing photosynthetic carbon assimilation of wild-type and mutant plants and/or plants undergoing selected experimental treatments. The system can deliver valuable data for whole-plant growth studies and help in understanding mutant phenotypes. Overall, EGES-1 is complementary to readily available single-leaf systems that focus more on the photosynthetic process within the leaf lamina.
NASA Astrophysics Data System (ADS)
Huang, Bor-Shouh; Liu, Chun-Chi; Yen, Eric; Liang, Wen-Tzong; Lin, Simon C.; Huang, Win-Gee; Lee, Shiann-Jong; Chen, Hsin-Yen
Following the experience of the 2004 giant Sumatra earthquake, seismic and tsunami hazards have been considered important issues in the South China Sea and its surrounding region, and have attracted the interest of many seismologists. More than 25 broadband seismic instruments are currently operated by the Institute of Earth Sciences, Academia Sinica in northern Vietnam to study the geodynamic evolution of the Red River fracture zone; these have recently been redistributed to southern Vietnam to study the geodynamic evolution and deep structure of the South China Sea. Similar stations are planned for deployment in the Philippines in the near future. In the plan, some high-quality stations may become permanent stations with continuous GPS observations added, with instruments maintained and operated by several cooperating institutes, for instance the Institute of Geophysics, Vietnamese Academy of Sciences and Technology in Vietnam and the Philippine Institute of Volcanology and Seismology in the Philippines. Finally, those stations are planned to be upgraded to real-time transmission stations for earthquake monitoring and tsunami warning. However, high-speed data transfer among different agencies is always a critical issue for successful network operation. By taking advantage of both the EGEE and EUAsiaGrid e-Infrastructures, the Academia Sinica Grid Computing Centre coordinates researchers from various Asian countries to construct a platform for high-performance data transfer and large parallel computations. Efforts from this data service and a newly built earthquake data centre for data management may greatly improve seismic network performance. Implementation of Grid infrastructure and e-science in this region may assist the development of earthquake research, monitoring, and natural hazard reduction.
In the near future, we will continue to seek new cooperation with the countries surrounding the South China Sea to install new seismic stations, construct a complete seismic network of the South China Sea, and encourage studies in earthquake science and natural hazard reduction.
Information Power Grid (IPG) Tutorial 2003
NASA Technical Reports Server (NTRS)
Meyers, George
2003-01-01
For NASA and the general community today, Grid middleware: a) provides tools to access and use data sources (databases, instruments, ...); b) provides tools to access computing (unique and generic); c) is an enabler of large-scale collaboration. Dynamically responding to needs is a key selling point of a grid: independent resources can be joined as appropriate to solve a problem. The middleware provides tools to enable the building of frameworks for applications, and provides value-added services to the NASA user base for utilizing resources on the grid in new and more efficient ways.
Experimental Evaluation of Grid Support Enabled PV Inverter Response to Abnormal Grid Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Austin A; Martin, Gregory D; Hurtt, James
As revised interconnection standards for grid-tied photovoltaic (PV) inverters address new advanced grid support functions (GSFs), there is increasing interest in inverter performance in the case of abnormal grid conditions. The growth of GSF-enabled inverters has outpaced the industry standards that define their operation, although recently published updates to UL1741 Supplement SA define test conditions for GSFs such as volt-var control, frequency-watt control, and voltage/frequency ride-through, among others. This paper describes the results of a comparative experimental evaluation on four commercially available, three-phase PV inverters in the 24.0-39.8 kVA power range on their GSF capability and its effect on abnormal grid condition response. The evaluation examined the impact particular GSF implementations have on run-on times during islanding conditions, peak voltages in load rejection overvoltage scenarios, and peak currents during single-phase and three-phase fault events for individual inverters. Testing results indicated a wide variance in the performance of GSF enabled inverters to various test cases.
GreenView and GreenLand Applications Development on SEE-GRID Infrastructure
NASA Astrophysics Data System (ADS)
Mihon, Danut; Bacu, Victor; Gorgan, Dorian; Mészáros, Róbert; Gelybó, Györgyi; Stefanut, Teodor
2010-05-01
The GreenView and GreenLand applications [1] have been developed through the SEE-GRID-SCI (SEE-GRID eInfrastructure for regional eScience) FP7 project co-funded by the European Commission [2]. The development of environmental applications is a challenge for Grid technologies and software development methodologies. This presentation exemplifies the development of the GreenView and GreenLand applications over the SEE-GRID infrastructure using the Grid Application Development Methodology [3]. Today's environmental applications are used in various domains of Earth Science such as meteorology, ground and atmospheric pollution, ground metal detection or weather prediction. These applications run on satellite images (e.g. Landsat, MERIS, MODIS, etc.) and the accuracy of output results depends mostly on the quality of these images. The main drawback of such environmental applications concerns the need for computational power and storage capacity (some images are almost 1 GB in size) to process such a large data volume. Most applications requiring high computational resources have approached migration onto the Grid infrastructure. This infrastructure offers computing power by running the atomic application components on different Grid nodes in sequential or parallel mode. The middleware used between the Grid infrastructure and client applications is ESIP (Environment Oriented Satellite Image Processing Platform), which is based on the gProcess platform [4]. In its current form, gProcess is used for launching new processes on the Grid nodes, but also for monitoring the execution status of these processes. This presentation highlights two case studies of Grid-based environmental applications, GreenView and GreenLand [5].
GreenView is used in correlation with MODIS (Moderate Resolution Imaging Spectroradiometer) satellite images and meteorological datasets, in order to produce pseudo-colored temperature and vegetation maps for different geographical CEE (Central Eastern Europe) regions. On the other hand, GreenLand is used for generating maps of different vegetation indices (e.g. NDVI, EVI, SAVI, GEMI) based on Landsat satellite images. Both applications use interpolation and random value generation algorithms, as well as specific formulas for computing vegetation index values. The GreenView and GreenLand applications have been evaluated on the SEE-GRID infrastructure and the performance evaluation is reported in [6]. The improvement of the execution time (obtained through better parallelization of jobs), the extension of the geographical areas to other parts of the Earth, and new user interaction techniques on spatial data and large sets of satellite images are the goals of future work. References [1] GreenView application on Wiki, http://wiki.egee-see.org/index.php/GreenView [2] SEE-GRID-SCI Project, http://www.see-grid-sci.eu/ [3] Gorgan D., Stefanut T., Bâcu V., Mihon D., Grid based Environment Application Development Methodology, SCICOM, 7th International Conference on "Large-Scale Scientific Computations", 4-8 June, 2009, Sozopol, Bulgaria, (To be published by Springer), (2009). [4] Gorgan D., Bacu V., Stefanut T., Rodila D., Mihon D., Grid based Satellite Image Processing Platform for Earth Observation Applications Development. IDAACS'2009 - IEEE Fifth International Workshop on "Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications", 21-23 September, Cosenza, Italy, IEEE Published in Computer Press, 247-252 (2009). [5] Mihon D., Bacu V., Stefanut T., Gorgan D., "Grid Based Environment Application Development - GreenView Application".
ICCP2009 - IEEE 5th International Conference on Intelligent Computer Communication and Processing, 27 Aug, 2009 Cluj-Napoca. Published by IEEE Computer Press, pp. 275-282 (2009). [6] Danut Mihon, Victor Bacu, Dorian Gorgan, Róbert Mészáros, Györgyi Gelybó, Teodor Stefanut, Practical Considerations on the GreenView Application Development and Execution over SEE-GRID. SEE-GRID-SCI User Forum, 9-10 Dec 2009, Bogazici University, Istanbul, Turkey, ISBN: 978-975-403-510-0, pp. 167-175 (2009).
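Of the vegetation indices GreenLand computes, NDVI has the simplest closed form: (NIR - RED) / (NIR + RED). A small illustrative Python sketch of that formula follows; the band reflectance values are made up for the example and are not Landsat data, and the function name is our own:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - RED) / (NIR + RED).

    red, nir : arrays of surface reflectance for the red and
               near-infrared bands (values in [0, 1]).
    """
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red)

# Two illustrative pixels: higher NDVI indicates denser green vegetation.
print(ndvi(red=[0.1, 0.2], nir=[0.5, 0.4]))
```

In a real GreenLand-style workflow the same per-pixel arithmetic would be applied to whole Landsat band rasters, which is what makes the computation embarrassingly parallel across Grid nodes.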
University of Delaware Demonstrated at NREL Vehicle-to-Grid Characteristics
At the Energy Systems Integration Facility (ESIF), the University of Delaware demonstrated the vehicle-to-grid characteristics of electric vehicles, featuring vehicle-to-grid integration capabilities that enable a vehicle to feed power back to the grid.
ERIC Educational Resources Information Center
Udoh, Emmanuel E.
2010-01-01
Advances in grid technology have enabled some organizations to harness enormous computational power on demand. However, the prediction of widespread adoption of the grid technology has not materialized despite the obvious grid advantages. This situation has encouraged intense efforts to close the research gap in the grid adoption process. In this…
A framework for WRF to WRF-IBM grid nesting to enable multiscale simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiersema, David John; Lundquist, Katherine A.; Chow, Fotini Katapodes
With advances in computational power, mesoscale models, such as the Weather Research and Forecasting (WRF) model, are often pushed to higher resolutions. As the model’s horizontal resolution is refined, the maximum resolved terrain slope will increase. Because WRF uses a terrain-following coordinate, this increase in resolved terrain slopes introduces additional grid skewness. At high resolutions and over complex terrain, this grid skewness can introduce large numerical errors that require methods, such as the immersed boundary method, to keep the model accurate and stable. Our implementation of the immersed boundary method in the WRF model, WRF-IBM, has proven effective at microscale simulations over complex terrain. WRF-IBM uses a non-conforming grid that extends beneath the model’s terrain. Boundary conditions at the immersed boundary, the terrain, are enforced by introducing a body force term to the governing equations at points directly beneath the immersed boundary. Nesting between a WRF parent grid and a WRF-IBM child grid requires a new framework for initialization and forcing of the child WRF-IBM grid. This framework will enable concurrent multi-scale simulations within the WRF model, improving the accuracy of high-resolution simulations and enabling simulations across a wide range of scales.
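The direct-forcing idea described in the abstract, adding a body-force term at grid points beneath the immersed boundary so the solution relaxes to the boundary value, can be illustrated on a toy 1D column. This is a hedged sketch of the general technique only, not the WRF-IBM implementation; all names, the diffusion-only dynamics, and the parameter values are our own:

```python
import numpy as np

def step(u, z, terrain_height, dt, nu=0.1, dz=1.0, u_bc=0.0):
    """One explicit diffusion step with an immersed-boundary body force.

    Points with z < terrain_height lie beneath the immersed boundary;
    the body force (u_bc - u) / dt drives them to the boundary value u_bc.
    """
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dz**2  # interior Laplacian
    force = np.where(z < terrain_height, (u_bc - u) / dt, 0.0)
    return u + dt * (nu * lap + force)

z = np.linspace(0.0, 10.0, 11)   # vertical grid extending beneath the terrain
u = np.ones_like(z)              # uniform initial field
u_new = step(u, z, terrain_height=3.0, dt=0.1)
# after one step, points beneath the boundary sit at the no-slip value u_bc=0,
# while points above it are unchanged by the (initially uniform) diffusion
```

The key property, visible even in this toy version, is that the grid does not conform to the terrain: the boundary condition is imposed through the extra source term rather than through the mesh geometry.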
Europa Geophysical Explorer Mission Concept Studies
NASA Astrophysics Data System (ADS)
Green, J. R.; Abelson, R. D.; Smythe, W.; Spilker, T. R.; Shirley, J. H.
2005-12-01
The Strategic Road Map for Solar System Exploration recommended in May 2005 that NASA implement the Europa Geophysical Explorer (EGE) as a Flagship mission early in the next decade. This supported the recommendations of the National Research Council's Solar System Decadal Survey and the priorities of the Outer Planets Assessment Group (OPAG). The Europa Geophysical Explorer would: (1) Characterize tidal deformations of the surface of Europa and surface geology, to confirm the presence of a subsurface ocean; (2) Measure the three-dimensional structure and distribution of subsurface water; and (3) Determine surface composition from orbit, and potentially, prebiotic chemistry, in situ. As the next step in Europa exploration, EGE would build on previous Europa Orbiter concepts, for example, the original Europa Orbiter and the Jupiter Icy Moons Orbiter (JIMO). As well, a new set of draft Level One Requirements, provided by NASA sponsors, guided the concept development. These requirements included: (1) Earliest Launch: 2012; (2) Launch Vehicle: Delta IV Heavy or Atlas V; (3) Primary Propulsion: Chemical; (4) Power: Radioisotope Power System (RPS); (5) Orbital Mission: 30 days minimum to meet orbital science objectives; and (6) Earth Gravity Assists: Allowed. The previous studies and the new requirements contributed to the development of several scientifically capable and relatively mass-rich mission options. In particular, Earth-gravity assists (EGA) were allowed, resulting in an increased delivered mass. As well, there have been advances in radiation-hardened components and subsystems, due to the investments from the X-2000 technology program and JIMO. Finally, developments in radioisotope power systems (RPS) have added to the capability and reliability of the mission.
Several potential mission options were explored using a variety of trade study methods, ranging from the work of the JPL EGE Team of scientists and engineers in partnership with the OPAG Europa Sub-Group Advisory Team, JPL's Team X, and parametric modeling and simulation tools. We explored the system impacts of selecting different science payloads, power systems, mission durations, Deep Space Network (DSN) architectures, trajectory types, and launch vehicles. The comparisons show that there are feasible mission options that provide potentially available mass for enhanced spacecraft margins and science return, in addition to a 150-kg orbiter science instrument payload mass. This presentation describes high-priority science objectives for an EGE mission, results of the recent studies, and implementation options.
A cross-domain communication resource scheduling method for grid-enabled communication networks
NASA Astrophysics Data System (ADS)
Zheng, Xiangquan; Wen, Xiang; Zhang, Yongding
2011-10-01
To support a wide range of grid applications in environments where various heterogeneous communication networks coexist, it is important to enable advanced capabilities for on-demand, dynamic integration and efficient co-sharing of cross-domain heterogeneous communication resources, thus providing communication services that no single communication resource could afford alone. Based on plug-and-play co-sharing and soft integration of communication resources, a grid-enabled communication network (GECN) is flexibly built up to provide on-demand communication services for grid applications with various quality-of-service requirements. Based on an analysis of joint job and communication resource scheduling in GECNs, this paper presents a cooperative cross-multi-domain communication resource scheduling method and describes its main processes, such as traffic requirement resolution for communication services, cross-multi-domain negotiation on communication resources, and on-demand communication resource scheduling. The presented method provides communication service capability for cross-domain traffic delivery in GECNs. Further research towards validation and implementation of the presented method is outlined at the end.
Schnek: A C++ library for the development of parallel simulation codes on regular grids
NASA Astrophysics Data System (ADS)
Schmitz, Holger
2018-05-01
A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.
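The ghost-cell mechanism the abstract describes can be shown conceptually: each subdomain of a decomposed regular grid carries extra cells on its borders that are filled from the neighbouring subdomain's edge values before each stencil update. The sketch below is a serial Python illustration of that indexing pattern only; it is not Schnek's C++/MPI API, and all names are our own:

```python
import numpy as np

def exchange_ghosts(left, right):
    """Fill the ghost cells of two neighbouring 1D subdomains.

    Each array has one ghost cell per shared side: left's last entry and
    right's first entry. They are filled from the neighbour's edge
    interior cell, as an MPI halo exchange would do across processes.
    """
    left[-1] = right[1]   # left's right ghost <- right's first interior cell
    right[0] = left[-2]   # right's left ghost <- left's last interior cell

left = np.array([0.0, 1.0, 2.0, -1.0])   # last entry is an (unfilled) ghost
right = np.array([-1.0, 3.0, 4.0, 5.0])  # first entry is an (unfilled) ghost
exchange_ghosts(left, right)
# each subdomain now sees its neighbour's edge value and can apply a
# stencil to all of its interior cells without special-casing the border
```

In a distributed run the two assignments become a pair of MPI send/receive operations between neighbouring ranks; libraries like Schnek hide exactly this bookkeeping behind a simple interface.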
An Open Framework for Low-Latency Communications across the Smart Grid Network
ERIC Educational Resources Information Center
Sturm, John Andrew
2011-01-01
The recent White House (2011) policy paper for the Smart Grid that was released on June 13, 2011, "A Policy Framework for the 21st Century Grid: Enabling Our Secure Energy Future," defines four major problems to be solved and the one that is addressed in this dissertation is Securing the Grid. Securing the Grid is referred to as one of…
High liquid fuel yielding biofuel processes and a roadmap for the future transportation
NASA Astrophysics Data System (ADS)
Singh, Navneet R.
In a fossil-fuel-deprived world, when crude oil will be scarce and transportation needs cannot be met with electricity alone, liquid transportation fuel must still be produced, and biomass-derived liquid fuels are a natural replacement. However, the carbon efficiency of the currently known biomass-to-liquid-fuel conversion processes ranges from 35-40%, yielding 90 ethanol gallon equivalents (ege) per ton of biomass. This, coupled with the fact that the efficiency at which solar energy is captured by biomass (<1%) is significantly lower than for H2 (10-27%) and electricity (20-42%), implies that sufficient land area is not available to meet the needs of the entire transportation sector. To counter this dilemma, a number of processes have been proposed in this work: a hybrid hydrogen-carbon (H2CAR) process based on biomass gasification followed by the Fischer-Tropsch process, such that 100% carbon efficiency is achieved, yielding 330 ege/ton of biomass using hydrogen derived from a carbon-free energy source. The hydrogen requirement for the H2CAR process is 0.33 kg/liter of diesel. To decrease the hydrogen requirement associated with the H2CAR process, a hydrogen bio-oil (H2Bioil) process based on biomass fast-hydropyrolysis/hydrodeoxygenation is proposed, which can achieve a liquid fuel yield of 215 ege/ton while consuming 0.11 kg hydrogen per liter of oil. Due to the lower hydrogen consumption of the H2Bioil process, synergistically integrated transition pathways are feasible where hot syngas derived from coal gasification (H2Bioil-C) or a natural gas reformer (H2Bioil-NG) is used to supply the hydrogen and process heat for the biomass fast-hydropyrolysis/hydrodeoxygenation. Another offshoot of the H2Bioil process is the H2Bioil-B process, where the hydrogen required for the hydropyrolysis is obtained from gasification of a fraction of the biomass.
H2Bioil-B achieves the highest liquid fuel yield (126-146 ege/ton of biomass) reported in the literature for any self-contained conversion of biomass to biofuel. Finally, an integration of the H2Bioil process with the H2CAR process is suggested, which can achieve 100% carbon efficiency (330 ege/ton of biomass) at the expense of 0.24 kg of hydrogen per liter of oil. A sun-to-fuel efficiency analysis shows that extracting CO2 from air and converting it to liquid fuel is at least two times more efficient than growing dedicated fuel crops and converting them to liquid fuel, even for the highest biomass growth rates feasible with algae. This implies that liquid fuel should preferably be produced from sustainably available waste (SAW) biomass first; if SAW biomass cannot meet the demand for liquid fuel, then CO2 should be extracted from air and converted to liquid fuel rather than growing additional biomass. Furthermore, based on the sun-to-wheels recovery of different transportation pathways, a synergistic and complementary use of electricity, hydrogen and biomass, all derived from solar energy, is presented in an energy-efficient roadmap to propel the entire future transportation sector.
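As a rough consistency check, the yield and hydrogen-demand figures quoted in this abstract can be tabulated and compared in a few lines of Python. The numbers are taken from the text above; the table structure, function name, and the choice of the conventional process as baseline are purely illustrative:

```python
# Figures quoted in the abstract ("ege" = ethanol gallon equivalents per
# ton of biomass; hydrogen demand in kg per liter of liquid fuel).
processes = {
    "conventional":   {"yield_ege_per_ton": 90,  "h2_kg_per_liter": 0.0},
    "H2CAR":          {"yield_ege_per_ton": 330, "h2_kg_per_liter": 0.33},
    "H2Bioil":        {"yield_ege_per_ton": 215, "h2_kg_per_liter": 0.11},
    "H2Bioil+H2CAR":  {"yield_ege_per_ton": 330, "h2_kg_per_liter": 0.24},
}

def yield_gain(name, baseline="conventional"):
    """Relative liquid-fuel yield of a process versus the baseline."""
    p, b = processes[name], processes[baseline]
    return p["yield_ege_per_ton"] / b["yield_ege_per_ton"]

for name, data in processes.items():
    print(f"{name:14s} {yield_gain(name):.2f}x yield, "
          f"{data['h2_kg_per_liter']:.2f} kg H2/liter")
```

The comparison makes the abstract's tradeoff visible at a glance: H2CAR and the integrated process both reach the 3.7x yield of the 100% carbon-efficient route, but at different hydrogen costs.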
Spaceflight Operations Services Grid (SOSG) Prototype Implementation and Feasibility Study
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Thigpen, William W.; Lisotta, Anthony J.; Redman, Sandra
2004-01-01
The Spaceflight Operations Services Grid (SOSG) project is building a prototype grid-based environment that incorporates existing and new spaceflight services to provide current and future NASA programs with cost savings and with new, evolvable methods of conducting science in a distributed environment. SOSG will provide a distributed environment in which widely disparate organizations can run their systems and processes in a more efficient and cost-effective manner. These organizations include those that: 1) engage in space-based science and operations, 2) develop space-based systems and processes, and 3) conduct scientific research, bringing together disparate scientific disciplines like geology and oceanography to create new information. In addition, educational outreach will be significantly enhanced by providing schools with the same tools used by NASA, together with the ability to actively participate, on many levels, in the science generated by NASA from space and on the ground. The services range from voice, video and telemetry processing and display to data mining, high-level processing and visualization tools, all accessible from a single portal. In this environment, users would not require high-end systems or processes at their home locations to use these services, and would need to know only minimal details about the applications in order to utilize them. Security at all levels is an underlying goal of the project. SOSG will focus on four tools that are currently used by the ISS Payload community, along with nine more that are new to the community. Under the prototype, four Grid virtual organizations (VOs) will be developed to represent four types of users: a Payload (experimenters) VO, a Flight Controllers VO, an Engineering and Science Collaborators VO, and an Education and Public Outreach VO.
The User-based services will be implemented to replicate the operational voice, video, telemetry and commanding systems. Once the User-based services are in place, they will be analyzed to establish the feasibility of Grid enabling; if feasible, each User-based service will be Grid enabled. The remaining non-Grid services, if not already Web enabled, will be so enabled. In the end, four portals will be developed, one for each VO. Each portal will contain the appropriate User-based services required for that VO to operate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yocum, D.R.; Berman, E.; Canal, P.
2007-05-01
As one of the founding members of the Open Science Grid Consortium (OSG), Fermilab enables coherent access to its production resources through the Grid infrastructure system called FermiGrid. This system successfully provides for centrally managed grid services, opportunistic resource access, development of OSG Interfaces for Fermilab, and an interface to the Fermilab dCache system. FermiGrid supports virtual organizations (VOs) including high energy physics experiments (USCMS, MINOS, D0, CDF, ILC), astrophysics experiments (SDSS, Auger, DES), biology experiments (GADU, Nanohub) and educational activities.
PNNL Data-Intensive Computing for a Smarter Energy Grid
Carol Imhoff; Zhenyu (Henry) Huang; Daniel Chavarria
2017-12-09
The Middleware for Data-Intensive Computing (MeDICi) Integration Framework, an integrated platform for data analysis and processing needs, supports PNNL research on the U.S. electric power grid. MeDICi is enabling the development of visualizations of grid operations and vulnerabilities, with the goal of near real-time analysis to aid operators in preventing and mitigating grid failures.
Grid Enabled Geospatial Catalogue Web Service
NASA Technical Reports Server (NTRS)
Chen, Ai-Jun; Di, Li-Ping; Wei, Ya-Xing; Liu, Yang; Bui, Yu-Qi; Hu, Chau-Min; Mehrotra, Piyush
2004-01-01
A Geospatial Catalogue Web Service is a vital service for sharing and interoperating with volumes of distributed, heterogeneous geospatial resources, such as data, services, applications, and their replicas, over the web. Based on Grid technology and the Open Geospatial Consortium (OGC) Catalogue Service - Web information model, this paper proposes a new information model for a Geospatial Catalogue Web Service, named GCWS, which securely provides Grid-based publishing, managing and querying of geospatial data and services, as well as transparent access to replica data and related services in a Grid environment. This information model integrates the information models of the Grid Replica Location Service (RLS) and Monitoring & Discovery Service (MDS) with that of the OGC Catalogue Service for the Web (CSW), and draws on the geospatial data metadata standards ISO 19115, FGDC and NASA EOS Core System, and the service metadata standard ISO 19119, to extend itself for expressing geospatial resources. Using GCWS, any valid geospatial user who belongs to an authorized Virtual Organization (VO) can securely publish and manage geospatial resources, and in particular can query on-demand data in the virtual community and retrieve it through data-related services providing functions such as subsetting, reformatting and reprojection. This work facilitates the sharing of and interoperation with geospatial resources in a Grid environment, making geospatial resources Grid-enabled and Grid technologies geospatially enabled. It also allows researchers to focus on science, and not on issues of computing capability, data location, processing and management. GCWS is also a key component for workflow-based virtual geospatial data production.
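The VO-authorized publish-and-query behavior described above can be sketched schematically. The `Catalogue` class, VO name, and record fields below are illustrative stand-ins, not the actual GCWS interface:

```python
# Minimal sketch of VO-authorized publish/query against a metadata
# catalogue, loosely modeled on the GCWS behavior described above.
class Catalogue:
    def __init__(self):
        self.records = []                      # published metadata records
        self.authorized_vos = {"EarthScienceVO"}

    def publish(self, vo, record):
        """Only members of an authorized VO may publish."""
        if vo not in self.authorized_vos:
            raise PermissionError(f"VO {vo!r} is not authorized")
        self.records.append(record)

    def query(self, vo, **criteria):
        """Return records whose fields match every given criterion."""
        if vo not in self.authorized_vos:
            raise PermissionError(f"VO {vo!r} is not authorized")
        return [r for r in self.records
                if all(r.get(k) == v for k, v in criteria.items())]

cat = Catalogue()
cat.publish("EarthScienceVO",
            {"title": "MODIS L1B", "format": "HDF-EOS",
             "metadata_standard": "ISO 19115"})
hits = cat.query("EarthScienceVO", format="HDF-EOS")
```

A real GCWS would, per the abstract, resolve such hits through RLS to a physical replica and hand the data to subsetting or reprojection services; the sketch only shows the access-controlled catalogue core.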
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Laszewski, G.; Foster, I.; Gawor, J.
In this paper we report on the features of the Java Commodity Grid Kit (Java CoG Kit). The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus protocols, allowing the Java CoG Kit also to communicate with the C Globus reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise, and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus software. In this paper we also report on efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure-Java resource management system that enables one to run Globus jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.
Robust Control of Wide Bandgap Power Electronics Device Enabled Smart Grid
NASA Astrophysics Data System (ADS)
Yao, Tong
In recent years, wide bandgap (WBG) devices have enabled power converters with higher power density and higher efficiency. At the same time, smart grid technologies are maturing thanks to new battery and computer technology. In the near future, the two will combine into the next generation of smart grid, enabled by WBG devices. This dissertation deals with two applications: silicon carbide (SiC) devices used for a medium-voltage-level interface (7.2 kV to 240 V) and gallium nitride (GaN) devices used for a low-voltage-level interface (240 V/120 V). A 20 kW solid state transformer (SST) is designed with a 6 kHz switching-frequency SiC rectifier, and three robust control design methods are proposed, one for each of its smart grid operation modes. In grid-connected mode, a new LCL filter design method is proposed that considers grid voltage THD, grid current THD, and the robust stability of the current regulation loop with respect to changes in grid impedance. In grid-islanded mode, the μ-synthesis method combined with variable structure control is used to design a robust controller for grid voltage regulation. For grid emergency mode, a multivariable controller designed using the H∞ synthesis method is proposed for accurate power sharing. A controller-hardware-in-the-loop (CHIL) testbed for a 7-SST system is set up with a Real Time Digital Simulator (RTDS); a real TMS320F28335 DSP and Spartan-6 FPGA control board is used to interface a switching-model SST in RTDS, and the proposed control methods are tested. For the low-voltage-level application, a 3.3 kW smart grid hardware testbed is built with three GaN inverters. The inverters are designed around GaN devices characterized using the proposed multi-function double pulse tester. Each inverter is controlled by an onboard TMS320F28379D dual-core DSP with a 200 kHz sampling frequency, and is tested to process 2.2 kW of power with an overall efficiency of 96.5% at room temperature.
A smart grid monitoring system and fault interrupt devices (FIDs) based on the Arduino Mega2560 are built and tested. The smart grid coordinates with the GaN inverters through CAN bus communication. Finally, the three-inverter smart grid achieves smooth transition from grid-connected to islanded mode.
NASA Technical Reports Server (NTRS)
Moore, Reagan W.; Jagatheesan, Arun; Rajasekar, Arcot; Wan, Michael; Schroeder, Wayne
2004-01-01
The "Grid" is an emerging infrastructure for coordinating access, across autonomous organizations, to distributed, heterogeneous computation and data resources. Data grids are being built around the world as next-generation data handling systems for sharing, publishing, and preserving data residing on storage systems located in multiple administrative domains. A data grid provides logical namespaces for users, digital entities, and storage resources, creating persistent identifiers for controlling access, enabling discovery, and managing wide-area latencies. This paper introduces data grids and describes data grid use cases. The relevance of data grids to digital libraries and persistent archives is demonstrated, and research issues in data grids and grid dataflow management systems are discussed.
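The logical-namespace idea above can be illustrated with a minimal replica catalogue: a persistent logical identifier maps to one or more physical copies in different administrative domains, and resolution can prefer a nearby site to manage wide-area latency. All names and URLs below are invented for the example:

```python
# Sketch of a data grid logical namespace: one logical file name (LFN),
# several physical replicas in different administrative domains.
replica_catalog = {
    "lfn://archive/climate/run42.nc": [
        "gsiftp://storage.siteA.edu/data/run42.nc",
        "gsiftp://storage.siteB.gov/mirror/run42.nc",
    ],
}

def resolve(lfn, prefer=None):
    """Return a physical replica URL for a logical file name.

    If `prefer` is given (e.g. a site substring), pick a replica at that
    site when one exists, to reduce wide-area transfer latency.
    """
    replicas = replica_catalog[lfn]
    if prefer:
        for url in replicas:
            if prefer in url:
                return url
    return replicas[0]
```

Because users and applications address only the LFN, replicas can be added, moved, or retired without breaking the persistent identifier, which is precisely what makes such namespaces useful to digital libraries and persistent archives.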
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Laszewski, G.; Gawor, J.; Lane, P.
In this paper we report on the features of the Java Commodity Grid Kit (Java CoG Kit). The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus Toolkit protocols, allowing the Java CoG Kit to also communicate with the services distributed as part of the C Globus Toolkit reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus Toolkit software. In this paper we also report on efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure-Java resource management system that enables one to run Grid jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taft, Jeffrey D.
This report describes work done on Grid Architecture under the auspices of the Department of Energy Office of Electricity Delivery and Energy Reliability in 2015. As described in the first Grid Architecture report, the primary purpose of this work is to provide stakeholders with insight about grid issues so as to enable superior decision making on their part. Doing this requires the creation of various work products, including often complex diagrams, analyses, and explanations. This report provides architectural insights into several important grid topics and also describes work done to advance the science of Grid Architecture.
The Elon Gap Experience: A Transformative First-Year Experience
ERIC Educational Resources Information Center
Morrison, Stephen T.; Burr, Katherine H.; Waters, Rexford A.; Hall, Eric E.
2016-01-01
The Elon Gap Experience (EGE) was conceived out of Elon University's most recent strategic plan, the Elon Commitment (Elon University, 2009). One theme calls for "strategic and innovative pathways in undergraduate and graduate education," specifically "to launch a service program as part of a gap-year program" (Elon University,…
Suicides in Adolescents: Benefit/Harm Balance of Antidepressants
ERIC Educational Resources Information Center
Saz, Ulas Eylem; Arslan, Mehmet Tayyip; Egemen, Ayten
2007-01-01
Introduction: Depression is an important cause of suicide in adolescents. It has been speculated that antidepressants themselves can increase the risk of suicide. Method: Cases of adolescents admitted to the Ege University Pediatric Emergency Department in Turkey due to suicide attempt were assessed. Results: Nine of 13 suicide attempts during…
HappyFace as a generic monitoring tool for HEP experiments
NASA Astrophysics Data System (ADS)
Kawamura, Gen; Magradze, Erekle; Musheghyan, Haykuhi; Quadt, Arnulf; Rzehorz, Gerhard
2015-12-01
The importance of monitoring in HEP grid computing systems is growing due to a significant increase in their complexity. Computer scientists and administrators have been studying and building effective ways to gather information on, and clarify the status of, each local grid infrastructure. The HappyFace project aims at making this workflow possible: it aggregates, processes and stores the information and status of different HEP monitoring resources in a common HappyFace database, and displays them through a single interface. However, this model of HappyFace relied on monitoring resources that are constantly under development in the HEP experiments. Consequently, HappyFace needed direct access methods to the grid application and grid service layers of the different HEP grid systems. To cope with this issue, we use a reliable HEP software repository, the CernVM File System. We propose a new implementation and architecture of HappyFace, the so-called grid-enabled HappyFace, which allows its basic framework to connect directly to the grid user applications and the grid collective services, without involving the monitoring resources of the HEP grid systems. This approach gives HappyFace several advantages: portability, to provide an independent and generic monitoring system among the HEP grid systems; functionality, to allow users to run various diagnostic tools on the individual HEP grid systems and grid sites; and flexibility, to make HappyFace beneficial and open to various distributed grid computing environments. Different grid-enabled modules, to connect to the Ganga job monitoring system and to check the performance of grid transfers among the grid sites, have been implemented.
The new HappyFace system has been successfully integrated and now it displays the information and the status of both the monitoring resources and the direct access to the grid user applications and the grid collective services.
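The aggregation step described above, in which each module reports a status into a common database that a single interface then summarizes, can be sketched in a few lines. The module names, schema, and green/red rollup rule are illustrative, not the actual HappyFace implementation:

```python
# Sketch of status aggregation into a common store: each grid-enabled
# module reports per-site results; a rollup marks a site green only if
# every module reporting on it is green.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE status (module TEXT, site TEXT, ok INTEGER)")

def report(module, site, ok):
    """Record one module's verdict for one site."""
    db.execute("INSERT INTO status VALUES (?, ?, ?)", (module, site, int(ok)))

def site_ok(site):
    """True only if at least one module reported and all reports are green."""
    rows = db.execute("SELECT ok FROM status WHERE site = ?",
                      (site,)).fetchall()
    return bool(rows) and all(ok for (ok,) in rows)

report("job_monitor", "GoeGrid", True)      # Ganga-style job monitoring
report("transfer_check", "GoeGrid", False)  # a failed grid-transfer probe
```

The rollup rule is the interesting design choice: a single failing probe turns the site red, which matches the conservative way monitoring dashboards usually summarize heterogeneous checks.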
Grid Computing Education Support
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steven Crumb
2008-01-15
The GGF Student Scholar program enabled GGF to bring over sixty qualified graduate and undergraduate students with interests in grid technologies to its three annual events over the three-year program.
THE VIRTUAL INSTRUMENT: SUPPORT FOR GRID-ENABLED MCELL SIMULATIONS
Casanova, Henri; Berman, Francine; Bartol, Thomas; Gokcay, Erhan; Sejnowski, Terry; Birnbaum, Adam; Dongarra, Jack; Miller, Michelle; Ellisman, Mark; Faerman, Marcio; Obertelli, Graziano; Wolski, Rich; Pomerantz, Stuart; Stiles, Joel
2010-01-01
Ensembles of widely distributed, heterogeneous resources, or Grids, have emerged as popular platforms for large-scale scientific applications. In this paper we present the Virtual Instrument project, which provides an integrated application execution environment that enables end-users to run and interact with running scientific simulations on Grids. This work is performed in the specific context of MCell, a computational biology application. While MCell provides the basis for running simulations, its capabilities are currently limited in terms of scale, ease-of-use, and interactivity. These limitations preclude usage scenarios that are critical for scientific advances. Our goal is to create a scientific “Virtual Instrument” from MCell by allowing its users to transparently access Grid resources while being able to steer running simulations. In this paper, we motivate the Virtual Instrument project and discuss a number of relevant issues and accomplishments in the area of Grid software development and application scheduling. We then describe our software design and report on the current implementation. We verify and evaluate our design via experiments with MCell on a real-world Grid testbed. PMID:20689618
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Austin; Martin, Gregory; Hurtt, James
As revised interconnection standards for grid-tied photovoltaic (PV) inverters address new advanced grid support functions (GSFs), there is increasing interest in inverter performance under abnormal grid conditions. The growth of GSF-enabled inverters has outpaced the industry standards that define their operation, although recently published updates to UL 1741 with Supplement SA define test conditions for GSFs such as volt-var control, frequency-watt control, and voltage/frequency ride-through, among others. A comparative experimental evaluation of GSF capability and its effect on abnormal-grid-condition response has been completed on four commercially available three-phase PV inverters in the 24.0-39.8 kVA power range. This study examines the impact particular GSF implementations have on run-on times during islanding conditions, peak voltages in load-rejection overvoltage scenarios, and peak currents during single-phase and three-phase fault events for individual inverters. The comparative test data show that GSFs have little impact on the metrics of interest in most test cases.
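Volt-var control, one of the GSFs named above, commands reactive power as a piecewise-linear function of terminal voltage: full var injection at low voltage, a deadband around nominal, and var absorption at high voltage. The sketch below uses hypothetical breakpoints and limits, not values from the report or from UL 1741 SA:

```python
# Illustrative volt-var characteristic (per-unit quantities).
# Breakpoints v1..v4 and the var limit q_max are assumed example values.
def volt_var(v_pu, v1=0.95, v2=0.98, v3=1.02, v4=1.05, q_max=0.44):
    """Reactive power command in per-unit (+ = injecting vars)."""
    if v_pu <= v1:
        return q_max                              # full injection
    if v_pu < v2:
        return q_max * (v2 - v_pu) / (v2 - v1)    # ramp down to zero
    if v_pu <= v3:
        return 0.0                                # deadband around nominal
    if v_pu < v4:
        return -q_max * (v_pu - v3) / (v4 - v3)   # ramp into absorption
    return -q_max                                 # full absorption
```

The deadband is why such functions can have "little impact" in many abnormal-condition tests: for modest voltage excursions the inverter commands zero vars, so its behavior matches a unity-power-factor unit.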
Smart Grid Enabled L2 EVSE for the Commercial Market
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weeks, John; Pugh, Jerry
In 2011, the DOE issued Funding Opportunity DE-FOA-0000554 as a means of addressing two major task areas identified by the Grid Integration Tech Team (GITT) that would help transition electric vehicles from a market driven by early adopters and environmental supporters to a market with mainstream volumes. Per DE-FOA-0000554, these tasks were: to reduce the cost of Electric Vehicle Supply Equipment (EVSE), thereby increasing the likelihood of the build-out of EV charging infrastructure (the goal of increasing the number of available EVSE being to ease concerns over range anxiety and promote the adoption of electric vehicles); and to allow EV loads to be managed via the smart grid, thereby maintaining power quality, reliability and affordability while protecting installed distribution equipment. In December of that year, the DOE awarded one of the two contracts targeted toward commercial EVSE to Eaton, and in early 2012 we began in earnest the process of developing a Smart Grid Enabled L2 EVSE for the commercial market (hereafter known as the DOE Charger). The design of the Smart Grid Enabled L2 EVSE was based primarily on the FOA requirements, along with input from the Electric Transportation Infrastructure (hereafter ETI) product line marketing team, who aided in the development of the customer requirements.
Grid-Enabled High Energy Physics Research using a Beowulf Cluster
NASA Astrophysics Data System (ADS)
Mahmood, Akhtar
2005-04-01
At Edinboro University of Pennsylvania, we have built an 8-node, 25 Gflops Beowulf cluster with 2.5 TB of disk storage space to carry out grid-enabled, data-intensive high energy physics research for the ATLAS experiment via Grid3. We will describe how we built and configured the cluster, which we have named the Sphinx Beowulf Cluster, and present the results of our cluster benchmark studies and run-time plots of several parallel application codes. Once fully functional, the cluster will be part of Grid3 [www.ivdgl.org/grid3]. The current ATLAS simulation grid application models the entire physical process, from the proton-proton collisions and the detector's response to the collision debris, through the complete reconstruction of the event from analyses of these responses. The end result is a detailed set of data that simulates a real physical collision event inside a particle detector. The Grid is the new IT infrastructure for 21st century science, a new computing paradigm poised to transform the practice of large-scale data-intensive research in science and engineering. The Grid will allow scientists worldwide to view and analyze the huge amounts of data flowing from large-scale experiments in high energy physics, bringing together geographically and organizationally dispersed computational resources such as CPUs, storage systems, communication systems, and data sources.
A Structured-Grid Quality Measure for Simulated Hypersonic Flows
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
2004-01-01
A structured-grid quality measure is proposed, combining three traditional measurements: intersection angles, stretching, and curvature. Quality assesses whether the generated grid provides the best possible tradeoffs in grid stretching and skewness to enable accurate flow predictions, whereas the grid density is assumed to be a constraint imposed by the available computational resources and the desired resolution of the flow field. The usefulness of this quality measure is assessed by comparing heat transfer predictions from grid convergence studies for grids of varying quality in the range of [0.6-0.8] on an 8° half-angle sphere-cone, at laminar, perfect gas, Mach 10 wind tunnel conditions.
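One way to combine the three traditional measurements named above into a single value in [0, 1] is to normalize each to that range and multiply them, so any one degenerate property drives the quality down. The normalizations and the product combination below are assumptions for illustration, not the paper's actual formulation:

```python
# Hedged sketch of a per-cell structured-grid quality measure built from
# intersection angle, stretching ratio, and grid-line curvature.
import math

def angle_quality(theta_deg):
    """1.0 for orthogonal grid-line intersections, 0 as cells collapse."""
    return math.sin(math.radians(theta_deg))

def stretch_quality(ratio):
    """1.0 for unit spacing ratio between adjacent cells, < 1 otherwise."""
    return min(ratio, 1.0 / ratio)

def curvature_quality(turn_deg):
    """1.0 for straight grid lines, decreasing with the turning angle."""
    return max(0.0, 1.0 - turn_deg / 90.0)

def cell_quality(theta_deg, ratio, turn_deg):
    """Product combination: any degenerate property drags quality to 0."""
    return (angle_quality(theta_deg)
            * stretch_quality(ratio)
            * curvature_quality(turn_deg))
```

With such a measure, a perfectly orthogonal, uniformly spaced, straight-lined cell scores 1.0, while the [0.6-0.8] band studied in the abstract corresponds to moderately skewed or stretched grids.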
Grid Generation Techniques Utilizing the Volume Grid Manipulator
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
1998-01-01
This paper presents grid generation techniques available in the Volume Grid Manipulator (VGM) code. The VGM code is designed to manipulate existing line, surface and volume grids to improve the quality of the data. It embodies an easy-to-read, rich command language that enables such alterations as topology changes, grid adaption and smoothing. Additionally, the VGM code can be used to construct simplified straight lines, splines, and conic sections, which are common curves used in the generation and manipulation of points, lines, surfaces and volumes (i.e., grid data). These simple geometric curves are essential in the construction of domain discretizations for computational fluid dynamic simulations. Compared to previously established methods of generating these curves interactively, the VGM code provides control of slope continuity and grid point-to-point stretchings, as well as quick changes to the controlling parameters. The VGM code offers the capability to couple the generation of these geometries with an extensive manipulation methodology in a scripting language. The scripting language allows parametric studies of a vehicle geometry to be performed efficiently to evaluate favorable trends in the design process. As examples of the capabilities of the VGM code, a wake flow field domain will be appended to an existing X33 Venturestar volume grid; negative volumes resulting from grid expansions, used to enable flow field capture on a simple geometry, will be corrected; and geometrical changes to a vehicle component of the X33 Venturestar will be shown.
European grid services for global earth science
NASA Astrophysics Data System (ADS)
Brewer, S.; Sipos, G.
2012-04-01
This presentation will provide an overview of the distributed computing services that the European Grid Infrastructure (EGI) offers to the Earth Sciences community and also explain the processes whereby Earth Science users can engage with the infrastructure. One of the main overarching goals for EGI over the coming year is to diversify its user-base. EGI therefore - through the National Grid Initiatives (NGIs) that provide the bulk of resources that make up the infrastructure - offers a number of routes whereby users, either individually or as communities, can make use of its services. At one level there are two approaches to working with EGI: either users can make use of existing resources and contribute to their evolution and configuration; or alternatively they can work with EGI, and hence the NGIs, to incorporate their own resources into the infrastructure to take advantage of EGI's monitoring, networking and managing services. Adopting this approach does not imply a loss of ownership of the resources. Both of these approaches are entirely applicable to the Earth Sciences community. The former because researchers within this field have been involved with EGI (and previously EGEE) as a Heavy User Community and the latter because they have very specific needs, such as incorporating HPC services into their workflows, and these will require multi-skilled interventions to fully provide such services. In addition to the technical support services that EGI has been offering for the last year or so - the applications database, the training marketplace and the Virtual Organisation services - there now exists a dynamic short-term project framework that can be utilised to establish and operate services for Earth Science users. 
During this talk we will present a summary of various on-going projects that will be of interest to Earth Science users with the intention that suggestions for future projects will emerge from the subsequent discussions: • The Federated Cloud Task Force is already providing a cloud infrastructure through a few committed NGIs. This is being made available to research communities participating in the Task Force and the long-term aim is to integrate these national clouds into a pan-European infrastructure for scientific communities. • The MPI group provides support for application developers to port and scale up parallel applications to the global European Grid Infrastructure. • A lively portal developer and provider community that is able to setup and operate custom, application and/or community specific portals for members of the Earth Science community to interact with EGI. • A project to assess the possibilities for federated identity management in EGI and the readiness of EGI member states for federated authentication and authorisation mechanisms. • Operating resources and user support services to process data with new types of services and infrastructures, such as desktop grids, map-reduce frameworks, GPU clusters.
Establishing a K-12 Circuit Design Program
ERIC Educational Resources Information Center
Inceoglu, Mustafa M.
2010-01-01
Outreach, as defined by Wikipedia, is an effort by an organization or group to connect its ideas or practices to the efforts of other organizations, groups, specific audiences, or the general public. This paper describes a computer engineering outreach project of the Department of Computer Engineering at Ege University, Izmir, Turkey, to a local…
Effectiveness of Learning Strategies Taught to Teacher Candidates
ERIC Educational Resources Information Center
Engin, Gizem; Dikbayir, Ahmet; Genç, Salih Zeki
2017-01-01
The research was carried out with 41 people educated in Ege University, Faculty of Education, Social Studies Teacher Training Department during the fall semester of 2015-2016 academic year. Quasi-experimental design was used in the study. Within the scope of the research, prospective teachers were taught learning strategies lasting for ten weeks.…
The Leisure Behavior of the Turkish Prospective Teachers
ERIC Educational Resources Information Center
Aslan, Nese; Cansever, Belgin Arslan
2016-01-01
This study focused on prospective teachers' leisure behaviors. For this purpose, 47 fourth grade undergraduate students in Faculty of Education in Ege University, Izmir, Turkey participated. A qualitative research design was used in the study. In the process of analysing the data, Greimas' Actant Model as one of the analysing models in Semiology…
New Cadets and Other College Freshmen: Class of 1983
1980-04-01
Data are presented on the secondary school and socioeconomic backgrounds, values, interests, and activity patterns of new cadets and of freshmen at selective four-year private colleges.
A Solution Framework for Environmental Characterization Problems
This paper describes experiences developing a grid-enabled framework for solving environmental inverse problems. The solution approach taken here couples environmental simulation models with global search methods and requires readily available computational resources of the grid ...
Surface Modeling and Grid Generation of Orbital Sciences X34 Vehicle. Phase 1
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
1997-01-01
The surface modeling and grid generation requirements, motivations, and methods used to develop Computational Fluid Dynamic volume grids for the X34-Phase 1 are presented. The requirements set forth by the Aerothermodynamics Branch at the NASA Langley Research Center serve as the basis for the final techniques used in the construction of all volume grids, including grids for parametric studies of the X34. The Integrated Computer Engineering and Manufacturing code for Computational Fluid Dynamics (ICEM/CFD), the Grid Generation code (GRIDGEN), the Three-Dimensional Multi-block Advanced Grid Generation System (3DMAGGS) code, and Volume Grid Manipulator (VGM) code are used to enable the necessary surface modeling, surface grid generation, volume grid generation, and grid alterations, respectively. All volume grids generated for the X34, as outlined in this paper, were used for CFD simulations within the Aerothermodynamics Branch.
NASA Technical Reports Server (NTRS)
Chow, Edward T.; Stewart, Helen; Korsmeyer, David (Technical Monitor)
2003-01-01
The biggest users of GRID technologies have come from the science and technology communities, consisting of government, industry and academia (national and international). The NASA GRID is moving into a higher technology readiness level (TRL) today; and as a joint effort among these leaders within government, academia, and industry, the NASA GRID plans to extend availability to enable scientists and engineers across these geographical boundaries to collaborate to solve important problems facing the world in the 21st century. In order to enable NASA programs and missions to use IPG resources for program and mission design, the IPG capabilities need to be accessible from inside the NASA center networks. However, because different NASA centers maintain different security domains, GRID penetration across different firewalls is a concern for center security personnel. This is the reason why some IPG resources have been separated from the NASA center network. Also, because of center network security and ITAR concerns, the NASA IPG resource owner may not have full control over who can access remotely from outside the NASA center. In order to obtain organizational approval for secured remote access, the IPG infrastructure needs to be adapted to work with the NASA business process. Improvements need to be made before the IPG can be used for NASA program and mission development. The Secured Advanced Federated Environment (SAFE) technology is designed to provide federated security across NASA center and NASA partner security domains. Instead of one giant center firewall, which can be difficult to modify for different GRID applications, the SAFE "micro security domain" provides a large number of professionally managed "micro firewalls" that can allow NASA centers to accept remote IPG access without the worry of damaging other center resources.
The SAFE policy-driven, capability-based federated security mechanism can enable jointly organization- and resource-owner-approved remote access from outside of NASA centers. A SAFE-enabled IPG can make IPG capabilities available to NASA mission design teams across different NASA center and partner company firewalls. This paper will first discuss some of the potential security issues for IPG to work across NASA center firewalls. We will then present the SAFE federated security model. Finally, we will present the concept of the architecture of a SAFE-enabled IPG and how it can benefit NASA mission development.
Survey of cyber security issues in smart grids
NASA Astrophysics Data System (ADS)
Chen, Thomas M.
2010-04-01
The future smart grid will enable cost savings and lower energy use by means of smart appliances and smart meters which support dynamic load management and real-time monitoring of energy use and distribution. The introduction of two-way communications and control into the power grid raises security and privacy concerns. This talk will survey the security and privacy issues in smart grids using the NIST reference model, and relate these issues to cyber security in the Internet.
The BioGRID Interaction Database: 2011 update
Stark, Chris; Breitkreutz, Bobby-Joe; Chatr-aryamontri, Andrew; Boucher, Lorrie; Oughtred, Rose; Livstone, Michael S.; Nixon, Julie; Van Auken, Kimberly; Wang, Xiaodong; Shi, Xiaoqi; Reguly, Teresa; Rust, Jennifer M.; Winter, Andrew; Dolinski, Kara; Tyers, Mike
2011-01-01
The Biological General Repository for Interaction Datasets (BioGRID) is a public database that archives and disseminates genetic and protein interaction data from model organisms and humans (http://www.thebiogrid.org). BioGRID currently holds 347 966 interactions (170 162 genetic, 177 804 protein) curated from both high-throughput data sets and individual focused studies, as derived from over 23 000 publications in the primary literature. Complete coverage of the entire literature is maintained for budding yeast (Saccharomyces cerevisiae), fission yeast (Schizosaccharomyces pombe) and thale cress (Arabidopsis thaliana), and efforts to expand curation across multiple metazoan species are underway. The BioGRID houses 48 831 human protein interactions that have been curated from 10 247 publications. Current curation drives are focused on particular areas of biology to enable insights into conserved networks and pathways that are relevant to human health. The BioGRID 3.0 web interface contains new search and display features that enable rapid queries across multiple data types and sources. An automated Interaction Management System (IMS) is used to prioritize, coordinate and track curation across international sites and projects. BioGRID provides interaction data to several model organism databases, resources such as Entrez-Gene and other interaction meta-databases. The entire BioGRID 3.0 data collection may be downloaded in multiple file formats, including PSI MI XML. Source code for BioGRID 3.0 is freely available without any restrictions. PMID:21071413
Electrolyzers Enhancing Flexibility in Electric Grids
Mohanpurkar, Manish; Luo, Yusheng; Terlip, Danny; ...
2017-11-10
This paper presents a real-time simulation with a hardware-in-the-loop (HIL)-based approach for verifying the performance of electrolyzer systems in providing grid support. Hydrogen refueling stations may use electrolyzer systems to generate hydrogen and are proposed as smarter loads that can proactively provide grid services. On the basis of experimental findings, electrolyzer systems with balance of plant are observed to have a high level of controllability and hence can add flexibility to the grid from the demand side. A generic front end controller (FEC) is proposed, which enables an optimal operation of the load on the basis of market and grid conditions. This controller has been simulated and tested in a real-time environment with electrolyzer hardware for a performance assessment. It can optimize the operation of electrolyzer systems on the basis of the information collected by a communication module. Real-time simulation tests are performed to verify the performance of the FEC-driven electrolyzers to provide grid support that enables flexibility, greater economic revenue, and grid support for hydrogen producers under dynamic conditions. In conclusion, the FEC proposed in this paper is tested with electrolyzers; however, it is proposed as a generic control topology that is applicable to any load.
2010-11-01
subsections discuss the design of the simulations. 3.12.1 Lanchester5D Simulation A Lanchester simulation was developed to conduct performance...benchmarks using the WarpIV Kernel and HyperWarpSpeed. The Lanchester simulation contains a user-definable number of grid cells in which blue and red...forces engage in battle using Lanchester equations. Having a user-definable number of grid cells enables the simulation to be stressed with high entity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, Wesley J; Denholm, Paul L; Feldman, David J
During the past decade, solar power has experienced transformative price declines, enabling it to become a viable electricity source that is supplying 1% of U.S. and world electricity. Further cost reductions are expected to enable substantially greater solar deployment, and new Department of Energy cost targets for utility-scale photovoltaics (PV) and concentrating solar thermal power are $0.03/kWh and $0.05/kWh by 2030, respectively. However, cost reductions are no longer the only significant challenge for PV: addressing grid integration challenges and increasing grid flexibility are critical as the penetration of PV electricity on the grid increases. The development of low-cost energy storage is particularly synergistic with low-cost PV, as cost declines in each technology are expected to support greater market opportunities for the other.
Structured grid technology to enable flow simulation in an integrated system environment
NASA Astrophysics Data System (ADS)
Remotigue, Michael Gerard
An application-driven Computational Fluid Dynamics (CFD) environment needs flexible and general tools to effectively solve complex problems in a timely manner. In addition, reusable, portable, and maintainable specialized libraries will aid in rapidly developing integrated systems or procedures. The presented structured grid technology enables the flow simulation for complex geometries by addressing grid generation, grid decomposition/solver setup, solution, and interpretation. Grid generation is accomplished with the graphical, arbitrarily-connected, multi-block structured grid generation software system (GUM-B) developed and presented here. GUM-B is an integrated system comprised of specialized libraries for the graphical user interface and graphical display coupled with a solid-modeling data structure that utilizes a structured grid generation library and a geometric library based on Non-Uniform Rational B-Splines (NURBS). A presented modification of the solid-modeling data structure provides the capability for arbitrarily-connected regions between the grid blocks. The presented grid generation library provides algorithms that are reliable and accurate. GUM-B has been utilized to generate numerous structured grids for complex geometries in hydrodynamics, propulsors, and aerodynamics. The versatility of the libraries that compose GUM-B is also displayed in a prototype to automatically regenerate a grid for a free-surface solution. Grid decomposition and solver setup is accomplished with the graphical grid manipulation and repartition software system (GUMBO) developed and presented here. GUMBO is an integrated system comprised of specialized libraries for the graphical user interface and graphical display coupled with a structured grid-tools library. The described functions within the grid-tools library reduce the possibility of human error during decomposition and setup for the numerical solver by accounting for boundary conditions and connectivity. 
GUMBO is linked with a flow solver interface, to the parallel UNCLE code, to provide load balancing tools and solver setup. Weeks of boundary condition and connectivity specification and validation have been reduced to hours. The UNCLE flow solver is utilized for the solution of the flow field. To accelerate convergence toward a quick engineering answer, a full multigrid (FMG) approach coupled with UNCLE, which is a full approximation scheme (FAS), is presented. The prolongation operators used in the FMG-FAS method are compared. The procedure is demonstrated on a marine propeller in incompressible flow. Interpretation of the solution is accomplished by vortex feature detection. Regions of "Intrinsic Swirl" are located by interrogating the velocity gradient tensor for complex eigenvalues. The "Intrinsic Swirl" parameter is visualized on a solution of a marine propeller to determine if any vortical features are captured. The libraries and the structured grid technology presented herein are flexible and general enough to tackle a variety of complex applications. This technology has significantly enhanced the capability of ERC personnel to effectively calculate solutions for complex geometries.
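The swirl criterion used above for vortex feature detection (complex eigenvalues of the velocity gradient tensor) can be sketched in a few lines. This is a minimal illustration of the test, not the dissertation's implementation:

```python
import numpy as np

def has_swirl(grad_u):
    """Return True if the 3x3 velocity gradient tensor has a
    complex-conjugate eigenvalue pair, indicating locally swirling motion."""
    eigvals = np.linalg.eigvals(grad_u)
    return bool(np.any(np.abs(eigvals.imag) > 1e-12))

# Solid-body rotation about z: eigenvalues are +i, -i, 0 (complex pair)
rotation = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 0.0]])
# Pure shear: nilpotent tensor, all eigenvalues zero (no swirl)
shear = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])

print(has_swirl(rotation))  # True
print(has_swirl(shear))     # False
```

A rotation-dominated point yields a complex-conjugate pair, while strain or shear alone yields purely real eigenvalues, which is why the criterion isolates vortical features.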
Integrated geometry and grid generation system for complex configurations
NASA Technical Reports Server (NTRS)
Akdag, Vedat; Wulf, Armin
1992-01-01
A grid generation system was developed that enables grid generation for complex configurations. The system called ICEM/CFD is described and its role in computational fluid dynamics (CFD) applications is presented. The capabilities of the system include full computer aided design (CAD), grid generation on the actual CAD geometry definition using robust surface projection algorithms, interfacing easily with known CAD packages through common file formats for geometry transfer, grid quality evaluation of the volume grid, coupling boundary condition set-up for block faces with grid topology generation, multi-block grid generation with or without point continuity and block-to-block interface requirements, and generating grid files directly compatible with known flow solvers. The interactive and integrated approach to the problem of computational grid generation not only substantially reduces manpower time but also increases the flexibility of later grid modifications and enhancements, which are required in an environment where CFD is integrated into a product design cycle.
Yang, Tzuhsiung; Berry, John F
2018-06-04
The computation of nuclear second derivatives of energy, or the nuclear Hessian, is an essential routine in quantum chemical investigations of ground and transition states, thermodynamic calculations, and molecular vibrations. Analytic nuclear Hessian computations require the resolution of costly coupled-perturbed self-consistent field (CP-SCF) equations, while numerical differentiation of analytic first derivatives has an unfavorable 6 N ( N = number of atoms) prefactor. Herein, we present a new method in which grid computing is used to accelerate and/or enable the evaluation of the nuclear Hessian via numerical differentiation: NUMFREQ@Grid. Nuclear Hessians were successfully evaluated by NUMFREQ@Grid at the DFT level as well as using RIJCOSX-ZORA-MP2 or RIJCOSX-ZORA-B2PLYP for a set of linear polyacenes with systematically increasing size. For the larger members of this group, NUMFREQ@Grid was found to outperform the wall clock time of analytic Hessian evaluation; at the MP2 or B2LYP levels, these Hessians cannot even be evaluated analytically. We also evaluated a 156-atom catalytically relevant open-shell transition metal complex and found that NUMFREQ@Grid is faster (7.7 times shorter wall clock time) and less demanding (4.4 times less memory requirement) than an analytic Hessian. Capitalizing on the capabilities of parallel grid computing, NUMFREQ@Grid can outperform analytic methods in terms of wall time, memory requirements, and treatable system size. The NUMFREQ@Grid method presented herein demonstrates how grid computing can be used to facilitate embarrassingly parallel computational procedures and is a pioneer for future implementations.
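The embarrassingly parallel structure that NUMFREQ@Grid exploits can be illustrated with a generic central-difference Hessian built from an analytic gradient. The sketch below is an illustration of the idea only: the function name is invented, a thread pool stands in for grid jobs, and n here is the total number of coordinates (3N, giving the 6N displaced-gradient evaluations mentioned above):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def numerical_hessian(gradient, x0, h=1e-4):
    """Central-difference Hessian from an analytic gradient function.
    Each of the 2n displaced-gradient evaluations is independent, so the
    map below parallelizes trivially; on a grid, each task would be
    dispatched as a separate job rather than a local thread."""
    n = len(x0)

    def displaced_grad(task):
        i, sign = task
        x = np.array(x0, dtype=float)
        x[i] += sign * h
        return gradient(x)

    tasks = [(i, s) for i in range(n) for s in (+1, -1)]
    with ThreadPoolExecutor() as pool:
        grads = list(pool.map(displaced_grad, tasks))

    H = np.empty((n, n))
    for i in range(n):
        g_plus, g_minus = grads[2 * i], grads[2 * i + 1]
        H[i, :] = (g_plus - g_minus) / (2 * h)
    return 0.5 * (H + H.T)  # symmetrize to damp round-off noise
```

Because the displaced gradients share no state, wall time scales with the number of workers, which is the advantage the abstract reports for large systems.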
Military Cyberspace: From Evolution to Revolution
2012-02-08
support the GCCs and enable USCYBERCOM to accomplish its mission? 15. SUBJECT TERMS Network Operations, Global Information Grid (GIG), Network...DATE: 08 February 2012 WORD COUNT: 5,405 PAGES: 30 KEY TERMS: Network Operations, Global Information Grid (GIG), Network Architecture...defense of the DOD global information grid (GIG). The DOD must pursue an enterprise approach to network management in the cyberspace domain to
Integration of Grid and Sensor Web for Flood Monitoring and Risk Assessment from Heterogeneous Data
NASA Astrophysics Data System (ADS)
Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii
2013-04-01
Over the last few decades we have witnessed an upward global trend in natural disaster occurrence. Hydrological and meteorological disasters such as floods are the main contributors to this pattern. In recent years flood management has shifted from protection against floods to managing the risks of floods (the European Flood Risk Directive). In order to enable operational flood monitoring and assessment of flood risk, it is required to provide an infrastructure with standardized interfaces and services. Grid and Sensor Web can meet these requirements. In this paper we present a general approach to flood monitoring and risk assessment based on heterogeneous geospatial data acquired from multiple sources. To enable operational flood risk assessment, integration of the Grid and Sensor Web approaches is proposed [1]. Grid represents a distributed environment that integrates heterogeneous computing and storage resources administered by multiple organizations. Sensor Web is an emerging paradigm for integrating heterogeneous satellite and in situ sensors and data systems into a common informational infrastructure that produces products on demand. The basic Sensor Web functionality includes sensor discovery, triggering events by observed or predicted conditions, remote data access, and processing capabilities to generate and deliver data products. Sensor Web is governed by a set of standards, called Sensor Web Enablement (SWE), developed by the Open Geospatial Consortium (OGC). Different practical issues regarding the integration of Sensor Web with Grids are discussed in the study. We show how the Sensor Web can benefit from using Grids and vice versa. For example, Sensor Web services such as SOS, SPS and SAS can benefit from integration with a Grid platform like the Globus Toolkit.
The proposed approach is implemented within the Sensor Web framework for flood monitoring and risk assessment, and a case study of exploiting this framework, namely the Namibia SensorWeb Pilot Project, is described. The project was created as a testbed for evaluating and prototyping key technologies for rapid acquisition and distribution of data products for decision support systems to monitor floods and enable flood risk assessment. The system provides access to real-time products on rainfall estimates and flood potential forecasts derived from the Tropical Rainfall Measuring Mission (TRMM) with a lag time of 6 h, alerts from the Global Disaster Alert and Coordination System (GDACS) with a lag time of 4 h, and the Coupled Routing and Excess STorage (CREST) model to generate alerts. These alerts are used to trigger satellite observations. With the deployed SPS service for NASA's EO-1 satellite, it is possible to automatically task the sensor, with re-imaging possible in less than 8 h. Therefore, with the computational and storage services provided by the Grid and cloud infrastructure, it was possible to generate flood maps within 24-48 h after a trigger was raised. To enable interoperability between system components and services, OGC-compliant standards are utilized. [1] Hluchy L., Kussul N., Shelestov A., Skakun S., Kravchenko O., Gripich Y., Kopp P., Lupian E., "The Data Fusion Grid Infrastructure: Project Objectives and Achievements," Computing and Informatics, 2010, vol. 29, no. 2, pp. 319-334.
Teacher Images in Spain and Turkey: A Cross-Cultural Study
ERIC Educational Resources Information Center
Aslan, Nese
2016-01-01
The purpose of this study was to investigate the metaphorical images of "teacher" produced by 55 Spanish and 72 Turkish preservice teachers at Universitat de Barcelona, in Barcelona, Spain, and at Ege University, in Izmir, Turkey. It is based on a theory of teacher socialization which affirms that cultural values have an impact on the…
Social and Emotional Outcomes of Child Sexual Abuse: A Clinical Sample in Turkey
ERIC Educational Resources Information Center
Ozbaran, Burcu; Erermis, Serpil; Bukusoglu, Nagehan; Bildik, Tezan; Tamar, Muge; Ercan, Eyyup Sabri; Aydin, Cahide; Cetin, Saniye Korkmaz
2009-01-01
Childhood sexual abuse is a traumatic life event that may cause psychiatric disorders such as posttraumatic stress disorder and depression. During 2003-2004, 20 sexually abused children were referred to the Child and Adolescent Psychiatry Clinic of Ege University in Izmir, Turkey. Two years later, the psychological adjustment of these children (M…
The Meaning of Marriage According to University Students: A Phenomenological Study
ERIC Educational Resources Information Center
Koçyigit Özyigit, Melike
2017-01-01
The aim of this study is to reveal the meanings university students attribute to marriage. The sample of the study consists of 14 final year students (7 males and 7 females), whose ages range between 22 and 32, studying in the Education Faculty at Ege University. The study is of "phenomenological research design". "Semi-structured…
Packet spacing : an enabling mechanism for delivering multimedia content in computational grids /
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, A. C.; Feng, W. C.; Belford, Geneva G.
2001-01-01
Streaming multimedia with UDP has become increasingly popular over distributed systems like the Internet. Scientific applications that stream multimedia include remote computational steering of visualization data and video-on-demand teleconferencing over the Access Grid. However, UDP does not possess a self-regulating congestion-control mechanism, and most best-effort traffic is served by congestion-controlled TCP. Consequently, UDP steals bandwidth from TCP such that TCP flows starve for network resources. With the volume of Internet traffic continuing to increase, the perpetuation of UDP-based streaming will cause the Internet to collapse as it did in the mid-1980s due to the use of non-congestion-controlled TCP. To address this problem, we introduce the counterintuitive notion of inter-packet spacing with control feedback to enable UDP-based applications to perform well in the next-generation Internet and computational grids. When compared with traditional UDP-based streaming, we illustrate that our approach can reduce packet loss by over 50% without adversely affecting delivered throughput. Keywords: network protocol, multimedia, packet spacing, streaming, TCP, UDP, rate-adjusting congestion control, computational grid, Access Grid.
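The inter-packet spacing idea is simple to state in code. The sketch below computes the gap that paces a stream to a target bitrate, with a crude multiplicative back-off on reported loss standing in for the paper's actual control-feedback law (the function names and the 5%/0.8 constants are illustrative assumptions, not values from the paper):

```python
import time

def inter_packet_gap(payload_bytes, target_bps):
    """Seconds to wait between sends so the stream averages target_bps."""
    return payload_bytes * 8 / target_bps

def stream(packets, target_bps, send, loss_feedback=None):
    """Send packets with spacing; if the receiver reports high loss,
    back off the target rate (a stand-in for real control feedback)."""
    for pkt in packets:
        send(pkt)
        if loss_feedback and loss_feedback() > 0.05:  # >5% loss reported
            target_bps *= 0.8                         # multiplicative back-off
        time.sleep(inter_packet_gap(len(pkt), target_bps))
```

Spacing sends evenly instead of in bursts smooths queue occupancy at routers, which is why it can cut loss without reducing average throughput.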
Data distribution service-based interoperability framework for smart grid testbed infrastructure
Youssef, Tarek A.; Elsayed, Ahmed T.; Mohammed, Osama A.
2016-03-02
This study presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurements and control network. The advantages of utilizing the data-centric over the message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamic participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programming interface for the testbed infrastructure are developed in order to facilitate interoperability and remote access to the testbed. This interface allows control, monitoring, and performing of experiments remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).
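The data-centric (versus message-centric) distinction can be illustrated with a toy last-value cache keyed by topic. This is not the DDS API, just a sketch of why late-joining subscribers see current state without a broker holding a single point of failure (all names are invented for illustration):

```python
class DataBus:
    """Toy data-centric bus: the latest sample for each topic lives on
    the bus itself, so a subscriber that joins late still receives the
    current value immediately (here a plain dict stands in for the
    distributed shared state a real DDS implementation maintains)."""
    def __init__(self):
        self._last = {}   # topic -> last published sample
        self._subs = {}   # topic -> list of callbacks

    def publish(self, topic, sample):
        self._last[topic] = sample
        for cb in self._subs.get(topic, []):
            cb(sample)

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)
        if topic in self._last:  # late joiner gets current state
            callback(self._last[topic])

bus = DataBus()
bus.publish("feeder1/voltage", 2401.7)        # published before anyone listens
readings = []
bus.subscribe("feeder1/voltage", readings.append)  # late joiner
bus.publish("feeder1/voltage", 2398.2)
print(readings)  # [2401.7, 2398.2]
```

In a message-centric design the first sample would be lost to the late joiner; making the data itself the shared abstraction is what enables the automatic discovery and peer-to-peer behavior described above.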
Grid-Enabled Quantitative Analysis of Breast Cancer
2010-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer...research, we designed a pilot study utilizing large-scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis. Also
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
2015-01-12
The combined team of GE Global Research, Federal Express, National Renewable Energy Laboratory, and Consolidated Edison has successfully achieved the established goals contained within the Department of Energy's Smart Grid Capable Electric Vehicle Supply Equipment funding opportunity. The final program product, shown charging two vehicles in Figure 1, reduces by nearly 50% the total installed system cost of the electric vehicle supply equipment (EVSE) as well as enabling a host of new Smart Grid enabled features. These include bi-directional communications, load control, utility message exchange and transaction management information. Using the new charging system, utilities or energy service providers will now be able to monitor transportation-related electrical loads on their distribution networks, send load control commands or preferences to individual systems, and then see measured responses. Installation owners will be able to authorize usage of the stations, monitor operations, and optimally control their electricity consumption. These features and cost reductions have been developed through a total system design solution.
An Extensible Information Grid for Risk Management
NASA Technical Reports Server (NTRS)
Maluf, David A.; Bell, David G.
2003-01-01
This paper describes recent work on developing an extensible information grid for risk management at NASA - a RISK INFORMATION GRID. This grid is being developed by integrating information grid technology with risk management processes for a variety of risk related applications. To date, RISK GRID applications are being developed for three main NASA processes: risk management - a closed-loop iterative process for explicit risk management, program/project management - a proactive process that includes risk management, and mishap management - a feedback loop for learning from historical risks that escaped other processes. This is enabled through an architecture involving an extensible database, structuring information with XML, schemaless mapping of XML, and secure server-mediated communication using standard protocols.
Coalition FORCEnet Implementation Analysis
2006-09-01
C2 grid, and Engagement grid. As a result, enabled Network-Centric warfare for Coalition Forces shows a significant increase in capabilities. Joint...209 14. SUBJECT TERMS FORCEnet, Coalition Forces, AUSCANNZUKUS, Network-Centric Warfare (NCW), Data Mining, EXTEND Modeling, Expeditionary...NETWORK-CENTRIC WARFARE AND FORCENET .....................................................................................................1 B
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-11
... good example of an enabling Smart Grid technology that can empower both utilities and consumers to... Information and Communication Technologies (ICT) sector by integrating broadband into the developing Smart...
Large temporal scale and capacity subsurface bulk energy storage with CO2
NASA Astrophysics Data System (ADS)
Saar, M. O.; Fleming, M. R.; Adams, B. M.; Ogland-Hand, J.; Nelson, E. S.; Randolph, J.; Sioshansi, R.; Kuehn, T. H.; Buscheck, T. A.; Bielicki, J. M.
2017-12-01
Decarbonizing energy systems by increasing the penetration of variable renewable energy (VRE) technologies requires efficient and short- to long-term energy storage. Very large amounts of energy can be stored in the subsurface as heat and/or pressure energy in order to provide both short- and long-term (seasonal) storage, depending on the implementation. This energy storage approach can be quite efficient, especially where geothermal energy is naturally added to the system. Here, we present subsurface heat and/or pressure energy storage with supercritical carbon dioxide (CO2) and discuss the system's efficiency, deployment options, as well as its advantages and disadvantages, compared to several other energy storage options. CO2-based subsurface bulk energy storage has the potential to be particularly efficient and large-scale, both temporally (i.e., seasonal) and spatially. The latter refers to the amount of energy that can be stored underground, using CO2, at a geologically conducive location, potentially enabling storing excess power from a substantial portion of the power grid. The implication is that it would be possible to employ centralized energy storage for (a substantial part of) the power grid, where the geology enables CO2-based bulk subsurface energy storage, whereas the VRE technologies (solar, wind) are located on that same power grid, where (solar, wind) conditions are ideal. However, this may require reinforcing the power grid's transmission lines in certain parts of the grid to enable high-load power transmission from/to a few locations.
NASA Astrophysics Data System (ADS)
Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats
2014-06-01
Scientific communities have been at the forefront of adopting new computing technologies and methodologies. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of science driven by "Big Data" will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on the cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt
Scientific communities have been at the forefront of adopting new computing technologies and methodologies. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of science driven by "Big Data" will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on the cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.
Parallel Cartesian grid refinement for 3D complex flow simulations
NASA Astrophysics Data System (ADS)
Angelidis, Dionysios; Sotiropoulos, Fotis
2013-11-01
A second order accurate method for discretizing the Navier-Stokes equations on 3D unstructured Cartesian grids is presented. Although the grid generator is based on the oct-tree hierarchical method, a fully unstructured data structure is adopted, enabling robust calculations for incompressible flows and avoiding both the need for synchronization of the solution between different levels of refinement and the usage of prolongation/restriction operators. The current solver implements a hybrid staggered/non-staggered grid layout, employing the implicit fractional step method to satisfy the continuity equation. The pressure-Poisson equation is discretized by using a novel second order fully implicit scheme for unstructured Cartesian grids and solved using an efficient Krylov subspace solver. The momentum equation is also discretized with second order accuracy, and the high performance Newton-Krylov method is used for integrating it in time. Neumann and Dirichlet conditions are used to validate the Poisson solver against analytical functions, and grid refinement results in a significant reduction of the solution error. The effectiveness of the fractional step method results in the stability of the overall algorithm and enables the performance of accurate multi-resolution real life simulations. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482.
de Araújo, Paulo Régis C; Filho, Raimir Holanda; Rodrigues, Joel J P C; Oliveira, João P C M; Braga, Stephanie A
2018-04-24
At present, the standardisation of electrical equipment communications is on the rise. In particular, manufacturers are releasing equipment for the smart grid endowed with communication protocols such as DNP3, IEC 61850, and MODBUS. However, there is legacy equipment operating in the electricity distribution network that cannot communicate using any of these protocols. Thus, we propose an infrastructure to allow the integration of legacy electrical equipment into smart grids by using wireless sensor networks (WSNs). In this infrastructure, each legacy electrical device is connected to a sensor node, and the sink node runs a middleware that enables the integration of this device into a smart grid based on suitable communication protocols. This middleware performs tasks such as the translation of messages between the power substation control centre (PSCC) and electrical equipment in the smart grid. Moreover, the infrastructure satisfies certain requirements for communication between the electrical equipment and the PSCC, such as enhanced security, short response time, and automatic configuration. The paper's contributions include a solution that enables electrical companies to integrate their legacy equipment into smart-grid networks relying on any of the above-mentioned communication protocols. This integration will reduce the costs related to the modernisation of power substations.
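The middleware's core translation task can be sketched as follows; the register map, scaling factor, and message fields below are invented for illustration and do not reflect the real DNP3/IEC 61850/MODBUS mappings, which are far richer.

```python
# Hypothetical register map for a legacy device; real deployments
# would load this from the device's documentation or configuration.
REGISTER_MAP = {40001: "line_voltage_v", 40002: "line_current_a"}

def translate_modbus_reading(register, raw_value, scale=0.1):
    """Translate a raw MODBUS holding-register reading into a neutral
    key/value message that the sink-node middleware can forward to the
    power substation control centre (PSCC)."""
    if register not in REGISTER_MAP:
        raise KeyError(f"unmapped register {register}")
    return {"point": REGISTER_MAP[register],
            "value": raw_value * scale,
            "source": "legacy-device"}
```

The sink node would run one such translator per protocol, so the PSCC sees a uniform message format regardless of what the legacy equipment speaks.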
de Araújo, Paulo Régis C.; Filho, Raimir Holanda; Oliveira, João P. C. M.; Braga, Stephanie A.
2018-01-01
At present, the standardisation of electrical equipment communications is on the rise. In particular, manufacturers are releasing equipment for the smart grid endowed with communication protocols such as DNP3, IEC 61850, and MODBUS. However, there is legacy equipment operating in the electricity distribution network that cannot communicate using any of these protocols. Thus, we propose an infrastructure to allow the integration of legacy electrical equipment into smart grids by using wireless sensor networks (WSNs). In this infrastructure, each legacy electrical device is connected to a sensor node, and the sink node runs a middleware that enables the integration of this device into a smart grid based on suitable communication protocols. This middleware performs tasks such as the translation of messages between the power substation control centre (PSCC) and electrical equipment in the smart grid. Moreover, the infrastructure satisfies certain requirements for communication between the electrical equipment and the PSCC, such as enhanced security, short response time, and automatic configuration. The paper's contributions include a solution that enables electrical companies to integrate their legacy equipment into smart-grid networks relying on any of the above-mentioned communication protocols. This integration will reduce the costs related to the modernisation of power substations. PMID:29695099
Wide-area, real-time monitoring and visualization system
Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.
2013-03-19
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a database, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
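The monitor computer's metric-checking loop can be sketched conceptually; the metric names, threshold bands, and alarm format below are invented for illustration and are not taken from the patent.

```python
# Hypothetical allowed bands per metric: (low, high). A real system
# would draw these from reliability standards and operator settings.
THRESHOLDS = {"frequency_hz": (59.95, 60.05), "line_loading_pct": (0.0, 90.0)}

def evaluate_metrics(area, readings):
    """Flag any reading outside its allowed band, tagged with its
    control area so a display computer in a *different* control area
    can still monitor that grid portion."""
    alarms = []
    for name, value in readings.items():
        lo, hi = THRESHOLDS[name]
        if not (lo <= value <= hi):
            alarms.append((area, name, value))
    return alarms
```

Alarms tagged with the originating control area are what make the wide-area aspect work: the visualization layer can merge alarms from all areas onto one display.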
Wide-area, real-time monitoring and visualization system
Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA
2011-11-15
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a database, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Real-time performance monitoring and management system
Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA
2007-06-19
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a database, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalsi, Karan; Fuller, Jason C.; Somani, Abhishek
Disclosed herein are representative embodiments of methods, apparatus, and systems for facilitating operation and control of a resource distribution system (such as a power grid). Among the disclosed embodiments is a distributed hierarchical control architecture (DHCA) that enables smart grid assets to effectively contribute to grid operations in a controllable manner, while helping to ensure system stability and equitably rewarding their contribution. Embodiments of the disclosed architecture can help unify the dispatch of these resources to provide both market-based and balancing services.
High energy collimating fine grids for HESP program
NASA Technical Reports Server (NTRS)
Eberhard, Carol D.; Frazier, Edward
1993-01-01
There is a need to develop fine pitch x-ray collimator grids as an enabling technology for planned future missions. The grids consist of an array of thin parallel strips of x-ray absorbing material, such as tungsten, with pitches ranging from 34 microns to 2.036 millimeters. The grids are the key components of a new class of spaceborne instruments known as 'x-ray modulation collimators.' These instruments are the first to produce images of celestial sources in the hard x-ray and gamma-ray spectral regions.
Spaceflight Operations Services Grid (SOSG) Project
NASA Technical Reports Server (NTRS)
Bradford, Robert; Lisotta, Anthony
2004-01-01
The motivation, goals, and objectives of the Space Operations Services Grid Project (SOSG) are covered in this viewgraph presentation. The goals and objectives of SOSG include: 1) Developing a grid-enabled prototype providing Space-based ground operations end user services through a collaborative effort between NASA, academia, and industry to assess the technical and cost feasibility of implementation of Grid technologies in the Space Operations arena; 2) Provide to space operations organizations and processes, through a single secure portal(s), access to all the information technology (Grid and Web based) services necessary for program/project development, operations and the ultimate creation of new processes, information and knowledge.
The neural component-process architecture of endogenously generated emotion
Kanske, Philipp; Singer, Tania
2017-01-01
Despite the ubiquity of endogenous emotions and their role in both resilience and pathology, the processes supporting their generation are largely unknown. We propose a neural component process model of endogenous generation of emotion (EGE) and test it in two functional magnetic resonance imaging (fMRI) experiments (N = 32/293) where participants generated and regulated positive and negative emotions based on internal representations, using self-chosen generation methods. EGE activated nodes of salience (SN), default mode (DMN) and frontoparietal control (FPCN) networks. Component processes implemented by these networks were established by investigating their functional associations, activation dynamics and integration. SN activation correlated with subjective affect, with midbrain nodes exclusively distinguishing between positive and negative affect intensity, showing dynamics consistent with the generation of core affect. Dorsomedial DMN, together with ventral anterior insula, formed a pathway supporting multiple generation methods, with activation dynamics suggesting it is involved in the generation of elaborated experiential representations. SN and DMN both coupled to left frontal FPCN, which in turn was associated with both subjective affect and representation formation, consistent with FPCN supporting the executive coordination of the generation process. These results provide a foundation for research into endogenous emotion in normal, pathological and optimal function. PMID:27522089
DOE Office of Scientific and Technical Information (OSTI.GOV)
Youssef, Tarek A.; Elsayed, Ahmed T.; Mohammed, Osama A.
This study presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurements and control network. The advantages of utilizing the data-centric over message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves the communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamic participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programming interface for the testbed infrastructure are developed in order to facilitate interoperability and remote access to the testbed. This interface allows control, monitoring, and performing of experiments remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).
Solving Navigational Uncertainty Using Grid Cells on Robots
Milford, Michael J.; Wiles, Janet; Wyeth, Gordon F.
2010-01-01
To successfully navigate their habitats, many mammals use a combination of two mechanisms, path integration and calibration using landmarks, which together enable them to estimate their location and orientation, or pose. In large natural environments, both these mechanisms are characterized by uncertainty: the path integration process is subject to the accumulation of error, while landmark calibration is limited by perceptual ambiguity. It remains unclear how animals form coherent spatial representations in the presence of such uncertainty. Navigation research using robots has determined that uncertainty can be effectively addressed by maintaining multiple probabilistic estimates of a robot's pose. Here we show how conjunctive grid cells in dorsocaudal medial entorhinal cortex (dMEC) may maintain multiple estimates of pose using a brain-based robot navigation system known as RatSLAM. Based both on rodent spatially-responsive cells and functional engineering principles, the cells at the core of the RatSLAM computational model have similar characteristics to rodent grid cells, which we demonstrate by replicating the seminal Moser experiments. We apply the RatSLAM model to a new experimental paradigm designed to examine the responses of a robot or animal in the presence of perceptual ambiguity. Our computational approach enables us to observe short-term population coding of multiple location hypotheses, a phenomenon which would not be easily observable in rodent recordings. We present behavioral and neural evidence demonstrating that the conjunctive grid cells maintain and propagate multiple estimates of pose, enabling the correct pose estimate to be resolved over time even without uniquely identifying cues. While recent research has focused on the grid-like firing characteristics, accuracy and representational capacity of grid cells, our results identify a possible critical and unique role for conjunctive grid cells in filtering sensory uncertainty. 
We anticipate our study to be a starting point for animal experiments that test navigation in perceptually ambiguous environments. PMID:21085643
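The idea of maintaining and reweighting multiple pose hypotheses under ambiguous observations can be sketched in a toy one-dimensional form; this is a simplification in the spirit of RatSLAM's population coding, not its actual attractor-network implementation, and the Gaussian likelihood is an assumption of the sketch.

```python
import math

def update_hypotheses(hypotheses, observed, sigma=1.0):
    """Reweight competing 1D pose hypotheses by their agreement with
    an (ambiguous) observation, using a Gaussian likelihood, then
    renormalise so the weights remain a probability distribution."""
    weighted = {pose: w * math.exp(-((pose - observed) ** 2) / (2.0 * sigma ** 2))
                for pose, w in hypotheses.items()}
    total = sum(weighted.values())
    return {pose: w / total for pose, w in weighted.items()}
```

Starting from two equally weighted hypotheses (as after a perceptually ambiguous cue), repeated observations consistent with one location let that hypothesis win over time, mirroring how the correct pose estimate is resolved without uniquely identifying cues.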
NASA Astrophysics Data System (ADS)
Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi
2017-01-01
Workflow for remote sensing quantitative retrieval is the 'bridge' between Grid services and Grid-enabled application of remote sensing quantitative retrieval. Workflow averts low-level implementation details of the Grid and hence enables users to focus on higher levels of application. The workflow for remote sensing quantitative retrieval plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction and implementation of large-scale complicated applications of remote sensing science. The validation of workflow is important in order to support the large-scale sophisticated scientific computation processes with enhanced performance and to minimize potential waste of time and resources. To research the semantic correctness of user-defined workflows, in this paper, we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing the knowledge with ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data source consistency error validation and parameters matching error validation.
e-Science and its implications.
Hey, Tony; Trefethen, Anne
2003-08-15
After a definition of e-science and the Grid, the paper begins with an overview of the technological context of Grid developments. NASA's Information Power Grid is described as an early example of a 'prototype production Grid'. The discussion of e-science and the Grid is then set in the context of the UK e-Science Programme and is illustrated with reference to some UK e-science projects in science, engineering and medicine. The Open Standards approach to Grid middleware adopted by the community in the Global Grid Forum is described and compared with community-based standardization processes used for the Internet, MPI, Linux and the Web. Some implications of the imminent data deluge that will arise from the new generation of e-science experiments in terms of archiving and curation are then considered. The paper concludes with remarks about social and technological issues posed by Grid-enabled 'collaboratories' in both scientific and commercial contexts.
An optimized top contact design for solar cell concentrators
NASA Technical Reports Server (NTRS)
Desalvo, Gregory C.; Barnett, Allen M.
1985-01-01
A new grid optimization scheme is developed for point focus solar cell concentrators which employs a separated grid and busbar concept. Ideally, grid lines act as the primary current collectors and receive all of the current from the semiconductor region. Busbars are the secondary collectors which pick up current from the grids and carry it out of the active region of the solar cell. This separation of functions leads to a multithickness metallization design, where the busbars are made larger in cross section than the grids. This enables the busbars to carry more current per unit area of shading, which is advantageous under high solar concentration where large current densities are generated. Optimized grid patterns using this multilayer concept can provide a 1.6 to 20 percent increase in output power efficiency over optimized single thickness grids.
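The advantage of thicker busbars can be illustrated with a back-of-envelope resistive-loss calculation; the dimensions and the copper resistivity below are assumptions for illustration, not values from the paper.

```python
# For a fixed shaded width w and length L, a busbar's resistance
# falls as 1/thickness, so the I^2*R loss drops while the shading
# penalty (w * L of blocked cell area) stays constant. This is the
# rationale for multithickness metallization.
RHO = 1.7e-8  # resistivity of copper, ohm*m (assumed metallisation)

def busbar_loss(current, length, width, thickness):
    """Joule loss (watts) in a rectangular busbar of given geometry."""
    resistance = RHO * length / (width * thickness)
    return current ** 2 * resistance
```

Doubling the thickness halves the loss for the same shaded area, which is exactly the "more current per unit area of shading" benefit the abstract describes for high concentration.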
Advanced Power Electronics and Smart Inverters | Grid Modernization | NREL
NREL's advanced power electronics and smart inverter activities develop inverters that provide grid services such as voltage and frequency regulation, ride-through, and dynamic current injection; study the impacts of smart inverters on distribution systems; and combine high-voltage silicon carbide with concepts such as additive manufacturing.
Stability Test for Transient-Temperature Calculations
NASA Technical Reports Server (NTRS)
Campbell, W.
1984-01-01
Graphical test helps assure numerical stability of calculations of transient temperature or diffusion in composite medium. Rectangular grid forms basis of two-dimensional finite-difference model for heat conduction or other diffusion like phenomena. Model enables calculation of transient heat transfer among up to four different materials that meet at grid point.
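The report describes a graphical stability test; a numeric analogue for a uniform two-dimensional explicit scheme is the standard Fourier-number criterion dt <= h^2 / (4 * alpha_max), sketched below using the largest thermal diffusivity among the materials meeting at a node. The criterion is a textbook stand-in, not the report's graphical construction.

```python
def max_stable_timestep(h, diffusivities):
    """Largest explicit time step (seconds) that keeps a 2D
    finite-difference conduction update stable at a grid point shared
    by several materials. h is the grid spacing in metres;
    diffusivities are thermal diffusivities in m^2/s. The most
    diffusive material sets the binding constraint."""
    return h * h / (4.0 * max(diffusivities))
```

For example, with a 1 cm grid and materials whose diffusivities span 1e-5 to 4e-5 m^2/s, the faster material limits the step, just as the graphical test identifies the binding material at a four-material junction.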
Open Science in the Cloud: Towards a Universal Platform for Scientific and Statistical Computing
NASA Astrophysics Data System (ADS)
Chine, Karim
The UK, through the e-Science program, the US through the NSF-funded cyber infrastructure and the European Union through the ICT Calls aimed to provide "the technological solution to the problem of efficiently connecting data, computers, and people with the goal of enabling derivation of novel scientific theories and knowledge".1 The Grid (Foster, 2002; Foster; Kesselman, Nick, & Tuecke, 2002), foreseen as a major accelerator of discovery, didn't meet the expectations it had excited at its beginnings and was not adopted by the broad population of research professionals. The Grid is a good tool for particle physicists and it has allowed them to tackle the tremendous computational challenges inherent to their field. However, as a technology and paradigm for delivering computing on demand, it doesn't work and it can't be fixed. On one hand, "the abstractions that Grids expose - to the end-user, to the deployers and to application developers - are inappropriate and they need to be higher level" (Jha, Merzky, & Fox), and on the other hand, academic Grids are inherently economically unsustainable. They can't compete with a service outsourced to the Industry whose quality and price would be driven by market forces. The virtualization technologies and their corollary, the Infrastructure-as-a-Service (IaaS) style cloud, hold the promise to enable what the Grid failed to deliver: a sustainable environment for computational sciences that would lower the barriers for accessing federated computational resources, software tools and data; enable collaboration and resources sharing and provide the building blocks of a ubiquitous platform for traceable and reproducible computational research.
ERIC Educational Resources Information Center
Kolhede, Eric
2001-01-01
This 5-year study of undergraduates at a small western private college revealed similarities and differences between males and females in their expectations of business programs (e.g., women's greater desire for experiential learning), which point to product development and promotional strategies that can be targeted toward female students. (EV)
M1 Abrams Tank Procedure Guides
1982-07-01
Baker, Kyri; Jin, Xin; Vaidynathan, Deepthi; Jones, Wesley; Christensen, Dane; Sparn, Bethany; Woods, Jason; Sorensen, Harry; Lunacek, Monte
2016-08-04
Dataset demonstrating the potential benefits that residential buildings can provide for frequency regulation services in the electric power grid. In a hardware-in-the-loop (HIL) implementation, simulated homes along with a physical laboratory home are coordinated via a grid aggregator, and it is shown that their aggregate response has the potential to follow the regulation signal on a timescale of seconds. Connected (communication-enabled) devices in the National Renewable Energy Laboratory's (NREL's) Energy Systems Integration Facility (ESIF) received demand response (DR) requests from a grid aggregator, and the devices responded accordingly to meet the signal while satisfying user comfort bounds and physical hardware limitations.
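The aggregator's task of splitting a regulation request across homes while respecting per-home limits can be sketched as below; the proportional dispatch rule, home names, and flexibility figures are invented for illustration and are not NREL's actual control logic.

```python
def dispatch_regulation(request_kw, flexibility_kw):
    """Allocate a regulation request across homes in proportion to
    each home's remaining flexibility (its comfort and hardware
    limits), never asking any home for more than it can deliver."""
    total = sum(flexibility_kw.values())
    share = min(1.0, request_kw / total) if total else 0.0
    return {home: flex * share for home, flex in flexibility_kw.items()}
```

Because each allocation is capped by the home's own flexibility, the aggregate tracks the regulation signal whenever the fleet has enough headroom, and saturates gracefully when it does not.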
Grid-based Meteorological and Crisis Applications
NASA Astrophysics Data System (ADS)
Hluchy, Ladislav; Bartok, Juraj; Tran, Viet; Lucny, Andrej; Gazak, Martin
2010-05-01
We present several applications from the domains of meteorology and crisis management that we have developed and/or plan to develop. In particular, we present IMS Model Suite, a complex software system designed to address the needs of accurate forecasting of weather and hazardous weather phenomena, environmental pollution assessment, and prediction of the consequences of a nuclear accident and radiological emergency. We discuss the requirements on computational resources and our experience in meeting them with grid computing. The process of pollution assessment and prediction of the consequences in case of a radiological emergency results in complex data flows and workflows among databases, models and simulation tools (geographical databases, meteorological and dispersion models, etc.). A pollution assessment and prediction requires running a 3D meteorological model (4 nests with resolution from 50 km to 1.8 km centered on the nuclear power plant site, 38 vertical levels) as well as running the dispersion model performing the simulation of the release transport and deposition of the pollutant with respect to the numeric weather prediction data, released material description, topography, land use description and a user-defined simulation scenario. Several post-processing options can be selected according to the particular situation (e.g. dose calculation). Another example is the forecasting of fog, one of the meteorological phenomena hazardous to aviation as well as road traffic. It requires a complicated physical model and high resolution meteorological modeling due to its dependence on local conditions (precise topography, shorelines and land use classes). An installed fog modeling system requires a 4-times nested parallelized 3D meteorological model with 1.8 km horizontal resolution and 42 levels vertically (approx. 1 million points in 3D space) to be run four times daily. The 3D model outputs and a multitude of local measurements are utilized by an SPMD-parallelized 1D fog model run every hour.
The fog forecast model is subject to parameterization and parameter optimization before its real deployment. The parameter optimization requires tens of evaluations of the parameterized model's accuracy, and each evaluation of the model parameters requires re-running hundreds of meteorological situations collected over the years and comparing the model output with the observed data. The architecture and inherent heterogeneity of both examples, their computational complexity and their interfaces to other systems and services make them well suited for decomposition into a set of web and grid services. Such decomposition has been performed within several projects we participated or participate in, in cooperation with the academic sphere, namely int.eu.grid (dispersion model deployed as a pilot application to an interactive grid), SEMCO-WS (semantic composition of web and grid services), DMM (development of a significant meteorological phenomena prediction system based on data mining), VEGA 2009-2011 and EGEE III. We present useful and practical applications of high performance computing technologies. The use of grid technology provides access to much higher computational power not only for modeling and simulation, but also for model parameterization and validation. This results in optimized model parameters and more accurate simulation outputs. Taking into account that the simulations are used for aviation, road traffic and crisis management, even a small improvement in prediction accuracy may result in a significant improvement of safety as well as cost reduction. We have found grid computing useful for our applications. We are satisfied with this technology, and our experience encourages us to extend its use. Within an ongoing project (DMM) we plan to include processing of satellite images, which rapidly increases our computational requirements.
We believe that thanks to grid computing we are able to handle the job almost in real time.
Orchestrating Bulk Data Movement in Grid Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vazhkudai, SS
2005-01-25
Data Grids provide a convenient environment for researchers to manage and access massively distributed bulk data by addressing several system and transfer challenges inherent to these environments. This work addresses issues involved in the efficient selection and access of replicated data in Grid environments in the context of the Globus Toolkit, building middleware that (1) selects datasets in highly replicated environments, enabling efficient scheduling of data transfer requests; (2) predicts transfer times of bulk wide-area data transfers using extensive statistical analysis; and (3) co-allocates bulk data transfer requests, enabling parallel downloads from mirrored sites. These efforts have demonstrated a decentralized data scheduling architecture, a set of forecasting tools that predict bandwidth availability within 15% error, and a co-allocation architecture and heuristics that expedite data downloads by up to 2 times.
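The co-allocation idea, splitting one bulk transfer across mirrored replicas in proportion to predicted bandwidth so that all streams finish at roughly the same time, can be sketched as follows; the site names and predicted rates are illustrative, and the real middleware's heuristics are more elaborate.

```python
def partition_transfer(size_mb, predicted_mbps):
    """Assign each mirror a chunk of the file sized in proportion to
    its predicted transfer rate, so all parallel streams are expected
    to finish together (the idealised co-allocation objective)."""
    total = sum(predicted_mbps.values())
    return {site: size_mb * rate / total
            for site, rate in predicted_mbps.items()}
```

With bandwidth forecasts accurate to within 15%, such proportional partitioning is what lets parallel downloads from mirrored sites beat a single-stream transfer by the reported factor of up to 2.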
Intelligent automated surface grid generation
NASA Technical Reports Server (NTRS)
Yao, Ke-Thia; Gelsey, Andrew
1995-01-01
The goal of our research is to produce a flexible, general grid generator for automated use by other programs, such as numerical optimizers. The current trend in the gridding field is toward interactive gridding. Interactive gridding more readily taps into the spatial reasoning abilities of the human user through the use of a graphical interface with a mouse. However, a sometimes fruitful approach to generating new designs is to apply an optimizer with shape modification operators to improve an initial design. In order for this approach to be useful, the optimizer must be able to automatically grid and evaluate the candidate designs. This paper describes an intelligent gridder that is capable of analyzing the topology of the spatial domain and predicting approximate physical behaviors based on the geometry of the spatial domain to automatically generate grids for computational fluid dynamics simulators. Typically, gridding programs are given a partitioning of the spatial domain to assist the gridder. Our gridder is capable of performing this partitioning itself. This enables the gridder to automatically grid spatial domains with a wide range of configurations.
Wide-area situation awareness in electric power grid
NASA Astrophysics Data System (ADS)
Greitzer, Frank L.
2010-04-01
Two primary elements of the US energy policy are demand management and efficiency and renewable sources. Major objectives are clean energy transmission and integration, reliable energy transmission, and grid cyber security. Development of the Smart Grid seeks to achieve these goals by lowering energy costs for consumers, achieving energy independence and reducing greenhouse gas emissions. The Smart Grid is expected to enable real time wide-area situation awareness (SA) for operators. Requirements for wide-area SA have been identified among interoperability standards proposed by the Federal Energy Regulatory Commission and the National Institute of Standards and Technology to ensure smart-grid functionality. Wide-area SA and enhanced decision support and visualization tools are key elements in the transformation to the Smart Grid. This paper discusses human factors research to promote SA in the electric power grid and the Smart Grid. Topics that will be discussed include the role of human factors in meeting US energy policy goals, the impact and challenges for Smart Grid development, and cyber security challenges.
Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids
NASA Astrophysics Data System (ADS)
Angelidis, Dionysios; Sotiropoulos, Fotis
2014-11-01
Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method enabling the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids adapted to accurately resolve tip vortices and multi-turbine interactions, is presented. Second order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award number NSF PFI:BIC 1318201.
NASA Technical Reports Server (NTRS)
Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)
1998-01-01
Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, the physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results for the aeroheating environment experienced by the X-38 vehicle. Special topics in grid generation strategies are also addressed, including the modeling of control surface deflections and material mapping.
Towards Dynamic Authentication in the Grid — Secure and Mobile Business Workflows Using GSet
NASA Astrophysics Data System (ADS)
Mangler, Jürgen; Schikuta, Erich; Witzany, Christoph; Jorns, Oliver; Ul Haq, Irfan; Wanek, Helmut
Until now, the research community has mainly focused on the technical aspects of Grid computing and neglected commercial issues. Recently, however, the community has come to accept that the success of the Grid crucially depends on commercial exploitation. In our vision, Foster's and Kesselman's statement "The Grid is all about sharing." has to be extended by "... and making money out of it!". To realize this vision, the trustworthiness of the underlying technology needs to be ensured. This can be achieved by the use of gSET (Gridified Secure Electronic Transaction) as a basic technology for trust management and secure accounting in the presented Grid-based workflow. We present a framework, conceptually and technically, from the area of the Mobile Grid, which demonstrates that the Grid infrastructure is a viable platform for enabling commercially successful business workflows.
Non-Gaussian power grid frequency fluctuations characterized by Lévy-stable laws and superstatistics
NASA Astrophysics Data System (ADS)
Schäfer, Benjamin; Beck, Christian; Aihara, Kazuyuki; Witthaut, Dirk; Timme, Marc
2018-02-01
Multiple types of fluctuations impact the collective dynamics of power grids and thus challenge their robust operation. Fluctuations result from processes as different as dynamically changing demands, energy trading and an increasing share of renewable power feed-in. Here we analyse principles underlying the dynamics and statistics of power grid frequency fluctuations. Considering frequency time series for a range of power grids, including grids in North America, Japan and Europe, we find a strong deviation from Gaussianity, best described by Lévy-stable and q-Gaussian distributions. We present a coarse framework to analytically characterize the impact of arbitrary noise distributions, as well as a superstatistical approach that systematically interprets heavy tails and skewed distributions. We identify energy trading as a substantial contribution to today's frequency fluctuations, and effective damping of the grid as a controlling factor enabling reduction of fluctuation risks, with enhanced effects for small power grids.
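The superstatistical picture can be sketched in a few lines (a toy model with assumed parameters, not the paper's measured data): a Gaussian whose inverse variance fluctuates according to a Gamma distribution has a q-Gaussian, Student-t-like marginal, and its heavy tails show up as large excess kurtosis relative to a variance-matched Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

def superstatistical_sample(n, shape=3.0, scale=1.0):
    # Locally Gaussian noise whose inverse variance ("beta") fluctuates;
    # a Gamma-distributed beta yields a q-Gaussian (Student-t-like)
    # marginal with heavier tails than any single Gaussian.
    beta = rng.gamma(shape, scale, size=n)       # fluctuating inverse variance
    return rng.normal(0.0, 1.0 / np.sqrt(beta))  # conditionally Gaussian draws

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

freq = superstatistical_sample(500_000)          # toy "frequency deviations"
gauss = rng.normal(0.0, freq.std(), size=500_000)
k_freq, k_gauss = excess_kurtosis(freq), excess_kurtosis(gauss)
```

With a Gamma shape of 3 the marginal is Student-t-like with roughly six degrees of freedom, so `k_freq` comes out well above zero while `k_gauss` stays near zero, the signature of the non-Gaussian tails discussed above.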
Sonoma Clean Power (SCP) customers are eligible to receive a free JuiceNet-enabled EVSE from eMotorWerks, or a free JuicePlug (smart grid adapter) to convert an existing EVSE into a JuiceNet-enabled one.
A Roadmap for caGrid, an Enterprise Grid Architecture for Biomedical Research
Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Hong, Neil Chue
2012-01-01
caGrid is a middleware system that combines the Grid computing, service-oriented architecture, and model-driven architecture paradigms to support the development of interoperable data and analytical resources and the federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG™) program. This program was established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research, with the overarching goal of accelerating translational cancer research. Although the main application domain of caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from the biomedical research, Grid computing, and high-performance computing communities. PMID:18560123
Performance evaluation of a 2-mode PV grid connected system in Thailand -- Case study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jivacate, C.; Mongconvorawan, S.; Sinratanapukdee, E.
A PV grid-connected system with a small battery bank has been set up in a rural district of North Thailand in order to demonstrate a 2-mode operation concept. The objective is to gain experience with the PV grid-connected concept without battery storage. However, due to the evening peak demand and a rather weak distribution grid, which is typical in rural areas, a small battery bank is still required to enable maximum energy transfer to the grid for the time being, before moving fully to the no-battery mode. The analyzed data seem to indicate possible performance improvement by re-arranging the number of PV modules and batteries in the string.
Network gateway security method for enterprise Grid: a literature review
NASA Astrophysics Data System (ADS)
Sujarwo, A.; Tan, J.
2017-03-01
The computational Grid has brought large computational resources closer to scientists. It enables people to run large computational jobs anytime and anywhere, without physical borders. However, the large and dispersed set of participants, whether users or computational providers, raises security problems. The challenge is how the security system, especially the component that filters data at the gateway, can operate flexibly depending on the registered Grid participants. This paper surveys previous approaches to this challenge, in order to find a better, new method for the enterprise Grid. The finding of this paper is a dynamically controlled enterprise firewall that secures Grid resources from unwanted connections, with a new firewall control method and components.
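The surveyed idea of a dynamically controlled gateway firewall can be caricatured in a few lines (hosts, ports, and the rule format are hypothetical, not the surveyed systems' APIs): the filter's rule list is regenerated from the current registry of Grid participants instead of being maintained by hand.

```python
def rules_for(participants):
    """Regenerate the gateway rule list from the participant registry:
    allow each registered (host, port), then deny everything else."""
    rules = [("ALLOW", host, port) for host, port in sorted(participants)]
    rules.append(("DENY", "*", "*"))
    return rules

# Hypothetical registry of Grid participants.
registry = {("10.0.0.5", 2811), ("10.0.0.7", 2811)}
registry.add(("10.0.0.9", 8443))   # a new participant registers...
rules = rules_for(registry)        # ...and the filter follows automatically
```

The point is that the gateway filter is derived from the registry, so openings and closings track participant registrations rather than manual edits.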
RXIO: Design and implementation of high performance RDMA-capable GridFTP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Yuan; Yu, Weikuan; Vetter, Jeffrey S.
2011-12-21
For its low latency, high bandwidth, and low CPU utilization, Remote Direct Memory Access (RDMA) has established itself as an effective data movement technology in many networking environments. However, the transport protocols of grid run-time systems, such as GridFTP in Globus, are not yet capable of utilizing RDMA. In this study, we examine the architecture of GridFTP for the feasibility of enabling RDMA. An RDMA-capable XIO (RXIO) framework is designed and implemented to extend its XIO system and match the characteristics of RDMA. Our experimental results demonstrate that RDMA can significantly improve the performance of GridFTP, reducing the latency by 32% and increasing the bandwidth by more than three times. In achieving such performance improvements, RDMA dramatically cuts down the CPU utilization of GridFTP clients and servers. In conclusion, these results demonstrate that RXIO can effectively exploit the benefits of RDMA for GridFTP. It offers a good prototype to further leverage GridFTP on wide-area RDMA networks.
The functional micro-organization of grid cells revealed by cellular-resolution imaging
Heys, James G.; Rangarajan, Krsna V.; Dombeck, Daniel A.
2015-01-01
Establishing how grid cells are anatomically arranged, on a microscopic scale, in relation to their firing patterns in the environment would facilitate a greater micro-circuit level understanding of the brain's representation of space. However, all previous grid cell recordings used electrode techniques that provide limited descriptions of fine-scale organization. We therefore developed a technique for cellular-resolution functional imaging of medial entorhinal cortex (MEC) neurons in mice navigating a virtual linear track, enabling a new experimental approach to study MEC. Using these methods, we show that grid cells are physically clustered in MEC compared to non-grid cells. Additionally, we demonstrate that grid cells are functionally micro-organized: the similarity between the environment firing locations of grid cell pairs varies as a function of the distance between them according to a "Mexican hat" shaped profile. This suggests that, on average, nearby grid cells have more similar spatial firing phases than those further apart. PMID:25467986
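The pairwise analysis behind that profile can be sketched with a toy model (entirely illustrative; the cosine rate maps, phase-drift rule, and noise level are assumptions, not the paper's imaging data): cells on a 1D anatomical axis receive cosine firing maps whose spatial phase drifts with anatomical position, so nearby pairs correlate positively while pairs at intermediate distances anticorrelate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: 200 "cells" on a 1D anatomical axis; each cell's firing-rate
# map along a linear track is a cosine whose spatial phase drifts smoothly
# with anatomical position (plus noise). All choices here are illustrative.
n_cells = 200
track = np.linspace(0.0, 4.0, 400)                  # virtual track (a.u.)
pos = np.sort(rng.uniform(0.0, 1.0, n_cells))       # anatomical positions
phase = 2 * np.pi * pos / 0.5 + rng.normal(0.0, 0.8, n_cells)
maps = np.cos(2 * np.pi * track[None, :] - phase[:, None])

# Pairwise tuning similarity (rate-map correlation) versus distance.
sim = np.corrcoef(maps)
i, j = np.triu_indices(n_cells, k=1)
dist = np.abs(pos[i] - pos[j])
bins = np.linspace(0.0, 1.0, 11)
profile = [sim[i, j][(dist >= a) & (dist < b)].mean()
           for a, b in zip(bins[:-1], bins[1:])]
```

Binning pairwise rate-map correlations by anatomical distance then recovers a near-positive, intermediate-negative profile, the same kind of distance-similarity curve the paper estimates from real imaging data.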
A smart grid simulation testbed using Matlab/Simulink
NASA Astrophysics Data System (ADS)
Mallapuram, Sriharsha; Moulema, Paul; Yu, Wei
2014-06-01
The smart grid is the integration of computing and communication technologies into a power grid with a goal of enabling real time control, and a reliable, secure, and efficient energy system [1]. With the increased interest of the research community and stakeholders towards the smart grid, a number of solutions and algorithms have been developed and proposed to address issues related to smart grid operations and functions. Those technologies and solutions need to be tested and validated before implementation using software simulators. In this paper, we developed a general smart grid simulation model in the MATLAB/Simulink environment, which integrates renewable energy resources, energy storage technology, load monitoring and control capability. To demonstrate and validate the effectiveness of our simulation model, we created simulation scenarios and performed simulations using a real-world data set provided by the Pecan Street Research Institute.
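A minimal sketch of the kind of energy-balance logic such a testbed exercises, written in Python with entirely hypothetical numbers rather than the MATLAB/Simulink blocks or Pecan Street data: solar generation first serves the load, surplus charges the battery (with curtailment when full), deficit discharges it, and the grid covers the remainder.

```python
def simulate(solar, load, capacity=5.0, soc=0.0):
    """Hourly energy balance: returns (grid_import_kwh, final_soc_kwh)."""
    grid = []
    for s, l in zip(solar, load):
        net = s - l                          # surplus (+) or deficit (-)
        if net >= 0:
            soc += min(net, capacity - soc)  # store surplus, curtail the rest
            grid.append(0.0)
        else:
            discharge = min(-net, soc)       # battery covers what it can
            soc -= discharge
            grid.append(-net - discharge)    # grid supplies the remainder
    return grid, soc

# One hypothetical 5-hour window (kWh per hour).
grid, soc = simulate(solar=[0, 2, 4, 3, 0], load=[1, 1, 1, 1, 2])
```

In this toy window the grid is only imported from in the first hour, before any solar surplus has been banked; the evening deficit is then met entirely from storage.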
Web-HLA and Service-Enabled RTI in the Simulation Grid
NASA Astrophysics Data System (ADS)
Huang, Jijie; Li, Bo Hu; Chai, Xudong; Zhang, Lin
HLA-based simulation in a grid environment has become a major research focus in the M&S community, but the current HLA has many shortcomings when run in a grid environment. This paper analyzes the analogies between HLA and OGSA from the software architecture point of view, and argues that the service-oriented method should be introduced into the three components of HLA to overcome these shortcomings. The paper proposes an expanded running architecture that integrates HLA with OGSA and realizes a service-enabled RTI (SE-RTI). In addition, to handle the bottleneck of efficiently realizing the HLA time management mechanism, it proposes a centralized approach in which the CRC of the SE-RTI takes charge of time management and the dispatching of TSO events for each federate. Benchmark experiments indicate that the execution speed of simulations over the Internet or a WAN is noticeably improved.
Performance Evaluation of a SLA Negotiation Control Protocol for Grid Networks
NASA Astrophysics Data System (ADS)
Cergol, Igor; Mirchandani, Vinod; Verchere, Dominique
A framework for an autonomous negotiation control protocol for service delivery is crucial to support the heterogeneous service level agreements (SLAs) that will exist in distributed environments. We first give the gist of our augmented service negotiation protocol, which supports distinct service elements. The augmentations also encompass the composition of related services and negotiation with several service providers simultaneously. These augmentations consolidate service negotiation operations for telecom networks, which are evolving towards Grid networks. Furthermore, our autonomous negotiation protocol is based on a distributed multi-agent framework to create an open market for Grid services. Second, we concisely present key simulation results of our work in progress. The results demonstrate the usefulness of our negotiation protocol in realistic scenarios involving different background traffic loads, message sizes, and traffic-flow asymmetry between background and negotiation traffic.
One recognition sequence, seven restriction enzymes, five reaction mechanisms
Gowers, Darren M.; Bellamy, Stuart R.W.; Halford, Stephen E.
2004-01-01
The diversity of reaction mechanisms employed by Type II restriction enzymes was investigated by analysing the reactions of seven endonucleases at the same DNA sequence. NarI, KasI, Mly113I, SfoI, EgeI, EheI and BbeI cleave DNA at several different positions in the sequence 5′-GGCGCC-3′. Their reactions on plasmids with one or two copies of this sequence revealed five distinct mechanisms. These differ in terms of the number of sites the enzyme binds, and the number of phosphodiester bonds cleaved per turnover. NarI binds two sites, but cleaves only one bond per DNA-binding event. KasI also cuts only one bond per turnover but acts at individual sites, preferring intact to nicked sites. Mly113I cuts both strands of its recognition sites, but shows full activity only when bound to two sites, which are then cleaved concertedly. SfoI, EgeI and EheI cut both strands at individual sites, in the manner historically considered as normal for Type II enzymes. Finally, BbeI displays an absolute requirement for two sites in close physical proximity, which are cleaved concertedly. The range of reaction mechanisms for restriction enzymes is thus larger than commonly imagined, as is the number of enzymes needing two recognition sites. PMID:15226412
Data management and analysis for the Earth System Grid
NASA Astrophysics Data System (ADS)
Williams, D. N.; Ananthakrishnan, R.; Bernholdt, D. E.; Bharathi, S.; Brown, D.; Chen, M.; Chervenak, A. L.; Cinquini, L.; Drach, R.; Foster, I. T.; Fox, P.; Hankin, S.; Henson, V. E.; Jones, P.; Middleton, D. E.; Schwidder, J.; Schweitzer, R.; Schuler, R.; Shoshani, A.; Siebenlist, F.; Sim, A.; Strand, W. G.; Wilhelmi, N.; Su, M.
2008-07-01
The international climate community is expected to generate hundreds of petabytes of simulation data within the next five to seven years. This data must be accessed and analyzed by thousands of analysts worldwide in order to provide accurate and timely estimates of the likely impact of climate change on physical, biological, and human systems. Climate change is thus not only a scientific challenge of the first order but also a major technological challenge. In order to address this technological challenge, the Earth System Grid Center for Enabling Technologies (ESG-CET) has been established within the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC)-2 program, with support from the offices of Advanced Scientific Computing Research and Biological and Environmental Research. ESG-CET's mission is to provide climate researchers worldwide with access to the data, information, models, analysis tools, and computational capabilities required to make sense of enormous climate simulation datasets. Its specific goals are to (1) make data more useful to climate researchers by developing Grid technology that enhances data usability; (2) meet specific distributed database, data access, and data movement needs of national and international climate projects; (3) provide a universal and secure web-based data access portal for broad multi-model data collections; and (4) provide a wide-range of Grid-enabled climate data analysis tools and diagnostic methods to international climate centers and U.S. government agencies. Building on the successes of the previous Earth System Grid (ESG) project, which has enabled thousands of researchers to access tens of terabytes of data from a small number of ESG sites, ESG-CET is working to integrate a far larger number of distributed data providers, high-bandwidth wide-area networks, and remote computers in a highly collaborative problem-solving environment.
Grid-based implementation of XDS-I as part of image-enabled EHR for regional healthcare in Shanghai.
Zhang, Jianguo; Zhang, Kai; Yang, Yuanyuan; Sun, Jianyong; Ling, Tonghui; Wang, Guangrong; Ling, Yun; Peng, Derong
2011-03-01
Due to the rapid growth of Shanghai city to 20 million residents, the balance between healthcare supply and demand has become an important issue. The local government hopes to ameliorate this problem by developing an image-enabled electronic healthcare record (EHR) sharing mechanism between certain hospitals. This system is designed to enable healthcare collaboration and reduce healthcare costs by allowing review of prior examination data obtained at other hospitals. Here, we present a design method and implementation solution for image-enabled EHRs (i-EHRs) and describe the implementation of i-EHRs in four hospitals and one regional healthcare information center, as well as their preliminary operating results. We designed the i-EHRs with a service-oriented architecture (SOA) and combined the grid-based image management and distribution capability, compliant with the IHE XDS-I integration profile. There are seven major components and common services included in the i-EHRs. In order to achieve quick response for image retrieval in low-bandwidth network environments, we use a JPEG2000 interactive protocol and progressive display technique to transmit images from a Grid Agent as Imaging Source Actor to the PACS workstation as Imaging Consumer Actor. The first phase of pilot testing of our image-enabled EHR was implemented in the Zhabei district of Shanghai for imaging document sharing and collaborative diagnostic purposes. The pilot testing began in October 2009; since then, more than 50 examinations have been transferred daily between the City North Hospital and the three community hospitals for collaborative diagnosis. The feedback from users at all hospitals is very positive, with respondents finding the system easy to use and reporting no interference with their normal radiology diagnostic operation. The i-EHR system can provide event-driven automatic image delivery for collaborative imaging diagnosis across multiple hospitals based on work flow requirements.
This project demonstrated that the grid-based implementation of IHE XDS-I for an image-enabled EHR can scale effectively to serve a regional healthcare solution with collaborative imaging services. The feedback from users of both the community hospitals and the large hospital is very positive.
A comparative analysis of dynamic grids vs. virtual grids using the A3pviGrid framework.
Shankaranarayanan, Avinas; Amaldas, Christine
2010-11-01
With the proliferation of quad/multi-core microprocessors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service-oriented business broker, now termed cloud computing, can deploy image-based virtualization platforms enabling agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consumption of high-performance grid services. Spurious and exponential increases in the size of datasets are constant concerns in the medical and pharmaceutical industries due to the constant discovery and publication of large sequence databases. Traditional algorithms are not designed to handle large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results with runs of the same test dataset on a virtual Grid platform using virtual machines (virtualization). The implemented architecture, A3pviGrid, utilizes game-theoretic optimization and agent-based team formation (coalition) algorithms to improve scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work), all interactions were made local within a team, transparently. This paper is a proof of concept of an experimental mini-Grid test-bed compared to running the platform on local virtual machines on a local test cluster. This was done to give every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze the performance and scalability of BLAST in a multiple virtual node setup and present our findings.
This paper is an extension of our previous research on improving the BLAST application framework using dynamic Grids on virtualization platforms such as VirtualBox.
NASA Technical Reports Server (NTRS)
Lansing, Faiza S.; Rascoe, Daniel L.
1993-01-01
This paper presents a modified Finite-Difference Time-Domain (FDTD) technique using a generalized conformed orthogonal grid. The use of the Conformed Orthogonal Grid Finite-Difference Time-Domain (GFDTD) method enables the designer to match all the circuit dimensions, hence eliminating a major source of error in the analysis.
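For orientation, a plain 1D Yee-scheme FDTD loop looks as follows (a textbook sketch in normalized units, not the paper's conformal-grid GFDTD formulation); the conformal-grid method modifies exactly these curl-update stencils so that cell faces follow the circuit geometry instead of a uniform lattice.

```python
import numpy as np

# 1D Yee-scheme FDTD in normalized units (dx = dt = c = 1, Courant number 1)
# with perfectly conducting ends and an additive Gaussian source.
n, steps = 200, 150
ez = np.zeros(n)              # electric field at integer nodes
hy = np.zeros(n - 1)          # magnetic field, staggered half a cell

for t in range(steps):
    hy += np.diff(ez)                             # H update from curl of E
    ez[1:-1] += np.diff(hy)                       # E update from curl of H
    ez[50] += np.exp(-((t - 30) / 10.0) ** 2)     # soft Gaussian source
```

At a Courant number of one the 1D scheme is dispersion-free, so the injected pulse splits into two half-amplitude pulses that travel one cell per step and reflect, inverted, off the conducting ends.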
Domed, 40-cm-Diameter Ion Optics for an Ion Thruster
NASA Technical Reports Server (NTRS)
Soulas, George C.; Haag, Thomas W.; Patterson, Michael J.
2006-01-01
Improved accelerator and screen grids for an ion accelerator have been designed and tested in a continuing effort to increase the sustainable power and thrust at the high end of the accelerator throttling range. The accelerator and screen grids are undergoing development for intended use in NASA's Evolutionary Xenon Thruster (NEXT), a spacecraft thruster that would have an input-power throttling range of 1.2 to 6.9 kW. The improved accelerator and screen grids could also be incorporated into ion accelerators used in such industrial processes as ion implantation and ion milling. NEXT is a successor to the NASA Solar Electric Propulsion Technology Application Readiness (NSTAR) thruster - a state-of-the-art ion thruster characterized by, among other things, a beam-extraction diameter of 28 cm, a span-to-gap ratio (defined as this diameter divided by the distance between the grids) of about 430, and a rated peak input power of 2.3 kW. To enable the NEXT thruster to operate at the required higher peak power, the beam-extraction diameter was increased to 40 cm, almost doubling the beam-extraction area over that of NSTAR (see figure). The span-to-gap ratio was increased to 600 to enable throttling to the low end of the required input-power range. The geometry of the apertures in the grids was selected on the basis of experience in the use of grids of similar geometry in the NSTAR thruster. Characteristics of the aperture geometry include a high open-area fraction in the screen grid to reduce discharge losses and a low open-area fraction in the accelerator grid to reduce losses of electrically neutral gas atoms or molecules. The NEXT accelerator grid was made thicker than that of the NSTAR to make more material available for erosion, thereby increasing the service life and, hence, the total impulse.
The NEXT grids are made of molybdenum, which was chosen because its combination of high strength and low thermal expansion helps to minimize thermally and inertially induced deflections of the grids. A secondary reason for choosing molybdenum is the availability of a large database for this material. To keep development costs low, the NEXT grids have been fabricated by the same techniques used to fabricate the NSTAR grids. In tests, the NEXT ion optics have been found to outperform the NSTAR ion optics, as expected.
Web-based interactive visualization in a Grid-enabled neuroimaging application using HTML5.
Siewert, René; Specovius, Svenja; Wu, Jie; Krefting, Dagmar
2012-01-01
Interactive visualization and correction of intermediate results are required in many medical image analysis pipelines. To allow such interaction during the remote execution of compute- and data-intensive applications, new features of HTML5 are used. They allow for transparent integration of user interaction into Grid- or Cloud-enabled scientific workflows. Both 2D and 3D visualization and data manipulation can be performed through a scientific gateway without the need to install specific software or web-browser plugins. The possibilities of web-based visualization are presented along the FreeSurfer pipeline, a popular compute- and data-intensive software tool for quantitative neuroimaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haratyk, Geoffrey; Komiyama, Ryoichi; Forsberg, Charles
Affordable reliable energy made possible a large middle class in the industrial world. Concerns about climate change require a transition to nuclear, wind, and solar—but these energy sources in current forms do not have the capability to meet the requirements for variable affordable energy. Researchers from the Massachusetts Institute of Technology, the University of Tokyo, the Tokyo Institute of Technology and the Institute for Energy Economics are undertaking a series of studies to address how to make this transition to a low carbon world. Three areas are being investigated. The first area is the development of electricity grid models to understand the impacts of different choices of technologies and different limits on greenhouse gas emissions. The second area is the development of technologies to enable variable electricity to the grid while capital-intensive nuclear, wind and solar generating plants operate at full capacity to minimize costs. Technologies to enable meeting variable electricity demand while operating plants at high-capacity factors include use of heat and hydrogen storage. The third area is the development of electricity market rules to enable transition to a low-carbon grid.
Advanced Methodology for Simulation of Complex Flows Using Structured Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David
1995-01-01
Detailed simulations of viscous flows in complicated geometries pose a significant challenge to current capabilities of Computational Fluid Dynamics (CFD). To enable routine application of CFD to this class of problems, advanced methodologies are required that employ (a) automated grid generation, (b) adaptivity, (c) accurate discretizations and efficient solvers, and (d) advanced software techniques. Each of these ingredients contributes to increased accuracy, efficiency (in terms of human effort and computer time), and/or reliability of CFD software. In the long run, methodologies employing structured grid systems will remain a viable choice for routine simulation of flows in complex geometries only if genuinely automatic grid generation techniques for structured grids can be developed and if adaptivity is employed more routinely. More research in both these areas is urgently needed.
Turbulent Output-Based Anisotropic Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.; Carlson, Jan-Renee
2010-01-01
Controlling discretization error is a remaining challenge for computational fluid dynamics simulation. Grid adaptation is applied to reduce estimated discretization error in drag or pressure integral output functions. To enable application to high Reynolds number, O(10^7), turbulent flows, a hybrid approach is utilized that freezes the near-wall boundary layer grids and adapts the grid away from the no-slip boundaries. The hybrid approach is not applicable to problems with under-resolved initial boundary layer grids, but is a powerful technique for problems with important off-body anisotropic features. Supersonic nozzle plume, turbulent flat plate, and shock-boundary layer interaction examples are presented with comparisons to experimental measurements of pressure and velocity. Adapted grids are produced that resolve off-body features in locations that are not known a priori.
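The freeze-and-adapt idea can be caricatured in 1D (a toy sketch; the curvature indicator, refinement fraction, and "wall zone" are invented for illustration and stand in for the paper's output-based error estimate): flag the cells with the largest indicator, split them at their midpoints, and leave the near-wall zone untouched.

```python
import numpy as np

def adapt(x, f, frac=0.25, wall_zone=0.1):
    """Split the top-`frac` cells ranked by an error indicator,
    leaving cells inside the frozen near-wall zone untouched."""
    mid = 0.5 * (x[:-1] + x[1:])                 # one midpoint per cell
    u = f(mid)
    ind = np.zeros_like(mid)
    ind[1:-1] = np.abs(u[2:] - 2.0 * u[1:-1] + u[:-2])  # curvature indicator
    ind[mid < wall_zone] = 0.0                   # freeze the near-wall layer
    flagged = ind >= np.quantile(ind, 1.0 - frac)
    return np.sort(np.concatenate([x, mid[flagged]]))

# Sharp off-body feature at x = 0.6 on an initially uniform grid.
f = lambda s: np.tanh((s - 0.6) / 0.02)
x = np.linspace(0.0, 1.0, 41)
for _ in range(3):
    x = adapt(x, f)
```

After a few passes the added points cluster around the off-body feature while the frozen "wall" region keeps its original spacing, mirroring the hybrid strategy described above.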
An Analysis of Security and Privacy Issues in Smart Grid Software Architectures on Clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmhan, Yogesh; Kumbhare, Alok; Cao, Baohua
2011-07-09
Power utilities globally are increasingly upgrading to Smart Grids that use bi-directional communication with the consumer to enable an information-driven approach to distributed energy management. Clouds offer features well suited for Smart Grid software platforms and applications, such as elastic resources and shared services. However, the security and privacy concerns inherent in an information-rich Smart Grid environment are further exacerbated by their deployment on Clouds. Here, we present an analysis of security and privacy issues in a Smart Grid software architecture operating on different Cloud environments, in the form of a taxonomy. We use the Los Angeles Smart Grid Project that is underway in the largest U.S. municipal utility to drive this analysis, which will benefit both Cloud practitioners targeting Smart Grid applications and Cloud researchers investigating security and privacy.
The HEPiX Virtualisation Working Group: Towards a Grid of Clouds
NASA Astrophysics Data System (ADS)
Cass, Tony
2012-12-01
The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective to enable the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.
Finite Control Set Model Predictive Control for Multiple Distributed Generators Microgrids
NASA Astrophysics Data System (ADS)
Babqi, Abdulrahman Jamal
This dissertation proposes two control strategies for AC microgrids that consist of multiple distributed generators (DGs). The control strategies are valid for both grid-connected and islanded modes of operation. In general, a microgrid can operate as a stand-alone system (i.e., islanded mode) or while it is connected to the utility grid (i.e., grid-connected mode). To enhance the performance of a microgrid, a sophisticated control scheme should be employed. The control strategies of microgrids can be divided into primary and secondary controls. The primary control regulates the output active and reactive powers of each DG in grid-connected mode, as well as the output voltage and frequency of each DG in islanded mode. The secondary control is responsible for regulating the microgrid voltage and frequency in the islanded mode. Moreover, it provides power sharing schemes among the DGs. In other words, the secondary control specifies the set points (i.e., reference values) for the primary controllers. In this dissertation, Finite Control Set Model Predictive Control (FCS-MPC) was proposed for controlling microgrids. FCS-MPC was used as the primary controller to regulate the output power of each DG (in the grid-connected mode) or the voltage of the point of DG coupling (in the islanded mode of operation). In the grid-connected mode, Direct Power Model Predictive Control (DPMPC) was implemented to manage the power flow between each DG and the utility grid. In the islanded mode, Voltage Model Predictive Control (VMPC), as the primary control, and droop control, as the secondary control, were employed to control the output voltage of each DG and the system frequency. The controller was equipped with a supplementary current limiting technique in order to limit the output current of each DG during abnormal incidents. The control approach also enabled a smooth transition between the two modes.
The performance of the control strategy was investigated and verified using the PSCAD/EMTDC software platform. This dissertation also proposes a control and power sharing strategy for small-scale microgrids in both grid-connected and islanded modes based on centralized FCS-MPC. In grid-connected mode, the controller was capable of managing the output power of each DG and enabling flexible power regulation between the microgrid and the utility grid. In islanded mode, the controller regulated the microgrid voltage and frequency, and provided a precise power sharing scheme among the DGs. In addition, the power sharing can be adjusted flexibly by changing the sharing ratio. The proposed control also enabled plug-and-play operation. Moreover, a smooth transition between the two modes of operation was achieved without any disturbance in the system. Case studies were carried out in order to validate the proposed control strategy with the PSCAD/EMTDC software package.
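The prediction-and-minimization loop that FCS-MPC performs once per sampling period can be sketched in a few lines. This is a minimal illustration only, assuming a single-phase RL grid filter and a hypothetical two-level converter; the model, cost function, and all parameter values below are invented for the example and are not taken from the dissertation.

```python
# Illustrative FCS-MPC sketch: enumerate the finite set of converter
# switching states, predict the next-step current for each, and apply
# the state that minimizes a tracking cost. All values are hypothetical.

V_DC = 400.0
# Candidate output voltages of a simple two-level converter leg
# (the "finite control set"): a handful of discrete switching states.
CANDIDATE_VOLTAGES = [-V_DC / 2, 0.0, V_DC / 2]

L = 5e-3      # filter inductance (H), assumed
R = 0.1       # filter resistance (ohm), assumed
TS = 50e-6    # sampling period (s), assumed

def predict_current(i_now, v_applied, v_grid):
    """One-step Euler prediction from di/dt = (v - R*i - v_grid) / L."""
    return i_now + TS / L * (v_applied - R * i_now - v_grid)

def fcs_mpc_step(i_now, i_ref, v_grid):
    """Evaluate every switching state; return the voltage minimizing
    the cost |i_ref - i_predicted|."""
    best_v, best_cost = None, float("inf")
    for v in CANDIDATE_VOLTAGES:
        cost = abs(i_ref - predict_current(i_now, v, v_grid))
        if cost < best_cost:
            best_v, best_cost = v, cost
    return best_v

# When the reference exceeds the measured current, the controller
# selects the highest candidate voltage to push the current upward:
v = fcs_mpc_step(i_now=0.0, i_ref=10.0, v_grid=100.0)
print(v)  # → 200.0
```

Real implementations use three-phase vector models and multi-term cost functions (e.g. adding switching-frequency penalties), but the enumerate-predict-minimize structure is the same.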
NASA Astrophysics Data System (ADS)
Kirubi, Charles Gathu
Community micro-grids have played a central role in increasing access to off-grid rural electrification (RE) in many regions of the developing world, notably South Asia. However, the promise of community micro-grids in sub-Saharan Africa remains largely unexplored. My study explores the potential and limits of community micro-grids as options for increasing access to off-grid RE in sub-Saharan Africa. Contextualized in five community micro-grids in rural Kenya, my study is framed through theories of collective action and combines qualitative and quantitative methods, including household surveys, electronic data logging and regression analysis. The main contribution of my research is demonstrating the circumstances under which community micro-grids can contribute to rural development and the conditions under which individuals are likely to initiate and participate in such projects collectively. With regard to rural development, I demonstrate that access to electricity enables the use of electric equipment and tools by small and micro-enterprises, resulting in significant improvement in productivity per worker (100--200% depending on the task at hand) and a corresponding growth in income levels in the order of 20--70%, depending on the product made. Access to electricity simultaneously enables and improves delivery of social and business services from a wide range of village-level infrastructure (e.g. schools, markets, water pumps) while improving the productivity of agricultural activities. Moreover, when local electricity users have an ability to charge and enforce cost-reflective tariffs and electricity consumption is closely linked to productive uses that generate incomes, cost recovery is feasible. By their nature---a new technology delivering highly valued services by the elites and other members, limited local experience and expertise, high capital costs---community micro-grids are good candidates for elite-domination.
Even so, elite control does not necessarily lead to elite capture. Experiences from different micro-grid settings illustrate the manner in which a coincidence of interest between the elites and the rest of the members and access to external support can create incentives and mechanisms to enable community-wide access to scarce services, hence mitigating elite capture. Moreover, access to external support was found to increase the likelihood of participation for the relatively poor households. The policy-relevant message from this research is two-fold. In rural areas with suitable sites for micro-hydro power, the potential for community micro-grids appears considerable, to the extent that this option would seem to represent "the road not taken" as far as policies and initiatives aimed at expanding RE are concerned in Kenya and other African countries with comparable settings. However, local participatory initiatives not complemented by external technical assistance run a considerable risk of locking rural households into relatively more costly and poor-quality services. By taking advantage of existing and/or building a dense network of local organizations, including micro-finance agencies, the government and development partners can make available to local communities the necessary support---financial, technical or regulatory---essential for efficient design of micro-grids in addition to facilitating equitable distribution of electricity benefits.
Pohjonen, Hanna; Ross, Peeter; Blickman, Johan G; Kamman, Richard
2007-01-01
Emerging technologies are transforming the workflows in healthcare enterprises. Computing grids and handheld mobile/wireless devices are providing clinicians with enterprise-wide access to all patient data and analysis tools on a pervasive basis. In this paper, emerging technologies are presented that provide computing grids and streaming-based access to image and data management functions, and system architectures that enable pervasive computing on a cost-effective basis. Finally, the implications of such technologies are investigated regarding the positive impacts on clinical workflows.
The functional micro-organization of grid cells revealed by cellular-resolution imaging.
Heys, James G; Rangarajan, Krsna V; Dombeck, Daniel A
2014-12-03
Establishing how grid cells are anatomically arranged, on a microscopic scale, in relation to their firing patterns in the environment would facilitate a greater microcircuit-level understanding of the brain's representation of space. However, all previous grid cell recordings used electrode techniques that provide limited descriptions of fine-scale organization. We therefore developed a technique for cellular-resolution functional imaging of medial entorhinal cortex (MEC) neurons in mice navigating a virtual linear track, enabling a new experimental approach to study MEC. Using these methods, we show that grid cells are physically clustered in MEC compared to nongrid cells. Additionally, we demonstrate that grid cells are functionally micro-organized: the similarity between the environment firing locations of grid cell pairs varies as a function of the distance between them according to a "Mexican hat"-shaped profile. This suggests that, on average, nearby grid cells have more similar spatial firing phases than those further apart.
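The "Mexican hat" dependence of pairwise firing-location similarity on anatomical distance has the qualitative shape of a Ricker wavelet: high similarity for nearby cell pairs, anti-similarity at intermediate separations, and no relationship for distant pairs. A minimal sketch of that profile (the width parameter and normalization are arbitrary choices for illustration, not fitted values from the paper):

```python
import math

def mexican_hat(d, sigma=1.0):
    """Ricker ("Mexican hat") profile over anatomical distance d:
    a positive peak at d = 0, a negative trough at intermediate
    distance, and decay toward zero for distant pairs."""
    x = (d / sigma) ** 2
    return (1.0 - x) * math.exp(-x / 2.0)

# Nearby pairs score above zero, intermediate pairs below zero,
# and distant pairs near zero:
profile = [mexican_hat(d) for d in (0.0, 0.5, 2.0, 6.0)]
```

Here `mexican_hat(0.0)` is the maximum (1.0), values for `d` between `sigma` and a few `sigma` are negative, and the profile vanishes as `d` grows.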
A VO-Driven Astronomical Data Grid in China
NASA Astrophysics Data System (ADS)
Cui, C.; He, B.; Yang, Y.; Zhao, Y.
2010-12-01
With the implementation of many ambitious observation projects, including LAMOST, FAST, and the Antarctic observatory at Dome A, observational astronomy in China is stepping into a brand new era with an emerging data avalanche. In the era of e-Science, both these cutting-edge projects and traditional astronomy research need much more powerful data management, sharing and interoperability. Based on the data-grid concept and taking advantage of the IVOA interoperability technologies, China-VO is developing a VO-driven astronomical data grid environment to enable multi-wavelength science and large database science. In this paper, the latest progress and data flow of LAMOST, the architecture of the data grid, and its support for the VO are discussed.
A Grid Infrastructure for Supporting Space-based Science Operations
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)
2002-01-01
Emerging technologies for computational grid infrastructures have the potential for revolutionizing the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide a large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing for less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.
Grid-Enabled Quantitative Analysis of Breast Cancer
2009-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer ... pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image analysis parameters. Additionally, we investigated the use of cutting-edge data analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast ...
Enabling Efficient Intelligence Analysis in Degraded Environments
2013-06-01
... evolution analysis; a Magnets Grid widget for multidimensional information exploration; and a record browser of Visual Summary Cards widget for fast visual identification of ... attention and inattentional blindness. It also explores and develops various techniques to represent information in a salient way and provide efficient ...
Upgrades of Two Computer Codes for Analysis of Turbomachinery
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Liou, Meng-Sing
2005-01-01
Major upgrades have been made in two of the programs reported in "Five Computer Codes for Analysis of Turbomachinery". The affected programs are: Swift -- a code for three-dimensional (3D) multiblock analysis; and TCGRID, which generates a 3D grid used with Swift. Originally utilizing only a central-differencing scheme for numerical solution, Swift was augmented by addition of two upwind schemes that give greater accuracy but take more computing time. Other improvements in Swift include addition of a shear-stress-transport turbulence model for better prediction of adverse pressure gradients, addition of an H-grid capability for flexibility in modeling flows in pumps and ducts, and modification to enable simultaneous modeling of hub and tip clearances. Improvements in TCGRID include modifications to enable generation of grids for more complicated flow paths and addition of an option to generate grids compatible with the ADPAC code used at NASA and in industry. For both codes, new test cases were developed and documentation was updated. Both codes were converted to Fortran 90, with dynamic memory allocation. Both codes were also modified for ease of use in both UNIX and Windows operating systems.
DEM Based Modeling: Grid or TIN? The Answer Depends
NASA Astrophysics Data System (ADS)
Ogden, F. L.; Moreno, H. A.
2015-12-01
The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course with increasing watershed scale come corresponding increases in watershed complexity, including wide ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grids, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase required effort in model setup, parameter estimation, and coupling with forcing data which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.
Twelve Principles for Green Energy Storage in Grid Applications.
Arbabzadeh, Maryam; Johnson, Jeremiah X; Keoleian, Gregory A; Rasmussen, Paul G; Thompson, Levi T
2016-01-19
The introduction of energy storage technologies to the grid could enable greater integration of renewables, improve system resilience and reliability, and offer cost effective alternatives to transmission and distribution upgrades. The integration of energy storage systems into the electrical grid can lead to different environmental outcomes based on the grid application, the existing generation mix, and the demand. Given this complexity, a framework is needed to systematically inform design and technology selection about the environmental impacts that emerge when considering energy storage options to improve sustainability performance of the grid. To achieve this, 12 fundamental principles specific to the design and grid application of energy storage systems are developed to inform policy makers, designers, and operators. The principles are grouped into three categories: (1) system integration for grid applications, (2) the maintenance and operation of energy storage, and (3) the design of energy storage systems. We illustrate the application of each principle through examples published in the academic literature, illustrative calculations, and a case study with an off-grid application of vanadium redox flow batteries (VRFBs). In addition, trade-offs that can emerge between principles are highlighted.
NASA Astrophysics Data System (ADS)
Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Maggi, Giorgio Pietro; Milanesi, Luciano; Vicario, Saverio
This paper describes the solution proposed by INFN to allow users who do not own a personal digital certificate, and therefore do not belong to any specific Virtual Organization (VO), to access Grid infrastructures via the GENIUS Grid portal enabled with robot certificates. Robot certificates, also known as portal certificates, are associated with a specific application that the user wants to share with the whole Grid community, and have recently been introduced by the EUGridPMA (European Policy Management Authority for Grid Authentication) to perform automated tasks on Grids on behalf of users. They have proven extremely useful for automating grid service monitoring, data processing production, distributed data collection systems, etc. In this paper, robot certificates have been used to allow bioinformaticians involved in the Italian LIBI project to perform large-scale phylogenetic analyses. The distributed environment set up in this work greatly simplifies Grid access for occasional users and represents a valuable step toward widening the community of users.
Opportunity to Plug Your Car Into the Electric Grid is Arriving
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griego, G.
2010-06-01
Plug-in hybrid electric vehicles are hitting the U.S. market for the first time this year. Similar to hybrid electric vehicles, they feature a larger battery and plug-in charger that allows consumers to replace a portion of their fossil fuel by simply plugging their cars into standard 110-volt outlets at home or wherever outlets are available. If these vehicles become widely accepted, consumers and the environment will benefit, according to a computer modeling study by Xcel Energy and the Department of Energy's National Renewable Energy Laboratory. Researchers found that each PHEV would cut carbon dioxide emissions in half and save owners up to $450 in annual fuel costs and up to 240 gallons of gasoline. The study also looked at the impact of PHEVs on the electric grid in Colorado if used on a large scale. Integrating large numbers of these vehicles will depend on the adoption of smart-grid technology - adding digital elements to the electric power system to improve efficiency and enable more dynamic communication between consumers and producers of electricity. Using an intelligent monitoring system that keeps track of all electricity flowing in the system, a smart grid could enable optimal PHEV battery-charging much the same way it would enable users to manage their energy use in household appliances and factory processes to reduce energy costs. When a smart grid is implemented, consumers will have many low-cost opportunities to charge PHEVs at different times of the day. Plug-in vehicles could contribute electricity at peak times, such as summer evenings, while taking electricity from the grid at low-use times such as the middle of the night. Electricity rates could offer incentives for drivers to 'give back' electricity when it is most needed and to 'take' it when it is plentiful. The integration of PHEVs, solar arrays and wind turbines into the grid at larger scales will require a more modern electricity system.
Technology already exists to allow customers to feed excess power from their own renewable energy systems back to the grid. As more homes and businesses find opportunities to plan power flows to and from the grid for economic gain using their renewable energy systems and PHEVs, more sophisticated systems will be needed. A smart grid will improve the efficiency of energy consumption, manage real-time power flows and provide two-way metering needed to compensate small power producers. Many states are working toward the smart-grid concept, particularly to incorporate renewable sources into their utility grids. According to the Department of Energy, 30 states have developed and adopted renewable portfolio standards, which require up to 20 percent of a state's energy portfolio to come exclusively from renewable sources by this year, and up to 30 percent in the future. NREL has been laying the foundation for both PHEVs and the smart grid for many years with work including modifying hybrid electric cars with plug-in technology; studying fuel economy, batteries and power electronics; exploring options for recharging batteries with solar and wind technologies; and measuring reductions in greenhouse gas emissions. The laboratory participated in development of smart-grid implementation standards with industry, utilities, government and others to guide the integration of renewable and other small electricity generation and storage sources. Dick DeBlasio, principal program manager for electricity programs, is now leading the Institute of Electrical and Electronics Engineers Standards efforts to connect the dots regarding power generation, communication and information technologies.
NASA Astrophysics Data System (ADS)
Böhm, R.; Hufnagl, E.; Kupfer, R.; Engler, T.; Hausding, J.; Cherif, C.; Hufenbach, W.
2013-12-01
A significant improvement in the properties of plastic components can be achieved by introducing flexible multiaxial textile grids as reinforcement. This reinforcing concept is based on the layerwise bonding of biaxially or multiaxially oriented, completely stretched filaments of high-performance fibers, e.g. glass or carbon, and thermoplastic components, using modified warp knitting techniques. Such pre-consolidated grid-like textiles are particularly suitable for use in injection moulding, since the grid geometry is very robust with respect to flow pressure and temperature on the one hand and possesses an adjustable spacing to enable a complete filling of the mould cavity on the other hand. The development of pre-consolidated textile grids and their further processing into composites form the basis for providing tailored parts with a large number of additional integrated functions like fibrous sensors or electroconductive fibres. Composites reinforced in this way will allow new product groups for promising lightweight structures to be opened up in the future. The article describes the manufacturing process of this new composite class and its variability regarding reinforcement and function integration. An experimentally based study of the mechanical properties is performed. For this purpose, quasi-static and highly dynamic tensile tests have been carried out as well as impact penetration experiments. The reinforcing potential of the multiaxial grids is demonstrated by means of evaluating drop tower experiments on automotive components. It has been shown that the load-adapted reinforcement enables a significant local or global improvement of the properties of plastic components depending on industrial requirements.
The National Grid Project: A system overview
NASA Technical Reports Server (NTRS)
Gaither, Adam; Gaither, Kelly; Jean, Brian; Remotigue, Michael; Whitmire, John; Soni, Bharat; Thompson, Joe; Dannenhoffer, John; Weatherill, Nigel
1995-01-01
The National Grid Project (NGP) is a comprehensive numerical grid generation software system that is being developed at the National Science Foundation (NSF) Engineering Research Center (ERC) for Computational Field Simulation (CFS) at Mississippi State University (MSU). NGP is supported by a coalition of U.S. industries and federal laboratories. The objective of the NGP is to significantly decrease the amount of time it takes to generate a numerical grid for complex geometries and to increase the quality of these grids to enable computational field simulations for applications in industry. A geometric configuration can be discretized into grids (or meshes) that have two fundamental forms: structured and unstructured. Structured grids are formed by intersecting curvilinear coordinate lines and are composed of quadrilateral (2D) and hexahedral (3D) logically rectangular cells. The connectivity of a structured grid provides for trivial identification of neighboring points by incrementing coordinate indices. Unstructured grids are composed of cells of any shape (commonly triangles, quadrilaterals, tetrahedra and hexahedra), but do not have trivial identification of neighbors by incrementing an index. For unstructured grids, a set of points and an associated connectivity table is generated to define unstructured cell shapes and neighboring points. Hybrid grids are a combination of structured grids and unstructured grids. Chimera (overset) grids are intersecting or overlapping structured grids. The NGP system currently provides a user interface that integrates both 2D and 3D structured and unstructured grid generation, a solid modeling topology data management system, an internal Computer Aided Design (CAD) system based on Non-Uniform Rational B-Splines (NURBS), a journaling language, and a grid/solution visualization system.
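The neighbor-lookup distinction the overview draws between structured and unstructured grids can be illustrated in a few lines. This is a generic sketch, not NGP code; the cell and point numbering below are invented for the example:

```python
# Structured grid: neighbors are found by simply incrementing the
# (i, j) coordinate indices of a logically rectangular cell layout.
def structured_neighbors(i, j, ni, nj):
    """4-connected neighbors of cell (i, j) on an ni x nj grid;
    boundary cells simply have fewer neighbors."""
    cand = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [(a, b) for a, b in cand if 0 <= a < ni and 0 <= b < nj]

# Unstructured grid: an explicit connectivity table maps each cell to
# the point indices that define it; neighbor identification is no
# longer trivial and requires searching for cells sharing an edge
# (two common points, for triangles in 2D).
CELLS = {0: (0, 1, 2), 1: (1, 3, 2), 2: (3, 4, 2)}  # three triangles

def unstructured_neighbors(cell_id):
    pts = set(CELLS[cell_id])
    return [c for c, p in CELLS.items()
            if c != cell_id and len(pts & set(p)) == 2]
```

For example, `structured_neighbors(0, 0, 3, 3)` returns the two in-bounds neighbors of a corner cell, while `unstructured_neighbors(1)` scans the connectivity table to find the triangles sharing an edge with triangle 1.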
Grist : grid-based data mining for astronomy
NASA Technical Reports Server (NTRS)
Jacob, Joseph C.; Katz, Daniel S.; Miller, Craig D.; Walia, Harshpreet; Williams, Roy; Djorgovski, S. George; Graham, Matthew J.; Mahabal, Ashish; Babu, Jogesh; Berk, Daniel E. Vanden;
2004-01-01
The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the 'hyperatlas' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.
Grist: Grid-based Data Mining for Astronomy
NASA Astrophysics Data System (ADS)
Jacob, J. C.; Katz, D. S.; Miller, C. D.; Walia, H.; Williams, R. D.; Djorgovski, S. G.; Graham, M. J.; Mahabal, A. A.; Babu, G. J.; vanden Berk, D. E.; Nichol, R.
2005-12-01
The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the "hyperatlas" project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.
Collaboration in a Wireless Grid Innovation Testbed by Virtual Consortium
NASA Astrophysics Data System (ADS)
Treglia, Joseph; Ramnarine-Rieks, Angela; McKnight, Lee
This paper describes the formation of the Wireless Grid Innovation Testbed (WGiT) coordinated by a virtual consortium involving academic and non-academic entities. Syracuse University and Virginia Tech are primary university partners with several other academic, government, and corporate partners. Objectives include: 1) coordinating knowledge sharing, 2) defining key parameters for wireless grids network applications, 3) dynamically connecting wired and wireless devices, content and users, 4) linking to VT-CORNET, Virginia Tech Cognitive Radio Network Testbed, 5) forming ad hoc networks or grids of mobile and fixed devices without a dedicated server, 6) deepening understanding of wireless grid application, device, network, user and market behavior through academic, trade and popular publications including online media, 7) identifying policy that may enable evaluated innovations to enter US and international markets and 8) implementation and evaluation of the international virtual collaborative process.
Role of EGF-related Growth Factor Cripto in Murine Mammary Tumorigenesis
1998-10-01
overhaul or (ii) is a release or disclosure of technical data (other than detailed manufacturing or process data) to, or use of such data by, a foreign ... 5101-5107 (1997). 8. Thinakaran, G. et al. Endoproteolysis of presenilin 1 and accumulation of processed derivatives in ... trically expressed in a ... streak and head process distally, but is ... proximally towards the embryonic/extra- (Fig. 1k, l). Cripto expression disappears completely by the late ...
Hypersonic code efficiency and validation studies
NASA Technical Reports Server (NTRS)
Bennett, Bradford C.
1992-01-01
Renewed interest in hypersonic and supersonic flows spurred the development of the Compressible Navier-Stokes (CNS) code. Originally developed for external flows, CNS was modified to enable it to also be applied to internal high-speed flows. In the initial phase of this study, CNS was applied to internal flow applications and fellow researchers were taught to run CNS. The second phase of this research was the development of surface grids over various aircraft configurations for the High Speed Research Program (HSRP). The complex nature of these configurations required the development of improved surface grid generation techniques. A significant portion of the grid generation effort was devoted to testing and recommending modifications to early versions of the S3D surface grid generation code.
Research on Resilience of Power Systems Under Natural Disasters—A Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yezhou; Chen, Chen; Wang, Jianhui
2016-03-01
Natural disasters can cause large blackouts. Research into natural disaster impacts on electric power systems is emerging to understand the causes of the blackouts, explore ways to prepare and harden the grid, and increase the resilience of the power grid under such events. At the same time, new technologies such as smart grid, micro grid, and wide area monitoring applications could increase situational awareness as well as enable faster restoration of the system. This paper aims to consolidate and review the progress of the research field towards methods and tools of forecasting natural disaster related power system disturbances, hardening and pre-storm operations, and restoration models. Challenges and future research opportunities are also presented in the paper.
How to engage end-users in smart energy behaviour?
NASA Astrophysics Data System (ADS)
Valkering, Pieter; Laes, Erik; Kessels, Kris; Uyterlinde, Matthijs; Straver, Koen
2014-12-01
End users will play a crucial role in upcoming smart grids that aim to link end-users and energy providers in a better balanced and more efficient electricity system. Within this context, this paper aims to deliver a coherent view on current good practice in end-user engagement in smart grid projects. It draws from a recent review of theoretical insights from sustainable consumption behaviour, social marketing and innovation systems, and empirical insights from recent smart grid projects, to create an inventory of common motivators, enablers and barriers of behavioural change, and the end-user engagement principles that can be derived from that. We conclude with identifying current research challenges as input for a research agenda on end-user engagement in smart grids.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.;
2016-01-01
This manual describes the installation and execution of FUN3D version 12.9, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.;
2017-01-01
This manual describes the installation and execution of FUN3D version 13.2, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.;
2015-01-01
This manual describes the installation and execution of FUN3D version 12.6, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.;
2015-01-01
This manual describes the installation and execution of FUN3D version 12.7, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.;
2014-01-01
This manual describes the installation and execution of FUN3D version 12.5, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.;
2015-01-01
This manual describes the installation and execution of FUN3D version 12.8, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.;
2014-01-01
This manual describes the installation and execution of FUN3D version 12.4, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.;
2017-01-01
This manual describes the installation and execution of FUN3D version 13.1, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bill; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.;
2016-01-01
This manual describes the installation and execution of FUN3D version 13.0, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
A Study of ATLAS Grid Performance for Distributed Analysis
NASA Astrophysics Data System (ADS)
Panitkin, Sergey; Fine, Valery; Wenaus, Torre
2012-12-01
In the past two years the ATLAS Collaboration at the LHC has collected a large volume of data and published a number of groundbreaking papers. The Grid-based ATLAS distributed computing infrastructure played a crucial role in enabling timely analysis of the data. We present a study of the performance and usage of the ATLAS Grid as a platform for physics analysis in 2011. This includes studies of general properties as well as timing properties of user jobs (wait time, run time, etc.). These studies are based on mining data archived by the PanDA workload management system.
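The timing properties mined from archived job records reduce to simple timestamp arithmetic. A minimal sketch, assuming hypothetical field names and ISO timestamps rather than the actual PanDA archive schema:

```python
from datetime import datetime

def job_timing(records):
    """Compute wait and run times (seconds) from archived job records.
    Each record is a dict with ISO timestamps 'submitted', 'started',
    'finished' -- field names are illustrative, not the PanDA schema."""
    stats = []
    for r in records:
        sub = datetime.fromisoformat(r["submitted"])
        sta = datetime.fromisoformat(r["started"])
        fin = datetime.fromisoformat(r["finished"])
        stats.append({
            "wait_s": (sta - sub).total_seconds(),  # time in queue
            "run_s": (fin - sta).total_seconds(),   # time executing
        })
    return stats

# Example record: 5 minutes queued, then 1 hour running
stats = job_timing([{
    "submitted": "2011-06-01T12:00:00",
    "started": "2011-06-01T12:05:00",
    "finished": "2011-06-01T13:05:00",
}])
```

Aggregating such per-job numbers over the archive yields the wait-time and run-time distributions the study describes.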
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.;
2018-01-01
This manual describes the installation and execution of FUN3D version 13.3, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
Using Micro-Synchrophasor Data for Advanced Distribution Grid Planning and Operations Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Emma; Kiliccote, Sila; McParland, Charles
2014-07-01
This report reviews the potential for distribution-grid phase-angle data that will be available from new micro-synchrophasors (µPMUs) to be utilized in existing distribution-grid planning and operations analysis. These data could augment the current diagnostic capabilities of grid analysis software, used in both planning and operations for applications such as fault location, and provide data for more accurate modeling of the distribution system. µPMUs are new distribution-grid sensors that will advance measurement and diagnostic capabilities and provide improved visibility of the distribution grid, enabling analysis of the grid's increasingly complex loads, which include features such as large volumes of distributed generation. Large volumes of DG lead to concerns about continued reliable operation of the grid, due to changing power-flow characteristics and active generation with its own protection and control capabilities. Using µPMU data on the change in voltage phase angle between two points, in conjunction with new and existing distribution-grid planning and operational tools, is expected to enable model validation, state estimation, fault location, and renewable resource/load characterization. Our findings include: data measurement is outstripping the processing capabilities of planning and operational tools; not every tool can visualize a voltage phase-angle measurement to the degree of accuracy measured by advanced sensors, and the degree of accuracy in measurement required for the distribution grid is not defined; solving methods cannot handle the high volumes of data generated by modern sensors, so new models and solving methods (such as graph trace analysis) are needed; and standardization of sensor-data communications platforms in planning and applications tools would allow integration of different vendors' sensors and advanced measurement devices.
In addition, data from advanced sources such as µPMUs could be used to validate models to improve and ensure accuracy, providing information on normally estimated values such as underground conductor impedance and the characterization of complex loads. Although the input of high-fidelity data to existing tools will be challenging, µPMU data on phase angle (as well as other data from advanced sensors) will be useful for basic operational decisions that are based on a trend of changing data.
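The core quantity µPMUs add is the voltage phase-angle difference between two points, which in the lossless short-line approximation maps directly to real power flow. A minimal sketch of that classic power-angle relation; the feeder voltages and reactance in the example are illustrative values, not measured data:

```python
import math

def line_power_flow(v1, v2, theta1_deg, theta2_deg, x_ohm):
    """Estimate real power flow (W) across a line segment from
    synchrophasor voltage magnitudes (V) and phase angles (degrees),
    using the lossless approximation P = V1*V2*sin(theta1 - theta2)/X."""
    delta = math.radians(theta1_deg - theta2_deg)
    return v1 * v2 * math.sin(delta) / x_ohm

# Example: a 0.5-degree angle difference across a 0.5-ohm segment
p = line_power_flow(7200.0, 7190.0, 0.5, 0.0, 0.5)
```

The tiny angle differences involved (fractions of a degree on a distribution feeder) are why the report stresses measurement accuracy requirements.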
Development of a Web Based Simulating System for Earthquake Modeling on the Grid
NASA Astrophysics Data System (ADS)
Seber, D.; Youn, C.; Kaiser, T.
2007-12-01
Existing cyberinfrastructure-based information, data and computational networks now allow development of state-of-the-art, user-friendly simulation environments that democratize access to high-end computational environments and provide new research opportunities for many research and educational communities. Within the Geosciences cyberinfrastructure network, GEON, we have developed the SYNSEIS (SYNthetic SEISmogram) toolkit to enable efficient computations of 2D and 3D seismic waveforms for a variety of research purposes, especially to help analyze EarthScope's USArray seismic data in a speedy and efficient environment. The underlying simulation software in SYNSEIS is a finite difference code, E3D, developed by LLNL (S. Larsen). The code is embedded within the SYNSEIS portlet environment and is used by our toolkit to simulate seismic waveforms of earthquakes at regional distances (<1000 km). Architecturally, SYNSEIS uses both Web Service and Grid computing resources in a portal-based work environment and has a built-in access mechanism to connect to national supercomputer centers as well as to a dedicated, small-scale compute cluster for its runs. Even though Grid computing is well established in many computing communities, its use among domain scientists is still not trivial because of the multiple levels of complexity encountered. We grid-enabled E3D using our own XML input dialect that includes geological models accessible through standard Web services within the GEON network. The XML inputs for this application contain structural geometries, source parameters, seismic velocity, density, and attenuation values, the number of time steps to compute, and the number of stations. By enabling portal-based access to such a computational environment, coupled with a dynamic user interface, we enable a large user community to take advantage of such high-end calculations in their research and educational activities.
Our system can be used to promote an efficient and effective modeling environment to help scientists as well as educators in their daily activities and speed up the scientific discovery process.
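A simulation request of the kind described above can be assembled programmatically. A sketch using hypothetical element and attribute names, since the actual SYNSEIS input schema is defined by the GEON services:

```python
import xml.etree.ElementTree as ET

def build_e3d_request(velocity, density, attenuation, time_steps, stations):
    """Assemble a simulation request in the spirit of the SYNSEIS XML
    input dialect. Element names here are illustrative placeholders,
    not the real schema."""
    root = ET.Element("e3d_request")
    model = ET.SubElement(root, "model")
    ET.SubElement(model, "velocity").text = str(velocity)      # km/s
    ET.SubElement(model, "density").text = str(density)        # g/cm^3
    ET.SubElement(model, "attenuation").text = str(attenuation)
    run = ET.SubElement(root, "run")
    run.set("time_steps", str(time_steps))
    for name in stations:
        ET.SubElement(root, "station", name=name)
    return ET.tostring(root, encoding="unicode")

xml_doc = build_e3d_request(3.5, 2.7, 600, 4000, ["ANMO", "TUC"])
```

Wrapping the inputs this way is what lets a portal validate and submit runs on a user's behalf without exposing the underlying Grid machinery.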
Geometry definition and grid generation for a complete fighter aircraft
NASA Technical Reports Server (NTRS)
Edwards, T. A.
1986-01-01
Recent advances in computing power and numerical solution procedures have enabled computational fluid dynamicists to attempt increasingly difficult problems. In particular, efforts are focusing on computations of complex three-dimensional flow fields about realistic aerodynamic bodies. To perform such computations, a very accurate and detailed description of the surface geometry must be provided, and a three-dimensional grid must be generated in the space around the body. The geometry must be supplied in a format compatible with the grid generation requirements, and must be verified to be free of inconsistencies. This paper presents a procedure for performing the geometry definition of a fighter aircraft that makes use of a commercial computer-aided design/computer-aided manufacturing system. Furthermore, visual representations of the geometry are generated using a computer graphics system for verification of the body definition. Finally, the three-dimensional grids for fighter-like aircraft are generated by means of an efficient new parabolic grid generation method. This method exhibits good control of grid quality.
Grid Stiffened Structure Analysis Tool
NASA Technical Reports Server (NTRS)
1999-01-01
The Grid Stiffened Analysis Tool contract is a contract performed by Boeing under NASA purchase order H30249D. The contract calls for a "best effort" study comprising two tasks: (1) create documentation for a composite grid-stiffened structure analysis tool, in the form of a Microsoft Excel spreadsheet, that was originally developed at Stanford University and later further developed by the Air Force; and (2) write a program that functions as a NASTRAN pre-processor to generate an FEM model for grid-stiffened structures. In performing this contract, Task 1 was given higher priority because it enables NASA to make efficient use of a unique tool it already has; Task 2 was proposed by Boeing because it would also benefit the analysis of composite grid-stiffened structures, specifically in generating models for preliminary design studies. The contract is now complete; this package includes copies of the user's documentation for Task 1 and a CD-ROM and diskette with an electronic copy of the user's documentation and an updated version of the "GRID 99" spreadsheet.
Geometry definition and grid generation for a complete fighter aircraft
NASA Technical Reports Server (NTRS)
Edwards, Thomas A.
1986-01-01
Recent advances in computing power and numerical solution procedures have enabled computational fluid dynamicists to attempt increasingly difficult problems. In particular, efforts are focusing on computations of complex three-dimensional flow fields about realistic aerodynamic bodies. To perform such computations, a very accurate and detailed description of the surface geometry must be provided, and a three-dimensional grid must be generated in the space around the body. The geometry must be supplied in a format compatible with the grid generation requirements, and must be verified to be free of inconsistencies. A procedure for performing the geometry definition of a fighter aircraft that makes use of a commercial computer-aided design/computer-aided manufacturing system is presented. Furthermore, visual representations of the geometry are generated using a computer graphics system for verification of the body definition. Finally, the three-dimensional grids for fighter-like aircraft are generated by means of an efficient new parabolic grid generation method. This method exhibits good control of grid quality.
GridPix detectors: Production and beam test results
NASA Astrophysics Data System (ADS)
Koppert, W. J. C.; van Bakel, N.; Bilevych, Y.; Colas, P.; Desch, K.; Fransen, M.; van der Graaf, H.; Hartjes, F.; Hessey, N. P.; Kaminski, J.; Schmitz, J.; Schön, R.; Zappon, F.
2013-12-01
The innovative GridPix detector is a Time Projection Chamber (TPC) that is read out with a Timepix-1 pixel chip. Using wafer post-processing techniques, an aluminium grid is placed on top of the chip. When operated, the electric field between the grid and the chip is sufficient to create electron-induced avalanches, which are detected by the pixels. The time-to-digital converter (TDC) records the drift time, enabling the reconstruction of high-precision 3D track segments. Recently, GridPixes were produced at full wafer scale to meet the demand for more reliable and cheaper devices in large quantities. In a recent beam test, the contributions of both diffusion and time walk to the spatial and angular resolutions of a GridPix detector with a 1.2 mm drift gap were studied in detail. In addition, long-term tests show that in a significant fraction of the chips the protection layer successfully quenches discharges, preventing harm to the chip.
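Reconstructing the third track coordinate from the TDC amounts to scaling the recorded drift time by the electron drift velocity. A minimal sketch; the clock frequency and drift velocity used here are illustrative stand-ins, not the actual Timepix or gas-mixture parameters:

```python
def reconstruct_z(tdc_counts, clock_mhz=100.0, drift_velocity_um_ns=40.0):
    """Convert a TDC drift-time measurement into a drift distance (mm).
    Assumed numbers (clock frequency, drift velocity) are illustrative."""
    drift_time_ns = tdc_counts * (1000.0 / clock_mhz)       # ns per count
    return drift_time_ns * drift_velocity_um_ns / 1000.0    # um -> mm
```

In practice the time-walk correction studied in the beam test shifts `drift_time_ns` as a function of pulse height before this conversion.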
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markel, T.; Meintz, A.; Hardy, K.
2015-05-28
The report begins with a discussion of the current state of the energy and transportation systems, followed by a summary of some VGI scenarios and opportunities. The current efforts to create foundational interface standards are detailed, and the requirements for enabling PEVs as a grid resource are presented. Existing technology demonstrations that include vehicle-to-grid functions are summarized. The report also includes a data-based discussion of the magnitude and variability of PEVs as a grid resource, followed by an overview of existing simulation tools that can be used to explore the expansion of VGI to larger grid functions that might offer system and customer value. The document concludes with a summary of the requirements and potential action items that would support greater adoption of VGI.
Interplay Between Energy-Market Dynamics and Physical Stability of a Smart Power Grid
NASA Astrophysics Data System (ADS)
Picozzi, Sergio; Mammoli, Andrea; Sorrentino, Francesco
2013-03-01
A smart power grid is being envisioned for the future which, among other features, should enable users to play the dual role of consumers as well as producers and traders of energy, thanks to emerging renewable energy production and energy storage technologies. As a complex dynamical system, any power grid is subject to physical instabilities. With existing grids, such instabilities tend to be caused by natural disasters, human errors, or weather-related peaks in demand. In this work we analyze the impact, upon the stability of a smart grid, of the energy-market dynamics arising from users' ability to buy from and sell energy to other users. The stability analysis of the resulting dynamical system is performed assuming different proposed models for this market of the future, and the corresponding stability regions in parameter space are identified. We test our theoretical findings by comparing them with data collected from some existing prototype systems.
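For a linearized two-state model of the kind described, identifying the stability region reduces to the Routh-Hurwitz conditions on the Jacobian of the coupled dynamics. A minimal sketch with hypothetical coupling coefficients, not the models proposed in the paper:

```python
def stable_2x2(a, b, c, d):
    """Routh-Hurwitz test for the 2x2 Jacobian [[a, b], [c, d]] of a
    linearized grid/market model: both eigenvalues have negative real
    part iff trace < 0 and determinant > 0."""
    trace = a + d
    det = a * d - b * c
    return trace < 0 and det > 0

# Toy coupling (hypothetical coefficients): state x = frequency
# deviation, state y = price deviation; market feedback enters via b, c.
ok = stable_2x2(-1.0, 0.5, -0.8, -0.3)
```

Sweeping the market-feedback coefficients and recording where `stable_2x2` flips traces out the stability region in parameter space.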
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bracho, Riccardo; Linvill, Carl; Sedano, Richard
With the vision to transform the power sector, Mexico included deployment of smart grid technologies in its new laws and regulations and granted various authorities to the Ministry of Energy and the Energy Regulatory Commission to enact public policies and regulation. The use of smart grid technologies can have a significant impact on the integration of variable renewable energy resources while maintaining reliability and stability of the system, significantly reducing technical and non-technical electricity losses in the grid, improving cyber security, and allowing consumers to make distributed generation and demand response decisions. This report describes for Mexico's Ministry of Energy (SENER) an overall approach (Optimal Feasible Pathway) for moving forward with smart grid policy development in Mexico to enable increasing electric generation from renewable energy in a way that optimizes system stability and reliability in an efficient and cost-effective manner.
Advances in Software Tools for Pre-processing and Post-processing of Overset Grid Computations
NASA Technical Reports Server (NTRS)
Chan, William M.
2004-01-01
Recent developments in three pieces of software for performing pre-processing and post-processing work on numerical computations using overset grids are presented. The first is the OVERGRID graphical interface which provides a unified environment for the visualization, manipulation, generation and diagnostics of geometry and grids. Modules are also available for automatic boundary conditions detection, flow solver input preparation, multiple component dynamics input preparation and dynamics animation, simple solution viewing for moving components, and debris trajectory analysis input preparation. The second is a grid generation script library that enables rapid creation of grid generation scripts. A sample of recent applications will be described. The third is the OVERPLOT graphical interface for displaying and analyzing history files generated by the flow solver. Data displayed include residuals, component forces and moments, number of supersonic and reverse flow points, and various dynamics parameters.
Unlocking the potential of smart grid technologies with behavioral science
Sintov, Nicole D.; Schultz, P. Wesley
2015-01-01
Smart grid systems aim to provide a more stable and adaptable electricity infrastructure, and to maximize energy efficiency. Grid-linked technologies vary widely in form and function, but generally share common potentials: to reduce energy consumption via efficiency and/or curtailment, to shift use to off-peak times of day, and to enable distributed storage and generation options. Although end users are central players in these systems, they are sometimes not central considerations in technology or program design, and in some cases, their motivations for participating in such systems are not fully appreciated. Behavioral science can be instrumental in engaging end-users and maximizing the impact of smart grid technologies. In this paper, we present emerging technologies made possible by a smart grid infrastructure, and for each we highlight ways in which behavioral science can be applied to enhance their impact on energy savings. PMID:25914666
Unlocking the potential of smart grid technologies with behavioral science.
Sintov, Nicole D; Schultz, P Wesley
2015-01-01
Smart grid systems aim to provide a more stable and adaptable electricity infrastructure, and to maximize energy efficiency. Grid-linked technologies vary widely in form and function, but generally share common potentials: to reduce energy consumption via efficiency and/or curtailment, to shift use to off-peak times of day, and to enable distributed storage and generation options. Although end users are central players in these systems, they are sometimes not central considerations in technology or program design, and in some cases, their motivations for participating in such systems are not fully appreciated. Behavioral science can be instrumental in engaging end-users and maximizing the impact of smart grid technologies. In this paper, we present emerging technologies made possible by a smart grid infrastructure, and for each we highlight ways in which behavioral science can be applied to enhance their impact on energy savings.
Unlocking the potential of smart grid technologies with behavioral science
Sintov, Nicole D.; Schultz, P. Wesley
2015-04-09
Smart grid systems aim to provide a more stable and adaptable electricity infrastructure, and to maximize energy efficiency. Grid-linked technologies vary widely in form and function, but generally share common potentials: to reduce energy consumption via efficiency and/or curtailment, to shift use to off-peak times of day, and to enable distributed storage and generation options. Although end users are central players in these systems, they are sometimes not central considerations in technology or program design, and in some cases, their motivations for participating in such systems are not fully appreciated. Behavioral science can be instrumental in engaging end-users and maximizing the impact of smart grid technologies. In this study, we present emerging technologies made possible by a smart grid infrastructure, and for each we highlight ways in which behavioral science can be applied to enhance their impact on energy savings.
Unlocking the potential of smart grid technologies with behavioral science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sintov, Nicole D.; Schultz, P. Wesley
Smart grid systems aim to provide a more stable and adaptable electricity infrastructure, and to maximize energy efficiency. Grid-linked technologies vary widely in form and function, but generally share common potentials: to reduce energy consumption via efficiency and/or curtailment, to shift use to off-peak times of day, and to enable distributed storage and generation options. Although end users are central players in these systems, they are sometimes not central considerations in technology or program design, and in some cases, their motivations for participating in such systems are not fully appreciated. Behavioral science can be instrumental in engaging end-users and maximizing the impact of smart grid technologies. In this study, we present emerging technologies made possible by a smart grid infrastructure, and for each we highlight ways in which behavioral science can be applied to enhance their impact on energy savings.
Using OSG Computing Resources with (iLC)Dirac
NASA Astrophysics Data System (ADS)
Sailer, A.; Petric, M.; CLICdp Collaboration
2017-10-01
CPU cycles for small experiments and projects can be scarce, thus making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called 'SiteDirectors', which submit directly to the local batch system. This in turn requires additional dedicated effort for small experiments at the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows access to them from within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure with the DIRAC software. To allow sending jobs to HTCondor-CE and legacy Globus computing elements from inside DIRAC, the required wrapper classes were developed. Not only is the usage of these types of computing elements now completely transparent for all DIRAC instances, which makes DIRAC a flexible solution for OSG-based virtual organisations, but it also allows LCG grid sites to move to the HTCondor-CE software without shutting DIRAC-based VOs out of their site. In these proceedings we detail how we interfaced the DIRAC system to the HTCondor-CE and Globus computing elements, explain the obstacles encountered and the solutions developed, and describe how the linear collider community uses resources in the OSG.
Grid-enabled mammographic auditing and training system
NASA Astrophysics Data System (ADS)
Yap, M. H.; Gale, A. G.
2008-03-01
Effective use of new technologies to support healthcare initiatives is important, and current research is moving towards implementing secure grid-enabled healthcare provision. In the UK, a large-scale collaborative research project (GIMI: Generic Infrastructures for Medical Informatics), which is concerned with the development of a secure IT infrastructure to support very widespread medical research across the country, is underway. In the UK, there are some 109 breast screening centers and a growing number of individuals (circa 650) nationally performing approximately 1.5 million screening examinations per year. At the same time, there is a serious and ongoing national workforce issue in screening, which has seen a loss of consultant mammographers and a growth in specially trained technologists and other non-radiologists. Thus there is a need to offer effective and efficient mammographic training so as to maintain high levels of screening skills. Consequently, a grid-based system has been proposed, which has the benefit of offering very large volumes of training cases that mammographers can access anytime and anywhere. A database of screening cases, spread geographically across three university systems, is used as a test set of known cases. The GIMI mammography training system first audits these cases to ensure that they are appropriately described and annotated. Subsequently, the cases are utilized for training in the grid-based system that has been developed. This paper briefly reviews the background to the project and then details the ongoing research. In conclusion, we discuss the contributions, limitations, and future plans of such a grid-based approach.
Improving Grid Resilience through Informed Decision-making (IGRID)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnham, Laurie; Stamber, Kevin L.; Jeffers, Robert Fredric
The transformation of the distribution grid from a centralized to a decentralized architecture, with bi-directional power and data flows, is made possible by a surge in network intelligence and grid automation. While the changes are largely beneficial, the interface between grid operator and automated technologies is not well understood, nor are the benefits and risks of automation. Quantifying and understanding the latter is an important facet of grid resilience that needs to be fully investigated. The work described in this document represents the first empirical study aimed at identifying and mitigating the vulnerabilities posed by automation for a grid that for the foreseeable future will remain a human-in-the-loop critical infrastructure. Our scenario-based methodology enabled us to conduct a series of experimental studies to identify causal relationships between grid-operator performance and automated technologies, and to collect measurements of human performance as a function of automation. Our findings, though preliminary, suggest there are predictive patterns in the interplay between human operators and automation, patterns that can inform the rollout of distribution automation and the hiring and training of operators, and contribute in multiple and significant ways to the field of grid resilience.
NASA Astrophysics Data System (ADS)
Lai, Changliang; Wang, Junbiao; Liu, Chuang
2014-10-01
Six typical composite grid cylindrical shells are constructed by superimposing three basic types of ribs. Their buckling behavior and structural efficiency are then analyzed under axial compression, pure bending, torsion and transverse bending using finite element (FE) models. The FE models are created by a parametric FE modeling approach that defines models with the original naturally twisted geometry and orients the cross-sections of beam elements exactly. The approach is parameterized and coded in Patran Command Language (PCL). The modeling demonstrations indicate that the program enables efficient generation of FE models and facilitates parametric studies and design of grid shells. Using the program, the effects of helical angles on the buckling behavior of the six typical grid cylindrical shells are determined. The results indicate that the triangle grid and rotated triangle grid cylindrical shells are more efficient than the others under axial compression and pure bending, whereas under torsion and transverse bending the hexagon grid cylindrical shell is most efficient. Additionally, buckling mode shapes are compared, providing an understanding of composite grid cylindrical shells that is useful in the preliminary design of such structures.
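The parametric rib geometry underlying such FE models can be sketched directly: each helical rib is a set of nodes wrapped around the cylinder, later connected by beam elements. An illustrative sketch in Python, not the authors' PCL implementation:

```python
import math

def helical_rib_nodes(radius, length, helix_angle_deg, n_nodes=20, phase=0.0):
    """Generate (x, y, z) nodes along one helical rib of a grid
    cylindrical shell. The helix angle is measured from the cylinder
    axis; a parametric mesher would join consecutive nodes with beam
    elements. Illustrative geometry only."""
    beta = math.radians(helix_angle_deg)
    total_circ = length * math.tan(beta)   # circumferential sweep of the rib
    nodes = []
    for i in range(n_nodes):
        t = i / (n_nodes - 1)
        z = t * length
        theta = phase + t * total_circ / radius  # wrap around the shell
        nodes.append((radius * math.cos(theta), radius * math.sin(theta), z))
    return nodes

nodes = helical_rib_nodes(radius=1.0, length=2.0, helix_angle_deg=30.0)
```

Varying `helix_angle_deg` and `phase` over a family of ribs is the kind of parametric study the PCL program automates.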
2017-07-01
...forecasts and observations on a common grid, which enables the application of a number of different spatial verification methods that reveal various ... forecasts of continuous meteorological variables using categorical and object-based methods. White Sands Missile Range (NM): Army Research Laboratory (US) ... research version of the Weather Research and Forecasting Model adapted for generating short-range nowcasts, and gridded observations produced by the ...
Battery Energy Storage State-of-Charge Forecasting: Models, Optimization, and Accuracy
Rosewater, David; Ferreira, Summer; Schoenwald, David; ...
2018-01-25
Battery energy storage systems (BESS) are a critical technology for integrating high penetration renewable power on an intelligent electrical grid. As limited energy restricts the steady-state operational state-of-charge (SoC) of storage systems, SoC forecasting models are used to determine feasible charge and discharge schedules that supply grid services. Smart grid controllers use SoC forecasts to optimize BESS schedules to make grid operation more efficient and resilient. This study presents three advances in BESS state-of-charge forecasting. First, two forecasting models are reformulated to be conducive to parameter optimization. Second, a new method for selecting optimal parameter values based on operational data is presented. Last, a new framework for quantifying model accuracy is developed that enables a comparison between models, systems, and parameter selection methods. The accuracies achieved by both models, on two example battery systems, with each method of parameter selection are then compared in detail. The results of this analysis suggest variation in the suitability of these models for different battery types and applications. Finally, the proposed model formulations, optimization methods, and accuracy assessment framework can be used to improve the accuracy of SoC forecasts, enabling better control over BESS charge/discharge schedules.
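The SoC bookkeeping the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's formulation: a forecast model where the per-direction efficiencies are exactly the kind of parameters one would fit from operational data, and where capacity limits constrain the feasible schedule.

```python
# Hypothetical sketch of an SoC forecasting model: charge/discharge
# efficiencies are the tunable parameters; SoC is clamped to [0, 1].
def forecast_soc(soc0, power_schedule, capacity_kwh, dt_h=1.0,
                 eta_charge=0.95, eta_discharge=0.95):
    """Forecast SoC over a charge(+)/discharge(-) power schedule in kW."""
    soc = soc0
    trajectory = [soc]
    for p_kw in power_schedule:
        if p_kw >= 0:                      # charging: losses reduce stored energy
            delta = eta_charge * p_kw * dt_h / capacity_kwh
        else:                              # discharging: losses increase the draw
            delta = p_kw * dt_h / (eta_discharge * capacity_kwh)
        soc = min(1.0, max(0.0, soc + delta))
        trajectory.append(soc)
    return trajectory

traj = forecast_soc(0.5, [10, 10, -20, 0], capacity_kwh=100)
```

A grid controller would evaluate candidate schedules with such a model and reject any whose trajectory hits the SoC bounds.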
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, Salvador B.
Smart grids are a crucial component for enabling the nation’s future energy needs, as part of a modernization effort led by the Department of Energy. Smart grids and smart microgrids are being considered in niche applications, and as part of a comprehensive energy strategy to help manage the nation’s growing energy demands, for critical infrastructures, military installations, small rural communities, and large populations with limited water supplies. As part of a far-reaching strategic initiative, Sandia National Laboratories (SNL) presents herein a unique, three-pronged approach to integrate small modular reactors (SMRs) into microgrids, with the goal of providing economically-competitive, reliable, and secure energy to meet the nation’s needs. SNL’s triad methodology involves an innovative blend of smart microgrid technology, high performance computing (HPC), and advanced manufacturing (AM). In this report, Sandia’s current capabilities in those areas are summarized, as well as paths forward that will enable DOE to achieve its energy goals. In the area of smart grid/microgrid technology, Sandia’s current computational capabilities can model the entire grid, including temporal aspects and cyber security issues. Our tools include system development, integration, testing and evaluation, monitoring, and sustainment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghatikar, Girish; Mashayekh, Salman; Stadler, Michael
Distributed power systems in the U.S. and globally are evolving to provide reliable and clean energy to consumers. In California, existing regulations require significant increases in renewable generation, as well as identification of customer-side distributed energy resources (DER) controls, communication technologies, and standards for interconnection with the electric grid systems. As DER deployment expands, customer-side DER control and optimization will be critical for system flexibility and demand response (DR) participation, which improves the economic viability of DER systems. Current DER systems integration and communication challenges include leveraging the existing DER and DR technology and systems infrastructure, and enabling optimized cost, energy and carbon choices for customers to deploy interoperable grid transactions and renewable energy systems at scale. Our paper presents a cost-effective solution to these challenges by exploring communication technologies and information models for DER system integration and interoperability. This system uses open standards and optimization models for resource planning based on dynamic-pricing notifications and autonomous operations within various domains of the smart grid energy system. It identifies architectures and customer engagement strategies in dynamic DR pricing transactions to generate feedback information models for load flexibility, load profiles, and participation schedules. The models are tested at a real site in California—Fort Hunter Liggett (FHL). Furthermore, our results for FHL show that the model fits within the existing and new DR business models and networked systems for transactive energy concepts. Integrated energy systems, communication networks, and modeling tools that coordinate supply-side networks and DER will enable electric grid system operators to use DER for grid transactions in an integrated system.
Earth Science community support in the EGI-Inspire Project
NASA Astrophysics Data System (ADS)
Schwichtenberg, H.
2012-04-01
The Earth Science Grid community has been following its strategy of propagating Grid technology to the ES disciplines, setting up interactive collaboration among the members of the community and stimulating the interest of stakeholders at the political level for ten years now. This strategy was described in a roadmap published in the Earth Science Informatics journal. It was applied through different European Grid projects and led to a large Grid Earth Science VRC that covers a variety of ES disciplines, all of which face the same kinds of ICT problems. The penetration of Grid in the ES community is indicated by the variety of applications, the number of countries in which ES applications are ported, the number of papers in international journals and the number of related PhDs. Among the six virtual organisations belonging to ES, one, ESR, is generic. Three others -env.see-grid-sci.eu, meteo.see-grid-sci.eu and seismo.see-grid-sci.eu- are thematic and regional (South Eastern Europe) for environment, meteorology and seismology. A fifth VO, EGEODE, is for the users of the Geocluster software. There are also ES users in national VOs or VOs related to projects. The services for the ES task in EGI-Inspire concern the data that are a key part of any ES application. The ES community requires several interfaces to access data and metadata outside of the EGI infrastructure, e.g. by using grid-enabled database interfaces. The data centres have also developed service tools for basic research activities such as searching, browsing and downloading these datasets, but these are not accessible from applications executed on the Grid. The ES task in EGI-Inspire aims to make these tools accessible from the Grid.
In collaboration with GENESI-DR (Ground European Network for Earth Science Interoperations - Digital Repositories) this task is maintaining and evolving an interface in response to new requirements that will allow data in the GENESI-DR infrastructure to be accessed from EGI resources to enable future research activities by this HUC. The international climate community for IPCC has created the Earth System Grid (ESG) to store and share climate data. There is a need to interface ESG with EGI for climate studies - parametric, regional and impact aspects. Critical points concern the interoperability of the security mechanisms of both "organisations", data protection policy, data transfer, data storage and data caching. Presenter: Horst Schwichtenberg Co-Authors: Monique Petitdidier (IPSL), Andre Gemünd (SCAI), Wim Som de Cerff (KNMI), Michael Schnell (SCAI)
Auspice: Automatic Service Planning in Cloud/Grid Environments
NASA Astrophysics Data System (ADS)
Chiu, David; Agrawal, Gagan
Recent scientific advances have fostered a mounting number of services and data sets available for utilization. These resources, though scattered across disparate locations, are often loosely coupled both semantically and operationally. This loosely coupled relationship implies the possibility of linking together operations and data sets to answer queries. This task, generally known as automatic service composition, therefore abstracts the process of complex scientific workflow planning from the user. We have been exploring a metadata-driven approach toward automatic service workflow composition, among other enabling mechanisms, in our system, Auspice: Automatic Service Planning in Cloud/Grid Environments. In this paper, we present a complete overview of our system's unique features and outlooks for future deployment as the Cloud computing paradigm becomes increasingly prominent in enabling scientific computing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Youngho; Hur, Kyeon; Kang, Yong
This study investigates the emerging harmonic stability concerns to be addressed by grid planners in generation interconnection studies, owing to the increased adoption of renewable energy resources connected to the grid via power electronic converters. The wideband and high-frequency electromagnetic transient (EMT) characteristics of these converter-interfaced generators (CIGs) and their interaction with the grid impedance are not accurately captured in the typical dynamic studies conducted by grid planners. This paper thus identifies the desired components to be studied and subsequently develops a practical process for integrating a new CIG into a grid with the existing CIGs. The steps of this process are as follows: the impedance equation of a CIG using its control dynamics and an interface filter to the grid, for example, an LCL filter (inductor-capacitor-inductor type), is developed; an equivalent impedance model including the existing CIGs nearby and the grid observed from the point of common coupling are derived; the system stability for credible operating scenarios is assessed. Detailed EMT simulations validate the accuracy of the impedance models and stability assessment for various connection scenarios. Here, by complementing the conventional EMT simulation studies, the proposed analytical approach enables grid planners to identify critical design parameters for seamlessly integrating a new CIG and ensuring the reliability of the grid.
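The impedance-based screening step can be sketched as follows. This is a hypothetical simplification, not the paper's model: the grid impedance seen from the point of common coupling is divided by the converter-side (LCL-filter) impedance, and frequencies where the ratio magnitude reaches unity with little phase margin are flagged as harmonic-stability risks. All component values are illustrative.

```python
# Illustrative impedance-ratio screen; L1/L2/C and the grid inductance
# are made-up values, and converter control dynamics are omitted.
import cmath
import math

def lcl_impedance(f, l1=2e-3, l2=1e-3, c=10e-6):
    """Impedance of an ideal LCL filter at frequency f in Hz."""
    w = 2 * math.pi * f
    z_l1 = 1j * w * l1
    z_l2 = 1j * w * l2
    z_c = 1 / (1j * w * c)
    return z_l1 + (z_l2 * z_c) / (z_l2 + z_c)   # L1 in series with (L2 || C)

def risky_frequencies(freqs, grid_l=0.5e-3, margin_deg=30.0):
    """Flag frequencies where |Zgrid/Zcig| >= 1 and the phase margin is small."""
    flagged = []
    for f in freqs:
        ratio = (1j * 2 * math.pi * f * grid_l) / lcl_impedance(f)
        phase_margin = 180.0 - abs(math.degrees(cmath.phase(ratio)))
        if abs(ratio) >= 1.0 and phase_margin < margin_deg:
            flagged.append(f)
    return flagged
```

In the paper's workflow the converter impedance would come from the CIG's control dynamics rather than a passive filter alone, and the verdict would be confirmed by detailed EMT simulation.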
Design and Implementation of Real-Time Off-Grid Detection Tool Based on FNET/GridEye
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Jiahui; Zhang, Ye; Liu, Yilu
2014-01-01
Real-time situational awareness tools are of critical importance to power system operators, especially during emergencies. The availability of electric power has become a linchpin of most post-disaster response efforts, as it is the primary dependency for public and private sector services, as well as individuals. Knowledge of the scope and extent of facilities impacted, as well as the duration of their dependence on backup power, enables emergency response officials to plan for contingencies and provide better overall response. Based on real-time data acquired by Frequency Disturbance Recorders (FDRs) deployed in the North American power grid, a real-time detection method is proposed. This method monitors critical electrical loads and detects the transition of these loads from an on-grid state, where the loads are fed by the power grid, to an off-grid state, where the loads are fed by an Uninterrupted Power Supply (UPS) or a backup generation system. The details of the proposed detection algorithm are presented, and some case studies and off-grid detection scenarios are also provided to verify its effectiveness and robustness. Meanwhile, the algorithm has already been implemented based on the Grid Solutions Framework (GSF) and has effectively detected several off-grid situations.
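The underlying idea, detecting when a load's locally measured frequency stops tracking the interconnection-wide reference, can be sketched as below. The window length and threshold are assumptions for illustration, not the paper's tuned values.

```python
# Hypothetical off-grid detector: a load on backup generation drifts away
# from the interconnection reference frequency measured by the FDR network.
def detect_off_grid(local_hz, reference_hz, window=5, threshold_hz=0.05):
    """Return the first sample index at which the load appears off-grid."""
    for i in range(len(local_hz) - window + 1):
        diffs = [abs(l - r) for l, r in
                 zip(local_hz[i:i + window], reference_hz[i:i + window])]
        if sum(diffs) / window > threshold_hz:
            return i
    return None

# On-grid samples track the reference; a backup generator drifts on its own.
ref   = [60.00, 60.01, 59.99, 60.00, 60.01, 60.00, 59.99, 60.00, 60.01, 60.00]
local = [60.00, 60.01, 59.99, 60.00, 60.01, 60.30, 60.35, 60.40, 60.38, 60.36]
detect_off_grid(local, ref)  # flags the first window covering the drift
```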
Accessing eSDO Solar Image Processing and Visualization through AstroGrid
NASA Astrophysics Data System (ADS)
Auden, E.; Dalla, S.
2008-08-01
The eSDO project is funded by the UK's Science and Technology Facilities Council (STFC) to integrate Solar Dynamics Observatory (SDO) data, algorithms, and visualization tools with the UK's Virtual Observatory project, AstroGrid. In preparation for the SDO launch in January 2009, the eSDO team has developed nine algorithms covering coronal behaviour, feature recognition, and global / local helioseismology. Each of these algorithms has been deployed as an AstroGrid Common Execution Architecture (CEA) application so that they can be included in complex VO workflows. In addition, the PLASTIC-enabled eSDO "Streaming Tool" online movie application allows users to search multi-instrument solar archives through AstroGrid web services and visualise the image data through galleries, an interactive movie viewing applet, and QuickTime movies generated on-the-fly.
A methodology toward manufacturing grid-based virtual enterprise operation platform
NASA Astrophysics Data System (ADS)
Tan, Wenan; Xu, Yicheng; Xu, Wei; Xu, Lida; Zhao, Xianhua; Wang, Li; Fu, Liuliu
2010-08-01
Virtual enterprises (VEs) have become one of the main types of organisations in the manufacturing sector, through which consortium companies organise their manufacturing activities. To be competitive, a VE relies on the complementary core competences of its members through resource sharing and agile manufacturing capacity. Manufacturing grid (M-Grid) is a platform in which production resources can be shared. In this article, an M-Grid-based VE operation platform (MGVEOP) is presented, as it enables the sharing of production resources among geographically distributed enterprises. The performance management system of the MGVEOP is based on the balanced scorecard and has the capacity of self-learning. The study shows that an MGVEOP can make a semi-automated process possible for a VE, and the proposed MGVEOP is efficient and agile.
Frequency Regulation Services from Connected Residential Devices: Short Paper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Jin, Xin; Vaidhynathan, Deepthi
In this paper, we demonstrate the potential benefits that residential buildings can provide for frequency regulation services in the electric power grid. In a hardware-in-the-loop (HIL) implementation, simulated homes along with a physical laboratory home are coordinated via a grid aggregator, and it is shown that their aggregate response has the potential to follow the regulation signal on a timescale of seconds. Connected (communication-enabled) devices in the National Renewable Energy Laboratory's (NREL's) Energy Systems Integration Facility (ESIF) received demand response (DR) requests from a grid aggregator, and the devices responded accordingly to meet the signal while satisfying user comfort bounds and physical hardware limitations. Future research will address the issues of cybersecurity threats, participation rates, and reducing equipment wear-and-tear while providing grid services.
Experiences of engineering Grid-based medical software.
Estrella, F; Hauer, T; McClatchey, R; Odeh, M; Rogulin, D; Solomonides, T
2007-08-01
Grid-based technologies are emerging as potential solutions for managing and collaboratively using distributed resources in the biomedical domain. Few examples exist, however, of successful implementations of Grid-enabled medical systems and even fewer have been deployed for evaluation in practice. The objective of this paper is to evaluate the use in clinical practice of a Grid-based imaging prototype and to establish directions for engineering future medical Grid developments and their subsequent deployment. The MammoGrid project has deployed a prototype system for clinicians using the Grid as its information infrastructure. To assist in the specification of the system requirements (and for the first time in healthgrid applications), use-case modelling has been carried out in close collaboration with clinicians and radiologists who had no prior experience of this modelling technique. A critical qualitative and, where possible, quantitative analysis of the MammoGrid prototype is presented, leading to a set of recommendations from the delivery of the first deployed Grid-based medical imaging application. We report critically on the application of software engineering techniques in the specification and implementation of the MammoGrid project and show that use-case modelling is a suitable vehicle for representing medical requirements and for communicating effectively with the clinical community. This paper also discusses the practical advantages and limitations of applying the Grid to real-life clinical applications and presents the consequent lessons learned. The work presented in this paper demonstrates that, given suitable commitment from collaborating radiologists, it is practical to deploy Grid-based medical imaging analysis applications in clinical practice, but that standardization and stability of the Grid software are a necessary prerequisite for successful healthgrids.
The MammoGrid prototype has therefore paved the way for further advanced Grid-based deployments in the medical and biomedical domains.
Rapid Structured Volume Grid Smoothing and Adaption Technique
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
2006-01-01
A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reductions in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time therefore enabled the assessment of approximately twice as many damage scenarios as previously possible during the allocated investigation time.
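The cited Taubin filter can be illustrated on a one-dimensional signal. This is a minimal sketch of the lambda|mu smoothing idea the abstract builds on, not the Volume Grid Manipulator implementation: alternating a positive (shrinking) and a negative (inflating) Laplacian step removes high-frequency noise without the net shrinkage of plain Laplacian smoothing. The grid-specific machinery (arclength coordinates, reverse transformation) is omitted.

```python
# 1-D Taubin lambda|mu smoothing sketch; lam and mu values are typical
# textbook choices, not taken from the paper.
def taubin_smooth(values, lam=0.33, mu=-0.34, iterations=10):
    v = list(values)
    for _ in range(iterations):
        for factor in (lam, mu):          # shrink step, then inflate step
            nxt = v[:]
            for i in range(1, len(v) - 1):
                laplacian = 0.5 * (v[i - 1] + v[i + 1]) - v[i]
                nxt[i] = v[i] + factor * laplacian
            v = nxt                        # endpoints stay fixed
    return v

noisy = [0.0, 1.2, 0.8, 1.4, 0.9, 1.3, 1.0, 2.0]
smooth = taubin_smooth(noisy)
```

For a volume grid the same filter would run along each computational coordinate direction in turn, which is what makes the approach fast compared with elliptic smoothing.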
Optimum Aggregation and Control of Spatially Distributed Flexible Resources in Smart Grid
Bhattarai, Bishnu; Mendaza, Iker Diaz de Cerio; Myers, Kurt S.; ...
2017-03-24
This paper presents an algorithm to optimally aggregate spatially distributed flexible resources at strategic microgrid/smart-grid locations. The aggregation reduces a distribution network having thousands of nodes to an equivalent network with a few aggregated nodes, thereby enabling distribution system operators (DSOs) to make faster operational decisions. Moreover, the aggregation enables flexibility from small distributed flexible resources to be traded to different power and energy markets. A hierarchical control architecture comprising a combination of centralized and decentralized control approaches is proposed to practically deploy the aggregated flexibility. The proposed method serves as a great operational tool for DSOs to decide the exact amount of required flexibilities from different network section(s) for solving grid constraint violations. The effectiveness of the proposed method is demonstrated through simulation of three operational scenarios in a real low voltage distribution system having high penetrations of electric vehicles and heat pumps. Finally, the simulation results demonstrated that the aggregation helps DSOs not only in taking faster operational decisions, but also in effectively utilizing the available flexibility.
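The core aggregation step, collapsing thousands of node-level flexibilities onto a few strategic aggregation points so the DSO can reason about an equivalent small network, can be sketched as below. The field names and example figures are assumptions for illustration, not the paper's data model.

```python
# Hypothetical aggregation of per-node up/down flexibility (kW) onto
# strategic aggregation points of the distribution network.
from collections import defaultdict

def aggregate_flexibility(nodes):
    """nodes: list of dicts with 'agg_point', 'up_kw', 'down_kw' keys."""
    agg = defaultdict(lambda: {"up_kw": 0.0, "down_kw": 0.0, "count": 0})
    for n in nodes:
        point = agg[n["agg_point"]]
        point["up_kw"] += n["up_kw"]
        point["down_kw"] += n["down_kw"]
        point["count"] += 1
    return dict(agg)

nodes = [
    {"agg_point": "A", "up_kw": 3.0, "down_kw": 1.5},   # e.g. a heat pump
    {"agg_point": "A", "up_kw": 7.0, "down_kw": 7.0},   # e.g. an EV charger
    {"agg_point": "B", "up_kw": 2.0, "down_kw": 0.5},
]
aggregate_flexibility(nodes)["A"]["up_kw"]  # 10.0
```

The paper's hierarchical controller would then dispatch these aggregated quantities centrally while the decentralized layer splits each set-point back across the individual devices.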
Using ESB and BPEL for Evolving Healthcare Systems Towards Pervasive, Grid-Enabled SOA
NASA Astrophysics Data System (ADS)
Koufi, V.; Malamateniou, F.; Papakonstantinou, D.; Vassilacopoulos, G.
Healthcare organizations often face the challenge of integrating diverse and geographically disparate information technology systems to respond to changing requirements and to exploit the capabilities of modern technologies. Hence, systems evolution, through modification and extension of the existing information technology infrastructure, becomes a necessity. Moreover, the availability of these systems at the point of care when needed is a vital issue for the quality of healthcare provided to patients. This chapter takes a process perspective of healthcare delivery within and across organizational boundaries and presents a disciplined approach for evolving healthcare systems towards a pervasive, grid-enabled service-oriented architecture using the enterprise service bus middleware technology for resolving integration issues, the business process execution language for supporting collaboration requirements and grid middleware technology for both addressing common SOA scalability requirements and complementing existing system functionality. In such an environment, appropriate security mechanisms must ensure authorized access to integrated healthcare services and data. To this end, a security framework addressing security aspects such as authorization and access control is also presented.
Boosting CSP Production with Thermal Energy Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denholm, P.; Mehos, M.
2012-06-01
Combining concentrating solar power (CSP) with thermal energy storage shows promise for increasing grid flexibility by providing firm system capacity with a high ramp rate and acceptable part-load operation. When backed by energy storage capability, CSP can supplement photovoltaics by adding generation from solar resources during periods of low solar insolation. The falling cost of solar photovoltaic (PV) - generated electricity has led to a rapid increase in the deployment of PV and projections that PV could play a significant role in the future U.S. electric sector. The solar resource itself is virtually unlimited; however, the actual contribution of PV electricity is limited by several factors related to the current grid. The first is the limited coincidence between the solar resource and normal electricity demand patterns. The second is the limited flexibility of conventional generators to accommodate this highly variable generation resource. At high penetration of solar generation, increased grid flexibility will be needed to fully utilize the variable and uncertain output from PV generation and to shift energy production to periods of high demand or reduced solar output. Energy storage is one way to increase grid flexibility, and many storage options are available or under development. In this article, however, we consider a technology already beginning to be used at scale - thermal energy storage (TES) deployed with concentrating solar power (CSP). PV and CSP are both deployable in areas of high direct normal irradiance such as the U.S. Southwest. The role of these two technologies is dependent on their costs and relative value, including how their value to the grid changes as a function of what percentage of total generation they contribute to the grid, and how they may actually work together to increase overall usefulness of the solar resource. Both PV and CSP use solar energy to generate electricity.
A key difference is the ability of CSP to utilize high-efficiency TES, which turns CSP into a partially dispatchable resource. The addition of TES produces additional value by shifting the delivery of solar energy to periods of peak demand, providing firm capacity and ancillary services, and reducing integration challenges. Given the dispatchability of CSP enabled by TES, it is possible that PV and CSP are at least partially complementary. The dispatchability of CSP with TES can enable higher overall penetration of the grid by solar energy by providing solar-generated electricity during periods of cloudy weather or at night, when PV-generated power is unavailable. Such systems also have the potential to improve grid flexibility, thereby enabling greater penetration of PV energy (and other variable generation sources such as wind) than if PV were deployed without CSP.
Grid Computing and Collaboration Technology in Support of Fusion Energy Sciences
NASA Astrophysics Data System (ADS)
Schissel, D. P.
2004-11-01
The SciDAC Initiative is creating a computational grid designed to advance scientific understanding in fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling, and allowing more efficient use of experimental facilities. The philosophy is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as easy-to-use, network-available services. Access to services is stressed rather than portability. Services share the same basic security infrastructure, so that stakeholders can control their own resources, which helps ensure fair use of those resources. The collaborative control room is being developed using the open-source Access Grid software that enables secure group-to-group collaboration with capabilities beyond teleconferencing, including application sharing and control. The ability to effectively integrate off-site scientists into a dynamic control room will be critical to the success of future international projects like ITER. Grid computing, the secure integration of computer systems over high-speed networks to provide on-demand access to data analysis capabilities and related functions, is being deployed as an alternative to traditional resource sharing among institutions. The first grid computational service deployed was the transport code TRANSP, which included tools for run preparation, submission, monitoring and management. This approach saves user sites the laborious effort of maintaining a complex code while at the same time reducing the burden on developers by avoiding the support of a large number of heterogeneous installations. This tutorial will present the philosophy behind an advanced collaborative environment, give specific examples, and discuss its usage beyond FES.
Yıldız-Atıkan, Başak; Karapınar, Bülent; Aydemir, Şöhret; Vardar, Fadıl
2015-01-01
Ventilator-associated pneumonia (VAP) is defined as pneumonia occurring at any point during mechanical ventilation. There is no optimal diagnostic method in current use, and in this study we aimed to compare two non-invasive methods used in the diagnosis of VAP in children. This prospective study was conducted in the eight-bed Pediatric Intensive Care Unit at Ege University Children's Hospital. Endotracheal aspiration (ETA) and non-bronchoscopic bronchoalveolar lavage (BAL) were performed in cases developing VAP after 48 hours of ventilation. Quantitative cultures were examined in the Ege University Department of Diagnostic Microbiology, Bacteriology Laboratory. Forty-one patients were enrolled in the study. The mean age of study subjects was 47.2±53.6 months. Of the 82 specimens taken with both methods, 28 were positive with both methods, 28 had a positive result with ETA and a negative result with non-bronchoscopic BAL, and both results were negative in 26 specimens. There were no patients whose respiratory specimen culture was negative with ETA and positive with non-bronchoscopic BAL. These results imply a significant difference between the two diagnostic methods (p < 0.001). Negative non-bronchoscopic BAL results were taken to indicate absence of VAP; therefore, ETA results were compared against this method. ETA's sensitivity, specificity, negative and positive predictive values were 100%, 50%, 100% and 48%, respectively. The study revealed the ease of use and the sensitivity of non-bronchoscopic BAL in comparison with ETA.
Elliptic Curve Cryptography-Based Authentication with Identity Protection for Smart Grids
Zhang, Liping; Tang, Shanyu; Luo, He
2016-01-01
In a smart grid, the power service provider enables the expected power generation amount to be measured according to current power consumption, thus stabilizing the power system. However, the data transmitted over smart grids are not protected, and thus suffer from several types of security threats and attacks. A robust and efficient authentication protocol should therefore be provided to strengthen the security of smart grid networks. As the Supervisory Control and Data Acquisition system provides the security protection between the control center and substations in most smart grid environments, we focus on how to secure the communications between the substations and smart appliances. Existing security approaches fail to address the performance-security balance. In this study, we suggest a mitigation authentication protocol based on Elliptic Curve Cryptography with privacy protection by using a tamper-resistant device at the smart appliance side to achieve a delicate balance between performance and security of smart grids. The proposed protocol provides some attractive features such as identity protection, mutual authentication and key agreement. Finally, we demonstrate the completeness of the proposed protocol using the Gong-Needham-Yahalom logic. PMID:27007951
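The protocol's core primitive, elliptic-curve point multiplication, can be illustrated with a toy Diffie-Hellman key agreement on a textbook curve. This is a minimal sketch only: the tiny curve, key sizes, and party roles are illustrative assumptions, not the paper's protocol, which additionally provides identity protection via a tamper-resistant device.

```python
# Toy ECDH key agreement on a tiny textbook curve (illustrative only; a real
# smart-grid deployment would use a standard curve via a vetted library).
P, A = 17, 2                  # curve y^2 = x^3 + 2x + 2 over F_17
G = (5, 1)                    # generator; the group has prime order 19

def ec_add(p1, p2):
    """Add two curve points (None encodes the point at infinity)."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

# Substation and appliance each pick a secret and exchange public points.
sk_sub, sk_app = 3, 7
pub_sub, pub_app = ec_mul(sk_sub, G), ec_mul(sk_app, G)
shared_sub = ec_mul(sk_sub, pub_app)   # substation's view of the shared key
shared_app = ec_mul(sk_app, pub_sub)   # appliance's view
print(shared_sub == shared_app)        # True: both derive the same point
```

Both sides arrive at the same shared point because scalar multiplication commutes, which is the property the key-agreement phase of any ECC protocol rests on.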
Smart Grid Development Issues for Terrestrial and Space Applications
NASA Technical Reports Server (NTRS)
Soeder, James F.
2011-01-01
The development of the so-called Smart Grid has as many definitions as individuals working in the area. Depending on the technology or technologies of interest, be it high-speed communication, renewable generation, smart meters, energy storage, advanced sensors, etc., any one of them can become the defining characteristic of the Smart Grid. In reality the smart grid encompasses all of these items and quite a bit more. This discussion attempts to look at the needs of the grid of the future, such as increased power flow capability, use of renewable energy, increased security and efficiency, and common power and data standards. It also shows how many of these issues are common to the needs of NASA's future exploration programs. A common theme for addressing both terrestrial and space exploration issues is to develop micro-grids, which promise to enable load leveling of large power generation facilities. However, for microgrids to realize their promise, there needs to be a holistic systems approach to their development and integration. The overall system integration issues are presented along with potential solution methodologies.
The implementation of an aeronautical CFD flow code onto distributed memory parallel systems
NASA Astrophysics Data System (ADS)
Ierotheou, C. S.; Forsey, C. R.; Leatham, M.
2000-04-01
The parallelization of an industrially important in-house computational fluid dynamics (CFD) code for calculating the airflow over complex aircraft configurations using the Euler or Navier-Stokes equations is presented. The code discussed is the flow solver module of the SAUNA CFD suite. This suite uses a novel grid system that may include block-structured hexahedral or pyramidal grids, unstructured tetrahedral grids or a hybrid combination of both. To assist in the rapid convergence to a solution, a number of convergence acceleration techniques are employed including implicit residual smoothing and a multigrid full approximation storage scheme (FAS). Key features of the parallelization approach are the use of domain decomposition and encapsulated message passing to enable the execution in parallel using a single programme multiple data (SPMD) paradigm. In the case where a hybrid grid is used, a unified grid partitioning scheme is employed to define the decomposition of the mesh. The parallel code has been tested using both structured and hybrid grids on a number of different distributed memory parallel systems and is now routinely used to perform industrial scale aeronautical simulations.
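The domain-decomposition and halo-exchange pattern behind the SPMD approach can be sketched in miniature. The sketch below simulates the exchange sequentially rather than with real message passing, and uses a 1-D Jacobi smoother as an illustrative stand-in for the SAUNA flow solver; none of the names come from the paper.

```python
# Sequential simulation of the SPMD halo-exchange pattern: each subdomain
# receives one ghost cell from each neighbour, smooths its padded local
# array, then strips the ghosts. The union of local results must equal the
# single-domain computation.
def jacobi_global(u):
    """One Jacobi smoothing pass; global boundary values are held fixed."""
    return [u[0]] + [(u[i - 1] + u[i + 1]) / 2 for i in range(1, len(u) - 1)] + [u[-1]]

def jacobi_decomposed(u, nparts):
    n, out = len(u), []
    for p in range(nparts):
        lo, hi = p * n // nparts, (p + 1) * n // nparts
        left = u[lo - 1:lo] if lo > 0 else []    # ghost from left neighbour
        right = u[hi:hi + 1]                     # ghost from right neighbour
        local = left + u[lo:hi] + right          # "halo exchange"
        sm = jacobi_global(local)
        out.extend(sm[len(left):len(sm) - len(right)])  # strip ghosts
    return out

u = [0.0, 4.0, 1.0, 3.0, 2.0, 5.0]
print(jacobi_decomposed(u, 2) == jacobi_global(u))  # True
```

The equality check is the essential correctness property of any domain decomposition: partitioning plus halo exchange must reproduce the undecomposed result exactly.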
Towed-grid system for production and calorimetric study of homogeneous quantum turbulence
NASA Astrophysics Data System (ADS)
Ciapurin, Roman; Thompson, Kyle; Ihas, Gary G.
2011-10-01
The decay of quantum turbulence is not fully understood in superfluid helium at millikelvin temperatures, where the viscous normal component is absent. Vibrating-grid experiments performed previously produced inhomogeneous turbulence, making the results hard to interpret. We have developed experimental methods to produce homogeneous isotropic turbulence by pulling a grid at a variable constant velocity through superfluid 4He. While using a calorimetric technique to measure the energy dissipation, the Meissner effect was employed to eliminate all heat sources except turbulent decay. A controlled divergent magnetic field provides lift to a hollow cylindrical superconducting actuator to which the grid is attached. Position sensing is performed by measuring the inductance change of a coil when a superconductor, similar to that of the actuator, is moved inside it. This position-sensing technique proved to be reliable under varying temperatures and magnetic fields, making it well suited for use in the towed-grid experiment, where a rise in temperature emerges from turbulent decay. Additionally, the reproducible dependence of the grid's position on the applied magnetic field enables complete control of the actuator's motion.
System design and implementation of digital-image processing using computational grids
NASA Astrophysics Data System (ADS)
Shen, Zhanfeng; Luo, Jiancheng; Zhou, Chenghu; Huang, Guangyu; Ma, Weifeng; Ming, Dongping
2005-06-01
As a special type of digital image, remotely sensed images are playing increasingly important roles in our daily lives. Because of the enormous amounts of data involved, and the difficulties of data processing and transfer, an important issue for current computer and geo-science experts is developing internet technology to implement rapid remotely sensed image processing. Computational grids are able to solve this problem effectively. These networks of computer workstations enable the sharing of data and resources, and are used by computer experts to address imbalances in network resources and lopsided usage. In China, computational grids combined with spatial-information-processing technology have formed a new technology: namely, spatial-information grids. In the field of remotely sensed images, spatial-information grids work more effectively for network computing, data processing, resource sharing, task cooperation and so on. This paper focuses mainly on the application of computational grids to digital-image processing. Firstly, we describe the architecture of digital-image processing on the basis of computational grids; its implementation is then discussed in detail with respect to middleware technology. The whole network-based intelligent image-processing system is evaluated on the basis of the experimental analysis of remotely sensed image-processing tasks; the results confirm the feasibility of the application of computational grids to digital-image processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seal, Brian; Huque, Aminul; Rogers, Lindsey
In 2011, EPRI began a four-year effort under the Department of Energy (DOE) SunShot Initiative Solar Energy Grid Integration Systems - Advanced Concepts (SEGIS-AC) to demonstrate smart grid ready inverters with utility communication. The objective of the project was to successfully implement and demonstrate effective utilization of inverters with grid support functionality to capture the full value of distributed photovoltaic (PV). The project leveraged ongoing investments and expanded PV inverter capabilities to enable grid operators to better utilize these grid assets. Developing and implementing key elements of PV inverter grid support capabilities will increase the distribution system’s capacity for higher penetration levels of PV, while reducing the cost. The project team included EPRI, Yaskawa-Solectria Solar, Spirae, BPL Global, DTE Energy, National Grid, Pepco, EDD, NPPT and NREL. The project was divided into three phases: development, deployment, and demonstration. Within each phase, the key areas included: head-end communications for Distributed Energy Resources (DER) at the utility operations center; methods for coordinating DER with existing distribution equipment; back-end PV plant master controller; and inverters with smart-grid functionality. Four demonstration sites were chosen in three regions of the United States with different types of utility operating systems and implementations of utility-scale PV inverters. This report summarizes the project and findings from field demonstration at three utility sites.
Improved grid-noise removal in single-frame digital moiré 3D shape measurement
NASA Astrophysics Data System (ADS)
Mohammadi, Fatemeh; Kofman, Jonathan
2016-11-01
A single-frame grid-noise removal technique was developed for application in single-frame digital-moiré 3D shape measurement. The ability of the stationary wavelet transform (SWT) to prevent oscillation artifacts near discontinuities, and the ability of the Fourier transform (FFT) applied to wavelet coefficients to separate grid-noise from useful image information, were combined in a new technique, SWT-FFT, to remove grid-noise from moiré-pattern images generated by digital moiré. In comparison to previous grid-noise removal techniques in moiré, SWT-FFT avoids the requirement for mechanical translation of optical components and capture of multiple frames, to enable single-frame moiré-based measurement. Experiments using FFT, Discrete Wavelet Transform (DWT), DWT-FFT, and SWT-FFT were performed on moiré-pattern images containing grid noise, generated by digital moiré, for several test objects. SWT-FFT had the best performance in removing high-frequency grid-noise, both straight and curved lines, minimizing artifacts, and preserving the moiré pattern without blurring and degradation. SWT-FFT also had the lowest noise amplitude in the reconstructed height and lowest roughness index for all test objects, indicating best grid-noise removal in comparison to the other techniques.
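The frequency-domain half of the technique, separating periodic grid-noise from image content, can be sketched on synthetic data. In this stand-in the noise frequency is assumed known in advance, whereas the paper's SWT-FFT locates and filters it within the wavelet coefficient planes; the scene, image size, and frequency are all made-up values.

```python
import numpy as np

# FFT notch filtering of a synthetic vertical grid pattern: an integer-period
# sinusoid concentrates into two conjugate spectral peaks, which are zeroed.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
scene = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 300.0)  # smooth "object"
grid_noise = 0.5 * np.sin(2 * np.pi * 8 * xx / n)           # 8-cycle grid lines
img = scene + grid_noise

F = np.fft.fftshift(np.fft.fft2(img))
c = n // 2
F[c, c - 8] = 0   # notch the two conjugate peaks of the 8-cycle pattern
F[c, c + 8] = 0
clean = np.real(np.fft.ifft2(np.fft.ifftshift(F)))

print(np.abs(clean - scene).max() < np.abs(img - scene).max())  # True
```

Because the grid completes an integer number of cycles across the image, its energy sits in exactly two bins and the notch removes it almost perfectly; the residual error is only the scene's own (tiny) content at those frequencies, which is the leakage the SWT stage is designed to avoid on real images.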
Context-dependent spatially periodic activity in the human entorhinal cortex
Nguyen, T. Peter; Török, Ágoston; Shen, Jason Y.; Briggs, Deborah E.; Modur, Pradeep N.; Buchanan, Robert J.
2017-01-01
The spatially periodic activity of grid cells in the entorhinal cortex (EC) of the rodent, primate, and human provides a coordinate system that, together with the hippocampus, informs an individual of its location relative to the environment and encodes the memory of that location. Among the most defining features of grid-cell activity are the 60° rotational symmetry of grids and the preservation of grid scale across environments. Grid cells, however, do display a limited degree of adaptation to environments. It remains unclear whether this level of environment invariance generalizes to human grid-cell analogs, where the relative contribution of visual input to the multimodal sensory input of the EC is significantly larger than in rodents. Patients diagnosed with intractable epilepsy who were implanted with entorhinal cortical electrodes and performed virtual navigation tasks to memorized locations enabled us to investigate associations between grid-like patterns and environment. Here, we report that the activity of human entorhinal cortical neurons exhibits adaptive scaling in grid period, grid orientation, and rotational symmetry in close association with changes in environment size, shape, and visual cues, suggesting scale invariance of the frequency, rather than the wavelength, of spatially periodic activity. Our results demonstrate that neurons in the human EC represent space with an enhanced flexibility relative to neurons in rodents because they are endowed with adaptive scalability and context dependency. PMID:28396399
Bayesian Non-Stationary Index Gauge Modeling of Gridded Precipitation Extremes
NASA Astrophysics Data System (ADS)
Verdin, A.; Bracken, C.; Caldwell, J.; Balaji, R.; Funk, C. C.
2017-12-01
We propose a Bayesian non-stationary model to generate watershed scale gridded estimates of extreme precipitation return levels. The Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) dataset is used to obtain gridded seasonal precipitation extremes over the Taylor Park watershed in Colorado for the period 1981-2016. For each year, grid cells within the Taylor Park watershed are aggregated to a representative "index gauge," which is input to the model. Precipitation-frequency curves for the index gauge are estimated for each year, using climate variables with significant teleconnections as proxies. Such proxies enable short-term forecasting of extremes for the upcoming season. Disaggregation ratios of the index gauge to the grid cells within the watershed are computed for each year and preserved to translate the index gauge precipitation-frequency curve to gridded precipitation-frequency maps for select return periods. Gridded precipitation-frequency maps are of the same spatial resolution as CHIRPS (0.05° x 0.05°). We verify that the disaggregation method preserves spatial coherency of extremes in the Taylor Park watershed. Validation of the index gauge extreme precipitation-frequency method consists of ensuring extreme value statistics are preserved on a grid cell basis. To this end, a non-stationary extreme precipitation-frequency analysis is performed on each grid cell individually, and the resulting frequency curves are compared to those produced by the index gauge disaggregation method.
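The index-gauge bookkeeping described above, aggregation of grid cells to a representative gauge and ratio-based disaggregation back onto the grid, can be sketched with synthetic data. This stand-in uses an empirical quantile where the paper fits a Bayesian non-stationary model with climate covariates; the array sizes and gamma-distributed "precipitation" are invented for illustration.

```python
import numpy as np

# Aggregate grid cells to an index gauge, keep per-cell disaggregation
# ratios, then map an index-gauge return-level estimate back onto the grid.
rng = np.random.default_rng(1)
years, ny, nx = 30, 4, 5
precip = rng.gamma(shape=2.0, scale=10.0, size=(years, ny, nx))

index_gauge = precip.mean(axis=(1, 2))         # one index value per year
ratios = precip / index_gauge[:, None, None]   # cell-to-index ratios per year
mean_ratio = ratios.mean(axis=0)               # climatological ratio map

# Stand-in "return level" for the index gauge (empirical 90th percentile);
# the paper would use a fitted non-stationary GEV quantile instead.
rl_index = np.quantile(index_gauge, 0.9)
rl_grid = rl_index * mean_ratio                # gridded frequency map

print(rl_grid.shape)                           # (4, 5)
```

The ratios average to one over the watershed each year by construction, which is what keeps the disaggregated maps spatially coherent with the index-gauge estimate.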
Short Paper: Frequency Regulation Services from Connected Residential Devices: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Jin, Xin; Vaidhynathan, Deepthi
In this paper, we demonstrate the potential benefits that residential buildings can provide for frequency regulation services in the electric power grid. In a hardware-in-the-loop (HIL) implementation, simulated homes along with a physical laboratory home are coordinated via a grid aggregator, and it is shown that their aggregate response has the potential to follow the regulation signal on a timescale of seconds. Connected (communication-enabled) devices in the National Renewable Energy Laboratory's (NREL's) Energy Systems Integration Facility (ESIF) received demand response (DR) requests from a grid aggregator, and the devices responded accordingly to meet the signal while satisfying user comfort bounds and physical hardware limitations. Future research will address the issues of cybersecurity threats, participation rates, and reducing equipment wear-and-tear while providing grid services.
Collaboration Services: Enabling Chat in Disadvantaged Grids
2014-06-01
... grids in the tactical domain" [2]. The main focus of this group is to identify what we call tactical SOA foundation services. By this we mean which ... Here, only IPv4 is supported, as differences relating to IPv4 and IPv6 addressing meant that this functionality was not easily extended to use IPv6 ... multicast groups. Our IPv4 implementation is fully compliant with the specification, whereas the IPv6 implementation uses our own interpretation of ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phadke, Amol A.; Jacobson, Arne; Park, Won Young
Highly efficient direct current (DC) appliances have the potential to dramatically increase the affordability of off-grid solar power systems used for rural electrification in developing countries by reducing the size of the systems required. For example, the combined power requirement of a highly efficient color TV, four DC light emitting diode (LED) lamps, a mobile phone charger, and a radio is approximately 18 watts and can be supported by a small solar power system (at 27 watts peak, Wp). Price declines and efficiency advances in LED technology are already enabling rapidly increased use of small off-grid lighting systems in Africa and Asia. Similar progress is also possible for larger household-scale solar home systems that power appliances such as lights, TVs, fans, radios, and mobile phones. When super-efficient appliances are used, the total cost of solar home systems and their associated appliances can be reduced by as much as 50%. The results vary according to the appliances used with the system. These findings have critical relevance for efforts to provide modern energy services to the 1.2 billion people worldwide without access to the electrical grid and one billion more with unreliable access. However, policy and market support are needed to realize rapid adoption of super-efficient appliances.
NASA Astrophysics Data System (ADS)
Benjamin, Christopher J.; Wright, Kyle J.; Bolton, Scott C.; Hyun, Seok-Hee; Krynski, Kyle; Grover, Mahima; Yu, Guimei; Guo, Fei; Kinzer-Ursem, Tamara L.; Jiang, Wen; Thompson, David H.
2016-10-01
We report the fabrication of transmission electron microscopy (TEM) grids bearing graphene oxide (GO) sheets that have been modified with Nα, Nα-dicarboxymethyllysine (NTA) and deactivating agents to block non-selective binding between GO-NTA sheets and non-target proteins. The resulting GO-NTA-coated grids with these improved antifouling properties were then used to isolate His6-T7 bacteriophage and His6-GroEL directly from cell lysates. To demonstrate the utility and simplified workflow enabled by these grids, we performed cryo-electron microscopy (cryo-EM) of His6-GroEL obtained from clarified E. coli lysates. Single particle analysis produced a 3D map with a gold standard resolution of 8.1 Å. We infer from these findings that TEM grids modified with GO-NTA are a useful tool that reduces background and improves both the speed and simplicity of biological sample preparation for high-resolution structure elucidation by cryo-EM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basu, Chandrayee; Ghatikar, Girish
The United States and India have among the largest economies in the world, and they continue to work together to address current and future challenges in reliable electricity supply. The acceleration to efficient, grid-responsive, resilient buildings represents a key energy security objective for federal and state agencies in both countries. The weaknesses in the Indian grid system were manifest in 2012, in the country’s worst blackout, which jeopardized the lives of half of India’s 1.2 billion people. While both countries are investing significantly in power sector reform, India, by virtue of its colossal growth rate in commercial energy intensity and commercial floor space, is better placed than the United States to integrate and test state-of-the-art Smart Grid technologies in its future grid-responsive commercial buildings. This paper presents a roadmap of technical collaboration between the research organizations and public-private stakeholders in both countries to accelerate building-to-grid integration through pilot studies in India.
Scheduling in Sensor Grid Middleware for Telemedicine Using ABC Algorithm
Vigneswari, T.; Mohamed, M. A. Maluk
2014-01-01
Advances in microelectromechanical systems (MEMS) and nanotechnology have enabled the design of low-power wireless sensor nodes capable of sensing different vital signs in the body. These nodes can communicate with each other to aggregate data and transmit vital parameters to a base station (BS). The data collected at the base station can be used to monitor health in real time. A patient wearing sensors may be mobile, leading to aggregation of data from different BSs for processing. Processing real-time data is compute-intensive, and telemedicine facilities may not have the appropriate hardware to process it effectively. To overcome this, the sensor grid has been proposed in the literature, wherein sensor data are integrated into the grid for processing. This work proposes a scheduling algorithm to efficiently process telemedicine data in the grid. The proposed algorithm uses the popular artificial bee colony (ABC) swarm intelligence algorithm for scheduling to address the NP-complete problem of grid scheduling. Results compared with other heuristic scheduling algorithms show the effectiveness of the proposed algorithm. PMID:25548557
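The flavour of an ABC scheduler can be conveyed in a few lines. This is a simplified sketch, not the paper's formulation: tasks are independent, the objective is makespan on identical machines, the task lengths and colony parameters are invented, and the employed and onlooker phases are merged for brevity.

```python
import random

# Compact artificial bee colony (ABC) for assigning independent tasks to
# machines so as to minimise makespan (the maximum machine load).
random.seed(42)
tasks = [random.randint(5, 50) for _ in range(20)]   # synthetic task lengths
n_machines = 4

def makespan(assign):
    loads = [0] * n_machines
    for t, m in zip(tasks, assign):
        loads[m] += t
    return max(loads)

def neighbour(assign):
    """Food-source perturbation: move one random task to a random machine."""
    new = assign[:]
    new[random.randrange(len(new))] = random.randrange(n_machines)
    return new

def abc_schedule(n_bees=10, limit=15, iters=300):
    foods = [[random.randrange(n_machines) for _ in tasks] for _ in range(n_bees)]
    trials = [0] * n_bees
    best = min(foods, key=makespan)
    for _ in range(iters):
        for i in range(n_bees):           # employed/onlooker phases, merged
            cand = neighbour(foods[i])
            if makespan(cand) < makespan(foods[i]):
                foods[i], trials[i] = cand, 0
            else:
                trials[i] += 1
            if trials[i] > limit:         # scout phase: abandon a stale source
                foods[i] = [random.randrange(n_machines) for _ in tasks]
                trials[i] = 0
        best = min([best] + foods, key=makespan)
    return best

best = abc_schedule()
print(makespan(best), sum(tasks) / n_machines)  # makespan vs. ideal lower bound
```

The scout phase is what distinguishes ABC from plain local search: exhausted food sources are abandoned and replaced by fresh random schedules, which helps escape local optima of the NP-complete assignment problem.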
Design study of a 120-keV, He-3 neutral beam injector
NASA Astrophysics Data System (ADS)
Blum, A. S.; Barr, W. L.; Dexter, W. L.; Moir, R. W.; Wilcox, T. P.; Fink, J. H.
1981-01-01
A design for a 120-keV, 2.3-MW, He-3 neutral beam injector for use on a D-(He-3) fusion reactor is described. The constraint that limits operating life when injecting He is its high sputtering rate. The sputtering is partly controlled by using an extra grid to prevent ion flow from the neutralizer duct to the electron suppressor grid, but a tradeoff between beam current and operating life is still required. Hollow grid wires functioning as mercury heat pipes cool the grid and enable steady state operation. Voltage holding and radiation effects on the acceleration grid structure are discussed. The vacuum system is also briefly described, and the use of a direct energy converter to recapture energy from unneutralized ions exiting the neutralizer is also analyzed. Of crucial importance to the technical feasibility of the (He-3)-burning reactor are the injector efficiency and cost; these are 53% and $5.5 million, respectively, when power supplies are included.
Wire-chamber radiation detector with discharge control
Perez-Mendez, V.; Mulera, T.A.
1982-03-29
A wire-chamber radiation detector has spaced-apart parallel electrodes and grids defining an ignition region, in which charged particles or other ionizing radiations initiate brief localized avalanche discharges, and an adjacent memory region, in which sustained glow discharges are initiated by the primary discharges. Conductors of the grids at each side of the memory section extend in orthogonal directions, enabling readout of the X-Y coordinates of locations at which charged particles were detected by sequentially transmitting pulses to the conductors of one grid while detecting transmission of the pulses to the orthogonal conductors of the other grid through the glow discharges. One of the grids bounding the memory region is defined by an array of conductive elements, each of which is connected to the associated readout conductor through a separate resistance. The wire chamber avoids ambiguities and imprecisions in the readout of coordinates when large numbers of simultaneous or near-simultaneous charged particles have been detected. Down time between detection periods and the generation of radio-frequency noise are also reduced.
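The coordinate-readout scheme, pulsing one grid's conductors in sequence while listening on the orthogonal grid, can be modeled in a few lines. This is a toy simulation of the logic, not the patent's circuitry; the grid size and event positions are arbitrary.

```python
# Toy model of crosspoint readout: a pulse driven onto each X conductor in
# turn couples through to exactly those Y conductors where a sustained glow
# discharge sits at the crosspoint, so every hit's (x, y) pair is recovered
# directly, with no ghost combinations even for simultaneous particles.
def read_out(discharges, nx, ny):
    """discharges: set of (x, y) crosspoints holding a glow discharge."""
    hits = []
    for x in range(nx):                  # pulse X conductors sequentially
        coupled = [y for y in range(ny) if (x, y) in discharges]
        for y in coupled:                # pulse detected on these Y conductors
            hits.append((x, y))
    return hits

events = {(2, 5), (7, 1), (7, 6)}        # three near-simultaneous particles
print(sorted(read_out(events, nx=8, ny=8)))  # [(2, 5), (7, 1), (7, 6)]
```

A projection-only readout (separate X and Y hit lists) would report x in {2, 7} and y in {1, 5, 6} and could not tell which pairs are real; interrogating crosspoints through the stored discharges is what removes that ambiguity.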
An overview of distributed microgrid state estimation and control for smart grids.
Rana, Md Masud; Li, Li
2015-02-12
Given the significant concerns regarding carbon emissions from fossil fuels, global warming and the energy crisis, renewable distributed energy resources (DERs) are going to be integrated into the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to traditional methods of SE, this paper proposes a novel accuracy-dependent Kalman filter (KF) based microgrid SE for the smart grid that uses typical communication systems. The article then proposes a discrete-time linear quadratic regulator to control the state deviations of the microgrid incorporating multiple DERs. Integrating these two approaches with application to the smart grid therefore forms a novel contribution to the green energy and control research communities. Finally, the simulation results show that the proposed KF-based microgrid SE and control algorithm provides accurate SE and control compared with the existing method.
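The two ingredients named above, Kalman-filter state estimation and discrete-time linear quadratic regulation, can be sketched for a one-state toy model. The scalar dynamics (a single "frequency deviation" state) and every numeric value are illustrative assumptions; the paper works with a full microgrid state vector.

```python
import random

# Scalar KF + discrete-time LQR: x[k+1] = a*x[k] + b*u[k] + w,  y = x + v.
random.seed(7)
a, b = 0.95, 0.1          # made-up plant coefficients
q_w, r_v = 0.01, 0.04     # process and measurement noise variances
Q, R = 1.0, 0.1           # LQR state and control weights

# Solve the scalar discrete algebraic Riccati equation by fixed-point iteration.
P = Q
for _ in range(200):
    P = Q + a * a * P - (a * P * b) ** 2 / (R + b * b * P)
K = a * P * b / (R + b * b * P)       # LQR gain: u = -K * x_est

x, x_est, p_est = 1.0, 0.0, 1.0       # true state, KF mean, KF variance
for _ in range(60):
    u = -K * x_est                    # regulate using the *estimated* state
    x = a * x + b * u + random.gauss(0.0, q_w ** 0.5)   # noisy plant step
    y = x + random.gauss(0.0, r_v ** 0.5)               # noisy measurement
    x_pred = a * x_est + b * u        # KF predict
    p_pred = a * a * p_est + q_w
    g = p_pred / (p_pred + r_v)       # KF gain
    x_est = x_pred + g * (y - x_pred) # KF update
    p_est = (1 - g) * p_pred

print(round(K, 3), round(x_est, 3))   # gain and final (near-zero) estimate
```

Feeding the KF estimate rather than the raw measurement into the regulator is the standard separation-principle structure the paper's SE-plus-control pairing follows, here shrunk to one dimension.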
A Semantic Grid Oriented to E-Tourism
NASA Astrophysics Data System (ADS)
Zhang, Xiao Ming
With the increasing complexity of tourism business models and tasks, there is a clear need for a next-generation e-Tourism infrastructure to support flexible automation, integration, computation, storage, and collaboration. Several enabling technologies such as the semantic Web, Web services, agents and grid computing have been applied in different e-Tourism applications; however, there is no unified framework able to integrate all of them. This paper therefore presents a promising e-Tourism framework based on the emerging semantic grid, in which a number of key design issues are discussed, including architecture, ontology structure, semantic reconciliation, service and resource discovery, role-based authorization and intelligent agents. The paper finally describes the implementation of the framework.
geoknife: Reproducible web-processing of large gridded datasets
Read, Jordan S.; Walker, Jordan I.; Appling, Alison P.; Blodgett, David L.; Read, Emily K.; Winslow, Luke A.
2016-01-01
Geoprocessing of large gridded data according to overlap with irregular landscape features is common to many large-scale ecological analyses. The geoknife R package was created to facilitate reproducible analyses of gridded datasets found on the U.S. Geological Survey Geo Data Portal web application or elsewhere, using a web-enabled workflow that eliminates the need to download and store large datasets that are reliably hosted on the Internet. The package provides access to several data subset and summarization algorithms that are available on remote web processing servers. Outputs from geoknife include spatial and temporal data subsets, spatially-averaged time series values filtered by user-specified areas of interest, and categorical coverage fractions for various land-use types.
Marken, Ken
2018-01-09
The Department of Energy (DOE) Office of Electricity Delivery and Energy Reliability (OE) has been tasked to lead national efforts to modernize the electric grid, enhance security and reliability of the energy infrastructure, and facilitate recovery from disruptions to energy supplies. LANL has pioneered the development of coated conductors (high-temperature superconducting (HTS) tapes), which permit dramatically greater current densities than conventional copper cable and enable new technologies to secure the national electric grid. Sustained world-class research, from concept through demonstration, technology transfer, and ongoing industrial support, has moved this idea from the laboratory to the commercial marketplace.
Security and Resilience | Grid Modernization | NREL
NREL develops tools and solutions to enable a more secure and resilient grid. NREL collaborates with industry, academia, and other research organizations to find solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harper, Jason
Jason Harper, an electrical engineer in Argonne National Laboratory's EV-Smart Grid Interoperability Center, discusses his SpEC Module invention that will enable fast charging of electric vehicles in under 15 minutes. The module has been licensed to BTCPower.
Space-based Science Operations Grid Prototype
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Welch, Clara L.; Redman, Sandra
2004-01-01
Grid technology is an emerging technology that enables widely disparate services to be offered to users economically and with an ease of use not previously available on a wide basis. Under the Grid concept, disparate organizations, generally defined as "virtual organizations," can share services, i.e., discipline-specific computer applications required to accomplish specific scientific and engineering organizational goals and objectives. Grid technology has been enabled by, and would not have emerged without, the evolution of increasingly high-speed networking. NASA/Marshall Space Flight Center's (MSFC) Flight Projects Directorate, Ground Systems Department is developing a Space-based Science Operations Grid prototype to provide scientists and engineers with the tools necessary to operate space-based science payloads/experiments and to conduct public and educational outreach. In addition, Grid technology can provide new services not currently available to users. These services include mission voice and video, application sharing, telemetry management and display, payload and experiment commanding, data mining, high-order data processing, discipline-specific application sharing and data storage, all from a single Grid portal. The prototype will provide most of these services in a first-step demonstration of integrated Grid and space-based science operations technologies. It will initially be based on the International Space Station science operational services located at the Payload Operations Integration Center at MSFC, but can be applied to many NASA projects, including free-flying satellites and future projects.
The prototype will use the Internet2 Abilene Research and Education Network, currently a 10 Gb backbone network, to reach the University of Alabama in Huntsville and several other, as yet unidentified, Space Station-based science experimenters. There is an international aspect to the Grid involving the America's Pathway (AMPath) network, the Chilean REUNA Research and Education Network, and the University of Chile in Santiago, which will further demonstrate how extensively these services can be used. From the user's perspective, the prototype will provide a single interface and logon to these varied services without the complexity of knowing the wheres and hows of each service. There is a separate and deliberate emphasis on security, which will be addressed by specifically outlining the different approaches and tools used. Grid technology, unlike the Internet, is being designed with security in mind. In addition, we will show the locations, configurations, and network paths associated with each service and virtual organization. We will discuss the separate virtual organizations that we define for the varied user communities. These will include certain, as yet undetermined, space-based science functions and/or processes, and will include specific virtual organizations required for public and educational outreach and for science and engineering collaboration. We will also discuss the Grid prototype's performance and the potential for further Grid applications in both space-based and ground-based projects and processes. In this paper and presentation we will detail each service and how the services are integrated using Grid technology.
Grid computing enhances standards-compatible geospatial catalogue service
NASA Astrophysics Data System (ADS)
Chen, Aijun; Di, Liping; Bai, Yuqi; Wei, Yaxing; Liu, Yang
2010-04-01
A catalogue service facilitates sharing, discovery, retrieval, management of, and access to large volumes of distributed geospatial resources, for example data, services, applications, and their replicas on the Internet. Grid computing provides an infrastructure for effective use of computing, storage, and other resources available online. The Open Geospatial Consortium has proposed a catalogue service specification and a series of profiles for promoting the interoperability of geospatial resources. By referring to the profile of the catalogue service for Web, an innovative information model of a catalogue service is proposed to offer Grid-enabled registry, management, retrieval of and access to geospatial resources and their replicas. This information model extends the e-business registry information model by adopting several geospatial data and service metadata standards—the International Organization for Standardization (ISO)'s 19115/19119 standards and the US Federal Geographic Data Committee (FGDC) and US National Aeronautics and Space Administration (NASA) metadata standards for describing and indexing geospatial resources. In order to select the optimal geospatial resources and their replicas managed by the Grid, the Grid data management service and information service from the Globus Toolkits are closely integrated with the extended catalogue information model. Based on this new model, a catalogue service is implemented first as a Web service. Then, the catalogue service is further developed as a Grid service conforming to Grid service specifications. The catalogue service can be deployed in both the Web and Grid environments and accessed by standard Web services or authorized Grid services, respectively. The catalogue service has been implemented at the George Mason University/Center for Spatial Information Science and Systems (GMU/CSISS), managing more than 17 TB of geospatial data and geospatial Grid services. 
This service makes it easy to share and interoperate geospatial resources by using Grid technology and extends Grid technology into the geoscience communities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nouidui, Thierry; Wetter, Michael
SimulatorToFMU is a software package written in Python that allows users to export a memoryless Python-driven simulation program or script as a Functional Mock-up Unit (FMU) for model exchange or co-simulation. In CyDER (Cyber Physical Co-simulation Platform for Distributed Energy Resources in Smart Grids), SimulatorToFMU will allow exporting OPAL-RT as an FMU. This will enable OPAL-RT to be linked to CYMDIST and GridDyn FMUs through a standardized open source interface.
Triangle Geometry Processing for Surface Modeling and Cartesian Grid Generation
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J. (Inventor); Melton, John E. (Inventor); Berger, Marsha J. (Inventor)
2002-01-01
Cartesian mesh generation is accomplished for component based geometries, by intersecting components subject to mesh generation to extract wetted surfaces with a geometry engine using adaptive precision arithmetic in a system which automatically breaks ties with respect to geometric degeneracies. During volume mesh generation, intersected surface triangulations are received to enable mesh generation with cell division of an initially coarse grid. The hexagonal cells are resolved, preserving the ability to directionally divide cells which are locally well aligned.
Self-assembled monolayers improve protein distribution on holey carbon cryo-EM supports
Meyerson, Joel R.; Rao, Prashant; Kumar, Janesh; Chittori, Sagar; Banerjee, Soojay; Pierson, Jason; Mayer, Mark L.; Subramaniam, Sriram
2014-01-01
Poor partitioning of macromolecules into the holes of holey carbon support grids frequently limits structural determination by single particle cryo-electron microscopy (cryo-EM). Here, we present a method to deposit, on gold-coated carbon grids, a self-assembled monolayer whose surface properties can be controlled by chemical modification. We demonstrate the utility of this approach to drive partitioning of ionotropic glutamate receptors into the holes, thereby enabling 3D structural analysis using cryo-EM methods. PMID:25403871
Triangle geometry processing for surface modeling and cartesian grid generation
Aftosmis, Michael J [San Mateo, CA; Melton, John E [Hollister, CA; Berger, Marsha J [New York, NY
2002-09-03
Cartesian mesh generation is accomplished for component based geometries, by intersecting components subject to mesh generation to extract wetted surfaces with a geometry engine using adaptive precision arithmetic in a system which automatically breaks ties with respect to geometric degeneracies. During volume mesh generation, intersected surface triangulations are received to enable mesh generation with cell division of an initially coarse grid. The hexagonal cells are resolved, preserving the ability to directionally divide cells which are locally well aligned.
FLAME: A platform for high performance computing of complex systems, applied for three case studies
Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...
2011-01-01
FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids, enabling large numbers of agents to be simulated over short periods of time. Modellers are hindered by the complexities of porting models to parallel platforms and by the time taken to run large simulations on a single machine, both of which FLAME overcomes. Three case studies from different disciplines were modelled using FLAME and are presented along with their performance results on a grid.
Global gridded crop specific agricultural areas from 1961-2014
NASA Astrophysics Data System (ADS)
Konar, M.; Jackson, N. D.
2017-12-01
Current global cropland datasets are limited in crop specificity and temporal resolution. Time series maps of crop specific agricultural areas would enable us to better understand the global agricultural geography of the 20th century. To this end, we develop a global gridded dataset of crop specific agricultural areas from 1961-2014. To do this, we downscale national cropland information using a probabilistic approach. Our method relies upon gridded Global Agro-Ecological Zones (GAEZ) maps, the History Database of the Global Environment (HYDE), and crop calendars from Sacks et al. (2010). We estimate crop-specific agricultural areas for a 0.25 degree spatial grid and annual time scale for all major crops. We validate our global estimates for the year 2000 with Monfreda et al. (2008) and our time series estimates within the United States using government data. This database will contribute to our understanding of global agricultural change of the past century.
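The probabilistic downscaling step described above is not specified in detail in the abstract. As a hedged illustration, a proportional-allocation sketch that spreads a national crop-area total over grid cells according to a suitability surface (a stand-in for the GAEZ/HYDE inputs the paper uses) might look like:

```python
import numpy as np

# Illustrative downscaling: distribute a national crop-area total across
# grid cells in proportion to a suitability weight surface, capping each
# cell at its physical area. Numbers below are invented toy values.

def downscale(national_area, suitability, cell_area):
    """Allocate national_area over cells proportionally to suitability."""
    w = suitability / suitability.sum()   # normalized allocation weights
    alloc = national_area * w             # proportional allocation
    # Cap at physical cell area and redistribute any excess to spare cells
    excess = np.clip(alloc - cell_area, 0.0, None).sum()
    alloc = np.minimum(alloc, cell_area)
    spare = cell_area - alloc
    if excess > 0 and spare.sum() > 0:
        alloc += excess * spare / spare.sum()
    return alloc

suit = np.array([0.8, 0.5, 0.1, 0.0])          # cell suitability weights
cells = np.array([100.0, 100.0, 100.0, 100.0])  # cell areas (ha)
out = downscale(120.0, suit, cells)
```

The allocation conserves the national total and assigns nothing to cells with zero suitability.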
Vessel Segmentation and Blood Flow Simulation Using Level-Sets and Embedded Boundary Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deschamps, T; Schwartz, P; Trebotich, D
In this article we address the problem of blood flow simulation in realistic vascular objects. The anatomical surfaces are extracted by means of Level-Set methods that accurately model the complex and varying surfaces of pathological objects such as aneurysms and stenoses. The surfaces obtained are defined at the sub-pixel level where they intersect the Cartesian grid of the image domain. It is therefore straightforward to construct embedded boundary representations of these objects on the same grid, for which recent work has enabled discretization of the Navier-Stokes equations for incompressible fluids. While most classical techniques require construction of a structured mesh that approximates the surface in order to extrapolate a 3D finite-element gridding of the whole volume, our method directly simulates the blood flow inside the extracted surface without losing any complicated details and without building additional grids.
A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0
Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.
2014-01-01
We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072
A grid-enabled web service for low-resolution crystal structure refinement.
O'Donovan, Daniel J; Stokes-Rees, Ian; Nam, Yunsun; Blacklow, Stephen C; Schröder, Gunnar F; Brunger, Axel T; Sliz, Piotr
2012-03-01
Deformable elastic network (DEN) restraints have proved to be a powerful tool for refining structures from low-resolution X-ray crystallographic data sets. Unfortunately, optimal refinement using DEN restraints requires extensive calculations and is often hindered by a lack of access to sufficient computational resources. The DEN web service presented here intends to provide structural biologists with access to resources for running computationally intensive DEN refinements in parallel on the Open Science Grid, the US cyberinfrastructure. Access to the grid is provided through a simple and intuitive web interface integrated into the SBGrid Science Portal. Using this portal, refinements combined with full parameter optimization that would take many thousands of hours on standard computational resources can now be completed in several hours. An example of the successful application of DEN restraints to the human Notch1 transcriptional complex using the grid resource, and summaries of all submitted refinements, are presented as justification.
NASA Technical Reports Server (NTRS)
Janus, J. Mark; Whitfield, David L.
1990-01-01
Improvements are presented of a computer algorithm developed for the time-accurate flow analysis of rotating machines. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The numerical scheme is a block LU implicit iterative-refinement method which possesses apparent unconditional stability. Multiblock composite gridding is used to orderly partition the field into a specified arrangement of blocks exhibiting varying degrees of similarity. Block-block relative motion is achieved using local grid distortion to reduce grid skewness and accommodate arbitrary time step selection. A general high-order numerical scheme is applied to satisfy the geometric conservation law. An even-blade-count counterrotating unducted fan configuration is chosen for a computational study comparing solutions resulting from altering parameters such as time step size and iteration count. The solutions are compared with measured data.
Smart Grid, Smart Inverters for a Smart Energy Future | State, Local, and
Legislation defines the state's interconnection standards and permits the interconnection of smart inverters; conversations concern the costs and benefits of advanced inverter enabling legislation.
Albin, Aaron; Ji, Xiaonan; Borlawsky, Tara B; Ye, Zhan; Lin, Simon; Payne, Philip Ro; Huang, Kun; Xiang, Yang
2014-10-07
The Unified Medical Language System (UMLS) contains many important ontologies in which terms are connected by semantic relations. For many studies on the relationships between biomedical concepts, the use of transitively associated information from ontologies and the UMLS has been shown to be effective. Although a few tools and methods are available for extracting transitive relationships from the UMLS, they usually have major restrictions on the length of transitive relations or on the number of data sources. Our goal was to design an online platform that enables efficient studies on the conceptual relationships between any medical terms. To overcome the restrictions of available methods and to facilitate studies on the conceptual relationships between medical terms, we developed a Web platform, onGrid, that supports efficient transitive queries and conceptual relationship studies using the UMLS. This framework uses the latest technique in converting natural language queries into UMLS concepts, performs efficient transitive queries, and visualizes the result paths. It also dynamically builds a relationship matrix for two sets of input biomedical terms. We are thus able to perform effective studies on conceptual relationships between medical terms based on their relationship matrix. The advantage of onGrid is that it can be applied to study any two sets of biomedical concept relations and the relations within one set of biomedical concepts. We use onGrid to study the disease-disease relationships in the Online Mendelian Inheritance in Man (OMIM). By cross-validating our results with an external database, the Comparative Toxicogenomics Database (CTD), we demonstrated that onGrid is effective for the study of conceptual relationships between medical terms. onGrid is an efficient tool for querying the UMLS for transitive relations, studying the relationships between medical terms, and generating hypotheses.
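To illustrate the kind of transitive query onGrid performs, here is a minimal breadth-first path search over a toy directed relation graph; the concepts and edges below are invented for illustration and do not come from the UMLS, and this is not onGrid's actual implementation:

```python
from collections import deque

# Toy directed graph of semantic relations (invented, not UMLS data).
edges = {
    "diabetes": ["hyperglycemia"],
    "hyperglycemia": ["glucose"],
    "glucose": ["insulin"],
}

def transitive_paths(src, dst, max_len=4):
    """Return all directed relation paths from src to dst up to max_len edges."""
    paths, queue = [], deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            paths.append(path)
            continue
        if len(path) <= max_len:
            for nxt in edges.get(path[-1], []):
                if nxt not in path:  # avoid revisiting (cycle guard)
                    queue.append(path + [nxt])
    return paths

found = transitive_paths("diabetes", "insulin")
```

On this toy graph the single length-3 relation chain is recovered; a real system would additionally bound the search by relation type and data source.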
ARC SDK: A toolbox for distributed computing and data applications
NASA Astrophysics Data System (ADS)
Skou Andersen, M.; Cameron, D.; Lindemann, J.
2014-06-01
Grid middleware suites provide tools to perform the basic tasks of job submission and retrieval and data access; however, these tools tend to be low-level, operating on individual jobs or files and lacking higher-level concepts. User communities therefore generally develop their own application-layer software, catering to their specific communities' needs, on top of the Grid middleware. It is thus important for the Grid middleware to provide a friendly, well documented and simple to use interface for the applications to build upon. The Advanced Resource Connector (ARC), developed by NorduGrid, provides a Software Development Kit (SDK) which enables applications to use the middleware for job and data management. This paper presents the architecture and functionality of the ARC SDK along with an example graphical application developed with the SDK. The SDK consists of a set of libraries accessible through Application Programming Interfaces (API) in several languages. It contains extensive documentation and example code and is available on multiple platforms. The libraries provide generic interfaces and rely on plugins to support a given technology or protocol, and this modular design makes it easy to add a new plugin if the application requires supporting additional technologies. The ARC Graphical Clients package is a graphical user interface built on top of the ARC SDK and the Qt toolkit, and it is presented here as a fully functional example of an application. It provides a graphical interface to enable job submission and management at the click of a button, and allows data on any Grid storage system to be manipulated using a visual file system hierarchy, as if it were a regular file system.
Spectral Topography Generation for Arbitrary Grids
NASA Astrophysics Data System (ADS)
Oh, T. J.
2015-12-01
A new topography generation tool utilizing a spectral transformation technique for both structured and unstructured grids is presented. For the source global digital elevation data, the NASA Shuttle Radar Topography Mission (SRTM) 15 arc-second dataset (gap-filled by Jonathan de Ferranti) is used, and for the land/water mask source, the NASA Moderate Resolution Imaging Spectroradiometer (MODIS) 30 arc-second land water mask dataset v5 is used. The original source data is coarsened to an intermediate global 2 minute lat-lon mesh. Then, spectral transformation to wave space and inverse transformation with wavenumber truncation is performed for isotropic topography smoothness control. Target grid topography mapping is done by bivariate cubic spline interpolation from the truncated 2 minute lat-lon topography. Gibbs phenomena in the water region can be removed by overwriting ocean-masked target coordinate grids with interpolated values from the intermediate 2 minute grid. Finally, a weak smoothing operator is applied on the target grid to minimize the land/water surface height discontinuity that might have been introduced by the Gibbs oscillation removal procedure. Overall, the new topography generation approach provides spectrally derived, smooth topography with isotropic resolution and minimum damping, enabling realistic topography forcing in the numerical model. Topography is generated for the cubed-sphere grid and tested on the KIAPS Integrated Model (KIM).
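The wavenumber-truncation smoothing described above can be illustrated in one dimension (the actual tool operates on 2-D lat-lon fields; this is only a sketch of the principle): transform to wave space, zero wavenumbers beyond a cutoff, and invert.

```python
import numpy as np

# 1-D analogue of wavenumber-truncation smoothing: low-pass a periodic
# field by zeroing Fourier coefficients above a cutoff wavenumber.

def spectral_truncate(field, k_max):
    """Low-pass a periodic 1-D field by truncating its Fourier spectrum."""
    spec = np.fft.rfft(field)
    spec[k_max + 1:] = 0.0  # drop wavenumbers above the cutoff
    return np.fft.irfft(spec, n=len(field))

# Toy "topography": a broad feature (wavenumber 1) plus fine ripples
# (wavenumber 40); truncation at k_max=10 removes the ripples exactly.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
rough = np.sin(x) + 0.3 * np.sin(40.0 * x)
smooth = spectral_truncate(rough, k_max=10)
```

Because the ripple sits entirely above the cutoff, the truncated field coincides with the smooth component; on real data the cutoff instead controls the isotropic smoothness of the result.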
An infrastructure for the integration of geoscience instruments and sensors on the Grid
NASA Astrophysics Data System (ADS)
Pugliese, R.; Prica, M.; Kourousias, G.; Del Linz, A.; Curri, A.
2009-04-01
The Grid, as a computing paradigm, has long been in the attention of both academia and industry[1]. The distributed and expandable nature of its general architecture results in scalability and more efficient utilisation of the computing infrastructures. The scientific community, including that of geosciences, often handles problems with very high requirements in data processing, transferring, and storing[2,3]. This has raised interest in Grid technologies, but these are often viewed solely as an access gateway to HPC. Suitable Grid infrastructures could provide the geoscience community with additional benefits like those of sharing, remote access and control of scientific systems. These systems can be scientific instruments, sensors, robots, cameras and any other device used in geosciences. The solution for practical, general, and feasible Grid-enabling of such devices requires non-intrusive extensions on core parts of the current Grid architecture. We propose an extended version of an architecture[4] that can serve as the solution to the problem. The solution we propose is called the Grid Instrument Element (IE) [5]. It is an addition to the existing core Grid parts, the Computing Element (CE) and the Storage Element (SE), which serve the purposes that their names suggest. The IE we refer to, and the related technologies, have been developed in the EU project on the Deployment of Remote Instrumentation Infrastructure (DORII1). In DORII, partners of various scientific communities, including those of Earthquake, Environmental science, and Experimental science, have adopted the technology of the Instrument Element in order to integrate their devices into the Grid. The Oceanographic and coastal observation and modelling Mediterranean Ocean Observing Network (OGS2), a DORII partner, is in the process of deploying the above mentioned Grid technologies on two types of observational modules: Argo profiling floats and a novel Autonomous Underwater Vehicle (AUV).
In this paper i) we define the need for integration of instrumentation in the Grid, ii) we introduce the solution of the Instrument Element, iii) we demonstrate a suitable end-user web portal for accessing Grid resources, iv) we describe from the Grid-technological point of view the process of the integration to the Grid of two advanced environmental monitoring devices. References [1] M. Surridge, S. Taylor, D. De Roure, and E. Zaluska, "Experiences with GRIA—Industrial Applications on a Web Services Grid," e-Science and Grid Computing, First International Conference on e-Science and Grid Computing, 2005, pp. 98-105. [2] A. Chervenak, I. Foster, C. Kesselman, C. Salisbury, and S. Tuecke, "The data grid: Towards an architecture for the distributed management and analysis of large scientific datasets," Journal of Network and Computer Applications, vol. 23, 2000, pp. 187-200. [3] B. Allcock, J. Bester, J. Bresnahan, A.L. Chervenak, I. Foster, C. Kesselman, S. Meder, V. Nefedova, D. Quesnel, and S. Tuecke, "Data management and transfer in high-performance computational grid environments," Parallel Computing, vol. 28, 2002, pp. 749-771. [4] E. Frizziero, M. Gulmini, F. Lelli, G. Maron, A. Oh, S. Orlando, A. Petrucci, S. Squizzato, and S. Traldi, "Instrument Element: A New Grid component that Enables the Control of Remote Instrumentation," Proceedings of the Sixth IEEE International Symposium on Cluster Computing and the Grid (CCGRID'06)-Volume 00, IEEE Computer Society Washington, DC, USA, 2006. [5] R. Ranon, L. De Marco, A. Senerchia, S. Gabrielli, L. Chittaro, R. Pugliese, L. Del Cano, F. Asnicar, and M. Prica, "A Web-based Tool for Collaborative Access to Scientific Instruments in Cyberinfrastructures." 1 The DORII project is supported by the European Commission within the 7th Framework Programme (FP7/2007-2013) under grant agreement no. RI-213110. URL: http://www.dorii.eu 2 Istituto Nazionale di Oceanografia e di Geofisica Sperimentale. URL: http://www.ogs.trieste.it
NASA Astrophysics Data System (ADS)
Corbett, Jacqueline Marie
Enabled by advanced communication and information technologies, the smart grid represents a major transformation for the electricity sector. Vast quantities of data and two-way communications abilities create the potential for a flexible, data-driven, multi-directional supply and consumption network well equipped to meet the challenges of the next century. For electricity service providers ("utilities"), the smart grid provides opportunities for improved business practices and new business models; however, a transformation of such magnitude is not without risks. Three related studies are conducted to explore the implications of the smart grid on utilities' demand-side activities. An initial conceptual framework, based on organizational information processing theory, suggests that utilities' performance depends on the fit between the information processing requirements and capacities associated with a given demand-side activity. Using secondary data and multiple regression analyses, the first study finds, consistent with OIPT, a positive relationship between utilities' advanced meter deployments and demand-side management performance. However, it also finds that meters with only data collection capacities are associated with lower performance, suggesting the presence of information waste causing operational inefficiencies. In the second study, interviews with industry participants provide partial support for the initial conceptual model, new insights are gained with respect to information processing fit and information waste, and "big data" is identified as a central theme of the smart grid. To derive richer theoretical insights, the third study employs a grounded theory approach examining the experience of one successful utility in detail. Based on interviews and documentary data, the paradox of dynamic stability emerges as an essential enabler of utilities' performance in the smart grid environment. 
Within this context, the frames of opportunity, control, and data limitation interact to support dynamic stability and contribute to innovation within tradition. The main contributions of this thesis include theoretical extensions to OIPT and the development of an emergent model of dynamic stability in relation to big data. The thesis also adds to the green IS literature and identifies important practical implications for utilities as they endeavour to bring the smart grid to reality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jay Tillay
For three years, Sandia National Laboratories, Georgia Institute of Technology, and University of Illinois at Urbana-Champaign investigated a smart grid vision in which renewable-centric Virtual Power Plants (VPPs) provided ancillary services with interoperable distributed energy resources (DER). This team researched, designed, built, and evaluated real-time VPP designs incorporating DER forecasting, stochastic optimization, controls, and cyber security to construct a system capable of delivering reliable ancillary services, which have been traditionally provided by large power plants or other dedicated equipment. VPPs have become possible through an evolving landscape of state and national interconnection standards, which now require DER to include grid-support functionality and communications capabilities. This makes it possible for third party aggregators to provide a range of critical grid services such as voltage regulation, frequency regulation, and contingency reserves to grid operators. This paradigm (a) enables renewable energy, demand response, and energy storage to participate in grid operations and provide grid services, (b) improves grid reliability by providing additional operating reserves for utilities, independent system operators (ISOs), and regional transmission organizations (RTOs), and (c) removes renewable energy high-penetration barriers by providing services with photovoltaic and wind resources that traditionally were the jobs of thermal generators. Therefore, it is believed VPP deployment will have far-reaching positive consequences for grid operations and may provide a robust pathway to high penetrations of renewables on US power systems. In this report, we design VPPs to provide a range of grid-support services and demonstrate one VPP which simultaneously provides bulk-system energy and ancillary reserves.
Design and evaluation of a grid reciprocation scheme for use in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Patel, Tushita; Sporkin, Helen; Peppard, Heather; Williams, Mark B.
2016-03-01
This work describes a methodology for efficient removal of scatter radiation during digital breast tomosynthesis (DBT). The goal of this approach is to enable grid image obscuration without a large increase in radiation dose by minimizing misalignment of the grid focal point (GFP) and x-ray focal spot (XFS) during grid reciprocation. Hardware for the motion scheme was built and tested on the dual modality breast tomosynthesis (DMT) scanner, which combines DBT and molecular breast tomosynthesis (MBT) on a single gantry. The DMT scanner uses fully isocentric rotation of tube and x-ray detector for maintaining a fixed tube-detector alignment during DBT imaging. A cellular focused copper prototype grid with 80 cm focal length, 3.85 mm height, 0.1 mm thick lamellae, and 1.1 mm hole pitch was tested. Primary transmission of the grid at 28 kV tube voltage was on average 74% with the grid stationary and aligned for maximum transmission. It fell to 72% during grid reciprocation by the proposed method. Residual grid line artifacts (GLAs) in projection views and reconstructed DBT images are characterized and methods for reducing the visibility of GLAs in the reconstructed volume through projection image flat-field correction and spatial frequency-based filtering of the DBT slices are described and evaluated. The software correction methods reduce the visibility of these artifacts in the reconstructed volume, making them imperceptible both in the reconstructed DBT images and their Fourier transforms.
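As a rough sketch of the two software corrections mentioned (flat-field correction of the projection views and spatial frequency-based filtering of residual grid line artifacts), the following uses invented image sizes and an invented grid-line frequency, and is not the authors' implementation:

```python
import numpy as np

# Two illustrative GLA-suppression steps: divide each projection by a
# flat-field image that captures the fixed grid-line shading, and notch
# out a known grid-line spatial frequency along detector rows.

def flat_field(projection, flat):
    """Divide out fixed-pattern (grid line) shading captured in a flat image."""
    return projection / np.maximum(flat, 1e-6)

def notch_filter_rows(img, f0, width=2):
    """Suppress a known spatial frequency f0 (cycles/row) along image rows."""
    spec = np.fft.rfft(img, axis=1)
    spec[:, max(f0 - width, 0):f0 + width + 1] = 0.0
    return np.fft.irfft(spec, n=img.shape[1], axis=1)

# Toy projection: uniform object modulated by a sinusoidal grid-line pattern.
rows, cols = 8, 256
x = np.arange(cols)
gla = 1.0 + 0.2 * np.sin(2.0 * np.pi * 30.0 * x / cols)  # grid-line shading
img = np.outer(np.ones(rows), gla) * 100.0
corrected = flat_field(img, np.outer(np.ones(rows), gla))
```

Dividing by a flat image of the same shading restores the uniform object exactly in this toy case; the notch filter achieves the same on the sinusoidal pattern alone, at the cost of removing genuine image content at that frequency.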
Clean vehicles as an enabler for a clean electricity grid
NASA Astrophysics Data System (ADS)
Coignard, Jonathan; Saxena, Samveg; Greenblatt, Jeffery; Wang, Dai
2018-05-01
California has issued ambitious targets to decarbonize transportation through the deployment of electric vehicles (EVs), and to decarbonize the electricity grid through the expansion of both renewable generation and energy storage. These parallel efforts can provide an untapped synergistic opportunity for clean transportation to be an enabler for a clean electricity grid. To quantify this potential, we forecast the hourly system-wide balancing problems arising out to 2025 as more renewables are deployed and load continues to grow. We then quantify the system-wide balancing benefits from EVs modulating the charging or discharging of their batteries to mitigate renewable intermittency, without compromising the mobility needs of drivers. Our results show that with its EV deployment target and with only one-way charging control of EVs, California can achieve much of the same benefit of its Storage Mandate for mitigating renewable intermittency, but at a small fraction of the cost. Moreover, EVs provide many times these benefits if two-way charging control becomes widely available. Thus, EVs support the state’s renewable integration targets while avoiding much of the tremendous capital investment of stationary storage that can instead be applied towards further deployment of clean vehicles.
Duarte, Afonso M. S.; Psomopoulos, Fotis E.; Blanchet, Christophe; Bonvin, Alexandre M. J. J.; Corpas, Manuel; Franc, Alain; Jimenez, Rafael C.; de Lucas, Jesus M.; Nyrönen, Tommi; Sipos, Gergely; Suhr, Stephanie B.
2015-01-01
With the increasingly rapid growth of data in life sciences we are witnessing a major transition in the way research is conducted, from hypothesis-driven studies to data-driven simulations of whole systems. Such approaches necessitate the use of large-scale computational resources and e-infrastructures, such as the European Grid Infrastructure (EGI). EGI, one of the key enablers of the digital European Research Area, is a federation of resource providers set up to deliver sustainable, integrated and secure computing services to European researchers and their international partners. Here we aim to provide the state of the art of Grid/Cloud computing in EU research as viewed from within the field of life sciences, focusing on key infrastructures and projects within the life sciences community. Rather than focusing purely on the technical aspects underlying the currently provided solutions, we outline the design aspects and key characteristics that can be identified across major research approaches. Overall, we aim to provide significant insights into the road ahead by establishing ever-strengthening connections between EGI as a whole and the life sciences community. PMID:26157454
NASA Astrophysics Data System (ADS)
Pavlak, Gregory S.
Building energy use is a significant contributing factor to growing worldwide energy demands. In pursuit of a sustainable energy future, commercial building operations must be intelligently integrated with the electric system to increase efficiency and enable renewable generation. Toward this end, a model-based methodology was developed to estimate the capability of commercial buildings to participate in frequency regulation ancillary service markets. This methodology was integrated into a supervisory model predictive controller to optimize building operation in consideration of energy prices, demand charges, and ancillary service revenue. The supervisory control problem was extended to building portfolios to evaluate opportunities for synergistic effect among multiple, centrally-optimized buildings. Simulation studies performed showed that the multi-market optimization was able to determine appropriate opportunities for buildings to provide frequency regulation. Total savings were increased by up to thirteen percentage points, depending on the simulation case. Furthermore, optimizing buildings as a portfolio achieved up to seven additional percentage points of savings, depending on the case. Enhanced energy and cost savings opportunities were observed by taking the novel perspective of optimizing building portfolios in multiple grid markets, motivating future pursuits of advanced control paradigms that enable a more intelligent electric grid.
NASA Astrophysics Data System (ADS)
Reerink, Thomas J.; van de Berg, Willem Jan; van de Wal, Roderik S. W.
2016-11-01
This paper accompanies the second OBLIMAP open-source release. The package is developed to map climate fields between a general circulation model (GCM) and an ice sheet model (ISM) in both directions by using optimally aligned oblique projections, which minimize distortions. The curvatures of the GCM and ISM grid surfaces differ, both grids may be irregularly spaced, and the ratio of their resolutions is allowed to differ greatly. OBLIMAP's stand-alone version is able to map data sets that differ in various aspects on the same ISM grid. Each grid may coincide with the surface of a sphere, an ellipsoid or a flat plane, and the grid types may differ. Re-projection of, for example, ISM data sets is also facilitated. This is demonstrated by relevant applications concerning the major ice caps. As the stand-alone version also applies to the reverse mapping direction, it can be used as an offline coupler. Furthermore, OBLIMAP 2.0 is an embeddable GCM-ISM coupler, suited for high-frequency online coupled experiments. A new fast scan method is presented for structured grids as an alternative for the former time-consuming grid search strategy, realising a performance gain of several orders of magnitude and enabling the mapping of high-resolution data sets with a much larger number of grid nodes. Further, a highly flexible masked mapping option is added. The limitation of the fast scan method with respect to unstructured and adaptive grids is discussed together with a possible future parallel Message Passing Interface (MPI) implementation.
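The optimally aligned oblique projections that OBLIMAP builds on generalize the familiar oblique stereographic projection. As a rough illustration of the underlying geometry, here is the textbook spherical formula (not OBLIMAP's actual Fortran implementation; the projection centre and Earth radius are arbitrary example values):

```python
import math

def oblique_stereographic(lon, lat, lon_c, lat_c, radius=6.371e6):
    """Map (lon, lat) in degrees to plane coordinates in metres using a
    stereographic projection whose tangent point (lon_c, lat_c) is chosen
    obliquely, e.g. over the centre of an ice sheet, to reduce distortion."""
    lam, phi = math.radians(lon), math.radians(lat)
    lam0, phi0 = math.radians(lon_c), math.radians(lat_c)
    dlam = lam - lam0
    # Stereographic scale factor; equals 1 at the tangent point.
    k = 2.0 / (1.0 + math.sin(phi0) * math.sin(phi)
               + math.cos(phi0) * math.cos(phi) * math.cos(dlam))
    x = radius * k * math.cos(phi) * math.sin(dlam)
    y = radius * k * (math.cos(phi0) * math.sin(phi)
                      - math.sin(phi0) * math.cos(phi) * math.cos(dlam))
    return x, y
```

The tangent point maps to the origin and distortion grows with angular distance from it, which is why an oblique (rather than polar or equatorial) centre pays off for ice sheets located away from the pole.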
Evaluation of Grid Modification Methods for On- and Off-Track Sonic Boom Analysis
NASA Technical Reports Server (NTRS)
Nayani, Sudheer N.; Campbell, Richard L.
2013-01-01
Grid modification methods have been under development at NASA to enable better predictions of low boom pressure signatures from supersonic aircraft. As part of this effort, two new codes, Stretched and Sheared Grid - Modified (SSG) and Boom Grid (BG), have been developed in the past year. The CFD results from these codes have been compared with ones from the earlier grid modification codes Stretched and Sheared Grid (SSGRID) and Mach Cone Aligned Prism (MCAP) and also with the available experimental results. NASA's unstructured grid suite of software TetrUSS and the automatic sourcing code AUTOSRC were used for base grid generation and flow solutions. The BG method has been evaluated on three wind tunnel models. Pressure signatures have been obtained up to two body lengths below a Gulfstream aircraft wind tunnel model. Good agreement with the wind tunnel results has been obtained for both on-track and off-track (up to 53 degrees) cases. On-track pressure signatures up to ten body lengths below a Straight Line Segmented Leading Edge (SLSLE) wind tunnel model have been extracted. Good agreement with the wind tunnel results has been obtained. Pressure signatures have been obtained at 1.5 body lengths below a Lockheed Martin aircraft wind tunnel model. Good agreement with the wind tunnel results has been obtained for both on-track and off-track (up to 40 degrees) cases. Grid sensitivity studies have been carried out to investigate any grid-size-related issues. Methods have been evaluated for fully turbulent, mixed laminar/turbulent and fully laminar flow conditions.
Selection of battery technology to support grid-integrated renewable electricity
NASA Astrophysics Data System (ADS)
Leadbetter, Jason; Swan, Lukas G.
2012-10-01
Operation of the electricity grid has traditionally been done using slow responding base and intermediate load generators with fast responding peak load generators to capture the chaotic behavior of end-use demands. Many modern electricity grids are implementing intermittent non-dispatchable renewable energy resources. As a result, the existing support services are becoming inadequate and technological innovation in grid support services are necessary. Support services fall into short (seconds to minutes), medium (minutes to hours), and long duration (several hours) categories. Energy storage offers a method of providing these services and can enable increased penetration rates of renewable energy generators. Many energy storage technologies exist. Of these, batteries span a significant range of required storage capacity and power output. By assessing the energy to power ratio of electricity grid services, suitable battery technologies were selected. These include lead-acid, lithium-ion, sodium-sulfur, and vanadium-redox. Findings show the variety of grid services require different battery technologies and batteries are capable of meeting the short, medium, and long duration categories. A brief review of each battery technology and its present state of development, commercial implementation, and research frontiers is presented to support these classifications.
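The energy-to-power screening described above can be sketched in a few lines. The duration bands and technology assignments below are illustrative assumptions for the sketch, not the paper's exact classification:

```python
def service_duration_hours(energy_kwh, power_kw):
    """Energy-to-power (E/P) ratio of a grid service: the time, in hours,
    for which the rated power must be sustained."""
    return energy_kwh / power_kw

def candidate_batteries(duration_h):
    """Map a service duration onto plausible battery chemistries.
    Thresholds are assumed for illustration only."""
    if duration_h < 0.25:   # seconds-to-minutes services, e.g. frequency regulation
        return ["lithium-ion", "lead-acid"]
    if duration_h < 4.0:    # minutes-to-hours services, e.g. load following
        return ["lithium-ion", "sodium-sulfur"]
    return ["sodium-sulfur", "vanadium-redox"]  # several-hour energy shifting

# A 2 MWh / 500 kW system sustains rated power for 4 h:
print(candidate_batteries(service_duration_hours(2000.0, 500.0)))
# -> ['sodium-sulfur', 'vanadium-redox']
```

The point of the E/P framing is that a single scalar separates power-intensive services (short bursts, cycle-life dominated) from energy-intensive ones (long discharges, capacity-cost dominated), which is what drives the chemistry choice.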
Integrating DICOM structure reporting (SR) into the medical imaging informatics data grid
NASA Astrophysics Data System (ADS)
Lee, Jasper; Le, Anh; Liu, Brent
2008-03-01
The Medical Imaging Informatics (MI2) Data Grid developed at the USC Image Processing and Informatics Laboratory enables medical images to be shared securely between multiple imaging centers. Current applications include an imaging-based clinical trial setting where multiple field sites perform image acquisition and a centralized radiology core performs image analysis, often using computer-aided diagnosis tools (CAD) that generate a DICOM-SR to report their findings and measurements. As more and more CAD tools are being developed in the radiology field, the generated DICOM Structure Reports (SR) holding key radiological findings and measurements that are not part of the DICOM image need to be integrated into the existing Medical Imaging Informatics Data Grid with the corresponding imaging studies. We will discuss the significance and method involved in adapting DICOM-SR into the Medical Imaging Informatics Data Grid. The result is a MI2 Data Grid repository from which users can send and receive DICOM-SR objects based on the imaging-based clinical trial application. The services required to extract and categorize information from the structured reports will be discussed, and the workflow to store and retrieve a DICOM-SR file into the existing MI2 Data Grid will be shown.
Efficient visibility encoding for dynamic illumination in direct volume rendering.
Kronander, Joel; Jönsson, Daniel; Löw, Joakim; Ljung, Patric; Ynnerman, Anders; Unger, Jonas
2012-03-01
We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights, and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multiresolution grid over the extent of the volume. Our method enables high-frequency shadows in the spatial domain, but is limited to a low-frequency approximation of visibility and illumination in the angular domain. In a first pass, level of detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid online computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility, and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.
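The core operation, projecting a directional visibility function onto a truncated SH basis, can be sketched with bands 0-1 and uniform Monte Carlo sampling over the sphere. This is a simplified stand-in for the paper's method, which stores such coefficients per node of a multiresolution grid over the volume:

```python
import math, random

# Real spherical harmonic basis, bands 0 and 1, in Cartesian form.
SH = [
    lambda x, y, z: 0.282095,        # Y_0^0
    lambda x, y, z: 0.488603 * y,    # Y_1^-1
    lambda x, y, z: 0.488603 * z,    # Y_1^0
    lambda x, y, z: 0.488603 * x,    # Y_1^1
]

def project_visibility(vis, n_samples=20000, seed=1):
    """Monte Carlo projection of a directional visibility function vis(x, y, z)
    onto the band 0-1 real SH basis. Returns one coefficient per basis function."""
    rng = random.Random(seed)
    coeffs = [0.0] * len(SH)
    for _ in range(n_samples):
        z = 1.0 - 2.0 * rng.random()           # uniform sampling on the sphere
        theta = 2.0 * math.pi * rng.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        x, y = r * math.cos(theta), r * math.sin(theta)
        v = vis(x, y, z)
        for i, basis in enumerate(SH):
            coeffs[i] += v * basis(x, y, z)
    norm = 4.0 * math.pi / n_samples           # solid angle per sample
    return [c * norm for c in coeffs]

def reconstruct(coeffs, x, y, z):
    """Evaluate the SH-encoded visibility in direction (x, y, z)."""
    return sum(c * basis(x, y, z) for c, basis in zip(coeffs, SH))
```

Because the basis is truncated at band 1, only the low-frequency angular structure of visibility survives, which matches the trade-off the abstract describes: sharp shadows in space, smooth approximation in angle.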
Harper, Jason
2018-03-02
Jason Harper, an electrical engineer in Argonne National Laboratory's EV-Smart Grid Interoperability Center, discusses his SpEC Module invention that will enable fast charging of electric vehicles in under 15 minutes. The module has been licensed to BTCPower.
Service-Oriented Architecture for NVO and TeraGrid Computing
NASA Technical Reports Server (NTRS)
Jacob, Joseph; Miller, Craig; Williams, Roy; Steenberg, Conrad; Graham, Matthew
2008-01-01
The National Virtual Observatory (NVO) Extensible Secure Scalable Service Infrastructure (NESSSI) is a Web service architecture and software framework that enables Web-based astronomical data publishing and processing on grid computers such as the National Science Foundation's TeraGrid. Characteristics of this architecture include the following: (1) Services are created, managed, and upgraded by their developers, who are trusted users of computing platforms on which the services are deployed. (2) Service jobs can be initiated by means of Java or Python client programs run on a command line or with Web portals. (3) Access is granted within a graduated security scheme in which the size of a job that can be initiated depends on the level of authentication of the user.
Systems Integration Fact Sheet
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2016-06-01
This fact sheet is an overview of the Systems Integration subprogram at the U.S. Department of Energy SunShot Initiative. The Systems Integration subprogram enables the widespread deployment of safe, reliable, and cost-effective solar energy technologies by addressing the associated technical and non-technical challenges. These include timely and cost-effective interconnection procedures, optimal system planning, accurate prediction of solar resources, monitoring and control of solar power, maintaining grid reliability and stability, and many more. To address the challenges associated with interconnecting and integrating hundreds of gigawatts of solar power onto the electricity grid, the Systems Integration program funds research, development, and demonstration projects in four broad, interrelated focus areas: grid performance and reliability, dispatchability, power electronics, and communications.
SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE
NASA Technical Reports Server (NTRS)
Davies, C. B.
1994-01-01
SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. 
Non-uniqueness of the adapted grid is acceptable since it makes possible an overall and local error reduction through grid redistribution. SAGE includes the ability to modify the adaption techniques in boundary regions, which substantially improves the flexibility of the adaptive scheme. The vectorial approach used in the analysis also provides flexibility. The user has complete choice of adaption direction and order of sequential adaptions without concern for the computational data structure. Multiple passes are available with no restraint on stepping directions; for each adaptive pass the user can choose a completely new set of adaptive parameters. This facility, combined with the capability of edge boundary control, enables the code to individually adapt multi-dimensional multiple grids. Zonal grids can be adapted while maintaining continuity along the common boundaries. For patched grids, the multiple-pass capability enables complete adaption. SAGE is written in FORTRAN 77 and is intended to be machine independent; however, it requires a FORTRAN compiler which supports NAMELIST input. It has been successfully implemented on Sun series computers, SGI IRIS's, DEC MicroVAX computers, HP series computers, the Cray YMP, and IBM PC compatibles. Source code is provided, but no sample input and output files are provided. The code reads three datafiles: one that contains the initial grid coordinates (x,y,z), one that contains corresponding flow-field variables, and one that contains the user control parameters. It is assumed that the first two datasets are formatted as defined in the plotting software package PLOT3D. Several machine versions of PLOT3D are available from COSMIC. The amount of main memory is dependent on the size of the matrix. The standard distribution medium for SAGE is a 5.25 inch 360K MS-DOS format diskette. 
It is also available on a 0.25 inch streaming magnetic tape cartridge in UNIX tar format or on a 9-track 1600 BPI ASCII CARD IMAGE format magnetic tape. SAGE was developed in 1989, first released as a 2D version in 1991, and updated to 3D in 1993.
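SAGE's spring analogy reduces, along one coordinate line, to a tridiagonal equilibrium system. A single linearized pass can be sketched as follows, using tension springs only (no torsion terms) with stiffnesses frozen at the current gradients; the actual FORTRAN code iterates and couples adjacent lines, so this is an illustration of the idea, not SAGE itself:

```python
def thomas(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal system (sub/diag/sup diagonals)."""
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adapt_1d(x, f, strength=1.0):
    """Redistribute interior grid points toward high-gradient regions of f.
    Spring stiffness on each interval grows with the local solution gradient;
    equilibrium (equal tension k_i * dx_i everywhere) clusters points there."""
    n = len(x)
    k = [1.0 + strength * abs((f[i + 1] - f[i]) / (x[i + 1] - x[i]))
         for i in range(n - 1)]
    m = n - 2                                   # interior unknowns; endpoints fixed
    # Equilibrium at node i: k[i-1]*x[i-1] - (k[i-1]+k[i])*x[i] + k[i]*x[i+1] = 0
    sub = [0.0] + [k[i - 1] for i in range(2, n - 1)]
    diag = [-(k[i - 1] + k[i]) for i in range(1, n - 1)]
    sup = [k[i] for i in range(1, n - 2)] + [0.0]
    rhs = [0.0] * m
    rhs[0] -= k[0] * x[0]                       # fixed left boundary
    rhs[-1] -= k[n - 2] * x[-1]                 # fixed right boundary
    return [x[0]] + thomas(sub, diag, sup, rhs) + [x[-1]]
```

With a step-like solution, intervals spanning the high-gradient region stiffen and shrink, pulling points toward the front while the fixed endpoints preserve the domain; applying such 1D sweeps sequentially in each coordinate direction gives the multi-directional adaption the abstract describes.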
Enabling Object Storage via shims for Grid Middleware
NASA Astrophysics Data System (ADS)
Cadellin Skipsey, Samuel; De Witt, Shaun; Dewhurst, Alastair; Britton, David; Roy, Gareth; Crooks, David
2015-12-01
The Object Store model has quickly become the basis of most commercially successful mass storage infrastructure, backing so-called "Cloud" storage such as Amazon S3, but also underlying the implementation of most parallel distributed storage systems. Many of the assumptions in Object Store design are similar, but not identical, to concepts in the design of Grid Storage Elements, although the requirement for "POSIX-like" filesystem structures on top of SEs makes the disjunction seem larger. As modern Object Stores provide many features that most Grid SEs do not (block-level striping, parallel access, automatic file repair, etc.), it is of interest to see how easily we can provide interfaces to typical Object Stores via plugins and shims for Grid tools, and how well experiments can adapt their data models to them. We present an evaluation of, and first-deployment experiences with, for example, Xrootd-Ceph interfaces for direct object-store access, as part of an initiative within GridPP[1] hosted at RAL. Additionally, we discuss the tradeoffs and experience of developing plugins for the currently-popular Ceph parallel distributed filesystem for the GFAL2 access layer, at Glasgow.
An Overview of Distributed Microgrid State Estimation and Control for Smart Grids
Rana, Md Masud; Li, Li
2015-01-01
Given the significant concerns regarding carbon emissions from fossil fuels, global warming and the energy crisis, renewable distributed energy resources (DERs) are going to be integrated in the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to the traditional methods of SE, this paper proposes a novel accuracy-dependent Kalman filter (KF) based microgrid SE for the smart grid that uses typical communication systems. Then this article proposes a discrete-time linear quadratic regulation to control the state deviations of the microgrid incorporating multiple DERs. Integrating these two approaches with application to the smart grid thus forms a novel contribution in the green energy and control research communities. Finally, the simulation results show that the proposed KF based microgrid SE and control algorithm provides an accurate SE and control compared with the existing method. PMID:25686316
NASA Technical Reports Server (NTRS)
Hinke, Thomas H.
2004-01-01
Grid technology consists of middleware that permits distributed computations, data and sensors to be seamlessly integrated into a secure, single-sign-on processing environment. In this environment, a user has to identify and authenticate himself once to the grid middleware, and then can utilize any of the distributed resources to which he has been granted access. Grid technology allows resources that exist in enterprises that are under different administrative control to be securely integrated into a single processing environment. The grid community has adopted commercial web services technology as a means for implementing persistent, re-usable grid services that sit on top of the basic distributed processing environment that grids provide. These grid services can then form building blocks for even more complex grid services. Each grid service is characterized using the Web Service Description Language, which provides a description of the interface and how other applications can access it. The emerging Semantic Grid work seeks to associate sufficient semantic information with each grid service such that applications will be able to automatically select, compose and, if necessary, substitute available equivalent services in order to assemble collections of services that are most appropriate for a particular application. Grid technology has been used to provide limited support to various Earth and space science applications. Looking to the future, this emerging grid service technology can provide a cyberinfrastructure for both the Earth and space science communities. Groups within these communities could transform those applications that have community-wide applicability into persistent grid services that are made widely available to their respective communities.
In concert with grid-enabled data archives, users could easily create complex workflows that extract desired data from one or more archives and process it through an appropriate set of widely distributed grid services discovered using semantic grid technology. As required, high-end computational resources could be drawn from available grid resource pools. Using grid technology, this confluence of data, services and computational resources could easily be harnessed to transform data from many different sources into a desired product that is delivered to a user's workstation or to a web portal through which it could be accessed by its intended audience.
Pilly, Praveen K.; Grossberg, Stephen
2013-01-01
Medial entorhinal grid cells and hippocampal place cells provide neural correlates of spatial representation in the brain. A place cell typically fires whenever an animal is present in one or more spatial regions, or places, of an environment. A grid cell typically fires in multiple spatial regions that form a regular hexagonal grid structure extending throughout the environment. Different grid and place cells prefer spatially offset regions, with their firing fields increasing in size along the dorsoventral axes of the medial entorhinal cortex and hippocampus. The spacing between neighboring fields for a grid cell also increases along the dorsoventral axis. This article presents a neural model whose spiking neurons operate in a hierarchy of self-organizing maps, each obeying the same laws. This spiking GridPlaceMap model simulates how grid cells and place cells may develop. It responds to realistic rat navigational trajectories by learning grid cells with hexagonal grid firing fields of multiple spatial scales and place cells with one or more firing fields that match neurophysiological data about these cells and their development in juvenile rats. The place cells represent much larger spaces than the grid cells, which enable them to support navigational behaviors. Both self-organizing maps amplify and learn to categorize the most frequent and energetic co-occurrences of their inputs. The current results build upon a previous rate-based model of grid and place cell learning, and thus illustrate a general method for converting rate-based adaptive neural models, without the loss of any of their analog properties, into models whose cells obey spiking dynamics. New properties of the spiking GridPlaceMap model include the appearance of theta band modulation. 
The spiking model also opens a path for implementation in brain-emulating nanochips comprised of networks of noisy spiking neurons with multiple-level adaptive weights for controlling autonomous adaptive robots capable of spatial navigation. PMID:23577130
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Gail-Joon
The project seeks an innovative framework to enable users to access and selectively share resources in distributed environments, enhancing the scalability of information sharing. We have investigated secure sharing & assurance approaches for ad-hoc collaboration, focused on Grids, Clouds, and ad-hoc network environments.
Large-scale Parallel Unstructured Mesh Computations for 3D High-lift Analysis
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Pirzadeh, S.
1999-01-01
A complete "geometry to drag-polar" analysis capability for three-dimensional high-lift configurations is described. The approach is based on the use of unstructured meshes in order to enable rapid turnaround for the complicated geometries that arise in high-lift configurations. Special attention is devoted to creating a capability for enabling analyses on highly resolved grids. Unstructured meshes of several million vertices are initially generated on a workstation, and subsequently refined on a supercomputer. The flow is solved on these refined meshes on large parallel computers using an unstructured agglomeration multigrid algorithm. Good prediction of lift and drag throughout the range of incidences is demonstrated on a transport take-off configuration using up to 24.7 million grid points. The feasibility of using this approach in a production environment on existing parallel machines is demonstrated, as well as the scalability of the solver on machines using up to 1450 processors.
Annular Ion Engine Concept and Development Status
NASA Technical Reports Server (NTRS)
Patterson, Michael J.
2016-01-01
The Annular Ion Engine (AIE) concept represents an evolutionary development in gridded ion thruster technology with the potential for delivering revolutionary capabilities. It has this potential because the AIE concept: (a) enables scaling of ion thruster technology to high power at specific impulse (Isp) values of interest for near-term mission applications, 5000 sec; and (b) enables an increase in both thrust density and thrust-to-power (F/P) ratio exceeding conventional ion thrusters and other electric propulsion (EP) technology options, thereby yielding the highest performance over a broad range in Isp. The AIE concept represents a natural progression of gridded ion thruster technology beyond the capabilities embodied by NASA's Evolutionary Xenon Thruster (NEXT) [1]. The AIE would be appropriate for: (a) applications which require power levels exceeding NEXT's capabilities (up to about 14 kW [2]), with scalability potentially to 100s of kW; and/or (b) applications which require F/P conditions exceeding NEXT's capabilities.
Synchrotron Imaging Computations on the Grid without the Computing Element
NASA Astrophysics Data System (ADS)
Curri, A.; Pugliese, R.; Borghes, R.; Kourousias, G.
2011-12-01
Besides the heavy use of the Grid in the Synchrotron Radiation Facility (SRF) Elettra, additional special requirements from the beamlines had to be satisfied through a novel solution that we present in this work. In the traditional Grid Computing paradigm the computations are performed on the Worker Nodes of the grid element known as the Computing Element. A Grid middleware extension that our team has been working on is that of the Instrument Element. In general it is used to Grid-enable instrumentation, and it can be seen as a neighbouring concept to that of traditional Control Systems. As a further extension we demonstrate the Instrument Element as the steering mechanism for a series of computations. In our deployment it interfaces with a Control System that manages a series of computationally demanding Scientific Imaging tasks in an online manner. The instrument control in Elettra is done through a suitable Distributed Control System, a common approach in the SRF community. The applications that we present are for a beamline working in medical imaging. The solution resulted in a substantial improvement of a Computed Tomography workflow. The near-real-time requirements could not have been easily satisfied by our Grid's middleware (gLite) due to the various latencies that often occurred during the job submission and queuing phases. Moreover, the required deployment of a set of TANGO devices could not have been done in a standard gLite WN. Besides the avoidance of certain core Grid components, the Grid Security infrastructure has been utilised in the final solution.
Vector-based navigation using grid-like representations in artificial agents.
Banino, Andrea; Barry, Caswell; Uria, Benigno; Blundell, Charles; Lillicrap, Timothy; Mirowski, Piotr; Pritzel, Alexander; Chadwick, Martin J; Degris, Thomas; Modayil, Joseph; Wayne, Greg; Soyer, Hubert; Viola, Fabio; Zhang, Brian; Goroshin, Ross; Rabinowitz, Neil; Pascanu, Razvan; Beattie, Charlie; Petersen, Stig; Sadik, Amir; Gaffney, Stephen; King, Helen; Kavukcuoglu, Koray; Hassabis, Demis; Hadsell, Raia; Kumaran, Dharshan
2018-05-01
Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go1,2. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning3-5 failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex6. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space7,8 and is critical for integrating self-motion (path integration)6,7,9 and planning direct trajectories to goals (vector-based navigation)7,10,11. Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types12. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments, optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation.
As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation7,10,11, demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments.
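The claim that a multi-scale periodic code supplies a spatial metric can be illustrated with a toy sketch (not the paper's network): position is stored only as phases within several "modules" of different period, yet can be decoded uniquely over a useful range, and a goal vector follows by decoding and subtracting. The module periods below are invented for illustration.

```python
import numpy as np

# Hypothetical grid "modules": each encodes position as a phase on a ring
# with a different spatial period, mimicking a multi-scale periodic code.
SCALES = np.array([0.3, 0.42, 0.59])     # module periods, arbitrary units

def encode(x):
    """Phase of position x within each module, each in [0, 1)."""
    return (x / SCALES) % 1.0

def decode(phases, x_max=5.0, step=1e-3):
    """Pick the candidate position whose phases best match (brute force)."""
    cand = np.arange(0.0, x_max, step)
    diff = np.abs((cand[:, None] / SCALES) % 1.0 - phases)
    err = np.minimum(diff, 1.0 - diff).sum(axis=1)   # circular distances
    return cand[np.argmin(err)]

# vector-based navigation in 1D: displacement to a goal from the two codes
here, goal = encode(1.25), encode(3.8)
displacement = decode(goal) - decode(here)           # ~ +2.55
```

The decoded displacement is exactly the "metric quantity" the abstract refers to: no single module disambiguates position, but the combination does.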
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Thomas, James L.; Biedron, Robert T.; Diskin, Boris
2005-01-01
FMG3D (full multigrid 3 dimensions) is a pilot computer program that solves equations of fluid flow using a finite difference representation on a structured grid. Infrastructure exists for three dimensions but the current implementation treats only two dimensions. Written in Fortran 90, FMG3D takes advantage of the recursive subroutine feature, dynamic memory allocation, and structured-programming constructs of that language. FMG3D supports multi-block grids with three types of block-to-block interfaces: periodic, C-zero, and C-infinity. For all three types, grid points must match at interfaces. For periodic and C-infinity types, derivatives of grid metrics must be continuous at interfaces. The available equation sets are as follows: scalar elliptic equations, scalar convection equations, and the pressure-Poisson formulation of the Navier-Stokes equations for an incompressible fluid. All the equation sets are implemented with nonzero forcing functions to enable the use of user-specified solutions to assist in verification and validation. The equations are solved with a full multigrid scheme using a full approximation scheme to converge the solution on each succeeding grid level. Restriction to the next coarser mesh uses direct injection for variables and full weighting for residual quantities; prolongation of the coarse grid correction from the coarse mesh to the fine mesh uses bilinear interpolation; and prolongation of the coarse grid solution uses bicubic interpolation.
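The transfer operators listed above can be sketched in one dimension. The following is a minimal full-approximation-scheme (FAS) two-grid cycle for u'' = f with the stated ingredients: direct injection for the solution, full weighting for residuals, and linear interpolation (bilinear in 2D) for the coarse-grid correction. It is an illustrative sketch, not FMG3D itself, which is a multi-block Fortran 90 code.

```python
import numpy as np

def smooth(u, f, h, iters=3):
    # Gauss-Seidel relaxation for u'' = f with Dirichlet boundaries
    for _ in range(iters):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * f[i])
    return u

def apply_A(u, h):          # discrete second derivative, zero on boundaries
    Au = np.zeros_like(u)
    Au[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / (h * h)
    return Au

def residual(u, f, h):
    return f - apply_A(u, h) if False else np.concatenate(
        ([0.0], f[1:-1] - (u[2:] - 2 * u[1:-1] + u[:-2]) / (h * h), [0.0]))

def restrict_fw(v):         # full weighting (for residual-type quantities)
    interior = 0.25 * v[1:-2:2] + 0.5 * v[2:-1:2] + 0.25 * v[3::2]
    return np.concatenate(([v[0]], interior, [v[-1]]))

def prolong(v):             # linear interpolation of the correction
    u = np.zeros(2 * len(v) - 1)
    u[::2] = v
    u[1::2] = 0.5 * (v[:-1] + v[1:])
    return u

def fas_two_grid(u, f, h):
    u = smooth(u, f, h)
    u2 = u[::2]                                   # direct injection of u
    f2 = restrict_fw(residual(u, f, h)) + apply_A(u2, 2 * h)
    v2 = smooth(u2.copy(), f2, 2 * h, iters=60)   # "solve" the coarse problem
    u = u + prolong(v2 - u2)                      # coarse-grid correction
    return smooth(u, f, h)

n = 17
x = np.linspace(0.0, 1.0, n)
f = -np.pi ** 2 * np.sin(np.pi * x)               # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = fas_two_grid(u, f, x[1] - x[0])
```

A full multigrid driver would additionally start from the coarsest level and prolongate the coarse solution itself (with higher-order, e.g. cubic, interpolation) before cycling, as the abstract describes.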
Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
A new computational analysis tool, the downscaling (DS) test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second- and third-order accuracies on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using DS tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to any order scheme.
High-Latitude F-Region Irregularities: A Review and Synthesis
1988-02-15
Menlo Park, CA 94025-3434. 15 February 1988. Technical Report, Contract No. DNA 001-86-C-0002. Approved for public release; distribution is unlimited.
1977-05-01
Contract Report D-77-4: Transformations of Heavy Metals and Plant Nutrients in Dredged Sediments as Affected by... Office, Chief of Engineers, U.S. Army, Washington, D.C. 20314. Under Contract No. DACW39-74-C-0076 (DMRP Work Unit No. IC05). Monitored by Environmental...
DOE Office of Scientific and Technical Information (OSTI.GOV)
McParland, Charles
The Smart Grid envisions a transformed US power distribution grid that enables communicating devices, under human supervision, to moderate loads and increase overall system stability and security. This vision explicitly promotes increased participation from a community that, in the past, has had little involvement in power grid operations: the consumer. The potential size of this new community and its members' extensive experience with the public Internet prompt an analysis of the evolution and current state of the Internet as a predictor of best practices in the architectural design of certain portions of the Smart Grid network. Although still evolving, the vision of the Smart Grid is that of a community of communicating and cooperating energy-related devices that can be directed to route power and modulate loads in pursuit of an integrated, efficient and secure electrical power grid. The remaking of the present power grid into the Smart Grid is considered as fundamentally transformative as previous developments such as modern computing technology and high-bandwidth data communications. However, unlike these earlier developments, which relied on the discovery of critical new technologies (e.g. the transistor or optical fiber transmission lines), the technologies required for the Smart Grid currently exist and, in many cases, are already widely deployed. In contrast to other examples of technical transformations, the path (and success) of the Smart Grid will be determined not by its technology, but by its system architecture. Fortunately, we have a recent example of a transformative force of similar scope that shares a fundamental dependence on our existing communications infrastructure: the Internet. We will explore several ways in which the scale of the Internet and the expectations of its users have shaped the present Internet environment.
As the presence of consumers within the Smart Grid increases, some experiences from the early growth of the Internet are expected to be informative and pertinent.
Huang, Zhe; Parrott, Edward P J; Park, Hongkyu; Chan, Hau Ping; Pickwell-MacPherson, Emma
2014-02-15
A thin-film terahertz polarizer is proposed and realized via a tunable bilayer metal wire-grid structure to achieve high extinction ratios and good transmission. The polarizer is fabricated on top of a thin silica layer by standard micro-fabrication techniques to eliminate the multireflection effects. The tunable alignment of the bilayer aluminum-wire grid structure enables tailoring of the extinction ratio and transmission characteristics. Using terahertz time-domain spectroscopy (THz-TDS), a fabricated polarizer is characterized, with extinction ratios greater than 50 dB and transmission losses below 1 dB reported in the 0.2-1.1 THz frequency range. These characteristics can be improved by further tuning the polarizer parameters such as the pitch, metal film thickness, and lateral displacement.
Nonlinearity of resistive impurity effects on van der Pauw measurements
NASA Astrophysics Data System (ADS)
Koon, D. W.
2006-09-01
The dependence of van der Pauw resistivity measurements on local macroscopic inhomogeneities is shown to be nonlinear. A resistor grid network models a square laminar specimen, enabling the investigation of both positive and negative local perturbations in resistivity. The effect of inhomogeneity is measured both experimentally, for an 11×11 grid, and computationally, for both 11×11 and 101×101 grids. The maximum "shortlike" perturbation produces 3.1±0.2 times the effect predicted by the linear approximation, regardless of its position within the specimen, while all "openlike" perturbations produce a smaller effect than predicted. An empirical nonlinear correction for f(x,y) is presented which provides excellent fit over the entire range of both positive and negative perturbations for the entire specimen.
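The resistor-grid model lends itself to a compact numerical sketch. The following builds the conductance (Laplacian) matrix of an 11×11 network of unit resistors, computes a four-terminal van der Pauw resistance, and perturbs one central link toward "openlike" (conductance to 0) and "shortlike" (large conductance) defects. This is a simplified illustration in the spirit of the abstract, not the authors' code.

```python
import numpy as np

N = 11  # N x N nodes joined by unit resistors, as in the 11x11 network

def node(i, j):
    return i * N + j

def laplacian(perturbed_link=None):
    """Conductance matrix; optionally change one link's conductance to g
    (g=0 cuts it, an 'openlike' defect; large g approximates a short)."""
    G = np.zeros((N * N, N * N))
    def add(a, b, g=1.0):
        G[a, a] += g; G[b, b] += g; G[a, b] -= g; G[b, a] -= g
    for i in range(N):
        for j in range(N):
            if i + 1 < N: add(node(i, j), node(i + 1, j))
            if j + 1 < N: add(node(i, j), node(i, j + 1))
    if perturbed_link is not None:
        a, b, g = perturbed_link
        add(a, b, g - 1.0)          # adjust that link from 1 to g
    return G

def vdp_resistance(G):
    """R_AB,CD: unit current in at corner A, out at corner B; V_D - V_C."""
    A, B = node(0, 0), node(0, N - 1)
    C, D = node(N - 1, N - 1), node(N - 1, 0)
    I = np.zeros(N * N); I[A], I[B] = 1.0, -1.0
    G = G.copy(); G[A, :] = 0.0; G[A, A] = 1.0; I[A] = 0.0  # pin V_A = 0
    V = np.linalg.solve(G, I)
    return V[D] - V[C]

R0 = vdp_resistance(laplacian())
mid = N // 2
link = (node(mid, mid), node(mid, mid + 1))
R_open = vdp_resistance(laplacian(link + (0.0,)))    # openlike perturbation
R_short = vdp_resistance(laplacian(link + (1e3,)))   # shortlike perturbation
```

Sweeping the link conductance between these extremes and comparing against the linear (small-perturbation) prediction reproduces the kind of nonlinearity the abstract quantifies.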
NASA Astrophysics Data System (ADS)
Aktas, Mehmet; Aydin, Galip; Donnellan, Andrea; Fox, Geoffrey; Granat, Robert; Grant, Lisa; Lyzenga, Greg; McLeod, Dennis; Pallickara, Shrideep; Parker, Jay; Pierce, Marlon; Rundle, John; Sayar, Ahmet; Tullis, Terry
2006-12-01
We describe the goals and initial implementation of the International Solid Earth Virtual Observatory (iSERVO). This system is built using a Web Services approach to Grid computing infrastructure and is accessed via a component-based Web portal user interface. We describe our implementations of services used by this system, including Geographical Information System (GIS)-based data grid services for accessing remote data repositories and job management services for controlling multiple execution steps. iSERVO is an example of a larger trend to build globally scalable scientific computing infrastructures using the Service Oriented Architecture approach. Adoption of this approach raises a number of research challenges in millisecond-latency message systems suitable for internet-enabled scientific applications. We review our research in these areas.
Metal nano-grids for transparent conduction in solar cells
Muzzillo, Christopher P.
2017-05-11
A general procedure for predicting metal grid performance in solar cells was developed. Unlike transparent conducting oxides (TCOs) or other homogeneous films, metal grids induce more resistance in the neighbor layer. The resulting balance of transmittance, neighbor and grid resistance was explored in light of cheap lithography advances that have enabled metal nano-grid (MNG) fabrication. The patterned MNGs have junction resistances and degradation rates that are more favorable than solution-synthesized metal nanowires. Neighbor series resistance was simulated by the finite element method, although a simpler analytical model was sufficient in most cases. Finite-difference frequency-domain transmittance simulations were performed for MNGs with minimum wire width (w) of 50 nm, but deviations from aperture transmittance were small in magnitude. Depending on the process, MNGs can exhibit increased series resistance as w is decreased. However, numerous experimental reports have already achieved transmittance-MNG sheet resistance trade-offs comparable to TCOs. The transmittance, neighbor and MNG series resistances were used to parameterize a grid fill factor for a solar cell. In conclusion, this new figure of merit was used to demonstrate that although MNGs have only been employed in low efficiency solar cells, substantial gains in performance are predicted for decreased w in all high efficiency absorber technologies.
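The basic transmittance versus sheet-resistance trade-off can be estimated with textbook geometric formulas, a far cruder sketch than the paper's finite-element and FDFD simulations. The silver resistivity and grid dimensions below are illustrative assumptions, not values from the paper.

```python
# Back-of-the-envelope trade-off for a metal nano-grid (MNG): aperture
# transmittance vs. grid sheet resistance, using geometric estimates only.

RHO = 1.6e-8  # ohm*m, bulk silver (thin films are typically 2-3x higher)

def aperture_transmittance(w, p):
    """Shadowing transmittance of a square grid: the open-area fraction."""
    return (1.0 - w / p) ** 2

def grid_sheet_resistance(w, p, t, rho=RHO):
    """Sheet resistance of a square grid of lines of width w, pitch p,
    thickness t; only lines parallel to the current are counted."""
    return rho * p / (w * t)

w, p, t = 50e-9, 1.0e-6, 100e-9   # 50 nm wires, 1 um pitch, 100 nm thick
T = aperture_transmittance(w, p)        # ~0.90
Rs = grid_sheet_resistance(w, p, t)     # a few ohm/sq, competitive with TCOs
```

At fixed w/p the transmittance is constant while the sheet resistance scales with pitch, which is one way to see why shrinking w (with proportionally finer pitch) is attractive for high-efficiency absorbers.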
Design and implementation of GRID-based PACS in a hospital with multiple imaging departments
NASA Astrophysics Data System (ADS)
Yang, Yuanyuan; Jin, Jin; Sun, Jianyong; Zhang, Jianguo
2008-03-01
In an enterprise healthcare environment there are usually multiple clinical departments providing imaging-enabled healthcare services, such as radiology, oncology, pathology, and cardiology. The picture archiving and communication system (PACS) is therefore required not only to support radiology-based image display and workflow and data flow management, but also to provide more specialized image processing and management tools for the other departments offering imaging-guided diagnosis and therapy, and there is an urgent demand to integrate the multiple PACSs together to provide patient-oriented imaging services for enterprise collaborative healthcare. In this paper, we present the design method and implementation strategy for a grid-based PACS (Grid-PACS) for a hospital with multiple imaging departments or centers. The Grid-PACS functions as middleware between the traditional PACS archiving servers and the workstations or image-viewing clients, and provides DICOM image communication and WADO services to the end users. Images can be stored in multiple distributed archiving servers but managed in a centralized manner. The grid-based PACS has automatic image backup and disaster recovery services and can provide the best image retrieval path to image requesters based on optimal algorithms. The designed grid-based PACS has been implemented in Shanghai Huadong Hospital and has been running smoothly for two years.
The Particle Physics Data Grid. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livny, Miron
2002-08-16
The main objective of the Particle Physics Data Grid (PPDG) project has been to implement and evaluate distributed (Grid-enabled) data access and management technology for current and future particle and nuclear physics experiments. The specific goals of PPDG have been to design, implement, and deploy a Grid-based software infrastructure capable of supporting the data generation, processing and analysis needs common to the physics experiments represented by the participants, and to adapt experiment-specific software to operate in the Grid environment and to exploit this infrastructure. To accomplish these goals, the PPDG focused on the implementation and deployment of several critical services: reliable and efficient file replication service, high-speed data transfer services, multisite file caching and staging service, and reliable and recoverable job management services. The focus of the activity was the job management services and the interplay between these services and distributed data access in a Grid environment. Software was developed to study the interaction between HENP applications and the distributed data storage fabric. One key conclusion was the need for a reliable and recoverable tool for managing large collections of interdependent jobs. An attached document provides an overview of the current status of the Directed Acyclic Graph Manager (DAGMan) with its main features and capabilities.
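DAGMan expresses such collections of interdependent jobs declaratively in an input file of JOB and PARENT/CHILD statements. The following minimal fragment is hypothetical (the submit-file names are invented), but uses standard DAGMan syntax, including the RETRY directive that provides the recoverability the abstract emphasizes.

```text
# hypothetical three-stage HENP workflow expressed for DAGMan
JOB  Generate  generate.sub
JOB  Process   process.sub
JOB  Merge    merge.sub
PARENT Generate CHILD Process
PARENT Process CHILD Merge
# recoverability: resubmit the Process job up to 3 times on failure
RETRY Process 3
```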
Heterogeneous Wireless Networks for Smart Grid Distribution Systems: Advantages and Limitations.
Khalifa, Tarek; Abdrabou, Atef; Shaban, Khaled; Gaouda, A M
2018-05-11
Supporting a conventional power grid with advanced communication capabilities is a cornerstone of transforming it into a smart grid. A reliable communication infrastructure with high throughput can lay the foundation for the ultimate objective of a fully automated power grid with self-healing capabilities. In order to realize this objective, the communication infrastructure of a power distribution network needs to be extended to cover all substations, including medium/low-voltage ones. This shall enable information exchange among substations for a variety of system automation purposes with a latency low enough for time-critical applications. This paper proposes the integration of two heterogeneous wireless technologies (such as WiFi and cellular 3G/4G) to provide reliable and fast communication among primary and secondary distribution substations. This integration allows the transmission of different data packets (not packet replicas) over two radio interfaces, making these interfaces act as one data pipe. Thus, the paper investigates the applicability and effectiveness of employing heterogeneous wireless networks (HWNs) in achieving the desired reliability and timeliness requirements of future smart grids. We study the performance of HWNs in a realistic scenario under different data transfer loads and packet loss ratios. Our findings reveal that HWNs can be a viable data transfer option for smart grids.
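The benefit of sending distinct packets (rather than replicas) over two radios can be seen with a simple proportional-split model. This sketch is not the paper's scheduler; the link rates and loss ratios are illustrative assumptions.

```python
# Why splitting *distinct* packets over two radio interfaces behaves like
# one aggregated data pipe: assign packets in proportion to each link's
# goodput and finish when the slower share completes.

def split_transfer_time(n_pkts, pkt_bits, rate1, rate2, loss1=0.0, loss2=0.0):
    """Completion time when distinct packets are split across two links."""
    eff1 = rate1 * (1.0 - loss1)          # goodput of link 1 (bit/s)
    eff2 = rate2 * (1.0 - loss2)          # goodput of link 2 (bit/s)
    n1 = round(n_pkts * eff1 / (eff1 + eff2))
    n2 = n_pkts - n1
    return max(n1 * pkt_bits / eff1, n2 * pkt_bits / eff2)

# WiFi-like and cellular-like links (illustrative numbers)
t_hwn = split_transfer_time(1000, 12000, 20e6, 5e6, loss1=0.02, loss2=0.01)
t_best_single = 1000 * 12000 / (20e6 * (1.0 - 0.02))
```

The aggregated pipe completes faster than the better single link whenever the second link contributes nonzero goodput, which is the intuition behind the HWN design.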
Heterogeneous Wireless Networks for Smart Grid Distribution Systems: Advantages and Limitations
Khalifa, Tarek; Abdrabou, Atef; Gaouda, A. M.
2018-01-01
Supporting a conventional power grid with advanced communication capabilities is a cornerstone of transforming it into a smart grid. A reliable communication infrastructure with high throughput can lay the foundation for the ultimate objective of a fully automated power grid with self-healing capabilities. In order to realize this objective, the communication infrastructure of a power distribution network needs to be extended to cover all substations, including medium/low-voltage ones. This shall enable information exchange among substations for a variety of system automation purposes with a latency low enough for time-critical applications. This paper proposes the integration of two heterogeneous wireless technologies (such as WiFi and cellular 3G/4G) to provide reliable and fast communication among primary and secondary distribution substations. This integration allows the transmission of different data packets (not packet replicas) over two radio interfaces, making these interfaces act as one data pipe. Thus, the paper investigates the applicability and effectiveness of employing heterogeneous wireless networks (HWNs) in achieving the desired reliability and timeliness requirements of future smart grids. We study the performance of HWNs in a realistic scenario under different data transfer loads and packet loss ratios. Our findings reveal that HWNs can be a viable data transfer option for smart grids. PMID:29751633
Metal nano-grids for transparent conduction in solar cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muzzillo, Christopher P.
A general procedure for predicting metal grid performance in solar cells was developed. Unlike transparent conducting oxides (TCOs) or other homogeneous films, metal grids induce more resistance in the neighbor layer. The resulting balance of transmittance, neighbor and grid resistance was explored in light of cheap lithography advances that have enabled metal nano-grid (MNG) fabrication. The patterned MNGs have junction resistances and degradation rates that are more favorable than solution-synthesized metal nanowires. Neighbor series resistance was simulated by the finite element method, although a simpler analytical model was sufficient in most cases. Finite-difference frequency-domain transmittance simulations were performed for MNGs with minimum wire width (w) of 50 nm, but deviations from aperture transmittance were small in magnitude. Depending on the process, MNGs can exhibit increased series resistance as w is decreased. However, numerous experimental reports have already achieved transmittance-MNG sheet resistance trade-offs comparable to TCOs. The transmittance, neighbor and MNG series resistances were used to parameterize a grid fill factor for a solar cell. In conclusion, this new figure of merit was used to demonstrate that although MNGs have only been employed in low efficiency solar cells, substantial gains in performance are predicted for decreased w in all high efficiency absorber technologies.
JPARSS: A Java Parallel Network Package for Grid Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jie; Akers, Walter; Chen, Ying
2002-03-01
The emergence of high-speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, owing to the need to tune the TCP window size to improve bandwidth and reduce latency on a high-speed wide area network. This paper presents a Java package called JPARSS (Java Parallel Secure Stream (Socket)) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. This package enables single sign-on, certificate delegation and secure or plain-text data transfer using several security components based on X.509 certificates and SSL. Several experiments are presented to show that using Java parallel streams is more effective than tuning the TCP window size. In addition, a simple architecture using Web services...
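The partitioning idea behind parallel streams can be sketched independently of the transport. The following is a Python illustration, not JPARSS's Java API: a buffer is split round-robin into k partitions that would each travel over its own socket, then reassembled in order on the receiving side. The chunk size is an assumption.

```python
# Round-robin partitioning for k parallel streams, and its inverse.

CHUNK = 64 * 1024  # bytes handed to each stream in turn (assumed size)

def partition(data: bytes, k: int):
    """Split the byte stream into k partitions, one per parallel stream."""
    parts = [bytearray() for _ in range(k)]
    for idx in range(0, len(data), CHUNK):
        parts[(idx // CHUNK) % k] += data[idx:idx + CHUNK]
    return [bytes(p) for p in parts]

def reassemble(parts, total_len):
    """Reconstruct the original buffer from the k received partitions."""
    out = bytearray(total_len)
    cursors = [0] * len(parts)
    for idx in range(0, total_len, CHUNK):
        s = (idx // CHUNK) % len(parts)
        n = min(CHUNK, total_len - idx)
        out[idx:idx + n] = parts[s][cursors[s]:cursors[s] + n]
        cursors[s] += n
    return bytes(out)
```

With each partition on its own TCP connection, the aggregate in-flight data grows with k, which is why parallel streams can substitute for manual window tuning on long fat networks.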
Greene, Samuel M; Batista, Victor S
2017-09-12
We introduce the "tensor-train split-operator Fourier transform" (TT-SOFT) method for simulations of multidimensional nonadiabatic quantum dynamics. TT-SOFT is essentially the grid-based SOFT method implemented in dynamically adaptive tensor-train representations. In the same spirit as all matrix product states, the tensor-train format enables the representation, propagation, and computation of observables of multidimensional wave functions in terms of the grid-based wavepacket tensor components, bypassing the need to actually compute the wave function in its full-rank tensor product grid space. We demonstrate the accuracy and efficiency of the TT-SOFT method as applied to the propagation of 24-dimensional wave packets describing the S1/S2 interconversion dynamics of pyrazine after UV photoexcitation to the S2 state. Our results show that the TT-SOFT method is a powerful computational approach for simulations of quantum dynamics of polyatomic systems, since it avoids the exponential scaling problem of full-rank grid-based representations.
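The uncompressed kernel that TT-SOFT represents in tensor-train format is the standard split-operator Fourier transform step. A minimal full-rank 1D version, with an illustrative harmonic potential and hbar = m = 1, looks as follows; in 24 dimensions this full grid is exactly what becomes intractable without tensor-train compression.

```python
import numpy as np

# One Strang-split SOFT step:
#   psi(t+dt) = e^{-iV dt/2} F^{-1} e^{-iT dt} F e^{-iV dt/2} psi(t)
n, L, dt = 256, 20.0, 0.01
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
V = 0.5 * x ** 2                 # harmonic potential (illustrative choice)
T = 0.5 * k ** 2                 # kinetic energy on the momentum grid
half_v = np.exp(-0.5j * V * dt)
kin = np.exp(-1j * T * dt)

def soft_step(psi):
    psi = half_v * psi
    psi = np.fft.ifft(kin * np.fft.fft(psi))
    return half_v * psi

# Gaussian wave packet displaced from the potential minimum
psi = np.exp(-0.5 * (x - 1.0) ** 2) / np.pi ** 0.25
for _ in range(1000):
    psi = soft_step(psi)
norm = np.sum(np.abs(psi) ** 2) * (L / n)   # unitary steps preserve the norm
```

Each step applies only diagonal phase factors plus FFTs, which is what makes the method attractive to reformulate over compressed tensor components.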
Security and Cloud Outsourcing Framework for Economic Dispatch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarker, Mushfiqur R.; Wang, Jianhui; Li, Zuyi
The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm for solving these issues consists of in-house high-performance computing infrastructures, which have the drawbacks of high capital expenditures, maintenance costs, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains: the highly confidential grid data is susceptible to potential cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. The security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and the problem structure, thus enabling secure outsourcing to the cloud. Results show that for large grid test cases the performance gains and costs outperform those of the in-house infrastructure.
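The masking idea can be illustrated with a toy affine change of variables: the owner substitutes x = Qy with a secret invertible Q, so the cloud sees only transformed coefficients yet returns a solution the owner can unmask. This is a simplified sketch of the general approach, not the paper's transformation (which hides more of the problem, e.g. the right-hand side as well); the dispatch-style numbers are invented.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# owner's private LP:  min c^T x  s.t.  A x <= b   (x >= 0 folded into A)
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [-1.0, 0.0],
              [0.0, -1.0]])
b = np.array([4.0, 2.0, 0.0, 0.0])

Q = rng.normal(size=(2, 2)) + 3.0 * np.eye(2)   # secret invertible mask
# the "cloud" only ever sees (Q^T c, A Q, b), never c or A themselves
masked = linprog(Q.T @ c, A_ub=A @ Q, b_ub=b,
                 bounds=(None, None), method="highs")
x_recovered = Q @ masked.x                      # owner unmasks the optimum
```

Because the objective satisfies (Qᵀc)ᵀy = cᵀ(Qy), the masked problem attains the same optimal value as the private one, here at x* = (0, 4) with objective -8.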
Security and Cloud Outsourcing Framework for Economic Dispatch
Sarker, Mushfiqur R.; Wang, Jianhui; Li, Zuyi; ...
2017-04-24
The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm for solving these issues consists of in-house high-performance computing infrastructures, which have the drawbacks of high capital expenditures, maintenance costs, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains: the highly confidential grid data is susceptible to potential cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. The security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and the problem structure, thus enabling secure outsourcing to the cloud. Results show that for large grid test cases the performance gains and costs outperform those of the in-house infrastructure.
A new service-oriented grid-based method for AIoT application and implementation
NASA Astrophysics Data System (ADS)
Zou, Yiqin; Quan, Li
2017-07-01
The traditional three-layer Internet of Things (IoT) model, comprising a physical perception layer, an information transfer layer, and a service application layer, cannot fully express the complexity and diversity of the agricultural engineering domain. It is hard to categorize, organize, and manage agricultural things with these three layers alone. Based on these requirements, we propose a new service-oriented grid-based method to set up and build the agricultural IoT. Considering the heterogeneity, resource limitations, transparency, and layering of agricultural things, we propose an abstract model for all agricultural resources. This model is service-oriented and expressed with the Open Grid Services Architecture (OGSA). Information and data about agricultural things are described and encapsulated using XML in this model. Each agricultural engineering application provides its service through one application node in this service-oriented grid. The description of the Web Services Resource Framework (WSRF)-based Agricultural Internet of Things (AIoT) and the encapsulation method are also discussed in this paper for resource management in this model.
Distributed Monte Carlo production for DZero
NASA Astrophysics Data System (ADS)
Snow, Joel; DØ Collaboration
2010-04-01
The DZero collaboration uses a variety of resources on four continents to pursue a strategy of flexibility and automation in the generation of simulation data. This strategy provides a resilient and opportunistic system which ensures an adequate and timely supply of simulation data to support DZero's physics analyses. A mixture of facilities, dedicated and opportunistic, specialized and generic, large and small, grid-enabled and not, is used to provide a production system that has adapted to newly developing technologies. This strategy has increased the event production rate by a factor of seven and the data production rate by a factor of ten over the last three years despite diminishing manpower. Common to all production facilities is the SAM (Sequential Access to Metadata) data grid. Job submission to the grid uses SAMGrid middleware, which may forward jobs to the OSG, the WLCG, or native SAMGrid sites. The distributed computing and data handling system used by DZero is described, and the results of MC production since the deployment of grid technologies are presented.
New Antarctic Gravity Anomaly Grid for Enhanced Geodetic and Geophysical Studies in Antarctica
Scheinert, M.; Ferraccioli, F.; Schwabe, J.; Bell, R.; Studinger, M.; Damaske, D.; Jokat, W.; Aleshkova, N.; Jordan, T.; Leitchenkov, G.; Blankenship, D. D.; Damiani, T. M.; Young, D.; Cochran, J. R.; Richter, T. D.
2018-01-01
Gravity surveying is challenging in Antarctica because of its hostile environment and inaccessibility. Nevertheless, many ground-based, airborne and shipborne gravity campaigns have been completed by the geophysical and geodetic communities since the 1980s. We present the first modern Antarctic-wide gravity data compilation derived from 13 million data points covering an area of 10 million km2, which corresponds to 73% coverage of the continent. The remove-compute-restore technique was applied for gridding, which facilitated levelling of the different gravity datasets with respect to an Earth Gravity Model derived from satellite data alone. The resulting free-air and Bouguer gravity anomaly grids of 10 km resolution are publicly available. These grids will enable new high-resolution combined Earth Gravity Models to be derived and represent a major step forward towards solving the geodetic polar data gap problem. They provide a new tool to investigate continental-scale lithospheric structure and geological evolution of Antarctica. PMID:29326484
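The remove-compute-restore technique can be shown in miniature: subtract a long-wavelength global model at the scattered survey points, grid the smooth residuals, then add the model back on the target grid. In this sketch a simple analytic function stands in for the satellite-only Earth Gravity Model and plain inverse-distance weighting stands in for the actual gridding method; all coordinates and values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def global_model(lon, lat):
    """Stand-in for a satellite-only global model, evaluable anywhere."""
    return 20.0 * np.sin(np.radians(lon)) + 10.0 * np.cos(2.0 * np.radians(lat))

# scattered survey points: long-wavelength field plus a local anomaly
lon = rng.uniform(0.0, 10.0, 500)
lat = rng.uniform(-75.0, -65.0, 500)
local = 5.0 * np.exp(-((lon - 5.0) ** 2 + (lat + 70.0) ** 2))
obs = global_model(lon, lat) + local

resid = obs - global_model(lon, lat)            # 1) remove the global model
glon, glat = np.meshgrid(np.linspace(0, 10, 21), np.linspace(-75, -65, 21))
d2 = (glon.ravel()[:, None] - lon) ** 2 + (glat.ravel()[:, None] - lat) ** 2
w = 1.0 / (d2 + 1e-3)
grid_resid = (w @ resid) / w.sum(axis=1)        # 2) grid the smooth residuals
grid = grid_resid.reshape(glon.shape) + global_model(glon, glat)  # 3) restore
```

Gridding only the residuals is what allows heterogeneous campaigns to be levelled against a common satellite-derived reference, since each dataset's long-wavelength content is replaced by the model's.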
New Antarctic Gravity Anomaly Grid for Enhanced Geodetic and Geophysical Studies in Antarctica
NASA Technical Reports Server (NTRS)
Scheinert, M.; Ferraccioli, F.; Schwabe, J.; Bell, R.; Studinger, M.; Damaske, D.; Jokat, W.; Aleshkova, N.; Jordan, T.; Leitchenkov, G.;
2016-01-01
Gravity surveying is challenging in Antarctica because of its hostile environment and inaccessibility. Nevertheless, many ground-based, air-borne and ship-borne gravity campaigns have been completed by the geophysical and geodetic communities since the 1980s. We present the first modern Antarctic-wide gravity data compilation derived from 13 million data points covering an area of 10 million sq km, which corresponds to 73% coverage of the continent. The remove-compute-restore technique was applied for gridding, which facilitated leveling of the different gravity datasets with respect to an Earth Gravity Model derived from satellite data alone. The resulting free-air and Bouguer gravity anomaly grids of 10 km resolution are publicly available. These grids will enable new high-resolution combined Earth Gravity Models to be derived and represent a major step forward towards solving the geodetic polar data gap problem. They provide a new tool to investigate continental-scale lithospheric structure and geological evolution of Antarctica.
New Antarctic Gravity Anomaly Grid for Enhanced Geodetic and Geophysical Studies in Antarctica.
Scheinert, M; Ferraccioli, F; Schwabe, J; Bell, R; Studinger, M; Damaske, D; Jokat, W; Aleshkova, N; Jordan, T; Leitchenkov, G; Blankenship, D D; Damiani, T M; Young, D; Cochran, J R; Richter, T D
2016-01-28
Gravity surveying is challenging in Antarctica because of its hostile environment and inaccessibility. Nevertheless, many ground-based, airborne and shipborne gravity campaigns have been completed by the geophysical and geodetic communities since the 1980s. We present the first modern Antarctic-wide gravity data compilation derived from 13 million data points covering an area of 10 million km², which corresponds to 73% coverage of the continent. The remove-compute-restore technique was applied for gridding, which facilitated levelling of the different gravity datasets with respect to an Earth Gravity Model derived from satellite data alone. The resulting free-air and Bouguer gravity anomaly grids of 10 km resolution are publicly available. These grids will enable new high-resolution combined Earth Gravity Models to be derived and represent a major step forward towards solving the geodetic polar data gap problem. They provide a new tool to investigate continental-scale lithospheric structure and geological evolution of Antarctica.
Cyberinfrastructure for End-to-End Environmental Explorations
NASA Astrophysics Data System (ADS)
Merwade, V.; Kumar, S.; Song, C.; Zhao, L.; Govindaraju, R.; Niyogi, D.
2007-12-01
The design and implementation of a cyberinfrastructure for End-to-End Environmental Exploration (C4E4) is presented. The C4E4 framework addresses the need for an integrated data/computation platform for studying broad environmental impacts by combining heterogeneous data resources with state-of-the-art modeling and visualization tools. With Purdue being a TeraGrid Resource Provider, C4E4 builds on top of the Purdue TeraGrid data management system and Grid resources, and integrates them through a service-oriented workflow system. It allows researchers to construct environmental workflows for data discovery, access, transformation, modeling, and visualization. Using the C4E4 framework, we have implemented an end-to-end SWAT simulation and analysis workflow that connects our TeraGrid data and computation resources. It enables researchers to conduct comprehensive studies on the impact of land management practices in the St. Joseph watershed using data from various sources in hydrologic, atmospheric, agricultural, and other related disciplines.
Making the most of cloud storage - a toolkit for exploitation by WLCG experiments
NASA Astrophysics Data System (ADS)
Alvarez Ayllon, Alejandro; Arsuaga Rios, Maria; Bitzes, Georgios; Furano, Fabrizio; Keeble, Oliver; Manzi, Andrea
2017-10-01
Understanding how cloud storage can be effectively used, either standalone or in support of its associated compute, is now an important consideration for WLCG. We report on a suite of extensions to familiar tools targeted at enabling the integration of cloud object stores into traditional grid infrastructures and workflows. Notable updates include support for a number of object store flavours in FTS3, Davix and gfal2, including mitigations for lack of vector reads; the extension of Dynafed to operate as a bridge between grid and cloud domains; protocol translation in FTS3; the implementation of extensions to DPM (also implemented by the dCache project) to allow 3rd party transfers over HTTP. The result is a toolkit which facilitates data movement and access between grid and cloud infrastructures, broadening the range of workflows suitable for cloud. We report on deployment scenarios and prototype experience, explaining how, for example, an Amazon S3 or Azure allocation can be exploited by grid workflows.
Hyperviscosity for unstructured ALE meshes
NASA Astrophysics Data System (ADS)
Cook, Andrew W.; Ulitsky, Mark S.; Miller, Douglas S.
2013-01-01
An artificial viscosity, originally designed for Eulerian schemes, is adapted for use in arbitrary Lagrangian-Eulerian simulations. Changes to the Eulerian model (dubbed 'hyperviscosity') are discussed, which enable it to work within a Lagrangian framework. New features include a velocity-weighted grid scale and a generalised filtering procedure, applicable to either structured or unstructured grids. The model employs an artificial shear viscosity for treating small-scale vorticity and an artificial bulk viscosity for shock capturing. The model is based on the Navier-Stokes form of the viscous stress tensor, including the diagonal rate-of-expansion tensor. A second-order version of the model is presented, in which Laplacian operators act on the velocity divergence and the grid-weighted strain-rate magnitude to ensure that the velocity field remains smooth at the grid scale. Unlike sound-speed-based artificial viscosities, the hyperviscosity model is compatible with the low Mach number limit. The new model outperforms a commonly used Lagrangian artificial viscosity on a variety of test problems.
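To make the second-order idea above concrete, the 1D sketch below applies a Laplacian to the velocity divergence so that only grid-scale compressions are damped while smooth fields are untouched. The coefficient, the dx**4 scaling, and the compression switch are illustrative assumptions, not the published hyperviscosity model.

```python
import numpy as np

def artificial_bulk_viscosity(u, dx, c_beta=1.0):
    """1D sketch of a hyperviscosity-style artificial bulk viscosity.

    Follows the idea in the abstract: a Laplacian operator acts on the
    velocity divergence so only poorly resolved (grid-scale) compressions
    are damped. Coefficients and scalings are illustrative.
    """
    # Velocity divergence (in 1D, just du/dx), central differences.
    div_u = np.gradient(u, dx)
    # Laplacian of the divergence picks out grid-scale roughness.
    lap_div = np.gradient(np.gradient(div_u, dx), dx)
    # Artificial bulk viscosity: active only in compression (div u < 0),
    # scaled by dx**4 so it vanishes under grid refinement.
    return c_beta * dx**4 * np.abs(lap_div) * (div_u < 0.0)
```

A uniformly compressing (linear) velocity field produces zero artificial viscosity, whereas a kink at the grid scale is damped, which is the behavior the abstract describes.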
Moser, Richard P.; Hesse, Bradford W.; Shaikh, Abdul R.; Courtney, Paul; Morgan, Glen; Augustson, Erik; Kobrin, Sarah; Levin, Kerry; Helba, Cynthia; Garner, David; Dunn, Marsha; Coa, Kisha
2011-01-01
Scientists are taking advantage of the Internet and collaborative web technology to accelerate discovery in a massively connected, participative environment, a phenomenon referred to by some as Science 2.0. As a new way of doing science, this phenomenon has the potential to push science forward in a more efficient manner than was previously possible. The Grid-Enabled Measures (GEM) database has been conceptualized as an instantiation of Science 2.0 principles by the National Cancer Institute with two overarching goals: (1) Promote the use of standardized measures, which are tied to theoretically based constructs; and (2) Facilitate the ability to share harmonized data resulting from the use of standardized measures. This is done by creating an online venue connected to the Cancer Biomedical Informatics Grid (caBIG®) where a virtual community of researchers can collaborate and come to consensus on measures by rating, commenting on, and viewing metadata about the measures and associated constructs. This paper will describe the web 2.0 principles on which the GEM database is based, describe its functionality, and discuss some of the important issues involved with creating the GEM database, such as the role of mutually agreed-on ontologies (i.e., knowledge categories and the relationships among these categories) for data sharing. PMID:21521586
PLOT3D Export Tool for Tecplot
NASA Technical Reports Server (NTRS)
Alter, Stephen
2010-01-01
The PLOT3D export tool for Tecplot solves the problem that modified data could not be output for use by another computational science solver. The PLOT3D Exporter add-on enables engineers to use the most commonly available visualization tools for output of a standard format. The exportation of PLOT3D data from Tecplot has far-reaching effects because it allows for grid and solution manipulation within a graphical user interface (GUI) that is easily customized with macro language-based and user-developed GUIs. The add-on also enables the use of Tecplot as an interpolation tool for solution conversion between different grids of different types. This one add-on enhances the functionality of Tecplot so significantly that it can be incorporated into a general suite of tools for computational science applications as a 3D graphics engine for visualization of all data. Within the PLOT3D Export Add-on are several functions that enhance the operations and effectiveness of the add-on. Unlike Tecplot output functions, the PLOT3D Export Add-on enables the use of the zone selection dialog in Tecplot to choose which zones are to be written by offering three distinct options - output of active, inactive, or all zones (grid blocks). As the user modifies the zones to output with the zone selection dialog, the zones to be written are similarly updated. This enables the use of Tecplot to create multiple configurations of a geometry being analyzed. For example, if an aircraft is loaded with multiple deflections of flaps, by activating and deactivating different zones for a specific flap setting, new specific configurations of that aircraft can be easily generated by only writing out specific zones. Thus, if ten flap settings are loaded into Tecplot, the PLOT3D Export software can output ten different configurations, one for each flap setting.
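For reference, a multi-block ASCII PLOT3D grid file lists the number of blocks, then the dimensions of each block, then the x, y, and z coordinates block by block. The sketch below writes that layout for a selected subset of zones, loosely mirroring the add-on's active/inactive/all zone selection; the function name and exact formatting choices are assumptions, not the add-on's code.

```python
import numpy as np

def write_plot3d_ascii(path, blocks, active=None):
    """Write selected grid blocks to an ASCII multi-block PLOT3D file.

    blocks: list of (X, Y, Z) arrays, each of shape (ni, nj, nk)
    active: optional list of block indices to write (mirrors the add-on's
            active/inactive/all zone selection); defaults to all blocks.
    """
    chosen = blocks if active is None else [blocks[i] for i in active]
    with open(path, "w") as f:
        f.write(f"{len(chosen)}\n")
        for X, Y, Z in chosen:                  # block dimensions first
            ni, nj, nk = X.shape
            f.write(f"{ni} {nj} {nk}\n")
        for X, Y, Z in chosen:                  # then coordinates: all x,
            for arr in (X, Y, Z):               # then all y, then all z
                f.write(" ".join(f"{v:.6e}" for v in arr.ravel(order="F")))
                f.write("\n")
```

Writing only the `active` subset is what makes the flap-setting example possible: ten zone selections yield ten distinct configuration files from one loaded dataset.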
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zipperer, Adam; Aloise-Young, Patricia A.; Suryanarayanan, Siddharth
2013-11-01
Smart homes hold the potential for increasing energy efficiency, decreasing costs of energy use, decreasing the carbon footprint by including renewable resources, and transforming the role of the occupant. At the crux of the smart home is an efficient electric energy management system that is enabled by emerging technologies in the electric grid and consumer electronics. This article presents a discussion of the state-of-the-art in electricity management in smart homes, the various enabling technologies that will accelerate this concept, and topics around consumer behavior with respect to energy usage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zipperer, A.; Aloise-Young, P. A.; Suryanarayanan, S.
2013-08-01
Smart homes hold the potential for increasing energy efficiency, decreasing costs of energy use, decreasing the carbon footprint by including renewable resources, and transforming the role of the occupant. At the crux of the smart home is an efficient electric energy management system that is enabled by emerging technologies in the electricity grid and consumer electronics. This article presents a discussion of the state-of-the-art in electricity management in smart homes, the various enabling technologies that will accelerate this concept, and topics around consumer behavior with respect to energy usage.
Global Multi-Resolution Topography (GMRT) Synthesis - Recent Updates and Developments
NASA Astrophysics Data System (ADS)
Ferrini, V. L.; Morton, J. J.; Celnick, M.; McLain, K.; Nitsche, F. O.; Carbotte, S. M.; O'hara, S. H.
2017-12-01
The Global Multi-Resolution Topography (GMRT, http://gmrt.marine-geo.org) synthesis is a multi-resolution compilation of elevation data that is maintained in Mercator, South Polar, and North Polar Projections. GMRT consists of four independently curated elevation components: (1) quality-controlled multibeam data (~100 m res.), (2) contributed high-resolution gridded bathymetric data (0.5-200 m res.), (3) ocean basemap data (~500 m res.), and (4) variable-resolution land elevation data (to 10-30 m res. in places). Each component is managed and updated as new content becomes available, with two scheduled releases each year. The ocean basemap content for GMRT includes the International Bathymetric Chart of the Arctic Ocean (IBCAO), the International Bathymetric Chart of the Southern Ocean (IBCSO), and the GEBCO 2014 grid. Most curatorial effort for GMRT is focused on the swath bathymetry component, with an emphasis on data from the US Academic Research Fleet. As of July 2017, GMRT includes data processed and curated by the GMRT Team from 974 research cruises, covering over 29 million square kilometers (~8%) of the seafloor at 100 m resolution. The curated swath bathymetry data from GMRT is routinely contributed to international data synthesis efforts including GEBCO and IBCSO. Additional curatorial effort is associated with gridded data contributions from the international community and ensures that these data are well blended in the synthesis. Significant new additions to the gridded data component this year include the recently released data from the search for MH370 (Geoscience Australia) as well as a large high-resolution grid from the Gulf of Mexico derived from 3D seismic data (US Bureau of Ocean Energy Management). Recent developments in functionality include the deployment of a new Polar GMRT MapTool which enables users to export custom grids and map images in polar projection for their selected area of interest at the resolution of their choosing.
Available for both the south and north polar regions, grids can be exported from GMRT in a variety of formats including ASCII, GeoTIFF and NetCDF to support use in common mapping software applications such as ArcGIS, GMT, Matlab, and Python. New web services have also been developed to enable programmatic access to grids and images in north and south polar projections.
The Electrochemical Flow Capacitor: Capacitive Energy Storage in Flowable Media
NASA Astrophysics Data System (ADS)
Dennison, Christopher R.
Electrical energy storage (EES) has emerged as a necessary aspect of grid infrastructure to address the increasing problem of grid instability imposed by the large scale implementation of renewable energy sources (such as wind or solar) on the grid. Rapid energy recovery and storage is critically important to enable immediate and continuous utilization of these resources, and provides other benefits to grid operators and consumers as well. In past decades, there has been significant progress in the development of electrochemical EES technologies which has had an immense impact on the consumer and micro-electronics industries. However, these advances primarily address small-scale storage, and are often not practical at the grid-scale. A new energy storage concept called "the electrochemical flow capacitor (EFC)" has been developed at Drexel which has significant potential to be an attractive technology for grid-scale energy storage. This new concept exploits the characteristics of both supercapacitors and flow batteries, potentially enabling fast response rates with high power density, high efficiency, and long cycle lifetime, while decoupling energy storage from power output (i.e., scalable energy storage capacity). The unique aspect of this concept is the use of flowable carbon-electrolyte slurry ("flowable electrode") as the active material for capacitive energy storage. This dissertation work seeks to lay the scientific groundwork necessary to develop this new concept into a practical technology, and to test the overarching hypothesis that energy can be capacitively stored and recovered from a flowable media. In line with these goals, the objectives of this Ph.D. work are to: i) perform an exploratory investigation of the operating principles and demonstrate the technical viability of this new concept and ii) establish a scientific framework to assess the key linkages between slurry composition, flow cell design, operating conditions and system performance. 
To achieve these goals, a combined experimental and computational approach is undertaken. The technical viability of the technology is demonstrated, and in-depth studies are performed to understand the coupling between flow rate and slurry conductivity, and localized effects arising within the cell. The outlook of EFCs and other flowable electrode technologies is assessed, and opportunities for future work are discussed.
MIDG-Emerging grid technologies for multi-site preclinical molecular imaging research communities.
Lee, Jasper; Documet, Jorge; Liu, Brent; Park, Ryan; Tank, Archana; Huang, H K
2011-03-01
Molecular imaging is the visualization and identification of specific molecules in anatomy for insight into metabolic pathways, tissue consistency, and tracing of solute transport mechanisms. This paper presents the Molecular Imaging Data Grid (MIDG), which utilizes emerging grid technologies in preclinical molecular imaging to facilitate data sharing and discovery between preclinical molecular imaging facilities and their collaborating investigator institutions to expedite translational sciences research. Grid-enabled archiving, management, and distribution of animal-model imaging datasets help preclinical investigators to monitor, access and share their imaging data remotely, and promote preclinical imaging facilities to share published imaging datasets as resources for new investigators. The system architecture of the Molecular Imaging Data Grid is described in a four-layer diagram. A data model for preclinical molecular imaging datasets is also presented based on imaging modalities currently used in a molecular imaging center. The MIDG system components and connectivity are presented. Finally, the workflow steps for grid-based archiving, management, and retrieval of preclinical molecular imaging data are described. Initial performance tests of the Molecular Imaging Data Grid system have been conducted at the USC IPILab using dedicated VMware servers. System connectivity, evaluated datasets, and preliminary results are presented. The results show the system's feasibility and limitations, and suggest directions for future research. Translational and interdisciplinary research in medicine is increasingly interested in cellular and molecular biology activity at the preclinical level, utilizing molecular imaging methods on animal models. The task of integrated archiving, management, and distribution of these preclinical molecular imaging datasets at preclinical molecular imaging facilities is challenging due to disparate imaging systems and multiple off-site investigators.
A Molecular Imaging Data Grid design, implementation, and initial evaluation are presented to demonstrate a secure and novel data grid solution for sharing preclinical molecular imaging data across the wide-area network (WAN).
NASA Technical Reports Server (NTRS)
Park, Michael A.; Krakos, Joshua A.; Michal, Todd; Loseille, Adrien; Alonso, Juan J.
2016-01-01
Unstructured grid adaptation is a powerful tool to control discretization error for Computational Fluid Dynamics (CFD). It has enabled key increases in the accuracy, automation, and capacity of some fluid simulation applications. Slotnick et al. provide a number of case studies in the CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences to illustrate the current state of CFD capability and capacity. The authors forecast the potential impact of emerging High Performance Computing (HPC) environments in the year 2030 and identify that mesh generation and adaptivity continue to be significant bottlenecks in the CFD workflow. These bottlenecks may persist because very little government investment has been targeted in these areas. To motivate investment, the impacts of improved grid adaptation technologies are identified. The CFD Vision 2030 Study roadmap and anticipated capabilities in complementary disciplines are quoted to provide context for the progress made in grid adaptation in the past fifteen years, current status, and a forecast for the next fifteen years with recommended investments. These investments are specific to mesh adaptation and impact other aspects of the CFD process. Finally, a strategy is identified to diffuse grid adaptation technology into production CFD workflows.
A practical approach to virtualization in HEP
NASA Astrophysics Data System (ADS)
Buncic, P.; Aguado Sánchez, C.; Blomer, J.; Harutyunyan, A.; Mudrinic, M.
2011-01-01
In the attempt to solve the problem of processing data coming from LHC experiments at CERN at a rate of 15 PB per year, for almost a decade the High Energy Physics (HEP) community has focused its efforts on the development of the Worldwide LHC Computing Grid. This generated large interest and expectations, promising to revolutionize computing. Meanwhile, having initially taken part in the Grid standardization process, industry has moved in a different direction and started promoting the Cloud Computing paradigm, which aims to solve problems on a similar scale and in an equally seamless way as was expected in the idealized Grid approach. A key enabling technology behind Cloud computing is server virtualization. In early 2008, an R&D project was established in the PH-SFT group at CERN to investigate how virtualization technology could be used to improve and simplify the daily interaction of physicists with experiment software frameworks and the Grid infrastructure. In this article we shall first briefly compare the Grid and Cloud computing paradigms and then summarize the results of the R&D activity, pointing out where and how virtualization technology could be effectively used in our field in order to maximize practical benefits whilst avoiding potential pitfalls.
NASA Astrophysics Data System (ADS)
Toosi, Siavash; Larsson, Johan
2017-11-01
The accuracy of an LES depends directly on the accuracy of the resolved part of the turbulence. The continuing increase in computational power enables the application of LES to increasingly complex flow problems for which the LES community lacks the experience of knowing what the "optimal" or even an "acceptable" grid (or equivalently filter-width distribution) is. The goal of this work is to introduce a systematic approach to finding the "optimal" grid/filter-width distribution and their "optimal" anisotropy. The method is tested first on the turbulent channel flow, mainly to see if it is able to predict the right anisotropy of the filter/grid, and then on the more complicated case of flow over a backward-facing step, to test its ability to predict the right distribution and anisotropy of the filter/grid simultaneously, hence leading to a converged solution. This work has been supported by the Naval Air Warfare Center Aircraft Division at Pax River, MD, under contract N00421132M021. Computing time has been provided by the University of Maryland supercomputing resources (http://hpcc.umd.edu).
Kanematsu, Nobuyuki
2011-04-01
This work addresses computing techniques for dose calculations in treatment planning with proton and ion beams, based on an efficient kernel-convolution method referred to as grid-dose spreading (GDS) and an accurate heterogeneity-correction method referred to as Gaussian beam splitting. The original GDS algorithm suffered from distortion of the dose distribution for beams tilted with respect to the dose-grid axes. Use of intermediate grids normal to the beam field has solved the beam-tilting distortion. The interplay between the arrangement of beams and grids was found to be another intrinsic source of artifact. Inclusion of rectangular-kernel convolution in beam transport, to share the beam contribution among the nearest grids in a regulatory manner, has solved the interplay problem. This algorithmic framework was applied to a tilted proton pencil beam and a broad carbon-ion beam. In these cases, while the elementary pencil beams individually split into several tens, the calculation time increased only by several times with the GDS algorithm. The GDS and beam-splitting methods will complementarily enable accurate and efficient dose calculations for radiotherapy with protons and ions. Copyright © 2010 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
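The rectangular-kernel sharing idea can be pictured in 1D: convolving a point deposit with a kernel one grid spacing wide reduces to area-weighted sharing between the two nearest grid nodes, so the deposited dose no longer depends on where the beam happens to fall relative to the grid. The sketch below is an illustrative simplification, not the published GDS implementation.

```python
import numpy as np

def deposit_dose_1d(positions, doses, grid_min, dx, n_bins):
    """Share pencil-beam dose among the nearest dose grids (1D sketch).

    Depositing each beam's contribution on the two nearest nodes with
    area weights (equivalent to a rectangular-kernel convolution one
    grid wide) suppresses the beam/grid interplay artifact that
    nearest-node deposition would produce.
    """
    grid = np.zeros(n_bins)
    for x, d in zip(positions, doses):
        s = (x - grid_min) / dx          # fractional grid coordinate
        i = int(np.floor(s))
        frac = s - i
        if 0 <= i < n_bins:
            grid[i] += d * (1.0 - frac)  # share with the left node
        if 0 <= i + 1 < n_bins:
            grid[i + 1] += d * frac      # and with the right node
    return grid
```

Shifting all beam positions by a fraction of dx now shifts the deposited profile smoothly instead of producing the staircase artifacts the abstract calls the interplay problem.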
NASA Astrophysics Data System (ADS)
McCurdy, C. William; Lucchese, Robert L.; Greenman, Loren
2017-04-01
The complex Kohn variational method, which represents the continuum wave function in each channel using a combination of Gaussians and Bessel or Coulomb functions, has been successful in numerous applications to electron-polyatomic molecule scattering and molecular photoionization. The hybrid basis representation limits it to relatively low energies (<50 eV), requires an approximation to exchange matrix elements involving continuum functions, and hampers its coupling to modern electronic structure codes for the description of correlated target states. We describe a successful implementation of the method using completely adaptive overset grids to describe continuum functions, in which spherical subgrids are placed on every atomic center to complement a spherical master grid that describes the behavior at large distances. An accurate method for applying the free-particle Green's function on the grid eliminates the need to operate explicitly with the kinetic energy, enabling a rapidly convergent Arnoldi algorithm for solving linear equations on the grid, and no approximations to exchange operators are made. Results for electron scattering from several polyatomic molecules will be presented. Army Research Office, MURI, WN911NF-14-1-0383 and U. S. DOE DE-SC0012198 (at Texas A&M).
Wire chamber radiation detector with discharge control
Perez-Mendez, Victor; Mulera, Terrence A.
1984-01-01
A wire chamber radiation detector (11) has spaced apart parallel electrodes (16) and grids (17, 18, 19) defining an ignition region (21) in which charged particles (12) or other ionizing radiations initiate brief localized avalanche discharges (93) and defining an adjacent memory region (22) in which sustained glow discharges (94) are initiated by the primary discharges (93). Conductors (29, 32) of the grids (18, 19) at each side of the memory section (22) extend in orthogonal directions enabling readout of the X-Y coordinates of locations at which charged particles (12) were detected by sequentially transmitting pulses to the conductors (29) of one grid (18) while detecting transmissions of the pulses to the orthogonal conductors (36) of the other grid (19) through glow discharges (94). One of the grids (19) bounding the memory region (22) is defined by an array of conductive elements (32) each of which is connected to the associated readout conductor (36) through a separate resistance (37). The wire chamber (11) avoids ambiguities and imprecisions in the readout of coordinates when large numbers of simultaneous or near simultaneous charged particles (12) have been detected. Down time between detection periods and the generation of radio frequency noise are also reduced.
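The sequential-pulse readout described in the patent can be mimicked in a few lines: pulse each conductor of one grid in turn and note which orthogonal conductors pass the pulse through a sustained glow discharge, so every (X, Y) pair is addressed individually and simultaneous hits stay unambiguous. The `glow` set and the function below are hypothetical stand-ins for the hardware.

```python
def read_out_coordinates(glow, n_x, n_y):
    """Sketch of the sequential-pulse readout of the memory region.

    glow: set of (x, y) conductor pairs bridged by a glow discharge.
    Returns the list of detected (x, y) hit coordinates in scan order.
    """
    hits = []
    for x in range(n_x):          # pulse X conductors one by one
        for y in range(n_y):      # watch every orthogonal Y conductor
            if (x, y) in glow:    # pulse crosses via the glow discharge
                hits.append((x, y))
    return hits
```

Because each crossing is interrogated separately, two simultaneous particles at (1, 2) and (3, 0) cannot be confused with ghost hits at (1, 0) and (3, 2), which is the ambiguity a plain coincidence readout would suffer.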
NASA Technical Reports Server (NTRS)
Ferlemann, Paul G.; Gollan, Rowan J.
2010-01-01
Computational design and analysis of three-dimensional hypersonic inlets with shape transition has been a significant challenge due to the complex geometry and grid required for three-dimensional viscous flow calculations. Currently, the design process utilizes an inviscid design tool to produce initial inlet shapes by streamline tracing through an axisymmetric compression field. However, the shape is defined by a large number of points rather than a continuous surface and lacks important features such as blunt leading edges. Therefore, a design system has been developed to parametrically construct true CAD geometry and link the topology of a structured grid to the geometry. The Adaptive Modeling Language (AML) constitutes the underlying framework that is used to build the geometry and grid topology. Parameterization of the CAD geometry allows the inlet shapes produced by the inviscid design tool to be generated, but also allows a great deal of flexibility to modify the shape to account for three-dimensional viscous effects. By linking the grid topology to the parametric geometry, the GridPro grid generation software can be used efficiently to produce a smooth hexahedral multiblock grid. To demonstrate the new capability, a matrix of inlets was designed by varying four geometry parameters in the inviscid design tool. The goals of the initial design study were to explore inviscid design tool geometry variations with a three-dimensional analysis approach, demonstrate a solution rate which would enable the use of high-fidelity viscous three-dimensional CFD in future design efforts, process the results for important performance parameters, and perform a sample optimization.
Astro-WISE: Chaining to the Universe
NASA Astrophysics Data System (ADS)
Valentijn, E. A.; McFarland, J. P.; Snigula, J.; Begeman, K. G.; Boxhoorn, D. R.; Rengelink, R.; Helmich, E.; Heraudeau, P.; Verdoes Kleijn, G.; Vermeij, R.; Vriend, W.-J.; Tempelaar, M. J.; Deul, E.; Kuijken, K.; Capaccioli, M.; Silvotti, R.; Bender, R.; Neeser, M.; Saglia, R.; Bertin, E.; Mellier, Y.
2007-10-01
The recent explosion of recorded digital data and its processed derivatives threatens to overwhelm researchers when analysing their experimental data or looking up data items in archives and file systems. While current hardware developments allow the acquisition, processing and storage of hundreds of terabytes of data at the cost of a modern sports car, the software systems to handle these data are lagging behind. This problem is very general and is well recognized by various scientific communities; several large projects have been initiated, e.g., DATAGRID/EGEE {http://www.eu-egee.org/} federates compute and storage power over the high-energy physics community, while the international astronomical community is building an Internet-geared Virtual Observatory {http://www.euro-vo.org/pub/} (Padovani 2006) connecting archival data. These large projects either focus on a specific distribution aspect or aim to connect many sub-communities and have a relatively long trajectory for setting standards and a common layer. Here, we report first light of a very different solution (Valentijn & Kuijken 2004) to the problem, initiated by a smaller astronomical IT community. It provides an abstract scientific information layer which integrates distributed scientific analysis with distributed processing and federated archiving and publishing. By designing new abstractions and mixing in old ones, a Science Information System with fully scalable cornerstones has been achieved, transforming data systems into knowledge systems. This breakthrough is facilitated by the full end-to-end linking of all dependent data items, which allows full backward chaining from the observer/researcher to the experiment. Key is the notion that information is intrinsic in nature and thus is the data acquired by a scientific experiment. The new abstraction is that software systems guide the user to that intrinsic information by forcing full backward and forward chaining in the data modelling.
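The "full backward chaining" of dependent data items can be pictured as a walk over a dependency graph from a processed product back to the raw experiment. A toy sketch, with a plain dict standing in for Astro-WISE's federated database of linked data items:

```python
def lineage(item, parents):
    """Full backward chaining: walk every dependency of a processed data
    item back towards the raw experimental data.

    parents: dict mapping each item to the items it was derived from.
    Returns all ancestors of `item` in discovery order.
    """
    seen, stack = [], [item]
    while stack:
        for parent in parents.get(stack.pop(), ()):
            if parent not in seen:
                seen.append(parent)
                stack.append(parent)
    return seen
```

Forward chaining is the same walk on the inverted graph, which is how such a system can also tell which downstream products must be reprocessed when a raw item changes.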
Europa Small Lander Design Concepts
NASA Astrophysics Data System (ADS)
Zimmerman, W. F.
2005-12-01
Title: Europa Small Lander Design Concepts Authors: Wayne F. Zimmerman, James Shirley, Robert Carlson, Tom Rivellini, Mike Evans One of the primary goals of NASA's Outer Planets Program is to revisit the Jovian system. A new Europa Geophysical Explorer (EGE) Mission has been proposed and is under evaluation. There is in addition strong community interest in a surface science mission to Europa. A Europa Lander might be delivered to the Jovian system with the EGE orbiter. A Europa Astrobiology Lander (EAL) Mission has also been proposed; this would launch sometime after 2020. The primary science objectives for either of these would most likely include: Surface imaging (both microscopic and near-field), characterization of surface mechanical properties (temperature, hardness), assessment of surface and near-surface organic and inorganic chemistry (volatiles, mineralogy, and compounds), characterization of the radiation environment (total dose and particles), characterization of the planetary seismicity, and the measurement of Europa's magnetic field. The biggest challenges associated with getting to the surface and surviving to perform science investigations revolve around the difficulty of landing on an airless body, the ubiquitous extreme topography, the harsh radiation environment, and the extreme cold. This presentation reviews some of the recent design work on drop-off probes, also called "hard landers". Hard lander designs have been developed for a range of science payload delivery systems spanning small impactors to multiple science pods tethered to a central hub. In addition to developing designs for these various payload delivery systems, significant work has been done in weighing the relative merits of standard power systems (i.e., batteries) against radioisotope power systems. A summary of the power option accommodation benefits and issues will be presented.
This work was performed at the Jet Propulsion Laboratory, California Institute of Technology, under a contract from NASA.
Imbriaco, Massimo; Nappi, Carmela; Puglia, Marta; De Giorgi, Marco; Dell'Aversana, Serena; Cuocolo, Renato; Ponsiglione, Andrea; De Giorgi, Igino; Polito, Maria Vincenza; Klain, Michele; Piscione, Federico; Pace, Leonardo; Cuocolo, Alberto
2017-10-26
To compare cardiac magnetic resonance (CMR) qualitative and quantitative analysis methods for the noninvasive assessment of myocardial inflammation in patients with suspected acute myocarditis (AM). A total of 61 patients with suspected AM underwent coronary angiography and CMR. Qualitative analysis was performed applying Lake-Louise Criteria (LLC), followed by quantitative analysis based on the evaluation of edema ratio (ER) and global relative enhancement (RE). Diagnostic performance was assessed for each method by measuring the area under the curves (AUC) of the receiver operating characteristic analyses. The final diagnosis of AM was based on symptoms and signs suggestive of cardiac disease, evidence of myocardial injury as defined by electrocardiogram changes, elevated troponin I, exclusion of coronary artery disease by coronary angiography, and clinical and echocardiographic follow-up at 3 months after admission to the chest pain unit. In all patients, coronary angiography did not show significant coronary artery stenosis. Troponin I levels and creatine kinase were higher in patients with AM compared to those without (both P < .001). There were no significant differences among LLC, T2-weighted short inversion time inversion recovery (STIR) sequences, early (EGE), and late (LGE) gadolinium-enhancement sequences for diagnosis of AM. The AUC for qualitative (T2-weighted STIR 0.92, EGE 0.87 and LGE 0.88) and quantitative (ER 0.89 and global RE 0.80) analyses were also similar. Qualitative and quantitative CMR analysis methods show similar diagnostic accuracy for the diagnosis of AM. These findings suggest that a simplified approach using a shortened CMR protocol including only T2-weighted STIR sequences might be useful to rule out AM in patients with acute coronary syndrome and normal coronary angiography.
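The AUC values compared above have a simple rank interpretation: the area under the ROC curve equals the probability that a randomly chosen patient with AM receives a higher score than a randomly chosen patient without (the Mann-Whitney statistic, with ties counting half). A short, generic sketch of that computation, unrelated to any specific CMR software:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic.

    scores_pos: scores of truly positive (diseased) cases
    scores_neg: scores of truly negative cases
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0      # positive case ranked higher: a "win"
            elif p == n:
                wins += 0.5      # ties count half
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC near 0.9, as reported for the T2-weighted STIR and ER analyses, means roughly nine of ten such random pairs are ranked correctly; 0.5 would be chance.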
MrGrid: A Portable Grid Based Molecular Replacement Pipeline
Reboul, Cyril F.; Androulakis, Steve G.; Phan, Jennifer M. N.; Whisstock, James C.; Goscinski, Wojtek J.; Abramson, David; Buckle, Ashley M.
2010-01-01
Background The crystallographic determination of protein structures can be computationally demanding, and difficult cases can benefit from user-friendly interfaces to high-performance computing resources. Molecular replacement (MR) is a popular protein crystallographic technique that exploits the structural similarity between proteins that share some sequence similarity. However, the need to trial permutations of search models, space group symmetries and other parameters makes MR time- and labour-intensive. Fortunately, MR calculations are embarrassingly parallel and thus ideally suited to distributed computing. In order to address this problem we have developed MrGrid, web-based software that allows multiple MR calculations to be executed across a grid of networked computers, allowing high-throughput MR. Methodology/Principal Findings MrGrid is a portable web-based application written in Java/JSP and Ruby, taking advantage of Apple Xgrid technology. Designed to interface with a user-defined Xgrid resource, the package manages the distribution of multiple MR runs to the available nodes on the Xgrid. We evaluated MrGrid using 10 different protein test cases on a network of 13 computers, and achieved an average speed-up factor of 5.69. Conclusions MrGrid enables the user to retrieve and manage the results of tens to hundreds of MR calculations quickly and via a single web interface, as well as broadening the range of strategies that can be attempted. This high-throughput approach allows parameter sweeps to be performed in parallel, improving the chances of MR success. PMID:20386612
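Because the individual MR runs are independent, the dispatch pattern reduces to mapping a scoring function over the cross-product of search models and space groups. A minimal sketch of that pattern (`run_mr`, the model names, and the scoring rule are hypothetical stand-ins for a real MR engine such as the one MrGrid wraps; a thread pool stands in for Xgrid nodes):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def run_mr(model, space_group):
    """Hypothetical stand-in for one molecular replacement run."""
    score = sum(map(ord, model + space_group)) % 100  # deterministic dummy score
    return {"model": model, "space_group": space_group, "score": score}

models = ["templateA", "templateB"]
space_groups = ["P212121", "C2", "P1"]

# Fan the 2 x 3 parameter sweep out across workers and collect all results
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda args: run_mr(*args), product(models, space_groups)))

best = max(results, key=lambda r: r["score"])
```

On a real deployment each `run_mr` call would be a job submitted to a grid node; the collection and ranking step is what a single web interface then exposes to the user.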
On the fly quantum dynamics of electronic and nuclear wave packets
NASA Astrophysics Data System (ADS)
Komarova, Ksenia G.; Remacle, F.; Levine, R. D.
2018-05-01
The quantum dynamics of multielectronic states on a grid is described in a manner motivated by on-the-fly classical trajectory computations. Non-stationary electronic states are prepared by a few-cycle laser pulse. The nuclei respond and begin moving. We solve the time-dependent Schrödinger equation for the electronic and nuclear dynamics for excitation from the ground electronic state. A satisfactory accuracy is possible using a localized description on a discrete grid. This enables on-the-fly computation of both the nuclear and electronic dynamics, including non-adiabatic couplings. Attosecond dynamics in LiH is used as an example.
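Grid-based wave packet propagation can be sketched with the standard split-operator method for the time-dependent Schrödinger equation on a discrete grid. The example below is an illustrative 1-D harmonic-oscillator stand-in (atomic units, unit mass), not the LiH model of the paper:

```python
import numpy as np

# 1-D spatial grid and its conjugate momentum grid
N = 256
x = np.linspace(-10, 10, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dt = 0.01

V = 0.5 * x**2                      # harmonic potential (illustrative)
psi = np.exp(-(x - 1.0) ** 2)       # displaced Gaussian wave packet
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Split-operator factors: half-step in V, full step in kinetic energy k^2/2
expV = np.exp(-0.5j * V * dt)
expK = np.exp(-0.5j * k**2 * dt)
for _ in range(100):
    psi = expV * np.fft.ifft(expK * np.fft.fft(expV * psi))

norm = np.sum(np.abs(psi) ** 2) * dx  # propagation is unitary, so norm stays 1
```

The FFT switches between position and momentum representations each step, which is what makes the localized grid description efficient.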
Leigh, J.; Renambot, L.; Johnson, Aaron H.; Jeong, B.; Jagodic, R.; Schwarz, N.; Svistula, D.; Singh, R.; Aguilera, J.; Wang, X.; Vishwanath, V.; Lopez, B.; Sandin, D.; Peterka, T.; Girado, J.; Kooima, R.; Ge, J.; Long, L.; Verlo, A.; DeFanti, T.A.; Brown, M.; Cox, D.; Patterson, R.; Dorn, P.; Wefel, P.; Levy, S.; Talandis, J.; Reitzer, J.; Prudhomme, T.; Coffin, T.; Davis, B.; Wielinga, P.; Stolk, B.; Bum, Koo G.; Kim, J.; Han, S.; Corrie, B.; Zimmerman, T.; Boulanger, P.; Garcia, M.
2006-01-01
The research outlined in this paper marks an initial global cooperative effort between visualization and collaboration researchers to build a persistent virtual visualization facility linked by ultra-high-speed optical networks. The goal is to enable the comprehensive and synergistic research and development of the necessary hardware, software and interaction techniques to realize the next generation of end-user tools for scientists to collaborate on the global Lambda Grid. This paper outlines some of the visualization research projects that were demonstrated at the iGrid 2005 workshop in San Diego, California.
NASA Astrophysics Data System (ADS)
Al-Taie, A.; Graber, L.; Pamidi, S. V.
2017-12-01
Opportunities for applying high-temperature superconducting (HTS) DC power cables to long-distance power transmission, to increasing the reliability of the electric power grid, and to enabling easier integration of distributed renewable sources into the grid are discussed. Gaps in technology development are identified in superconducting cable designs, cryogenic systems, and power electronic devices. Various technology components in multi-terminal high-voltage DC power transmission networks and the available options are discussed, along with the potential of ongoing efforts to develop superconducting DC transmission systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Todd, Annika; Cappers, Peter; Goldman, Charles
2013-05-01
The U.S. Department of Energy’s (DOE’s) Smart Grid Investment Grant (SGIG) program is working with a subset of the 99 SGIG projects undertaking Consumer Behavior Studies (CBS), which examine the response of mass market consumers (i.e., residential and small commercial customers) to time-varying electricity prices (referred to herein as time-based rate programs) in conjunction with the deployment of advanced metering infrastructure (AMI) and associated technologies. The effort presents an opportunity to advance the electric industry’s understanding of consumer behavior.
Resilient Core Networks for Energy Distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuntze, Nicolai; Rudolph, Carsten; Leivesley, Sally
2014-07-28
Substations and their control are crucial for the availability of electricity in today’s energy distribution. Advanced energy grids with Distributed Energy Resources require higher complexity in substations, distributed functionality, and communication between devices inside substations and between substations. Also, substations include more and more intelligent devices and ICT-based systems. All these devices are connected to other systems by different types of communication links or are situated in uncontrolled environments. Therefore, the risk of ICT-based attacks on energy grids is growing. Consequently, security measures to counter these risks need to be an intrinsic part of energy grids. This paper introduces the concept of a Resilient Core Network to interconnect substations. This core network provides essential security features, enables fast detection of attacks, and allows for a distributed and autonomous mitigation of ICT-based risks.
Blockchain: A Path to Grid Modernization and Cyber Resiliency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mylrea, Michael E.; Gourisetti, Sri Nikhil G.
Blockchain may help solve several complex problems related to the integrity and trustworthiness of rapid, distributed, complex energy transactions and data exchanges. In a move towards resilience, blockchain commoditizes trust and enables automated smart contracts to support auditable multiparty transactions based on predefined rules between distributed energy providers and customers. Blockchain-based smart contracts also help remove the need to interact with third parties, facilitating the adoption and monetization of distributed energy transactions and exchanges, both energy flows as well as financial transactions. This may help reduce transactive energy costs and increase the security and sustainability of distributed energy resource (DER) integration, helping to remove barriers to a more decentralized and resilient power grid. This paper explores the application of blockchain and smart contracts to improve smart grid cyber resiliency and secure transactive energy applications.
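The core ledger idea behind such auditable transactions is an append-only record in which each entry commits to its predecessor's hash, so tampering anywhere invalidates every later entry. A toy sketch of that property (field names and the two sample transactions are hypothetical; this is not a real blockchain or smart-contract platform):

```python
import hashlib
import json

def entry_hash(body):
    # Canonical JSON (sorted keys) so the hash is independent of dict ordering
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append(ledger, seller, buyer, kwh, price):
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"seller": seller, "buyer": buyer, "kwh": kwh,
            "price": price, "prev": prev}
    ledger.append({**body, "hash": entry_hash(body)})

def verify(ledger):
    """Walk the chain; any edited field or broken link fails verification."""
    prev = "0" * 64
    for e in ledger:
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["prev"] != prev or entry_hash(body) != e["hash"]:
            return False
        prev = e["hash"]
    return True

ledger = []
append(ledger, "rooftop_pv_17", "home_42", 3.5, 0.12)
append(ledger, "battery_03", "home_42", 1.0, 0.15)
```

Changing any recorded quantity after the fact (say, the kWh of the first entry) makes `verify` return `False`, which is the integrity guarantee the abstract refers to; real platforms add consensus and smart-contract execution on top.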
Smart signal processing for an evolving electric grid
NASA Astrophysics Data System (ADS)
Silva, Leandro Rodrigues Manso; Duque, Carlos Augusto; Ribeiro, Paulo F.
2015-12-01
Electric grids are interconnected complex systems consisting of generation, transmission, distribution, and active loads, recently called prosumers as they both produce and consume electric energy. Additionally, these systems encompass a vast array of equipment such as machines, power transformers, capacitor banks, power electronic devices, motors, etc., that is continuously evolving in its demand characteristics. Given these conditions, signal processing is becoming an essential assessment tool that enables the engineer and researcher to understand, plan, design, and operate the complex and smart electric grid of the future. This paper focuses on recent developments in signal processing applied to power system analysis in terms of characterization and diagnostics. The following techniques are reviewed and their characteristics and applications discussed: active power system monitoring, sparse representation of power system signals, real-time resampling, and time-frequency analysis (i.e., wavelets) applied to power fluctuations.
Borrego springs microgrid demonstration project
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
SDG&E has been developing and implementing the foundation for its Smart Grid platform for three decades – beginning with its innovations in automation and control technologies in the 1980s and 1990s, through its most recent Smart Meter deployment and re-engineering of operational processes enabled by new software applications in its OpEx 20/20 (Operational Excellence with a 20/20 Vision) program. SDG&E’s Smart Grid deployment efforts have been consistently acknowledged by industry observers. SDG&E’s commitment and progress have been recognized by IDC Energy Insights and Intelligent Utility Magazine as the nation’s “Most Intelligent Utility” for three consecutive years, winning this award each year since its inception. SDG&E also received the “Top Ten Utility” award for excellence in Smart Grid development from GreenTech Media.
An Integrated Software Package to Enable Predictive Simulation Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Fitzhenry, Erin B.; Jin, Shuangshuang
The power grid is increasing in complexity due to the deployment of smart grid technologies. Such technologies vastly increase the size and complexity of power grid systems for simulation and modeling. This increasing complexity necessitates not only the use of high-performance computing (HPC) techniques, but also a smooth, well-integrated interplay between HPC applications. This paper presents a new software package that integrates HPC applications and a web-based visualization tool on top of a middleware framework. This framework supports the data communication between the different applications. Case studies with a large power system demonstrate the predictive capability brought by the integrated software package, as well as the better situational awareness provided by the web-based visualization tool in a live mode. Test results validate the effectiveness and usability of the integrated software package.
NASA Astrophysics Data System (ADS)
Rahman, Imran; Vasant, Pandian M.; Singh, Balbir Singh Mahinder; Abdullah-Al-Wadud, M.
2014-10-01
Recent research on the use of green technologies to reduce pollution and increase the penetration of renewable energy sources in the transportation sector is gaining popularity. The development of a smart grid environment focusing on PHEVs may also alleviate some prevailing grid problems by enabling implementation of the Vehicle-to-Grid (V2G) concept. Intelligent energy management is an important issue that has already drawn much attention from researchers. Most of this work requires the formulation of mathematical models that extensively use computational intelligence-based optimization techniques to solve many technical problems. Higher penetration of PHEVs requires adequate charging infrastructure as well as smart charging strategies. We used the Gravitational Search Algorithm (GSA) to intelligently allocate energy to PHEVs, considering constraints such as energy price, remaining battery capacity, and remaining charging time.
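The GSA update loop – fitness-derived masses, a decaying gravitational constant, pairwise attractive forces, then velocity and position updates – can be sketched as follows. The cost function and all constants here are illustrative stand-ins, not the paper's PHEV allocation model:

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(X):
    # Toy cost: squared deviation of each agent's allocation vector from a
    # hypothetical ideal allocation (stand-in for price/battery/time terms)
    return np.sum((X - 3.0) ** 2, axis=1)

n_agents, dim, iters, G0 = 20, 4, 60, 100.0
X = rng.uniform(0.0, 10.0, (n_agents, dim))   # candidate allocations
V = np.zeros_like(X)
initial_best = cost(X).min()
best = initial_best

for t in range(iters):
    f = cost(X)
    best = min(best, f.min())
    worst, fbest = f.max(), f.min()
    m = (worst - f) / (worst - fbest + 1e-12)  # heavier mass = fitter agent
    M = m / m.sum()
    G = G0 * np.exp(-8.0 * t / iters)          # gravity decays over time
    F = np.zeros_like(X)
    for i in range(n_agents):
        for j in range(n_agents):
            if i != j:
                d = X[j] - X[i]
                F[i] += rng.random() * G * M[j] * d / (np.linalg.norm(d) + 1e-9)
    V = rng.random(X.shape) * V + F            # acceleration equals F here,
    X = X + V                                  # since the passive mass cancels

best = min(best, cost(X).min())
```

Agents are pulled toward heavier (fitter) agents while the shrinking gravitational constant shifts the search from exploration to exploitation.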
New trends in the virtualization of hospitals--tools for global e-Health.
Graschew, Georgi; Roelofs, Theo A; Rakowsky, Stefan; Schlag, Peter M; Heinzlreiter, Paul; Kranzlmüller, Dieter; Volkert, Jens
2006-01-01
The development of virtual hospitals and digital medicine helps to bridge the digital divide between different regions of the world and enables equal access to high-level medical care. Pre-operative planning, intra-operative navigation and minimally-invasive surgery require a digital and virtual environment supporting the perception of the physician. As data and computing resources in a virtual hospital are distributed over many sites the concept of the Grid should be integrated with other communication networks and platforms. A promising approach is the implementation of service-oriented architectures for an invisible grid, hiding complexity for both application developers and end-users. Examples of promising medical applications of Grid technology are the real-time 3D-visualization and manipulation of patient data for individualized treatment planning and the creation of distributed intelligent databases of medical images.
Graphic Representations as Tools for Decision Making.
ERIC Educational Resources Information Center
Howard, Judith
2001-01-01
Focuses on the use of graphic representations to enable students to improve their decision making skills in the social studies. Explores three visual aids used in assisting students with decision making: (1) the force field; (2) the decision tree; and (3) the decision making grid. (CMK)
Harris, Magdalena; Rhodes, Tim
2018-06-01
A life history approach enables study of how risk or health protection is shaped by critical transitions and turning points in a life trajectory and in the context of social environment and time. We employed visual and narrative life history methods with people who inject drugs to explore how hepatitis C protection was enabled and maintained over the life course. We overview our methodological approach, with a focus on the ethics in practice of using life history timelines and life-grids with 37 participants. The life-grid evoked mixed emotions for participants: pleasure in receiving a personalized visual history and pain elicited by its contents. A minority managed this pain with additional heroin use. The methodological benefits of using life history methods and visual aids have been extensively reported. Crucial to consider are the ethical implications of this process, particularly for people who lack socially ascribed markers of a "successful life."
Collaboration tools and techniques for large model datasets
Signell, R.P.; Carniel, S.; Chiggiato, J.; Janekovic, I.; Pullen, J.; Sherwood, C.R.
2008-01-01
In MREA and many other marine applications, it is common to have multiple models running on different grids, run by different institutions. Techniques and tools are described for low-bandwidth delivery of data from large multidimensional datasets, such as those from meteorological and oceanographic models, directly into generic analysis and visualization tools. Output is stored using the NetCDF CF Metadata Conventions, and then delivered to collaborators over the web via OPeNDAP. OPeNDAP datasets served by different institutions are then organized via THREDDS catalogs. Tools and procedures then enable scientists to explore data on the original model grids using tools they are familiar with. The approach is also low-bandwidth, enabling users to extract just the data they require – an important feature for access from ships or remote areas. The entire implementation is simple enough to be handled by modelers working with their webmasters – no advanced programming support is necessary.
Mapping Atmospheric Moisture Climatologies across the Conterminous United States
Daly, Christopher; Smith, Joseph I.; Olson, Keith V.
2015-01-01
Spatial climate datasets of 1981–2010 long-term mean monthly average dew point and minimum and maximum vapor pressure deficit were developed for the conterminous United States at 30-arcsec (~800m) resolution. Interpolation of long-term averages (twelve monthly values per variable) was performed using PRISM (Parameter-elevation Relationships on Independent Slopes Model). Surface stations available for analysis numbered only 4,000 for dew point and 3,500 for vapor pressure deficit, compared to 16,000 for previously-developed grids of 1981–2010 long-term mean monthly minimum and maximum temperature. Therefore, a form of Climatologically-Aided Interpolation (CAI) was used, in which the 1981–2010 temperature grids were used as predictor grids. For each grid cell, PRISM calculated a local regression function between the interpolated climate variable and the predictor grid. Nearby stations entering the regression were assigned weights based on the physiographic similarity of the station to the grid cell that included the effects of distance, elevation, coastal proximity, vertical atmospheric layer, and topographic position. Interpolation uncertainties were estimated using cross-validation exercises. Given that CAI interpolation was used, a new method was developed to allow uncertainties in predictor grids to be accounted for in estimating the total interpolation error. Local land use/land cover properties had noticeable effects on the spatial patterns of atmospheric moisture content and deficit. An example of this was relatively high dew points and low vapor pressure deficits at stations located in or near irrigated fields. The new grids, in combination with existing temperature grids, enable the user to derive a full suite of atmospheric moisture variables, such as minimum and maximum relative humidity, vapor pressure, and dew point depression, with accompanying assumptions. 
All of these grids are available online at http://prism.oregonstate.edu, and include 800-m and 4-km resolution data, images, metadata, pedigree information, and station inventory files. PMID:26485026
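Deriving the moisture variables mentioned above from paired temperature and dew point grids is typically done cell by cell with a saturation vapor pressure formula. The sketch below uses the widely used Magnus approximation (constants 6.112/17.67/243.5); it illustrates the kind of derivation the grids enable and is not PRISM's own code:

```python
import math

def sat_vapor_pressure_hpa(t_c):
    """Magnus approximation for saturation vapor pressure over water (hPa)."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

def relative_humidity(t_c, td_c):
    """Relative humidity (%) from air temperature and dew point, both in deg C."""
    return 100.0 * sat_vapor_pressure_hpa(td_c) / sat_vapor_pressure_hpa(t_c)

def vpd_hpa(t_c, td_c):
    """Vapor pressure deficit (hPa): saturation minus actual vapor pressure."""
    return sat_vapor_pressure_hpa(t_c) - sat_vapor_pressure_hpa(td_c)
```

Applied to the 800-m grids, the same formulas run elementwise over the temperature and dew point arrays; dew point depression is simply `t_c - td_c`.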
Decision Making in the Acquisition Community: Survey and Techniques
1992-04-01
NASA Synthetic Vision EGE Flight Test
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J.; Kramer, Lynda J.; Comstock, J. Raymond; Bailey, Randall E.; Hughes, Monica F.; Parrish, Russell V.
2002-01-01
NASA Langley Research Center conducted flight tests at the Eagle County, Colorado airport to evaluate synthetic vision concepts. Three display concepts (size 'A' head-down, size 'X' head-down, and head-up displays) and two texture concepts (photo, generic) were assessed for situation awareness and flight technical error / performance while making approaches to Runway 25 and Runway 07 and simulated engine-out Cottonwood 2 and KREMM departures. The results of the study confirm the retrofit capability of the HUD and Size 'A' SVS concepts to significantly improve situation awareness and performance over current EFIS glass and non-glass instruments for difficult approaches in terrain-challenged environments.
Grid cells on steeply sloping terrain: evidence for planar rather than volumetric encoding
Hayman, Robin M. A.; Casali, Giulio; Wilson, Jonathan J.; Jeffery, Kate J.
2015-01-01
Neural encoding of navigable space involves a network of structures centered on the hippocampus, whose neurons – place cells – encode current location. Input to the place cells includes afferents from the entorhinal cortex, which contains grid cells. These are neurons expressing spatially localized activity patches, or firing fields, that are evenly spaced across the floor in a hexagonal close-packed array called a grid. It is thought that grids function to enable the calculation of distances. The question arises as to whether this odometry process operates in three dimensions, and so we queried whether grids permeate three-dimensional (3D) space – that is, form a lattice – or whether they simply follow the environment surface. If grids form a 3D lattice then this lattice would ordinarily be aligned horizontally (to explain the usual hexagonal pattern observed). A tilted floor would transect several layers of this putative lattice, resulting in interruption of the hexagonal pattern. We model this prediction with simulated grid lattices, and show that the firing of a grid cell on a 40°-tilted surface should cover proportionally less of the surface, with smaller field size, fewer fields, and reduced hexagonal symmetry. However, recording of real grid cells as animals foraged on a 40°-tilted surface found that firing of grid cells was almost indistinguishable, in pattern or rate, from that on the horizontal surface, with, if anything, increased coverage and field number, and preserved field size. It thus appears unlikely that the sloping surface transected a lattice. However, grid cells on the slope displayed slightly degraded firing patterns, with reduced coherence and slightly reduced symmetry. 
These findings collectively suggest that the grid cell component of the metric representation of space is not fixed in absolute 3D space but is influenced both by the surface the animal is on and by the relationship of this surface to the horizontal, supporting the hypothesis that the neural map of space is “multi-planar” rather than fully volumetric. PMID:26236245
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Emma M.; Hendrix, Val; Chertkov, Michael
This white paper introduces the application of advanced data analytics to the modernized grid. In particular, we consider the field of machine learning and where it is both useful, and not useful, for the particular field of the distribution grid and buildings interface. While analytics, in general, is a growing field of interest, and often seen as the golden goose in the burgeoning distribution grid industry, its application is often limited by communications infrastructure, or lack of a focused technical application. Overall, the linkage of analytics to purposeful application in the grid space has been limited. In this paper we consider the field of machine learning as a subset of analytical techniques, and discuss its ability and limitations to enable the future distribution grid and the building-to-grid interface. To that end, we also consider the potential for mixing distributed and centralized analytics and the pros and cons of these approaches. Machine learning is a subfield of computer science that studies and constructs algorithms that can learn from data, make predictions, and improve forecasts. Incorporation of machine learning in grid monitoring and analysis tools may have the potential to solve data and operational challenges that result from increasing penetration of distributed and behind-the-meter energy resources. There is an exponentially expanding volume of measured data being generated on the distribution grid, which, with appropriate application of analytics, may be transformed into intelligible, actionable information that can be provided to the right actors – such as grid and building operators – at the appropriate time to enhance grid or building resilience, efficiency, and operations against various metrics or goals, such as total carbon reduction or other economic benefit to customers. 
While some basic analysis of these data streams can provide a wealth of information, computational and human limits on performing the analysis are becoming significant as data volumes grow and objectives multiply. Efficient applications of analysis and machine learning are therefore being considered throughout this loop.
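A concrete instance of the analytics described above is short-term load forecasting from metered time series. The sketch below fits a linear autoregression on lagged loads with plain least squares; the synthetic data and lag choices are illustrative, not from the white paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hourly feeder load: 60 days with a daily cycle plus noise
# (a stand-in for AMI or SCADA measurements)
t = np.arange(24 * 60)
load = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size)

# Features: load at lags of 1, 2, and 24 hours; target: current load
lags = [1, 2, 24]
X = np.column_stack([load[24 - l:-l] for l in lags])
X = np.column_stack([np.ones(X.shape[0]), X])  # intercept term
y = load[24:]

# Least-squares fit on the first 80% of the record, evaluate on the rest
split = int(0.8 * y.size)
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ coef
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
```

The lag-24 feature captures the daily cycle, so the held-out error stays near the noise floor; heavier machine-learning models enter when loads depend on weather, behind-the-meter solar, and other covariates.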
Irvine Smart Grid Demonstration, a Regional Smart Grid Demonstration Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yinger, Robert; Irwin, Mark
ISGD was a comprehensive demonstration that spanned the electricity delivery system and extended into customer homes. The project used phasor measurement technology to enable substation-level situational awareness, and demonstrated SCE’s next-generation substation automation system. It extended beyond the substation to evaluate the latest generation of distribution automation technologies, including looped 12-kV distribution circuit topology using URCIs. The project team used DVVC capabilities to demonstrate CVR. In customer homes, the project evaluated HAN devices such as smart appliances, programmable communicating thermostats, and home energy management components. The homes were also equipped with energy storage, solar PV systems, and a number of energy efficiency measures (EEMs). The team used one block of homes to evaluate strategies and technologies for achieving ZNE. A home achieves ZNE when it produces at least as much renewable energy as the amount of energy it consumes annually. The project also assessed the impact of device-specific demand response (DR), as well as load management capabilities involving energy storage devices and plug-in electric vehicle charging equipment. In addition, the ISGD project sought to better understand the impact of ZNE homes on the electric grid. ISGD’s SENet enabled end-to-end interoperability between multiple vendors’ systems and devices, while also providing a level of cybersecurity that is essential to smart grid development and adoption across the nation. The ISGD project includes a series of sub-projects grouped into four logical technology domains: Smart Energy Customer Solutions, Next-Generation Distribution System, Interoperability and Cybersecurity, and Workforce of the Future. Section 2.3 provides a more detailed overview of these domains.
Global Marine Gravity and Bathymetry at 1-Minute Resolution
NASA Astrophysics Data System (ADS)
Sandwell, D. T.; Smith, W. H.
2008-12-01
We have developed global gravity and bathymetry grids at 1-minute resolution. Three approaches are used to reduce the error in the satellite-derived marine gravity anomalies. First, we have retracked the raw waveforms from the ERS-1 and Geosat/GM missions, resulting in improvements in range precision of 40% and 27%, respectively. Second, we have used the recently published EGM2008 global gravity model as a reference field to provide a seamless gravity transition from land to ocean. Third, we have used a biharmonic spline interpolation method to construct residual vertical deflection grids. Comparisons between shipboard gravity and the global gravity grid show errors ranging from 2.0 mGal in the Gulf of Mexico to 4.0 mGal in areas with rugged seafloor topography. The largest errors occur on the crests of narrow large seamounts. The bathymetry grid is based on prediction from satellite gravity and available ship soundings. Global soundings were assembled from a wide variety of sources including NGDC/GEODAS, NOAA Coastal Relief, CCOM, IFREMER, JAMSTEC, NSF Polar Programs, UKHO, LDEO, HIG, SIO and numerous miscellaneous contributions. The National Geospatial-Intelligence Agency and other volunteering hydrographic offices within the International Hydrographic Organization provided significant global shallow-water (< 300 m) soundings derived from their nautical charts. All soundings were converted to a common format and were hand-edited in relation to a smooth bathymetric model. Land elevations and shoreline location are based on a combination of SRTM30, GTOPO30, and ICESat data. A new feature of the bathymetry grid is a matching grid of source identification numbers that enables one to establish the origin of the depth estimate in each grid cell. Both the gravity and bathymetry grids are freely available.
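Using EGM2008 as a reference field follows the classic remove-restore pattern: subtract the reference at the observation points, interpolate only the residuals onto the grid, then add the reference back. A 1-D sketch with all-synthetic data, using linear interpolation as a stand-in for the paper's biharmonic spline:

```python
import numpy as np

# Reference field (e.g., a global model) sampled on the output grid
grid_x = np.linspace(0, 10, 101)
reference = np.sin(grid_x)

# Sparse "shipboard" observations: the reference plus a local anomaly
obs_x = np.array([1.0, 4.0, 7.5])
obs = np.sin(obs_x) + np.array([0.2, -0.1, 0.15])

# Remove: residuals relative to the reference at the observation points
residual = obs - np.interp(obs_x, grid_x, reference)

# Restore: interpolate only the (smooth, small) residuals, then add back
predicted = reference + np.interp(grid_x, obs_x, residual)
```

Interpolating residuals instead of raw values keeps the long-wavelength structure in the reference model, so the interpolator only has to bridge the short-wavelength anomalies between observations.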
Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms
NASA Technical Reports Server (NTRS)
Heidmann, James D.; Hunter, Scott D.
2001-01-01
The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.
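The spatial-averaging step can be illustrated with a simple block average that turns a resolved fine-grid flux field into coarse-cell volumetric sources while conserving the integrated quantity. This is a 2-D toy field, not the paper's multiblock film-cooling solution:

```python
import numpy as np

# Fine-grid "detailed" coolant mass flux with a resolved jet footprint
# (arbitrary units on a unit square)
fine = np.zeros((12, 12))
fine[4:8, 4:8] = 1.0
fine_dx = 1.0 / 12

# Coarse grid 3x coarser: spatially average fine fluxes into each coarse cell,
# giving the volumetric source-term density seen by the coarse solver
factor = 3
coarse = fine.reshape(4, factor, 4, factor).mean(axis=(1, 3))
coarse_dx = factor * fine_dx

# Conservation check: the integrated source is identical on both grids
total_fine = fine.sum() * fine_dx**2
total_coarse = coarse.sum() * coarse_dx**2
```

The same averaging would be applied to each conservative variable (mass, momentum, energy, turbulence quantities), which is what lets a coarse design-cycle calculation reproduce the integrated effect of holes smaller than its cells.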
NASA Astrophysics Data System (ADS)
Re, B.; Dobrzynski, C.; Guardone, A.
2017-07-01
A novel strategy to solve the finite volume discretization of the unsteady Euler equations within the Arbitrary Lagrangian-Eulerian framework over tetrahedral adaptive grids is proposed. The volume changes due to local mesh adaptation are treated as continuous deformations of the finite volumes and they are taken into account by adding fictitious numerical fluxes to the governing equation. This interpretation avoids any explicit interpolation of the solution between different grids and allows grid velocities to be computed so that the Geometric Conservation Law is automatically fulfilled even when the connectivity changes. The solution on the new grid is obtained through standard ALE techniques, thus preserving the underlying scheme properties, such as conservativeness, stability and monotonicity. The adaptation procedure includes node insertion, node deletion, edge swapping and point relocation, and it is exploited both to enhance grid quality after the boundary movement and to modify the grid spacing to increase solution accuracy. The presented approach is assessed by three-dimensional simulations of steady and unsteady flow fields. The capability of dealing with large boundary displacements is demonstrated by computing the flow around the translating infinite- and finite-span NACA 0012 wing moving through the domain at the flight speed. The proposed adaptive scheme is applied also to the simulation of a pitching infinite-span wing, where the two-dimensional character of the flow is well reproduced despite the three-dimensional unstructured grid. Finally, the scheme is exploited in a piston-induced shock-tube problem to take into account simultaneously the large deformation of the domain and the shock wave. In all tests, mesh adaptation plays a crucial role.
NASA Astrophysics Data System (ADS)
Toyokuni, G.; Takenaka, H.
2007-12-01
We propose a method to obtain effective grid parameters for the finite-difference (FD) method with standard Earth models by analytical means. In spite of the broad use of the heterogeneous FD formulation for seismic waveform modeling, accurate treatment of material discontinuities inside the grid cells has been a serious problem for many years. One possible way to solve this problem is to introduce effective grid elastic moduli and densities (effective parameters) calculated by volume harmonic averaging of the elastic moduli and volume arithmetic averaging of the density in grid cells. This scheme enables us to put a material discontinuity at an arbitrary position in the spatial grid. Most methods used for synthetic seismogram calculation today rely on standard Earth models, such as PREM, IASP91, SP6, and AK135, represented as functions of normalized radius. For FD computation of seismic waveforms with such models, we first need accurate treatment of material discontinuities in radius. This study provides a numerical scheme for analytical calculation of the effective parameters on arbitrary spatial grids in the radial direction for these four major standard Earth models, making the best use of their functional features. The scheme obtains the integral volume averages analytically through partial fraction decompositions (PFDs) and integral formulae. We have developed a FORTRAN subroutine to perform the computations, which can be used in a large variety of FD schemes ranging from 1-D to 3-D, with conventional and staggered grids. In the presentation, we show some numerical examples displaying the accuracy of the FD synthetics simulated with the analytical effective parameters.
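The averaging rule itself is compact: volume-weighted harmonic averaging for the elastic moduli and arithmetic averaging for the density. A minimal sketch for a single grid cell straddling a discontinuity (a generic two-material example, not tied to a particular Earth model):

```python
import numpy as np

def effective_parameters(fractions, moduli, densities):
    """Effective modulus and density for one grid cell.

    fractions: volume fraction of each material in the cell (sums to 1)
    moduli:    elastic modulus of each material
    densities: density of each material
    """
    f = np.asarray(fractions, dtype=float)
    mu_eff = 1.0 / np.sum(f / np.asarray(moduli, dtype=float))   # harmonic average
    rho_eff = np.sum(f * np.asarray(densities, dtype=float))     # arithmetic average
    return mu_eff, rho_eff
```

For a cell split evenly between moduli 1 and 3 the effective modulus is 1.5, not the arithmetic mean 2; the harmonic average is what keeps the discrete traction continuous across the internal discontinuity.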
A Testbed Environment for Buildings-to-Grid Cyber Resilience Research and Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sridhar, Siddharth; Ashok, Aditya; Mylrea, Michael E.
The Smart Grid is characterized by the proliferation of advanced digital controllers at all levels of its operational hierarchy, from generation to end consumption. Such controllers within modern residential and commercial buildings enable grid operators to exercise fine-grained control over energy consumption through several emerging Buildings-to-Grid (B2G) applications. Though this capability promises significant benefits in terms of operational economics and improved reliability, cybersecurity weaknesses in the supporting infrastructure could be exploited to detrimental effect, which necessitates focused research efforts on two fronts: first, understanding how, and to what extent, cyber attacks in the B2G space could impact grid reliability; second, developing and validating cyber-physical, application-specific countermeasures that complement traditional infrastructure cybersecurity mechanisms for enhanced cyber attack detection and mitigation. The PNNL B2G testbed is currently being developed to address these core research needs. Specifically, the B2G testbed combines high-fidelity building and grid simulators with industry-grade building automation and Supervisory Control and Data Acquisition (SCADA) systems in an integrated, realistic, and reconfigurable environment capable of supporting attack-impact-detection-mitigation experimentation. In this paper, we articulate the need for research testbeds to model various B2G applications broadly by looking at the end-to-end operational hierarchy of the Smart Grid. Finally, the paper not only describes the architecture of the B2G testbed in detail, but also addresses the broad spectrum of B2G resilience research it is capable of supporting based on the smart grid operational hierarchy identified earlier.
Non-Pilot Protection of the HVDC Grid
NASA Astrophysics Data System (ADS)
Badrkhani Ajaei, Firouz
This thesis develops a non-pilot protection system for the next generation power transmission system, the High-Voltage Direct Current (HVDC) grid. The HVDC grid protection system is required to be (i) adequately fast to prevent damage and/or converter blocking and (ii) reliable, to minimize the impacts of faults. This study focuses mainly on the Modular Multilevel Converter (MMC)-based HVDC grid, since the MMC is considered the building block of future HVDC systems. The studies reported in this thesis include (i) developing an enhanced equivalent model of the MMC to enable accurate representation of its DC-side fault response, (ii) developing a realistic HVDC-AC test system that includes a five-terminal MMC-based HVDC grid embedded in a large interconnected AC network, (iii) investigating the transient response of the developed test system to AC-side and DC-side disturbances in order to determine the HVDC grid protection requirements, (iv) investigating fault surge propagation in the HVDC grid to determine the impacts of the DC-side fault location on the measured signals at each relay location, (v) designing a protection algorithm that detects and locates DC-side faults reliably and sufficiently fast to prevent relay malfunction and unnecessary blocking of the converters, and (vi) performing hardware-in-the-loop tests on the designed relay to verify its potential to be implemented in hardware. The results of the off-line time-domain transient studies in the PSCAD software platform and the real-time hardware-in-the-loop tests using an enhanced version of the RTDS platform indicate that the developed HVDC grid relay meets all technical requirements, including speed, dependability, security, selectivity, and robustness. Moreover, the developed protection algorithm does not impose a considerable computational burden on the hardware.
Pan, Chengfeng; Kumar, Kitty; Li, Jianzhao; Markvicka, Eric J; Herman, Peter R; Majidi, Carmel
2018-03-01
A material architecture and laser-based microfabrication technique is introduced to produce electrically conductive films (sheet resistance = 2.95 Ω sq⁻¹; resistivity = 1.77 × 10⁻⁶ Ω m) that are soft, elastic (strain limit >100%), and optically transparent. The films are composed of a grid-like array of visually imperceptible liquid-metal (LM) lines on a clear elastomer. Unlike previous efforts in transparent LM circuitry, the current approach enables fully imperceptible electronics that not only have high optical transmittance (>85% at 550 nm) but are also invisible under typical lighting conditions and reading distances. This unique combination of properties is enabled by a laser writing technique that produces LM grid patterns with a line width and pitch as small as 4.5 and 100 µm, respectively, yielding grid-like wiring that has adequate conductivity for digital functionality but is also well below the threshold of visual perception. The electrical, mechanical, electromechanical, and optomechanical properties of the films are characterized, and it is found that high conductivity and transparency are preserved at tensile strains of ≈100%. To demonstrate their effectiveness for emerging applications in transparent displays and sensing electronics, the material architecture is incorporated into a couple of illustrative use cases related to chemical hazard warning. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
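The quoted line width and pitch can be checked against the reported transmittance with simple grid geometry. This back-of-the-envelope sketch (the function name is ours) treats the grid as an ideal opaque mesh and ignores reflection and diffraction.

```python
# Geometric transmittance of a square grid of opaque lines with width w and
# pitch p: the open area of one unit cell is (p - w)^2 out of p^2, so the
# transmittance is ((p - w)/p)^2. With the 4.5 um lines on a 100 um pitch
# quoted in the abstract, this gives roughly 91%, consistent with the
# reported >85% optical transmittance at 550 nm.

def grid_transmittance(line_width_um, pitch_um):
    return ((pitch_um - line_width_um) / pitch_um) ** 2
```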
Advances in Parallelization for Large Scale Oct-Tree Mesh Generation
NASA Technical Reports Server (NTRS)
O'Connell, Matthew; Karman, Steve L.
2015-01-01
Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
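A "top down" oct-tree refinement step of the kind the method builds on can be sketched in a few lines. The sphere-surface refinement criterion and all names here are illustrative stand-ins, not the paper's actual geometry tests or parallel machinery.

```python
import math

# Minimal "top down" oct-tree sketch: recursively split cells that may
# intersect a region of interest (here, a band around the unit sphere) into
# eight children until a maximum depth, collecting the leaves. A parallel
# generator would distribute subtrees across processors; this serial toy
# shows only the refinement pattern.

def refine(cx, cy, cz, half, depth, max_depth, leaves):
    r = math.sqrt(cx * cx + cy * cy + cz * cz)
    near_surface = abs(r - 1.0) <= math.sqrt(3.0) * half  # cell may touch the sphere
    if depth == max_depth or not near_surface:
        leaves.append((cx, cy, cz, half))
        return
    h = half / 2.0
    for dx in (-h, h):                # split into eight octants
        for dy in (-h, h):
            for dz in (-h, h):
                refine(cx + dx, cy + dy, cz + dz, h, depth + 1, max_depth, leaves)

leaves = []
refine(0.0, 0.0, 0.0, 2.0, 0, 4, leaves)   # root cube [-2, 2]^3
```

The refinement concentrates small cells near the sphere while leaving large cells elsewhere, which is the memory-saving property that makes hierarchical off-body meshes attractive for very large grids.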
Power Hardware-in-the-Loop Testing of a Smart Distribution System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendoza Carrillo, Ismael; Breaden, Craig; Medley, Paige
This paper presents the results of the third and final phase of the National Renewable Energy Laboratory (NREL) INTEGRATE demonstration: Smart Distribution. For this demonstration, high penetrations of solar PV and wind energy systems were simulated in a power hardware-in-the-loop (HIL) setup using a smart distribution test feeder. Simulated and real DERs were controlled by a real-time control platform, which manages grid constraints under high clean energy deployment levels. The power HIL testing, conducted at NREL's ESIF smart power lab, demonstrated how dynamically managing DERs increases the grid's hosting capacity by leveraging active network management's (ANM) safe and reliable control framework. Results are presented for how ANM's real-time monitoring, automation, and control can be used to manage multiple DERs and multiple constraints associated with high penetrations of DERs on a distribution grid. The project also successfully demonstrated the importance of escalating control actions, given that ANM enables operation of grid equipment closer to its actual physical limit in the presence of very high levels of intermittent DERs.
Cook, Brendan; Gazzano, Jerrome; Gunay, Zeynep; Hiller, Lucas; Mahajan, Sakshi; Taskan, Aynur; Vilogorac, Samra
2012-04-23
The electric grid in the United States has been suffering from underinvestment for years, and now faces pressing challenges from rising demand and deteriorating infrastructure. High congestion levels in transmission lines are greatly reducing the efficiency of electricity generation and distribution. In this paper, we assess the faults of the current electric grid and quantify the costs of maintaining the current system into the future. While the proposed "smart grid" contains many proposals to upgrade the ailing infrastructure of the electric grid, we argue that smart meter installation in each U.S. household will offer a significant reduction in peak demand on the current system. A smart meter is a device which monitors a household's electricity consumption in real-time, and has the ability to display real-time pricing in each household. We conclude that these devices will provide short-term and long-term benefits to utilities and consumers. The smart meter will enable utilities to closely monitor electricity consumption in real-time, while also allowing households to adjust electricity consumption in response to real-time price adjustments.
Grids, Clouds, and Virtualization
NASA Astrophysics Data System (ADS)
Cafaro, Massimo; Aloisio, Giovanni
This chapter introduces and puts in context Grids, Clouds, and Virtualization. Grids promised to deliver computing power on demand. However, despite a decade of active research, no viable commercial grid computing provider has emerged. On the other hand, it is widely believed - especially in the business world - that HPC will eventually become a commodity. Just as some commercial consumers of electricity have mission requirements that necessitate generating their own power, some consumers of computational resources will continue to need to provision their own supercomputers. Clouds are a recent business-oriented development with the potential to eventually render this as rare as organizations that generate their own electricity today, even among institutions that currently consider themselves the unassailable elite of the HPC business. Finally, Virtualization is one of the key technologies enabling many different Clouds. We begin with a brief history in order to put them in context, and recall the basic principles and concepts underlying and clearly differentiating them. A thorough overview and survey of existing technologies provides the basis to delve into details as the reader progresses through the book.
Analyzing system safety in lithium-ion grid energy storage
NASA Astrophysics Data System (ADS)
Rosewater, David; Williams, Adam
2015-12-01
As grid energy storage systems become more complex, it grows more difficult to design them for safe operation. This paper first reviews the properties of lithium-ion batteries that can produce hazards in grid scale systems. Then the conventional safety engineering technique Probabilistic Risk Assessment (PRA) is reviewed to identify its limitations in complex systems. To address this gap, new research is presented on the application of Systems-Theoretic Process Analysis (STPA) to a lithium-ion battery based grid energy storage system. STPA is anticipated to fill the gaps recognized in PRA for designing complex systems and hence be more effective or less costly to use during safety engineering. It was observed that STPA is able to capture causal scenarios for accidents not identified using PRA. Additionally, STPA enabled a more rational assessment of uncertainty (all that is not known) thereby promoting a healthy skepticism of design assumptions. We conclude that STPA may indeed be more cost effective than PRA for safety engineering in lithium-ion battery systems. However, further research is needed to determine if this approach actually reduces safety engineering costs in development, or improves industry safety standards.
Smart Grid Interoperability Maturity Model Beta Version
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widergren, Steven E.; Drummond, R.; Giroti, Tony
The GridWise Architecture Council was formed by the U.S. Department of Energy to promote and enable interoperability among the many entities that interact with the electric power system. This balanced team of industry representatives proposes principles for the development of interoperability concepts and standards. The Council provides industry guidance and tools that make it an available resource for smart grid implementations. In the spirit of advancing interoperability of an ecosystem of smart grid devices and systems, this document presents a model for evaluating the maturity of the artifacts and processes that specify the agreement of parties to collaborate across an information exchange interface. You are expected to have a solid understanding of large, complex system integration concepts and experience in dealing with software component interoperation. Those without this technical background should read the Executive Summary for a description of the purpose and contents of the document. Other documents, such as checklists, guides, and whitepapers, exist for targeted purposes and audiences. Please see the www.gridwiseac.org website for more products of the Council that may be of interest to you.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhaoqing; Wang, Taiping
This paper presents a modeling study conducted to evaluate tidal-stream energy extraction and its associated potential environmental impacts using a three-dimensional unstructured-grid coastal ocean model, which was coupled with a water-quality model and a tidal-turbine module.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonder, Jeff; Brooker, Aaron; Burton, Evan
Presentation given at an 'Expert Workshop on V2X Enabled Electric Vehicles' hosted at NREL on behalf of the International Energy Agency (IEA) Hybrid and Electric Vehicle Implementing Agreement for Task 28: Home Grids and V2X Technologies.
NASA Astrophysics Data System (ADS)
Maleki, Hesamaldin; Ramachandaramurthy, V. K.; Lak, Moein
2013-06-01
The burning of fossil fuels and the resulting greenhouse gases cause global warming. This has led governments to explore the use of green energy instead of fossil fuels. The availability of wind has made wind technology a viable alternative for generating electrical power. Hence, many parts of the world, especially Europe, are experiencing a growth in wind farms. However, as the number of wind farms connected to the grid increases, the power quality and voltage stability of the grid become a matter of concern. In this paper, a VSC-HVDC control strategy is proposed which enables the wind farm to ride through faults and regulate voltage for various fault types. The results show that the wind turbine output voltage fulfills the E.ON grid code requirements when subjected to a three-phase-to-ground fault. Hence, continuous operation of the wind farm is achieved.
Security Policies for Mitigating the Risk of Load Altering Attacks on Smart Grid Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryutov, Tatyana; AlMajali, Anas; Neuman, Clifford
2015-04-01
While demand response programs implement energy efficiency and power quality objectives, they bring potential security threats to the Smart Grid. The ability to influence load in a system enables attackers to cause system failures and impacts the quality and integrity of power delivered to customers. This paper presents a security mechanism to monitor and control load according to a set of security policies during normal system operation. The mechanism monitors, detects, and responds to load altering attacks. We examined the security requirements of Smart Grid stakeholders and constructed a set of load control policies enforced by the mechanism. We implemented a proof-of-concept prototype and tested it in a simulation environment. By enforcing the proposed policies in this prototype, the system is maintained in a safe state in the presence of load drop attacks.
The Proteome Folding Project: Proteome-scale prediction of structure and function
Drew, Kevin; Winters, Patrick; Butterfoss, Glenn L.; Berstis, Viktors; Uplinger, Keith; Armstrong, Jonathan; Riffle, Michael; Schweighofer, Erik; Bovermann, Bill; Goodlett, David R.; Davis, Trisha N.; Shasha, Dennis; Malmström, Lars; Bonneau, Richard
2011-01-01
The incompleteness of proteome structure and function annotation is a critical problem for biologists and, in particular, severely limits interpretation of high-throughput and next-generation experiments. We have developed a proteome annotation pipeline based on structure prediction, where function and structure annotations are generated using an integration of sequence comparison, fold recognition, and grid-computing-enabled de novo structure prediction. We predict protein domain boundaries and three-dimensional (3D) structures for protein domains from 94 genomes (including human, Arabidopsis, rice, mouse, fly, yeast, Escherichia coli, and worm). De novo structure predictions were distributed on a grid of more than 1.5 million CPUs worldwide (World Community Grid). We generated significant numbers of new confident fold annotations (9% of domains that are otherwise unannotated in these genomes). We demonstrate that predicted structures can be combined with annotations from the Gene Ontology database to predict new and more specific molecular functions. PMID:21824995
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilder, Todd; Moragne, Corliss L.
The City of Tallahassee's Innovative Energy Initiatives program sought, first, to evaluate customer response and acceptance of in-home Smart Meter-enabled technologies that allow customers intelligent control of their energy usage. Additionally, this project furthers the City of Tallahassee's ongoing efforts to expand and enhance the City's Smart Grid capacity and give consumers more tools with which to effectively manage their energy consumption. This enhancement would become possible by establishing an "operations or command center" environment designed as a dual-use facility for the City's employees (field and network staff) and the systems responsible for a Smart Grid network. A command center would also support the City's Office of Electric Delivery and Energy Reliability's objective to overcome barriers to the deployment of new technologies that will ensure a truly modern and robust grid capable of meeting the demands of the 21st century.
Monitoring of the electrical parameters in off-grid solar power system
NASA Astrophysics Data System (ADS)
Idzkowski, Adam; Leoniuk, Katarzyna; Walendziuk, Wojciech
2016-09-01
The aim of this work was to develop monitoring for an off-grid installation. A laboratory set-up built for that purpose was equipped with a PV panel, a battery, a charge controller and a load. Additionally, to monitor the electrical parameters of this installation, the following were used: a LabJack data acquisition module, a self-built measuring module, and a computer with a program that measures and presents the off-grid installation parameters. The program was written in the G language using LabVIEW software. The designed system enables analysis of the currents and voltages of the PV panel, battery and load. It also makes it possible to visualize them on charts and to generate reports from the registered data. The monitoring system was verified both in a laboratory test and in real conditions. The results of this verification are also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2010-10-01
ADEPT Project: HRL Laboratories is using gallium nitride (GaN) semiconductors to create battery chargers for electric vehicles (EVs) that are more compact and efficient than traditional EV chargers. Reducing the size and weight of the battery charger is important because it would help improve the overall performance of the EV. GaN semiconductors process electricity faster than the silicon semiconductors used in most conventional EV battery chargers. These high-speed semiconductors can be paired with lighter-weight electrical circuit components, which helps decrease the overall weight of the EV battery charger. HRL Laboratories is combining the performance advantages of GaN semiconductors with an innovative, interactive battery-to-grid energy distribution design. This design would support two-way power flow, enabling EV battery chargers to not only draw energy from the power grid, but also store and feed energy back into it.
A gateway for phylogenetic analysis powered by grid computing featuring GARLI 2.0.
Bazinet, Adam L; Zwickl, Derrick J; Cummings, Michael P
2014-09-01
We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv; Nakano, Aiichiro; Vashishta, Priya; Iyetomi, Hiroshi; Ogata, Shuji; Kouno, Takahisa; Shimojo, Fuyuki; Tsuruta, Kanji; Saini, Subhash;
2002-01-01
A multidisciplinary, collaborative simulation has been performed on a Grid of geographically distributed PC clusters. The multiscale simulation approach seamlessly combines i) atomistic simulation based on the molecular dynamics (MD) method and ii) quantum mechanical (QM) calculation based on density functional theory (DFT), so that accurate but less scalable computations are performed only where they are needed. The multiscale MD/QM simulation code has been Grid-enabled using i) a modular, additive hybridization scheme, ii) multiple QM clustering, and iii) computation/communication overlapping. The Grid-enabled MD/QM simulation code has been used to study environmental effects of water molecules on fracture in silicon. A preliminary run of the code has achieved a parallel efficiency of 94% on 25 PCs distributed over 3 PC clusters in the US and Japan, and a larger test involving 154 processors on 5 distributed PC clusters is in progress.
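The additive hybridization scheme mentioned above combines the two levels of theory in an ONIOM-like fashion. In this sketch the energy values are placeholder numbers standing in for real MD and DFT evaluations; only the combination rule is the point.

```python
# Sketch of an additive MD/QM hybridization: the MD energy of the full system
# is corrected by replacing the MD description of each embedded cluster with
# a QM one,
#   E = E_MD(full) + sum_i [ E_QM(cluster_i) - E_MD(cluster_i) ].
# The inputs here are placeholder numbers, not outputs of real MD/DFT codes.

def hybrid_energy(e_md_full, clusters):
    """clusters: iterable of (e_qm_cluster, e_md_cluster) pairs, one per QM region."""
    return e_md_full + sum(e_qm - e_md for e_qm, e_md in clusters)
```

Because each QM cluster enters only through its own correction term, the clusters can be evaluated independently, which is what makes the multiple-QM-clustering decomposition natural to distribute across a Grid of clusters.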
An Empirical Model for Vane-Type Vortex Generators in a Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2005-01-01
An empirical model which simulates the effects of vane-type vortex generators in ducts was incorporated into the Wind-US Navier-Stokes computational fluid dynamics code. The model enables the effects of the vortex generators to be simulated without defining the details of the geometry within the grid, and makes it practical for researchers to evaluate multiple combinations of vortex generator arrangements. The model determines the strength of each vortex based on the generator geometry and the local flow conditions. Validation results are presented for flow in a straight pipe with a counter-rotating vortex generator arrangement, and the results are compared with experimental data and computational simulations using a gridded vane generator. Results are also presented for vortex generator arrays in two S-duct diffusers, along with accompanying experimental data. The effects of grid resolution and turbulence model are also examined.
Yue, Meng; Wang, Xiaoyu
2015-07-01
It is well known that responsive battery energy storage systems (BESSs) are an effective means to improve the grid inertial response to various disturbances, including the variability of renewable generation. One of the major issues associated with their implementation is the difficulty in determining the required BESS capacity, mainly due to the large amount of inherent uncertainty that cannot be accounted for deterministically. In this study, a probabilistic approach is proposed to properly size the BESS from the perspective of the system inertial response, as an application of probabilistic risk assessment (PRA). The proposed approach enables a risk-informed decision-making process regarding (1) the acceptable level of solar penetration in a given system and (2) the desired BESS capacity (and minimum cost) to achieve an acceptable grid inertial response with a certain confidence level.
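A risk-informed sizing loop in this spirit can be sketched with Monte Carlo sampling. The frequency-nadir model, thresholds, and names below are toy placeholders, not the study's actual system model or criteria.

```python
import random

# Toy Monte Carlo sketch of risk-informed BESS sizing: for each candidate
# capacity, sample uncertain disturbances and accept the smallest capacity
# for which the inertial-response criterion holds at the target confidence
# level. The nadir model is a placeholder, not a real frequency simulation.

def nadir_ok(capacity_mw, disturbance_mw, limit_hz=-0.5):
    unserved = max(disturbance_mw - capacity_mw, 0.0)  # BESS offsets the disturbance
    return -0.1 * unserved >= limit_hz                 # toy frequency nadir in Hz

def size_bess(candidates_mw, confidence=0.95, trials=2000, seed=1):
    rng = random.Random(seed)
    for cap in sorted(candidates_mw):
        ok = sum(nadir_ok(cap, rng.uniform(0.0, 10.0)) for _ in range(trials))
        if ok / trials >= confidence:
            return cap
    return None   # no candidate meets the confidence target
```

The same loop, run across different renewable penetration levels, yields the capacity-versus-penetration trade-off that supports the two decisions listed in the abstract.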
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denholm, Paul
While it may seem obvious that wind and solar 'need' energy storage to be successfully integrated into the world's electricity grids, both detailed integration studies and real-world experience have shown that storage is only one of many options that could enable substantially increased growth of these renewable resources. This talk will discuss the potential role of energy storage in integrating wind and solar, demonstrating that in the near term perhaps less exciting, but often more cost-effective, alternatives will likely provide much of the grid flexibility needed to add renewable resources. The talk will also demonstrate that the decreasing value of PV and wind at increased penetration creates greater opportunities for storage. It also demonstrates that 'the sun doesn't always shine and the wind doesn't always blow' is only one reason why energy storage may be an increasingly attractive solution to the challenges of operating the grid of the future.
Performance of large area xenon ion thrusters for orbit transfer missions
NASA Technical Reports Server (NTRS)
Rawlin, Vincent K.
1989-01-01
Studies have indicated that xenon ion propulsion systems can enable the use of smaller Earth-launch vehicles for satellite placement, resulting in significant cost savings. These analyses have assumed the availability of advanced, high power ion thrusters operating at about 10 kW or higher. A program was initiated to explore the viability of operating 50 cm diameter ion thrusters at this power level. Operation with several discharge chamber and ion extraction grid set combinations has been demonstrated, and data were obtained at power levels up to 16 kW. Fifty cm diameter thrusters using state-of-the-art 30 cm diameter grids or advanced technology 50 cm diameter grids allow discharge power and beam current densities commensurate with long life at power levels up to 10 kW. In addition, 50 cm diameter thrusters are shown to have the potential for growth in thrust and power levels beyond 10 kW.
NB-PLC channel modelling with cyclostationary noise addition & OFDM implementation for smart grid
NASA Astrophysics Data System (ADS)
Thomas, Togis; Gupta, K. K.
2016-03-01
Power line communication (PLC) technology can be a viable solution for future ubiquitous networks because it provides a cheaper alternative to other wired technologies currently used for communication. In the smart grid, PLC is used to support low-rate communication on the low voltage (LV) distribution network. In this paper, we propose a channel model for narrowband (NB) PLC in the frequency range 5 kHz to 500 kHz using ABCD parameters with cyclostationary noise addition. The behaviour of the channel was studied by adding an 11 kV/230 V transformer and by varying the load and its location. Bit error rate (BER) versus signal-to-noise ratio (SNR) was plotted for the proposed model employing OFDM. Our simulation results based on the proposed channel model show acceptable performance in terms of bit error rate versus signal-to-noise ratio, which enables the communication required for smart grid applications.
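The ABCD-parameter approach reduces to cascading two-port matrices. The section values in this sketch are illustrative; a real model would derive A, B, C, D at each frequency from the cable's per-unit-length constants and the transformer model.

```python
# Sketch of the ABCD two-port cascade underlying the channel model: each line
# section, branch load, or transformer is a 2x2 matrix [[A, B], [C, D]], and
# the end-to-end channel is their ordered product. The transfer function then
# follows from the source and load impedances. Section values are illustrative.

def cascade(sections):
    A, B, C, D = 1.0, 0.0, 0.0, 1.0          # identity two-port
    for a, b, c, d in sections:
        A, B, C, D = A * a + B * c, A * b + B * d, C * a + D * c, C * b + D * d
    return A, B, C, D

def transfer_function(sections, z_source, z_load):
    """Voltage gain H = Zl / (A*Zl + B + Zs*(C*Zl + D))."""
    A, B, C, D = cascade(sections)
    return z_load / (A * z_load + B + z_source * (C * z_load + D))
```

As a sanity check, a single 50 Ω series impedance driving a 50 Ω load from an ideal source halves the voltage; sweeping such matrices over frequency gives the channel response that the BER-versus-SNR simulation is built on.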
Secure and Time-Aware Communication of Wireless Sensors Monitoring Overhead Transmission Lines.
Mazur, Katarzyna; Wydra, Michal; Ksiezopolski, Bogdan
2017-07-11
Existing transmission power grids suffer from high maintenance costs and scalability issues, along with a lack of effective and secure system monitoring. To address these problems, we propose to use Wireless Sensor Networks (WSNs) as a technology to achieve energy-efficient, reliable, and low-cost remote monitoring of transmission grids. With WSNs, the smart grid enables both utilities and customers to monitor, predict and manage energy usage effectively and react to possible power grid disturbances in a timely manner. However, the increased application of WSNs also introduces new security challenges, especially related to privacy, connectivity, and security management, repeatedly causing unpredicted expenditures. In monitoring the status of the power system, a large number of sensors generate massive amounts of sensitive data. In order to build an effective WSN for a smart grid, we focus on designing a methodology for efficient and secure delivery of the data measured on transmission lines. We perform a set of simulations in which we examine different routing algorithms, security mechanisms and WSN deployments in order to select the parameters that will not affect the delivery time but fulfill their role and ensure security at the same time. Furthermore, we analyze the optimal placement of direct wireless links, aiming to minimize time delays, balance network performance and decrease deployment costs.
Xia, Zhouhui; Gao, Peng; Sun, Teng; Wu, Haihua; Tan, Yeshu; Song, Tao; Lee, Shuit-Tong; Sun, Baoquan
2018-04-25
Silicon (Si)/organic heterojunction solar cells based on poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS) and n-type Si have attracted wide interest because they promise cost-effectiveness and high efficiency. However, the limited conductivity of PEDOT:PSS leads to inefficient hole transport in the heterojunction device. Therefore, a dense top-contact metal grid electrode is required to ensure efficient charge collection. Unfortunately, a large metal grid coverage ratio leads to undesirable optical loss. Here, we develop a strategy to balance PEDOT:PSS conductivity and grid optical transmittance via a buried molybdenum oxide/silver grid electrode. In addition, the grid electrode coverage ratio is optimized to reduce its light-shading effect. The buried electrode dramatically reduces the device series resistance, which leads to a higher fill factor (FF). With the optimized buried electrode, a record FF of 80% is achieved for flat Si/PEDOT:PSS heterojunction devices. By further enhancing adhesion between the PEDOT:PSS film and the Si substrate with a chemically cross-linkable silane, a power conversion efficiency of 16.3% is achieved for organic/textured Si heterojunction devices. Our results provide a path to overcoming the limitations of the organic semiconductor and enhancing organic/Si heterojunction solar cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
2014-09-30
The Maui Smart Grid Project (MSGP) is under the leadership of the Hawaii Natural Energy Institute (HNEI) of the University of Hawaii at Manoa. The project team includes Maui Electric Company, Ltd. (MECO), Hawaiian Electric Company, Inc. (HECO), Sentech (a division of SRA International, Inc.), Silver Spring Networks (SSN), Alstom Grid, Maui Economic Development Board (MEDB), University of Hawaii-Maui College (UHMC), and the County of Maui. MSGP was supported by the U.S. Department of Energy (DOE) under Cooperative Agreement Number DE-FC26-08NT02871, with approximately 50% co-funding supplied by MECO. The project was designed to develop and demonstrate an integrated monitoring, communications, database, applications, and decision support solution that aggregates renewable energy (RE), other distributed generation (DG), energy storage, and demand response technologies in a distribution system to achieve both distribution and transmission-level benefits. The application of these new technologies and procedures will increase MECO’s visibility into system conditions, with the expected benefits of enabling more renewable energy resources to be integrated into the grid, improving service quality, increasing overall reliability of the power system, and ultimately reducing costs to both MECO and its customers.
Use of Emerging Grid Computing Technologies for the Analysis of LIGO Data
NASA Astrophysics Data System (ADS)
Koranda, Scott
2004-03-01
The LIGO Scientific Collaboration (LSC) today faces the challenge of enabling analysis of terabytes of LIGO data by hundreds of scientists from institutions all around the world. To meet this challenge the LSC is developing tools, infrastructure, applications, and expertise leveraging Grid Computing technologies available today, and making compute resources at sites across the United States and Europe available to LSC scientists. We use digital credentials for strong and secure authentication and authorization to compute resources and data. Building on top of products from the Globus project for high-speed data transfer and information discovery we have created the Lightweight Data Replicator (LDR) to securely and robustly replicate data to resource sites. We have deployed at our computing sites the Virtual Data Toolkit (VDT) Server and Client packages, developed in collaboration with our partners in the GriPhyN and iVDGL projects, providing uniform access to distributed resources for users and their applications. Taken together these Grid Computing technologies and infrastructure have formed the LSC DataGrid--a coherent and uniform environment across two continents for the analysis of gravitational-wave detector data. Much work, however, remains to scale current analyses, and recent lessons learned need to be integrated into the next generation of Grid middleware.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basso, T.
Public-private partnerships have been a mainstay of the U.S. Department of Energy and the National Renewable Energy Laboratory (DOE/NREL) approach to research and development. These partnerships also include technology development that enables grid modernization and distributed energy resources (DER) advancement, especially renewable energy systems integration with the grid. Through DOE/NREL and industry support of Institute of Electrical and Electronics Engineers (IEEE) standards development, the IEEE 1547 series of standards has helped shape the way utilities and other businesses have worked together to realize increasing amounts of DER interconnected with the distribution grid. And more recently, the IEEE 2030 series of standards is helping to further realize greater implementation of communications and information technologies that provide interoperability solutions for enhanced integration of DER and loads with the grid. For these standards development partnerships, for approximately every $1 of federal funding, industry partnering has contributed $5. In this report, the status update is presented for the American National Standards IEEE 1547 and IEEE 2030 series of standards. A short synopsis of the history of the 1547 standards is first presented, then the current status and future direction of the ongoing standards development activities are discussed.
Contributing opportunistic resources to the grid with HTCondor-CE-Bosco
NASA Astrophysics Data System (ADS)
Weitzel, Derek; Bockelman, Brian
2017-10-01
The HTCondor-CE [1] is the primary Compute Element (CE) software for the Open Science Grid. While it offers many advantages for large sites, for smaller, WLCG Tier-3 sites or opportunistic clusters, it can be a difficult task to install, configure, and maintain the HTCondor-CE. Installing a CE typically involves understanding several pieces of software, installing hundreds of packages on a dedicated node, updating several configuration files, and implementing grid authentication mechanisms. On the other hand, accessing remote clusters from personal computers has been dramatically improved with Bosco: site admins only need to set up SSH public key authentication and appropriate accounts on a login host. In this paper, we take a new approach with the HTCondor-CE-Bosco, a CE which combines the flexibility and reliability of the HTCondor-CE with the easy-to-install Bosco. The administrators of the opportunistic resource are not required to install any software: only SSH access and a user account are required from the host site. The OSG can then run the grid-specific portions from a central location. This provides a new, more centralized, model for running grid services, which complements the traditional distributed model. We will show the architecture of a HTCondor-CE-Bosco enabled site, as well as feedback from multiple sites that have deployed it.
A Security-façade Library for Virtual-observatory Software
NASA Astrophysics Data System (ADS)
Rixon, G.
2009-09-01
The security-façade library implements, for Java, IVOA's security standards. It supports the authentication mechanisms for SOAP and REST web-services, the sign-on mechanisms (with MyProxy, AstroGrid Accounts protocol or local credential-caches), the delegation protocol, and RFC3820-enabled HTTPS for Apache Tomcat. Using the façade, a developer who is not a security specialist can easily add access control to a virtual-observatory service and call secured services from an application. The library has been an internal part of AstroGrid software for some time and it is now offered for use by other developers.
An Analysis for an Internet Grid to Support Space Based Operations
NASA Technical Reports Server (NTRS)
Bradford, Robert; McNair, Ann R. (Technical Monitor)
2002-01-01
Currently, and in the past, dedicated communication circuits and "network services" with very stringent performance requirements have been used to support manned and unmanned mission critical ground operations at GSFC, JSC, MSFC, KSC and other NASA facilities. Because of the evolution of network technology, it is time to investigate other approaches to providing mission services for space ground and flight operations. In various scientific disciplines, effort is under way to develop network/computing grids. These grids, consisting of networks and computing equipment, are enabling lower cost science. Specifically, earthquake research is headed in this direction. With a standard for network and computing interfaces using a grid, a researcher would not be required to develop and engineer NASA/DoD-specific interfaces with the attendant increased cost. The Internet Protocol (IP), the CCSDS packet specification, Reed-Solomon coding for satellite error correction, etc., can be adopted/standardized to provide these interfaces. Generally, most interfaces are developed at least to some degree end to end. This study would investigate the feasibility of using the existing standards and protocols necessary to implement a SpaceOps Grid. New interface definitions, or adoption/modification of existing ones, would be required for the various space operational services: voice (both space-based and ground), video, telemetry, commanding and planning may each play a role to some as-yet-undefined level. Security will be a separate focus in the study, since security is such a large issue in using public networks. This SpaceOps Grid would be transparent to users. It would be analogous to the Ethernet protocol's ease of use, in that a researcher would plug in their experiment or instrument at one end and would be connected to the appropriate host or server without further intervention. Free flyers would be in this category as well.
They would be launched and would transmit without any further intervention by the researcher or ground ops personnel. The payback from developing these new approaches in support of manned and unmanned operations is lower cost, and they will enable direct participation by more people in organizations and educational institutions in space-based science. By lowering the high cost of space-based operations and networking, more resources will be available to the science community for science. With a specific grid in place, experiment development and operations would be much less costly by using standardized network interfaces. Because of the extensive connectivity on a global basis, significant numbers of people would participate in science who otherwise would not be able to participate.
Smart Grid Demonstration Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Craig; Carroll, Paul; Bell, Abigail
The National Rural Electric Cooperative Association (NRECA) organized the NRECA-U.S. Department of Energy (DOE) Smart Grid Demonstration Project (DE-OE0000222) to install and study a broad range of advanced smart grid technologies in a demonstration that spanned 23 electric cooperatives in 12 states. More than 205,444 pieces of electronic equipment and more than 100,000 minor items (brackets, labels, mounting hardware, fiber optic cable, etc.) were installed to upgrade and enhance the efficiency, reliability, and resiliency of the power networks at the participating co-ops. The objective of this project was to build a path for other electric utilities, and particularly electrical cooperatives, to adopt emerging smart grid technology when it can improve utility operations, thus advancing the co-ops’ familiarity and comfort with such technology. Specifically, the project executed multiple subprojects employing a range of emerging smart grid technologies to test their cost-effectiveness and, where the technology demonstrated value, provided case studies that will enable other electric utilities—particularly electric cooperatives— to use these technologies. NRECA structured the project according to the following three areas: Demonstration of smart grid technology; Advancement of standards to enable the interoperability of components; and Improvement of grid cyber security. We termed these three areas Technology Deployment Study, Interoperability, and Cyber Security. Although the deployment of technology and studying the demonstration projects at co-ops accounted for the largest portion of the project budget by far, we see our accomplishments in each of the areas as critical to advancing the smart grid. All project deliverables have been published. Technology Deployment Study: The deliverable was a set of 11 single-topic technical reports in areas related to the listed technologies. 
Each of these reports has already been submitted to DOE, distributed to co-ops, and posted for universal access at www.nreca.coop/smartgrid. This research is available for widespread distribution to both cooperative members and non-members. These reports are listed in Table 1.2. Interoperability: The deliverable in this area was the advancement of the MultiSpeak™ interoperability standard from version 4.0 to version 5.0, and improvement in the MultiSpeak™ documentation to include more than 100 use cases. This deliverable substantially expanded the scope and usability of MultiSpeak™, the most widely deployed utility interoperability standard, now in use by more than 900 utilities. MultiSpeak™ documentation can be accessed at www.multispeak.org. Cyber Security: NRECA’s starting point was to develop cyber security tools that incorporated succinct guidance on best practices. The deliverables were: cyber security extensions to MultiSpeak™, which allow more secure message exchanges; a Guide to Developing a Cyber Security and Risk Mitigation Plan; a Cyber Security Risk Mitigation Checklist; a Cyber Security Plan Template that co-ops can use to create their own cyber security plans; and Security Questions for Smart Grid Vendors.
Gridded Data in the Arctic; Benefits and Perils of Publicly Available Grids
NASA Astrophysics Data System (ADS)
Coakley, B.; Forsberg, R.; Gabbert, R.; Beale, J.; Kenyon, S. C.
2015-12-01
Our understanding of the Arctic Ocean has been hugely advanced by the release of gridded bathymetry and potential field anomaly grids. The Arctic Gravity Project grid achieves excellent, near-isotropic coverage of the earth north of 64˚N by combining land, satellite, airborne, submarine, surface ship and ice set-out measurements of gravity anomalies. Since the release of the V 2.0 grid in 2008, there has been extensive icebreaker activity across the Amerasia Basin due to mapping of the Arctic coastal nations' Extended Continental Shelves (ECS). While grid resolution has been steadily improving over time, the addition of higher resolution and better navigated data highlights some distortions in the grid that may influence interpretation. In addition to the new ECS data sets, gravity anomaly data has been collected from other vessels; notably the Korean icebreaker Araon, the Japanese icebreaker Mirai and the German icebreaker Polarstern. Also, the GRAV-D project of the US National Geodetic Survey has flown airborne surveys over much of Alaska. These data will be included in the new AGP grid, which will result in a much improved product when version 3.0 is released in 2015. To make use of these measurements, it is necessary to compile them into a continuous spatial representation. Compilation is complicated by differences in survey parameters, gravimeter sensitivity and reduction methods. Cross-over errors are the classic means to assess the repeatability of track measurements. Prior to the introduction of near-universal GPS positioning, positional uncertainty was evaluated by cross-over analysis. GPS positions can be treated as more or less true, enabling evaluation of differences due to contrasting sensitivity, reference and reduction techniques. For the most part, cross-over errors for tracks of gravity anomaly data collected since 2008 are less than 0.5 mGals, supporting the compilation of these data with only slight adjustments. 
Given the different platforms used for various Arctic Ocean surveys, registration between bathymetric and gravity anomaly grids cannot be assumed. Inverse methods, which assume co-registration of data, sometimes produce surprising results when well-constrained gravity grid values are inverted against interpolated bathymetry.
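As a rough illustration of the cross-over analysis described above, the following sketch (hypothetical anomaly values, not AGP data) computes the mean and RMS mismatch where two survey tracks cross, the kind of statistic used to decide whether a track needs a slight bias adjustment before compilation:

```python
# Illustrative sketch, not the AGP compilation code: cross-over
# differences at shared track intersections, summarized as a mean bias
# and an RMS error in milligals.

def crossover_stats(track_a, track_b):
    """track_a, track_b: {crossover_id: anomaly_mGal} at shared crossings."""
    diffs = [track_a[k] - track_b[k] for k in track_a if k in track_b]
    bias = sum(diffs) / len(diffs)                        # mean cross-over error
    rms = (sum(d * d for d in diffs) / len(diffs)) ** 0.5  # spread of mismatch
    return bias, rms

a = {"x1": 12.3, "x2": -4.1, "x3": 30.0}  # invented anomalies, mGal
b = {"x1": 12.0, "x2": -4.5, "x3": 29.9}
bias, rms = crossover_stats(a, b)
print(round(bias, 3), round(rms, 3))
```

A small, consistent bias like this would be removed from one track before gridding; large or erratic residuals would instead flag navigation or reduction problems.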
Grid-Tied Photovoltaic Power System
NASA Technical Reports Server (NTRS)
Eichenberg, Dennis J.
2011-01-01
A grid-tied photovoltaic (PV) power system is connected directly to the utility distribution grid. Facility power can be obtained from the utility system as normal. The PV system is synchronized with the utility system to provide power for the facility, and excess power is provided to the utility. Operating costs of a PV power system are low compared to conventional power technologies. This method can displace the highest-cost electricity during times of peak demand in most climatic regions, and thus reduce grid loading. Net metering is often used, in which independent power producers such as PV power systems are connected to the utility grid via the customer's main service panels and meters. When the PV power system is generating more power than required at that location, the excess power is provided to the utility grid. The customer pays the net of the power purchased when the on-site power demand is greater than the on-site power production, and the excess power is returned to the utility grid. Power generated by the PV system reduces utility demand, and the surplus power aids the community. Modern PV panels are readily available, reliable, efficient, and economical, with a life expectancy of at least 25 years. Modern electronics have been the enabling technology behind grid-tied power systems, making them safe, reliable, efficient, and economical with a life expectancy equal to the modern PV panels. The grid-tied PV power system was successfully designed and developed, and this served to validate the basic principles developed and the theoretical work that was performed. Grid-tied PV power systems are reliable, maintenance-free, long-life power systems, and are of significant value to NASA and the community. Of particular value are the analytical tools and capabilities that have been successfully developed. Performance predictions can be made confidently for grid-tied PV systems of various scales. 
The work was done under the NASA Hybrid Power Management (HPM) Program, which is the integration of diverse power devices in an optimal configuration for space and terrestrial applications.
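The net-metering arrangement described in the abstract reduces to simple arithmetic. A minimal sketch, assuming a 1:1 credit for exported energy (actual rate structures vary by utility; all numbers here are invented):

```python
# Minimal net-metering billing sketch: the customer pays only for net
# consumption, and a surplus is credited at the same retail rate.
# Assumes 1:1 netting; real tariffs often credit exports differently.

def net_bill(consumed_kwh, generated_kwh, rate_per_kwh):
    """Bill for the period; a negative result is a credit to the customer."""
    net_kwh = consumed_kwh - generated_kwh
    return net_kwh * rate_per_kwh

print(net_bill(900, 650, 0.15))  # net consumer: pays for 250 kWh
print(net_bill(400, 520, 0.15))  # net exporter: receives a credit
```

The second case is the "surplus power aids the community" scenario: the meter effectively runs backward and the PV owner is credited for the exported energy.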
NASA Astrophysics Data System (ADS)
Baiya, Evanson G.
New energy technologies that provide real-time visibility of the electricity grid's performance, along with the ability to address unusual events in the grid and allow consumers to manage their energy use, are being developed in the United States. Primary drivers for the new technologies include the growing energy demand, tightening environmental regulations, aging electricity infrastructure, and rising consumer demand to become more involved in managing individual energy usage. In the literature and in practice, it is unclear if, and to what extent, residential consumers will adopt smart grid technologies. The purpose of this quantitative study was to examine the relationships between demographic characteristics, perceptions, and the likelihood of adopting smart grid technologies among residential energy consumers. The results of a 31-item survey were analyzed for differences within the Idaho consumers and compared against national consumers. Analysis of variance was used to examine possible differences between the dependent variable of likelihood of adopting smart grid technologies and the independent variables of age, gender, residential ownership, and residential location. No differences were found among Idaho consumers in their likelihood of adopting smart grid technologies. An independent-samples t-test was used to examine possible differences between the two groups of Idaho consumers and national consumers in their level of interest in receiving detailed feedback information on energy usage, the added convenience of the smart grid, renewable energy, the willingness to pay for infrastructure costs, and the likelihood of adopting smart grid technologies. The level of interest in receiving detailed feedback information on energy usage was significantly different between the two groups (t = 3.11, p = .0023), while the other variables were similar. 
The study contributes to technology adoption research regarding specific consumer perceptions and provides a framework that estimates the likelihood of adopting smart grid technologies by residential consumers. The study findings could assist public utility managers and technology adoption researchers as they develop strategies to enable wide-scale adoption of smart grid technologies as a solution to the energy problem. Future research should be conducted among commercial and industrial energy consumers to further validate the findings and conclusions of this research.
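The group comparison reported above rests on an independent-samples t-test. A self-contained sketch of the statistic (Welch's form, which tolerates unequal variances; the data below are synthetic, not the study's survey responses):

```python
# Welch's t statistic for two independent samples, built only on the
# standard library. Sample data are invented Likert-style responses.
import statistics as st

def welch_t(sample1, sample2):
    """t = (mean1 - mean2) / sqrt(var1/n1 + var2/n2), Welch's form."""
    m1, m2 = st.mean(sample1), st.mean(sample2)
    v1, v2 = st.variance(sample1), st.variance(sample2)  # sample variances
    n1, n2 = len(sample1), len(sample2)
    return (m1 - m2) / (v1 / n1 + v2 / n2) ** 0.5

idaho = [4, 5, 3, 4, 5, 4, 3, 5]      # hypothetical interest ratings
national = [3, 3, 4, 2, 3, 4, 3, 2]
t = welch_t(idaho, national)
print(round(t, 2))
```

A t value this far from zero, with an appropriately computed p-value, is what justifies a claim like "significantly different between the two groups" in the abstract.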
Grid Computing: Topology-Aware, Peer-to-Peer, Power-Aware, and Embedded Web Services
2003-09-22
Distributed Simulation: Time Management enables temporal causality to be enforced in distributed simulations, typically via a Lower Bound Time ... algorithm. The Distinguished Root Node Algorithm was developed as a topology-aware time management service; it relies on a tree from end-hosts to a
2004-01-01
Women, Messages and Media: Understanding Human Communication Introduction: One of the most critical aspects of transforming information...and Porter 1982) Wilbur Schramm and William Porter, Men, Women, Messages and Media: Understanding Human Communication (New York, Harper and Rowe
2006-09-30
coastal phenomena. OBJECTIVES SURA is creating a SCOOP “Grid” that extends the interoperability enabled by the World Wide Web. The coastal ... community faces special challenges with respect to achieving a level of interoperability that can leverage emerging Grid technologies. With that in mind
NASA Astrophysics Data System (ADS)
Hey, Tony
2002-08-01
After defining what is meant by the term 'e-Science', this talk will survey the activity on e-Science and Grids in Europe. The two largest initiatives in Europe are the European Commission's portfolio of Grid projects and the UK e-Science program. The EU, under its Framework Programme, is funding nearly twenty Grid projects in a wide variety of application areas. These projects are in varying stages of maturity and this talk will focus on the subset that has made the most significant progress. These include the EU DataGrid project led by CERN and two projects - EuroGrid and Grip - that evolved from the German national Unicore project. A summary of the other EU Grid projects will be included. The UK e-Science initiative is a 180M program entirely focused on e-Science applications requiring resource sharing, a virtual organization and a Grid infrastructure. The UK program is unique for three reasons: (1) the program covers all areas of science and engineering; (2) all of the funding is devoted to Grid application and middleware development and not to funding major hardware platforms; and (3) there is an explicit connection with industry to produce robust and secure industrial-strength versions of Grid middleware that could be used in business-critical applications. A part of the funding, around 50M, requiring an additional 'matching' $30M from industry in collaborative projects, forms the UK e-Science 'Core Program'. It is the responsibility of the Core Program to identify and support a set of generic middleware requirements that have emerged from a requirements analysis of the e-Science application projects. This has led to a much more data-centric vision for 'the Grid' in the UK, in which access to HPC facilities forms only one element. More important for the UK projects are issues such as enabling access to, and federation of, scientific data held in files, relational databases and other archives. 
Automatic annotation of data generated by high throughput experiments with XML-based metadata is seen as a key step towards developing higher-level Grid services for information retrieval and knowledge discovery. The talk will conclude with a survey of other Grid initiatives across Europe and look at possible future European projects.
Using Unsupervised Learning to Unlock the Potential of Hydrologic Similarity
NASA Astrophysics Data System (ADS)
Chaney, N.; Newman, A. J.
2017-12-01
By clustering environmental data into representative hydrologic response units (HRUs), hydrologic similarity aims to harness the covariance between a system's physical environment and its hydrologic response to create reduced-order models. This is the primary approach through which sub-grid hydrologic processes are represented in large-scale models (e.g., Earth System Models). Although the possibilities of hydrologic similarity are extensive, its practical implementations have been limited to 1-d bins of oversimplistic metrics of hydrologic response (e.g., topographic index)—this is a missed opportunity. In this presentation we will show how unsupervised learning is unlocking the potential of hydrologic similarity; clustering methods enable generalized frameworks to effectively and efficiently harness the petabytes of global environmental data to robustly characterize sub-grid heterogeneity in large-scale models. To illustrate the potential that unsupervised learning has towards advancing hydrologic similarity, we introduce a hierarchical clustering algorithm (HCA) that clusters very high resolution (30-100 meters) elevation, soil, climate, and land cover data to assemble a domain's representative HRUs. These HRUs are then used to parameterize the sub-grid heterogeneity in land surface models; for this study we use the GFDL LM4 model—the land component of the GFDL Earth System Model. To explore HCA and its impacts on the hydrologic system we use a ¼ grid cell in southeastern California as a test site. HCA is used to construct an ensemble of 9 different HRU configurations—each configuration has a different number of HRUs; for each ensemble member LM4 is run between 2002 and 2014 with a 26 year spinup. 
The analysis of the ensemble of model simulations shows that: 1) clustering the high-dimensional environmental data space leads to a robust representation of the role of the physical environment in the coupled water, energy, and carbon cycles at a relatively low number of HRUs; 2) the reduced-order model with around 300 HRUs effectively reproduces the fully distributed model simulation (30 meters) with less than 1/1000 of the computational expense; 3) assigning each grid cell of the fully distributed grid to an HRU via HCA enables novel visualization methods for large-scale models—this has significant implications for how these models are applied and evaluated. We will conclude by outlining the potential that this work has within operational prediction systems including numerical weather prediction, Earth System models, and Early Warning systems.
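As a toy illustration of the similarity idea (not the actual HCA, which operates hierarchically on far richer 30-100 m data), the following sketch groups grid cells into HRUs by similarity of invented environmental attributes using plain k-means:

```python
# Toy HRU construction: cluster grid cells by environmental attributes so
# each cluster serves as one hydrologic response unit. Attributes and
# values are invented; real inputs are high-resolution elevation, soil,
# climate, and land cover data.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:  # assign each cell to its nearest center
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        centers = [  # recompute centers; keep old center if a group empties
            tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

# Each cell: (normalized elevation, soil water capacity)
cells = [(0.1, 0.8), (0.12, 0.82), (0.9, 0.2), (0.88, 0.25), (0.5, 0.5), (0.52, 0.48)]
centers, hrus = kmeans(cells, k=3)
print(len(hrus))
```

Each HRU is then simulated once and its response mapped back to all of its member cells, which is the source of the large computational savings the abstract reports.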
Breton, Vincent; Dean, Kevin; Solomonides, Tony; Blanquer, I; Hernandez, V; Medico, E; Maglaveras, N; Benkner, S; Lonsdale, G; Lloyd, S; Hassan, K; McClatchey, R; Miguet, S; Montagnat, J; Pennec, X; De Neve, W; De Wagter, C; Heeren, G; Maigne, L; Nozaki, K; Taillet, M; Bilofsky, H; Ziegler, R; Hoffman, M; Jones, C; Cannataro, M; Veltri, P; Aloisio, G; Fiore, S; Mirto, M; Chouvarda, I; Koutkias, V; Malousi, A; Lopez, V; Oliveira, I; Sanchez, J P; Martin-Sanchez, F; De Moor, G; Claerhout, B; Herveg, J A M
2005-01-01
Over the last four years, a community of researchers working on Grid and High Performance Computing technologies started discussing the barriers and opportunities that grid technologies must face and exploit for the development of health-related applications. This interest led to the first Healthgrid conference, held in Lyon, France, on January 16th-17th, 2003, with the focus of creating increased awareness about the possibilities and advantages linked to the deployment of grid technologies in health, ultimately targeting the creation of a European/international grid infrastructure for health. The topics of this conference converged with the position of the eHealth division of the European Commission, whose mandate from the Lisbon Meeting was "To develop an intelligent environment that enables ubiquitous management of citizens' health status, and to assist health professionals in coping with some major challenges, risk management and the integration into clinical practice of advances in health knowledge." In this context "Health" involves not only clinical procedures but covers the whole range of information from the molecular level (genetic and proteomic information) over cells and tissues, to the individual and finally the population level (social healthcare). Grid technology offers the opportunity to create a common working backbone for all the different members of this large "health family" and will hopefully lead to increased awareness and interoperability among disciplines. The first HealthGrid conference led to the creation of the Healthgrid association, a non-profit research association legally incorporated in France but formed from the broad community of European researchers and institutions sharing expertise in health grids. After the second Healthgrid conference, held in Clermont-Ferrand on January 29th-30th, 2004, the need for a "white paper" on the current status and prospects of health grids was raised. 
Over fifty experts from different areas of grid technologies, eHealth applications and the medical world were invited to contribute to the preparation of this document.
Wireless Sensor Network for Electric Transmission Line Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alphenaar, Bruce
Generally, federal agencies tasked to oversee power grid reliability are dependent on data from grid infrastructure owners and operators in order to obtain a basic level of situational awareness. Since there are many owners and operators involved in the day-to-day functioning of the power grid, the task of accessing, aggregating and analyzing grid information from these sources is not a trivial one. Seemingly basic tasks such as synchronizing data timestamps between many different data providers and sources can be difficult as evidenced during the post-event analysis of the August 2003 blackout. In this project we investigate the efficacy and cost effectiveness of deploying a network of wireless power line monitoring devices as a method of independently monitoring key parts of the power grid as a complement to the data which is currently available to federal agencies from grid system operators. Such a network is modeled on proprietary power line monitoring technologies and networks invented, developed and deployed by Genscape, a Louisville, Kentucky based real-time energy information provider. Genscape measures transmission line power flow using measurements of electromagnetic fields under overhead high voltage transmission power lines in the United States and Europe. Opportunities for optimization of the commercial power line monitoring technology were investigated in this project to enable lower power consumption, lower cost and improvements to measurement methodologies. These optimizations were performed in order to better enable the use of wireless transmission line monitors in large network deployments (perhaps covering several thousand power lines) for federal situational awareness needs. Power consumption and cost reduction were addressed by developing a power line monitor using a low power, low cost wireless telemetry platform known as the ''Mote''. Motes were first developed as smart sensor nodes in wireless mesh networking applications. 
On such a platform, it has been demonstrated in this project that wireless monitoring units can effectively deliver real-time transmission line power flow information for less than $500 per monitor. The data delivered by such a monitor has during the course of the project been integrated with a national grid situational awareness visualization platform developed by Oak Ridge National Laboratory. Novel vibration energy scavenging methods based on piezoelectric cantilevers were also developed as a proposed method to power such monitors, with a goal of further cost reduction and large-scale deployment. Scavenging methods developed during the project resulted in 50% greater power output than conventional cantilever-based vibrational energy scavenging devices typically used to power smart sensor nodes. Lastly, enhanced and new methods for electromagnetic field sensing using multi-axis magnetometers and infrared reflectometry were investigated for potential monitoring applications in situations with a high density of power lines or high levels of background 60 Hz noise in order to isolate power lines of interest from other power lines in close proximity. The goal of this project was to investigate and demonstrate the feasibility of using small form factor, highly optimized, low cost, low power, non-contact, wireless electric transmission line monitors for delivery of real-time, independent power line monitoring for the US power grid. 
The project was divided into three main types of activity as follows: (1) Research into expanding the range of applications for non-contact power line monitoring to enable large scale low cost sensor network deployments (Tasks 1, 2); (2) Optimization of individual sensor hardware components to reduce size, cost and power consumption and testing in a pilot field study (Tasks 3, 5); and (3) Demonstration of the feasibility of using the data from the network of power line monitors via a range of custom developed alerting and data visualization applications to deliver real-time information to federal agencies and others tasked with grid reliability (Tasks 6, 8).
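The non-contact monitors described above infer line loading from the magnetic field measured at ground level beneath the conductors. As a rough, hypothetical sketch (not Genscape's proprietary method), the field of a single long straight conductor follows from Ampère's law, and the relation can be inverted to estimate current from a field reading:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def b_field(current_a: float, distance_m: float) -> float:
    """Magnetic flux density (tesla) at a given distance from a long straight conductor."""
    return MU0 * current_a / (2 * math.pi * distance_m)

def current_from_field(b_tesla: float, distance_m: float) -> float:
    """Invert the relation to estimate line current from a field measurement."""
    return b_tesla * 2 * math.pi * distance_m / MU0

# A 1000 A line measured 20 m below the conductor: about 10 microtesla
b = b_field(1000.0, 20.0)
```

A real multi-conductor line superposes the fields of all phases, which is why the abstract mentions multi-axis magnetometers for isolating lines in close proximity.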
Tariff Considerations for Micro-Grids in Sub-Saharan Africa
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reber, Timothy J.; Booth, Samuel S.; Cutler, Dylan S.
This report examines some of the key drivers and considerations policymakers and decision makers face when deciding if and how to regulate electricity tariffs for micro-grids. Presenting a range of tariff options, from mandating some variety of national (uniform) tariff to allowing micro-grid developers and operators to set fully cost-reflective tariffs, it examines various benefits and drawbacks of each. In addition, the report explores various types of cross-subsidies and other transitional forms of regulation that may offer a regulatory middle ground that can help balance the often competing goals of providing price control on electricity service in the name of social good while still providing a means for investors to ensure high enough returns on their investment to attract the necessary capital financing to the market. Using the REopt tool developed by the U.S. Department of Energy's National Renewable Energy Laboratory to inform their study, the authors modeled a few representative micro-grid systems and the resultant levelized cost of electricity, lending context and scale to the consideration of these tariff questions. This simple analysis provides an estimate of the gap between current tariff regimes and the tariffs that would be necessary for developers to recover costs and attract investment, offering further insight into the potential scale of subsidies or other grants that may be required to enable micro-grid development under current regulatory structures. It explores potential options for addressing this gap while trying to balance stakeholder needs, from subsidized national tariffs to lightly regulated cost-reflective tariffs to more of a compromise approach, such as different standards of regulation based on the size of a micro-grid.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magee, Thoman
The Consolidated Edison, Inc., of New York (Con Edison) Secure Interoperable Open Smart Grid Demonstration Project (SGDP), sponsored by the United States (US) Department of Energy (DOE), demonstrated that the reliability, efficiency, and flexibility of the grid can be improved through a combination of enhanced monitoring and control capabilities using systems and resources that interoperate within a secure services framework. The project demonstrated the capability to shift, balance, and reduce load where and when needed in response to system contingencies or emergencies by leveraging controllable field assets. The range of field assets includes curtailable customer loads, distributed generation (DG), battery storage, electric vehicle (EV) charging stations, building management systems (BMS), home area networks (HANs), high-voltage monitoring, and advanced metering infrastructure (AMI). The SGDP enables the seamless integration and control of these field assets through a common, cyber-secure, interoperable control platform, which integrates a number of existing legacy control and data systems, as well as new smart grid (SG) systems and applications. By integrating advanced technologies for monitoring and control, the SGDP helps target and reduce peak load growth, improves the reliability and efficiency of Con Edison’s grid, and increases the ability to accommodate the growing use of distributed resources. Con Edison is dedicated to lowering costs, improving reliability and customer service, and reducing its impact on the environment for its customers. These objectives also align with the policy objectives of New York State as a whole. To help meet these objectives, Con Edison’s long-term vision for the distribution grid relies on the successful integration and control of a growing penetration of distributed resources, including demand response (DR) resources, battery storage units, and DG. 
For example, Con Edison is expecting significant long-term growth of DG. The SGDP enables the efficient, flexible integration of these disparate resources and lays the architectural foundations for future scalability. Con Edison assembled an SGDP team of more than 16 project partners, including technology vendors and participating organizations, with the Con Edison team providing overall guidance and project management. Project team members are listed in Table 1-1.
Spaceflight Operations Services Grid (SOSG)
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Thigpen, William W.
2004-01-01
In an effort to adapt existing space flight operations services to new emerging Grid technologies we are developing a Grid-based prototype space flight operations Grid. This prototype is based on the operational services being provided to the International Space Station's Payload operations located at the Marshall Space Flight Center, Alabama. The prototype services will be Grid or Web enabled and provided to four user communities through portal technology. Users will have the opportunity to assess the value and feasibility of Grid technologies to their specific areas or disciplines. In this presentation descriptions of the prototype development, User-based services, Grid-based services and status of the project will be presented. Expected benefits, findings and observations (if any) to date will also be discussed. The focus of the presentation will be on the project in general, status to date and future plans. The End-use services to be included in the prototype are voice, video, telemetry, commanding, collaboration tools and visualization among others. Security is addressed throughout the project and is being designed into the Grid technologies and standards development. The project is divided into three phases. Phase One establishes the baseline User-based services required for space flight operations listed above. Phase Two involves applying Grid/web technologies to the User-based services and development of portals for access by users. Phase Three will allow NASA and end users to evaluate the services and determine the future of the technology as applied to space flight operational services. Although Phase One, which includes the development of the quasi-operational User-based services of the prototype, will be completed by March 2004, the application of Grid technologies to these services will have just begun. We will provide status of the Grid technologies to the individual User-based services. 
This effort will result in an extensible environment that incorporates existing and new spaceflight services into a standards-based framework providing current and future NASA programs with cost savings and new and evolvable methods to conduct science. This project will demonstrate how the use of new programming paradigms such as web and grid services can provide three significant benefits to the cost-effective delivery of spaceflight services. They will enable applications to operate more efficiently by being able to utilize pooled resources. They will also permit the reuse of common services to rapidly construct new and more powerful applications. Finally they will permit easy and secure access to services via a combination of grid and portal technology by a distributed user community consisting of NASA operations centers, scientists, the educational community and even the general population as outreach. The approach will be to deploy existing mission support applications such as the Telescience Resource Kit (TReK) and new applications under development, such as the Grid Video Distribution System (GViDS), together with existing grid applications and services such as high-performance computing and visualization services provided by NASA s Information Power Grid (IPG) in the MSFC s Payload Operations Integration Center (POIC) HOSC Annex. Once the initial applications have been moved to the grid, a process will begin to apply the new programming paradigms to integrate them where possible. For example, with GViDS, instead of viewing the Distribution service as an application that must run on a single node, the new approach is to build it such that it can be dispatched across a pool of resources in response to dynamic loads. To make this a reality, reusable services will be critical, such as a brokering service to locate appropriate resource within the pool. This brokering service can then be used by other applications such as the TReK. 
To expand further, if the GViDS application is constructed using a services-based model, then other applications such as the Video Auditorium can then use GViDS as a service to easily incorporate these video streams into a collaborative conference. Finally, as these applications are re-factored into this new services-based paradigm, the construction of portals to integrate them will be a simple process. As a result, portals can be tailored to meet the requirements of specific user communities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gevorgian, Vahan; Koralewicz, Przemyslaw; Wallen, Robb
The rapid expansion of wind power has led many transmission system operators to require modern wind power plants to comply with strict interconnection requirements. Such requirements involve various aspects of wind power plant operation, including fault ride-through and power quality performance as well as the provision of ancillary services to enhance grid reliability. During recent years, the National Renewable Energy Laboratory (NREL) of the U.S. Department of Energy has developed a new, groundbreaking testing apparatus and methodology to test and demonstrate many existing and future advanced controls for wind generation (and other renewable generation technologies) on the multimegawatt scale and medium-voltage levels. This paper describes the capabilities and control features of NREL's 7-MVA power electronic grid simulator (also called a controllable grid interface, or CGI) that enables testing many active and reactive power control features of modern wind turbine generators -- including inertial response, primary and secondary frequency responses, and voltage regulation -- under a controlled, medium-voltage grid environment. In particular, this paper focuses on the specifics of testing the balanced and unbalanced fault ride-through characteristics of wind turbine generators under simulated strong and weak medium-voltage grid conditions. In addition, this paper provides insights on the power hardware-in-the-loop feature implemented in the CGI to emulate (in real time) the conditions that might exist in various types of electric power systems under normal operations and/or contingency scenarios. Using actual test examples and simulation results, this paper describes the value of CGI as an ultimate modeling validation tool for all types of 'grid-friendly' controls by wind generation.
Calculating depths to shallow magnetic sources using aeromagnetic data from the Tucson Basin
Casto, Daniel W.
2001-01-01
Using gridded high-resolution aeromagnetic data, the performance of several automated 3-D depth-to-source methods was evaluated over shallow control sources based on how close their depth estimates came to the actual depths to the tops of the sources. For all three control sources, only the simple analytic signal method, the local wavenumber method applied to the vertical integral of the magnetic field, and the horizontal gradient method applied to the pseudo-gravity field provided median depth estimates that were close (-11% to +14% error) to the actual depths. Careful attention to data processing was required in order to calculate a sufficient number of depth estimates and to reduce the occurrence of false depth estimates. For example, to eliminate sampling bias, high-frequency noise and interference from deeper sources, it was necessary to filter the data before calculating derivative grids and subsequent depth estimates. To obtain smooth spatial derivative grids using finite differences, the data had to be gridded at intervals less than one percent of the anomaly wavelength. Before finding peak values in the derived signal grids, it was necessary to remove calculation noise by applying a low-pass filter in the grid-line directions and to re-grid at an interval that enabled the search window to encompass only the peaks of interest. Using the methods that worked best over the control sources, depth estimates over geologic sites of interest suggested the possible occurrence of volcanics nearly 170 meters beneath a city landfill. Also, a throw of around 2 kilometers was determined for a detachment fault that has a displacement of roughly 6 kilometers.
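As a minimal illustration of the derivative-based processing the abstract describes (not the exact workflow used in the study), the total horizontal gradient of a gridded field can be computed with centered finite differences; the function and the synthetic Gaussian anomaly below are illustrative:

```python
import numpy as np

def horizontal_gradient_magnitude(field: np.ndarray, dx: float, dy: float) -> np.ndarray:
    """Total horizontal gradient |grad T| of a gridded field using
    centered finite differences (np.gradient)."""
    dT_dy, dT_dx = np.gradient(field, dy, dx)  # derivatives along rows (y) and columns (x)
    return np.hypot(dT_dx, dT_dy)

# Synthetic anomaly: a Gaussian high on a 10 m grid; the gradient peaks
# on the flanks and vanishes at the symmetric center.
x = np.linspace(-500.0, 500.0, 101)
X, Y = np.meshgrid(x, x)
T = 100.0 * np.exp(-(X**2 + Y**2) / (2 * 150.0**2))
hgm = horizontal_gradient_magnitude(T, dx=10.0, dy=10.0)
```

The abstract's point about gridding interval applies directly here: finite-difference derivatives of a coarsely sampled anomaly are noisy, which is why the authors re-gridded and low-pass filtered before peak picking.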
Mehl, Steffen W.; Hill, Mary C.
2013-01-01
This report documents the addition of ghost node Local Grid Refinement (LGR2) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference groundwater flow model. LGR2 provides the capability to simulate groundwater flow using multiple block-shaped higher-resolution local grids (a child model) within a coarser-grid parent model. LGR2 accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the grid-refinement interface boundary. LGR2 can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined groundwater systems. Traditional one-way coupled telescopic mesh refinement methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled ghost-node method of LGR2 provides a more rigorous coupling in which the solution accuracy is controlled by convergence criteria defined by the user. In realistic problems, this can result in substantially more accurate solutions, at the cost of increased computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflects conditions in both model grids. This report describes the method used by LGR2, evaluates accuracy and performance for two- and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH2) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR2.
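The iterative parent-child coupling that LGR2 performs can be sketched, under heavy simplification, as two grids exchanging interface values until they agree. The toy below solves a homogeneous steady 1D Laplace problem; the grid sizes, the Gauss-Seidel solver, and the single shared node are illustrative assumptions, not MODFLOW's ghost-node scheme:

```python
import numpy as np

def relax(h, n_sweeps=400):
    """Gauss-Seidel relaxation of the steady 1D Laplace equation;
    the first and last entries of h are fixed (Dirichlet) boundary heads."""
    for _ in range(n_sweeps):
        for i in range(1, len(h) - 1):
            h[i] = 0.5 * (h[i - 1] + h[i + 1])
    return h

def coupled_solve(tol=1e-8, max_iter=100):
    # Parent grid: x = 0..100 m, dx = 10 m; child refines the window x = 40..60 m
    parent = np.zeros(11)
    parent[0], parent[-1] = 10.0, 0.0
    child = np.zeros(11)                             # x = 40..60 m, dx = 2 m
    for _ in range(max_iter):
        relax(parent)
        old = child.copy()
        child[0], child[-1] = parent[4], parent[6]   # parent heads -> child boundary
        relax(child)
        parent[5] = child[5]                         # child head -> parent interior
        if np.max(np.abs(child - old)) < tol:        # interface values stopped changing
            break
    return parent, child

parent, child = coupled_solve()
```

In this homogeneous case the coupled solution is the exact linear head profile; the point of the iteration, as in LGR2, is that the interface values are not assumed once (as in one-way telescopic refinement) but re-balanced until a user-set convergence criterion is met.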
NASA Astrophysics Data System (ADS)
Deosarkar, S. D.; Ghatbandhe, A. S.
2014-01-01
Molecular interactions and structural fittings in binary ethylene glycol + ethanol (EGE, x_EG = 0.4111-0.0418) and ethylene glycol + water (EGW, x_EG = 0.1771-0.0133) mixtures were studied through the measurement of densities (ρ), viscosities (η), and refractive indices (n_D) at 303.15 K. Excess viscosities (η^E), molar volumes (V_m), excess molar volumes (V_m^E), and molar refractions (R_M) of both binary systems were computed from the measured properties. The measured and computed properties have been used to understand the molecular interactions between unlike solvents and the structural fittings in these binary mixtures.
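The excess molar volume reported above is conventionally computed from measured densities as V_m^E = (x1·M1 + x2·M2)/ρ_mix - x1·M1/ρ1 - x2·M2/ρ2. A minimal sketch with illustrative numbers (not the paper's measured data):

```python
def excess_molar_volume(x1, M1, rho1, M2, rho2, rho_mix):
    """Excess molar volume (cm^3/mol) of a binary mixture from densities.
    x1: mole fraction of component 1; M: molar masses (g/mol); rho: densities (g/cm^3)."""
    x2 = 1.0 - x1
    v_mix = (x1 * M1 + x2 * M2) / rho_mix        # molar volume of the real mixture
    v_ideal = x1 * M1 / rho1 + x2 * M2 / rho2    # mole-fraction average of pure volumes
    return v_mix - v_ideal

# Illustrative values for ethylene glycol (1) + water (2); the mixture
# density 1.03 g/cm^3 is an assumed number, not a measurement.
vE = excess_molar_volume(x1=0.10, M1=62.07, rho1=1.11, M2=18.02, rho2=0.996, rho_mix=1.03)
```

A negative V_m^E indicates the mixture packs more tightly than ideal mixing would predict, the usual signature of hydrogen-bonding interactions between unlike molecules.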
Individualized grid-enabled mammographic training system
NASA Astrophysics Data System (ADS)
Yap, M. H.; Gale, A. G.
2009-02-01
The PERFORMS self-assessment scheme measures individuals' skills in identifying key mammographic features on sets of known cases. One aspect of this is that it allows radiologists' skills to be trained, based on their data from this scheme. Consequently, a new strategy is introduced to provide revision training based on mammographic features that the radiologist has had difficulty with in these sets. To do this requires a large number of random cases to provide dynamic, unique, and up-to-date training modules for each individual. We propose GIMI (Generic Infrastructure in Medical Informatics) middleware as the solution to harvest cases from distributed grid servers. The GIMI middleware enables existing and legacy data to support healthcare delivery, research, and training. It is technology-agnostic, data-agnostic, and has a security policy. The trainee examines each case, indicating the location of regions of interest, and completes an evaluation form, to determine mammographic feature labelling, diagnosis, and decisions. For feedback, the trainee can choose to have immediate feedback after examining each case or batch feedback after examining a number of cases. All the trainees' results are recorded in a database which also contains their trainee profile. A full report can be prepared for the trainee after they have completed their training. This project demonstrates the practicality of a grid-based individualised training strategy and the efficacy in generating dynamic training modules within the coverage/outreach of the GIMI middleware. The advantages and limitations of the approach are discussed together with future plans.
Wildlife monitoring across multiple spatial scales using grid-based sampling
Kevin S. McKelvey; Samuel A. Cushman; Michael K. Schwartz; Leonard F. Ruggiero
2009-01-01
Recently, noninvasive genetic sampling has become the most effective way to reliably sample occurrence of many species. In addition, genetic data provide a rich data source enabling the monitoring of population status. The combination of genetically based animal data collected at known spatial coordinates with vegetation, topography, and other available covariates...
Pollux: Enhancing the Quality of Service of the Global Information Grid (GIG)
2009-06-01
and throughput of standard-based and/or COTS-based QoS-enabled pub/sub technologies, including DDS, JMS, Web Services, and CORBA. 2. The DDS QoS...of service pICKER (QUICKER) model-driven engineering (MDE) toolchain shown in Figure 8. QUICKER extends the Platform-Independent Component Modeling
GaN Micromechanical Resonators with Meshed Metal Bottom Electrode.
Ansari, Azadeh; Liu, Che-Yu; Lin, Chien-Chung; Kuo, Hao-Chung; Ku, Pei-Cheng; Rais-Zadeh, Mina
2015-03-17
This work describes a novel architecture to realize high-performance gallium nitride (GaN) bulk acoustic wave (BAW) resonators. The method is based on the growth of a thick GaN layer on a metal electrode grid. The fabrication process starts with the growth of a thin GaN buffer layer on a Si (111) substrate. The GaN buffer layer is patterned and trenches are made and refilled with sputtered tungsten (W)/silicon dioxide (SiO₂) forming passivated metal electrode grids. GaN is then regrown, nucleating from the exposed GaN seed layer and coalescing to form a thick GaN device layer. A metal electrode can be deposited and patterned on top of the GaN layer. This method enables vertical piezoelectric actuation of the GaN layer using its largest piezoelectric coefficient (d_33) for thickness-mode resonance. Having a bottom electrode also results in a higher coupling coefficient, useful for the implementation of acoustic filters. Growth of GaN on Si enables releasing the device from the frontside using isotropic xenon difluoride (XeF₂) etch and therefore eliminating the need for backside lithography and etching.
Summary of Utility Studies: Smart Grid Investment Grant Consumer Behavior Study Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappers, Peter; Todd, Annika; Goldman, Charles A.
2013-05-01
The U.S. Department of Energy’s (DOE’s) Smart Grid Investment Grant (SGIG) program is working with a subset of the 99 SGIG projects to assess the response of mass market consumers (i.e., residential and small commercial customers) to time-varying electricity prices (referred to herein as time-based rate programs) in conjunction with the deployment of advanced metering infrastructure (AMI) and associated technologies. The effort provides an opportunity to advance the electric industry’s understanding of consumer behavior. In addition, DOE is attempting to apply a consistent study design and analysis framework for the SGIG Consumer Behavior Studies (CBS). The aim is to collect information across the studies on variables and impacts that have been defined in a consistent manner. This will enable Lawrence Berkeley National Lab (LBNL), as DOE’s principal investigator for these Consumer Behavior Studies, to leverage the data from the individual studies and conduct comparative analysis of the impacts of AMI, time-based rate programs and enabling technologies that facilitate customer control, automation and information/feedback on customer energy usage.
Grid site availability evaluation and monitoring at CMS
Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; ...
2017-10-01
The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute with resources from a hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection/reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.
Software for Managing Parametric Studies
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; McCann, Karen M.; DeVivo, Adrian
2003-01-01
The Information Power Grid Virtual Laboratory (ILab) is a Practical Extraction and Reporting Language (PERL) graphical-user-interface computer program that generates shell scripts to facilitate parametric studies performed on the Grid. (The Grid denotes a worldwide network of supercomputers used for scientific and engineering computations involving data sets too large to fit on desktop computers.) Heretofore, parametric studies on the Grid have been impeded by the need to create control language scripts and edit input data files, painstaking tasks that are necessary for managing multiple jobs on multiple computers. ILab reflects an object-oriented approach to automation of these tasks: All data and operations are organized into packages in order to accelerate development and debugging. A container or document object in ILab, called an experiment, contains all the information (data and file paths) necessary to define a complex series of repeated, sequenced, and/or branching processes. For convenience and to enable reuse, this object is serialized to and from disk storage. At run time, the current ILab experiment is used to generate required input files and shell scripts, create directories, copy data files, and then both initiate and monitor the execution of all computational processes.
Experience with Multi-Tier Grid MySQL Database Service Resiliency at BNL
NASA Astrophysics Data System (ADS)
Wlodek, Tomasz; Ernst, Michael; Hover, John; Katramatos, Dimitrios; Packard, Jay; Smirnov, Yuri; Yu, Dantong
2011-12-01
We describe the use of F5's BIG-IP smart switch technology (3600 Series and Local Traffic Manager v9.0) to provide load balancing and automatic fail-over to multiple Grid services (GUMS, VOMS) and their associated back-end MySQL databases. This resiliency is introduced in front of the external application servers and also for the back-end database systems, which is what makes it "multi-tier". The combination of solutions chosen to ensure high availability of the services, in particular the database replication and fail-over mechanism, are discussed in detail. The paper explains the design and configuration of the overall system, including virtual servers, machine pools, and health monitors (which govern routing), as well as the master-slave database scheme and fail-over policies and procedures. Pre-deployment planning and stress testing will be outlined. Integration of the systems with our Nagios-based facility monitoring and alerting is also described. Application characteristics of GUMS and VOMS which enable effective clustering will be explained. We then summarize our practical experiences and real-world scenarios resulting from operating a major US Grid center, and assess the applicability of our approach to other Grid services in the future.
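The pool/health-monitor routing model described above can be illustrated in miniature: a round-robin pool that skips members a monitor has marked down. The class names and behavior below are a simplified sketch, not the BIG-IP configuration:

```python
import itertools

class Backend:
    """A pool member; 'healthy' is what a health monitor would toggle."""
    def __init__(self, name: str):
        self.name = name
        self.healthy = True

class Pool:
    """Round-robin pool that skips members marked down, mimicking (in
    miniature) the virtual-server / pool / monitor model."""
    def __init__(self, backends):
        self.backends = backends
        self._cycle = itertools.cycle(backends)

    def pick(self) -> Backend:
        # Try each member at most once per request before giving up.
        for _ in range(len(self.backends)):
            b = next(self._cycle)
            if b.healthy:
                return b
        raise RuntimeError("no healthy backends in pool")

pool = Pool([Backend("gums1"), Backend("gums2")])
pool.backends[0].healthy = False   # monitor marks gums1 down
b = pool.pick()                    # requests now route to gums2
```

A production load balancer layers timeouts, connection draining, and persistence on top of this, but the core routing decision is the same: the monitor's verdict gates pool membership.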
A Custom Approach for a Flexible, Real-Time and Reliable Software Defined Utility.
Zaballos, Agustín; Navarro, Joan; Martín De Pozuelo, Ramon
2018-02-28
Information and communication technologies (ICTs) have enabled the evolution of traditional electric power distribution networks towards a new paradigm referred to as the smart grid. However, the different elements that compose the ICT plane of a smart grid are usually conceived as isolated systems that typically result in rigid hardware architectures, which are hard to interoperate, manage and adapt to new situations. In recent years, software-defined systems that take advantage of software and high-speed data network infrastructures have emerged as a promising alternative to classic ad hoc approaches in terms of integration, automation, real-time reconfiguration and resource reusability. The purpose of this paper is to propose the usage of software-defined utilities (SDUs) to address the latent deployment and management limitations of smart grids. More specifically, the implementation of a smart grid's data storage and management system prototype by means of SDUs is introduced, which exhibits the feasibility of this alternative approach. This system features a hybrid cloud architecture able to meet the data storage requirements of electric utilities and adapt itself to their ever-evolving needs. Conducted experimentations endorse the feasibility of this solution and encourage practitioners to point their efforts in this direction.
WPS mediation: An approach to process geospatial data on different computing backends
NASA Astrophysics Data System (ADS)
Giuliani, Gregory; Nativi, Stefano; Lehmann, Anthony; Ray, Nicolas
2012-10-01
The OGC Web Processing Service (WPS) specification allows generating information by processing distributed geospatial data made available through Spatial Data Infrastructures (SDIs). However, current SDIs have limited analytical capacities and various problems emerge when trying to use them in data and computing-intensive domains such as environmental sciences. These problems are usually not or only partially solvable using single computing resources. Therefore, the Geographic Information (GI) community is trying to benefit from the superior storage and computing capabilities offered by distributed computing methods and technologies (e.g., Grids, Clouds). Currently, there is no commonly agreed approach to grid-enable WPS. No implementation allows one to seamlessly execute a geoprocessing calculation following user requirements on different computing backends, ranging from a stand-alone GIS server up to computer clusters and large Grid infrastructures. Considering this issue, this paper presents a proof of concept by mediating different geospatial and Grid software packages, and by proposing an extension of the WPS specification through two optional parameters. The applicability of this approach will be demonstrated using a Normalized Difference Vegetation Index (NDVI) mediated WPS process, highlighting benefits, and issues that need to be further investigated to improve performance.
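The NDVI used in the demonstration process is a simple band ratio, NDVI = (NIR - RED)/(NIR + RED), which is what makes it a convenient test payload for a mediated WPS: the same per-pixel computation can run on a single GIS server or be tiled across a cluster. A minimal sketch (array names are illustrative):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - RED) / (NIR + RED),
    with zero-reflectance pixels mapped to 0 instead of dividing by zero."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Three pixels: vegetated, bare, and no-data
pixels = ndvi(np.array([0.5, 0.3, 0.0]), np.array([0.1, 0.3, 0.0]))
```

Values near +1 indicate dense vegetation, values near 0 bare soil or no data, which is the interpretation the mediated process would return regardless of the backend it ran on.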
Analyzing system safety in lithium-ion grid energy storage
Rosewater, David; Williams, Adam
2015-10-08
As grid energy storage systems become more complex, it grows more difficult to design them for safe operation. This paper first reviews the properties of lithium-ion batteries that can produce hazards in grid scale systems. Then the conventional safety engineering technique Probabilistic Risk Assessment (PRA) is reviewed to identify its limitations in complex systems. To address this gap, new research is presented on the application of Systems-Theoretic Process Analysis (STPA) to a lithium-ion battery based grid energy storage system. STPA is anticipated to fill the gaps recognized in PRA for designing complex systems and hence be more effective or less costly to use during safety engineering. It was observed that STPA is able to capture causal scenarios for accidents not identified using PRA. Additionally, STPA enabled a more rational assessment of uncertainty (all that is not known) thereby promoting a healthy skepticism of design assumptions. Lastly, we conclude that STPA may indeed be more cost effective than PRA for safety engineering in lithium-ion battery systems. However, further research is needed to determine if this approach actually reduces safety engineering costs in development, or improves industry safety standards.
The island dynamics model on parallel quadtree grids
NASA Astrophysics Data System (ADS)
Mistani, Pouria; Guittet, Arthur; Bochkov, Daniil; Schneider, Joshua; Margetis, Dionisios; Ratsch, Christian; Gibou, Frederic
2018-05-01
We introduce an approach for simulating epitaxial growth by use of an island dynamics model on a forest of quadtree grids, and in a parallel environment. To this end, we use a parallel framework introduced in the context of the level-set method. This framework utilizes: discretizations that achieve a second-order accurate level-set method on non-graded adaptive Cartesian grids for solving the associated free boundary value problem for surface diffusion; and an established library for the partitioning of the grid. We consider the cases with: irreversible aggregation, which amounts to applying Dirichlet boundary conditions at the island boundary; and an asymmetric (Ehrlich-Schwoebel) energy barrier for attachment/detachment of atoms at the island boundary, which entails the use of a Robin boundary condition. We provide the scaling analyses performed on the Stampede supercomputer and numerical examples that illustrate the capability of our methodology to efficiently simulate different aspects of epitaxial growth. The combination of adaptivity and parallelism in our approach enables simulations that are several orders of magnitude faster than those reported in the recent literature and, thus, provides a viable framework for the systematic study of mound formation on crystal surfaces.
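The two boundary conditions named above can be written out explicitly. This is the standard island-dynamics notation from the literature (adatom density ρ, equilibrium density ρ_eq, diffusion constant D, attachment rate κ), offered here as a reading aid rather than as the authors' exact formulation:

```latex
% Dirichlet condition: irreversible aggregation (every adatom reaching
% the island boundary attaches immediately).
\rho = 0 \quad \text{on } \partial\Omega
% Robin condition: a finite Ehrlich-Schwoebel attachment/detachment
% barrier, modeled through a finite attachment rate \kappa.
D\,\frac{\partial \rho}{\partial n} = \kappa\,\bigl(\rho - \rho_{\mathrm{eq}}\bigr) \quad \text{on } \partial\Omega
```

In the limit κ → ∞ the Robin condition reduces to ρ = ρ_eq, recovering the barrier-free case.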
A Green Prison: The Santa Rita Jail Campus Microgrid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marnay, Chris; DeForest, Nicholas; Lai, Judy
2012-01-22
A large microgrid project is nearing completion at Alameda County’s twenty-two-year-old 45 ha 4,000-inmate Santa Rita Jail, about 70 km east of San Francisco. Often described as a green prison, it has a considerable installed base of distributed energy resources (DER) including an eight-year old 1.2 MW PV array, a five-year old 1 MW fuel cell with heat recovery, and considerable efficiency investments. A current US$14 M expansion adds a 2 MW-4 MWh Li-ion battery, a static disconnect switch, and various controls upgrades. During grid blackouts, or when conditions favor it, the Jail can now disconnect from the grid and operate as an island, using the on-site resources described together with its back-up diesel generators. In other words, the Santa Rita Jail is a true microgrid, or μgrid, because it fills both requirements, i.e. it is a locally controlled system, and it can operate both grid connected and islanded. The battery’s electronics includes Consortium for Electric Reliability Technology Solutions (CERTS) Microgrid technology. This enables the battery to maintain energy balance using droops without need for a fast control system.
omniClassifier: a Desktop Grid Computing System for Big Data Prediction Modeling
Phan, John H.; Kothari, Sonal; Wang, May D.
2016-01-01
Robust prediction models are important for numerous science, engineering, and biomedical applications. However, best-practice procedures for optimizing prediction models can be computationally complex, especially when choosing models from among hundreds or thousands of parameter choices. Computational complexity has further increased with the growth of data in these fields, concurrent with the era of “Big Data”. Grid computing is a potential solution to the computational challenges of Big Data. Desktop grid computing, which uses idle CPU cycles of commodity desktop machines, coupled with commercial cloud computing resources, can enable research labs to gain easier and more cost-effective access to vast computing resources. We have developed omniClassifier, a multi-purpose prediction modeling application that provides researchers with a tool for conducting machine learning research within the guidelines of recommended best practices. omniClassifier is implemented as a desktop grid computing system using the Berkeley Open Infrastructure for Network Computing (BOINC) middleware. In addition to describing implementation details, we use various gene expression datasets to demonstrate the potential scalability of omniClassifier for efficient and robust Big Data prediction modeling. A prototype of omniClassifier can be accessed at http://omniclassifier.bme.gatech.edu/. PMID:27532062
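The combinatorial cost described above (a grid of parameter choices, each scored with k-fold cross-validation) is what makes the workload embarrassingly parallel and hence a natural fit for a BOINC-style desktop grid. The toy sketch below is not omniClassifier's code; its "classifier" is a trivial threshold rule, and the point is only the multiplication of grid points by folds:

```python
import itertools
import random

def kfold_indices(n, k):
    """Yield (train, test) index lists for k roughly equal interleaved folds."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def grid_search(data, labels, param_grid, k=5):
    """Return (best_params, n_evaluations) for a trivial threshold classifier.

    The toy model ignores the training split (no real fitting happens);
    what matters here is the evaluation count: |grid| x k folds.
    """
    names = sorted(param_grid)
    evals, best = 0, (None, -1.0)
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        scores = []
        for train, test in kfold_indices(len(data), k):
            preds = [data[i] > params["threshold"] for i in test]
            scores.append(sum(p == labels[i] for p, i in zip(preds, test)) / len(test))
            evals += 1  # each (grid point, fold) pair is one independent job
        mean = sum(scores) / len(scores)
        if mean > best[1]:
            best = (params, mean)
    return best[0], evals

random.seed(0)
data = [random.random() for _ in range(100)]
labels = [x > 0.5 for x in data]
params, n_evals = grid_search(
    data, labels, {"threshold": [0.2, 0.5, 0.8], "scale": [1, 2]}, k=5)
```

With just 6 grid points and 5 folds this is already 30 independent evaluations; realistic grids with thousands of parameter choices are exactly the jobs a desktop grid can distribute.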
Grid site availability evaluation and monitoring at CMS
NASA Astrophysics Data System (ADS)
Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; Lammel, Stephan; Sciabà, Andrea
2017-10-01
The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute with resources from hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection/reaction to failures and a more dynamic handling of computing resources. Enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.
PNNL’s Shared Perspectives Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2015-09-25
Shared Perspectives, one of the technologies within the PNNL-developed GridOPTICS capability suite, enables neighboring organizations, such as different electric utilities, to more effectively partner to solve outages and other grid problems. Shared Perspectives provides a means for organizations to safely stream information from different organizational service areas; the technology then combines and aligns this information into a common, global view, enhancing global situation awareness that can reduce the time it takes to talk through a problem and identify solutions. The technology potentially offers applications in other areas, such as disaster response; collaboration in the monitoring/assessment of real-time events (e.g., hurricanes, earthquakes, and tornadoes); as well as military uses.
PNNL’s Shared Perspectives Technology
None
2018-01-16
Shared Perspectives, one of the technologies within the PNNL-developed GridOPTICS capability suite, enables neighboring organizations, such as different electric utilities, to more effectively partner to solve outages and other grid problems. Shared Perspectives provides a means for organizations to safely stream information from different organizational service areas; the technology then combines and aligns this information into a common, global view, enhancing global situation awareness that can reduce the time it takes to talk through a problem and identify solutions. The technology potentially offers applications in other areas, such as disaster response; collaboration in the monitoring/assessment of real-time events (e.g., hurricanes, earthquakes, and tornadoes); as well as military uses.
Energy Policy Case Study - Texas: Wind, Markets, and Grid Modernization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orrell, Alice C.; Homer, Juliet S.; Bender, Sadie R.
This document presents a case study of energy policies in Texas related to power system transformation, renewable energy and distributed energy resources (DERs). Texas has experienced a dramatic increase in installed wind capacity, from 116 MW in 2000 to over 15,000 MW in 2015. This achievement was enabled by the designation of Competitive Renewable Energy Zones (CREZs) and new transmission lines that transmit wind to load centers. This report highlights nascent efforts to include DERs in the ERCOT market. As costs decline and adoption rates increase, ERCOT expects distributed generation to have an increasing effect on grid operations, while bringing potentially valuable new resources to the wholesale markets.
Ahmed, Wamiq M; Lenz, Dominik; Liu, Jia; Paul Robinson, J; Ghafoor, Arif
2008-03-01
High-throughput biological imaging uses automated imaging devices to collect a large number of microscopic images for analysis of biological systems and validation of scientific hypotheses. Efficient manipulation of these datasets for knowledge discovery requires high-performance computational resources, efficient storage, and automated tools for extracting and sharing such knowledge among different research sites. Newly emerging grid technologies provide powerful means for exploiting the full potential of these imaging techniques. Efficient utilization of grid resources requires the development of knowledge-based tools and services that combine domain knowledge with analysis algorithms. In this paper, we first investigate how grid infrastructure can facilitate high-throughput biological imaging research, and present an architecture for providing knowledge-based grid services for this field. We identify two levels of knowledge-based services. The first level provides tools for extracting spatiotemporal knowledge from image sets and the second level provides high-level knowledge management and reasoning services. We then present cellular imaging markup language, an XML-based language for modeling of biological images and representation of spatiotemporal knowledge. This scheme can be used for spatiotemporal event composition, matching, and automated knowledge extraction and representation for large biological imaging datasets. We demonstrate the expressive power of this formalism by means of different examples and extensive experimental results.
Capturing Multiscale Phenomena via Adaptive Mesh Refinement (AMR) in 2D and 3D Atmospheric Flows
NASA Astrophysics Data System (ADS)
Ferguson, J. O.; Jablonowski, C.; Johansen, H.; McCorquodale, P.; Ullrich, P. A.; Langhans, W.; Collins, W. D.
2017-12-01
Extreme atmospheric events such as tropical cyclones are inherently complex multiscale phenomena. Such phenomena are a challenge to simulate in conventional atmosphere models, which typically use rather coarse uniform-grid resolutions. To enable study of these systems, Adaptive Mesh Refinement (AMR) can provide sufficient local resolution by dynamically placing high-resolution grid patches selectively over user-defined features of interest, such as a developing cyclone, while limiting the total computational burden that such high resolution would impose globally. This work explores the use of AMR with a high-order, non-hydrostatic, finite-volume dynamical core, which uses the Chombo AMR library to implement refinement in both space and time on a cubed-sphere grid. The characteristics of the AMR approach are demonstrated via a series of idealized 2D and 3D test cases designed to mimic atmospheric dynamics and multiscale flows. In particular, new shallow-water test cases with forcing mechanisms are introduced to mimic the strengthening of tropical cyclone-like vortices and to include simplified moisture and convection processes. The forced shallow-water experiments quantify the improvements gained from AMR grids, assess how well transient features are preserved across grid boundaries, and determine effective refinement criteria. In addition, results from idealized 3D test cases are shown to characterize the accuracy and stability of the non-hydrostatic 3D AMR dynamical core.
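The core idea of a refinement criterion can be sketched in a few lines: tag cells where some local indicator (here, a one-sided gradient of a 1D bump standing in for a vortex core) exceeds a threshold, and refine only there. This is a generic illustration, not Chombo's tagging machinery, and the field, threshold, and grid size are invented:

```python
import math

def tag_cells(field, dx, threshold):
    """Return indices of cells whose one-sided gradient magnitude exceeds threshold."""
    return [i for i in range(len(field) - 1)
            if abs(field[i + 1] - field[i]) / dx > threshold]

n, dx = 64, 1.0 / 64
# Sharp Gaussian bump standing in for a strengthening vortex on a coarse grid.
field = [math.exp(-((i * dx - 0.5) / 0.05) ** 2) for i in range(n)]
tagged = tag_cells(field, dx, threshold=2.0)
```

Only the handful of cells around the bump get tagged, so a refined patch covering them captures the feature at high resolution while the rest of the domain stays coarse; in practice the tagged set is buffered and re-evaluated as the feature moves.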
Mousa, Mohamed G; Allam, S M; Rashad, Essam M
2018-01-01
This paper proposes an advanced strategy to synchronize the wind-driven Brushless Doubly-Fed Reluctance Generator (BDFRG) to the grid-side terminals. The proposed strategy depends mainly upon determining the electrical angle of the grid voltage, θv, and using the same transformation matrix on both the power-winding and grid sides to ensure that the generated power-winding voltage has the same phase sequence as the grid-side voltage. On the other hand, the paper proposes a vector-control (power-winding flux orientation) technique for maximum wind-power extraction under two schemes: unity power-factor operation and minimum converter current. Moreover, a soft-starting method is suggested to avoid over-current in the employed converter. The first control scheme is achieved by adjusting the command power-winding reactive power to zero for unity power-factor operation. The second scheme depends on setting the command d-axis control-winding current to zero to maximize the ratio of the generator electromagnetic torque per unit of converter current. This enables the system to reach a given command torque with minimum converter current. A sample of the obtained simulation and experimental results is presented to check the effectiveness of the proposed control strategies. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
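The transformation referred to above is the textbook amplitude-invariant abc → dq (Park) transform evaluated at the grid-voltage angle θv; applying the same transform on both sides is what guarantees matching phase sequence. A minimal sketch (not the authors' code; the voltage amplitude is illustrative):

```python
import math

def abc_to_dq(a, b, c, theta):
    """Amplitude-invariant abc -> dq transform at reference angle theta."""
    two_thirds = 2.0 / 3.0
    d = two_thirds * (a * math.cos(theta)
                      + b * math.cos(theta - 2 * math.pi / 3)
                      + c * math.cos(theta + 2 * math.pi / 3))
    q = -two_thirds * (a * math.sin(theta)
                       + b * math.sin(theta - 2 * math.pi / 3)
                       + c * math.sin(theta + 2 * math.pi / 3))
    return d, q

# A balanced three-phase set aligned with theta maps to d = amplitude, q = 0,
# which is the alignment condition the synchronization strategy exploits.
theta = 0.7
V = 325.0  # peak phase voltage, roughly a 230 Vrms grid (illustrative)
a = V * math.cos(theta)
b = V * math.cos(theta - 2 * math.pi / 3)
c = V * math.cos(theta + 2 * math.pi / 3)
d, q = abc_to_dq(a, b, c, theta)
```

A nonzero q component would indicate a phase error between the power-winding voltage and the grid, which is exactly what using a common θv avoids.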
PyMT: A Python package for model-coupling in the Earth sciences
NASA Astrophysics Data System (ADS)
Hutton, E.
2016-12-01
The current landscape of Earth-system models is not only broad in scientific scope, but also broad in type. On the one hand, the large variety of models is exciting, as it provides fertile ground for extending or linking models together in novel ways to answer new scientific questions. However, the heterogeneity in model type acts to inhibit model coupling, model development, or even model use. Existing models are written in a variety of programming languages, operate on different grids, use their own file formats (both for input and output), have different user interfaces, have their own time steps, etc. Each of these factors becomes an obstruction for scientists wanting to couple, extend, or simply run existing models. For scientists whose main focus may not be computer science these barriers become even larger and pose significant logistical hurdles. And this is all before the scientific difficulties of coupling or running models are addressed. The CSDMS Python Modeling Toolkit (PyMT) was developed to help non-computer scientists deal with these sorts of modeling logistics. PyMT is the fundamental package the Community Surface Dynamics Modeling System uses for the coupling of models that expose the Basic Model Interface (BMI). It contains:
- tools necessary for coupling models of disparate time and space scales (including grid mappers),
- time-steppers that coordinate the sequencing of coupled models,
- exchange of data between BMI-enabled models,
- wrappers that automatically load BMI-enabled models into the PyMT framework,
- utilities that support open-source interfaces (UGRID, SGRID, CSDMS Standard Names, etc.),
- a collection of community-submitted models, written in a variety of programming languages, from a variety of process domains, but all usable from within the Python programming language,
- and a plug-in framework for adding additional BMI-enabled models to the framework.
In this presentation we introduce the basics of the PyMT as well as provide an example of coupling models of different domains and grid types.
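The coupling pattern PyMT automates can be shown with two toy components exposing BMI-style methods (initialize/update/finalize plus get_value/set_value, which are the real interface's core calls) driven by a simple time loop. The two "models" here are invented stand-ins, not CSDMS components, and PyMT itself adds grid mapping and time interpolation on top of this bare pattern:

```python
class RainModel:
    """Toy component: produces a rainfall rate that grows each step."""
    def initialize(self, config=None):
        self.time, self.rain = 0.0, 0.0
    def update(self):
        self.time += 1.0
        self.rain = 2.0 * self.time
    def get_value(self, name):
        assert name == "rainfall_rate"
        return self.rain
    def finalize(self):
        pass

class RunoffModel:
    """Toy component: accumulates runoff from whatever rainfall it is fed."""
    def initialize(self, config=None):
        self.time, self.rain_in, self.runoff = 0.0, 0.0, 0.0
    def set_value(self, name, value):
        assert name == "rainfall_rate"
        self.rain_in = value
    def update(self):
        self.time += 1.0
        self.runoff += 0.5 * self.rain_in
    def get_value(self, name):
        assert name == "runoff"
        return self.runoff
    def finalize(self):
        pass

rain, runoff = RainModel(), RunoffModel()
rain.initialize()
runoff.initialize()
for _ in range(3):  # identical time steps for both models, for simplicity
    rain.update()
    # The driver moves data between components by name -- this hand-off is
    # what PyMT's framework (with grid mappers and time-steppers) generalizes.
    runoff.set_value("rainfall_rate", rain.get_value("rainfall_rate"))
    runoff.update()
rain.finalize()
runoff.finalize()
```

Because each component only sees the interface, the driver never needs to know what language the model is written in or what grid it uses, which is the point of BMI.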
Wavelet-enabled progressive data Access and Storage Protocol (WASP)
NASA Astrophysics Data System (ADS)
Clyne, J.; Frank, L.; Lesperance, T.; Norton, A.
2015-12-01
Current practices for storing numerical simulation outputs hail from an era when the disparity between compute and I/O performance was not as great as it is today. The memory contents for every sample, computed at every grid point location, are simply saved at some prescribed temporal frequency. Though straightforward, this approach fails to take advantage of the coherency in neighboring grid points that invariably exists in numerical solutions to mathematical models. Exploiting such coherence is essential to digital multimedia; DVD-Video, digital cameras, streaming movies and audio are all possible today because of transform-based compression schemes that make substantial reductions in data possible by taking advantage of the strong correlation between adjacent samples in both space and time. Such methods can also be exploited to enable progressive data refinement in a manner akin to that used in ubiquitous digital mapping applications: views from far away are shown in coarsened detail to provide context, and can be progressively refined as the user zooms in on a localized region of interest. The NSF funded WASP project aims to provide a common, NetCDF-compatible software framework for supporting wavelet-based, multi-scale, progressive data, enabling interactive exploration of large data sets for the geoscience communities. This presentation will provide an overview of this work in progress to develop community cyber-infrastructure for the efficient analysis of very large data sets.
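The transform-based idea behind WASP can be illustrated with a hand-rolled single-level Haar transform (WASP itself uses more sophisticated wavelets and a NetCDF-compatible container; this sketch and its signal are only illustrative): neighbouring samples are coherent, so pairwise averages (a coarse view) plus differences (detail) represent the same data exactly, and the coarse half alone already gives a usable "zoomed-out" view.

```python
def haar_forward(x):
    """Split x (even length) into (approximation, detail) coefficients."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Rebuild the full-resolution signal exactly from both coefficient sets."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 2.0, 0.0]
approx, detail = haar_forward(signal)   # approx is the coarse far-away view
restored = haar_inverse(approx, detail)
```

Progressive access falls out naturally: a reader fetches `approx` first for a coarse rendering, then streams in `detail` (recursively, level by level in a real multi-resolution scheme) only for the region being zoomed into.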
Electricity demand and storage dispatch modeling for buildings and implications for the smartgrid
NASA Astrophysics Data System (ADS)
Zheng, Menglian; Meinrenken, Christoph
2013-04-01
As an enabler for demand response (DR), electricity storage in buildings has the potential to lower costs and carbon footprint of grid electricity while simultaneously mitigating grid strain and increasing its flexibility to integrate renewables (central or distributed). We present a stochastic model to simulate minute-by-minute electricity demand of buildings and analyze the resulting electricity costs under actual, currently available DR-enabling tariffs in New York State, namely a peak/off-peak tariff charging by consumed energy (monthly total kWh) and a time-of-use tariff charging by power demand (monthly peak kW). We then introduce a variety of electrical storage options (from flow batteries to flywheels) and determine how DR via temporary storage may increase the overall net present value (NPV) for consumers (comparing the reduced cost of electricity to capital and maintenance costs of the storage). We find that, under the total-energy tariff, only medium-term storage options such as batteries offer positive NPV, and only at the low end of storage costs (optimistic scenario). Under the peak-demand tariff, however, even short-term storage such as flywheels and superconducting magnetic energy storage offer positive NPV. Therefore, these offer significant economic incentive to enable DR without affecting the consumption habits of buildings' residents. We discuss implications for smartgrid communication and our future work on real-time price tariffs.
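The NPV comparison described above reduces to simple discounted cash flow: under a peak-demand (kW) tariff, storage that shaves the monthly peak earns the avoided demand charge every month, and NPV weighs those savings against the up-front capital cost. All numbers below are illustrative, not the paper's:

```python
def storage_npv(capex, peak_shaved_kw, demand_charge, years, rate):
    """NPV of peak shaving: -capex + discounted annual demand-charge savings.

    demand_charge is in $/kW per month, so annual savings multiply by 12.
    """
    annual_savings = peak_shaved_kw * demand_charge * 12
    return -capex + sum(annual_savings / (1 + rate) ** t
                        for t in range(1, years + 1))

npv = storage_npv(capex=50_000.0, peak_shaved_kw=40.0,
                  demand_charge=15.0,  # $/kW per month (illustrative)
                  years=10, rate=0.06)
```

The same calculation under an energy (kWh) tariff yields much smaller savings per cycle, which is why, as the study finds, only cheap medium-term storage clears the NPV hurdle there while even short-term storage can clear it under a demand charge.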
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudgins, Andrew P.; Sparn, Bethany F.; Jin, Xin
This document is the final report of a two-year development, test, and demonstration project entitled 'Cohesive Application of Standards-Based Connected Devices to Enable Clean Energy Technologies.' The project was part of the National Renewable Energy Laboratory's (NREL) Integrated Network Test-bed for Energy Grid Research and Technology (INTEGRATE) initiative. The Electric Power Research Institute (EPRI) and a team of partners were selected by NREL to carry out a project to develop and test how smart, connected consumer devices can act to enable the use of more clean energy technologies on the electric power grid. The project team includes a set of leading companies that produce key products in relation to achieving this vision: thermostats, water heaters, pool pumps, solar inverters, electric vehicle supply equipment, and battery storage systems. A key requirement of the project was open access at the device level - a feature seen as foundational to achieving a future of widespread distributed generation and storage. The internal intelligence, standard functionality and communication interfaces utilized in this project result in the ability to integrate devices at any level, to work collectively at the level of the home/business, microgrid, community, distribution circuit or other. Collectively, the set of products serve as a platform on which a wide range of control strategies may be developed and deployed.
IsoMAP (Isoscape Modeling, Analysis, and Prediction)
NASA Astrophysics Data System (ADS)
Miller, C. C.; Bowen, G. J.; Zhang, T.; Zhao, L.; West, J. B.; Liu, Z.; Rapolu, N.
2009-12-01
IsoMAP is a TeraGrid-based web portal aimed at building the infrastructure that brings together distributed multi-scale and multi-format geospatial datasets to enable statistical analysis and modeling of environmental isotopes. A typical workflow enabled by the portal includes (1) data source exploration and selection, (2) statistical analysis and model development; (3) predictive simulation of isotope distributions using models developed in (1) and (2); (4) analysis and interpretation of simulated spatial isotope distributions (e.g., comparison with independent observations, pattern analysis). The gridded models and data products created by one user can be shared and reused among users within the portal, enabling collaboration and knowledge transfer. This infrastructure and the research it fosters can lead to fundamental changes in our knowledge of the water cycle and ecological and biogeochemical processes through analysis of network-based isotope data, but it will be important A) that those with whom the data and models are shared can be sure of the origin, quality, inputs, and processing history of these products, and B) that the system is agile and intuitive enough to facilitate this sharing (rather than just ‘allow’ it). IsoMAP researchers are therefore building into the portal’s architecture several components meant to increase the amount of metadata about users’ products and to repurpose those metadata to make sharing and discovery more intuitive and robust for both expected, professional users as well as unforeseeable populations from other sectors.
Development and Testing of a Prototype Grid-Tied Photovoltaic Power System
NASA Technical Reports Server (NTRS)
Eichenberg, Dennis J.
2009-01-01
The NASA Glenn Research Center (GRC) has developed and tested a prototype 2 kW DC grid-tied photovoltaic (PV) power system at the Center. The PV system has generated in excess of 6700 kWh since operation commenced in July 2006. The PV system is providing power to the GRC grid for use by all. Operation of the prototype PV system has been completely trouble free. A grid-tied PV power system is connected directly to the utility distribution grid. Facility power can be obtained from the utility system as normal. The PV system is synchronized with the utility system to provide power for the facility, and excess power is provided to the utility. The project transfers space technology to terrestrial use via nontraditional partners. GRC personnel glean valuable experience with PV power systems that is directly applicable to various space power systems, and provide valuable space program test data. PV power systems help to reduce harmful emissions and reduce the Nation's dependence on fossil fuels. Power generated by the PV system reduces the GRC utility demand, and the surplus power aids the community. Present global energy concerns reinforce the need for the development of alternative energy systems. Modern PV panels are readily available, reliable, efficient, and economical with a life expectancy of at least 25 years. Modern electronics has been the enabling technology behind grid-tied power systems, making them safe, reliable, efficient, and economical with a life expectancy of at least 25 years. Based upon the success of the prototype PV system, additional PV power system expansion at GRC is under consideration. The prototype grid-tied PV power system was successfully designed and developed, which served to validate the basic principles described and the theoretical work that was performed. The report concludes that grid-tied photovoltaic power systems are reliable, maintenance-free, long-life power systems, and are of significant value to NASA and the community.
In-depth analysis of switchable glycerol based polymeric coatings for cell sheet engineering.
Becherer, Tobias; Heinen, Silke; Wei, Qiang; Haag, Rainer; Weinhart, Marie
2015-10-01
Scaffold-free cell sheet engineering using thermoresponsive substrates provides a promising alternative to conventional tissue engineering which in general employs biodegradable scaffold materials. We have previously developed a thermoresponsive coating with glycerol based linear copolymers that enables gentle harvesting of entire cell sheets. In this article we present an in-depth analysis of these thermoresponsive linear polyglycidyl ethers and their performance as coating for substrates in cell culture in comparison with commercially available poly(N-isopropylacrylamide) (PNIPAM) coated culture dishes. A series of copolymers of glycidyl methyl ether (GME) and glycidyl ethyl ether (EGE) was prepared in order to study their thermoresponsive properties in solution and on the surface with respect to the comonomer ratio. In both cases, when grafted to planar surfaces or spherical nanoparticles, the applied thermoresponsive polyglycerol coatings render the respective surfaces switchable. Protein adsorption experiments on copolymer coated planar surfaces with surface plasmon resonance (SPR) spectroscopy reveal the ability of the tested thermoresponsive coatings to be switched between highly protein resistant and adsorptive states. Cell culture experiments demonstrate that these thermoresponsive coatings allow for adhesion and proliferation of NIH 3T3 fibroblasts comparable to TCPS and faster than on PNIPAM substrates. Temperature triggered detachment of complete cell sheets from copolymer coated substrates was accomplished within minutes while maintaining high viability of the harvested cells. Thus such glycerol based copolymers present a promising alternative to PNIPAM as a thermoresponsive coating of cell culture substrates. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
A Concept for the One Degree Imager (ODI) Data Reduction Pipeline and Archiving System
NASA Astrophysics Data System (ADS)
Knezek, Patricia; Stobie, B.; Michael, S.; Valdes, F.; Marru, S.; Henschel, R.; Pierce, M.
2010-05-01
The One Degree Imager (ODI), currently being built by the WIYN Observatory, will provide tremendous possibilities for conducting diverse scientific programs. ODI will be a complex instrument, using non-conventional Orthogonal Transfer Array (OTA) detectors. Due to its large field of view, small pixel size, use of OTA technology, and expected frequent use, ODI will produce vast amounts of astronomical data. If ODI is to achieve its full potential, a data reduction pipeline must be developed. Long-term archiving must also be incorporated into the pipeline system to ensure the continued value of ODI data. This paper presents a concept for an ODI data reduction pipeline and archiving system. To limit costs and development time, our plan leverages existing software and hardware, including existing pipeline software, Science Gateways, Computational Grid & Cloud Technology, Indiana University's Data Capacitor and Massive Data Storage System, and TeraGrid compute resources. Existing pipeline software will be augmented to add functionality required to meet challenges specific to ODI, enhance end-user control, and enable the execution of the pipeline on grid resources including national grid resources such as the TeraGrid and Open Science Grid. The planned system offers consistent standard reductions and end-user flexibility when working with images beyond the initial instrument signature removal. It also gives end-users access to computational and storage resources far beyond what are typically available at most institutions. Overall, the proposed system provides a wide array of software tools and the necessary hardware resources to use them effectively.
NASA Astrophysics Data System (ADS)
Jaithwa, Ishan
Deployment of smart grid technologies is accelerating. The smart grid enables bidirectional flows of energy and energy-related communications. The future electricity grid will look very different from today's power system. Large variable renewable energy sources will provide a greater portion of electricity, small DERs and energy storage systems will become more common, and utilities will operate many different kinds of energy efficiency programs. All of these changes will add complexity to the grid and require operators to be able to respond to fast dynamic changes to maintain system stability and security. This thesis investigates advanced control technology for grid integration of renewable energy sources and STATCOM systems by verifying it in real-time hardware experiments on two different platforms: dSPACE and OPAL-RT. Three controllers, conventional standard vector control, direct vector control, and intelligent neural network control, were first simulated in Matlab to check the stability and safety of the system and were then implemented in real-time hardware on the dSPACE and OPAL-RT platforms. The thesis then shows how the dynamic-programming (DP) methods employed to train the neural networks outperform the other controllers: an optimal control strategy is developed to ensure effective power delivery and to improve system stability. Through real-time hardware implementation it is shown that the neural vector control approach produces the fastest response time, low overshoot, and the best performance compared to the conventional standard vector control method and the DCC vector control technique. Finally, the entrepreneurial approach taken to drive the technologies from the lab to market via ORANGE ELECTRIC is discussed in brief.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gevorgian, Vahan
The National Renewable Energy Laboratory (NREL) and DONG Energy are interested in collaborating for the development of control algorithms, modeling, and grid simulator testing of wind turbine generator systems involving NWTC's advanced Controllable Grid Interface (CGI). NREL and DONG Energy will work together to develop control algorithms, models, test methods, and protocols involving NREL's CGI, as well as appropriate data acquisition systems for grid simulation testing. The CRADA also includes work on joint publication of results achieved from modeling and testing efforts. Further, DONG Energy will send staff to NREL on a long-term basis for collaborative work including modeling and testing. NREL will send staff to DONG Energy on a short-term basis to visit wind power sites and participate in meetings relevant to this collaborative effort. DOE has provided NREL with over 10 years of support in developing custom facilities and capabilities to enable testing of full-scale integrated wind turbine drivetrain systems in accordance with the needs of the US wind industry. NREL currently operates a 2.5MW dynamometer and is in the processes of commissioning a 5MW dynamometer and a grid simulator (referred to as a 'Controllable Grid Interface' or CGI). DONG Energy is the market leader in offshore wind power development, with currently over 1 GW of on- and offshore wind power in operation, and 1.3 GW under construction. DONG Energy has on-going R&D projects involving high voltage DC (HVDC) transmission.
Cyberinfrastructure for high energy physics in Korea
NASA Astrophysics Data System (ADS)
Cho, Kihyeon; Kim, Hyunwoo; Jeung, Minho; High Energy Physics Team
2010-04-01
We introduce the hierarchy of cyberinfrastructure, which consists of infrastructure (supercomputing and networks), Grid, e-Science, community and physics, from the bottom layer to the top. KISTI is the national headquarters of supercomputing, networking, Grid and e-Science in Korea, and is therefore the best place for high energy physicists to use cyberinfrastructure. We explain this concept with the CDF and ALICE experiments. The goal of e-Science is to study high energy physics anytime and anywhere, even when not on site at the accelerator laboratories. The components are data production, data processing and data analysis. Data production means taking both on-line and off-line shifts remotely. Data processing means running jobs anytime, anywhere using Grid farms. Data analysis means working together to publish papers using a collaborative environment such as the EVO (Enabling Virtual Organization) system. We also present the global community activities of FKPPL (France-Korea Particle Physics Laboratory) and physics as the top layer.
U.S. Laws and Regulations for Renewable Energy Grid Interconnections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chernyakhovskiy, Ilya; Tian, Tian; McLaren, Joyce
Rapidly declining costs of wind and solar energy technologies, increasing concerns about the environmental and climate change impacts of fossil fuels, and sustained investment in renewable energy projects all point to a not-so-distant future in which renewable energy plays a pivotal role in the electric power system of the 21st century. In light of public pressures and market factors that hasten the transition towards a low-carbon system, power system planners and regulators are preparing to integrate higher levels of variable renewable generation into the grid. Updating the regulations that govern generator interconnections and operations is crucial to ensure system reliability while creating an enabling environment for renewable energy development. This report presents a chronological review of energy laws and regulations concerning grid interconnection procedures in the United States, highlighting the consequences of policies for renewable energy interconnections. Where appropriate, this report places interconnection policies and their impacts on renewable energy within the broader context of power market reform.
Compression of pulsed electron beams for material tests
NASA Astrophysics Data System (ADS)
Metel, Alexander S.
2018-03-01
In order to strengthen the surfaces of machine parts and investigate the behavior of their materials when exposed to highly dense energy fluxes, an electron gun has been developed which produces pulsed electron beams with energies up to 300 keV and currents up to 250 A at a pulse width of 100-200 µs. Electrons are extracted into the accelerating gap from the hollow-cathode glow discharge plasma through a flat or a spherical grid. The flat grid produces 16-cm-diameter beams with a per-pulse energy density not exceeding 15 J·cm⁻², which is not enough even for surface hardening. The spherical grid enables compression of the beams and regulation of the energy density from 15 J·cm⁻² up to 15 kJ·cm⁻², thus allowing hardening, pulsed melting of the machine part surface with subsequent high-speed recrystallization, as well as explosive ablation of the surface layer.
Multi-Material Front Contact for 19% Thin Film Solar Cells.
van Deelen, Joop; Tezsevin, Yasemin; Barink, Marco
2016-02-06
The trade-off between transmittance and conductivity of the front contact material poses a bottleneck for thin film solar panels. Normally, the front contact is a metal oxide. Here, the optimal cell configuration and panel efficiency were determined for various band gap materials, representing Cu(In,Ga)Se₂ (CIGS), CdTe and high band gap perovskites. Supplementing the metal oxide with a metallic copper grid improves the performance of the front contact and thereby increases the efficiency. Various front contact designs with and without a metallic finger grid were calculated with a variation of the transparent conductive oxide (TCO) sheet resistance, scribing area, cell length, and finger dimensions. The contact resistance and illumination power were also assessed, and the optimal thin film solar panel design was determined. Adding a metallic finger grid on a TCO gives a higher solar cell efficiency and also enables longer cell lengths. However, contact resistance between the metal and the TCO material can reduce the efficiency benefit somewhat.
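The transmittance/conductivity trade-off described above can be made concrete with a textbook first-order loss model: resistive loss in the TCO sheet grows with finger spacing, while finger shading shrinks with it, so an optimal spacing exists. The formula and all parameter values below are generic illustrations, not taken from the paper:

```python
# First-order front-contact loss model (textbook approximation, assumed values).
def fractional_loss(r_sheet, j_mp, v_mp, spacing, finger_width):
    """Fractional power loss = TCO sheet resistive loss + finger shading loss.
    r_sheet in ohm/sq, j_mp in A/cm^2, v_mp in V, lengths in cm."""
    resistive = r_sheet * j_mp * spacing ** 2 / (12 * v_mp)  # sheet loss between fingers
    shading = finger_width / spacing                          # fraction of area covered
    return resistive + shading

def best_spacing(r_sheet, j_mp, v_mp, finger_width, candidates):
    """Pick the candidate finger spacing that minimises total fractional loss."""
    return min(candidates, key=lambda s: fractional_loss(r_sheet, j_mp, v_mp, s, finger_width))
```

With, say, a 10 ohm/sq TCO, 35 mA/cm², 0.6 V and 50-µm fingers, the optimum falls at a few millimetres of spacing, which is exactly the kind of design sweep the paper performs with a far more detailed model.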
IGMS: An Integrated ISO-to-Appliance Scale Grid Modeling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hale, Elaine; Hansen, Timothy M.
This paper describes the Integrated Grid Modeling System (IGMS), a novel electric power system modeling platform for integrated transmission-distribution analysis that co-simulates off-the-shelf tools on high performance computing (HPC) platforms to offer unprecedented resolution from ISO markets down to appliances and other end uses. Specifically, the system simultaneously models hundreds or thousands of distribution systems in co-simulation with detailed Independent System Operator (ISO) markets and AGC-level reserve deployment. IGMS uses a new MPI-based hierarchical co-simulation framework to connect existing sub-domain models. Our initial efforts integrate open-source tools for wholesale markets (FESTIV), bulk AC power flow (MATPOWER), and full-featured distribution systems including physics-based end-use and distributed generation models (many instances of GridLAB-D[TM]). The modular IGMS framework enables tool substitution and additions for multi-domain analyses. This paper describes the IGMS tool, characterizes its performance, and demonstrates the impacts of the coupled simulations for analyzing high-penetration solar PV and price-responsive load scenarios.
Bracale, Antonio; Barros, Julio; Cacciapuoti, Angela Sara; ...
2015-06-10
Electrical power systems are undergoing a radical change in structure, components, and operational paradigms, and are progressively approaching the new concept of smart grids (SGs). Future power distribution systems will be characterized by the simultaneous presence of various distributed resources, such as renewable energy systems (i.e., photovoltaic power plants and wind farms), storage systems, and controllable/non-controllable loads. Control and optimization architectures will enable network-wide coordination of these grid components in order to improve system efficiency and reliability and to limit greenhouse gas emissions. In this context, the energy flows will be bidirectional, from large power plants to end users and vice versa; producers and consumers will continuously interact at different voltage levels to determine in advance the requests of loads and to adapt the production and demand for electricity flexibly and efficiently, also taking into account the presence of storage systems.
North-East Asian Super Grid: Renewable energy mix and economics
NASA Astrophysics Data System (ADS)
Breyer, Christian; Bogdanov, Dmitrii; Komoto, Keiichi; Ehara, Tomoki; Song, Jinsoo; Enebish, Namjil
2015-08-01
Further development of the North-East Asian energy system is at a crossroads due to severe limitations of the current conventional energy based system. For North-East Asia, it is proposed that the excellent solar and wind resources of the Gobi desert could enable the transformation towards a 100% renewable energy system. An hourly resolved model describes an energy system for North-East Asia, subdivided into 14 regions interconnected by high voltage direct current (HVDC) transmission grids. Simulations are made for highly centralized, decentralized and country-wide grid scenarios. The resulting total system levelized cost of electricity (LCOE) is 0.065 and 0.081 €/(kW·h) for the centralized and decentralized approaches, respectively, for 2030 assumptions. The presented results for 100% renewable resources-based energy systems are lower in LCOE by about 30-40% than recent findings in Europe for conventional alternatives. This research clearly indicates that a 100% renewable resources-based energy system is the real policy option.
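The levelized cost of electricity quoted above is, in its standard definition, discounted lifetime cost divided by discounted lifetime energy. A minimal sketch of that generic definition (not the paper's full hourly system model; the example numbers are invented):

```python
# Generic LCOE: discounted lifetime cost / discounted lifetime energy.
def lcoe(capex, opex_per_year, energy_per_year, years, discount_rate):
    """Return levelized cost in (currency units) per energy unit.
    capex paid in year 0; constant O&M cost and energy output in years 1..N."""
    cost = capex + sum(opex_per_year / (1 + discount_rate) ** t
                       for t in range(1, years + 1))
    energy = sum(energy_per_year / (1 + discount_rate) ** t
                 for t in range(1, years + 1))
    return cost / energy
```

With a zero discount rate this reduces to total cost over total energy; a positive discount rate raises the LCOE of capital-heavy plants, which is why discount-rate assumptions matter so much in comparisons like the 0.065 vs 0.081 €/(kW·h) figures above.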
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trinklei, Eddy; Parker, Gordon; Weaver, Wayne
This report presents a scoping study for networked microgrids, which are defined as "Interoperable groups of multiple Advanced Microgrids that become an integral part of the electricity grid while providing enhanced resiliency through self-healing, aggregated ancillary services, and real-time communication." They result in optimal electrical system configurations and controls, whether grid-connected or islanded, and enable high penetrations of distributed and renewable energy resources. The vision for the purpose of this document is: "Networked microgrids seamlessly integrate with the electricity grid or other Electric Power Sources (EPS) providing cost effective, high quality, reliable, resilient, self-healing power delivery systems." Scoping Study: Networked Microgrids, September 4, 2014. Authors: Eddy Trinklein, Gordon Parker, Wayne Weaver, Rush Robinett, Lucia Gauchia Babe and Chee-Wooi Ten (Michigan Technological University); Ward Bower (Ward Bower Innovations LLC); Steve Glover and Steve Bukowski (Sandia National Laboratories). Prepared by Michigan Technological University, Houghton, Michigan 49931.
Stability assessment of structures under earthquake hazard through GRID technology
NASA Astrophysics Data System (ADS)
Prieto Castrillo, F.; Boton Fernandez, M.
2009-04-01
This work presents a GRID framework to estimate the vulnerability of structures under earthquake hazard. The tool has been designed to cover the needs of a typical earthquake engineering stability analysis: preparation of input data (pre-processing), response computation and stability analysis (post-processing). In order to validate the application over GRID, a simplified model of a structure under artificially generated earthquake records has been implemented. To achieve this goal, the proposed scheme exploits the GRID technology and its main advantages (parallel intensive computing, huge storage capacity and collaborative analysis among institutions) through intensive interaction among the GRID elements (Computing Element, Storage Element, LHC File Catalogue, federated database, etc.). The dynamical model is described by a set of ordinary differential equations (ODEs) and by a set of parameters. Both elements, along with the integration engine, are encapsulated into Java classes. With this high-level design, subsequent improvements/changes of the model can be addressed with little effort. In the procedure, an earthquake record database is prepared and stored (pre-processing) in the GRID Storage Element (SE). The metadata of these records is also stored in the GRID federated database. This metadata contains both relevant information about the earthquake (as is usual in a seismic repository) and the Logical File Name (LFN) of the record for its later retrieval. Then, from the available set of accelerograms in the SE, the user can specify a range of earthquake parameters to carry out a dynamic analysis. In this way, a GRID job is created for each selected accelerogram in the database. At the GRID Computing Element (CE), displacements are then obtained by numerical integration of the ODEs over time. The resulting response for that configuration is stored in the GRID Storage Element (SE) and the maximum structure displacement is computed.
Then, the corresponding metadata containing the response LFN, earthquake magnitude and maximum structure displacement is also stored. Finally, the displacements are post-processed through a statistically based algorithm using the available metadata to obtain the probability of collapse of the structure for different earthquake magnitudes. From this study, it is possible to build a vulnerability report for the structure type and seismic data. The proposed methodology can be combined with the on-going initiatives to build a European earthquake record database. In this context, Grid enables collaborative analysis over shared seismic data and results among different institutions.
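The per-accelerogram computation described above amounts to integrating a structural ODE driven by a ground-acceleration record and recording the peak displacement. A minimal single-degree-of-freedom sketch (the paper's actual Java model is richer; the oscillator parameters and integration scheme here are assumptions):

```python
# Sketch of one "GRID job": peak relative displacement of a damped
# single-degree-of-freedom oscillator under a ground-acceleration record.
import math

def sdof_response(accel_record, dt, freq_hz=1.0, damping=0.05):
    """Integrate x'' + 2*z*w*x' + w^2*x = -a_g(t) with semi-implicit Euler
    and return the maximum absolute relative displacement."""
    w = 2 * math.pi * freq_hz   # natural angular frequency [rad/s]
    x = v = x_max = 0.0
    for a_g in accel_record:
        v += dt * (-2 * damping * w * v - w * w * x - a_g)  # update velocity first
        x += dt * v                                          # then displacement
        x_max = max(x_max, abs(x))
    return x_max
```

Running this for each accelerogram in the database and storing the peak displacement alongside the earthquake magnitude is exactly the metadata pair the post-processing step feeds on; forcing near the structure's natural frequency produces much larger peaks than off-resonance forcing, as expected.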
Hexagonal Pixels and Indexing Scheme for Binary Images
NASA Technical Reports Server (NTRS)
Johnson, Gordon G.
2004-01-01
A scheme for resampling binary-image data from a rectangular grid to a regular hexagonal grid, together with an associated tree-structured pixel-indexing scheme keyed to the level of resolution, has been devised. This scheme could be utilized in conjunction with appropriate image-data-processing algorithms to enable automated retrieval and/or recognition of images. For some purposes, this scheme is superior to a prior scheme that relies on rectangular pixels: one example of such a purpose is recognition of fingerprints, which can be approximated more closely by use of line segments along hexagonal axes than by line segments along rectangular axes. This scheme could also be combined with algorithms for query-image-based retrieval of images via the Internet. A binary image on a rectangular grid is generated by raster scanning or by sampling on a stationary grid of rectangular pixels. In either case, each pixel (each cell in the rectangular grid) is denoted as either bright or dark, depending on whether the light level in the pixel is above or below a prescribed threshold. The binary data on such an image are stored in a matrix form that lends itself readily to searches of line segments aligned with either or both of the perpendicular coordinate axes. The first step in resampling onto a regular hexagonal grid is to make the resolution of the hexagonal grid fine enough to capture all the binary-image detail from the rectangular grid. In practice, this amounts to choosing a hexagonal-cell width equal to or less than a third of the rectangular-cell width. Once the data have been resampled onto the hexagonal grid, the image can readily be checked for line segments aligned with the hexagonal coordinate axes, which typically lie at angles of 30°, 90°, and 150° with respect to, say, the horizontal rectangular coordinate axis.
Optionally, one can then rotate the rectangular image by 90°, then again sample onto the hexagonal grid and check for line segments at angles of 0°, 60°, and 120° to the original horizontal coordinate axis. The net result is that one has checked for line segments at angular intervals of 30°. For even finer angular resolution, one could, for example, rotate the rectangular-grid image by ±45° before sampling in order to check for line segments at angular intervals of 15°.
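The resampling step above can be sketched as a nearest-pixel lookup at the centre of each hexagonal cell. The grid layout (rows offset by half a cell width) and the default widths below are illustrative assumptions; the text's rule of a hexagonal-cell width of at most a third of the rectangular-cell width is used as the default:

```python
# Sketch: resample a binary image from a rectangular grid onto a hexagonal grid
# by nearest-pixel lookup at each hexagon centre (layout details are assumed).
import math

def rect_to_hex(img, rect_w=1.0, hex_w=1.0 / 3.0):
    """img: list of rows of 0/1 values on a rectangular grid of cell width rect_w.
    Returns rows of 0/1 values sampled at hexagon centres of width hex_w,
    with odd rows offset by half a hexagon width."""
    rows, cols = len(img), len(img[0])
    height, width = rows * rect_w, cols * rect_w
    dy = hex_w * math.sqrt(3) / 2          # vertical spacing of hexagon rows
    hex_img = []
    r = 0
    while (y := r * dy) < height:
        row = []
        c = 0
        while (x := (c + 0.5 * (r % 2)) * hex_w) < width:
            # nearest rectangular pixel containing this hexagon centre
            row.append(img[min(int(y // rect_w), rows - 1)]
                          [min(int(x // rect_w), cols - 1)])
            c += 1
        hex_img.append(row)
        r += 1
    return hex_img
```

Because hex_w ≤ rect_w / 3, every rectangular pixel contributes several hexagonal samples, so no binary detail is lost; line-segment searches along the 30°/90°/150° axes then operate on `hex_img` directly.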
Rogerson, Fraser M; Stanton, Heather; East, Charlotte J; Golub, Suzanne B; Tutolo, Leonie; Farmer, Pamela J; Fosang, Amanda J
2008-06-01
To characterize aggrecan catabolism and the overall phenotype in mice deficient in both ADAMTS-4 and ADAMTS-5 (TS-4/TS-5 Delta-cat) activity. Femoral head cartilage from the joints of TS-4/TS-5 Delta-cat mice and wild-type mice were cultured in vitro, and aggrecan catabolism was stimulated with either interleukin-1alpha (IL-1alpha) or retinoic acid. Total aggrecan release was measured, and aggrecanase activity was examined by Western blotting using neoepitope antibodies for detecting cleavage at EGE 373-374 ALG, SELE 1279-1280 GRG, FREEE 1467-1468 GLG, and AQE 1572-1573 AGEG. Aggrecan catabolism in vivo was examined by Western blotting of cartilage that had been extracted immediately ex vivo. TS-4/TS-5 Delta-cat mice were viable, fertile, and phenotypically normal. TS-4/TS-5 Delta-cat cartilage explants did not release aggrecan in response to IL-1alpha, and there was no detectable increase in aggrecanase neoepitopes. TS-4/TS-5 Delta-cat cartilage explants released aggrecan in response to retinoic acid. There was no retinoic acid-stimulated cleavage at either EGE 373-374 ALG or AQE 1572-1573 AGEG. There was a low level of cleavage at SELE 1279-1280 GRG and major cleavage at FREEE 1467-1468 GLG. Ex vivo, cleavage at FREEE 1467-1468 GLG was substantially reduced, but still present, in TS-4/TS-5 Delta-cat mouse cartilage compared with wild-type mouse cartilage. An aggrecanase other than ADAMTS-4 and ADAMTS-5 is expressed in mouse cartilage and is up-regulated by retinoic acid but not IL-1alpha. The novel aggrecanase appears to have different substrate specificity from either ADAMTS-4 or ADAMTS-5, cleaving E-G bonds but not E-A bonds. Neither ADAMTS-4 nor ADAMTS-5 is required for normal skeletal development or aggrecan turnover in cartilage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Mohit; Grape, Ulrik
2014-07-29
The purpose of this project was for Seeo to deliver the first ever large-scale or grid-scale prototype of a new class of advanced lithium-ion rechargeable batteries. The technology combines unprecedented energy density, lifetime, safety, and cost. The goal was to demonstrate Seeo’s entirely new class of lithium-based batteries based on Seeo’s proprietary nanostructured polymer electrolyte. This technology can enable widespread deployment in Smart Grid applications and was demonstrated through the development and testing of a 10 kilowatt-hour (kWh) prototype battery system. This development effort, supported by the United States Department of Energy (DOE), enabled Seeo to pursue and validate the transformational performance advantages of its technology for use in grid-tied energy storage applications. The focus of this project, and Seeo’s goal as demonstrated through the efforts made under this project, is to address the utility market needs for energy storage systems applications, especially for residential and commercial customers tied to solar photovoltaic installations. In addition to grid energy storage opportunities, Seeo’s technology has been tested with automotive drive cycles and is seen as equally applicable to battery packs for electric vehicles. The goals of the project were outlined and achieved through a series of specific tasks, which encompassed materials development, scaling up of cells, demonstrating the performance of the cells, designing, building and demonstrating a pack prototype, and providing an economic and environmental assessment. Nearly all of the tasks were achieved over the duration of the program; only the full demonstration of the battery system and a complete economic and environmental analysis could not be fully completed. A timeline of the program is shown in figure 1.
Oldenkamp, Rik; Huijbregts, Mark A J; Ragas, Ad M J
2016-05-01
The selection of priority APIs (Active Pharmaceutical Ingredients) can benefit from a spatially explicit approach, since an API might exceed the threshold of environmental concern in one location while staying below that same threshold in another. However, such a spatially explicit approach is relatively data intensive and subject to parameter uncertainty due to limited data. This raises the question to what extent a spatially explicit approach for the environmental prioritisation of APIs remains worthwhile when accounting for uncertainty in parameter settings. We show here that the inclusion of spatially explicit information enables a more efficient environmental prioritisation of APIs in Europe, compared with a non-spatial EU-wide approach, also under uncertain conditions. In a case study with nine antibiotics, uncertainty distributions of the PAF (Potentially Affected Fraction) of aquatic species were calculated in 100 × 100 km² environmental grid cells throughout Europe, and used for the selection of priority APIs. Two APIs have median PAF values that exceed a threshold PAF of 1% in at least one environmental grid cell in Europe, i.e., oxytetracycline and erythromycin. At a tenfold lower threshold PAF (i.e., 0.1%), two additional APIs would be selected, i.e., cefuroxime and ciprofloxacin. However, in 94% of the environmental grid cells in Europe, no APIs exceed either of the thresholds. This illustrates the advantage of following a location-specific approach in the prioritisation of APIs. This added value remains when accounting for uncertainty in parameter settings, i.e., if the 95th percentile of the PAF instead of its median value is compared with the threshold. In 96% of the environmental grid cells, the location-specific approach still enables a reduction of the selection of priority APIs of at least 50%, compared with an EU-wide prioritisation. Copyright © 2016 Elsevier Ltd. All rights reserved.
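The per-cell, percentile-based selection described above can be sketched as follows; the data layout, cell identifiers and sample values are invented for illustration (the study's Monte Carlo PAF model itself is not reproduced):

```python
# Sketch: per-grid-cell API prioritisation against a PAF threshold, using a
# chosen percentile of each API's uncertainty distribution (illustrative layout).
import math

def priority_apis(paf_samples_by_cell, threshold=0.01, percentile=95):
    """paf_samples_by_cell: {cell_id: {api_name: [PAF samples]}}.
    Returns {cell_id: [APIs whose PAF percentile exceeds the threshold]}."""
    selected = {}
    for cell, apis in paf_samples_by_cell.items():
        chosen = []
        for api, samples in sorted(apis.items()):
            s = sorted(samples)
            idx = max(0, math.ceil(percentile / 100 * len(s)) - 1)  # nearest rank
            if s[idx] > threshold:
                chosen.append(api)
        selected[cell] = chosen
    return selected
```

Comparing the 95th percentile rather than the median against the threshold is the paper's conservative treatment of parameter uncertainty: an API is prioritised in a cell only if even its upper uncertainty bound is of concern there.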
ERIC Educational Resources Information Center
Nnadi, Matthias; Rosser, Mike
2014-01-01
The "individualised accounting questions" (IAQ) technique set out in this paper encourages independent active learning. It enables tutors to set individualised accounting questions and construct an answer grid that can be used for any number of students, with numerical values for each student's answers based on their student enrolment…
Lankila, Tiina; Näyhä, Simo; Rautio, Arja; Koiranen, Markku; Rusanen, Jarmo; Taanila, Anja
2013-01-01
We examined the association of health and well-being with moving using a detailed geographical scale. 7845 men and women born in northern Finland in 1966 were surveyed by postal questionnaire in 1997 and linked to 1-km² geographical grids based on each subject's home address in 1997-2000. Population density was used to classify each grid as rural (1-100 inhabitants/km²) or urban (>100 inhabitants/km²) type. Moving was treated as a three-class response variate (not moved; moved to different type of grid; moved to similar type of grid). Moving was regressed on five explanatory factors (life satisfaction, self-reported health, lifetime morbidity, activity-limiting illness and use of health services), adjusting for factors potentially associated with health and moving (gender, marital status, having children, housing tenure, education, employment status and previous move). The results were expressed as odds ratios (OR) and their 95% confidence intervals (CI). Moves from rural to urban grids were associated with dissatisfaction with current life (adjusted OR 2.01; 95% CI 1.26-3.22) and having somatic (OR 1.66; 1.07-2.59) or psychiatric (OR 2.37; 1.21-4.63) morbidities, the corresponding ORs for moves from rural to other rural grids being 1.71 (0.98-2.98), 1.63 (0.95-2.78) and 2.09 (0.93-4.70), respectively. Among urban dwellers, only the frequent use of health services (≥ 21 times/year) was associated with moving, the adjusted ORs being 1.65 (1.05-2.57) for moves from urban to rural grids and 1.30 (1.03-1.64) for urban to other urban grids.
We conclude that dissatisfaction with life and a history of diseases and injuries, especially psychiatric morbidity, may increase the propensity to move from rural to urbanised environments. The availability of health services may contribute to moves within urban areas and also to moves from urban areas to the countryside, where high-level health services enable a good quality of life for those attracted by the pastoral environment. Copyright © 2012 Elsevier Ltd. All rights reserved.
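The odds ratios with 95% confidence intervals reported above are, in their simplest unadjusted form, computable from a 2×2 exposure-by-outcome table. The study itself used adjusted multivariable models; the Woolf log-OR interval below, with invented counts, is only a generic sketch of the statistic being reported:

```python
# Unadjusted odds ratio and Woolf (log-scale) confidence interval from a
# 2x2 table; the counts in the test are invented for illustration.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = exposed & moved, b = exposed & stayed,
    c = unexposed & moved, d = unexposed & stayed.
    Returns (OR, lower CI bound, upper CI bound)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An interval whose lower bound stays above 1 (as for the 2.01 and 2.37 ORs above) is what marks an association as statistically significant at the 5% level.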
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishida, T.; Hagihara, R.; Yugo, M.
1994-12-31
The authors have successfully developed and industrialized a new frequency-shift anti-islanding protection method using a twin-peak band-pass filter (BPF) for grid-interconnected photovoltaic (PV) systems. In this method, the power conditioner has a twin-peak BPF in a current feedback loop in place of the normal BPF. The new method works perfectly for various kinds of loads, such as resistive, inductive and capacitive loads, connected to the PV system. Furthermore, because there are no mis-detections, the system enables the most effective generation of electric energy from the solar cells. A power conditioner equipped with this protection was officially certified as suitable for grid interconnection.
Gaussian Process Interpolation for Uncertainty Estimation in Image Registration
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
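The core idea above, replacing a single interpolated value with a Gaussian process posterior whose variance quantifies interpolation uncertainty, can be sketched in 1-D. The kernel choice, length scale, noise level and the tiny dense solver below are illustrative assumptions, not the paper's implementation:

```python
# 1-D Gaussian process interpolation: posterior mean + variance at a query
# point, with the variance acting as the interpolation-uncertainty estimate.
import math

def rbf(x1, x2, ell=1.0, sf=1.0):
    """Squared-exponential covariance (assumed kernel)."""
    return sf * math.exp(-0.5 * ((x1 - x2) / ell) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gp_interpolate(xs, ys, xq, noise=1e-6):
    """Posterior mean and variance at xq given grid samples (xs, ys)."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    k_star = [rbf(x, xq) for x in xs]
    mean = sum(ki * ai for ki, ai in zip(k_star, alpha))
    v = solve(K, k_star)
    var = rbf(xq, xq) - sum(ki * vi for ki, vi in zip(k_star, v))
    return mean, max(var, 0.0)
```

As the abstract notes, uncertainty depends on where the resampled point falls: the posterior variance is near zero on the base grid and grows between grid points, which is exactly what a similarity measure marginalising over resampled images can exploit.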
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behboodi, Sahand; Chassin, David P.; Djilali, Ned
Coordinated operation of distributed thermostatic loads such as heat pumps and air conditioners can reduce energy costs and prevent grid congestion, while maintaining room temperatures in the comfort range set by consumers. This paper furthers efforts towards enabling thermostatically controlled loads (TCLs) to participate in real-time retail electricity markets under a transactive control paradigm. An agent-based approach is used to develop an effective and low-complexity demand response control scheme for TCLs. The proposed scheme adjusts aggregated thermostatic loads according to real-time grid conditions under both heating and cooling modes. A case study is presented showing the method reduces consumer electricity costs by over 10% compared to uncoordinated operation.
Blockchain for Smart Grid Resilience: Exchanging Distributed Energy at Speed, Scale and Security
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mylrea, Michael E.; Gourisetti, Sri Nikhil Gup
Blockchain may help solve several complex problems related to the integrity and trustworthiness of rapid, distributed, complex energy transactions and data exchanges. In a move towards resilience, blockchain commoditizes trust and enables automated smart contracts to support auditable multiparty transactions based on predefined rules between distributed energy providers and customers. Blockchain-based smart contracts also help remove the need to interact with third parties, facilitating the adoption and monetization of distributed energy transactions and exchanges, both energy flows as well as financial transactions. This may help reduce transactive energy costs and increase the security and sustainability of distributed energy resource (DER) integration, helping to remove barriers to a more decentralized and resilient power grid.
Treeby, Bradley E; Tumen, Mustafa; Cox, B T
2011-01-01
A k-space pseudospectral model is developed for the fast full-wave simulation of nonlinear ultrasound propagation through heterogeneous media. The model uses a novel equation of state to account for nonlinearity in addition to power law absorption. The spectral calculation of the spatial gradients enables a significant reduction in the number of required grid nodes compared to finite difference methods. The model is parallelized using a graphics processing unit (GPU), which allows the simulation of individual ultrasound scan lines using a 256 × 256 × 128 voxel grid in less than five minutes. Several numerical examples are given, including the simulation of harmonic ultrasound images and beam patterns using a linear phased array transducer.
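The "spectral calculation of the spatial gradients" means differentiating by multiplying the signal's Fourier transform by ik, which is exact for band-limited periodic signals and is what lets pseudospectral methods use far coarser grids than finite differences. A naive 1-D sketch (an O(N²) DFT for clarity; real codes use FFTs, and special handling of the Nyquist component is omitted):

```python
# Spectral (Fourier) differentiation of a periodic signal: DFT, multiply by
# i*k, inverse DFT. Naive O(N^2) transforms for clarity only.
import cmath
import math

def spectral_derivative(f_samples, dx):
    """Return df/dx at the sample points of a periodic signal."""
    n = len(f_samples)
    # forward DFT
    F = [sum(f_samples[m] * cmath.exp(-2j * math.pi * k * m / n)
             for m in range(n)) for k in range(n)]
    # signed integer wavenumbers, converted to rad per unit length
    ks = [(k if k <= n // 2 else k - n) for k in range(n)]
    G = [1j * (2 * math.pi * ks[k] / (n * dx)) * F[k] for k in range(n)]
    # inverse DFT (real part; imaginary part is round-off for real input)
    return [sum(G[k] * cmath.exp(2j * math.pi * k * m / n)
                for k in range(n)).real / n for m in range(n)]
```

For a smooth signal like sin(x) the result matches the analytic derivative to near machine precision with only a couple of points per wavelength, whereas a finite-difference stencil of comparable accuracy would need a much denser grid, the source of the node-count reduction the abstract cites.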
The Language Grid: supporting intercultural collaboration
NASA Astrophysics Data System (ADS)
Ishida, T.
2018-03-01
A variety of language resources already exist online. Unfortunately, since many language resources have usage restrictions, it is virtually impossible for each user to negotiate with every language resource provider when combining several resources to achieve the intended purpose. To increase the accessibility and usability of language resources (dictionaries, parallel texts, part-of-speech taggers, machine translators, etc.), we proposed the Language Grid [1]: it wraps existing language resources as atomic services, enables users to create new services by combining the atomic services, and reduces the negotiation costs related to intellectual property rights [4]. Our slogan is “language services from language resources.” We believe that modularization with recombination is the key to creating a full range of customized language environments for various user communities.
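The composition idea, wrapping resources as atomic services with a uniform interface and chaining them into new services, can be sketched in a few lines. The dictionary contents and service names below are invented toys, not actual Language Grid resources:

```python
# Toy sketch of "language services from language resources": wrap each
# resource as a callable service, then compose services into new ones.
def make_dictionary_service(table):
    """Wrap a word-translation table as an atomic service (unknown words pass through)."""
    return lambda word: table.get(word, word)

def compose(*services):
    """Chain atomic services into a new composite service."""
    def composed(text):
        for service in services:
            text = service(text)
        return text
    return composed

ja_en = make_dictionary_service({"こんにちは": "hello"})   # invented resource
en_fr = make_dictionary_service({"hello": "bonjour"})      # invented resource
ja_fr = compose(ja_en, en_fr)   # a new service built from two atomic services
```

The real Language Grid does the analogous thing behind web-service interfaces, with the added benefit that usage-rights negotiation is handled once per wrapped resource instead of once per user and combination.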
Behboodi, Sahand; Chassin, David P.; Djilali, Ned; ...
2017-07-29
Coordinated operation of distributed thermostatic loads such as heat pumps and air conditioners can reduce energy costs and prevent grid congestion, while maintaining room temperatures in the comfort range set by consumers. This paper furthers efforts towards enabling thermostatically controlled loads (TCLs) to participate in real-time retail electricity markets under a transactive control paradigm. An agent-based approach is used to develop an effective and low-complexity demand response control scheme for TCLs. The proposed scheme adjusts aggregated thermostatic loads according to real-time grid conditions under both heating and cooling modes. A case study is presented showing the method reduces consumer electricity costs by over 10% compared to uncoordinated operation.
NASA Astrophysics Data System (ADS)
Lazzari, R.; Parma, C.; De Marco, A.; Bittanti, S.
2015-07-01
In this paper, we describe a control strategy for a photovoltaic (PV) power plant equipped with a lithium-ion battery energy storage system (ESS). The plant consists of the following units: the PV generator, the energy storage system, the DC bus and the inverter. The control, organised hierarchically, maximises self-consumption by the local load unit. In particular, the ESS performs power balancing in case of low solar radiation or surplus PV generation, thus managing the variability of the power exchanged between the plant and the grid. The implemented control strategy is under testing at the RSE pilot test facility in Milan, Italy.
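The ESS power-balancing role described above can be sketched as a one-step self-consumption dispatch rule: charge the battery on PV surplus, discharge on deficit, within power and state-of-charge limits, and exchange only the remainder with the grid. This is an illustrative rule with assumed limits, not RSE's actual hierarchical controller:

```python
# One step of a self-consumption ESS dispatch rule (illustrative logic).
def ess_dispatch(p_pv, p_load, soc, capacity, p_max, dt):
    """p_pv/p_load in kW, soc/capacity in kWh, p_max = battery power limit, dt in h.
    Returns (battery power: +charge/-discharge, new soc, grid power: +export/-import)."""
    surplus = p_pv - p_load                       # >0: PV surplus, <0: deficit
    if surplus >= 0:
        p_batt = min(surplus, p_max, (capacity - soc) / dt)   # charge, capped
    else:
        p_batt = -min(-surplus, p_max, soc / dt)              # discharge, capped
    p_grid = surplus - p_batt                     # residual exchange with the grid
    return p_batt, soc + p_batt * dt, p_grid
```

Iterating this rule over a day's PV and load profiles reproduces the behaviour the abstract describes: the battery absorbs the surplus and covers low-irradiation periods, so the grid sees a much smoother exchange.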