Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-13
... Deployment Analysis Report Review; Notice of Public Meeting AGENCY: Research and Innovative Technology... discuss the Connected Vehicle Infrastructure Deployment Analysis Report. The webinar will provide an... and Transportation Officials (AASHTO) Connected Vehicle Infrastructure Deployment Analysis Report...
AASHTO connected vehicle infrastructure deployment analysis.
DOT National Transportation Integrated Search
2011-06-17
This report describes a deployment scenario for Connected Vehicle infrastructure by state and local transportation agencies, together with a series of strategies and actions to be performed by AASHTO to support application development and deployment.
Intelligent Transportation Infrastructure Deployment Analysis System
DOT National Transportation Integrated Search
1997-01-01
Much of the work on Intelligent Transportation Systems (ITS) to date has emphasized technologies, standards/protocols, architecture, user services, core infrastructure requirements, and various other technical and institutional issues. ITS implementa...
Regional Charging Infrastructure for Plug-In Electric Vehicles: A Case Study of Massachusetts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Eric; Raghavan, Sesha; Rames, Clement
Given the complex issues associated with plug-in electric vehicle (PEV) charging and options in deploying charging infrastructure, there is interest in exploring scenarios of future charging infrastructure deployment to provide insight and guidance to national and regional stakeholders. The complexity and cost of PEV charging infrastructure pose challenges to decision makers, including individuals, communities, and companies considering infrastructure installations. The value of PEVs to consumers and fleet operators can be increased with well-planned and cost-effective deployment of charging infrastructure. This will increase the number of miles driven electrically and accelerate PEV market penetration, increasing the shared value of charging networks to an expanding consumer base. Given these complexities and challenges, the objective of the present study is to provide additional insight into the role of charging infrastructure in accelerating PEV market growth. To that end, existing studies on PEV infrastructure are summarized in a literature review. Next, an analysis of current markets is conducted with a focus on correlations between PEV adoption and public charging availability. A forward-looking case study is then conducted focused on supporting 300,000 PEVs by 2025 in Massachusetts. The report concludes with a discussion of potential methodology for estimating economic impacts of PEV infrastructure growth.
Alternative Fuels Data Center: Smith Dairy Deploys Natural Gas Vehicles and Fueling Infrastructure in the Midwest
Impact of public electric vehicle charging infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levinson, Rebecca S.; West, Todd H.
2017-10-16
Our work uses market analysis and simulation to explore the potential of public charging infrastructure to spur US battery electric vehicle (BEV) sales, increase national electrified mileage, and lower greenhouse gas (GHG) emissions. By employing both scenario and parametric analysis for policy-driven injection of public charging stations, we find the following: (1) For large deployments of public chargers, DC fast chargers are more effective than level 2 chargers at increasing BEV sales, increasing electrified mileage, and lowering GHG emissions, even if only one DC fast charging station can be built for every ten level 2 charging stations. (2) A national initiative to build DC fast charging infrastructure will see diminishing returns on investment at approximately 30,000 stations. (3) Some infrastructure deployment costs can be defrayed by passing them back to electric vehicle consumers, but once those costs to the consumer reach the equivalent of approximately 12¢/kWh for all miles driven, almost all gains to BEV sales and GHG emissions reductions from infrastructure construction are lost.
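The diminishing-returns finding lends itself to a quick numerical illustration. The sketch below is not the authors' market model; it only shows how a saturating response curve makes the marginal benefit of each additional station fall off. All parameter values (max_gain, k) are hypothetical placeholders.

```python
# Toy illustration (not the paper's simulation): diminishing returns on
# public DC fast-charger deployment, using a saturating-exponential response.
import math

def incremental_bev_sales(stations, max_gain=500_000, k=1 / 15_000):
    """Hypothetical annual BEV-sales gain as a function of station count."""
    return max_gain * (1 - math.exp(-k * stations))

for n in (5_000, 15_000, 30_000, 60_000):
    marginal = incremental_bev_sales(n + 1_000) - incremental_bev_sales(n)
    print(f"{n:>6} stations: gain {incremental_bev_sales(n):>9,.0f}, "
          f"marginal per extra 1,000 stations: {marginal:,.0f}")
```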
DOT National Transportation Integrated Search
The purpose of this report, "Working Paper National Costs of the Metropolitan ITS Infrastructure: Updated with 2004 Deployment Data," is to update the estimates of the costs remaining to deploy Intelligent Transportation Systems (ITS) infrastructure ...
Managing a tier-2 computer centre with a private cloud infrastructure
NASA Astrophysics Data System (ADS)
Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara
2014-06-01
In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.
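Exposing EC2-compatible APIs means standard cloud clients can drive such a private infrastructure. A minimal sketch, assuming an EC2-compatible front end (for example, OpenNebula's econe service) is reachable at a site-local endpoint; the URL, credentials, and image ID below are hypothetical.

```python
# Sketch only (not INFN-Torino's actual tooling): talk to a private cloud
# through its EC2-compatible API by overriding the endpoint in boto3.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://cloud.example.infn.it:4567",  # hypothetical endpoint
    aws_access_key_id="USER_KEY",                       # placeholder credentials
    aws_secret_access_key="USER_SECRET",
    region_name="site-local",
)

# Request one virtual worker node. EC2-compatible front ends typically
# implement only a subset of the full EC2 API, so keep the calls basic.
resp = ec2.run_instances(ImageId="ami-worker", MinCount=1, MaxCount=1,
                         InstanceType="m1.small")
print(resp["Instances"][0]["InstanceId"])
```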
Importance of biometrics to addressing vulnerabilities of the U.S. infrastructure
NASA Astrophysics Data System (ADS)
Arndt, Craig M.; Hall, Nathaniel A.
2004-08-01
Human identification technologies are important threat countermeasures in minimizing select infrastructure vulnerabilities. Properly targeted countermeasures should be selected and integrated into an overall security solution based on disciplined analysis and modeling. Available data on infrastructure value, threat intelligence, and system vulnerabilities are carefully organized, analyzed and modeled. Prior to the design and deployment of an effective countermeasure, the proper role and appropriateness of technology in addressing the overall set of vulnerabilities is established. Deployment of biometric systems, as with other countermeasures, introduces potentially heightened vulnerabilities into the system. Heightened vulnerabilities may arise from both the newly introduced system complexities and an unfocused understanding of the set of vulnerabilities impacted by the new countermeasure. The countermeasure's own inherent vulnerabilities, and those introduced by its integration with the existing system, are analyzed and modeled to determine the overall vulnerability impact. The United States infrastructure is composed of government and private assets. These assets are valued by their potential impact on several components: human physical safety, physical/information replacement/repair cost, potential contribution to future loss (criticality in weapons production), direct productivity output, national macro-economic output/productivity, and information integrity. These components must be considered in determining the overall impact of an infrastructure security breach. Cost/benefit analysis is then incorporated in the security technology deployment decision process. Overall security risks, based on system vulnerabilities and threat intelligence, determine areas of potential benefit. Biometric countermeasures are often considered when additional security at intended points of entry would minimize vulnerabilities.
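The cost/benefit logic described can be summarized as an annualized expected-loss comparison. The sketch below uses the common annual-loss-expectancy formulation (asset value x threat likelihood x vulnerability); every number in it is hypothetical.

```python
# Hypothetical figures illustrating the paper's cost/benefit decision logic:
# compare annualized expected loss before and after a biometric countermeasure.
def annual_loss_expectancy(asset_value, threat_rate, vulnerability):
    return asset_value * threat_rate * vulnerability

before = annual_loss_expectancy(50e6, 0.10, 0.30)  # unprotected entry point
after = annual_loss_expectancy(50e6, 0.10, 0.05)   # with biometric screening
countermeasure_cost = 1.2e6                        # annualized deployment cost

net_benefit = (before - after) - countermeasure_cost
print(f"risk reduced {before - after:,.0f}/yr; net benefit {net_benefit:,.0f}/yr")
```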
ERIC Educational Resources Information Center
Greenhalgh-Spencer, Heather; Jerbi, Moja
2017-01-01
In this paper, we provide a design-actuality gap analysis comparing the internet infrastructure that exists in developing nations and nations in the global South with the deployed information and communication technology (ICT)-assisted programs that are designed to use internet infrastructure to provide educational opportunities. Programs that specifically…
Transmission Infrastructure | Energy Analysis | NREL
NREL analyzes transmission infrastructure planning and expansion to enable large-scale deployment of renewable energy in the future, including aggregating geothermal with other complementary generating technologies in renewable energy zones. Stakeholders include the Department of Energy, FERC, NERC, the regional entities, transmission providers, generating companies, and utilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Eric W; Rames, Clement L; Bedir, Abdulkadir
This report analyzes plug-in electric vehicle (PEV) infrastructure needs in California from 2017 to 2025 in a scenario where the State's zero-emission vehicle (ZEV) deployment goals are achieved by household vehicles. The statewide infrastructure needs are evaluated by using the Electric Vehicle Infrastructure Projection tool, which incorporates representative statewide travel data from the 2012 California Household Travel Survey. The infrastructure solution presented in this assessment addresses two primary objectives: (1) enabling travel for battery electric vehicles and (2) maximizing the electric vehicle-miles traveled for plug-in hybrid electric vehicles. The analysis is performed at the county level for each year between 2017 and 2025 while considering potential technology improvements. The results from this study present an infrastructure solution that can facilitate market growth for PEVs to reach the State's ZEV goals by 2025. The overall results show a need for 99k-130k destination chargers, including workplaces and public locations, and 9k-25k fast chargers. The results also show a need for dedicated or shared residential charging solutions at multi-family dwellings, which are expected to host about 120k PEVs by 2025. Improving on the existing literature, this analysis demonstrates the significance of infrastructure reliability and accessibility in quantifying charger demand.
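The relationship between fleet size and charger counts can be conveyed with back-of-the-envelope capacity math; this is not the EVI-Pro methodology, and all inputs below are hypothetical, chosen only so the outputs land near the report's ranges.

```python
# Toy charger-sizing rule (not EVI-Pro): charger demand scales with fleet
# size, charging-session frequency, and per-charger throughput.
def chargers_needed(pevs, sessions_per_pev_per_week, sessions_per_charger_per_week):
    return pevs * sessions_per_pev_per_week / sessions_per_charger_per_week

FLEET = 1_500_000  # hypothetical 2025 statewide PEV fleet

dest = chargers_needed(FLEET, 2.0, 25)  # destination (L2) charging
fast = chargers_needed(FLEET, 0.5, 40)  # DC fast charging
print(f"destination chargers ~{dest:,.0f}, fast chargers ~{fast:,.0f}")
```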
The CARMEN software as a service infrastructure.
Weeks, Michael; Jessop, Mark; Fletcher, Martyn; Hodge, Victoria; Jackson, Tom; Austin, Jim
2013-01-28
The CARMEN platform allows neuroscientists to share data, metadata, services and workflows, and to execute these services and workflows remotely via a Web portal. This paper describes how we implemented a service-based infrastructure into the CARMEN Virtual Laboratory. A Software as a Service framework was developed to allow generic new and legacy code to be deployed as services on a heterogeneous execution framework. Users can submit analysis code, typically written in Matlab, Python, C/C++ or R, as non-interactive standalone command-line applications and wrap them as services in a form suitable for deployment on the platform. The CARMEN Service Builder tool enables neuroscientists to quickly wrap their analysis software as services for deployment to the CARMEN platform, without knowledge of the service framework or the CARMEN system. A metadata schema describes each service in terms of both system and user requirements. The search functionality allows services to be quickly discovered from the many services available. Within the platform, services may be combined into more complicated analyses using the workflow tool. CARMEN and the service infrastructure are targeted towards the neuroscience community; however, it is a generic platform and can be targeted towards any discipline.
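The wrapping pattern, pairing a metadata record with a uniform command-line invocation, can be sketched generically as below. This is not the actual CARMEN Service Builder API; the spike_detect tool and the metadata keys are invented for illustration.

```python
# Generic sketch of wrapping a non-interactive command-line analysis tool
# as a service: metadata describes it, run_service() invokes it uniformly.
import shlex
import subprocess

SERVICE_METADATA = {
    "identifier": "spike-detect",                     # hypothetical service name
    "command": "spike_detect --in {input} --out {output}",
    "runtime": "C/C++ binary",                        # per-service requirements
}

def run_service(meta, input_path, output_path):
    # Batch-style invocation: no interaction, exit code signals success.
    cmd = meta["command"].format(input=input_path, output=output_path)
    subprocess.run(shlex.split(cmd), check=True)
    return output_path
```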
DOT National Transportation Integrated Search
2006-07-01
The purpose of this report, "Working Paper National Costs of the Metropolitan ITS Infrastructure: Updated with 2005 Deployment Data," is to update the estimates of the costs remaining to fully deploy Intelligent Transportation Systems (ITS) infrastru...
Tracking the deployment of the integrated metropolitan ITS infrastructure in the USA : FY99 results
DOT National Transportation Integrated Search
2000-05-01
This report describes the results of a major data gathering effort aimed at tracking deployment of nine infrastructure components of the metropolitan ITS infrastructure in 78 of the largest metropolitan areas in the nation. The nine components are: F...
Galaxy CloudMan: delivering cloud compute clusters.
Afgan, Enis; Baker, Dannon; Coraor, Nate; Chapman, Brad; Nekrutenko, Anton; Taylor, James
2010-12-21
Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is "cloud computing", which, in principle, offers on-demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate "as is" use by experimental biologists. We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon's EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add to or customize an otherwise available cloud system to better meet their needs. The knowledge and effort required to deploy a compute cluster in the Amazon EC2 cloud are not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge.
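Under the hood, this style of cluster bootstrap rests on launching an instance whose image reads contextualization data at first boot. A minimal sketch using boto3 (CloudMan users would normally go through the web UI instead); the AMI ID and user-data keys are placeholders, not CloudMan's actual contract.

```python
# Illustrative pattern only: boot an EC2 instance with user data that the
# image consumes at first boot to configure itself into a cluster head node.
import boto3

user_data = "cluster_name: demo\npassword: CHANGE_ME\n"  # contextualization data

ec2 = boto3.resource("ec2", region_name="us-east-1")
instances = ec2.create_instances(
    ImageId="ami-00000000",   # placeholder for a CloudMan-style machine image
    InstanceType="m5.large",
    MinCount=1, MaxCount=1,
    UserData=user_data,       # read by the image on first boot
)
print(instances[0].id)
```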
Infrastructure for deployment of power systems
NASA Technical Reports Server (NTRS)
Sprouse, Kenneth M.
1991-01-01
A preliminary effort in characterizing the types of stationary lunar power systems which may be considered for emplacement on the lunar surface from the proposed initial 100-kW unit in 2003 to later units ranging in power from 25 to 825 kW is presented. Associated with these power systems are their related infrastructure hardware including: (1) electrical cable, wiring, switchgear, and converters; (2) deployable radiator panels; (3) deployable photovoltaic (PV) panels; (4) heat transfer fluid piping and connection joints; (5) power system instrumentation and control equipment; and (6) interface hardware between lunar surface construction/maintenance equipment and power system. This report: (1) presents estimates of the mass and volumes associated with these power systems and their related infrastructure hardware; (2) provides task breakdown description for emplacing this equipment; (3) gives estimated heat, forces, torques, and alignment tolerances for equipment assembly; and (4) provides other important equipment/machinery requirements where applicable. Packaging options for this equipment will be discussed along with necessary site preparation requirements. Design and analysis issues associated with the final emplacement of this power system hardware are also described.
Cloud Environment Automation: from infrastructure deployment to application monitoring
NASA Astrophysics Data System (ADS)
Aiftimiei, C.; Costantini, A.; Bucchi, R.; Italiano, A.; Michelotto, D.; Panella, M.; Pergolesi, M.; Saletta, M.; Traldi, S.; Vistoli, C.; Zizzi, G.; Salomoni, D.
2017-10-01
The potential offered by the cloud paradigm is often limited by technical issues, rules, and regulations. In particular, the activities related to the design and deployment of the Infrastructure as a Service (IaaS) cloud layer can be difficult to apply and time-consuming for the infrastructure maintainers. In this paper we present the research activity, carried out during the Open City Platform (OCP) research project [1], aimed at designing and developing an automatic tool for cloud-based IaaS deployment. Open City Platform is an industrial research project funded by the Italian Ministry of University and Research (MIUR), started in 2014. It intends to research, develop and test new open, interoperable, on-demand technological solutions in the field of Cloud Computing, along with new sustainable organizational models that can be deployed for and adopted by the Public Administrations (PA). The presented work and the related outcomes are aimed at simplifying the deployment and maintenance of a complete IaaS cloud-based infrastructure.
Data near processing support for climate data analysis
NASA Astrophysics Data System (ADS)
Kindermann, Stephan; Ehbrecht, Carsten; Hempelmann, Nils
2016-04-01
Climate data repositories grow in size exponentially. Scalable data-near processing capabilities are required to meet future data analysis requirements and to replace current "download and process at home" workflows. On the one hand, these processing capabilities should be accessible via standardized interfaces (e.g. OGC WPS); on the other, a large variety of processing tools, toolboxes and deployment alternatives have to be supported and maintained at the data/processing center. We present a community approach: a modular and flexible system supporting the development, deployment and maintenance of OGC-WPS-based web processing services. This approach is organized in an open-source GitHub project (called "bird-house") supporting individual processing services ("birds", e.g. climate index calculations, model data ensemble calculations), which rely on basic common infrastructural components (e.g. installation and deployment recipes, analysis code dependency management). To support easy deployment at data centers as well as home institutes (e.g. for testing and development), the system manages the often very complex package dependency chains of climate data analysis packages and supports Docker-based packaging and installation. We present a concrete deployment scenario at the German Climate Computing Center (DKRZ). DKRZ hosts a multi-petabyte climate archive, integrated into the European ENES and worldwide ESGF data infrastructures, as well as an HPC center supporting (model) data production and data analysis. The deployment scenario also includes OpenStack-based data cloud services to support data import and data distribution for bird-house-based WPS web processing services. Current challenges for inter-institutional deployments of web processing services supporting the European and international climate modeling community, as well as the climate impact community, are highlighted. Aspects supporting future WPS-based cross-community usage scenarios, including data reuse and data provenance, are also discussed.
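Individual "birds" are implemented as web processing services; the sketch below shows what a minimal process might look like, assuming the PyWPS 4 API that the bird-house stack builds on. The ice_days process, its inputs, and its trivial logic are invented for illustration.

```python
# Minimal WPS process sketch in the PyWPS 4 style (hypothetical "bird").
from pywps import Process, LiteralInput, LiteralOutput

class IceDays(Process):
    """Toy climate-index process: flags a year whose annual maximum of
    daily-max temperature stays below freezing."""

    def __init__(self):
        super().__init__(
            self._handler,
            identifier="ice_days",
            title="Ice-year flag from annual max temperature",
            inputs=[LiteralInput("tasmax_max",
                                 "Annual max of daily max temperature (degC)",
                                 data_type="float")],
            outputs=[LiteralOutput("is_ice_year", "1 if never above freezing",
                                   data_type="integer")],
        )

    def _handler(self, request, response):
        tmax = request.inputs["tasmax_max"][0].data
        response.outputs["is_ice_year"].data = int(tmax < 0.0)
        return response
```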
Wireless intelligent network: infrastructure before services?
NASA Astrophysics Data System (ADS)
Chu, Narisa N.
1996-01-01
The Wireless Intelligent Network (WIN) intends to take advantage of the Advanced Intelligent Network (AIN) concepts and products developed from wireline communications. However, progress of AIN deployment has been slow due to the many barriers that exist in the traditional wireline carriers' deployment procedures and infrastructure, and the success of AIN has not been truly demonstrated. The AIN objectives and directions are applicable to the wireless industry, although the plans and implementations could be significantly different. This paper points out WIN characteristics in architecture, flexibility, deployment, and value to customers. In order to succeed, the technology-driven AIN concept has to be reinforced by market-driven WIN services. An infrastructure suitable for the WIN will contain elements that are foreign to the wireline network. The deployment process is expected to be seeded with revenue-generating services. Standardization will be achieved by simplifying and incorporating the IS-41C, AIN, and Intelligent Network CS-1 recommendations. Integration of the existing and future systems poses the biggest challenge of all. Service creation has to be complemented with a service deployment process, which heavily impacts the carriers' infrastructure. WIN deployment will likely start from an Intelligent Peripheral and a Service Control Point, and migrate to a Service Node when sufficient triggers are implemented in the mobile switch for distributed call control. The struggle to move forward will be based not on technology, but rather on the impact to existing infrastructure.
DOT National Transportation Integrated Search
2003-01-01
In January 1996, the Secretary of Transportation set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2005. Using data from surveys administered...
DOT National Transportation Integrated Search
2005-07-01
In January 1996, the Secretary of Transportation set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2005. Using data from surveys administered...
More Than 1,000 Fuel Cell Units Deployed Through DOE ARRA Funding (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This NREL Hydrogen and Fuel Cell Technical Highlight describes how early market end users are operating 1,111 fuel cell units at 301 sites in 20 states with funding from the U.S. Department of Energy Fuel Cell Technologies Program and analysis by NREL. The American Recovery and Reinvestment Act (ARRA) funded the deployment of approximately 1,000 fuel cell systems in key early markets to accelerate the commercialization and deployment of fuel cells and fuel cell manufacturing, installation, maintenance, and support services. In support of the ARRA fuel cell deployment objectives, NREL analyzes and validates the technology in real-world applications, reports on the technology status, and facilitates the development of fuel cell technologies, manufacturing, and operations in strategic markets (including material handling equipment, backup power, and stationary power) where fuel cells can compete with conventional technologies. NREL is validating hydrogen and fuel cell systems in real-world settings through data collection, analysis, and reporting. The fuel cell and infrastructure analysis provides an independent, third-party assessment that focuses on fuel cell system and hydrogen infrastructure performance, operation, maintenance, use, and safety. An objective of the ARRA fuel cell project, to deploy approximately 1,000 fuel cell systems in key early markets, has been met in two years. By the end of 2011, 504 material handling equipment (MHE) fuel cell units were operating at 8 facilities and 607 backup power fuel cell units were operating at 293 sites. MHE and backup power are two markets where fuel cells are capable of meeting the operating demands, and deployments can be leveraged to accelerate fuel cell commercialization.
Modeling Hydrogen Refueling Infrastructure to Support Passenger Vehicles
Muratori, Matteo; Bush, Brian; Hunter, Chad; ...
2018-05-07
The year 2014 marked hydrogen fuel cell electric vehicles (FCEVs) first becoming commercially available in California, where significant investments are being made to promote the adoption of alternative transportation fuels. A refueling infrastructure network that guarantees adequate coverage and expands in line with vehicle sales is required for FCEVs to be successfully adopted by private customers. In this article, we provide an overview of modelling methodologies used to project hydrogen refueling infrastructure requirements to support FCEV adoption, and we describe, in detail, the National Renewable Energy Laboratory's scenario evaluation and regionalization analysis (SERA) model. As an example, we use SERA to explore two alternative scenarios of FCEV adoption: one in which FCEV deployment is limited to California and several major cities in the United States; and one in which FCEVs reach widespread adoption, becoming a major option as passenger vehicles across the entire country. Such scenarios can provide guidance and insights for efforts required to deploy the infrastructure supporting transition toward different levels of hydrogen use as a transportation fuel for passenger vehicles in the United States.
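The flavor of such infrastructure projections can be conveyed with a toy coverage-plus-capacity rule; this is not SERA, which is far richer, and all figures below are hypothetical.

```python
# Toy station projection (not SERA): the station count must satisfy both a
# minimum geographic-coverage floor and aggregate daily fuel demand.
def stations_required(fcevs, kg_per_fcev_per_day=0.7,
                      kg_per_station_per_day=400, coverage_floor=10):
    capacity_need = fcevs * kg_per_fcev_per_day / kg_per_station_per_day
    return max(round(capacity_need), coverage_floor)

for year, fleet in [(2020, 8_000), (2025, 60_000), (2030, 250_000)]:
    print(year, stations_required(fleet), "stations")
```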
Tracking the deployment of the integrated metropolitan ITS infrastructure in Columbus : FY99 results
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...
Tracking the deployment of the integrated metropolitan ITS infrastructure in Fresno : FY99 results
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...
Tracking the deployment of the integrated metropolitan ITS infrastructure in Wichita : FY99 results
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...
Tracking the deployment of the integrated metropolitan ITS infrastructure in Phoenix : FY99 results
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...
Tracking the deployment of the integrated metropolitan ITS infrastructure in Orlando : FY99 results
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...
Tracking the deployment of the integrated metropolitan ITS infrastructure in Austin : FY99 results
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...
Tracking the deployment of the integrated metropolitan ITS infrastructure in Toledo : FY99 results
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...
Tracking the deployment of the integrated metropolitan ITS infrastructure in Honolulu : FY99 results
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...
Tracking the deployment of the integrated metropolitan ITS infrastructure in Memphis : FY99 results
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...
Tracking the deployment of the integrated metropolitan ITS infrastructure in Tulsa : FY99 results
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...
Tracking the deployment of the integrated metropolitan ITS infrastructure in Atlanta : FY99 results
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...
Tracking the deployment of the integrated metropolitan ITS infrastructure in Syracuse : FY99 results
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...
Tracking the deployment of the integrated metropolitan ITS infrastructure in Dallas : FY99 results
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...
Tracking the deployment of the integrated metropolitan ITS infrastructure in Omaha : FY99 results
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...
Electric Sector Integration | Energy Analysis | NREL
NREL investigates the potential impacts of expanding renewable technology deployment on grid operations. Related topics include electric system flexibility and storage, impacts on conventional generators, transmission infrastructure, and generation. Our grid integration studies use state-of-the-art modeling and analysis to evaluate these impacts.
ATLAS user analysis on private cloud resources at GoeGrid
NASA Astrophysics Data System (ADS)
Glaser, F.; Nadal Serrano, J.; Grabowski, J.; Quadt, A.
2015-12-01
User analysis job demands can exceed available computing resources, especially before major conferences, and ATLAS physics results can potentially be delayed by the lack of resources. For these reasons, cloud research and development activities are now included in the skeleton of the ATLAS computing model, which has been extended to use resources from commercial and private cloud providers to satisfy demand. However, most of these activities focus on Monte Carlo production jobs, extending the resources at Tier-2. To evaluate the suitability of the cloud-computing model for user analysis jobs, we developed a framework to launch an ATLAS user analysis cluster in a cloud infrastructure on demand, and evaluated two solutions. The first solution is entirely integrated in the Grid infrastructure using the same mechanism already in use at Tier-2: a designated PanDA queue is monitored, and additional worker nodes are launched in a cloud environment and assigned to a corresponding HTCondor queue according to demand. The use of cloud resources is thereby completely transparent to the user. However, with this approach, submitted user analysis jobs can still suffer a certain delay from waiting time in the queue, and the deployed infrastructure lacks customizability. Our second solution therefore offers the possibility to easily deploy a totally private, customizable analysis cluster on private cloud resources belonging to the university.
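The first solution reduces to a simple control loop: watch the depth of a designated queue and boot workers when the backlog grows. A schematic sketch with the monitoring and launch back ends stubbed out; the thresholds and function names are illustrative, not the paper's framework.

```python
# Schematic queue-driven autoscaler: poll a batch queue, scale out in small
# steps when idle jobs exceed a threshold. Back ends are stubbed.
import time

def idle_jobs_in_queue() -> int:
    raise NotImplementedError  # e.g., query the monitored batch/PanDA queue

def launch_worker_node() -> None:
    raise NotImplementedError  # e.g., boot a VM through the cloud API

def autoscale(threshold=10, batch=5, poll_seconds=60):
    while True:
        backlog = idle_jobs_in_queue()
        if backlog > threshold:
            for _ in range(batch):  # conservative, stepwise scale-out
                launch_worker_node()
        time.sleep(poll_seconds)
```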
Tracking the deployment of the integrated metropolitan ITS infrastructure in San Juan : FY99 results
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...
Tracking the deployment of the integrated metropolitan ITS infrastructure in El Paso : FY99 results
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Peña set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation ini...
77 FR 36903 - Accelerating Broadband Infrastructure Deployment
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-20
... coordination with the Chief Performance Officer (CPO). (b) The Working Group shall be composed of: (i) a... broadband infrastructure. Sec. 2. Broadband Deployment on Federal Property Working Group. (a) In order to... Property Working Group (Working Group), to be co-chaired by representatives designated by the Administrator...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Francfort, Jim; Bennett, Brion; Carlson, Richard
2015-09-01
Battelle Energy Alliance, LLC, managing and operating contractor for the U.S. Department of Energy's (DOE) Idaho National Laboratory (INL), is the lead laboratory for DOE's Advanced Vehicle Testing Activity (AVTA). INL's conduct of the AVTA produced a significant base of knowledge and experience in testing light-duty vehicles that reduce transportation-related petroleum consumption. Because of this experience, DOE tasked INL with developing agreements with recipients of American Recovery and Reinvestment Act of 2009 (ARRA) grants that would allow INL to collect raw data from light-duty vehicles and charging infrastructure. INL developed non-disclosure agreements (NDAs) with several companies and their partners that allowed INL to receive raw data via server-to-server connections from the partner companies. This raw data allowed INL to independently conduct data quality checks, perform analysis, and report publicly to DOE, partners, and stakeholders on how drivers used both new vehicle technologies and the deployed charging infrastructure. The ultimate goal was not the deployment of vehicles and charging infrastructure, but rather to create real-world laboratories of vehicles, charging infrastructure, and drivers that would aid in the design of future electric drive transportation systems. The five projects that INL collected data from, with their partners, are: • ChargePoint America - Plug-in Electric Vehicle Charging Infrastructure Demonstration • Chrysler Ram PHEV Pickup - Vehicle Demonstration • General Motors Chevrolet Volt - Vehicle Demonstration • The EV Project - Plug-in Electric Vehicle Charging Infrastructure Demonstration • EPRI / Via Motors PHEVs - Vehicle Demonstration. The document benchmarks the execution, analysis, and reporting for the five projects above, which provided lessons learned based on drivers' use of the vehicles and their recharging decisions. Data is reported for more than 25,000 vehicles and charging units.
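To give a concrete flavor of the data quality checks described above, the sketch below flags physically implausible charging records before analysis. It is a minimal illustration in Python assuming a hypothetical charge_events.csv with start, end, and kwh columns; it is not INL's actual pipeline.

import pandas as pd

# Hypothetical input: one row per charging session (start, end, kwh).
df = pd.read_csv("charge_events.csv", parse_dates=["start", "end"])

# Flag sessions with non-positive duration or implausible energy; the
# 150 kWh ceiling is an assumed sanity bound, not a project threshold.
bad = (df["end"] <= df["start"]) | (df["kwh"] < 0) | (df["kwh"] > 150)
print(f"{int(bad.sum())} of {len(df)} sessions flagged for review")
clean = df.loc[~bad]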
DOT National Transportation Integrated Search
2017-11-23
The Federal Highway Administration (FHWA) has adapted the Transportation Systems Management and Operations (TSMO) Capability Maturity Model (CMM) to describe the operational maturity of Infrastructure Owner-Operator (IOO) agencies across a range of i...
Abstracting application deployment on Cloud infrastructures
NASA Astrophysics Data System (ADS)
Aiftimiei, D. C.; Fattibene, E.; Gargana, R.; Panella, M.; Salomoni, D.
2017-10-01
Deploying a complex application on a Cloud-based infrastructure can be a challenging task. In this contribution we present an approach for Cloud-based deployment of applications and its present or future implementation in the framework of several projects, such as "!CHAOS: a cloud of controls" [1], a project funded by MIUR (Italian Ministry of Research and Education) to create a Cloud-based deployment of a control system and data acquisition framework; "INDIGO-DataCloud" [2], an EC H2020 project targeting, among other things, high-level deployment of applications on hybrid Clouds; and "Open City Platform" [3], an Italian project aiming to provide open Cloud solutions for Italian Public Administrations. We chose to use an orchestration service to hide the complex deployment of the application components, and to build an abstraction layer on top of the orchestration one. Using the Heat [4] orchestration service, we prototyped a dynamic, on-demand, scalable platform of software components, based on OpenStack infrastructures. On top of the orchestration service we developed a prototype of a web interface exploiting the Heat APIs. The user can start an instance of the application without any knowledge of the underlying Cloud infrastructure and services. Moreover, the platform instance can be customized by choosing parameters related to the application, such as the size of a file system or the number of instances of a NoSQL DB cluster. As soon as the desired platform is running, the web interface offers the possibility to scale some infrastructure components. In this contribution we describe the solution design and implementation, based on the application requirements, and the details of the development of both the Heat templates and the web interface, together with possible exploitation strategies of this work in Cloud data centers.
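The pattern described above (a portal exposing only application-level knobs while Heat does the deployment) can be sketched as follows. This is a minimal illustration using python-heatclient and keystoneauth1; the endpoint, credentials, template, and parameter names are all hypothetical, not the project's actual code.

import yaml
from keystoneauth1.identity import v3
from keystoneauth1 import session
from heatclient import client as heat_client

# A tiny HOT template with the two user-facing knobs mentioned in the
# abstract: a file-system size and a (NoSQL) cluster size.
TEMPLATE = """
heat_template_version: 2016-04-08
parameters:
  db_cluster_size: {type: number, default: 3}
  fs_size_gb:      {type: number, default: 100}
resources:
  data_volume:
    type: OS::Cinder::Volume
    properties: {size: {get_param: fs_size_gb}}
"""

auth = v3.Password(auth_url="https://cloud.example.org:5000/v3",
                   username="demo", password="secret",
                   project_name="demo", user_domain_id="default",
                   project_domain_id="default")
heat = heat_client.Client('1', session=session.Session(auth=auth))

# The web interface would expose only these application-level parameters;
# Heat handles the underlying infrastructure deployment.
stack = heat.stacks.create(stack_name="platform-instance",
                           template=yaml.safe_load(TEMPLATE),
                           parameters={"db_cluster_size": 5,
                                       "fs_size_gb": 250})
print(stack)  # returns the new stack's id, usable later for scaling/deletion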
The Experiment Factory: Standardizing Behavioral Experiments.
Sochat, Vanessa V; Eisenberg, Ian W; Enkavi, A Zeynep; Li, Jamie; Bissett, Patrick G; Poldrack, Russell A
2016-01-01
The administration of behavioral and experimental paradigms for psychology research is hindered by lack of a coordinated effort to develop and deploy standardized paradigms. While several frameworks (Mason and Suri, 2011; McDonnell et al., 2012; de Leeuw, 2015; Lange et al., 2015) have provided infrastructure and methods for individual research groups to develop paradigms, missing is a coordinated effort to develop paradigms linked with a system to easily deploy them. This disorganization leads to redundancy in development, divergent implementations of conceptually identical tasks, disorganized and error-prone code lacking documentation, and difficulty in replication. The ongoing reproducibility crisis in psychology and neuroscience research (Baker, 2015; Open Science Collaboration, 2015) highlights the urgency of this challenge: reproducible research in behavioral psychology is conditional on deployment of equivalent experiments. A large, accessible repository of experiments for researchers to develop collaboratively is most efficiently accomplished through an open source framework. Here we present the Experiment Factory, an open source framework for the development and deployment of web-based experiments. The modular infrastructure includes experiments, virtual machines for local or cloud deployment, and an application to drive these components and provide developers with functions and tools for further extension. We release this infrastructure with a deployment (http://www.expfactory.org) that researchers are currently using to run a set of over 80 standardized web-based experiments on Amazon Mechanical Turk. By providing open source tools for both deployment and development, this novel infrastructure holds promise to bring reproducibility to the administration of experiments, and accelerate scientific progress by providing a shared community resource of psychological paradigms.
The Experiment Factory: Standardizing Behavioral Experiments
Sochat, Vanessa V.; Eisenberg, Ian W.; Enkavi, A. Zeynep; Li, Jamie; Bissett, Patrick G.; Poldrack, Russell A.
2016-01-01
The administration of behavioral and experimental paradigms for psychology research is hindered by lack of a coordinated effort to develop and deploy standardized paradigms. While several frameworks (Mason and Suri, 2011; McDonnell et al., 2012; de Leeuw, 2015; Lange et al., 2015) have provided infrastructure and methods for individual research groups to develop paradigms, missing is a coordinated effort to develop paradigms linked with a system to easily deploy them. This disorganization leads to redundancy in development, divergent implementations of conceptually identical tasks, disorganized and error-prone code lacking documentation, and difficulty in replication. The ongoing reproducibility crisis in psychology and neuroscience research (Baker, 2015; Open Science Collaboration, 2015) highlights the urgency of this challenge: reproducible research in behavioral psychology is conditional on deployment of equivalent experiments. A large, accessible repository of experiments for researchers to develop collaboratively is most efficiently accomplished through an open source framework. Here we present the Experiment Factory, an open source framework for the development and deployment of web-based experiments. The modular infrastructure includes experiments, virtual machines for local or cloud deployment, and an application to drive these components and provide developers with functions and tools for further extension. We release this infrastructure with a deployment (http://www.expfactory.org) that researchers are currently using to run a set of over 80 standardized web-based experiments on Amazon Mechanical Turk. By providing open source tools for both deployment and development, this novel infrastructure holds promise to bring reproducibility to the administration of experiments, and accelerate scientific progress by providing a shared community resource of psychological paradigms. PMID:27199843
First results from a combined analysis of CERN computing infrastructure metrics
NASA Astrophysics Data System (ADS)
Duellmann, Dirk; Nieke, Christian
2017-10-01
The IT Analysis Working Group (AWG) has been formed at CERN across individual computing units and the experiments to attempt a cross-cutting analysis of computing infrastructure and application metrics. In this presentation we will describe the first results obtained using medium/long term data (1 month to 1 year), correlating box-level metrics, job-level metrics from LSF and HTCondor, IO metrics from the physics analysis disk pools (EOS), and networking and application-level metrics from the experiment dashboards. We will cover in particular the measurement of hardware performance and prediction of job duration, the latency sensitivity of different job types, and a search for bottlenecks with the production job mix in the current infrastructure. The presentation will conclude with the proposal of a small set of metrics to simplify drawing conclusions, also in the more constrained environment of public cloud deployments.
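The core of such a cross-cutting analysis is joining per-job records with per-host metrics and looking for correlations. The fragment below is an illustrative sketch only (file and column names are assumptions, not the AWG's data model):

import pandas as pd

jobs = pd.read_csv("htcondor_jobs.csv")   # assumed columns: job_id, host, duration_s
hosts = pd.read_csv("box_metrics.csv")    # assumed columns: host, cpu_load, io_wait

# Join job durations to the metrics of the host they ran on, then compute
# rank correlations (robust to non-linear relationships).
merged = jobs.merge(hosts, on="host", how="inner")
print(merged[["duration_s", "cpu_load", "io_wait"]].corr(method="spearman"))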
INDIGO-DataCloud solutions for Earth Sciences
NASA Astrophysics Data System (ADS)
Aguilar Gómez, Fernando; de Lucas, Jesús Marco; Fiore, Sandro; Monna, Stephen; Chen, Yin
2017-04-01
INDIGO-DataCloud (https://www.indigo-datacloud.eu/) is a European Commission funded project aiming to develop a data and computing platform targeting scientific communities, deployable on multiple hardware and provisioned over hybrid (private or public) e-infrastructures. The development of INDIGO solutions covers the different layers in cloud computing (IaaS, PaaS, SaaS) and provides tools to exploit resources like HPC or GPGPUs. INDIGO is oriented to support European scientific research communities, which are well represented in the project. Twelve different Case Studies have been analyzed in detail from different fields: Biological & Medical sciences, Social sciences & Humanities, Environmental and Earth sciences, and Physics & Astrophysics. INDIGO-DataCloud provides solutions to emerging challenges in Earth Science, such as: -Enabling an easy deployment of community services at different cloud sites. Many Earth Science research infrastructures involve distributed observation stations across countries, and also have distributed data centers to support the corresponding data acquisition and curation. There is a need to easily deploy new data center services as the research infrastructure continues to expand. As an example, LifeWatch (ESFRI, Ecosystems and Biodiversity) uses INDIGO solutions to manage the deployment of services to perform complex hydrodynamics and water quality modelling over a Cloud Computing environment, predicting algae blooms, using Docker technology: TOSCA requirement description, Docker repository, Orchestrator for deployment, AAI (AuthN, AuthZ) and OneData (Distributed Storage System). -Supporting Big Data Analysis. Nowadays, many Earth Science research communities produce large amounts of data and are challenged by the difficulties of processing and analysing it. A climate model intercomparison data analysis case study for the European Network for Earth System Modelling (ENES) community has been set up, based on the Ophidia big data analysis framework and the Kepler workflow management system. Such services normally involve a large and distributed set of data and computing resources. In this regard, this case study exploits the INDIGO PaaS for a flexible and dynamic allocation of the resources at the infrastructural level. -Providing Distributed Data Storage Solutions. In order to allow scientific communities to perform heavy computation on huge datasets, INDIGO provides global data access solutions allowing researchers to access data in a distributed environment regardless of its location, and also to publish and share their research results with public or closed communities. INDIGO solutions that support access to distributed data storage (OneData) are being tested on EMSO infrastructure (Ocean Sciences and Geohazards) data. Another aspect of interest for the EMSO community is efficient data processing by exploiting INDIGO services like the PaaS Orchestrator. Further, for HPC exploitation, a new solution named udocker has been implemented, enabling users to execute Docker containers on supercomputers without requiring administration privileges. This presentation will overview INDIGO solutions that are interesting and useful for Earth Science communities and will show how they can be applied to other Case Studies.
Bernal-Delgado, Enrique; Estupiñán-Romero, Francisco
2018-01-01
The integration of different administrative data sources from a number of European countries has been shown useful in the assessment of unwarranted variations in health care performance. This essay describes the procedures used to set up a data infrastructure (e.g., data access and exchange, definition of the minimum common set of data required, and the development of the relational logic data model) and the methods used to produce trustworthy healthcare performance measurements (e.g., ontology standardisation and quality assurance analysis). The paper ends by providing some hints on how to use these lessons in an eventual European infrastructure for public health research and monitoring. Although the relational data infrastructure developed has proven accurate, effective for comparing health system performance across different countries, and efficient enough to deal with hundreds of millions of episodes, the logic data model might not be responsive if the European infrastructure aims to include electronic health records and carry out multi-cohort, multi-intervention comparative effectiveness research. The deployment of a distributed infrastructure based on semantic interoperability, where individual data remain in-country and open-access scripts for data management and analysis travel around the hubs composing the infrastructure, might be a sensible way forward.
Sampling Approaches for Multi-Domain Internet Performance Measurement Infrastructures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calyam, Prasad
2014-09-15
The next generation of high-performance networks being developed in DOE communities is critical for supporting current and emerging data-intensive science applications. The goal of this project is to investigate multi-domain network status sampling techniques and tools to measure and analyze performance, and thereby provide "network awareness" to end-users and network operators in DOE communities. We leverage the infrastructure and datasets available through perfSONAR, a multi-domain measurement framework that has been widely deployed in high-performance computing and networking communities; the DOE community is a core developer and the largest adopter of perfSONAR. Our investigations include development of semantic scheduling algorithms, measurement federation policies, and tools to sample multi-domain and multi-layer network status within perfSONAR deployments. We validate our algorithms and policies with end-to-end measurement analysis tools for various monitoring objectives such as network weather forecasting, anomaly detection, and fault diagnosis. In addition, we develop a multi-domain architecture for an enterprise-specific perfSONAR deployment that can implement monitoring-objective-based sampling and that adheres to any domain-specific measurement policies.
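One way to picture "semantic" sampling under a probe budget is to prioritize paths by how stale their last measurement is, weighted by how volatile the path has been historically. The toy sketch below illustrates the idea only; it is not the project's actual algorithm or a perfSONAR API.

import heapq, time

def schedule(paths, budget):
    """paths: dict mapping path -> (last_measured_ts, volatility in [0,1])."""
    now = time.time()
    # Higher staleness x volatility => measured sooner (negated for a min-heap).
    scored = [(-(now - ts) * (0.5 + vol), p)
              for p, (ts, vol) in paths.items()]
    heapq.heapify(scored)
    return [heapq.heappop(scored)[1] for _ in range(min(budget, len(scored)))]

paths = {"anl->ornl": (time.time() - 3600, 0.8),   # stale and volatile
         "anl->lbnl": (time.time() - 600, 0.2)}    # fresh and stable
print(schedule(paths, budget=1))  # picks the stale, volatile path first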
Integration of XRootD into the cloud infrastructure for ALICE data analysis
NASA Astrophysics Data System (ADS)
Kompaniets, Mikhail; Shadura, Oksana; Svirin, Pavlo; Yurchenko, Volodymyr; Zarochentsev, Andrey
2015-12-01
Cloud technologies allow easy load balancing between different tasks and projects. From the viewpoint of data analysis in the ALICE experiment, a cloud allows software to be deployed using the CERN Virtual Machine (CernVM) and the CernVM File System (CVMFS), to run different (including outdated) versions of software for long-term data preservation, and to dynamically allocate resources for different computing activities, e.g. a grid site, the ALICE Analysis Facility (AAF), and possible usage for local projects or other LHC experiments. We present a cloud solution for Tier-3 sites based on OpenStack and Ceph distributed storage with an integrated XRootD-based storage element (SE). One of the key features of the solution is that Ceph is used as a backend for the Cinder Block Storage service for OpenStack and, at the same time, as a storage backend for XRootD, with redundancy and availability of data preserved by the Ceph settings. For faster and easier OpenStack deployment, the Packstack solution, based on the Puppet configuration management system, was applied. Ceph installation and configuration operations are structured, converted to Puppet manifests describing node configurations, and integrated into Packstack. This solution can be easily deployed, maintained, and used even by small groups with limited computing resources and small organizations, which usually lack IT support. The proposed infrastructure has been tested on two different clouds (SPbSU & BITP) and integrates successfully with the ALICE data analysis model.
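As a rough illustration of how an analysis client reads from such an XRootD-fronted storage element, the fragment below uses the XRootD Python bindings; the endpoint and file path are hypothetical, and this is a sketch rather than the paper's setup.

from XRootD import client

# Open a file on a (hypothetical) XRootD storage element backed by Ceph.
f = client.File()
status, _ = f.open("root://se.example.org//eos/demo/test.root")
if status.ok:
    status, data = f.read(0, 1024)   # read the first 1 kB (offset, size)
    print(len(data), "bytes read")
    f.close()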
NASA Astrophysics Data System (ADS)
Jiménez-Redondo, Noemi; Calle-Cordón, Alvaro; Kandler, Ute; Simroth, Axel; Morales, Francisco J.; Reyes, Antonio; Odelius, Johan; Thaduri, Aditya; Morgado, Joao; Duarte, Emmanuele
2017-09-01
The on-going H2020 project INFRALERT aims to increase rail and road infrastructure capacity in the current context of increased transportation demand by developing and deploying solutions to optimise the planning of maintenance interventions. It includes two real pilots for road and railway infrastructure. INFRALERT develops an ICT platform (the expert-based Infrastructure Management System, eIMS) which follows a modular approach including several expert-based toolkits. This paper presents the methodologies and preliminary results of the toolkits for i) nowcasting and forecasting of asset condition, ii) alert generation, iii) RAMS & LCC analysis, and iv) decision support. The results of these toolkits in a meshed road network in Portugal under the jurisdiction of Infraestruturas de Portugal (IP) are presented, showing the capabilities of the approaches.
A service-based BLAST command tool supported by cloud infrastructures.
Carrión, Abel; Blanquer, Ignacio; Hernández, Vicente
2012-01-01
Notwithstanding the benefits of distributed-computing infrastructures for empowering bioinformatics analysis tools with the needed computing and storage capability, the actual use of these infrastructures is still low. Learning curves and deployment difficulties have reduced their impact on the wider research community. This article presents a porting strategy for BLAST based on a multiplatform client and a service that provides the same interface as sequential BLAST, thus reducing the learning curve, with minimal impact on integration into existing workflows. The porting has been done using the execution and data access components from the EC project Venus-C and the Windows Azure infrastructure provided in this project. The results obtained demonstrate a low overhead on the global execution framework and reasonable speed-up and cost-efficiency with respect to a sequential version.
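The porting idea is to keep the familiar command-line contract of sequential BLAST while the execution backend can be swapped for a remote service. The sketch below assumes NCBI BLAST+ (blastn) is installed locally and uses hypothetical file names; it is an illustration of the interface-preserving wrapper idea, not the Venus-C client itself.

import subprocess

def blast(query_fasta, db, out_file, outfmt=6):
    # Same flags a user of sequential BLAST already knows; a service-backed
    # version would submit this command remotely instead of running it here.
    cmd = ["blastn", "-query", query_fasta, "-db", db,
           "-out", out_file, "-outfmt", str(outfmt)]
    subprocess.run(cmd, check=True)

blast("reads.fa", "nt_local", "hits.tsv")  # hypothetical inputs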
Civil infrastructure monitoring for IVHS using optical fiber sensors
NASA Astrophysics Data System (ADS)
de Vries, Marten J.; Arya, Vivek; Grinder, C. R.; Murphy, Kent A.; Claus, Richard O.
1995-01-01
Early deployment of Intelligent Vehicle Highway Systems would necessitate the internal instrumentation of infrastructure for emergency preparedness. Existing quantitative and visual analysis techniques are time-consuming, cost-prohibitive, and often unreliable. Fiber optic sensors are rapidly replacing conventional instrumentation because of their small size, light weight, immunity to electromagnetic interference, and extremely high information-carrying capability. In this paper, research on novel optical fiber sensing techniques for health monitoring of civil infrastructure such as highways and bridges is reported. Design, fabrication, and implementation of fiber optic sensor configurations used for measurements of strain are discussed. Results from field tests conducted to demonstrate the effectiveness of fiber sensors at determining quantitative strain vector components near crack locations in bridges are presented. Emerging applications of fiber sensors for vehicle flow, vehicle speed, and weigh-in-motion measurements are also discussed.
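A common concrete realization of such fiber strain sensing is the fibre Bragg grating (FBG). For orientation, the standard FBG strain relation is shown below; this is a textbook relation, not necessarily the exact sensor configuration used in this paper:

\frac{\Delta\lambda_B}{\lambda_B} = (1 - p_e)\,\varepsilon, \qquad p_e \approx 0.22 \ \text{(silica fibre)}

so a grating at 1550 nm shifts by roughly 1.2 pm per microstrain, which is what makes quantitative strain measurement near crack locations feasible.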
Modeling & Testing of Inflatable Structures for Rapidly Deployable Port Infrastructures
2010-07-01
By Andrew Bloxom, Abel Medellin, Chris Vince, and Dr. Solomon Yim (NSWCCD-CISD). A special thanks to Ben Testerman and Dr. Pat
NASA Astrophysics Data System (ADS)
Jones, A. S.; Horsburgh, J. S.; Matos, M.; Caraballo, J.
2015-12-01
Networks conducting long term monitoring using in situ sensors need the functionality to track physical equipment as well as deployments, calibrations, and other actions related to site and equipment maintenance. The observational data being generated by sensors are enhanced if direct linkages to equipment details and actions can be made. This type of information is typically recorded in field notebooks or in static files, which are rarely linked to observations in a way that could be used to interpret results. However, the record of field activities is often relevant to analysis or post-processing of the observational data. We have developed an underlying database schema and deployed a web interface for recording and retrieving information on physical infrastructure and related actions for observational networks. The database schema for equipment was designed as an extension to the Observations Data Model 2 (ODM2), a community-developed information model for spatially discrete, feature based earth observations. The core entities of ODM2 describe location, observed variable, and timing of observations, and the equipment extension contains entities to provide additional metadata specific to the inventory of physical infrastructure and associated actions. The schema is implemented in a relational database system for storage and management with an associated web interface. We designed the web-based tools for technicians to enter and query information on the physical equipment and actions such as site visits, equipment deployments, maintenance, and calibrations. These tools were implemented for the iUTAH (innovative Urban Transitions and Aridregion Hydrosustainability) ecohydrologic observatory, and we anticipate that they will be useful for similar large-scale monitoring networks desiring to link observing infrastructure to observational data to increase the quality of sensor-based data products.
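To make the schema idea concrete, here is a minimal relational sketch in Python/SQLAlchemy of equipment records linked to deployment, calibration, and maintenance actions. Table and column names are hypothetical illustrations of the pattern, not the published ODM2 equipment extension.

from sqlalchemy import (Column, Integer, String, DateTime, ForeignKey,
                        create_engine)
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Equipment(Base):
    __tablename__ = "equipment"
    id = Column(Integer, primary_key=True)
    serial_number = Column(String, unique=True)
    model = Column(String)
    actions = relationship("EquipmentAction", back_populates="equipment")

class EquipmentAction(Base):
    __tablename__ = "equipment_actions"
    id = Column(Integer, primary_key=True)
    equipment_id = Column(Integer, ForeignKey("equipment.id"))
    action_type = Column(String)   # e.g. deployment, calibration, maintenance
    begin_time = Column(DateTime)  # links field activity to the observation record
    equipment = relationship("Equipment", back_populates="actions")

Base.metadata.create_all(create_engine("sqlite:///:memory:"))

Linking each observation's time range to the deployment and calibration actions in such tables is what lets analysts interpret anomalies in the sensor data against the field record.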
Cost-effective electric vehicle charging infrastructure siting for Delhi
Sheppard, Colin J. R.; Gopal, Anand R.; Harris, Andrew; ...
2016-06-10
Plug-in electric vehicles (PEVs) represent a substantial opportunity for governments to reduce emissions of both air pollutants and greenhouse gases. The Government of India has set a goal of deploying 6-7 million hybrid and PEVs on Indian roads by the year 2020. The uptake of PEVs will depend on, among other factors like high cost, how effectively range anxiety is mitigated through the deployment of adequate electric vehicle charging stations (EVCS) throughout a region. The Indian Government therefore views EVCS deployment as a central part of its electric mobility mission. The plug-in electric vehicle infrastructure (PEVI) model, an agent-based simulation modeling platform, was used to explore the cost-effective siting of EVCS throughout the National Capital Territory (NCT) of Delhi, India. At 1% penetration in the passenger car fleet, or ~10,000 battery electric vehicles (BEVs), charging services can be provided to drivers for an investment of $4.4 M (or $440/BEV) by siting 2,764 chargers throughout the NCT of Delhi, with an emphasis on the more densely populated and frequented regions of the city. The majority of chargers sited by this analysis were low-power Level 1 chargers, which have the added benefit of being simpler to deploy than higher-power alternatives. The amount of public infrastructure needed depends on the access drivers have to EVCS at home, with 83% more charging capacity required to provide the same level of service to a population of drivers without home chargers compared to a scenario with home chargers. Results also depend on the battery capacity of the BEVs adopted, with approximately 60% more charging capacity needed to achieve the same level of service when vehicles are assumed to have 57 km versus 96 km of range.
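The per-vehicle figure quoted in this record follows directly from the headline numbers ($4.4M over ~10,000 BEVs and 2,764 chargers), as this quick check shows:

investment_usd = 4.4e6
bevs = 10_000
chargers = 2_764

print(investment_usd / bevs)      # 440.0  -> the reported $440/BEV
print(investment_usd / chargers)  # ~1592  -> average $ per charger (mostly Level 1)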
Cost-effective electric vehicle charging infrastructure siting for Delhi
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheppard, Colin J. R.; Gopal, Anand R.; Harris, Andrew
Plug-in electric vehicles (PEVs) represent a substantial opportunity for governments to reduce emissions of both air pollutants and greenhouse gases. The Government of India has set a goal of deploying 6-7 million hybrid and PEVs on Indian roads by the year 2020. The uptake of PEVs will depend on, among other factors like high cost, how effectively range anxiety is mitigated through the deployment of adequate electric vehicle charging stations (EVCS) throughout a region. The Indian Government therefore views EVCS deployment as a central part of its electric mobility mission. The plug-in electric vehicle infrastructure (PEVI) model, an agent-based simulation modeling platform, was used to explore the cost-effective siting of EVCS throughout the National Capital Territory (NCT) of Delhi, India. At 1% penetration in the passenger car fleet, or ~10,000 battery electric vehicles (BEVs), charging services can be provided to drivers for an investment of $4.4 M (or $440/BEV) by siting 2,764 chargers throughout the NCT of Delhi, with an emphasis on the more densely populated and frequented regions of the city. The majority of chargers sited by this analysis were low-power Level 1 chargers, which have the added benefit of being simpler to deploy than higher-power alternatives. The amount of public infrastructure needed depends on the access drivers have to EVCS at home, with 83% more charging capacity required to provide the same level of service to a population of drivers without home chargers compared to a scenario with home chargers. Results also depend on the battery capacity of the BEVs adopted, with approximately 60% more charging capacity needed to achieve the same level of service when vehicles are assumed to have 57 km versus 96 km of range.
Cost-effective electric vehicle charging infrastructure siting for Delhi
NASA Astrophysics Data System (ADS)
Sheppard, Colin J. R.; Gopal, Anand R.; Harris, Andrew; Jacobson, Arne
2016-06-01
Plug-in electric vehicles (PEVs) represent a substantial opportunity for governments to reduce emissions of both air pollutants and greenhouse gases. The Government of India has set a goal of deploying 6-7 million hybrid and PEVs on Indian roads by the year 2020. The uptake of PEVs will depend on, among other factors like high cost, how effectively range anxiety is mitigated through the deployment of adequate electric vehicle charging stations (EVCS) throughout a region. The Indian Government therefore views EVCS deployment as a central part of their electric mobility mission. The plug-in electric vehicle infrastructure (PEVI) model—an agent-based simulation modeling platform—was used to explore the cost-effective siting of EVCS throughout the National Capital Territory (NCT) of Delhi, India. At 1% penetration in the passenger car fleet, or ~10,000 battery electric vehicles (BEVs), charging services can be provided to drivers for an investment of $4.4 M (or $440/BEV) by siting 2764 chargers throughout the NCT of Delhi with an emphasis on the more densely populated and frequented regions of the city. The majority of chargers sited by this analysis were low power, Level 1 chargers, which have the added benefit of being simpler to deploy than higher power alternatives. The amount of public infrastructure needed depends on the access that drivers have to EVCS at home, with 83% more charging capacity required to provide the same level of service to a population of drivers without home chargers compared to a scenario with home chargers. Results also depend on the battery capacity of the BEVs adopted, with approximately 60% more charging capacity needed to achieve the same level of service when vehicles are assumed to have 57 km versus 96 km of range.
Battery Electric Vehicle Driving and Charging Behavior Observed Early in The EV Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
John Smart; Stephen Schey
2012-04-01
As concern about society's dependence on petroleum-based transportation fuels increases, many see plug-in electric vehicles (PEVs) as enablers for diversifying transportation energy sources. These vehicles, which include plug-in hybrid electric vehicles (PHEVs), range-extended electric vehicles (EREVs), and battery electric vehicles (BEVs), draw some or all of their power from electricity stored in batteries, which are charged from the electric grid. For PEVs to be accepted by the mass market, electric charging infrastructure must also be deployed. Charging infrastructure must be safe, convenient, and financially sustainable. Additionally, electric utilities must be able to manage PEV charging demand on the electric grid. In the fall of 2009, a large-scale PEV infrastructure demonstration was launched to deploy an unprecedented number of PEVs and charging infrastructure. This demonstration, called The EV Project, is led by Electric Transportation Engineering Corporation (eTec) and funded by the U.S. Department of Energy. eTec is partnering with Nissan North America to deploy up to 4,700 Nissan Leaf BEVs and 11,210 charging units in five market areas in Arizona, California, Oregon, Tennessee, and Washington. With the assistance of the Idaho National Laboratory, eTec will collect and analyze data to characterize consumer driving and charging behavior, evaluate the effectiveness of charging infrastructure, and understand the impact of PEV charging on the electric grid. Trials of various revenue systems for commercial and public charging infrastructure will also be conducted. The ultimate goal of The EV Project is to capture lessons learned to enable the mass deployment of PEVs. This paper is the first in a series documenting the progress and findings of The EV Project. It describes key research objectives of The EV Project and establishes the project background, including lessons learned from previous infrastructure deployments and PEV demonstrations. One such previous study was a PHEV demonstration conducted by the U.S. Department of Energy's Advanced Vehicle Testing Activity (AVTA), led by the Idaho National Laboratory (INL). AVTA's PHEV demonstration involved over 250 vehicles in the United States, Canada, and Finland. This paper summarizes driving and charging behavior observed in that demonstration, including the distribution of distance driven between charging events, charging frequency, and the resulting proportion of operation in charge-depleting mode. Charging demand relative to time of day and day of the week is also shown. Conclusions from the PHEV demonstration are given which highlight the need for expanded analysis in The EV Project. For example, the AVTA PHEV demonstration showed that in the absence of controlled charging by the vehicle owner or electric utility, the majority of vehicles were charged in the evening hours, coincident with typical utility peak demand. Given this baseline, The EV Project will demonstrate the effects of consumer charge control and grid-side charge management on electricity demand. This paper also outlines further analyses to be performed by eTec and INL to document driving and charging behavior of vehicles operated in an infrastructure-rich environment.
Towards Portable Large-Scale Image Processing with High-Performance Computing.
Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A
2018-05-03
High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly a half-million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, we describe herein recent innovations using containerization techniques with XNAT/DAX to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability, from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
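The containerization step described above can be pictured as running each "spider" inside an isolated image. The sketch below shows the general shape of such an invocation from Python; the image name, mount paths, and entry point are hypothetical, not the actual VUIIS CCI spiders.

import subprocess

# Run a (hypothetical) containerized processing pipeline with a read-only
# input mount and a writable output mount, isolating it from host libraries.
subprocess.run([
    "docker", "run", "--rm",
    "-v", "/data/xnat_session:/input:ro",
    "-v", "/data/results:/output",
    "example/spider-pipeline:1.0",
    "run_pipeline", "--in", "/input", "--out", "/output",
], check=True)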
DOT National Transportation Integrated Search
2000-10-01
The Phoenix, Arizona Metropolitan Model Deployment was one of four cities included in the Metropolitan Model Deployment Initiative (MMDI). The initiative was set forth in 1996 to serve as model deployments of ITS infrastructure and integration. One o...
A framework for quantifying and optimizing the value of seismic monitoring of infrastructure
NASA Astrophysics Data System (ADS)
Omenzetter, Piotr
2017-04-01
This paper outlines a framework for quantifying and optimizing the value of information from structural health monitoring (SHM) technology deployed on large infrastructure, which may sustain damage in a series of earthquakes (the main shock and the aftershocks). The evolution of the damage state of the infrastructure, without or with SHM, is represented as a time-dependent, stochastic, discrete-state, observable and controllable nonlinear dynamical system. Pre-posterior Bayesian analysis and a decision tree are used to quantify and optimize the value of SHM information. An optimality problem is then formulated: how to decide on the adoption of SHM, and how to optimally manage the usage and operations of the possibly damaged infrastructure and its repair schedule using the information from SHM. The objective function to minimize is the expected total cost or risk.
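The pre-posterior comparison at the heart of such value-of-information analyses can be written compactly as follows (generic textbook notation, not necessarily the paper's): with a a management action, \theta the uncertain damage state, C a cost function, and z the SHM data,

\mathrm{VoI} \;=\; \min_a \, \mathbb{E}_{\theta}\!\left[ C(a,\theta) \right] \;-\; \mathbb{E}_z\!\left[ \min_a \, \mathbb{E}_{\theta \mid z}\!\left[ C(a,\theta) \right] \right] \;\ge\; 0,

i.e., monitoring is worth adopting when this expected cost reduction exceeds the cost of deploying and operating the SHM system.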
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, E.; Neubauer, J.; Burton, E.
The disparate characteristics of conventional vehicles (CVs) and battery electric vehicles (BEVs) in terms of driving range, refill/recharge time, and availability of refuel/recharge infrastructure inherently limit the relative utility of BEVs when benchmarked against traditional driver travel patterns. However, given a high penetration of high-power public charging combined with driver tolerance for rerouting travel to facilitate charging on long-distance trips, the difference in utility between CVs and BEVs could be marginalized. We quantify the relationships between BEV utility, the deployment of fast chargers, and driver tolerance for rerouting travel and extending travel durations by simulating BEVs operated over real-world travel patterns using the National Renewable Energy Laboratory's Battery Lifetime Analysis and Simulation Tool for Vehicles (BLAST-V). With support from the U.S. Department of Energy's Vehicle Technologies Office, BLAST-V has been developed to include algorithms for estimating the available range of BEVs prior to the start of trips, for rerouting baseline travel to utilize public charging infrastructure when necessary, and for making driver travel decisions for those trips in the presence of available public charging infrastructure, all while conducting advanced vehicle simulations that account for battery electrical, thermal, and degradation response. Results from BLAST-V simulations on vehicle utility, frequency of inserted stops, duration of charging events, and additional time and distance necessary for rerouting travel are presented to illustrate how BEV utility and travel patterns can be affected by various fast charge deployments.
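The rerouting logic described above can be caricatured in a few lines: before a trip, if the estimated remaining range cannot cover the distance plus a reserve, insert a detour to a fast charger. All numbers and names below are illustrative assumptions, not NREL's BLAST-V implementation.

def plan_trip(trip_km, range_km, charger_detours_km):
    """charger_detours_km: detour distances to candidate fast chargers."""
    if range_km >= trip_km * 1.1:           # 10% reserve margin (assumed)
        return "drive direct"
    detour = min(charger_detours_km)        # driver tolerance: shortest detour
    return f"reroute via charger (+{detour:.1f} km), charge, then continue"

print(plan_trip(trip_km=150, range_km=120, charger_detours_km=[4.2, 9.7]))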
DOT National Transportation Integrated Search
2016-09-01
The Tampa Hillsborough Expressway Authority (THEA) Connected Vehicle (CV) Pilot Deployment Program intends to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to reduce...
DOT National Transportation Integrated Search
2016-09-01
The Tampa Hillsborough Expressway Authority (THEA) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to re...
DOT National Transportation Integrated Search
2016-09-13
The Wyoming Department of Transportation's (WYDOT) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to ...
Evolution of grid-wide access to database resident information in ATLAS using Frontier
NASA Astrophysics Data System (ADS)
Barberis, D.; Bujor, F.; de Stefano, J.; Dewhurst, A. L.; Dykstra, D.; Front, D.; Gallas, E.; Gamboa, C. F.; Luehring, F.; Walker, R.
2012-12-01
The ATLAS experiment deployed Frontier technology worldwide during the initial year of LHC collision data taking to enable user analysis jobs running on the Worldwide LHC Computing Grid to access database-resident data. Since that time, the deployment model has evolved to optimize resources, improve performance, and streamline maintenance of Frontier and related infrastructure. In this presentation we focus on the specific changes in the deployment and the improvements undertaken, such as the optimization of cache and launchpad location, the use of RPMs for more uniform deployment of underlying Frontier-related components, improvements in monitoring, optimization of fail-over, and an increasing use of a centrally managed database containing site-specific information (for configuration of services and monitoring). In addition, analysis of Frontier logs has given us a deeper understanding of problematic queries and of use cases. Use of the system has grown beyond user analysis and subsystem-specific tasks such as calibration and alignment, extending into production processing areas such as initial reconstruction and trigger reprocessing. With a more robust and tuned system, we are better equipped to satisfy the still-growing number of diverse clients and the demands of increasingly sophisticated processing and analysis.
Changes in temperature, precipitation, sea level, and coastal storms will likely increase the vulnerability of infrastructure across the United States. Using four models of vulnerability, impacts, and adaptation of infrastructure, its deployment, and its role in protecting econom...
USDA-ARS?s Scientific Manuscript database
Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...
Alternative Fuels Data Center: Deploying Alternative Fuel Vehicles and Infrastructure in Chicago, Illinois, Through the Congestion Mitigation and Air Quality Improvement Program
DOT National Transportation Integrated Search
2000-01-01
The purpose of this document is to present state-level statistics for the CVISN deployment described in the national report. These data will allow state stakeholders to evaluate their own deployment standings in relation to national averages. The nat...
DOT National Transportation Integrated Search
2016-08-11
The Wyoming Department of Transportation's (WYDOT) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to ...
Connected Vehicle Infrastructure : Deployment and Funding Overview
DOT National Transportation Integrated Search
2018-01-01
This report reviews existing and proposed legislation relevant to connected vehicle infrastructure (CVI) implementation, identifies existing funding mechanisms for CVI implementation, reviews CVI pilot programs and case studies, and provides an overv...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, H. M. Abdul; Wang, Hong; Young, Stan
Documenting the existing state of practice is an initial step in developing future control infrastructure to be co-deployed for a heterogeneous mix of connected and automated vehicles and human drivers, while leveraging benefits to safety, congestion, and energy. With advances in information technology and extensive deployment of connected and automated vehicle technology anticipated over the coming decades, cities globally are making efforts to plan and prepare for these transitions. CAVs offer opportunities to improve transportation systems through enhanced safety and more efficient operation of vehicles. There are also significant needs in terms of exploring how best to leverage vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-everything (V2X) technology. Both the Connected Vehicle (CV) and Connected and Automated Vehicle (CAV) paradigms feature bi-directional connectivity and share similar applications in terms of signal control algorithms and infrastructure implementation. The discussion in our synthesis study assumes the CAV/CV context, where connectivity exists with or without automated vehicles. Our synthesis study explores the current state of signal control algorithms and infrastructure, reports the completed and newly proposed CV/CAV deployment studies regarding signal control schemes, reviews the deployment costs for CAV/AV signal infrastructure, and concludes with a discussion of the opportunities, such as detector-free signal control schemes and dynamic performance management for intersections, and the challenges, such as dependency on market adaptation and the need to build a fault-tolerant signal system deployment in a CAV/CV environment. The study will serve as an initial critical assessment of existing signal control infrastructure (devices, control instruments, and firmware) and control schemes (actuated, adaptive, and coordinated green wave). Also, the report will help to identify the future needs for the signal infrastructure to act as the nervous system for urban transportation networks, providing not only signaling but also observability, surveillance, and measurement capacity. The discussion of the opportunity space includes network optimization and control theory perspectives, and the current state of observability for key system parameters (what can be detected, how frequently it can be reported) as well as controllability of dynamic parameters (this includes adjusting not only the signal phase and timing, but also the ability to alter vehicle trajectories through information or direct control). The perspective of observability and controllability of dynamic systems provides an appropriate lens to discuss future directions as CAV/CV become more prevalent.
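One of the simplest schemes in the family this synthesis surveys is a connected-vehicle-aware green extension, where V2I messages stand in for loop detectors. The toy sketch below illustrates the control idea only; it is not a deployed algorithm, and all numbers are assumptions.

def extend_green(approaching_etas, remaining_green_s, max_extension_s=10.0):
    """approaching_etas: ETAs (s) of connected vehicles heading for the stop bar."""
    # Vehicles that could clear the intersection if green were extended.
    near = [eta for eta in approaching_etas
            if eta <= remaining_green_s + max_extension_s]
    if not near:
        return 0.0
    # Extend just enough for the last such vehicle, capped at the maximum.
    return max(0.0, min(max(near) - remaining_green_s, max_extension_s))

print(extend_green([3.0, 7.5], remaining_green_s=5))  # -> 2.5 s extension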
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, David L; Duleep, Gopal
2013-06-01
Automobile manufacturers leading the development of mass-market fuel cell vehicles (FCVs) were interviewed in Japan, Korea, Germany, and the United States. There is general agreement that the performance of FCVs with respect to durability, cold start, packaging, acceleration, refueling time, and range has progressed to the point where vehicles that could be brought to market in 2015 will satisfy customer expectations. However, cost and the lack of refueling infrastructure remain significant barriers. Costs have been dramatically reduced over the past decade, yet are still about twice what appears to be needed for sustainable market success. While all four countries have plans for the early deployment of hydrogen refueling infrastructure, the roles of government, industry, and the public in creating a viable hydrogen refueling infrastructure remain unresolved. The existence of an adequate refueling infrastructure and supporting government policies are likely to be the critical factors that determine when and where hydrogen FCVs are brought to market.
Vehicle-to-infrastructure deployment : what should states do now?
DOT National Transportation Integrated Search
2016-01-01
For more than a decade, the U.S. Department of Transportation (USDOT) has been researching the potential benefits of connected vehicle technology, which allows vehicles to communicate with each other, roadway infrastructure, traffic management ...
Measuring ITS deployment and integration
DOT National Transportation Integrated Search
1999-01-01
A consistent and simple methodology was developed to assess both the level of deployment of individual ITS elements and the level of integration between these elements. This method is based on the metropolitan ITS infrastructure, a blueprint defined ...
Want, Andrew; Crawford, Rebecca; Kakkonen, Jenni; Kiddie, Greg; Miller, Susan; Harris, Robert E; Porter, Joanne S
2017-08-01
As part of ongoing commitments to produce electricity from renewable energy sources in Scotland, Orkney waters have been targeted for potential large-scale deployment of wave and tidal energy converting devices. Orkney has a well-developed infrastructure supporting the marine energy industry; recently enhanced by the construction of additional piers. A major concern to marine industries is biofouling on submerged structures, including energy converters and measurement instrumentation. In this study, the marine energy infrastructure and instrumentation were surveyed to characterise the biofouling. Fouling communities varied between deployment habitats; key species were identified allowing recommendations for scheduling device maintenance and preventing spread of invasive organisms. A method to measure the impact of biofouling on hydrodynamic response is described and applied to data from a wave-monitoring buoy deployed at a test site in Orkney. The results are discussed in relation to the accuracy of the measurement resources for power generation. Further applications are suggested for future testing in other scenarios, including tidal energy.
The StratusLab cloud distribution: Use-cases and support for scientific applications
NASA Astrophysics Data System (ADS)
Floros, E.
2012-04-01
The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, the Claudia service manager, and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely Computing (life-cycle management of virtual machines), Storage, Appliance management, and Networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites, a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities like elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. In this direction, we have developed and currently provide support for setting up general-purpose computing solutions like Hadoop, MPI, and Torque clusters. As for scientific applications, the project is collaborating closely with the bioinformatics community to prepare VM appliances and deploy optimized services for bioinformatics applications. In a similar manner, additional scientific disciplines like Earth Science can take advantage of StratusLab cloud solutions. Interested users are welcome to join StratusLab's user community by getting access to the reference cloud services deployed by the project and offered to the public.
Vehicle-to-infrastructure deployment : what should states do now?
DOT National Transportation Integrated Search
2016-01-01
For more than a decade, the U.S. Department of Transportation (USDOT) has been researching the potential benefits of connected vehicle technology, which allows vehicles to communicate with each other, roadway infrastructure, traffic management center...
e!DAL - a framework to store, share and publish research data
2014-01-01
Background The life-science community faces a major challenge in handling “big data”, highlighting the need for high quality infrastructures capable of sharing and publishing research data. Data preservation, analysis, and publication are the three pillars in the “big data life cycle”. The infrastructures currently available for managing and publishing data are often designed to meet domain-specific or project-specific requirements, resulting in the repeated development of proprietary solutions and lower quality data publication and preservation overall. Results e!DAL is a lightweight software framework for publishing and sharing research data. Its main features are version tracking, metadata management, information retrieval, registration of persistent identifiers (DOI), an embedded HTTP(S) server for public data access, access as a network file system, and a scalable storage backend. e!DAL is available as an API for local non-shared storage and as a remote API featuring distributed applications. It can be deployed “out-of-the-box” as an on-site repository. Conclusions e!DAL was developed based on experiences coming from decades of research data management at the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK). Initially developed as a data publication and documentation infrastructure for the IPK’s role as a data center in the DataCite consortium, e!DAL has grown towards being a general data archiving and publication infrastructure. The e!DAL software has been deployed into the Maven Central Repository. Documentation and Software are also available at: http://edal.ipk-gatersleben.de. PMID:24958009
e!DAL--a framework to store, share and publish research data.
Arend, Daniel; Lange, Matthias; Chen, Jinbo; Colmsee, Christian; Flemming, Steffen; Hecht, Denny; Scholz, Uwe
2014-06-24
The life-science community faces a major challenge in handling "big data", highlighting the need for high quality infrastructures capable of sharing and publishing research data. Data preservation, analysis, and publication are the three pillars in the "big data life cycle". The infrastructures currently available for managing and publishing data are often designed to meet domain-specific or project-specific requirements, resulting in the repeated development of proprietary solutions and lower quality data publication and preservation overall. e!DAL is a lightweight software framework for publishing and sharing research data. Its main features are version tracking, metadata management, information retrieval, registration of persistent identifiers (DOI), an embedded HTTP(S) server for public data access, access as a network file system, and a scalable storage backend. e!DAL is available as an API for local non-shared storage and as a remote API featuring distributed applications. It can be deployed "out-of-the-box" as an on-site repository. e!DAL was developed based on experiences coming from decades of research data management at the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK). Initially developed as a data publication and documentation infrastructure for the IPK's role as a data center in the DataCite consortium, e!DAL has grown towards being a general data archiving and publication infrastructure. The e!DAL software has been deployed into the Maven Central Repository. Documentation and Software are also available at: http://edal.ipk-gatersleben.de.
LHCb Build and Deployment Infrastructure for run 2
NASA Astrophysics Data System (ADS)
Clemencic, M.; Couturier, B.
2015-12-01
After the successful run 1 of the LHC, the LHCb Core Software team has taken advantage of the long shutdown to consolidate and improve its build and deployment infrastructure. Several of the related projects have already been presented, such as the build system using Jenkins and the LHCb Performance and Regression testing infrastructure. Some components are completely new, like the Software Configuration Database (using the graph database Neo4j) or the new packaging and installation using RPM packages. Furthermore, all those parts are integrated to allow easier and quicker releases of the LHCb software stack, thereby reducing the risk of operational errors. Integration and regression tests are also now easier to implement, allowing the software checks to be further improved.
NEON's Mobile Deployment Platform: A Resource for Community Research
NASA Astrophysics Data System (ADS)
Sanclements, M.
2015-12-01
Here we provide an update on the construction and validation of the NEON Mobile Deployment Platforms (MDPs), as well as a description of the infrastructure and sensors that will be available to researchers. The MDPs will provide the means to observe stochastic or spatially important events, gradients, or quantities that cannot be reliably observed using fixed-location sampling (e.g., fires and floods). Because of the transient temporal and spatial nature of such events, the MDPs are designed for rapid deployment for periods of up to ~1 year. Broadly, the MDPs will consist of infrastructure and instrumentation capable of functioning individually or in conjunction with one another to support observations of ecological change, as well as education, training, and outreach.
Enhanced Logistics Intra-theater Support Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Groningen, Charles N.; Braun, Mary Duffy; Widing, Mary Ann
2004-01-27
Developed for use by Department of Defense deployment analysts to perform detailed Reception, Staging, Onward movement and Integration (RSO&I) analyses. ELIST requires: vehicle characteristics for ships, planes, trucks, railcars, buses, and helicopters; network (physical) characteristics defining the airport, seaport, road, rail, waterway, and pipeline infrastructure available in a theater of operations; assets available for moving the personnel, equipment, and supplies over the infrastructure network; and a movement requirements plan defining the deployment requirements of a military force. The plan defines each unit, its cargo (at various levels of resolution), where it must move from and to, what modes it is required to travel by, and when it must be delivered through each phase of deployment.
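The input categories above amount to a small data model. A minimal sketch of what one movement-requirement record might look like follows; the field names are invented for illustration and are not ELIST's actual schema:

```python
# Hypothetical sketch of a movement-requirement record of the kind the
# abstract lists as ELIST input; field names are invented, not ELIST's schema.
from dataclasses import dataclass, field

@dataclass
class MovementRequirement:
    unit: str
    origin: str
    destination: str
    cargo_tons: float
    allowed_modes: list[str] = field(default_factory=list)  # e.g. ["rail", "road"]
    required_delivery_day: int = 0   # day within the deployment phase

req = MovementRequirement("1st Brigade", "POD Alpha", "Staging Area B",
                          cargo_tons=1250.0,
                          allowed_modes=["rail", "road"],
                          required_delivery_day=12)
print(req)
```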
Alternative Fuels Data Center: Ohio Transportation Data for Alternative
Optical fibre multi-parameter sensing with secure cloud based signal capture and processing
NASA Astrophysics Data System (ADS)
Newe, Thomas; O'Connell, Eoin; Meere, Damien; Yuan, Hongwei; Leen, Gabriel; O'Keeffe, Sinead; Lewis, Elfed
2016-05-01
Recent advancements in cloud computing technologies in the context of optical and optical-fibre-based systems are reported. The proliferation of real-time, multi-channel sensor systems represents significant growth in data volume. This, coupled with a growing need for security, poses many challenges and presents a huge opportunity for an evolutionary step in the widespread application of these sensing technologies. A tiered infrastructural system approach is adopted, designed to facilitate the delivery of optical fibre-based "SENsing as a Service" (SENaaS). Within this infrastructure, novel optical sensing platforms, deployed within different environments, are interfaced with a cloud-based backbone infrastructure which facilitates the secure collection, storage and analysis of real-time data. Feedback systems, which harness this data to effect a change within the monitored location/environment/condition, are also discussed. The cloud-based system presented here can also be used with chemical and physical sensors that require real-time data analysis, processing and feedback.
Open | SpeedShop: An Open Source Infrastructure for Parallel Performance Analysis
Schulz, Martin; Galarowicz, Jim; Maghrak, Don; ...
2008-01-01
Over the last decades a large number of performance tools have been developed to analyze and optimize high performance applications. Their acceptance by end users, however, has been slow: each tool alone is often limited in scope and comes with widely varying interfaces and workflow constraints, requiring different changes in the often complex build and execution infrastructure of the target application. We started the Open | SpeedShop project about 3 years ago to overcome these limitations and provide efficient, easy to apply, and integrated performance analysis for parallel systems. Open | SpeedShop has two different faces: it provides an interoperable tool set covering the most common analysis steps as well as a comprehensive plugin infrastructure for building new tools. In both cases, the tools can be deployed to large scale parallel applications using DPCL/Dyninst for distributed binary instrumentation. Further, all tools developed within or on top of Open | SpeedShop are accessible through multiple fully equivalent interfaces, including an easy-to-use GUI as well as an interactive command line interface, reducing the usage threshold for those tools.
CV pilot deployment concept phase 1, outreach plan — ICF Wyoming.
DOT National Transportation Integrated Search
2016-06-24
The Wyoming Department of Transportation's (WYDOT) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to ...
Connected vehicle pilot deployment program phase 2, data management plan - Wyoming
DOT National Transportation Integrated Search
2017-04-10
The Wyoming Department of Transportation's (WYDOT) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to ...
DOT National Transportation Integrated Search
1997-08-01
This system architecture paper will discuss proposed architectures for the four infrastructure oriented program areas defined by the project team and presented in the Strategic Deployment Plan (August 1997). This report will concentrate on defi...
DOT National Transportation Integrated Search
1999-03-03
The US Department of Transportation's Model Deployment Initiative (MDI) program is integrating and extending the existing ITS infrastructure in four metropolitan regions: New York/New Jersey/Connecticut, Phoenix, San Antonio and Seattle. The N...
Meyer, Adrian; Green, Laura; Faulk, Ciearro; Galla, Stephen; Meyer, Anne-Marie
2016-01-01
Introduction: Large amounts of health data generated by a wide range of health care applications across a variety of systems have the potential to offer valuable insight into populations and health care systems, but robust and secure computing and analytic systems are required to leverage this information. Framework: We discuss our experiences deploying a Secure Data Analysis Platform (SeDAP) and provide a framework to plan, build, and deploy a virtual desktop infrastructure (VDI) that enables innovation and collaboration and operates within academic funding structures. It outlines six core components: security, ease of access, performance, cost, tools, and training. Conclusion: A platform like SeDAP is not successful simply through technical excellence and performance. Its adoption depends on a collaborative environment where researchers and users plan and evaluate the requirements of all aspects. PMID:27683665
Mesoscale carbon sequestration site screening and CCS infrastructure analysis.
Keating, Gordon N; Middleton, Richard S; Stauffer, Philip H; Viswanathan, Hari S; Letellier, Bruce C; Pasqualini, Donatella; Pawar, Rajesh J; Wolfsberg, Andrew V
2011-01-01
We explore carbon capture and sequestration (CCS) at the meso-scale, a level of study between regional carbon accounting and highly detailed reservoir models for individual sites. We develop an approach to CO2 sequestration site screening for industries or energy development policies that involves identification of an appropriate sequestration basin, analysis of geologic formations, definition of surface sites, design of infrastructure, and analysis of CO2 transport and storage costs. Our case study involves carbon management for potential oil shale development in the Piceance-Uinta Basin, CO and UT. This study uses new capabilities of the CO2-PENS model for site screening, including reservoir capacity, injectivity, and cost calculations for simple reservoirs at multiple sites. We couple this with a model of optimized source-sink-network infrastructure (SimCCS) to design pipeline networks and minimize CCS cost for a given industry or region. The CLEAR(uff) dynamical assessment model calculates the CO2 source term for various oil production levels. Nine sites in a 13,300 km2 area have the capacity to store 6.5 GtCO2, corresponding to shale-oil production of 1.3 Mbbl/day for 50 years (about 1/4 of U.S. crude oil production). Our results highlight the complex, nonlinear relationship between the spatial deployment of CCS infrastructure and the oil-shale production rate.
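As a quick plausibility check on those figures, the capture intensity implied by the abstract's own numbers can be derived directly; this is a back-of-envelope calculation, not a value reported by the paper:

```python
# Back-of-envelope check of the abstract's storage/production figures.
# The implied CO2 intensity is derived here, not taken from the paper.
storage_capacity_t = 6.5e9          # 6.5 GtCO2 of storage capacity
production_bbl_per_day = 1.3e6      # 1.3 Mbbl/day of shale-oil production
years = 50

total_bbl = production_bbl_per_day * 365 * years
implied_intensity = storage_capacity_t / total_bbl   # tCO2 per barrel

print(f"total production: {total_bbl:.2e} bbl")
print(f"implied capture intensity: {implied_intensity:.3f} tCO2/bbl")
# -> roughly 0.27 tCO2 per barrel, a plausible upstream figure for shale oil
```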
An Agent-Based Information System for Electric Vehicle Charging Infrastructure Deployment
DOT National Transportation Integrated Search
2012-08-18
The current scarcity of public charging infrastructure is one of the major barriers to mass household adoption of plug-in electric vehicles (PEVs). Although most PEV drivers can recharge their vehicles at home, the limited driving range of the vehicl...
DOT National Transportation Integrated Search
2001-05-01
The purpose of this working paper is to provide an estimate of the federal proportion of funds expended on intelligent transportation systems (ITS) infrastructure deployments for fiscal year (FY) 2000 using budget and planning data from state departm...
Highway Funding: It's Time to Think Seriously About Operations. A Policy Framework
DOT National Transportation Integrated Search
1998-09-01
This report describes the results of a major data gathering effort aimed at tracking deployment of nine infrastructure components of the metropolitan ITS infrastructure in 78 of the largest metropolitan areas in the nation. The nine components are: F...
DOT National Transportation Integrated Search
2016-03-14
The Wyoming Department of Transportation's (WYDOT) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to ...
on how to understand and plan for transportation advancements, including the increasing deployment of new technologies. Topics include: transportation electrification and the infrastructure necessary to support the increasing deployment of these technologies; the impact of on-demand transit and mobility services on public ...
Connected vehicle pilot deployment program phase 2, data privacy plan – Wyoming.
DOT National Transportation Integrated Search
2016-04-14
The Wyoming Department of Transportation's (WYDOT) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to ...
Connected Vehicle Pilot Deployment Program, Comprehensive Installation Plan - WYDOT CV Pilot
DOT National Transportation Integrated Search
2018-02-16
The Wyoming Department of Transportation's (WYDOT) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to re...
Connected vehicle pilot deployment program phase 2 : data management plan - Tampa (THEA).
DOT National Transportation Integrated Search
2017-10-01
The Tampa Hillsborough Expressway Authority (THEA) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to re...
Connected vehicle pilot deployment program phase 1, safety management plan – ICF/Wyoming.
DOT National Transportation Integrated Search
2016-03-14
The Wyoming Department of Transportation's (WYDOT) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to ...
DOT National Transportation Integrated Search
2000-01-01
In January 1996, Secretary Pena set a goal of deploying the integrated metropolitan Intelligent Transportation System (ITS) infrastructure in 75 of the nation's largest metropolitan areas by 2006. In 1997, the U.S. Department of Transportation initia...
DOT National Transportation Integrated Search
2014-12-01
The intent of this report is to provide (1) an initial assessment of National Airspace System (NAS) infrastructure affected by continuing development and deployment of unmanned aircraft systems into the NAS, and (2) a description of process challenge...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melaina, Marc; Helwig, Michael
The California Statewide Plug-In Electric Vehicle Infrastructure Assessment conveys to interested parties the Energy Commission’s conclusions, recommendations, and intentions with respect to plug-in electric vehicle (PEV) infrastructure development. There are several relatively low-risk and high-priority electric vehicle supply equipment (EVSE) deployment options that will encourage PEV sales and
DOT National Transportation Integrated Search
2001-07-01
This working paper has been prepared to provide new estimates of the costs to deploy Intelligent Transportation System (ITS) infrastructure elements in the largest metropolitan areas in the United States. It builds upon estimates that were distribute...
DOT National Transportation Integrated Search
2000-08-01
This working paper has been prepared to provide new estimates of the costs to deploy Intelligent Transportation System (ITS) infrastructure elements in the largest metropolitan areas in the United States. It builds upon estimates that were distribute...
DOT National Transportation Integrated Search
2003-03-20
The Transportation Equity Act for the 21st Century (TEA-21) Public Laws 105-178 and 105-206, Title V, Section 5117(b) (3) provides for an Intelligent Transportation Infrastructure Program (ITIP) to advance the deployment of operational intelligent tr...
Vehicle-to-infrastructure (V2I) : message lexicon.
DOT National Transportation Integrated Search
2016-12-01
To help with Vehicle-to-Infrastructure (V2I) deployments, a V2I Message Lexicon was developed that explains the relationships and concepts for V2I messages and identifies the ITS standards where they may be found. This lexicon document provides a bri...
Connected vehicle pilot deployment program phase 1, concept of operations (ConOps), ICF/Wyoming.
DOT National Transportation Integrated Search
2015-12-01
The Wyoming Department of Transportation's (WYDOT) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to ...
Connected Vehicle Pilot Deployment Program Phase 1, Human Use Approval Summary – ICF/Wyoming.
DOT National Transportation Integrated Search
2016-07-18
The Wyoming Department of Transportation's (WYDOT) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to ...
DOT National Transportation Integrated Search
2016-09-02
The Wyoming Department of Transportation's (WYDOT) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to ...
DOT National Transportation Integrated Search
2016-06-22
The Wyoming Department of Transportation's (WYDOT) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to ...
DOT National Transportation Integrated Search
2016-08-12
The Wyoming Department of Transportation's (WYDOT) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to ...
Connected vehicle pilot deployment program phase II data privacy plan – Tampa (THEA).
DOT National Transportation Integrated Search
2017-02-01
The Tampa Hillsborough Expressway Authority (THEA) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to re...
Dynamic Collaboration Infrastructure for Hydrologic Science
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.
2016-12-01
Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that offers water scientists the ability to access modeling and data infrastructure in support of data-intensive modeling and analysis. It supports the sharing of, and collaboration around, "resources", which are social objects defined to include both data and models in a structured, standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud-based computation for the execution of hydrologic models and the analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure that can be accessed from environments like HydroShare are increasing. Storage infrastructure ranges from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure ranges from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this vast array of data and computing infrastructure without having to learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services needed to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may need to process a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges, a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure for the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure". In this presentation we discuss the results of this proof-of-concept prototype, which enabled HydroShare users to readily instantiate virtual infrastructure marshaling arbitrary combinations, varieties, and quantities of distributed data and computing infrastructure to address big problems in hydrology.
Economic Incentives for Cybersecurity: Using Economics to Design Technologies Ready for Deployment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishik, Claire; Sheldon, Frederick T; Ott, David
Cybersecurity practice lags behind cyber technology achievements. Solutions designed to address many problems may and do exist but frequently cannot be broadly deployed due to economic constraints. Whereas security economics focuses on cost/benefit analysis and supply/demand, we believe that more sophisticated theoretical approaches, such as economic modeling, which are rarely utilized, would deliver greater societal benefits. Unfortunately, technologists pursuing interesting and elegant solutions today have little knowledge of the feasibility of broad deployment of their results and cannot anticipate the influences of other technologies, existing infrastructure, and technology evolution, nor bring the solution lifecycle into the equation. Additionally, potentially viable solutions are not adopted because the risk as perceived by potential providers and users far outweighs the economic incentives to support the introduction and adoption of new best practices and technologies that are not yet well defined. In some cases, there is no alignment with predominant and future business models as well as regulatory and policy requirements. This paper provides an overview of the economics of security, reviewing work that helped to define economic models for the Internet economy from the 1990s. We bring forward examples of the potential use of theoretical economics in defining metrics for emerging technology areas, positioning infrastructure investment, and building real-time response capability as part of software development. These diverse examples help us understand the gaps in current research. Filling these gaps will be instrumental for defining viable economic incentives, economic policies, and regulations, as well as early-stage technology development approaches, that can speed up commercialization and deployment of new technologies in cybersecurity.
PKI security in large-scale healthcare networks.
Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos
2012-06-01
During the past few years many public key infrastructures (PKIs) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI addresses the trust issues that arise in a large-scale healthcare network comprising multiple PKI domains.
DOT National Transportation Integrated Search
2014-05-01
This project seeks to develop a rapidly deployable, low-cost, and wireless system for bridge weigh-in-motion (BWIM) and nondestructive evaluation (NDE). The system is proposed to assist in monitoring transportation infrastructure safety, for the ...
DOT National Transportation Integrated Search
2015-09-23
This research project aimed to develop a remote sensing system capable of rapidly identifying fine-scale damage to critical transportation infrastructure following hazard events. Such a system must be pre-planned for rapid deployment, automate proces...
"Tactic": Traffic Aware Cloud for Tiered Infrastructure Consolidation
ERIC Educational Resources Information Center
Sangpetch, Akkarit
2013-01-01
Large-scale enterprise applications are deployed as distributed applications. These applications consist of many inter-connected components with heterogeneous roles and complex dependencies. Each component typically consumes 5-15% of the server capacity. Deploying each component as a separate virtual machine (VM) allows us to consolidate the…
DOT National Transportation Integrated Search
2016-05-01
The Tampa Hillsborough Expressway Authority (THEA) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to re...
USDA-ARS?s Scientific Manuscript database
Service oriented architectures allow modelling engines to be hosted over the Internet abstracting physical hardware configuration and software deployments from model users. Many existing environmental models are deployed as desktop applications running on user's personal computers (PCs). Migration ...
DOT National Transportation Integrated Search
2016-06-06
The Wyoming Department of Transportation's (WYDOT) Connected Vehicle (CV) Pilot Deployment Program is intended to develop a suite of applications that utilize vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communication technology to ...
47 CFR 59.3 - Information concerning deployment of new services and equipment.
Code of Federal Regulations, 2010 CFR
2010-10-01
... services and equipment, including any software or upgrades of software integral to the use or operation of... services and equipment. 59.3 Section 59.3 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INFRASTRUCTURE SHARING § 59.3 Information concerning deployment of...
DOT National Transportation Integrated Search
2016-08-01
The Tampa Hillsborough Expressway Authority (THEA) Connected Vehicle (CV) Pilot Deployment Program is developing a suite of CV applications, or apps, that utilize vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V) and Vehicle to everything (V2...
Grid-based International Network for Flu observation (g-INFO).
Doan, Trung-Tung; Bernard, Aurélien; Da-Costa, Ana Lucia; Bloch, Vincent; Le, Thanh-Hoa; Legre, Yannick; Maigne, Lydia; Salzemann, Jean; Sarramia, David; Nguyen, Hong-Quang; Breton, Vincent
2010-01-01
The 2009 H1N1 outbreak has demonstrated that continuing vigilance, planning, and strong public health research capability are essential defenses against emerging health threats. Molecular epidemiology of influenza virus strains provides scientists with clues about the temporal and geographic evolution of the virus. In the present paper, researchers from France and Vietnam propose a global surveillance network based on grid technology: the goal is to federate influenza data servers and automatically deploy molecular epidemiology studies. A first prototype based on AMGA and the WISDOM Production Environment extracts influenza H1N1 sequence data daily from NCBI; the data are processed through a phylogenetic analysis pipeline deployed on the EGEE and AuverGrid e-infrastructures. The analysis results are displayed on a web portal (http://g-info.healthgrid.org) for epidemiologists to monitor the H1N1 pandemic.
Software Engineering Infrastructure in a Large Virtual Campus
ERIC Educational Resources Information Center
Cristobal, Jesus; Merino, Jorge; Navarro, Antonio; Peralta, Miguel; Roldan, Yolanda; Silveira, Rosa Maria
2011-01-01
Purpose: The design, construction and deployment of a large virtual campus are a complex issue. Present virtual campuses are made of several software applications that complement e-learning platforms. In order to develop and maintain such virtual campuses, a complex software engineering infrastructure is needed. This paper aims to analyse the…
DOT National Transportation Integrated Search
2017-06-13
MnDOT has already deployed an extensive infrastructure for Active Traffic Management (ATM) on I-35W and I-94 with plans to expand on other segments of the Twin Cities freeway network. The ATM system includes intelligent lane control signals (ILCS) sp...
Intensifying the proportion of urban green infrastructure has been considered as one of the remedies for air pollution levels in cities, yet the impact of numerous vegetation types deployed in different built environments has to be fully synthesised and quantified. This review ex...
Code of Federal Regulations, 2013 CFR
2013-01-01
... Department of Defense; (2) the Department of the Interior; (3) the Department of Agriculture; (4) the Department of Commerce; (5) the Department of Transportation; (6) the Department of Veterans Affairs; and (7... local transportation infrastructure, creating significant opportunities for executive departments and...
DOT National Transportation Integrated Search
2011-12-01
The purpose of this report is to provide a summary and back-up information on the methodology, data sources, and results for the estimate of Intelligent Transportation Systems (ITS) capital expenditures in the top 75 metropolitan areas as of FY 2010....
DOT National Transportation Integrated Search
2016-01-01
This report presents the methodology and results of the independent evaluation of heavy trucks (HTs) in the Safety Pilot Model Deployment (SPMD), part of the United States Department of Transportation's Intelligent Transportation Systems research p...
Grids, virtualization, and clouds at Fermilab
Timm, S.; Chadwick, K.; Garzoglio, G.; ...
2014-06-11
Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). Lastly, this work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.
Grids, virtualization, and clouds at Fermilab
NASA Astrophysics Data System (ADS)
Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.
2014-06-01
Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.
Conceptual design of multi-source CCS pipeline transportation network for Polish energy sector
NASA Astrophysics Data System (ADS)
Isoli, Niccolo; Chaczykowski, Maciej
2017-11-01
The aim of this study was to identify an optimal CCS transport infrastructure for the Polish energy sector under a selected European Commission Energy Roadmap 2050 scenario. The work covers identification of the offshore storage site location and CO2 pipeline network design and sizing for deployment at a national scale, along with a CAPEX analysis. It was conducted for the worst-case scenario, wherein the power plants operate under full-load conditions. The input data for the evaluation of CO2 flow rates (flue gas composition) were taken from a selected cogeneration plant with a maximum electric capacity of 620 MW, and the results were extrapolated from these data given the power outputs of the remaining units. A graph search algorithm was employed to estimate the cost of a pipeline infrastructure able to transport 95 MT of CO2 annually, which amounts to about 612.6 M€. Additional pipeline infrastructure costs will have to be incurred after 9 years of operation of the system due to limited storage site capacity. The results show that CAPEX estimates for CO2 pipeline infrastructure cannot be based on natural gas infrastructure data, since the two systems differ in pipe wall thickness, which affects material cost.
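The abstract does not specify which graph search algorithm was used, but the general idea of costing a pipeline network by graph search can be sketched with Dijkstra's algorithm; the network topology, node names, and costs below are invented for illustration:

```python
# Minimal sketch of costing a CO2 pipeline route by graph search (Dijkstra).
# The network, node names, and CAPEX values are invented, not the paper's data.
import heapq

def cheapest_route(edges, source, sink):
    """Dijkstra over an undirected graph; edge weights are CAPEX in M EUR."""
    graph = {}
    for a, b, cost in edges:
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    best = {source: 0.0}
    queue = [(0.0, source, [source])]
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == sink:
            return cost, path
        for nxt, w in graph.get(node, []):
            new_cost = cost + w
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(queue, (new_cost, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical network: two power plants, an inland hub, the coast,
# and an offshore storage site.
edges = [("plant_A", "hub_1", 120.0), ("plant_B", "hub_1", 80.0),
         ("hub_1", "coast", 210.0), ("coast", "offshore_store", 150.0),
         ("plant_B", "coast", 320.0)]
cost, path = cheapest_route(edges, "plant_A", "offshore_store")
print(f"route {path} at {cost:.1f} M EUR")
```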
A modular (almost) automatic set-up for elastic multi-tenants cloud (micro)infrastructures
NASA Astrophysics Data System (ADS)
Amoroso, A.; Astorino, F.; Bagnasco, S.; Balashov, N. A.; Bianchi, F.; Destefanis, M.; Lusso, S.; Maggiora, M.; Pellegrino, J.; Yan, L.; Yan, T.; Zhang, X.; Zhao, X.
2017-10-01
An auto-installing tool on a USB drive can allow for quick and easy automatic deployment of OpenNebula-based cloud infrastructures remotely managed by a central VMDIRAC instance. A single team, in the main site of an HEP collaboration or elsewhere, can manage and run a relatively large network of federated (micro-)cloud infrastructures, making highly dynamic and elastic use of computing resources. Exploiting such an approach can lead to modular systems of cloud-bursting infrastructures addressing complex real-life scenarios.
caGrid 1.0 : an enterprise Grid infrastructure for biomedical research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oster, S.; Langella, S.; Hastings, S.
To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design: An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including (1) discovery, (2) integrated and large-scale data analysis, and (3) coordinated study. Measurements: The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community-provided services, and application programming interfaces for building client applications. Results: caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components, and the caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL:
BioenergyKDF: Enabling Spatiotemporal Data Synthesis and Research Collaboration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, Aaron T; Movva, Sunil; Karthik, Rajasekar
2014-01-01
The Bioenergy Knowledge Discovery Framework (BioenergyKDF) is a scalable, web-based collaborative environment for scientists working on bioenergy-related research, in which the connections between data, literature, and models can be explored and more clearly understood. The fully operational and deployed system, built on multiple open source libraries and architectures, stores contributions from the community of practice and makes them easy to find, but that is just its base functionality. The BioenergyKDF provides a national spatiotemporal decision support capability that enables data sharing, analysis, modeling, and visualization, and fosters the development and management of the U.S. bioenergy infrastructure, which is an essential component of the national energy infrastructure. The BioenergyKDF is built on a flexible, customizable platform that can be extended to support the requirements of any user community, especially those that work with spatiotemporal data. While there are several community data-sharing software platforms available, some developed and distributed by national governments, none of them have the full suite of capabilities available in BioenergyKDF. For example, its component-based platform and database-independent architecture allow it to be quickly deployed to existing infrastructure and to connect to existing data repositories (spatial or otherwise). As new data, analysis, and features are added, the BioenergyKDF will help lead research and support decisions concerning bioenergy into the future, and will also enable the development and growth of additional communities of practice both inside and outside of the Department of Energy. These communities will be able to leverage the substantial investment the agency has made in the KDF platform to quickly stand up systems that are customized to their data and research needs.
Gonzalez, Enrique; Peña, Raul; Avila, Alfonso; Vargas-Rosales, Cesar; Munoz-Rodriguez, David
2017-01-01
The continuous technological advances in favor of mHealth represent a key factor in the improvement of medical emergency services. This systematic review presents the identification, study, and classification of the most up-to-date approaches surrounding the deployment of architectures for mHealth. Our review includes 25 articles obtained from databases such as IEEE Xplore, Scopus, SpringerLink, ScienceDirect, and SAGE. The review focused on studies addressing mHealth systems for outdoor emergency situations. In 60% of the articles, the deployment architecture relied on connective infrastructure associated with emergent technologies such as cloud services, distributed services, Internet-of-things, machine-to-machine, vehicular ad hoc networks, and service-oriented architecture. In 40% of the reviewed articles, the deployment architecture for mHealth used traditional connective infrastructure. Only 20% of the studies implemented an energy consumption protocol to extend system lifetime. We concluded that there is a need for more integrated solutions, specifically for outdoor scenarios. Energy consumption protocols need to be implemented and evaluated. Emergent connective technologies are redefining information management and displacing traditional technologies. PMID:29075430
A deployment of fine-grained sensor network and empirical analysis of urban temperature.
Thepvilojanapong, Niwat; Ono, Takahiro; Tobe, Yoshito
2010-01-01
Temperature in an urban area exhibits a complicated pattern due to the complexity of the infrastructure. Despite geographical proximity, the structures of a group of buildings and streets affect changes in temperature. To investigate the pattern of fine-grained distribution of temperature, we installed a densely distributed sensor network called UScan. In this paper, we describe the system architecture of UScan as well as the lessons learned from installing 200 sensors in downtown Tokyo. The UScan field experiment operated for two months to collect long-term urban temperature data. To analyze the collected data in an efficient manner, we propose a lightweight clustering methodology to study the correlation between the pattern of temperature and various environmental factors, including the amount of sunshine, the width of streets, and the existence of trees. The analysis reveals meaningful results and asserts the necessity of fine-grained deployment of sensors in an urban area.
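The paper's clustering method is not spelled out in the abstract; the sketch below shows one lightweight approach in that spirit, grouping sensor time series by correlation. The synthetic data, the two site types (sunny vs. shaded), and the 0.9 threshold are all invented for illustration:

```python
# Illustrative sketch (not the paper's algorithm): greedily cluster sensors
# whose temperature series are highly correlated. Data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 14)   # two weeks of hourly readings

# Two hypothetical site types: sunny streets (strong diurnal swing) and
# shaded streets (damped, phase-lagged swing), three sensors each.
sunny = [20 + 5 * np.sin(2 * np.pi * hours / 24)
         + rng.normal(0, 0.3, hours.size) for _ in range(3)]
shaded = [21 + 2 * np.sin(2 * np.pi * (hours - 3) / 24)
          + rng.normal(0, 0.3, hours.size) for _ in range(3)]
readings = np.stack(sunny + shaded)

def greedy_clusters(series, threshold=0.9):
    """Assign each series to the first cluster whose seed correlates above
    the threshold; otherwise start a new cluster."""
    clusters = []
    for i in range(len(series)):
        for c in clusters:
            if np.corrcoef(series[c[0]], series[i])[0, 1] >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

print(greedy_clusters(readings))   # expected: [[0, 1, 2], [3, 4, 5]]
```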
Web-GIS platform for green infrastructure in Bucharest, Romania
NASA Astrophysics Data System (ADS)
Sercaianu, Mihai; Petrescu, Florian; Aldea, Mihaela; Oana, Luca; Rotaru, George
2015-06-01
In the last decade, reducing urban pollution and improving the quality of public spaces have become increasingly important issues for public administration authorities in Romania. The paper describes the development of a web-GIS solution dedicated to monitoring the green infrastructure in Bucharest, Romania. The system allows urban residents (citizens) to collect and directly report relevant information regarding the current status of the city's green infrastructure themselves. Consequently, the citizens become an active component of the decision-support process within the public administration. Besides the usual technical characteristics of such geo-information processing systems, the complex legal and organizational problems that arise in collecting information directly from citizens required additional analysis concerning, for example, local government involvement, environmental protection agency regulations, and public entity requirements. Designing and implementing the whole information exchange process, based on the active interaction between citizens and public administration bodies, required the use of the "citizen-sensor" concept deployed with GIS tools. The information collected and reported from the field relates to many factors, which are not always limited to the city level, providing the possibility of considering the green infrastructure as a whole. The "citizen-request" web-GIS solution for green infrastructure monitoring is characterized by very diverse urban information, because the green infrastructure itself is conditioned by many urban elements, such as urban infrastructures, urban infrastructure works, and construction density.
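The "citizen-sensor" idea boils down to residents submitting geolocated observations to a service the administration can query. A minimal sketch of such a reporting endpoint follows, assuming a Flask-style web service; the endpoint name, fields, and in-memory store are invented, not the paper's implementation:

```python
# Hypothetical sketch of a "citizen-sensor" reporting endpoint using Flask.
# Endpoint name and payload fields are invented for illustration.
from flask import Flask, request, jsonify

app = Flask(__name__)
reports = []   # in-memory store; a real system would use a spatial database

@app.route("/reports", methods=["POST"])
def submit_report():
    payload = request.get_json(force=True)
    report = {
        "lat": float(payload["lat"]),            # observation location
        "lon": float(payload["lon"]),
        "category": payload.get("category", "green_space"),
        "description": payload.get("description", ""),
    }
    reports.append(report)
    return jsonify({"status": "received", "id": len(reports) - 1}), 201

if __name__ == "__main__":
    app.run(port=5000)
```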
Human-Technology Centric In Cyber Security Maintenance For Digital Transformation Era
NASA Astrophysics Data System (ADS)
Ali, Firkhan Ali Bin Hamid; Zalisham Jali, Mohd, Dr
2018-05-01
The development of digital transformation in organizations has been expanding and will continue to do so in the coming years. This is because of the strong demand to use ICT services among organizations, whether government agencies or private-sector companies. While digital transformation has led manufacturers to incorporate sensors and software analytics into their offerings, the same innovation has also brought pressure to offer clients more accommodating appliance deployment options, so a sound plan is needed for implementing cyber infrastructures and equipment. Cyber security plays an important role in ensuring that ICT components and infrastructures perform well in support of the organization's business. This paper presents a study of security management models to guide security maintenance of existing cyber infrastructures. To build a security model for currently existing cyber infrastructures, elements of the security workforce are combined with security processes for carrying out security maintenance. The assessment focused on cyber security maintenance within security models for cyber infrastructures and presents an approach to theoretical and practical analysis based on the selected security management models. The proposed model then performs an evaluation of the analysis, which can be used to obtain insights into the configuration and to specify desired and undesired configurations. The cyber security maintenance within a security management model was implemented in a prototype and evaluated for practical and theoretical scenarios. Furthermore, a framework model is presented that allows the evaluation of configuration changes in agile and dynamic cyber infrastructure environments with regard to properties like vulnerabilities or expected availability. From a security perspective, this evaluation can be used to monitor the security levels of the configuration over its lifetime and to indicate degradations.
CloudMan as a platform for tool, data, and analysis distribution.
Afgan, Enis; Chapman, Brad; Taylor, James
2012-11-27
Cloud computing provides an infrastructure that facilitates large-scale computational analysis in a scalable, democratized fashion. However, in this context it is difficult to ensure sharing of an analysis environment and associated data in a scalable and precisely reproducible way. CloudMan (usecloudman.org) enables individual researchers to easily deploy, customize, and share their entire cloud analysis environment, including data, tools, and configurations. With the enabled customization and sharing of instances, CloudMan can be used as a platform for collaboration. The presented solution improves the accessibility of cloud resources, tools, and data to the level of an individual researcher and contributes toward reproducibility and transparency of research solutions.
Parametric Analysis for Aurora Mars Manned Mission Concept Definition
NASA Astrophysics Data System (ADS)
Augros, P.; Bonnefond, F.; Ranson, S.
In the frame of the Aurora programme (an ESA programme), Europe plans to develop its own vision of future manned Mars missions. Within this context, we have performed an end-to-end analysis of what these missions could be, focusing on transportation aspects and mobile in-situ infrastructure. This paper defines what is needed to land on Mars and what is needed to return from the Martian surface, explores the round-trip options and their consequences on mission design and feasibility, and analyzes the launcher issue and the in-orbit assembly scenarios. The main results point to a candidate mission based on a scenario close to the NASA reference mission (Ref [1]). The main interest, from a transportation point of view, is that the spacecraft are similar: same insertion stage, same descent vehicle. Such a design is made possible with a deployable aeroshield for the Mars entry vehicle, in-situ water and propellant production, improved habitat technology, a conjunction-class round trip (minimum ΔV, avoiding science-fiction designs), and a launcher payload capability of 100 tons in LEO with a payload size of 30 m long and 7.5 m diameter. An alternative, also limiting the overall mass in LEO, could be to deploy no Mars infrastructure and send a single spacecraft to Mars that returns to Earth. But it implies that the crew stay in Mars orbit for several months, waiting for the next opportunity ensuring a minimum ΔV.
Quantum metropolitan optical network based on wavelength division multiplexing.
Ciurana, A; Martínez-Mateo, J; Peev, M; Poppe, A; Walenta, N; Zbinden, H; Martín, V
2014-01-27
Quantum Key Distribution (QKD) is maturing quickly. However, the current approaches to its application in optical networks make it an expensive technology. QKD networks deployed to date are designed as a collection of point-to-point, dedicated QKD links in which non-neighboring nodes communicate using the trusted repeater paradigm. We propose a novel optical network model in which QKD systems share the communication infrastructure by wavelength-multiplexing their quantum and classical signals. The routing is done using optical components within a metropolitan area, which allows for a dynamic any-to-any communication scheme. Moreover, it resembles a commercial telecom network, takes advantage of existing infrastructure and utilizes commercial components, allowing for an easy, cost-effective and reliable deployment.
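The abstract does not detail the multiplexing scheme, but the core shared-fiber idea can be illustrated with a toy channel plan: quantum and classical signals sit on one wavelength grid, with a guard band protecting the weak quantum channels from classical crosstalk. The grid spacing, channel frequencies, and 200 GHz guard value below are assumptions, not the paper's parameters:

```python
# Toy illustration of sharing one fiber between quantum and classical
# channels on an ITU-style 100 GHz grid. All channel plans and the guard
# band value are invented for illustration.
GRID_GHZ = 100
GUARD_GHZ = 200   # assumed minimum spacing from any classical channel

classical = [193100 + i * GRID_GHZ for i in range(4)]   # GHz, ~1550 nm band
quantum_candidates = [192700, 192900, 193000, 193700]

def violates_guard(q, classical_chs):
    """True if a quantum channel sits too close to a classical channel."""
    return any(abs(q - c) < GUARD_GHZ for c in classical_chs)

quantum = [q for q in quantum_candidates if not violates_guard(q, classical)]
print("classical channels:", classical)
print("quantum channels kept:", quantum)   # 193000 is dropped (too close)
```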
Throughput Analysis on 3-Dimensional Underwater Acoustic Network with One-Hop Mobile Relay.
Zhong, Xuefeng; Chen, Fangjiong; Fan, Jiasheng; Guan, Quansheng; Ji, Fei; Yu, Hua
2018-01-16
Underwater acoustic communication network (UACN) has been considered as an essential infrastructure for ocean exploitation. Performance analysis of UACN is important in underwater acoustic network deployment and management. In this paper, we analyze the network throughput of three-dimensional randomly deployed transmitter-receiver pairs. Due to the long delay of acoustic channels, complicated networking protocols with heavy signaling overhead may not be appropriate. In this paper, we consider only one-hop or two-hop transmission, to save signaling cost. That is, we assume the transmitter sends the data packet to the receiver by one-hop direct transmission, or by two-hop transmission via mobile relays. We derive the closed-form formulation of packet delivery rate with respect to the transmission delay and the number of transmitter-receiver pairs. The correctness of the derivation is verified by computer simulations. Our analysis indicates how to obtain a precise tradeoff between the delay constraint and the network capacity.
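The paper's closed-form result is not reproduced in the abstract, but one ingredient of such an analysis is easy to illustrate by Monte Carlo: the probability that a one-hop transmission between a randomly placed pair meets a delay deadline, given the roughly 1500 m/s speed of sound in water. The deployment region size and deadlines below are assumptions, and the sketch ignores relays and contention:

```python
# Monte Carlo sketch of one-hop delivery under a delay deadline in a 3-D
# region. Illustrative only: not the paper's closed-form model, which also
# covers two-hop relaying and the number of competing pairs.
import numpy as np

rng = np.random.default_rng(1)
SOUND_SPEED = 1500.0   # m/s, nominal for seawater
SIDE = 5000.0          # 5 km cube deployment region (assumed)
TRIALS = 100_000

tx = rng.uniform(0, SIDE, (TRIALS, 3))
rx = rng.uniform(0, SIDE, (TRIALS, 3))
delay = np.linalg.norm(tx - rx, axis=1) / SOUND_SPEED   # propagation delay, s

for deadline in (1.0, 2.0, 3.0, 4.0):
    rate = np.mean(delay <= deadline)
    print(f"deadline {deadline:.0f} s -> one-hop delivery rate {rate:.3f}")
```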
Throughput Analysis on 3-Dimensional Underwater Acoustic Network with One-Hop Mobile Relay
Zhong, Xuefeng; Fan, Jiasheng; Guan, Quansheng; Ji, Fei; Yu, Hua
2018-01-01
Underwater acoustic communication network (UACN) has been considered as an essential infrastructure for ocean exploitation. Performance analysis of UACN is important in underwater acoustic network deployment and management. In this paper, we analyze the network throughput of three-dimensional randomly deployed transmitter–receiver pairs. Due to the long delay of acoustic channels, complicated networking protocols with heavy signaling overhead may not be appropriate. In this paper, we consider only one-hop or two-hop transmission, to save signaling cost. That is, we assume the transmitter sends the data packet to the receiver by one-hop direct transmission, or by two-hop transmission via mobile relays. We derive the closed-form formulation of packet delivery rate with respect to the transmission delay and the number of transmitter–receiver pairs. The correctness of the derivation is verified by computer simulations. Our analysis indicates how to obtain a precise tradeoff between the delay constraint and the network capacity. PMID:29337911
Middleton, Richard S; Brandt, Adam R
2013-02-05
The Alberta oil sands are a significant source of oil production and greenhouse gas emissions, and their importance will grow as the region is poised for decades of growth. We present an integrated framework that simultaneously considers economic and engineering decisions for the capture, transport, and storage of oil sands CO2 emissions. The model optimizes CO2 management infrastructure at a variety of carbon prices for the oil sands industry. Our study reveals several key findings. We find that the oil sands industry lends itself well to the development of CO2 trunk lines due to the geographic coincidence of sources and sinks. This reduces the relative importance of transport costs compared to nonintegrated transport systems. Also, the amount of managed oil sands CO2 emissions, and therefore the CCS infrastructure, is very sensitive to the carbon price; significant capture and storage occurs only above $110/tonne CO2 in our simulations. Deployment of infrastructure is also sensitive to CO2 capture decisions and technology, particularly the fraction of capturable CO2 from oil sands upgrading and steam generation facilities. The framework will help stakeholders and policy makers understand how CCS infrastructure, including an extensive pipeline system, can be safely and cost-effectively deployed.
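The carbon-price sensitivity the study reports can be pictured with a toy version of the capture decision: a source captures only when the carbon price exceeds its full capture, transport, and storage cost. The per-source costs below are invented; only the ~$110/tonne threshold comes from the abstract:

```python
# Toy version of the capture decision described in the abstract: capture at a
# source only when the carbon price exceeds its all-in CCS cost. All costs
# here are invented for illustration.
sources = {                 # $/tonne CO2 for capture + transport + storage
    "upgrader_A": 95.0,
    "upgrader_B": 118.0,
    "steam_gen_C": 130.0,
}

def captured(sources, carbon_price):
    """Return the sources for which capture is economic at this price."""
    return [name for name, cost in sources.items() if carbon_price > cost]

for price in (80, 110, 140):
    print(f"carbon price {price} $/t ->", captured(sources, price))
```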
NREL, Sandia Team to Improve Hydrogen Fueling Infrastructure | News | NREL
NREL and Sandia are teaming to improve hydrogen fueling infrastructure, helping ensure that hydrogen fuel cell vehicle owners have a positive fueling experience as fuel cell electric vehicles come to market, and paving the way toward more widespread deployment of hydrogen fuel cell electric vehicles. Finding out what's working and what needs improvement is a key next step for fuel cell vehicle deployment.
DOT National Transportation Integrated Search
2010-03-17
The attempted bombing of Northwest flight 253 highlighted the importance of detecting improvised explosive devices on passengers. This testimony focuses on (1) the Transportation Security Administration's (TSA) efforts to procure and deploy advance...
Seeing Red: Discourse, Metaphor, and the Implementation of Red Light Cameras in Texas
ERIC Educational Resources Information Center
Hayden, Lance Alan
2009-01-01
This study examines the deployment of automated red light camera systems in the state of Texas from 2003 through late 2007. The deployment of new technologies in general, and surveillance infrastructures in particular, can prove controversial and challenging for the formation of public policy. Red light camera surveillance during this period in…
Controlled Hydrogen Fleet and Infrastructure Demonstration and Validation Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stottler, Gary
General Motors, LLC and energy partner Shell Hydrogen, LLC, deployed a system of hydrogen fuel cell electric vehicles integrated with a hydrogen fueling station infrastructure to operate under real world conditions as part of the U.S. Department of Energy's Controlled Hydrogen Fleet and Infrastructure Validation and Demonstration Project. This technical report documents the performance and describes the learnings from progressive generations of vehicle fuel cell system technology and multiple approaches to hydrogen generation and delivery for vehicle fueling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
John Smart
A preliminary analysis of data from The EV Project was performed to begin answering the question: are corridor charging stations used to extend the range of electric vehicles? The data analyzed were collected from Blink brand electric vehicle supply equipment (EVSE) units based in California, Washington, and Oregon. Analysis was performed on data logged between October 1, 2012 and January 1, 2013. It should be noted that as additional AC Level 2 EVSE and DC fast chargers are deployed, and as drivers become more familiar with the use of public charging infrastructure, future analyses may reach different conclusions.
Jobs and Economic Development from New Transmission and Generation in Wyoming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lantz, E.; Tegen, S.
2011-03-01
This report is intended to inform policymakers, local government officials, and Wyoming residents about the jobs and economic development activity that could occur should new infrastructure investments in Wyoming move forward. The report and analysis presented is not a projection or a forecast of what will happen. Instead, the report uses a hypothetical deployment scenario and economic modeling tools to estimate the jobs and economic activity likely associated with these projects if or when they are built.
Jobs and Economic Development from New Transmission and Generation in Wyoming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lantz, Eric; Tegen, Suzanne
2011-03-31
This report is intended to inform policymakers, local government officials, and Wyoming residents about the jobs and economic development activity that could occur should new infrastructure investments in Wyoming move forward. The report and analysis presented is not a projection or a forecast of what will happen. Instead, the report uses a hypothetical deployment scenario and economic modeling tools to estimate the jobs and economic activity likely associated with these projects if or when they are built.
2015-05-01
for issuing this critical change: Inability to achieve PKI Increment 2 Full Deployment Decision (FDD) within five years of program initiation ... March 1, 2014 deadline), and Delay of over one year in the original FDD estimate provided to the Congress (1 March 2014 deadline). The proximate ... to support a 1 March 2014 FDD." The Director, Performance Assessments and Root Cause Analyses (PARCA), asked the Institute for Defense Analyses
Modeling the Cloud to Enhance Capabilities for Crises and Catastrophe Management
2016-11-16
order for cloud computing infrastructures to be successfully deployed in real world scenarios as tools for crisis and catastrophe management, where...Statement of the Problem Studied As cloud computing becomes the dominant computational infrastructure[1] and cloud technologies make a transition to hosting...1. Formulate rigorous mathematical models representing technological capabilities and resources in cloud computing for performance modeling and
Security Engineering and Educational Initiatives for Critical Information Infrastructures
2013-06-01
standard for cryptographic protection of SCADA communications. The United Kingdom’s National Infrastructure Security Co-ordination Centre (NISCC...has released a good practice guide on firewall deployment for SCADA systems and process control networks [17]. Meanwhile, National Institute for ...report. APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED 18 The SCADA gateway collects the data gathered by sensors, translates them from
2011-10-01
Fortunately, some products offer centralized management and deployment tools for local desktop implementation. Figure 5 illustrates the... implementation of a secure desktop infrastructure based on virtualization. It includes an overview of desktop virtualization, including an in-depth...environment in the data centre, whereas LHVD places it on the endpoint itself. Desktop virtualization implementation considerations and potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finnell, Joshua Eugene; Klein, Martin; Cain, Brian J.
2017-05-09
The proposal is to provide institutional infrastructure that facilitates the management of research projects, research collaboration, and the management, preservation, and discovery of data. Deploying such infrastructure will amplify the effectiveness, efficiency, and impact of research, as well as assist researchers in complying with both data management mandates and LANL security policy. This will facilitate discoverability of LANL research both within the lab and externally.
NGNP Infrastructure Readiness Assessment: Consolidation Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brian K Castle
2011-02-01
The Next Generation Nuclear Plant (NGNP) project supports the development, demonstration, and deployment of high temperature gas-cooled reactors (HTGRs). The NGNP project is being reviewed by the Nuclear Energy Advisory Council (NEAC) to provide input to the DOE, which will make a recommendation to the Secretary of Energy on whether or not to continue with Phase 2 of the NGNP project. The NEAC review will be based, in part, on the infrastructure readiness assessment, which is an assessment of industry's current ability to provide specified components for the first-of-a-kind (FOAK) NGNP, meet quality assurance requirements, transport components, have the necessary workforce in place, and have the necessary construction capabilities. AREVA and Westinghouse were contracted to perform independent assessments of industry's capabilities because of their experience with nuclear supply chains, a result of their experience with the EPR and AP-1000 reactors. Both vendors produced infrastructure readiness assessment reports that identified key components and categorized them into three groups based on their readiness to be deployed in the FOAK plant. The NGNP project has several programs that are developing key components and capabilities; for these components, the project has provided input to properly assess infrastructure readiness.
Integrating multiple scientific computing needs via a Private Cloud infrastructure
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.
2014-06-01
In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It allows resources to be allocated dynamically and efficiently to any application, and the virtual machines to be tailored to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
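To illustrate the federation hooks mentioned here (EC2-style APIs plus contextualization), a minimal sketch follows, assuming the private cloud exposes an EC2-compatible endpoint as OpenNebula's econe service can; the endpoint URL, credentials, image ID, and instance type are hypothetical placeholders.

```python
# A minimal sketch: launch and contextualize a VM through an EC2-compatible
# API using boto3; every identifier below is a hypothetical placeholder.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://cloud.example.org:4567",  # hypothetical econe endpoint
    aws_access_key_id="USER",
    aws_secret_access_key="SECRET",
    region_name="local",
)

# Contextualization: cloud-init user-data tailors a generic image to a
# specific role (here, a batch worker) at boot time.
user_data = """#cloud-config
packages:
  - htcondor
runcmd:
  - [systemctl, start, condor]
"""

ec2.run_instances(
    ImageId="ami-worker",   # hypothetical worker-node image
    MinCount=1,
    MaxCount=1,
    InstanceType="m1.large",
    UserData=user_data,
)
```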
Wireless Sensor Network Deployment for Monitoring Wildlife Passages
Garcia-Sanchez, Antonio-Javier; Garcia-Sanchez, Felipe; Losilla, Fernando; Kulakowski, Pawel; Garcia-Haro, Joan; Rodríguez, Alejandro; López-Bao, José-Vicente; Palomares, Francisco
2010-01-01
Wireless Sensor Networks (WSNs) are being deployed in very diverse application scenarios, including rural and forest environments. In these particular contexts, specimen protection and conservation is a challenge, especially in natural reserves, dangerous locations or hot spots of these reserves (e.g., roads, railways, and other civil infrastructures). This paper proposes and studies a WSN based system for generic target (animal) tracking in the surrounding area of wildlife passages built to establish safe ways for animals to cross transportation infrastructures. In addition, it allows target identification through the use of video sensors connected to strategically deployed nodes. This deployment is designed on the basis of the IEEE 802.15.4 standard, but it increases the lifetime of the nodes through appropriate scheduling. The system has been evaluated for the particular scenario of wildlife monitoring in passages across roads. For this purpose, different schemes have been simulated in order to find the most appropriate network operational parameters. Moreover, a novel prototype, provided with motion detector sensors, has also been developed and its design feasibility demonstrated. Original software modules providing new functionalities have been implemented and included in this prototype. Finally, the main performance evaluation results of the whole system are presented and discussed in depth. PMID:22163601
CloudMan as a platform for tool, data, and analysis distribution
2012-01-01
Background Cloud computing provides an infrastructure that facilitates large scale computational analysis in a scalable, democratized fashion. However, in this context it is difficult to ensure sharing of an analysis environment and associated data in a scalable and precisely reproducible way. Results CloudMan (usecloudman.org) enables individual researchers to easily deploy, customize, and share their entire cloud analysis environment, including data, tools, and configurations. Conclusions With the enabled customization and sharing of instances, CloudMan can be used as a platform for collaboration. The presented solution improves accessibility of cloud resources, tools, and data to the level of an individual researcher and contributes toward reproducibility and transparency of research solutions. PMID:23181507
Experiences of engineering Grid-based medical software.
Estrella, F; Hauer, T; McClatchey, R; Odeh, M; Rogulin, D; Solomonides, T
2007-08-01
Grid-based technologies are emerging as potential solutions for managing and collaborating distributed resources in the biomedical domain. Few examples exist, however, of successful implementations of Grid-enabled medical systems and even fewer have been deployed for evaluation in practice. The objective of this paper is to evaluate the use in clinical practice of a Grid-based imaging prototype and to establish directions for engineering future medical Grid developments and their subsequent deployment. The MammoGrid project has deployed a prototype system for clinicians using the Grid as its information infrastructure. To assist in the specification of the system requirements (and for the first time in healthgrid applications), use-case modelling has been carried out in close collaboration with clinicians and radiologists who had no prior experience of this modelling technique. A critical qualitative and, where possible, quantitative analysis of the MammoGrid prototype is presented, leading to a set of recommendations from the delivery of the first deployed Grid-based medical imaging application. We report critically on the application of software engineering techniques in the specification and implementation of the MammoGrid project and show that use-case modelling is a suitable vehicle for representing medical requirements and for communicating effectively with the clinical community. This paper also discusses the practical advantages and limitations of applying the Grid to real-life clinical applications and presents the consequent lessons learned. The work presented in this paper demonstrates that, given suitable commitment from collaborating radiologists, it is practical to deploy medical imaging analysis applications using the Grid, but that standardization in, and stability of, the Grid software is a necessary prerequisite for successful healthgrids. The MammoGrid prototype has therefore paved the way for further advanced Grid-based deployments in the medical and biomedical domains.
The Semi-opened Infrastructure Model (SopIM): A Frame to Set Up an Organizational Learning Process
NASA Astrophysics Data System (ADS)
Grundstein, Michel
In this paper, we introduce the "Semi-opened Infrastructure Model (SopIM)" implemented to deploy Artificial Intelligence and Knowledge-based Systems within a large industrial company. This model illustrates what could be two of the operating elements of the Model for General Knowledge Management within the Enterprise (MGKME) that are essential to set up the organizational learning process that leads people to appropriate and use concepts, methods and tools of an innovative technology: the "Ad hoc Infrastructures" element, and the "Organizational Learning Processes" element.
Kreofsky, Beth L H; Blegen, R Nicole; Lokken, Troy G; Kapraun, Susan M; Bushman, Matthew S; Demaerschalk, Bart M
2018-04-16
Telemedicine services in medical institutions are often developed in isolation of one another and not as part of a comprehensive telemedicine program. The Center for Connected Care is the administrative home for a broad range of telehealth services at Mayo Clinic. This article addresses real-time video services, referred to as telemedicine throughout. It discusses how a large healthcare system designed and built the infrastructure to support a comprehensive telemedicine practice. Based on analysis of existing services, Mayo Clinic developed a multifaceted operational plan that addressed high-priority areas and outlined clear roles and responsibilities of the Center for Connected Care and of the clinical departments. The plan set priorities and a direction that would lead to long-term success. The plan articulated the governing and operational infrastructure necessary to support telemedicine by defining the role of the Center for Connected Care as the owner of core administrative operations and the role of the clinical departments as the owners of clinical telemedicine services. Additional opportunities were identified to develop product selection processes, implementation services, and staffing models that would be applied to ensure successful telemedicine deployment. The telemedicine team within the Center for Connected Care completed 45 business cases resulting in 54 implementations. The standardization of core products, along with key operational offerings around implementation services and the establishment of a 24/7 support model, resulted in improved provider satisfaction and fewer reported technical issues. The foundation for long-term scalability and growth was developed by centralizing operations of telemedicine services, implementing sustainable processes, employing dedicated qualified personnel, and deploying robust products.
Musko, Stephen B; Clauer, C Robert; Ridley, Aaron J; Arnett, Kenneth L
2009-04-01
A major driver in the advancement of geophysical sciences is improvement in the quality and resolution of data for use in scientific analysis, discovery, and for assimilation into or validation of empirical and physical models. The need for more and better measurements together with improvements in technical capabilities is driving the ambition to deploy arrays of autonomous geophysical instrument platforms in remote regions. This is particularly true in the southern polar regions where measurements are presently sparse due to the remoteness, lack of infrastructure, and harshness of the environment. The need for the acquisition of continuous long-term data from remote polar locations exists across geophysical disciplines and is a generic infrastructure problem. The infrastructure, however, to support autonomous instrument platforms in polar environments is still in the early stages of development. We report here the development of an autonomous low-power magnetic variation data collection system. Following 2 years of field testing at South Pole Station, the system is being reproduced to establish a dense chain of stations on the Antarctic plateau along the 40 degrees magnetic meridian. The system is designed to operate for at least 5 years unattended and to provide data access via satellite communication. The system will store 1 s measurements of the magnetic field variation (<0.2 nT resolution) in three vector components plus a variety of engineering status and environment parameters. We believe that the data collection platform can be utilized by a variety of low-power instruments designed for low-temperature operation. The design, technical characteristics, and operation results are presented here.
NASA Technical Reports Server (NTRS)
Limaye, Ashutosh S.; Molthan, Andrew L.; Srikishen, Jayanthi
2010-01-01
The development of the Nebula Cloud Computing Platform at NASA Ames Research Center provides an open-source solution for the deployment of scalable computing and storage capabilities relevant to the execution of real-time weather forecasts and the distribution of high resolution satellite data to the operational weather community. Two projects at Marshall Space Flight Center may benefit from use of the Nebula system. The NASA Short-term Prediction Research and Transition (SPoRT) Center facilitates the use of unique NASA satellite data and research capabilities in the operational weather community by providing datasets relevant to numerical weather prediction, and satellite data sets useful in weather analysis. SERVIR provides satellite data products for decision support, emphasizing environmental threats such as wildfires, floods, landslides, and other hazards, with interests in numerical weather prediction in support of disaster response. The Weather Research and Forecast (WRF) model Environmental Modeling System (WRF-EMS) has been configured for Nebula cloud computing use via the creation of a disk image and deployment of repeated instances. Given the available infrastructure within Nebula and the "infrastructure as a service" concept, the system appears well-suited for the rapid deployment of additional forecast models over different domains, in response to real-time research applications or disaster response. Future investigations into Nebula capabilities will focus on the development of a web mapping server and load balancing configuration to support the distribution of high resolution satellite data sets to users within the National Weather Service and international partners of SERVIR.
Connected vehicle pilot deployment program.
DOT National Transportation Integrated Search
2014-01-01
The U.S. Department of Transportation's (USDOT's) connected vehicle research program is a multimodal initiative to enable safe, interoperable, networked wireless communications among vehicles, infrastructure, and personal communications devices. ...
Deploying Crowd-Sourced Formal Verification Systems in a DoD Network
2013-09-01
In 2014 cyber attacks on critical infrastructure are expected to increase...CSFV systems on the Internet, possibly using cloud infrastructure (Dean, 2013). By using Amazon Elastic Compute Cloud (EC2) systems, DARPA will use ordinary...through standard access methods. Those clients could be mobile phones, laptops, netbooks, tablet computers or personal digital assistants (PDAs) (Smoot
Waggle: A Framework for Intelligent Attentive Sensing and Actuation
NASA Astrophysics Data System (ADS)
Sankaran, R.; Jacob, R. L.; Beckman, P. H.; Catlett, C. E.; Keahey, K.
2014-12-01
Advances in sensor-driven computation and computationally steered sensing will greatly enable future research in fields including environmental and atmospheric sciences. We will present "Waggle," an open-source hardware and software infrastructure developed with two goals: (1) reducing the separation and latency between sensing and computing and (2) improving the reliability and longevity of sensing-actuation platforms in challenging and costly deployments. Inspired by "deep-space probe" systems, the Waggle platform design includes features that can support longitudinal studies, deployments with varying communication links, and remote management capabilities. Waggle lowers the barrier for scientists to incorporate real-time data from their sensors into their computations and to manipulate the sensors or provide feedback through actuators. A standardized software and hardware design allows quick addition of new sensors/actuators and associated software in the nodes and enables them to be coupled with computational codes both in situ and on external compute infrastructure. The Waggle framework currently drives the deployment of two observational systems - a portable and self-sufficient weather platform for the study of small-scale effects in Chicago's urban core and an open-ended distributed instrument in Chicago that aims to support several research pursuits across a broad range of disciplines including urban planning, microbiology and computer science. Built around open-source software, hardware, and the Linux OS, the Waggle system comprises two components - the Waggle field-node and the Waggle cloud-computing infrastructure. The Waggle field-node affords a modular, scalable, fault-tolerant, secure, and extensible platform for hosting sensors and actuators in the field. It supports in situ computation and data storage, and integration with cloud-computing infrastructure. The Waggle cloud infrastructure is designed with the goal of scaling to several hundreds of thousands of Waggle nodes. It supports aggregating data from sensors hosted by the nodes, staging computation, relaying feedback to the nodes and serving data to end-users. We will discuss the Waggle design principles and their applicability to various observational research pursuits, and demonstrate its capabilities.
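The attentive-sensing idea, computing in situ and transmitting only when attention is warranted, can be sketched as below; the sensor driver and uplink functions are hypothetical stand-ins, not the actual Waggle API.

```python
# A hedged sketch of attentive sensing: sample locally, compute a running
# baseline in situ, and publish only anomalous readings upstream.
import random
import statistics
import time

def read_pm25() -> float:
    return max(0.0, random.gauss(12.0, 3.0))  # stand-in for a real sensor driver

def publish(topic: str, value: float) -> None:
    print(f"uplink {topic}={value:.1f}")       # stand-in for the cloud uplink

def attentive_loop(threshold_sigma: float = 3.0, window: int = 60) -> None:
    history = []
    for _ in range(600):                       # bounded here; a daemon in practice
        value = read_pm25()
        if len(history) >= window:
            mu = statistics.fmean(history)
            sigma = statistics.stdev(history) or 1e-9
            # Routine readings stay on the node; only anomalies go upstream.
            if abs(value - mu) / sigma > threshold_sigma:
                publish("env.air_quality.pm25", value)
            history.pop(0)
        history.append(value)
        time.sleep(0.01)                       # pacing; ~1 s in a real deployment

attentive_loop()
```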
NASA Astrophysics Data System (ADS)
Turner, Sean
2015-04-01
Water resources planning is a complex and challenging discipline in which decision makers must deal with conflicting objectives, contested socio-economic values and vast uncertainties, including long-term hydrological variability. The task is arguably more demanding in England and Wales, where private water companies must adhere to a rigid set of regulatory planning guidelines in order to justify new infrastructural investments. These guidelines prescribe a "capacity expansion" approach to planning: ensure that a deterministic measure of supply, known as "Deployable Output," meets projected demand over a 25-year planning horizon. Deployable Output is derived using a method akin to yield analysis and is commensurate with the maximum rate of supply that a water resources system can sustain without incurring failure under a simulation of historical recorded hydrological conditions. This study examines whether Deployable Output analysis is fit to serve an industry in which: water companies are seeking to invest in cross-company water transfer schemes to deal with loss of water availability brought about by European environmental legislation and an increase in demand driven by population growth; water companies are expected to address potential climate change impacts through their planning activities; and regulators wish to benchmark water resource system performance across the separate companies. Of particular interest, then, is the adequacy of Deployable Output analysis as a means of measuring current and future water shortage risk and comparing across supply systems. Data from the UK National River Flow Archive are used to develop a series of hypothetical reservoir systems in two hydrologically contrasting regions -- northwest England/north Wales and southeast England. The systems are varied by adjusting the draft ratio (ratio of target annual demand to mean annual inflow), the inflow diversity (covariance of streamflow sequences supplying the system), the strength of interconnectivity in the system (water transfer capability as a proportion of demand), and the proportion of the target demand that can be drafted from climate-independent supply sources (such as plentiful groundwater supplies or desalination). The reservoir capacities are then adjusted such that all systems are perfectly and equally balanced under current design standards (Deployable Output equals demand) before being subjected to comprehensive reliability, resilience, and vulnerability analysis using stochastically derived replicates of the inflow sequences. Results indicate significant discrepancies in performance, highlighting major deficiencies in the currently accepted planning metrics as a means of measuring and comparing water shortage risk across supply systems. These discrepancies are evident in both regions examined. The work highlights the need for a reassessment of the prescribed planning methodology to better reflect aspects of water shortage risk, particularly resilience and vulnerability.
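A minimal sketch of the reliability/resilience/vulnerability calculation (after Hashimoto et al., 1982) on one synthetic inflow replicate is given below; the single-reservoir mass balance, demand, and inflow statistics are illustrative assumptions, not the study's systems.

```python
# A minimal sketch: simulate one reservoir over a synthetic inflow replicate
# and compute reliability, resilience, and vulnerability.
import random

def simulate(inflows, capacity, demand):
    """Mass-balance simulation returning (reliability, resilience, vulnerability)."""
    storage = capacity
    failures = failure_runs = 0
    deficits = []
    prev_failed = False
    for q in inflows:
        storage = min(storage + q, capacity)
        supplied = min(storage, demand)
        storage -= supplied
        failed = supplied < demand
        if failed:
            failures += 1
            deficits.append(demand - supplied)
            if not prev_failed:
                failure_runs += 1  # a new failure event begins
        prev_failed = failed
    n = len(inflows)
    reliability = 1 - failures / n                               # periods supplied in full
    resilience = failure_runs / failures if failures else 1.0    # recovery rate from failure
    vulnerability = max(deficits) / demand if deficits else 0.0  # worst relative shortfall
    return reliability, resilience, vulnerability

random.seed(1)
inflows = [max(0.0, random.gauss(100.0, 40.0)) for _ in range(1200)]  # one synthetic replicate
print(simulate(inflows, capacity=500.0, demand=90.0))
```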
Rethinking Mobile Telephony with the IMP
2011-01-01
in the telephony industry, and portions of it such as SS7 or SCTP signaling are packet-switched, deployed mobile telephony access infrastructure is...deployment of wireless LAN technology raises the question of how a mobile telephony system might instead be architected to use wireless LAN access ...and wireless access points has made universal Internet access increasingly convenient. There are clearly barriers to this vision of accessing a
Consolidation and development roadmap of the EMI middleware
NASA Astrophysics Data System (ADS)
Kónya, B.; Aiftimiei, C.; Cecchi, M.; Field, L.; Fuhrmann, P.; Nilsen, J. K.; White, J.
2012-12-01
Scientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often incompatible middleware solutions. The European Middleware Initiative (EMI) is a collaboration of the major European middleware providers ARC, dCache, gLite and UNICORE. EMI aims to: deliver a consolidated set of middleware components for deployment in EGI, PRACE and other Distributed Computing Infrastructures; extend the interoperability between grids and other computing infrastructures; strengthen the reliability of the services; establish a sustainable model to maintain and evolve the middleware; fulfil the requirements of the user communities. This paper presents the consolidation and development objectives of the EMI software stack covering the last two years. The EMI development roadmap is introduced along the four technical areas of compute, data, security and infrastructure. The compute area plan focuses on consolidation of standards and agreements through a unified interface for job submission and management, a common format for accounting, the wide adoption of GLUE schema version 2.0 and the provision of a common framework for the execution of parallel jobs. The security area is working towards a unified security model and lowering the barriers to Grid usage by allowing users to gain access with their own credentials. The data area is focusing on implementing standards to ensure interoperability with other grids and industry components and to reuse already existing clients in operating systems and open source distributions. One of the highlights of the infrastructure area is the consolidation of the information system services via the creation of a common information backbone.
The Particle Physics Data Grid. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livny, Miron
2002-08-16
The main objective of the Particle Physics Data Grid (PPDG) project has been to implement and evaluate distributed (Grid-enabled) data access and management technology for current and future particle and nuclear physics experiments. The specific goals of PPDG have been to design, implement, and deploy a Grid-based software infrastructure capable of supporting the data generation, processing and analysis needs common to the physics experiments represented by the participants, and to adapt experiment-specific software to operate in the Grid environment and to exploit this infrastructure. To accomplish these goals, the PPDG focused on the implementation and deployment of several critical services: reliable and efficient file replication service, high-speed data transfer services, multisite file caching and staging service, and reliable and recoverable job management services. The focus of the activity was the job management services and the interplay between these services and distributed data access in a Grid environment. Software was developed to study the interaction between HENP applications and distributed data storage fabric. One key conclusion was the need for a reliable and recoverable tool for managing large collections of interdependent jobs. An attached document provides an overview of the current status of the Directed Acyclic Graph Manager (DAGMan) with its main features and capabilities.
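Managing large collections of interdependent jobs is the problem DAGMan addresses; the toy executor below sketches the core idea, dependency-ordered execution with simple recovery, and is illustrative rather than DAGMan's actual implementation.

```python
# A hedged sketch of DAG-style job management: run jobs in dependency order
# and allow a failed campaign to be resumed (DAGMan's "rescue" idea).
from graphlib import TopologicalSorter

def run_dag(deps, run, done=None):
    """deps maps each job name to the set of jobs it depends on;
    run(name) -> bool executes a job; done lists jobs already completed."""
    done = set(done or ())
    for name in TopologicalSorter(deps).static_order():
        if name in done:
            continue  # recovery: skip work finished in a previous run
        if not run(name):
            raise RuntimeError(f"job {name!r} failed; completed so far: {sorted(done)}")
        done.add(name)
    return done

# Example: reconstruction precedes two analyses, which feed a merge step.
deps = {"reco": set(), "ana1": {"reco"}, "ana2": {"reco"}, "merge": {"ana1", "ana2"}}
print(run_dag(deps, run=lambda name: print("running", name) or True))
```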
GATECloud.net: a platform for large-scale, open-source text processing on the cloud.
Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina
2013-01-28
Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research--GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.
Public key infrastructure for DOE security research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aiken, R.; Foster, I.; Johnston, W.E.
This document summarizes the Department of Energy's Second Joint Energy Research/Defense Programs Security Research Workshop. The workshop built on the results of the first Joint Workshop, which reviewed security requirements represented in a range of mission-critical ER and DP applications, discussed commonalities and differences in ER/DP requirements and approaches, and identified an integrated common set of security research priorities. One significant conclusion of the first workshop was that progress in a broad spectrum of DOE-relevant security problems and applications could best be addressed through public-key cryptography based systems, and therefore depended upon the existence of a robust, broadly deployed public-key infrastructure. Hence, public-key infrastructure ("PKI") was adopted as a primary focus for the second workshop. The Second Joint Workshop covered a range of DOE security research and deployment efforts, as well as summaries of the state of the art in various areas relating to public-key technologies. Key findings were that a broad range of DOE applications can benefit from security architectures and technologies built on a robust, flexible, widely deployed public-key infrastructure; that there exists a collection of specific requirements for missing or undeveloped PKI functionality, together with a preliminary assessment of how these requirements can be met; that, while commercial developments can be expected to provide many relevant security technologies, there are important capabilities that commercial developments will not address, due to the unique scale, performance, diversity, distributed nature, and sensitivity of DOE applications; and that DOE should encourage and support research activities intended to increase understanding of security technology requirements, and to develop critical components not forthcoming from other sources in a timely manner.
A Serviced-based Approach to Connect Seismological Infrastructures: Current Efforts at the IRIS DMC
NASA Astrophysics Data System (ADS)
Ahern, Tim; Trabant, Chad
2014-05-01
As part of the COOPEUS initiative to build infrastructure that connects European and US research infrastructures, IRIS has advocated for the development of federated services based upon internationally recognized web service standards. By deploying International Federation of Digital Seismograph Networks (FDSN) endorsed web services at multiple data centers in the US and Europe, we have shown that integration within the seismological domain can be realized. Because the web services are invoked identically at every center, this approach significantly eases the way a scientist can access seismic data (time series, metadata, and earthquake catalogs) from distributed federated centers. IRIS has developed an IRIS federator that helps a user identify where seismic data from global seismic networks can be accessed. The web services based federator builds the appropriate URLs and returns them to client software running on the scientist's own computer. These URLs are then used to pull data directly from the distributed centers in a peer-based fashion. IRIS is also involved in deploying web services across horizontal domains. As part of the US National Science Foundation's (NSF) EarthCube effort, an IRIS-led EarthCube Building Blocks project is underway. When completed, this project will aid in the discovery, access, and usability of data across multiple geoscience domains. This presentation will summarize current IRIS efforts in building vertical integration infrastructure within seismology, working closely with 5 centers in Europe and 2 centers in the US, as well as how we are taking first steps toward horizontal integration of data from 14 different domains in the US, in Europe, and around the world.
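Because the FDSN web service interfaces are identical across federated centers, the same client code works against any of them by changing only the base URL; the sketch below uses the IRIS service base with deliberately simple query parameters.

```python
# A minimal sketch of calling FDSN-standard web services directly with
# requests; swap BASE for another federated center's service root.
import requests

BASE = "https://service.iris.edu/fdsnws"

# Station metadata, as text, for one Global Seismographic Network station.
meta = requests.get(
    f"{BASE}/station/1/query",
    params={"net": "IU", "sta": "ANMO", "level": "channel", "format": "text"},
    timeout=30,
)
print(meta.text.splitlines()[0])

# Waveforms (miniSEED) for a short time window from the same station.
data = requests.get(
    f"{BASE}/dataselect/1/query",
    params={
        "net": "IU", "sta": "ANMO", "loc": "00", "cha": "BHZ",
        "start": "2014-01-01T00:00:00", "end": "2014-01-01T00:10:00",
    },
    timeout=30,
)
open("anmo.mseed", "wb").write(data.content)
```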
caGrid 1.0: An Enterprise Grid Infrastructure for Biomedical Research
Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel
2008-01-01
Objective To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. Measurements The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. Conclusions While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community. PMID:18096909
caGrid 1.0: an enterprise Grid infrastructure for biomedical research.
Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel
2008-01-01
To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community.
NASA Astrophysics Data System (ADS)
Garcia, Oscar; Mihai Toma, Daniel; Dañobeitia, Juanjo; del Rio, Joaquin; Bartolome, Rafael; Martínez, Enoc; Nogueras, Marc; Bghiel, Ikram; Lanteri, Nadine; Rolin, Jean Francois; Beranzoli, Laura; Favali, Paolo
2017-04-01
The EMSODEV project (EMSO implementation and operation: DEVelopment of instrument module) is a Horizon 2020 EU project whose overall objective is the operation of eleven seafloor observatories and four test sites. These infrastructures are distributed throughout European seas, from the Arctic across the Atlantic and the Mediterranean to the Black Sea, and are managed by the European consortium EMSO-ERIC (European Research Infrastructure Consortium) with the participation of 8 European countries and other associated partners. Recently, we have implemented a Generic Sensor Module (EGIM) within the EMSO-ERIC distributed marine research infrastructure. EGIM is able to operate on any EMSO observatory node, mooring line, seabed station, cabled or non-cabled, and surface buoy. The main role of EGIM is to measure a set of core variables homogeneously, using the same hardware, sensor references, qualification and calibration methods, data formats and access, and maintenance procedures, in several European ocean locations. The EGIM module acquires a wide range of ocean parameters in a long-term, consistent, accurate and comparable manner, spanning disciplines such as biology, geology, chemistry, physics, engineering, and computer science, from polar to subtropical environments, through the water column down to the deep sea. Our work includes developing standard-compliant generic software for Sensor Web Enablement (SWE) on EGIM and performing the first onshore and offshore bench tests, to support sensor data acquisition on a new interoperable EGIM system. EGIM is in turn linked to acquisition driver processes, a centralized Sensor Observation Service (SOS) server, and a laboratory monitoring system (LabMonitor) that records events and alarms during acquisition. The measurements recorded across the EMSO nodes are essential to respond accurately to social and scientific challenges such as climate change, changes in marine ecosystems, and marine hazards. This presentation shows the first EGIM deployment and the SWE infrastructure developed to manage data acquisition from the underwater sensors and their insertion into the SOS interface.
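A minimal sketch of pulling observations through the standard SOS 2.0 key-value binding follows; the endpoint and the offering/property identifiers are hypothetical placeholders, not the project's actual service.

```python
# A minimal sketch of an OGC SOS 2.0 GetObservation request over the
# key-value (KVP) binding; all identifiers are hypothetical placeholders.
import requests

resp = requests.get(
    "https://sos.example.org/sos",            # hypothetical EGIM SOS server
    params={
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": "EGIM_node_1",            # hypothetical offering
        "observedProperty": "sea_water_temperature",
        "temporalFilter": "om:phenomenonTime,2017-01-01T00:00:00Z/2017-01-02T00:00:00Z",
    },
    timeout=30,
)
print(resp.status_code, resp.headers.get("Content-Type"))
```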
NASA Astrophysics Data System (ADS)
Clarke, Peter; Davenhall, Clive; Greenwood, Colin; Strong, Matthew
ESLEA, an EPSRC-funded project, aims to demonstrate the potential benefits of circuit-switched optical networks (lightpaths) to the UK e-Science community. This is being achieved by running a number of "proof of benefit" pilot applications over UKLight, the UK's first national optical research network. UKLight provides a new way for researchers to obtain dedicated "lightpaths" between remote sites and to deploy and test novel networking methods and technologies. It facilitates collaboration on global projects by providing a point of access to the fast growing international optical R&D infrastructure. A diverse range of data-intensive fields of academic endeavour are participating in the ESLEA project; all these groups require the integration of high-bandwidth switched lightpath circuits into their experimental and analysis infrastructure for international transport of high-volume applications data. In addition, network protocol research and development of circuit reservation mechanisms has been carried out to help the pilot applications to exploit the UKLight infrastructure effectively. Further information about ESLEA can be viewed at www.eslea.uklight.ac.uk. ESLEA activities are now coming to an end and work will finish from February to July 2007, depending upon the terms of funding of each pilot application. The first quarter of 2007 is considered the optimum time to hold a closing conference for the project. The objectives of the conference are to: 1. Provide a forum for the dissemination of research findings and learning experiences from the ESLEA project. 2. Enable colleagues from the UK and international e-Science communities to present, discuss and learn about the latest developments in networking technology. 3. Raise awareness about the deployment of the UKLight infrastructure and its relationship to SuperJANET 5. 4. Identify potential uses of UKLight by existing or future research projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, R.L.; Hamilton, V.A.; Istrail, G.G.
1997-11-01
This report describes the results of a Sandia-funded laboratory-directed research and development project titled "Integrated and Robust Security Infrastructure" (IRSI). IRSI was to provide a broad range of commercial-grade security services to any software application. IRSI has two primary goals: application transparency and manageable public key infrastructure. IRSI must provide its security services to any application without the need to modify the application to invoke the security services. Public key mechanisms are well suited for a network with many end users and systems. There are many issues that make it difficult to deploy and manage a public key infrastructure. IRSI addressed some of these issues to create a more manageable public key infrastructure.
Scaling the PuNDIT project for wide area deployments
NASA Astrophysics Data System (ADS)
McKee, Shawn; Batista, Jorge; Carcassi, Gabriele; Dovrolis, Constantine; Lee, Danny
2017-10-01
In today's world of distributed scientific collaborations, there are many challenges to providing reliable inter-domain network infrastructure. Network operators use a combination of active monitoring and trouble tickets to detect problems, but these are often ineffective at identifying issues that impact wide-area network users. Additionally, these approaches do not scale to wide-area inter-domain networks due to the unavailability of data from all the domains along typical network paths. The Pythia Network Diagnostic InfrasTructure (PuNDIT) project aims to create a scalable infrastructure for automating the detection and localization of problems across these networks. The project goal is to gather and analyze metrics from existing perfSONAR monitoring infrastructures to identify the signatures of possible problems, locate affected network links, and report them to the user in an intuitive fashion. Simply put, PuNDIT seeks to convert complex network metrics into easily understood diagnoses in an automated manner. We present our progress in developing, testing, and deploying the PuNDIT system. We report on the project progress to date, describe the current implementation architecture, and demonstrate some of the various user interfaces it will support. We close by discussing the remaining challenges and next steps, and where we see the project going in the future.
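The detection step can be sketched as a robust baseline comparison over perfSONAR-style delay samples; the threshold and data shapes below are illustrative, not PuNDIT's actual rules.

```python
# A hedged sketch of signature detection: flag a path when its one-way delay
# rises well above a historical baseline (median + k robust deviations).
import statistics

def flag_delay_problem(history_ms, recent_ms, k=4.0):
    """history_ms: baseline delay samples; recent_ms: latest samples."""
    baseline = statistics.median(history_ms)
    spread = statistics.median([abs(x - baseline) for x in history_ms]) or 0.1
    # A sustained rise of k robust deviations suggests queueing or a bad link.
    return statistics.median(recent_ms) > baseline + k * spread

history = [42.0, 41.8, 42.3, 42.1, 41.9, 42.2, 42.0, 41.7]
print(flag_delay_problem(history, recent_ms=[55.4, 56.1, 54.9]))  # True
```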
NASA Astrophysics Data System (ADS)
Chowdhry, Bhawani Shankar; White, Neil M.; Jeswani, Jai Kumar; Dayo, Khalil; Rathi, Manorma
2009-07-01
Disasters affecting infrastructure, such as the 2001 earthquakes in India, 2005 in Pakistan, 2008 in China and the 2004 tsunami in Asia, provide a common need for intelligent buildings and smart civil structures. Now, imagine massive reductions in the time to get the infrastructure working again, real-time information on damage to buildings, massive reductions in the cost and time to certify that structures are undamaged and can still be operated, and reductions in the number of structures to be rebuilt (if they are known not to be damaged). Achieving these ideas would lead to huge, quantifiable, long-term savings to government and industry. Wireless sensor networks (WSNs) can be deployed in buildings to make any civil structure both smart and intelligent. WSNs have recently gained much attention in both the public and research communities because they are expected to bring a new paradigm to the interaction between humans, environment, and machines. This paper presents the deployment of WSN nodes in the Top Quality Centralized Instrumentation Centre (TQCIC). We created an ad hoc networking application to collect real-time data sensed from the nodes that were randomly distributed throughout the building. If the sensors are relocated, the application automatically reconfigures itself in the light of the new routing topology. WSNs are event-based systems that rely on the collective effort of several micro-sensor nodes, which are continuously observing a physical phenomenon. WSN applications require spatially dense sensor deployment in order to achieve satisfactory coverage. The degree of spatial correlation increases with decreasing inter-node separation. Energy consumption is reduced dramatically by having only those sensor nodes with unique readings transmit their data. We report on an algorithm based on a spatial correlation technique that assures high QoS (in terms of SNR) of the network as well as proper utilization of energy, by suppressing redundant data transmission. The visualization and analysis of WSN data are presented in a Windows-based user interface.
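The suppression idea, transmit only when a reading is not already predictable from a correlated neighbour, can be sketched as follows; the names and the simple tolerance test are illustrative stand-ins, not the paper's algorithm.

```python
# A hedged sketch of correlation-based suppression: a node stays silent when
# a nearby node's report is close enough to substitute for its own.
def should_transmit(my_reading, neighbour_reading, tolerance=0.5):
    """Suppress the packet when the sink can reuse a correlated neighbour's
    value for ours, saving transmit energy."""
    return abs(my_reading - neighbour_reading) > tolerance

readings = {"node_a": 21.3, "node_b": 21.4, "node_c": 24.0}
reference = readings["node_a"]  # e.g., an elected cluster representative
for node, value in readings.items():
    if node != "node_a" and should_transmit(value, reference):
        print(f"{node} transmits {value}")  # only node_c sends
```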
Public Key Infrastructure Increment 2 (PKI Inc 2)
2016-03-01
DoD - Department of Defense DoDAF - DoD Architecture Framework FD - Full Deployment FDD - Full Deployment Decision FY - Fiscal Year IA...experienced due to a delay in achieving the FDD. The Critical Change Report was provided to Congress on July 11, 2014. Firm, Fixed-Price Feasibility...to a delay in achieving the FDD. To support the Critical Change Report, the NSA Cost Estimating organization prepared a cost estimate that was
Base Information Transport Infrastructure Wired (BITI Wired)
2016-03-01
Executive DoD - Department of Defense DoDAF - DoD Architecture Framework FD - Full Deployment FDD - Full Deployment Decision FY - Fiscal Year IA...Estimate has been accomplished this period leading to an approved Air Force Service Cost Position in support of the program’s December 2014 FDD milestone...validated to ensure alignment with the business case. This certification is based on my review of the December 2014 Service Cost Position and FDD
Windows Terminal Servers Orchestration
NASA Astrophysics Data System (ADS)
Bukowiec, Sebastian; Gaspar, Ricardo; Smith, Tim
2017-10-01
Windows Terminal Servers provide application gateways for various parts of the CERN accelerator complex, used by hundreds of CERN users every day. The combination of new tools such as Puppet, HAProxy and the Microsoft System Center suite enables automation of provisioning workflows to provide a terminal server infrastructure that can scale up and down in an automated manner. The orchestration not only reduces the time and effort necessary to deploy new instances, but also facilitates operations such as patching, analysis and recreation of compromised nodes, as well as catering for workload peaks.
Services for domain specific developments in the Cloud
NASA Astrophysics Data System (ADS)
Schwichtenberg, Horst; Gemuend, André
2015-04-01
We will discuss and demonstrate the possibilities of new Cloud services in which the complete development cycle, from programming to testing, takes place in the cloud. This can also be combined with dedicated, domain-specific research services that hide the burden of accessing the available infrastructures. As an example, we will show a service that is intended to complement the services of the VERCE project's infrastructure, a service that utilizes Cloud resources to offer simplified execution of data pre- and post-processing scripts. It offers users access to the ObsPy seismological toolbox for processing data with the Python programming language, executed on virtual Cloud resources in a secured sandbox. The solution encompasses a frontend with a modern graphical user interface, a messaging infrastructure as well as Python worker nodes for background processing. All components are deployable in the Cloud and have been tested on different environments based on OpenStack and OpenNebula. Deployments on commercial, public Clouds will be tested in the future.
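A script of the kind the sandboxed workers would execute might look like the sketch below; it uses ObsPy's bundled example waveform so it runs without external data, though the processing chain itself is only illustrative.

```python
# A minimal ObsPy processing sketch; read() with no argument loads the
# example Stream shipped with ObsPy, so no external data is needed.
from obspy import read

st = read()                                       # ObsPy's bundled example waveform
st.detrend("linear")                              # remove a linear trend
st.filter("bandpass", freqmin=1.0, freqmax=10.0)  # band-limit the traces
print(st)
```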
Highway Incident Detection Timeline
DOT National Transportation Integrated Search
2017-10-16
The ITS JPO is the U.S. Department of Transportation's primary advocate and national leader for ITS research, development, and future deployment of connected vehicle technologies, focusing on intelligent vehicles, intelligent infrastructure, and th...
NASA Astrophysics Data System (ADS)
Bulega, T.; Kyeyune, A.; Onek, P.; Sseguya, R.; Mbabazi, D.; Katwiremu, E.
2011-10-01
Several publications have identified technical challenges facing Uganda's National Transmission Backbone Infrastructure project. This research addresses the technical limitations of the project, evaluates its goals, and compares the results against the technical capability of the backbone. The findings of the study indicate a bandwidth deficit, which will be addressed by using dense wavelength division multiplexing repeaters and by leasing bandwidth from private companies. Microwave links for redundancy, a Network Operation Center for operation and maintenance, and deployment of Worldwide Interoperability for Microwave Access (WiMAX) as a last-mile solution are also suggested.
Continuous Codes and Standards Improvement (CCSI)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivkin, Carl H; Burgess, Robert M; Buttner, William J
2015-10-21
As of 2014, the majority of the codes and standards required to initially deploy hydrogen technologies infrastructure in the United States have been promulgated. These codes and standards will be field tested through their application to actual hydrogen technologies projects. Continuous codes and standards improvement (CCSI) is a process of identifying code issues that arise during project deployment and then developing code solutions to these issues. These solutions would typically be proposed amendments to codes and standards. The process is continuous because, as technology and the state of safety knowledge develop, there will be a need to monitor the application of codes and standards and improve them based on information gathered during their application. This paper will discuss code issues that have surfaced through hydrogen technologies infrastructure project deployment and potential code changes that would address these issues. The issues that this paper will address include (1) setback distances for bulk hydrogen storage, (2) code-mandated hazard analyses, (3) sensor placement and communication, (4) the use of approved equipment, and (5) system monitoring and maintenance requirements.
NASA Astrophysics Data System (ADS)
Darner, R.; Shuster, W.
2016-12-01
Expansion of the urban environment alters the landscape and creates challenges for how cities deal with energy and water. Large volumes of stormwater in areas with combined sanitary and stormwater sewer systems present one such challenge. Managing the water as near to the source as possible creates an environment that allows more infiltration and evapotranspiration. Stormwater control measures (SCMs) associated with this type of development, often called green infrastructure, include rain gardens, pervious or porous pavements, bioswales, green or blue roofs, and others. In this presentation, we examine the hydrology of green infrastructure in urban sewersheds in Cleveland and Columbus, OH. We present the need for data throughout the water cycle and challenges to collecting field data at a small scale (a single rain garden instrumented to measure inflows, outflow, weather, soil moisture, and groundwater levels) and at a macro scale (a project including low-cost rain gardens, highly engineered rain gardens, groundwater wells, weather stations, soil moisture, and combined sewer flow monitoring). Results will include quantifying the effectiveness of SCMs in intercepting stormwater for different precipitation event sizes. Small-scale deployment analysis will demonstrate the role of active adaptive management in the ongoing optimization over multiple years of data collection.
On-track testing of a power harvesting device for railroad track health monitoring
NASA Astrophysics Data System (ADS)
Hansen, Sean E.; Pourghodrat, Abolfazl; Nelson, Carl A.; Fateh, Mahmood
2010-03-01
A considerable proportion of railroad infrastructure exists in comparatively remote regions. Because of the cost of extending electrical infrastructure into these areas, road crossings there often lack warning light systems or crossing gates and are commonly marked only with reflective signage. For railroad track health monitoring purposes, distributed sensor networks can be applicable in remote areas, but the same lack of electrical infrastructure is a hindrance. This motivated the development of an energy harvesting solution for remote railroad deployment. This paper describes on-track experimental testing of a mechanical device for harvesting mechanical power from passing railcar traffic, in view of supplying electrical power to warning light systems at crossings and to remote networks of sensors. The device is mounted to and spans two rail ties and transforms the vertical rail displacement into electrical energy through mechanical amplification and rectification into a PMDC generator. A prototype was tested under loaded and unloaded railcar traffic at low speeds. Stress analysis and speed scaling analysis are presented, results of the on-track tests are compared and contrasted with previous laboratory testing, discrepancies between the two are explained, and conclusions are drawn regarding the suitability of the device for illuminating high-efficiency LED lights at railroad crossings and powering track-health sensor networks.
Problem of data quality and the limitations of the infrastructure approach
NASA Astrophysics Data System (ADS)
Behlen, Fred M.; Sayre, Richard E.; Rackus, Edward; Ye, Dingzhong
1998-07-01
The 'Infrastructure Approach' is a PACS implementation methodology wherein the archive, network and information systems interfaces are acquired first, and workstations are installed later. The approach allows building a history of archived image data, so that most prior examinations are available in digital form when workstations are deployed. A limitation of the Infrastructure Approach is that the deferred use of digital image data defeats many data quality management functions that are provided automatically by human mechanisms when data is immediately used for the completion of clinical tasks. If the digital data is used solely for archiving while reports are interpreted from film, the radiologist serves only as a check against lost films, and another person must be designated as responsible for the quality of the digital data. Data from the Radiology Information System and the PACS were analyzed to assess the nature and frequency of system and data quality errors. The error level was found to be acceptable if supported by auditing and error resolution procedures requiring additional staff time, and in any case was better than the loss rate of a hardcopy film archive. It is concluded that the problem of data quality compromises but does not negate the value of the Infrastructure Approach. The Infrastructure Approach is best employed only to a limited extent; any phased PACS implementation should have a substantial complement of workstations dedicated to softcopy interpretation for at least some applications, with full deployment following not long thereafter.
Testing as a Service with HammerCloud
NASA Astrophysics Data System (ADS)
Medrano Llamas, Ramón; Barrand, Quentin; Elmsheuser, Johannes; Legger, Federica; Sciacca, Gianfranco; Sciabà, Andrea; van der Ster, Daniel
2014-06-01
HammerCloud was created to meet the grid community's need to test resources and automate operations from a user perspective. Recent developments in IT point to a shift toward software-defined data centres, in which every layer of the infrastructure can be offered as a service. Testing and monitoring are an integral part of the development, validation and operations of big systems, like the grid. This area is not escaping the paradigm shift, and Testing as a Service (TaaS) offerings, which allow any infrastructure service, such as the Infrastructure as a Service (IaaS) platforms being deployed at many grid sites, to be tested from both the functional and stress perspectives, are starting to seem natural. This work will review the recent developments in HammerCloud and its evolution toward a TaaS conception, in particular its deployment on the Agile Infrastructure platform at CERN and the testing of many IaaS providers across Europe in the context of experiment requirements. The first section will review the architectural changes that a service running in the cloud needs, such as an orchestration service or new storage requirements, in order to provide functional and stress testing. The second section will review the first tests of infrastructure providers in light of the challenges discovered from the architectural point of view. Finally, the third section will evaluate future requirements of scalability and features to increase testing productivity.
NASA Technical Reports Server (NTRS)
Jenett, Benjamin; Cellucci, Daniel; Cheung, Kenneth
2015-01-01
Automatic deployment of structures has been a focus of much academic and industrial work on infrastructure applications and robotics in general. This paper presents a robotic truss assembler designed for space applications - the Space Robot Universal Truss System (SpRoUTS) - that reversibly assembles a truss from a feedstock of hinged and flat-packed components, by folding the sides of each component up and locking onto the assembled structure. We describe the design and implementation of the robot and show that the assembled truss compares favorably with prior truss deployment systems.
Unidata cyberinfrastructure in the cloud: A progress report
NASA Astrophysics Data System (ADS)
Ramamurthy, Mohan
2016-04-01
Data services, software, and committed support are critical components of geosciences cyber-infrastructure that can help scientists address problems of unprecedented complexity, scale, and scope. Unidata is currently working on innovative ideas, new paradigms, and novel techniques to complement and extend its offerings. Our goal is to empower users so that they can tackle major, heretofore difficult problems. Unidata recognizes that its products and services must evolve to support new approaches to research and education. After years of hype and ambiguity, cloud computing is maturing in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Cloud services aimed at providing any resource, at any time, from any place, using any device are increasingly being embraced by all types of organizations. Given this trend and the enormous potential of cloud-based services, Unidata is moving to augment its products, services, data delivery mechanisms and applications to align with the cloud-computing paradigm. To realize the above vision, Unidata is working toward: * Providing access to many types of data from a cloud (e.g., TDS, RAMADDA and EDEX); * Deploying data-proximate tools to easily process, analyze and visualize those data in a cloud environment for consumption by anyone, on any device, from anywhere, at any time; * Developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in their own private or public cloud settings. Specifically, Unidata has adopted Docker to "containerize" its applications, making them easy to deploy. Docker helps to create "disposable" installs and eliminates many configuration challenges. Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools; * Fostering partnerships with NOAA and public cloud vendors (e.g., Amazon) to harness their capabilities and resources for the benefit of the academic community.
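As a concrete illustration of the containerized deployment model described above, the sketch below launches a THREDDS Data Server container with the Docker SDK for Python (pip install docker). The image name, tag, and port mapping are assumptions for illustration; the current image names should be checked on Unidata's Docker Hub pages.

    import docker

    client = docker.from_env()

    container = client.containers.run(
        "unidata/thredds-docker:latest",  # assumed image name and tag
        detach=True,
        name="thredds",
        ports={"8080/tcp": 8080},         # expose Tomcat's port on the host
        restart_policy={"Name": "always"},
    )
    print(f"Started {container.name} ({container.short_id})")
    # The TDS catalog should then be reachable at http://localhost:8080/thredds/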
Unidata Cyberinfrastructure in the Cloud
NASA Astrophysics Data System (ADS)
Ramamurthy, M. K.; Young, J. W.
2016-12-01
Data services, software, and user support are critical components of geosciences cyber-infrastructure that help researchers advance science. With the maturity of and significant advances in cloud computing, it has recently emerged as an alternative paradigm for developing and delivering a broad array of services over the Internet. Cloud computing is now mature enough in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Given the enormous potential of cloud-based services, Unidata has been moving to augment its software, services, and data delivery mechanisms to align with the cloud-computing paradigm. To realize the above vision, Unidata has worked toward: * Providing access to many types of data from a cloud (e.g., via the THREDDS Data Server, RAMADDA and EDEX servers); * Deploying data-proximate tools to easily process, analyze, and visualize those data in a cloud environment for consumption by anyone, on any device, from anywhere, at any time; * Developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in their own private or public cloud settings. Specifically, Unidata has adopted Docker to "containerize" its applications, making them easy to deploy. Docker helps to create "disposable" installs and eliminates many configuration challenges. Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools; * Leveraging Jupyter as a central platform and hub, with its powerful set of interlinking tools, to connect interactively data servers, Python scientific libraries, scripts, and workflows; * Exploring end-to-end modeling and prediction capabilities in the cloud; * Partnering with NOAA and public cloud vendors (e.g., Amazon and OCC) on the NOAA Big Data Project to harness their capabilities and resources for the benefit of the academic community.
Adapting New Space System Designs into Existing Ground Infrastructure
NASA Technical Reports Server (NTRS)
Delgado, Hector N.; McCleskey, Carey M.
2008-01-01
As routine space operations extend beyond earth orbit, the ability of ground infrastructures to take on new launch vehicle systems and a more complex suite of spacecraft and payloads has become a new challenge. The U.S. Vision for Space Exploration and its Constellation Program provides opportunities for our space operations community to meet this challenge. Presently, as new flight and ground systems add to the overall ground-based and space-based capabilities for NASA and its international partners, specific choices are being made as to what to abandon, what to retain, as well as what to build new. The total ground and space-based infrastructure must support a long-term, sustainable operation after it is all constructed, deployed, and activated. This paper addresses key areas of engineering concern during conceptual design, development, and routine operations, with a particular focus on: (1) legacy system reusability, (2) system supportability attributes and operations characteristics, (3) ground systems design trades and criteria, and (4) technology application survey. Each key area explored weighs the merits of reusability of the infrastructure in terms of: engineering analysis methods and techniques; top-level facility, systems, and equipment design criteria; and some suggested methods for making the operational system attributes (the "-ilities") highly visible to the design teams and decision-makers throughout the design process.
The building of the EUDAT Cross-Disciplinary Data Infrastructure
NASA Astrophysics Data System (ADS)
Lecarpentier, Damien; Michelini, Alberto; Wittenburg, Peter
2013-04-01
The EUDAT project is a European data initiative that brings together a unique consortium of 25 partners - including research communities, national data and high performance computing (HPC) centers, technology providers, and funding agencies - from 13 countries. EUDAT aims to build a sustainable cross-disciplinary and cross-national Common Data Infrastructure (CDI) that provides a set of shared services for accessing and preserving research data. The design and deployment of these services is being coordinated by multi-disciplinary task forces comprising representatives from research communities and data centers. One of EUDAT's fundamental goals is the facilitation of cross-disciplinary data-intensive science. By providing opportunities for disciplines from across the spectrum to share data and cross-fertilize ideas, the CDI will encourage progress towards this vision of open and participatory data-intensive science. EUDAT will also facilitate this process through the creation of teams of experts from different disciplines, aiming to cooperatively develop services to meet the needs of several communities. Five research communities joined the EUDAT initiative at the start - CLARIN (Linguistics), ENES (Climate Modeling), EPOS (Earth Sciences), LifeWatch (Environmental Sciences - Biodiversity), VPH (Biological and Medical Sciences). They are acting as partners in the project, and have clear tasks and commitments. Since EUDAT started on 1 October 2011, we have been reviewing the approaches and requirements of these five communities regarding the deployment and use of a cross-disciplinary and persistent data e-Infrastructure. This analysis was conducted through interviews and frequent interactions with representatives of the communities. This talk will provide an updated status of the current CDI, with specific reference to the solid Earth science community of EPOS.
Work zone speed reduction utilizing dynamic speed signs
DOT National Transportation Integrated Search
2011-08-30
Vast quantities of transportation data are automatically recorded by intelligent transportation infrastructure, such as inductive loop detectors, video cameras, and side-fire radar devices. Such devices are typically deployed by traffic management c...
DOT National Transportation Integrated Search
2016-07-01
The purpose of this research was to provide a framework to guide the development and deployment of an integrated statewide program for Intelligent Transportation Systems (ITS). ITS is a critical component of the transportation infrastructure that...
Enabling affordable and efficiently deployed location based smart home systems.
Kelly, Damian; McLoone, Sean; Dishongh, Terry
2009-01-01
With the obvious eldercare capabilities of smart environments it is a question of "when", rather than "if", these technologies will be routinely integrated into the design of future houses. In the meantime, health monitoring applications must be integrated into already complete home environments. However, there is significant effort involved in installing the hardware necessary to monitor the movements of an elder throughout an environment. Our work seeks to address the high infrastructure requirements of traditional location-based smart home systems by developing an extremely low infrastructure localisation technique. A study of the most efficient method of obtaining calibration data for an environment is conducted and different mobile devices are compared for localisation accuracy and cost trade-off. It is believed that these developments will contribute towards more efficiently deployed location-based smart home systems.
Connectivity, interoperability and manageability challenges in internet of things
NASA Astrophysics Data System (ADS)
Haseeb, Shariq; Hashim, Aisha Hassan A.; Khalifa, Othman O.; Ismail, Ahmad Faris
2017-09-01
The vision of the Internet of Things (IoT) is about interconnectivity between sensors, actuators, people, and processes. IoT exploits connectivity between physical objects like fridges, cars, utilities, buildings, and cities to enhance people's lives through automation and data analytics. However, this sudden increase in connected heterogeneous IoT devices takes a huge toll on the existing Internet infrastructure and introduces new challenges for researchers to embark upon. This paper discusses in greater detail the effects of heterogeneity on connectivity, interoperability, and manageability. It also surveys some of the existing solutions adopted in the core network to solve the challenges of massive IoT deployment. The paper concludes that IoT architecture and network infrastructure need to be re-engineered from the ground up, so that IoT solutions can be safely and efficiently deployed.
WDM-PON Architecture for FTTx Networks
NASA Astrophysics Data System (ADS)
Iannone, E.; Franco, P.; Santoni, S.
Broadband services for residential users in European countries have until now largely relied on xDSL technologies, while FTTx technologies have been mainly exploited in Asia and North America. The increasing bandwidth demand and the growing penetration of new services are pushing the deployment of optical access networks, and major European operators are now announcing FTTx projects. While FTTH is recognized as the target solution to bring broadband services to residential users, the identification of an FTTx evolutionary path able to seamlessly migrate to FTTH is key to enabling a massive deployment, easing the huge investments needed. WDM-PON architecture is an interesting solution that is able to accommodate the strategic need of building a new fiber-based access infrastructure with the possibility of adapting investments to actual demands and evolving to FTTH without requiring further interventions on fiber infrastructures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trott, Christian Robert; Lopez, Graham; Shipman, Galen
This report documents the completion of milestone STPM12-2 Kokkos User Support Infrastructure. The goal of this milestone was to develop and deploy an initial Kokkos support infrastructure, which facilitates communication and growth of the user community, adds a central place for user documentation, and manages access to technical experts. Multiple possible support infrastructure venues were considered, and a solution was put into place by Q1 of FY18 consisting of (1) a Wiki programming guide, (2) GitHub issues and projects for development planning and bug tracking, and (3) a "Slack" channel for low-latency support communications with the Kokkos user community. Furthermore, the desirability of a cloud-based training infrastructure was recognized, and one was put in place to support training events.
NASA Astrophysics Data System (ADS)
Bloom, Kenneth; Cms Collaboration
2014-06-01
CMS is in the process of deploying an Xrootd-based infrastructure to facilitate a global data federation. The services of the federation are available to export data from half of the physical capacity, and the majority of sites are configured to read data over the federation as a back-up. CMS began with a relatively modest set of use-cases for recovery of failed local file opens, debugging, and visualization. CMS is finding that the data federation can also be used to support small-scale analysis and load balancing. Looking forward, we see potential in using the federation to provide more flexibility in the locations where workflows are executed, as the differences between local and wide-area access are diminished by optimization and improved networking. In this presentation we discuss the application development work and the facility deployment work, the use-cases currently in production, and the potential for the technology moving forward.
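The fallback use-case can be sketched with the XRootD Python bindings (pip install xrootd): try the local storage path first and, if the open fails, retry through a federation redirector. The hostnames and file path below are illustrative assumptions.

    from XRootD import client

    LOCAL = "root://local-se.example.org//store/data/file.root"     # assumed local SE
    GLOBAL = "root://cms-xrd-global.cern.ch//store/data/file.root"  # redirector; path invented

    def open_with_fallback(local_url: str, federated_url: str) -> client.File:
        f = client.File()
        status, _ = f.open(local_url)
        if not status.ok:  # local open failed: file lost or storage unreachable
            print(f"local open failed ({status.message}); trying the federation")
            f = client.File()
            status, _ = f.open(federated_url)
            if not status.ok:
                raise IOError(status.message)
        return f

    fh = open_with_fallback(LOCAL, GLOBAL)
    status, data = fh.read(0, 1024)  # read the first kilobyte
    print(f"read {len(data)} bytes")
    fh.close()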
Measuring Large-Scale Social Networks with High Resolution
Stopczynski, Arkadiusz; Sekara, Vedran; Sapiezynski, Piotr; Cuttone, Andrea; Madsen, Mette My; Larsen, Jakob Eg; Lehmann, Sune
2014-01-01
This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years—the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1 000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation and research agenda driving the study. Additionally, the paper details the data types measured and the technical infrastructure in terms of both backend and phone software, as well as an outline of the deployment procedures. We document the participant privacy procedures and their underlying principles. The paper concludes with early results from data analysis, illustrating the importance of a multi-channel, high-resolution approach to data collection. PMID:24770359
Sensitivity of natural gas deployment in the US power sector to future carbon policy expectations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mignone, Bryan K.; Showalter, Sharon; Wood, Frances
One option for reducing carbon emissions in the power sector is replacement of coal-fired generation with less carbon-intensive natural gas combined cycle (NGCC) generation. In the United States, where there is abundant, low-cost natural gas supply, increased NGCC deployment could be a cost-effective emissions abatement opportunity at relatively modest carbon prices. However, under scenarios in which carbon prices rise and deeper emissions reductions are achieved, other technologies may be more cost-effective than NGCC in the future. In this analysis, using a US energy system model with foresight (a version of the National Energy Modeling System or 'NEMS' model), we find that varying expectations about carbon prices after 2030 does not materially affect NGCC deployment prior to 2030, all else equal. An important implication of this result is that, under the set of natural gas and carbon price trajectories explored here, myopic behavior or other imperfect expectations about potential future carbon policy do not change the natural gas deployment path or lead to stranded natural gas generation infrastructure. We explain these results in terms of the underlying economic competition between available generation technologies and discuss the broader relevance to US climate change mitigation policy.
Infrastructure Systems for Advanced Computing in E-science applications
NASA Astrophysics Data System (ADS)
Terzo, Olivier
2013-04-01
In the e-science field there is a growing need for computing infrastructure that is more dynamic and customizable, with an "on demand" model of use that follows the exact request in terms of resources and storage capacity. Integrating grid and cloud infrastructure solutions allows services to adapt availability by scaling resources up and down. The main challenge for e-science domains is to implement infrastructure solutions for scientific computing that dynamically adapt to the demand for computing resources, with a strong emphasis on optimizing resource use to reduce investment costs. Instrumentation, data volumes, algorithms, and analysis all add complexity for applications that require high processing power and storage for a limited time, often exceeding the computational resources available to the majority of laboratories and research units. Very often it is necessary to adapt, or even rethink, tools and algorithms, and to consolidate existing applications through a phase of reverse engineering, in order to ready them for deployment on cloud infrastructure. For example, in areas such as rainfall monitoring, meteorological analysis, hydrometeorology, climatology, bioinformatics (next-generation sequencing), computational electromagnetics, and radio occultation, the complexity of the analysis raises several issues, such as processing time, scheduling of processing tasks, storage of results, and multi-user environments. For these reasons it is necessary to rethink how e-science applications are written so that they can exploit the potential of cloud computing services through the IaaS, PaaS, and SaaS layers. Another important focus is creating and using hybrid infrastructures, typically a federation between private and public clouds: when all resources owned by the organization are in use, a federated cloud infrastructure makes it easy to add resources from the public cloud to meet computational and storage needs, and to release them when processing is finished. In the hybrid model the scheduling approach is important for managing both cloud types; with such an infrastructure, resources are always available for additional IT capacity "on demand" for a limited time, without having to purchase additional servers.
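A minimal sketch of the hybrid scheduling idea described above: dispatch jobs to the private pool while capacity remains, burst to public cloud slots otherwise, and release public slots as soon as work completes. All class, pool, and job names are invented for illustration; a real scheduler would talk to actual IaaS APIs.

    class Pool:
        def __init__(self, name: str, capacity: int):
            self.name, self.capacity, self.in_use = name, capacity, 0

        def try_acquire(self) -> bool:
            if self.in_use < self.capacity:
                self.in_use += 1
                return True
            return False

        def release(self) -> None:
            self.in_use -= 1

    private = Pool("private-cloud", capacity=4)
    public = Pool("public-cloud", capacity=100)  # effectively elastic

    def dispatch(job: str) -> Pool:
        """Prefer owned resources; burst to the public cloud only when full."""
        if private.try_acquire():
            pool = private
        else:
            public.try_acquire()
            pool = public
        print(f"{job} -> {pool.name}")
        return pool

    placements = [(job, dispatch(job)) for job in (f"job-{i}" for i in range(6))]
    for job, pool in placements:
        pool.release()  # public slots are freed as soon as jobs finish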
Taal, Elisabeth (Liesbeth) M.; Vermetten, Eric; van Schaik, Digna (Anneke) J. F.; Leenstra, Tjalling
2014-01-01
Background Military deployment to combat zones exposes military personnel to a number of physical and mental challenges that may adversely affect mental health. Until now, few studies have been performed in Europe on mental health utilization after military deployment. Objective We compared the incidence of mental health consultations with the Military Mental Health Service (MMHS) of military personnel deployed to Afghanistan to that of non-deployed military personnel. Method We assessed utilization of the MMHS by the full cohort of the Netherlands Armed Forces enlisted between 2008 and 2010 through linkage of mental health and human resource information systems. Results The total population consisted of 50,508 military personnel (18,233 deployed, 32,275 non-deployed), who accounted for 1,906 new consultations with the MMHS. The follow-up was limited to the first 2 years following deployment. We observed higher mental health care utilization in deployed vs. non-deployed military personnel; hazard ratio (HR), adjusted for sex, military branch and time in service, 1.84 [95% CI 1.61–2.11] in the first and 1.28 [1.09–1.49] in the second year after deployment. An increased risk of adjustment disorders (HR 2.59 [2.02–3.32] and 1.74 [1.30–2.32]) and of anxiety disorders (2.22 [1.52–3.25] and 2.28 [1.50–3.45]) including posttraumatic stress disorder (5.15 [2.55–10.40] and 5.28 [2.42–11.50]), but not of mood disorders (1.33 [0.90–1.97] and 1.11 [0.68–1.82]), was observed in deployed personnel in the first and second year post-deployment, respectively. Military personnel deployed in a unit with a higher risk of confrontation with potentially traumatic events had a higher HR (2.13 [1.84–2.47] and 1.40 [1.18–1.67]). Conclusions Though the absolute risk was low, in the first and second year following deployment to Afghanistan there was an 80% and 30% higher risk, respectively, of mental health problems resulting in a consultation with the Dutch MMHS compared to military personnel never deployed to Afghanistan. These observations underscore the need for an adequate mental health infrastructure for those returning from deployment. PMID:25206952
NASA Astrophysics Data System (ADS)
Simonis, Ingo
2015-04-01
Transport infrastructure monitoring and analysis is one of the focus areas in the context of smart cities. With the growing number of people moving into densely populated urban metro areas, precise tracking of moving people and goods is the basis for profound decision-making and future planning. With the goal of defining optimal extensions and modifications to existing transport infrastructures, multi-modal transport has to be monitored and analysed. This process is performed on the basis of sensor networks that combine a variety of sensor models, types, and deployments within the area of interest. Multi-generation networks, consisting of a number of sensor types and versions, cause further challenges for the integration and processing of sensor observations. These challenges are not getting any smaller with the development of the Internet of Things, which brings promising opportunities but is currently stuck in a kind of protocol war between big industry players from both the hardware and network infrastructure domains. In this paper, we highlight how the OGC suite of standards, with the Sensor Web standards developed by the Sensor Web Enablement Initiative together with the latest developments by the Sensor Web for Internet of Things community, can be applied to the monitoring and improvement of transport infrastructures. Sensor Web standards have been applied in the past to purely technical domains, but need to be broadened now in order to meet new challenges. Only cross-domain approaches will allow the development of satisfactory transport infrastructure solutions that take into account requirements coming from a variety of sectors such as tourism, administration, the transport industry, emergency services, and private citizens. The goal is the development of interoperable components that can be easily integrated within data infrastructures and follow well-defined information models to allow robust processing.
Rahman, Mahabubur; Watabe, Hiroshi
2018-05-01
Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, the requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and those related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges related to molecular imaging, we developed a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, normalizing image file formats, and storing and viewing different types of documents and multimedia files makes MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach for MIRA facilitates on-the-fly access to all its features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. MIRA offers an infrastructure that can be used as a cross-boundary collaborative MI research platform to accelerate advances in cancer diagnosis and therapeutics. Copyright © 2018 Elsevier Ltd. All rights reserved.
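The archiving core of such a system can be sketched in a few lines: a Python routine that registers an image file and its metadata in a MySQL database (pip install mysql-connector-python). The table, columns, and credentials below are invented for illustration and are not MIRA's actual schema.

    import hashlib
    import pathlib
    import mysql.connector

    def register_image(path: str, modality: str, subject_id: str) -> None:
        data = pathlib.Path(path).read_bytes()
        checksum = hashlib.sha256(data).hexdigest()  # integrity check for later audits

        conn = mysql.connector.connect(
            host="localhost", user="mira", password="secret", database="mira",
        )
        try:
            cur = conn.cursor()
            cur.execute(
                "INSERT INTO images (subject_id, modality, file_path, sha256) "
                "VALUES (%s, %s, %s, %s)",
                (subject_id, modality, path, checksum),
            )
            conn.commit()
        finally:
            conn.close()

    register_image("/archive/scan0001.dcm", "PET", "SUBJ-042")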
Deploy Nalu/Kokkos algorithmic infrastructure with performance benchmarking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Domino, Stefan P.; Ananthan, Shreyas; Knaus, Robert C.
The former Nalu interior heterogeneous algorithm design, which was originally designed to manage matrix assembly operations over all elemental topology types, has been modified to operate over homogeneous collections of mesh entities. This newly templated kernel design allows for removal of workset variable resize operations that were formerly required at each loop over a Sierra ToolKit (STK) bucket (nominally, 512 entities in size). Extensive usage of the Standard Template Library (STL) std::vector has been removed in favor of intrinsic Kokkos memory views. In this milestone effort, the transition to Kokkos as the underlying infrastructure to support performance and portability on many-core architectures has been deployed for key matrix algorithmic kernels. A unit-test driven design effort has developed a homogeneous entity algorithm that employs a team-based thread parallelism construct. The STK Single Instruction Multiple Data (SIMD) infrastructure is used to interleave data for improved vectorization. The collective algorithm design, which allows for concurrent threading and SIMD management, has been deployed for the core low-Mach element-based algorithm. Several tests to ascertain SIMD performance on Intel KNL and Haswell architectures have been carried out. The performance test matrix includes evaluation of both low- and higher-order methods. The higher-order low-Mach methodology builds on polynomial promotion of the core low-order control volume finite element method (CVFEM). Performance testing of the Kokkos-view/SIMD design indicates low-order matrix assembly kernel speed-up ranging between two and four times depending on mesh loading and node count. Better speedups are observed for higher-order meshes (currently only P=2 has been tested) especially on KNL. The increased workload per element on higher-order meshes benefits from the wide SIMD width on KNL machines. Combining multiple threads with SIMD on KNL achieves a 4.6x speedup over the baseline, with assembly timings faster than that observed on Haswell architecture. The computational workload of higher-order meshes, therefore, seems ideally suited for the many-core architecture and justifies further exploration of higher-order on NGP platforms. A Trilinos/Tpetra-based multi-threaded GMRES preconditioned by symmetric Gauss Seidel (SGS) represents the core solver infrastructure for the low-Mach advection/diffusion implicit solves. The threaded solver stack has been tested on small problems on NREL's Peregrine system using the newly developed and deployed Kokkos-view/SIMD kernels. Efforts are underway to deploy the Tpetra-based solver stack on the NERSC Cori system to benchmark its performance at scale on KNL machines.
Strategic charging infrastructure deployment for electric vehicles.
DOT National Transportation Integrated Search
2016-05-01
Electric vehicles (EV) are promoted as a foreseeable future vehicle technology to reduce dependence on fossil fuels and greenhouse gas emissions associated with conventional vehicles. This paper proposes a data-driven approach to improving the elec...
Cross border ITS systems with traffic management centers : project summary.
DOT National Transportation Integrated Search
2016-07-31
Traffic management centers (TMCs) in Texas play a vital role in managing traffic operations in many major metropolitan areas. TMCs have deployed extensive detection, monitoring, and communication infrastructure to allow Texas Department of ...
Deployment strategies of managed lanes on arterials : [summary].
DOT National Transportation Integrated Search
2015-02-01
Florida's continuing growth has often been attracted to areas where good highway infrastructure already exists. Traffic loads have developed to the point where widening existing highways is not sufficient, or perhaps impossible, to accommodate ...
NASA Astrophysics Data System (ADS)
Tarroja, Brian
The convergence of increasing populations, decreasing primary resource availability, and uncertain climates have drawn attention to the challenge of shifting the operations of key resource sectors towards a sustainable paradigm. This is prevalent in California, which has set sustainability-oriented policies such as the Renewable Portfolio Standards and Zero-Emission Vehicle mandates. To meet these goals, many options have been identified to potentially carry out these shifts. The electricity sector is focusing on accommodating renewable power generation, the transportation sector on alternative fuel drivetrains and infrastructure, and the water supply sector on conservation, reuse, and unconventional supplies. Historical performance evaluations of these options, however, have not adequately taken into account the impacts on and constraints of co-dependent infrastructures that must accommodate them and their interactions with other simultaneously deployed options. These aspects are critical for optimally choosing options to meet sustainability goals, since the combined system of all resource sectors must satisfy them. Certain operations should not be made sustainable at the expense of rendering others as unsustainable, and certain resource sectors should not meet their individual goals in a way that hinders the ability of the entire system to do so. Therefore, this work develops and utilizes an integrated platform of the electricity, transportation, and water supply sectors to characterize the performance of emerging technology and management options while taking into account their impacts on co-dependent infrastructures and identify synergistic or detrimental interactions between the deployment of different options. This is carried out by first evaluating the performance of each option in the context of individual resource sectors to determine infrastructure impacts, then again in the context of paired resource sectors (electricity-transportation, electricity-water), and finally in the context of the combined tri-sector system. This allows a more robust basis for composing preferred option portfolios to meet sustainability goals and gives a direction for coordinating the paradigm shifts of different resource sectors. Overall, it is determined that taking into account infrastructure constraints and potential operational interactions can significantly change the evaluation of the preferred role that different technologies should fulfill in contributing towards satisfying sustainability goals in the holistic context.
Almeida, Jonas S.; Iriabho, Egiebade E.; Gorrepati, Vijaya L.; Wilkinson, Sean R.; Grüneberg, Alexander; Robbins, David E.; Hackney, James R.
2012-01-01
Background: Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. Materials and Methods: ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Results: Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. Conclusions: The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local “download and installation”. PMID:22934238
NASA Technical Reports Server (NTRS)
Bourke, R. D.; Burke, J. D.
1990-01-01
In the course of the exploration and settlement of the moon, robotic missions will precede and accompany humans. These robotic missions are defined respectively as precursors and adjuncts. Their contribution is twofold: to generate information about the lunar environment (and system performance in that environment), and to emplace elements of infrastructure for subsequent use. This paper describes information that may be gathered by robotic missions and infrastructure elements that may be deployed by them during an early lunar program phase.
ORAC-DR: A generic data reduction pipeline infrastructure
NASA Astrophysics Data System (ADS)
Jenness, Tim; Economou, Frossie
2015-03-01
ORAC-DR is a general purpose data reduction pipeline system designed to be instrument and observatory agnostic. The pipeline works with instruments as varied as infrared integral field units, imaging arrays and spectrographs, and sub-millimeter heterodyne arrays and continuum cameras. This paper describes the architecture of the pipeline system and the implementation of the core infrastructure. We finish by discussing the lessons learned since the initial deployment of the pipeline system in the late 1990s.
Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Andrade, P.; Babik, M.; Bhatt, K.; Chand, P.; Collados, D.; Duggal, V.; Fuente, P.; Hayashi, S.; Imamagic, E.; Joshi, P.; Kalmady, R.; Karnani, U.; Kumar, V.; Lapka, W.; Quick, R.; Tarragon, J.; Teige, S.; Triantafyllidis, C.
2012-12-01
The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO[1] managers, service managers, management), from different middleware providers (ARC[2], dCache[3], gLite[4], UNICORE[5] and VDT[6]), consortiums (WLCG[7], EMI[11], EGI[15], OSG[13]), and operational teams (GOC[16], OMB[8], OTAG[9], CSIRT[10]). The seamless harmonization of these distributed actors underpins the daily monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to its visualization on the MyWLCG[27] portal, where it is exposed to other clients. This monitoring workflow profits from the interoperability established between the SAM[19] and RSV[20] frameworks. We show how these two distributed structures are capable of uniting technologies and hiding the complexity around them, making them easy for the community to use. Finally, the different supported deployment strategies, tailored not only for monitoring the entire infrastructure but also for monitoring sites and virtual organizations, are presented and the associated operational benefits highlighted.
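Probes in this ecosystem commonly follow the Nagios plugin convention of encoding the check result in the process exit code (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN) and printing a one-line status. The sketch below shows that shape for a hypothetical service endpoint; the URL and thresholds are invented.

    #!/usr/bin/env python
    import sys
    import time
    import urllib.request

    OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3
    ENDPOINT = "https://grid-service.example.org/health"  # hypothetical endpoint

    def main() -> int:
        start = time.time()
        try:
            with urllib.request.urlopen(ENDPOINT, timeout=30) as resp:
                latency = time.time() - start
                if resp.status != 200:
                    print(f"CRITICAL: HTTP {resp.status}")
                    return CRITICAL
        except Exception as exc:
            print(f"UNKNOWN: probe failed to run ({exc})")
            return UNKNOWN
        if latency > 10:
            print(f"WARNING: slow response ({latency:.1f}s)")
            return WARNING
        print(f"OK: service healthy ({latency:.2f}s)")
        return OK

    if __name__ == "__main__":
        sys.exit(main())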
NASA Astrophysics Data System (ADS)
Lescinsky, D. T.; Wyborn, L. A.; Evans, B. J. K.; Allen, C.; Fraser, R.; Rankine, T.
2014-12-01
We present collaborative work on a generic, modular infrastructure for virtual laboratories (VLs, similar to science gateways) that combine online access to data, scientific code, and computing resources as services that support multiple data-intensive scientific computing needs across a wide range of science disciplines. We are leveraging access to 10+ PB of earth science data on Lustre filesystems at Australia's National Computational Infrastructure (NCI) Research Data Storage Infrastructure (RDSI) node, co-located with NCI's 1.2 PFlop Raijin supercomputer and a 3000 CPU core research cloud. The development, maintenance and sustainability of VLs are best accomplished through modularisation and standardisation of interfaces between components. Our approach has been to break up tightly-coupled, specialised application packages into modules, with identified best techniques and algorithms repackaged either as data services or scientific tools that are accessible across domains. The data services can be used to manipulate, visualise and transform multiple data types, whilst the scientific tools can be used in concert with multiple scientific codes. We are currently designing a scalable generic infrastructure that will handle scientific code as modularised services and thereby enable the rapid and easy deployment of new codes or versions of codes. The goal is to build open source libraries/collections of scientific tools, scripts and modelling codes that can be combined in specially designed deployments. Additional services in development include: provenance, publication of results, monitoring, workflow tools, etc. The generic VL infrastructure will be hosted at NCI, but can access alternative computing infrastructures (i.e., public/private cloud, HPC). The Virtual Geophysics Laboratory (VGL) was developed as a pilot project to demonstrate the underlying technology. This base is now being redesigned and generalised to develop a Virtual Hazards Impact and Risk Laboratory (VHIRL); any enhancements and new capabilities will be incorporated into a generic VL infrastructure. At the same time, we are scoping seven new VLs and, in the process, identifying other common components to prioritise and focus development.
Cloud access to interoperable IVOA-compliant VOSpace storage
NASA Astrophysics Data System (ADS)
Bertocco, S.; Dowler, P.; Gaudet, S.; Major, B.; Pasian, F.; Taffoni, G.
2018-07-01
Handling, processing and archiving the huge amount of data produced by the new generation of experiments and instruments in Astronomy and Astrophysics are among the more exciting challenges to address in designing the future data management infrastructures and computing services. We investigated the feasibility of a data management and computation infrastructure, available world-wide, with the aim of merging the FAIR data management provided by IVOA standards with the efficiency and reliability of a cloud approach. Our work involved the Canadian Advanced Network for Astronomy Research (CANFAR) infrastructure and the European EGI federated cloud (EFC). We designed and deployed a pilot data management and computation infrastructure that provides IVOA-compliant VOSpace storage resources and wide access to interoperable federated clouds. In this paper, we detail the main user requirements covered, the technical choices and the implemented solutions and we describe the resulting Hybrid cloud Worldwide infrastructure, its benefits and limitations.
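Client-side access to such IVOA-compliant storage can be sketched with the vos Python package distributed by CADC/CANFAR (pip install vos); the container node and file names below are invented, and the exact client API should be checked against the package documentation.

    from vos import Client

    client = Client()  # relies on your CADC/CANFAR credentials being configured

    # List an (invented) container node in the user's VOSpace.
    for node in client.listdir("vos:jdoe/observations"):
        print(node)

    # Pull a file from VOSpace to local disk, then push a result back.
    client.copy("vos:jdoe/observations/image001.fits", "image001.fits")
    client.copy("reduced.fits", "vos:jdoe/results/reduced.fits")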
EV Charging Infrastructure Roadmap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karner, Donald; Garetson, Thomas; Francfort, Jim
2016-08-01
As highlighted in the U.S. Department of Energy’s EV Everywhere Grand Challenge, vehicle technology is advancing toward an objective to “… produce plug-in electric vehicles that are as affordable and convenient for the average American family as today’s gasoline-powered vehicles …” [1] by developing more efficient drivetrains, greater battery energy storage per dollar, and lighter-weight vehicle components and construction. With this technology advancement and improved vehicle performance, the objective for charging infrastructure is to promote vehicle adoption and maximize the number of electric miles driven. The EV Everywhere Charging Infrastructure Roadmap (hereafter referred to as Roadmap) looks forward and assumes that the technical challenges and vehicle performance improvements set forth in the EV Everywhere Grand Challenge will be met. The Roadmap identifies and prioritizes deployment of charging infrastructure in support of this charging infrastructure objective for the EV Everywhere Grand Challenge.
Volpe Center work on ITS helps forge new partnerships
DOT National Transportation Integrated Search
1997-01-01
Intelligent Transportation Systems (ITS) offer the promise of increased efficiency and safety in our transportation infrastructure. The Volpe Center is actively providing support for the development and deployment of ITS technologies as well as ident...
Transportation planning for electric vehicles and associated infrastructure.
DOT National Transportation Integrated Search
2017-05-01
Planning is the key to successful adoption and deployment of any new technology, and it is particularly important when that advancement involves a paradigm shift such as electrified transportation. At its core, electric transportation is largely ...
Six Information Technology Services Contracts for the Defense Intelligence Community
2000-04-24
This category covers Defense Intelligence Community organizations whose mission is to provide for the planning, development, deployment, operation ... management, and oversight of global information networks and infrastructure supporting intelligence producers. • Information Systems. This category
Metropolitan Model Deployment Initiative/ATIS Symposium : notes
DOT National Transportation Integrated Search
2000-02-01
The Symposium focuses on ATIS from the MMDIs' investment in traveler information and integration. Infrastructure should be in place from the MMDIs for an interest in a regional multi-modal traveler information with the major arterials, freeways ...
0-6672 : ITS strategic plan for Texas : project summary.
DOT National Transportation Integrated Search
2013-08-01
The purpose of this research was to provide a framework to guide the development and deployment of an integrated statewide program for intelligent transportation systems (ITS). ITS is a critical component of the transportation infrastructure...
Accounting for Induced Travel in Evaluation of Urban Highway Expansion
DOT National Transportation Integrated Search
2013-02-01
The US DOT sponsored Dynamic Mobility Applications (DMA) program seeks to identify, develop, and deploy applications that leverage the full potential of connected vehicles, travelers and infrastructure to enhance current operational practices and tra...
NASA Astrophysics Data System (ADS)
Delle Fratte, C.; Kennedy, J. A.; Kluth, S.; Mazzaferro, L.
2015-12-01
In a grid computing infrastructure, tasks such as continuous upgrades, service installations, and software deployments are part of an admin's daily work. In such an environment, tools to help with the management, provisioning, and monitoring of the deployed systems and services have become crucial. As experiments such as the LHC increase in scale, the computing infrastructure also becomes larger and more complex. Moreover, today's admins increasingly work within teams that share responsibilities and tasks. Such a scaled-up situation requires tools that not only reduce the workload on administrators but also enable them to work seamlessly in teams. This paper presents our experience of managing the Max Planck Institute Tier2 using Puppet and Gitolite in a cooperative way that supports the system administrators in their daily work. In addition to describing the Puppet-Gitolite system, best practices and customizations are also shown.
Design and deployment of hybrid-telemedicine applications
NASA Astrophysics Data System (ADS)
Ikhu-Omoregbe, N. A.; Atayero, A. A.; Ayo, C. K.; Olugbara, O. O.
2005-01-01
With advances in, and the availability of, information and communication technology infrastructures in some nations and institutions, patients are now able to receive healthcare services from doctors and healthcare centers even when they are physically separated. The availability and transfer of patient data, which often include medical images for specialist opinion, is invaluable both to the patient and the medical practitioner in a telemedicine session. Two existing approaches to telemedicine are real-time and store-and-forward. The real-time approach requires the availability or development of video-conferencing infrastructures, which are expensive, especially for most developing nations of the world, while store-and-forward allows data transmission between any hospital with a computer and a landline telephone link, which is less expensive but introduces delays. We therefore propose a hybrid design of applications using a hypermedia database capable of harnessing the features of real-time and store-and-forward operation, deployed over a wireless Virtual Private Network for the participating centers and healthcare providers.
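The hybrid idea reduces to a transmission layer that sends in real time when a live link is available and otherwise queues studies for later forwarding. The sketch below is an invented illustration of that pattern, not the authors' implementation; the link check and send routine are simulated stand-ins.

    import queue
    import random

    outbox = queue.Queue()  # holds studies awaiting store-and-forward delivery

    def link_is_live() -> bool:
        return random.random() > 0.5  # stand-in for probing the VPN link

    def send(study: dict) -> None:
        print(f"transmitted {study['id']}")

    def submit(study: dict) -> None:
        """Real-time when possible; otherwise store for later forwarding."""
        if link_is_live():
            send(study)
        else:
            outbox.put(study)
            print(f"queued {study['id']} for store-and-forward")

    def flush_outbox() -> None:
        """Called whenever connectivity returns."""
        while not outbox.empty() and link_is_live():
            send(outbox.get())

    for i in range(4):
        submit({"id": f"study-{i}", "modality": "XR"})
    flush_outbox()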
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blair, Nate; Zhou, Ella; Getman, Dan
2015-10-01
Mathematical and computational models are widely used for the analysis and design of both physical and financial systems. Modeling the electric grid is of particular importance to China for three reasons. First, power-sector assets are expensive and long-lived, and they are critical to any country's development. China's electric load, transmission, and other energy-related infrastructure are expected to continue to grow rapidly; therefore it is crucial to understand and help plan for the future in which those assets will operate (NDRC ERI 2015). Second, China has dramatically increased its deployment of renewable energy (RE), and is likely to continue further accelerating such deployment over the coming decades. Careful planning and assessment of the various aspects (technical, economic, social, and political) of integrating a large amount of renewables on the grid is required. Third, companies need the tools to develop a strategy for their own involvement in the power market China is now developing, and to enable a possible transition to an efficient and high-RE future.
Architecting the Communication and Navigation Networks for NASA's Space Exploration Systems
NASA Technical Reports Server (NTRS)
Bhassin, Kul B.; Putt, Chuck; Hayden, Jeffrey; Tseng, Shirley; Biswas, Abi; Kennedy, Brian; Jennings, Esther H.; Miller, Ron A.; Hudiburg, John; Miller, Dave;
2007-01-01
NASA is planning a series of short and long duration human and robotic missions to explore the Moon and then Mars. A key objective of the missions is to grow, through a series of launches, a system-of-systems communication, navigation, and timing infrastructure at minimum cost while providing a network-centric infrastructure that maximizes the exploration capabilities and science return. There is a strong need to use architecting processes in the mission pre-formulation stage to describe the systems, interfaces, and interoperability needed to implement multiple space communication systems that are deployed over time, yet support interoperability with each deployment phase and with 20 years of legacy systems. In this paper we present a process for defining the architecture of the communications, navigation, and networks needed to support future space explorers with the most adaptable and evolvable network-centric space exploration infrastructure. The process steps presented are: 1) Architecture decomposition, 2) Defining mission systems and their interfaces, 3) Developing the communication, navigation, networking architecture, and 4) Integrating systems, operational and technical views and viewpoints. We demonstrate the process through the architecture development of the communication network for upcoming NASA space exploration missions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Todd, Annika; Cappers, Peter; Goldman, Charles
2013-05-01
The U.S. Department of Energy’s (DOE’s) Smart Grid Investment Grant (SGIG) program is working with a subset of the 99 SGIG projects undertaking Consumer Behavior Studies (CBS), which examine the response of mass market consumers (i.e., residential and small commercial customers) to time-varying electricity prices (referred to herein as time-based rate programs) in conjunction with the deployment of advanced metering infrastructure (AMI) and associated technologies. The effort presents an opportunity to advance the electric industry’s understanding of consumer behavior.
NASA Astrophysics Data System (ADS)
Vergne, J.; Charade, O.; Bonaime, S.; Louis-Xavier, T.; Arnold, B.
2015-12-01
In the framework of the RESIF (réseau sismologique et géodésique français) infrastructure, more than one hundred new permanent broadband stations are to be deployed in metropolitan France within the forthcoming years. This requires a standardized installation method able to provide good noise-level performance at a reasonable cost, especially for the roughly 60 percent of stations that we expect to be installed in open environments. During the last two years we tested various types of sensor hosting infrastructure, with a strong focus on recently released posthole sensors that can be deployed at the bottom of shallow boreholes. Tests were performed at three different sites (two GEOSCOPE stations and a dedicated open-field prototype site) with geological conditions spanning from hard rock to very soft soil. At each site, posthole sensors were deployed at different depths, from the surface to a maximum of 20 m, and in different types of casing. Moreover, a reference sensor, installed in a tunnel, a cellar or a seismic vault, was operated continuously. We present a comprehensive comparison of the seismic noise levels measured in the different hosting infrastructures, for several frequency bands corresponding to various sources of noise. At high and low frequencies, the seismic noise level in some boreholes equals or outperforms that of the reference sensors. Between 0.005 and 0.05 Hz, we observe a strong decrease in seismic noise on the horizontal components in the deepest boreholes compared to near-surface installations. This improvement can reach up to 30 dB and is mostly due to a reduction in tilt noise induced by wind or local pressure variations. However, the absolute noise level that can be achieved clearly depends on the local geology. All these tests, together with estimated installation costs, point toward the deployment of sensors in shallow boreholes at the future French broadband stations located in open environments.
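The band-by-band comparison described above amounts to contrasting power spectral densities of co-located records. The sketch below illustrates the idea on synthetic data; the sampling rate, noise amplitudes, and the injected long-period "tilt" component are assumptions for illustration, not RESIF measurements.

```python
import numpy as np
from scipy.signal import welch

fs = 100.0                     # assumed sampling rate, Hz
t = np.arange(0, 3600.0, 1.0 / fs)
rng = np.random.default_rng(0)

# Synthetic stand-ins for two co-located records: the surface record
# carries extra long-period "tilt" noise that the posthole sensor avoids.
posthole = rng.normal(0.0, 1e-9, t.size)
surface = (posthole
           + 5e-9 * np.sin(2 * np.pi * 0.01 * t)   # slow tilt-like wander
           + rng.normal(0.0, 2e-9, t.size))

def band_level_db(x, fs, fmin, fmax):
    """Mean power spectral density within [fmin, fmax], in dB."""
    f, pxx = welch(x, fs=fs, nperseg=16384)
    sel = (f >= fmin) & (f <= fmax)
    return 10.0 * np.log10(pxx[sel].mean())

band = (0.005, 0.05)           # the tilt-noise band discussed in the abstract
gain = band_level_db(surface, fs, *band) - band_level_db(posthole, fs, *band)
print(f"posthole quieter by {gain:.1f} dB in {band[0]}-{band[1]} Hz")
```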
Integrated cloud infrastructure of the LIT JINR, PE "NULITS" and INP's Astana branch
NASA Astrophysics Data System (ADS)
Mazhitova, Yelena; Balashov, Nikita; Baranov, Aleksandr; Kutovskiy, Nikolay; Semenov, Roman
2018-04-01
The article describes the distributed cloud infrastructure deployed on the basis of the resources of the Laboratory of Information Technologies of the Joint Institute for Nuclear Research (LIT JINR) and some JINR Member State organizations. It explains the motivation for the work and the approach on which it is based, and lists its participants, among which are the private entity "Nazarbayev University Library and IT Services" (PE "NULITS") of the Autonomous Education Organization "Nazarbayev University" (AO NU) and the Institute of Nuclear Physics' (INP's) Astana branch.
Distributed telemedicine for the National Information Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forslund, D.W.; Lee, Seong H.; Reverbel, F.C.
1997-08-01
TeleMed is an advanced system that provides a distributed multimedia electronic medical record available over a wide area network. It uses object-based computing, distributed data repositories, advanced graphical user interfaces, and visualization tools, along with innovative concept extraction of image information for storing and accessing medical records, developed in a separate project from 1994-95. In 1996, we began the transition to Java, extended the infrastructure, and worked to begin deploying TeleMed-like technologies throughout the nation. Other applications are mentioned.
Advanced Optical Burst Switched Network Concepts
NASA Astrophysics Data System (ADS)
Nejabati, Reza; Aracil, Javier; Castoldi, Piero; de Leenheer, Marc; Simeonidou, Dimitra; Valcarenghi, Luca; Zervas, Georgios; Wu, Jian
In recent years, as the bandwidth and speed of networks have increased significantly, a new generation of network-based applications using the concepts of distributed computing and collaborative services is emerging (e.g., Grid computing applications). The use of the available fiber and DWDM infrastructure for these applications is a logical choice, offering huge amounts of cheap bandwidth and ensuring global reach of computing resources [230]. Currently, there is a great deal of interest in deploying optical circuit (wavelength) switched network infrastructure for distributed computing applications that require long-lived wavelength paths and address the specific needs of a small number of well-known users. Typical users are particle physicists who, due to their international collaborations and experiments, generate enormous amounts of data (petabytes per year). These users require a network infrastructure that can support processing and analysis of large datasets through globally distributed computing resources [230]. However, providing bandwidth services at wavelength granularity is not an efficient and scalable solution for applications and services that address a wider base of user communities with different traffic profiles and connectivity requirements. Examples of such applications are: smaller-scale scientific collaboration (e.g., bioinformatics, environmental research), distributed virtual laboratories (e.g., remote instrumentation), e-health, national security and defense, personalized learning environments and digital libraries, and evolving broadband user services (e.g., high-resolution home video editing, real-time rendering, high-definition interactive TV). As a specific example, in e-health services, and in particular mammography applications, the size and quantity of images produced by remote mammography impose stringent network requirements. Initial calculations have shown that for 100 patients to be screened remotely, the network would have to securely transport 1.2 GB of data every 30 s [230]. It is clear from the above that these types of applications need a new network infrastructure and transport technology that makes large amounts of bandwidth at subwavelength granularity, together with storage, computation, and visualization resources, potentially available to a wide user base for specified time durations. As these collaborative and network-based applications evolve to address a wide range and large number of users, it is infeasible to build dedicated networks for each application type or category. Consequently, there should be an adaptive network infrastructure able to support all application types, each with their own access, network, and resource usage patterns. This infrastructure should offer flexible and intelligent network elements and control mechanisms able to deploy new applications quickly and efficiently.
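To put the mammography figure in perspective, a quick calculation (a sketch; only the 1.2 GB per 30 s figure comes from the text) converts it into a sustained line rate:

```python
# Sustained throughput implied by the remote-mammography example:
# 1.2 GB of image data must be moved securely every 30 seconds.
data_gb = 1.2      # gigabytes per screening window (from the text)
window_s = 30      # seconds

gbit_per_s = data_gb * 8 / window_s
print(f"sustained rate ~ {gbit_per_s:.2f} Gbit/s ({gbit_per_s * 1000:.0f} Mbit/s)")
# ~0.32 Gbit/s, a fraction of a typical wavelength channel; hence the case
# made in the text for bandwidth services at subwavelength granularity.
```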
NASA Astrophysics Data System (ADS)
Cahill, Paul; Hazra, Budhaditya; Karoumi, Raid; Mathewson, Alan; Pakrashi, Vikram
2018-06-01
The application of energy harvesting technology for monitoring civil infrastructure is a burgeoning topic of interest. The ability of kinetic energy harvesters to scavenge ambient vibration energy can be useful for large civil infrastructure under operational conditions, particularly for bridge structures. The experimental integration of such harvesters with full-scale structures, and the subsequent use of the harvested energy directly for the purposes of structural health monitoring, shows promise. This paper presents the first experimental deployment of piezoelectric vibration energy harvesting devices for monitoring a full-scale bridge undergoing forced dynamic vibrations under operational conditions, using the time histories of the energy-harvesting signatures. The calibration of the harvesters is presented, along with details of the host bridge structure and the dynamic assessment procedures. The measured responses of the harvesters from the tests are presented, and the use of the harvesters for the purposes of structural health monitoring (SHM) is investigated using empirical mode decomposition analysis, following a bespoke data-cleaning approach. Finally, the use of sequential Karhunen-Loeve transforms to detect train passages during the dynamic assessment is presented. This study is expected to further develop interest in energy-harvesting-based monitoring of large infrastructure for both research and commercial purposes.
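As a rough sketch of the sequential Karhunen-Loeve idea mentioned above, one can track the dominant eigenvalue of a sliding-window covariance estimate of the harvester signal and flag windows where it jumps, as a train passage would cause. The window length, embedding lag, threshold, and synthetic signal below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 200                                   # assumed sampling rate, Hz
x = rng.normal(0.0, 1.0, 120 * fs)         # synthetic ambient harvester output
x[60 * fs:70 * fs] += 4 * np.sin(2 * np.pi * 3 * np.arange(10 * fs) / fs)  # "train"

win, lag = 2 * fs, 5                       # window length and embedding lag
energies = []
for start in range(0, x.size - win, win):
    seg = x[start:start + win]
    # Delay-embed the segment so the covariance captures oscillatory structure.
    emb = np.stack([seg[i:seg.size - lag + i] for i in range(lag)])
    energies.append(np.linalg.eigvalsh(np.cov(emb))[-1])  # dominant KL eigenvalue

energies = np.asarray(energies)
flagged = np.flatnonzero(energies > 5 * np.median(energies))
print("windows flagged as passages:", flagged)   # expect the 60-70 s windows
```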
Financing Options for Nontraditional Eligibilities in the CWSRF
This is a technical support reference, which looks at the varied types of financial assistance available to the CWSRF programs that can be deployed to fund eligibilities that do not fall within the mainstream of traditional grey infrastructure.
Applications of Dynamic Deployment of Services in Industrial Automation
NASA Astrophysics Data System (ADS)
Candido, Gonçalo; Barata, José; Jammes, François; Colombo, Armando W.
Service-oriented Architecture (SOA) is becoming a de facto paradigm for business and enterprise integration. SOA is expanding into several domains of application, envisioning a unified solution suitable across all layers of an enterprise infrastructure. Applying SOA based on open web standards can significantly enhance the interoperability and openness of field devices. By embedding a dynamic deployment service even into small field devices, machine builders can provide built-in services while still allowing the integrator to deploy, at run time, the services that best fit the current application. This approach lets developers keep their preferred development language and still deliver a SOA-compliant application. A dynamic deployment service is envisaged as a fundamental framework to support more complex applications, reducing deployment delays while increasing overall system agility. As a use-case scenario, a dynamic deployment service was implemented over the DPWS and WS-Management specifications, allowing an automation application to be designed and programmed using IEC 61131 languages and its components to be deployed as web services into devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melaina, Marc; Muratori, Matteo; McLaren, Joyce
Increased interest in the use of alternative transportation fuels, such as natural gas, hydrogen, and electricity, is being driven by heightened concern about the climate impacts of gasoline and diesel emissions and our dependence on finite oil resources. A key barrier to widespread adoption of low- and zero-emission passenger vehicles is the availability of refueling infrastructure. Recalling the 'chicken and egg' conundrum, limited adoption of alternative fuel vehicles increases the perceived risk of investments in refueling infrastructure, while lack of refueling infrastructure inhibits vehicle adoption. In this paper, we present the results of a study of the perceived risks and barriers to investment in alternative fuels infrastructure, based on interviews with industry experts and stakeholders. We cover barriers to infrastructure development for three alternative fuels for passenger vehicles: compressed natural gas, hydrogen, and electricity. As an early mover in zero-emission passenger vehicles, California provides the early market experience necessary to map the alternative fuel infrastructure business space. Results and insights identified in this study can be used to inform investment decisions, formulate incentive programs, and guide deployment plans for alternative fueling infrastructure in the U.S. and elsewhere.
First use of LHC Run 3 Conditions Database infrastructure for auxiliary data files in ATLAS
NASA Astrophysics Data System (ADS)
Aperio Bella, L.; Barberis, D.; Buttinger, W.; Formica, A.; Gallas, E. J.; Rinaldi, L.; Rybkin, G.; ATLAS Collaboration
2017-10-01
Processing of the large amount of data produced by the ATLAS experiment requires fast and reliable access to what we call Auxiliary Data Files (ADF). These files, produced by the Combined Performance, Trigger and Physics groups, contain conditions, calibrations, and other derived data used by the ATLAS software. In ATLAS this data has thus far, for historical reasons, been collected and accessed outside the ATLAS Conditions Database infrastructure and related software. For this reason, and because ADF are effectively read by the software as binary objects, this class of data appears ideal for testing the proposed Run 3 conditions data infrastructure now in development. This paper describes this implementation, as well as the lessons learned in exploring and refining the new infrastructure, with the potential for deployment during Run 2.
DOT National Transportation Integrated Search
2002-01-01
The essence of effective environmental justice practice is summarized in three fundamental principles: (1) Avoid, minimize, or mitigate disproportionately high and adverse human health and environmental effects, including social and economic effects,...
DOT National Transportation Integrated Search
1995-02-01
This paper addresses the relationship of truck size and weight (TS&W) policy, vehicle handling and stability, and safety. Handling and stability are the primary mechanisms relating vehicle characteristics and safety. Vehicle characteristics may also ...
Air Quality Programs and Provisions of the Intermodal Surface Transportation Efficiency Act of 1991
DOT National Transportation Integrated Search
2012-11-01
The US DOT sponsored Dynamic Mobility Applications (DMA) program seeks to identify, develop, and deploy applications that leverage the full potential of connected vehicles, travelers and infrastructure to enhance current operational practices and tra...
Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility
NASA Technical Reports Server (NTRS)
Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer
2009-01-01
Johnson Space Center's Mission Control Center is a space vehicle, space program agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-1990s. In an effort to streamline the support costs of the mission-critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes and technology into legacy operations. The IT industry has been trending toward a data-centric computer infrastructure for the past several years. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing, thick-client distributed computing model and network architecture with a data center model utilizing virtualization to provide the MCC Infrastructure as a Service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers, increasing utilization rates for compute platforms through virtualization, while expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware-agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft. This paper discusses the benefits and difficulties that a migration to cloud-based computing philosophies has uncovered when compared to the legacy Mission Control Center architecture. The team consists of system and software engineers with extensive experience with the MCC infrastructure and software currently used to support the International Space Station (ISS) and Space Shuttle program (SSP).
A JEE RESTful service to access Conditions Data in ATLAS
NASA Astrophysics Data System (ADS)
Formica, Andrea; Gallas, E. J.
2015-12-01
Usage of conditions data in ATLAS is extensive for offline reconstruction and analysis (e.g. alignment, calibration, data quality). The system is based on the LCG Conditions Database infrastructure, with read and write access via an ad hoc C++ API (COOL), a system which was developed before Run 1 data taking began. The infrastructure dictates that the data is organized into separate schemas (assigned to subsystems/groups storing distinct and independent sets of conditions), making it difficult to access information from several schemas at the same time. We have thus created PL/SQL functions containing queries to provide content extraction at the multi-schema level. The PL/SQL API has been exposed to external clients by means of a Java application providing DB access via REST services, deployed inside an application server (JBoss WildFly). The services allow navigation over multiple schemas via simple URLs. The data can be retrieved in either XML or JSON format, via simple clients (like curl or web browsers).
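As a sketch of how such a REST layer is typically consumed from a script (the host, path, query parameters, and JSON field names below are hypothetical, not the actual ATLAS service's interface):

```python
import json
import urllib.request

# Hypothetical endpoint: a multi-schema conditions query exposed as a plain URL.
url = ("https://conditions.example.cern.ch/api/iovs"
       "?schemas=CALO,TRIGGER&tag=BEST&since=1234&until=5678&format=json")

with urllib.request.urlopen(url) as resp:   # the service returns JSON or XML
    payload = json.load(resp)

for iov in payload.get("iovs", []):         # field names are assumptions
    print(iov.get("schema"), iov.get("since"), iov.get("payload_ref"))
```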
Data Mining as a Service (DMaaS)
NASA Astrophysics Data System (ADS)
Tejedor, E.; Piparo, D.; Mascetti, L.; Moscicki, J.; Lamanna, M.; Mato, P.
2016-10-01
Data Mining as a Service (DMaaS) is a software and computing infrastructure that allows interactive mining of scientific data in the cloud. It allows users to run advanced data analyses by leveraging the widely adopted Jupyter notebook interface. Furthermore, the system makes it easier to share results and scientific code, access scientific software, produce tutorials and demonstrations as well as preserve the analyses of scientists. This paper describes how a first pilot of the DMaaS service is being deployed at CERN, starting from the notebook interface that has been fully integrated with the ROOT analysis framework, in order to provide all the tools for scientists to run their analyses. Additionally, we characterise the service backend, which combines a set of IT services such as user authentication, virtual computing infrastructure, mass storage, file synchronisation, development portals or batch systems. The added value acquired by the combination of the aforementioned categories of services is discussed, focusing on the opportunities offered by the CERNBox synchronisation service and its massive storage backend, EOS.
Sandia National Laboratories: Hydrogen Risk Assessment Models toolkit now
Accelerated vehicle-to-infrastructure (V2I) safety applications : concept of operations document.
DOT National Transportation Integrated Search
2001-12-01
This document summarizes lessons learned through the evaluation of four sites selected in 1996 to serve as national models for deploying and operating intelligent transportation systems (ITS) in metropolitan areas. One of the goals of the Metropolita...
Sandia National Laboratories: 100 Resilient Cities: Sandia Challenge:
DOT National Transportation Integrated Search
1998-09-16
One of the new requirements of the Intermodal Surface Transportation Efficiency Act of 1991 is the requirement that State Departments of Transportation, Metropolitan Planning Organizations, and transit operators conduct a major investment study (MIS)...
Sandia National Laboratories: National Security Missions: Defense Systems
Rapid deployment of internet-connected environmental monitoring devices
USDA-ARS?s Scientific Manuscript database
Advances in electronic sensing and monitoring systems and the growth of the communications infrastructure have enabled users to gain immediate access to information and interaction with physical devices. To facilitate the uploading, viewing, and sharing of data via the internet, while avoiding the ...
47 CFR 10.210 - WEA participation election procedures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Section 10.210 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL WIRELESS EMERGENCY ALERTS... requirements implemented by the Commission; and (2) Commits to support the development and deployment of technology for the “C” interface, the CMS provider Gateway, the CMS provider infrastructure, and mobile...
47 CFR 10.210 - CMAS participation election procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Section 10.210 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL COMMERCIAL MOBILE ALERT SYSTEM... requirements implemented by the Commission; and (2) Commits to support the development and deployment of technology for the “C” interface, the CMS provider Gateway, the CMS provider infrastructure, and mobile...
47 CFR 10.210 - CMAS participation election procedures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Section 10.210 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL COMMERCIAL MOBILE ALERT SYSTEM... requirements implemented by the Commission; and (2) Commits to support the development and deployment of technology for the “C” interface, the CMS provider Gateway, the CMS provider infrastructure, and mobile...
47 CFR 10.210 - WEA participation election procedures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Section 10.210 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL WIRELESS EMERGENCY ALERTS... requirements implemented by the Commission; and (2) Commits to support the development and deployment of technology for the “C” interface, the CMS provider Gateway, the CMS provider infrastructure, and mobile...
Resilient Grid Operational Strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pasqualini, Donatella
Extreme weather-related disturbances, such as hurricanes, have historically been a leading cause of grid outages. Although physical asset hardening is perhaps the most common way to mitigate the impacts of severe weather, operational strategies may be deployed to limit the extent of societal and economic losses associated with weather-related physical damage. The purpose of this study is to examine bulk power-system operational strategies that can be deployed to mitigate the impact of severe weather disruptions caused by hurricanes, thereby increasing grid resilience to maintain continuity of critical infrastructure during extreme weather. To estimate the impacts of resilient grid operational strategies, Los Alamos National Laboratory (LANL) developed a framework for hurricane probabilistic risk analysis (PRA). The probabilistic nature of this framework allows us to estimate the probability distribution of likely impacts, as opposed to the worst-case impacts. The project scope does not include strategies that are not operations related, such as transmission system hardening (e.g., undergrounding, transmission tower reinforcement and substation flood protection) and solutions in the distribution network.
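A minimal sketch of the probabilistic idea: sample hazard severities, map each to an outage impact, and report the impact distribution rather than a single worst case. Every distribution and number below is invented for illustration and is not LANL's model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical hazard model: hurricane wind speeds (m/s), lognormal by assumption.
wind = rng.lognormal(mean=np.log(40.0), sigma=0.3, size=n)

# Hypothetical fragility curve: chance of losing a transmission corridor vs. wind.
p_fail = 1.0 / (1.0 + np.exp(-(wind - 50.0) / 5.0))
corridors_lost = rng.binomial(10, p_fail)        # out of 10 corridors, say

# Hypothetical consequence model: unserved energy grows with corridors lost.
unserved_mwh = 500.0 * corridors_lost**1.5

print("mean impact:    ", unserved_mwh.mean())
print("95th percentile:", np.percentile(unserved_mwh, 95))
print("worst case:     ", unserved_mwh.max())    # PRA reports the whole
                                                 # distribution, not just this
```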
Open Source Dataturbine (OSDT) Android Sensorpod in Environmental Observing Systems
NASA Astrophysics Data System (ADS)
Fountain, T. R.; Shin, P.; Tilak, S.; Trinh, T.; Smith, J.; Kram, S.
2014-12-01
The OSDT Android SensorPod is a custom-designed mobile computing platform for assembling wireless sensor networks for environmental monitoring applications. Funded by an award from the Gordon and Betty Moore Foundation, the OSDT SensorPod represents a significant technological advance in the application of mobile and cloud computing technologies to near-real-time applications in environmental science, natural resources management, and disaster response and recovery. It provides a modular architecture based on open standards and open-source software that allows system developers to align their projects with industry best practices and technology trends, while avoiding commercial vendor lock-in to expensive proprietary software and hardware systems. The integration of mobile and cloud-computing infrastructure represents a disruptive technology in the field of environmental science, since basic assumptions about technology requirements are now open to revision, e.g., the roles of special-purpose data loggers and dedicated site infrastructure. The OSDT Android SensorPod was designed with these considerations in mind, and the resulting system is flexible, efficient and robust. The system was developed and tested in three science applications: 1) a freshwater limnology deployment in Wisconsin, 2) a near-coastal marine science deployment at the UCSD Scripps pier, and 3) a terrestrial ecological deployment in the mountains of Taiwan. As part of a public education and outreach effort, a Facebook page with daily ocean pH measurements from the UCSD Scripps pier was developed. Wireless sensor networks and the virtualization of data and network services are the future of environmental science infrastructure. The OSDT Android SensorPod was designed and developed to harness these new technology developments for environmental monitoring applications.
Modular Seafloor and Water Column Systems for the Ocean Observatories Initiative Cabled Array
NASA Astrophysics Data System (ADS)
Delaney, J. R.; Manalang, D.; Harrington, M.; Tilley, J.; Dosher, J.; Cram, G.; Harkins, G.; McGuire, C.; Waite, P.; McRae, E.; McGinnis, T.; Kenney, M.; Siani, C.; Michel-Hart, N.; Denny, S.; Boget, E.; Kawka, O. E.; Daly, K. L.; Luther, D. S.; Kelley, D. S.; Milcic, M.
2016-02-01
Over the past decade, cabled ocean observatories have become an increasingly important way to collect continuous real-time data at remote subsea locations. This has led to the development of a class of subsea systems designed and built specifically to distribute power and bandwidth among sensing instrumentation on the seafloor and throughout the water column. Such systems are typically powered by shore-based infrastructure and involve networks of fiber-optic and electrical cabling that provide real-time data access and control of remotely deployed instrumentation. Several subsea node types were developed and/or adapted for cabled use in order to complete the installation of the largest North American scientific cabled observatory in October 2014. The Ocean Observatories Initiative (OOI) Cabled Array, funded by the US National Science Foundation, consists of a core infrastructure that includes 900 km of fiber-optic/electrical cables, seven primary nodes, 18 seafloor junction boxes, three mooring-mounted winched profiling systems, and three wire-crawling profiler systems. In aggregate, the installed infrastructure has 200 dedicated scientific instrument ports (of which 120 are currently assigned), and is capable of further expansion. The installed system has a 25-year design life for reliable, sustained monitoring; and all nodes, profilers and instrument packages are ROV-serviceable. Now in its second year of operation, the systems that comprise the Cabled Array are providing reliable, 24/7 real-time data collection from deployed instrumentation, and offer a modular and scalable class of subsea systems for ocean observing. This presentation will provide an overview of the observatory-class subsystems of the OOI Cabled Array, focusing on the junction boxes, moorings and profilers that power and communicate with deployed instrumentation.
Optical stabilization for time transfer infrastructure
NASA Astrophysics Data System (ADS)
Vojtech, Josef; Altmann, Michal; Skoda, Pavel; Horvath, Tomas; Slapak, Martin; Smotlacha, Vladimir; Havlis, Ondrej; Munster, Petr; Radil, Jan; Kundrat, Jan; Altmannova, Lada; Velc, Radek; Hula, Miloslav; Vohnout, Rudolf
2017-08-01
In this paper, we propose and present verification of all-optical methods for stabilization of the end-to-end delay of an optical fiber link. These methods are verified for deployment within an infrastructure for accurate time and stable frequency distribution, based on sharing fibers with a research and educational network carrying live data traffic. The methods range from path-length control, through temperature conditioning, to transmit-wavelength control. Attention is given to achieving continuous control over a relatively broad range of delays. We summarize design rules for delay stabilization based on the character and the total delay jitter.
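A toy illustration of closed-loop delay stabilization of the sort described (the controller gain, update rate, and drift model are assumptions, not the paper's design): measure the residual delay error each cycle and step an actuator, such as a variable delay line, against it.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 600                    # one control step per second, 10 minutes
gain = 0.5                 # integral-controller gain (assumed)

drift = np.cumsum(rng.normal(0.0, 0.2, n))   # slow thermal delay drift, ps
correction = 0.0
residual = np.empty(n)
for k in range(n):
    measured = drift[k] + correction         # end-to-end delay error we observe
    correction -= gain * measured            # step the delay line against it
    residual[k] = measured

print(f"raw drift p-p: {np.ptp(drift):.1f} ps, "
      f"stabilized p-p: {np.ptp(residual):.1f} ps")
```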
Free space optical communications: coming of age
NASA Astrophysics Data System (ADS)
Stotts, Larry B.; Stadler, Brian; Lee, Gary
2008-04-01
Information superiority, whether for the military or business, is the decisive advantage of the 21st century. While business enjoys the information advantage of robust, high-bandwidth fiber-optic connectivity that heavily leverages installed commercial infrastructure and service providers, mobile military forces need the wireless equivalent to leverage that advantage. In other words, an ability to deploy anywhere on the globe and maintain a robust, reliable communications and connectivity infrastructure, equivalent to that enjoyed by a CONUS commercial user, will provide US forces with information superiority. Assured high-data-rate connectivity to the tactical user is the biggest gap in developing and truly exploiting the potential of the information-superiority weapon. Though information superiority is much discussed and its potential is well understood, a robust communications network available to the lowest military echelons is not yet an integral part of the force structure, although high-data-rate RF communications relays (e.g., Tactical Common Data Link) and low-data-rate SATCOM (e.g., Ku Spread Spectrum) are deployed and used by the military. This may change with recent advances in laser communications technologies created by the fiber-optic communications revolution. This paper will provide a high-level overview of the various laser communications programs conducted over the last 30-plus years, and proposed efforts to get these systems finally deployed.
Tanichi, Masaaki; Tatsuki, Toshitaka; Saito, Taku; Wakizono, Tomoki; Shigemura, Jun
2012-01-01
We assessed the core factors necessary for the mental health of disaster workers according to the following experiences: 1) the Japan Self-Defense Force (JSDF) disaster relief missions associated with the Great East Japan Earthquake and the Haiti peacekeeping deployment associated with the Great Haiti Earthquake, 2) the composition of the peacekeeping mission units of various countries deployed to Haiti, and 3) JSDF assistance activities for the Japanese earthquake victims. We learned that basic life needs were the major premises for maintaining the mental health of disaster workers. Food, drinking supplies, and medical supplies were particularly crucial, yet overlooked in Japanese worker settings compared with the forces of other countries. Conversely, workers tend to feel guilty (moushi wake nai) toward the victims when their own basic life infrastructure is better than that of the victims. Japanese workers and disaster victims both tend to find comfort in styles based on their culture, in particular open-air baths and music performances. When planning workers' environments in disaster settings, provision of basic infrastructure should be prioritized, yet a sense of balance based on cultural background may be useful to enhance the workers' comfort and minimize their guilt.
NEON's Mobile Deployment Platform: A Resource for Community Research
NASA Astrophysics Data System (ADS)
Sanclements, M.
2017-12-01
Here we provide an update on construction of the five NEON Mobile Deployment Platforms (MDPs) as well as a description of the infrastructure and sensors available to researchers in the near future. Additionally, we include information (i.e. timelines and procedures) on requesting MDPs for PI led projects. The MDPs will provide the means to observe stochastic or spatially important events, gradients, or quantities that cannot be reliably observed using fixed location sampling (e.g. fires and floods). Due to the transient temporal and spatial nature of such events, the MDPs are designed to accommodate rapid deployment for time periods up to 1 year. Broadly, the MDPs are comprised of infrastructure and instrumentation capable of functioning individually or in conjunction with one another to support observations of ecological change, as well as education, training and outreach. More specifically, the MDPs include the capability to make tower based measures of ecosystem exchange, radiation, and precipitation in conjunction with baseline soils data such as CO2 flux, and soil temperature and moisture. An aquatics module is also available with the MDP to facilitate research integrating terrestrial and aquatic processes. Ultimately, the NEON MDPs provide a tool for linking PI led research to the continental scale data sets collected by NEON.
The iPlant Collaborative: Cyberinfrastructure for Enabling Data to Discovery for the Life Sciences.
Merchant, Nirav; Lyons, Eric; Goff, Stephen; Vaughn, Matthew; Ware, Doreen; Micklos, David; Antin, Parker
2016-01-01
The iPlant Collaborative provides life science research communities access to comprehensive, scalable, and cohesive computational infrastructure for data management; identity management; collaboration tools; and cloud, high-performance, high-throughput computing. iPlant provides training, learning material, and best practice resources to help all researchers make the best use of their data, expand their computational skill set, and effectively manage their data and computation when working as distributed teams. iPlant's platform permits researchers to easily deposit and share their data and deploy new computational tools and analysis workflows, allowing the broader community to easily use and reuse those data and computational analyses.
Return to contingency: developing a coherent strategy for future R2E/R3 land medical capabilities.
Ingram, Mike; Mahan, J
2015-03-01
Key to deploying forces in the future will be the provision of a rapidly deployable Deployed Hospital Capability. Developing this capability has been the focus of 34 Field Hospital and 2nd Medical Brigade over the last 18 months, and this paper is a personal account of this development work to date. Future contingent Deployed Hospital Capability must meet the requirements of Defence; that is, to be rapidly deployable while delivering a hospital standard of care. The excellence seen in clinical delivery on recent operations is resource-intensive in personnel, equipment, infrastructure and sustainment. The challenge in developing a coherent capability has been in balancing clinical capability and capacity against strategic load, in light of recent advances in battlefield medicine. This paper explores the issues encountered and the solutions found to date in reconstituting a Very High Readiness Deployed Hospital Capability.
Effects of Deployments on Spouses of Military Personnel
2008-08-01
The excerpted model maximizes household utility subject to a home production technology and income and time constraints:

\[
\max_{l_1,\, X,\, h_1,\, m_1} \; U(l_1, X) \tag{3.1}
\]

subject to

\[
l_1 + h_1 + m_1 = 1, \qquad
h_2 + m_2 + d = 1, \qquad
X = X_D + X_M, \qquad
X_D = F(h_1, h_2), \qquad
X_M = w m_1 + v d \,\ldots
\]

where \(w\) is the wife's real after-tax wage rate, \(F(\cdot)\) is a home production technology, \(v\) is basic pay, and \(\bar{v}\) is deployment pay.
DOT National Transportation Integrated Search
2013-04-01
Connected Vehicle to Vehicle (V2V) safety applications rely heavily on the BSM, which is one of the messages defined in the Society of Automotive Engineers (SAE) standard J2735, Dedicated Short Range Communications (DSRC) Message Set Dictionary, November 2009. The B...
Building a Database for Life Cycle Performance Assessment of Trenchless Technologies
Deployment of trenchless pipe rehabilitation methods has steadily increased over the past 40 years and has represented an increasing proportion of the annual expenditure on the nation's water and sewer infrastructure. Until recently, despite the massive public investments in these...
Environmental and natural resource implications of sustainable urban infrastructure systems
NASA Astrophysics Data System (ADS)
Bergesen, Joseph D.; Suh, Sangwon; Baynes, Timothy M.; Kaviti Musango, Josephine
2017-12-01
As cities grow, their environmental and natural-resource footprints also tend to grow to keep up with the increasing demand for essential urban services such as passenger transportation, commercial space, and thermal comfort. The urban infrastructure systems, or socio-technical systems, providing these services are the major conduits through which natural resources are consumed and environmental impacts are generated. This paper aims to gauge the potential reductions in environmental and resource footprints through urban transformation, including the deployment of resource-efficient socio-technical systems and strategic densification. Using a hybrid life cycle assessment approach combined with scenarios, we analyzed the greenhouse gas (GHG) emissions, water use, metal consumption and land use of selected socio-technical systems in 84 cities from the present to 2050. The socio-technical systems analyzed are: (1) bus rapid transit with electric buses, (2) green commercial buildings, and (3) district energy. We developed a baseline model for each city considering gross domestic product, population density, and climate conditions. Then we overlaid three scenarios on top of the baseline model: (1) decarbonization of electricity, (2) aggressive deployment of resource-efficient socio-technical systems, and (3) strategic urban densification, and quantified their potential for reducing the environmental and resource impacts of cities by 2050. The results show that, under the baseline scenario, the environmental and natural-resource footprints of all 84 cities combined would increase 58%-116% by 2050. The resource-efficient scenario along with strategic densification, however, has the potential to curb GHG emissions to 17% below the 2010 level in 2050. Such transformation can also limit the increase in all resource footprints to less than 23% relative to 2010. This analysis suggests that resource-efficient urban infrastructure and decarbonization of electricity, coupled with strategic densification, have the potential to mitigate the resource and environmental footprints of growing cities.
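For readers unfamiliar with the method, the core accounting step in such studies is environmentally extended input-output analysis: total sectoral output x satisfying final demand y is x = (I - A)^{-1} y, and impacts are B x. The two-sector sketch below uses invented numbers purely to show the mechanics, not values from this study.

```python
import numpy as np

# Minimal environmentally extended input-output LCA step. All numbers invented.
A = np.array([[0.10, 0.20],    # inter-industry requirements (technology matrix)
              [0.30, 0.05]])
y = np.array([1.0, 0.5])       # final demand for urban services (arbitrary units)
B = np.array([0.9, 0.4])       # kg CO2e per unit of sectoral output

x = np.linalg.solve(np.eye(2) - A, y)   # total output via the Leontief inverse
ghg = B @ x
print(f"total output by sector: {x}, GHG footprint: {ghg:.2f} kg CO2e")
```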
Semantic eScience for Ecosystem Understanding and Monitoring: The Jefferson Project Case Study
NASA Astrophysics Data System (ADS)
McGuinness, D. L.; Pinheiro da Silva, P.; Patton, E. W.; Chastain, K.
2014-12-01
Monitoring and understanding ecosystems such as lakes and their watersheds is becoming increasingly important. Accelerated eutrophication threatens our drinking water sources. Many believe that the use of nutrients (e.g., road salts, fertilizers, etc.) near these sources may have negative impacts on animal and plant populations and water quality, although it is unclear how to best balance broad community needs. The Jefferson Project is a joint effort between RPI, IBM and the Fund for Lake George aimed at creating an instrumented water ecosystem along with an appropriate cyberinfrastructure that can serve as a global model for ecosystem monitoring, exploration, understanding, and prediction. One goal is to help communities understand the potential impacts of actions such as road-salting strategies so that they can make appropriate, informed recommendations that serve broad community needs. Our semantic eScience team is creating a semantic infrastructure to support data integration and analysis, to help trained scientists as well as the general public better understand the lake today and explore potential future scenarios. We are leveraging our RPI Tetherless World Semantic Web methodology, which provides an agile process for describing use cases, identifying appropriate background ontologies and technologies, implementation, and evaluation. IBM is providing a state-of-the-art sensor network infrastructure along with a collection of tools to share, maintain, analyze and visualize the network data. In the context of this sensor infrastructure, we will discuss our semantic approach's contributions in three knowledge representation and reasoning areas: (a) human interventions in the deployment and maintenance of local sensor networks, including the scientific knowledge used to decide how and where sensors are deployed; (b) integration, interpretation and management of data coming from external sources used to complement the project's models; and (c) knowledge about simulation results, including parameters, interpretation of results, and comparison of results against external data. We will also demonstrate some example queries highlighting the benefits of our semantic approach and identify reusable components.
JINR cloud infrastructure evolution
NASA Astrophysics Data System (ADS)
Baranov, A. V.; Balashov, N. A.; Kutovskiy, N. A.; Semenov, R. N.
2016-09-01
To fulfil JINR commitments in different national and international projects related to the use of modern information technologies, such as cloud and grid computing, and to provide a modern tool for JINR users in their scientific research, a cloud infrastructure was deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. OpenNebula was chosen as the cloud platform. Initially it was set up in a simple configuration with a single front-end host and a few cloud nodes. Some custom development was done to tune the JINR cloud installation to local needs: a web form in the cloud web interface for resource requests, a menu item with cloud utilization statistics, user authentication via Kerberos, and a custom driver for OpenVZ containers. Because of high demand for the cloud service and over-utilization of its resources, it was re-designed to cover increasing users' needs in capacity, availability and reliability. Recently a new cloud instance has been deployed in a high-availability configuration with a distributed network file system and additional computing power.
Chervenak, Ann L; van Erp, Theo G M; Kesselman, Carl; D'Arcy, Mike; Sobell, Janet; Keator, David; Dahm, Lisa; Murry, Jim; Law, Meng; Hasso, Anton; Ames, Joseph; Macciardi, Fabio; Potkin, Steven G
2012-01-01
Progress in our understanding of brain disorders increasingly relies on the costly collection of large standardized brain magnetic resonance imaging (MRI) data sets. Moreover, the clinical interpretation of brain scans benefits from compare and contrast analyses of scans from patients with similar, and sometimes rare, demographic, diagnostic, and treatment status. A solution to both needs is to acquire standardized, research-ready clinical brain scans and to build the information technology infrastructure to share such scans, along with other pertinent information, across hospitals. This paper describes the design, deployment, and operation of a federated imaging system that captures and shares standardized, de-identified clinical brain images in a federation across multiple institutions. In addition to describing innovative aspects of the system architecture and our initial testing of the deployed infrastructure, we also describe the Standardized Imaging Protocol (SIP) developed for the project and our interactions with the Institutional Review Board (IRB) regarding handling patient data in the federated environment.
100 Years of Superconductivity: Perspective on Energy Applications
NASA Astrophysics Data System (ADS)
Grant, Paul
2011-11-01
One hundred years ago this past April, in 1911, traces of superconductivity were first detected near 4.2 K in mercury in the Leiden laboratory of Kamerlingh Onnes, followed seventy-five years later, in January 1986, by the discovery of "high temperature" superconductivity above 30 K in layered copper oxide perovskites by Bednorz and Mueller at the IBM Research Laboratory in Rueschlikon. Visions of application to the electric power infrastructure followed each event, and the decades following the 1950s witnessed numerous successful demonstrations in electricity generation, transmission and end use -- rotating machinery, cables, transformers, storage, current limiters and power conditioning -- employing both low- and high-temperature superconductors in the USA, Japan, Europe, and more recently, China. Despite these accomplishments, there has been to date no substantial insertion of superconducting technology in the electric power infrastructure worldwide, and its eventual deployment remains problematic. We will explore the issues delaying such deployment and suggest future electric power scenarios in which superconductivity will play an essential central role.
Strategies for the implementation of a European Volcano Observations Research Infrastructure
NASA Astrophysics Data System (ADS)
Puglisi, Giuseppe
2015-04-01
Active volcanic areas in Europe constitute a direct threat to millions of people on both the continent and adjacent islands. Furthermore, eruptions of "European" volcanoes in overseas territories, such as in the West Indies and in the Indian and Pacific oceans, can have much broader impacts outside Europe. Volcano Observatories (VOs), which undertake volcano monitoring under governmental mandate, and Volcanological Research Institutions (VRIs; such as university departments, laboratories, etc.) manage networks on European volcanoes consisting of thousands of stations or sites where volcanological parameters are either continuously or periodically measured. These sites are equipped with instruments for geophysical (seismic, geodetic, gravimetric, electromagnetic), geochemical (volcanic plumes, fumaroles, groundwater, rivers, soils) and environmental observations (e.g. meteorological and air quality parameters), including prototype deployments. VOs and VRIs also operate laboratories for sample analysis (rocks, gases, isotopes, etc.), near-real-time analysis of space-borne data (SAR, thermal imagery, SO2 and ash), as well as high-performance computing centres; all providing high-quality information on the current status of European volcanoes and the geodynamic background of the surrounding areas. This large, high-quality deployment of monitoring systems focused on a specific geophysical target (volcanoes), together with the wide range of volcanological phenomena at European volcanoes (which cover all the known volcano types), represents a unique opportunity to fundamentally improve the knowledge base of volcano behaviour. The existing arrangement of national infrastructures (i.e. VOs and VRIs) is too fragmented to be considered a single distributed infrastructure. Therefore, the main effort planned in the framework of the EPOS-PP proposal is focused on the creation of services aimed at providing improved and more efficient access to the volcanological facilities and observations on active volcanoes. The way to facilitate access to this valued source of information is to reshape this fragmented community into a unified infrastructure with common technical solutions and data policies. Some of the key actions include the implementation of virtual access to geophysical, geochemical, volcanological and environmental raw data and metadata, multidisciplinary volcanic and hazard products, and tools for modelling volcanic processes, as well as transnational access to the facilities of volcano observatories. This implementation will start from the outcomes of the two EC-FP7 projects, Futurevolc and MED-SUV, relevant to three out of four global volcanic Supersites, which are located in Europe and managed by European institutions. This approach will ease exchange and collaboration among the European volcano community, allowing better understanding of the volcanic processes occurring at European volcanoes, considered worldwide as natural laboratories.
Capability 9.3 Assembly and Deployment
NASA Technical Reports Server (NTRS)
Dorsey, John
2005-01-01
Large space systems are required for a range of operational, commercial and scientific mission objectives; however, current launch vehicle capacities substantially limit the size of space systems (on-orbit or planetary). Assembly and deployment is the process of constructing a spacecraft or system from modules which may in turn have been constructed from sub-modules in a hierarchical fashion. In-situ assembly of space exploration vehicles and systems will require a broad range of operational capabilities, including component transfer and storage, fluid handling, construction and assembly, and test and verification. Efficient execution of these functions will require supporting infrastructure that can: receive, store and protect (materials, components, etc.); hold and secure; position, align and control; deploy; connect/disconnect; construct; join; assemble/disassemble; dock/undock; and mate/demate.
Use of Emerging Grid Computing Technologies for the Analysis of LIGO Data
NASA Astrophysics Data System (ADS)
Koranda, Scott
2004-03-01
The LIGO Scientific Collaboration (LSC) today faces the challenge of enabling the analysis of terabytes of LIGO data by hundreds of scientists from institutions all around the world. To meet this challenge the LSC is developing tools, infrastructure, applications, and expertise leveraging Grid Computing technologies available today, and making compute resources at sites across the United States and Europe available to LSC scientists. We use digital credentials for strong and secure authentication and authorization to compute resources and data. Building on top of products from the Globus project for high-speed data transfer and information discovery, we have created the Lightweight Data Replicator (LDR) to securely and robustly replicate data to resource sites. We have deployed at our computing sites the Virtual Data Toolkit (VDT) Server and Client packages, developed in collaboration with our partners in the GriPhyN and iVDGL projects, providing uniform access to distributed resources for users and their applications. Taken together, these Grid Computing technologies and infrastructure have formed the LSC DataGrid -- a coherent and uniform environment across two continents for the analysis of gravitational-wave detector data. Much work remains, however, to scale current analyses, and recent lessons learned need to be integrated into the next generation of Grid middleware.
Deployment of Analytics into the Healthcare Safety Net: Lessons Learned.
Hartzband, David; Jacobs, Feygele
2016-01-01
As payment reforms shift healthcare reimbursement toward value-based payment programs, providers need the capability to work with data of greater complexity, scope and scale. This will in many instances necessitate a change in understanding of the value of data, and of the types of data needed for analysis to support operations and clinical practice. It will also require the deployment of different infrastructure and analytic tools. Community health centers, which serve more than 25 million people and together form the nation's largest single source of primary care for medically underserved communities and populations, are expanding and will need to optimize their capacity to leverage data as new payer and organizational models emerge. To better understand existing capacity and help organizations plan for the strategic and expanded uses of data, a project was initiated that deployed contemporary, Hadoop-based analytic technology into several multi-site community health centers (CHCs) and a primary care association (PCA) with an affiliated data warehouse supporting health centers across the state. An initial data quality exercise was carried out after deployment, in which a number of analytic queries were executed using both the existing electronic health record (EHR) applications and, in parallel, the analytic stack. Each organization carried out the EHR analysis using the definitions typically applied for routine reporting. The analysis deploying the analytic stack was carried out using the common definitions established for the Uniform Data System (UDS) by the Health Resources and Services Administration. In addition, interviews with health center leadership and staff were completed to understand the context for the findings. The analysis uncovered many challenges and inconsistencies with respect to the definition of core terms (patient, encounter, etc.), data formatting, and missing, incorrect and unavailable data. At a population level, apparent underreporting of a number of diagnoses, specifically obesity and heart disease, was also evident in the results of the data quality exercise, for both the EHR-derived and the stack analytic results. Data awareness, that is, an appreciation of the importance of data integrity, data hygiene and the potential uses of data, needs to be prioritized and developed by health centers and other healthcare organizations if analytics are to be used in an effective manner to support strategic objectives. While this analysis was conducted exclusively with community health center organizations, its conclusions and recommendations may be more broadly applicable.
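A sketch of the kind of parallel data-quality check described above, comparing a coded EHR flag with a uniformly applied UDS-style definition; the table, column names, and the BMI-based rule here are invented for illustration.

```python
import pandas as pd

# Hypothetical extract: the same clinical question answered from two pipelines.
ehr = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5],
    "bmi": [31.2, None, 28.0, 44.5, 30.1],
    "dx_obesity": [True, False, False, True, False],  # as coded in the EHR
})

# UDS-style definition applied uniformly in the analytic stack:
# obesity = BMI >= 30 (a missing BMI simply fails the test here).
stack_obese = ehr["bmi"].ge(30)

print(f"EHR-coded obesity rate:      {ehr['dx_obesity'].mean():.0%}")
print(f"UDS-definition obesity rate: {stack_obese.mean():.0%}")
# A gap between the two is the kind of apparent underreporting of coded
# diagnoses (e.g., obesity, heart disease) that the project observed.
```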
Deployment of Analytics into the Healthcare Safety Net: Lessons Learned
Hartzband, David; Jacobs, Feygele
2016-01-01
Background As payment reforms shift healthcare reimbursement toward value-based payment programs, providers need the capability to work with data of greater complexity, scope and scale. This will in many instances necessitate a change in understanding of the value of data, and the types of data needed for analysis to support operations and clinical practice. It will also require the deployment of different infrastructure and analytic tools. Community health centers, which serve more than 25 million people and together form the nation’s largest single source of primary care for medically underserved communities and populations, are expanding and will need to optimize their capacity to leverage data as new payer and organizational models emerge. Methods To better understand existing capacity and help organizations plan for the strategic and expanded uses of data, a project was initiated that deployed contemporary, Hadoop-based, analytic technology into several multi-site community health centers (CHCs) and a primary care association (PCA) with an affiliated data warehouse supporting health centers across the state. An initial data quality exercise was carried out after deployment, in which a number of analytic queries were executed using both the existing electronic health record (EHR) applications and in parallel, the analytic stack. Each organization carried out the EHR analysis using the definitions typically applied for routine reporting. The analysis deploying the analytic stack was carried out using those common definitions established for the Uniform Data System (UDS) by the Health Resources and Services Administration. In addition, interviews with health center leadership and staff were completed to understand the context for the findings. Results The analysis uncovered many challenges and inconsistencies with respect to the definition of core terms (patient, encounter, etc.), data formatting, and missing, incorrect and unavailable data. At a population level, apparent underreporting of a number of diagnoses, specifically obesity and heart disease, was also evident in the results of the data quality exercise, for both the EHR-derived and stack analytic results. Conclusion Data awareness, that is, an appreciation of the importance of data integrity, data hygiene and the potential uses of data, needs to be prioritized and developed by health centers and other healthcare organizations if analytics are to be used in an effective manner to support strategic objectives. While this analysis was conducted exclusively with community health center organizations, its conclusions and recommendations may be more broadly applicable. PMID:28210424
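To make the parallel-query comparison concrete, here is a small Python sketch of the kind of data-quality check described: per-measure counts from the EHR reports and the analytic stack are compared under a shared definition set, and large relative gaps are flagged. All counts and the tolerance are hypothetical:

    # Hypothetical per-measure patient counts from the two pipelines.
    ehr_counts   = {"patients": 48210, "encounters": 193400, "obesity": 6120, "heart_disease": 2210}
    stack_counts = {"patients": 47955, "encounters": 191876, "obesity": 5904, "heart_disease": 2105}

    def flag_discrepancies(a, b, tolerance=0.02):
        """Yield measures whose relative difference exceeds the tolerance."""
        for measure in sorted(set(a) | set(b)):
            x, y = a.get(measure, 0), b.get(measure, 0)
            rel = abs(x - y) / max(x, y, 1)
            if rel > tolerance:
                yield measure, x, y, rel

    for measure, x, y, rel in flag_discrepancies(ehr_counts, stack_counts):
        print(f"{measure}: EHR={x} stack={y} ({rel:.1%} apart)")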
Exploring the CIGALA/CALIBRA Network Data Base for Ionosphere Monitoring Over Brazil
NASA Astrophysics Data System (ADS)
Vani, B. C.; Galera Monico, J. F.; Shimabukuro, M. H.; Pereira, V. A.; Aquino, M. H.
2013-12-01
The ionosphere in Brazil is strongly influenced by the equatorial anomaly, and GNSS based applications are therefore widely affected by ionospheric disturbances. A network for continuous monitoring of the ionosphere has been deployed over its territory since February 2011, as part of the CIGALA and CALIBRA projects. Through CIGALA (Concept for Ionospheric Scintillation Mitigation for Professional GNSS in Latin America), which was funded by the European Commission (EC) in the framework of the FP7-GALILEO-2009-GSA (European GNSS Agency), the first stations were deployed at Presidente Prudente, São Paulo state, in February 2011. CIGALA was finalized in February 2012 with eight stations distributed over the Brazilian territory. Through CALIBRA (Countering GNSS high Accuracy applications Limitations due to Ionospheric disturbances in BRAzil), also funded by the European Commission, now in the framework of the FP7-GALILEO-2011-GSA, new stations are being deployed. Some of the stations are being placed specifically according to geomagnetic considerations, aiming to support the development of a local scintillation and TEC model. CALIBRA started in November 2012 and will run for two years, focusing on the development of improved and new algorithms that can be applied to high accuracy GNSS techniques in order to tackle the effects of ionospheric disturbances. PolaRxS PRO receivers, manufactured by Septentrio, have been deployed at all stations. This multi-GNSS receiver can collect data at rates of up to 100 Hz, providing ionospheric TEC, scintillation parameters like S4 and Sigma-Phi, and other signal metrics like locktime for all satellites and frequencies tracked. All collected data (raw and ionosphere monitoring records) are stored at a central facility located at the Faculdade de Ciências e Tecnologia da Universidade Estadual Paulista (FCT/UNESP) in Presidente Prudente. To deal with the large amount of data, an analysis infrastructure has also been established in the form of a web based software named ISMR Query Tool, which provides a capability to identify specific behaviors of ionospheric activity through data visualization and data mining. Its web availability and user-specified features allow users to interact with the data through a simple internet connection, enabling them to obtain insight about the ionosphere according to their own prior knowledge. Information about the network, the projects and the tool can be found at the FCT/UNESP Ionosphere web portal available at http://is-cigala-calibra.fct.unesp.br/. This contribution will provide an overview of results extracted using the monitoring and analysis infrastructure, explaining the possibilities offered by the ISMR Query Tool to support analysis of the ionosphere as well as the development of models and mitigation techniques to counter the effects of ionospheric disturbances on GNSS.
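The amplitude scintillation index S4 reported by such receivers is, in essence, the normalized standard deviation of detrended signal intensity over a short window. A minimal numpy sketch on synthetic data; the window length and sampling rate are assumed values:

    import numpy as np

    def s4_index(intensity: np.ndarray) -> float:
        """Amplitude scintillation index over one window of detrended intensity I:
        S4 = sqrt((<I^2> - <I>^2) / <I>^2)."""
        m = intensity.mean()
        return float(np.sqrt((np.mean(intensity**2) - m**2) / m**2))

    # 60 s of 50 Hz intensity samples with mild synthetic fading (illustrative only)
    rng = np.random.default_rng(0)
    intensity = 1.0 + 0.1 * rng.standard_normal(60 * 50)
    print(f"S4 = {s4_index(intensity):.3f}")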
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blair, Nate; Zhou, Ella; Getman, Dan
2015-10-01
This is the Chinese translation of NREL/TP-6A20-64831. Mathematical and computational models are widely used for the analysis and design of both physical and financial systems. Modeling the electric grid is of particular importance to China for three reasons. First, power-sector assets are expensive and long-lived, and they are critical to any country's development. China's electric load, transmission, and other energy-related infrastructure are expected to continue to grow rapidly; therefore it is crucial to understand and help plan for the future in which those assets will operate. Second, China has dramatically increased its deployment of renewable energy (RE), and is likely to continue further accelerating such deployment over the coming decades. Careful planning and assessment of the various aspects (technical, economic, social, and political) of integrating a large amount of renewables on the grid is required. Third, companies need the tools to develop a strategy for their own involvement in the power market China is now developing, and to enable a possible transition to an efficient and high RE future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duren, Mike; Aldridge, Hal; Abercrombie, Robert K
2013-01-01
Compromises attributable to the Advanced Persistent Threat (APT) highlight the necessity for constant vigilance. The APT provides a new perspective on security metrics (e.g., statistics-based cyber security) and quantitative risk assessments. We consider design principles and models/tools that provide high assurance for energy delivery systems (EDS) operations regardless of the state of compromise. Cryptographic keys must be securely exchanged, then held and protected on either end of a communications link. This is challenging for a utility with numerous substations that must secure the intelligent electronic devices (IEDs) that may comprise a complex control system of systems. For example, distribution and management of keys among the millions of intelligent meters within the Advanced Metering Infrastructure (AMI) is being implemented as part of the National Smart Grid initiative. Without a means for a secure cryptographic key management system (CKMS) no cryptographic solution can be widely deployed to protect the EDS infrastructure from cyber-attack. We consider 1) how security modeling is applied to key management and cyber security concerns on a continuous basis from design through operation, 2) how trusted models and key management architectures greatly impact failure scenarios, and 3) how hardware-enabled trust is a critical element to detecting, surviving, and recovering from attack.
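One building block of any CKMS is wrapping a per-device key under a key-encryption key before it travels to the field. A hedged sketch using the third-party cryptography package's Fernet recipe; the scenario and names are illustrative, not the design considered in the paper:

    from cryptography.fernet import Fernet  # pip install cryptography

    # Key-encryption key held by the utility's key-management service.
    kek = Fernet.generate_key()
    wrapper = Fernet(kek)

    # Per-device key destined for an intelligent electronic device (IED) or meter.
    device_key = Fernet.generate_key()

    wrapped = wrapper.encrypt(device_key)           # safe to send over the field network
    unwrapped = wrapper.decrypt(wrapped, ttl=3600)  # reject tokens older than an hour
    assert unwrapped == device_key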
DOT National Transportation Integrated Search
2016-01-01
Transportation infrastructure is quickly moving towards revolutionary changes to accommodate the deployment of AVs. On the other hand, the transition to new vehicle technologies will be shaped in large part by changes in performance of roadway ...
Measurement-Driven Characterization of the Mobile Environment
ERIC Educational Resources Information Center
Soroush, Hamed
2013-01-01
The concurrent deployment of high-quality wireless networks and large-scale cloud services offers the promise of secure ubiquitous access to seemingly limitless amount of content. However, as users' expectations have grown more demanding, the performance and connectivity failures endemic to the existing networking infrastructure have become more…
NASA Astrophysics Data System (ADS)
Phillips, D. A.; Meertens, C. M.; Mattioli, G. S.; Miller, M. M.; Charlevoix, D. J.; Maggert, D.; Hodgkinson, K. M.; Henderson, D. B.; Puskas, C. M.; Bartel, B. A.; Baker, S.; Blume, F.; Normandeau, J.; Feaux, K.; Galetzka, J.; Williamson, H.; Pettit, J.; Crosby, C. J.; Boler, F. M.
2015-12-01
UNAVCO responds to community requests for support during and following significant geophysical events such as earthquakes, volcanic activity, landslides, glacial and ice-sheet movements, unusual uplift or subsidence, extreme meteorological events, or other hazards. UNAVCO can also respond proactively to events in anticipation of community demand for relevant data, data products or other services. Recent major events to which UNAVCO responded include the 2015 M7.8 Nepal EQ, the 2014 M6.0 American Canyon (Napa) EQ, the 2014 M8.2 Chile EQ, the 2011 M9.0 Tohoku, Japan EQ and tsunami, the 2010 M8.8 Maule, Chile EQ, and the 2010 M7.0 Haiti EQ. UNAVCO provided geophysical event response support for 15 events in 2014 alone. UNAVCO event response resources include geodetic infrastructure, data, and education and community engagement. Specific support resources include: field engineering personnel; continuous and campaign GNSS/GPS station deployment; real-time and/or high rate field GNSS/GPS station upgrades or deployment; data communications and power systems deployment; tiltmeter, strainmeter, and borehole seismometer deployments; terrestrial laser scanning (TLS a.k.a. ground-based LiDAR); InSAR data support; education and community engagement assistance or products; data processing services; generation of custom GNSS/GPS or borehole data sets and products; equipment shipping and logistics coordination; and assistance with RAPID proposal preparation, budgeting, and submission. The most critical aspect of a successful event response is effective and efficient communication. To facilitate such communication, UNAVCO creates event response web pages describing the event and the support being provided, and in the case of major events also provides an online event response forum. These resources are shared broadly with the geophysical community through multiple dissemination strategies including social media of UNAVCO and partner organizations. We will provide an overview of resources available to the community from UNAVCO in response to events. We will also highlight examples of the infrastructure, data and data products, and education and community engagement support provided by UNAVCO for major recent events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plotkin, S.; Stephens, T.; McManus, W.
2013-03-01
Scenarios of new vehicle technology deployment serve various purposes; some will seek to establish plausibility. This report proposes two reality checks for scenarios: (1) implications of manufacturing constraints on timing of vehicle deployment and (2) investment decisions required to bring new vehicle technologies to market. An estimated timeline of 12 to more than 22 years from initial market introduction to saturation is supported by historical examples and based on the product development process. Researchers also consider the series of investment decisions to develop and build the vehicles and their associated fueling infrastructure. A proposed decision tree analysis structure could be used to systematically examine investors' decisions and the potential outcomes, including consideration of cash flow and return on investment. This method requires data or assumptions about capital cost, variable cost, revenue, timing, and probability of success/failure, and would result in a detailed consideration of the value proposition of large investments and long lead times. This is one of a series of reports produced as a result of the Transportation Energy Futures (TEF) project, a Department of Energy-sponsored multi-agency effort to pinpoint underexplored strategies for abating GHGs and reducing petroleum dependence related to transportation.
Initial steps towards a production platform for DNA sequence analysis on the grid.
Luyf, Angela C M; van Schaik, Barbera D C; de Vries, Michel; Baas, Frank; van Kampen, Antoine H C; Olabarriaga, Silvia D
2010-12-14
Bioinformatics is confronted with a new data explosion due to the availability of high throughput DNA sequencers. Data storage and analysis become a problem on local servers, and it therefore becomes necessary to switch to other IT infrastructures. Grid and workflow technology can help to handle the data more efficiently, as well as facilitate collaborations. However, interfaces to grids are often unfriendly to novice users. In this study we reused a platform that was developed in the VL-e project for the analysis of medical images. Data transfer, workflow execution and job monitoring are operated from one graphical interface. We developed workflows for two sequence alignment tools (BLAST and BLAT) as a proof of concept. The analysis time was significantly reduced. All workflows and executables are available for the members of the Dutch Life Science Grid and the VL-e Medical virtual organizations. All components are open source and can be transported to other grid infrastructures. The availability of in-house expertise and tools facilitates the usage of grid resources by new users. Our first results indicate that this is a practical, powerful and scalable solution to address the capacity and collaboration issues raised by the deployment of next generation sequencers. We currently adopt this methodology on a daily basis for DNA sequencing and other applications. More information and source code is available via http://www.bioinformaticslaboratory.nl/
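The usual pattern behind such workflows is to split the query set into chunks and align each chunk independently. A local stand-in for the grid submission step, sketched in Python under the assumption that NCBI BLAST+ (blastn) is on the PATH and a database named reference_db has been formatted; the file names are hypothetical:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def blast_chunk(chunk: str) -> str:
        """Align one FASTA chunk against a preformatted database with blastn."""
        out = chunk.replace(".fa", ".tsv")
        subprocess.run(
            ["blastn", "-query", chunk, "-db", "reference_db",
             "-outfmt", "6", "-out", out],
            check=True,
        )
        return out

    chunks = [f"reads_chunk_{i:02d}.fa" for i in range(8)]  # from splitting the input
    with ThreadPoolExecutor(max_workers=8) as pool:  # stand-in for grid job submission
        results = list(pool.map(blast_chunk, chunks))
    print("partial results:", results)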
The JASMIN Cloud: specialised and hybrid to meet the needs of the Environmental Sciences Community
NASA Astrophysics Data System (ADS)
Kershaw, Philip; Lawrence, Bryan; Churchill, Jonathan; Pritchard, Matt
2014-05-01
Cloud computing provides enormous opportunities for the research community. The large public cloud providers offer near-limitless scaling capability. However, adapting Cloud to scientific workloads is not without its problems. The commodity nature of the public cloud infrastructure can be at odds with the specialist requirements of the research community. Issues such as trust, ownership of data, WAN bandwidth and costing models pose additional barriers to more widespread adoption. Alongside the application of public cloud for scientific applications, a number of private cloud initiatives are underway in the research community, of which the JASMIN Cloud is one example. Here, cloud service models are being effectively super-imposed over more established services such as data centres, compute cluster facilities and Grids. These have the potential to deliver the specialist infrastructure needed for the science community coupled with the benefits of a Cloud service model. The JASMIN facility based at the Rutherford Appleton Laboratory was established in 2012 to support the data analysis requirements of the climate and Earth Observation community. In its first year of operation, the 5PB of available storage capacity was filled and the hosted compute capability used extensively. JASMIN has modelled the concept of a centralised large-volume data analysis facility. Key characteristics have enabled success: peta-scale fast disk connected via low latency networks to compute resources and the use of virtualisation for effective management of the resources for a range of users. A second phase is now underway funded through NERC's (Natural Environment Research Council) Big Data initiative. This will see significant expansion to the resources available, with a doubling of disk-based storage to 12PB and an increase of compute capacity by a factor of ten to over 3000 processing cores. This expansion is accompanied by a broadening in the scope for JASMIN, as a service available to the entire UK environmental science community. Experience with the first phase demonstrated the range of user needs. A trade-off is needed between access privileges to resources, flexibility of use and security. This has influenced the form and types of service under development for the new phase. JASMIN will deploy a specialised private cloud organised into "Managed" and "Unmanaged" components. In the Managed Cloud, users have direct access to the storage and compute resources for optimal performance but, for reasons of security, via a more restrictive PaaS (Platform-as-a-Service) interface. The Unmanaged Cloud is deployed in an isolated part of the network but co-located with the rest of the infrastructure. This grants tenants greater liberty - full IaaS (Infrastructure-as-a-Service) capability to provision customised infrastructure - whilst at the same time protecting more sensitive parts of the system from direct access using these elevated privileges. The private cloud will be augmented with cloud-bursting capability so that it can exploit the resources available from public clouds, making it effectively a hybrid solution. A single interface will overlay the functionality of both the private cloud and external interfaces to public cloud providers, giving users the flexibility to migrate resources between infrastructures as requirements dictate.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-22
... emissions inventories, monitoring, and modeling, to assure attainment and maintenance of the standards... NAAQS required the deployment of a system of new monitors to measure ambient levels of that new... requirements, including emissions inventories, monitoring, and modeling to assure attainment and maintenance of...
PIV Logon Configuration Guidance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Glen Alan
This document details the configurations and enhancements implemented to support the usage of federal Personal Identity Verification (PIV) Card for logon on unclassified networks. The guidance is a reference implementation of the configurations and enhancements deployed at the Los Alamos National Laboratory (LANL) by Network and Infrastructure Engineering – Core Services (NIE-CS).
Learning Spaces and Pedagogic Change: Envisioned, Enacted and Experienced
ERIC Educational Resources Information Center
Mulcahy, Dianne; Cleveland, Ben; Aberton, Helen
2015-01-01
Building on work on how spaces of learning can contribute to the broader policy agenda of achieving pedagogic change, this article takes as its context the Building the Education Revolution infrastructure programme in Australia. Deploying a sociomaterial approach to researching learning spaces and pedagogic change and drawing on data from…
DOT National Transportation Integrated Search
2017-05-19
The ITS JPO is the U.S. Department of Transportation's primary advocate and national leader for ITS research, development, and future deployment of connected vehicle technologies, focusing on intelligent vehicles, intelligent infrastructure, and th...
Towards an Enterprise Level Measure of Security
ERIC Educational Resources Information Center
Marchant, Robert L.
2013-01-01
Vulnerabilities of Information Technology (IT) Infrastructure have grown at least at the same pace as the sophistication and complexity of the technology that is the cornerstone of our IT enterprises. Despite massively increased funding for research, for development, and to support deployment of Information Assurance (IA) defenses, the damages…
Transportation Challenges in the Hampton Roads, VA, Region
2012-06-01
[Extraction fragments from the report's front matter: a table of contents listing Port Planning Orders (PPO) and Highways for National Defense (HND), and an acronym list expanding PPO (Port Planning Orders), RND (Railroads for National Defense), and SDDCTEA (Surface Deployment and Distribution Command Transportation Engineering...). The surviving abstract text notes important Continental United States (CONUS) port infrastructure in both peacetime and wartime, and that Strategic Seaports and Port Planning Orders (PPOs) were...]
76 FR 29135 - National Defense Transportation Day and National Transportation Week, 2011
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-19
... movement created by America's transportation infrastructure facilitates our Nation's economic vitality. Our... also permits our military to move personnel and supplies at a moment's notice. The ability to deploy... America A Proclamation America has long depended on a robust and reliable transportation network to...
Supporting Collaborative Model and Data Service Development and Deployment with DevOps
NASA Astrophysics Data System (ADS)
David, O.
2016-12-01
Adopting DevOps practices for model service development and deployment enables a community to engage in service-oriented modeling and data management. The Cloud Services Integration Platform (CSIP), developed over the last 5 years at Colorado State University, provides for collaborative integration of environmental models into scalable model and data services as a micro-services platform with API and deployment infrastructure. Originally developed to support USDA natural resource applications, it proved suitable for a wider range of applications in the environmental modeling domain. While extending its scope and visibility, it became apparent that community integration and adequate workflow support through the full model development and application cycle drove successful outcomes. DevOps provides best practices, tools, and organizational structures to optimize the transition from model service development to deployment by minimizing the (i) operational burden and (ii) turnaround time for modelers. We have developed and implemented a methodology to fully automate a suite of applications for application lifecycle management, version control, continuous integration, container management, and container scaling to enable model and data service developers in various institutions to collaboratively build, run, deploy, test, and scale services within minutes. To date more than 160 model and data services are available for applications in hydrology (PRMS, Hydrotools, CFA, ESP), water and wind erosion prediction (WEPP, WEPS, RUSLE2), soil quality trends (SCI, STIR), water quality analysis (SWAT-CP, WQM, CFA, AgES-W), stream degradation assessment (SWAT-DEG), hydraulics (cross-section), and grazing management (GRAS). In addition, supporting data services include soil (SSURGO), ecological site (ESIS), climate (CLIGEN, WINDGEN), land management and crop rotations (LMOD), and pesticides (WQM), developed using this workflow automation and decentralized governance.
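From a client's perspective, such model services reduce to an HTTP round trip. A hedged sketch with the requests library; the endpoint URL and payload shape below are invented for illustration and are not CSIP's documented API:

    import requests  # pip install requests

    # Hypothetical service endpoint and payload; consult the service docs for real ones.
    SERVICE = "https://example.org/csip/model/runoff/1.0"

    payload = {"parameter": [
        {"name": "precip_mm", "value": 42.0},
        {"name": "soil_type", "value": "silt_loam"},
    ]}

    resp = requests.post(SERVICE, json=payload, timeout=60)
    resp.raise_for_status()
    for result in resp.json().get("result", []):
        print(result.get("name"), "=", result.get("value"))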
Sensor4PRI: A Sensor Platform for the Protection of Railway Infrastructures
Cañete, Eduardo; Chen, Jaime; Díaz, Manuel; Llopis, Luis; Rubio, Bartolomé
2015-01-01
Wireless Sensor Networks constitute pervasive and distributed computing systems and are potentially one of the most important technologies of this century. They have been specifically identified as a good candidate to become an integral part of the protection of critical infrastructures. In this paper we focus on railway infrastructure protection and we present the details of a sensor platform designed to be integrated into a slab track system in order to carry out both installation and maintenance monitoring activities. In the installation phase, the platform helps operators to install the slab tracks in the right position. In the maintenance phase, the platform collects information about the structural health and behavior of the infrastructure when a train travels along it and relays the readings to a base station. The base station uses trains as data mules to upload the information to the internet. The use of a train as a data mule is especially suitable for collecting information from remote or inaccessible places which do not have a direct connection to the internet and require less network infrastructure. The overall aim of the system is to deploy a permanent economically viable monitoring system to improve the safety of railway infrastructures. PMID:25734648
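The store-and-forward pattern is simple to sketch: track-side nodes buffer readings until a train passes, and the train uploads once it reaches connectivity. A toy Python model with invented names and capacities:

    from collections import deque

    class TrackNode:
        """Sensor node in the slab track; buffers readings between train passes."""
        def __init__(self, node_id: str, capacity: int = 1000):
            self.node_id = node_id
            self.buffer = deque(maxlen=capacity)  # oldest readings drop first when full

        def sample(self, strain: float, timestamp: float) -> None:
            self.buffer.append({"node": self.node_id, "t": timestamp, "strain": strain})

        def offload(self) -> list:
            """Hand everything to a passing train (the data mule), then clear."""
            batch = list(self.buffer)
            self.buffer.clear()
            return batch

    class TrainMule:
        """Collects batches while travelling, uploads at a connected station."""
        def __init__(self):
            self.cargo = []
        def pass_node(self, node: TrackNode) -> None:
            self.cargo.extend(node.offload())
        def upload(self) -> int:
            n, self.cargo = len(self.cargo), []
            return n  # in a real system: POST to the base station's internet link

    node = TrackNode("slab-17")
    node.sample(0.012, 0.0)
    mule = TrainMule()
    mule.pass_node(node)
    print("uploaded", mule.upload(), "readings")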
Deploying Server-side File System Monitoring at NERSC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uselton, Andrew
2009-05-01
The Franklin Cray XT4 at the NERSC center was equipped with the server-side I/O monitoring infrastructure Cerebro/LMT, which is described here in detail. Insights gained from the data produced include a better understanding of instantaneous data rates during file system testing, file system behavior during regular production time, and long-term average behaviors. Information and insights gleaned from this monitoring support efforts to proactively manage the I/O infrastructure on Franklin. A simple model for I/O transactions is introduced and compared with the 250 million observations sent to the LMT database from August 2008 to February 2009.
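Server-side monitoring of this kind reduces to aggregating per-interval byte counters into rates. A standard-library sketch over hypothetical LMT-style observations (the real LMT schema differs):

    from collections import defaultdict

    # Hypothetical observations: (unix_time, server, bytes_written_in_interval)
    observations = [
        (1220000000, "ost0001", 512 << 20),
        (1220000005, "ost0001", 768 << 20),
        (1220000030, "ost0002", 256 << 20),
    ]

    WINDOW = 60  # aggregate into one-minute windows
    totals = defaultdict(int)
    for t, server, nbytes in observations:
        totals[t - t % WINDOW] += nbytes  # sum bytes moved across all servers

    for start, nbytes in sorted(totals.items()):
        print(start, f"{nbytes / (1 << 20) / WINDOW:.1f} MB/s average")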
Separating Added Value from Hype: Some Experiences and Prognostications
NASA Astrophysics Data System (ADS)
Reed, Dan
2004-03-01
These are exciting times for the interplay of science and computing technology. As new data archives, instruments and computing facilities are connected nationally and internationally, a new model of distributed scientific collaboration is emerging. However, any new technology brings both opportunities and challenges -- Grids are no exception. In this talk, we will discuss some of the experiences deploying Grid software in production environments, illustrated with experiences from the NSF PACI Alliance, the NSF Extensible Terascale Facility (ETF) and other Grid projects. From these experiences, we derive some guidelines for deployment and some suggestions for community engagement, software development and infrastructure
A Satellite-Based Infrastructure Providing Broadband IP Services on Board High Speed Trains
NASA Astrophysics Data System (ADS)
Feltrin, Eros; Weller, Elisabeth
After the earlier technologies that offered satellite mobile services for civil and military applications, today’s specific antenna design, modulation techniques and most powerful new generation satellites also allow a good level of performance to be achieved on board high speed modes of transport such as aircraft and trains. This paper reports Eutelsat's experience in developing and deploying an architecture based on a spread spectrum system in order to provide broadband connectivity on board high speed trains. After introducing the adopted technologies, the architecture and the constraints, some results obtained from analysis, testing and measuring of the availability of the service are reported and commented upon.
The iPlant Collaborative: Cyberinfrastructure for Enabling Data to Discovery for the Life Sciences
Merchant, Nirav; Lyons, Eric; Goff, Stephen; Vaughn, Matthew; Ware, Doreen; Micklos, David; Antin, Parker
2016-01-01
The iPlant Collaborative provides life science research communities access to comprehensive, scalable, and cohesive computational infrastructure for data management; identity management; collaboration tools; and cloud, high-performance, high-throughput computing. iPlant provides training, learning material, and best practice resources to help all researchers make the best use of their data, expand their computational skill set, and effectively manage their data and computation when working as distributed teams. iPlant’s platform permits researchers to easily deposit and share their data and deploy new computational tools and analysis workflows, allowing the broader community to easily use and reuse those data and computational analyses. PMID:26752627
National Fusion Collaboratory: Grid Computing for Simulations and Experiments
NASA Astrophysics Data System (ADS)
Greenwald, Martin
2004-05-01
The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure that allows for secure authentication and resource authorization which allows stakeholders to control their own resources such as computers, data and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from the implementation details such that transparency and ease-of-use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring and management have been developed and shared among a wide user base. This approach saves user sites from the laborious effort of maintaining such a large and complex code while at the same time reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid, a highly capable multi-point remote conferencing tool and capabilities for sharing displays and analysis tools over local and wide-area networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Broderick, Robert; Mather, Barry
2016-05-01
Wide use of advanced inverters could double the electricity-distribution system’s hosting capacity for distributed PV at low costs—from about 170 GW to 350 GW (see Palmintier et al. 2016). At the distribution system level, increased variable generation due to high penetrations of distributed PV (typically rooftop and smaller ground-mounted systems) could challenge the management of distribution voltage, potentially increase wear and tear on electromechanical utility equipment, and complicate the configuration of circuit-breakers and other protection systems—all of which could increase costs, limit further PV deployment, or both. However, improved analysis of distribution system hosting capacity—the amount of distributed PV that can be interconnected without changing the existing infrastructure or prematurely wearing out equipment—has overturned previous rule-of-thumb assumptions such as the idea that distributed PV penetrations higher than 15% require detailed impact studies. For example, new analysis suggests that the hosting capacity for distributed PV could rise from approximately 170 GW using traditional inverters to about 350 GW with the use of advanced inverters for voltage management, and it could be even higher using accessible and low-cost strategies such as careful siting of PV systems within a distribution feeder and additional minor changes in distribution operations. Also critical to facilitating distributed PV deployment is the improvement of interconnection processes, associated standards and codes, and compensation mechanisms so they embrace PV’s contributions to system-wide operations. Ultimately SunShot-level PV deployment will require unprecedented coordination of the historically separate distribution and transmission systems along with incorporation of energy storage and “virtual storage,” which exploits improved management of electric vehicle charging, building energy systems, and other large loads. Additional analysis and innovation are needed...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lingerfelt, Eric J; Endeve, Eirik; Hui, Yawei
Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales, many spectroscopic modes, and now--with the rise of multimodal acquisition systems and the associated processing capability--the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges best summarized as a necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides material scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation and manage uploaded data files via an intuitive, cross-platform client user interface. This framework delivers authenticated, "push-button" execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing compute-and-data cloud infrastructures and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF).
NASA Astrophysics Data System (ADS)
Cohen, J. S.; McGarity, A. E.
2017-12-01
The ability for mass deployment of green stormwater infrastructure (GSI) to intercept significant amounts of urban runoff has the potential to reduce the frequency of a city's combined sewer overflows (CSOs). This study was performed to aid in the Overbrook Environmental Education Center's vision of applying this concept to create a Green Commercial Corridor in Philadelphia's Overbrook Neighborhood, which lies in the Mill Creek Sewershed. In an attempt to further implement physical and social reality into previous work using simulation-optimization techniques to produce GSI deployment strategies (McGarity, et al., 2016), this study's models incorporated land use types and a specific neighborhood in the sewershed. The low impact development (LID) feature in EPA's Storm Water Management Model (SWMM) was used to simulate various geographic configurations of GSI in Overbrook. The results from these simulations were used to obtain formulas describing the annual CSO reduction in the sewershed based on the deployed GSI practices. These non-linear hydrologic response formulas were then implemented into the Storm Water Investment Strategy Evaluation (StormWISE) model (McGarity, 2012), a constrained optimization model used to develop optimal stormwater management practices on the watershed scale. By saturating the avenue with GSI, not only will CSOs from the sewershed into the Schuylkill River be reduced, but ancillary social and economic benefits of GSI will also be achieved. The effectiveness of these ancillary benefits changes based on the type of GSI practice and the type of land use in which the GSI is implemented. Thus, the simulation and optimization processes were repeated while delimiting GSI deployment by land use (residential, commercial, industrial, and transportation). The results give a GSI deployment strategy that achieves desired annual CSO reductions at a minimum cost based on the locations of tree trenches, rain gardens, and rain barrels in specified land use types.
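The step from SWMM runs to the optimization model hinges on fitting a closed-form hydrologic response. A sketch with scipy of fitting a saturating curve for annual CSO reduction versus GSI deployment; all data points and the exponential form are hypothetical stand-ins for the formulas derived in the study:

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical SWMM results: acres of GSI deployed vs. annual CSO reduction (MG/yr)
    gsi_acres = np.array([0, 5, 10, 20, 40, 80], dtype=float)
    cso_cut   = np.array([0, 14, 25, 41, 60, 78], dtype=float)

    def response(x, a, b):
        """Saturating response: reduction approaches a as deployment x grows."""
        return a * (1.0 - np.exp(-b * x))

    (a, b), _ = curve_fit(response, gsi_acres, cso_cut, p0=(80.0, 0.05))
    print(f"fitted annual CSO reduction ~= {a:.1f} * (1 - exp(-{b:.3f} * acres))")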
Wireless Infrastructure M2M Network For Distributed Power Grid Monitoring
Gharavi, Hamid; Hu, Bin
2018-01-01
With the massive integration of distributed renewable energy sources (RESs) into the power system, the demand for timely and reliable network quality monitoring, control, and fault analysis is rapidly growing. Following the successful deployment of Phasor Measurement Units (PMUs) in transmission systems for power monitoring, a new opportunity to utilize PMU measurement data for power quality assessment in distribution grid systems is emerging. The main problem, however, is that a distribution grid system does not normally have the support of an infrastructure network. Therefore, the main objective in this paper is to develop a Machine-to-Machine (M2M) communication network that can support wide-ranging sensory data, including high rate synchrophasor data for real-time communication. In particular, we evaluate the suitability of the emerging IEEE 802.11ah standard by exploiting its important features, such as classifying the power grid sensory data into different categories according to their traffic characteristics. For performance evaluation we use our hardware-in-the-loop grid communication network testbed to assess the performance of the network. PMID:29503505
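The classification idea is essentially priority queueing: latency-critical synchrophasor frames go out before bulk meter readings. A small standard-library sketch; the traffic classes and priority values are assumptions for illustration, not the 802.11ah access categories:

    import heapq
    import itertools

    # Smaller number = higher priority (assumed labels, not the standard's).
    PRIORITY = {"synchrophasor": 0, "fault_alarm": 0, "power_quality": 1, "meter_reading": 2}

    counter = itertools.count()  # tie-breaker keeps heap comparisons total
    queue = []

    def enqueue(kind: str, payload: bytes) -> None:
        heapq.heappush(queue, (PRIORITY[kind], next(counter), kind, payload))

    enqueue("meter_reading", b"kWh=4021")
    enqueue("synchrophasor", b"phasor frame")
    enqueue("power_quality", b"thd=2.1%")

    while queue:
        _, _, kind, payload = heapq.heappop(queue)
        print("transmit", kind, payload)  # synchrophasor frames go out first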
Design of Adaptive Policy Pathways under Deep Uncertainties
NASA Astrophysics Data System (ADS)
Babovic, Vladan
2013-04-01
The design of large-scale engineering and infrastructural systems today is growing in complexity. Designers need to consider sociotechnical uncertainties, intricacies, and processes in the long-term strategic deployment and operations of these systems. In this context, water and spatial management is increasingly challenged not only by climate-associated changes such as sea level rise and increased spatio-temporal variability of precipitation, but also by pressures due to population growth and a particularly accelerating rate of urbanisation. Furthermore, the high investment costs and long-term nature of water-related infrastructure projects require a long-term planning perspective, sometimes extending over many decades. Adaptation to such changes is not only determined by what is known or anticipated at present, but also by what will be experienced and learned as the future unfolds, as well as by policy responses to social and water events. As a result, a pathway emerges. Instead of responding to 'surprises' and making decisions on an ad hoc basis, exploring adaptation pathways into the future provides indispensable support in water management decision-making. In this contribution, a structured approach for designing a dynamic adaptive policy based on the concepts of adaptive policy making and adaptation pathways is introduced. Such an approach provides flexibility which allows change over time in response to how the future unfolds, what is learned about the system, and changes in societal preferences. The introduced flexibility provides means for dealing with the complexities of adaptation under deep uncertainties. It enables engineering systems to change in the face of uncertainty to reduce impacts from downside scenarios while capitalizing on upside opportunities. This contribution presents a comprehensive framework for the development and deployment of adaptive policy pathways, and demonstrates its performance under deep uncertainties on a case study related to an urban water catchment in Singapore. Ingredients of this approach are: (a) transient scenarios (time series of various uncertain developments such as climate change, economic developments, societal changes), (b) a methodology for exploring many options and sequences of these options across different futures, and (c) a stepwise policy analysis. The strategy is applied to a case of flexible deployment of novel, so-called Next Generation Infrastructure, and assessed in the context of the proposed framework. Results of the study show that flexible design alternatives deliver much enhanced performance compared to systems optimized under deterministic forecasts of the future. The work also demonstrates that explicit incorporation of uncertainty and flexibility into the decision-making process reduces capital expenditures while allowing decision makers to learn about system evolution throughout the lifetime of the project.
Emerging air pollution measurement technologies that require minimal infrastructure to deploy may lead to new insights on air pollution spatial variability in urban areas. Through a collaboration between the USEPA and HKEPD, this study evaluates the performance of a compact, roo...
A PKI Approach for Deploying Modern Secure Distributed E-Learning and M-Learning Environments
ERIC Educational Resources Information Center
Kambourakis, Georgios; Kontoni, Denise-Penelope N.; Rouskas, Angelos; Gritzalis, Stefanos
2007-01-01
While public key cryptography is continuously evolving and its installed base is growing significantly, recent research works examine its potential use in e-learning or m-learning environments. Public key infrastructure (PKI) and attribute certificates (ACs) can provide the appropriate framework to effectively support authentication and…
DOT National Transportation Integrated Search
2016-08-31
A major challenge for achieving large-scale adoption of EVs is an accessible infrastructure for the communities. The societal benefits of large-scale adoption of EVs cannot be realized without adequate deployment of publicly accessible charging stati...
Alternative Fuels Data Center: Plug-In Electric Vehicle Deployment Policy
... addendum to a pre-existing zoning ordinance to specify permissible use of EVSE in single- and multi-family ... -ready requirements may include EVSE installation, pre-wiring, or space reservation ... California building ... charging infrastructure. Examples include the scope of EVSE pre-wiring or installation from a ...
ERIC Educational Resources Information Center
Schlager, Kenneth J.
2008-01-01
This report describes a communications system engineering planning process that demonstrates an ability to design and deploy cost-effective broadband networks in low density rural areas. The emphasis is on innovative solutions and systems optimization because of the marginal nature of rural telecommunications infrastructure investments. Otherwise,…
DOT National Transportation Integrated Search
2010-12-01
This report presents the results for the national evaluation of the FY 2003 Earmarked ITS Integration Project: Southern Wyoming, I-80 Dynamic Message Signs. The I-80 Dynamic Message Signs project is a rural infrastructure deployment of ITS devices th...
The First Wave: The Beginnings of Radio in Canadian Distance Education
ERIC Educational Resources Information Center
Buck, George H.
2006-01-01
This article describes one of the first developments and deployment of radio for distance learning and education in Canada, beginning in the early 1920s. Anticipating a recent initiative of public-private partnerships, the impetus, infrastructure, and initial programs were provided by a large corporation. Description of the system, its purpose,…
Wireless Technology Infrastructures for Authentication of Patients: PKI that Rings
Sax, Ulrich; Kohane, Isaac; Mandl, Kenneth D.
2005-01-01
As the public interest in consumer-driven electronic health care applications rises, so do concerns about the privacy and security of these applications. Achieving a balance between providing the necessary security while promoting user acceptance is a major obstacle in large-scale deployment of applications such as personal health records (PHRs). Robust and reliable forms of authentication are needed for PHRs, as the record will often contain sensitive and protected health information, including the patient's own annotations. Since the health care industry per se is unlikely to succeed at single-handedly developing and deploying a large scale, national authentication infrastructure, it makes sense to leverage existing hardware, software, and networks. This report proposes a new model for authentication of users to health care information applications, leveraging wireless mobile devices. Cell phones are widely distributed, have high user acceptance, and offer advanced security protocols. The authors propose harnessing this technology for the strong authentication of individuals by creating a registration authority and an authentication service, and examine the problems and promise of such a system. PMID:15684133
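Although this 2005 proposal centers on credentials held on the phone, the core pattern of a phone-assisted second factor can be sketched with a standard HMAC-based one-time password (RFC 4226). The Python standard-library version below is an illustrative mechanism, not the authors' PKI design:

    import hashlib
    import hmac
    import struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        """RFC 4226 one-time code: HMAC-SHA1 of the counter, dynamically truncated."""
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
        return str(code).zfill(digits)

    # Server and phone share the secret at registration; codes match per counter value.
    secret = b"shared-registration-secret"
    print(hotp(secret, 1), hotp(secret, 2))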
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Changzheng; Lin, Zhenhong (Oak Ridge National Lab.)
Plug-in electric vehicles (PEVs) are widely regarded as an important component of the technology portfolio designed to accomplish policy goals in sustainability and energy security. However, the market acceptance of PEVs in the future remains largely uncertain from today's perspective. By integrating a consumer choice model based on nested multinomial logit with Monte Carlo simulation, this study analyzes the uncertainty of PEV market penetration. Results suggest that the future market for PEVs is highly uncertain and there is a substantial risk of low penetration in the early and midterm market. Top factors contributing to market share variability are price sensitivities, energy cost, range limitation, and charging availability. The results also illustrate the potential effect of public policies in promoting PEVs through investment in battery technology and infrastructure deployment. Here, continued improvement of battery technologies and deployment of charging infrastructure alone do not necessarily reduce the spread of market share distributions, but may shift distributions toward the right, i.e., increase the probability of having great market success.
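The mechanics of propagating input uncertainty through a logit choice model are easy to sketch. A toy two-alternative version in numpy, with all coefficients and prices invented for illustration (the study's nested model and inputs are far richer):

    import numpy as np

    rng = np.random.default_rng(42)
    N = 10_000  # Monte Carlo draws over uncertain inputs

    # Utilities of a PEV vs. a conventional vehicle under uncertain price
    # sensitivity and lifetime energy savings (all values assumed).
    beta_price  = rng.normal(-0.4, 0.1, N)   # utility per $1000 of price
    fuel_saving = rng.normal(2.0, 1.5, N)    # $1000s of lifetime energy savings
    v_pev  = beta_price * 32 + fuel_saving   # $32k PEV
    v_conv = beta_price * 25                 # $25k conventional

    share = 1.0 / (1.0 + np.exp(v_conv - v_pev))  # binary logit choice probability
    print(f"median PEV share {np.median(share):.1%}, "
          f"5th-95th pct {np.quantile(share, 0.05):.1%}-{np.quantile(share, 0.95):.1%}")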
Using Docker Compose for the Simple Deployment of an Integrated Drug Target Screening Platform.
List, Markus
2017-06-10
Docker virtualization allows for software tools to be executed in an isolated and controlled environment referred to as a container. In Docker containers, dependencies are provided exactly as intended by the developer and, consequently, they simplify the distribution of scientific software and foster reproducible research. The Docker paradigm is that each container encapsulates one particular software tool. However, to analyze complex biomedical data sets, it is often necessary to combine several software tools into elaborate workflows. To address this challenge, several Docker containers need to be instantiated and properly integrated, which complicates the software deployment process unnecessarily. Here, we demonstrate how an extension to Docker, Docker Compose, can be used to mitigate these problems by providing a unified setup routine that deploys several tools in an integrated fashion. We demonstrate the power of this approach by example of a Docker Compose setup for a drug target screening platform consisting of five integrated web applications and shared infrastructure, deployable in just two lines of code.
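A minimal sketch of the pattern: one Compose file declares the multi-container stack, and a single command brings it up. The services and images below are invented placeholders, not the platform described in the paper; Docker with the Compose v2 plugin is assumed to be installed:

    import pathlib
    import subprocess
    import tempfile

    # Minimal two-service stack (hypothetical images); real platforms list every tool.
    COMPOSE = """\
    services:
      webapp:
        image: example/screening-ui:latest
        ports: ["8080:8080"]
        depends_on: [db]
      db:
        image: postgres:15
        environment:
          POSTGRES_PASSWORD: example
    """

    compose_file = pathlib.Path(tempfile.mkdtemp()) / "docker-compose.yml"
    compose_file.write_text(COMPOSE)

    # The promised "two lines": write the file, then bring the whole stack up.
    subprocess.run(["docker", "compose", "-f", str(compose_file), "up", "-d"], check=True)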
Workforce deployment--a critical organizational competency.
Harms, Roxanne
2009-01-01
Staff scheduling has historically been embedded within hospital operations, often defined by each new manager of a unit or program, and notably absent from the organization's practice and standards infrastructure and accountabilities of the executive team. Silvestro and Silvestro contend that "there is a need to recognize that hospital performance relies critically on the competence and effectiveness of roster planning activities, and that these activities are therefore of strategic importance." This article highlights the importance of including staff scheduling--or workforce deployment--in health care organizations' long-term strategic solutions to cope with the deepening workforce shortage (which is likely to hit harder than ever as the economy begins to recover). Viewing workforce deployment as a key organizational competency is a critical success factor for health care in the next decade, and the Workforce Deployment Maturity Model is discussed as a framework to enable organizations to measure their current capabilities, identify priorities and set goals for increasing organizational competency using a methodical and deliberate approach.
Software as a service approach to sensor simulation software deployment
NASA Astrophysics Data System (ADS)
Webster, Steven; Miller, Gordon; Mayott, Gregory
2012-05-01
Traditionally, military simulation has been problem domain specific. Executing an exercise currently requires multiple simulation software providers to specialize, deploy, and configure their respective implementations, integrate the collection of software to achieve a specific system behavior, and then execute for the purpose at hand. This approach leads to rigid system integrations which require simulation expertise for each deployment due to changes in location, hardware, and software. Our alternative is Software as a Service (SaaS) predicated on the virtualization of Night Vision and Electronic Sensors Directorate (NVESD) sensor simulations as an exemplary case. Management middleware elements layer self-provisioning, configuration, and integration services onto the virtualized sensors to present a system of services at run time. Given an Infrastructure as a Service (IaaS) environment, the enabled and managed system of simulations yields durable SaaS delivery without requiring user simulation expertise. Persistent SaaS simulations would provide on-demand availability to connected users, decrease integration costs and timelines, and allow the domain community to benefit from immediate deployment of lessons learned.
A Cyber-ITS Framework for Massive Traffic Data Analysis Using Cyber Infrastructure
Xia, Yingjie; Hu, Jia; Fontaine, Michael D.
2013-01-01
Traffic data is commonly collected from widely deployed sensors in urban areas. This brings up a new research topic, data-driven intelligent transportation systems (ITSs), which means to integrate heterogeneous traffic data from different kinds of sensors and apply it for ITS applications. This research, taking into consideration the significant increase in the amount of traffic data and the complexity of data analysis, focuses mainly on the challenge of solving data-intensive and computation-intensive problems. As a solution to the problems, this paper proposes a Cyber-ITS framework to perform data analysis on Cyber Infrastructure (CI), by nature parallel-computing hardware and software systems, in the context of ITS. The techniques of the framework include data representation, domain decomposition, resource allocation, and parallel processing. All these techniques are based on data-driven and application-oriented models and are organized as a component-and-workflow-based model in order to achieve technical interoperability and data reusability. A case study of the Cyber-ITS framework is presented later based on a traffic state estimation application that uses the fusion of massive Sydney Coordinated Adaptive Traffic System (SCATS) data and GPS data. The results prove that the Cyber-ITS-based implementation can achieve a high accuracy rate of traffic state estimation and provide a significant computational speedup for the data fusion by parallel computing. PMID:23766690
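The domain-decomposition and parallel-processing techniques the framework describes map naturally onto a worker pool: the road network is split into cells, and each worker fuses the loop-detector and GPS evidence for its cells. A toy Python sketch; the fusion rule and thresholds are invented for illustration:

    from multiprocessing import Pool

    def estimate_cell(cell):
        """Fuse loop counts with GPS probe speeds for one road cell (toy rule)."""
        cell_id, loop_flow, gps_speed = cell
        congested = loop_flow > 1200 and gps_speed < 20.0
        return cell_id, ("congested" if congested else "free-flow")

    # Hypothetical decomposed domain: (cell id, veh/h from loops, km/h from GPS probes)
    cells = [("c01", 1450, 14.2), ("c02", 610, 47.8), ("c03", 1310, 18.9)]

    if __name__ == "__main__":
        with Pool() as pool:  # parallel processing across the decomposed domain
            for cell_id, state in pool.map(estimate_cell, cells):
                print(cell_id, state)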
NASA Astrophysics Data System (ADS)
Zhou, Hao; Hirose, Mitsuhito; Greenwood, William; Xiao, Yong; Lynch, Jerome; Zekkos, Dimitrios; Kamat, Vineet
2016-04-01
Unmanned aerial vehicles (UAVs) can serve as a powerful mobile sensing platform for assessing the health of civil infrastructure systems. To date, the majority of their uses have been dedicated to vision- and laser-based spatial imaging using on-board cameras and LiDAR units, respectively. Comparatively less work has focused on integrating other sensing modalities relevant to structural monitoring applications. The overarching goal of this study is to explore the ability of UAVs to deploy a network of wireless sensors on structures for controlled vibration testing. The study develops a UAV platform with an integrated robotic gripper that can install wireless sensors on structures, drop a weight to introduce impact loads, and uninstall wireless sensors for reinstallation elsewhere. A pose estimation algorithm is embedded in the UAV to estimate its location during sensor placement and impact load introduction. The Martlet wireless sensor network architecture is integrated with the UAV to provide it with a mobile sensing capability. The UAV is programmed to command field-deployed Martlets, aggregate and temporarily store data from the wireless sensor network, and communicate data to a fixed base station on site. This study demonstrates the integrated UAV system on a simply supported beam in the lab, with Martlet wireless sensors placed by the UAV and impact load testing performed. The study verifies the feasibility of the integrated UAV-wireless monitoring system architecture, with accurate modal characteristics of the beam estimated by modal analysis.
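The closing step of the study above, modal analysis of the impact response, reduces in its simplest form to peak-picking on the response spectrum. The sketch below illustrates that step with numpy/scipy on a synthetic two-mode signal; the sampling rate and mode frequencies are invented stand-ins, not the Martlet data.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 1000.0                      # assumed wireless-sensor sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)
# Synthetic stand-in for an impact response: two decaying modes at 12 Hz and 47 Hz.
accel = (np.exp(-0.8 * t) * np.sin(2 * np.pi * 12 * t)
         + 0.5 * np.exp(-1.2 * t) * np.sin(2 * np.pi * 47 * t)
         + 0.05 * np.random.randn(t.size))

# Windowed FFT magnitude spectrum of the free-decay response.
spectrum = np.abs(np.fft.rfft(accel * np.hanning(accel.size)))
freqs = np.fft.rfftfreq(accel.size, d=1 / fs)

# Pick dominant spectral peaks as modal frequency estimates.
peaks, _ = find_peaks(spectrum, height=spectrum.max() * 0.1, distance=50)
print("Estimated modal frequencies (Hz):", freqs[peaks])
```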
Sward, Katherine A; Newth, Christopher JL; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael
2015-01-01
Objectives To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Material and Methods Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Results Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data transferred consistently using the data dictionary, while the remaining 1% needed human curation. Conclusions Use of virtualized, open-source, secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. PMID:25796596
NASA Astrophysics Data System (ADS)
Takeuchi, Kazuya; Masuda, Arata; Akahori, Shunsuke; Higashi, Yoshiyuki; Miura, Nanako
2017-04-01
This paper proposes an aerial robot that can land on and cling to a steel structure using electro-permanent magnets, behaving as a vibration sensor probe for use in vibration-based structural health monitoring. In the last decade, structural health monitoring techniques have been studied intensively to tackle the serious social issue that much of the infrastructure in advanced countries is deteriorating. In the typical concept of structural health monitoring, vibration sensors such as accelerometers are installed in the structure to continuously collect the dynamic response of the operating structure and detect symptoms of structural damage. It is unreasonable, however, to permanently deploy sensors on numerous structures, because most of them, except those of primary importance, do not need continuous measurement and evaluation. In this study, the aerial robot plays the role of a mobile, detachable sensor unit. Design guidelines for an aerial robot that performs vibration measurement are derived from an analysis model of the robot. Experiments evaluate the frequency response function of the acceleration measured by the robot with respect to the acceleration at the point where the robot adheres. The experimental results show that the prototype robot can measure the acceleration of the host structure accurately up to 150 Hz.
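The frequency response function evaluated in the experiments above is conventionally estimated with the H1 estimator, the ratio of the cross-spectral density between the two acceleration signals to the auto-spectral density of the reference. A minimal sketch with scipy follows; the sampling rate, test signals, and the 150 Hz band are stand-ins chosen to mirror the abstract, not the paper's measurement chain.

```python
import numpy as np
from scipy.signal import csd, welch

fs = 2000.0                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
x = rng.standard_normal(int(10 * fs))        # reference accel at the adhesion point
# Stand-in for the robot-measured accel: smoothed + noisy copy of the reference.
y = np.convolve(x, np.ones(5) / 5, mode="same") + 0.1 * rng.standard_normal(x.size)

f, Pxy = csd(x, y, fs=fs, nperseg=4096)      # cross-spectral density
_, Pxx = welch(x, fs=fs, nperseg=4096)       # input auto-spectral density
H1 = Pxy / Pxx                               # H1 frequency response estimator

band = f <= 150                              # the band validated in the paper
print("Mean |H1| below 150 Hz:", np.abs(H1[band]).mean())
```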
State investments in high-technology job growth.
Leicht, Kevin T; Jenkins, J Craig
2017-07-01
Since the early 1970s, state and local governments have launched an array of economic development programs designed to promote high-technology development. The question our analysis addresses is whether these programs promote long-term high-technology employment growth net of state location and agglomeration advantages. Proponents advocate an infrastructure strategy, which invests in public research and specialized infrastructure to attract and grow new high-technology industries in specific locations, and a more decentralized entrepreneurial strategy, which reinforces local agglomeration capacities by investing in new enterprises and products and promoting the development of local networks and partnerships. Our results support the entrepreneurial strategy, suggesting that state governments can accelerate high-technology development by adopting market-supportive programs that complement private-sector initiatives. In addition to the positive direct benefits of technology deployment/transfer programs and SBIR programs, entrepreneurial programs drive change in high-technology employment in concert with existing locational and agglomeration advantages. Rural (i.e., low-population-density) states tend to benefit from technology development programs. Infrastructure strategy programs also facilitate high-technology job growth in places where local advantages already exist. Our results suggest that critics of industrial policy are correct that high-technology growth is organic and endogenous, yet state governments are able to "pick winners and losers" in ways that grow their local economies. Copyright © 2017 Elsevier Inc. All rights reserved.
A retrospective analysis of funding and focus in US advanced fission innovation
NASA Astrophysics Data System (ADS)
Abdulla, A.; Ford, M. J.; Morgan, M. G.; Victor, D. G.
2017-08-01
Deep decarbonization of the global energy system will require large investments in energy innovation and the deployment of new technologies. While many studies have focused on the expenditure that will be needed, here we focus on how government has spent public sector resources on innovation for a key carbon-free technology: advanced nuclear. We focus on nuclear power because it has been contributing almost 20% of total US electric generation, and because the US program in this area has historically been the world's leading effort. Using extensive data acquired through the Freedom of Information Act, we reconstruct the budget history of the Department of Energy's program to develop advanced, non-light water nuclear reactors. Our analysis shows that, despite spending $2 billion since the late 1990s, no advanced design is ready for deployment. Even if the program had been well designed, the funding still would have been insufficient to demonstrate even one non-light water technology. It has violated much of the wisdom about the effective execution of innovation programs: annual funding varies fourfold, priorities are ephemeral, incumbent technologies and fuels are prized over innovation, and infrastructure spending consumes half the budget. Absent substantial changes, the possibility of US-designed advanced reactors playing a role in decarbonization by mid-century is low.
Martin, J B; Wilkins, A S; Stawski, S K
1998-08-01
The evolving health care environment demands that health care organizations fully utilize information technologies (ITs). The effective deployment of IT requires the development and implementation of a comprehensive IT strategic plan. A number of approaches to health care IT strategic planning exist, but they are outdated or incomplete. The component alignment model (CAM) introduced here recognizes the complexity of today's health care environment, emphasizing continuous assessment and realignment of seven basic components: external environment, emerging ITs, organizational infrastructure, mission, IT infrastructure, business strategy, and IT strategy. The article provides a framework by which health care organizations can develop an effective IT strategic planning process.
Hydrogen Fuel Cell Performance as Telecommunications Backup Power in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurtz, Jennifer; Saur, Genevieve; Sprik, Sam
2015-03-01
Working in collaboration with the U.S. Department of Energy (DOE) and industry project partners, the National Renewable Energy Laboratory (NREL) acts as the central data repository for the data collected from real-world operation of fuel cell backup power systems. With American Recovery and Reinvestment Act of 2009 (ARRA) co-funding awarded through DOE's Fuel Cell Technologies Office, more than 1,300 fuel cell units were deployed over a three-plus-year period in stationary, material handling equipment, auxiliary power, and backup power applications. This surpassed a Fuel Cell Technologies Office ARRA objective to spur commercialization of an early market technology by installing 1,000 fuel cell units across several different applications, including backup power. By December 2013, 852 backup power units out of 1,330 fuel cell units deployed were providing backup service, mainly for telecommunications towers. For 136 of the fuel cell backup units, project participants provided detailed operational data to the National Fuel Cell Technology Evaluation Center for analysis by NREL's technology validation team. NREL analyzed operational data collected from these government co-funded demonstration projects to characterize key fuel cell backup power performance metrics, including reliability and operation trends, and to highlight the business case for using fuel cells in these early market applications. NREL's analyses include these critical metrics, along with deployment, U.S. grid outage statistics, and infrastructure operation.
A cyber infrastructure for the SKA Telescope Manager
NASA Astrophysics Data System (ADS)
Barbosa, Domingos; Barraca, João P.; Carvalho, Bruno; Maia, Dalmiro; Gupta, Yashwant; Natarajan, Swaminathan; Le Roux, Gerhard; Swart, Paul
2016-07-01
The Square Kilometre Array Telescope Manager (SKA TM) will be responsible for assisting SKA Operations and Observation Management, carrying out system diagnosis and collecting monitoring and control data from the SKA subsystems and components. To provide adequate compute resources, scalability, operational continuity and high availability, as well as strict quality of service, the TM cyber-infrastructure (embodied in the Local Infrastructure, LINFRA) consists of COTS hardware and infrastructural software (for example: server monitoring software, host operating system, virtualization software, device firmware), providing a specially tailored Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solution. The TM infrastructure provides services in the form of computational power, software-defined networking, power and storage abstractions, and high-level, state-of-the-art IaaS and PaaS management interfaces. This cyber platform will be tailored to each of the two SKA Phase 1 telescope instances (SKA_MID in South Africa and SKA_LOW in Australia), each presenting different computational and storage infrastructures conditioned by location. The platform will provide a compute model enabling TM to manage the deployment and execution of its multiple components (observation scheduler, proposal submission tools, M&C components, forensic tools, several databases, etc.). In this sense, the TM LINFRA is primarily focused on the provision of isolated instances, mostly resorting to virtualization technologies, while defaulting to bare hardware where specifically required for performance, security, availability, or other requirements.
The road less taken: modularization and waterways as a domestic disaster response mechanism.
Donahue, Donald A; Cunnion, Stephen O; Godwin, Evelyn A
2013-01-01
Preparedness scenarios project the need for significant healthcare surge capacity. Current planning draws heavily from the military model, leveraging deployable infrastructure to augment or replace extant capabilities. This approach would likely prove inadequate in a catastrophic disaster, as the military model relies on forewarning and an extended deployment cycle. Local equipping for surge capacity is prohibitively costly while movement of equipment can be subject to a single point of failure. Translational application of maritime logistical techniques and an ancient mode of transportation can provide a robust and customizable approach to disaster relief for greater than 90 percent of the American population.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werner, James Elmer; Johnson, Stephen Guy; Dwight, Carla Chelan
Radioisotope power systems (RPSs) have enabled missions requiring reliable, long-lasting power in remote, harsh environments such as space since the early 1960s. Costs for RPSs are high, but are often misrepresented due to the complexity of space missions and inconsistent charging practices among the many and changing participant organizations over the years. This paper examines historical documentation associated with two past successful flight missions, each with a different RPS design, to provide a realistic cost basis for RPS production and deployment. The missions and their respective RPSs are Cassini, launched in 1997, which uses the general purpose heat source (GPHS) radioisotope thermoelectric generator (RTG), and Mars Science Laboratory (MSL), launched in 2011, which uses the multi-mission RTG (MMRTG). Actual costs in their respective years are discussed for each of the two RTG designs and the missions they enabled, and present-day values to 2015 are then computed to compare the costs. Costs for this analysis were categorized into two areas: development of the specific RTG technology, and production and deployment of an RTG. This latter category includes material costs for the flight components (including Pu-238 and fine weave pierced fabric (FWPF)); manufacturing of flight components; assembly, testing, and transport of the flight RTG(s); ground operations involving the RTG(s) through launch; nuclear safety analyses for the launch and for the facilities housing the RTG(s) during all phases of ground operations; DOE's support for NEPA analyses; and radiological contingency planning. This analysis results in fairly similar 2015 normalized costs for the production and deployment of an RTG: approximately $118M for the GPHS-RTG and $109M for the MMRTG. In addition to these two successful flight missions, the costs for development of the MMRTG are included to serve as a future reference. Note that the development costs included herein for the MMRTG do not include costs from NASA staff or facilities for their development efforts; they only include the amounts costed by DOE and DOE contractors. The 2015 value for MMRTG development is $83M. Both of the RPS types analyzed herein use the general purpose heat source (GPHS) module as the "heart of the RPS." The estimates presented herein do not include development costs for the GPHS. These estimates also do not include the RPS infrastructure cost to maintain the facilities, equipment, and personnel necessary to enable the production of RPSs, except to the extent that the infrastructure is utilized during production campaigns to provide RPSs for missions. It was not until after the Cassini mission that an RPS infrastructure funding structure was defined and funded separately from mission-specific elements. The information presented herein could allow for more accurate budget planning estimates for space missions being considered over the next decade and beyond.
NASA Astrophysics Data System (ADS)
Sabeur, Zoheir; Middleton, Stuart; Veres, Galina; Zlatev, Zlatko; Salvo, Nicola
2010-05-01
The advancement of smart sensor technology in the last few years has led to an increase in the deployment of affordable sensors for monitoring the environment around Europe. This is generating large amounts of sensor observation information and inevitably leads to the problems of how to manage large volumes of data and how to make sense of the data for decision-making. In addition, the various European directives which regulate human activities in the environment (the Water Framework Directive, Bathing Water Directive, Habitats Directive, etc.) and the INSPIRE Directive on spatial information management have implicitly led the designated environment agencies and authorities of the European Member States to put in place new sensor monitoring infrastructure and share information about the environmental regions under their statutory responsibilities. They will need to work across borders and collectively reach environmental quality standards. They will also need to report regularly to the EC on the quality of the environments for which they are responsible and make such information accessible to members of the public. In recent years, early pioneering work on the design of service-oriented architectures using sensor networks has been achieved. Information web-service infrastructures using existing data catalogues and web-GIS map services can now be enriched with the deployment of new sensor observation, data fusion and modelling services using OGC standards. Services that describe sensor observations and intelligent data processing using data fusion techniques can now be implemented to provide added-value information, with spatio-temporal uncertainties, to the next generation of decision support service systems. Such decision support service systems are now key to deploy across Europe in order to comply with EU environmental regulations and INSPIRE. In this paper, data fusion services using OGC standards with sensor observation data streams are described in the context of a geo-distributed service infrastructure specialising in multiple environmental risk management and decision support. The sensor data fusion services are deployed and validated in two use cases, concerned respectively with: 1) microbial risk forecasts in bathing waters; and 2) geohazards in urban zones during underground tunneling activities. This research was initiated in the SANY Integrated Project (www.sany-ip.org) and funded by the European Commission under the 6th Framework Programme.
Can Economics Provide Insights into Trust Infrastructure?
NASA Astrophysics Data System (ADS)
Vishik, Claire
Many security technologies require infrastructure for authentication, verification, and other processes. In many cases, viable and innovative security technologies are never adopted on a large scale because the necessary infrastructure is slow to emerge. Analyses of such technologies typically focus on their technical flaws, and research emphasizes innovative approaches to stronger implementation of the core features. However, in many cases the success of an adoption pattern depends on non-technical issues rather than technology: lack of economic incentives, difficulty in finding initial investment, and inadequate government support. While a growing body of research is dedicated to the economics of security and privacy in general, few theoretical studies in this area have been completed, and even fewer look at the economics of "trust infrastructure" beyond simple "cost of ownership" models. This exploratory paper looks at some approaches in theoretical economics to determine whether they can provide useful insights into which security infrastructure technologies and architectures have the best chance of being adopted. We attempt to discover whether models used in theoretical economics can help inform technology developers of the optimal business models that offer a better chance for quick infrastructure deployment.
A National Strategy to Develop Pragmatic Clinical Trials Infrastructure
Guise, Jeanne‐Marie; Dolor, Rowena J.; Meissner, Paul; Tunis, Sean; Krishnan, Jerry A.; Pace, Wilson D.; Saltz, Joel; Hersh, William R.; Michener, Lloyd; Carey, Timothy S.
2014-01-01
Abstract An important challenge in comparative effectiveness research is the lack of infrastructure to support pragmatic clinical trials, which compare interventions in usual practice settings and subjects. These trials present challenges that differ from those of classical efficacy trials, which are conducted under ideal circumstances, in patients selected for their suitability, and with highly controlled protocols. In 2012, we launched a 1‐year learning network to identify high‐priority pragmatic clinical trials and to deploy research infrastructure through the NIH Clinical and Translational Science Awards Consortium that could be used to launch and sustain them. The network and infrastructure were initiated as a learning ground and shared resource for investigators and communities interested in developing pragmatic clinical trials. We followed a three‐stage process of developing the network, prioritizing proposed trials, and implementing learning exercises that culminated in a 1‐day network meeting at the end of the year. The year‐long project resulted in five recommendations related to developing the network, enhancing community engagement, addressing regulatory challenges, advancing information technology, and developing research methods. The recommendations can be implemented within 24 months and are designed to lead toward a sustained national infrastructure for pragmatic trials. PMID:24472114
Enterprise-class Digital Imaging and Communications in Medicine (DICOM) image infrastructure.
York, G; Wortmann, J; Atanasiu, R
2001-06-01
Most current picture archiving and communication systems (PACS) are designed for a single department or a single modality. Few PACS installations have been deployed that support the needs of the hospital or the entire Integrated Delivery Network (IDN). The authors propose a new image management architecture that can support a large, distributed enterprise.
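To make the kind of enterprise-wide query such an architecture must serve concrete, the sketch below issues a DICOM C-FIND study query using the open-source pydicom/pynetdicom libraries. The archive host, port, and AE titles are hypothetical, and the libraries postdate the article; this illustrates the standard DICOM query service generally, not the authors' system.

```python
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

# Hypothetical enterprise archive endpoint and application entity titles.
ae = AE(ae_title="WORKSTATION")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientID = "12345"        # find all studies for one patient
query.StudyDate = ""             # empty: ask the archive to return this attribute

assoc = ae.associate("pacs.example.org", 104, ae_title="ENTERPRISE_PACS")
if assoc.is_established:
    for status, identifier in assoc.send_c_find(
            query, StudyRootQueryRetrieveInformationModelFind):
        if status and status.Status in (0xFF00, 0xFF01):  # "pending" = a match
            print(identifier.StudyDate)
    assoc.release()
```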
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1993-09-15
This report contains an extensive evaluation of GE advanced boiling water reactor plants, prepared for the United States Department of Energy. The general areas covered in this report are: core and system performance; fuel cycle; infrastructure and deployment; and safety and environmental approval.
DOT National Transportation Integrated Search
1998-01-01
To achieve its goal of integrating the intelligent transportation infrastructure throughout the Valley of the Sun, AZTech had to build links. AZTech's success is highly dependent on its ability to establish strong links throughout its partnership of ...
Next Steps in Network Time Synchronization For Navy Shipboard Applications
2008-12-01
40th Annual Precise Time and Time Interval (PTTI) Meeting. ... dynamic manner than in previous designs. This new paradigm creates significant network time synchronization challenges. The Navy has been ... deploying the Network Time Protocol (NTP) in shipboard computing infrastructures to meet the current network time synchronization requirements
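For readers unfamiliar with NTP in practice, a minimal client query using the third-party Python ntplib package looks like the sketch below. The pool server name is illustrative only; a shipboard deployment would point at its own local time servers.

```python
import ntplib
from time import ctime

# Query an NTP server and report the local clock offset and round-trip delay.
# pool.ntp.org is illustrative; a shipboard network would use its own servers.
client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)

print("Server time :", ctime(response.tx_time))
print("Clock offset:", f"{response.offset:+.6f} s")
print("Round trip  :", f"{response.delay:.6f} s")
```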
Security Shift in Future Network Architectures
2010-11-01
Tim Hartog, M.Sc., Information Security Dept., TNO Information and ... In current practice, military communication infrastructures are deployed as stand-alone networked information systems. Network-Enabled Capabilities (NEC) and ... information architects and security specialists about the separation of network and information security, the consequences of this shift, and our view ...
DOT National Transportation Integrated Search
2012-02-01
For rapid deployment of bridge scan missions, sub-inch aerial imaging using small-format aerial photography is suggested. Under-belly photography is used to generate high-resolution aerial images that can be geo-referenced and used for quantifyin...
Applications of the pipeline environment for visual informatics and genomics computations
2011-01-01
Background Contemporary informatics and genomics research require efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the visual graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols. Results This paper reports on the applications of the LONI Pipeline environment to address two informatics challenges: graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures; the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface; and the integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources, providing a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls. Conclusions The LONI Pipeline environment (http://pipeline.loni.ucla.edu) provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. The Pipeline client-server model provides computational power to a broad spectrum of informatics investigators: experienced developers and novice users, users with or without access to advanced computational resources (e.g., Grid, data), as well as basic and translational scientists. The open development, validation and dissemination of computational networks (pipeline workflows) facilitates the sharing of knowledge, tools, protocols and best practices, and enables the unbiased validation and replication of scientific findings by the entire community. PMID:21791102
The Satellite Data Thematic Core Service within the EPOS Research Infrastructure
NASA Astrophysics Data System (ADS)
Manunta, Michele; Casu, Francesco; Zinno, Ivana; De Luca, Claudio; Buonanno, Sabatino; Zeni, Giovanni; Wright, Tim; Hooper, Andy; Diament, Michel; Ostanciaux, Emilie; Mandea, Mioara; Walter, Thomas; Maccaferri, Francesco; Fernandez, Josè; Stramondo, Salvatore; Bignami, Christian; Bally, Philippe; Pinto, Salvatore; Marin, Alessandro; Cuomo, Antonio
2017-04-01
EPOS, the European Plate Observing System, is a long-term plan to facilitate the integrated use of data, data products, software and services, available from distributed Research Infrastructures (RI), for solid Earth science in Europe. Indeed, EPOS integrates a large number of existing European RIs belonging to several fields of Earth science, from seismology to geodesy, near-fault and volcanic observatories, as well as anthropogenic hazards. The EPOS vision is that the integration of the existing national and trans-national research infrastructures will increase access to and use of the multidisciplinary data recorded by the solid Earth monitoring networks, acquired in laboratory experiments and/or produced by computational simulations. The establishment of EPOS will foster the interoperability of products and services in the Earth science field for a worldwide community of users. Accordingly, the EPOS aim is to integrate the diverse and advanced European Research Infrastructures for solid Earth science, and build on new e-science opportunities to monitor and understand the dynamic and complex solid-Earth system. One of the EPOS Thematic Core Services (TCS), referred to as Satellite Data, aims at developing, implementing and deploying advanced satellite data products and services, mainly based on Copernicus data (namely Sentinel acquisitions), for the Earth science community. This work presents the technological enhancements, fostered by EPOS, to deploy effective satellite services in a harmonized and integrated way. In particular, the Satellite Data TCS will deploy five services, EPOSAR, GDM, COMET, 3D-Def and MOD, which are mainly based on the exploitation of SAR data acquired by the Sentinel-1 constellation and designed to provide information on Earth surface displacements. The planned services will provide both advanced DInSAR products (deformation maps, velocity maps, deformation time series) and value-added measurements (source models, 3D displacement maps, seismic hazard maps). Moreover, the services will release both on-demand and systematic products. The latter will be generated and made available to users on a continuous basis, by processing each Sentinel-1 acquisition as soon as it is available, over a defined number of areas of interest; the former will allow users to select data, areas, and time periods to carry out their own analyses via an on-line platform. The satellite components will be integrated within the EPOS infrastructure through a common and harmonized interface that will allow users to search, process and share remote sensing images and results. This gateway to the satellite services will be the ESA Geohazards Exploitation Platform (GEP), a cloud-based platform for satellite Earth observations designed to support the scientific community in the understanding of high-impact natural disasters. The Satellite Data TCS will use GEP as the common interface toward the main EPOS portal to provide EPOS users not only with data products but also with relevant processing and visualisation software, thus allowing users to gather and process large datasets on a cloud-computing infrastructure without any need to download them locally.
NASA Technical Reports Server (NTRS)
Thangavelu, Madhu
1994-01-01
Traditional concepts of lunar bases describe scenarios where components of the bases are landed on the lunar surface, one at a time, and then put together to form a complete stationary lunar habitat. Recently, some concepts have described the advantages of operating a mobile or 'roving' lunar base. Such a base vastly improves the exploration range from a primary lunar base. Roving bases would also allow the crew to first deploy, test, operationally certify, and then regularly maintain, service, and evolve long life-cycle facilities like observatories or other science payload platforms that are operated far apart from each other across the extraterrestrial surface. The Nomad Explorer is such a mobile lunar base. This paper describes the architectural program of the Nomad Explorer, its advantages over a stationary lunar base, and some of the embedded system concepts which help the roving base to speedily establish a global extraterrestrial infrastructure. A number of modular autonomous logistics landers will carry deployable or erectable payloads, service, and logistically resupply the Nomad Explorer at regular intercepts along the traverse. Starting with the deployment of science experiments and telecommunication networks, and the manned emplacement of a variety of remote outposts using a unique EVA Bell system that enhances manned EVA, the Nomad Explorer architecture suggests the capability for a rapid global development of the extraterrestrial body. The Moon and Mars are candidates for this 'mission oriented' strategy. The lunar case is emphasized in this paper.
Virtual Astronomy: The Legacy of the Virtual Astronomical Observatory
NASA Astrophysics Data System (ADS)
Hanisch, Robert J.; Berriman, G. B.; Lazio, J.; Szalay, A. S.; Fabbiano, G.; Plante, R. L.; McGlynn, T. A.; Evans, J.; Emery Bunn, S.; Claro, M.; VAO Project Team
2014-01-01
Over the past ten years, the Virtual Astronomical Observatory (VAO, http://usvao.org) and its predecessor, the National Virtual Observatory (NVO), have developed and operated a software infrastructure consisting of standards and protocols for data and science software applications. The Virtual Observatory (VO) makes it possible to develop robust software for the discovery, access, and analysis of astronomical data. Every major publicly funded research organization in the US and worldwide has deployed at least some components of the VO infrastructure; tens of thousands of VO-enabled queries for data are invoked daily against catalog, image, and spectral data collections; and groups within the community have developed tools and applications building upon the VO infrastructure. Further, NVO and VAO have helped ensure access to data internationally by co-founding the International Virtual Observatory Alliance (IVOA, http://ivoa.net). The products of the VAO are being archived in a publicly accessible repository. Several science tools developed by the VAO will continue to be supported by the organizations that developed them: the Iris spectral energy distribution package (SAO), the Data Discovery Tool (STScI/MAST, HEASARC), and the scalable cross-comparison service (IPAC). The final year of VAO is focused on development of the data access protocol for data cubes, creation of Python language bindings to VO services, and deployment of a cloud-like data storage service that links to VO data discovery tools (SciDrive). We encourage the community to make use of these tools and services, to extend and improve them, and to carry on with the vision for virtual astronomy: astronomical research enabled by easy access to distributed data and computational resources. Funding for VAO development and operations has been provided jointly by NSF and NASA since May 2010. NSF funding will end in September 2014, though with the possibility of competitive solicitations for VO-based tool development. NASA intends to maintain core VO services such as the resource registry (the index of VO-accessible data collections), monitoring services, and a website as part of the remit of HEASARC, IPAC (IRSA, NED), and MAST.
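Many of the data-discovery services described above implement the IVOA Simple Cone Search protocol, whose query interface is just three HTTP GET parameters: RA and DEC in degrees (J2000) and a search radius SR. The sketch below shows the idea against a hypothetical endpoint; real service URLs come from the VO registry.

```python
import requests

# IVOA Simple Cone Search: an HTTP GET with RA, DEC (degrees, J2000) and
# a search radius SR (degrees); the service returns matches as a VOTable.
# The endpoint below is hypothetical; real URLs come from the VO registry.
params = {"RA": 180.0, "DEC": 2.5, "SR": 0.05}
resp = requests.get("https://vo.example.org/scs", params=params, timeout=30)
resp.raise_for_status()

print(resp.headers.get("Content-Type"))  # typically an XML VOTable document
print(resp.text[:200])                   # first bytes of the VOTable
```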
Monteiro Gil, Octávia; Vaz, Pedro; Romm, Horst; De Angelis, Cinzia; Antunes, Ana Catarina; Barquinero, Joan-Francesc; Beinke, Christina; Bortolin, Emanuela; Burbidge, Christopher Ian; Cucu, Alexandra; Della Monaca, Sara; Domene, Mercedes Moreno; Fattibene, Paola; Gregoire, Eric; Hadjidekova, Valeria; Kulka, Ulrike; Lindholm, Carita; Meschini, Roberta; M'Kacher, Radhia; Moquet, Jayne; Oestreicher, Ursula; Palitti, Fabrizio; Pantelias, Gabriel; Montoro Pastor, Alegria; Popescu, Irina-Anca; Quattrini, Maria Cristina; Ricoul, Michelle; Rothkamm, Kai; Sabatier, Laure; Sebastià, Natividad; Sommer, Sylwester; Terzoudi, Georgia; Testa, Antonella; Trompier, François; Vral, Anne
2017-01-01
To identify and assess, among the participants in the RENEB (Realizing the European Network of Biodosimetry) project, the emergency preparedness, response capabilities and resources that can be deployed in the event of a radiological or nuclear accident/incident affecting a large number of individuals. These capabilities include available biodosimetry techniques, infrastructure, human resources (existing trained staff), financial and organizational resources (including the role of national contact points and their articulation with other stakeholders in emergency response) as well as robust quality control/assurance systems. A survey was prepared and sent to the RENEB partners in order to acquire information about the existing, operational techniques and infrastructure in the laboratories of the different RENEB countries and to assess the capacity of response in the event of radiological or nuclear accident involving mass casualties. The survey focused on several main areas: laboratory's general information, country and staff involved in biological and physical dosimetry; retrospective assays used, the number of assays available per laboratory and other information related to biodosimetry and emergency preparedness. Following technical intercomparisons amongst RENEB members, an update of the survey was performed one year later concerning the staff and the available assays. The analysis of RENEB questionnaires allowed a detailed assessment of existing capacity of the RENEB network to respond to nuclear and radiological emergencies. This highlighted the key importance of international cooperation in order to guarantee an effective and timely response in the event of radiological or nuclear accidents involving a considerable number of casualties. The deployment of the scientific and technical capabilities existing within the RENEB network members seems mandatory, to help other countries with less or no capacity for biological or physical dosimetry, or countries overwhelmed in case of a radiological or nuclear accident involving a large number of individuals.
Towards a Multi-Mission, Airborne Science Data System Environment
NASA Astrophysics Data System (ADS)
Crichton, D. J.; Hardman, S.; Law, E.; Freeborn, D.; Kay-Im, E.; Lau, G.; Oswald, J.
2011-12-01
NASA earth science instruments are increasingly relying on airborne missions. Traditionally, however, there has been limited common infrastructure support available to principal investigators in the area of science data systems. As a result, each investigator has been required to develop their own computing infrastructure for the science data system. Typically there is little software reuse, and many projects lack sufficient resources to provide a robust infrastructure to capture, process, distribute and archive the observations acquired from airborne flights. At NASA's Jet Propulsion Laboratory (JPL), we have been developing a multi-mission data system infrastructure for airborne instruments called the Airborne Cloud Computing Environment (ACCE). ACCE encompasses the end-to-end lifecycle covering planning, provisioning of data system capabilities, and support for scientific analysis in order to improve the quality, cost effectiveness, and capabilities that enable new scientific discovery and research in earth observation. This includes improving data system interoperability across instruments. A principal characteristic is an agile infrastructure architected to allow a variety of configurations, from locally installed compute and storage services to services provisioned via the "cloud" from vendors such as Amazon.com. Investigators often have different needs that require a flexible configuration. The data system infrastructure is built on Apache's Object Oriented Data Technology (OODT) suite of components, which has been used for a number of spaceborne missions and provides a rich set of open-source software components and services for constructing science processing and data management systems. In 2010, a partnership was formed between the ACCE team and the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) mission to support its data processing and data management needs. A principal goal is to provide support for the Fourier Transform Spectrometer (FTS) instrument, which will produce over 700,000 soundings over the life of the three-year mission. The cost to purchase and operate a cluster-based system to generate Level 2 Full Physics products from these data was prohibitive. Through an evaluation of cloud computing solutions, Amazon's Elastic Compute Cloud (EC2) was selected for the CARVE deployment. As the ACCE infrastructure is developed and extended for airborne missions, the experience of working with CARVE has provided a number of lessons learned and has reinforced both the unique aspects of airborne missions and the importance of a cost-effective, flexible multi-mission capability that leverages emerging capabilities in cloud computing, workflow management, and distributed computing.
DUMAND-II (deep underwater muon and neutrino detector) progress report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, K.K.; The DUMAND Collaboration
1995-07-10
The DUMAND II detector will search for astronomical sources of high energy neutrinos. Successful deployment of the basic infrastructure, including the shore cable, the underwater junction box, and an environmental module was accomplished in December 1993. One optical module string was also deployed and operated, logging data for about 10 hours. The underwater cable was connected to the shore station, where we were able to successfully exercise system controls and log further environmental data. After this time, water leaking into the electronics control module for the deployed string disabled the string electrical system. The acquired data are consistent with the expected rate of downgoing muons, and our ability to reconstruct muons was demonstrated. The measured acoustical backgrounds are consistent with expectation, which should allow acoustical detection of nearby PeV particle cascades. The disabled string has been recovered and is undergoing repairs ashore. We have identified the source of the water leak and implemented additional testing and QC procedures to ensure no repetition in our next deployment. We will be ready to deploy three strings and begin continuous data taking in late 1994 or early 1995. © 1995 American Institute of Physics.
NASA Astrophysics Data System (ADS)
The CHAIN-REDS Project is organising a workshop on "e-Infrastructures for e-Sciences" focusing on Cloud Computing and Data Repositories under the aegis of the European Commission and in co-location with the International Conference on e-Science 2013 (IEEE2013) that will be held in Beijing, P.R. of China on October 17-22, 2013. The core objective of the CHAIN-REDS project is to promote, coordinate and support the effort of a critical mass of non-European e-Infrastructures for Research and Education to collaborate with Europe addressing interoperability and interoperation of Grids and other Distributed Computing Infrastructures (DCI). From this perspective, CHAIN-REDS will optimise the interoperation of European infrastructures with those present in 6 other regions of the world, both from a development and use point of view, and catering to different communities. Overall, CHAIN-REDS will provide input for future strategies and decision-making regarding collaboration with other regions on e-Infrastructure deployment and availability of related data; it will raise the visibility of e-Infrastructures towards intercontinental audiences, covering most of the world and will provide support to establish globally connected and interoperable infrastructures, in particular between the EU and the developing regions. Organised by IHEP, INFN and Sigma Orionis with the support of all project partners, this workshop will aim at: - Presenting the state of the art of Cloud computing in Europe and in China and discussing the opportunities offered by having interoperable and federated e-Infrastructures; - Exploring the existing initiatives of Data Infrastructures in Europe and China, and highlighting the Data Repositories of interest for the Virtual Research Communities in several domains such as Health, Agriculture, Climate, etc.
Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments
Zapater, Marina; Sanchez, Cesar; Ayala, Jose L.; Moya, Jose M.; Risco-Martín, José L.
2012-01-01
Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time. PMID:23112621
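The workload assignment idea can be sketched as a small greedy heuristic: send each low-demand task to the cheapest node (in energy per unit of work) that still has capacity, falling back to the data center only when no WSN-tier node fits. The node model and cost figures below are invented for illustration; the paper's actual optimizer is more elaborate.

```python
# Hypothetical node model: (name, spare_capacity, joules_per_unit_of_work).
nodes = [
    ("wsn-node-a", 4, 0.8),      # idle low-resource node, cheap per unit
    ("wsn-node-b", 2, 1.0),
    ("datacenter", 10**6, 5.0),  # always has room, but costly
]

def assign(tasks):
    """Greedy energy-minimizing assignment of (task_id, demand) pairs."""
    capacity = {name: cap for name, cap, _ in nodes}
    cost = {name: c for name, _, c in nodes}
    plan, total_energy = {}, 0.0
    for task, demand in tasks:
        # Cheapest node that still has room for this task's demand.
        feasible = [n for n, _, _ in nodes if capacity[n] >= demand]
        best = min(feasible, key=lambda n: cost[n])
        capacity[best] -= demand
        plan[task] = best
        total_energy += demand * cost[best]
    return plan, total_energy

plan, energy = assign([("t1", 2), ("t2", 3), ("t3", 1), ("t4", 4)])
print(plan, f"total energy: {energy:.1f} J")
```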
NASA Astrophysics Data System (ADS)
Sipos, Roland; Govi, Giacomo; Franzoni, Giovanni; Di Guida, Salvatore; Pfeiffer, Andreas
2017-10-01
The CMS experiment at the CERN LHC has a dedicated infrastructure to handle the alignment and calibration data. This infrastructure is composed of several services, which take on various data management tasks required for the consumption of the non-event data (also called condition data) in the experiment's activities. The criticality of these tasks imposes tight requirements on the availability and the reliability of the services executing them. In this scope, a comprehensive monitoring and alarm-generating system has been developed. The system has been implemented based on Nagios, the open-source industry standard for monitoring and alerting services, and monitors the database back-end, the hosting nodes, and key heartbeat functionalities for all the services involved. This paper describes the design, implementation and operational experience with the monitoring system developed and deployed at CMS in 2016.
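Nagios delegates each check to a small external plugin that reports service state through its exit code (0 = OK, 1 = WARNING, 2 = CRITICAL). A heartbeat probe in that style might look like the following sketch; the service URL and thresholds are hypothetical, not the checks actually deployed at CMS.

```python
#!/usr/bin/env python3
"""Nagios-style plugin sketch: probe a heartbeat URL and exit 0/1/2."""
import sys
import requests

OK, WARNING, CRITICAL = 0, 1, 2
URL = "http://conddb-node01.example.org:8080/heartbeat"  # hypothetical endpoint

try:
    resp = requests.get(URL, timeout=5)
    latency = resp.elapsed.total_seconds()
except requests.RequestException as exc:
    print(f"CRITICAL - heartbeat unreachable: {exc}")
    sys.exit(CRITICAL)

if resp.status_code != 200:
    print(f"CRITICAL - heartbeat returned HTTP {resp.status_code}")
    sys.exit(CRITICAL)
if latency > 2.0:                       # hypothetical warning threshold
    print(f"WARNING - heartbeat slow: {latency:.2f}s")
    sys.exit(WARNING)

print(f"OK - heartbeat healthy in {latency:.2f}s")
sys.exit(OK)
```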
Baloye, David O.
2016-01-01
The understanding and institutionalisation of the seamless link between urban critical infrastructure and disaster management has greatly helped the developed world to establish effective disaster management processes. However, this link is conspicuously missing in developing countries, where disaster management has been more reactive than proactive. The consequence of this is typified by poor response times and the uncoordinated way in which disasters and emergency situations are handled. As is the case with many Nigerian cities, the challenges of urban development in the city of Abeokuta have limited the effectiveness of disaster and emergency first responders and managers. Using geospatial techniques, the study attempted to design and deploy a spatial database running a web-based information system to track the characteristics and distribution of critical infrastructure for effective use during disasters and emergencies, with the purpose of proactively improving disaster and emergency management processes in Abeokuta.
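A spatial database of this kind typically answers the operational question "what critical infrastructure lies near the incident?" with a single geospatial query. The sketch below shows the idea in Python with psycopg2 against a hypothetical PostGIS table; the table, columns, and connection string are invented for illustration.

```python
import psycopg2

# Hypothetical PostGIS table: critical_infrastructure(name, category, geom),
# with geom stored as geography points (WGS84 lon/lat).
conn = psycopg2.connect("dbname=abeokuta_gis user=gis")  # hypothetical DSN
incident_lon, incident_lat, radius_m = 3.35, 7.15, 2000

with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT name, category,
               ST_Distance(geom, ST_MakePoint(%s, %s)::geography) AS meters
        FROM critical_infrastructure
        WHERE ST_DWithin(geom, ST_MakePoint(%s, %s)::geography, %s)
        ORDER BY meters
        """,
        (incident_lon, incident_lat, incident_lon, incident_lat, radius_m),
    )
    for name, category, meters in cur.fetchall():
        print(f"{name} ({category}): {meters:.0f} m away")
```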
GSDC: A Unique Data Center in Korea for HEP research
NASA Astrophysics Data System (ADS)
Ahn, Sang-Un
2017-04-01
Global Science experimental Data hub Center (GSDC) at the Korea Institute of Science and Technology Information (KISTI) is a unique data center in South Korea, established to promote fundamental research fields by supporting them with expertise in Information and Communication Technology (ICT) and with infrastructure for High Performance Computing (HPC), High Throughput Computing (HTC) and networking. GSDC has supported various research fields in South Korea dealing with large-scale data, e.g. the RENO experiment for neutrino research, the LIGO experiment for gravitational wave detection, genome sequencing projects for bio-medicine, and HEP experiments such as CDF at FNAL, Belle at KEK, and STAR at BNL. In particular, GSDC has run a Tier-1 center for the ALICE experiment at the CERN LHC since 2013. In this talk, we present an overview of the computing infrastructure that GSDC runs for these research fields, and we discuss the data center infrastructure management system deployed at GSDC.
Cafe: A Generic Configurable Customizable Composite Cloud Application Framework
NASA Astrophysics Data System (ADS)
Mietzner, Ralph; Unger, Tobias; Leymann, Frank
In this paper we present Cafe (Composite Application Framework), an approach to describe configurable composite service-oriented applications and to automatically provision them across different providers. Cafe enables independent software vendors to describe their composite service-oriented applications and the components that are used to assemble them. Components can be internal or external to the application and can be deployed in any of the delivery models present in the cloud. The components are annotated with requirements on the infrastructure they will later run on. Providers, on the other hand, advertise their infrastructure services by describing them as infrastructure capabilities. The separation of software vendors and providers enables end users and providers to follow a best-of-breed strategy by combining arbitrary applications with arbitrary providers. We show how such applications can be automatically provisioned and present an architecture and a prototype that implements the concepts.
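The requirement/capability matching at the heart of such automatic provisioning can be pictured as a set-containment check between component annotations and provider advertisements. The component names, capability labels, and first-match policy below are invented for illustration and are not Cafe's actual matching algorithm.

```python
# Hypothetical component requirements and provider capability advertisements.
components = {
    "web-frontend": {"runtime:java17", "region:eu"},
    "order-db":     {"storage:postgres", "region:eu", "ha:replicated"},
}
providers = {
    "provider-a": {"runtime:java17", "storage:postgres", "region:eu"},
    "provider-b": {"runtime:java17", "storage:postgres", "region:eu",
                   "ha:replicated"},
}

def provision_plan(components, providers):
    """Match each component to any provider whose capabilities cover its needs."""
    plan = {}
    for name, required in components.items():
        matches = [p for p, caps in providers.items() if required <= caps]
        if not matches:
            raise ValueError(f"no provider satisfies {name}: {required}")
        plan[name] = matches[0]          # a best-of-breed policy would rank here
    return plan

print(provision_plan(components, providers))
# {'web-frontend': 'provider-a', 'order-db': 'provider-b'}
```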
Hazard Management with DOORS: Rail Infrastructure Projects
NASA Astrophysics Data System (ADS)
Hughes, Dave; Saeed, Amer
LOI is a major rail infrastructure project that will contribute to a modernised transport system in time for the 2012 Olympic Games. A review of the procedures and tool infrastructure was conducted in early 2006, coinciding with a planned move to main works. A hazard log support tool was needed to provide an automatic audit trail, version control, and support for collaborative working. A DOORS-based Hazard Log (DHL) was selected as the tool strategy. A systematic approach was followed for the development of DHL; after a series of tests and acceptance gateways, DHL was handed over to the project in autumn 2006. The first few months were used for operational trials, and the Hazard Management Procedure was modified into a hybrid approach that used the strengths of both DHL and Excel. The user experience in the deployment of DHL is summarised and directions for future improvement are identified.
A Programmable SDN+NFV Architecture for UAV Telemetry Monitoring
NASA Technical Reports Server (NTRS)
White, Kyle J. S.; Pezaros, Dimitrios P.; Denney, Ewen; Knudson, Matt D.
2017-01-01
With the explosive growth in UAV numbers forecast worldwide, a core concern is how to manage the ad-hoc network configuration required for mobility management. As UAVs migrate among ground control stations, associated network services, routing and operational control must also rapidly migrate to ensure a seamless transition. In this paper, we present a novel, lightweight and modular architecture which supports high mobility, resilience and flexibility through the application of SDN and NFV principles on top of the UAV infrastructure. By combining SDN programmability and Network Function Virtualization we can achieve resilient infrastructure migration of network services, such as network monitoring and anomaly detection, coupled with migrating UAVs to enable high-mobility management. Our container-based monitoring and anomaly detection Network Functions (NFs) can be tuned to specific UAV models, providing operators better insight during live, high-mobility deployments. We evaluate our architecture against telemetry from over 80 flights from a scientific research UAV infrastructure.
Elements of an integrated health monitoring framework
NASA Astrophysics Data System (ADS)
Fraser, Michael; Elgamal, Ahmed; Conte, Joel P.; Masri, Sami; Fountain, Tony; Gupta, Amarnath; Trivedi, Mohan; El Zarki, Magda
2003-07-01
Internet technologies are increasingly facilitating real-time monitoring of bridges and highways. The advances in wireless communications, for instance, are allowing practical deployments for large extended systems. Sensor data, including video signals, can be used for long-term condition assessment, traffic-load regulation, emergency response, and seismic safety applications. Computer-based automated signal-analysis algorithms routinely process the incoming data and flag anomalies based on pre-defined response thresholds and more involved signal analysis techniques. Upon authentication, appropriate action may be authorized for maintenance, early warning, and/or emergency response. In such a strategy, data from thousands of sensors can be analyzed with near-real-time and long-term assessment and decision-making implications. Addressing the above, a flexible and scalable software architecture/framework (e.g., for an entire highway system, or a portfolio of networked civil infrastructure) is being developed and implemented. This framework will network and integrate real-time heterogeneous sensor data, database and archiving systems, computer vision, data analysis and interpretation, physics-based numerical simulation of complex structural systems, visualization, reliability and risk analysis, and rational statistical decision-making procedures. Thus, within this framework, data is converted into information, information into knowledge, and knowledge into decisions at the end of the pipeline. Such a decision-support system contributes to the vitality of our economy, as rehabilitation, renewal, replacement, and/or maintenance of this infrastructure are estimated to require expenditures in the trillion-dollar range nationwide, including issues of homeland security and natural disaster mitigation. A pilot website (http://bridge.ucsd.edu/compositedeck.html) currently depicts some basic elements of the envisioned integrated health monitoring analysis framework.
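The threshold-based screening described above reduces, in its simplest form, to flagging samples that exceed a fixed response threshold or drift outside a rolling statistical band. The limits and window size in the sketch below are invented placeholders, not the framework's configured values.

```python
import numpy as np

def flag_anomalies(signal, hard_limit=0.5, window=256, k=4.0):
    """Flag samples above a hard response threshold or k-sigma rolling outliers."""
    signal = np.asarray(signal, dtype=float)
    flags = np.abs(signal) > hard_limit          # pre-defined response threshold
    for i in range(window, signal.size):         # rolling statistical screen
        seg = signal[i - window:i]
        if abs(signal[i] - seg.mean()) > k * seg.std():
            flags[i] = True
    return np.flatnonzero(flags)

rng = np.random.default_rng(1)
accel = 0.05 * rng.standard_normal(2048)
accel[1500] = 0.9                                # injected exceedance
print(flag_anomalies(accel))                     # -> indices near 1500
```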
Gaussian processes for personalized e-health monitoring with wearable sensors.
Clifton, Lei; Clifton, David A; Pimentel, Marco A F; Watkinson, Peter J; Tarassenko, Lionel
2013-01-01
Advances in wearable sensing and communications infrastructure have allowed the widespread development of prototype medical devices for patient monitoring. However, such devices have not penetrated into clinical practice, primarily due to a lack of research into "intelligent" analysis methods that are sufficiently robust to support large-scale deployment. Existing systems are typically plagued by large false-alarm rates and an inability to cope with sensor artifact in a principled manner. This paper has two aims: 1) proposal of a novel, patient-personalized system for analysis and inference in the presence of data uncertainty, typically caused by sensor artifact and data incompleteness; 2) demonstration of the method using a large-scale clinical study in which 200 patients have been monitored using the proposed system. The latter provides much-needed evidence that personalized e-health monitoring is feasible within an actual clinical environment, at scale, and that the method is capable of improving patient outcomes via personalized healthcare.
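A minimal sketch of Gaussian-process-based monitoring in the spirit of the abstract, using scikit-learn; the kernel choice, thresholds, and vital-sign data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Illustrative vital-sign history: time (hours) vs. heart rate (bpm).
t = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
hr = np.array([72.0, 75.0, 74.0, 78.0, 76.0])

# WhiteKernel models sensor noise/artifact; RBF models smooth physiology.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(1.0))
gp.fit(t, hr)

# Flag a new observation if it falls outside the 95% predictive interval,
# which personalizes the alarm to this patient's own history.
mean, std = gp.predict(np.array([[5.0]]), return_std=True)
new_obs = 110.0
is_anomalous = abs(new_obs - mean[0]) > 1.96 * std[0]
print(f"predicted {mean[0]:.1f} +/- {1.96 * std[0]:.1f}, anomalous: {is_anomalous}")
```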
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabharwall, Piyush; O'Brien, James E.; McKellar, Michael G.
2015-03-01
Hybrid energy system research has the potential to expand the application for nuclear reactor technology beyond electricity. The purpose of this research is to reduce both technical and economic risks associated with energy systems of the future. Nuclear hybrid energy systems (NHES) mitigate the variability of renewable energy sources, provide opportunities to produce revenue from different product streams, and avoid capital inefficiencies by matching electrical output to demand by using excess generation capacity for other purposes when it is available. An essential step in the commercialization and deployment of this advanced technology is scaled testing to demonstrate integrated dynamic performance of advanced systems and components when risks cannot be mitigated adequately by analysis or simulation. Further testing in a prototypical environment is needed for validation and higher confidence. This research supports the development of advanced nuclear reactor technology and NHES, and their adaptation to commercial industrial applications that will potentially advance U.S. energy security, economy, and reliability and further reduce carbon emissions. Experimental infrastructure development for testing and feasibility studies of coupled systems can similarly support other projects having similar developmental needs and can generate data required for validation of models in thermal energy storage and transport, energy, and conversion process development. Experiments performed in the Systems Integration Laboratory will acquire performance data, identify scalability issues, and quantify technology gaps and needs for various hybrid or other energy systems. This report discusses detailed scaling (component and integrated system) and heat transfer figures of merit that will establish the experimental infrastructure for component, subsystem, and integrated system testing to advance the technology readiness of components and systems to the level required for commercial application and demonstration under NHES.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Kate; Burman, Kari; Simpkins, Travis
Resilient PV, which is solar paired with storage ('solar-plus-storage'), provides value both during normal grid operation and power outages as opposed to traditional solar PV, which functions only when the electric grid is operating. During normal grid operations, resilient PV systems help host sites generate revenue and/or reduce electricity bill charges. During grid outages, resilient PV provides critical emergency power that can help people in need and ease demand on emergency fuel supplies. The combination of grid interruptions during recent storms, the proliferation of solar PV, and the growing deployment of battery storage technologies has generated significant interest in using these assets for both economic and resiliency benefits. This report analyzes the technical and economic viability for resilient PV on three critical infrastructure sites in New York City (NYC): a school that is part of a coastal storm shelter system, a fire station, and a NYCHA senior center that serves as a cooling center during heat emergencies. This analysis differs from previous solar-plus-storage studies by placing a monetary value on resiliency and thus, in essence, modeling a new revenue stream for the avoided cost of a power outage. Analysis results show that resilient PV is economically viable for NYC's critical infrastructure and that it may be similarly beneficial to other commercial buildings across the city. This report will help city building owners, managers, and policymakers better understand the economic and resiliency benefits of resilient PV. As NYC fortifies its building stock against future storms of increasing severity, resilient PV can play an important role in disaster response and recovery while also supporting city greenhouse gas emission reduction targets and relieving stress to the electric grid from growing power demands.
Cyber threat model for tactical radio networks
NASA Astrophysics Data System (ADS)
Kurdziel, Michael T.
2014-05-01
The shift to a full information-centric paradigm in the battlefield has allowed ConOps to be developed that are only possible using modern network communications systems. Securing these Tactical Networks without impacting their capabilities has been a challenge. Tactical networks with fixed infrastructure have similar vulnerabilities to their commercial counterparts (although they need to be secure against adversaries with greater capabilities, resources and motivation). However, networks with mobile infrastructure components and Mobile Ad hoc Networks (MANETs) have additional unique vulnerabilities that must be considered. It is useful to examine Tactical Network-based ConOps and use them to construct a threat model and baseline cyber security requirements for Tactical Networks with fixed infrastructure, mobile infrastructure and/or ad hoc modes of operation. This paper presents an introduction to threat model assessment. A definition and detailed discussion of a Tactical Network threat model is also presented. Finally, the model is used to derive baseline requirements that can be used to design or evaluate a cyber security solution that can be scaled and adapted to the needs of specific deployments.
Evolution of a Materials Data Infrastructure
NASA Astrophysics Data System (ADS)
Warren, James A.; Ward, Charles H.
2018-06-01
The field of materials science and engineering is writing a new chapter in its evolution, one of digitally empowered materials discovery, development, and deployment. The 2008 Integrated Computational Materials Engineering (ICME) study report helped usher in this paradigm shift, making a compelling case and strong recommendations for an infrastructure supporting ICME that would enable access to precompetitive materials data for both scientific and engineering applications. With the launch of the Materials Genome Initiative in 2011, which drew substantial inspiration from the ICME study, digital data was highlighted as a core component of a Materials Innovation Infrastructure, along with experimental and computational tools. Over the past 10 years, our understanding of what it takes to provide accessible materials data has matured and rapid progress has been made in establishing a Materials Data Infrastructure (MDI). We are learning that the MDI is essential to eliminating the seams between experiment and computation by providing a means for them to connect effortlessly. Additionally, the MDI is becoming an enabler, allowing materials engineering to tie into a much broader model-based engineering enterprise for product design.
Geospatial Data as a Service: Towards planetary scale real-time analytics
NASA Astrophysics Data System (ADS)
Evans, B. J. K.; Larraondo, P. R.; Antony, J.; Richards, C. J.
2017-12-01
The rapid growth of earth systems, environmental and geophysical datasets poses a challenge to both end-users and infrastructure providers. For infrastructure and data providers, tasks like managing, indexing and storing large collections of geospatial data need to take into consideration the various use cases by which consumers will want to access and use the data. Considerable investment has been made by the Earth Science community to produce suitable real-time analytics platforms for geospatial data. Different interfaces have been defined to provide data services, but unfortunately there are considerable differences among the standards, protocols and data models, which have been designed to target specific communities or working groups. The Australian National University's National Computational Infrastructure (NCI) is used for a wide range of activities in the geospatial community. Earth observations, climate and weather forecasting are examples of these communities which generate large amounts of geospatial data. The NCI has been carrying out a significant effort to develop a data and services model that enables the cross-disciplinary use of data. Recent developments in cloud and distributed computing provide a publicly accessible platform on which new infrastructures can be built. One of the key capabilities these technologies offer is the possibility of having "limitless" compute power next to where the data is stored. This model is rapidly transforming data delivery from centralised monolithic services towards ubiquitous distributed services that scale up and down, adapting to fluctuations in demand. NCI has developed GSKY, a scalable, distributed server which presents a new approach for geospatial data discovery and delivery based on OGC standards. We will present the architecture and motivating use-cases that drove GSKY's collaborative design, development and production deployment. We show that our approach offers the community valuable exploratory analysis capabilities for dealing with petabyte-scale geospatial data collections.
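For illustration, a data service exposing OGC interfaces like GSKY can be queried with a standard WMS GetMap request; the endpoint URL and layer name below are placeholders, not NCI's actual service:

```python
import requests

# Placeholder endpoint and layer; substitute a real OGC WMS service.
WMS_URL = "https://example.org/ows"
params = {
    "service": "WMS",
    "version": "1.3.0",
    "request": "GetMap",
    "layers": "landsat8_nbar",          # hypothetical layer name
    "crs": "EPSG:4326",
    "bbox": "-44.0,112.0,-10.0,154.0",  # lat/lon order for EPSG:4326 in WMS 1.3.0
    "width": "512",
    "height": "512",
    "format": "image/png",
    "time": "2017-01-01T00:00:00Z",
}
resp = requests.get(WMS_URL, params=params, timeout=60)
resp.raise_for_status()
with open("map.png", "wb") as f:
    f.write(resp.content)  # rendered map tile computed next to the data
```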
An EMSO data case study within the INDIGO-DC project
NASA Astrophysics Data System (ADS)
Monna, Stephen; Marcucci, Nicola M.; Marinaro, Giuditta; Fiore, Sandro; D'Anca, Alessandro; Antonacci, Marica; Beranzoli, Laura; Favali, Paolo
2017-04-01
We present our experience based on a case study within the INDIGO-DataCloud (INtegrating Distributed data Infrastructures for Global ExplOitation) project (www.indigo-datacloud.eu). The aim of INDIGO-DC is to develop a data and computing platform targeting scientific communities. Our case study is an example of activities performed by INGV using data from seafloor observatories that are nodes of the infrastructure EMSO (European Multidisciplinary Seafloor and water column Observatory)-ERIC (www.emso-eu.org). EMSO is composed of several deep-seafloor and water column observatories, deployed at key sites in the European waters, thus forming a widely distributed pan-European infrastructure. In our case study we consider data collected by the NEMO-SN1 observatory, one of the EMSO nodes used for geohazard monitoring, located in the Western Ionian Sea in proximity of Etna volcano. Starting from the case study, through an agile approach, we defined some requirements for INDIGO developers, and tested some of the proposed INDIGO solutions that are of interest for our research community. Given that EMSO is a distributed infrastructure, we are interested in INDIGO solutions that allow access to distributed data storage. Access should be both user-oriented and machine-oriented, and with the use of a common identity and access system. For this purpose, we have been testing: - ONEDATA (https://onedata.org), as global data management system. - INDIGO-IAM as Identity and Access Management system. Another aspect we are interested in is the efficient data processing, and we have focused on two types of INDIGO products: - Ophidia (http://ophidia.cmcc.it), a big data analytics framework for eScience for the analysis of multidimensional data. - A collection of INDIGO Services to run processes for scientific computing through the INDIGO Orchestrator.
Algorithms for Lightweight Key Exchange.
Alvarez, Rafael; Caballero-Gil, Cándido; Santonja, Juan; Zamora, Antonio
2017-06-27
Public-key cryptography is too slow for general-purpose encryption, with most applications limiting its use as much as possible. Some secure protocols, especially those that enable forward secrecy, make a much heavier use of public-key cryptography, increasing the demand for lightweight cryptosystems that can be implemented in low-powered or mobile devices. These performance requirements are even more significant in critical infrastructure and emergency scenarios, where peer-to-peer networks are deployed for increased availability and resiliency. We benchmark several public-key key-exchange algorithms, determine those best suited to the requirements of critical infrastructure and emergency applications, propose a security framework based on these algorithms, and study its application to decentralized node or sensor networks.
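As one concrete example of a lightweight key exchange of the kind benchmarked in such studies, here is an ephemeral X25519 Diffie-Hellman exchange using the Python cryptography package; the choice of X25519 and the HKDF parameters are illustrative, not the paper's recommendation:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each peer generates an ephemeral key pair (this enables forward secrecy).
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Peers exchange public keys and compute the same shared secret.
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())
assert alice_shared == bob_shared

# Derive a symmetric session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"p2p-session").derive(alice_shared)
print(session_key.hex())
```

X25519 is attractive in low-powered peer-to-peer settings because the exchange costs only a few scalar multiplications and the keys are 32 bytes.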
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okhravi, Hamed; Sheldon, Frederick T.; Haines, Joshua
Data diodes provide protection of critical cyber assets by the means of physically enforcing traffic direction on the network. In order to deploy data diodes effectively, it is imperative to understand the protection they provide, the protection they do not provide, their limitations, and their place in the larger security infrastructure. In this work, we study data diodes, their functionalities and limitations. We then propose two critical infrastructure systems that can benefit from the additional protection offered by data diodes: process control networks and net-centric cyber decision support systems. We review the security requirements of these systems, describe the architectures, and study the trade-offs. Finally, the architectures are evaluated against different attack patterns.
Software architecture and design of the web services facilitating climate model diagnostic analysis
NASA Astrophysics Data System (ADS)
Pan, L.; Lee, S.; Zhang, J.; Tang, B.; Zhai, C.; Jiang, J. H.; Wang, W.; Bao, Q.; Qi, M.; Kubar, T. L.; Teixeira, J.
2015-12-01
Climate model diagnostic analysis is a computationally- and data-intensive task because it involves multiple numerical model outputs and satellite observation data that can both be high resolution. We have built an online tool that facilitates this process. The tool is called Climate Model Diagnostic Analyzer (CMDA). It employs web service technology and provides a web-based user interface. The benefits of these choices include: (1) no installation of any software other than a browser, hence platform compatibility; (2) co-location of computation and big data on the server side, with only small results and plots downloaded to the client side, hence high data efficiency; (3) a multi-threaded implementation to achieve parallel performance on multi-core servers; and (4) cloud deployment, so each user has a dedicated virtual machine. In this presentation, we will focus on the computer science aspects of this tool, namely the architectural design, the infrastructure of the web services, the implementation of the web-based user interface, the mechanism of provenance collection, the approach to virtualization, and the Amazon Cloud deployment. As an example, we will describe our methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). Another example is the use of Docker, a lightweight virtualization container, to distribute and deploy CMDA onto an Amazon EC2 instance. CMDA has been successfully used in the 2014 Summer School hosted by the JPL Center for Climate Science. Students gave positive feedback in general, and we will report their comments. An enhanced version of CMDA with several new features, some requested by the 2014 students, will be used in the 2015 Summer School soon.
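A minimal sketch of wrapping an existing analysis routine as a web service with Flask, in the spirit of the approach described; the function name and query parameters are hypothetical, not CMDA's actual interface:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def compute_anomaly(variable: str, start: str, end: str) -> dict:
    # Stand-in for an existing science application code; in practice this
    # would read model output and satellite data stored on the server.
    return {"variable": variable, "start": start, "end": end, "anomaly": 0.42}

@app.route("/diagnostic", methods=["GET"])
def diagnostic():
    # Query parameters select the dataset and time range server-side;
    # only the small JSON result travels back to the client.
    result = compute_anomaly(
        request.args.get("variable", "ts"),
        request.args.get("start", "2000-01"),
        request.args.get("end", "2010-12"),
    )
    return jsonify(result)

if __name__ == "__main__":
    # In production this would sit behind Gunicorn/Tornado as the paper notes.
    app.run(port=8080)
```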
Wake Island Supplemental Environmental Assessment
2007-02-01
operations, the oxidizer transfer system would be flushed with water. This operation is expected to yield approximately 5 grams (0.2 ounces) of nitric...Defense System (BMDS) to provide a defensive capability for the U.S., its deployed forces, friends, and allies from ballistic missile threats. The...infrastructure, land use, physical resources, noise, socioeconomics, transportation, and water resources. MDA determined that six of the thirteen resource
In Touch With Industry: ICAF Industry Studies, 1997
1997-01-01
Society of Civil Engineers, Washington, DC. 1994. "Materials for Tomorrow's Infrastructure: A Ten-Year Plan for Deploying High-Performance ...identified high-performance electronics as a key to modern warfare and conflict prevention. Clearly, the nation's defense strategy relies heavily on...priced, high-performance systems. As a consequence, hardware makers have undergone multiple restructures, consolidations, mergers, and global
Transforming revenue management.
Silveria, Richard; Alliegro, Debra; Nudd, Steven
2008-11-01
Healthcare organizations that want to undertake a patient administrative/revenue management transformation should: Define the vision with underlying business objectives and key performance measures. Strategically partner with key vendors for business process development and technology design. Create a program organization and governance infrastructure. Develop a corporate design model that defines the standards for operationalizing the vision. Execute the vision through technology deployment and corporate design model implementation.
Development, deployment and operations of ATLAS databases
NASA Astrophysics Data System (ADS)
Vaniachine, A. V.; Schmitt, J. G. v. d.
2008-07-01
In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services.
A data management infrastructure for bridge monitoring
NASA Astrophysics Data System (ADS)
Jeong, Seongwoon; Byun, Jaewook; Kim, Daeyoung; Sohn, Hoon; Bae, In Hwan; Law, Kincho H.
2015-04-01
This paper discusses a data management infrastructure framework for bridge monitoring applications. As sensor technologies mature and become economically affordable, their deployment for bridge monitoring will continue to grow. Data management becomes a critical issue, not only for storing the sensor data but also for integrating it with the bridge model to support other functions, such as management, maintenance and inspection. The focus of this study is on the effective management of bridge information and sensor data, which is crucial to structural health monitoring and life-cycle management of bridge structures. We review the state of the art of bridge information modeling and sensor data management, and propose a data management framework for bridge monitoring based on NoSQL database technologies, which have been shown to be useful in handling high-volume time-series data and in flexibly dealing with unstructured data schemas. Specifically, Apache Cassandra and MongoDB are deployed for the prototype implementation of the framework. This paper describes the database design for an XML-based Bridge Information Modeling (BrIM) schema and the representation of sensor data using Sensor Model Language (SensorML). The proposed prototype data management framework is validated using data collected from the Yeongjong Bridge in Incheon, Korea.
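A minimal sketch of storing and querying time-stamped sensor readings with MongoDB via pymongo, in the spirit of the framework described; the database, collection, and field names are illustrative assumptions:

```python
from datetime import datetime
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
readings = client["bridge_monitoring"]["sensor_readings"]
readings.create_index([("sensor_id", ASCENDING), ("t", ASCENDING)])

# Ingest one acceleration sample; schema-flexible documents suit
# heterogeneous sensors (strain, acceleration, temperature, ...).
readings.insert_one({
    "sensor_id": "acc-girder-03",
    "t": datetime(2015, 4, 1, 12, 0, 0),
    "value": 0.013,
    "unit": "g",
})

# Query a time window for one sensor, ordered by timestamp.
window = readings.find({
    "sensor_id": "acc-girder-03",
    "t": {"$gte": datetime(2015, 4, 1), "$lt": datetime(2015, 4, 2)},
}).sort("t", ASCENDING)
for doc in window:
    print(doc["t"], doc["value"])
```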
NASA Technical Reports Server (NTRS)
Lazio, Joseph; Bowman, Judd D.; Burns, Jack O.; Farrell, W. M.; Jones, D. L.; Kasper, J. C.; MacDowall, R. J.; Stewart, K. P.; Weiler, K.
2012-01-01
Observations with radio telescopes address key problems in cosmology, astrobiology, heliophysics, and planetary science including the first light in the Universe (Cosmic Dawn), magnetic fields of extrasolar planets, particle acceleration mechanisms, and the lunar ionosphere. The Moon is a unique science platform because it allows access to radio frequencies that do not penetrate the Earth's ionosphere and because its far side is shielded from intense terrestrial emissions. The instrument packages and infrastructure needed for radio telescopes can be transported and deployed as part of Exploration activities, and the resulting science measurements may inform Exploration (e.g., measurements of lunar surface charging). An illustrative roadmap for the staged deployment of lunar radio telescopes is presented.
Efficient Software Systems for Cardio Surgical Departments
NASA Astrophysics Data System (ADS)
Fountoukis, S. G.; Diomidous, M. J.
2009-08-01
Herein, the design, implementation, and deployment of an object-oriented software system suitable for the monitoring of cardio surgical departments is investigated. Distributed design architectures are applied, and the implemented software system can be deployed on distributed infrastructures. The software is flexible and adaptable to any cardio surgical environment regardless of the department resources used. The system exploits the relations and the interdependency of the successive bed positions that patients occupy at the different health care units during their stay in a cardio surgical department to determine bed availabilities and to perform patient scheduling and instant rescheduling whenever necessary. It also aims at the successful monitoring of the workings of cardio surgical departments in an efficient manner.
Automatic provisioning, deployment and orchestration for load-balancing THREDDS instances
NASA Astrophysics Data System (ADS)
Cofino, A. S.; Fernández-Tejería, S.; Kershaw, P.; Cimadevilla, E.; Petri, R.; Pryor, M.; Stephens, A.; Herrera, S.
2017-12-01
THREDDS is a widely used web server that provides different scientific communities with data access and discovery. Due to THREDDS's lack of horizontal scalability and of automatic configuration management and deployment, this service usually suffers downtimes and time-consuming configuration tasks, especially under the intensive use that is common within the scientific community (e.g., climate). Instead of the typical installation and configuration of a single THREDDS server, or of multiple independent, manually configured ones, this work presents an automatically provisioned, deployed and orchestrated cluster of THREDDS servers. The solution is based on Ansible playbooks, used to automatically control the deployment and configuration setup of the infrastructure and to manage the datasets available in THREDDS instances. The playbooks are based on modules (or roles) for different backend and frontend load-balancing setups and solutions. The frontend load-balancing system enables horizontal scalability by delegating requests to backend workers, consisting of a variable number of THREDDS server instances. This implementation allows configuring different infrastructure and deployment scenarios, as more workers are easily added to the cluster by simply declaring them as Ansible variables and executing the playbooks; it also provides fault tolerance and better reliability, since if any of the workers fails, another instance of the cluster can take over. In order to test the proposed solution, two real scenarios are analyzed in this contribution: the JASMIN Group Workspaces at CEDA and the User Data Gateway (UDG) at the Data Climate Service of the University of Cantabria. On the one hand, the proposed configuration has provided CEDA with a higher-level and more scalable Group Workspaces (GWS) service than the previous one based on Unix permissions, improving the data discovery and data access experience. On the other hand, the UDG has improved its scalability by allowing requests to be distributed to the backend workers instead of being served by a single THREDDS worker. In conclusion, the proposed configuration represents a significant improvement over configurations based on non-collaborative THREDDS instances.
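A minimal sketch of driving such playbooks programmatically, assuming the ansible-runner Python package; the playbook name and the worker variable are hypothetical stand-ins for the roles and variables described above:

```python
import ansible_runner

# Hypothetical inventory variables: adding a backend THREDDS worker is
# just another entry in this list before re-running the playbook.
extravars = {
    "thredds_workers": ["tds-worker-01", "tds-worker-02", "tds-worker-03"],
    "frontend_host": "tds-frontend",
}

result = ansible_runner.run(
    private_data_dir=".",            # directory holding inventory/env files
    playbook="thredds-cluster.yml",  # hypothetical playbook name
    extravars=extravars,
)
print(result.status, result.rc)      # e.g. "successful", 0
```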
The Fundamental Spatial Data in the Public Administration Registers
NASA Astrophysics Data System (ADS)
Čada, V.; Janečka, K.
2016-06-01
The system of basic registers was launched in the Czech Republic in 2012. The system provides a unique solution to centralize and keep current the most common and widely used information as part of eGovernment. The basic registers are the central information source for the information systems of public authorities. In October 2014, the Czech government approved the conception of The Strategy for the Development of the Infrastructure for Spatial Information in the Czech Republic to 2020 (GeoInfoStrategy), which serves as a basis for the NSDI. The paper describes the challenges in building the National Spatial Data Infrastructure (NSDI) in the Czech Republic with a focus on the fundamental spatial data and related basic registers. The GeoInfoStrategy should also contribute to increasing the competitiveness of the economy. Therefore the paper also reflects Directive 2014/61/EU of the European Parliament and of the Council on measures to reduce the cost of deploying high-speed electronic communication networks. The Directive states that citizens as well as the private and public sectors must have the opportunity to be part of the digital economy. A high-quality digital infrastructure underpins virtually all sectors of a modern and innovative economy. To ensure the development of such infrastructure in the Czech Republic, the Register of passive infrastructure, providing information on the features of passive infrastructure, has to be established.
NASA Astrophysics Data System (ADS)
Xiao, Xiaojun; Du, Chunsheng; Zhou, Rongsheng
2004-04-01
As a result of data traffic's exponential growth, networks are currently evolving from fixed circuit-switched services to dynamic packet-switched services, which has brought unprecedented changes to the existing transport infrastructure. It is generally agreed that the automatic switched optical network (ASON) is one of the promising solutions for next-generation optical networks. In this paper, we present the results of our experimental tests and economic analysis of ASON. The intention of this paper is to present our perspective, in terms of an evolution strategy toward ASON, on next-generation optical networks. It is shown through experimental tests that the performance of current pre-standard ASON-enabled equipment satisfies the basic requirements of network operators and is ready for initial deployment. The results of the economic analysis show that network operators can benefit from the deployment of ASON in three ways. Firstly, ASON can reduce the CAPEX for network expansion by integrating multiple ADMs and DCSs into one box. Secondly, ASON can reduce the OPEX for network operation by introducing an automatic resource control scheme. Thirdly, ASON can increase marginal revenue by providing new optical network services such as Bandwidth on Demand and optical VPN. Finally, an evolution strategy is proposed as our perspective toward next-generation optical networks. We hope the evolution strategy introduced may be helpful for network operators to gracefully migrate their fixed ring-based legacy networks to next-generation dynamic mesh-based networks.
Beery, Joshua A; Day, Jennifer E
2015-03-03
Wind energy development is an increasingly popular form of renewable energy infrastructure in rural areas. Communities generally perceive socioeconomic benefits accrue and that community funding structures are preferable to corporate structures, yet lack supporting quantitative data to inform energy policy. This study uses the Everpower wind development, to be located in Midwestern Ohio, as a hypothetical modeling environment to identify and examine socioeconomic impact trends arising from corporate, community and diversified funding structures. Analysis of five National Renewable Energy Laboratory Jobs and Economic Development Impact models incorporating local economic data and review of relevant literature were conducted. The findings suggest that community and diversified funding structures exhibit 40-100% higher socioeconomic impact levels than corporate structures. Prioritization of funding sources and retention of federal tax incentives were identified as key elements. The incorporation of local shares was found to mitigate the negative effects of foreign private equity, local debt financing increased economic output and opportunities for private equity investment were identified. The results provide the groundwork for energy policies focused to maximize socioeconomic impacts while creating opportunities for inclusive economic participation and improved social acceptance levels fundamental to the deployment of renewable energy technology.
The ESA Space Weather Applications Pilot Project
NASA Astrophysics Data System (ADS)
Glover, A.; Hilgers, A.; Daly, E.
Following the completion in 2001 of two parallel studies to consider the feasibility of a European Space Weather Programme, ESA embarked upon a space weather pilot study with the goal of prototyping European space weather services and assessing the overall market for such services within Europe. This pilot project centred on a number of targeted service development activities, supported by a common infrastructure and making use of only existing space weather assets. Each service activity included clear participation from at least one identified service user, who was requested to provide initial requirements and regular feedback during the operational phase of the service. These service activities are now reaching the end of their 2-year development and testing phase and are now accessible, each with an element of the service in the public domain (see http://www.esa-spaceweather.net/swenet). An additional crucial element of the study was the inclusion of a comprehensive and independent analysis of the benefits, both economic and strategic, of embarking on a programme which would include the deployment of an infrastructure with space-based elements. The results of this study will be reported, together with their implication for future coordinated European activities in this field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Qi; Al-Shaer, Ehab; Chatterjee, Samrat
The Infrastructure Distributed Denial of Service (IDDoS) attacks continue to be one of the most devastating challenges facing cyber systems. The new generation of IDDoS attacks exploits the inherent weaknesses of cyber infrastructure, including the deterministic nature of routes, skewed distribution of flows, and Internet ossification, to discover the network's critical links and launch highly stealthy flooding attacks that are not observable at the victim end. In this paper, first, we propose a new metric to quantitatively measure the potential susceptibility of any arbitrary target server or domain to stealthy IDDoS attacks, and estimate the impact of such susceptibility on enterprises. Second, we develop a proactive route mutation technique to minimize the susceptibility to these attacks by dynamically changing the flow paths periodically to invalidate the adversary's knowledge about the network and avoid targeted critical links. Our proposed approach actively changes these network paths while satisfying security and quality of service requirements. We present an integrated approach to proactive route mutation that combines infrastructure-based mutation, based on reconfiguration of switches and routers, with a middle-box approach that uses an overlay of end-point proxies to construct a virtual network path free of critical links to reach a destination. We implemented the proactive path mutation technique on a Software Defined Network using the OpenDaylight controller to demonstrate a feasible deployment of this approach. Our evaluation validates the correctness, effectiveness, and scalability of the proposed approaches.
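A minimal sketch of the proactive route-mutation idea using networkx: periodically re-select a path that avoids links designated critical. The graph, link labels, and mutation schedule are illustrative assumptions, not the paper's implementation:

```python
import random
import networkx as nx

G = nx.Graph()
G.add_edges_from([("src", "a"), ("a", "b"), ("b", "dst"),
                  ("src", "c"), ("c", "d"), ("d", "dst"),
                  ("a", "d")])

# Links an adversary could target for stealthy flooding.
CRITICAL_LINKS = {frozenset(("a", "b"))}

def mutate_route(g: nx.Graph, src: str, dst: str) -> list:
    """Pick a random simple path that avoids all critical links."""
    candidates = [
        p for p in nx.all_simple_paths(g, src, dst)
        if all(frozenset(e) not in CRITICAL_LINKS for e in zip(p, p[1:]))
    ]
    return random.choice(candidates)

# Called periodically by the controller, which would then push the new
# path as flow rules to the switches (e.g., via OpenDaylight).
for _ in range(3):
    print(mutate_route(G, "src", "dst"))
```

Randomizing among valid paths on each period invalidates whatever topology knowledge the adversary has accumulated, while the critical-link filter enforces the security constraint.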
Software Defined Cyberinfrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, Ian; Blaiszik, Ben; Chard, Kyle
Within and across thousands of science labs, researchers and students struggle to manage data produced in experiments, simulations, and analyses. Largely manual research data lifecycle management processes mean that much time is wasted, research results are often irreproducible, and data sharing and reuse remain rare. In response, we propose a new approach to data lifecycle management in which researchers are empowered to define the actions to be performed at individual storage systems when data are created or modified: actions such as analysis, transformation, copying, and publication. We term this approach software-defined cyberinfrastructure because users can implement powerful data management policies by deploying rules to local storage systems, much as software-defined networking allows users to configure networks by deploying rules to switches. We argue that this approach can enable a new class of responsive distributed storage infrastructure that will accelerate research innovation by allowing any researcher to associate data workflows with data sources, whether local or remote, for such purposes as data ingest, characterization, indexing, and sharing. We report on early experiments with this approach in the context of experimental science, in which a simple if-trigger-then-action (IFTA) notation is used to define rules.
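A minimal sketch of the if-trigger-then-action idea: rules pair a trigger predicate over storage-system events with an action. The event fields and actions here are hypothetical, not the institute's actual notation:

```python
from typing import Callable

Rule = tuple[Callable[[dict], bool], Callable[[dict], None]]

# If a new HDF5 file appears under the instrument directory, index it
# and queue it for transfer; if a published file changes, re-publish it.
rules: list[Rule] = [
    (lambda e: e["type"] == "create" and e["path"].endswith(".h5"),
     lambda e: print(f"index + transfer {e['path']}")),
    (lambda e: e["type"] == "modify" and "/published/" in e["path"],
     lambda e: print(f"re-publish {e['path']}")),
]

def dispatch(event: dict) -> None:
    """Run every rule whose trigger matches the storage-system event."""
    for trigger, action in rules:
        if trigger(event):
            action(event)

dispatch({"type": "create", "path": "/beamline/scan042.h5"})
```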
Analysis of the World Experience of Smart Grid Deployment: Economic Effectiveness Issues
NASA Astrophysics Data System (ADS)
Ratner, S. V.; Nizhegorodtsev, R. M.
2018-06-01
Despite the positive dynamics in the growth of RES-based power production in the electric power systems of many countries, the further development of commercially mature technologies of wind and solar generation is often constrained by the existing grid infrastructure and conventional energy supply practices. The integration of large wind and solar power plants into a single power grid and the development of microgeneration require the widespread introduction of a new smart grid technology cluster (smart power grids), whose technical advantages over conventional grids have been fairly well studied, while questions of their economic effectiveness remain open. Estimating and forecasting the potential economic effects of introducing innovative technologies in the power sector during the stage preceding commercial development is a methodologically difficult task that requires the use of knowledge from different sciences. This paper analyses smart grid project implementation in Europe and the United States. Interval estimates are obtained for their basic economic parameters. It was revealed that the majority of implemented smart grid projects are not yet commercially effective, since their positive externalities are usually not recognized on the revenue side due to the lack of universal methods for monetizing public benefits. The results of the research can be used in modernization and development planning for the existing grid infrastructure, both at the federal level and at the level of particular regions and territories.
Hydrogen Fuel Cell Analysis: Lessons Learned from Stationary Power Generation Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott E. Grasman; John W. Sheffield; Fatih Dogan
2010-04-30
This study considered opportunities for hydrogen in stationary applications in order to make recommendations related to RD&D strategies that incorporate lessons learned and best practices from relevant national and international stationary power efforts, as well as cost and environmental modeling of pathways. The study analyzed the different strategies utilized in power generation systems and identified the different challenges and opportunities for producing and using hydrogen as an energy carrier. Specific objectives included both a synopsis/critical analysis of lessons learned from previous stationary power programs and recommendations for a strategy for hydrogen infrastructure deployment. This strategy incorporates all hydrogen pathways and a combination of distributed power generating stations, and provides an overview of stationary power markets, benefits of hydrogen-based stationary power systems, and competitive and technological challenges. The motivation for this project was to identify the lessons learned from prior stationary power programs, including the most significant obstacles, how these obstacles have been approached, outcomes of the programs, and how this information can be used by the Hydrogen, Fuel Cells & Infrastructure Technologies Program to meet program objectives primarily related to hydrogen pathway technologies (production, storage, and delivery) and implementation of fuel cell technologies for distributed stationary power. In addition, the lessons learned address environmental and safety concerns, including codes and standards, and education of key stakeholders.
Caballero, Víctor; Vernet, David; Zaballos, Agustín; Corral, Guiomar
2018-01-30
Sensor networks and the Internet of Things have driven the evolution of traditional electric power distribution networks towards a new paradigm referred to as the Smart Grid. However, the different elements that compose the Information and Communication Technologies (ICTs) layer of a Smart Grid are usually conceived as isolated systems, which typically results in rigid hardware architectures that are hard to interoperate, manage, and adapt to new situations. If the Smart Grid paradigm is to be presented as a solution to the demand for distributed and intelligent energy management systems, it is necessary to deploy innovative IT infrastructures to support these smart functions. One of the main issues of Smart Grids is the heterogeneity of the communication protocols used by the smart sensor devices that integrate them. The concept of the Web of Things is proposed in this work to tackle this problem. More specifically, the implementation of a Smart Grid's Web of Things, coined the Web of Energy, is introduced. The purpose of this paper is to propose the use of the Web of Energy, by means of the Actor Model paradigm, to address the latent deployment and management limitations of Smart Grids. Smart Grid designers can use the Actor Model as a design model for an infrastructure that supports the intelligent functions demanded and is capable of grouping and converting the heterogeneity of traditional infrastructures into the homogeneity featured by the Web of Things. The experiments conducted endorse the feasibility of this solution and encourage practitioners to direct their efforts in this direction.
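A minimal sketch of the Actor Model idea in Python: each device is an actor that owns its state and reacts only to messages from its mailbox, which hides heterogeneous device protocols behind a uniform message interface. The device name and message fields are illustrative assumptions:

```python
import queue
import threading
import time

class Actor:
    """Each actor drains its mailbox sequentially in its own thread,
    so device state is never shared or locked across protocols."""
    def __init__(self, name: str):
        self.name = name
        self.mailbox: queue.Queue = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg: dict) -> None:
        self.mailbox.put(msg)

    def _run(self) -> None:
        while True:
            self.receive(self.mailbox.get())

class SmartMeter(Actor):
    def receive(self, msg: dict) -> None:
        if msg["type"] == "read":
            # A real actor would translate this message into the device's
            # native protocol (Modbus, DLMS, ...) behind the scenes.
            print(f"{self.name}: 3.2 kWh")

meter = SmartMeter("meter-042")
meter.send({"type": "read"})
time.sleep(0.1)  # give the daemon thread time to drain the mailbox
```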
NASA Astrophysics Data System (ADS)
Cornaglia, Bruno; Young, Gavin; Marchetta, Antonio
2015-12-01
Fixed broadband network deployments are moving inexorably to the use of Next Generation Access (NGA) technologies and architectures. These NGA deployments involve building fiber infrastructure increasingly closer to the customer in order to increase the proportion of fiber on the customer's access connection (Fibre-To-The-Home/Building/Door/Cabinet… i.e. FTTx). This increases the speed of services that can be sold and will be increasingly required to meet the demands of new generations of video services as we evolve from HDTV to "Ultra-HD TV" with 4k and 8k lines of video resolution. However, building fiber access networks is a costly endeavor. It requires significant capital in order to cover any significant geographic coverage. Hence many companies are forming partnerships and joint-ventures in order to share the NGA network construction costs. One form of such a partnership involves two companies agreeing to each build to cover a certain geographic area and then "cross-selling" NGA products to each other in order to access customers within their partner's footprint (NGA coverage area). This is tantamount to a bi-lateral wholesale partnership. The concept of Fixed Access Network Sharing (FANS) is to address the possibility of sharing infrastructure with a high degree of flexibility for all network operators involved. By providing greater configuration control over the NGA network infrastructure, the service provider has a greater ability to define the network and hence to define their product capabilities at the active layer. This gives the service provider partners greater product development autonomy plus the ability to differentiate from each other at the active network layer.
NASA Astrophysics Data System (ADS)
Hafner, K.; Davis, P.; Wilson, D.; Sumy, D.
2017-12-01
The Global Seismographic Network (GSN) recently received delivery of the next-generation Very Broadband (VBB) borehole sensors purchased through funding from the DOE. Deployment of these sensors will be underway during the late summer and fall of 2017, and they will eventually replace the aging KS54000 sensors at approximately one-third of the GSN network stations. We will present the latest methods of deploying these sensors in the existing deep boreholes. At some sites, emplacement in shallow boreholes might result in lower noise performance given the existing site conditions. In some cases shallow borehole installations may be adapted to vault stations (which make up two-thirds of the network) as a means of reducing tilt-induced signals on the horizontal components. The GSN is creating a prioritized list of equipment upgrades at selected stations with the ultimate goal of optimizing overall network data availability and noise performance. For an overview of the performance of the current GSN relative to a selected set of metrics, we are utilizing data quality metrics and Probability Density Functions (PDFs) generated by the IRIS Data Management Center's (DMC) MUSTANG (Modular Utility for Statistical Knowledge Gathering) and LASSO (Latest Assessment of Seismic Station Observations) tools. We will present our metric analysis of GSN performance in 2016 and show the improvements at GSN sites resulting from recent instrumentation and infrastructure upgrades.
CentNet—A deployable 100-station network for surface exchange research
NASA Astrophysics Data System (ADS)
Oncley, S.; Horst, T. W.; Semmer, S.; Militzer, J.; Maclean, G.; Knudson, K.
2014-12-01
Climate, air quality, atmospheric composition, surface hydrology, and ecological processes are directly affected by the Earth's surface. Complexity of this surface exists at multiple spatial scales, which complicates the understanding of these processes. NCAR/EOL currently provides a facility to the research community to make direct eddy-covariance flux observations to quantify surface-atmosphere interactions. However, just as model resolution has continued to increase, there is a need to increase the spatial density of flux measurements to capture the wide variety of scales that contribute to exchange processes close to the surface. NCAR/EOL has now developed the CentNet facility, which is envisioned to have on the order of 100 surface flux stations deployable for periods of months to years. Each station would measure standard meteorological variables, all components of the surface energy balance (including turbulence fluxes and radiation), atmospheric composition, and other quantities to characterize the surface. Thus, CentNet can support observational research in the biogeosciences, hydrology, urban meteorology, basic meteorology, and turbulence. CentNet has been designed to be adaptable to a wide variety of research problems while keeping operations manageable. Tower infrastructure has been designed to be lightweight, easily deployed, and with a minimal set-up footprint. CentNet uses sensor networks to increase spatial sampling at each station. The data system saves every sample on site to retain flexibility in data analysis. We welcome guidance on development and funding priorities as we build CentNet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nils Johnson; Joan Ogden
2010-12-31
In this final report, we describe research results from Phase 2 of a technical/economic study of fossil hydrogen energy systems with carbon dioxide (CO{sub 2}) capture and storage (CCS). CO{sub 2} capture and storage, or alternatively, CO{sub 2} capture and sequestration, involves capturing CO{sub 2} from large point sources and then injecting it into deep underground reservoirs for long-term storage. By preventing CO{sub 2} emissions into the atmosphere, this technology has significant potential to reduce greenhouse gas (GHG) emissions from fossil-based facilities in the power and industrial sectors. Furthermore, the application of CCS to power plants and hydrogen production facilities can reduce CO{sub 2} emissions associated with electric vehicles (EVs) and hydrogen fuel cell vehicles (HFCVs) and, thus, can also improve GHG emissions in the transportation sector. This research specifically examines strategies for transitioning to large-scale coal-derived energy systems with CCS for both hydrogen fuel production and electricity generation. A particular emphasis is on the development of spatially-explicit modeling tools for examining how these energy systems might develop in real geographic regions. We employ an integrated modeling approach that addresses all infrastructure components involved in the transition to these energy systems. The overall objective is to better understand the system design issues and economics associated with the widespread deployment of hydrogen and CCS infrastructure in real regions. Specific objectives of this research are to: Develop improved techno-economic models for all components required for the deployment of both hydrogen and CCS infrastructure, Develop novel modeling methods that combine detailed spatial data with optimization tools to explore spatially-explicit transition strategies, Conduct regional case studies to explore how these energy systems might develop in different regions of the United States, and Examine how the design and cost of coal-based H{sub 2} and CCS infrastructure depend on geography and location.
Infrastructure for collaborative science and societal applications in the Columbia River estuary
NASA Astrophysics Data System (ADS)
Baptista, António M.; Seaton, Charles; Wilkin, Michael P.; Riseman, Sarah F.; Needoba, Joseph A.; Maier, David; Turner, Paul J.; Kärnä, Tuomas; Lopez, Jesse E.; Herfort, Lydie; Megler, V. M.; McNeil, Craig; Crump, Byron C.; Peterson, Tawnya D.; Spitz, Yvette H.; Simon, Holly M.
2015-12-01
To meet societal needs, modern estuarine science needs to be interdisciplinary and collaborative, combine discovery with hypothesis testing, and be responsive to issues facing both regional and global stakeholders. Such an approach is best conducted with the benefit of data-rich environments, where information from sensors and models is openly accessible within convenient timeframes. Here, we introduce the operational infrastructure of one such data-rich environment, a collaboratory created to support (a) interdisciplinary research in the Columbia River estuary by the multi-institutional team of investigators of the Science and Technology Center for Coastal Margin Observation & Prediction and (b) the integration of scientific knowledge into regional decision making. Core components of the operational infrastructure are an observation network, a modeling system and a cyber-infrastructure, each of which is described. The observation network is anchored on an extensive array of long-term stations, many of them interdisciplinary, and is complemented by on-demand deployment of temporary stations and mobile platforms, often in coordinated field campaigns. The modeling system is based on finite-element unstructured-grid codes and includes operational and process-oriented simulations of circulation, sediments and ecosystem processes. The flow of information is managed through a dedicated cyber-infrastructure, conversant with regional and national observing systems.
Oceanographic Research Capacity in the US Virgin Islands
NASA Astrophysics Data System (ADS)
Jobsis, P.; Habtes, S. Y.
2016-02-01
The University of the Virgin Islands (UVI), a small HBCU with campuses on both St Thomas and St Croix, has a growing marine science department that is quickly increasing its capacity for oceanographic monitoring and research through VI-EPSCoR (the National Science Foundation's Experimental Program to Stimulate Competitive Research in the Virgin Islands) and associations with CariCOOS (the Caribbean Coastal Ocean Observing System). CariCOOS is managed through the University of Puerto Rico Mayaguez, with funding from NOAA's Integrated Ocean Observing System (IOOS). Over the past five years, two oceanographic data buoys have been deployed, increasing the real-time oceanographic data available for the northeastern Caribbean. In addition, researchers at UVI have deployed ADCPs and conducted CTD casts at relevant research sites as part of routine territorial monitoring programs. With VI-EPSCoR funding, UVI has developed an Institute for Geocomputational Analysis and Statistics (GeoCAS) to conduct geospatial analysis and to act as a data repository and hosting/serving center for research, environmental, and other relevant data. Much of the oceanographic data is available at www.caricoos.org and www.geocas.uvi.edu. As the marine research infrastructure at UVI continues to grow, the oceanographic and marine biology research program at the University's Center for Marine and Environmental Studies will continue to expand. This will benefit not only UVI researchers but also any researcher with interests in this region of the Caribbean.
NASA Astrophysics Data System (ADS)
Marcus, Kelvin
2014-06-01
The U.S Army Research Laboratory (ARL) has built a "Network Science Research Lab" to support research that aims to improve their ability to analyze, predict, design, and govern complex systems that interweave the social/cognitive, information, and communication network genres. Researchers at ARL and the Network Science Collaborative Technology Alliance (NS-CTA), a collaborative research alliance funded by ARL, conducted experimentation to determine if automated network monitoring tools and task-aware agents deployed within an emulated tactical wireless network could potentially increase the retrieval of relevant data from heterogeneous distributed information nodes. ARL and NS-CTA required the capability to perform this experimentation over clusters of heterogeneous nodes with emulated wireless tactical networks where each node could contain different operating systems, application sets, and physical hardware attributes. Researchers utilized the Dynamically Allocated Virtual Clustering Management System (DAVC) to address each of the infrastructure support requirements necessary in conducting their experimentation. The DAVC is an experimentation infrastructure that provides the means to dynamically create, deploy, and manage virtual clusters of heterogeneous nodes within a cloud computing environment based upon resource utilization such as CPU load, available RAM and hard disk space. The DAVC uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex private networks. Clusters created by the DAVC system can be utilized for software development, experimentation, and integration with existing hardware and software. The goal of this paper is to explore how ARL and the NS-CTA leveraged the DAVC to create, deploy and manage multiple experimentation clusters to support their experimentation goals.
WiFi RFID demonstration for resource tracking in a statewide disaster drill.
Cole, Stacey L; Siddiqui, Javeed; Harry, David J; Sandrock, Christian E
2011-01-01
To investigate the capabilities of Radio Frequency Identification (RFID) tracking of patients and medical equipment during a simulated disaster response scenario. RFID infrastructure was deployed at two small rural hospitals, in one large academic medical center and in two vehicles. Several item types from the mutual aid equipment list were selected for tracking during the demonstration. A central database server was installed at the UC Davis Medical Center (UCDMC) that collected RFID information from all constituent sites. The system was tested during a statewide disaster drill. During the drill, volunteers at UCDMC were selected to locate assets using the traditional method of locating resources and then using the RFID system. This study demonstrated the effectiveness of RFID infrastructure in real-time resource identification and tracking. Volunteers at UCDMC were able to locate assets substantially faster using RFID, demonstrating that real-time geolocation can be substantially more efficient and accurate than traditional manual methods. A mobile, Global Positioning System (GPS)-enabled RFID system was installed in a pediatric ambulance and connected to the central RFID database via secure cellular communication. This system is unique in that it provides for seamless region-wide tracking that adaptively uses and seamlessly integrates both outdoor cellular-based mobile tracking and indoor WiFi-based tracking. RFID tracking can provide a real-time picture of the medical situation across medical facilities and other critical locations, leading to a more coordinated deployment of resources. The RFID system deployed during this study demonstrated the potential to improve the ability to locate and track victims, healthcare professionals, and medical equipment during a region-wide disaster.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geveci, Berk
The purpose of the SDAV institute is to provide tools and expertise in scientific data management, analysis, and visualization to DOE’s application scientists. Our goal is to actively work with application teams to assist them in achieving breakthrough science, and to provide technical solutions in the data management, analysis, and visualization regimes that are broadly used by the computational science community. Over the last five years, members of our institute worked directly with application scientists and DOE leadership-class facilities, applying the best tools and technologies at our disposal, and we enhanced our tools based on input from scientists on their needs. Many of the applications we have been working with are based on connections with scientists established in previous years; however, we engaged additional scientists through our outreach activities, as well as application teams running on leading DOE computing systems. Our approach is to employ an evolutionary development and deployment process: first considering the application of existing tools, followed by the customization necessary for each particular application, and then deployment in real frameworks and infrastructures. The institute is organized into three areas, each with area leaders who track progress, engagement of application scientists, and results. The areas are: (1) Data Management, (2) Data Analysis, and (3) Visualization. Kitware has been involved in the Visualization area. This report covers Kitware's contributions over the last five years (February 2012 – February 2017). For details on the work performed by the SDAV institute as a whole, please see the SDAV final report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burman, K.; Olis, D.; Gevorgian, V.
2011-09-01
This report focuses on the economic and technical feasibility of integrating renewable energy technologies into the U.S. Virgin Islands transmission and distribution systems. The report includes three main areas of analysis: 1) the economics of deploying utility-scale renewable energy technologies on St. Thomas/St. John and St. Croix; 2) potential sites for installing roof- and ground-mount PV systems and wind turbines, and the impact renewable generation will have on the electrical subtransmission and distribution infrastructure; and 3) the feasibility of a 100- to 200-megawatt power interconnection of the Puerto Rico Electric Power Authority (PREPA), Virgin Islands Water and Power Authority (WAPA), and British Virgin Islands (BVI) grids via a submarine cable system.
2005-01-01
IT infrastructure necessary to become global IT powers but are hampered by the significant economic disparity between the rural and urban areas of...analogous to the plight of rural residents of the 1930s when rural electrification was being discussed as part of the New Deal. A disjointed national...slides entitled Promoting Broadband Deployment in Rural America. Available at: <http://www.ntia.doc.gov/ntiahome/speeches/2005/KY_01122005_files
ERIC Educational Resources Information Center
Tseng, Paul T. Y.; Yen, David C.; Hung, Yu-Chung; Wang, Nana C. F.
2008-01-01
The objective of this article is to explore the experience of reconciling the strategic information system (IS) management with the radical transition of the Information Technology (IT) infrastructure in Taiwan's Bureau of Foreign Trade (BOFT) between 1998 and 2003. This investigation will be beneficial for the implementation of IT projects, as…
Glass, Bob; Mathis, Mike; Cochran, Ron; Garback, John
2018-06-08
Take a ride on a new type of bus, fueled by hydrogen. These hydrogen-fueled buses are part of a Department of Energy-funded deployment of hydrogen-powered vehicles and fueling infrastructure at nine federal facilities across the country to demonstrate this market-ready advanced technology. Produced and leased by Ford Motor Company, the fleet consists of one 12-passenger bus and one nine-passenger bus. More information at: http://go.usa.gov/Tgr
Bytes: Weapons of Mass Disruption
2002-04-01
advances compound the problems of protecting complex global infrastructures from attacks. How should the U.S. integrate the many disparate...deploy and sustain military forces." According to the direst of information warfare theories, all computer systems are vulnerable to attack. The...[figure residue: spectrum-of-conflict graphic spanning crisis show of force, punitive strikes, armed intervention, regional conflict, regional war, global conventional war, and strategic nuclear war, with the IW & C2W area marked]
Personal Devices in Public Settings: Lessons Learned from an iPod Touch/iPad Project
ERIC Educational Resources Information Center
Crichton, Susan; Pegler, Karen; White, Duncan
2012-01-01
Our paper reports findings from a two-phase deployment of iPod Touch and iPad devices in a large, urban Canadian school board. The purpose of the study was to gain an understanding of the infrastructure required to support handheld devices in classrooms; the opportunities and challenges teachers face as they begin to use handheld devices for…
Long-Range Ballistic Missile Defense in Europe
2010-04-26
land-based configurations. • Phase 3 (2018 timeframe): Deploy improved area coverage in Europe against medium- and intermediate-range Iranian...military services. "I think that all our military programs should be managed through those regular processes," he said, and "that would include...10 interceptors itself would likely have comprised an area somewhat larger than a football field. The area of supporting infrastructure was likely
2011-02-01
almost entirely dependent on the national transmission grid . . . [which] is fragile, vulnerable, near its capacity limit, and outside of DOD control...has returned. A major factor in this resurgence has come from developing countries, where expressed and projected demands for electricity are...rapidly growing and limited infrastructural and investment capacity generates interest in reactors that can be deployed rapidly and incrementally.
Architectural Implications of Cloud Computing
2011-10-24
Public Cloud...Infrastructure-as-a-Service (IaaS), Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS): cloud computing types, based on type of...Software-as-a-Service (SaaS): a model of software deployment in which a third-party...and System Solutions (RTSS) Program. Her current interests and projects are in service-oriented architecture (SOA), cloud computing, and context
NASA Astrophysics Data System (ADS)
Balaji, Bharathan
Commercial buildings consumed 19% of energy in the US as of 2010, and traditionally their energy use has been optimized through improved equipment efficiency and retrofits. Beyond improved hardware and infrastructure, there is tremendous potential for reducing energy use through better monitoring and operation. We present several applications that we developed and deployed to support our thesis that building energy use can be reduced through sensing, monitoring, and optimization software that modulates use of building subsystems, including HVAC. We focus on HVAC systems as these constitute 48-55% of building energy use. Specifically, for sensing, we describe an energy apportionment system that estimates real-time zonal HVAC power consumption by analyzing existing sensor information. With this energy breakdown, we can measure the effectiveness of optimization solutions and identify inefficiencies. Central to energy efficiency improvement is determining human occupancy in buildings, but this information is often unavailable or expensive to obtain through wide-scale sensor deployment. We present our system that infers room-level occupancy inexpensively by leveraging existing WiFi infrastructure. Occupancy information can be used not only to directly control HVAC but also to infer the state of the building for predictive control. Building energy use is strongly influenced by human behavior, and timely feedback mechanisms can encourage energy-saving behavior. Occupants interact with HVAC through thermostats, which have been shown to be inadequate for thermal comfort. Building managers are responsible for incorporating energy efficiency measures, but our interviews reveal that they struggle to maintain efficiency due to a lack of analytical tools and contextual information. We present our software services that provide energy feedback to occupants and building managers, improve comfort with personalized control, and identify energy-wasting faults. For wide-scale deployment, such energy-saving software needs to be portable across multiple buildings. However, buildings consist of heterogeneous equipment and use inconsistent naming schemas, and developers need extensive domain knowledge to map sensor information to a standard format. To enable portability, we present an active learning algorithm that automates mapping building sensor metadata to a standard naming schema.
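The thesis abstract does not publish its occupancy-inference algorithm; the sketch below is a hedged illustration of the general idea of inferring room-level occupancy from existing WiFi infrastructure, counting distinct users associated to access points mapped to rooms. The AP-to-room mapping and the per-user deduplication rule are assumptions, not the system's actual method.

```python
from collections import defaultdict

def room_occupancy(associations, ap_to_room, dedupe_per_user=True):
    """Estimate room-level occupancy from WiFi access-point association logs.
    associations: iterable of (user_id, ap_id) pairs from the WiFi controller.
    ap_to_room: mapping of AP identifiers to the room each AP mostly covers."""
    seen = set()
    counts = defaultdict(int)
    for user, ap in associations:
        room = ap_to_room.get(ap)
        if room is None:
            continue  # AP not mapped to a room; skip
        if dedupe_per_user and (user, room) in seen:
            continue  # count each user once per room (people carry multiple devices)
        seen.add((user, room))
        counts[room] += 1
    return dict(counts)

logs = [("u1", "ap-3a"), ("u1", "ap-3a"), ("u2", "ap-3a"), ("u3", "ap-9c")]
print(room_occupancy(logs, {"ap-3a": "Room 3120", "ap-9c": "Room 9001"}))
```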
Impact of different cloud deployments on real-time video applications for mobile video cloud users
NASA Astrophysics Data System (ADS)
Khan, Kashif A.; Wang, Qi; Luo, Chunbo; Wang, Xinheng; Grecos, Christos
2015-02-01
The latest trend of accessing mobile cloud services through wireless network connectivity has grown globally among both entrepreneurs and home end users. Although existing public cloud service vendors such as Google and Microsoft Azure provide on-demand cloud services at affordable cost for mobile users, there are still a number of challenges in achieving high-quality mobile cloud-based video applications, especially due to the bandwidth-constrained and error-prone mobile network connectivity, which is the communication bottleneck for end-to-end video delivery. In addition, existing accessible cloud networking architectures differ in terms of their implementation, services, resources, storage, pricing, support and so on, and these differences have varied impacts on the performance of cloud-based real-time video applications. Nevertheless, these challenges and impacts have not been thoroughly investigated in the literature. In our previous work, we implemented a mobile cloud network model that integrates localized and decentralized cloudlets (mini-clouds) and wireless mesh networks. In this paper, we deploy a real-time framework consisting of various existing Internet cloud networking architectures (Google Cloud, Microsoft Azure and Eucalyptus Cloud) and a cloudlet based on Ubuntu Enterprise Cloud over wireless mesh networking technology for mobile cloud end users. It is noted that the increasing trend of accessing real-time video streaming over HTTP/HTTPS is gaining popularity among both research and industrial communities to leverage the existing web services and HTTP infrastructure in the Internet. To study the performance under different deployments using different public and private cloud service providers, we employ real-time video streaming over the HTTP/HTTPS standard, and conduct experimental evaluation and in-depth comparative analysis of the impact of different deployments on the quality of service for mobile video cloud users. Empirical results are presented and discussed to quantify and explain the different impacts resulting from the various cloud deployments, video application settings, wireless/mobile network settings, and user mobility. Additionally, this paper analyses the advantages, disadvantages, limitations and optimization techniques of various cloud networking deployments, in particular the cloudlet approach compared with the Internet cloud approach, with recommendations for optimized deployments highlighted. Finally, federated clouds and inter-cloud collaboration challenges and opportunities are discussed in the context of supporting real-time video applications for mobile users.
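The paper's measurement framework is not reproduced in the abstract, but the basic per-deployment comparison it performs can be sketched as timing HTTP segment fetches against each cloud endpoint. The URLs below are placeholders, not the endpoints used in the study.

```python
import time
import urllib.request

def time_segment_fetch(url, timeout=10):
    """Download one HTTP video segment and report latency and throughput."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        payload = resp.read()
    elapsed = time.monotonic() - start
    return {"url": url, "bytes": len(payload),
            "seconds": round(elapsed, 3),
            "mbit_per_s": round(len(payload) * 8 / elapsed / 1e6, 2)}

# Hypothetical segment URLs, one per deployment under comparison.
for endpoint in ["http://cloudlet.local/seg0001.ts",
                 "http://example-public-cloud.net/seg0001.ts"]:
    try:
        print(time_segment_fetch(endpoint))
    except OSError as err:
        print(endpoint, "unreachable:", err)
```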
Using container orchestration to improve service management at the RAL Tier-1
NASA Astrophysics Data System (ADS)
Lahiff, Andrew; Collier, Ian
2017-10-01
In recent years container orchestration has been emerging as a means of gaining many potential benefits compared to a traditional static infrastructure, such as increased utilisation through multi-tenancy, improved availability due to self-healing, and the ability to handle changing loads due to elasticity and auto-scaling. To this end we have been investigating migrating services at the RAL Tier-1 to an Apache Mesos cluster. In this model the concept of individual machines is abstracted away and services are run in containers on a cluster of machines, managed by schedulers, enabling a high degree of automation. Here we describe Mesos and the infrastructure deployed at RAL, and present in detail the example of running a batch farm on Mesos.
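The abstract mentions schedulers on Mesos without naming one; purely as an illustration, the sketch below submits a containerized batch worker to a Marathon scheduler (one commonly run on Mesos) via its REST API. The host, app ID, container image, and resource figures are all assumptions, not RAL's configuration.

```python
import json
import urllib.request

# Hypothetical Marathon endpoint; the abstract does not name the scheduler host.
MARATHON = "http://marathon.example.org:8080"

app = {
    "id": "/batch/worker",
    "cpus": 1.0,
    "mem": 2048,       # MB per container
    "instances": 10,   # the scheduler places containers across the Mesos cluster
    "container": {
        "type": "DOCKER",
        "docker": {"image": "example/batch-worker:latest"},
    },
}

req = urllib.request.Request(
    MARATHON + "/v2/apps",            # Marathon's app-creation endpoint
    data=json.dumps(app).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```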
Algorithms for Lightweight Key Exchange
Santonja, Juan; Zamora, Antonio
2017-01-01
Public-key cryptography is too slow for general-purpose encryption, so most applications limit its use as much as possible. Some secure protocols, especially those that enable forward secrecy, make much heavier use of public-key cryptography, increasing the demand for lightweight cryptosystems that can be implemented on low-powered or mobile devices. These performance requirements are even more significant in critical infrastructure and emergency scenarios, where peer-to-peer networks are deployed for increased availability and resiliency. We benchmark several public-key key-exchange algorithms, identifying those best suited to the requirements of critical infrastructure and emergency applications, and propose a security framework based on these algorithms, studying its application to decentralized node or sensor networks. PMID:28654006
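The paper's benchmark code is not included in the abstract; the sketch below shows the kind of micro-benchmark it describes, timing two common key-exchange primitives with the Python `cryptography` package. The iteration count and the choice of P-256 versus X25519 are assumptions, not the paper's actual test matrix.

```python
import time
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def bench(label, fn, n=200):
    """Time n runs of fn and report the mean cost per operation."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    print(f"{label}: {(time.perf_counter() - start) / n * 1e3:.3f} ms/op")

def ecdh_p256():
    a = ec.generate_private_key(ec.SECP256R1())
    b = ec.generate_private_key(ec.SECP256R1())
    a.exchange(ec.ECDH(), b.public_key())   # derive the shared secret

def x25519():
    a, b = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    a.exchange(b.public_key())

bench("ECDH P-256 (keygen + exchange)", ecdh_p256)
bench("X25519     (keygen + exchange)", x25519)
```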
A Simple Technique for Securing Data at Rest Stored in a Computing Cloud
NASA Astrophysics Data System (ADS)
Sedayao, Jeff; Su, Steven; Ma, Xiaohao; Jiang, Minghao; Miao, Kai
"Cloud Computing" offers many potential benefits, including cost savings, the ability to deploy applications and services quickly, and the ease of scaling those application and services once they are deployed. A key barrier for enterprise adoption is the confidentiality of data stored on Cloud Computing Infrastructure. Our simple technique implemented with Open Source software solves this problem by using public key encryption to render stored data at rest unreadable by unauthorized personnel, including system administrators of the cloud computing service on which the data is stored. We validate our approach on a network measurement system implemented on PlanetLab. We then use it on a service where confidentiality is critical - a scanning application that validates external firewall implementations.
Carbon Lock-In: Barriers to the Deployment of Climate Change Mitigation Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapsa, Melissa Voss; Brown, Marilyn A.
The United States shares with many other countries the objective of stabilizing greenhouse gas (GHG) concentrations in the Earth's atmosphere at a level that would prevent dangerous interference with the climate system. Many believe that accelerating the pace of technology improvement and deployment could significantly reduce the cost of achieving this goal. The critical role of new technologies is underscored by the fact that most anthropogenic greenhouse gases emitted over the next century will come from equipment and infrastructure built in the future. As a result, new technologies and fuels have the potential to transform the nation's energy system while meeting climate change as well as energy security and other goals.
Pervasive Monitoring—An Intelligent Sensor Pod Approach for Standardised Measurement Infrastructures
Resch, Bernd; Mittlboeck, Manfred; Lippautz, Michael
2010-01-01
Geo-sensor networks have traditionally been built up in closed monolithic systems, thus limiting trans-domain usage of real-time measurements. This paper presents the technical infrastructure of a standardised embedded sensing device, which has been developed in the course of the Live Geography approach. The sensor pod implements data provision standards of the Sensor Web Enablement initiative, including an event-based alerting mechanism and location-aware Complex Event Processing functionality for detection of threshold transgression and quality assurance. The goal of this research is that the resultant highly flexible sensing architecture will bring sensor network applications one step further towards the realisation of the vision of a “digital skin for planet earth”. The developed infrastructure can potentially have far-reaching impacts on sensor-based monitoring systems through the deployment of ubiquitous and fine-grained sensor networks. This in turn allows for the straightforward use of live sensor data in existing spatial decision support systems to enable better-informed decision-making. PMID:22163537
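The sensor pod's Complex Event Processing logic is not listed in the abstract; as a minimal illustration of event-based threshold alerting with a simple quality-assurance guard, the generator below fires only after consecutive exceedances, suppressing single-sample spikes. The sensor names, limits, and consecutive-sample rule are assumptions.

```python
def threshold_alerts(readings, limits, min_consecutive=2):
    """Yield an alert once a sensor exceeds its limit for min_consecutive
    successive readings (a crude guard against single-sample spikes)."""
    streak = {}
    for sensor, value in readings:
        if value > limits.get(sensor, float("inf")):
            streak[sensor] = streak.get(sensor, 0) + 1
            if streak[sensor] == min_consecutive:
                yield {"sensor": sensor, "value": value,
                       "event": "threshold exceeded"}
        else:
            streak[sensor] = 0  # reading back in range resets the streak

readings = [("pm10", 48.0), ("pm10", 62.5), ("pm10", 64.1), ("pm10", 39.0)]
for alert in threshold_alerts(readings, limits={"pm10": 50.0}):
    print(alert)  # fires on the second consecutive exceedance
```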
Schilling, Lisa M.; Kwan, Bethany M.; Drolshagen, Charles T.; Hosokawa, Patrick W.; Brandt, Elias; Pace, Wilson D.; Uhrich, Christopher; Kamerick, Michael; Bunting, Aidan; Payne, Philip R.O.; Stephens, William E.; George, Joseph M.; Vance, Mark; Giacomini, Kelli; Braddy, Jason; Green, Mika K.; Kahn, Michael G.
2013-01-01
Introduction: Distributed Data Networks (DDNs) offer infrastructure solutions for sharing electronic health data from across disparate data sources to support comparative effectiveness research. Data sharing mechanisms must address technical and governance concerns stemming from network security and data disclosure laws and best practices, such as HIPAA. Methods: The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) deploys TRIAD grid technology, a common data model, detailed technical documentation, and custom software for data harmonization to facilitate data sharing in collaboration with stakeholders in the care of safety net populations. Data sharing partners host TRIAD grid nodes containing harmonized clinical data within their internal or hosted network environments. Authorized users can use a central web-based query system to request analytic data sets. Discussion: SAFTINet DDN infrastructure achieved a number of data sharing objectives, including scalable and sustainable systems for ensuring harmonized data structures and terminologies and secure distributed queries. Initial implementation challenges were resolved through iterative discussions, development and implementation of technical documentation, governance, and technology solutions. PMID:25848567
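SAFTINet's TRIAD-based query interfaces are not detailed in the abstract; purely as an illustration of the central query pattern it describes (one query fanned out to partner-hosted nodes, with harmonized rows pooled centrally), the sketch below POSTs the same criteria to hypothetical node endpoints. The URLs, payload format, and transport are assumptions, not TRIAD's actual protocol.

```python
import concurrent.futures
import json
import urllib.request

# Hypothetical data-sharing partner endpoints, for illustration only.
NODES = ["https://node-a.example.org/query",
         "https://node-b.example.org/query"]

def query_node(url, criteria, timeout=30):
    """POST one query to a partner-hosted node and return its row set."""
    req = urllib.request.Request(
        url, data=json.dumps(criteria).encode(),
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)

def federated_query(criteria):
    """Fan the same query out to every node and pool the harmonized rows."""
    rows = []
    with concurrent.futures.ThreadPoolExecutor() as pool:
        for result in pool.map(lambda u: query_node(u, criteria), NODES):
            rows.extend(result)
    return rows
```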