Key Management Infrastructure Increment 2 (KMI Inc 2)
2016-03-01
2016 Major Automated Information System Annual Report: Key Management Infrastructure Increment 2 (KMI Inc 2), Defense Acquisition Management...PB - President's Budget; RDT&E - Research, Development, Test, and Evaluation; SAE - Service Acquisition Executive; TBD - To Be Determined; TY - Then...Assigned: April 6, 2015. Program Information: Program Name - Key Management Infrastructure Increment 2 (KMI Inc 2); DoD Component - DoD. The acquiring DoD
NGNP Infrastructure Readiness Assessment: Consolidation Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brian K Castle
2011-02-01
The Next Generation Nuclear Plant (NGNP) project supports the development, demonstration, and deployment of high temperature gas-cooled reactors (HTGRs). The NGNP project is being reviewed by the Nuclear Energy Advisory Council (NEAC) to provide input to the DOE, who will make a recommendation to the Secretary of Energy on whether or not to continue with Phase 2 of the NGNP project. The NEAC review will be based, in part, on the infrastructure readiness assessment, which is an assessment of industry's current ability to provide specified components for the FOAK NGNP, meet quality assurance requirements, transport components, have the necessary workforce in place, and have the necessary construction capabilities. AREVA and Westinghouse were contracted to perform independent assessments of industry's capabilities because of their experience with nuclear supply chains, which is a result of their experiences with the EPR and AP-1000 reactors. Both vendors produced infrastructure readiness assessment reports that identified key components and categorized these components into three groups based on their ability to be deployed in the FOAK plant. The NGNP project has several programs that are developing key components and capabilities. For these components, the NGNP project has provided input to properly assess the infrastructure readiness.
Design-build agreements : a case study review of the included handover requirements.
DOT National Transportation Integrated Search
2009-04-01
Road infrastructure is a key component of any region's transportation system. It allows unprecedented levels of mobility, accessibility, and economic growth. On the other hand, the cost associated with inadequate road infrastructure can amount ...
A Connected History of Health and Education: Learning Together toward a Better City
ERIC Educational Resources Information Center
Howard, Joanne; Howard, Diane; Dotson, Ebbin
2015-01-01
The infrastructure, financial, and human resource histories of health and education are offered as key components of future strategic planning initiatives in learning cities, and 10 key components of strategic planning initiatives designed to enhance the health and wealth of citizens of learning cities are discussed.
Proof of concept for using unmanned aerial vehicles for high mast pole and bridge inspections.
DOT National Transportation Integrated Search
2015-06-01
Bridges and high mast luminaires (HMLs) are key components of transportation infrastructures. Effective inspection processes are crucial to maintain the structural integrity of these components. The most common approach for inspections is visual ...
Measuring infrastructure: A key step in program evaluation and planning
Schmitt, Carol L.; Glasgow, LaShawn; Lavinghouze, S. Rene; Rieker, Patricia P.; Fulmer, Erika; McAleer, Kelly; Rogers, Todd
2016-01-01
State tobacco prevention and control programs (TCPs) require a fully functioning infrastructure to respond effectively to the Surgeon General’s call for accelerating the national reduction in tobacco use. The literature describes common elements of infrastructure; however, a lack of valid and reliable measures has made it difficult for program planners to monitor relevant infrastructure indicators and address observed deficiencies, or for evaluators to determine the association among infrastructure, program efforts, and program outcomes. The Component Model of Infrastructure (CMI) is a comprehensive, evidence-based framework that facilitates TCP program planning efforts to develop and maintain their infrastructure. Measures of CMI components were needed to evaluate the model’s utility and predictive capability for assessing infrastructure. This paper describes the development of CMI measures and results of a pilot test with nine state TCP managers. Pilot test findings indicate that the tool has good face validity and is clear and easy to follow. The CMI tool yields data that can enhance public health efforts in a funding-constrained environment and provides insight into program sustainability. Ultimately, the CMI measurement tool could facilitate better evaluation and program planning across public health programs. PMID:27037655
Economic Impact Analysis of Short Line Railroads : Research Project Capsule
DOT National Transportation Integrated Search
2012-10-01
Rail transportation is of vital importance to the state of Louisiana and is a key component of the state's business infrastructure, supporting agricultural, petroleum, chemical, and manufacturing industries across the state. Short line (C...
2011-01-01
Background Ontologies are increasingly used to structure and semantically describe entities of domains, such as genes and proteins in life sciences. Their increasing size and the high frequency of updates resulting in a large set of ontology versions necessitates efficient management and analysis of this data. Results We present GOMMA, a generic infrastructure for managing and analyzing life science ontologies and their evolution. GOMMA utilizes a generic repository to uniformly and efficiently manage ontology versions and different kinds of mappings. Furthermore, it provides components for ontology matching, and determining evolutionary ontology changes. These components are used by analysis tools, such as the Ontology Evolution Explorer (OnEX) and the detection of unstable ontology regions. We introduce the component-based infrastructure and show analysis results for selected components and life science applications. GOMMA is available at http://dbs.uni-leipzig.de/GOMMA. Conclusions GOMMA provides a comprehensive and scalable infrastructure to manage large life science ontologies and analyze their evolution. Key functions include a generic storage of ontology versions and mappings, support for ontology matching and determining ontology changes. The supported features for analyzing ontology changes are helpful to assess their impact on ontology-dependent applications such as for term enrichment. GOMMA complements OnEX by providing functionalities to manage various versions of mappings between two ontologies and allows combining different match approaches. PMID:21914205
Workload Reduction in Online Courses: Getting Some Shuteye
ERIC Educational Resources Information Center
Dunlap, Joanna C.
2005-01-01
Instructors are a key component of any successful facilitated, asynchronous online course. They are tasked with providing the infrastructure for learning; modeling effective participation, collaboration, and learning strategies; monitoring and assessing learning and providing feedback, remediation, and grades; troubleshooting and resolving…
A primer on precision medicine informatics.
Sboner, Andrea; Elemento, Olivier
2016-01-01
In this review, we describe key components of a computational infrastructure for a precision medicine program that is based on clinical-grade genomic sequencing. Specific aspects covered in this review include software components and hardware infrastructure, reporting, integration into Electronic Health Records for routine clinical use and regulatory aspects. We emphasize informatics components related to reproducibility and reliability in genomic testing, regulatory compliance, traceability and documentation of processes, integration into clinical workflows, privacy requirements, prioritization and interpretation of results to report based on clinical needs, rapidly evolving knowledge base of genomic alterations and clinical treatments and return of results in a timely and predictable fashion. We also seek to differentiate between the use of precision medicine in germline and cancer. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Electronic Voting Protocol Using Identity-Based Cryptography.
Gallegos-Garcia, Gina; Tapia-Recillas, Horacio
2015-01-01
Electronic voting protocols proposed to date meet their properties based on Public Key Cryptography (PKC), which offers high flexibility through key agreement protocols and authentication mechanisms. However, when PKC is used, it is necessary to implement a Certification Authority (CA) to provide certificates which bind public keys to entities and enable verification of such public key bindings. Consequently, the components of the protocol increase notably. An alternative is to use Identity-Based Encryption (IBE). With this kind of cryptography, it is possible to have all the benefits offered by PKC without the need for certificates or all the core components of a Public Key Infrastructure (PKI). Considering the aforementioned, in this paper we propose an electronic voting protocol which meets the privacy and robustness properties by using bilinear maps.
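To make the contrast concrete: in identity-based encryption the public key is derived directly from the voter's identity string, so no certificate is needed to bind key and identity. The classic Boneh-Franklin construction is sketched below as one well-known example of IBE built on bilinear maps; it is illustrative only and not necessarily the scheme used in this protocol.

```latex
% Boneh-Franklin IBE (illustrative sketch; the voting protocol may use a different construction).
% Groups G1, GT of prime order q, bilinear map e : G1 x G1 -> GT, generator P,
% hash functions H1 : {0,1}* -> G1 and H2 : GT -> {0,1}^n.
\begin{align*}
\text{Setup:}   &\quad s \in_R \mathbb{Z}_q^{*},\qquad P_{\mathrm{pub}} = sP\\
\text{Extract:} &\quad d_{\mathrm{ID}} = s\,H_1(\mathrm{ID})\\
\text{Encrypt:} &\quad r \in_R \mathbb{Z}_q^{*},\qquad C = (U, V) = \bigl(rP,\; M \oplus H_2\!\left(e(H_1(\mathrm{ID}), P_{\mathrm{pub}})^{r}\right)\bigr)\\
\text{Decrypt:} &\quad M = V \oplus H_2\!\left(e(d_{\mathrm{ID}}, U)\right)
\end{align*}
% Decryption is correct because e(d_ID, U) = e(s H1(ID), rP) = e(H1(ID), P)^{sr} = e(H1(ID), P_pub)^r.
```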
Quantum metropolitan optical network based on wavelength division multiplexing.
Ciurana, A; Martínez-Mateo, J; Peev, M; Poppe, A; Walenta, N; Zbinden, H; Martín, V
2014-01-27
Quantum Key Distribution (QKD) is maturing quickly. However, the current approaches to its application in optical networks make it an expensive technology. QKD networks deployed to date are designed as a collection of point-to-point, dedicated QKD links where non-neighboring nodes communicate using the trusted repeater paradigm. We propose a novel optical network model in which QKD systems share the communication infrastructure by wavelength multiplexing their quantum and classical signals. The routing is done using optical components within a metropolitan area which allows for a dynamically any-to-any communication scheme. Moreover, it resembles a commercial telecom network, takes advantage of existing infrastructure and utilizes commercial components, allowing for an easy, cost-effective and reliable deployment.
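As a rough illustration of how wavelength choice alone can route a quantum channel to any destination, the sketch below assumes the cyclic routing property commonly attributed to an N x N arrayed waveguide grating (a signal entering input port i on wavelength channel w exits output port (i + w) mod N); the actual component layout and channel plan of the proposed network may differ.

```python
# Illustrative sketch only: assumes the common cyclic-AWG convention that a signal entering
# input port i on wavelength channel w exits output port (i + w) mod N.
N = 8  # hypothetical number of access nodes in the metro network

def channel(src: int, dst: int, n: int = N) -> int:
    """Wavelength channel src must transmit on so the AWG delivers its signal to dst."""
    return (dst - src) % n

# Any-to-any connectivity check: every source reaches every destination, and the signals
# arriving at a given output port all sit on distinct wavelengths (no collisions).
for dst in range(N):
    incoming = {channel(src, dst) for src in range(N) if src != dst}
    assert len(incoming) == N - 1
print(f"conflict-free any-to-any plan for {N} nodes using {N} wavelength channels")
```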
1988-01-01
to reestablish connectivity for governmental users on a damaged network in...telephone call originates as an electrical current at a user's home or business and travels to a telephone switching office over a local loop of copper...infrastructure. HISTORICAL PERSPECTIVE A timeline of key events with respect to the two key study components - fiber-optics communications
Public key infrastructure for DOE security research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aiken, R.; Foster, I.; Johnston, W.E.
This document summarizes the Department of Energy's Second Joint Energy Research/Defense Programs Security Research Workshop. The workshop built on the results of the first Joint Workshop, which reviewed security requirements represented in a range of mission-critical ER and DP applications, discussed commonalities and differences in ER/DP requirements and approaches, and identified an integrated common set of security research priorities. One significant conclusion of the first workshop was that progress in a broad spectrum of DOE-relevant security problems and applications could best be addressed through public-key cryptography based systems, and therefore depended upon the existence of a robust, broadly deployed public-key infrastructure. Hence, public-key infrastructure ("PKI") was adopted as a primary focus for the second workshop. The Second Joint Workshop covered a range of DOE security research and deployment efforts, as well as summaries of the state of the art in various areas relating to public-key technologies. Key findings were that a broad range of DOE applications can benefit from security architectures and technologies built on a robust, flexible, widely deployed public-key infrastructure; that there exists a collection of specific requirements for missing or undeveloped PKI functionality, together with a preliminary assessment of how these requirements can be met; that, while commercial developments can be expected to provide many relevant security technologies, there are important capabilities that commercial developments will not address, due to the unique scale, performance, diversity, distributed nature, and sensitivity of DOE applications; and that DOE should encourage and support research activities intended to increase understanding of security technology requirements, and to develop critical components not forthcoming from other sources in a timely manner.
A Resource Guide Identifying Technology Tools for Schools. Appendix
ERIC Educational Resources Information Center
Fox, Christine; Jones, Rachel
2009-01-01
SETDA and NASTID's "Technology Tools for Schools Resource Guide" provides definitions of key technology components and relevant examples, where appropriate as a glossary for educators. The guide also presents essential implementation and infrastructure considerations that decision makers should think about when implementing technology in schools.…
The National Geospatial Technical Operations Center
Craun, Kari J.; Constance, Eric W.; Donnelly, Jay; Newell, Mark R.
2009-01-01
The United States Geological Survey (USGS) National Geospatial Technical Operations Center (NGTOC) provides geospatial technical expertise in support of the National Geospatial Program in its development of The National Map, National Atlas of the United States, and implementation of key components of the National Spatial Data Infrastructure (NSDI).
Trees and vegetation can be key components of urban green infrastructure and green spaces such as parks and residential yards. Large trees, characterized by broad canopies, and high leaf and stem volumes, can intercept a substantial amount of stormwater while promoting evapotrans...
2008-06-19
ground troop component of a deployed contingency, and not a stationary infrastructure. With respect to fast-moving vehicles and aircraft, troops...the rapidly-moving user. In fact, the Control Group users could have been randomly assigned the Stationary, Sea, or Ground Mobility Category...additional re-keying on the non-stationary users, just as they induce no re-keying on the Stationary users (assuming those fast-moving aircraft have the
Case management information systems: how to put the pieces together now and beyond year 2000.
Matthews, Pamela
2002-01-01
The case management process is a critical management and operational component in the delivery of customer services across the patient care continuum. Case management has transcended time and will continue to be a viable infrastructure process for successful organizations in the future. A key component of the case management infrastructure is information systems and technology support. Case management challenges include effective deployment and use of systems and technology. As more sophisticated, integrated systems are made available, case managers can use these tools to continue to expand effectively beyond the patient's episodic event to provide greater levels of cradle-to-grave management of healthcare. This article explores methods for defining case management system needs and identifying automation options available to the case manager.
77 FR 42704 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-20
... Vision Sensors, 12 AN/APG-78 Fire Control Radars (FCR) with Radar Electronics Unit (LONGBOW component... Target Acquisition and Designation Sight, 27 AN/AAR-11 Modernized Pilot Night Vision Sensors, 12 AN/APG... enhance the protection of key oil and gas infrastructure and platforms which are vital to U.S. and western...
NASA Astrophysics Data System (ADS)
Khripko, Elena
2017-10-01
In the present article we study the issues of organizational resistance to reengineering of business processes in the construction of transport infrastructure. Reengineering in a transport-sector company is, first and foremost, an innovative component of business strategy. We analyze the choice of forward and reverse reengineering tools and the terms of their application in connection with organizational resistance. Reengineering is defined taking into account four aspects: fundamentality, radicality, abruptness, and business process. We describe the stages of reengineering and analyze key requirements for newly created business processes.
Managing Sustainable Data Infrastructures: The Gestalt of EOSDIS
NASA Technical Reports Server (NTRS)
Behnke, Jeanne; Lowe, Dawn; Lindsay, Francis; Lynnes, Chris; Mitchell, Andrew
2016-01-01
EOSDIS epitomizes a System of Systems, whose many varied and distributed parts are integrated into a single, highly functional organized science data system. A distributed architecture was adopted to ensure discipline-specific support for the science data, while also leveraging standards and establishing policies and tools to enable interdisciplinary research and analysis across multiple scientific instruments. The EOSDIS is composed of system elements such as geographically distributed archive centers used to manage the stewardship of data. The infrastructure consists of underlying capabilities and connections that enable the primary system elements to function together. For example, one key infrastructure component is the common metadata repository, which enables discovery of all data within the EOSDIS system. EOSDIS employs processes and standards to ensure partners can work together effectively and provide coherent services to users.
Advancing Mental Health Research: Washington University's Center for Mental Health Services Research
ERIC Educational Resources Information Center
Proctor, Enola K.; McMillen, Curtis; Haywood, Sally; Dore, Peter
2008-01-01
Research centers have become a key component of the research infrastructure in schools of social work, including the George Warren Brown School of Social Work at Washington University. In 1993, that school's Center for Mental Health Services Research (CMHSR) received funding from the National Institute of Mental Health (NIMH) as a Social Work…
ERIC Educational Resources Information Center
Anagnostopoulos, Dorothea; Rutledge, Stacey; Bali, Valentina
2013-01-01
This article examines how SEAs in three states designed, installed, and operated statewide, longitudinal student information systems (SLSIS). SLSIS track individual students' progress in K-12 schools, college, and beyond and link it to individual schools and teachers. They are key components of the information infrastructure of test-based…
Fisher, Ronald E; Norman, Michael
2010-07-01
The US Department of Homeland Security (DHS) is developing indices to better assist in the risk management of critical infrastructures. The first of these indices is the Protective Measures Index - a quantitative index that measures overall protection across component categories: physical security, security management, security force, information sharing, protective measures and dependencies. The Protective Measures Index, which can also be recalculated as the Vulnerability Index, is a way to compare differing protective measures (e.g. fence versus security training). The second of these indices is the Resilience Index, which assesses a site's resilience and consists of three primary components: robustness, resourcefulness and recovery. The third index is the Criticality Index, which assesses the importance of a facility. The Criticality Index includes economic, human, governance and mass evacuation impacts. The Protective Measures Index, Resilience Index and Criticality Index are being developed as part of the Enhanced Critical Infrastructure Protection initiative that DHS protective security advisers implement across the nation at critical facilities. This paper describes two core themes: determination of the vulnerability, resilience and criticality of a facility, and comparison of the indices at different facilities.
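The abstract does not disclose the DHS scoring or weighting scheme, so the sketch below only illustrates the general mechanics of rolling component-category scores up into a single index; the category weights, scores, and the recasting of the index as a vulnerability measure are hypothetical.

```python
# Hypothetical illustration of aggregating component-category scores into a single
# Protective Measures Index; the actual DHS weights and scoring are not given in the abstract.
categories = {
    "physical security":   (0.25, 72.0),
    "security management": (0.20, 65.0),
    "security force":      (0.20, 80.0),
    "information sharing": (0.10, 55.0),
    "protective measures": (0.15, 60.0),
    "dependencies":        (0.10, 70.0),
}

pmi = sum(weight * score for weight, score in categories.values())
# The abstract notes the PMI can be recalculated as a Vulnerability Index;
# 100 - PMI is used here purely as a simple stand-in for that recalculation.
vulnerability_index = 100.0 - pmi
print(f"Protective Measures Index: {pmi:.1f}, Vulnerability Index: {vulnerability_index:.1f}")
```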
A Cloud-based Infrastructure and Architecture for Environmental System Research
NASA Astrophysics Data System (ADS)
Wang, D.; Wei, Y.; Shankar, M.; Quigley, J.; Wilson, B. E.
2016-12-01
The present availability of high-capacity networks, low-cost computers and storage devices, and the widespread adoption of hardware virtualization and service-oriented architecture provide a great opportunity to enable data and computing infrastructure sharing between closely related research activities. By taking advantage of these approaches, along with the world-class high-performance computing and data infrastructure located at Oak Ridge National Laboratory, a cloud-based infrastructure and architecture has been developed to efficiently deliver essential data and informatics services and utilities to the environmental system research community, and will provide unique capabilities that allow terrestrial ecosystem research projects to share their software utilities (tools), data and even data submission workflows in a straightforward fashion. The infrastructure will minimize large disruptions to current project-based data submission workflows for better acceptance by existing projects, since many ecosystem research projects already have their own requirements or preferences for data submission and collection. The infrastructure will eliminate scalability problems with current project silos by providing unified data services and infrastructure. The infrastructure consists of two key components: (1) a collection of configurable virtual computing environments and user management systems that expedite data submission and collection from the environmental system research community, and (2) scalable data management services and systems, originated and developed by ORNL data centers.
Towards Large-Scale, Non-Destructive Inspection of Concrete Bridges
NASA Astrophysics Data System (ADS)
Mahmoud, A.; Shah, A. H.; Popplewell, N.
2005-04-01
It is estimated that the rehabilitation of deteriorating engineering infrastructure in the harsh North American environment could cost billions of dollars. Bridges are key infrastructure components for surface transportation. Steel-free and fibre-reinforced concrete is used increasingly nowadays to circumvent the vulnerability of steel rebar to corrosion. Existing steel-free and fibre-reinforced bridges may experience extensive surface-breaking cracks that need to be characterized without incurring further damage. In the present study, a method that uses Lamb elastic wave propagation to non-destructively characterize cracks in plain as well as fibre-reinforced concrete is investigated both numerically and experimentally. Numerical and experimental data are corroborated with good agreement.
Parrish, Richard H.
2015-01-01
Numerous gaps in the current medication use system impede complete transmission of electronically identifiable and standardized extemporaneous formulations as well as a uniform approach to medication therapy management (MTM) for paediatric patients. The Pharmacy Health Information Technology Collaborative (Pharmacy HIT) identified six components that may have direct importance for pharmacy related to medication use in children. This paper will discuss key positions within the information technology infrastructure (HIT) where an electronic repository for the medication management of paediatric patients’ compounded non-sterile products (pCNP) and care provision could be housed optimally to facilitate and maintain transmission of e-prescriptions (eRx) from initiation to fulfillment. Further, the paper will propose key placement requirements to provide for maximal interoperability of electronic medication management systems to minimize disruptions across the continuum of care. PMID:28970375
Reiter, Kristin L; Song, Paula H; Minasian, Lori; Good, Marjorie; Weiner, Bryan J; McAlearney, Ann Scheck
2012-09-01
The Community Clinical Oncology Program (CCOP) plays an essential role in the efforts of the National Cancer Institute (NCI) to increase enrollment in clinical trials. Currently, there is little practical guidance in the literature to assist provider organizations in analyzing the return on investment (ROI), or business case, for establishing and operating a provider-based research network (PBRN) such as the CCOP. In this article, the authors present a conceptual model of the business case for PBRN participation, a spreadsheet-based tool and advice for evaluating the business case for provider participation in a CCOP organization. A comparative, case-study approach was used to identify key components of the business case for hospitals attempting to support a CCOP research infrastructure. Semistructured interviews were conducted with providers and administrators. Key themes were identified and used to develop the financial analysis tool. Key components of the business case included CCOP start-up costs, direct revenue from the NCI CCOP grant, direct expenses required to maintain the CCOP research infrastructure, and incidental benefits, most notably downstream revenues from CCOP patients. The authors recognized the value of incidental benefits as an important contributor to the business case for CCOP participation; however, currently, this component is not calculated. The current results indicated that providing a method for documenting the business case for CCOP or other PBRN involvement will contribute to the long-term sustainability and expansion of these programs by improving providers' understanding of the financial implications of participation. Copyright © 2011 American Cancer Society.
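A minimal sketch of the arithmetic such a spreadsheet tool supports, using only the business-case components named in the abstract; all dollar figures and the five-year horizon are hypothetical.

```python
# Hypothetical figures; illustrates the business-case components named in the abstract.
startup_costs = 150_000             # one-time CCOP start-up costs
annual_grant_revenue = 300_000      # direct revenue from the NCI CCOP grant
annual_direct_expenses = 340_000    # direct expenses to maintain the research infrastructure
annual_downstream_revenue = 90_000  # incidental benefits, e.g. downstream revenue from CCOP patients

years = 5
net = -startup_costs
for year in range(1, years + 1):
    net += annual_grant_revenue + annual_downstream_revenue - annual_direct_expenses
    print(f"year {year}: cumulative net position ${net:,.0f}")
# Without the incidental benefits the grant alone would not cover direct expenses in this toy
# example, which is why the abstract flags downstream revenue as an important, if uncounted,
# component of the business case.
```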
DOT National Transportation Integrated Search
2009-12-01
This volume focuses on one of the key components of the IRSV system, i.e., the AMBIS module. This module serves as one of the tools used in this study to translate raw remote sensing data in the form of either high-resolution aerial photos or v...
Good practices on cost-effective road infrastructure safety investments.
Yannis, George; Papadimitriou, Eleonora; Evgenikos, Petros; Dragomanovits, Anastasios
2016-12-01
The paper presents the findings of a research project aiming to quantify and subsequently classify several infrastructure-related road safety measures, based on the international experience attained through extensive and selected literature review and additionally on a full consultation process including questionnaire surveys addressed to experts and relevant workshops. Initially, a review of selected research reports was carried out and an exhaustive list of road safety infrastructure investments covering all types of infrastructure was compiled. Individual investments were classified according to the infrastructure investment area and the type of investment and were thereafter analysed on the basis of key safety components. These investments were subsequently ranked in relation to their safety effects and implementation costs, and on the basis of this ranking a set of the five most promising investments was selected for in-depth analysis. The results suggest that the overall cost effectiveness of a road safety infrastructure investment is not always in direct correlation with the safety effect, and it is recommended that cost-benefit ratios and safety effects always be examined in conjunction with each other in order to identify the optimum solution for a specific road safety problem in specific conditions and with specific objectives.
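In the spirit of the paper's recommendation to read costs and safety effects together, the toy ranking below sorts candidate investments by benefit-cost ratio; the investment names and figures are invented for illustration.

```python
# Invented example data: (expected crash-cost savings, implementation cost) per investment.
investments = {
    "rumble strips":        (1_200_000,   150_000),
    "roadside barriers":    (2_500_000, 1_000_000),
    "junction redesign":    (4_000_000, 3_500_000),
    "improved delineation": (  600_000,   100_000),
}

ranked = sorted(investments.items(),
                key=lambda kv: kv[1][0] / kv[1][1],  # benefit-cost ratio
                reverse=True)
for name, (benefit, cost) in ranked:
    print(f"{name:22s} BCR = {benefit / cost:4.1f}  (safety benefit ${benefit:,}, cost ${cost:,})")
# A high ratio alone is not enough: a cheap measure with a small absolute safety effect can rank
# first yet address only a fraction of the problem, hence the advice to consider ratios and
# safety effects in conjunction.
```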
Roux, D J
2001-06-01
This article explores the strategies that were, and are being, used to facilitate the transition from scientific development to operational application of the South African River Health Programme (RHP). Theoretical models from the field of the management of technology are used to provide insight into the dynamics that influence the relationship between the creation and application of environmental programmes, and the RHP in particular. Four key components of the RHP design are analysed, namely the (a) guiding team, (b) concepts, tools and methods, (c) infrastructural innovations and (d) communication. These key components evolved over three broad life stages of the programme, which are called the design, growth and anchoring stages.
Optoelectronic Infrastructure for Radio Frequency and Optical Phased Arrays
NASA Technical Reports Server (NTRS)
Cai, Jianhong
2015-01-01
Optoelectronic integrated circuits offer radiation-hardened solutions for satellite systems in addition to improved size, weight, power, and bandwidth characteristics. ODIS, Inc., has developed optoelectronic integrated circuit technology for sensing and data transfer in phased arrays. The technology applies integrated components (lasers, amplifiers, modulators, detectors, and optical waveguide switches) to a radio frequency (RF) array with true time delay for beamsteering. Optical beamsteering is achieved by controlling the current in a two-dimensional (2D) array. In this project, ODIS integrated key components to produce common RF-optical aperture operation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, R.L.; Hamilton, V.A.; Istrail, G.G.
1997-11-01
This report describes the results of a Sandia-funded laboratory-directed research and development project titled "Integrated and Robust Security Infrastructure" (IRSI). IRSI was to provide a broad range of commercial-grade security services to any software application. IRSI has two primary goals: application transparency and manageable public key infrastructure. IRSI must provide its security services to any application without the need to modify the application to invoke the security services. Public key mechanisms are well suited for a network with many end users and systems. There are many issues that make it difficult to deploy and manage a public key infrastructure. IRSI addressed some of these issues to create a more manageable public key infrastructure.
Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; He, Fei; Ma, Chris Y. T.
In several critical infrastructures, the cyber and physical parts are correlated so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch component attacks, and hence must be accounted for in ensuring infrastructure resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.
Snyder, Kimberly; Rieker, Patricia P.
2014-01-01
Functioning program infrastructure is necessary for achieving public health outcomes. It is what supports program capacity, implementation, and sustainability. The public health program infrastructure model presented in this article is grounded in data from a broader evaluation of 18 state tobacco control programs and previous work. The newly developed Component Model of Infrastructure (CMI) addresses the limitations of a previous model and contains 5 core components (multilevel leadership, managed resources, engaged data, responsive plans and planning, networked partnerships) and 3 supporting components (strategic understanding, operations, contextual influences). The CMI is a practical, implementation-focused model applicable across public health programs, enabling linkages to capacity, sustainability, and outcome measurement. PMID:24922125
X-33/RLV System Health Management/ Vehicle Health Management
NASA Technical Reports Server (NTRS)
Garbos, Raymond J.; Mouyos, William
1998-01-01
To reduce operations cost, the RLV must include the following elements: highly reliable, robust subsystems designed for simple repair access with a simplified servicing infrastructure and incorporating expedited decision making about faults and anomalies. A key component for the Single Stage to Orbit (SSTO) RLV System used to meet these objectives is System Health Management (SHM). SHM deals with the vehicle component - Vehicle Health Management (VHM), the ground processing associated with the fleet (GVHM) and the Ground Infrastructure Health Management (GIHM). The objective is to provide an automated collection and paperless health decision, maintenance and logistics system. Many critical technologies are necessary to make the SHM (and more specifically VHM) practical, reliable and cost effective. Sanders is leading the design, development and integration of the SHM system for RLV and X-33 SHM (a sub-scale, sub-orbit Advanced Technology Demonstrator). This paper will present the X-33 SHM design which forms the baseline for RLV SHM. This paper will also discuss other applications of these technologies.
HELIO: The Heliophysics Integrated Observatory
NASA Technical Reports Server (NTRS)
Bentley, R. D.; Csillaghy, A.; Aboudarham, J.; Jacquey, C.; Hapgood, M. A.; Bocchialini, K.; Messerotti, M.; Brooke, J.; Gallagher, P.; Fox, P.;
2011-01-01
Heliophysics is a new research field that explores the Sun-Solar System Connection; it requires the joint exploitation of solar, heliospheric, magnetospheric and ionospheric observations. HELIO, the Heliophysics Integrated Observatory, will facilitate this study by creating an integrated e-Infrastructure that has no equivalent anywhere else. It will be a key component of a worldwide effort to integrate heliophysics data and will coordinate closely with international organizations to exploit synergies with complementary domains. HELIO was proposed under a Research Infrastructure call in the Capacities Programme of the European Commission's 7th Framework Programme (FP7). The project was selected for negotiation in January 2009; following a successful conclusion to these, the project started on 1 June 2009 and will last for 36 months.
Bootstrapping and Maintaining Trust in the Cloud
2016-03-16
of infrastructure-as-a-service (IaaS) cloud computing services such as Amazon Web Services, Google Compute Engine, Rackspace, et al. means that...Implementation We implemented keylime in ∼3.2k lines of Python in four components: registrar, node, CV, and tenant. The registrar offers a REST-based web ...bootstrap key K. It provides an unencrypted REST-based web service for these two functions. As described earlier, the protocols for exchanging data
Defense Strategies for Asymmetric Networked Systems with Discrete Components.
Rao, Nageswara S V; Ma, Chris Y T; Hausken, Kjell; He, Fei; Yau, David K Y; Zhuang, Jun
2018-05-03
We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models.
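A minimal numerical sketch of a game of this type, assuming a simple sum-form utility (a survival-probability term minus a per-component cost term) and a toy contest-style survival model; the paper's correlation functions, utility forms and parameters are richer than this.

```python
# Illustrative only: attacker picks how many components to attack (a), provider picks how
# many to reinforce (r); a simple contest model stands in for the survival probability.
from itertools import product

N = 10                                # components available to each side (hypothetical)
c_attack, c_reinforce = 0.05, 0.04    # per-component costs (hypothetical)

def p_survive(a: int, r: int) -> float:
    """Toy survival probability: reinforced share of the contested effort."""
    return 1.0 if a == 0 else r / (a + r)

def u_provider(a, r): return p_survive(a, r) - c_reinforce * r
def u_attacker(a, r): return (1.0 - p_survive(a, r)) - c_attack * a

# Brute-force search for pure-strategy Nash equilibria over 0..N components each.
equilibria = []
for a, r in product(range(N + 1), repeat=2):
    best_a = max(range(N + 1), key=lambda x: u_attacker(x, r))
    best_r = max(range(N + 1), key=lambda x: u_provider(a, x))
    if u_attacker(a, r) >= u_attacker(best_a, r) and u_provider(a, r) >= u_provider(a, best_r):
        equilibria.append((a, r, p_survive(a, r)))

for a, r, p in equilibria:
    print(f"equilibrium: attack {a}, reinforce {r}, survival probability {p:.2f}")
```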
Modeling, Simulation and Analysis of Public Key Infrastructure
NASA Technical Reports Server (NTRS)
Liu, Yuan-Kwei; Tuey, Richard; Ma, Paul (Technical Monitor)
1998-01-01
Security is an essential part of network communication. The advances in cryptography have provided solutions to many of the network security requirements. Public Key Infrastructure (PKI) is the foundation of cryptography applications. The main objective of this research is to design a model to simulate a reliable, scalable, manageable, and high-performance public key infrastructure. We build a model to simulate the NASA public key infrastructure by using SimProcess and MatLab software. The simulation covers the system from the top level all the way down to the computations needed for encryption, decryption, digital signatures, and a secure web server. The secure web server application could be utilized in wireless communications. The results of the simulation are analyzed and confirmed by using queueing theory.
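The abstract does not state which queueing formulas were used to confirm the simulation; as a hedged example, modelling a single certificate-signing server as an M/M/1 queue gives the familiar utilization and response-time estimates below (arrival and service rates are hypothetical).

```python
# Illustrative M/M/1 estimate for one PKI signing server; the NASA simulation described in the
# abstract is far more detailed. Rates below are hypothetical.
arrival_rate = 40.0   # signature requests per second (lambda)
service_rate = 50.0   # signatures the server can compute per second (mu)

rho = arrival_rate / service_rate                      # server utilization
assert rho < 1.0, "queue is unstable if lambda >= mu"
mean_in_system = rho / (1.0 - rho)                     # average number of requests in the system (L)
mean_response = 1.0 / (service_rate - arrival_rate)    # average time in system (W); note L = lambda * W
print(f"utilization {rho:.0%}, avg requests in system {mean_in_system:.1f}, "
      f"avg response {mean_response * 1000:.0f} ms")
```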
Cryptographic Key Management and Critical Risk Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K
The Department of Energy Office of Electricity Delivery and Energy Reliability (DOE-OE) CyberSecurity for Energy Delivery Systems (CSEDS) industry led program (DE-FOA-0000359) entitled "Innovation for Increasing CyberSecurity for Energy Delivery Systems (12CSEDS)," awarded a contract to Sypris Electronics LLC to develop a Cryptographic Key Management System for the smart grid (Scalable Key Management Solutions for Critical Infrastructure Protection). Oak Ridge National Laboratory (ORNL) and Sypris Electronics, LLC as a result of that award entered into a CRADA (NFE-11-03562) between ORNL and Sypris Electronics, LLC. ORNL provided its Cyber Security Econometrics System (CSES) as a tool to be modified and used as a metric to address risks and vulnerabilities in the management of cryptographic keys within the Advanced Metering Infrastructure (AMI) domain of the electric sector. ORNL concentrated our analysis on the AMI domain of which the National Electric Sector Cyber security Organization Resource (NESCOR) Working Group 1 (WG1) has documented 29 failure scenarios. The computational infrastructure of this metric involves system stakeholders, security requirements, system components and security threats. To compute this metric, we estimated the stakes that each stakeholder associates with each security requirement, as well as stochastic matrices that represent the probability of a threat to cause a component failure and the probability of a component failure to cause a security requirement violation. We applied this model to estimate the security of the AMI, by leveraging the recently established National Institute of Standards and Technology Interagency Report (NISTIR) 7628 guidelines for smart grid security and the International Electrotechnical Commission (IEC) 63351, Part 9 to identify the life cycle for cryptographic key management, resulting in a vector that assigned to each stakeholder an estimate of their average loss in terms of dollars per day of system operation. To further address probabilities of threats, information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain from NESCOR WG1. From these five selected scenarios, we characterized them into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrated how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA.
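In outline, the metric chains per-stakeholder stakes with the two conditional-probability matrices and the threat probabilities to obtain an expected loss per stakeholder; the NumPy sketch below uses invented toy numbers, not ORNL's CSES data or the NESCOR scenarios.

```python
import numpy as np

# Invented toy data for 2 stakeholders, 3 security requirements, 4 AMI components, 2 threats.
stakes = np.array([[900.0, 400.0, 200.0],      # $/day each stakeholder loses if a requirement is violated
                   [300.0, 700.0, 500.0]])
req_given_comp = np.array([[0.6, 0.1, 0.0, 0.2],   # P(requirement violated | component fails)
                           [0.1, 0.5, 0.3, 0.0],
                           [0.0, 0.2, 0.4, 0.3]])
comp_given_threat = np.array([[0.30, 0.05],        # P(component fails | threat materializes)
                              [0.10, 0.20],
                              [0.05, 0.40],
                              [0.20, 0.10]])
threat_prob = np.array([0.02, 0.01])               # probability each threat materializes per day

# Expected loss per stakeholder ($/day): stakes x P(req|comp) x P(comp|threat) x P(threat)
mean_failure_cost = stakes @ req_given_comp @ comp_given_threat @ threat_prob
for i, cost in enumerate(mean_failure_cost):
    print(f"stakeholder {i}: expected loss ${cost:.2f} per day")
```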
NASA Astrophysics Data System (ADS)
Hernández Ernst, Vera; Poigné, Axel; Los, Walter
2010-05-01
Understanding and managing the complexity of the biodiversity system in relation to global changes concerning land use and climate change with their social and economic implications is crucial to mitigate species loss and biodiversity changes in general. The sustainable development and exploitation of existing biodiversity resources require flexible and powerful infrastructures offering, on the one hand, the access to large-scale databases of observations and measures, to advanced analytical and modelling software, and to high performance computing environments and, on the other hand, the interlinkage of European scientific communities among each other and with national policies. The European Strategy Forum on Research Infrastructures (ESFRI) selected the "LifeWatch e-science and technology infrastructure for biodiversity research" as a promising development to construct facilities to contribute to meeting those challenges. LifeWatch collaborates with other selected initiatives (e.g. ICOS, ANAEE, NOHA, and LTER-Europa) to achieve the integration of the infrastructures at landscape and regional scales. This should result in a cooperating cluster of such infrastructures supporting an integrated approach for data capture and transmission, data management and harmonisation. Besides, facilities for exploration, forecasting, and presentation using heterogeneous and distributed data and tools should allow interdisciplinary scientific research at any spatial and temporal scale. LifeWatch is an example of a new generation of interoperable research infrastructures based on standards and a service-oriented architecture that allow for linkage with external resources and associated infrastructures. External data sources will be established data aggregators such as the Global Biodiversity Information Facility (GBIF) for species occurrences and other EU Networks of Excellence like the Long-Term Ecological Research Network (LTER), GMES, and GEOSS for terrestrial monitoring, the MARBEF network for marine data, and the Consortium for European Taxonomic Facilities (CETAF) and its European Distributed Institute for Taxonomy (EDIT) for taxonomic data. But also "smaller" networks and "volunteer scientists" may send data (e.g. GPS supported species observations) to a LifeWatch repository. Autonomously operating wireless environmental sensors and other smart hand-held devices will contribute to increase data capture activities. In this way LifeWatch will directly underpin the development of GEOBON, the biodiversity component of GEOSS, the Global Earth Observation System. To overcome all major technical difficulties imposed by the variety of current and future technologies, protocols, data formats, etc., LifeWatch will define and use common open interfaces. For this purpose, the LifeWatch Reference Model was developed during the preparatory phase specifying the service-oriented architecture underlying the ICT-infrastructure. The Reference Model identifies key requirements and key architectural concepts to support workflows for scientific in-silico experiments, tracking of provenance, and semantic enhancement, besides meeting the functional requirements mentioned before. It provides guidelines for the specification and implementation of services and information models, defining as well a number of generic services and models.
Another key issue addressed by the Reference Model is that the cooperation of many developer teams residing in many European countries has to be organized to obtain compatible results; conformance with the specifications and policies of the Reference Model will therefore be required. The LifeWatch Reference Model is based on the ORCHESTRA Reference Model for geospatial-oriented architectures and services networks, which provides a generic framework and has been endorsed as best practice by the Open Geospatial Consortium (OGC). The LifeWatch Infrastructure will allow (interdisciplinary) scientific researchers to collaborate by creating e-Laboratories or by composing e-Services which can be shared and jointly developed. To this end, a long-term vision for the LifeWatch Biodiversity Workbench Portal has been developed as a one-stop application for the LifeWatch infrastructure based on existing and emerging technologies. There the user can find all available resources such as data, workflows, tools, etc. and access LifeWatch applications that integrate different resources and provide key capabilities like resource discovery and visualisation, creation of workflows, creation and management of provenance, and the support of collaborative activities. While LifeWatch developers will construct components for solving generic LifeWatch tasks, users may add their own facilities to fulfil individual needs. Examples of applying the LifeWatch Reference Model and the LifeWatch Biodiversity Workbench Portal will be given.
Key success factors of health research centers: A mixed method study.
Tofighi, Shahram; Teymourzadeh, Ehsan; Heydari, Majid
2017-08-01
In order to achieve success in future goals and activities, health research centers are required to identify their key success factors. This study aimed to extract and rank the factors affecting the success of research centers at one of the medical universities in Iran. This is a mixed method (qualitative-quantitative) study, which was conducted between May and October 2016. The study setting was 22 health research centers. In the qualitative phase, we extracted the factors affecting success in research centers through purposeful interviews with 10 experts of the centers, and classified them into themes and sub-themes. In the quantitative phase, we prepared a questionnaire, and the factors recognized by 54 study participants were scored and ranked using the Friedman test. Nine themes and 42 sub-themes were identified. Themes included: strategic orientation, management, human capital, support, projects, infrastructure, communications and collaboration, paradigm and innovation, and they were rated respectively as components of success in research centers. Among the 42 identified factors, 10 factors were ranked respectively as the key factors of success, and included: science and technology road map, strategic plan, evaluation indexes, committed human resources, scientific evaluation of members and centers, innovation in research and implementation, financial support, capable researchers, equipment infrastructure and teamwork. According to the results, the strategic orientation was the most important component in the success of research centers. Therefore, managers and authorities of research centers should pay more attention to strategic areas in future planning, including the science and technology road map and strategic plan.
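A small sketch of the ranking step using the Friedman test as implemented in SciPy; the respondent scores and the three factors shown are illustrative stand-ins, not the study's data (which covered 54 respondents and 42 factors).

```python
# Invented respondent scores for three candidate success factors (1 = low importance, 5 = high).
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

factors = ["strategic plan", "equipment infrastructure", "committed human resources"]
scores = np.array([   # rows = respondents, columns = factors above
    [5, 3, 4],
    [4, 2, 5],
    [5, 3, 3],
    [4, 1, 5],
    [5, 2, 4],
])

stat, p_value = friedmanchisquare(*scores.T)                      # do the factor ratings differ?
mean_ranks = np.mean([rankdata(row) for row in scores], axis=0)   # higher mean rank = rated higher
print(f"Friedman chi-square = {stat:.2f}, p = {p_value:.3f}")
for idx in np.argsort(-mean_ranks):
    print(f"{factors[idx]}: mean rank {mean_ranks[idx]:.2f}")
```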
Digital divide, biometeorological data infrastructures and human vulnerability definition
NASA Astrophysics Data System (ADS)
Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko
2018-05-01
The design and implementation of any climate-related health service today implies avoiding the digital divide, since it requires having access to, and being able to use, complex technological devices, massive meteorological data, users' geographic locations and biophysical information. This article presents, in detail, the co-creation of a biometeorological data infrastructure, a complex platform formed by multiple components: a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets and a mobile application. The system produces four daily world maps of the partial density of atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool for delineating individual vulnerability to meteorological changes as one key factor in the definition of any biometeorological risk. This technological approach to studying weather-related health impacts is the initial seed for the definition of biometeorological profiles of persons, and for the development of customized climate services for users in the near future.
2002-03-22
may be derived from detailed inspection of the IC itself or from illicit appropriation of design information. Counterfeit smart cards can be mass...Infrastructure (PKI) as the Internet to securely and privately exchange data and money through the use of a public and a private cryptographic key pair...interference devices (SQDIS), electrical testing, and electron beam testing. • Other attacks, such as UV or X-rays or high temperatures, could cause erasure
Tracking the deployment of the integrated metropolitan ITS infrastructure in the USA : FY99 results
DOT National Transportation Integrated Search
2000-05-01
This report describes the results of a major data gathering effort aimed at tracking deployment of nine infrastructure components of the metropolitan ITS infrastructure in 78 of the largest metropolitan areas in the nation. The nine components are: F...
Consequence-driven cyber-informed engineering (CCE)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, Sarah G.; St Michel, Curtis; Smith, Robert
The Idaho National Lab (INL) is leading a high-impact, national security-level initiative to reprioritize the way the nation looks at high-consequence risk within the industrial control systems (ICS) environment of the country’s most critical infrastructure and other national assets. The Consequence-driven Cyber-informed Engineering (CCE) effort provides both private and public organizations with the steps required to examine their own environments for high-impact events/risks; identify implementation of key devices and components that facilitate that risk; illuminate specific, plausible cyber attack paths to manipulate these devices; and develop concrete mitigations, protections, and tripwires to address the high-consequence risk. The ultimate goal of the CCE effort is to help organizations take the steps necessary to thwart cyber attacks from even top-tier, highly resourced adversaries that would result in a catastrophic physical effect. CCE participants are encouraged to work collaboratively with each other and with key U.S. Government (USG) contributors to establish a coalition, maximizing the positive effect of lessons-learned and further contributing to the protection of critical infrastructure and other national assets.
Sharing Data and Analytical Resources Securely in a Biomedical Research Grid Environment
Langella, Stephen; Hastings, Shannon; Oster, Scott; Pan, Tony; Sharma, Ashish; Permar, Justin; Ervin, David; Cambazoglu, B. Barla; Kurc, Tahsin; Saltz, Joel
2008-01-01
Objectives To develop a security infrastructure to support controlled and secure access to data and analytical resources in a biomedical research Grid environment, while facilitating resource sharing among collaborators. Design A Grid security infrastructure, called Grid Authentication and Authorization with Reliably Distributed Services (GAARDS), is developed as a key architecture component of the NCI-funded cancer Biomedical Informatics Grid (caBIG™). GAARDS is designed to support, in a distributed environment, 1) efficient provisioning and federation of user identities and credentials; 2) group-based access control with which resource providers can enforce policies based on community-accepted groups and local groups; and 3) management of a trust fabric so that policies can be enforced based on required levels of assurance. Measurements GAARDS is implemented as a suite of Grid services and administrative tools. It provides three core services: Dorian for management and federation of user identities, Grid Trust Service for maintaining and provisioning a federated trust fabric within the Grid environment, and Grid Grouper for enforcing authorization policies based on both local and Grid-level groups. Results The GAARDS infrastructure is available as a stand-alone system and as a component of the caGrid infrastructure. More information about GAARDS can be accessed at http://www.cagrid.org. Conclusions GAARDS provides a comprehensive system to address the security challenges associated with environments in which resources may be located at different sites, requests to access the resources may cross institutional boundaries, and user credentials are created, managed, and revoked dynamically in a decentralized manner. PMID:18308979
Green infrastructure is a widely used framework for conservation planning in the United States and elsewhere. The main components of green infrastructure are hubs and corridors. Hubs are large areas of natural vegetation, and corridors are linear features that connect hubs. W...
Power Systems Integration Laboratory | Energy Systems Integration Facility
inverters. Key Infrastructure Grid simulator, load bank, Opal-RT, battery, inverter mounting racks, data, frequency-watt, and grid anomaly ride-through. Key Infrastructure House power, Opal-RT, PV simulator access
NASA Technical Reports Server (NTRS)
Kennedy, Barbara J.
2004-01-01
The purpose of this study is to compare the current Space Shuttle Ground Support Equipment (GSE) infrastructure with the proposed GSE infrastructure upgrade modification. The methodology includes analyzing the first prototype installation equipment at Launch Pad B, called the "Pathfinder". This study begins by comparing the failure rate of the current components associated with the Hardware Interface Module (HIM) at the Kennedy Space Center to the failure rate of the new Pathfinder components. Quantitative data were gathered specifically on HIM components and on the Pad B Hypergolic Fuel facility and Hypergolic Oxidizer facility areas, which have the upgraded Pathfinder equipment installed. The proposed upgrades include utilizing industrial control modules, software, and a fiber optic network. The results of this study provide evidence that there is a significant difference in the failure rates of the two studied infrastructure equipment components. There is also evidence that the support staffing for the two infrastructure systems is not equal. A recommendation to continue with future upgrades is based on a significant reduction of failures in the newly installed ground system components.
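A minimal sketch of how such a failure-rate comparison can be tested statistically, using a chi-square test on failure counts; the counts below are hypothetical stand-ins for the HIM and Pathfinder populations, not KSC data.

```python
# Sketch: comparing failure rates of legacy (HIM) vs. upgraded (Pathfinder) components.
# Counts are hypothetical placeholders.
from scipy.stats import chi2_contingency

# [failures, trouble-free operating intervals] for each component population
legacy     = [48, 952]    # legacy HIM components
pathfinder = [12, 988]    # upgraded Pathfinder components

chi2, p, dof, expected = chi2_contingency([legacy, pathfinder])
rate_legacy = legacy[0] / sum(legacy)
rate_new = pathfinder[0] / sum(pathfinder)
print(f"legacy failure rate {rate_legacy:.1%}, pathfinder failure rate {rate_new:.1%}")
print(f"chi-square = {chi2:.2f}, p = {p:.4f}  (small p suggests a real difference)")
```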
Assessing the Climate Resilience of Transport Infrastructure Investments in Tanzania
NASA Astrophysics Data System (ADS)
Hall, J. W.; Pant, R.; Koks, E.; Thacker, S.; Russell, T.
2017-12-01
Whilst there is an urgent need for infrastructure investment in developing countries, there is a risk that poorly planned and built infrastructure will introduce new vulnerabilities. As climate change increases the magnitude and frequency of natural hazard events, disruptive infrastructure failures are likely to become more frequent. Therefore, it is important that infrastructure planning and investment is underpinned by climate risk assessment that can inform adaptation planning. Tanzania's rapid economic growth is placing considerable strain on the country's transportation infrastructure (roads, railways, shipping and aviation), especially at the port of Dar es Salaam and its linking transport corridors. A growing number of natural hazard events, in particular flooding, are impacting the reliability of this already over-used network. Here we report on a new methodology to analyse vulnerabilities and risks due to failures of key locations in the intermodal transport network of Tanzania, including strategic connectivity to neighboring countries. To perform the national-scale risk analysis we utilize a system-of-systems methodology. The main components of this general risk assessment, when applied to transportation systems, include: (1) assembling data on spatially coherent extreme hazards and intermodal transportation networks; (2) intersecting hazards with transport network models to initiate failure conditions that trigger failure propagation across interdependent networks; (3) quantifying failure outcomes in terms of social impacts (customers/passengers disrupted) and/or macroeconomic consequences (across multiple sectors); and (4) simulating, testing and collecting multiple failure scenarios to perform an exhaustive risk assessment in terms of probabilities and consequences. The methodology is being used to pinpoint vulnerability and reduce climate risks to transport infrastructure investments.
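A toy sketch of steps (1)-(3) of the methodology above: a small graph stands in for the intermodal network, a set of flooded nodes stands in for the hazard footprint, and disruption is quantified as the number of origin-destination pairs that lose connectivity. The network, demands and flood scenario are invented for illustration.

```python
# Sketch: intersecting a flood hazard with a transport network and counting disrupted OD pairs.
# Nodes, OD pairs and the flooded set are hypothetical, not the Tanzania network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Dar_port", "A"), ("A", "B"), ("B", "Dodoma"),
    ("A", "C"), ("C", "Dodoma"), ("Dodoma", "Border"),
])

od_pairs = [("Dar_port", "Dodoma"), ("Dar_port", "Border"), ("A", "Border")]
flooded_nodes = {"B", "C"}            # nodes inside the hazard footprint

def disrupted_pairs(graph, flooded, pairs):
    """Remove flooded nodes and return the OD pairs that lose connectivity."""
    damaged = graph.copy()
    damaged.remove_nodes_from(flooded)
    lost = []
    for o, d in pairs:
        if o not in damaged or d not in damaged or not nx.has_path(damaged, o, d):
            lost.append((o, d))
    return lost

lost = disrupted_pairs(G, flooded_nodes, od_pairs)
print(f"{len(lost)} of {len(od_pairs)} OD pairs disrupted: {lost}")
```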
Gioia, Gerard A; Glang, Ann E; Hooper, Stephen R; Brown, Brenda Eagan
To focus attention on building statewide capacity to support students with mild traumatic brain injury (mTBI)/concussion. Consensus-building process with a multidisciplinary group of clinicians, researchers, policy makers, and state Department of Education personnel. The white paper presents the group's consensus on the essential components of a statewide educational infrastructure to support the management of students with mTBI. The nature and recovery process of mTBI are briefly described, specifically with respect to its effects on school learning and performance. State and local policy considerations are then emphasized to promote implementation of a consistent process. Five key components of building a statewide infrastructure for students with mTBI are described, including (1) definition and training of the interdisciplinary school team, (2) professional development of the school and medical communities, (3) identification, assessment, and progress monitoring protocols, (4) a flexible set of intervention strategies to accommodate students' recovery needs, and (5) systematized protocols for active communication among medical, school, and family team members. The need for research to guide effective program implementation is stressed. This guiding framework strives to assist the development of support structures for recovering students with mTBI to optimize academic outcomes. Until more evidence is available on academic accommodations and other school-based supports, educational systems should follow current best practice guidelines.
NASA Astrophysics Data System (ADS)
Odbert, H. M.; Aspinall, W.; Phillips, J.; Jenkins, S.; Wilson, T. M.; Scourse, E.; Sheldrake, T.; Tucker, P.; Nakeshree, K.; Bernardara, P.; Fish, K.
2015-12-01
Societies rely on critical services such as power, water, transport networks and manufacturing. Infrastructure may be sited to minimise exposure to natural hazards but not all can be avoided. The probability of long-range transport of a volcanic plume to a site is comparable to other external hazards that must be considered to satisfy safety assessments. Recent advances in numerical models of plume dispersion and stochastic modelling provide a formalized and transparent approach to probabilistic assessment of hazard distribution. To understand the risks to critical infrastructure far from volcanic sources, it is necessary to quantify their vulnerability to different hazard stressors. However, infrastructure assets (e.g. power plants and operational facilities) are typically complex systems in themselves, with interdependent components that may differ in susceptibility to hazard impact. Usually, such complexity means that risk either cannot be estimated formally or that unsatisfactory simplifying assumptions are prerequisite to building a tractable risk model. We present a new approach to quantifying risk by bridging expertise of physical hazard modellers and infrastructure engineers. We use a joint expert judgment approach to determine hazard model inputs and constrain associated uncertainties. Model outputs are chosen on the basis of engineering or operational concerns. The procedure facilitates an interface between physical scientists, with expertise in volcanic hazards, and infrastructure engineers, with insight into vulnerability to hazards. The result is a joined-up approach to estimating risk from low-probability hazards to critical infrastructure. We describe our methodology and show preliminary results for vulnerability to volcanic hazards at a typical UK industrial facility. We discuss our findings in the context of developing bespoke assessment of hazards from distant sources in collaboration with key infrastructure stakeholders.
Are We Ready for Mass Fatality Incidents? Preparedness of the US Mass Fatality Infrastructure.
Merrill, Jacqueline A; Orr, Mark; Chen, Daniel Y; Zhi, Qi; Gershon, Robyn R
2016-02-01
To assess the preparedness of the US mass fatality infrastructure, we developed and tested metrics for 3 components of preparedness: organizational, operational, and resource sharing networks. In 2014, data were collected from 5 response sectors: medical examiners and coroners, the death care industry, health departments, faith-based organizations, and offices of emergency management. Scores were calculated within and across sectors and a weighted score was developed for the infrastructure. A total of 879 respondents reported highly variable organizational capabilities: 15% had responded to a mass fatality incident (MFI); 42% reported staff trained for an MFI, but only 27% for an MFI involving hazardous contaminants. Respondents estimated that 75% of their staff would be willing and able to respond, but only 53% if contaminants were involved. Most perceived their organization as somewhat prepared, but 13% indicated "not at all." Operational capability scores ranged from 33% (death care industry) to 77% (offices of emergency management). Network capability analysis found that only 42% of possible reciprocal relationships between resource-sharing partners were present. The cross-sector composite score was 51%; that is, half the key capabilities for preparedness were in place. The sectors in the US mass fatality infrastructure report suboptimal capability to respond. National leadership is needed to ensure sector-specific and infrastructure-wide preparedness for a large-scale MFI.
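The two network-level measures described above (the share of reciprocal resource-sharing relationships and a weighted cross-sector composite score) can be sketched as follows; the organizations, ties, scores and weights are made up and simplified relative to the study's metrics.

```python
# Sketch: reciprocity of a resource-sharing network and a weighted composite preparedness score.
# Organizations, ties, sector scores and weights are hypothetical.
import networkx as nx

# Directed ties: "u -> v" means u reports sharing resources with v
ties = [("ME", "EM"), ("EM", "ME"), ("HD", "EM"), ("FB", "HD"), ("HD", "FB"), ("DC", "EM")]
G = nx.DiGraph(ties)

reciprocal = sum(1 for u, v in G.edges if G.has_edge(v, u))
network_capability = reciprocal / G.number_of_edges()   # fraction of ties that are reciprocated
print(f"network capability: {network_capability:.0%}")

# Weighted composite across sectors (operational scores and weights are invented)
sector_scores = {"medical examiners": 0.60, "death care": 0.33, "health dept": 0.55,
                 "faith-based": 0.45, "emergency mgmt": 0.77}
weights = {"medical examiners": 0.3, "death care": 0.15, "health dept": 0.2,
           "faith-based": 0.1, "emergency mgmt": 0.25}
composite = sum(sector_scores[s] * weights[s] for s in sector_scores) / sum(weights.values())
print(f"weighted composite score: {composite:.0%}")
```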
Increasing impacts of climate extremes on critical infrastructures in Europe
NASA Astrophysics Data System (ADS)
Forzieri, Giovanni; Bianchi, Alessandra; Feyen, Luc; Silva, Filipe Batista e.; Marin, Mario; Lavalle, Carlo; Leblois, Antoine
2016-04-01
The projected increases in exposure to multiple climate hazards in many regions of Europe emphasize the relevance of a multi-hazard risk assessment to comprehensively quantify potential impacts of climate change and develop suitable adaptation strategies. In this context, quantifying the future impacts of climatic extremes on critical infrastructures is crucial due to their key role for human wellbeing and their effects on the overall economy. Critical infrastructures are the existing assets and systems that are essential for the maintenance of vital societal functions, health, safety, security, and the economic or social well-being of people, and whose disruption or destruction would have a significant impact as a result of the failure to maintain those functions. We assess the direct damages of heat and cold waves, river and coastal flooding, droughts, wildfires and windstorms to energy, transport, industry and social infrastructures in Europe along the 21st century. The methodology integrates climate hazard, exposure and vulnerability components in a coherent framework. Overall damage is expected to rise to 38 billion €/yr, tenfold the current climate damage, with drastic variations in risk scenarios. For example, drought- and heat-related damages could represent 70% of the overall climate damage in the 2080s, versus the current 12%. Many regions, most prominently Southern Europe, will likely suffer multiple stresses and systematic infrastructure failures due to climate extremes if no suitable adaptation measures are taken.
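A minimal sketch of the hazard-exposure-vulnerability framing above, computing expected annual damage per hazard and its share of the total; all probabilities, asset values and vulnerability factors are invented numbers, not the study's estimates.

```python
# Sketch: expected annual damage (EAD) per hazard as probability x exposed asset value x vulnerability.
# All numbers are hypothetical placeholders.
hazards = {
    # hazard: (annual probability, exposed asset value in billion EUR, vulnerability factor 0-1)
    "drought":       (0.20, 40.0, 0.30),
    "heat wave":     (0.30, 25.0, 0.20),
    "river flood":   (0.05, 60.0, 0.25),
    "coastal flood": (0.02, 30.0, 0.40),
    "wildfire":      (0.10, 10.0, 0.15),
    "windstorm":     (0.15, 20.0, 0.10),
}

ead = {h: p * value * vuln for h, (p, value, vuln) in hazards.items()}
total = sum(ead.values())
for h, d in sorted(ead.items(), key=lambda kv: -kv[1]):
    print(f"{h:<14} {d:5.2f} bn EUR/yr ({d / total:.0%} of total)")
print(f"total expected damage: {total:.2f} bn EUR/yr")
```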
Li, Yu; Zheng, Ji; Li, Fei; Jin, Xueting; Xu, Chen
2017-01-01
Municipal infrastructure is fundamental to the normal operation and development of a city and is of significance for the stable progress of sustainable urbanization around the world, especially in developing countries. Based on the municipal infrastructure data of the prefecture-level cities in China, municipal infrastructure development is assessed comprehensively using an FA (factor analysis) model, and then the stochastic model STIRPAT (stochastic impacts by regression on population, affluence and technology) is examined to investigate key factors that influence municipal infrastructure of cities in various stages of urbanization and economy. This study indicates that municipal infrastructure development in urban China demonstrates typical characteristics of regional differentiation, in line with the economic development pattern. Municipal infrastructure development in cities is primarily influenced by income, industrialization and investment. For China and similar developing countries under transformation, national public investment remains the primary driving force of the economy as well as the key influencing factor of municipal infrastructure. Contributions from urbanization, the relative consumption level, and the tertiary industry are still scant, which is a crucial issue for many developing countries under transformation. With economic growth and the transformation requirements, the influence of conventional factors such as public investment and industrialization on municipal infrastructure development would be expected to decline; meanwhile, other factors such as a consumption- and tertiary-industry-driven model and an innovation society can become key contributors to municipal infrastructure sustainability. PMID:28787031
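STIRPAT models are typically estimated in log-linear form, ln I = a + b ln P + c ln A + d ln T; the sketch below fits such a model by ordinary least squares on synthetic city data, so the coefficients are illustrative only.

```python
# Sketch: estimating a STIRPAT-style model ln(I) = a + b ln(P) + c ln(A) + d ln(T)
# by ordinary least squares on synthetic data (not the study's data or results).
import numpy as np

rng = np.random.default_rng(1)
n = 200                                     # synthetic prefecture-level cities
P = rng.uniform(0.5, 20, n)                 # population (millions)
A = rng.uniform(2, 15, n)                   # affluence, e.g. income per capita
T = rng.uniform(0.1, 1.0, n)                # technology / investment proxy

# "True" relationship used to generate the synthetic infrastructure index I
I = np.exp(0.5 + 0.8 * np.log(P) + 0.6 * np.log(A) + 0.3 * np.log(T)
           + rng.normal(0, 0.1, n))

X = np.column_stack([np.ones(n), np.log(P), np.log(A), np.log(T)])
coef, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
for name, b in zip(["intercept", "ln P", "ln A", "ln T"], coef):
    print(f"{name:>9}: {b:.3f}")
```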
Lincoln Laboratory Journal. Volume 22, Number 1, 2016
2016-06-09
needs cyber ranges and other infrastructure to conduct scalable, repeatable, scientific, realistic and inexpensive testing, training, and mission...support this mission, infrastructure is being upgraded to make it more efficient and secure. In “Securing the U.S. Transportation Command,” Jeff...using the Electronic Key Management System (EKMS) or over a digital network by using the Key Management Infrastructure (KMI). The units must then
Anatomic pathology laboratory information systems: a review.
Park, Seung Lyung; Pantanowitz, Liron; Sharma, Gaurav; Parwani, Anil Vasdev
2012-03-01
The modern anatomic pathology laboratory depends on a reliable information infrastructure to register specimens, record gross and microscopic findings, regulate laboratory workflow, formulate and sign out report(s), disseminate them to the intended recipients across the whole health system, and support quality assurance measures. This infrastructure is provided by the Anatomical Pathology Laboratory Information Systems (APLIS), which have evolved over decades and now are beginning to support evolving technologies like asset tracking and digital imaging. As digital pathology transitions from "the way of the future" to "the way of the present," the APLIS continues to be one of the key effective enablers of the scope and practice of pathology. In this review, we discuss the evolution, necessary components, architecture and functionality of the APLIS that are crucial to today's practicing pathologist and address the demands of emerging trends on the future APLIS.
Modeling joint restoration strategies for interdependent infrastructure systems.
Zhang, Chao; Kong, Jingjing; Simonovic, Slobodan P
2018-01-01
Life in the modern world depends on multiple critical services provided by infrastructure systems which are interdependent at multiple levels. To effectively respond to infrastructure failures, this paper proposes a model for developing an optimal joint restoration strategy for interdependent infrastructure systems following a disruptive event. First, models for (i) describing the structure of interdependent infrastructure systems and (ii) their interaction process are presented. Both models consider failure types, infrastructure operating rules and interdependencies among systems. Second, an optimization model is proposed for determining an optimal joint restoration strategy at the infrastructure component level by minimizing the economic loss from infrastructure failures. The utility of the model is illustrated using a case study of electric-water systems. Results show that a small number of failed infrastructure components can trigger high-level failures in interdependent systems, and that the optimal joint restoration strategy varies with failure occurrence time. The proposed models can help decision makers understand the mechanisms of infrastructure interactions and search for an optimal joint restoration strategy, which can significantly enhance the safety of infrastructure systems. PMID:29649300
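As a greatly simplified stand-in for the optimization described above, the sketch below orders repairs of failed components greedily by economic loss averted per unit of repair time; the components and numbers are hypothetical, and the paper's model additionally captures interdependencies and operating rules.

```python
# Sketch: a greedy joint restoration order for failed components across two systems.
# Components, loss rates and repair times are hypothetical; this heuristic is only a crude
# simplification of the optimization model described in the abstract.
failed = [
    # (component, system, economic loss per day while down, repair time in days)
    ("substation_3",   "electric", 120.0, 2.0),
    ("pump_station_1", "water",     80.0, 1.0),
    ("feeder_7",       "electric",  40.0, 0.5),
    ("treatment_unit", "water",     60.0, 3.0),
]

# Repair first whatever averts the most loss per day of crew time
order = sorted(failed, key=lambda c: c[2] / c[3], reverse=True)

clock, total_loss = 0.0, 0.0
for name, system, loss_rate, repair_time in order:
    clock += repair_time
    total_loss += loss_rate * clock          # loss accrues until this component is restored
    print(f"t={clock:4.1f} d  restore {name} ({system})")
print(f"total economic loss under this schedule: {total_loss:.1f}")
```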
Defense of Cyber Infrastructures Against Cyber-Physical Attacks Using Game-Theoretic Models
Rao, Nageswara S. V.; Poole, Stephen W.; Ma, Chris Y. T.; ...
2015-04-06
The operation of cyber infrastructures relies on both cyber and physical components, which are subject to incidental and intentional degradations of different kinds. Within the context of network and computing infrastructures, we study the strategic interactions between an attacker and a defender using game-theoretic models that take into account both cyber and physical components. The attacker and defender optimize their individual utilities, expressed as sums of cost and system terms. First, we consider a Boolean attack-defense model, wherein the cyber and physical sub-infrastructures may be attacked and reinforced as individual units. Second, we consider a component attack-defense model wherein their components may be attacked and defended, and the infrastructure requires minimum numbers of both to function. We show that the Nash equilibrium under uniform costs in both cases is computable in polynomial time, and it provides high-level deterministic conditions for infrastructure survival. When probabilities of successful attack and defense, and of incidental failures, are incorporated into the models, the results favor the attacker but otherwise remain qualitatively similar. This approach has been motivated and validated by our experiences with the UltraScience Net infrastructure, which was built to support high-performance network experiments. The analytical results, however, are more general, and we apply them to simplified models of cloud and high-performance computing infrastructures.
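A minimal sketch of the attack-defense idea under probabilistic outcomes: a 2x2 game in which the defender reinforces (or not) and the attacker attacks (or not), with utilities built from a system term minus a cost term; the mixed-strategy equilibrium follows from the usual indifference conditions. All parameter values are invented, and the formulas assume no pure-strategy equilibrium exists for them.

```python
# Sketch: a 2x2 attack-defense game with probabilistic attack/defense outcomes.
# Utilities = expected system term - cost term. All numbers are hypothetical.
VALUE = 5.0                 # value of a functioning infrastructure
C_DEF, C_ATT = 0.8, 1.0     # reinforcement and attack costs
P_SURV_REINFORCED = 0.9     # survival probability if attacked while reinforced
P_SURV_EXPOSED = 0.3        # survival probability if attacked while not reinforced

def payoffs(attack: bool, reinforce: bool):
    surv = 1.0 if not attack else (P_SURV_REINFORCED if reinforce else P_SURV_EXPOSED)
    u_def = VALUE * surv - (C_DEF if reinforce else 0.0)
    u_att = VALUE * (1.0 - surv) - (C_ATT if attack else 0.0)
    return u_att, u_def

# Mixed-strategy equilibrium from indifference conditions (assumes an interior equilibrium):
# the attack probability p makes the defender indifferent between reinforcing or not,
# the reinforcement probability q makes the attacker indifferent between attacking or not.
d_ar, d_an = payoffs(True, True)[1], payoffs(True, False)[1]
d_or, d_on = payoffs(False, True)[1], payoffs(False, False)[1]
p = (d_on - d_or) / ((d_ar - d_an) + (d_on - d_or))

a_ar, a_an = payoffs(True, True)[0], payoffs(True, False)[0]
a_or, a_on = payoffs(False, True)[0], payoffs(False, False)[0]
q = (a_an - a_on) / ((a_an - a_on) + (a_or - a_ar))

surv_given_attack = q * P_SURV_REINFORCED + (1 - q) * P_SURV_EXPOSED
print(f"attacker attacks with probability {p:.2f}")
print(f"defender reinforces with probability {q:.2f}")
print(f"expected survival probability: {1 - p * (1 - surv_given_attack):.2f}")
```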
Public Key Infrastructure Study
1994-04-01
commerce. This Public Key Infrastructure (PKI) study focuses on the United States Federal Government operations, but also addresses national and global ... issues in order to facilitate the interoperation of protected electronic commerce among the various levels of government in the U.S., private citizens
EPOS-GNSS - Improving the infrastructure for GNSS data and products in Europe
NASA Astrophysics Data System (ADS)
Fernandes, Rui; Bos, Machiel; Bruyninx, Carine; Crocker, Paul; Dousa, Jan; Socquet, Anne; Walpersdorf, Andrea; Avallone, Antonio; Ganas, Athanassios; Gunnar, Benedikt; Ionescu, Constantin; Kenyeres, Ambrus; Ozener, Haluk; Vergnolle, Mathilde; Lidberg, Martin; Liwosz, Tomek; Soehne, Wolfgang
2017-04-01
EPOS-IP WP10 - "GNSS Data & Products" is Work Package 10 of the European Plate Observing System - Implementation Phase project, in charge of implementing services for the geosciences community to access existing pan-European geodetic infrastructures. WP10 is currently formed by representatives of participating European institutions, but in the operational phase contributions will be solicited from the entire geodetic community. In fact, WP10 also includes members from other institutions/countries that are not formally participating in EPOS-IP but will be key players in the future services to be provided by EPOS. Additionally, several partners are also key partners at EUREF, which is actively collaborating with EPOS. The geodetic component of EPOS deals essentially with implementing an e-infrastructure to store and disseminate continuous GNSS data from existing Research Infrastructures. Present efforts focus on developing geodetic tools to support Solid Earth research by optimizing the existing resources. However, other research and technical applications (e.g., reference frames, meteorology, space weather) can also benefit in the future from the optimization of geodetic resources in Europe. We present and discuss the status of the implementation of the thematic and core services (TCS) for GNSS data within EPOS and the related business plan. We explain the tools and web services being developed towards the implementation of the best solutions that will permit end-users, and in particular geoscientists, to access the geodetic data, derived solutions, and associated metadata using transparent and standardized processes. We also detail the different DDSS (Data, Data-Products, Services, Software) that will be made available for the Operational Phase of EPOS, which will start to be tested and made available during 2017 and 2018.
CALS Infrastructure Analysis. Draft. Volume 21
DOT National Transportation Integrated Search
1990-03-01
This executive overview to the DoD CALS Infrastructure Analysis Report summarizes the Components' current effort to modernize the DoD technical data infrastructure. This infrastructure includes all existing and planned capabilities to acquire, manage...
GEOSS AIP-2 Climate Change and Biodiversity Use Scenarios: Interoperability Infrastructures
NASA Astrophysics Data System (ADS)
Nativi, Stefano; Santoro, Mattia
2010-05-01
In recent years, the scientific community has made great efforts to study the effects of climate change on life on Earth. In this general framework, a key role is played by the impact of climate change on biodiversity. To assess this, several use scenarios require modeling the impact of climatological change on the regional distribution of biodiversity species. Designing and developing interoperability infrastructures which enable scientists to search, discover, access and use multi-disciplinary resources (i.e. datasets, services, models, etc.) is currently one of the main research fields for Earth and space science informatics. This presentation introduces and discusses an interoperability infrastructure which implements the discovery, access, and chaining of loosely-coupled resources in the climatology and biodiversity domains, allowing forecast and processing models to be set up and run. The presented framework was successfully developed and tested in the context of the GEOSS AIP-2 (Global Earth Observation System of Systems, Architecture Implementation Pilot - Phase 2) Climate Change & Biodiversity thematic Working Group. This interoperability infrastructure is comprised of the following main components and services: a) GEO Portal: through this component the end user is able to search, find and access the services needed for scenario execution; b) Graphical User Interface (GUI): this component provides user interaction functionality and controls the workflow manager to perform the operations required for the scenario implementation; c) Use Scenario Controller: this component acts as a workflow controller implementing the scenario business process, i.e. a typical climate change and biodiversity projection scenario; d) Service Broker implementing Mediation Services: this component realizes a distributed catalogue which federates several discovery and access components (exposing them through a single standard CSW interface); federated components publish climate, environmental and biodiversity datasets; e) Ecological Niche Model Server: this component is able to run one or more Ecological Niche Models (ENM) on selected biodiversity and climate datasets; f) Data Access Transaction Server: this component publishes the model outputs. This framework was assessed in two use scenarios of the GEOSS AIP-2 Climate Change and Biodiversity WG. Both scenarios concern the prediction of species distributions driven by climatological change forecasts. The first scenario dealt with the regional distribution of the pika species in the Great Basin area (North America), while the second concerned the modeling of Arctic food chain species in the North Pole area, analyzing the relationships between different environmental parameters and polar bear distribution. The scientific patronage was provided by the University of Colorado and the University of Alaska, respectively. Results are published on the GEOSS AIP-2 web site: http://www.ogcnetwork.net/AIP2develop.
Transportation of Large Wind Components: A Review of Existing Geospatial Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mooney, Meghan; Maclaurin, Galen
2016-09-01
This report features the geospatial data component of a larger project evaluating logistical and infrastructure requirements for transporting oversized and overweight (OSOW) wind components. The goal of the larger project was to assess the status and opportunities for improving the infrastructure and regulatory practices necessary to transport wind turbine towers, blades, and nacelles from current and potential manufacturing facilities to end-use markets. The purpose of this report is to summarize existing geospatial data on wind component transportation infrastructure and to provide a data gap analysis, identifying areas for further analysis and data collection.
Defense strategies for asymmetric networked systems under composite utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell
We consider an infrastructure of networked systems with discrete components that can be reinforced at certain costs to guard against attacks. The communications network plays a critical, asymmetric role of providing the vital connectivity between the systems. We characterize the correlations within this infrastructure at two levels using (a) aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network, and (b) first order differential conditions on system survival probabilities that characterize component-level correlations. We formulate an infrastructure survival game between an attacker and a provider, who attacks and reinforces individual components, respectively. They use the composite utility functions composed of a survival probability term and a cost term, and the previously studied sum-form and product-form utility functions are their special cases. At Nash Equilibrium, we derive expressions for individual system survival probabilities and the expected total number of operational components. We apply and discuss these estimates for a simplified model of distributed cloud computing infrastructure.
Güler, Özgür; Yaniv, Ziv
2012-01-01
Teaching the key technical aspects of image-guided interventions using a hands-on approach is a challenging task, primarily due to the high cost of and limited access to imaging and tracking systems. We provide a software and data infrastructure which addresses both challenges. Our infrastructure allows students, patients, and clinicians to develop an understanding of the key technologies by using them, and possibly by developing additional components and integrating them into a simple navigation system which we provide. Our approach requires minimal hardware: LEGO blocks to construct a phantom, for which we provide CT scans, and a webcam which, when combined with our software, provides the functionality of a tracking system. A premise of this approach is that tracking accuracy is sufficient for our purpose. We evaluate the accuracy provided by a consumer-grade webcam and show that it is sufficient for educational use. We provide an open source implementation of all the components required for basic image-guided navigation as part of the Image-Guided Surgery Toolkit (IGSTK). It has long been known that in education there is no substitute for hands-on experience; to quote Sophocles, "One must learn by doing the thing; for though you think you know it, you have no certainty, until you try." Our work provides this missing capability in the context of image-guided navigation, enabling a wide audience to learn and experience the use of a navigation system.
Body area network--a key infrastructure element for patient-centered telemedicine.
Norgall, Thomas; Schmidt, Robert; von der Grün, Thomas
2004-01-01
The Body Area Network (BAN) extends the range of existing wireless network technologies by an ultra-low range, ultra-low power network solution optimised for long-term or continuous healthcare applications. It enables wireless radio communication between several miniaturised, intelligent Body Sensor (or actor) Units (BSU) and a single Body Central Unit (BCU) worn at the human body. A separate wireless transmission link from the BCU to a network access point--using different technology--provides for online access to BAN components via usual network infrastructure. The BAN network protocol maintains dynamic ad-hoc network configuration scenarios and co-existence of multiple networks. BAN is expected to become a basic infrastructure element for electronic health services: by integrating patient-attached sensors and mobile actor units with distributed information and data processing systems, the range of medical workflow can be extended to include applications like wireless multi-parameter patient monitoring and therapy support. Beyond clinical use and professional disease management environments, private personal health assistance scenarios (without financial reimbursement by health agencies / insurance companies) enable a wide range of applications and services in future pervasive computing and networking environments.
Infrastructure and the Virtual Observatory
NASA Astrophysics Data System (ADS)
Dowler, P.; Gaudet, S.; Schade, D.
2011-07-01
The modern data center is faced with architectural and software engineering challenges that grow along with the challenges facing observatories: massive data flow, distributed computing environments, and distributed teams collaborating on large and small projects. By using VO standards as key components of the infrastructure, projects can take advantage of a decade of intellectual investment by the IVOA community. By their nature, these standards are proven and tested designs that already exist. Adopting VO standards saves considerable design effort, allows projects to take advantage of open-source software and test suites to speed development, and enables the use of third party tools that understand the VO protocols. The evolving CADC architecture now makes heavy use of VO standards. We show examples of how these standards may be used directly, coupled with non-VO standards, or extended with custom capabilities to solve real problems and provide value to our users. In the end, we use VO services as major parts of the core infrastructure to reduce cost rather than as an extra layer with additional cost and we can deliver more general purpose and robust services to our user community.
Harrop, Wayne; Matteson, Ashley
This paper presents cyber resilience as a key strand of national security. It establishes the importance of critical national infrastructure protection and the growing vicarious nature of remote, well-planned, and well-executed cyber attacks on critical infrastructures. Examples of well-known historical cyber attacks are presented, and the emergence of the 'internet of things' as a cyber vulnerability issue yet to be tackled is explored. The paper identifies key steps being undertaken by those responsible for detecting, deterring, and disrupting cyber attacks on critical national infrastructure in the United Kingdom and the USA.
The national strategy for the physical protection of critical infrastructures and key assets
DOT National Transportation Integrated Search
2003-02-01
This document defines the road ahead for a core mission area identified in the President's National Strategy for Homeland Security-reducing the Nation's vulnerability to acts of terrorism by protecting our critical infrastructures and key assets from...
NASA Astrophysics Data System (ADS)
Zaslavsky, I.; Richard, S. M.; Valentine, D. W., Jr.; Grethe, J. S.; Hsu, L.; Malik, T.; Bermudez, L. E.; Gupta, A.; Lehnert, K. A.; Whitenack, T.; Ozyurt, I. B.; Condit, C.; Calderon, R.; Musil, L.
2014-12-01
EarthCube is envisioned as a cyberinfrastructure that fosters new, transformational geoscience by enabling sharing, understanding and scientifically sound and efficient re-use of formerly unconnected data resources, software, models, repositories, and computational power. Its purpose is to enable the science enterprise and workforce development via an extensible and adaptable collaboration and resource integration framework. A key component of this vision is development of comprehensive inventories supporting resource discovery and re-use across geoscience domains. The goal of the EarthCube CINERGI (Community Inventory of EarthCube Resources for Geoscience Interoperability) project is to create a methodology and assemble a large inventory of high-quality information resources with standard metadata descriptions and traceable provenance. The inventory is compiled from metadata catalogs maintained by geoscience data facilities, as well as from user contributions. The latter mechanism relies on community resource viewers: online applications that support update and curation of metadata records. Once harvested into CINERGI, metadata records from domain catalogs and community resource viewers are loaded into a staging database implemented in MongoDB and validated for compliance with the ISO 19139 metadata schema. Several types of metadata defects detected by the validation engine are automatically corrected with the help of several information extractors or flagged for manual curation. The metadata harvesting, validation and processing components generate provenance statements using the W3C PROV notation, which are stored in a Neo4j database. The curated metadata, along with the provenance information, is then re-published and can be accessed programmatically and via a CINERGI online application. This presentation focuses on the role of resource inventories in a scalable and adaptable information infrastructure, and on the CINERGI metadata pipeline and its implementation challenges. Key project components are described at the project's website (http://workspace.earthcube.org/cinergi), which also provides access to the initial resource inventory, the inventory metadata model, metadata entry forms and a collection of the community resource viewers.
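A rough sketch of one stage of such a pipeline: checking a harvested ISO 19139 record for a few required elements and emitting a PROV-style provenance statement. The namespaces are the standard ISO 19139 ones, but the required-element list, the sample record and the provenance dictionary are simplified assumptions, not the CINERGI implementation.

```python
# Sketch: minimal validation of a harvested ISO 19139 metadata record plus a PROV-style
# provenance statement. The required-element list and provenance vocabulary are simplified
# assumptions for illustration.
import datetime
import xml.etree.ElementTree as ET

NS = {"gmd": "http://www.isotc211.org/2005/gmd",
      "gco": "http://www.isotc211.org/2005/gco"}

record_xml = """<gmd:MD_Metadata xmlns:gmd="http://www.isotc211.org/2005/gmd"
                                 xmlns:gco="http://www.isotc211.org/2005/gco">
  <gmd:fileIdentifier><gco:CharacterString>example-record-001</gco:CharacterString></gmd:fileIdentifier>
  <gmd:identificationInfo/>
</gmd:MD_Metadata>"""

REQUIRED = ["gmd:fileIdentifier", "gmd:identificationInfo", "gmd:contact"]

root = ET.fromstring(record_xml)
missing = [path for path in REQUIRED if root.find(path, NS) is None]
status = "valid" if not missing else f"flagged for curation (missing: {', '.join(missing)})"
print(f"record example-record-001: {status}")

# PROV-style statement recording that this harvested record was checked by the validator
provenance = {
    "prov:entity": "example-record-001",
    "prov:activity": "metadata-validation",
    "prov:wasGeneratedBy": "harvest-from-domain-catalog",
    "prov:generatedAtTime": datetime.datetime.now(datetime.timezone.utc).isoformat(),
}
print(provenance)
```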
Operational models of infrastructure resilience.
Alderson, David L; Brown, Gerald G; Carlyle, W Matthew
2015-04-01
We propose a definition of infrastructure resilience that is tied to the operation (or function) of an infrastructure as a system of interacting components and that can be objectively evaluated using quantitative models. Specifically, for any particular system, we use quantitative models of system operation to represent the decisions of an infrastructure operator who guides the behavior of the system as a whole, even in the presence of disruptions. Modeling infrastructure operation in this way makes it possible to systematically evaluate the consequences associated with the loss of infrastructure components, and leads to a precise notion of "operational resilience" that facilitates model verification, validation, and reproducible results. Using a simple example of a notional infrastructure, we demonstrate how to use these models for (1) assessing the operational resilience of an infrastructure system, (2) identifying critical vulnerabilities that threaten its continued function, and (3) advising policymakers on investments to improve resilience. © 2014 Society for Risk Analysis.
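The notion of operational resilience can be illustrated with a toy operational model: a max-flow problem representing how much demand the system can still serve, re-solved after removing each component in turn. The network below is invented.

```python
# Sketch: "operational resilience" as the fraction of demand an infrastructure can still serve
# after losing a component, using a toy max-flow operational model (network is hypothetical).
import networkx as nx

def build_network():
    G = nx.DiGraph()
    G.add_edge("source", "plant_A", capacity=60)
    G.add_edge("source", "plant_B", capacity=40)
    G.add_edge("plant_A", "substation", capacity=60)
    G.add_edge("plant_B", "substation", capacity=40)
    G.add_edge("substation", "city", capacity=100)
    return G

def served_demand(G):
    value, _ = nx.maximum_flow(G, "source", "city")
    return value

baseline = served_demand(build_network())
for component in ["plant_A", "plant_B", "substation"]:
    G = build_network()
    G.remove_node(component)
    served = served_demand(G)
    print(f"lose {component:<10} -> serve {served:5.0f} / {baseline:.0f} "
          f"({served / baseline:.0%} of baseline function)")
```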
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-04
..., National Protection and Programs Directorate, Office of Infrastructure Protection (IP), will submit the... manner.'' DHS designated IP to lead these efforts. Given that the vast majority of the Nation's critical infrastructure and key resources in most sectors are privately owned or controlled, IP's success in achieving the...
GEMSS: grid-infrastructure for medical service provision.
Benkner, S; Berti, G; Engelbrecht, G; Fingberg, J; Kohring, G; Middleton, S E; Schmidt, R
2005-01-01
The European GEMSS Project is concerned with the creation of medical Grid service prototypes and their evaluation in a secure service-oriented infrastructure for distributed on demand/supercomputing. Key aspects of the GEMSS Grid middleware include negotiable QoS support for time-critical service provision, flexible support for business models, and security at all levels in order to ensure privacy of patient data as well as compliance with EU law. The GEMSS Grid infrastructure is based on a service-oriented architecture and is being built on top of existing standard Grid and Web technologies. The GEMSS infrastructure offers a generic Grid service provision framework that hides the complexity of transforming existing applications into Grid services. For the development of client-side applications or portals, a pluggable component framework has been developed, providing developers with full control over business processes, service discovery, QoS negotiation, and workflow, while keeping their underlying implementation hidden from view. A first version of the GEMSS Grid infrastructure is operational and has been used for the set-up of a Grid test-bed deploying six medical Grid service prototypes including maxillo-facial surgery simulation, neuro-surgery support, radio-surgery planning, inhaled drug-delivery simulation, cardiovascular simulation and advanced image reconstruction. The GEMSS Grid infrastructure is based on standard Web Services technology with an anticipated future transition path towards the OGSA standard proposed by the Global Grid Forum. GEMSS demonstrates that the Grid can be used to provide medical practitioners and researchers with access to advanced simulation and image processing services for improved preoperative planning and near real-time surgical support.
Defense of Cyber Infrastructures Against Cyber-Physical Attacks Using Game-Theoretic Models.
Rao, Nageswara S V; Poole, Stephen W; Ma, Chris Y T; He, Fei; Zhuang, Jun; Yau, David K Y
2016-04-01
The operation of cyber infrastructures relies on both cyber and physical components, which are subject to incidental and intentional degradations of different kinds. Within the context of network and computing infrastructures, we study the strategic interactions between an attacker and a defender using game-theoretic models that take into account both cyber and physical components. The attacker and defender optimize their individual utilities, expressed as sums of cost and system terms. First, we consider a Boolean attack-defense model, wherein the cyber and physical subinfrastructures may be attacked and reinforced as individual units. Second, we consider a component attack-defense model wherein their components may be attacked and defended, and the infrastructure requires minimum numbers of both to function. We show that the Nash equilibrium under uniform costs in both cases is computable in polynomial time, and it provides high-level deterministic conditions for the infrastructure survival. When probabilities of successful attack and defense, and of incidental failures, are incorporated into the models, the results favor the attacker but otherwise remain qualitatively similar. This approach has been motivated and validated by our experiences with UltraScience Net infrastructure, which was built to support high-performance network experiments. The analytical results, however, are more general, and we apply them to simplified models of cloud and high-performance computing infrastructures. © 2015 Society for Risk Analysis.
Rocha, Roberto A.; Bradshaw, Richard L.; Bigelow, Sharon M.; Hanna, Timothy P.; Fiol, Guilherme Del; Hulse, Nathan C.; Roemer, Lorrie K.; Wilkinson, Steven G.
2006-01-01
Widespread cooperation between domain experts and front-line clinicians is a key component of any successful clinical knowledge management framework. Peer review is an established form of cooperation that promotes the dissemination of new knowledge. The authors describe three peer collaboration scenarios that have been implemented using the knowledge management infrastructure available at Intermountain Healthcare. Utilization results illustrating the early adoption patterns of the proposed scenarios are presented and discussed, along with succinct descriptions of planned enhancements and future implementation efforts. PMID:17238422
A pediatrician's view. Skin manifestations of bioterrorism.
Cross, J T; Altemeier, W A
2000-01-01
The physician must be in contact with the local public health infrastructure as soon as a biological agent is perceived as a possibility. Most states are now setting up contingency plans and means to address these issues in a systematic way. This involves using local health departments, police departments, fire departments, National Guard units, and federal agencies such as the CDC and the FBI. The key component, however, is actually identifying a biological agent in the community and then moving quickly to isolate those who may be at risk of spreading the infection.
Symmetric Key Services Markup Language (SKSML)
NASA Astrophysics Data System (ADS)
Noor, Arshad
Symmetric Key Services Markup Language (SKSML) is an eXtensible Markup Language (XML)-based protocol being standardized by the OASIS Enterprise Key Management Infrastructure Technical Committee for requesting and receiving symmetric encryption cryptographic keys within a Symmetric Key Management System (SKMS). This protocol is designed to be used between clients and servers within an Enterprise Key Management Infrastructure (EKMI) to secure data, independent of the application and platform. Building on many security standards such as XML Signature, XML Encryption, Web Services Security and PKI, SKSML provides a standards-based capability that allows any application to use symmetric encryption keys while maintaining centralized control. This article describes the SKSML protocol and its capabilities.
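To make the request/response pattern concrete, here is a sketch of a client building a symmetric-key request document and parsing a reply; the element names and namespace are illustrative placeholders only, not the normative SKSML schema defined by OASIS.

```python
# Sketch: building a symmetric-key request document in the spirit of SKSML and reading a reply.
# Element names, namespace and the faked reply are illustrative placeholders; consult the
# OASIS EKMI specification for the normative SKSML schema and transport bindings.
import xml.etree.ElementTree as ET

NS = "urn:example:sksml-sketch"           # placeholder namespace, not the OASIS one
ET.register_namespace("sk", NS)

def build_key_request(requester: str, key_class: str) -> bytes:
    req = ET.Element(f"{{{NS}}}SymkeyRequest")
    ET.SubElement(req, f"{{{NS}}}RequesterID").text = requester
    ET.SubElement(req, f"{{{NS}}}KeyClass").text = key_class
    return ET.tostring(req, encoding="utf-8", xml_declaration=True)

def parse_key_response(xml_bytes: bytes) -> dict:
    root = ET.fromstring(xml_bytes)
    return {"key_id": root.findtext(f"{{{NS}}}KeyID"),
            "encrypted_key": root.findtext(f"{{{NS}}}EncryptedKeyValue")}

request = build_key_request("client-app-01", "database-encryption")
print(request.decode())

# A server reply would normally arrive over a secured web-service call; faked here:
fake_reply = (f'<sk:SymkeyResponse xmlns:sk="{NS}">'
              f"<sk:KeyID>key-0001</sk:KeyID>"
              f"<sk:EncryptedKeyValue>BASE64...</sk:EncryptedKeyValue>"
              f"</sk:SymkeyResponse>").encode()
print(parse_key_response(fake_reply))
```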
Green Infrastructure Fact Sheet
We briefly describe the environmental issues associated with stormwater runoff, describe the purpose of green infrastructure and key techniques used. We also highlight environmental and economic benefits of green infrastructure through text and tables, as well as provide US wate...
Energy Storage Laboratory | Energy Systems Integration Facility | NREL
technologies. Key Infrastructure Energy storage system inverter, energy storage system simulators, research Plug-In Vehicles/Mobile Storage The plug-in vehicles/mobile storage hub includes connections for small integration. Key Infrastructure Ample house power, REDB access, charging stations, easy vehicle parking access
Report #2006-P-00022, April 26, 2006. Assignment of formal authority and more accountability is required to ensure the initiatives in the Critical Infrastructure and Key Resources Protection Plan are accomplished in a timely manner.
Green Infrastructure Checklists and Renderings
Materials and checklists for Denver, CO, to review development project plans for green infrastructure components, along with best practices for inspecting and maintaining installed green infrastructure. Also includes renderings of streetscape projects.
Making Temporal Search More Central in Spatial Data Infrastructures
NASA Astrophysics Data System (ADS)
Corti, P.; Lewis, B.
2017-10-01
A temporally enabled Spatial Data Infrastructure (SDI) is a framework of geospatial data, metadata, users, and tools intended to provide an efficient and flexible way to use spatial information which includes the historical dimension. One of the key software components of an SDI is the catalogue service, which is needed to discover, query, and manage the metadata. A search engine is a software system capable of supporting fast and reliable search, which may use any means necessary to get users to the resources they need quickly and efficiently. These techniques may include features such as full-text search, natural language processing, weighted results, temporal search based on enrichment, visualization of patterns in distributions of results in time and space using temporal and spatial faceting, and many others. In this paper we focus on the temporal aspects of search, which include temporal enrichment using a time miner - a software engine able to search for date components within a larger block of text - the storage of time ranges in the search engine, handling of historical dates, and the use of temporal histograms in the user interface to display the temporal distribution of search results.
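A toy version of the "time miner" idea described above: extract year mentions from free-text metadata, store them as ranges, and build a decade histogram for temporal faceting. The records are invented and real extraction needs far more robust parsing.

```python
# Sketch: a toy "time miner" that extracts year ranges from free-text metadata and builds
# a decade histogram for temporal faceting. Records are invented; real extraction must handle
# month names, fuzzy dates, historical calendars, etc.
import re
from collections import Counter

records = [
    "Boundary survey of the parish, conducted 1854-1861, with later annotations from 1907.",
    "Aerial photography collection covering the floods of 1953 and 1962.",
    "Road network dataset digitized in 1998, updated 2003-2010.",
]

YEAR = re.compile(r"\b(1[5-9]\d{2}|20\d{2})\b")   # match plausible 4-digit years

def extract_time_range(text):
    """Return the (earliest, latest) year mentioned in the text, or None."""
    years = sorted(int(y) for y in YEAR.findall(text))
    return (years[0], years[-1]) if years else None

decade_counts = Counter()
for rec in records:
    span = extract_time_range(rec)
    print(f"{span}  <- {rec[:50]}...")
    if span:
        for decade in range(span[0] // 10 * 10, span[1] // 10 * 10 + 1, 10):
            decade_counts[decade] += 1

print("temporal facet (records touching each decade):")
for decade in sorted(decade_counts):
    print(f"  {decade}s: {'#' * decade_counts[decade]}")
```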
Strategies for Validation Testing of Ground Systems
NASA Technical Reports Server (NTRS)
Annis, Tammy; Sowards, Stephanie
2009-01-01
In order to accomplish the full Vision for Space Exploration announced by former President George W. Bush in 2004, NASA will have to develop a new space transportation system and supporting infrastructure. The main portion of this supporting infrastructure will reside at the Kennedy Space Center (KSC) in Florida and will either be newly developed or a modification of existing vehicle processing and launch facilities, including Ground Support Equipment (GSE). This type of large-scale launch site development is unprecedented since the time of the Apollo Program. In order to accomplish this successfully within the limited budget and schedule constraints, a combination of traditional and innovative strategies for Verification and Validation (V&V) has been developed. The core of these strategies consists of a building-block approach to V&V, starting with component V&V and ending with a comprehensive end-to-end validation test of the complete launch site, called a Ground Element Integration Test (GEIT). This paper will outline these strategies and provide the high-level planning for meeting the challenges of implementing V&V on a large-scale development program. KEY WORDS: Systems, Elements, Subsystem, Integration Test, Ground Systems, Ground Support Equipment, Component, End Item, Test and Verification Requirements (TVR), Verification Requirements (VR)
EPA Research Highlights: EPA Studies Aging Water Infrastructure
The nation's extensive water infrastructure has the capacity to treat, store, and transport trillions of gallons of water and wastewater per day through millions of miles of pipelines. However, some infrastructure components are more than 100 years old, and as the infrastructure ...
DOT National Transportation Integrated Search
2013-06-01
The proposed study involves investigating long carbon fiber reinforced concrete as a method of mitigating earthquake damage to bridges and other infrastructure components. Long carbon fiber reinforced concrete has demonstrated significant resistanc...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sperling, Joshua; Fisher, Stephen; Reiner, Mark B.
The term 'leapfrogging' has been applied to cities and nations that have adopted a new form of infrastructure by bypassing the traditional progression of development, e.g., from no phones to cell phones - bypassing landlines altogether. However, leapfrogging from unreliable infrastructure systems to 'smart' cities is too large a jump, resulting in unsustainable and unhealthy infrastructure systems. In the Global South, a baseline of unreliable infrastructure is a prevalent problem. The push for sustainable and 'smart' [re]development tends to ignore many of those already living with failing, unreliable infrastructure. Without awareness of baseline conditions, uninformed projects run the risk of returning conditions to the status quo, keeping many urban populations below targets of the United Nations' Sustainable Development Goals. A key part of understanding the baseline is to identify how citizens have long learned to adjust their expectations of basic services. To compensate for poor infrastructure, most residents in the Global South invest in remedial secondary infrastructure (RSI) at the household and business levels. The authors explore three key 'smart' city transformations that address RSI within a hierarchical planning pyramid known as the comprehensive resilient and reliable infrastructure systems (CRISP) planning framework.
Game-Theoretic strategies for systems of components using product-form utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.
Many critical infrastructures are composed of multiple systems of components which are correlated so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash Equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.
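A hedged sketch of what such product-form utilities can look like (the symbols and functional forms here are illustrative assumptions, not the paper's exact notation): with $x$ components reinforced by the provider and $y$ components attacked,

U_P(x, y) = P_I(x, y)\,\bigl(B_P - c_P\, x\bigr), \qquad U_A(x, y) = \bigl(1 - P_I(x, y)\bigr)\,\bigl(B_A - c_A\, y\bigr),

where $P_I$ is the infrastructure survival probability built from the individual system survival probabilities and the correlation functions, $B_P$ and $B_A$ are benefit terms, and $c_P$, $c_A$ are per-component reinforcement and attack costs. Interior Nash Equilibria then satisfy the first-order conditions $\partial U_P/\partial x = 0$ and $\partial U_A/\partial y = 0$, which is where simple equilibrium and sensitivity expressions of the kind described above come from.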
Eric Kuehler; Jon Hathaway; Andrew Tirpak
2017-01-01
The use of green infrastructure for reducing stormwater runoff is increasingly common. One under-studied component of the green infrastructure network is the urban forest system. Trees can play an important role as the "first line of defense" for restoring more natural hydrologic regimes in urban watersheds by intercepting rainfall, delaying runoff, infiltrating, and...
Highway Funding: It's Time to Think Seriously About Operations. A Policy Framework
DOT National Transportation Integrated Search
1998-09-01
This report describes the results of a major data gathering effort aimed at tracking deployment of nine infrastructure components of the metropolitan ITS infrastructure in 78 of the largest metropolitan areas in the nation. The nine components are: F...
The role of the chief information officer in the health care organization in the 1990s.
Glaser, J P
1993-02-01
During the next decade, the role of the CIO will change in two major areas: 1. The relative importance of the CIO as the person who translates business and clinical needs into information technology ideas will diminish. Although this portion of the CIO role will not disappear, this role will be increasingly filled by senior management, clinicians, and other members of the hospital staff. 2. The CIO role will need to shift from an emphasis on managing implementations and projects to developing and advancing the infrastructure. CIOs need to distinguish between the expression of the asset (the application portfolio) and the information technology infrastructure (the remaining four components of the asset). While being pressured to deliver more applications, they can fail to invest in and manage the infrastructure. This is a mistake. By neglecting management of and investment in the infrastructure (e.g., staff training and data quality) or by failing to take advantage of new technologies, they can hinder the ability of an organization to deliver superior applications. Poor data quality will cripple an executive information system and a too-permissive stance toward hardware and operating system heterogeneity will hinder the ability to deliver a computerized patient record. Although some management of the infrastructure is in place, in general it is insufficient. Few organizations have both a distinct data management function and a technical architecture plan, and also develop and enforce key technical, data, and development standards. This insufficiency will hinder their ability to effectively and efficiently apply their information technology infrastructure. The role of the CIO will evolve due to several powerful forces.(ABSTRACT TRUNCATED AT 250 WORDS)
Development of Network-based Communications Architectures for Future NASA Missions
NASA Technical Reports Server (NTRS)
Slywczak, Richard A.
2007-01-01
Since the Vision for Space Exploration (VSE) announcement, NASA has been developing a communications infrastructure that combines existing terrestrial techniques with newer concepts and capabilities. The overall goal is to develop a flexible, modular, and extensible architecture that leverages and enhances terrestrial networking technologies that can either be directly applied or modified for the space regime. In addition, where existing technologies leave gaps, new technologies must be developed. An example includes dynamic routing that accounts for constrained power and bandwidth environments. Using these enhanced technologies, NASA can develop nodes that provide characteristics such as routing, store and forward, and access-on-demand capabilities. But with the development of the new infrastructure, challenges and obstacles will arise. The current communications infrastructure has been developed on a mission-by-mission basis rather than with an end-to-end approach; this has led to a greater ground infrastructure, but has not encouraged communications between space-based assets. This alone presents one of the key challenges that NASA must address. With the development of the new Crew Exploration Vehicle (CEV), NASA has the opportunity to provide an integration path for the new vehicles and provide standards for their development. Some of the newer capabilities these vehicles could include are routing, security, and Software Defined Radios (SDRs). To meet these needs, the NASA/Glenn Research Center's (GRC) Network Emulation Laboratory (NEL) has been using both simulation and emulation to study and evaluate these architectures. These techniques provide options to NASA that directly impact architecture development. This paper identifies components of the infrastructure that play a pivotal role in the new NASA architecture, develops a scheme using simulation and emulation for testing these architectures, and demonstrates how NASA can strengthen the new infrastructure by implementing these concepts.
Fagbeja, Mofoluso A; Hill, Jennifer L; Chatterton, Tim J; Longhurst, James W S; Akpokodje, Joseph E; Agbaje, Ganiy I; Halilu, Shaba A
2017-03-01
Environmental monitoring in middle- and low-income countries is hampered by many factors, which include enactment and enforcement of legislation; deficiencies in environmental data reporting and documentation; inconsistent, incomplete and unverifiable data; a lack of access to data; and a lack of technical expertise. This paper describes the processes undertaken and the major challenges encountered in the construction of the first Niger Delta Emission Inventory (NDEI) for criteria air pollutants and CO2 released from anthropogenic activities in the region. This study focused on using publicly available government and research data. The NDEI has been designed to provide a Geographic Information System-based component of an air quality and carbon management framework. The NDEI infrastructure was designed and constructed at 1-, 10- and 20-km grid resolutions for point, line and area sources using industry standard processes and emission factors derived from activities similar to those in the Niger Delta. Due to inadequate, incomplete, potentially inaccurate and unavailable data, the infrastructure was populated with data based on a series of best possible assumptions for key emission sources. This produces outputs with variable levels of certainty, which also highlights the critical challenges in the estimation of emissions from a developing country. However, the infrastructure is functional and has the ability to produce spatially resolved emission estimates.
TCIA Secure Cyber Critical Infrastructure Modernization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keliiaa, Curtis M.
The Sandia National Laboratories (Sandia Labs) tribal cyber infrastructure assurance initiative was developed in response to growing national cybersecurity concerns in the sixteen Department of Homeland Security (DHS) defined critical infrastructure sectors. Technical assistance is provided for the secure modernization of critical infrastructure and key resources from a cyber-ecosystem perspective, with an emphasis on enhanced security, resilience, and protection. Our purpose is to address national critical infrastructure challenges as a shared responsibility.
Time-Varying, Multi-Scale Adaptive System Reliability Analysis of Lifeline Infrastructure Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, Jared Lee; Kurtz, Nolan Scot
2014-09-01
The majority of current societal and economic needs world-wide are met by the existing networked, civil infrastructure. Because the cost of managing such infrastructure is high and increases with time, risk-informed decision making is essential for those with management responsibilities for these systems. To address such concerns, a methodology that accounts for new information, deterioration, component models, component importance, group importance, network reliability, hierarchical structure organization, and efficiency concerns has been developed. This methodology analyzes the use of new information through the lens of adaptive Importance Sampling for structural reliability problems. Deterioration, multi-scale bridge models, and time-variant component importance are investigated for a specific network. Furthermore, both bridge and pipeline networks are studied for group and component importance, as well as for hierarchical structures in the context of specific networks. Efficiency is the primary driver throughout this study. With this risk-informed approach, those responsible for management can address deteriorating infrastructure networks in an organized manner.
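As a loose illustration of the importance-sampling idea referenced above (not the authors' adaptive scheme), the following sketch estimates a small failure probability P[g(X) < 0] by sampling from a Gaussian density shifted toward the failure region and reweighting with likelihood ratios; the limit-state function, shift, and sample size are assumptions for illustration.

# Plain importance-sampling estimate of a failure probability; limit state,
# proposal shift, and sample size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def limit_state(x):
    return 4.0 - x            # failure when x > 4, i.e. g(x) < 0

n = 100_000
shift = 4.0                    # center the proposal near the failure boundary
x = rng.normal(loc=shift, scale=1.0, size=n)

# Likelihood ratio between the nominal N(0,1) and proposal N(shift,1) densities
weights = np.exp(-0.5 * x**2) / np.exp(-0.5 * (x - shift) ** 2)
p_fail = np.mean((limit_state(x) < 0) * weights)

print(f"importance-sampling estimate: {p_fail:.3e}")   # reference: 1 - Phi(4) ~ 3.2e-05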
Information infrastructure for consumer health: a health information exchange stakeholder study.
Thornewill, Judah; Dowling, Alan F; Cox, Barbara A; Esterhay, Robert J
2011-05-01
An enabling infrastructure for population-wide health information capture and transfer is beginning to emerge in the U.S. However, the essential infrastructure component that is still missing is effective health information exchange (HIE). Health record banks (HRBs) are one of several possible approaches to achieving HIE. Is the approach viable? If so, what requirements must be satisfied in order for it to succeed? The research, conducted in 2007-2008, explored HRB-related interests, concerns, benefits, payment preferences, design requirements, value propositions, and challenges for 12 healthcare stakeholder groups and the consumers they serve in a U.S. metropolitan area of 1.3 million people. A mixed-methods design was developed in a community action research context. Data were gathered and analyzed through 23 focus groups, 13 web surveys, a consumer phone survey (nonstratified random sample) and follow-up meetings. Recruiting goals for leaders representing targeted groups were achieved using a multi-channel communications strategy. Key themes were identified through data triangulation. Then, requirements, value propositions and challenges were developed through iterative processes of interaction with community members. Results include key themes, design requirements, value propositions, and challenges for 12 stakeholder groups and consumers. The research provides a framework for developing a consumer permission-driven, financially sustainable, community HRB model. However, for such a model to flourish, it will need to be part of a nationwide network of HIEs with compatible HRB approaches able to overcome a number of challenges. Copyright © 2011 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
A Framework and Metric for resilience concept in water infrastructure
NASA Astrophysics Data System (ADS)
Karamouz, M.; Olyaei, M.
2017-12-01
Collaborators across the water industry are looking for ways and means to bring resilience into our water infrastructure systems. The key to this conviction is to develop a shared vision among the engineers, builders and decision makers of our water executive branch and policy makers, utilities, community leaders, players, end users and other stakeholders of our urban environment. Among water infrastructures, wastewater treatment plants (WWTP) have a significant role in urban systems' serviceability. These facilities, especially when located in coastal regions, are vulnerable to heavy rain, surface runoff, storm surges and coastal flooding. Flooding can cause overflows from treatment facilities into natural water bodies and result in an environmental predicament of significant proportions. In order to minimize vulnerability to flood, a better understanding of flood risk must be realized. Vulnerability to flood frequency and intensity is increasing due to external forcing such as climate change, as well as increased interdependencies in urban systems. Therefore, to quantify the extent of efforts for flood risk management, a unified index is needed for evaluating the resiliency of infrastructure. Resiliency is a key concept in understanding vulnerability in dealing with flood. New York City, based on its geographic location, urbanized nature, densely populated area, interconnected water bodies and history of past flooding events, is extremely vulnerable to flood and was selected as the case study. In this study, a framework is developed to evaluate the resiliency of WWTPs. An analysis of the current understanding of vulnerability is performed and a new perspective utilizing different components of resiliency, including resourcefulness, robustness, rapidity and redundancy, is presented. To quantify resiliency and rank the wastewater treatment plants in terms of how resilient they are, an index is developed using a Multi Criteria Decision Making (MCDM) technique. Moreover, improvement of WWTPs' performance is investigated by allocating financial resources to attain a desirable level of resiliency. The results of this study show the significant value of quantifying and improving the flood resiliency of WWTPs, an approach that could be used for other water infrastructure and in planning investment strategies for a region.
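A minimal sketch of how a composite resilience index over the four R's might be scored and ranked; the weights and plant scores below are made-up illustrations, and the paper's actual MCDM technique is not reproduced here.

# Toy weighted-sum resilience index over the four R's for hypothetical plants;
# weights and scores are illustrative only.
weights = {"resourcefulness": 0.25, "robustness": 0.35, "rapidity": 0.20, "redundancy": 0.20}

plants = {
    "WWTP-A": {"resourcefulness": 0.6, "robustness": 0.7, "rapidity": 0.5, "redundancy": 0.4},
    "WWTP-B": {"resourcefulness": 0.8, "robustness": 0.5, "rapidity": 0.6, "redundancy": 0.7},
    "WWTP-C": {"resourcefulness": 0.4, "robustness": 0.6, "rapidity": 0.7, "redundancy": 0.5},
}

def resilience_index(scores):
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(((name, resilience_index(s)) for name, s in plants.items()),
                 key=lambda item: item[1], reverse=True)
for name, idx in ranking:
    print(f"{name}: {idx:.2f}")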
Productivity-based approach to valuation of transportation infrastructure.
DOT National Transportation Integrated Search
2014-10-01
Transportation infrastructure, a vital component to sustain economic prosperity, represents the largest publicly owned infrastructure asset in the U.S. With over a trillion dollars invested in long-lived physical assets such as roads and...
Virtualized Networks and Virtualized Optical Line Terminal (vOLT)
NASA Astrophysics Data System (ADS)
Ma, Jonathan; Israel, Stephen
2017-03-01
The success of the Internet and the proliferation of Internet of Things (IoT) devices is forcing telecommunications carriers to re-architect the central office as a datacenter (CORD) so as to bring datacenter economics and cloud agility to the central office (CO). The Open Network Operating System (ONOS) is the first open-source software-defined network (SDN) operating system capable of managing and controlling network, computing, and storage resources to support CORD infrastructure and network virtualization. The virtualized Optical Line Termination (vOLT) is one of the key components in such virtualized networks.
Materials Research With Neutrons at NIST
Cappelletti, R. L.; Glinka, C. J.; Krueger, S.; Lindstrom, R. A.; Lynn, J. W.; Prask, H. J.; Prince, E.; Rush, J. J.; Rowe, J. M.; Satija, S. K.; Toby, B. H.; Tsai, A.; Udovic, T. J.
2001-01-01
The NIST Materials Science and Engineering Laboratory works with industry, standards bodies, universities, and other government laboratories to improve the nation’s measurements and standards infrastructure for materials. An increasingly important component of this effort is carried out at the NIST Center for Neutron Research (NCNR), at present the most productive center of its kind in the United States. This article gives a brief historical account of the growth and activities of the Center with examples of its work in major materials research areas and describes the key role the Center can expect to play in future developments. PMID:27500021
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armstrong, Robert C.; Ray, Jaideep; Malony, A.
2003-11-01
We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and construct performance models for two of them. Both computational and message-passing performance are addressed.
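In the same spirit, though not the prototype described above, a per-component timing wrapper of the following kind is one simple way to collect the wall-clock samples from which such performance models are fit; the component and function names are illustrative.

# Minimal per-component timing collector, loosely analogous to instrumenting
# component invocations in a component-based application. Illustrative only.
import time
from collections import defaultdict

timings = defaultdict(list)

def instrumented(component_name):
    def decorator(func):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                timings[component_name].append(time.perf_counter() - start)
        return wrapper
    return decorator

@instrumented("integrator")
def integrate(n):
    return sum(i * i for i in range(n))

for size in (10_000, 100_000, 1_000_000):
    integrate(size)

for name, samples in timings.items():
    print(name, [f"{t:.4f}s" for t in samples])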
Shackleton Energy enabling Space Resources Exploitation on the Moon within a Decade
NASA Astrophysics Data System (ADS)
Keravala, J.; Stone, B.; Tietz, D.; Frischauf, N.
2013-09-01
Access to in-space natural resources is a key requirement for increasing exploration and expansion of humanity off Earth. In particular, making use of the Moon's resources in the form of lunar polar ice to fuel propellant depots at key locations in near Earth space enables dramatic reductions in the cost of access and operations in space, while simultaneously leveraging reusable in-space transporters essential to opening the newspace highway system. Success of this private venture will provide for a sustained balance of our terrestrial economy and the growth of our civilisation. Establishing the cis-Lunar highway required to access lunar sourced water from the cold traps of the polar craters provides the backbone infrastructure for an exponential growth of a space-based economy. With that core infrastructure in place, space-based solar power generation systems, debris mitigation capabilities and planetary protection systems plus scientific and exploratory missions, among others, can become commercial realities in our lifetime. Shackleton Energy was founded from the space, mining, energy and exploration sectors to meet this challenge as a fully private venture. Following successful robotic precursor missions, our industrial astronauts combined with a robotic mining capability will make first landings at the South Pole of the Moon and begin deliveries of propellant to our depots within a decade. Customers, partners, technologies and most importantly, the investor classes aligned with the risk profiles involved, have been identified and all the components for a viable business are available. Infrastructure investment in space programs has traditionally been the province of governments, but sustainable expansion requires commercial leadership and this is now the responsibility of a dynamic new industry. The technologies and know-how are ready to be applied. Launch services to LEO are available and the industrial capability exists in the aerospace, mining and energy sectors to enable Shackleton Energy to build an in-orbit and Lunar infrastructure on a fully commercial basis.
Nevada Infrastructure for Climate Change Science, Education, and Outreach
NASA Astrophysics Data System (ADS)
Dana, G. L.; Lancaster, N.; Mensing, S. A.; Piechota, T.
2008-12-01
The Great Basin is characterized by complex basin and range topography, arid to semiarid climate, and a history of sensitivity to climate change. Mountain areas comprise about 10% of the landscape, yet are the areas of highest precipitation and generate 85% of groundwater recharge and most surface runoff. These characteristics provide an ideal natural laboratory to study the effects of climate change. The Nevada System of Higher Education, including the University of Nevada, Las Vegas, the University of Nevada, Reno, the Desert Research Institute, and Nevada State College, has begun a five-year research and infrastructure building program, funded by the National Science Foundation Experimental Program to Stimulate Competitive Research (NSF EPSCoR), with the vision "to create a statewide interdisciplinary program and virtual climate change center that will stimulate transformative research, education, and outreach on the effects of regional climate change on ecosystem resources (especially water) and support use of this knowledge by policy makers and stakeholders." Six major strategies are proposed to meet infrastructure needs and attain our vision: 1) Develop a capability to model climate change at a regional and sub-regional scale (Climate Modeling Component) 2) Analyze effects on ecosystems and disturbance regimes (Ecological Change Component) 3) Quantify and model changes in water balance and resources under climate change (Water Resources Component) 4) Assess effects on human systems and enhance policy making and outreach to communities and stakeholders (Policy, Decision-Making, and Outreach Component) 5) Develop a data portal and software to support interdisciplinary research via integration of data from observational networks and modeling (Cyberinfrastructure Component) and 6) Train teachers and students at all levels and provide public outreach in climate change issues (Education Component). Two new climate observational transects will be established across Great Basin Ranges, one anticipated on a mountain range in southern Nevada and the second to be located in north-central Nevada. Climatic, hydrologic and ecological data from these transects will be downloaded into high capacity data storage units and made available to researchers through creation of the Nevada climate change portal. Our research will aim to answer two interdisciplinary science questions key to understanding the effects of future climate change on Great Basin mountain ecosystems and the potential management strategies for responding to these changes: 1) How will climate change affect water resources and linked ecosystem resources and human systems? And 2) How will climate change affect disturbance regimes (e.g., wildland fires, invasive species, insect outbreaks, droughts) and linked systems? Infrastructure developed through this project will provide new interdisciplinary capability to detect, analyze, and model effects of regional climate change in mountainous regions of the west and provide a major contribution to existing climate change research and monitoring networks.
Distributed Data Networks That Support Public Health Information Needs.
Tabano, David C; Cole, Elizabeth; Holve, Erin; Davidson, Arthur J
Data networks, consisting of pooled electronic health data assets from health care providers serving different patient populations, promote data sharing, population and disease monitoring, and methods to assess interventions. Better understanding of data networks, and their capacity to support public health objectives, will help foster partnerships, expand resources, and grow learning health systems. We conducted semistructured interviews with 16 key informants across the United States, identified as network stakeholders based on their respective experience in advancing health information technology and network functionality. Key informants were asked about their experience with and infrastructure used to develop data networks, including each network's utility to identify and characterize populations, usage, and sustainability. Among 11 identified data networks representing hundreds of thousands of patients, key informants described aggregated health care clinical data contributing to population health measures. Key informant interview responses were thematically grouped to illustrate how networks support public health, including (1) infrastructure and information sharing; (2) population health measures; and (3) network sustainability. Collaboration between clinical data networks and public health entities presents an opportunity to leverage infrastructure investments to support public health. Data networks can provide resources to enhance population health information and infrastructure.
Connecting Learners: The South Carolina Educational Technology Plan.
ERIC Educational Resources Information Center
South Carolina State Dept. of Education, Columbia.
This educational technology plan for South Carolina contains the following sections: (1) statewide progress related to the telecommunications infrastructure, professional development, video infrastructure, administrative infrastructure, and funding; (2) introduction to educational technology concepts, including major components and factors…
Aging Water Infrastructure Research Program Update: Innovation & Research for the 21st Century
This slide presentation summarizes key elements of the EPA Office of Research and Development's (ORD) Aging Water Infrastructure (AWI) Research program. An overview of the national problems posed by aging water infrastructure is followed by a brief description of EPA's overall...
The Infrastructure of Open Educational Resources
ERIC Educational Resources Information Center
Smith, Marshall S.; Wang, Phoenix M.
2007-01-01
The success of OER is likely to depend on a flexible, extendable infrastructure that will meet the challenges of an evolving World Wide Web. In this article, the authors examine three key dimensions of this infrastructure--technical, legal/cultural/social/political, and research--and discuss possible directions for development. (Contains 1 table…
NASA Astrophysics Data System (ADS)
Allison, M. Lee; Davis, Rowena
2016-04-01
An e-infrastructure that supports data-intensive, multidisciplinary research is needed to accelerate the pace of science to address 21st century global change challenges. Data discovery, access, sharing and interoperability collectively form core elements of an emerging shared vision of e-infrastructure for scientific discovery. The pace and breadth of change in information management across the data lifecycle means that no one country or institution can unilaterally provide the leadership and resources required to use data and information effectively, or needed to support a coordinated, global e-infrastructure. An 18-month long process involving ~120 experts in domain, computer, and social sciences from more than a dozen countries resulted in a formal set of recommendations that were adopted in fall 2015 by the Belmont Forum collaboration of national science funding agencies and international bodies on what they are best suited to implement for development of an e-infrastructure in support of global change research, including: • adoption of data principles that promote a global, interoperable e-infrastructure and that can be enforced • establishment of information and data officers for coordination of global data management and e-infrastructure efforts • promotion of effective data planning and stewardship • determination of international and community best practices for adoption • development of a cross-disciplinary training curriculum on data management and curation. The implementation plan is being executed under four internationally-coordinated Action Themes towards a globally organized, internationally relevant e-infrastructure and data management capability drawn from existing components, protocols, and standards. The Belmont Forum anticipates opportunities to fund additional projects to fill key gaps and to integrate best practices into an e-infrastructure that supports their programs but that can also be scaled up and deployed more widely. Background: The Belmont Forum is a global consortium established in 2009 to build on the work of the International Group of Funding Agencies for Global Change Research toward furthering collaborative efforts to deliver knowledge needed for action to avoid and adapt to detrimental environmental change, including extreme hazardous events.
Failure Impact Analysis of Key Management in AMI Using Cybernomic Situational Assessment (CSA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Sheldon, Frederick T; Hauser, Katie R
2013-01-01
In earlier work, we presented a computational framework for quantifying the security of a system in terms of the average loss a stakeholder stands to sustain as a result of threats to the system. We named this system the Cyberspace Security Econometrics System (CSES). In this paper, we refine the framework and apply it to cryptographic key management within the Advanced Metering Infrastructure (AMI) as an example. The stakeholders, requirements, components, and threats are determined. We then populate the matrices with justified values by addressing the AMI at a higher level, rather than trying to consider every piece of hardware and software involved. We accomplish this task by leveraging the recently established NISTIR 7628 guideline for smart grid security. This allowed us to choose the stakeholders, requirements, components, and threats realistically. We reviewed the literature and selected an industry technical working group to select three representative threats from a collection of 29 threats. From this subset, we populate the stakes, dependency, and impact matrices, and the threat vector with realistic numbers. Each Stakeholder's Mean Failure Cost is then computed.
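A small NumPy sketch of the kind of matrix chain such a framework evaluates, broadly a mean failure cost of the form MFC = ST · DP · IM · PT (stakes, dependency, impact, and threat-probability terms); the dimensions and numbers below are illustrative placeholders, not values from the AMI study.

# Illustrative mean-failure-cost style computation; all numbers are made up.
import numpy as np

ST = np.array([[100.0, 40.0],        # stakeholders x requirements (loss per failed requirement)
               [ 20.0, 80.0]])
DP = np.array([[0.7, 0.2, 0.1],      # requirements x components (dependency)
               [0.3, 0.5, 0.2]])
IM = np.array([[0.6, 0.1, 0.0],      # components x threats (impact)
               [0.2, 0.5, 0.1],
               [0.1, 0.2, 0.4]])
PT = np.array([0.05, 0.10, 0.02])    # per-threat probability over some period

mean_failure_cost = ST @ DP @ IM @ PT   # one expected-loss figure per stakeholder
for i, mfc in enumerate(mean_failure_cost):
    print(f"stakeholder {i}: expected loss {mfc:.2f}")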
A cyber infrastructure for the SKA Telescope Manager
NASA Astrophysics Data System (ADS)
Barbosa, Domingos; Barraca, João. P.; Carvalho, Bruno; Maia, Dalmiro; Gupta, Yashwant; Natarajan, Swaminathan; Le Roux, Gerhard; Swart, Paul
2016-07-01
The Square Kilometre Array Telescope Manager (SKA TM) will be responsible for assisting the SKA Operations and Observation Management, carrying out System diagnosis and collecting Monitoring and Control data from the SKA subsystems and components. To provide adequate compute resources, scalability, operation continuity and high availability, as well as strict Quality of Service, the TM cyber-infrastructure (embodied in the Local Infrastructure - LINFRA) consists of COTS hardware and infrastructural software (for example: server monitoring software, host operating system, virtualization software, device firmware), providing a specially tailored Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solution. The TM infrastructure provides services in the form of computational power, software defined networking, power, storage abstractions, and high level, state of the art IaaS and PaaS management interfaces. This cyber platform will be tailored to each of the two SKA Phase 1 telescope instances (SKA_MID in South Africa and SKA_LOW in Australia), each presenting different computational and storage infrastructures and conditioned by location. This cyber platform will provide a compute model enabling TM to manage the deployment and execution of its multiple components (observation scheduler, proposal submission tools, M&C components, forensic tools, several databases, etc.). In this sense, the TM LINFRA is primarily focused towards the provision of isolated instances, mostly resorting to virtualization technologies, while defaulting to bare hardware if specifically required due to performance, security, availability, or other requirements.
Defense strategies for cloud computing multi-site server infrastructures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; He, Fei
We consider cloud computing server infrastructures for big data applications, which consist of multiple server sites connected over a wide-area network. The sites house a number of servers, network elements and local-area connections, and the wide-area network plays a critical, asymmetric role of providing vital connectivity between them. We model this infrastructure as a system of systems, wherein the sites and wide-area network are represented by their cyber and physical components. These components can be disabled by cyber and physical attacks, and also can be protected against them using component reinforcements. The effects of attacks propagate within the systems, and also beyond them via the wide-area network. We characterize these effects using correlations at two levels using: (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual site or network, and (b) first-order differential conditions on system survival probabilities that characterize the component-level correlations within individual systems. We formulate a game between an attacker and a provider using utility functions composed of survival probability and cost terms. At Nash Equilibrium, we derive expressions for the expected capacity of the infrastructure given by the number of operational servers connected to the network for sum-form, product-form and composite utility functions.
Game-theoretic strategies for asymmetric networked systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell
We consider an infrastructure consisting of a network of systems each composed of discrete components that can be reinforced at a certain cost to guard against attacks. The network provides the vital connectivity between systems, and hence plays a critical, asymmetric role in the infrastructure operations. We characterize the system-level correlations using the aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network. The survival probabilities of systems and network satisfy first-order differential conditions that capture the component-level correlations. We formulate the problem of ensuring the infrastructure survival as a game between an attacker and a provider, using the sum-form and product-form utility functions, each composed of a survival probability term and a cost term. We derive Nash Equilibrium conditions which provide expressions for individual system survival probabilities, and also the expected capacity specified by the total number of operational components. These expressions differ only in a single term for the sum-form and product-form utilities, despite their significant differences. We apply these results to simplified models of distributed cloud computing infrastructures.
Algorithms for Lightweight Key Exchange.
Alvarez, Rafael; Caballero-Gil, Cándido; Santonja, Juan; Zamora, Antonio
2017-06-27
Public-key cryptography is too slow for general purpose encryption, with most applications limiting its use as much as possible. Some secure protocols, especially those that enable forward secrecy, make a much heavier use of public-key cryptography, increasing the demand for lightweight cryptosystems that can be implemented in low powered or mobile devices. These performance requirements are even more significant in critical infrastructure and emergency scenarios where peer-to-peer networks are deployed for increased availability and resiliency. We benchmark several public-key key-exchange algorithms, determining those that are better suited to the requirements of critical infrastructure and emergency applications, and propose a security framework based on these algorithms, studying its application to decentralized node or sensor networks.
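As a small, hedged illustration of the kind of measurement involved (not the paper's benchmark suite), the sketch below times X25519 ephemeral key agreement using the widely available cryptography package; the iteration count is an arbitrary assumption.

# Rough timing of X25519 ephemeral key agreement; iteration count is an
# illustrative assumption, not the benchmark configuration used in the paper.
import time
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

ITERATIONS = 1_000
start = time.perf_counter()
for _ in range(ITERATIONS):
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()
    shared_alice = alice.exchange(bob.public_key())
    shared_bob = bob.exchange(alice.public_key())
    assert shared_alice == shared_bob
elapsed = time.perf_counter() - start
print(f"{ITERATIONS} exchanges in {elapsed:.3f}s "
      f"({elapsed / ITERATIONS * 1e3:.3f} ms per exchange)")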
NASA Astrophysics Data System (ADS)
Riddick, Andrew; Glaves, Helen; Crompton, Shirley; Giaretta, David; Ritchie, Brian; Pepler, Sam; De Smet, Wim; Marelli, Fulvio; Mantovani, Pier-Luca
2014-05-01
The ability to preserve earth science data for the long-term is a key requirement to support on-going research and collaboration within and between earth science disciplines. A number of critically important current research initiatives (e.g. understanding climate change or ensuring sustainability of natural resources) typically rely on the continuous availability of data collected over several decades in a form which can be easily accessed and used by scientists. In many earth science disciplines the capture of key observational data may be difficult or even impossible to repeat. For example, a specific geological exposure or subsurface borehole may be only temporarily available, and earth observation data derived from a particular satellite mission is often unique. Another key driver for long-term data preservation is that the grand challenges of the kind described above frequently involve cross-disciplinary research utilising raw and interpreted data from a number of related earth science disciplines. Adopting effective data preservation strategies supports this requirement for interoperability as well as ensuring long term usability of earth science data, and has the added potential for stimulating innovative earth science research. The EU-funded SCIDIP-ES project seeks to address these challenges by developing a Europe-wide e-infrastructure for long-term data preservation by providing appropriate software tools and infrastructure services to enable and promote long-term preservation of earth science data. This poster will describe the current status of this e-infrastructure and outline the integration of the prototype SCIDIP-ES software components into the existing systems used by earth science archives and data providers. These prototypes utilise a system architecture which stores preservation information in a standardised OAIS-compliant way, and connects and adds value to existing earth science archives. A SCIDIP-ES test-bed has been implemented by the National Geoscience Data Centre (NGDC) and the British Atmospheric Data Centre (BADC) in the UK, which allows datasets to be more easily integrated and preserved for future use. Many of the data preservation requirements of these two key Natural Environment Research Council (NERC) data centres are common to other earth science data providers and are therefore more widely applicable. The capability for interoperability between datasets stored in different formats is a common requirement for the long-term preservation of data, and the way in which this is supported by the SCIDIP-ES tools and services will be explained.
Standard development at the Human Variome Project.
Smith, Timothy D; Vihinen, Mauno
2015-01-01
The Human Variome Project (HVP) is a world organization working towards facilitating the collection, curation, interpretation and free and open sharing of genetic variation information. A key component of HVP activities is the development of standards and guidelines. HVP Standards are systems, procedures and technologies that the HVP Consortium has determined must be used by HVP-affiliated data sharing infrastructure and should be used by the broader community. HVP guidelines are considered to be beneficial for HVP affiliated data sharing infrastructure and the broader community to adopt. The HVP also maintains a process for assessing systems, processes and tools that implement HVP Standards and Guidelines. Recommended System Status is an accreditation process designed to encourage the adoption of HVP Standards and Guidelines. Here, we describe the HVP standards development process and discuss the accepted standards, guidelines and recommended systems as well as those under acceptance. Certain HVP Standards and Guidelines are already widely adopted by the community and there are committed users for the others. © The Author(s) 2015. Published by Oxford University Press.
DOT National Transportation Integrated Search
2000-01-01
This report describes the results of a major data gathering effort aimed at tracking deployment of nine infrastructure components of the metropolitan ITS infrastructure in 78 of the largest metropolitan areas in the nation. The nine components are: F...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, Richard L.; Kochunas, Brendan; Adams, Brian M.
The Virtual Environment for Reactor Applications components in this distribution comprise selected computational tools and supporting infrastructure that solve neutronics, thermal-hydraulics, fuel performance, and coupled neutronics-thermal hydraulics problems. The infrastructure components provide a simplified common user input capability and provide for the physics integration with data transfer and coupled-physics iterative solution algorithms.
Enhancing infrastructure resilience through business continuity planning.
Fisher, Ronald; Norman, Michael; Klett, Mary
2017-01-01
Critical infrastructure is crucial to the functionality and wellbeing of the world around us. It is a complex network that works together to create an efficient society. The core components of critical infrastructure are dependent on one another to function at their full potential. Organisations face unprecedented environmental risks such as increased reliance on information technology and telecommunications, increased infrastructure interdependencies and globalisation. Successful organisations should integrate the components of cyber-physical and infrastructure interdependencies into a holistic risk framework. Physical security plans, cyber security plans and business continuity plans can help mitigate environmental risks. Cyber security plans are becoming the most crucial to have, yet are the least commonly found in organisations. As the reliance on cyber continues to grow, it is imperative that organisations update their business continuity and emergency preparedness activities to include this.
A Sustained Proximity Network for Multi-Mission Lunar Exploration
NASA Technical Reports Server (NTRS)
Soloff, Jason A.; Noreen, Gary; Deutsch, Leslie; Israel, David
2005-01-01
The Vision for Space Exploration calls for an aggressive sequence of robotic missions beginning in 2008 to prepare for a human return to the Moon by 2020, with the goal of establishing a sustained human presence beyond low Earth orbit. A key enabler of exploration is reliable, available communication and navigation capabilities to support both human and robotic missions. An adaptable, sustainable communication and navigation architecture has been developed by Goddard Space Flight Center and the Jet Propulsion Laboratory to support human and robotic lunar exploration through the next two decades. A key component of the architecture is scalable deployment, with the infrastructure evolving as needs emerge, allowing NASA and its partner agencies to deploy an interoperable communication and navigation system in an evolutionary way, enabling cost effective, highly adaptable systems throughout the lunar exploration program.
This slide presentation summarizes key elements of the EPA Office of Research and Development’s (ORD) Aging Water Infrastructure (AWI) Research program. An overview of the national problems posed by aging water infrastructure is followed by a brief description of EPA’s overall r...
National Infrastructure Protection Plan: Partnering to Enhance Protection and Resiliency
ERIC Educational Resources Information Center
US Department of Homeland Security, 2009
2009-01-01
The overarching goal of the National Infrastructure Protection Plan (NIPP) is to build a safer, more secure, and more resilient America by preventing, deterring, neutralizing, or mitigating the effects of deliberate efforts by terrorists to destroy, incapacitate, or exploit elements of our Nation's critical infrastructure and key resources (CIKR)…
Distributed generation of shared RSA keys in mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Liu, Yi-Liang; Huang, Qin; Shen, Ying
2005-12-01
Mobile Ad Hoc Networks are a totally new concept in which mobile nodes are able to communicate over wireless links in an independent manner, without fixed physical infrastructure or centralized administrative infrastructure. However, the nature of Ad Hoc Networks makes them very vulnerable to security threats. Generation and distribution of shared keys for a CA (Certification Authority) is challenging for security solutions based on a distributed PKI (Public-Key Infrastructure)/CA. The solutions that have been proposed in the literature and some related issues are discussed in this paper. A solution for distributed generation of shared threshold RSA keys for the CA is proposed in the present paper. During the process of creating an RSA private key share, every CA node holds only its own private share. Distributed arithmetic is used to create the CA's private key shares locally, eliminating the requirement for a centralized management institution. By fully exploiting the self-organizing characteristic of Mobile Ad Hoc Networks, the scheme avoids the security risk of any single node holding the complete CA private key, enhancing the security and robustness of the system.
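The paper's scheme is specific to distributed RSA, but the underlying idea that no single node ever holds the whole secret can be illustrated with standard Shamir secret sharing; the sketch below is that generic illustration under stated assumptions, not the paper's protocol, and the prime field and threshold values are arbitrary.

# Generic (t, n) Shamir secret sharing over a prime field, shown only to
# illustrate threshold key material: any t shares reconstruct the secret,
# while fewer than t reveal nothing. Not the distributed RSA scheme itself.
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a toy secret

def make_shares(secret, t, n):
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[1:4]) == 123456789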
Fabrication Infrastructure to Enable Efficient Exploration and Utilization of Space
NASA Technical Reports Server (NTRS)
Howell, Joe T.; Fikes, John C.; McLemore, Carole A.; Manning, Curtis W.; Good, Jim
2007-01-01
Unlike past one-at-a-time mission approaches, system-of-systems infrastructures will be needed to enable ambitious scenarios for sustainable future space exploration and utilization. Fabrication infrastructure will be needed to support habitat structure development, tools and mechanical part fabrication, as well as repair and replacement of ground support and space mission hardware such as life support items, vehicle components and crew systems. The fabrication infrastructure will need the In Situ Fabrication and Repair (ISFR) element, which is working in conjunction with the In Situ Resources Utilization (ISRU) element, to live off the land. The ISFR Element supports the entire life cycle of Exploration by: reducing downtime due to failed components; decreasing risk to crew by recovering quickly from degraded operation of equipment; improving system functionality with advanced geometry capabilities; and enhancing mission safety by reducing assembly part counts of original designs where possible. This paper addresses the fabrication infrastructures that support efficient, affordable, reliable infrastructures for both space exploration systems and logistics; these infrastructures allow sustained, affordable and highly effective operations on the Moon, Mars and beyond.
UNH Data Cooperative: A Cyber Infrastructure for Earth System Studies
NASA Astrophysics Data System (ADS)
Braswell, B. H.; Fekete, B. M.; Prusevich, A.; Gliden, S.; Magill, A.; Vorosmarty, C. J.
2007-12-01
Earth system scientists and managers have a continuously growing demand for a wide array of earth observations derived from various data sources including (a) modern satellite retrievals, (b) "in-situ" records, (c) various simulation outputs, and (d) assimilated data products combining model results with observational records. The sheer quantity of data and its formatting inconsistencies make it difficult for users to take full advantage of this important information resource. Thus the system could benefit from a thorough retooling of our current data processing procedures and infrastructure. Emerging technologies, like OPeNDAP and OGC map services, open standard data formats (NetCDF, HDF), and data cataloging systems (NASA-Echo, Global Change Master Directory, etc.) are providing the basis for a new approach in data management and processing, where web services are increasingly designed to serve computer-to-computer communications without human interaction and complex analysis can be carried out over distributed computer resources interconnected via cyber infrastructure. The UNH Earth System Data Collaborative is designed to utilize the aforementioned emerging web technologies to offer new means of access to earth system data. While the UNH Data Collaborative serves a wide array of data ranging from weather station data (Climate Portal) to ocean buoy records and ship tracks (Portsmouth Harbor Initiative) to land cover characteristics, etc., the underlying data architecture shares common components for data mining and data dissemination via web services. Perhaps the most unique element of the UNH Data Cooperative's IT infrastructure is its prototype modeling environment for regional ecosystem surveillance over the Northeast corridor, which allows the integration of complex earth system model components with the Cooperative's data services. While the complexity of the IT infrastructure to perform complex computations is continuously increasing, scientists are often forced to spend a considerable amount of time solving basic data management and preprocessing tasks and dealing with low level computational design problems like parallelization of model codes. Our modeling infrastructure is designed to take care of the bulk of the common tasks found in complex earth system models, like I/O handling, computational domain and time management, and parallel execution of the modeling tasks. The modeling infrastructure allows scientists to focus on the numerical implementation of the physical processes on single computational objects (typically grid cells) while the framework takes care of the preprocessing of input data, establishing the data exchange between computational objects, and the execution of the science code. In our presentation, we will discuss the key concepts of our modeling infrastructure. We will demonstrate integration of our modeling framework with data services offered by the UNH Earth System Data Collaborative via web interfaces. We will lay out the road map to turn our prototype modeling environment into a truly community framework for a wide range of earth system scientists and environmental managers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, David M.; Hoffman, Michael G.; Niemeyer, Jackie M.
This report examines the information and communications technology (ICT) services industry in response to an inquiry by the Department of Energy’s (DOE’s) Office of Energy Policy and Systems Analysis. The report answers several key questions: • How has the reliance on ICT services evolved in recent years for key infrastructure services such as air travel, freight transport, electricity and natural gas distribution, financial services, and critical health care, and for the household sector? • What ICT industry trends explain continued strong linkage to and reliance upon ICT? • What is the ICT industry’s reliance on grid-supplied power, uninterruptible power supplies, emergency generators and back-up energy storage technologies? • What are the observed direct effects of ICT disruptions induced by electrical system failures in recent history and how resilient are the components of the ICT industry?
Secure communications using nonlinear silicon photonic keys.
Grubel, Brian C; Bosworth, Bryan T; Kossey, Michael R; Cooper, A Brinton; Foster, Mark A; Foster, Amy C
2018-02-19
We present a secure communication system constructed using pairs of nonlinear photonic physical unclonable functions (PUFs) that harness physical chaos in integrated silicon micro-cavities. Compared to a large, electronically stored one-time pad, our method provisions large amounts of information within the intrinsically complex nanostructure of the micro-cavities. By probing a micro-cavity with a rapid sequence of spectrally-encoded ultrafast optical pulses and measuring the lightwave responses, we experimentally demonstrate the ability to extract 2.4 Gb of key material from a single micro-cavity device. Subsequently, in a secure communication experiment with pairs of devices, we achieve bit error rates below 10^-5 at code rates of up to 0.1. The PUFs' responses are never transmitted over the channel or stored in digital memory, thus enhancing the security of the system. Additionally, the micro-cavity PUFs are extremely small, inexpensive, robust, and fully compatible with telecommunications infrastructure, components, and electronic fabrication. This approach can serve one-time pad or public key exchange applications where high security is required.
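A tiny sketch of the bit-error-rate metric implied by those figures, assuming two equal-length bit-string responses (purely illustrative; the actual key extraction and coding pipeline is far more involved, and the example responses are made up).

# Bit error rate between two equal-length binary responses, e.g. repeated
# challenges to paired devices. Purely illustrative of the metric.
def bit_error_rate(a, b):
    assert len(a) == len(b)
    differing = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return differing / (8 * len(a))

response_a = bytes.fromhex("a5a5a5a5")
response_b = bytes.fromhex("a5a5a4a5")   # one flipped bit out of 32
print(f"BER = {bit_error_rate(response_a, response_b):.2e}")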
Ouyang, Min; Tian, Hui; Wang, Zhenghua; Hong, Liu; Mao, Zijun
2017-01-17
This article studies a general type of initiating events in critical infrastructures, called spatially localized failures (SLFs), which are defined as the failure of a set of infrastructure components distributed in a spatially localized area due to damage sustained, while other components outside the area do not directly fail. These failures can be regarded as a special type of intentional attack, such as bomb or explosive assault, or a generalized modeling of the impact of localized natural hazards on large-scale systems. This article introduces three SLF models: node-centered SLFs, district-based SLFs, and circle-shaped SLFs, and proposes an SLF-induced vulnerability analysis method with three aspects: identification of critical locations, comparison of infrastructure vulnerability to random failures, topologically localized failures and SLFs, and quantification of infrastructure information value. The proposed SLF-induced vulnerability analysis method is finally applied to the Chinese railway system and can also be easily adapted to analyze other critical infrastructures for valuable protection suggestions. © 2017 Society for Risk Analysis.
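A minimal sketch of the circle-shaped SLF idea on a synthetic spatial network: every node within a radius of a point is failed, and the largest connected component is compared before and after. The graph, radius, and vulnerability measure are illustrative assumptions, not the article's railway model.

```python
# Illustrative circle-shaped SLF: fail every component within radius r of a
# point and measure the drop in the largest connected component. Synthetic
# network only; not the Chinese railway data used in the article.
import math
import networkx as nx

def circle_slf(G, pos, center, radius):
    """Return a copy of G with all nodes inside the circle removed, plus the failed set."""
    failed = [n for n, (x, y) in pos.items()
              if math.hypot(x - center[0], y - center[1]) <= radius]
    H = G.copy()
    H.remove_nodes_from(failed)
    return H, failed

G = nx.random_geometric_graph(200, 0.12, seed=1)      # synthetic spatial network
pos = nx.get_node_attributes(G, "pos")
H, failed = circle_slf(G, pos, center=(0.5, 0.5), radius=0.15)

before = max(len(c) for c in nx.connected_components(G))
after = max(len(c) for c in nx.connected_components(H)) if H.number_of_nodes() else 0
print(f"failed {len(failed)} nodes; largest component {before} -> {after}")
```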
OOI CyberInfrastructure - Next Generation Oceanographic Research
NASA Astrophysics Data System (ADS)
Farcas, C.; Fox, P.; Arrott, M.; Farcas, E.; Klacansky, I.; Krueger, I.; Meisinger, M.; Orcutt, J.
2008-12-01
Software has become a key enabling technology for scientific discovery, observation, modeling, and exploitation of natural phenomena. New value emerges from the integration of individual subsystems into networked federations of capabilities exposed to the scientific community. Such data-intensive interoperability networks are crucial for future scientific collaborative research, as they open up new ways of fusing data from different sources and across various domains, and of carrying out analyses over wide geographic areas. The recently established NSF OOI program, through its CyberInfrastructure component, addresses this challenge by providing broad access from sensor networks for data acquisition up to computational grids for massive computations, and binding infrastructure facilitating policy management and governance of the emerging system-of-scientific-systems. We provide insight into the integration core of this effort, namely, a hierarchic service-oriented architecture for a robust, performant, and maintainable implementation. We first discuss the relationship between data management and CI crosscutting concerns such as identity management, policy and governance, which define the organizational contexts for data access and usage. Next, we detail critical services including data ingestion, transformation, preservation, inventory, and presentation. To address interoperability issues between data represented in various formats, we employ a semantic framework derived from the Earth System Grid technology, a canonical representation for scientific data based on DAP/OPeNDAP, and related data publishers such as ERDDAP. Finally, we briefly present the underlying transport based on a messaging infrastructure over the AMQP protocol, and the preservation based on a distributed file system through SDSC iRODS.
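The transport layer named above is AMQP; the sketch below publishes a hypothetical data-ingestion event to an AMQP broker with the pika client. Broker host, queue name, and message fields are placeholders rather than OOI CI specifics.

```python
# Minimal AMQP publish sketch for a data-ingestion event; broker host, queue
# name, and message fields are placeholders, not OOI CyberInfrastructure details.
import json
import pika

params = pika.ConnectionParameters(host="broker.example.org")  # hypothetical broker
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="data.ingest", durable=True)

event = {"instrument": "ctd-01", "granule": "2008-12-01T00:00:00Z", "format": "netcdf"}
channel.basic_publish(
    exchange="",                                        # default exchange routes by queue name
    routing_key="data.ingest",
    body=json.dumps(event),
    properties=pika.BasicProperties(delivery_mode=2),   # mark the message persistent
)
connection.close()
```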
Pennsylvania Reaches Infrastructure Milestone
With a series of “aye” votes, the Pennsylvania agency that turns EPA funding and state financing into water infrastructure projects crossed a key threshold recently – $8 billion in investment over nearly three decades
Chen, Elizabeth S.; Maloney, Francine L.; Shilmayster, Eugene; Goldberg, Howard S.
2009-01-01
A systematic and standard process for capturing information within free-text clinical documents could facilitate opportunities for improving quality and safety of patient care, enhancing decision support, and advancing data warehousing across an enterprise setting. At Partners HealthCare System, the Medical Language Processing (MLP) services project was initiated to establish a component-based architectural model and processes to facilitate putting MLP functionality into production for enterprise consumption, promote sharing of components, and encourage reuse. Key objectives included exploring the use of an open-source framework called the Unstructured Information Management Architecture (UIMA) and leveraging existing MLP-related efforts, terminology, and document standards. This paper describes early experiences in defining the infrastructure and standards for extracting, encoding, and structuring clinical observations from a variety of clinical documents to serve enterprise-wide needs. PMID:20351830
Chen, Elizabeth S; Maloney, Francine L; Shilmayster, Eugene; Goldberg, Howard S
2009-11-14
A systematic and standard process for capturing information within free-text clinical documents could facilitate opportunities for improving quality and safety of patient care, enhancing decision support, and advancing data warehousing across an enterprise setting. At Partners HealthCare System, the Medical Language Processing (MLP) services project was initiated to establish a component-based architectural model and processes to facilitate putting MLP functionality into production for enterprise consumption, promote sharing of components, and encourage reuse. Key objectives included exploring the use of an open-source framework called the Unstructured Information Management Architecture (UIMA) and leveraging existing MLP-related efforts, terminology, and document standards. This paper describes early experiences in defining the infrastructure and standards for extracting, encoding, and structuring clinical observations from a variety of clinical documents to serve enterprise-wide needs.
Design principles in the development of (public) health information infrastructures.
Neame, Roderick
2012-01-01
In this article, the author outlines the key issues in the development of a regional health information infrastructure suitable for public health data collections. A set of 10 basic design and development principles, as used and validated in the development of the successful New Zealand National Health Information Infrastructure in 1993, is put forward as a basis for future developments. The article emphasises the importance of securing clinical input into any health data that is collected, and suggests strategies whereby this may be achieved, including creating an information economy alongside the care economy. It is suggested that the role of government in such developments is to demonstrate leadership, to work with the sector to develop data, messaging and security standards, to establish key online indexes, to develop data warehouses and to create financial incentives for adoption of the infrastructure and the services it delivers to users. However, experience suggests that government should refrain from getting involved in local care services data infrastructure, technology and management issues.
NASA Astrophysics Data System (ADS)
Peng, Xiang; Zhang, Peng; Cai, Lilong
In this paper, we present a virtual-optical based information security system model with the aid of public-key-infrastructure (PKI) techniques. The proposed model employs a hybrid architecture in which our previously published encryption algorithm based on virtual-optics imaging methodology (VOIM) can be used to encipher and decipher data, while an asymmetric algorithm, for example RSA, is applied for enciphering and deciphering the session key(s). For an asymmetric system, given an encryption key, it is computationally infeasible to determine the decryption key and vice versa. The whole information security model is run under the framework of PKI, which is based on public-key cryptography and digital signatures. This PKI-based VOIM security approach provides additional features such as confidentiality, authentication, and integrity for data encryption in a networked environment.
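The hybrid pattern described here (a symmetric cipher for the data, an asymmetric cipher for the session key) can be sketched as follows; Fernet stands in for the VOIM optical cipher, which is not reproduced, and RSA-OAEP key wrapping is a generic choice rather than the paper's exact scheme.

```python
# Hybrid-encryption sketch: RSA protects the session key, a symmetric cipher
# (Fernet here, standing in for the VOIM cipher) protects the data.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Receiver's key pair (in a PKI the public key would come from a certificate).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"payload to protect")

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)      # sent alongside the ciphertext

# Receiver side: unwrap the session key, then decrypt the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"payload to protect"
```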
Satellite Communications for Aeronautical Applications: Recent research and Development Results
NASA Technical Reports Server (NTRS)
Kerczewski, Robert J.
2001-01-01
Communications systems have always been a critical element in aviation. Until recently, nearly all communications between the ground and aircraft have been based on analog voice technology. But the future of global aviation requires a more sophisticated "information infrastructure" which not only provides more and better communications, but integrates the key information functions (communications, navigation, and surveillance) into a modern, network-based infrastructure. Satellite communications will play an increasing role in providing information infrastructure solutions for aviation. Developing and adapting satellite communications technologies for aviation use is now receiving increased attention as the urgency to develop information infrastructure solutions grows. The NASA Glenn Research Center is actively involved in research and development activities for aeronautical satellite communications, with a key emphasis on air traffic management communications needs. This paper describes the recent results and status of NASA Glenn's research program.
PKI security in large-scale healthcare networks.
Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos
2012-06-01
During the past few years, a number of PKIs (Public Key Infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, particularly when deployed over large-scale healthcare networks. In this paper, we propose a PKI to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI addresses the trust issues that arise in a large-scale healthcare network comprising multi-domain PKIs.
Hoyt, David B; Schneidman, Diane S
2014-01-01
Throughout its 100-year history of working to ensure that surgical patients receive safe, high-quality, cost-effective care, the American College of Surgeons has adhered to four key principles: (1) Set the standards to identify and set the highest clinical standards based on the collection of outcomes data and other scientific evidence that can be customized to each patient's condition so that surgeons can offer the right care, at the right time, in the right setting. (2) Build the right infrastructure to provide the highest quality care with surgical facilities having in place appropriate and adequate staffing levels, a reasonable mix of specialists, and the right equipment. Checklists and health information technology, such as the electronic health record, are components of this infrastructure. (3) Collect robust data so that surgical decisions are based on clinical data drawn from medical charts that track patients after discharge from the hospital. Data should be risk-adjusted and collected in nationally benchmarked registries to allow institutions to compare their care with other providers. (4) Verify processes and infrastructure by having an external authority periodically affirm that the right systems are in place at health care institutions, that outcomes are being measured and benchmarked, and that hospitals and providers are proactively responding to these findings. © 2014.
Service Modeling Language Applied to Critical Infrastructure
NASA Astrophysics Data System (ADS)
Baldini, Gianmarco; Fovino, Igor Nai
The modeling of dependencies in complex infrastructure systems is still a very difficult task. Many methodologies have been proposed, but a number of challenges still remain, including the definition of the right level of abstraction, the presence of different views on the same critical infrastructure and how to adequately represent the temporal evolution of systems. We propose a modeling methodology where dependencies are described in terms of the service offered by the critical infrastructure and its components. The model provides a clear separation between services and the underlying organizational and technical elements, which may change in time. The model uses the Service Modeling Language (SML) proposed by the W3C (World Wide Web Consortium) to describe critical infrastructure in terms of interdependent service nodes, including constraints, behavior, information flows, relations, rules and other features. Each service node is characterized by its technological, organizational and process components. The model is then applied to a real case of an ICT system for user authentication.
Algorithms for Lightweight Key Exchange †
Santonja, Juan; Zamora, Antonio
2017-01-01
Public-key cryptography is too slow for general-purpose encryption, with most applications limiting its use as much as possible. Some secure protocols, especially those that enable forward secrecy, make a much heavier use of public-key cryptography, increasing the demand for lightweight cryptosystems that can be implemented in low-powered or mobile devices. These performance requirements are even more significant in critical infrastructure and emergency scenarios where peer-to-peer networks are deployed for increased availability and resiliency. We benchmark several public-key key-exchange algorithms, determine those best suited to the requirements of critical infrastructure and emergency applications, propose a security framework based on these algorithms, and study its application to decentralized node or sensor networks. PMID:28654006
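As an example of the kind of lightweight public-key key exchange such a benchmark covers, the sketch below derives a shared session key with X25519 ECDH plus HKDF; the choice of X25519 and the info label are illustrative assumptions, not necessarily the algorithms selected by the authors.

```python
# Elliptic-curve Diffie-Hellman (X25519) sketch: a lightweight key exchange of
# the kind the paper benchmarks; HKDF derives the final session key.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Each peer combines its private key with the other's public key.
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())
assert alice_shared == bob_shared

session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"emergency-mesh-session").derive(alice_shared)
print(len(session_key), "byte session key derived")
```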
Crowdsourced Contributions to the Nation's Geodetic Elevation Infrastructure
NASA Astrophysics Data System (ADS)
Stone, W. A.
2014-12-01
NOAA's National Geodetic Survey (NGS), a United States Department of Commerce agency, is engaged in providing the nation's fundamental positioning infrastructure - the National Spatial Reference System (NSRS) - which includes the framework for latitude, longitude, and elevation determination as well as various geodetic models, tools, and data. Capitalizing on Global Navigation Satellite System (GNSS) technology for improved access to the nation's precise geodetic elevation infrastructure requires use of a geoid model, which relates GNSS-derived heights (ellipsoid heights) with traditional elevations (orthometric heights). NGS is facilitating the use of crowdsourced GNSS observations collected at published elevation control stations by the professional surveying, geospatial, and scientific communities to help improve NGS' geoid modeling capability. This collocation of published elevation data and newly collected GNSS data integrates the two height systems. This effort in turn supports enhanced access to accurate elevation information across the nation, thereby benefiting all users of geospatial data. By partnering with the public in this collaborative effort, NGS is not only helping facilitate improvements to the elevation infrastructure for all users but also empowering users of NSRS with the capability to do their own high-accuracy positioning. The educational outreach facet of this effort helps inform the public, including the scientific community, about the utility of various NGS tools, including the widely used Online Positioning User Service (OPUS). OPUS plays a key role in providing user-friendly and high-accuracy access to NSRS, with optional sharing of results with NGS and the public. All who are interested in helping evolve and improve the nationwide elevation determination capability are invited to participate in this nationwide partnership and to learn more about the geodetic infrastructure which is a vital component of viable spatial data for many disciplines, including the geosciences.
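The geoid model ties the two height systems together through the standard relation below; this is the conventional textbook form, stated here as a reminder rather than an NGS-specific formulation.

```latex
% h: GNSS-derived ellipsoid height, H: orthometric height (elevation),
% N: geoid height from the geoid model (standard sign convention).
\[
  H \approx h - N
\]
```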
Roadmap for Developing of Brokering as a Component of EarthCube
NASA Astrophysics Data System (ADS)
Pearlman, J.; Khalsa, S. S.; Browdy, S.; Duerr, R. E.; Nativi, S.; Parsons, M. A.; Pearlman, F.; Robinson, E. M.
2012-12-01
The goal of NSF's EarthCube is to create a sustainable infrastructure that enables the sharing of all geosciences data, information, and knowledge in an open, transparent and inclusive manner. Key to achieving the EarthCube vision is establishing a process that will guide the evolution of the infrastructure through community engagement and appropriate investment so that the infrastructure is embraced and utilized by the entire geosciences community. In this presentation we describe a roadmap, developed through the EarthCube Brokering Concept Award, for an evolutionary process of infrastructure and interoperability development. All geoscience communities already have, to a greater or lesser degree, elements of an information infrastructure in place. These elements include resources such as data archives, catalogs, and portals as well as vocabularies, data models, protocols, best practices and other community conventions. What is necessary now is a process for consolidating these diverse infrastructure elements into an overall infrastructure that provides easy discovery, access and utilization of resources across disciplinary boundaries. This process of consolidation will be achieved by creating "interfaces," what we call "brokers," between systems. Brokers connect disparate systems without imposing new burdens upon those systems, and enable the infrastructure to adjust to new technical developments and scientific requirements as they emerge. Robust cyberinfrastructure will arise only when social, organizational, and cultural issues are resolved in tandem with the creation of technology-based services. This is best done through use-case-driven requirements and agile, iterative development methods. It is important to start by solving real (not hypothetical) information access and use problems via small pilot projects that develop capabilities targeted to specific communities. These pilots can then grow into larger prototypes addressing intercommunity problems working towards a full-scale socio-technical infrastructure vision. Brokering, as a critical capability for connecting systems, evolves over time through more connections and increased functionality. This adaptive process allows for continual evaluation as to how well science-driven use cases are being met. Several NSF infrastructure projects are underway and beginning to shape the next generation of information sharing. There is a near term, and possibly unique, opportunity to increase the impact and interconnectivity of these projects, and further improve science research collaboration through brokering. Brokering has been demonstrated to be an essential part of a robust, adaptive infrastructure, but critical questions of governance and detailed implementation remain. Our roadmap proposes the expansion of brokering pilots into fully operational prototypes that work with the broader science and informatics communities to answer these questions, connect existing and emerging systems, and evolve the EarthCube infrastructure.
The new ATLAS Fast Calorimeter Simulation
NASA Astrophysics Data System (ADS)
Schaarschmidt, J.; ATLAS Collaboration
2017-10-01
Current and future need for large scale simulated samples motivate the development of reliable fast simulation techniques. The new Fast Calorimeter Simulation is an improved parameterized response of single particles in the ATLAS calorimeter that aims to accurately emulate the key features of the detailed calorimeter response as simulated with Geant4, yet approximately ten times faster. Principal component analysis and machine learning techniques are used to improve the performance and decrease the memory need compared to the current version of the ATLAS Fast Calorimeter Simulation. A prototype of this new Fast Calorimeter Simulation is in development and its integration into the ATLAS simulation infrastructure is ongoing.
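A toy sketch of the principal-component step on synthetic per-layer energy fractions: decorrelate the layers, then draw new showers in the reduced space. This is purely illustrative of the parameterization idea, not the ATLAS Fast Calorimeter Simulation implementation.

```python
# Toy PCA step of a parameterized shower response: decorrelate per-layer energy
# fractions of synthetic showers, then sample new showers in the reduced space.
# Illustrative only; not the ATLAS FastCaloSim code.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 5000 synthetic "showers": energy fractions across 4 calorimeter layers.
raw = rng.dirichlet(alpha=[4.0, 8.0, 2.0, 1.0], size=5000)

pca = PCA(n_components=3)
latent = pca.fit_transform(raw)                 # decorrelated principal components
print("explained variance:", pca.explained_variance_ratio_.round(3))

# Parameterized generation: sample in PCA space, map back to layer fractions.
sampled = pca.inverse_transform(rng.normal(0.0, latent.std(axis=0), size=(10, 3)))
print(sampled.clip(min=0)[:2])
```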
NASA Technical Reports Server (NTRS)
Chow, Edward; Spence, Matthew Chew; Pell, Barney; Stewart, Helen; Korsmeyer, David; Liu, Joseph; Chang, Hsin-Ping; Viernes, Conan; Gogorth, Andre
2003-01-01
This paper discusses the challenges and security issues inherent in building complex cross-organizational collaborative projects and software systems within NASA. By applying the design principles of compartmentalization, organizational hierarchy and inter-organizational federation, the Secured Advanced Federated Environment (SAFE) is laying the foundation for a collaborative virtual infrastructure for the NASA community. A key element of SAFE is the Micro Security Domain (MSD) concept, which balances the need to collaborate and the need to enforce enterprise and local security rules. With the SAFE approach, security is an integral component of enterprise software and network design, not an afterthought.
High-Rate Wireless Airborne Network Demonstration (HiWAND) Flight Test Results
NASA Technical Reports Server (NTRS)
Franz, Russell
2008-01-01
An increasing number of flight research and airborne science experiments now contain network-ready systems that could benefit from a high-rate bidirectional air-to-ground network link. A prototype system, the High-Rate Wireless Airborne Network Demonstration, was developed from commercial off-the-shelf components while leveraging the existing telemetry infrastructure on the Western Aeronautical Test Range. This approach resulted in a cost-effective, long-range, line-of-sight network link over the S and the L frequency bands using both frequency modulation and shaped-offset quadrature phase-shift keying modulation. This report discusses system configuration and the flight test results.
Approaches to Sustainable Capacity Building for Cardiovascular Disease Care in Kenya.
Barasa, Felix A; Vedanthan, Rajesh; Pastakia, Sonak D; Crowe, Susie J; Aruasa, Wilson; Sugut, Wilson K; White, Russ; Ogola, Elijah S; Bloomfield, Gerald S; Velazquez, Eric J
2017-02-01
Cardiovascular diseases are approaching epidemic levels in Kenya and other low- and middle-income countries without accompanying effective preventive and therapeutic strategies. This is happening against a background of residual and emerging infections and other diseases of poverty, and of increasing physical injuries from traffic accidents and noncommunicable diseases. Investments to create a skilled workforce and health care infrastructure are needed. Improving diagnostic capacity, access to high-quality medications, health care, appropriate legislation, and proper coordination are key components to ensuring the reversal of the epidemic and a healthy citizenry. Strong partnerships with developed countries are also crucial. Copyright © 2016 Elsevier Inc. All rights reserved.
High-Rate Wireless Airborne Network Demonstration (HiWAND) Flight Test Results
NASA Technical Reports Server (NTRS)
Franz, Russell
2007-01-01
An increasing number of flight research and airborne science experiments now contain network-ready systems that could benefit from a high-rate bidirectional air-to-ground network link. A prototype system, the High-Rate Wireless Airborne Network Demonstration, was developed from commercial off-the-shelf components while leveraging the existing telemetry infrastructure on the Western Aeronautical Test Range. This approach resulted in a cost-effective, long-range, line-of-sight network link over the S and the L frequency bands using both frequency modulation and shaped-offset quadrature phase-shift keying modulation. This paper discusses system configuration and the flight test results.
Tempest: Tools for Addressing the Needs of Next-Generation Climate Models
NASA Astrophysics Data System (ADS)
Ullrich, P. A.; Guerra, J. E.; Pinheiro, M. C.; Fong, J.
2015-12-01
Tempest is a comprehensive simulation-to-science infrastructure that tackles the needs of next-generation, high-resolution, data intensive climate modeling activities. This project incorporates three key components: TempestDynamics, a global modeling framework for experimental numerical methods and high-performance computing; TempestRemap, a toolset for arbitrary-order conservative and consistent remapping between unstructured grids; and TempestExtremes, a suite of detection and characterization tools for identifying weather extremes in large climate datasets. In this presentation, the latest advances with the implementation of this framework will be discussed, and a number of projects now utilizing these tools will be featured.
Operable Data Management for Ocean Observing Systems
NASA Astrophysics Data System (ADS)
Chavez, F. P.; Graybeal, J. B.; Godin, M. A.
2004-12-01
As oceanographic observing systems become more numerous and complex, data management solutions must follow. Most existing oceanographic data management systems fall into one of three categories: they have been developed as dedicated solutions, with limited application to other observing systems; they expect that data will be pre-processed into well-defined formats, such as netCDF; or they are conceived as robust, generic data management solutions, with complexity (high) and maturity and adoption rates (low) to match. Each approach has strengths and weaknesses; no approach yet fully addresses, nor takes advantage of, the sophistication of ocean observing systems as they are now conceived. In this presentation we describe critical data management requirements for advanced ocean observing systems, of the type envisioned by ORION and IOOS. By defining common requirements -- functional, qualitative, and programmatic -- for all such ocean observing systems, the performance and nature of the general data management solution can be characterized. Issues such as scalability, maintaining metadata relationships, data access security, visualization, and operational flexibility suggest baseline architectural characteristics, which may in turn lead to reusable components and approaches. Interoperability with other data management systems, with standards-based solutions in metadata specification and data transport protocols, and with the data management infrastructure envisioned by IOOS and ORION, can also be used to define necessary capabilities. Finally, some requirements for the software infrastructure of ocean observing systems can be inferred. Early operational results and lessons learned, from development and operations of MBARI ocean observing systems, are used to illustrate key requirements, choices, and challenges. Reference systems include the Monterey Ocean Observing System (MOOS), its component software systems (Software Infrastructure and Applications for MOOS, and the Shore Side Data System), and the Autonomous Ocean Sampling Network (AOSN).
Cyberwarfare on the Electricity Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murarka, N.; Ramesh, V.C.
2000-03-20
The report analyzes the possibility of cyberwarfare on the electricity infrastructure. The ongoing deregulation of the electricity industry makes the power grid all the more vulnerable to cyber attacks. The report models the power system's information system components, potential threats, and protective measures. It therefore offers a framework for infrastructure protection.
FOSS Tools for Research Infrastructures - A Success Story?
NASA Astrophysics Data System (ADS)
Stender, V.; Schroeder, M.; Wächter, J.
2015-12-01
Established initiatives and mandated organizations, e.g. the Initiative for Scientific Cyberinfrastructures (NSF, 2007) or the European Strategy Forum on Research Infrastructures (ESFRI, 2008), promote and foster the development of sustainable research infrastructures. The basic idea behind these infrastructures is the provision of services supporting scientists to search, visualize and access data, to collaborate and exchange information, as well as to publish data and other results. Especially the management of research data is gaining more and more importance. In geosciences these developments have to be merged with the enhanced data management approaches of Spatial Data Infrastructures (SDI). The Centre for GeoInformationTechnology (CeGIT) at the GFZ German Research Centre for Geosciences has the objective to establish concepts and standards of SDIs as an integral part of research infrastructure architectures. In different projects, solutions to manage research data for land- and water management or environmental monitoring have been developed based on a framework consisting of Free and Open Source Software (FOSS) components. The framework provides basic components supporting the import and storage of data, discovery and visualization as well as data documentation (metadata). In our contribution, we present our data management solutions developed in three projects, Central Asian Water (CAWa), Sustainable Management of River Oases (SuMaRiO) and Terrestrial Environmental Observatories (TERENO), where FOSS components build the backbone of the data management platform. The multiple use and validation of tools helped to establish a standardized architectural blueprint serving as a contribution to research infrastructures. We examine the question of whether FOSS tools are really a sustainable choice and whether the increased maintenance effort is justified. Finally, this should help answer the question of whether the use of FOSS for research infrastructures is a success story.
Scholz, Stefan; Ngoli, Baltazar; Flessa, Steffen
2015-05-01
Health care infrastructure constitutes a major component of the structural quality of a health system. Infrastructural deficiencies of health services are reported in literature and research. A number of instruments exist for the assessment of infrastructure. However, no easy-to-use instruments to assess health facility infrastructure in developing countries are available. Present tools are not applicable for a rapid assessment by health facility staff. Therefore, health information systems lack data on facility infrastructure. A rapid assessment tool for the infrastructure of primary health care facilities was developed by the authors and pilot-tested in Tanzania. The tool measures the quality of all infrastructural components comprehensively and with high standardization. Ratings use a 2-1-0 scheme which is frequently used in Tanzanian health care services. Infrastructural indicators and indices are obtained from the assessment and serve for reporting and tracing of interventions. The tool was pilot-tested in Tanga Region (Tanzania). The pilot test covered seven primary care facilities in the range between dispensary and district hospital. The assessment encompassed the facilities as entities as well as 42 facility buildings and 80 pieces of technical medical equipment. A full assessment of facility infrastructure was undertaken by health care professionals while the rapid assessment was performed by facility staff. Serious infrastructural deficiencies were revealed. The rapid assessment tool proved a reliable instrument of routine data collection by health facility staff. The authors recommend integrating the rapid assessment tool in the health information systems of developing countries. Health authorities in a decentralized health system are thus enabled to detect infrastructural deficiencies and trace the effects of interventions. The tool can lay the data foundation for district facility infrastructure management.
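A small hedged sketch of turning 2-1-0 component ratings into a facility-level index; the aggregation rule (mean rating rescaled to 0-100) is an assumption for illustration, not the published tool's formula.

```python
# Hedged sketch: aggregate 2-1-0 ratings into a facility infrastructure index.
# The mean-scaled-to-100 rule and component names are assumptions for illustration.
ratings = {                       # hypothetical component ratings for one facility
    "water_supply": 2, "power": 1, "sanitation": 0,
    "building_state": 1, "medical_equipment": 2,
}

def infrastructure_index(component_ratings: dict) -> float:
    scores = list(component_ratings.values())
    return 100.0 * sum(scores) / (2 * len(scores))   # 2 is the best possible rating

print(f"facility index: {infrastructure_index(ratings):.0f}/100")
```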
Quantifying economic benefits for rail infrastructure projects.
DOT National Transportation Integrated Search
2014-10-01
This project identifies metrics for measuring the benefit of rail infrastructure projects for key stakeholders. It is important that stakeholders with an interest in community economic development play an active role in the development of the rai...
A framework to support human factors of automation in railway intelligent infrastructure.
Dadashi, Nastaran; Wilson, John R; Golightly, David; Sharples, Sarah
2014-01-01
Technological and organisational advances have increased the potential for remote access and proactive monitoring of the infrastructure in various domains and sectors - water and sewage, oil and gas and transport. Intelligent Infrastructure (II) is an architecture that potentially enables the generation of timely and relevant information about the state of any type of infrastructure asset, providing a basis for reliable decision-making. This paper reports an exploratory study to understand the concepts and human factors associated with II in the railway, largely drawing from structured interviews with key industry decision-makers and attachment to pilot projects. Outputs from the study include a data-processing framework defining the key human factors at different levels of the data structure within a railway II system and a system-level representation. The framework and other study findings will form a basis for human factors contributions to systems design elements such as information interfaces and role specifications.
Nevada Infrastructure for Climate Change Science, Education, and Outreach
NASA Astrophysics Data System (ADS)
Dana, G. L.; Piechota, T. C.; Lancaster, N.; Mensing, S. A.
2009-12-01
The Nevada system of Higher Education, including the University of Nevada, Las Vegas, the University of Nevada, Reno, the Desert Research Institute, and Nevada State College have begun a five year research and infrastructure building program, funded by the National Science Foundation Experimental Program to Stimulate Competitive Research (NSF EPSCoR) with the vision “to create a statewide interdisciplinary program and virtual climate change center that will stimulate transformative research, education, and outreach on the effects of regional climate change on ecosystem resources (especially water) and support use of this knowledge by policy makers and stakeholders.” Six major strategies are proposed: 1) Develop a capability to model climate change and its effects at a regional and sub-regional scales to evaluate different future scenarios and strategies (Climate Modeling Component) 2) Develop data collection, modeling, and visualization infrastructure to determine and analyze effects on ecosystems and disturbance regimes (Ecological Change Component) 3) Develop data collection, modeling, and visualization infrastructure to better quantify and model changes in water balance and resources under climate change (Water Resources Component) 4) Develop data collection and modeling infrastructure to assess effects on human systems, responses to institutional and societal aspects, and enhance policy making and outreach to communities and stakeholders (Policy, Decision-Making, and Outreach Component) 5) Develop a data portal and software to support interdisciplinary research via integration of data from observational networks and modeling (Cyberinfrastructure Component) and 6) Develop educational infrastructure to train students at all levels and provide public outreach in climate change issues (Education Component). As part of the new infrastructure, two observational transects will be established across Great Basin Ranges, one in southern Nevada in the Spring Mountains, and the second to be located in the Snake Range of eastern Nevada which will reach bristlecone pine stands. Climatic, hydrologic and ecological data from these transects will be downloaded into high capacity data storage units and made available to researchers through creation of the Nevada climate change portal. Our research will aim to answer two interdisciplinary science questions: 1) How will climate change affect water resources and linked ecosystem resources and human systems? And 2) How will climate change affect disturbance regimes (e.g., wildland fires, invasive species, insect outbreaks, droughts) and linked systems?
Using Monte Carlo Simulation to Prioritize Key Maritime Environmental Impacts of Port Infrastructure
NASA Astrophysics Data System (ADS)
Perez Lespier, L. M.; Long, S.; Shoberg, T.
2016-12-01
This study creates a Monte Carlo simulation model to prioritize key indicators of environmental impacts resulting from maritime port infrastructure. Data inputs are derived from LandSat imagery, government databases, and industry reports to create the simulation. Results are validated using subject matter experts and compared with those returned from time-series regression to determine goodness of fit. The Port of Prince Rupert, Canada is used as the location for the study.
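A minimal Monte Carlo sketch of the prioritization idea: sample uncertain impact indicators from assumed distributions and rank them by mean simulated score with an uncertainty interval. Indicator names and distributions are illustrative, not the Port of Prince Rupert inputs.

```python
# Minimal Monte Carlo prioritization sketch: sample uncertain environmental
# impact indicators and rank them by mean simulated score. Indicator names and
# distributions are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
indicators = {
    "dredging_turbidity": rng.triangular(0.2, 0.5, 0.9, N),
    "vessel_emissions":   rng.normal(0.6, 0.1, N),
    "ballast_species":    rng.beta(2, 5, N),
    "shoreline_loss":     rng.uniform(0.1, 0.7, N),
}

ranked = sorted(indicators.items(), key=lambda kv: kv[1].mean(), reverse=True)
for name, samples in ranked:
    lo, hi = np.percentile(samples, [5, 95])
    print(f"{name:20s} mean={samples.mean():.2f}  90% interval=({lo:.2f}, {hi:.2f})")
```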
Duarte, Afonso M. S.; Psomopoulos, Fotis E.; Blanchet, Christophe; Bonvin, Alexandre M. J. J.; Corpas, Manuel; Franc, Alain; Jimenez, Rafael C.; de Lucas, Jesus M.; Nyrönen, Tommi; Sipos, Gergely; Suhr, Stephanie B.
2015-01-01
With the increasingly rapid growth of data in life sciences we are witnessing a major transition in the way research is conducted, from hypothesis-driven studies to data-driven simulations of whole systems. Such approaches necessitate the use of large-scale computational resources and e-infrastructures, such as the European Grid Infrastructure (EGI). EGI, one of the key enablers of the digital European Research Area, is a federation of resource providers set up to deliver sustainable, integrated and secure computing services to European researchers and their international partners. Here we aim to provide the state of the art of Grid/Cloud computing in EU research as viewed from within the field of life sciences, focusing on key infrastructures and projects within the life sciences community. Rather than focusing purely on the technical aspects underlying the currently provided solutions, we outline the design aspects and key characteristics that can be identified across major research approaches. Overall, we aim to provide significant insights into the road ahead by establishing ever-strengthening connections between EGI as a whole and the life sciences community. PMID:26157454
Duarte, Afonso M S; Psomopoulos, Fotis E; Blanchet, Christophe; Bonvin, Alexandre M J J; Corpas, Manuel; Franc, Alain; Jimenez, Rafael C; de Lucas, Jesus M; Nyrönen, Tommi; Sipos, Gergely; Suhr, Stephanie B
2015-01-01
With the increasingly rapid growth of data in life sciences we are witnessing a major transition in the way research is conducted, from hypothesis-driven studies to data-driven simulations of whole systems. Such approaches necessitate the use of large-scale computational resources and e-infrastructures, such as the European Grid Infrastructure (EGI). EGI, one of the key enablers of the digital European Research Area, is a federation of resource providers set up to deliver sustainable, integrated and secure computing services to European researchers and their international partners. Here we aim to provide the state of the art of Grid/Cloud computing in EU research as viewed from within the field of life sciences, focusing on key infrastructures and projects within the life sciences community. Rather than focusing purely on the technical aspects underlying the currently provided solutions, we outline the design aspects and key characteristics that can be identified across major research approaches. Overall, we aim to provide significant insights into the road ahead by establishing ever-strengthening connections between EGI as a whole and the life sciences community.
Consideration of an Applied Model of Public Health Program Infrastructure
Lavinghouze, Rene; Snyder, Kimberly; Rieker, Patricia; Ottoson, Judith
2015-01-01
Systemic infrastructure is key to public health achievements. Individual public health program infrastructure feeds into this larger system. Although program infrastructure is rarely defined, it needs to be operationalized for effective implementation and evaluation. The Ecological Model of Infrastructure (EMI) is one approach to defining program infrastructure. The EMI consists of 5 core (Leadership, Partnerships, State Plans, Engaged Data, and Managed Resources) and 2 supporting (Strategic Understanding and Tactical Action) elements that are enveloped in a program’s context. We conducted a literature search across public health programs to determine support for the EMI. Four of the core elements were consistently addressed, and the other EMI elements were intermittently addressed. The EMI provides an initial and partial model for understanding program infrastructure, but additional work is needed to identify evidence-based indicators of infrastructure elements that can be used to measure success and link infrastructure to public health outcomes, capacity, and sustainability. PMID:23411417
The role of self-management in designing care for people with osteoarthritis of the hip and knee.
Brand, Caroline A
2008-11-17
Osteoarthritis of the hip and knee is an increasingly common condition that is managed principally with lifestyle behaviour changes. Osteoarthritis management can be complex, as it typically affects older patients with multiple comorbidities. There is evidence that opportunities exist to improve uptake of evidence-based recommendations for care, especially for non-pharmacological interventions. The National Chronic Disease Strategy (NCDS) defines key components of programs designed to meet the needs of people with chronic conditions; one component is patient self-management. NCDS principles have been effectively integrated into chronic disease management programs for other conditions, but there is limited evidence of effectiveness for osteoarthritis programs. A comprehensive osteoarthritis management model that reflects NCDS policy is needed. Barriers to implementing such a model include poor integration of decision support, a lack of national infrastructure, workforce constraints and limited funding.
Multiphysics Application Coupling Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Michael T.
2013-12-02
This particular consortium implementation of the software integration infrastructure will, in large part, refactor portions of the Rocstar multiphysics infrastructure. Development of this infrastructure originated at the University of Illinois DOE ASCI Center for Simulation of Advanced Rockets (CSAR) to support the center's massively parallel multiphysics simulation application, Rocstar, and has continued at IllinoisRocstar, a small company formed near the end of the University-based program. IllinoisRocstar is now licensing these new developments as free, open source software, in the hope of improving their own and others' access to infrastructure that can be readily utilized in developing coupled or composite software systems, with particular attention to more rapid production and utilization of multiphysics applications in the HPC environment. There are two major pieces to the consortium implementation: the Application Component Toolkit (ACT) and the Multiphysics Application Coupling Toolkit (MPACT). The current development focus is the ACT, which is (will be) the substrate for MPACT. The ACT itself is built up from the components described in the technical approach. In particular, the ACT has the following major components: 1. The Component Object Manager (COM): The COM package provides encapsulation of user applications and their data. COM also provides the inter-component function call mechanism. 2. The System Integration Manager (SIM): The SIM package provides constructs and mechanisms for orchestrating composite systems of multiply integrated pieces.
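A hedged sketch of the COM-style encapsulation and inter-component function-call mechanism described above: components register named functions, and an orchestrator (playing the SIM role) invokes them by name. Names and signatures are illustrative, not the IllinoisRocstar ACT API.

```python
# Hedged sketch of a COM-like registry: components register named functions and
# other components invoke them by name. Illustrative only; not the ACT/COM API.
class ComponentManager:
    def __init__(self):
        self._functions = {}

    def register_function(self, window: str, name: str, func):
        self._functions[(window, name)] = func

    def call_function(self, window: str, name: str, *args):
        return self._functions[(window, name)](*args)

com = ComponentManager()
com.register_function("fluid", "advance", lambda dt: f"fluid advanced by {dt} s")
com.register_function("solid", "advance", lambda dt: f"solid advanced by {dt} s")

# An orchestrator (SIM-like role) drives the coupled time step through the registry.
for window in ("fluid", "solid"):
    print(com.call_function(window, "advance", 0.01))
```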
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dykstra, Dave; Garzoglio, Gabriele; Kim, Hyunwoo
As of 2012, a number of US Department of Energy (DOE) National Laboratories have access to a 100 Gb/s wide-area network backbone. The ESnet Advanced Networking Initiative (ANI) project is intended to develop a prototype network, based on emerging 100 Gb/s Ethernet technology. The ANI network will support DOE's science research programs. A 100 Gb/s network test bed is a key component of the ANI project. The test bed offers the opportunity for early evaluation of 100 Gb/s network infrastructure for supporting the high-impact data movement typical of science collaborations and experiments. In order to make effective use of this advanced infrastructure, the applications and middleware currently used by the distributed computing systems of large-scale science need to be adapted and tested within the new environment, with gaps in functionality identified and corrected. As a user of the ANI test bed, Fermilab aims to study the issues related to end-to-end integration and use of 100 Gb/s networks for the event simulation and analysis applications of physics experiments. In this paper, we discuss our findings from evaluating existing HEP physics middleware and application components, including GridFTP, Globus Online, etc., in the high-speed environment. These include possible recommendations to system administrators and to application and middleware developers on changes that would enable production use of the 100 Gb/s networks, including data storage, caching, and wide-area access.
Lean Six Sigma implementation and organizational culture.
Knapp, Susan
2015-01-01
The purpose of this paper is to examine the relationship between four organizational cultural types defined by the Competing Values Framework and three Lean Six Sigma implementation components - management involvement, use of Lean Six Sigma methods and Lean Six Sigma infrastructure. The study involved surveying 446 human resource and quality managers from 223 hospitals located in Maine, New Hampshire, Vermont, Massachusetts and Rhode Island using the Organizational Culture Assessment Instrument. Findings - In total, 104 completed responses were received and analyzed using multivariate analysis of variance. Follow-up analysis of variances showed management support was significant, F(3, 100)=4.89, p < 0.01, η²=1.28; infrastructure was not significant, F(3, 100)=1.55, p=0.21, η²=0.05; and using Lean Six Sigma methods was also not significant, F(3, 100)=1.34, p=0.26, η²=0.04. Post hoc analysis identified group and development cultures having significant interactions with management support. The relationship between organizational culture and Lean Six Sigma in hospitals provides information on how specific cultural characteristics impact the key components of a Lean Six Sigma initiative. This information assists hospital staff who are considering implementing quality initiatives by providing an understanding of which cultural values correspond to effective Lean Six Sigma implementation. Managers who understand the cultural underpinnings of a quality initiative and are attentive to the influence of shared cultural values and norms can utilize strategies to better implement Lean Six Sigma.
Integration of Mobil Satellite and Cellular Systems
NASA Technical Reports Server (NTRS)
Drucker, E. H.; Estabrook, P.; Pinck, D.; Ekroot, L.
1993-01-01
By integrating the ground-based infrastructure component of a mobile satellite system with the infrastructure systems of terrestrial 800 MHz cellular service providers, a seamless network of universal coverage can be established.
49 CFR 15.9 - Restrictions on the disclosure of SSI.
Code of Federal Regulations, 2012 CFR
2012-10-01
... DOT or DHS component or agency. (d) Additional requirements for critical infrastructure information. In the case of information that is both SSI and has been designated as critical infrastructure...
49 CFR 15.9 - Restrictions on the disclosure of SSI.
Code of Federal Regulations, 2013 CFR
2013-10-01
... DOT or DHS component or agency. (d) Additional requirements for critical infrastructure information. In the case of information that is both SSI and has been designated as critical infrastructure...
49 CFR 15.9 - Restrictions on the disclosure of SSI.
Code of Federal Regulations, 2014 CFR
2014-10-01
... DOT or DHS component or agency. (d) Additional requirements for critical infrastructure information. In the case of information that is both SSI and has been designated as critical infrastructure...
DOT National Transportation Integrated Search
2015-12-01
In the coastal zone, seaports and their intermodal connectors are key types of infrastructure that support the global supply chain, provide regional economic activity, local transportation system services, and community jobs. The protection of co...
Developing Your Evaluation Plans: A Critical Component of Public Health Program Infrastructure.
Lavinghouze, S Rene; Snyder, Kimberly
A program's infrastructure is often cited as critical to public health success. The Component Model of Infrastructure (CMI) identifies evaluation as essential under the core component of engaged data. An evaluation plan is a written document that describes how to monitor and evaluate a program, as well as how to use evaluation results for program improvement and decision making. The evaluation plan clarifies how to describe what the program did, how it worked, and why outcomes matter. We use the Centers for Disease Control and Prevention's (CDC) "Framework for Program Evaluation in Public Health" as a guide for developing an evaluation plan. Just as using a roadmap facilitates progress on a long journey, a well-written evaluation plan can clarify the direction your evaluation takes and facilitate achievement of the evaluation's objectives.
Green Infrastructure Opportunities that Arise During Municipal Operations
This document provides approaches that local government officials and municipal program managers in small to midsize communities can use to incorporate green infrastructure components into work they are doing in public spaces.
Policy model for space economy infrastructure
NASA Astrophysics Data System (ADS)
Komerath, Narayanan; Nally, James; Zilin Tang, Elizabeth
2007-12-01
Extraterrestrial infrastructure is key to the development of a space economy. Means for accelerating transition from today's isolated projects to a broad-based economy are considered. A large system integration approach is proposed. The beginnings of an economic simulation model are presented, along with examples of how interactions and coordination bring down costs. A global organization focused on space infrastructure and economic expansion is proposed to plan, coordinate, fund and implement infrastructure construction. This entity also opens a way to raise low-cost capital and solve the legal and public policy issues of access to extraterrestrial resources.
Flood Vulnerability Assessment Map
Maps of energy infrastructure with real-time storm and emergency information by fuel type and by state. Flood hazard information from FEMA has been combined with EIA's energy infrastructure layers as a tool to help state, county, city, and private sector planners assess which key energy infrastructure assets are vulnerable to rising sea levels, storm surges, and flash flooding. Note that flood hazard layers must be zoomed-in to street level before they become visible.
The social impacts of dams: A new framework for scholarly analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirchherr, Julian, E-mail: julian.kirchherr@sant.ox.ac.uk; Charles, Katrina J., E-mail: katrina.charles@ouce.ox.ac.uk
No commonly used framework exists in the scholarly study of the social impacts of dams. This hinders comparisons of analyses and thus the accumulation of knowledge. The aim of this paper is to unify scholarly understanding of dams' social impacts via the analysis and aggregation of the various frameworks currently used in the scholarly literature. For this purpose, we have systematically analyzed and aggregated 27 frameworks employed by academics analyzing dams' social impacts (found in a set of 217 articles). A key finding of the analysis is that currently used frameworks are often not specific to dams and thus omit key impacts associated with them. The result of our analysis and aggregation is a new framework for scholarly analysis (which we call ‘matrix framework’) specifically on dams' social impacts, with space, time and value as its key dimensions as well as infrastructure, community and livelihood as its key components. Building on the scholarly understanding of this topic enables us to conceptualize the inherently complex and multidimensional issues of dams' social impacts in a holistic manner. If commonly employed in academia (and possibly in practice), this framework would enable more transparent assessment and comparison of projects.
Multi-user quantum key distribution with entangled photons from an AlGaAs chip
NASA Astrophysics Data System (ADS)
Autebert, C.; Trapateau, J.; Orieux, A.; Lemaître, A.; Gomez-Carbonell, C.; Diamanti, E.; Zaquine, I.; Ducci, S.
2016-12-01
In view of real-world applications of quantum information technologies, the combination of miniature quantum resources with existing fibre networks is a crucial issue. Among such resources, on-chip entangled photon sources play a central role for applications spanning quantum communications, computing and metrology. Here, we use a semiconductor source of entangled photons operating at room temperature in conjunction with standard telecom components to demonstrate multi-user quantum key distribution, a core protocol for securing communications in quantum networks. The source consists of an AlGaAs chip emitting polarisation-entangled photon pairs over a large bandwidth in the main telecom band around 1550 nm without the use of any off-chip compensation or interferometric scheme; the photon pairs are directly launched into a dense wavelength division multiplexer (DWDM) and secret keys are distributed between several pairs of users communicating through different channels. We achieve a visibility measured after the DWDM of 87% and show long-distance key distribution using a 50-km standard telecom fibre link between two network users. These results illustrate a promising route to practical, resource-efficient implementations adapted to quantum network infrastructures.
Grethe, Jeffrey S; Baru, Chaitan; Gupta, Amarnath; James, Mark; Ludaescher, Bertram; Martone, Maryann E; Papadopoulos, Philip M; Peltier, Steven T; Rajasekar, Arcot; Santini, Simone; Zaslavsky, Ilya N; Ellisman, Mark H
2005-01-01
Through support from the National Institutes of Health's National Center for Research Resources, the Biomedical Informatics Research Network (BIRN) is pioneering the use of advanced cyberinfrastructure for medical research. By synchronizing developments in advanced wide area networking, distributed computing, distributed database federation, and other emerging capabilities of e-science, the BIRN has created a collaborative environment that is paving the way for biomedical research and clinical information management. The BIRN Coordinating Center (BIRN-CC) is orchestrating the development and deployment of key infrastructure components for immediate and long-range support of biomedical and clinical research being pursued by domain scientists in three neuroimaging test beds.
Acoustic emission safety monitoring of intermodal transportation infrastructure.
DOT National Transportation Integrated Search
2015-09-01
Safety and integrity of the national transportation infrastructure are of paramount importance and highway bridges are critical components of the highway system network. This network provides an immense contribution to the industry productivity and e...
Ultrasonic imaging for concrete infrastructure condition assessment and quality assurance.
DOT National Transportation Integrated Search
2017-04-01
This report describes work on laboratory and field performance reviews of an ultrasonic shear wave imaging device called MIRA for application to plain and reinforced concrete infrastructure components. Potential applications investigated included b...
The Chandra Source Catalog: Processing and Infrastructure
NASA Astrophysics Data System (ADS)
Evans, Janet; Evans, Ian N.; Glotfelty, Kenny J.; Hain, Roger; Hall, Diane M.; Miller, Joseph B.; Plummer, David A.; Zografou, Panagoula; Primini, Francis A.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.
2009-09-01
Chandra Source Catalog processing recalibrates each observation using the latest available calibration data, and employs a wavelet-based source detection algorithm to identify all the X-ray sources in the field of view. Source properties are then extracted from each detected source that is a candidate for inclusion in the catalog. Catalog processing is completed by matching sources across multiple observations, merging common detections, and applying quality assurance checks. The Chandra Source Catalog processing system shares a common processing infrastructure and utilizes much of the functionality that is built into the Standard Data Processing (SDP) pipeline system that provides calibrated Chandra data to end-users. Other key components of the catalog processing system have been assembled from the portable CIAO data analysis package. Minimal new software tool development has been required to support the science algorithms needed for catalog production. Since processing pipelines must be instantiated for each detected source, the number of pipelines that are run during catalog construction is a factor of order 100 times larger than for SDP. The increased computational load, and inherent parallel nature of the processing, is handled by distributing the workload across a multi-node Beowulf cluster. Modifications to the SDP automated processing application to support catalog processing, and extensions to Chandra Data Archive software to ingest and retrieve catalog products, complete the upgrades to the infrastructure to support catalog processing.
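The fan-out of per-source pipelines can be sketched with a standard worker pool, in the spirit of the cluster distribution described above; the pipeline body and source list are placeholders, not the Chandra catalog software.

```python
# Hedged sketch of fanning out per-source pipelines across workers, echoing the
# distributed catalog processing described above; the pipeline body and source
# list are placeholders, not the Chandra Source Catalog code.
from concurrent.futures import ProcessPoolExecutor, as_completed

def source_pipeline(source_id: str) -> dict:
    # Placeholder for per-source extraction: photometry, spectra, QA checks, ...
    return {"source": source_id, "status": "ok"}

if __name__ == "__main__":
    detected = [f"obsid1234_src{i:04d}" for i in range(500)]
    with ProcessPoolExecutor(max_workers=8) as pool:      # one node of a cluster
        futures = {pool.submit(source_pipeline, s): s for s in detected}
        results = [f.result() for f in as_completed(futures)]
    print(len(results), "source pipelines completed")
```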
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duren, Mike; Aldridge, Hal; Abercrombie, Robert K
2013-01-01
Compromises attributable to the Advanced Persistent Threat (APT) highlight the necessity for constant vigilance. The APT provides a new perspective on security metrics (e.g., statistics-based cyber security) and quantitative risk assessments. We consider design principles and models/tools that provide high assurance for energy delivery systems (EDS) operations regardless of the state of compromise. Cryptographic keys must be securely exchanged, then held and protected on either end of a communications link. This is challenging for a utility with numerous substations that must secure the intelligent electronic devices (IEDs) that may comprise a complex control system of systems. For example, distribution and management of keys among the millions of intelligent meters within the Advanced Metering Infrastructure (AMI) is being implemented as part of the National Smart Grid initiative. Without a means for a secure cryptographic key management system (CKMS), no cryptographic solution can be widely deployed to protect the EDS infrastructure from cyber-attack. We consider 1) how security modeling is applied to key management and cyber security concerns on a continuous basis from design through operation, 2) how trusted models and key management architectures greatly impact failure scenarios, and 3) how hardware-enabled trust is a critical element to detecting, surviving, and recovering from attack.
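The scale problem raised here, provisioning unique keys to millions of meters, is often handled by deriving per-device keys from a protected master secret rather than distributing every key individually. The sketch below shows that idea with HKDF from the Python cryptography package; it is an illustrative assumption for exposition, not the CKMS design discussed in the report.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_meter_key(master_secret: bytes, meter_id: str) -> bytes:
    """Derive a unique 256-bit key for one meter from a protected master secret.

    Binding the derivation to the meter identifier means a compromised meter
    key does not expose the keys of other devices.
    """
    hkdf = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=meter_id.encode("utf-8"),  # context info binds the key to this device
    )
    return hkdf.derive(master_secret)

if __name__ == "__main__":
    master = os.urandom(32)  # in practice this would live in an HSM or other hardware root of trust
    k1 = derive_meter_key(master, "meter-000001")
    k2 = derive_meter_key(master, "meter-000002")
    assert k1 != k2
    print(k1.hex())
```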
ERIC Educational Resources Information Center
Estache, Antonio; Foster, Vivien; Wodon, Quentin
This book explores the connections between infrastructure reform and poverty alleviation in Latin America based on a detailed analysis of the effects of a decade of reforms. The book demonstrates that because the access to, and affordability of, basic services is still a major problem, infrastructure investment will be a core component of poverty…
The TENCompetence Infrastructure: A Learning Network Implementation
NASA Astrophysics Data System (ADS)
Vogten, Hubert; Martens, Harrie; Lemmers, Ruud
The TENCompetence project developed a first release of a Learning Network infrastructure to support individuals, groups and organisations in professional competence development. This Learning Network infrastructure was released as open source to the community, thereby allowing users and organisations to use and contribute to the development as they see fit. The infrastructure consists of client applications providing the user experience and server components that provide the services to these clients. These services implement the domain model (Koper 2006) by provisioning the entities of the domain model (see also Sect. 18.4) and henceforth will be referenced as domain entity services.
The future of infrastructure security :
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, Pablo; Turnley, Jessica Glicken; Parrott, Lori K.
2013-05-01
Sandia National Laboratories hosted a workshop on the future of infrastructure security on February 27-28, 2013, in Albuquerque, NM. The 17 participants came from backgrounds as diverse as federal policy, the insurance industry, infrastructure management, and technology development. The purpose of the workshop was to surface key issues, identify directions forward, and lay groundwork for cross-sectoral and cross-disciplinary collaborations. The workshop addressed issues such as the problem space (what is included in infrastructure problems?), the general types of threats to infrastructure (such as acute or chronic, system-inherent or exogenously imposed) and definitions of secure and resilient infrastructures. The workshop concluded with a consideration of stakeholders and players in the infrastructure world, and identification of specific activities that could be undertaken by the Department of Homeland Security (DHS) and other players.
NASA Astrophysics Data System (ADS)
Santoro, M.; Dubois, G.; Schulz, M.; Skøien, J. O.; Nativi, S.; Peedell, S.; Boldrini, E.
2012-04-01
The number of interoperable research infrastructures has increased significantly with the growing awareness of the efforts made by the Global Earth Observation System of Systems (GEOSS). One of the Social Benefit Areas (SBA) that is benefiting most from GEOSS is biodiversity, given the costs of monitoring the environment and managing complex information, from space observations to species records including their genetic characteristics. But GEOSS goes beyond the simple sharing of the data as it encourages the connectivity of models (the GEOSS Model Web), an approach easing the handling of often complex multi-disciplinary questions such as understanding the impact of environmental and climatological factors on ecosystems and habitats. In the context of GEOSS Architecture Implementation Pilot - Phase 3 (AIP-3), the EC-funded EuroGEOSS and GENESIS projects have developed and successfully demonstrated the "eHabitat" use scenario dealing with the Climate Change and Biodiversity domains. Based on the EuroGEOSS multidisciplinary brokering infrastructure and on the DOPA (Digital Observatory for Protected Areas, see http://dopa.jrc.ec.europa.eu/), this scenario demonstrated how a GEOSS-based interoperability infrastructure can aid decision makers to assess and possibly forecast the irreplaceability of a given protected area, an essential indicator for assessing the criticality of the threats this protected area is exposed to. The "eHabitat" use scenario was advanced in the GEOSS Sprint to Plenary activity; the advanced scenario will include the "EuroGEOSS Data Access Broker" and a new version of the eHabitat model in order to support the use of uncertain data. The multidisciplinary interoperability infrastructure which is used to demonstrate the "eHabitat" use scenario is composed of the following main components: a) a Discovery Broker: this component is able to discover resources from a plethora of different and heterogeneous geospatial services, presenting them on a single and standard discovery service; b) a Discovery Augmentation Component (DAC): this component builds on existing discovery and semantic services in order to provide the infrastructure with semantics-enabled queries; c) a Data Access Broker: this component provides seamless access to heterogeneous remote resources via a unique and standard service; d) Environmental Modeling Components (i.e. OGC WPS): these implement algorithms to predict the evolution of protected areas. This presentation introduces the advanced infrastructure developed to enhance the "eHabitat" use scenario. The presented infrastructure will be accessible through the GEO Portal and was used for demonstrating the "eHabitat" model at the last GEO Plenary Meeting - Istanbul, November 2011.
Strengthening the capacity for health promotion in South Africa through international collaboration.
Van den Broucke, Stephan; Jooste, Heila; Tlali, Maki; Moodley, Vimla; Van Zyl, Greer; Nyamwaya, David; Tang, Kwok-Cho
2010-06-01
This paper describes a project to strengthen the capacity for health promotion in two Provinces in South Africa. The project draws on the key health promotion capacity dimensions of partnership and networking, infrastructure, problem-solving capacity, and knowledge transfer. The project was carried out in a partnership between the Provinces, the Ministry of Health of South Africa, the government of Flanders, Belgium, and the World Health Organization (WHO). The project aimed to: (i) integrate health promotion into national, Provincial and district level health policy plans; (ii) strengthen the health promotion capacity in the two Provinces; and (iii) support the development of tools to monitor and evaluate health promotion interventions. Starting from a situation analysis and identification of priority health issues and existing actions in each Province, capacity-building workshops were organized for senior participants from various sectors. Community-based health promotion interventions were then planned and implemented in both Provinces. A systematic evaluation of the project, involving an internal audit of project activities and results based on document analysis, site visits, focus groups and interviews with key persons, demonstrated that stakeholders in both Provinces saw an increase of capacity in terms of networking, knowledge transfer, problem solving, and to a lesser extent infrastructure. Health promotion had been well integrated in the Provincial health plans, and roll-out processes with local stakeholders had started after the conclusion of the project. The development of tools for monitoring and evaluation of health promotion was less well achieved. The project illustrates how capacities to deliver health promotion interventions in a developing country can be enhanced through international collaboration. The conceptual model of capacity building that served as a basis for the project provided a useful framework to plan, identify and assess the key components of health promotion capacity in an African context.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smart, John Galloway; Salisbury, Shawn Douglas
2015-07-01
This report summarizes key findings in two national plug-in electric vehicle charging infrastructure demonstrations: The EV Project and ChargePoint America. It will be published to the INL/AVTA website for the general public.
Reddy, Madhu C.; Purao, Sandeep; Kelly, Mary
2008-01-01
This article presents a study identifying benefits and challenges of a novel hospital-to-hospital information technology (IT) outsourcing partnership (HHP). The partnership is an innovative response to the problem that many smaller, rural hospitals face: to modernize their IT infrastructure in spite of a severe shortage of resources. The investigators studied three rural hospitals that outsourced their IT infrastructure, through an HHP, to a larger, more technologically advanced hospital in the region. The study design was based on purposive sampling and interviews of senior managers from the four hospitals. The results highlight the HHP's benefits and challenges from both the rural hospitals' and vendor hospital's perspectives. The HHP was considered a success: a key outcome was that it has improved the rural hospitals' IT infrastructure at an affordable cost. The investigators discuss key elements for creating a successful HHP and offer preliminary answers to the question of what it takes for an HHP to be successful. PMID:18436901
Cafe: A Generic Configurable Customizable Composite Cloud Application Framework
NASA Astrophysics Data System (ADS)
Mietzner, Ralph; Unger, Tobias; Leymann, Frank
In this paper we present Cafe (Composite Application Framework) an approach to describe configurable composite service-oriented applications and to automatically provision them across different providers. Cafe enables independent software vendors to describe their composite service-oriented applications and the components that are used to assemble them. Components can be internal to the application or external and can be deployed in any of the delivery models present in the cloud. The components are annotated with requirements for the infrastructure they later need to be run on. Providers on the other hand advertise their infrastructure services by describing them as infrastructure capabilities. The separation of software vendors and providers enables end users and providers to follow a best-of-breed strategy by combining arbitrary applications with arbitrary providers. We show how such applications can be automatically provisioned and present an architecture and a prototype that implements the concepts.
Kania-Richmond, Ania; Menard, Martha B; Barberree, Beth; Mohring, Marvin
2017-04-01
Conducting research on massage therapy (MT) continues to be a significant challenge. This study explored and identified the structures, processes, and resources required to enable viable, sustainable and high-quality MT research activities in the Canadian context. Participants were academically based researchers and MT professionals involved in research. A formative evaluation and a descriptive qualitative approach were applied. Five main themes regarding the requirements of a productive and sustainable MT research infrastructure in Canada were identified: 1) core components, 2) variable components, 3) varying perspectives of stakeholder groups, 4) barriers to creating research infrastructure, and 5) negative metaphors. In addition, participants offered a number of recommendations on how to develop such an infrastructure. While barriers exist that require attention, participants' insights suggest there are various pathways through which a productive and sustainable MT research infrastructure can be achieved.
A General Purpose High Performance Linux Installation Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wachsmann, Alf
2002-06-17
As Linux clusters grow in number and size, the question arises of how to install them. This paper addresses this question by proposing a solution using only standard software components. This installation infrastructure scales well for a large number of nodes. It is also usable for installing desktop machines or diskless Linux clients; thus, it is not designed for cluster installations in particular but is, nevertheless, highly performant. The proposed infrastructure uses PXE as the network boot component on the nodes. It uses DHCP and TFTP servers to get IP addresses and a bootloader to all nodes. It then uses kickstart to install Red Hat Linux over NFS. We have implemented this installation infrastructure at SLAC with our given server hardware and installed a 256-node cluster in 30 minutes. This paper presents the measurements from this installation and discusses the bottlenecks in our installation.
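The boot flow described in this abstract (PXE, then DHCP/TFTP delivering a bootloader, then kickstart over NFS) is typically driven by small generated configuration files. Below is a hedged sketch that emits a PXELINUX-style boot entry and a matching ISC dhcpd host stanza per node; the addresses, paths and kickstart locations are hypothetical placeholders, not the SLAC configuration.

```python
from pathlib import Path

PXE_TEMPLATE = """default install
label install
  kernel vmlinuz
  append initrd=initrd.img ks=nfs:{ks_server}:/kickstart/{hostname}.cfg
"""

DHCP_TEMPLATE = """host {hostname} {{
  hardware ethernet {mac};
  fixed-address {ip};
  next-server {tftp_server};
  filename "pxelinux.0";
}}
"""

def write_node_configs(nodes, tftp_root="tftpboot/pxelinux.cfg",
                       tftp_server="192.0.2.10", ks_server="192.0.2.11"):
    # One PXELINUX config per node (named after the node here for readability)
    # plus a dhcpd.conf fragment mapping MAC addresses to IPs and the bootloader.
    out = Path(tftp_root)
    out.mkdir(parents=True, exist_ok=True)
    dhcp_fragments = []
    for node in nodes:
        cfg = PXE_TEMPLATE.format(ks_server=ks_server, hostname=node["hostname"])
        (out / node["hostname"]).write_text(cfg)
        dhcp_fragments.append(DHCP_TEMPLATE.format(tftp_server=tftp_server, **node))
    Path("dhcpd-hosts.conf").write_text("\n".join(dhcp_fragments))

if __name__ == "__main__":
    write_node_configs([
        {"hostname": "node001", "mac": "00:16:3e:00:00:01", "ip": "192.0.2.101"},
        {"hostname": "node002", "mac": "00:16:3e:00:00:02", "ip": "192.0.2.102"},
    ])
```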
The ATLAS Simulation Infrastructure
Aad, G.; Abbott, B.; Abdallah, J.; ...
2010-09-25
The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including that supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools allowing the software validation, performance testing, and the validation of the simulated output against known physics processes.
Minimizing Overhead for Secure Computation and Fully Homomorphic Encryption: Overhead
2015-11-01
many inputs. We also improved our compiler infrastructure to handle very large circuits in a more scalable way. In Jan '13, we employed the AESNI and...Amazon's elastic compute infrastructure, and is running under a Xen hypervisor. Since we do not have direct access to the bare metal, we cannot...creating novel opportunities for compressing authentication overhead. It is especially compelling that existing public key infrastructures can be used
Virtual-optical information security system based on public key infrastructure
NASA Astrophysics Data System (ADS)
Peng, Xiang; Zhang, Peng; Cai, Lilong; Niu, Hanben
2005-01-01
A virtual-optical based encryption model with the aid of public key infrastructure (PKI) is presented in this paper. The proposed model employs a hybrid architecture in which our previously published encryption method based on a virtual-optics scheme (VOS) can be used to encipher and decipher data, while an asymmetric algorithm, for example RSA, is applied for enciphering and deciphering the session key(s). The whole information security model runs under the framework of the international standard ITU-T X.509 PKI, which is based on public-key cryptography and digital signatures. This PKI-based VOS security approach has additional features such as confidentiality, authentication, and integrity for the purpose of data encryption in a networked environment. Numerical experiments prove the effectiveness of the method. The security of the proposed model is briefly analyzed by examining some possible attacks from the viewpoint of cryptanalysis.
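The hybrid pattern described here, a fast scheme for the bulk data and an asymmetric algorithm only for the session key, can be sketched with standard primitives. In the sketch below the virtual-optics cipher is stood in for by an ordinary symmetric cipher (Fernet) purely for illustration, while RSA-OAEP wraps the session key as in a conventional PKI deployment; this is not the paper's VOS implementation.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient's key pair; in a PKI setting the public key would come from an X.509 certificate.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def encrypt(message: bytes, recipient_public_key):
    # Bulk data is enciphered with a fresh symmetric session key (stand-in for VOS),
    # and only that session key is enciphered with the asymmetric algorithm.
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(message)
    wrapped_key = recipient_public_key.encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return wrapped_key, ciphertext

def decrypt(wrapped_key: bytes, ciphertext: bytes, recipient_private_key):
    session_key = recipient_private_key.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return Fernet(session_key).decrypt(ciphertext)

wrapped, ct = encrypt(b"image payload", public_key)
assert decrypt(wrapped, ct, private_key) == b"image payload"
```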
New Geodetic Infrastructure for Australia: The NCRIS / AuScope Geospatial Component
NASA Astrophysics Data System (ADS)
Tregoning, P.; Watson, C. S.; Coleman, R.; Johnston, G.; Lovell, J.; Dickey, J.; Featherstone, W. E.; Rizos, C.; Higgins, M.; Priebbenow, R.
2009-12-01
In November 2006, the Australian Federal Government announced AU$15.8M in funding for geospatial research infrastructure through the National Collaborative Research Infrastructure Strategy (NCRIS). Funded within a broader capability area titled 'Structure and Evolution of the Australian Continent', NCRIS has provided a significant investment across Earth imaging, geochemistry, numerical simulation and modelling, the development of a virtual core library, and geospatial infrastructure. Known collectively as AuScope (www.auscope.org.au), this capability area has brought together Australia's leading Earth scientists to decide upon the most pressing scientific issues and infrastructure needs for studying Earth systems and their impact on the Australian continent. Importantly and at the same time, the investment in geospatial infrastructure offers the opportunity to raise Australian geodetic science capability to the highest international level into the future. The geospatial component of AuScope builds on the AU$15.8M of direct funding through the NCRIS process with significant in-kind and co-investment from universities and State/Territory and Federal government departments. The infrastructure to be acquired includes an FG5 absolute gravimeter, three gPhone relative gravimeters, three 12.1 m radio telescopes for geodetic VLBI, a continent-wide network of continuously operating geodetic quality GNSS receivers, a trial of a mobile SLR system and access to updated cluster computing facilities. We present an overview of the AuScope geospatial capability, review the current status of the infrastructure procurement and discuss some examples of the scientific research that will utilise the new geospatial infrastructure.
Application of GIS in exploring spatial dimensions of Efficiency in Competitiveness of Regions
NASA Astrophysics Data System (ADS)
Rahmat, Shahid; Sen, Joy
2017-04-01
Infrastructure is an important component in building the competitiveness of a region. The present global economic slowdown, driven by a slump in demand for goods and services, has reduced the capacity of government institutions to invest in public infrastructure. A strategy for augmenting a region's competitiveness can therefore be built around distributing public infrastructure more efficiently: greater efficiency reduces the burden on government institutions and improves the region's relative output for relatively less investment. A rigorous literature study followed by an expert opinion survey (RIDIT scores) reveals that railway, road, ICT and electricity infrastructure is crucial for the competitiveness of a region. Discussions with experts in the ICT, railway and electricity sectors were conducted to identify the issues, hurdles and possible solutions for the development of these sectors. In an underdeveloped country like India, there is a large constraint on financial resources for investment in the infrastructure sector, so judicious planning of resource allocation for infrastructure provision is essential for efficient and sustainable development. Data Envelopment Analysis (DEA) is a mathematical programming optimization tool that measures technical efficiency in the multiple-input and/or multiple-output case by constructing a relative technical efficiency score. This paper uses DEA to identify how efficiently the present level of selected infrastructure components (railway, road, ICT and electricity) is utilized in building the competitiveness of the region, and identifies a spatial pattern of infrastructure efficiency with the help of spatial autocorrelation and hot-spot analysis in ArcGIS. The analysis leads to policy implications for the efficient allocation of financial resources for infrastructure provision in the region, a prerequisite for boosting regional competitiveness.
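For readers unfamiliar with DEA, each relative efficiency score comes from solving one small linear program per decision-making unit (here, a region). A minimal input-oriented CCR envelopment sketch with scipy follows, using invented input/output numbers purely for illustration; it is not the paper's model specification.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input_oriented(X, Y):
    """Return the CCR technical efficiency score (0-1] for each DMU.

    X: (n_dmus, n_inputs)  e.g. road km, rail km per region
    Y: (n_dmus, n_outputs) e.g. regional output indicators
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1..lambda_n]; minimize theta.
        c = np.r_[1.0, np.zeros(n)]
        # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
        A_inputs = np.hstack([-X[o].reshape(m, 1), X.T])
        # Outputs: -sum_j lambda_j * y_rj <= -y_ro  (outputs at least matched)
        A_outputs = np.hstack([np.zeros((s, 1)), -Y.T])
        A_ub = np.vstack([A_inputs, A_outputs])
        b_ub = np.r_[np.zeros(m), -Y[o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        scores.append(res.x[0])
    return scores

# Hypothetical data: 4 regions, 2 infrastructure inputs, 1 output indicator.
X = np.array([[20.0, 300.0], [30.0, 200.0], [40.0, 500.0], [25.0, 250.0]])
Y = np.array([[100.0], [90.0], [120.0], [110.0]])
print(dea_ccr_input_oriented(X, Y))
```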
Comparison of WinSLAMM Modeled Results with Monitored Biofiltration Data
The US EPA's Green Infrastructure Demonstration project in Kansas City incorporates both small-scale individual biofiltration device monitoring and large-scale watershed monitoring. The test watershed (100 acres) is saturated with green infrastructure components (includin...
Initial implementation of a comparative data analysis ontology.
Prosdocimi, Francisco; Chisham, Brandon; Pontelli, Enrico; Thompson, Julie D; Stoltzfus, Arlin
2009-07-03
Comparative analysis is used throughout biology. When entities under comparison (e.g. proteins, genomes, species) are related by descent, evolutionary theory provides a framework that, in principle, allows N-ary comparisons of entities, while controlling for non-independence due to relatedness. Powerful software tools exist for specialized applications of this approach, yet it remains under-utilized in the absence of a unifying informatics infrastructure. A key step in developing such an infrastructure is the definition of a formal ontology. The analysis of use cases and existing formalisms suggests that a significant component of evolutionary analysis involves a core problem of inferring a character history, relying on key concepts: "Operational Taxonomic Units" (OTUs), representing the entities to be compared; "character-state data" representing the observations compared among OTUs; "phylogenetic tree", representing the historical path of evolution among the entities; and "transitions", the inferred evolutionary changes in states of characters that account for observations. Using the Web Ontology Language (OWL), we have defined these and other fundamental concepts in a Comparative Data Analysis Ontology (CDAO). CDAO has been evaluated for its ability to represent token data sets and to support simple forms of reasoning. With further development, CDAO will provide a basis for tools (for semantic transformation, data retrieval, validation, integration, etc.) that make it easier for software developers and biomedical researchers to apply evolutionary methods of inference to diverse types of data, so as to integrate this powerful framework for reasoning into their research.
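As a toy illustration of the kind of statements such an ontology supports, the sketch below builds a tiny RDF graph with rdflib relating two OTUs, a character-state observation, and a tree node. The class and property IRIs are invented placeholders, not the actual CDAO terms.

```python
from rdflib import Graph, Namespace, RDF, Literal

# Hypothetical namespaces standing in for the real CDAO IRIs and a data set.
CDAO = Namespace("http://example.org/cdao#")
EX = Namespace("http://example.org/data#")

g = Graph()
g.bind("cdao", CDAO)

# Two operational taxonomic units (the entities being compared).
for otu in ("OTU_human", "OTU_mouse"):
    g.add((EX[otu], RDF.type, CDAO.OTU))

# A character-state observation attached to one OTU.
g.add((EX.obs1, RDF.type, CDAO.CharacterStateDatum))
g.add((EX.obs1, CDAO.belongsToOTU, EX.OTU_human))
g.add((EX.obs1, CDAO.hasState, Literal("A")))

# A tree node linking the OTUs to an inferred common ancestor.
g.add((EX.node1, RDF.type, CDAO.TreeNode))
g.add((EX.node1, CDAO.hasChild, EX.OTU_human))
g.add((EX.node1, CDAO.hasChild, EX.OTU_mouse))

print(g.serialize(format="turtle"))
```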
Elemental Concentrations in Urban Green Stormwater Infrastructure Soils
Michelle C. Kondo; Raghav Sharma; Alain F. Plante; Yunwen Yang; Igor Burstyn
2016-01-01
Green stormwater infrastructure (GSI) is designed to capture stormwater for infiltration, detention, evapotranspiration, or reuse. Soils play a key role in stormwater interception at these facilities. It is important to assess whether contamination is occurring in GSI soils because urban stormwater drainage areas often accumulate elements of concern. Soil contamination...
Current Practice and Infrastructures for Campus Centers of Community Engagement
ERIC Educational Resources Information Center
Welch, Marshall; Saltmarsh, John
2013-01-01
This article provides an overview of current practice and essential infrastructure of campus community engagement centers in their efforts to establish and advance community engagement as part of the college experience. The authors identified key characteristics and the prevalence of activities of community engagement centers at engaged campuses…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-20
...), National Protection and Programs Directorate (NPPD), Office of Infrastructure Protection (IP.../IP/IICD, 245 Murray Lane SW., Mailstop 0602, Arlington, VA 20598-0602. Email requests should go to... Technical Assistance Program (CAPTAP) is offered jointly by the NPPD/IP and the Federal Emergency Management...
Network Computing Infrastructure to Share Tools and Data in Global Nuclear Energy Partnership
NASA Astrophysics Data System (ADS)
Kim, Guehee; Suzuki, Yoshio; Teshima, Naoya
CCSE/JAEA (Center for Computational Science and e-Systems/Japan Atomic Energy Agency) integrated a prototype network computing infrastructure for sharing tools and data to support the U.S.-Japan collaboration in GNEP (Global Nuclear Energy Partnership). We focused on three technical issues in applying our information process infrastructure: accessibility, security, and usability. In designing the prototype system, we integrated and improved both network and Web technologies. For accessibility, we adopted SSL-VPN (Secure Socket Layer-Virtual Private Network) technology for access beyond firewalls. For security, we developed an authentication gateway based on the PKI (Public Key Infrastructure) authentication mechanism, set a fine-grained access control policy for shared tools and data, and used a shared-key-based encryption method to protect tools and data against leakage to third parties. For usability, we chose Web browsers as the user interface and developed a Web application to provide functions that support sharing tools and data. Using WebDAV (Web-based Distributed Authoring and Versioning), users can manipulate shared tools and data through a Windows-like folder environment. We implemented the prototype system in the Grid infrastructure for atomic energy research, AEGIS (Atomic Energy Grid Infrastructure), developed by CCSE/JAEA. The prototype system was applied for trial use in the first period of GNEP.
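Because WebDAV is layered on plain HTTP methods, the folder-style sharing described here can be exercised from a script as well as from a browser. A hedged sketch with the requests library follows; the gateway URL, credentials and paths are hypothetical placeholders, and the real system would sit behind the SSL-VPN and PKI gateway described above.

```python
import requests

BASE = "https://gateway.example.org/dav/gnep"   # hypothetical WebDAV endpoint behind the gateway
AUTH = ("alice", "secret")                      # placeholder credentials

# Create a shared folder for a tool, upload a data file into it, then list the collection.
requests.request("MKCOL", f"{BASE}/neutronics-tool/", auth=AUTH, timeout=30)

with open("results.dat", "rb") as fh:
    requests.put(f"{BASE}/neutronics-tool/results.dat", data=fh, auth=AUTH, timeout=30)

listing = requests.request(
    "PROPFIND", f"{BASE}/neutronics-tool/",
    headers={"Depth": "1"}, auth=AUTH, timeout=30,
)
print(listing.status_code, len(listing.text))
```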
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-05
...This Request for Information (RFI) notice informs the public that the Department of Homeland Security's (DHS) Science and Technology Directorate (S&T) is currently developing a National Critical Infrastructure Security and Resilience Research and Development Plan (NCISR R&D Plan) to conform to the requirements of Presidential Policy Directive 21, Critical Infrastructure Security and Resilience. As part of a comprehensive national review process, DHS solicits public comment on issues or language in the NCISR R&D Plan that need to be included. Critical infrastructure includes both cyber and physical components, systems, and networks for the sixteen established ``critical infrastructures''.
Geospatial Data as a Service: Towards planetary scale real-time analytics
NASA Astrophysics Data System (ADS)
Evans, B. J. K.; Larraondo, P. R.; Antony, J.; Richards, C. J.
2017-12-01
The rapid growth of earth systems, environmental and geophysical datasets poses a challenge to both end-users and infrastructure providers. For infrastructure and data providers, tasks like managing, indexing and storing large collections of geospatial data need to take into consideration the various use cases by which consumers will want to access and use the data. Considerable investment has been made by the Earth Science community to produce suitable real-time analytics platforms for geospatial data. There are currently different interfaces that have been defined to provide data services. Unfortunately, there are considerable differences among the standards, protocols and data models, which have been designed to target specific communities or working groups. The Australian National University's National Computational Infrastructure (NCI) is used for a wide range of activities in the geospatial community. Earth observations, climate and weather forecasting are examples of these communities which generate large amounts of geospatial data. The NCI has made a significant effort to develop a data and services model that enables the cross-disciplinary use of data. Recent developments in cloud and distributed computing provide a publicly accessible platform where new infrastructures can be built. One of the key capabilities these technologies offer is the possibility of having "limitless" compute power next to where the data is stored. This model is rapidly transforming data delivery from centralised monolithic services towards ubiquitous distributed services that scale up and down, adapting to fluctuations in demand. NCI has developed GSKY, a scalable, distributed server which presents a new approach for geospatial data discovery and delivery based on OGC standards. We will present the architecture and motivating use-cases that drove GSKY's collaborative design, development and production deployment. We show that our approach offers the community valuable exploratory analysis capabilities for dealing with petabyte-scale geospatial data collections.
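Because the delivery is based on OGC standards, a client needs no knowledge of the underlying storage to pull a rendered map. A small OWSLib sketch follows against a hypothetical WMS endpoint; the URL, layer name, bounding box and time value are placeholders, not GSKY's actual service configuration.

```python
from owslib.wms import WebMapService

# Hypothetical OGC WMS endpoint; any compliant server is queried the same way.
wms = WebMapService("https://gsky.example.org/ows", version="1.3.0")

img = wms.getmap(
    layers=["landsat8_nbar_16day"],           # placeholder layer name
    srs="EPSG:4326",
    bbox=(110.0, -45.0, 155.0, -10.0),        # roughly the Australian continent
    size=(1024, 768),
    format="image/png",
    time="2017-01-01",
)

with open("gsky_demo.png", "wb") as out:
    out.write(img.read())
```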
17 CFR 39.18 - System safeguards.
Code of Federal Regulations, 2012 CFR
2012-04-01
... physical infrastructure or personnel necessary for it to conduct activities necessary to the clearing and... transportation, telecommunications, power, water, or other critical infrastructure components in a relevant area... Division of Clearing and Risk promptly of: (1) Any hardware or software malfunction, cyber security...
17 CFR 39.18 - System safeguards.
Code of Federal Regulations, 2014 CFR
2014-04-01
... physical infrastructure or personnel necessary for it to conduct activities necessary to the clearing and... transportation, telecommunications, power, water, or other critical infrastructure components in a relevant area... Division of Clearing and Risk promptly of: (1) Any hardware or software malfunction, cyber security...
17 CFR 39.18 - System safeguards.
Code of Federal Regulations, 2013 CFR
2013-04-01
... physical infrastructure or personnel necessary for it to conduct activities necessary to the clearing and... transportation, telecommunications, power, water, or other critical infrastructure components in a relevant area... Division of Clearing and Risk promptly of: (1) Any hardware or software malfunction, cyber security...
Green infrastructure approaches leverage vegetation and soil to improve environmental quality. Municipal street trees are crucial components of urban green infrastructure because they provide stormwater interception benefits and other ecosystem services. Thus, it is important to ...
DOT National Transportation Integrated Search
2013-10-01
Creating transportation infrastructure that can clean itself and the contaminated air surrounding it can be a groundbreaking approach to addressing the environmental challenges of our time. This project has explored the possibility of depositing coat...
Advancing antimicrobial stewardship: Summary of the 2015 CIDSC Report.
Khan, F; Arthur, J; Maidment, L; Blue, D
2016-11-03
Antimicrobial resistance (AMR) is recognized as an important global public health concern that has a cross-cutting impact on human health, animal health, food and agriculture and the environment. The Communicable and Infectious Disease Steering Committee (CIDSC) of the Pan-Canadian Public Health Network (PHN) created a Task Group on Antimicrobial Stewardship to look at this issue from a Canadian perspective. This article summarizes the key findings of the Task Group Report, which identified core components of antimicrobial stewardship programs, best practices, key challenges, gaps and recommendations to advance stewardship across jurisdictions. Search strategies were developed to identify scientific literature, grey literature and relevant websites on antimicrobial stewardship. The information was reviewed and, based on this evidence, expert opinion and consensus-building, the Task Group identified core components, best practices, key challenges and gaps and developed recommendations to advance stewardship in Canada. The four components of a promising antimicrobial stewardship initiative were: leadership, interventions, monitoring/evaluation and future research. Best practices include a multi-sectoral/multipronged approach involving a wide range of stakeholders at the national, provincial/territorial, local and health care organizational levels. Key challenges and gaps identified were: the success and sustainability of stewardship undertakings require appropriate and sustained resourcing and expertise; there is limited evidence about how to effectively implement treatment guidance; and there is a challenge in ensuring accessibility, standardization and consistency of use among professionals. Recommendations to the CIDSC about how to advance stewardship across jurisdictions included the following: institute a national infrastructure; develop best practices to implement stewardship programs; develop education and promote awareness; establish consistent evidence-based guidance, resources, tools and training; mandate the incorporation of stewardship education; develop audit and feedback tools; establish benchmarks and performance targets for stewardship; and conduct timely evaluation of stewardship programs. Findings of this report will inform a more systematic approach to addressing antimicrobial stewardship Canada-wide.
FOSS Tools for Research Data Management
NASA Astrophysics Data System (ADS)
Stender, Vivien; Jankowski, Cedric; Hammitzsch, Martin; Wächter, Joachim
2017-04-01
Established initiatives and organizations, e.g. the Initiative for Scientific Cyberinfrastructures (NSF, 2007) or the European Strategy Forum on Research Infrastructures (ESFRI, 2008), promote and foster the development of sustainable research infrastructures. These infrastructures aim to provide services supporting scientists to search, visualize and access data, to collaborate and exchange information, as well as to publish data and other results. In this regard, Research Data Management (RDM) gains importance and thus requires support by appropriate tools integrated in these infrastructures. Different projects provide arbitrary solutions to manage research data. However, within two projects - SUMARIO for land and water management and TERENO for environmental monitoring - solutions to manage research data have been developed based on Free and Open Source Software (FOSS) components. The resulting framework provides essential components for harvesting, storing and documenting research data, as well as for discovering, visualizing and downloading these data on the basis of standardized services, stimulated considerably by the enhanced data management approaches of Spatial Data Infrastructures (SDI). In order to fully exploit the potential of these developments for enhancing data management in the Geosciences, the publication of software components, e.g. via GitHub, is not sufficient. We will use our experience to move these solutions into the cloud, e.g. as PaaS or SaaS offerings. Our contribution will present data management solutions for the Geosciences developed in two projects. A sort of construction kit of FOSS components builds the backbone for the assembly and implementation of project-specific platforms. Furthermore, an approach is presented to stimulate the reuse of FOSS RDM solutions with cloud concepts. In further projects, specific RDM platforms can then be set up much faster, customized to individual needs, and tools can be added during run-time.
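Discovery in such SDI-based frameworks typically runs over a standard catalogue interface (OGC CSW), so the same client code works against any compliant catalogue. A hedged OWSLib sketch follows; the catalogue URL and search term are hypothetical, not the SUMARIO or TERENO endpoints.

```python
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

# Hypothetical CSW endpoint of a project catalogue (e.g. a GeoNetwork or pycsw instance).
csw = CatalogueServiceWeb("https://catalogue.example.org/csw")

# Find metadata records whose text mentions soil moisture observations.
query = PropertyIsLike("csw:AnyText", "%soil moisture%")
csw.getrecords2(constraints=[query], maxrecords=10)

for rec_id, rec in csw.records.items():
    print(rec_id, rec.title)
```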
Simulating Impacts of Disruptions to Liquid Fuels Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Michael; Corbet, Thomas F.; Baker, Arnold B.
This report presents a methodology for estimating the impacts of events that damage or disrupt liquid fuels infrastructure. The impact of a disruption depends on which components of the infrastructure are damaged, the time required for repairs, and the position of the disrupted components in the fuels supply network. Impacts are estimated for seven stressing events in regions of the United States, which were selected to represent a range of disruption types. For most of these events the analysis is carried out using the National Transportation Fuels Model (NTFM) to simulate the system-level liquid fuels sector response. Results are presented for each event, and a brief cross comparison of event simulation results is provided.
Medical image informatics infrastructure design and applications.
Huang, H K; Wong, S T; Pietka, E
1997-01-01
A picture archiving and communication system (PACS) is a system integration of multimodality images and health information systems designed to improve the operation of a radiology department. As it evolves, PACS becomes a hospital image document management system with a voluminous image and related data file repository. A medical image informatics infrastructure can be designed to take advantage of existing data, providing PACS with add-on value for health care service, research, and education. A medical image informatics infrastructure (MIII) consists of the following components: medical images and associated data (including the PACS database), image processing, data/knowledge base management, visualization, graphic user interface, communication networking, and application-oriented software. This paper describes these components and their logical connection, and illustrates some applications based on the concept of the MIII.
A Case Study Based Analysis of Performance Metrics for Green Infrastructure
NASA Astrophysics Data System (ADS)
Gordon, B. L.; Ajami, N.; Quesnel, K.
2017-12-01
Aging infrastructure, population growth, and urbanization are demanding new approaches to management of all components of the urban water cycle, including stormwater. Traditionally, urban stormwater infrastructure was designed to capture and convey rainfall-induced runoff out of a city through a network of curbs, gutters, drains, and pipes, also known as grey infrastructure. These systems were planned with a single purpose and designed under the assumption of hydrologic stationarity, a notion that no longer holds true in the face of a changing climate. One solution gaining momentum around the world is green infrastructure (GI). Beyond stormwater quality improvement and quantity reduction (or technical benefits), GI solutions offer many environmental, economic, and social benefits. Yet many practical barriers have prevented the widespread adoption of these systems worldwide. At the center of these challenges is the inability of stakeholders to know how to monitor, measure, and assess the multi-sector performance of GI systems. Traditional grey infrastructure projects require different monitoring strategies than natural systems; there are no overarching policies on how best to design GI monitoring and evaluation systems and measure performance. Previous studies have attempted to quantify the performance of GI, mostly using one evaluation method on a specific case study. We use a case study approach to address these knowledge gaps and develop a conceptual model of how to evaluate the performance of GI through the lens of financing. First, we examined many different case studies of successfully implemented GI around the world. Then we narrowed in on 10 exemplary case studies. For each case study, we determined which performance method the project developer used, such as LCA, TBL, Low Impact Design Assessment (LIDA), or others. Then, we determined which performance metrics were used to determine success and what data were needed to calculate those metrics. Finally, we examined the risk priorities of both public and private actors to see how they varied and how risk was overcome. We synthesized these results to pull out key themes and lessons for the future. If project implementers are able to quantify the benefits and show investors how beneficial these systems can be, more will be implemented in the future.
The Department of Energy Nuclear Criticality Safety Program
NASA Astrophysics Data System (ADS)
Felty, James R.
2005-05-01
This paper broadly covers key events and activities from which the Department of Energy Nuclear Criticality Safety Program (NCSP) evolved. The NCSP maintains fundamental infrastructure that supports operational criticality safety programs. This infrastructure includes continued development and maintenance of key calculational tools, differential and integral data measurements, benchmark compilation, development of training resources, hands-on training, and web-based systems to enhance information preservation and dissemination. The NCSP was initiated in response to Defense Nuclear Facilities Safety Board Recommendation 97-2, Criticality Safety, and evolved from a predecessor program, the Nuclear Criticality Predictability Program, that was initiated in response to Defense Nuclear Facilities Safety Board Recommendation 93-2, The Need for Critical Experiment Capability. This paper also discusses the role Dr. Sol Pearlstein played in helping the Department of Energy lay the foundation for a robust and enduring criticality safety infrastructure.
Fuzzy architecture assessment for critical infrastructure resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, George
2012-12-01
This paper presents an approach for the selection of alternative architectures in a connected infrastructure system to increase resilience of the overall infrastructure system. The paper begins with a description of resilience and critical infrastructure, then summarizes existing approaches to resilience, and presents a fuzzy-rule based method of selecting among alternative infrastructure architectures. This methodology includes considerations which are most important when deciding on an approach to resilience. The paper concludes with a proposed approach which builds on existing resilience architecting methods by integrating key system aspects using fuzzy memberships and fuzzy rule sets. This novel approach aids the systems architect in considering resilience for the evaluation of architectures for adoption into the final system architecture.
NASA Astrophysics Data System (ADS)
Nidziy, Elena
2017-10-01
This article analyzes and demonstrates the dependence of regional economic development on the efficiency of financing for the construction of transport infrastructure. Public-private partnership is identified as an effective mechanism for financing infrastructure projects, and its concrete forms are formulated. An optimal scenario for financing transport infrastructure is proposed that can lead to positive transformations in the economy. The paper considers the advantages and risks of public-private partnership for the parties to the contractual relationship, proposes components for assessing the economic effect of implementing infrastructure projects, and formulates conditions for minimizing risks. The results of the research could be used to address persistent problems in the development of transport infrastructure and issues of financial assurance for the construction of infrastructure projects at the regional level.
Measuring the pulse of urban green infrastructure: vegetation dynamics across residential landscapes
Vegetation can be an important component of urban green infrastructure. Its structure is a complex result of the socio-ecological milieu and management decisions, and it can influence numerous ecohydrological processes such as stormwater interception and evapotranspiration. Despi...
Managing Sustainable Data Infrastructures: The Gestalt of EOSDIS
NASA Astrophysics Data System (ADS)
Behnke, J.; Lindsay, F. E.; Lowe, D. R.; Mitchell, A. E.; Lynnes, C.
2016-12-01
NASA's Earth Observing System Data and Information System (EOSDIS) has been a central component of the NASA Earth observation program since the 1990's. The data collected by NASA's remote sensing instruments represent a significant public investment in research. EOSDIS provides free and open access to this data to a worldwide public research community. From the very beginning, EOSDIS was conceived as a system built on partnerships between NASA Centers, US agencies and academia. EOSDIS manages a wide range of Earth science discipline data that include cryosphere, land cover change, polar processes, field campaigns, ocean surface, digital elevation, atmosphere dynamics and composition, and inter-disciplinary research, among many others. Over the years, EOSDIS has evolved to support increasingly complex and diverse NASA Earth Science data collections. EOSDIS epitomizes a System of Systems, whose many varied and distributed parts are integrated into a single, highly functional organized science data system. A distributed architecture was adopted to ensure discipline-specific support for the science data, while also leveraging standards and establishing policies and tools to enable interdisciplinary research, and analysis across multiple scientific instruments. The EOSDIS is composed of system elements such as geographically distributed archive centers used to manage the stewardship of data. The infrastructure consists of underlying capabilities/connections that enable the primary system elements to function together. For example, one key infrastructure component is the common metadata repository, which enables discovery of all data within the EOSDIS system. EOSDIS employs processes and standards to ensure partners can work together effectively, and provide coherent services to users. While the separation into domain-specific science archives helps to manage the wide variety of missions and datasets, the common services and practices serve to knit the overall system together into a coherent whole, with sharing of data, metadata, information and software making EOSDIS more than the simple sum of its parts. This paper will describe those parts and how the whole system works together to deliver Earth science data to millions of users.
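The common metadata repository mentioned here is exposed through NASA's public CMR search API, so cross-archive discovery can be scripted directly. A small sketch with requests follows; the collection short name is a placeholder, and the exact query parameters should be checked against the CMR documentation.

```python
import requests

CMR_GRANULE_SEARCH = "https://cmr.earthdata.nasa.gov/search/granules.json"

params = {
    "short_name": "MOD09GA",                               # placeholder collection short name
    "temporal": "2016-07-01T00:00:00Z,2016-07-02T00:00:00Z",
    "page_size": 5,
}

resp = requests.get(CMR_GRANULE_SEARCH, params=params, timeout=60)
resp.raise_for_status()

# The JSON response follows an Atom-like structure with a feed of entries.
for entry in resp.json()["feed"]["entry"]:
    print(entry["title"])
```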
Surgical and anaesthetic capacity of hospitals in Malawi: key insights
Henry, Jaymie Ang; Frenkel, Erica; Borgstein, Eric; Mkandawire, Nyengo; Goddia, Cyril
2015-01-01
Background: Surgery is increasingly recognized as an important driver for health systems strengthening, especially in developing countries. To facilitate quality improvement initiatives, baseline knowledge of capacity for surgical, anaesthetic, emergency and obstetric care is critical. In partnership with the Malawi Ministry of Health, we quantified government hospitals' surgical capacity through workforce, infrastructure and health service delivery components. Methods: From November 2012 to January 2013, we surveyed district and mission hospital administrators and clinical staff onsite using a modified version of the Personnel, Infrastructure, Procedures, Equipment and Supplies (PIPES) tool from Surgeons OverSeas. We calculated the percentage of facilities demonstrating adequacy of the assessed components, surgical case rates, operating theatre density and surgical workforce density. Results: Twenty-seven government hospitals were surveyed (90% of the district hospitals, all central hospitals). Of the surgical workforce surveyed (n = 370), 92.7% were non-surgeons and 77% were clinical officers (COs). Of the 109 anaesthesia providers, 95.4% were non-physician anaesthetists (anaesthesia COs or ACOs). Non-surgeons and ACOs were the only providers of surgical services and anaesthetic services in 85% and 88.9% of hospitals, respectively. No specialists served the district hospitals. All of the hospitals experienced periods without external electricity. Most did not always have a functioning generator (78.3% district, 25% central) or running water (82.6%, 50%). None of the district hospitals had an Intensive Care Unit (ICU). Cricothyroidotomy, bowel resection and cholecystectomy were not done in over two-thirds of hospitals. Every hospital provided general anaesthesia but some did not always have a functioning anaesthesia machine (52.2%, 50%). Surgical rate, operating theatre density and surgical workforce density per 100 000 population were 289.48-747.38 procedures, 0.98 and 5.41 and 3.68 surgical providers, respectively. Conclusion: COs form the backbone of Malawi's surgical and anaesthetic workforce and should be supported with improvements in infrastructure as well as training and mentorship by specialist surgeons and anaesthetists. PMID:25261799
Martin, J B; Wilkins, A S; Stawski, S K
1998-08-01
The evolving health care environment demands that health care organizations fully utilize information technologies (ITs). The effective deployment of IT requires the development and implementation of a comprehensive IT strategic plan. A number of approaches to health care IT strategic planning exist, but they are outdated or incomplete. The component alignment model (CAM) introduced here recognizes the complexity of today's health care environment, emphasizing continuous assessment and realignment of seven basic components: external environment, emerging ITs, organizational infrastructure, mission, IT infrastructure, business strategy, and IT strategy. The article provides a framework by which health care organizations can develop an effective IT strategic planning process.
Critical Infrastructure Protection- Los Alamos National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bofman, Ryan K.
Los Alamos National Laboratory (LANL) has been a key facet of Critical National Infrastructure since the nuclear bombing of Hiroshima exposed the nature of the Laboratory's work in 1945. Common knowledge of the nature of the sensitive information held here makes it necessary to protect this critical infrastructure as a matter of national security. This protection occurs in multiple forms, beginning with physical security, followed by cybersecurity and the safeguarding of classified information, and concluding with the missions of the National Nuclear Security Administration.
Secondary Use of Clinical Data: the Vanderbilt Approach
Danciu, Ioana; Cowan, James D.; Basford, Melissa; Wang, Xiaoming; Saip, Alexander; Osgood, Susan; Shirey-Rice, Jana; Kirby, Jacqueline; Harris, Paul A.
2014-01-01
The last decade has seen an exponential growth in the quantity of clinical data collected nationwide, triggering an increase in opportunities to reuse the data for biomedical research. The Vanderbilt research data warehouse framework consists of identified and de-identified clinical data repositories, fee-for-service custom services, and tools built atop the data layer to assist researchers across the enterprise. Providing resources dedicated to research initiatives benefits not only the research community, but also clinicians, patients and institutional leadership. This work provides a summary of our approach to the secondary use of clinical data for research, including a description of key components and a list of lessons learned, designed to assist others assembling similar services and infrastructure. PMID:24534443
Mihelcic, James R; Ren, Zhiyong Jason; Cornejo, Pablo K; Fisher, Aaron; Simon, A J; Snyder, Seth W; Zhang, Qiong; Rosso, Diego; Huggins, Tyler M; Cooper, William; Moeller, Jeff; Rose, Bob; Schottel, Brandi L; Turgeon, Jason
2017-07-18
This Feature examines significant challenges and opportunities to spur innovation and accelerate adoption of reliable technologies that enhance integrated resource recovery in the wastewater sector through the creation of a national testbed network. The network is a virtual entity that connects appropriate physical testing facilities, and other components needed for a testbed network, with researchers, investors, technology providers, utilities, regulators, and other stakeholders to accelerate the adoption of innovative technologies and processes that are needed for the water resource recovery facility of the future. Here we summarize and extract key issues and developments, to provide a strategy for the wastewater sector to accelerate a path forward that leads to new sustainable water infrastructures.
XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Mid-year report FY17 Q2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.; Pugmire, David; Rogers, David
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.; Pugmire, David; Rogers, David
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem. Mid-year report FY16 Q2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY15 Q4.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth D.; Sewell, Christopher; Childs, Hank
The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.
Future CO2 Emissions and Climate Change from Existing Energy Infrastructure
NASA Astrophysics Data System (ADS)
Davis, S. J.; Caldeira, K.; Matthews, D.
2010-12-01
If current greenhouse gas (GHG) concentrations remain constant, the world would be committed to several centuries of increasing global mean temperatures and sea level rise. By contrast, near elimination of anthropogenic CO2 emissions would be required to produce diminishing GHG concentrations consistent with stabilization of mean temperatures. Yet long-lived energy and transportation infrastructure now operating can be expected to contribute substantial CO2 emissions over the next 50 years. Barring widespread retrofitting of existing power plants with carbon capture and storage (CCS) technologies or the early decommissioning of serviceable infrastructure, these “committed emissions” represent infrastructural inertia which may be the primary contributor to total future warming commitment. With respect to GHG emissions, infrastructural inertia may be thought of as having two important and overlapping components: (i) infrastructure that directly releases GHGs to the atmosphere, and (ii) infrastructure that contributes to the continued production of devices that emit GHGs to the atmosphere. For example, the interstate highway and refueling infrastructure in the United States facilitates continued production of gasoline-powered automobiles. Here, we focus only on the warming commitment from infrastructure that directly releases CO2 to the atmosphere. Essentially, we answer the question: What if no additional CO2-emitting devices (e.g., power plants, motor vehicles) were built, but all the existing CO2-emitting devices were allowed to live out their normal lifetimes? What CO2 levels and global mean temperatures would we attain? Of course, the actual lifetime of devices may be strongly influenced by economic and policy constraints. For instance, a ban on new CO2-emitting devices would create tremendous incentive to prolong the lifetime of existing devices. Thus, our scenarios are not realistic, but offer a means of gauging the threat of climate change from existing devices relative to those devices that have yet to be built. We developed scenarios of global CO2 emissions from the energy sector using datasets of power plants and motor vehicles worldwide, as well as estimates of fossil fuel emissions produced directly by industry, households, businesses, and other forms of transport. We estimated lifetimes and annual emissions of infrastructure from historical data. We projected changes in CO2 and temperature in response to our calculated emissions using an intermediate-complexity coupled climate-carbon model (UVic ESCM). We calculate cumulative future emissions of 496 (282 to 701) gigatonnes of CO2 from combustion of fossil fuels by existing infrastructure between 2010 and 2060, forcing mean warming of 1.3°C (1.1 to 1.4°C) above the preindustrial era and atmospheric concentrations of CO2 less than 430 parts per million (ppm). Because these conditions would likely avoid many key impacts of climate change, we conclude that sources of the most threatening emissions have yet to be built. However, CO2-emitting infrastructure will expand unless extraordinary efforts are undertaken to develop alternatives.
Energy Systems Integration Laboratory | Energy Systems Integration Facility
The laboratory offers the following capabilities. High-Pressure Hydrogen Systems: the high-pressure hydrogen systems test hub includes a Class 1, Division 2 space for performing tests of high-pressure hydrogen infrastructure. Key Infrastructure: robotic arm; high-pressure hydrogen; natural gas supply; standalone SCADA.
ERIC Educational Resources Information Center
Schenck-Hamlin, Donna; Pierquet, Jennifer; McClellan, Chuck
2011-01-01
In the wake of the September 2001 attacks, the U.S. government founded the Department of Homeland Security (DHS) with responsibility to develop a National Infrastructure Protection Plan for securing critical infrastructures and key resources. DHS established interdisciplinary networks of academic expertise administered through Centers of…
ERIC Educational Resources Information Center
Thigpen, Kamila
2014-01-01
While connecting the nation's schools and libraries to the internet by modernizing and expanding the federal E-rate program currently dominates education technology efforts, a new report from the Alliance for Excellent Education urges that adequate broadband access be accompanied by a comprehensive "digital infrastructure" that unlocks…
DOT National Transportation Integrated Search
2002-05-01
ITS is typically considered an operational detail to be worked out after infrastructure planning is complete. This approach ignores the potential for the introduction of ITS to change the decisions made during infrastructure planning, or even the ove...
Climate Indicators for Energy and Infrastructure
NASA Astrophysics Data System (ADS)
Wilbanks, T. J.
2014-12-01
Two of the key categories of climate indicators are energy and infrastructure. For energy supply and use, many indicators of supply and consumption are available, and some indicators are available to assess implications of climate change, such as changes over time in heating and cooling days. Indicators of adaptation and adaptive capacity are more elusive. For infrastructure, which includes more than a dozen different sectors, general indicators are not available, beyond counts of major disasters and such valuable contributions as the ASCE "report cards." In this case, research is needed, for example to develop credible metrics for assessing the resilience of built infrastructures to climate change and other stresses.
The Neuronal Infrastructure of Speaking
ERIC Educational Resources Information Center
Menenti, Laura; Segaert, Katrien; Hagoort, Peter
2012-01-01
Models of speaking distinguish producing meaning, words and syntax as three different linguistic components of speaking. Nevertheless, little is known about the brain's integrated neuronal infrastructure for speech production. We investigated semantic, lexical and syntactic aspects of speaking using fMRI. In a picture description task, we…
Do Clouds Compute? A Framework for Estimating the Value of Cloud Computing
NASA Astrophysics Data System (ADS)
Klems, Markus; Nimis, Jens; Tai, Stefan
On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has been an objective in distributed computing research and industry for a while. Cloud Computing promises to deliver on this objective: consumers are able to rent infrastructure in the Cloud as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis. The acceptance of Cloud Computing, however, depends on the ability for Cloud Computing providers and consumers to implement a model for business value co-creation. Therefore, a systematic approach to measure costs and benefits of Cloud Computing is needed. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and to compare these costs to conventional IT solutions. We demonstrate by means of representative use cases how our framework can be applied to real world scenarios.
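The kind of cost comparison such a valuation framework supports can be sketched in a few lines. The figures and usage profile below are hypothetical, chosen only to illustrate how a pay-per-use cloud bill is weighed against an amortized fixed-capacity deployment; they are not drawn from the paper.

# Minimal sketch of a pay-per-use vs. fixed-capacity comparison (hypothetical numbers).
def cloud_cost(hours_used, rate_per_hour):
    # Pay-per-use: cost scales with actual service usage.
    return hours_used * rate_per_hour

def on_premise_cost(capex, lifetime_years, annual_opex):
    # Fixed capacity: capital expense amortized over its lifetime, plus operations.
    return capex / lifetime_years + annual_opex

# Illustrative annual usage: four servers during eight peak weeks, one server otherwise.
peak_hours = 8 * 7 * 24 * 4
off_peak_hours = 44 * 7 * 24 * 1

annual_cloud = cloud_cost(peak_hours + off_peak_hours, rate_per_hour=0.50)
annual_local = on_premise_cost(capex=40_000, lifetime_years=4, annual_opex=6_000)
print(f"cloud: ${annual_cloud:,.0f}/yr  on-premise: ${annual_local:,.0f}/yr")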
2016-04-01
infrastructure. The work is motivated by the fact that today’s clouds are very static, uniform, and predictable, allowing attackers who identify a...vulnerability in one of the services or infrastructure components to spread their effect to other, mission-critical services. Our goal is to integrate into...clouds by elevating continuous change, evolution, and misinformation as first-rate design principles of the cloud’s infrastructure. Our work is
Research infrastructure support to address ecosystem dynamics
NASA Astrophysics Data System (ADS)
Los, Wouter
2014-05-01
Predicting the evolution of ecosystems in response to climate change or human pressures is a challenge. Even understanding past or current processes is complicated as a result of the many interactions and feedbacks that occur within and between components of the system. This talk will present an example of current research on changes in landscape evolution, hydrology, soil biogeochemical processes, zoological food webs, and plant community succession, and how these affect feedbacks to components of the systems, including the climate system. Multiple observations, experiments, and simulations provide a wealth of data, but not necessarily understanding. Model development on the coupled processes on different spatial and temporal scales is sensitive to variations in data and to parameter change. Fast high performance computing may help to visualize the effect of these changes and the potential stability (and reliability) of the models. This may then allow for iteration between data production and models towards stable models, reducing uncertainty and improving the prediction of change. The role of research infrastructures becomes crucial in overcoming barriers to such research. Environmental infrastructures cover physical site facilities, dedicated instrumentation and e-infrastructure. The LifeWatch infrastructure for biodiversity and ecosystem research will provide services for data integration, analysis and modeling. But it has to cooperate intensively with the other kinds of infrastructures in order to support the iteration between data production and model computation. The cooperation in the ENVRI project (Common operations of environmental research infrastructures) is one of the initiatives to foster such multidisciplinary research.
Building Stronger State Partnerships with the US Department of Energy (Energy Assurance)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mike Keogh
2011-09-30
From 2007 until 2011, the National Association of Regulatory Utility Commissioners (NARUC) engaged in a partnership with the National Energy Technology Lab (NETL) to improve State-Federal coordination on electricity policy and energy assurance issues. This project allowed State Public Utility Commissioners and their staffs to engage on the most cutting-edge level in the arenas of energy assurance and electricity policy. Four tasks were outlined in the Statement of Performance Objectives: Task 1 - Training for Commissions on Critical Infrastructure Topics; Task 2 - Analyze and Implement Recommendations on Energy Assurance Issues; Task 3 - Ongoing liaison activities & outreach to build stronger networks between federal agencies and state regulators; and Task 4 - Additional Activities. Although four tasks were prescribed, in practice these tasks were carried out under two major activity areas: the critical infrastructure and energy assurance partnership with the US Department of Energy's Infrastructure Security and Emergency Response office, and the National Council on Electricity Policy, a collaborative which since 1994 has brought together State and Federal policymakers to address the most pressing issues facing the grid, from restructuring to smart grid implementation. On critical infrastructure protection, this cooperative agreement helped State officials achieve several important advances. The lead role on NARUC's side was played by our Committee on Critical Infrastructure Protection. Key lessons learned in this arena include the following: (1) Tabletops and exercises work - They improve the capacity of policymakers and their industry counterparts to face the most challenging energy emergencies, and thereby equip these actors with the capacity to face everything up to that point as well. (2) Information sharing is critical - Connecting people who need information with people who have information is a key success factor. However, exposure of critical infrastructure information to bad actors also creates new vulnerabilities. (3) Tensions exist between the transparency-driven basis of regulatory activity and the information-protection requirements of asset protection. (4) Coordination between states is a key success factor - Because comparatively little federal authority exists over electricity and other energy infrastructure, the interstate nature of these energy grids defies centralized command and control governance. Patchwork responses are a risk when issues are addressed state by state. Coordination is the key to ensuring consistent response to shared threats. In Electricity Policy, the National Council on Electricity Policy continued to make important strides forward. Coordinated electricity policy among States remains the best surrogate for an absent national electricity policy. In every area from energy efficiency to clean coal, State policies are driving the country's electricity policy, and regional responses to climate change, infrastructure planning, market operation, and new technology deployment depend on a forum for bringing the States together.
The national response for preventing healthcare-associated infections: infrastructure development.
Mendel, Peter; Siegel, Sari; Leuschner, Kristin J; Gall, Elizabeth M; Weinberg, Daniel A; Kahn, Katherine L
2014-02-01
In 2009, the US Department of Health and Human Services (HHS) launched the Action Plan to Prevent Healthcare-associated Infections (HAIs). The Action Plan adopted national targets for reduction of specific infections, making HHS accountable for change across the healthcare system over which federal agencies have limited control. This article examines the unique infrastructure developed through the Action Plan to support adoption of HAI prevention practices. Interviews of federal (n=32) and other stakeholders (n=38), reviews of agency documents and journal articles (n=260), and observations of interagency meetings (n=17) and multistakeholder conferences (n=17) over a 3-year evaluation period. We extract key progress and challenges in the development of national HAI prevention infrastructure--1 of the 4 system functions in our evaluation framework encompassing regulation, payment systems, safety culture, and dissemination and technical assistance. We then identify system properties--for example, coordination and alignment, accountability and incentives, etc.--that enabled or hindered progress within each key development. The Action Plan has developed a model of interagency coordination (including a dedicated "home" and culture of cooperation) at the federal level and infrastructure for stimulating change through the wider healthcare system (including transparency and financial incentives, support of state and regional HAI prevention capacity, changes in safety culture, and mechanisms for stakeholder engagement). Significant challenges to infrastructure development included many related to the same areas of progress. The Action Plan has built a foundation of infrastructure to expand prevention of HAIs and presents useful lessons for other large-scale improvement initiatives.
Preliminary Identification of Urban Park Infrastructure Resilience in Semarang Central Java
NASA Astrophysics Data System (ADS)
Muzdalifah, Aji Uhfatun; Maryono
2018-02-01
Parks are one form of green infrastructure. There are two major types of park: active parks and passive parks. Both kinds of open space contribute significantly to the urban environment. To maintain urban parks, it is important to identify the characteristics of active and passive parks. This identification is also needed to foster stakeholder efforts to increase the quality of urban park infrastructure. This study aims to explore and assess the characteristics of urban park infrastructure in Semarang City, Central Java. Data were collected through review of formal documents, field observation, and interviews with key government officers. The study found that urban active park infrastructure resilience can be defined by park location, garden shape, vegetation, supporting elements, park function, and the expected benefit from the park's existence. The vegetation aspect and the supporting elements are the most important urban park infrastructure components in Semarang.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Sheldon, Frederick T.
Cyber-physical computing infrastructures typically consist of a number of interconnected sites. Their operation critically depends on both cyber components and physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified against the results from game theory analysis and further used to explore larger-scale, real-world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses from the NESCOR Working Group study. From the Section 5 electric sector representative failure scenarios, we extracted the four generic failure scenarios and grouped them into three specific threat categories (confidentiality, integrity, and availability) to the system. These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game-theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the cyber-physical infrastructure network with respect to CIA.
Robustness and Recovery of Lifeline Infrastructure and Ecosystem Networks
NASA Astrophysics Data System (ADS)
Bhatia, U.; Ganguly, A. R.
2015-12-01
Disruptive events, both natural and man-made, can have widespread impacts on both natural systems and lifeline infrastructure networks, leading to the loss of biodiversity and essential functionality, respectively. Projected sea-level rise and climate change can further increase the frequency and severity of large-scale floods in urban-coastal megacities. Moreover, failure in infrastructure systems can trigger cascading impacts on dependent ecosystems, and vice versa. An important consideration in the behavior of isolated and inter-connected networks following disruptive events is their resilience, or the ability of the network to “bounce back” to a pre-disaster state. Conventional risk analysis and subsequent risk management frameworks have focused on identifying components' vulnerability and strengthening isolated components to withstand these disruptions. But the high interconnectedness of these systems, and the evolving nature of hazards, particularly in the context of climate extremes, make component-level analysis unrealistic. In this study, we discuss a complex network-based resilience framework to understand fragility and recovery strategies for infrastructure systems impacted by climate-related hazards. We extend the proposed framework to assess the response of ecological networks to multiple species loss and design a restoration management framework to identify the most efficient restoration sequence of species, which can potentially lead to disproportionate gains in biodiversity.
Managing Mission-Critical Infrastructure
ERIC Educational Resources Information Center
Breeding, Marshall
2012-01-01
In the library context, they depend on sophisticated business applications specifically designed to support their work. This infrastructure consists of such components as integrated library systems, their associated online catalogs or discovery services, and self-check equipment, as well as a Web site and the various online tools and services…
COMMUNITY-ORIENTED DESIGN AND EVALUATION PROCESS FOR SUSTAINABLE INFRASTRUCTURE
We met our first objective by completing the physical infrastructure of the La Fortuna-Tule water and sanitation project using the CODE-PSI method. This physical component of the project was important in providing a real, relevant, community-scale test case for the methods ...
Nasir, Zaheer Ahmad; Campos, Luiza Cintra; Christie, Nicola; Colbeck, Ian
2016-08-01
Exposure to airborne biological hazards in an ever-expanding urban transport infrastructure and a highly diverse mobile population is of growing concern, in terms of both public health and biosecurity. The existing policies and practices on design, construction and operation of these infrastructures may have severe implications for airborne disease transmission, particularly in the event of a pandemic or intentional release of biological agents. This paper reviews existing knowledge on airborne disease transmission in different modes of transport, highlights the factors enhancing the vulnerability of transport infrastructures to airborne disease transmission, discusses the potential protection measures and identifies the research gaps in order to build a bioresilient transport infrastructure. The unification of security and public health research, inclusion of public health security concepts at the design and planning phase, and a holistic system approach involving all the stakeholders over the life cycle of transport infrastructure hold the key to mitigating the challenges posed by biological hazards in the twenty-first century transport infrastructure.
Toward Information Infrastructure Studies: Ways of Knowing in a Networked Environment
NASA Astrophysics Data System (ADS)
Bowker, Geoffrey C.; Baker, Karen; Millerand, Florence; Ribes, David
This article presents Information Infrastructure Studies, a research area that takes up some core issues in digital information and organization research. Infrastructure Studies simultaneously addresses the technical, social, and organizational aspects of the development, usage, and maintenance of infrastructures in local communities as well as global arenas. While infrastructure is understood as a broad category referring to a variety of pervasive, enabling network resources such as railroad lines, plumbing and pipes, electrical power plants and wires, this article focuses on information infrastructure, such as computational services and help desks, or federating activities such as scientific data repositories and archives spanning the multiple disciplines needed to address such issues as climate warming and the biodiversity crisis. These are elements associated with the internet and, frequently today, associated with cyberinfrastructure or e-science endeavors. We argue that a theoretical understanding of infrastructure provides the context for needed dialogue between design, use, and sustainability of internet-based infrastructure services. This article outlines a research area and outlines overarching themes of Infrastructure Studies. Part one of the paper presents definitions for infrastructure and cyberinfrastructure, reviewing salient previous work. Part two portrays key ideas from infrastructure studies (knowledge work, social and political values, new forms of sociality, etc.). In closing, the character of the field today is considered.
Halpern, Pinchas; Goldberg, Scott A; Keng, Jimmy G; Koenig, Kristi L
2012-04-01
The Emergency Department (ED) is the triage, stabilization and disposition unit of the hospital during a mass-casualty incident (MCI). With most EDs already functioning at or over capacity, efficient management of an MCI requires optimization of all ED components. While the operational aspects of MCI management have been well described, the architectural/structural principles have not. Further, there are limited reports of the testing of ED design components in actual MCI events. The objective of this study is to outline the important infrastructural design components for optimization of ED response to an MCI, as developed, implemented, and repeatedly tested in one urban medical center. In the authors' experience, the most important aspects of ED design for MCI have included external infrastructure and promoting rapid lockdown of the facility for security purposes; an ambulance bay permitting efficient vehicle flow and casualty discharge; strategic placement of the triage location; patient tracking techniques; planning adequate surge capacity for both patients and staff; sufficient command, control, communications, computers, and information; well-positioned and functional decontamination facilities; adequate, well-located and easily distributed medical supplies; and appropriately built and functioning essential services. Designing the ED to cope well with a large casualty surge during a disaster is not easy, and it may not be feasible for all EDs to implement all the necessary components. However, many of the components of an appropriate infrastructural design add minimal cost to the normal expenditures of building an ED. This study highlights the role of design and infrastructure in MCI preparedness in order to assist planners in improving their ED capabilities. Structural optimization calls for a paradigm shift in the concept of structural and operational ED design, but may be necessary in order to maximize surge capacity, department resilience, and patient and staff safety.
Witt, Michael; Krefting, Dagmar
2016-01-01
Human sample data are stored in biobanks, with software managing the derived digital sample data. When these stand-alone components are connected and a search infrastructure is employed, users become able to collect required research data from different data sources. Data protection, patient rights, data heterogeneity and access control are major challenges for such an infrastructure. This dissertation will investigate concepts for a multi-level security architecture to comply with these requirements.
Integrating Water, Actors, and Structure to Study Socio-Hydro-Ecological Systems
NASA Astrophysics Data System (ADS)
Hale, R. L.; Armstrong, A.; Baker, M. A.; Bedingfield, S.; Betts, D.; Buahin, C. A.; Buchert, M.; Crowl, T.; Dupont, R.; Endter-Wada, J.; Flint, C.; Grant, J.; Hinners, S.; Horns, D.; Horsburgh, J. S.; Jackson-Smith, D.; Jones, A. S.; Licon, C.; Null, S. E.; Odame, A.; Pataki, D. E.; Rosenberg, D. E.; Runburg, M.; Stoker, P.; Strong, C.
2014-12-01
Urbanization, climate uncertainty, and ecosystem change represent major challenges for managing water resources. Water systems and the forces acting upon them are complex, and there is a need to understand and generically represent the most important system components and linkages. We developed a framework to facilitate understanding of water systems including potential vulnerabilities and opportunities for sustainability. Our goal was to produce an interdisciplinary framework for water resources research to address water issues across scales (e.g., city to region) and domains (e.g., water supply and quality, urban and transitioning landscapes). An interdisciplinary project (iUTAH - innovative Urban Transitions and Aridregion Hydro-sustainability) with a large (N=~100), diverse team having expertise spanning the hydrologic, biological, ecological, engineering, social, planning, and policy sciences motivated the development of this framework. The framework was developed through review of the literature, meetings with individual researchers, and workshops with participants. The Structure-Water-Actor Framework (SWAF) includes three main components: water (quality and quantity), structure (natural, built, and social), and actors (individual and organizational). Key linkages include: 1) ecological and hydrological processes, 2) ecosystem and geomorphic change, 3) planning, design, and policy, 4) perceptions, information, and experience, 5) resource access, and 6) operational water use and management. Our expansive view of structure includes natural, built, and social components, allowing us to examine a broad set of tools and levers for water managers and decision-makers to affect system sustainability and understand system outcomes. We validate the SWAF and illustrate its flexibility to generate insights for three research and management problems: green stormwater infrastructure in an arid environment, regional water supply and demand, and urban river restoration. These applications show that the framework can help identify key components and linkages across diverse water systems.
Graduates' Perceptions towards UKM's Infrastructure
ERIC Educational Resources Information Center
Omar, Ramli; Khoon, Koh Aik; Hamzah, Mohd Fauzi; Ahmadan, Siti Rohayu
2009-01-01
This paper reports on the surveys which were conducted between 2006 and 2008 on graduates' perceptions towards the infrastructure at Universiti Kebangsaan Malaysia (UKM). It covered three major aspects pertaining to learning, living and leisure on campus. Eight out of 14 components received overwhelming approval from our graduates. (Contains 1…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Billings, Jay J.; Bonior, Jason D.; Evans, Philip G.
Securely transferring timing information in the electrical grid is a critical component of securing the nation's infrastructure from cyber attacks. One solution to this problem is to use quantum information to securely transfer the timing information across sites. This software provides such an infrastructure using a standard Java webserver that pulls the quantum information from associated hardware.
Boutin, Natalie; Holzbach, Ana; Mahanta, Lisa; Aldama, Jackie; Cerretani, Xander; Embree, Kevin; Leon, Irene; Rathi, Neeta; Vickers, Matilde
2016-01-01
The Biobank and Translational Genomics core at Partners Personalized Medicine requires robust software and hardware. This Information Technology (IT) infrastructure enables the storage and transfer of large amounts of data, drives efficiencies in the laboratory, maintains data integrity from the time of consent to the time that genomic data is distributed for research, and enables the management of complex genetic data. Here, we describe the functional components of the research IT infrastructure at Partners Personalized Medicine and how they integrate with existing clinical and research systems, review some of the ways in which this IT infrastructure maintains data integrity and security, and discuss some of the challenges inherent to building and maintaining such infrastructure. PMID:26805892
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brigantic, Robert T.; Betzsold, Nick J.; Bakker, Craig KR
This presentation gives an overview of a methodology for dynamic security risk quantification and optimal allocation of security assets for high-profile venues. This methodology is especially applicable to venues that require security screening operations such as mass transit (e.g., train or airport terminals), critical infrastructure protection (e.g., government buildings), and large-scale public events (e.g., concerts or professional sports). The method starts by decomposing the three core components of risk -- threat, vulnerability, and consequence -- into their various subcomponents. For instance, vulnerability can be decomposed into availability, accessibility, organic security, and target hardness, and each of these can be evaluated against the potential threats of interest for the given venue. Once evaluated, these subcomponents are rolled back up to compute the specific value for the vulnerability core risk component. Likewise, the same is done for consequence and threat, and then risk is computed as the product of these three components. A key aspect of our methodology is dynamically quantifying risk. That is, we incorporate the ability to uniquely allow the subcomponents and core components, and in turn, risk, to be quantified as a continuous function of time throughout the day, week, month, or year as appropriate.
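As a rough numerical illustration of the decomposition just described (hypothetical scores, not the presentation's actual model or data), the subcomponent roll-up and the time-dependent product can be sketched as follows.

# Hypothetical scores only; illustrates risk(t) = threat(t) * vulnerability(t) * consequence(t).
def vulnerability(accessibility, availability, organic_security, target_hardness):
    # Subcomponents scored in [0, 1]; stronger security and hardness lower the roll-up.
    return accessibility * availability * (1 - organic_security) * (1 - target_hardness)

def risk(threat, vuln, consequence):
    return threat * vuln * consequence

# A venue where crowding (hence accessibility and consequence) peaks at rush hour
# and screening staffing (organic security) is higher during the day.
for hour in range(6, 23):
    crowding = 1.0 if hour in (8, 9, 17, 18) else 0.4
    v = vulnerability(accessibility=crowding, availability=0.9,
                      organic_security=0.6 if 7 <= hour <= 19 else 0.3,
                      target_hardness=0.5)
    print(f"{hour:02d}:00  risk = {risk(threat=0.2, vuln=v, consequence=crowding):.3f}")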
Strategies of Educational Decentralization: Key Questions and Core Issues.
ERIC Educational Resources Information Center
Hanson, E. Mark
1998-01-01
Explains key issues and forces that shape organization and management strategies of educational decentralization, using examples from Colombia, Venezuela, Argentina, Nicaragua, and Spain. Core decentralization issues include national and regional goals, planning, political stress, resource distribution, infrastructure development, and job…
Collaboration-Centred Cities through Urban Apps Based on Open and User-Generated Data
Aguilera, Unai; López-de-Ipiña, Diego; Pérez, Jorge
2016-01-01
This paper describes the IES Cities platform conceived to streamline the development of urban apps that combine heterogeneous datasets provided by diverse entities, namely, government, citizens, sensor infrastructure and other information data sources. This work pursues the challenge of achieving effective citizen collaboration by empowering them to prosume urban data across time. Particularly, this paper focuses on the query mapper; a key component of the IES Cities platform devised to democratize the development of open data-based mobile urban apps. This component allows developers not only to use available data, but also to contribute to existing datasets with the execution of SQL sentences. In addition, the component allows developers to create ad hoc storages for their applications, publishable as new datasets accessible by other consumers. As multiple users could be contributing and using a dataset, our solution also provides a data level permission mechanism to control how the platform manages the access to its datasets. We have evaluated the advantages brought forward by IES Cities from the developers’ perspective by describing an exemplary urban app created on top of it. In addition, we include an evaluation of the main functionalities of the query mapper. PMID:27376300
Collaboration-Centred Cities through Urban Apps Based on Open and User-Generated Data.
Aguilera, Unai; López-de-Ipiña, Diego; Pérez, Jorge
2016-07-01
This paper describes the IES Cities platform conceived to streamline the development of urban apps that combine heterogeneous datasets provided by diverse entities, namely, government, citizens, sensor infrastructure and other information data sources. This work pursues the challenge of achieving effective citizen collaboration by empowering them to prosume urban data across time. Particularly, this paper focuses on the query mapper; a key component of the IES Cities platform devised to democratize the development of open data-based mobile urban apps. This component allows developers not only to use available data, but also to contribute to existing datasets with the execution of SQL sentences. In addition, the component allows developers to create ad hoc storages for their applications, publishable as new datasets accessible by other consumers. As multiple users could be contributing and using a dataset, our solution also provides a data level permission mechanism to control how the platform manages the access to its datasets. We have evaluated the advantages brought forward by IES Cities from the developers' perspective by describing an exemplary urban app created on top of it. In addition, we include an evaluation of the main functionalities of the query mapper.
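The prosumer pattern described for the query mapper, with developers both querying a shared dataset and contributing records back through SQL sentences, can be illustrated generically. The sketch below uses SQLite purely as a stand-in; it does not reflect the actual IES Cities API or its permission model, and the table and values are invented.

# Generic illustration of the prosume-through-SQL idea; not the IES Cities API.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bike_racks (id INTEGER PRIMARY KEY, lat REAL, lon REAL, reporter TEXT)")

# An urban app consumes the open dataset...
con.execute("INSERT INTO bike_racks (lat, lon, reporter) VALUES (43.263, -2.935, 'city')")
rows = con.execute("SELECT lat, lon FROM bike_racks").fetchall()

# ...and also contributes user-generated records back with ordinary SQL sentences.
con.execute("INSERT INTO bike_racks (lat, lon, reporter) VALUES (43.270, -2.940, 'citizen-42')")
con.commit()
print(con.execute("SELECT COUNT(*) FROM bike_racks").fetchone()[0])  # -> 2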
Saravanan, V S; Ayessa Idenal, Marissa; Saiyed, Shahin; Saxena, Deepak; Gerke, Solvay
2016-10-01
Diseases are rapidly urbanizing. Ageing infrastructures, high levels of inequality, poor urban governance, rapidly growing economies and highly dense and mobile populations all create environments rife for water-borne diseases. This article analyzes the role of institutions as crosscutting entities among a myriad of factors that breed water-borne diseases in the city of Ahmedabad, India. It applies 'path dependency' and a 'rational choice' perspective to understand the factors facilitating the breeding of diseases. This study is based on household surveys of approximately 327 households in two case study wards and intermittent interviews with key informants over a period of 2 years. Principal component analysis is applied to reduce the data and convert a set of observations, which potentially correlate with each other, into components. Institutional analyses behind these components reveal the role of social actors in exploiting the deeply rooted inefficiencies affecting urban health. This has led to a vicious cycle; breaking this cycle requires understanding the political dynamics that underlie the exposure and prevalence of diseases to improve urban health.
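The data-reduction step referred to here follows the standard principal component analysis workflow; a generic sketch on synthetic survey-style variables (not the study's dataset, and with invented indicator names) would look roughly like this.

# Generic PCA sketch on synthetic survey-style data; not the study's dataset.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_households, n_indicators = 327, 6          # e.g., water source, storage, sanitation, ...
latent = rng.normal(size=(n_households, 2))  # two hypothetical underlying drivers
X = latent @ rng.normal(size=(2, n_indicators)) + 0.3 * rng.normal(size=(n_households, n_indicators))

X_std = StandardScaler().fit_transform(X)    # PCA expects standardized observations
pca = PCA(n_components=3).fit(X_std)
print("explained variance ratio:", pca.explained_variance_ratio_)
scores = pca.transform(X_std)                # component scores per household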
Reflections on the role of open source in health information system interoperability.
Sfakianakis, S; Chronaki, C E; Chiarugi, F; Conforti, F; Katehakis, D G
2007-01-01
This paper reflects on the role of open source in health information system interoperability. Open source is a driving force in computer science research and the development of information systems. It facilitates the sharing of information and ideas, enables evolutionary development and open collaborative testing of code, and broadens the adoption of interoperability standards. In health care, information systems have been developed largely ad hoc following proprietary specifications and customized design. However, the wide deployment of integrated services such as Electronic Health Records (EHRs) over regional health information networks (RHINs) relies on interoperability of the underlying information systems and medical devices. This reflection is built on the experiences of the PICNIC project that developed shared software infrastructure components in open source for RHINs and the OpenECG network that offers open source components to lower the implementation cost of interoperability standards such as SCP-ECG, in electrocardiography. Open source components implementing standards and a community providing feedback from real-world use are key enablers of health care information system interoperability. Investing in open source is investing in interoperability and a vital aspect of a long term strategy towards comprehensive health services and clinical research.
NASA Astrophysics Data System (ADS)
Zhang, Jianguo; Chen, Xiaomeng; Zhuang, Jun; Jiang, Jianrong; Zhang, Xiaoyan; Wu, Dongqing; Huang, H. K.
2003-05-01
In this paper, we present a new security approach to provide security measures and features in both healthcare information systems (PACS, RIS/HIS) and the electronic patient record (EPR). We introduce two security components, a certificate authoring (CA) system and a patient record digital signature management (DSPR) system, as well as electronic envelope technology, into the current hospital healthcare information infrastructure to provide security measures and functions such as confidentiality or privacy, authenticity, integrity, reliability, non-repudiation, and authentication for the daily operation of in-house healthcare information systems and for EPR exchange among hospitals or healthcare administration levels. The DSPR component manages all the digital signatures of patient medical records signed using asymmetric key encryption technologies. The electronic envelopes used for EPR exchange are created based on the information of signers, digital signatures, and identifications of patient records stored in the CA and DSPR systems, as well as the destinations and the remote users. The CA and DSPR systems were developed and integrated into a RIS-integrated PACS, and the integration of these new security components is seamless and painless. The electronic envelopes designed for EPR were used successfully in multimedia data transmission.
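The signing workflow can be illustrated with a generic asymmetric-key example using Python's cryptography package; this is only a sketch of the idea, not the paper's CA/DSPR implementation, and the record contents are invented.

# Generic asymmetric-key signing sketch; not the paper's CA/DSPR implementation.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The signer holds a private key; the matching certificate/public key would be
# managed by the certificate-authoring component.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

record = b"patient-id=123; exam=CT-chest; report-digest=..."  # invented example content

# Sign the record; the signature would be stored by the signature-management component.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(record, pss, hashes.SHA256())

# A recipient checks integrity and non-repudiation; raises InvalidSignature on tampering.
public_key.verify(signature, record, pss, hashes.SHA256())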
A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system
NASA Astrophysics Data System (ADS)
Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.
2014-06-01
The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.
ERIC Educational Resources Information Center
Special Libraries Association, New York, NY.
These conference proceedings address the key issues relating to the National Information Infrastructure, including social policy, cultural issues, government policy, and technological applications. The goal is to provide the knowledge and resources needed to conceptualize and think clearly about this topic. Proceedings include: "Opening…
Miniaturization as a key factor to the development and application of advanced metrology systems
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Dobrev, Ivo; Harrington, Ellery; Hefti, Peter; Khaleghi, Morteza
2012-10-01
Recent technological advances of miniaturization engineering are enabling the realization of components and systems with unprecedented capabilities. Such capabilities, which are significantly beneficial to scientific and engineering applications, are impacting the development and application of optical metrology systems for investigations under complex boundary, loading, and operating conditions. In this paper, an overview of the metrology systems that we are developing is presented. Systems are being developed and applied to high-speed and high-resolution measurements of shape and deformations under actual operating conditions for such applications as sustainability, health, medical diagnosis, security, and urban infrastructure. The systems take advantage of recent developments in light sources and modulators, detectors, microelectromechanical (MEMS) sensors and actuators, kinematic positioners, rapid prototyping fabrication technologies, as well as software engineering.
Fog-computing concept usage as means to enhance information and control system reliability
NASA Astrophysics Data System (ADS)
Melnik, E. V.; Klimenko, A. B.; Ivanov, D. Ya
2018-05-01
This paper focuses on the reliability of information and control systems (ICS). The authors propose using elements of the fog-computing concept to enhance the reliability function. The key idea of fog computing is to shift computations to the fog layer of the network, and thus to decrease the workload of the communication environment and data processing components. In an ICS, workload can likewise be distributed among sensors, actuators and network infrastructure facilities near the sources of data. The authors simulated typical workload distribution situations for the “traditional” ICS architecture and for one that uses elements of the fog-computing concept. The paper contains some models, selected simulation results and conclusions about the prospects of fog computing as a means to enhance ICS reliability.
49 CFR 1520.9 - Restrictions on the disclosure of SSI.
Code of Federal Regulations, 2011 CFR
2011-10-01
... inform TSA or the applicable DOT or DHS component or agency. (d) Additional Requirements for Critical Infrastructure Information. In the case of information that is both SSI and has been designated as critical infrastructure information under section 214 of the Homeland Security Act, any covered person who is a Federal...
49 CFR 1520.9 - Restrictions on the disclosure of SSI.
Code of Federal Regulations, 2013 CFR
2013-10-01
... inform TSA or the applicable DOT or DHS component or agency. (d) Additional Requirements for Critical Infrastructure Information. In the case of information that is both SSI and has been designated as critical infrastructure information under section 214 of the Homeland Security Act, any covered person who is a Federal...
49 CFR 1520.9 - Restrictions on the disclosure of SSI.
Code of Federal Regulations, 2014 CFR
2014-10-01
... inform TSA or the applicable DOT or DHS component or agency. (d) Additional Requirements for Critical Infrastructure Information. In the case of information that is both SSI and has been designated as critical infrastructure information under section 214 of the Homeland Security Act, any covered person who is a Federal...
49 CFR 1520.9 - Restrictions on the disclosure of SSI.
Code of Federal Regulations, 2012 CFR
2012-10-01
... inform TSA or the applicable DOT or DHS component or agency. (d) Additional Requirements for Critical Infrastructure Information. In the case of information that is both SSI and has been designated as critical infrastructure information under section 214 of the Homeland Security Act, any covered person who is a Federal...
Map Matching and Real World Integrated Sensor Data Warehousing (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burton, E.
2014-02-01
The inclusion of interlinked temporal and spatial elements within integrated sensor data enables a tremendous degree of flexibility when analyzing multi-component datasets. The presentation illustrates how to warehouse, process, and analyze high-resolution integrated sensor datasets to support complex system analysis at the entity and system levels. The example cases presented utilize in-vehicle sensor system data to assess vehicle performance, while integrating a map matching algorithm to link vehicle data to roads to demonstrate the enhanced analysis possible via interlinking data elements. Furthermore, in addition to the flexibility provided, the examples presented illustrate concepts of maintaining proprietary operational information (Fleet DNA) and privacy of study participants (Transportation Secure Data Center) while producing widely distributed data products. Should real-time operational data be logged at high resolution across multiple infrastructure types, map matched to their associated infrastructure, and distributed employing a similar approach, dependencies between urban environment infrastructure components could be better understood. This understanding is especially crucial for the cities of the future, where transportation will rely more on grid infrastructure to support its energy demands.
Optimal Resource Allocation in Electrical Network Defense
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Y; Edmunds, T; Papageorgiou, D
2004-01-15
Infrastructure networks supplying electricity, natural gas, water, and other commodities are at risk of disruption due to well-engineered and coordinated terrorist attacks. Countermeasures such as hardening targets, acquisition of spare critical components, and surveillance can be undertaken to detect and deter these attacks. Allocation of available countermeasure resources to sites or activities in a manner that maximizes their effectiveness is a challenging problem. This allocation must take into account the adversary's response after the countermeasure assets are in place and the consequence mitigation measures the infrastructure operator can undertake after the attack. The adversary may simply switch strategies to avoid countermeasures when executing the attack. Stockpiling spares of critical energy infrastructure components has been identified as a key element of a grid infrastructure defense strategy in a recent National Academy of Sciences report [1]. Consider a scenario where an attacker attempts to interrupt the service of an electrical network by disabling some of its facilities while a defender wants to prevent or minimize the effectiveness of any attack. The interaction between the attacker and the defender can be described in three stages: (1) The defender deploys countermeasures, (2) The attacker disrupts the network, and (3) The defender responds to the attack by rerouting power to maintain service while trying to repair damage. In the first stage, the defender considers all possible attack scenarios and deploys countermeasures to defend against the worst scenarios. Countermeasures can include hardening targets, acquiring spare critical components, and installing surveillance devices. In the second stage, the attacker, with full knowledge of the deployed countermeasures, attempts to disable some nodes or links in the network to inflict the greatest loss on the defender. In the third stage, the defender re-dispatches power and restores disabled nodes or links to minimize the loss. The loss can be measured in costs, including the costs of using more expensive generators and the economic losses that can be attributed to loss of load. The defender's goal is to minimize the loss while the attacker wants to maximize it. Assuming some level of budget constraint, each side can only defend or attack a limited number of network elements. When an element is attacked, it is assumed that it will be totally disabled. It is assumed that when an element is defended it cannot be disabled, which may mean that it will be restored in a very short time after being attacked. The rest of the paper is organized as follows. Section 2 briefly reviews literature related to multilevel programming and network defense. Section 3 presents a mathematical formulation of the electrical network defense problem. Section 4 describes the solution algorithms. Section 5 discusses computational results. Finally, Section 6 explores future research directions.
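The three-stage interaction described in the abstract is commonly written as a tri-level (defender-attacker-defender) program; the schematic formulation below uses notation introduced here only for illustration, not the paper's own symbols.

\min_{d \in D} \; \max_{a \in A(d)} \; \min_{x \in X(a)} \; c(x)

Here d is the defender's countermeasure deployment chosen from the budget-limited set D, a is the attacker's choice of elements to disable given the deployed defenses, x is the defender's re-dispatch and restoration decision, and c(x) is the resulting loss (generation cost plus the economic cost of lost load).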
Building analytical platform with Big Data solutions for log files of PanDA infrastructure
NASA Astrophysics Data System (ADS)
Alekseev, A. A.; Barreiro Megino, F. G.; Klimentov, A. A.; Korchuganova, T. A.; Maendo, T.; Padolski, S. V.
2018-05-01
The paper describes the implementation of a high-performance system for the processing and analysis of log files for the PanDA infrastructure of the ATLAS experiment at the Large Hadron Collider (LHC), responsible for the workload management of the order of 2M daily jobs across the Worldwide LHC Computing Grid. The solution is based on the ELK technology stack, which includes several components: Filebeat, Logstash, ElasticSearch (ES), and Kibana. Filebeat is used to collect data from logs. Logstash processes the data and exports it to Elasticsearch. ES is responsible for centralized data storage. Accumulated data in ES can be viewed using Kibana. These components were integrated with the PanDA infrastructure and replaced previous log processing systems for increased scalability and usability. The authors describe all the components and their configuration tuning for the current tasks and the scale of the actual system, and give several real-life examples of how this centralized log processing and storage service is used, to showcase the advantages for daily operations.
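As a rough illustration of the indexing and query flow at the end of such a pipeline (the index and field names here are invented, not PanDA's actual schema), using the official Elasticsearch Python client:

# Illustrative only: index and field names are invented, not PanDA's schema.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# In the ELK pipeline, Filebeat ships raw log lines to Logstash, which parses them
# into structured documents like this one before they reach Elasticsearch.
doc = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "component": "panda-server",
    "level": "ERROR",
    "message": "job 1234567 failed: lost heartbeat",
}
es.index(index="panda-logs-2018.05", document=doc)

# Kibana (or any client) can then query the centralized store, e.g. for recent errors.
hits = es.search(
    index="panda-logs-*",
    query={"bool": {"filter": [{"term": {"level": "ERROR"}}]}},
    size=10,
)
print(hits["hits"]["total"])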
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamilton, Bruce Duncan
The objective of the report is to provide an assessment of the domestic supply chain and manufacturing infrastructure supporting the U.S. offshore wind market. The report provides baseline information and develops a strategy for future development of the supply chain required to support projected offshore wind deployment levels. A brief description of each of the key chapters follows: » Chapter 1: Offshore Wind Plant Costs and Anticipated Technology Advancements. Determines the cost breakdown of offshore wind plants and identifies technical trends and anticipated advancements in offshore wind manufacturing and construction. » Chapter 2: Potential Supply Chain Requirements and Opportunities. Provides an organized, analytical approach to identifying and bounding the uncertainties associated with a future U.S. offshore wind market. It projects potential component-level supply chain needs under three demand scenarios and identifies key supply chain challenges and opportunities facing the future U.S. market as well as current suppliers of the nation’s land-based wind market. » Chapter 3: Strategy for Future Development. Evaluates the gap or competitive advantage of adding manufacturing capacity in the U.S. vs. overseas, and evaluates examples of policies that have been successful. » Chapter 4: Pathways for Market Entry. Identifies technical and business pathways for market entry by potential suppliers of large-scale offshore turbine components and technical services. The report is intended for use by the following industry stakeholder groups: (a) Industry participants who seek baseline cost and supplier information for key component segments and the overall U.S. offshore wind market (Chapters 1 and 2). The component-level requirements and opportunities presented in Section 2.3 will be particularly useful in identifying market sizes, competition, and risks for the various component segments. (b) Federal, state, and local policymakers and economic development agencies, to assist in identifying policies with low effort and high impact (Chapter 3). Section 3.3 provides specific policy examples that have been demonstrated to be effective in removing barriers to development. (c) Current and potential domestic suppliers in the offshore wind market, in evaluating areas of opportunity and understanding requirements for participation (Chapter 4). Section 4.4 provides a step-by-step description of the qualification process that suppliers looking to sell components into a future U.S. offshore wind market will need to follow.
Using Cloud Computing infrastructure with CloudBioLinux, CloudMan and Galaxy
Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James
2012-01-01
Cloud computing has revolutionized availability and access to computing and storage resources; making it possible to provision a large computational infrastructure with only a few clicks in a web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this protocol, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatics analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to setup the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command line interface, and the web-based Galaxy interface. PMID:22700313
Modeling and Managing Risk in Billing Infrastructures
NASA Astrophysics Data System (ADS)
Baiardi, Fabrizio; Telmon, Claudio; Sgandurra, Daniele
This paper discusses risk modeling and risk management in information and communications technology (ICT) systems for which the attack impact distribution is heavy tailed (e.g., power law distribution) and the average risk is unbounded. Systems with these properties include billing infrastructures used to charge customers for services they access. Attacks against billing infrastructures can be classified as peripheral attacks and backbone attacks. The goal of a peripheral attack is to tamper with user bills; a backbone attack seeks to seize control of the billing infrastructure. The probability distribution of the overall impact of an attack on a billing infrastructure also has a heavy-tailed curve. This implies that the probability of a massive impact cannot be ignored and that the average impact may be unbounded - thus, even the most expensive countermeasures would be cost effective. Consequently, the only strategy for managing risk is to increase the resilience of the infrastructure by employing redundant components.
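To make the "unbounded average" remark concrete, consider a Pareto (power-law) model of attack impact, used here as a standard example rather than the authors' specific distribution. With density f(x) = \alpha x_m^{\alpha} / x^{\alpha+1} for x \ge x_m, the expected impact is

E[X] = \int_{x_m}^{\infty} x \, \frac{\alpha x_m^{\alpha}}{x^{\alpha+1}} \, dx = \frac{\alpha x_m}{\alpha - 1} \quad \text{for } \alpha > 1, \qquad E[X] = \infty \quad \text{for } \alpha \le 1,

so when the tail exponent \alpha is at or below 1 the mean impact diverges: expected-loss comparisons break down and, as the paper argues, even very expensive countermeasures can be justified by increasing the resilience of the infrastructure.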
Using cloud computing infrastructure with CloudBioLinux, CloudMan, and Galaxy.
Afgan, Enis; Chapman, Brad; Jadan, Margita; Franke, Vedran; Taylor, James
2012-06-01
Cloud computing has revolutionized availability and access to computing and storage resources, making it possible to provision a large computational infrastructure with only a few clicks in a Web browser. However, those resources are typically provided in the form of low-level infrastructure components that need to be procured and configured before use. In this unit, we demonstrate how to utilize cloud computing resources to perform open-ended bioinformatic analyses, with fully automated management of the underlying cloud infrastructure. By combining three projects, CloudBioLinux, CloudMan, and Galaxy, into a cohesive unit, we have enabled researchers to gain access to more than 100 preconfigured bioinformatics tools and gigabytes of reference genomes on top of the flexible cloud computing infrastructure. The protocol demonstrates how to set up the available infrastructure and how to use the tools via a graphical desktop interface, a parallel command-line interface, and the Web-based Galaxy interface.
"Tactic": Traffic Aware Cloud for Tiered Infrastructure Consolidation
ERIC Educational Resources Information Center
Sangpetch, Akkarit
2013-01-01
Large-scale enterprise applications are deployed as distributed applications. These applications consist of many inter-connected components with heterogeneous roles and complex dependencies. Each component typically consumes 5-15% of the server capacity. Deploying each component as a separate virtual machine (VM) allows us to consolidate the…
Sea level rise impacts on wastewater treatment systems along the U.S. coasts
NASA Astrophysics Data System (ADS)
Hummel, M.; Berry, M.; Stacey, M. T.
2017-12-01
As sea levels rise, coastal communities will experience more frequent and persistent nuisance flooding, and some low-lying areas may be permanently inundated. Critical components of lifeline infrastructure networks in these areas are also at risk of flooding, which could cause significant service disruptions that extend beyond the flooded zone. Thus, identifying critical infrastructure components that are vulnerable to sea level rise is an important first step in developing targeted investment in protective actions and enhancing the overall resilience of coastal communities. Wastewater treatment plants are typically located at low elevations near the coastline to minimize the cost of collecting consumed water and discharging treated effluent, which makes them particularly susceptible to coastal flooding. For this analysis, we used geographic information systems to assess the vulnerability of wastewater infrastructure to various sea level rise projections at the national level. We then estimated the number of people who would lose wastewater services, which could be more than three times as high as previous predictions of the number of people at risk of direct flooding due to sea level rise. We also considered several case studies of wastewater infrastructure in mid-sized cities to determine how topography and system configuration (centralized versus distributed) impact vulnerability. Overall, this analysis highlights the widespread vulnerability of wastewater infrastructure in the U.S. and demonstrates that local disruptions to infrastructure networks may have far-ranging impacts on areas that do not experience direct flooding.
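The screening step described above can be illustrated with a minimal sketch: flag plants whose elevation falls below a chosen sea level rise scenario and total the population they serve. The plant records and the 1.8 m scenario are illustrative placeholders, not data from the study.

```python
# Sketch of the screening idea: flag wastewater plants below a sea level rise
# (SLR) scenario and sum the population they serve. Values are hypothetical.
import pandas as pd

plants = pd.DataFrame({
    "plant":             ["A", "B", "C", "D"],
    "elevation_m":       [1.2, 3.5, 0.8, 2.1],       # above current mean sea level
    "population_served": [120_000, 60_000, 300_000, 45_000],
})

slr_m = 1.8                                           # hypothetical SLR scenario
exposed = plants[plants["elevation_m"] <= slr_m]

print(exposed[["plant", "population_served"]])
print("People losing service:", exposed["population_served"].sum())
```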
Importance of biometrics to addressing vulnerabilities of the U.S. infrastructure
NASA Astrophysics Data System (ADS)
Arndt, Craig M.; Hall, Nathaniel A.
2004-08-01
Human identification technologies are important threat countermeasures in minimizing select infrastructure vulnerabilities. Properly targeted countermeasures should be selected and integrated into an overall security solution based on disciplined analysis and modeling. Available data on infrastructure value, threat intelligence, and system vulnerabilities are carefully organized, analyzed and modeled. Prior to design and deployment of an effective countermeasure, the proper role and appropriateness of technology in addressing the overall set of vulnerabilities is established. Deployment of biometrics systems, as with other countermeasures, introduces potentially heightened vulnerabilities into the system. Heightened vulnerabilities may arise from both the newly introduced system complexities and an unfocused understanding of the set of vulnerabilities impacted by the new countermeasure. The countermeasure's own inherent vulnerabilities and those introduced by the countermeasure's integration with the existing system are analyzed and modeled to determine the overall vulnerability impact. The United States infrastructure is composed of government and private assets. These assets are valued by their potential impact on several components: human physical safety, physical/information replacement/repair cost, potential contribution to future loss (criticality in weapons production), direct productivity output, national macro-economic output/productivity, and information integrity. These components must be considered in determining the overall impact of an infrastructure security breach. Cost/benefit analysis is then incorporated in the security technology deployment decision process. Overall security risks, based on system vulnerabilities and threat intelligence, determine areas of potential benefit. Biometric countermeasures are often considered when additional security at intended points of entry would minimize vulnerabilities.
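The cost/benefit step mentioned above can be sketched as simple arithmetic: compare the annualized risk removed by a biometric countermeasure with its cost. All figures below are hypothetical placeholders, not data from the paper.

```python
# Sketch of a cost/benefit check: annualized risk before and after a biometric
# countermeasure, compared with its cost. All figures are hypothetical.
attack_probability = 0.02          # annual probability of a successful breach
impact_cost = 50_000_000           # combined impact across the listed components ($)
risk_reduction = 0.60              # fraction of risk the countermeasure removes
countermeasure_cost = 400_000      # annualized deployment + operation cost ($)

baseline_risk = attack_probability * impact_cost
residual_risk = baseline_risk * (1 - risk_reduction)
net_benefit = (baseline_risk - residual_risk) - countermeasure_cost

print(f"Baseline annual risk: ${baseline_risk:,.0f}")
print(f"Residual annual risk: ${residual_risk:,.0f}")
print(f"Net annual benefit:   ${net_benefit:,.0f}")
```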
NASA Astrophysics Data System (ADS)
Rarasati, A. D.; Octoria, N. B.
2018-03-01
Sustainable infrastructure is key to development success. At the same time, transportation infrastructure development affects the social and environmental conditions of the local surroundings. Assessing the availability of such transport infrastructure in light of its social and environmental impacts is one way to address this. A correlation test was used to examine the relationship between the presence of transportation infrastructure and the social and environmental conditions of the area. The results show that accessibility, the level of security, and the level of equality are correlated with social and environmental sustainability in Karawang. In environmental terms, the availability of transportation infrastructure is not directly related to environmental sustainability, and the perceived environmental impact has no effect on travel behavior. The correlation results indicate that longer travel times and higher congestion levels do not increase the perceived impact; the perceived environmental impact is attributable mainly to the high use of private vehicles in Karawang, which leads to higher energy consumption.
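A minimal sketch of the kind of correlation test described above, using Spearman rank correlation on hypothetical survey scores (the study's actual variables and data are not reproduced here):

```python
# Sketch of a rank correlation test between an infrastructure availability
# score and a perceived social sustainability score. Data are hypothetical.
from scipy import stats

accessibility_score   = [3, 4, 2, 5, 4, 3, 5, 2, 4, 3]
social_sustainability = [3, 4, 3, 5, 4, 2, 5, 2, 4, 3]

rho, p_value = stats.spearmanr(accessibility_score, social_sustainability)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```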
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melaina, Marc; Muratori, Matteo; McLaren, Joyce
Increased interest in the use of alternative transportation fuels, such as natural gas, hydrogen, and electricity, is being driven by heightened concern about the climate impacts of gasoline and diesel emissions and our dependence on finite oil resources. A key barrier to widespread adoption of low- and zero-emission passenger vehicles is the availability of refueling infrastructure. Recalling the 'chicken and egg' conundrum, limited adoption of alternative fuel vehicles increases the perceived risk of investments in refueling infrastructure, while lack of refueling infrastructure inhibits vehicle adoption. In this paper, we present the results of a study of the perceived risks and barriers to investment in alternative fuels infrastructure, based on interviews with industry experts and stakeholders. We cover barriers to infrastructure development for three alternative fuels for passenger vehicles: compressed natural gas, hydrogen, and electricity. As an early-mover in zero emission passenger vehicles, California provides the early market experience necessary to map the alternative fuel infrastructure business space. Results and insights identified in this study can be used to inform investment decisions, formulate incentive programs, and guide deployment plans for alternative fueling infrastructure in the U.S. and elsewhere.
NASA Astrophysics Data System (ADS)
Filippi, G.; Ibsen, J.; Jaque, S.; Liello, F.; Ovando, N.; Astudillo, A.; Parra, J.; Saldias, Christian
2016-07-01
Announced in 2012, started in 2013, and completed in 2015, the ALMA high bandwidth communication system has become a key factor in achieving the operational and scientific goals of ALMA. This paper summarizes the technical, organizational, and operational goals of the ALMA Optical Link Project, focused on the creation and operation of an effective and sustainable communication infrastructure to connect the ALMA Operations Support Facility and Array Operations Site, both located in the Atacama Desert in the northern region of Chile, with the point of presence of REUNA in Antofagasta, about 400 km away, and from there to the Santiago Central Office in the Chilean capital through the optical infrastructure created by the EC-funded EVALSO project and now an integral part of the REUNA backbone. This new infrastructure, completed in 2014 and now operated on behalf of ALMA by REUNA, the Chilean National Research and Education Network, uses state-of-the-art technologies, like dark fiber from newly built cables and DWDM transmission, extending the reach of high capacity communication to the remote region where the Observatory is located. The paper also reports on the results obtained during the first year and a half of testing and operation, during which different operational setups have been tried for data transfer, remote collaboration, etc. Finally, the authors present a forward look at its impact on both the future scientific development of the Chajnantor Plateau, where many installations are (and will be) located, and the potential long-term development of the Chilean scientific backbone.
Cultural and Technological Issues and Solutions for Geodynamics Software Citation
NASA Astrophysics Data System (ADS)
Heien, E. M.; Hwang, L.; Fish, A. E.; Smith, M.; Dumit, J.; Kellogg, L. H.
2014-12-01
Computational software and custom-written codes play a key role in scientific research and teaching, providing tools to perform data analysis and forward modeling through numerical computation. However, development of these codes is often hampered by the fact that there is no well-defined way for the authors to receive credit or professional recognition for their work through the standard methods of scientific publication and subsequent citation of the work. This in turn may discourage researchers from publishing their codes or making them easier for other scientists to use. We investigate the issues involved in citing software in a scientific context, and introduce features that should be components of a citation infrastructure, particularly oriented towards the codes and scientific culture in the area of geodynamics research. The codes used in geodynamics are primarily specialized numerical modeling codes for continuum mechanics problems; they may be developed by individual researchers, teams of researchers, geophysicists in collaboration with computational scientists and applied mathematicians, or by coordinated community efforts such as the Computational Infrastructure for Geodynamics. Some but not all geodynamics codes are open-source. These characteristics are common to many areas of geophysical software development and use. We provide background on the problem of software citation and discuss some of the barriers preventing adoption of such citations, including social/cultural barriers, insufficient technological support infrastructure, and an overall lack of agreement about what a software citation should consist of. We suggest solutions in an initial effort to create a system to support citation of software and promotion of scientific software development.
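As a rough illustration of the point that a citation infrastructure must define what a software citation should consist of, the sketch below lists candidate fields for a software citation record; the field set and example values are assumptions for illustration, not the paper's proposal.

```python
# Illustrative sketch (not from the paper): fields a software citation record
# for a geodynamics code might carry, expressed as a small dataclass.
from dataclasses import dataclass, field

@dataclass
class SoftwareCitation:
    title: str
    authors: list              # individuals and/or organizations credited
    version: str               # exact release being cited
    release_date: str
    repository_url: str
    doi: str = ""              # archive DOI, if one has been minted
    license: str = ""
    contributors: list = field(default_factory=list)

example = SoftwareCitation(
    title="ExampleMantleConvectionCode",         # hypothetical code name
    authors=["A. Researcher", "B. Developer"],
    version="2.1.0",
    release_date="2014-06-01",
    repository_url="https://example.org/repo",   # placeholder URL
)
print(example)
```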
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-08-01
An estimated 85% of the installed base of software is a custom application with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other similar custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to the market pressures that have motivated a multilevel supply chain structure in other widget industries: design cost recovery, improved quality through specialization, and rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (ComponentWare). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors adopting emerging, object-based, distributed computing frameworks, initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems. However, the lack of a standard real-time distributed object operating system, the lack of a standard Computer-Aided Software Engineering (CASE) tool notation, and the lack of a standard CASE tool repository have limited the realization of component software. The approach to fulfilling this need is the software component factory innovation. The factory approach takes advantage of emerging standards such as UML, CORBA, Java and the Internet. The key technical innovation of the software component factory is the ability to assemble and test new system configurations as well as assemble new tools on demand from existing tools and architecture design repositories.
An ICT Adoption Framework for Education: A Case Study in Public Secondary School of Indonesia
NASA Astrophysics Data System (ADS)
Nurjanah, S.; Santoso, H. B.; Hasibuan, Z. A.
2017-01-01
This paper presents preliminary research findings on an ICT adoption framework for education. Although many studies of ICT adoption frameworks in education have been conducted in various countries, they lack analysis of how much each component contributes to the framework's success. In this paper, a set of components linked to ICT adoption in education is identified based on the literature and exploratory analysis. The components are Infrastructure, Application, User Skills, Utilization, Finance, and Policy. These components are used as the basis for a questionnaire that captures the current state of ICT adoption in schools. The questionnaire data are processed using Structural Equation Modeling (SEM). The results show that each component contributes differently to the ICT adoption framework: Finance has the strongest effect on Infrastructure readiness, whilst User Skills has the strongest effect on Utilization. The study concludes that the development of an ICT adoption framework should take into account the components' contribution weights, which can be used to guide the implementation of ICT adoption in education.
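As a simplified stand-in for the SEM analysis (illustration only, with synthetic data rather than the study's survey responses), an ordinary regression can show how relative component contributions to Utilization would be estimated:

```python
# Simplified proxy for the paper's SEM analysis: regress a Utilization score on
# the other adoption components to compare relative contributions. The data are
# synthetic placeholders constructed so that UserSkills dominates, loosely
# mimicking the reported pattern; they are not the study's survey data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
data = pd.DataFrame({
    "Infrastructure": rng.normal(3.0, 0.8, n),
    "UserSkills":     rng.normal(3.2, 0.7, n),
    "Finance":        rng.normal(2.8, 0.9, n),
    "Policy":         rng.normal(3.1, 0.6, n),
})
data["Utilization"] = (0.2 * data["Infrastructure"] + 0.6 * data["UserSkills"]
                       + 0.1 * data["Finance"] + 0.1 * data["Policy"]
                       + rng.normal(0, 0.3, n))

X = sm.add_constant(data[["Infrastructure", "UserSkills", "Finance", "Policy"]])
model = sm.OLS(data["Utilization"], X).fit()
print(model.params.round(2))    # estimated component contributions
```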
A technological infrastructure to sustain Internetworked Enterprises
NASA Astrophysics Data System (ADS)
La Mattina, Ernesto; Savarino, Vincenzo; Vicari, Claudia; Storelli, Davide; Bianchini, Devis
In the Web 3.0 scenario, where information and services are connected by means of their semantics, organizations can improve their competitive advantage by publishing their business and service descriptions. In this scenario, Semantic Peer to Peer (P2P) can play a key role in defining dynamic and highly reconfigurable infrastructures. Organizations can share knowledge and services, using this infrastructure to move towards value networks, an emerging organizational model characterized by fluid boundaries and complex relationships. This chapter collects and defines the technological requirements and architecture of a modular and multi-Layer Peer to Peer infrastructure for SOA-based applications. This technological infrastructure, based on the combination of Semantic Web and P2P technologies, is intended to sustain Internetworked Enterprise configurations, defining a distributed registry and enabling more expressive queries and efficient routing mechanisms. The following sections focus on the overall architecture, while describing the layers that form it.
Expecting the Unexpected: Towards Robust Credential Infrastructure
NASA Astrophysics Data System (ADS)
Xu, Shouhuai; Yung, Moti
Cryptographic credential infrastructures, such as Public key infrastructure (PKI), allow the building of trust relationships in electronic society and electronic commerce. At the center of credential infrastructures is the methodology of digital signatures. However, methods that assure that credentials and signed messages possess trustworthiness and longevity are not well understood, nor are they adequately addressed in both literature and practice. We believe that, as a basic engineering principle, these properties have to be built into the credential infrastructure rather than be treated as an after-thought since they are crucial to the long term success of this notion. In this paper we present a step in the direction of dealing with these issues. Specifically, we present the basic engineering reasoning as well as a model that helps understand (somewhat formally) the trustworthiness and longevity of digital signatures, and then we give basic mechanisms that help improve these notions.
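For readers unfamiliar with the primitive at the center of such infrastructures, the sketch below signs and verifies a credential with Ed25519 using the Python cryptography package; it shows only the basic operation, not the longevity and trustworthiness mechanisms the paper addresses.

```python
# Minimal digital signature example (Ed25519 via the `cryptography` package).
# Only the basic sign/verify operation that credential infrastructures build on
# is shown; the paper's longevity/trustworthiness mechanisms are not modeled.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

credential = b"subject=alice;role=operator;expires=2030-01-01"
signature = private_key.sign(credential)

try:
    public_key.verify(signature, credential)
    print("Credential signature is valid.")
except InvalidSignature:
    print("Credential signature is invalid.")
```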
23 CFR 500.204 - TMS components for highway traffic data.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Infrastructure Management, Management and Monitoring Systems, Traffic Monitoring System, § 500.204 TMS components for highway traffic data. (a) General. Each State's TMS, including those using alternative procedures...
NASA Astrophysics Data System (ADS)
Papa, Mauricio; Shenoi, Sujeet
The information infrastructure -- comprising computers, embedded devices, networks and software systems -- is vital to day-to-day operations in every sector: information and telecommunications, banking and finance, energy, chemicals and hazardous materials, agriculture, food, water, public health, emergency services, transportation, postal and shipping, government and defense. Global business and industry, governments, indeed society itself, cannot function effectively if major components of the critical information infrastructure are degraded, disabled or destroyed. Critical Infrastructure Protection II describes original research results and innovative applications in the interdisciplinary field of critical infrastructure protection. Also, it highlights the importance of weaving science, technology and policy in crafting sophisticated, yet practical, solutions that will help secure information, computer and network assets in the various critical infrastructure sectors. Areas of coverage include: - Themes and Issues - Infrastructure Security - Control Systems Security - Security Strategies - Infrastructure Interdependencies - Infrastructure Modeling and Simulation This book is the second volume in the annual series produced by the International Federation for Information Processing (IFIP) Working Group 11.10 on Critical Infrastructure Protection, an international community of scientists, engineers, practitioners and policy makers dedicated to advancing research, development and implementation efforts focused on infrastructure protection. The book contains a selection of twenty edited papers from the Second Annual IFIP WG 11.10 International Conference on Critical Infrastructure Protection held at George Mason University, Arlington, Virginia, USA in the spring of 2008.
Abstracting application deployment on Cloud infrastructures
NASA Astrophysics Data System (ADS)
Aiftimiei, D. C.; Fattibene, E.; Gargana, R.; Panella, M.; Salomoni, D.
2017-10-01
Deploying a complex application on a Cloud-based infrastructure can be a challenging task. In this contribution we present an approach for Cloud-based deployment of applications and its present or future implementation in the framework of several projects, such as “!CHAOS: a cloud of controls” [1], a project funded by MIUR (Italian Ministry of Research and Education) to create a Cloud-based deployment of a control system and data acquisition framework, “INDIGO-DataCloud” [2], an EC H2020 project targeting among other things high-level deployment of applications on hybrid Clouds, and “Open City Platform” [3], an Italian project aiming to provide open Cloud solutions for Italian Public Administrations. We chose to use an orchestration service to hide the complex deployment of the application components, and to build an abstraction layer on top of the orchestration one. Through the Heat [4] orchestration service, we prototyped a dynamic, on-demand, scalable platform of software components, based on OpenStack infrastructures. On top of the orchestration service we developed a prototype of a web interface exploiting the Heat APIs. The user can start an instance of the application without knowledge of the underlying Cloud infrastructure and services. Moreover, the platform instance can be customized by choosing parameters related to the application such as the size of a File System or the number of instances of a NoSQL DB cluster. As soon as the desired platform is running, the web interface offers the possibility to scale some infrastructure components. In this contribution we describe the solution design and implementation, based on the application requirements, the details of the development of both the Heat templates and the web interface, together with possible exploitation strategies of this work in Cloud data centers.
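A hedged sketch of what such a web layer might do behind the scenes: submit a small Heat Orchestration Template to the Heat REST API to create a stack. The endpoint, token, image, and flavor values are placeholders, not details from the paper.

```python
# Sketch: create a Heat stack by POSTing a template to the OpenStack
# Orchestration (Heat) REST API. Endpoint, token, image, and flavor are
# placeholders; error handling is minimal for brevity.
import requests

HEAT_ENDPOINT = "https://cloud.example.org:8004/v1/TENANT_ID"   # placeholder
TOKEN = "gAAAA...keystone-token"                                 # placeholder

template = {
    "heat_template_version": "2015-04-30",
    "parameters": {
        "db_cluster_size": {"type": "number", "default": 3},    # user-chosen parameter
    },
    "resources": {
        "app_server": {
            "type": "OS::Nova::Server",
            "properties": {"image": "centos-7", "flavor": "m1.medium"},
        },
    },
}

resp = requests.post(
    f"{HEAT_ENDPOINT}/stacks",
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    json={"stack_name": "demo-platform",
          "template": template,
          "parameters": {"db_cluster_size": 5}},
    timeout=30,
)
resp.raise_for_status()
print("Stack created:", resp.json()["stack"]["id"])
```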
Signature scheme based on bilinear pairs
NASA Astrophysics Data System (ADS)
Tong, Rui Y.; Geng, Yong J.
2013-03-01
An identity-based signature scheme based on bilinear pairings is proposed. The scheme uses the user's identity information, such as an email address, IP address, or telephone number, as the public key, eliminating the cost of establishing and managing a public key infrastructure. By generating the user's private key within the certificateless public key cryptography (CL-PKC) framework, it also avoids the problem of the private key generation center forging signatures.
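The abstract gives no details of the construction. For illustration only, the following shows the standard structure of a pairing-based identity-based signature in the style of Cha and Cheon; it is a representative scheme of this family under the stated notation, not necessarily the scheme proposed in the paper.

```latex
% Representative pairing-based identity-based signature (Cha-Cheon style),
% shown for illustration; e is a bilinear pairing, P a group generator,
% m the message. Not necessarily the exact scheme of the paper above.
\begin{align*}
\textbf{Setup: }   & \text{master secret } s \in \mathbb{Z}_q^*,\quad
                     P_{\mathrm{pub}} = sP,\quad \text{hash functions } H_1, H_2.\\
\textbf{Extract: } & Q_{\mathrm{ID}} = H_1(\mathrm{ID}),\quad
                     d_{\mathrm{ID}} = s\,Q_{\mathrm{ID}}
                     \quad (\mathrm{ID}\ \text{may be an email address, IP address, or phone number}).\\
\textbf{Sign: }    & \text{pick random } r \in \mathbb{Z}_q^*,\quad
                     U = r\,Q_{\mathrm{ID}},\quad h = H_2(m, U),\quad
                     V = (r + h)\,d_{\mathrm{ID}},\quad \sigma = (U, V).\\
\textbf{Verify: }  & \text{accept iff } e(P, V) = e\big(P_{\mathrm{pub}},\, U + h\,Q_{\mathrm{ID}}\big).
\end{align*}
```

The verification works because e(P, V) = e(P, (r+h) s Q_ID) = e(sP, (r+h) Q_ID) = e(P_pub, U + h Q_ID).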
McDonald, Elizabeth; Bailie, Ross; Grace, Jocelyn; Brewster, David
2009-01-01
Background Despite Australia's wealth, poor growth is common among Aboriginal children living in remote communities. An important underlying factor for poor growth is the unhygienic state of the living environment in these communities. This study explores the physical and social barriers to achieving safe levels of hygiene for these children. Methods A mixed qualitative and quantitative approach included a community level cross-sectional housing infrastructure survey, focus groups, case studies and key informant interviews in one community. Results We found that a combination of crowding, non-functioning essential housing infrastructure and poor standards of personal and domestic hygiene underlie the high burden of infection experienced by children in this remote community. Conclusion There is a need to address policy and the management of infrastructure, as well as key parenting and childcare practices that allow the high burden of infection among children to persist. The common characteristics of many remote Aboriginal communities in Australia suggest that these findings may be more widely applicable. PMID:19761623
Quantum communication and information processing
NASA Astrophysics Data System (ADS)
Beals, Travis Roland
Quantum computers enable dramatically more efficient algorithms for solving certain classes of computational problems, but, in doing so, they create new problems. In particular, Shor's Algorithm allows for efficient cryptanalysis of many public-key cryptosystems. As public key cryptography is a critical component of present-day electronic commerce, it is crucial that a working, secure replacement be found. Quantum key distribution (QKD), first developed by C.H. Bennett and G. Brassard, offers a partial solution, but many challenges remain, both in terms of hardware limitations and in designing cryptographic protocols for a viable large-scale quantum communication infrastructure. In Part I, I investigate optical lattice-based approaches to quantum information processing. I look at details of a proposal for an optical lattice-based quantum computer, which could potentially be used for both quantum communications and for more sophisticated quantum information processing. In Part III, I propose a method for converting and storing photonic quantum bits in the internal state of periodically-spaced neutral atoms by generating and manipulating a photonic band gap and associated defect states. In Part II, I present a cryptographic protocol which allows for the extension of present-day QKD networks over much longer distances without the development of new hardware. I also present a second, related protocol which effectively solves the authentication problem faced by a large QKD network, thus making QKD a viable, information-theoretic secure replacement for public key cryptosystems.
Kawamoto, Kensaku; Lobach, David F; Willard, Huntington F; Ginsburg, Geoffrey S
2009-03-23
In recent years, the completion of the Human Genome Project and other rapid advances in genomics have led to increasing anticipation of an era of genomic and personalized medicine, in which an individual's health is optimized through the use of all available patient data, including data on the individual's genome and its downstream products. Genomic and personalized medicine could transform healthcare systems and catalyze significant reductions in morbidity, mortality, and overall healthcare costs. Critical to the achievement of more efficient and effective healthcare enabled by genomics is the establishment of a robust, nationwide clinical decision support infrastructure that assists clinicians in their use of genomic assays to guide disease prevention, diagnosis, and therapy. Requisite components of this infrastructure include the standardized representation of genomic and non-genomic patient data across health information systems; centrally managed repositories of computer-processable medical knowledge; and standardized approaches for applying these knowledge resources against patient data to generate and deliver patient-specific care recommendations. Here, we provide recommendations for establishing a national decision support infrastructure for genomic and personalized medicine that fulfills these needs, leverages existing resources, and is aligned with the Roadmap for National Action on Clinical Decision Support commissioned by the U.S. Office of the National Coordinator for Health Information Technology. Critical to the establishment of this infrastructure will be strong leadership and substantial funding from the federal government. A national clinical decision support infrastructure will be required for reaping the full benefits of genomic and personalized medicine. Essential components of this infrastructure include standards for data representation; centrally managed knowledge repositories; and standardized approaches for leveraging these knowledge repositories to generate patient-specific care recommendations at the point of care.
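A toy sketch of the "apply knowledge repository rules to standardized patient data" idea follows; it is illustrative only, and the gene and drug examples are common pharmacogenomic rules rather than content from the paper.

```python
# Illustrative sketch: apply computer-processable knowledge rules to a
# standardized patient record to produce patient-specific recommendations.
# The genotype/drug pairs are common pharmacogenomic examples, used here
# only to show the pattern.
def pharmacogenomic_rules(patient):
    """Return care recommendations triggered by the patient's genomic data."""
    recs = []
    if patient.get("CYP2C19") in {"*2/*2", "*2/*3"}:   # poor-metabolizer genotypes
        recs.append("Consider an alternative to clopidogrel (reduced activation).")
    if patient.get("TPMT") == "deficient":
        recs.append("Reduce thiopurine starting dose and monitor closely.")
    return recs

patient_record = {"id": "12345", "CYP2C19": "*2/*2", "TPMT": "normal"}
for recommendation in pharmacogenomic_rules(patient_record):
    print(recommendation)
```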
Consolidation and development roadmap of the EMI middleware
NASA Astrophysics Data System (ADS)
Kónya, B.; Aiftimiei, C.; Cecchi, M.; Field, L.; Fuhrmann, P.; Nilsen, J. K.; White, J.
2012-12-01
Scientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often incompatible middleware solutions. The European Middleware Initiative (EMI) is a collaboration of the major European middleware providers ARC, dCache, gLite and UNICORE. EMI aims to: deliver a consolidated set of middleware components for deployment in EGI, PRACE and other Distributed Computing Infrastructures; extend the interoperability between grids and other computing infrastructures; strengthen the reliability of the services; establish a sustainable model to maintain and evolve the middleware; fulfil the requirements of the user communities. This paper presents the consolidation and development objectives of the EMI software stack covering the last two years. The EMI development roadmap is introduced along the four technical areas of compute, data, security and infrastructure. The compute area plan focuses on consolidation of standards and agreements through a unified interface for job submission and management, a common format for accounting, the wide adoption of GLUE schema version 2.0 and the provision of a common framework for the execution of parallel jobs. The security area is working towards a unified security model and lowering the barriers to Grid usage by allowing users to gain access with their own credentials. The data area is focusing on implementing standards to ensure interoperability with other grids and industry components and to reuse already existing clients in operating systems and open source distributions. One of the highlights of the infrastructure area is the consolidation of the information system services via the creation of a common information backbone.
The Broadband Imperative: Recommendations to Address K-12 Education Infrastructure Needs
ERIC Educational Resources Information Center
Fox, Christine; Waters, John; Fletcher, Geoff; Levin, Douglas
2012-01-01
It is a simple fact that access to high-speed broadband is now as vital a component of K-12 school infrastructure as electricity, air conditioning, and heating. The same tools and resources that have transformed educators' personal, civic, and professional lives must be part of learning experiences intended to prepare today's students for college…
ERIC Educational Resources Information Center
Lauzon, Allan C.
2013-01-01
This paper argues that after-school programmes need to be considered an essential part of lifelong learning infrastructure, particularly in light of the dominance of the economic discourse in both lifelong learning literature and the initial schooling literature. The paper, which is based upon existing literature, begins by providing an overview…
ERIC Educational Resources Information Center
Rowley, Thomas D., Ed.; And Others
This book addresses the need for research information that can be used as a foundation for rural development policy. Part I deals with the four components of rural development: education (human capital), entrepreneurship, physical infrastructure, and social infrastructure. Part II examines analytic methods of measuring rural development efforts,…
The BACnet Campus Challenge - Part 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masica, Ken; Tom, Steve
Here, the BACnet protocol was designed to achieve interoperability among building automation vendors and evolve over time to include new functionality as well as support new communication technologies such as the Ethernet and IP protocols as they became prevalent and economical in the marketplace. For large multi-building, multi-vendor campus environments, standardizing on the BACnet protocol as an implementation strategy can be a key component in meeting the challenge of an interoperable, flexible, and scalable building automation system. The interoperability of BACnet is especially important when large campuses with legacy equipment have DDC upgrades to facilities performed over different time frames and use different contractors that install equipment from different vendors under the guidance of different campus HVAC project managers. In these circumstances, BACnet can serve as a common foundation for interoperability when potential variability exists in approaches to the design-build process by numerous parties over time. Likewise, BACnet support for a range of networking protocols and technologies can be a key strategy for achieving flexible and scalable automation systems as campuses and enterprises expand networking infrastructures using standard interoperable protocols like IP and Ethernet.
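A minimal sketch of talking to BACnet devices programmatically, assuming the third-party BAC0 Python library (API names may differ between versions); the network addresses and object identifiers are placeholders for a hypothetical campus controller.

```python
# Hedged sketch using the BAC0 library: discover devices on a BACnet/IP
# network and read one point. Addresses and object identifiers are
# placeholders for a hypothetical controller; verify calls against the
# installed BAC0 version.
import BAC0

bacnet = BAC0.lite(ip="10.0.0.5/24")     # workstation address on the BACnet/IP subnet
devices = bacnet.whois()                 # broadcast Who-Is; controllers answer with I-Am
print("Devices found:", devices)

# Read the present value of analog input 1 on the controller at 10.0.0.21
supply_temp = bacnet.read("10.0.0.21 analogInput 1 presentValue")
print("Supply air temperature:", supply_temp)

bacnet.disconnect()
```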
Generation and analysis of correlated pairs of photons on board a nanosatellite
NASA Astrophysics Data System (ADS)
Chandrasekara, R.; Tang, Z.; Tan, Y. C.; Cheng, C.; Sha, L.; Hiang, G. C.; Oi, D.; Ling, A.
2016-10-01
Progress in quantum computers and their threat to conventional public key infrastructure is driving new forms of encryption. Quantum Key Distribution (QKD) using entangled photons is a promising approach. A global QKD network can be achieved using satellites equipped with optical links. Despite numerous proposals, actual experimental work demonstrating relevant entanglement technology in space is limited due to the prohibitive cost of traditional satellite development. To make progress, we have designed a photon pair source that can operate on modular spacecraft called CubeSats. We report the in-orbit operation of the photon pair source on board an orbiting CubeSat and demonstrate pair generation and polarisation correlation under space conditions. The in-orbit polarisation correlations are compatible with ground-based tests, validating our design. This successful demonstration is a major experimental milestone towards a space-based quantum network. Our approach provides a cost-effective method for proving the space-worthiness of critical components used in entangled photon technology. We expect that it will also accelerate efforts to probe the overlap between quantum and relativistic models of physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roop, S.S.; Warner, J.E.; Rosa, D.
1998-11-01
Railroads continue to play an important role in the Texas transportation system. This study addresses the potential for implementing a rail planning process in the Texas Department of Transportation. The study is documented in three reports, produced in coordinated and parallel efforts by the Center for Transportation Research and the Texas Transportation Institute. This report documents the work performed by TTI, whereby a rail planning framework is presented which formalizes the planning process and presents the key elements as a series of discrete and logical steps. These steps may be used to guide TxDOT in the formation of goals, identification of issues and affected parties, selection of appropriate analytical methodologies, location of data sources, and implementation of results. The report also presents an in-depth discussion of several key issues facing transportation agencies. These include rail line abandonment, intermodal service planning, and urban rail rationalization. A discussion of the Texas rail system covers the Class 1 railroads, shortline railroads, Amtrak, and the Mexican rail system.
New developments in instrumentation at the W. M. Keck Observatory
NASA Astrophysics Data System (ADS)
Adkins, Sean M.; Armandroff, Taft E.; Fitzgerald, Michael P.; Johnson, James; Larkin, James E.; Lewis, Hilton A.; Martin, Christopher; Matthews, Keith Y.; Prochaska, J. X.; Wizinowich, Peter
2014-07-01
The W. M. Keck Observatory continues to develop new capabilities in support of our science driven strategic plan which emphasizes leadership in key areas of observational astronomy. This leadership is a key component of the scientific productivity of our observing community and depends on our ability to develop new instrumentation, upgrades to existing instrumentation, and upgrades to supporting infrastructure at the observatory. In this paper we describe the as measured performance of projects completed in 2014 and the expected performance of projects currently in the development or construction phases. Projects reaching completion in 2014 include a near-IR tip/tilt sensor for the Keck I adaptive optics system, a new center launch system for the Keck II laser guide star facility, and NIRES, a near-IR Echelle spectrograph for the Keck II telescope. Projects in development include a new seeing limited integral field spectrograph for the visible wavelength range called the Keck Cosmic Web Imager, a deployable tertiary mirror for the Keck I telescope, upgrades to the spectrograph detector and the imager of the OSIRIS instrument, and an upgrade to the telescope control systems on both Keck telescopes.
Interactive Television: The State of the Industry.
ERIC Educational Resources Information Center
Galbreath, Jeremy
1996-01-01
Discusses interactive television in the context of the developing information superhighway. Topics include potential applications, including video on demand; telecommunications companies; digital media technologies; content; regulatory issues; the nature of technology users; origination components; distribution/infrastructure components;…
US-CERT Control System Center Input/Output (I/O) Conceptual Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2005-02-01
This document was prepared for the US-CERT Control Systems Center of the National Cyber Security Division (NCSD) of the Department of Homeland Security (DHS). DHS has been tasked under the Homeland Security Act of 2002 to coordinate the overall national effort to enhance the protection of the national critical infrastructure. Homeland Security Presidential Directive HSPD-7 directs the federal departments to identify and prioritize critical infrastructure and protect it from terrorist attack. The US-CERT National Strategy for Control Systems Security was prepared by the NCSD to address the control system security component addressed in the National Strategy to Secure Cyberspace and the National Strategy for the Physical Protection of Critical Infrastructures and Key Assets. The US-CERT National Strategy for Control Systems Security identified five high-level strategic goals for improving cyber security of control systems; the I/O upgrade described in this document supports these goals. The vulnerability assessment Test Bed, located in the Information Operations Research Center (IORC) facility at Idaho National Laboratory (INL), consists of a cyber test facility integrated with multiple test beds that simulate the nation's critical infrastructure. The fundamental mission of the Test Bed is to provide industry owner/operators, system vendors, and multi-agency partners of the INL National Security Division a platform for vulnerability assessments of control systems. The Input/Output (I/O) upgrade to the Test Bed (see Work Package 3.1 of the FY-05 Annual Work Plan) will provide for the expansion of assessment capabilities within the IORC facility. It will also provide capabilities to connect test beds within the Test Range and other Laboratory resources. This will allow real time I/O data input and communication channels for full replications of control systems (Process Control Systems [PCS], Supervisory Control and Data Acquisition Systems [SCADA], and components). This will be accomplished through the design and implementation of a modular infrastructure of control system, communications, networking, computing and associated equipment, and measurement/control devices. The architecture upgrade will provide a flexible patching system providing a quick 'plug and play' configuration through various communication paths to gain access to live I/O running over specific protocols. This will allow for in-depth assessments of control systems in a true-to-life environment. The full I/O upgrade will be completed through a two-phased approach. Phase I, funded by DHS, expands the capabilities of the Test Bed by developing an operational control system in two functional areas, the Science & Technology Applications Research (STAR) Facility and the expansion of various portions of the Test Bed. Phase II (see Appendix A), funded by other programs, will complete the full I/O upgrade to the facility.
CEMS: Building a Cloud-Based Infrastructure to Support Climate and Environmental Data Services
NASA Astrophysics Data System (ADS)
Kershaw, P. J.; Curtis, M.; Pechorro, E.
2012-04-01
CEMS, the facility for Climate and Environmental Monitoring from Space, is a new joint collaboration between academia and industry to bring together their collective expertise to support research into climate change and provide a catalyst for growth in related Earth Observation (EO) technologies and services in the commercial sector. A recent major investment by the UK Space Agency has made possible the development of a dedicated facility at ISIC, the International Space Innovation Centre at Harwell in the UK. CEMS has a number of key elements: the provision of access to large-volume EO and climate datasets co-located with high performance computing facilities; a flexible infrastructure to support the needs of research projects in the academic community and new business opportunities for commercial companies. Expertise and tools for scientific data quality and integrity are another essential component, giving users confidence and transparency in its data, services and products. Central to the development of this infrastructure is the utilisation of cloud-based technology: multi-tenancy and the dynamic provision of resources are key characteristics to exploit in order to support the range of organisations using the facilities and the varied use cases. The hosting of processing services and applications next to the data within the CEMS facility is another important capability. With the expected exponential increase in data volumes within the climate science and EO domains it is becoming increasingly impracticable for organisations to retrieve this data over networks and provide the necessary storage. Consider, for example, the factor of ~20 increase in data volumes expected for the ESA Sentinel missions over the equivalent Envisat instruments. We explore the options for the provision of a hybrid community/private cloud looking at offerings from the commercial sector and developments in the Open Source community. Building on this virtualisation layer, a further core services tier will support and serve applications as part of a service oriented architecture. We consider the constituent services in this layer to support access to the data, data processing and the orchestration of workflows.
NASA Astrophysics Data System (ADS)
Wyborn, L. A.; Woodcock, R.
2013-12-01
One of the greatest drivers for change in the way scientific research is undertaken in Australia was the development of the Australian eResearch Infrastructure, which was coordinated by the then Australian Government Department of Innovation, Industry, Science and Research. There were two main tranches of funding: the 2007-2013 National Collaborative Research Infrastructure Strategy (NCRIS) and the 2009 Education and Investment Framework (EIF) Super Science Initiative. Investments were in two areas, the Australian e-Research Infrastructure and domain-specific capabilities; the combined investment in both is $1,452M, with at least $456M being invested in eResearch infrastructure. NCRIS was specifically designed as a community-guided process to provide researchers, both academic and government, with major research facilities, supporting infrastructures and networks necessary for world-class research. Extensive community engagement was sought to inform decisions on where Australia could best make strategic infrastructure investments to further develop its research capacity and improve research outcomes over the next 5 to 10 years. The current (2007-2014) Australian e-Research Infrastructure has two components: 1. The National eResearch physical infrastructure, which includes two petascale HPC facilities (one in Canberra and one in Perth), a 10 Gbps national network (National Research Network), a national data storage infrastructure comprising 8 multi-petabyte data stores, and shared access methods (Australian Access Federation). 2. A second component is focused on research integration infrastructures and includes the Australian National Data Service, which is concerned with better management, description and access to distributed research data in Australia, and the National eResearch Collaboration Tools and Resources (NeCTAR) project. NeCTAR is centred on developing problem-oriented digital laboratories which provide better and coordinated access to research tools, data environments and workflows. The eResearch Infrastructure Stack is designed to support 12 individual domain-specific capabilities. Four are relevant to the Earth and Space Sciences: (1) AuScope (a national Earth Science Infrastructure Program), (2) the Integrated Marine Observing System (IMOS), (3) the Terrestrial Ecosystems Research Network (TERN) and (4) the Australian Urban Research Infrastructure Network (AURIN). The two main research integration infrastructures, ANDS and NeCTAR, are seen as pivotal to the success of the Australian eResearch Infrastructure. Without them, there was a risk that the investments in new computers and data storage would provide physical infrastructure, but few would come to use it as the skills barriers to entry were too high. ANDS focused on transforming Australia's research data environment. Its flagship is Research Data Australia, an Internet-based discovery service designed to provide rich connections between data, projects, researchers and institutions, and promote visibility of Australian research data collections in search engines. NeCTAR focused on building eResearch infrastructure in four areas: virtual laboratories, tools, a federated research cloud and a hosting service. Combined, ANDS and NeCTAR are ensuring that people ARE coming and ARE using the physical infrastructures that were built.
Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S
2014-12-01
We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.
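The asynchronous pipeline idea can be sketched with toy components that exchange plain dictionaries over asyncio queues; the weather and power-grid stand-ins below are illustrative only and bear no relation to VASA's high-fidelity models.

```python
# Toy sketch of an asynchronous pipeline of simulation components sharing a
# common exchange format (plain dicts on asyncio queues). Illustration only;
# the weather/power-grid components are trivial stand-ins.
import asyncio

async def weather_component(out_q):
    for hour, wind in enumerate([40, 65, 95, 120]):        # storm wind speeds (mph)
        await out_q.put({"hour": hour, "wind_mph": wind})   # common exchange format
    await out_q.put(None)                                   # end-of-run marker

async def power_grid_component(in_q, out_q):
    while (msg := await in_q.get()) is not None:
        outage_fraction = min(1.0, max(0.0, (msg["wind_mph"] - 60) / 80))
        await out_q.put({**msg, "outage_fraction": round(outage_fraction, 2)})
    await out_q.put(None)

async def workbench_view(in_q):
    while (msg := await in_q.get()) is not None:
        print(msg)                                          # a real UI would map this spatially

async def main():
    q1, q2 = asyncio.Queue(), asyncio.Queue()
    await asyncio.gather(weather_component(q1),
                         power_grid_component(q1, q2),
                         workbench_view(q2))

asyncio.run(main())
```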
The StratusLab cloud distribution: Use-cases and support for scientific applications
NASA Astrophysics Data System (ADS)
Floros, E.
2012-04-01
The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, the Claudia service manager and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely Computing (life-cycle management of virtual machines), Storage, Appliance management and Networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites; a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities like elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. In this direction, we have developed and currently provide support for setting up general purpose computing solutions like Hadoop, MPI and Torque clusters. Regarding scientific applications, the project is collaborating closely with the bioinformatics community in order to prepare VM appliances and deploy optimized services for bioinformatics applications. In a similar manner, additional scientific disciplines like Earth Science can take advantage of StratusLab cloud solutions. Interested users are welcome to join StratusLab's user community by getting access to the reference cloud services deployed by the project and offered to the public.
NASA Technical Reports Server (NTRS)
VanSuetendael, RIchard; Hayes, Alan; Birr, Richard
2008-01-01
Suborbital space flight and space tourism are new potential markets that could significantly impact the National Airspace System (NAS). Numerous private companies are developing space flight capabilities to capture a piece of an emerging commercial space transportation market. These entrepreneurs share a common vision that sees commercial space flight as a profitable venture. Additionally, U.S. space exploration policy and national defense will impose significant additional demands on the NAS. Air traffic service providers must allow all users fair access to limited airspace, while ensuring that the highest levels of safety, security, and efficiency are maintained. The FAA's Next Generation Air Transportation System (NextGen) will need to accommodate spacecraft transitioning to and from space through the NAS. To accomplish this, space and air traffic operations will need to be seamlessly integrated under some common communications, navigation and surveillance (CNS) infrastructure. As part of NextGen, the FAA has been developing the Automatic Dependent Surveillance Broadcast (ADS-B) which utilizes the Global Positioning System (GPS) to track and separate aircraft. Another key component of NextGen, System-Wide Information Management/ Network Enabled Operations (SWIM/NEO), is an open architecture network that will provide NAS data to various customers, system tools and applications. NASA and DoD are currently developing a space-based range (SBR) concept that also utilizes GPS, communications satellites and other CNS assets. The future SBR will have very similar utility for space operations as ADS-B and SWIM has for air traffic. Perhaps the FAA, NASA, and DoD should consider developing a common space-based CNS infrastructure to support both aviation and space transportation operations. This paper suggests specific areas of research for developing a CNS infrastructure that can accommodate spacecraft and other new types of vehicles as an integrated part of NextGen.
Global sand trade is paving the way for a tragedy of the sand commons
NASA Astrophysics Data System (ADS)
Torres, A.; Brandt, J.; Lear, K.; Liu, J.
2016-12-01
In the first 40 years of the 21st century, planet Earth is highly likely to experience more urban land expansion than in all of history, an increase in transportation infrastructure by more than a third, and a great variety of land reclamation projects. While scientists are beginning to quantify the deep imprint of human infrastructure on biodiversity at large scales, its off-site impacts and linkages to sand mining and trade have been largely ignored. Sand is the most widely used building material in the world. With an ever-increasing demand for this resource, sand is being extracted at rates that far exceed its replenishment, and is becoming increasingly scarce. This has already led to conflicts around the world and will likely lead to a "tragedy of the sand commons" if sustainable sand mining and trade cannot be achieved. We investigate the environmental and socioeconomic interactions over large distances (telecouplings) of infrastructure development and sand mining and trade across diverse systems through transdisciplinary research and the recently proposed telecoupling framework. Our research is generating a thorough understanding of the telecouplings driven by an increasing demand for sand. In particular, we address three main research questions: 1) Where are the conflicts related to sand mining occurring?; 2) What are the major "sending" and "receiving" systems of sand?; and 3) What are the main components (e.g. causes, effects, agents, etc.) of telecoupled systems involving sand mining and trade? Our results highlight the role of global sand trade as a driver of environmental degradation that threatens the integrity of natural systems and their capacity to deliver key ecosystem services. In addition, infrastructure development and sand mining and trade have important implications for other sustainability challenges such as over-fishing and global warming. This knowledge will help to identify opportunities and tools to better promote a more sustainable use of sand, ultimately helping avoid a "tragedy of the sand commons".
Maltz, Jonathan; C Ng, Thomas; Li, Dustin; Wang, Jian; Wang, Kang; Bergeron, William; Martin, Ron; Budinger, Thomas
2005-01-01
In mass trauma situations, emergency personnel are challenged with the task of prioritizing the care of many injured victims. We propose a trauma patient tracking system (TPTS) where first-responders tag all patients with a wireless monitoring device that continuously reports the location of each patient. The system can be used not only to prioritize patient care, but also to determine the time taken for each patient to receive treatment. This is important in training emergency personnel and in identifying bottlenecks in the disaster response process. In situations where biochemical agents are involved, a TPTS may be employed to determine sites of cross-contamination. In order to track patient location in both outdoor and indoor environments, we employ both Global Positioning System (GPS) and Television/ Radio Frequency (TVRF) technologies. Each patient tag employs IEEE 802.11 (Wi-Fi)/TCP/IP networking to communicate with a central server via any available Wi-Fi basestation. A key component to increase TPTS fault-tolerance is a mobile Wi-Fi basestation that employs redundant Internet connectivity to ensure that tags at the disaster scene can send information to the central server even when local infrastructure is unavailable for use. We demonstrate the robustness of the system in tracking multiple patients in a simulated trauma situation in an urban environment.
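A minimal sketch of the tag-to-server reporting path, assuming a simple HTTP interface (the URL and payload fields are illustrative, not the system's actual protocol):

```python
# Sketch: a patient tag posts its latest GPS fix to the central server over any
# reachable Wi-Fi/IP link. The endpoint and payload fields are illustrative.
import time
import requests

SERVER_URL = "https://tracking.example.org/api/position"   # placeholder endpoint

def report_position(tag_id, lat, lon, triage_level):
    payload = {
        "tag_id": tag_id,
        "lat": lat,
        "lon": lon,
        "triage": triage_level,      # e.g., "immediate", "delayed", "minor"
        "timestamp": time.time(),
    }
    resp = requests.post(SERVER_URL, json=payload, timeout=5)
    resp.raise_for_status()

report_position("TAG-017", 37.8715, -122.2730, "immediate")
```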
IEEE802.15.6 NB portable BAN clinic and M2M international standardization.
Kuroda, Masahiro; Nohara, Yasunobu
2013-01-01
The increase of non-communicable diseases (NCDs) will change the direction of health services to emphasize the role of preventive medicine in healthcare. The first short-range medical body area network (BAN) standard, IEEE802.15.6, is expected to be used for secure and user-friendly sensor devices in portable medical equipment. A BAN is an enabler for uploading medical data to a backend system for remote diagnosis and treatment. Machine-to-Machine (M2M) infrastructure is also a key technology for providing flexible and affordable services that extend electronic health record (EHR) systems. This paper proposes a BAN-based portable clinic that collects health-check data from user-friendly medical devices and sensors and sends the data to a local backend server, and it evaluates the clinic in actual field use. We discuss issues experienced during deployment of the system and focus on integrating it into upcoming healthcare M2M infrastructure to achieve affordable and dependable clinic services. We explain the components and workflow of the clinic and the system model. The system is set up at a temporary health center and has a network link to a remote medical help center. The paper concludes with our plan to introduce the system as a contribution to internationally standardized preventive medicine.
Global information infrastructure.
Lindberg, D A
1994-01-01
The High Performance Computing and Communications Program (HPCC) is a multiagency federal initiative under the leadership of the White House Office of Science and Technology Policy, established by the High Performance Computing Act of 1991. It has been assigned a critical role in supporting the international collaboration essential to science and to health care. Goals of the HPCC are to extend USA leadership in high performance computing and networking technologies; to improve technology transfer for economic competitiveness, education, and national security; and to provide a key part of the foundation for the National Information Infrastructure. The first component of the National Institutes of Health to participate in the HPCC, the National Library of Medicine (NLM), recently issued a solicitation for proposals to address a range of issues, from privacy to 'testbed' networks, 'virtual reality,' and more. These efforts will build upon the NLM's extensive outreach program and other initiatives, including the Unified Medical Language System (UMLS), MEDLARS, and Grateful Med. New Internet search tools are emerging, such as Gopher and 'Knowbots'. Medicine will succeed in developing future intelligent agents to assist in utilizing computer networks. Our ability to serve patients is so often restricted by lack of information and knowledge at the time and place of medical decision-making. The new technologies, properly employed, will also greatly enhance our ability to serve the patient.
Multisensor system for the protection of critical infrastructure of a seaport
NASA Astrophysics Data System (ADS)
Kastek, Mariusz; Dulski, Rafał; Zyczkowski, Marek; Szustakowski, Mieczysław; Trzaskawka, Piotr; Ciurapinski, Wiesław; Grelowska, Grazyna; Gloza, Ignacy; Milewski, Stanislaw; Listewnik, Karol
2012-06-01
There are many separate infrastructural objects within a harbor area that may be considered "critical", such as gas and oil terminals or anchored naval vessels. Those objects require special protection, including security systems capable of monitoring both surface and underwater areas, because an intrusion into the protected area may be attempted using small surface vehicles (boats, kayaks, rafts, floating devices with weapons and explosives) as well as underwater ones (manned or unmanned submarines, scuba divers). The paper presents the concept of a multisensor security system for harbor protection, capable of complex monitoring of selected critical objects within the protected area. The proposed system consists of a command centre and several different sensors deployed in key areas, providing effective protection from land and sea, with special attention focused on monitoring of the underwater zone. The initial design of such a system is presented, along with its configuration and initial tests of selected components. Protection of the surface area is based on a medium-range radar together with LLTV and infrared cameras. The underwater zone is monitored by a sonar and by acoustic and magnetic barriers, connected into an integrated monitoring system. Theoretical analyses concerning the detection of fast, small surface objects (such as RIB boats) by a camera system, together with real test results in various weather conditions, are also presented.
Big data analytics for the Future Circular Collider reliability and availability studies
NASA Astrophysics Data System (ADS)
Begy, Volodimir; Apollonio, Andrea; Gutleber, Johannes; Martin-Marquez, Manuel; Niemi, Arto; Penttinen, Jussi-Pekka; Rogova, Elena; Romero-Marin, Antonio; Sollander, Peter
2017-10-01
Responding to the European Strategy for Particle Physics update 2013, the Future Circular Collider study explores scenarios of circular frontier colliders for the post-LHC era. One branch of the study assesses industrial approaches to model and simulate the reliability and availability of the entire particle collider complex based on the continuous monitoring of CERN’s accelerator complex operation. The modelling is based on an in-depth study of the CERN injector chain and LHC, and is carried out as a cooperative effort with the HL-LHC project. The work so far has revealed that a major challenge is obtaining accelerator monitoring and operational data of sufficient quality to automate the data quality annotation and the calculation of reliability distribution functions for systems, subsystems and components where needed. A flexible data management and analytics environment that permits integrating the heterogeneous data sources, the domain-specific data quality management algorithms and the reliability modelling and simulation suite is a key enabler for completing this accelerator operation study. This paper describes the Big Data infrastructure and analytics ecosystem that has been put in operation at CERN, serving as the foundation on which reliability and availability analysis and simulations can be built. This contribution focuses on data infrastructure and data management aspects and presents case studies chosen for its validation.
Addressing the gap between public health emergency planning and incident response
Freedman, Ariela M; Mindlin, Michele; Morley, Christopher; Griffin, Meghan; Wooten, Wilma; Miner, Kathleen
2013-01-01
Objectives: Although widely adopted since 9/11, the Incident Command System (ICS) and the Emergency Operations Center (EOC) are relatively new concepts in public health, which typically operates using less hierarchical and more collaborative approaches to organizing staff. This paper describes the 2009 H1N1 influenza outbreak in San Diego County to explore the use of ICS and EOC in public health emergency response. Methods: This study was conducted using critical case study methodology consisting of document review and 18 key-informant interviews with individuals who played key roles in planning and response. Thematic analysis was used to analyze the data. Results: Several broad elements emerged as key to ensuring an effective and efficient public health response: 1) developing a plan for emergency response; 2) establishing the framework for an ICS; 3) creating the infrastructure to support response; 4) supporting a workforce trained on emergency response roles, responsibilities, and equipment; and 5) conducting regular preparedness exercises. Conclusions: This research demonstrates the value of the investments made and shows that effective emergency preparedness requires sustained efforts to maintain personnel and material resources. By having the infrastructure and experience based on ICS and EOC, the public health system had the capability to surge up: to expand its day-to-day operation in a systematic and prolonged manner. None of these critical actions are possible without sustained funding for the public health infrastructure. Ultimately, this case study illustrates the importance of public health as a key leader in emergency response. PMID:28228983
Defining resilience within a risk-informed assessment framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coles, Garill A.; Unwin, Stephen D.; Holter, Gregory M.
2011-08-01
The concept of resilience is the subject of considerable discussion in academic, business, and governmental circles. The United States Department of Homeland Security for one has emphasised the need to consider resilience in safeguarding critical infrastructure and key resources. The concept of resilience is complex, multidimensional, and defined differently by different stakeholders. The authors contend that there is a benefit in moving from discussing resilience as an abstraction to defining resilience as a measurable characteristic of a system. This paper proposes defining resilience measures using elements of a traditional risk assessment framework to help clarify the concept of resilience and as a way to provide non-traditional risk information. The authors show various, diverse dimensions of resilience can be quantitatively defined in a common risk assessment framework based on the concept of loss of service. This allows the comparison of options for improving the resilience of infrastructure and presents a means to perform cost-benefit analysis. This paper discusses definitions and key aspects of resilience, presents equations for the risk of loss of infrastructure function that incorporate four key aspects of resilience that could prevent or mitigate that loss, describes proposed resilience factor definitions based on those risk impacts, and provides an example that illustrates how resilience factors would be calculated using a hypothetical scenario.
NASA Astrophysics Data System (ADS)
Tamura, Yoshinobu; Yamada, Shigeru
OSS (open source software) systems, which serve as key components of critical infrastructure in our social life, are still ever-expanding. In particular, embedded OSS systems have been gaining a lot of attention in the embedded system area, e.g., Android, BusyBox, TRON. However, poor handling of quality problems and customer support hinders the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for embedded OSS. Also, we analyze actual data of software failure-occurrence time-intervals to show numerical examples of software reliability assessment for embedded OSS. Moreover, we compare the proposed hazard rate model for embedded OSS with typical conventional hazard rate models by using goodness-of-fit comparison criteria. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.
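For readers unfamiliar with hazard-rate-based reliability assessment, the following sketch illustrates the general idea on made-up failure time-intervals: fit a constant-hazard (exponential) model and a Weibull model by maximum likelihood and compare their goodness of fit with AIC. It is not the flexible hazard rate model proposed in the paper; the data, grid ranges, and comparison criterion are assumptions.

```python
# Illustrative sketch only: fit two simple hazard-rate models (constant-hazard
# exponential vs. Weibull) to failure time-intervals and compare them by AIC.
# The data below are made-up placeholders, not the paper's dataset.
import math

intervals = [0.8, 1.3, 2.1, 3.4, 4.0, 5.7, 7.9, 9.2, 12.5, 15.1]  # hours, hypothetical

def weibull_loglik(data, shape, scale):
    return sum(
        math.log(shape / scale)
        + (shape - 1) * math.log(t / scale)
        - (t / scale) ** shape
        for t in data
    )

# Exponential model (constant hazard): MLE of the scale is the sample mean, shape = 1.
exp_scale = sum(intervals) / len(intervals)
exp_ll = weibull_loglik(intervals, 1.0, exp_scale)

# Weibull model: coarse grid search over shape and scale for the MLE.
best = max(
    ((weibull_loglik(intervals, k / 10, s / 10), k / 10, s / 10)
     for k in range(5, 40) for s in range(10, 300)),
    key=lambda x: x[0],
)
wb_ll, wb_shape, wb_scale = best

# Compare goodness of fit with AIC = 2*params - 2*log-likelihood (lower is better).
print("exponential AIC:", 2 * 1 - 2 * exp_ll)
print("weibull     AIC:", 2 * 2 - 2 * wb_ll, "shape", wb_shape, "scale", wb_scale)
```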
NASA Technical Reports Server (NTRS)
Baldwin, Richard S.; Guzik, Monica; Skierski, Michael
2011-01-01
As NASA prepares for its next era of manned spaceflight missions, advanced energy storage technologies are being developed and evaluated to address future mission needs and technical requirements and to provide new mission-enabling technologies. Cell-level components for advanced lithium-ion batteries possessing higher energy, more reliable performance and enhanced, inherent safety characteristics are actively under development within the NASA infrastructure. A key component for safe and reliable cell performance is the cell separator, which separates the two energetic electrodes and functions to prevent the occurrence of an internal short-circuit while enabling ionic transport. Recently, a new generation of co-extruded separator films has been developed by ExxonMobil Chemical and introduced into their battery separator product portfolio. Several grades of this new separator material have been evaluated with respect to dynamic mechanical properties and safety-related performance attributes. This paper presents the results of these evaluations in comparison to a current state-of-the-practice separator material. The results are discussed with respect to potential opportunities to enhance the inherent safety characteristics and reliability of future, advanced lithium-ion cell chemistries.
Bronen, Robin; Chapin, F Stuart
2013-06-04
This article presents governance and institutional strategies for climate-induced community relocations. In Alaska, repeated extreme weather events coupled with climate change-induced coastal erosion impact the habitability of entire communities. Community residents and government agencies concur that relocation is the only adaptation strategy that can protect lives and infrastructure. Community relocation stretches the financial and institutional capacity of existing governance institutions. Based on a comparative analysis of three Alaskan communities, Kivalina, Newtok, and Shishmaref, which have chosen to relocate, we examine the institutional constraints to relocation in the United States. We identify policy changes and components of a toolkit that can facilitate community-based adaptation when environmental events threaten people's lives and protection in place is not possible. Policy changes include amendment of the Stafford Act to include gradual geophysical processes, such as erosion, in the statutory definition of disaster and the creation of an adaptive governance framework to allow communities a continuum of responses from protection in place to community relocation. Key components of the toolkit are local leadership and integration of social and ecological well-being into adaptation planning.
DOT National Transportation Integrated Search
2014-12-01
This report summarizes potential climate change effects on the availability of water, land use, transportation infrastructure, and key natural resources in central New Mexico. This work is being done as part of the Interagency Transportation, Land Us...
A Training Framework for the Department of Defense Public Key Infrastructure
2001-09-01
and the growth of electronic commerce within the Department of Defense (DoD) has led to the development and implementation of the DoD Public Key...also grown within the Department of Defense. Electronic commerce and business to business transactions have become more commonplace and have
ERIC Educational Resources Information Center
US Department of Homeland Security, 2010
2010-01-01
Critical infrastructure and key resources (CIKR) provide the essential services that support basic elements of American society. Compromise of these CIKR could disrupt key government and industry activities, facilities, and systems, producing cascading effects throughout the Nation's economy and society and profoundly affecting the national…
Sea Level Rise Impacts on Wastewater Treatment Systems Along the U.S. Coasts
NASA Astrophysics Data System (ADS)
Hummel, Michelle A.; Berry, Matthew S.; Stacey, Mark T.
2018-04-01
As sea levels rise, coastal communities will experience more frequent and persistent nuisance flooding, and some low-lying areas may be permanently inundated. Critical components of lifeline infrastructure networks in these areas are also at risk of flooding, which could cause significant service disruptions that extend beyond the flooded zone. Thus, identifying critical infrastructure components that are exposed to sea level rise is an important first step in developing targeted investment in protective actions and enhancing the overall resilience of coastal communities. Wastewater treatment plants are typically located at low elevations near the coastline to minimize the cost of collecting consumed water and discharging treated effluent, which makes them particularly susceptible to coastal flooding. For this analysis, we used geographic information systems to assess the exposure of wastewater infrastructure to various sea level rise projections at the national level. We then estimated the number of people who would lose wastewater services, which could be more than five times as high as previous predictions of the number of people at risk of direct flooding due to sea level rise. We also performed a regional comparison of wastewater exposure to marine and groundwater flooding in the San Francisco Bay Area. Overall, this analysis highlights the widespread exposure of wastewater infrastructure in the United States and demonstrates that local disruptions to infrastructure networks may have far-ranging impacts on areas that do not experience direct flooding.
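A toy version of the exposure screening described above can be written in a few lines: given plant elevations and service populations (placeholder values, not the study's GIS data), count the plants lying below each sea level rise scenario and sum the population that would lose service.

```python
# Minimal sketch (assumed data, not the study's GIS workflow): flag wastewater
# plants whose site elevation falls below a sea level rise scenario and sum
# the population that would lose service.
plants = [  # name, site elevation (m above current high water), people served
    ("Plant A", 0.4, 120_000),
    ("Plant B", 1.1, 45_000),
    ("Plant C", 2.6, 300_000),
]

def exposure(plants, slr_m):
    exposed = [(name, served) for name, elev, served in plants if elev <= slr_m]
    return len(exposed), sum(served for _, served in exposed)

for scenario in (0.5, 1.0, 2.0):  # metres of sea level rise
    n, people = exposure(plants, scenario)
    print(f"SLR {scenario:.1f} m: {n} plants exposed, {people:,} people lose service")
```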
2007-05-01
services by implementing a disaster recovery plan to restore an organization’s critical business functions. (DRII 2004). ISO 27001 An information...the International Organization for Standardization (ISO)), the IT SSP bases the terms and definitions on those in the NIPP because the SSP is an annex...International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27000 Series, Information technology—Security
The NASA John C. Stennis Environmental Geographic Information System
NASA Technical Reports Server (NTRS)
Cohan, Tyrus
2002-01-01
Contents include the following: 1. Introduction: Background information. Initial applications of the SSC EGIS. Ongoing projects. 2. Scope of SSC EGIS. 3. Data layers. 4. Onsite operations. 5. Landcover classifications. 6. Current activities. 7. GIS/Key. 8. Infrastructure base map - development. 9. Infrastructure base map - application. 10. Uncorrected layer. 11. Corrected layer. 12. Emergency environmental response tool. 13. Future directions. 14. Bridging the gaps. 15. Environmental geographical information system.
2006-09-01
Telecommunications and Information Administration Telecom Telecommunications Telco Telecommunications Company VBIED Vehicle Borne Improvised Explosive... effect the damage to one system or sector would have on another. These concentrations of the sector’s key assets are becoming attractive targets even...critical U.S. infrastructures, such as the nation’s telephone system. Companies make it easier to control their networks from remote locations to save
NASA Astrophysics Data System (ADS)
Rothman, D. S.; Siraj, A.; Hughes, B.
2013-12-01
The international research community is currently in the process of developing new scenarios for climate change research. One component of these scenarios is the Shared Socio-economic Pathways (SSPs), which describe a set of possible future socioeconomic conditions. These are presented in narrative storylines with associated quantitative drivers. The core quantitative drivers include total population, average GDP per capita, educational attainment, and urbanization at the global, regional, and national levels. At the same time, there have been calls, particularly by the IAV community, for the SSPs to include additional quantitative information on other key social factors, such as income inequality, governance, health, and access to key infrastructures, which are discussed in the narratives. The International Futures system (IFs), based at the Pardee Center at the University of Denver, is able to provide forecasts of many of these indicators. IFs cannot use the SSP drivers as exogenous inputs, but we are able to create development pathways that closely reproduce the core quantitative drivers defined by the different SSPs, as well as incorporating assumptions on other key driving factors described in the qualitative narratives. In this paper, we present forecasts for additional quantitative indicators based upon the implementation of the SSP development pathways in IFs. These results will be of value to many researchers.
Surgical and anaesthetic capacity of hospitals in Malawi: key insights.
Henry, Jaymie Ang; Frenkel, Erica; Borgstein, Eric; Mkandawire, Nyengo; Goddia, Cyril
2015-10-01
Surgery is increasingly recognized as an important driver for health systems strengthening, especially in developing countries. To facilitate quality improvement initiatives, baseline knowledge of capacity for surgical, anaesthetic, emergency and obstetric care is critical. In partnership with the Malawi Ministry of Health, we quantified government hospitals' surgical capacity through workforce, infrastructure and health service delivery components. From November 2012 to January 2013, we surveyed district and mission hospital administrators and clinical staff onsite using a modified version of the Personnel, Infrastructure, Procedures, Equipment and Supplies (PIPES) tool from Surgeons OverSeas. We calculated the percentage of facilities demonstrating adequacy of the assessed components, surgical case rates, operating theatre density and surgical workforce density. Twenty-seven government hospitals were surveyed (90% of the district hospitals, all central hospitals). Of the surgical workforce surveyed (n = 370), 92.7% were non-surgeons and 77% were clinical officers (COs). Of the 109 anaesthesia providers, 95.4% were non-physician anaesthetists (anaesthesia COs or ACOs). Non-surgeons and ACOs were the only providers of surgical services and anaesthetic services in 85% and 88.9% of hospitals, respectively. No specialists served the district hospitals. All of the hospitals experienced periods without external electricity. Most did not always have a functioning generator (78.3% district, 25% central) or running water (82.6%, 50%). None of the district hospitals had an Intensive Care Unit (ICU). Cricothyroidotomy, bowel resection and cholecystectomy were not done in over two-thirds of hospitals. Every hospital provided general anaesthesia but some did not always have a functioning anaesthesia machine (52.2%, 50%). Surgical rate, operating theatre density and surgical workforce density per 100,000 population were 289.48-747.38 procedures, 0.98-5.41 operating theatres and 3.68 surgical providers, respectively. COs form the backbone of Malawi's surgical and anaesthetic workforce and should be supported with improvements in infrastructure as well as training and mentorship by specialist surgeons and anaesthetists. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2014; all rights reserved.
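The per-100,000 metrics reported above reduce to simple arithmetic. The sketch below shows the calculation with placeholder counts and population; the numbers are illustrative assumptions, not Malawi survey data.

```python
# Back-of-envelope sketch of the per-100,000 metrics the survey reports.
# Counts and population below are illustrative placeholders, not study data.
population = 15_000_000          # catchment population (assumed)
annual_procedures = 60_000       # surgical procedures per year (assumed)
operating_theatres = 150         # functioning theatres (assumed)
surgical_providers = 550         # surgeons plus non-surgeon providers (assumed)

def per_100k(count):
    return count / population * 100_000

print("surgical rate    :", round(per_100k(annual_procedures), 2), "procedures/100,000")
print("theatre density  :", round(per_100k(operating_theatres), 2), "theatres/100,000")
print("workforce density:", round(per_100k(surgical_providers), 2), "providers/100,000")
```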
Adapting New Space System Designs into Existing Ground Infrastructure
NASA Technical Reports Server (NTRS)
Delgado, Hector N.; McCleskey, Carey M.
2008-01-01
As routine space operations extend beyond earth orbit, the ability for ground infrastructures to take on new launch vehicle systems and a more complex suite of spacecraft and payloads has become a new challenge. The U.S. Vision for Space Exploration and its Constellation Program provides opportunities for our space operations community to meet this challenge. Presently, as new flight and ground systems add to the overall ground-based and space-based capabilities for NASA and its international partners, specific choices are being made as to what to abandon, what to retain, as well as what to build new. The total ground and space-based infrastructure must support a long-term, sustainable operation after it is all constructed, deployed, and activated. This paper addresses key areas of engineering concern during conceptual design, development, and routine operations, with a particular focus on: (1) legacy system reusability, (2) system supportability attributes and operations characteristics, (3) ground systems design trades and criteria, and (4) technology application survey. Each key area explored weighs the merits of reusability of the infrastructure in terms of: engineering analysis methods and techniques; top-level facility, systems, and equipment design criteria; and some suggested methods for making the operational system attributes (the "-ilities") highly visible to the design teams and decision-makers throughout the design process.
Neaimeh, Myriam; Salisbury, Shawn D.; Hill, Graeme A.; ...
2017-06-27
An appropriate charging infrastructure is one of the key aspects needed to support the mass adoption of battery electric vehicles (BEVs), and it is suggested that publicly available fast chargers could play a key role in this infrastructure. As fast charging is a relatively new technology, very little research has been conducted on the topic using real-world datasets, and it is of utmost importance to measure actual usage of this technology and provide evidence of its importance to properly inform infrastructure planning. 90,000 fast charge events collected from the first large-scale roll-outs and evaluation projects of fast charging infrastructure in the UK and the US, and 12,700 driving days collected from 35 BEVs in the UK, were analysed. Using multiple regression analysis, we examined the relationship between daily driving distance and standard and fast charging and demonstrated that fast chargers are more influential. Fast chargers enabled using BEVs on journeys above their single-charge range that would have been impractical using standard chargers. Fast chargers could help overcome perceived and actual range barriers, making BEVs more attractive to future users. At the current BEV market share, there is a vital need for policy support to accelerate the development of fast charge networks.
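As a hedged illustration of the kind of multiple regression analysis mentioned above (not the study's actual model or data), the sketch below regresses daily driving distance on counts of standard and fast charge events using ordinary least squares.

```python
# Hedged sketch of a multiple regression of daily driving distance (km) on the
# number of standard and fast charge events per day. The observations below
# are hypothetical, chosen only to illustrate the calculation.
import numpy as np

# Hypothetical daily observations: [standard charges, fast charges], distance (km)
X = np.array([[1, 0], [2, 0], [1, 1], [0, 1], [2, 1], [1, 2], [0, 2], [3, 0]], dtype=float)
y = np.array([45.0, 70.0, 120.0, 95.0, 150.0, 190.0, 170.0, 85.0])

A = np.column_stack([np.ones(len(X)), X])          # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # [intercept, b_standard, b_fast]
print("intercept          :", round(coef[0], 1), "km")
print("per standard charge:", round(coef[1], 1), "km")
print("per fast charge    :", round(coef[2], 1), "km")  # larger coefficient => more influential
```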
Montagu, Dominic; Harding, April
2012-01-01
Public Private Partnerships (PPP) have been common in infrastructure for many years and are increasingly being considered as a means to finance, build, and manage hospitals. However, the growth of hospital PPPs in the past two decades has led to confusion about what sorts of contractual arrangements between public and private partners constitute a PPP, and what key differences distinguish public-private partnerships for hospitals from PPPs for infrastructure. Based on experiences from around the world, we identify six key areas where hospital PPPs differ from infrastructure partnerships. We draw upon the hospital partnerships that have been documented in OECD countries and a growing number of middle-income countries to identify four distinct types of hospital PPPs: service-focused partnerships in which private partners manage operations within publicly constructed facilities; facilities and finance PPPs, focused on mobilizing capital and creating new hospitals; combined PPPs, involving both facility and clinical operations; and co-located PPPs where privately operated services are developed within the grounds of a public hospital. These four types of hospital PPPs have differing goals, and therefore different contractual and functional aspects, as well as differing risks to both public and private partners. By clarifying these, we provide a base upon which hospital PPPs can be assessed against appropriate goals and benchmarks.
SCIDIP-ES - A science data e-infrastructure for preservation of earth science data
NASA Astrophysics Data System (ADS)
Riddick, Andrew; Glaves, Helen; Marelli, Fulvio; Albani, Mirko; Tona, Calogera; Marketakis, Yannis; Tzitzikas, Yannis; Guarino, Raffaele; Giaretta, David; Di Giammatteo, Ugo
2013-04-01
The capability for long term preservation of earth science data is a key requirement to support on-going research and collaboration within and between many earth science disciplines. A number of critically important current research directions (e.g. understanding climate change, and ensuring sustainability of natural resources) rely on the preservation of data often collected over several decades in a form in which it can be accessed and used easily. In many branches of the earth sciences the capture of key observational data may be difficult or impossible to repeat. For example, a specific geological exposure or subsurface borehole may be only temporarily available, and deriving earth observation data from a particular satellite mission is clearly often a unique opportunity. At the same time such unrepeatable observations may be a critical input to environmental, economic and political decision making. Another key driver for strategic long term data preservation is that key research challenges (such as those described above) frequently require cross disciplinary research utilising raw and interpreted data from a number of earth science disciplines. Effective data preservation strategies can support this requirement for interoperability, and thereby stimulate scientific innovation. The SCIDIP-ES project (EC FP7 grant agreement no. 283401) seeks to address these and other data preservation challenges by developing a Europe wide e-infrastructure for long term data preservation comprising appropriate software tools and infrastructure services to enable and promote long term preservation of earth science data. Because we define preservation in terms of continued usability of the digitally encoded information, the generic infrastructure services will allow a wide variety of data to be made usable by researchers from many different domains. This approach will enable the cost for long-term usability across disciplines to be shared supporting the creation of strong business cases for the long term support of that data. This paper will describe our progress to date, including the results of community engagement and user consultation exercises designed to specify and scope the required tools and services. Our user engagement methodology, ensuring that we are capturing the views of a representative sample of institutional users, will be described. Key results of an in-depth user requirements exercise, and also the conclusions from a survey of existing technologies and policies for earth science data preservation involving almost five hundred respondents across Europe and beyond will also be outlined. A key aim of the project will also be to create harmonised data preservation and access policies for earth science data in Europe, taking into account the requirements of relevant earth science data users and archive providers across Europe, liaising appropriately with other European e-infrastructure projects, and progress on this will be explained.
ERIC Educational Resources Information Center
Fox, C.; Waters, J.; Fletcher, G.; Levin, D.
2012-01-01
It is a simple fact that access to high-speed broadband is now as vital a component of K-12 school infrastructure as electricity, air conditioning, and heating. The same tools and resources that have transformed educators' personal, civic, and professional lives must be part of learning experiences intended to prepare today's students for college…
ERIC Educational Resources Information Center
Cornelius, Fran; Glasgow, Mary Ellen Smith
2007-01-01
Technology's impact on the delivery of health care mandates that nursing faculty use all technologies at their disposal to better prepare students to work in technology-infused health care environments. Essential components of an infrastructure to grow technology-infused nursing education include a skilled team comprised of tech-savvy faculty and…
NASA Astrophysics Data System (ADS)
Kuscahyadi, Febriana; Meilano, Irwan; Riqqi, Akhmad
2017-07-01
The Special Region of Yogyakarta Province (DIY) is one of the Indonesian regions that is often harmed by various natural disasters with huge negative impacts. The most catastrophic was the earthquake of 27 May 2006, with a moment magnitude of 6.3 [1], which killed 5,716 people and caused economic losses of Rp 29.1 trillion [2]. These impacts could be minimized by implementing disaster risk reduction programs. Therefore, it is necessary to measure natural disaster resilience within a region. Since infrastructure can serve as facilities for evacuation, supply distribution, and post-disaster recovery [3], this research establishes a spatial model of natural disaster resilience using infrastructure components based on BRIC in DIY Province. Three infrastructure types are used in the model: schools, health facilities, and roads. Distance analysis is used to determine the resilience level of each zone. The result, presented as a map, gives the spatial understanding that urban areas have better disaster resilience than rural areas. Coastal and mountain areas, which are vulnerable to disasters, have lower resilience since they do not have enough facilities to support disaster resilience.
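The distance analysis described above can be illustrated with a minimal sketch: classify a location's resilience level by its great-circle distance to the nearest facility. The facility coordinates and distance thresholds below are assumptions for illustration only, not values from the study.

```python
# Illustrative sketch (assumed thresholds and coordinates) of a simple distance
# analysis: classify a location's resilience level by its distance to the
# nearest facility, using the haversine great-circle distance.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

facilities = [(-7.801, 110.364), (-7.788, 110.330)]   # hypothetical school/clinic sites

def resilience_level(lat, lon):
    d = min(haversine_km(lat, lon, flat, flon) for flat, flon in facilities)
    if d <= 1.0:
        return "high"
    if d <= 5.0:
        return "moderate"
    return "low"

print(resilience_level(-7.795, 110.350))
```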
Enabling campus grids with open science grid technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weitzel, Derek; Bockelman, Brian; Swanson, David
2011-01-01
The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.
Dawson, David A; Purnell, Phil; Roelich, Katy; Busch, Jonathan; Steinberger, Julia K
2014-11-04
Renewable energy technologies, necessary for low-carbon infrastructure networks, are being adopted to help reduce fossil fuel dependence and meet carbon mitigation targets. The evolution of these technologies has progressed based on the enhancement of technology-specific performance criteria, without explicitly considering the wider system (global) impacts. This paper presents a methodology for simultaneously assessing local (technology) and global (infrastructure) performance, allowing key technological interventions to be evaluated with respect to their effect on the vulnerability of wider infrastructure systems. We use exposure of low carbon infrastructure to critical material supply disruption (criticality) to demonstrate the methodology. A series of local performance changes are analyzed; and by extension of this approach, a method for assessing the combined criticality of multiple materials for one specific technology is proposed. Via a case study of wind turbines at both the material (magnets) and technology (turbine generators) levels, we demonstrate that analysis of a given intervention at different levels can lead to differing conclusions regarding the effect on vulnerability. Infrastructure design decisions should take a systemic approach; without these multilevel considerations, strategic goals aimed to help meet low-carbon targets, that is, through long-term infrastructure transitions, could be significantly jeopardized.
Resilience in social insect infrastructure systems
2016-01-01
Both human and insect societies depend on complex and highly coordinated infrastructure systems, such as communication networks, supply chains and transportation networks. Like human-designed infrastructure systems, those of social insects are regularly subject to disruptions such as natural disasters, blockages or breaks in the transportation network, fluctuations in supply and/or demand, outbreaks of disease and loss of individuals. Unlike human-designed systems, there is no deliberate planning or centralized control system; rather, individual insects make simple decisions based on local information. How do these highly decentralized, leaderless systems deal with disruption? What factors make a social insect system resilient, and which factors lead to its collapse? In this review, we bring together literature on resilience in three key social insect infrastructure systems: transportation networks, supply chains and communication networks. We describe how systems differentially invest in three pathways to resilience: resistance, redirection or reconstruction. We suggest that investment in particular resistance pathways is related to the severity and frequency of disturbance. In the final section, we lay out a prospectus for future research. Human infrastructure networks are rapidly becoming decentralized and interconnected; indeed, more like social insect infrastructures. Human infrastructure management might therefore learn from social insect researchers, who can in turn make use of the mature analytical and simulation tools developed for the study of human infrastructure resilience. PMID:26962030
EV Charging Infrastructure Roadmap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karner, Donald; Garetson, Thomas; Francfort, Jim
2016-08-01
As highlighted in the U.S. Department of Energy’s EV Everywhere Grand Challenge, vehicle technology is advancing toward an objective to “… produce plug-in electric vehicles that are as affordable and convenient for the average American family as today’s gasoline-powered vehicles …” [1] by developing more efficient drivetrains, greater battery energy storage per dollar, and lighter-weight vehicle components and construction. With this technology advancement and improved vehicle performance, the objective for charging infrastructure is to promote vehicle adoption and maximize the number of electric miles driven. The EV Everywhere Charging Infrastructure Roadmap (hereafter referred to as Roadmap) looks forward and assumes that the technical challenges and vehicle performance improvements set forth in the EV Everywhere Grand Challenge will be met. The Roadmap identifies and prioritizes deployment of charging infrastructure in support of this charging infrastructure objective for the EV Everywhere Grand Challenge.
Reliable Communication Models in Interdependent Critical Infrastructure Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Sangkeun; Chinthavali, Supriya; Shankar, Mallikarjun
Modern critical infrastructure networks are becoming increasingly interdependent where the failures in one network may cascade to other dependent networks, causing severe widespread national-scale failures. A number of previous efforts have been made to analyze the resiliency and robustness of interdependent networks based on different models. However, communication network, which plays an important role in today's infrastructures to detect and handle failures, has attracted little attention in the interdependency studies, and no previous models have captured enough practical features in the critical infrastructure networks. In this paper, we study the interdependencies between communication network and other kinds of critical infrastructure networks with an aim to identify vulnerable components and design resilient communication networks. We propose several interdependency models that systematically capture various features and dynamics of failures spreading in critical infrastructure networks. We also discuss several research challenges in building reliable communication solutions to handle failures in these models.
Privacy-preserving photo sharing based on a public key infrastructure
NASA Astrophysics Data System (ADS)
Yuan, Lin; McNally, David; Küpçü, Alptekin; Ebrahimi, Touradj
2015-09-01
A significant number of pictures are posted to social media sites or exchanged through instant messaging and cloud-based sharing services. Most social media services offer a range of access control mechanisms to protect users' privacy. As it is not in the best interest of many such services if their users restrict access to their shared pictures, most services keep users' photos unprotected, which makes them available to all insiders. This paper presents an architecture for privacy-preserving photo sharing based on an image scrambling scheme and a public key infrastructure. A secure JPEG scrambling is applied to protect regional visual information in photos. Protected images remain compatible with JPEG coding and can therefore be viewed by anyone on any device. However, only those who are granted secret keys will be able to descramble the photos and view their original versions. The proposed architecture applies attribute-based encryption along with conventional public key cryptography to achieve secure transmission of secret keys and fine-grained control over who may view shared photos. In addition, we demonstrate the practical feasibility of the proposed photo sharing architecture with a prototype mobile application, ProShare, which is built on the iOS platform.
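As a purely conceptual illustration of the scramble/descramble idea (not the paper's JPEG-compatible scheme, and with no real security guarantees), the toy sketch below applies a key-derived, reversible byte permutation to a sensitive region; key distribution via attribute-based encryption or a PKI is out of scope here.

```python
# Toy sketch only: a keyed, reversible scrambling of a byte region. This is NOT
# the paper's secure JPEG scrambling and offers no real cryptographic security;
# it only illustrates that the same key scrambles and descrambles the region.
import random

def _permutation(n, key):
    order = list(range(n))
    random.Random(key).shuffle(order)   # key-derived, reproducible permutation
    return order

def scramble(region: bytes, key: str) -> bytes:
    order = _permutation(len(region), key)
    return bytes(region[i] for i in order)

def descramble(scrambled: bytes, key: str) -> bytes:
    order = _permutation(len(scrambled), key)
    out = bytearray(len(scrambled))
    for dst, src in enumerate(order):
        out[src] = scrambled[dst]       # undo the permutation
    return bytes(out)

face_region = b"pixels of a sensitive image region"
protected = scramble(face_region, key="secret-shared-with-friends")
assert descramble(protected, key="secret-shared-with-friends") == face_region
```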
Cyberinfrastructure for Airborne Sensor Webs
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.
2009-01-01
Since 2004 the NASA Airborne Science Program has been prototyping and using infrastructure that enables researchers to interact with each other and with their instruments via network communications. This infrastructure uses satellite links and an evolving suite of applications and services that leverage open-source software. The use of these tools has increased near-real-time situational awareness during field operations, resulting in productivity improvements and the collection of better data. This paper describes the high-level system architecture and major components, with example highlights from the use of the infrastructure. The paper concludes with a discussion of ongoing efforts to transition to operational status.
2016-07-01
CAC common access card DoD Department of Defense FOUO For Official Use Only GIS geographic information systems GUI graphical user interface HISA...as per requirements of this project, is UNCLASS/For Official Use Only (FOUO), with access restricted to DOD common access card (CAC) users. Key...Boko Haram Fuel Dump Discovered in Maiduguru.” Available: http://saharareporters.com/2015/10/01/another-boko-haram-fuel-dump-discovered-maiduguri
Use of Climate Information for Decision-Making and Impacts Research: State of Our Understanding
2016-03-01
SUMMARY Much of human society and its infrastructure has been designed and built on a key assumption: that future climate conditions at any given...experienced in the past. This assumption affects infrastructure design and maintenance, emergency response management, and long-term investment and planning...our scientific understanding of the climate system in a manner that incorporates user needs into the design of scientific experiments, and that
2007-05-01
National Association of Clean Water Agencies Shelly Foston Meridian Institute Michael Gritzuk Pima County (AZ) Wastewater Management Department Genevieve...agencies to assist small and medium systems, and it has helped fund and develop a variety of Web casts and security trainings. Although drinking water...trainings, conference calls, Web casts, and other communications; (2) provide administrative support; (3) provide technical support; and (4
NASA Astrophysics Data System (ADS)
Wong, John-Michael; Stojadinovic, Bozidar
2005-05-01
A framework has been defined for storing and retrieving civil infrastructure monitoring data over a network. The framework consists of two primary components: metadata and network communications. The metadata component provides the descriptions and data definitions necessary for cataloging and searching monitoring data. The communications component provides Java classes for remotely accessing the data. Packages of Enterprise JavaBeans and data handling utility classes are written to use the underlying metadata information to build real-time monitoring applications. The utility of the framework was evaluated using wireless accelerometers on a shaking table earthquake simulation test of a reinforced concrete bridge column. The NEESgrid data and metadata repository services were used as a backend storage implementation. A web interface was created to demonstrate the utility of the data model and provides an example health monitoring application.
ERIC Educational Resources Information Center
Inverness Research, 2016
2016-01-01
In facilities throughout the United States and abroad, communities of scientists share infrastructure, instrumentation, and equipment to conduct scientific research. In these large facilities--laboratories, accelerators, telescope arrays, and research vessels--scientists are researching key questions that have the potential to make a significant…
A PKI Approach for Deploying Modern Secure Distributed E-Learning and M-Learning Environments
ERIC Educational Resources Information Center
Kambourakis, Georgios; Kontoni, Denise-Penelope N.; Rouskas, Angelos; Gritzalis, Stefanos
2007-01-01
While public key cryptography is continuously evolving and its installed base is growing significantly, recent research works examine its potential use in e-learning or m-learning environments. Public key infrastructure (PKI) and attribute certificates (ACs) can provide the appropriate framework to effectively support authentication and…
Building and Strengthening Policy Research Capacity: Key Issues in Canadian Higher Education
ERIC Educational Resources Information Center
Jones, Glen A.
2014-01-01
Given the importance of higher education in social and economic development, governments need to build a strong higher education data and policy research infrastructure to support informed decision-making, provide policy advice, and offer a critical assessment of key trends and issues. The author discusses the decline of higher education policy…
Code of Federal Regulations, 2014 CFR
2014-07-01
... positions where the occupant's duties involve protecting the nation's borders, ports, critical infrastructure or key resources, and where the occupant's neglect, action, or inaction could bring about a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gottlieb, Steven Arthur; DeTar, Carleton; Toussaint, Doug
This is the closeout report for the Indiana University portion of the National Computational Infrastructure for Lattice Gauge Theory project supported by the United States Department of Energy under the SciDAC program. It includes information about activities at Indiana University, the University of Arizona, and the University of Utah, as those three universities coordinated their activities.
Robotic Sensitive-Site Assessment
2015-09-04
annotations. The SOA component is the backend infrastructure that receives and stores robot-generated and human-input data and serves these data to several...Architecture Server. The SOA server provides the backend infrastructure to receive data from robot situational awareness payloads, to archive...incapacitation or even death. The proper use of PPE is critical to avoiding exposure. However, wearing PPE limits mobility and field of vision, and
Open Data Infrastructures And The Future Of Science
NASA Astrophysics Data System (ADS)
Boulton, G. S.
2016-12-01
Open publication of the evidence (the data) supporting a scientific claim has been the bedrock on which the scientific advances of the modern era of science have been built. It is also of immense importance in confronting three challenges unleashed by the digital revolution. The first is the threat the digital data storm poses to the principle of "scientific self-correction", in which false concepts are weeded out because of a demonstrable failure in logic or in the replication of observations or experiments. Large and complex data volumes are difficult to make openly available in ways that make rigorous scrutiny possible. Secondly, linking and integrating data from different sources about the same phenomena have created profound new opportunities for understanding the Earth. If data are neither accessible nor useable, such opportunities cannot be seized. Thirdly, open access publication, open data and ubiquitous modern communications enhance the prospects for an era of "Open Science" in which science emerges from behind its laboratory doors to engage in co-production of knowledge with other stakeholders in addressing major contemporary challenges to human society, in particular the need for long term thinking about planetary sustainability. If the benefits of an open data regime are to be realised, only a small part of the challenge lies in providing "hard" infrastructure. The major challenges lie in the "soft" infrastructure of relationships between the components of national science systems, of analytic and software tools, of national and international standards and the normative principles adopted by scientists themselves. The principles that underlie these relationships, the responsibilities of key actors and the rules of the game needed to maximise national performance and facilitate international collaboration are set out in an International Accord on Open Data.
The MMI Semantic Framework: Rosetta Stones for Earth Sciences
NASA Astrophysics Data System (ADS)
Rueda, C.; Bermudez, L. E.; Graybeal, J.; Alexander, P.
2009-12-01
Semantic interoperability—the exchange of meaning among computer systems—is needed to successfully share data in Ocean Science and across all Earth sciences. The best approach toward semantic interoperability requires a designed framework, and operationally tested tools and infrastructure within that framework. Currently available technologies make a scientific semantic framework feasible, but its development requires sustainable architectural vision and development processes. This presentation outlines the MMI Semantic Framework, including recent progress on it and its client applications. The MMI Semantic Framework consists of tools, infrastructure, and operational and community procedures and best practices, to meet short-term and long-term semantic interoperability goals. The design and prioritization of the semantic framework capabilities are based on real-world scenarios in Earth observation systems. We describe some key use cases, as well as the associated requirements for building the overall infrastructure, which is realized through the MMI Ontology Registry and Repository. This system includes support for community creation and sharing of semantic content, ontology registration, version management, and seamless integration of user-friendly tools and application programming interfaces. The presentation describes the architectural components for semantic mediation, registry and repository for vocabularies, ontology, and term mappings. We show how the technologies and approaches in the framework can address community needs for managing and exchanging semantic information. We will demonstrate how different types of users and client applications exploit the tools and services for data aggregation, visualization, archiving, and integration. Specific examples from OOSTethys (http://www.oostethys.org) and the Ocean Observatories Initiative Cyberinfrastructure (http://www.oceanobservatories.org) will be cited. Finally, we show how semantic augmentation of web services standards could be performed using framework tools.
Implementing CER: what will it take?
Biskupiak, Joseph E; Dunn, Jeffrey D; Holtorf, Anke-Peggy
2012-06-01
Comparative effectiveness research (CER) is undeniably changing how drugs are developed, launched, priced, and reimbursed in the United States. But most organizations are still evaluating what CER can do for them and how and when they can utilize the data. A roundtable of stakeholders, including formulary decision makers, evaluated CER's possible effects on managed care organizations (MCOs) and what it may take to fully integrate CER into decision making. To examine the role of CER in current formulary decision making, compare CER to modeling, discuss ways CER may be used in the future, and describe CER funding sources. While decision makers from different types of organizations, such as pharmacy benefit management (PBM) companies and MCOs, may have varying definitions and expectations of CER, most thought leaders from a roundtable of stakeholders, including formulary decision makers, see value in CER's ability to enhance their formulary decision making. Formulary decision makers may be able to use CER to better inform their coverage decisions in areas such as benefit design, contracting, conditional reimbursement, pay for performance, and other alternative pricing arrangements. Real-world CER will require improvement in the health information technology infrastructure to better capture value-related information. The federal government is viewed as a key driver and funding source behind CER, especially for infrastructure and methods development, while industry will adapt the clinical development and create increasing CER evidence. CER then needs to be applied to determining value (or cost efficacy). It is expected that CER will continue to grow as a valuable component of formulary decision making. Future integration of CER into formulary decision making will require federal government and academic leadership, improvements in the health information technology infrastructure, ongoing funding, and improved and more consistent methodologies.
Impacts of Permafrost on Infrastructure and Ecosystem Services
NASA Astrophysics Data System (ADS)
Trochim, E.; Schuur, E.; Schaedel, C.; Kelly, B. P.
2017-12-01
The Study of Environmental Arctic Change (SEARCH) program developed knowledge pyramids as a tool for advancing scientific understanding and making this information accessible for decision makers. Knowledge pyramids are being used to synthesize, curate and disseminate knowledge of changing land ice, sea ice, and permafrost in the Arctic. Each pyramid consists of a one- to two-page summary brief in broadly accessible language and literature organized by levels of detail, including syntheses and scientific building blocks. Three knowledge pyramids related to permafrost have been produced, on carbon, infrastructure, and ecosystem services. Each brief answers key questions with high societal relevance framed in policy-relevant terms. The knowledge pyramids concerning infrastructure and ecosystem services were developed in collaboration with researchers specializing in the specific topic areas in order to identify the most pertinent issues and accurately communicate information for integration into policy and planning. For infrastructure, the main issue was the need to build consensus in the engineering and science communities for developing improved methods for incorporating data applicable to building infrastructure on permafrost. In ecosystem services, permafrost provides critical landscape properties which affect basic human needs including fuel and drinking water availability, access to hunting and harvest, and fish and wildlife habitat. Translating these broad and complex topics necessitated a systematic and iterative approach to identifying key issues and relating them succinctly to the best state-of-the-art research. The development of the knowledge pyramids prompted collaboration and synthesis across distinct research and engineering communities. The knowledge pyramids also provide a solid basis for policy development, and the format allows the content to be regularly updated as the research community advances.
NASA Astrophysics Data System (ADS)
Ruiz-Villanueva, Virginia; Piégay, Hervé; Gurnell, Angela A.; Marston, Richard A.; Stoffel, Markus
2016-09-01
Large wood is an important physical component of woodland rivers and significantly influences river morphology. It is also a key component of stream ecosystems. However, large wood is also a source of risk for human activities, as it may damage infrastructure, block river channels, and induce flooding. Therefore, the analysis and quantification of large wood and its mobility are crucial for understanding and managing wood in rivers. As the number of large-wood-related studies by researchers, river managers, and stakeholders increases, documentation of commonly used and newly available techniques and their effectiveness has become increasingly relevant. Important data and knowledge have been obtained from the application of very different approaches and have generated a significant body of valuable information representative of different environments. This review provides a comprehensive qualitative and quantitative summary of recent advances regarding the different processes involved in large wood dynamics in fluvial systems, including wood budgeting and wood mechanics. First, some key definitions and concepts are introduced. Second, advances in quantifying large wood dynamics are reviewed; in particular, how measurements and modeling can be combined to integrate our understanding of how large wood moves through and is retained within river systems. Throughout, we present a quantitative and integrated meta-analysis compiled from different studies and geographical regions. Finally, we conclude by highlighting areas of particular research importance and their likely future trajectories, and we consider a particularly under-researched area so as to stress the future challenges for large wood research.
Urich, Christian; Rauch, Wolfgang
2014-12-01
Long-term projections for key drivers needed in urban water infrastructure planning, such as climate change, population growth, and socio-economic changes, are deeply uncertain. Traditional planning approaches rely heavily on these projections, which, if a projection goes unfulfilled, can lead to problematic infrastructure decisions causing high operational costs and/or lock-in effects. New approaches based on exploratory modelling take a fundamentally different view. Their aim is to identify an adaptation strategy that performs well under many future scenarios, instead of optimising a strategy for a handful of projections. However, a modelling tool to support strategic planning by testing the implications of adaptation strategies under deeply uncertain conditions for urban water management does not yet exist. This paper presents a first step towards a new generation of such strategic planning tools by combining innovative modelling tools, which co-evolve the urban environment and urban water infrastructure under many different future scenarios, with robust decision making. The developed approach is applied to the city of Innsbruck, Austria, which is evolved spatially explicitly 20 years into the future under 1,000 scenarios to test the robustness of different adaptation strategies. Key findings of this paper show that: (1) such an approach can be used to successfully identify parameter ranges of key drivers in which a desired performance criterion is not fulfilled, which is an important indicator of the robustness of an adaptation strategy; and (2) analysis of the rich dataset gives new insights into the adaptive responses of agents to key drivers in the urban system by modifying a strategy. Copyright © 2014 Elsevier Ltd. All rights reserved.
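The exploratory-modelling idea described above can be illustrated with a few lines of code. The following is a minimal sketch, not the authors' Innsbruck model: the drivers, the toy performance function, the storage parameter, and all numbers are hypothetical placeholders. It evaluates one candidate adaptation strategy across many sampled futures and reports the driver ranges in which the performance criterion fails.

import numpy as np

rng = np.random.default_rng(42)
n_scenarios = 1000

# Hypothetical deeply uncertain drivers, sampled uniformly over plausible ranges.
pop_growth = rng.uniform(0.0, 0.03, n_scenarios)    # annual population growth rate
rain_change = rng.uniform(-0.2, 0.3, n_scenarios)   # relative change in design rainfall

def flooding_excess(pop_growth, rain_change, extra_storage_m3=0.0):
    """Toy stand-in for a coupled urban-development / drainage model."""
    demand = (1 + pop_growth) ** 20 * (1 + rain_change)
    capacity = 1.15 + extra_storage_m3 / 50000.0
    return max(0.0, demand - capacity)   # > 0 means the performance criterion is violated

# Evaluate one adaptation strategy (here: adding 20,000 m3 of storage) in every scenario.
failures = np.array([
    flooding_excess(p, r, extra_storage_m3=20000.0) > 0
    for p, r in zip(pop_growth, rain_change)
])

print(f"strategy fails in {failures.mean():.1%} of {n_scenarios} scenarios")
# Report the driver ranges associated with failure (the vulnerable region of scenario space).
if failures.any():
    print("failure region: pop growth >", pop_growth[failures].min().round(4),
          "and rainfall change >", rain_change[failures].min().round(3))

A strategy whose failure region lies outside the plausible driver ranges would be judged robust; in a real application the toy model would be replaced by the coupled urban-development and water-infrastructure simulation.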
Woods, Cindy; Carlisle, Karen; Larkins, Sarah; Thompson, Sandra Claire; Tsey, Komla; Matthews, Veronica; Bailie, Ross
2017-01-01
Continuous Quality Improvement is a process for raising the quality of primary health care (PHC) across Indigenous PHC services. In addition to clinical auditing using plan, do, study, and act cycles, engaging staff in a process of reflecting on the systems that support quality care is vital. The One21seventy Systems Assessment Tool (SAT) supports staff to assess systems performance in terms of five key components. This study examines quantitative and qualitative SAT data from five high-improving Indigenous PHC services in northern Australia to understand the systems used to support quality care. High-improving services selected for the study were determined by calculating quality-of-care indices for Indigenous health services participating in the Audit and Best Practice in Chronic Disease National Research Partnership. Services that reported continuing high improvement in quality of care delivered across two or more audit tools in three or more audits were selected for the study. Precollected SAT data (from annual team SAT meetings) are presented longitudinally using radar plots of quantitative scores for each component, and content analysis is used to describe strengths and weaknesses of performance in each systems component. High-improving services were able to demonstrate strong processes for assessing system performance and consistent improvement in systems to support quality care across components. Key strengths in the quality support systems included an adequate and well-oriented workforce, appropriate health system supports, and engagement with other organizations and the community, while the weaknesses included lack of service infrastructure; difficulties with recruitment, retention, and support for staff; and additional costs. Qualitative data revealed clear voices from health service staff expressing concerns with performance, and subsequent SAT data provided evidence of changes made to address those concerns. Learning from the processes and strengths of high-improving services may be useful in working with services striving to improve the quality of care provided in other areas.
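Radar plots of the kind mentioned above can be produced with standard plotting libraries. The following is a minimal matplotlib sketch; the component names and scores are illustrative placeholders, not the study's data.

import numpy as np
import matplotlib.pyplot as plt

# Illustrative systems components and hypothetical SAT scores on a 0-5 scale.
components = ["Delivery system design", "Information systems", "Self-management support",
              "Community links", "Organisational influence"]
scores = [3.5, 2.8, 4.0, 3.2, 3.8]

# Close the polygon by repeating the first point.
angles = np.linspace(0, 2 * np.pi, len(components), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

ax = plt.subplot(111, polar=True)
ax.plot(angles, values, marker="o")
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(components, fontsize=8)
ax.set_ylim(0, 5)
plt.title("SAT component scores (illustrative)")
plt.savefig("sat_radar.png", dpi=150, bbox_inches="tight")

Plotting successive annual assessments on the same axes is what makes the longitudinal improvement (or decline) in each component visually obvious.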
A service-based BLAST command tool supported by cloud infrastructures.
Carrión, Abel; Blanquer, Ignacio; Hernández, Vicente
2012-01-01
Notwithstanding the benefits of distributed-computing infrastructures for empowering bioinformatics analysis tools with the needed computing and storage capability, the actual use of these infrastructures is still low. Learning curves and deployment difficulties have reduced their impact on the wider research community. This article presents a porting strategy for BLAST based on a multiplatform client and a service that provides the same interface as sequential BLAST, thus reducing the learning curve and minimizing the impact on integration into existing workflows. The porting has been done using the execution and data access components from the EC project Venus-C and the Windows Azure infrastructure provided in this project. The results obtained demonstrate a low overhead on the global execution framework and reasonable speed-up and cost-efficiency with respect to a sequential version.
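The "same interface as sequential BLAST" idea can be sketched as follows. This is only an illustration: the local call uses standard NCBI BLAST+ options, but the service-backed wrapper, its endpoint, and its behaviour are hypothetical placeholders, not the Venus-C client's actual API.

import subprocess

def run_blast_local(query_fasta, db, out_file):
    """Run sequential BLAST locally with the usual BLAST+ options."""
    subprocess.run(
        ["blastp", "-query", query_fasta, "-db", db,
         "-out", out_file, "-evalue", "1e-5", "-outfmt", "6"],
        check=True,
    )

def run_blast_service(query_fasta, db, out_file, endpoint="https://example.org/blast"):
    """Hypothetical service-backed wrapper exposing the same arguments, so that an
    existing workflow can call it exactly like the sequential tool."""
    # In a real port this would upload the query, submit a job to the cloud
    # infrastructure, poll for completion, and download the results file.
    raise NotImplementedError("illustrative placeholder for the remote execution path")

# Existing pipelines keep the same call signature regardless of the backend.
run_blast_local("proteins.fasta", "swissprot", "hits.tsv")

Keeping the argument list identical is what lets a pipeline switch from the local binary to the cloud service without rewriting its workflow logic.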
LLVM Infrastructure and Tools Project Summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCormick, Patrick Sean
2017-11-06
This project works with the open source LLVM Compiler Infrastructure (http://llvm.org) to provide tools and capabilities that address needs and challenges faced by the ECP community (applications, libraries, and other components of the software stack). Our focus is on providing a more productive development environment that enables (i) improved compilation times and code generation for parallelism, (ii) additional features/capabilities within the design and implementations of LLVM components for improved platform/performance portability, and (iii) improved aspects related to composition of the underlying implementation details of the programming environment, capturing resource utilization, overheads, etc. -- including runtime systems that are often not easily addressed by application and library developers.
CATE: A Case Study of an Interdisciplinary Student-Led Microgravity Experiment
NASA Astrophysics Data System (ADS)
Colwell, J. E.; Dove, A.; Lane, S. S.; Tiller, C.; Whitaker, A.; Lai, K.; Hoover, B.; Benjamin, S.
2015-12-01
The Collisional Accretion Experiment (CATE) was designed, built, and flown on NASA's C-9 parabolic flight airplane in less than a year by an interdisciplinary team of 6 undergraduate students under the supervision of two faculty. CATE was selected in the initial NASA Undergraduate Student Instrument Project (USIP) solicitation in the Fall of 2013, and the experiment flight campaign was in July 2014. The experiment studied collisions between different particle populations at low velocities (sub-m/s) in a vacuum and microgravity to gain insight into processes in the protoplanetary disk and planetary ring systems. Faculty provided the experiment concept and key experiment design parameters, and the student team developed the detailed hardware design for all components, manufactured and tested hardware, operated the experiment in flight, and analyzed data post-flight. Students also developed and led an active social media campaign and education and public outreach campaign to engage local high school students in the project. The ability to follow an experiment through from conception to flight was a key benefit for undergraduate students whose available time for projects such as this is frequently limited to their junior and senior years. Key factors for success of the program included having an existing laboratory infrastructure and experience in developing flight payloads and an intrinsically simple experiment concept. Students were highly motivated, in part, by their sense of technical and scientific ownership of the project, and this engagement was key to the project's success.
Water Infrastructure Needs and Investment: Review and Analysis of Key Issues
2008-11-24
[Fragmentary excerpts only: the report cites USDA water and waste disposal programs authorized under the Rural Development Act of 1972, as amended (7 U.S.C. § 1926), discusses wastewater needs identified in the most recent (2004) wastewater needs survey, and notes an estimated $1.6 billion needed just to implement the most basic security improvements, such as better controlling access to facilities.]
Resilience in social insect infrastructure systems.
Middleton, Eliza J T; Latty, Tanya
2016-03-01
Both human and insect societies depend on complex and highly coordinated infrastructure systems, such as communication networks, supply chains and transportation networks. Like human-designed infrastructure systems, those of social insects are regularly subject to disruptions such as natural disasters, blockages or breaks in the transportation network, fluctuations in supply and/or demand, outbreaks of disease and loss of individuals. Unlike human-designed systems, there is no deliberate planning or centralized control system; rather, individual insects make simple decisions based on local information. How do these highly decentralized, leaderless systems deal with disruption? What factors make a social insect system resilient, and which factors lead to its collapse? In this review, we bring together literature on resilience in three key social insect infrastructure systems: transportation networks, supply chains and communication networks. We describe how systems differentially invest in three pathways to resilience: resistance, redirection or reconstruction. We suggest that investment in particular resistance pathways is related to the severity and frequency of disturbance. In the final section, we lay out a prospectus for future research. Human infrastructure networks are rapidly becoming decentralized and interconnected; indeed, more like social insect infrastructures. Human infrastructure management might therefore learn from social insect researchers, who can in turn make use of the mature analytical and simulation tools developed for the study of human infrastructure resilience. © 2016 The Author(s).
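The "redirection" pathway described above is easy to illustrate on a toy graph. The following networkx sketch uses a made-up network and weights: remove a link from a transport network and recompute the route, much as an ant trail network reroutes around a blockage.

import networkx as nx

# Toy foraging/transport network: nodes are sites, edge weights are travel costs.
G = nx.Graph()
G.add_weighted_edges_from([
    ("nest", "A", 1.0), ("A", "food", 1.0),                    # short route
    ("nest", "B", 1.5), ("B", "C", 1.5), ("C", "food", 1.5),   # longer detour
])

print("before disruption:", nx.shortest_path(G, "nest", "food", weight="weight"))

# Disruption: the direct trail segment is blocked.
G.remove_edge("A", "food")
print("after disruption: ", nx.shortest_path(G, "nest", "food", weight="weight"))

A system invested in redirection tolerates the blockage at the cost of a longer path; resistance and reconstruction would instead correspond to hardening the original edge or rebuilding it after failure.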
NASA Astrophysics Data System (ADS)
Lescinsky, D. T.; Wyborn, L. A.; Evans, B. J. K.; Allen, C.; Fraser, R.; Rankine, T.
2014-12-01
We present collaborative work on a generic, modular infrastructure for virtual laboratories (VLs, similar to science gateways) that combine online access to data, scientific code, and computing resources as services that support multiple data-intensive scientific computing needs across a wide range of science disciplines. We are leveraging access to 10+ PB of earth science data on Lustre filesystems at Australia's National Computational Infrastructure (NCI) Research Data Storage Infrastructure (RDSI) node, co-located with NCI's 1.2 PFlop Raijin supercomputer and a 3000 CPU core research cloud. The development, maintenance and sustainability of VLs is best accomplished through modularisation and standardisation of interfaces between components. Our approach has been to break up tightly-coupled, specialised application packages into modules, with identified best techniques and algorithms repackaged either as data services or as scientific tools that are accessible across domains. The data services can be used to manipulate, visualise and transform multiple data types, whilst the scientific tools can be used in concert with multiple scientific codes. We are currently designing a scalable generic infrastructure that will handle scientific code as modularised services and thereby enable the rapid and easy deployment of new codes or versions of codes. The goal is to build open source libraries/collections of scientific tools, scripts and modelling codes that can be combined in specially designed deployments. Additional services in development include provenance, publication of results, monitoring, workflow tools, etc. The generic VL infrastructure will be hosted at NCI, but can access alternative computing infrastructures (i.e., public/private cloud, HPC). The Virtual Geophysics Laboratory (VGL) was developed as a pilot project to demonstrate the underlying technology. This base is now being redesigned and generalised to develop a Virtual Hazards Impact and Risk Laboratory (VHIRL); any enhancements and new capabilities will be incorporated into the generic VL infrastructure. At the same time, we are scoping seven new VLs and, in the process, identifying other common components to prioritise and focus development.
A data protection scheme for medical research networks. Review after five years of operation.
Helbing, K; Demiroglu, S Y; Rakebrandt, F; Pommerening, K; Rienhoff, O; Sax, U
2010-01-01
Data protection requirements have matured in parallel with new clinical tests that have generated increasing amounts of personal data since the 1960s. About ten years ago it was recognized that a generic data protection scheme for medical research networks is required, one which reinforces patient rights but also allows economically feasible medical research compared to "hand-carved" individual solutions. To give recommendations for more efficient IT infrastructures for medical research networks in compliance with data protection requirements. The IT infrastructures of three medical research networks were reviewed with respect to the relevant data management modules. Recommendations are derived to increase cost efficiency in research networks, assessing the consequences of a service provider approach without lowering the data protection level. The existing data protection schemes are very complex. Smaller research networks cannot afford the implementation of such schemes. Larger networks struggle to keep them sustainable. Due to a modular redesign in the medical research network community, a new approach offers opportunities for an efficient, sustainable IT infrastructure involving a service provider concept. For standard components, 70-80% of the costs could be cut; for open source components, about 37% over a three-year period. Future research networks should switch to a service-oriented approach to achieve a sustainable, cost-efficient IT infrastructure.
NASA Astrophysics Data System (ADS)
Wilebore, Beccy; Willis, Kathy
2016-04-01
Landcover conversion is one of the largest anthropogenic threats to ecological services globally; in the EU around 1500 ha of biodiverse land are lost every day to changes in infrastructure and urbanisation. This land conversion directly affects key ecosystem services that support natural infrastructure, including water flow regulation and the mitigation of flood risks. We assess the sensitivity of runoff production to landcover in the UK at a high spatial resolution, using a distributed hydrologic model in the regional land-surface model JULES (Joint UK Land Environment Simulator). This work, as part of the wider initiative 'NaturEtrade', will create a novel suite of easy-to-use tools and mechanisms to allow EU landowners to quickly map and assess the value of their land in providing key ecosystem services.
Hay, L.; Knapp, L.
1996-01-01
Investigating natural, potential, and man-induced impacts on hydrological systems commonly requires complex modelling with overlapping data requirements, and massive amounts of one- to four-dimensional data at multiple scales and formats. Given the complexity of most hydrological studies, the requisite software infrastructure must incorporate many components including simulation modelling, spatial analysis and flexible, intuitive displays. There is a general requirement for a set of capabilities to support scientific analysis which, at this time, can only come from an integration of several software components. Integration of geographic information systems (GISs) and scientific visualization systems (SVSs) is a powerful technique for developing and analysing complex models. This paper describes the integration of an orographic precipitation model, a GIS and a SVS. The combination of these individual components provides a robust infrastructure which allows the scientist to work with the full dimensionality of the data and to examine the data in a more intuitive manner.
Frontier Fields: A Cost-Effective Approach to Bringing Authentic Science to the Education Community
NASA Astrophysics Data System (ADS)
Eisenhamer, B.; Lawton, B.; Summers, F.; Ryer, H.
2015-11-01
For more than two decades, the Hubble EPO program has sought to bring the wonders of the universe to the education community and the public, and to engage audiences in the adventure of scientific discovery. Program components include standards-based, curriculum-support materials, exhibits and exhibit components, and professional development workshops. The main underpinnings of the program's infrastructure are scientist-educator development teams, partnerships, and an embedded program evaluation component. The Space Telescope Science Institute's Office of Public Outreach is leveraging this existing infrastructure to bring the Frontier Fields science program to the education community in a cost-effective way. Frontier Fields observations and results have been, and will continue to be, embedded into existing product lines and professional development offerings. We also are leveraging our new social media strategy to bring the science program to the public in the form of an ongoing blog.
System Engineering Infrastructure Evolution Galileo IOV and the Steps Beyond
NASA Astrophysics Data System (ADS)
Eickhoff, J.; Herpel, H.-J.; Steinle, T.; Birn, R.; Steiner, W.-D.; Eisenmann, H.; Ludwig, T.
2009-05-01
The trend toward increasingly constrained financial budgets in satellite engineering requires continuous optimization of spacecraft (S/C) system engineering processes and infrastructure. In recent years Astrium has built up a system simulation infrastructure - the "Model-based Development & Verification Environment" - which is now well known across Europe and is established as Astrium's standard approach for ESA and DLR projects and now even for the EU/ESA project Galileo IOV. The key feature of the MDVE / FVE approach is to provide an entire S/C simulation (with full-featured OBC simulation) already in early phases, so that OBSW code tests can start on a simulated S/C, with hardware then added in the loop step by step up to an entire "Engineering Functional Model (EFM)" or "FlatSat". The subsequent enhancements to this simulator infrastructure with respect to spacecraft design data handling are reported in the following sections.
John Lin, Zhongping; Zhang, Tianyi; Pasas-Farmer, Stephanie; Brooks, Stephen D; Moyer, Michael; Connolly, Ron
2014-05-01
With the globalization of drug development, there is an increasing need for global bioanalytical support. Bioanalysis provides pivotal data for toxicokinetic, pharmacokinetic, bioavailability and bioequivalence studies used for regional or global regulatory submission. There are many known complications in building a truly global bioanalytical operation, ranging from lack of global regulatory guidelines and global standard operating procedures to barriers in regional requirements on sample shipping, importation and exportation. The primary objective of this article is to discuss common experiences and challenges facing the biopharmaceutical industry when providing bioanalytical support in a global setting. The key components of global bioanalytical services include the supporting infrastructure, spanning project management, IT support of data management, best practices in bioanalytical method transfer and sample analysis, and comprehensive knowledge of the requirements of bioanalysis guidelines and differences in these guidelines. A case study will highlight best practices for successful management of a global project.
A statewide strategy for nursing workforce development through partnerships in Texas.
Kishi, Aileen; Green, Alexia
2008-08-01
Statewide efforts and partnerships were used for nursing workforce development to address the nursing shortage in Texas. A statewide strategic action plan was developed in which partnerships and collaboration were the key components. One of the most important outcomes of these statewide partnerships was the passage of the Nursing Shortage Reduction Act of 2001. Through this legislation, the Texas Center for Nursing Workforce Studies and its advisory committee were established. This article describes how a statewide infrastructure for nursing workforce policy and for legislative and regulatory processes was further developed. An overview is provided of the contributions made by the organizations involved with these strategic partnerships. The ingredients for establishing successful, strategic partnerships are also identified. It is hoped that nursing and health care leaders striving to address the nursing shortage will consider statewide efforts such as those used in Texas to develop nursing workforce policy and legislation.
NASA Astrophysics Data System (ADS)
Turner, Michael S.
1999-03-01
For two decades the hot big-bang model has been referred to as the standard cosmology - and for good reason. For just as long, cosmologists have known that there are fundamental questions that are not answered by the standard cosmology and that point to a grander theory. The best candidate for that grander theory is inflation + cold dark matter. It holds that the Universe is flat, that slowly moving elementary particles left over from the earliest moments provide the cosmic infrastructure, and that the primeval density inhomogeneities that seed all the structure arose from quantum fluctuations. There is now prima facie evidence that supports two basic tenets of this paradigm. An avalanche of high-quality cosmological observations will soon make this case stronger or will break it. Key questions remain to be answered; foremost among them are the identification and detection of the cold dark matter particles and the elucidation of the dark-energy component. These are exciting times in cosmology!
Damage assessment of bridge infrastructure subjected to flood-related hazards
NASA Astrophysics Data System (ADS)
Michalis, Panagiotis; Cahill, Paul; Bekić, Damir; Kerin, Igor; Pakrashi, Vikram; Lapthorne, John; Morais, João Gonçalo Martins Paulo; McKeogh, Eamon
2017-04-01
Transportation assets represent a critical component of society's infrastructure systems. Flood-related hazards are considered one of the main climate change impacts on highway and railway infrastructure, threatening the security and functionality of transportation systems. Of such hazards, flood-induced scour is a primary cause of bridge collapses worldwide and one of the most complex and challenging water flow and erosion phenomena, leading to structural instability and ultimately catastrophic failures. Evaluation of scour risk under severe flood events is a particularly challenging issue considering that the depth of foundations is very difficult to evaluate in the water environment. The continual inspection, assessment and maintenance of bridges and other hydraulic structures under extreme flood events requires a multidisciplinary approach, including knowledge and expertise in hydraulics, hydrology, structural engineering, geotechnics and infrastructure management. The large number of bridges under a single management unit also highlights the need for efficient management, information sharing and self-informing systems to provide reliable, cost-effective flood and scour risk management. The "Intelligent Bridge Assessment Maintenance and Management System" (BRIDGE SMS) is an EU/FP7 funded project which aims to couple state-of-the-art scientific expertise in multidisciplinary engineering sectors with industrial knowledge in infrastructure management. This involves the application of integrated low-cost structural health monitoring systems to provide real-time information towards the development of an intelligent decision support tool and a web-based platform to assess and efficiently manage bridge assets. This study documents the technological experience and presents results obtained from the application of sensing systems focusing on the damage assessment of water hazards at bridges over watercourses in Ireland. The applied instrumentation is interfaced with an open-source platform that can offer a more economical remote monitoring solution. The results presented in this investigation provide an important guide for a multidisciplinary approach to bridge monitoring and can be used as a benchmark for the field application of cost-effective and robust sensing methods. This will deliver key information regarding the impact of water-related hazards at bridge structures through an integrated structural health monitoring and management system. Acknowledgement: The authors wish to acknowledge the financial support of the European Commission, through the Marie Curie action Industry-Academia Partnership and Pathways Network BRIDGE SMS (Intelligent Bridge Assessment Maintenance and Management System) - FP7-People-2013-IAPP-612517.
Building a federated data infrastructure for integrating the European Supersites
NASA Astrophysics Data System (ADS)
Freda, Carmela; Cocco, Massimo; Puglisi, Giuseppe; Borgstrom, Sven; Vogfjord, Kristin; Sigmundsson, Freysteinn; Ergintav, Semih; Meral Ozel, Nurcan; Consortium, Epos
2017-04-01
The integration of satellite and in-situ Earth observations fostered by the GEO Geohazards Supersites and National Laboratories (GSNL) initiative is aimed at providing access to spaceborne and in-situ geoscience data for selected sites prone to earthquakes, volcanic eruptions and/or other environmental hazards. The initiative was launched with the "Frascati declaration" at the conclusion of the 3rd International Geohazards workshop of the Group on Earth Observations (GEO) held in November 2007 in Frascati, Italy. The development of the GSNL and the integration of in-situ and space Earth observations require the implementation of in-situ e-infrastructures and services for scientific users and other stakeholders. The European Commission has funded three projects to support the development of the European supersites: FUTUREVOLC for the Icelandic volcanoes, MED-SUV for Mt. Etna and Campi Flegrei/Vesuvius (Italy), and MARSITE for the Marmara Sea near-fault observatory (Turkey). Because the establishment of a network of supersites in Europe will, among other advantages, facilitate the link with the Global Earth Observation System of Systems (GEOSS), EPOS (the European Plate Observing System) has supported these initiatives by integrating the observing systems and infrastructures developed in these three projects into its implementation plan, which is aimed at integrating existing and new research infrastructures for solid Earth sciences. In this contribution we present the EPOS federated approach and the key actions needed to: i) develop sustainable long-term Earth observation strategies preceding and following earthquakes and volcanic eruptions; ii) develop the innovative, integrated e-infrastructure component necessary to create an effective service for users; iii) promote the strategic and outreach actions needed to meet specific user needs; and iv) develop expertise in the use and interpretation of Supersites data in order to promote capacity building and timely transfer of scientific knowledge. All of this will facilitate new scientific discoveries through the availability of unprecedented data sets and will increase resilience and preparedness in society. Making observations of the natural processes controlling natural phenomena readily available, and promoting their comparison with numerical simulations and their interpretation through theoretical analyses, will foster scientific excellence in solid Earth research. The EPOS federated approach may serve as a model for other regions of the world and could therefore contribute to developing the supersite initiative globally.
Quantum cryptography using coherent states: Randomized encryption and key generation
NASA Astrophysics Data System (ADS)
Corndorf, Eric
With the advent of the global optical-telecommunications infrastructure, an increasing number of individuals, companies, and agencies communicate information with one another over public networks or physically-insecure private networks. While the majority of the traffic flowing through these networks requires little or no assurance of secrecy, the same cannot be said for certain communications between banks, between government agencies, within the military, and between corporations. In these arenas, the need to specify some level of secrecy in communications is a high priority. While the current approaches to securing sensitive information (namely the public-key-cryptography infrastructure and deterministic private-key ciphers like AES and 3DES) seem to be cryptographically strong based on empirical evidence, there exist no mathematical proofs of secrecy for any widely deployed cryptosystem. As an example, the ubiquitous public-key cryptosystems infer all of their secrecy from the assumption that factoring of the product of two large primes is necessarily time consuming---something which has not, and perhaps cannot, be proven. Since the 1980s, the possibility of using quantum-mechanical features of light as a physical mechanism for satisfying particular cryptographic objectives has been explored. This research has been fueled by the hopes that cryptosystems based on quantum systems may provide provable levels of secrecy which are at least as valid as quantum mechanics itself. Unfortunately, the most widely considered quantum-cryptographic protocols (BB84 and the Ekert protocol) have serious implementation problems. Specifically, they require quantum-mechanical states which are not readily available, and they rely on unproven relations between intrusion-level detection and the information available to an attacker. As a result, the secrecy level provided by these experimental implementations is entirely unspecified. In an effort to provably satisfy the cryptographic objectives of key generation and direct data-encryption, a new quantum cryptographic principle is demonstrated wherein keyed coherent-state signal sets are employed. Taking advantage of the fundamental and irreducible quantum-measurement noise of coherent states, these schemes do not require the users to measure the influence of an attacker. Experimental key-generation and data encryption schemes based on these techniques, which are compatible with today's WDM fiber-optic telecommunications infrastructure, are implemented and analyzed.
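The keyed coherent-state idea described above can be made concrete with a small numerical toy. The sketch below is not the thesis's actual protocol or parameters: the Y-00-style phase mapping, the Gaussian phase-noise approximation (standard deviation roughly 1/(2|alpha|) for a coherent state of amplitude alpha), and all values are illustrative assumptions. It shows how a receiver who knows the key faces an easy binary decision while one who does not must resolve many closely spaced states hidden inside the quantum noise.

import numpy as np

rng = np.random.default_rng(0)
M = 256                           # number of keyed bases (antipodal phase pairs)
alpha = 10.0                      # coherent-state amplitude; mean photon number |alpha|^2
phase_sigma = 1.0 / (2 * alpha)   # rough coherent-state phase uncertainty (radians)

n_bits = 50000
bits = rng.integers(0, 2, n_bits)
bases = rng.integers(0, M, n_bits)            # shared secret key stream (known only to Bob)

# 2M phases pi*n/M; basis m uses the antipodal pair {m, m+M}, with the bit mapping
# alternating between neighbouring bases so adjacent phases carry opposite bits.
n_index = bases + M * (bits ^ (bases % 2))
tx_phase = np.pi * n_index / M
rx_phase = tx_phase + rng.normal(0.0, phase_sigma, n_bits)   # quantum measurement noise

# Bob (knows the basis): a binary decision between two antipodal states.
rel = np.angle(np.exp(1j * (rx_phase - np.pi * bases / M)))
bob_bits = (np.abs(rel) > np.pi / 2).astype(int) ^ (bases % 2)

# Eve (no key): must resolve which of the 2M closely spaced phases was sent.
eve_index = np.round(rx_phase * M / np.pi).astype(int) % (2 * M)
eve_bits = (eve_index >= M).astype(int) ^ (eve_index % 2)

print("Bob bit-error rate:", np.mean(bob_bits != bits))
print("Eve bit-error rate:", np.mean(eve_bits != bits))

With these illustrative numbers the legitimate receiver's error rate is essentially zero, while the eavesdropper's guesses are close to a coin flip because the phase spacing pi/M is well below the measurement noise.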
Kaufman, James H; Eiron, Iris; Deen, Glenn; Ford, Dan A; Smith, Eishay; Knoop, Sarah; Nelken, H; Kol, Tomer; Mesika, Yossi; Witting, Karen; Julier, Kevin; Bennett, Craig; Rapp, Bill; Carmeli, Boaz; Cohen, Simona
2005-01-01
Recently there has been increased focus on the need to modernize the healthcare information infrastructure in the United States. The U.S. healthcare industry is by far the largest in the world in both absolute dollars and in percentage of GDP (more than $1.5 trillion, or 15 percent of GDP). It is also fragmented and complex. These difficulties, coupled with an antiquated infrastructure for the collection of and access to medical data, lead to enormous inefficiencies and sources of error. Consumer, regulatory, and governmental pressure drive a growing consensus that the time has come to modernize the U.S. healthcare information infrastructure (HII). While such transformation may be disruptive in the short term, it will, in the future, significantly improve the quality, expediency, efficiency, and successful delivery of healthcare while decreasing costs to patients and payers and improving the overall experiences of consumers and providers. The launch of a national health infrastructure initiative in the United States in May 2004, with the goal of providing an electronic health record for every American within the next decade, will eventually transform the healthcare industry in general, just as information technology (IT) has transformed other industries in the past. The key to this successful outcome will be based on the way we apply IT to healthcare data and the services delivered through IT. This must be accomplished in a way that protects individuals and allows competition but gives caregivers reliable and efficient access to the data required to treat patients and to improve the practice of medical science. This paper describes key IT solutions and technologies that address the challenges of creating a nation-wide healthcare IT infrastructure. Furthermore, we discuss the emergence of new electronic healthcare services and the current efforts of IBM Research, Software Group, and Healthcare Life Sciences to realize this new vision for healthcare. PMID:18066378
Mehrolhassani, Mohammad Hossein; Emami, Mozhgan
2013-01-01
Background: Change theories provide an opportunity for organizational managers to plan, monitor and evaluate changes using a framework which enables them, among other things, to respond quickly to environmental fluctuations and to predict the changing patterns of individuals and technology. The current study aimed to explore whether the change in the public accounting system of the Iranian health sector has followed Kurt Lewin’s change theory or not. Methods: This study, which adopted a mixed-methodology approach combining qualitative and quantitative methods, was conducted in 2012. In the first phase of the study, 41 participants were selected using purposive sampling, and in the second phase, 32 affiliated units of Kerman University of Medical Sciences (KUMS) were selected as the study sample. In phase one, we used face-to-face in-depth interviews (6 participants) and the quote method (35 participants) for data collection, and a thematic framework analysis for analyzing the data. In phase two, a questionnaire with a ten-point Likert scale was designed, and the data were analyzed using descriptive indicators, principal component analysis and factor analysis. Results: The results of phase one yielded a model consisting of four categories: superstructure, apparent infrastructure, hidden infrastructure and common factors. Linking all factors yielded a total of 12 components. The quantitative results showed that the state of all components was not satisfactory at KUMS (5.06±2.16). The leadership and management component played the smallest role, and the technology component the greatest role, in implementing the accrual accounting system. Conclusion: The results showed that the unfreezing stage did not occur well and the components were immature, mainly because the emphasis was placed on superstructure components rather than the components of the hidden infrastructure. The study suggests that a road map should be developed for the financial system based on Kurt Lewin’s change theory, and the model presented in this paper underpins change management in any organization. PMID:24596885
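The kind of principal component analysis mentioned above can be run in a few lines. The following is a minimal scikit-learn sketch on synthetic ten-point Likert responses; it is not the KUMS questionnaire or its data.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic responses: 32 units x 12 questionnaire items scored 1-10 (placeholder data).
X = rng.integers(1, 11, size=(32, 12)).astype(float)

# Standardise items before extracting components.
Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=4).fit(Xs)

print("explained variance ratio:", pca.explained_variance_ratio_.round(2))
print("loadings of first component:", pca.components_[0].round(2))

Inspecting which items load heavily on each retained component is what lets the analyst group questionnaire items into interpretable factors such as leadership, technology, or infrastructure.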
NASA Astrophysics Data System (ADS)
Riddick, Andrew; Glaves, Helen; Marelli, Fulvio; Albani, Mirko; Tona, Calogera; Marketakis, Yannis; Tzitzikas, Yannis; Guarino, Raffaele; Giaretta, David; Di Giammatteo, Ugo
2013-04-01
The capability for long term preservation of earth science data is a key requirement to support on-going research and collaboration within and between many earth science disciplines. A number of critically important current research directions (e.g. understanding climate change, and ensuring sustainability of natural resources) rely on the preservation of data often collected over several decades in a form in which it can be accessed and used easily. Another key driver for strategic long term data preservation is that key research challenges (such as those described above) frequently require cross-disciplinary research utilising raw and interpreted data from a number of earth science disciplines. Effective data preservation strategies can support this requirement for interoperability and collaboration, and thereby stimulate scientific innovation. The SCIDIP-ES project (EC FP7 grant agreement no. 283401) seeks to address these and other data preservation challenges by developing a Europe-wide infrastructure for long term data preservation comprising appropriate software tools and infrastructure services to enable and promote long term preservation of earth science data. Because we define preservation in terms of continued usability of the digitally encoded information, the generic infrastructure services will allow a wide variety of data to be made usable by researchers from many different domains. This approach promotes international collaboration between researchers and will enable the cost of long-term usability across disciplines to be shared, supporting the creation of strong business cases for the long term support of that data. This paper will describe our progress to date, including the results of community engagement and user consultation exercises designed to specify and scope the required tools and services. Our user engagement methodology, ensuring that we are capturing the views of a representative sample of institutional users, will be described. Key results of an in-depth user requirements exercise, and the conclusions from a survey of existing technologies and policies for earth science data preservation involving almost five hundred respondents across Europe and beyond, will also be outlined. A key aim of the project is also to create harmonised data preservation and access policies for earth science data in Europe, taking into account the requirements of relevant earth science data users and archive providers across Europe, and liaising appropriately with other European data integration and e-infrastructure projects to ensure a collaborative strategy.
Security and Privacy in Cyber-Physical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fink, Glenn A.; Edgar, Thomas W.; Rice, Theora R.
As you have seen from the previous chapters, cyber-physical systems (CPS) are broadly used across technology and industrial domains. While these systems enable process optimization and efficiency and allow previously impossible functionality, security and privacy are key concerns for their design, development, and operation. CPS have been key components in some of the most highly publicized security breaches over the last decade. In this chapter, we look at the CPS described in the previous chapters from a security perspective. We explain classical information and physical security fundamentals in the context of CPS and contextualize them across application domains. We give examples where the interplay of functionality and diverse communication can introduce unexpected vulnerabilities and produce larger impacts. We discuss how CPS security and privacy are inherently different from those of pure cyber or physical systems and what may be done to secure these systems, considering their emergent cyber-physical properties. Finally, we discuss the security and privacy implications of merging infrastructural and personal CPS. Our hope is to impart the knowledge of what CPS security and privacy are, why they are important, and to explain existing processes and challenges.
The Civil Aviation Sector in Lebanon. Part 1; Institutional Reforms
NASA Technical Reports Server (NTRS)
Baaj, M. Hadi
2002-01-01
Civil aviation is one of the key contributors to a successful economic system. This has been recognized within Lebanon, which is developing a new civil aviation strategy encompassing a program of organizational reform, coordinated internationally, to meet the challenges of the new century. Such a strategy is vital, as it will provide a coherent vision for the sector, complement the extensive investments deployed by Lebanon in its aviation infrastructure, and guide future planning and investments. The proposed Civil Aviation Strategy for Lebanon has two major components: (1) institutional reform aimed at creating effective overall legal and regulatory frameworks in line with current international best practice; and (2) implementation of liberalization measures and an open skies policy. This paper aims to: (1) present Lebanon's current institutional arrangements; (2) review the institutional arrangements in key select countries (in order to define current trends in best institutional practice); (3) discuss the proposed institutional reforms (which are at the basis of Lebanon's Draft Civil Aviation Reform Law) while showing that they conform with the identified best institutional trends; and (4) outline an implementation plan. The Draft Law has been approved by the Council of Ministers and now awaits Parliamentary endorsement.
Engineering the System and Technical Integration
NASA Technical Reports Server (NTRS)
Blair, J. C.; Ryan, R. S.; Schutzenhofer, L. A.
2011-01-01
Approximately 80% of the problems encountered in aerospace systems have been due to a breakdown in technical integration and/or systems engineering. One of the major challenges we face in designing, building, and operating space systems is: how is adequate integration achieved for the system's various functions, parts, and infrastructure? This Contractor Report (CR) deals with part of the problem of how we engineer the total system in order to achieve the best balanced design. We discuss a key aspect of this question - the principle of Technical Integration and its components, along with management and decision making. The CR first provides an introduction with a discussion of the challenges in space system design and how to meet them. Next is an overview of Engineering the System, including Technical Integration. Engineering the System is then expanded to include key aspects of the design process, lifecycle considerations, etc. The basic information and figures used in this CR were presented in a NASA training program for Program and Project Managers Development (PPMD) in classes at Georgia Tech and at Marshall Space Flight Center (MSFC). Many of the principles and illustrations are extracted from the courses we teach for MSFC.
NASA Astrophysics Data System (ADS)
Hurd, B. H.; Coonrod, J.
2008-12-01
Climate change is expected to alter surface hydrology throughout the arid Western United States, in most cases compressing the period of peak snowmelt and runoff and, in some cases, for example the Rio Grande, limiting total runoff. As such, climate change is widely expected to further stress arid watersheds, particularly in regions where trends in population growth, economic development and environmental regulation are current challenges. Strategies to adapt to such changes are evolving at various institutional levels, including conjunctive management of surface and ground waters. Groundwater resources remain one of the key components of water management strategies aimed at accommodating continued population growth and mitigating the potential for water supply disruptions under climate change. By developing a framework for valuing these resources and for valuing improvements in the information pertaining to their characteristics, this research can assist in prioritizing infrastructure and investment to change and enhance water resource management. The key objectives of this paper are to (1) develop a framework for estimating the value of groundwater resources and improved information, and (2) provide some preliminary estimates of this value and how it responds to plausible scenarios of climate change.
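One standard way to put a number on "the value of improved information" is the expected value of perfect information from decision analysis. The sketch below uses entirely hypothetical payoffs and probabilities, not the paper's framework or data; it only illustrates the arithmetic.

import numpy as np

# Hypothetical net benefits (e.g., $M) of two management strategies under three
# equally plausible states of the aquifer (high / medium / low recharge).
payoffs = np.array([
    [120.0, 80.0, 20.0],   # strategy A: rely more heavily on groundwater pumping
    [ 90.0, 85.0, 60.0],   # strategy B: invest in storage / conjunctive use
])
probs = np.array([1 / 3, 1 / 3, 1 / 3])

expected = payoffs @ probs                              # expected value of each strategy
best_without_info = expected.max()                      # commit to one strategy now
best_with_info = (payoffs.max(axis=0) * probs).sum()    # choose after learning the state

evpi = best_with_info - best_without_info
print(f"E[value] without information:       {best_without_info:.1f}")
print(f"E[value] with perfect information:  {best_with_info:.1f}")
print(f"expected value of perfect information: {evpi:.1f}")

The gap between the two expectations is an upper bound on what it would be worth paying for better characterization of the aquifer; a full framework would refine this with imperfect information and climate-scenario weights.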
A hybrid reconfigurable solar and wind energy system
NASA Astrophysics Data System (ADS)
Gadkari, Sagar A.
We study the feasibility of a novel hybrid solar-wind system that shares most of its infrastructure and components. During periods of clear sunny days the system will generate electricity from the sun using a parabolic concentrator. The concentrator is formed by individual mirror elements and focuses the light onto high-intensity vertical multi-junction (VMJ) cells. During periods of high wind speeds and at night, the same concentrator setup will be reconfigured to channel the wind into a wind turbine, which will be used to harness wind energy. In this study we report on the feasibility of this type of solar/wind hybrid energy system. The key mechanisms (the optics, the cooling of the VMJ cells, and the air flow through the system) were investigated using simulation tools. The results from these simulations, along with a simple economic analysis giving the levelized cost of energy for such a system, are presented. An iterative method of design refinement based on the simulation results was used to work towards a prototype design. The levelized cost of energy achieved in the economic analysis shows the system to be a good alternative for a grid-isolated site, and it could be used as a standalone system in regions of lower demand. The new approach to a solar-wind hybrid system reported herein will pave the way for a newer generation of hybrid systems that share common infrastructure in addition to the storage and distribution of energy.
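The levelized cost of energy referred to above is conventionally computed as discounted lifetime costs divided by discounted lifetime energy. The sketch below uses that standard definition with placeholder figures; none of the numbers come from the paper.

def lcoe(capex, annual_opex, annual_energy_kwh, lifetime_years, discount_rate):
    """Levelized cost of energy: discounted lifetime costs / discounted lifetime energy."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    energy = sum(annual_energy_kwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / energy

# Placeholder figures for a small hybrid concentrator/turbine installation.
value = lcoe(capex=60000, annual_opex=1500, annual_energy_kwh=25000,
             lifetime_years=20, discount_rate=0.06)
print(f"LCOE = {value:.3f} $/kWh")

Because the two generation modes share the concentrator structure, the shared capital cost is spread over both the solar and the wind energy yield, which is what drives the levelized cost down relative to two separate installations.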
The national response for preventing healthcare-associated infections: data and monitoring.
Kahn, Katherine L; Weinberg, Daniel A; Leuschner, Kristin J; Gall, Elizabeth M; Siegel, Sari; Mendel, Peter
2014-02-01
Historically, the ability to accurately track healthcare-associated infections (HAIs) was hindered by a lack of coordination among data sources and shortcomings in individual data sources. This paper presents the results of the evaluation of the HAI data and monitoring component of the Action Plan, focusing on context (goals), inputs, and processes. We used the Context-Input-Process-Product framework, together with the HAI prevention system framework, to describe the transformative processes associated with data and monitoring efforts. Six HAI priority conditions in the 2009 Action Plan created a focus for the selection of goals and activities. Key Action Plan decisions included a phased-in data and monitoring approach, commitment to linking the selection of priority HAIs to highly visible national 5-year prevention targets, and the development of a comprehensive HAI database inventory. Remaining challenges relate to data validation, resources, and the opportunity to integrate electronic health and laboratory records with other provider data systems. The Action Plan's data and monitoring program has developed a sound infrastructure that builds upon technological advances and embodies a firm commitment to prioritization, coordination and alignment, accountability and incentives, stakeholder engagement, and an awareness of the need for predictable resources. With time and adequate resources, it is likely that the investment in data-related infrastructure during the Action Plan's initial years will reap great rewards.
Information Technology Support for Clinical Genetic Testing within an Academic Medical Center.
Aronson, Samuel; Mahanta, Lisa; Ros, Lei Lei; Clark, Eugene; Babb, Lawrence; Oates, Michael; Rehm, Heidi; Lebo, Matthew
2016-01-20
Academic medical centers require many interconnected systems to fully support genetic testing processes. We provide an overview of the end-to-end support that has been established surrounding a genetic testing laboratory within our environment, including both laboratory and clinician facing infrastructure. We explain key functions that we have found useful in the supporting systems. We also consider ways that this infrastructure could be enhanced to enable deeper assessment of genetic test results in both the laboratory and clinic.
Developments in damage assessment by Marie Skłodowska-Curie TRUSS ITN project
NASA Astrophysics Data System (ADS)
González, A.
2017-05-01
The growth of cities, the impacts of climate change and the massive cost of providing new infrastructure provide the impetus for TRUSS (Training in Reducing Uncertainty in Structural Safety), a €3.7 million Marie Skłodowska-Curie Action Innovative Training Network project funded by EU’s Horizon 2020 programme, which aims to maximize the potential of infrastructure that already exists (http://trussitn.eu). For that purpose, TRUSS brings together an international, inter-sectoral and multidisciplinary collaboration between five academic and eleven industry institutions from five European countries. The project covers rail and road infrastructure, buildings and energy and marine infrastructure. This paper reports progress in fields such as advanced sensor-based structural health monitoring solutions - unmanned aerial vehicles, optical backscatter reflectometry, monitoring sensors mounted on vehicles, … - and innovative algorithms for structural designs and short- and long-term assessments of buildings, bridges, pavements, ships, ship unloaders, nuclear components and wind turbine towers that will support infrastructure operators and owners in managing their assets.
NASA Astrophysics Data System (ADS)
Sucipto, Katoningsih, Sri; Ratnaningrum, Anggry
2017-03-01
With a large number of schools, many components of school infrastructure to support, and limited funds, school infrastructure development cannot be carried out simultaneously. Implementation of development must therefore be based on priorities that reflect actual needs. All existing needs are recorded and the condition of the school infrastructure is identified, so that the recorded data are valid and cover all of the school's infrastructure needs. SIPIS is very helpful in the process of recording all of the school's needs. Projections of school development are made, from student enrolment to human resources. Needs are then ordered by their level of importance, with the most important first. Using SIPIS, this ordering can be established correctly, so that decisions about what should be built first are not driven by personal preference. Finally, funds are allocated in detail, so that when the budget is submitted, funds are provided in accordance with demand.
Handling Emergency Management in [an] Object Oriented Modeling Environment
NASA Technical Reports Server (NTRS)
Tokgoz, Berna Eren; Cakir, Volkan; Gheorghe, Adrian V.
2010-01-01
Protecting a nation from extreme disasters is a challenging task. The impacts of extreme disasters on a nation's critical infrastructures, economy and society can be devastating. A protection plan by itself is not sufficient when a disaster strikes. Hence, there is a need for a holistic approach to establish more resilient infrastructures able to withstand extreme disasters. A resilient infrastructure can be defined as a system or facility that is able to withstand damage but, if affected, can be readily and cost-effectively restored. The key to establishing resilient infrastructures is to incorporate existing protection plans with comprehensive preparedness actions to respond, recover and restore as quickly as possible, and to minimize extreme disaster impacts. Although national organizations will respond to a disaster, extreme disasters need to be handled mostly by local emergency management departments. Since emergency management departments have to deal with complex systems, they need a manageable plan and efficient organizational structures to coordinate all these systems. A strong organizational structure is the key to responding fast before and during disasters, and to recovering quickly after disasters. In this study, the entire emergency management function is viewed as an enterprise and modelled through an enterprise management approach. Managing an enterprise or a large complex system is a very challenging task. It is critical for an enterprise to respond to challenges in a timely manner with quick decision making. This study addresses the problem of handling emergency management at the regional level in an object-oriented modelling environment developed with the TopEase software. The Emergency Operation Plan of the City of Hampton, Virginia, has been incorporated into TopEase for analysis. The methodology used in this study has been supported by a case study on critical infrastructure resiliency in Hampton Roads.
Cyberinfrastructure for the digital brain: spatial standards for integrating rodent brain atlases
Zaslavsky, Ilya; Baldock, Richard A.; Boline, Jyl
2014-01-01
Biomedical research entails capture and analysis of massive data volumes and new discoveries arise from data-integration and mining. This is only possible if data can be mapped onto a common framework such as the genome for genomic data. In neuroscience, the framework is intrinsically spatial and based on a number of paper atlases. This cannot meet today's data-intensive analysis and integration challenges. A scalable and extensible software infrastructure that is standards based but open for novel data and resources, is required for integrating information such as signal distributions, gene-expression, neuronal connectivity, electrophysiology, anatomy, and developmental processes. Therefore, the International Neuroinformatics Coordinating Facility (INCF) initiated the development of a spatial framework for neuroscience data integration with an associated Digital Atlasing Infrastructure (DAI). A prototype implementation of this infrastructure for the rodent brain is reported here. The infrastructure is based on a collection of reference spaces to which data is mapped at the required resolution, such as the Waxholm Space (WHS), a 3D reconstruction of the brain generated using high-resolution, multi-channel microMRI. The core standards of the digital atlasing service-oriented infrastructure include Waxholm Markup Language (WaxML): XML schema expressing a uniform information model for key elements such as coordinate systems, transformations, points of interest (POI)s, labels, and annotations; and Atlas Web Services: interfaces for querying and updating atlas data. The services return WaxML-encoded documents with information about capabilities, spatial reference systems (SRSs) and structures, and execute coordinate transformations and POI-based requests. Key elements of INCF-DAI cyberinfrastructure have been prototyped for both mouse and rat brain atlas sources, including the Allen Mouse Brain Atlas, UCSD Cell-Centered Database, and Edinburgh Mouse Atlas Project. PMID:25309417
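To make the service interfaces above more concrete, the sketch below composes a hypothetical coordinate-transformation request to an atlas web service and shows how a WaxML-like response might be parsed. The endpoint URL, operation name, parameter names and element names are assumptions made for illustration; they are not taken from the published INCF-DAI specification.

```python
# Illustrative only: the endpoint, parameter names and WaxML element names below
# are assumptions, not the published INCF-DAI interface.
import xml.etree.ElementTree as ET
import urllib.parse

def build_transform_request(base_url, source_srs, target_srs, x, y, z):
    """Compose a hypothetical coordinate-transformation request between two
    atlas spatial reference systems (e.g. an atlas space and Waxholm Space)."""
    params = {
        "request": "TransformPOI",   # assumed operation name
        "inputSrsName": source_srs,
        "outputSrsName": target_srs,
        "x": x, "y": y, "z": z,
    }
    return base_url + "?" + urllib.parse.urlencode(params)

def parse_poi(waxml_text):
    """Pull the transformed coordinate out of a WaxML-like response document."""
    root = ET.fromstring(waxml_text)
    pt = root.find(".//POI/Coordinate")   # assumed element names
    return float(pt.get("x")), float(pt.get("y")), float(pt.get("z"))

url = build_transform_request(
    "https://example.org/atlas-ws",       # placeholder service endpoint
    "MouseABAvoxel", "WHS", 265, 155, 130)
print(url)
```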
A comprehensive typology for mainstreaming urban green infrastructure
NASA Astrophysics Data System (ADS)
Young, Robert; Zanders, Julie; Lieberknecht, Katherine; Fassman-Beck, Elizabeth
2014-11-01
During a National Science Foundation (US) funded "International Greening of Cities Workshop" in Auckland, New Zealand, participants agreed an effective urban green infrastructure (GI) typology should identify cities' present stage of GI development and map next steps to mainstream GI as a component of urban infrastructure. Our review reveals current GI typologies do not systematically identify such opportunities. We address this knowledge gap by developing a new typology incorporating political, economic, and ecological forces shaping GI implementation. Applying this information allows symmetrical, place-based exploration of the social and ecological elements driving a city's GI systems. We use this information to distinguish current levels of GI development and clarify intervention opportunities to advance GI into the mainstream of metropolitan infrastructure. We employ three case studies (San Antonio, Texas; Auckland, New Zealand; and New York, New York) to test and refine our typology.
Elachola, Habidah; Al-Tawfiq, Jaffar A; Turkestani, Abdulhafiz; Memish, Ziad A
2016-08-31
Mass gatherings (MG) are characterized by the influx of large numbers of people, and infrastructural changes are needed to support these gatherings. Thus, a Public Health Emergency Operations Center (PHEOC) is a critical management infrastructure both for the delivery of public health functions and for mounting an adequate response during emergencies. Recognition of the importance of the PHEOC at the leadership and political level is foundational for the success of any public health intervention during MG. The ability of the PHEOC to function effectively depends on appropriate design and infrastructure, staffing and command structure, and plans and procedures developed prior to the event. Multi-ministerial or jurisdictional coordination will be required, and the PHEOC should be positioned with such authorities. This paper outlines the essential concepts, elements, design, and operational aspects of a PHEOC during MG.
Integrating sea floor observatory data: the EMSO data infrastructure
NASA Astrophysics Data System (ADS)
Huber, Robert; Azzarone, Adriano; Carval, Thierry; Doumaz, Fawzi; Giovanetti, Gabriele; Marinaro, Giuditta; Rolin, Jean-Francois; Beranzoli, Laura; Waldmann, Christoph
2013-04-01
The European research infrastructure EMSO is a European network of fixed-point, deep-seafloor and water column observatories deployed at key sites of the European continental margin and the Arctic. It aims to provide the technological and scientific framework for investigating environmental processes related to the interaction between the geosphere, biosphere and hydrosphere, and to support sustainable management through long-term monitoring, including real-time data transmission. EMSO has been on the ESFRI (European Strategy Forum on Research Infrastructures) roadmap since 2006 and entered its construction phase in 2012. Within this framework, EMSO is contributing to large infrastructure integration projects such as ENVRI and COOPEUS. The EMSO infrastructure is geographically distributed across key sites in European waters, spanning from the Arctic, through the Atlantic and Mediterranean Sea, to the Black Sea. It presently consists of thirteen sites identified by the scientific community according to their importance with respect to marine ecosystems, climate change and marine geohazards. The data infrastructure for EMSO is being designed as a distributed system. Presently, EMSO data collected during experiments at each EMSO site are locally stored and organized in catalogues or relational databases run by the responsible regional EMSO nodes. Three major institutions and their data centers currently offer access to EMSO data: PANGAEA, INGV and IFREMER. In continuation of the IT activities performed during EMSO's twin project ESONET, EMSO is now implementing the ESONET data architecture within an operational EMSO data infrastructure. EMSO aims to be compliant with relevant marine initiatives such as MyOceans, EUROSITES, EuroARGO, SEADATANET and EMODNET, as well as to meet the requirements of international and interdisciplinary projects such as COOPEUS, ENVRI, EUDAT and iCORDI. A major focus is therefore set on the standardization and interoperability of the EMSO data infrastructure. Besides common standards for metadata exchange such as OpenSearch and OAI-PMH, EMSO has chosen to implement core standards of the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) suite, such as Catalogue Service for the Web (CS-W), Sensor Observation Service (SOS) and Observations and Measurements (O&M). Further, strong integration efforts are currently being undertaken to harmonize data formats, e.g. NetCDF, as well as the ontologies and terminologies used. The presentation will also inform users about the discovery and visualization procedures for the EMSO data presently available.
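As an illustration of how a client might retrieve observations from a Sensor Observation Service (SOS) endpoint of the kind EMSO adopts, the sketch below builds a GetObservation request using the standard OGC SOS 2.0 key-value parameters. The endpoint URL and the offering and observed-property identifiers are placeholders, not real EMSO identifiers.

```python
# Sketch of an OGC SOS 2.0 GetObservation request built from standard key-value
# parameters; the endpoint, offering and observed-property identifiers are
# placeholders invented for this example.
from urllib.parse import urlencode

def sos_get_observation_url(endpoint, offering, observed_property, begin, end,
                            response_format="http://www.opengis.net/om/2.0"):
    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
        "temporalFilter": f"om:phenomenonTime,{begin}/{end}",
        "responseFormat": response_format,
    }
    return endpoint + "?" + urlencode(params)

url = sos_get_observation_url(
    "https://example.org/emso/sos",        # placeholder endpoint
    "seafloor-observatory-offering",       # placeholder offering identifier
    "sea_water_temperature",               # placeholder observed property
    "2013-01-01T00:00:00Z", "2013-01-31T23:59:59Z")
print(url)
```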
2015-05-01
"Infrastructure, Task 2.1" (ERDC/CERL TR-15-5). Two critical infrastructure corrosion issues at Fort Bragg, NC, are the corrosion of steel utility...piping union joints in mechanical rooms and the corrosion of steel pump housings in cooling tower systems. Reliable operation of these components... [Figure captions: new pump 5 incorporating 316 stainless steel housing; Figure 13, new pump 5 being installed.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabharwall, Piyush; O'Brien, James E.; McKellar, Michael G.
2015-03-01
Hybrid energy system research has the potential to expand the application for nuclear reactor technology beyond electricity. The purpose of this research is to reduce both technical and economic risks associated with energy systems of the future. Nuclear hybrid energy systems (NHES) mitigate the variability of renewable energy sources, provide opportunities to produce revenue from different product streams, and avoid capital inefficiencies by matching electrical output to demand by using excess generation capacity for other purposes when it is available. An essential step in the commercialization and deployment of this advanced technology is scaled testing to demonstrate integrated dynamic performance of advanced systems and components when risks cannot be mitigated adequately by analysis or simulation. Further testing in a prototypical environment is needed for validation and higher confidence. This research supports the development of advanced nuclear reactor technology and NHES, and their adaptation to commercial industrial applications that will potentially advance U.S. energy security, economy, and reliability and further reduce carbon emissions. Experimental infrastructure development for testing and feasibility studies of coupled systems can similarly support other projects having similar developmental needs and can generate data required for validation of models in thermal energy storage and transport, energy, and conversion process development. Experiments performed in the Systems Integration Laboratory will acquire performance data, identify scalability issues, and quantify technology gaps and needs for various hybrid or other energy systems. This report discusses detailed scaling (component and integrated system) and heat transfer figures of merit that will establish the experimental infrastructure for component, subsystem, and integrated system testing to advance the technology readiness of components and systems to the level required for commercial application and demonstration under NHES.
Transforming revenue management.
Silveria, Richard; Alliegro, Debra; Nudd, Steven
2008-11-01
Healthcare organizations that want to undertake a patient administrative/revenue management transformation should: Define the vision with underlying business objectives and key performance measures. Strategically partner with key vendors for business process development and technology design. Create a program organization and governance infrastructure. Develop a corporate design model that defines the standards for operationalizing the vision. Execute the vision through technology deployment and corporate design model implementation.
An Efficient and Versatile Means for Assembling and Manufacturing Systems in Space
NASA Technical Reports Server (NTRS)
Dorsey, John T.; Doggett, William R.; Hafley, Robert A.; Komendera, Erik; Correll, Nikolaus; King, Bruce
2012-01-01
Within NASA Space Science, Exploration and the Office of Chief Technologist, there are Grand Challenges and advanced future exploration, science and commercial mission applications that could benefit significantly from large-span and large-area structural systems. Of particular and persistent interest to the Space Science community is the desire for large (in the 10- 50 meter range for main aperture diameter) space telescopes that would revolutionize space astronomy. Achieving these systems will likely require on-orbit assembly, but previous approaches for assembling large-scale telescope truss structures and systems in space have been perceived as very costly because they require high precision and custom components. These components rely on a large number of mechanical connections and supporting infrastructure that are unique to each application. In this paper, a new assembly paradigm that mitigates these concerns is proposed and described. A new assembly approach, developed to implement the paradigm, is developed incorporating: Intelligent Precision Jigging Robots, Electron-Beam welding, robotic handling/manipulation, operations assembly sequence and path planning, and low precision weldable structural elements. Key advantages of the new assembly paradigm, as well as concept descriptions and ongoing research and technology development efforts for each of the major elements are summarized.
Office ergonomics programs. A case study of North American corporations.
Moore, J S
1997-12-01
Subject matter experts from 13 North American corporations provided detailed descriptions of the historical development and the current components and operations of their office ergonomics programs. Results were summarized across corporations and presented for the following programmatic topics: backgrounds of key people, initial awareness and preliminary needs assessment, program development, program implementation, program monitoring and evaluation, program components, education and training, workstation and job analysis, early identification of cases, case management, and alternate office environments. The subject matter experts also provided comments about the strengths of their programs, their advice to others, and lessons they learned. These observations suggested the need for an office ergonomics program, and possibly other occupational health programs, to fit into a corporation's culture and capitalize on its infrastructure. Most corporations used multidisciplinary task forces or teams to develop their programs. Communication, which included training, awareness, advertising, and feedback, was also an important issue. Flexibility and simplicity were important attributes of these programs. It is hoped that this descriptive information will be helpful to some occupational health managers interested in or concerned about managerial perspectives and skills related to the development and implementation of programs within their own corporations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilligan, Kimberly V.; Gaudet, Rachel N.
In 2007, the U.S. Department of Energy National Nuclear Security Administration (DOE NNSA) Office of Nonproliferation and Arms Control (NPAC) completed a comprehensive review of the current and potential future challenges facing the international safeguards system. One of the report’s key recommendations was for DOE NNSA to launch a major new program to revitalize the international safeguards technology and human resource base. In 2007, at the International Atomic Energy Agency (IAEA) General Conference, then Secretary of Energy Samuel W. Bodman announced the newly created Next Generation Safeguards Initiative (NGSI). NGSI consists of five program elements: policy development and outreach, concepts and approaches, technology and analytical methodologies, human capital development (HCD), and infrastructure development. This report addresses the HCD component of NGSI. The goal of the HCD component as defined in the NNSA Program Plan is “to revitalize and expand the international safeguards human capital base by attracting and training a new generation of talent.” The major objectives listed in the HCD goal include education and training, outreach to universities and professional societies, postdoctoral appointments, and summer internships at national laboratories.
Using the ISS as a Testbed to Prepare for the Next Generation of Space-Based Telescopes
NASA Technical Reports Server (NTRS)
Ess, Kim; Thronson, Harley; Boyles, Mark; Sparks, William; Postman, Marc; Carpenter, Kenneth
2012-01-01
The ISS provides a unique opportunity to develop the technologies and operational capabilities necessary to assemble future large space telescopes that may be used to investigate planetary systems around neighboring stars. Assembling telescopes in space is a paradigm-shifting approach to space astronomy. Using the ISS as a testbed will reduce the technical risks of implementing this major scientific facility, such as laser metrology and wavefront sensing and control (WFSC). The Optical Testbed and Integration on ISS eXperiment (OpTIIX) will demonstrate the robotic assembly of major components, including the primary and secondary mirrors, to mechanical tolerances using existing ISS infrastructure, and the alignment of the optical elements to a diffraction-limited optical system in space. Assembling the optical system and removing and replacing components via existing ISS capabilities, such as the Special Purpose Dexterous Manipulator (SPDM) or the ISS flight crew, allows for future experimentation and repair, if necessary. First flight on ISS for OpTIIX, a small 1.5 meter optical telescope, is planned for 2015. In addition to demonstration of key risk-retiring technologies, the OpTIIX program includes a public outreach program to show the broad value of ISS utilization.
NHERI: Advancing the Research Infrastructure of the Multi-Hazard Community
NASA Astrophysics Data System (ADS)
Blain, C. A.; Ramirez, J. A.; Bobet, A.; Browning, J.; Edge, B.; Holmes, W.; Johnson, D.; Robertson, I.; Smith, T.; Zuo, D.
2017-12-01
The Natural Hazards Engineering Research Infrastructure (NHERI), supported by the National Science Foundation (NSF), is a distributed, multi-user national facility that provides the natural hazards research community with access to an advanced research infrastructure. NHERI comprises a Network Coordination Office (NCO), a cloud-based cyberinfrastructure (DesignSafe-CI), a computational modeling and simulation center (SimCenter), and eight Experimental Facilities (EFs), including a post-disaster, rapid response research facility (RAPID). Ultimately, NHERI enables researchers to explore and test ground-breaking concepts to protect homes, businesses and infrastructure lifelines from earthquakes, windstorms, tsunamis, and surge, enabling innovations to help prevent natural hazards from becoming societal disasters. When coupled with education and community outreach, NHERI will facilitate research and educational advances that contribute knowledge and innovation toward improving the resiliency of the nation's civil infrastructure to withstand natural hazards. The unique capabilities and coordinating activities over Year 1 between NHERI's DesignSafe-CI, the SimCenter, and individual EFs will be presented. Basic descriptions of each component are also found at https://www.designsafe-ci.org/facilities/. Additionally to be discussed are the various roles of the NCO in leading development of a 5-year multi-hazard science plan, coordinating facility scheduling and fostering the sharing of technical knowledge and best practices, leading education and outreach programs such as the recent Summer Institute and multi-facility REU program, ensuring a platform for technology transfer to practicing engineers, and developing strategic national and international partnerships to support a diverse multi-hazard research and user community.
17 CFR 23.603 - Business continuity and disaster recovery.
Code of Federal Regulations, 2013 CFR
2013-04-01
..., facilities, infrastructure, personnel and competencies essential to the continued operations of the swap.... The individuals identified shall be authorized to make key decisions on behalf of the swap dealer or...
17 CFR 23.603 - Business continuity and disaster recovery.
Code of Federal Regulations, 2014 CFR
2014-04-01
..., facilities, infrastructure, personnel and competencies essential to the continued operations of the swap.... The individuals identified shall be authorized to make key decisions on behalf of the swap dealer or...
ITS concepts for rural corridor management.
DOT National Transportation Integrated Search
2007-09-01
The Arizona Department of Transportation's (ADOT) SPR-570: Rural ITS Progress Study - Arizona 2004 provided 20 key recommendations for improved utilization of the rural ITS (Intelligent Transportation Systems) infrastructure. Two years later, i...
New security infrastructure model for distributed computing systems
NASA Astrophysics Data System (ADS)
Dubenskaya, J.; Kryukov, A.; Demichev, A.; Prikhodko, N.
2016-02-01
In this paper we propose a new approach to setting up a user-friendly and yet secure authentication and authorization procedure in a distributed computing system. The security concept of most heterogeneous distributed computing systems is based on a public key infrastructure along with proxy certificates, which are used for rights delegation. In practice, the contradiction between the limited lifetime of the proxy certificates and the unpredictable time of request processing is a big issue for the end users of the system. We propose to use hashes with unlimited lifetime, individual for each request, instead of proxy certificates. Our approach avoids the use of proxy certificates altogether, so the security infrastructure of a distributed computing system becomes easier to develop, support and use.
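A minimal sketch of the per-request credential idea is given below: instead of delegating rights through a short-lived proxy certificate, each request carries an individual keyed hash that does not expire. The message layout and key handling here are assumptions made for illustration; they are not the authors' exact scheme.

```python
# Minimal sketch, assuming a shared key between client and service: each request
# is accompanied by an individual, non-expiring keyed hash (HMAC-SHA256) instead
# of a short-lived proxy certificate.
import hmac, hashlib, uuid

def issue_request_token(shared_key: bytes, user_id: str, request_payload: bytes):
    """Create a per-request identifier and its keyed hash."""
    request_id = uuid.uuid4().hex
    msg = request_id.encode() + b"|" + user_id.encode() + b"|" + request_payload
    tag = hmac.new(shared_key, msg, hashlib.sha256).hexdigest()
    return request_id, tag

def verify_request_token(shared_key: bytes, user_id: str, request_payload: bytes,
                         request_id: str, tag: str) -> bool:
    """Server-side check: recompute the hash and compare in constant time."""
    msg = request_id.encode() + b"|" + user_id.encode() + b"|" + request_payload
    expected = hmac.new(shared_key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

key = b"demo-shared-secret"   # placeholder; a real system would derive and rotate keys
rid, tag = issue_request_token(key, "alice", b"submit-job payload")
print(verify_request_token(key, "alice", b"submit-job payload", rid, tag))  # True
```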
Towards a Multi-Mission, Airborne Science Data System Environment
NASA Astrophysics Data System (ADS)
Crichton, D. J.; Hardman, S.; Law, E.; Freeborn, D.; Kay-Im, E.; Lau, G.; Oswald, J.
2011-12-01
NASA earth science instruments are increasingly relying on airborne missions. However, traditionally, there has been limited common infrastructure support available to principal investigators in the area of science data systems. As a result, each investigator has been required to develop their own computing infrastructures for the science data system. Typically there is little software reuse and many projects lack sufficient resources to provide a robust infrastructure to capture, process, distribute and archive the observations acquired from airborne flights. At NASA's Jet Propulsion Laboratory (JPL), we have been developing a multi-mission data system infrastructure for airborne instruments called the Airborne Cloud Computing Environment (ACCE). ACCE encompasses the end-to-end lifecycle covering planning, provisioning of data system capabilities, and support for scientific analysis in order to improve the quality, cost effectiveness, and capabilities to enable new scientific discovery and research in earth observation. This includes improving data system interoperability across each instrument. A principal characteristic is being able to provide an agile infrastructure that is architected to allow for a variety of configurations of the infrastructure from locally installed compute and storage services to provisioning those services via the "cloud" from cloud computer vendors such as Amazon.com. Investigators often have different needs that require a flexible configuration. The data system infrastructure is built on the Apache's Object Oriented Data Technology (OODT) suite of components which has been used for a number of spaceborne missions and provides a rich set of open source software components and services for constructing science processing and data management systems. In 2010, a partnership was formed between the ACCE team and the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) mission to support the data processing and data management needs. A principal goal is to provide support for the Fourier Transform Spectrometer (FTS) instrument which will produce over 700,000 soundings over the life of their three-year mission. The cost to purchase and operate a cluster-based system in order to generate Level 2 Full Physics products from this data was prohibitive. Through an evaluation of cloud computing solutions, Amazon's Elastic Compute Cloud (EC2) was selected for the CARVE deployment. As the ACCE infrastructure is developed and extended to form an infrastructure for airborne missions, the experience of working with CARVE has provided a number of lessons learned and has proven to be important in reinforcing the unique aspects of airborne missions and the importance of the ACCE infrastructure in developing a cost effective, flexible multi-mission capability that leverages emerging capabilities in cloud computing, workflow management, and distributed computing.
Assessing the vulnerability of infrastructure to climate change on the Islands of Samoa
NASA Astrophysics Data System (ADS)
Fakhruddin, S. H. M.
2015-03-01
Pacific Islanders have been exposed to risks associated with climate change. Samoa, as one of the Pacific Islands, is prone to climatic hazards that will likely increase in coming decades, affecting coastal communities and infrastructure around the islands. Climate models do not predict a reduction of such disaster events in the future in Samoa; indeed, most predict an increase in such events. This paper identifies key infrastructure and their functions and status in order to provide an overall picture of relative vulnerability to climate-related stresses of such infrastructure on the island. By reviewing existing reports as well as holding a series of consultation meetings, a list of critical infrastructure was developed and shared with stakeholders for their consideration. An indicator-based vulnerability model (SIVM) was developed in collaboration with stakeholders to assess the vulnerability of selected infrastructure systems on the Samoan Islands. Damage costs were extracted from the Cyclone Evan recovery needs document, while data on criticality and capacity to repair were collected from stakeholders. Having stakeholder perspectives on these two issues was important because (a) criticality of a given infrastructure could be viewed differently among different stakeholders, and (b) stakeholders were the best available source (in this study) to estimate the capacity to repair non-physical damage to such infrastructure. Analysis of the results suggested a ranking of sectors from most vulnerable to least vulnerable: the transportation sector, the power sector, the water supply sector and the sewerage system.
Assessing the vulnerability of infrastructure to climate change on the Islands of Samoa
NASA Astrophysics Data System (ADS)
Fakhruddin, S. H. M.; Babel, M. S.; Kawasaki, A.
2015-06-01
Pacific Islanders have been exposed to risks associated with climate change. Samoa, as one of the Pacific Islands, is prone to climatic hazards that will likely increase in the coming decades, affecting coastal communities and infrastructure around the islands. Climate models do not predict a reduction of such disaster events in the future in Samoa; indeed, most predict an increase. This paper identifies key infrastructure and their functions and status in order to provide an overall picture of relative vulnerability to climate-related stresses of such infrastructure on the island. By reviewing existing reports as well as holding a series of consultation meetings, a list of critical infrastructure was developed and shared with stakeholders for their consideration. An indicator-based vulnerability model (SIVM) was developed in collaboration with stakeholders to assess the vulnerability of selected infrastructure systems on the Samoan Islands. Damage costs were extracted from the Cyclone Evan recovery needs document. Additionally, data on criticality and capacity to repair damage were collected from stakeholders. Having stakeholder perspectives on these two issues was important because (a) criticality of a given infrastructure could be viewed differently among different stakeholders, and (b) stakeholders were the best available source (in this study) to estimate the capacity to repair non-physical damage to such infrastructure. Analysis of the results suggested a ranking of sectors from the most vulnerable to least vulnerable are: the transportation sector, the power sector, the water supply sector and the sewerage system.
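The following sketch illustrates the kind of indicator-based ranking an SIVM-style model can produce, combining normalised damage cost, criticality and capacity to repair into a single vulnerability score. The weights and sector scores are invented for demonstration and are not the values elicited from Samoan stakeholders.

```python
# Hedged sketch of an indicator-based vulnerability ranking in the spirit of the
# SIVM described above; indicator weights and sector scores are invented.

sectors = {
    # sector: (normalised damage cost, criticality, capacity to repair), all 0-1
    "transportation": (0.9, 0.9, 0.4),
    "power":          (0.7, 0.8, 0.5),
    "water supply":   (0.5, 0.7, 0.6),
    "sewerage":       (0.3, 0.5, 0.7),
}
weights = (0.4, 0.4, 0.2)  # assumed weights for damage, criticality, repair capacity

def vulnerability(damage, criticality, repair_capacity, w=weights):
    """Higher damage and criticality raise vulnerability; repair capacity lowers it."""
    return w[0] * damage + w[1] * criticality + w[2] * (1.0 - repair_capacity)

ranking = sorted(sectors, key=lambda s: vulnerability(*sectors[s]), reverse=True)
for s in ranking:
    print(f"{s}: {vulnerability(*sectors[s]):.2f}")
```

With these invented inputs the ordering matches the one reported above (transportation, power, water supply, sewerage), but the point of the sketch is the scoring mechanism, not the numbers.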
Rapid Arctic Changes due to Infrastructure and Climate (RATIC) in the Russian North
NASA Astrophysics Data System (ADS)
Walker, D. A.; Kofinas, G.; Raynolds, M. K.; Kanevskiy, M. Z.; Shur, Y.; Ambrosius, K.; Matyshak, G. V.; Romanovsky, V. E.; Kumpula, T.; Forbes, B. C.; Khukmotov, A.; Leibman, M. O.; Khitun, O.; Lemay, M.; Allard, M.; Lamoureux, S. F.; Bell, T.; Forbes, D. L.; Vincent, W. F.; Kuznetsova, E.; Streletskiy, D. A.; Shiklomanov, N. I.; Fondahl, G.; Petrov, A.; Roy, L. P.; Schweitzer, P.; Buchhorn, M.
2015-12-01
The Rapid Arctic Transitions due to Infrastructure and Climate (RATIC) initiative is a forum developed by the International Arctic Science Committee (IASC) Terrestrial, Cryosphere, and Social & Human working groups for developing and sharing new ideas and methods to facilitate the best practices for assessing, responding to, and adaptively managing the cumulative effects of Arctic infrastructure and climate change. An IASC white paper summarizes the activities of two RATIC workshops at the Arctic Change 2014 Conference in Ottawa, Canada and the 2015 Third International Conference on Arctic Research Planning (ICARP III) meeting in Toyama, Japan (Walker & Pierce, ed. 2015). Here we present an overview of the recommendations from several key papers and posters presented at these conferences with a focus on oil and gas infrastructure in the Russian north and comparison with oil development infrastructure in Alaska. These analyses include: (1) the effects of gas- and oilfield activities on the landscapes and the Nenets indigenous reindeer herders of the Yamal Peninsula, Russia; (2) a study of urban infrastructure in the vicinity of Norilsk, Russia, (3) an analysis of the effects of pipeline-related soil warming on trace-gas fluxes in the vicinity of Nadym, Russia, (4) two Canadian initiatives that address multiple aspects of Arctic infrastructure called Arctic Development and Adaptation to Permafrost in Transition (ADAPT) and the ArcticNet Integrated Regional Impact Studies (IRIS), and (5) the effects of oilfield infrastructure on landscapes and permafrost in the Prudhoe Bay region, Alaska.
Patient Engagement in Kidney Research: Opportunities and Challenges Ahead
Molnar, Amber O.; Barua, Moumita; Konvalinka, Ana; Schick-Makaroff, Kara
2017-01-01
Purpose of Review: Patient engagement in research is increasingly recognized as an important component of the research process and may facilitate translation of research findings. To heighten awareness on this important topic, this review presents opportunities and challenges of patient engagement in research, drawing on specific examples from 4 areas of Canadian kidney research conducted by New Investigators in the Kidney Research Scientist Core Education and National Training (KRESCENT) Program. Sources of Information: Research expertise, published reports, peer-reviewed articles, and research funding body websites. Methods: In this review, the definition, purpose, and potential benefits of patient engagement in research are discussed. Approaches toward patient engagement that may help with translation and uptake of research findings into clinical practice are highlighted. Opportunities and challenges of patient engagement are presented in both basic science and clinical research with the following examples of kidney research: (1) precision care in focal and segmental glomerulosclerosis, (2) systems biology approaches to improve management of chronic kidney disease and enhance kidney graft survival, (3) reducing the incidence of suboptimal dialysis initiation, and (4) use of patient-reported outcome measures (PROMs) and patient-reported experience measures (PREMs) in kidney practice. Key Findings: Clinical research affords more obvious opportunities for patient engagement. The most obvious step at which to engage patients is in the setting of research priorities. Engagement at all stages of the research cycle may prove to be more challenging, and requires a detailed plan, along with funds and infrastructure to ensure that it is not merely tokenistic. Basic science research is several steps removed from the clinical application and involves complex scientific concepts, which makes patient engagement inherently more difficult. Limitations: This is a narrative review of the literature that has been partly influenced by the perspectives and experiences of the authors and focuses on research conducted by the authors. The evidence base to support the suggested benefits of patient engagement in research is currently limited. Implications: The formal incorporation of patients’ priorities, perspectives, and experiences is now recognized as a key component of the research process. If patients and researchers are able to effectively work together, this could enhance research quality and efficiency. To effectively engage patients, proper infrastructure and dedicated funding are needed. Going forward, a rigorous evaluation of patient engagement strategies and their effectiveness will be needed. PMID:29225906
GEM1: First-year modeling and IT activities for the Global Earthquake Model
NASA Astrophysics Data System (ADS)
Anderson, G.; Giardini, D.; Wiemer, S.
2009-04-01
GEM is a public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) to build an independent standard for modeling and communicating earthquake risk worldwide. GEM is aimed at providing authoritative, open information about seismic risk and decision tools to support mitigation. GEM will also raise risk awareness and help post-disaster economic development, with the ultimate goal of reducing the toll of future earthquakes. GEM will provide a unified set of seismic hazard, risk, and loss modeling tools based on a common global IT infrastructure and consensus standards. These tools, systems, and standards will be developed in partnership with organizations around the world, with coordination by the GEM Secretariat and its Secretary General. GEM partners will develop a variety of global components, including a unified earthquake catalog, fault database, and ground motion prediction equations. To ensure broad representation and community acceptance, GEM will include local knowledge in all modeling activities, incorporate existing detailed models where possible, and independently test all resulting tools and models. When completed in five years, GEM will have a versatile, openly accessible modeling environment that can be updated as necessary, and will provide the global standard for seismic hazard, risk, and loss models to government ministers, scientists and engineers, financial institutions, and the public worldwide. GEM is now underway with key support provided by private sponsors (Munich Reinsurance Company, Zurich Financial Services, AIR Worldwide Corporation, and Willis Group Holdings); countries including Belgium, Germany, Italy, Singapore, Switzerland, and Turkey; and groups such as the European Commission. The GEM Secretariat has been selected by the OECD and will be hosted at the Eucentre at the University of Pavia in Italy; the Secretariat is now formalizing the creation of the GEM Foundation. Some of GEM's global components are in the planning stages, such as the development of a unified active fault database and earthquake catalog. The flagship activity of GEM's first year is GEM1, a focused pilot project to develop GEM's first hazard and risk modeling products and initial IT infrastructure, starting in January 2009 and ending in March 2010. GEM1 will provide core capabilities for the present and key knowledge for future development of the full GEM computing environment and product set. We will build GEM1 largely using existing tools and datasets, connected through a unified IT infrastructure, in order to bring GEM's initial capabilities online as rapidly as possible. The Swiss Seismological Service at ETH-Zurich is leading the GEM1 effort in cooperation with partners around the world. We anticipate that GEM1's products will include: • A global compilation of regional seismic source zone models in one or more common representations • Global synthetic earthquake catalogs for use in hazard calculations • Initial set of regional and global catalogues for validation • Global hazard models in map and database forms • First compilation of global vulnerabilities and fragilities • Tools for exposure and loss assessment • Validation of results and software for existing risk assessment tools to be used in future GEM stages • Demonstration risk scenarios for target cities • First version of GEM IT infrastructure. All these products will be made freely available to the greatest extent possible.
For more information on GEM and GEM1, please visit http://www.globalquakemodel.org.
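For readers unfamiliar with how hazard and vulnerability information combine into loss estimates, the sketch below shows a generic expected-annual-loss calculation from a toy hazard curve and a toy vulnerability function. It is a didactic illustration only, not GEM's methodology, and all numbers are invented.

```python
# Generic illustration of combining a hazard curve with a vulnerability function
# to estimate expected annual loss; not GEM's methodology, all values invented.

# Toy hazard curve: (peak ground acceleration in g, annual exceedance probability).
hazard_curve = [
    (0.1, 0.10),
    (0.2, 0.04),
    (0.4, 0.01),
    (0.8, 0.002),
]

def damage_ratio(pga):
    """Toy vulnerability function: fraction of exposed value lost at a given PGA."""
    return min(1.0, pga ** 1.5)

def expected_annual_loss(hazard_curve, exposed_value):
    """Convert exceedance probabilities into per-bin occurrence probabilities and
    weight the corresponding losses."""
    eal = 0.0
    for i, (pga, p_exceed) in enumerate(hazard_curve):
        p_next = hazard_curve[i + 1][1] if i + 1 < len(hazard_curve) else 0.0
        p_bin = p_exceed - p_next          # probability of this intensity bin
        eal += p_bin * damage_ratio(pga) * exposed_value
    return eal

print(f"Expected annual loss: {expected_annual_loss(hazard_curve, 1_000_000):.0f}")
```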
NASA Astrophysics Data System (ADS)
Flores, A. N.; Kaiser, K. E.; Steimke, A.; Leonard, A.; FitzGerald, K.; Benner, S. G.; Vache, K. B.; Hillis, V.; Bolte, J.; Han, B.
2017-12-01
Humans exert tremendous influence on the redistribution of water in space and time. Humans have developed substantial infrastructure to provide water in adequate quantity and quality for production of food and energy, while seeking to maintain landscape processes and properties giving rise to ecosystem services on which humans rely (even when and if they are not well understood). Cyber-physical infrastructure includes dams, distributary canal networks, ditches to manage return flow, and networks of sensors to monitor environmental conditions. Social infrastructure includes legal frameworks for water rights, governance networks, and land management policies aimed at maintaining water quality. Changes in regional climate, land use and its intensity, and land cover in source areas exert pressures on this infrastructure, requiring models to characterize system-wide vulnerability and resilience. Here we present a synthesis of several ongoing and completed studies aimed at advancing our fundamental understanding of and ability to numerically model a system in which biophysical and human components cannot be separated. These studies are set within the Boise and Snake River Basin in the US Pacific Northwest and are organized around the aims of: (1) developing improved understanding and models of the ways that humans interact with each other and with biophysical processes at a range of spatiotemporal scales, and (2) using those models to predict how changes in climate and societal drivers, including in-migration and shifts in agricultural practices, will impact regional hydroclimate and associated ecosystem services. Key findings indicate differential pressures on water availability based on water rights seniority within the Lower Boise River basin under historical conditions, the potential for significantly earlier curtailment of water rights in future decades, and potential changes in agricultural practices in anticipation of future climate changes. This ongoing suite of projects illustrates significant improvements in modeling human modification of the timing and partitioning of hydrologic fluxes. Important challenges and opportunities remain, however, particularly in improving the modeling of interactions between and among actors that exert controls on the redistribution of water.
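The seniority-based curtailment mentioned in the findings can be illustrated with a simple prior-appropriation allocation sketch: rights are served in order of priority date, and junior rights are curtailed first when supply is short. The rights, dates and volumes below are hypothetical.

```python
# Simplified sketch of prior-appropriation ("first in time, first in right")
# allocation, used only to illustrate seniority-based curtailment; all rights,
# dates and volumes are hypothetical.

rights = [
    # (holder, priority date, decreed volume in acre-feet)
    ("Canal Co. A", 1890, 4_000),
    ("Irrigation District B", 1905, 6_000),
    ("Municipal supplier C", 1956, 3_000),
    ("Junior user D", 1978, 2_000),
]

def allocate(rights, available_supply):
    """Satisfy rights in order of seniority; junior rights are curtailed first."""
    allocations, remaining = {}, available_supply
    for holder, _, volume in sorted(rights, key=lambda r: r[1]):
        served = min(volume, remaining)
        allocations[holder] = served
        remaining -= served
    return allocations

for holder, amount in allocate(rights, available_supply=10_000).items():
    print(f"{holder}: {amount} acre-feet")
```

In this toy run the two senior rights are fully served and the two junior rights receive nothing, which is the pattern a drier future supply would push further up the seniority list.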
Blue-Green Solutions in Urban Development
NASA Astrophysics Data System (ADS)
Karlsson, Caroline; Kalantari, Zahra
2017-04-01
With the ongoing urbanisation and increasing pressure for new housing and infrastructure, the nexus of developing compact, energy-efficient and yet liveable and sustainable cities is urgent to address. In this context, blue-green spaces and related ecosystem services (ES) are critical resources that need to be integrated into urban policy and planning. Among the ES provided by blue-green spaces, regulating ES such as water retention and purification are particularly important in urban areas, affecting water supply and quality, related cultural ES and biodiversity, as well as cities' potential to adapt to climate change. Blue-green infrastructure management is considered a sustainable way of reducing negative effects of urbanisation, such as flood risks, and of adapting to climate change, for example by controlling increasing flood and drought risks. Blue-green infrastructure management can, for example, create multifunctional surfaces with valuable environmental and social functions, and generally handle greenways and ecological networks as important ecosystem service components, for example for stormwater regulation in a sustainable urban drainage system. The Norrström drainage basin (22,000 km2) is a large demonstrator for blue-green infrastructure management. Both urbanisation and agriculture are extensive within this basin, which includes the Swedish capital Stockholm and is part of the fertile Swedish belt. Together, the relatively high population density and the agricultural and industrial activities in this region imply large eutrophication and pollution pressures, not least transferred through storm runoff to both inland surface waters and the coastal waters of the Baltic Sea. The ecosystems of this basin provide highly valued but also threatened services. For example, Lake Mälaren is the single main freshwater supply for the Swedish capital Stockholm, as well as a key nutrient retention system that strongly mitigates waterborne nutrient loads to the Baltic Sea, a function that is in turn threatened by climate change. Large socio-economic values are also at stake here with regard to ecosystem regulation of both flood and drought risks, again threatened by both climate change and human development activities within the Norrström basin itself.
Bellamy, Chloe C; van der Jagt, Alexander P N; Barbour, Shelley; Smith, Mike; Moseley, Darren
2017-10-01
Pollinators such as bees and hoverflies are essential components of an urban ecosystem, supporting and contributing to the biodiversity, functioning, resilience and visual amenity of green infrastructure. Their urban habitats also deliver health and well-being benefits to society, by providing important opportunities for accessing nature nearby to the homes of a growing majority of people living in towns and cities. However, many pollinator species are in decline, and the loss, degradation and fragmentation of natural habitats are some of the key drivers of this change. Urban planners and other practitioners need evidence to carefully prioritise where they focus their resources to provide and maintain a high quality, multifunctional green infrastructure network that supports pollinators and people. We provide a modelling framework to inform green infrastructure planning as a nature based solution with social and ecological benefits. We show how habitat suitability models (HSM) incorporating remote sensed vegetation data can provide important information on the influence of urban landcover composition and spatial configuration on species distributions across cities. Using Edinburgh, Scotland, as a case study city, we demonstrate this approach for bumble bees and hoverflies, providing high resolution predictive maps that identify pollinator habitat hotspots and pinch points across the city. By combining this spatial HSM output with health deprivation data, we highlight 'win-win' opportunity areas in most need of improved green infrastructure to support pollinator habitat quality and connectivity, as well as societal health and well-being. In addition, in collaboration with municipal planners, local stakeholders, and partners from a local greenspace learning alliance, we identified opportunities for citizen engagement activities to encourage interest in wildlife gardening as part of a 'pollinator pledge'. We conclude that this quantitative, spatially explicit and transferable approach provides a useful decision-making tool for targeting nature-based solutions to improve biodiversity and increase environmental stewardship, with the aim of providing a more attractive city to live, work and invest in. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.
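A simplified sketch of the overlay step described above is given below: habitat-suitability scores are combined with health-deprivation scores to rank candidate 'win-win' areas for new green infrastructure. The area names, scores and priority formula are invented for illustration and do not reproduce the study's HSM output.

```python
# Hedged sketch of overlaying habitat-suitability and health-deprivation scores
# to flag 'win-win' areas; all names, scores and the priority formula are invented.

areas = {
    # area: (pollinator habitat suitability 0-1, health deprivation 0-1)
    "Area A": (0.2, 0.9),
    "Area B": (0.8, 0.8),
    "Area C": (0.3, 0.3),
    "Area D": (0.6, 0.7),
}

def win_win_priority(suitability, deprivation):
    """Low current habitat suitability plus high deprivation -> high priority for
    new green infrastructure benefiting both pollinators and residents."""
    return (1.0 - suitability) * deprivation

ranked = sorted(areas, key=lambda a: win_win_priority(*areas[a]), reverse=True)
for area in ranked:
    print(area, round(win_win_priority(*areas[area]), 2))
```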
Benefits and Challenges of Linking Green Infrastructure and Highway Planning in the United States
NASA Astrophysics Data System (ADS)
Marcucci, Daniel J.; Jordan, Lauren M.
2013-01-01
Landscape-level green infrastructure creates a network of natural and semi-natural areas that protects and enhances ecosystem services, regenerative capacities, and ecological dynamism over long timeframes. It can also enhance quality of life and certain economic activity. Highways create a network for moving goods and services efficiently, enabling commerce, and improving mobility. A fundamentally profound conflict exists between transportation planning and green infrastructure planning because they both seek to create connected, functioning networks across the same landscapes and regions, but transportation networks, especially in the form of highways, fragment and disconnect green infrastructure networks. A key opportunity has emerged in the United States during the last ten years with the promotion of measures to link transportation and environmental concerns. In this article we examined the potential benefits and challenges of linking landscape-level green infrastructure planning and implementation with integrated transportation planning and highway project development in the United States policy context. This was done by establishing a conceptual model that identified logical flow lines from planning to implementation as well as the potential interconnectors between green infrastructure and highway infrastructure. We analyzed the relationship of these activities through literature review, policy analysis, and a case study of a suburban Maryland, USA landscape. We found that regionally developed and adopted green infrastructure plans can be instrumental in creating more responsive regional transportation plans and streamlining the project environmental review process while enabling better outcomes by enabling more targeted mitigation. In order for benefits to occur, however, landscape-scale green infrastructure assessments and plans must be in place before integrated transportation planning and highway project development occurs. It is in the transportation community's interests to actively facilitate green infrastructure planning because it creates a more predictable environmental review context. On the other hand, for landscape-level green infrastructure, transportation planning and development is much more established and better funded and can provide a means of supporting green infrastructure planning and implementation, thereby enhancing conservation of ecological function.
ATLAS Metadata Infrastructure Evolution for Run 2 and Beyond
NASA Astrophysics Data System (ADS)
van Gemmeren, P.; Cranshaw, J.; Malon, D.; Vaniachine, A.
2015-12-01
ATLAS developed and employed for Run 1 of the Large Hadron Collider a sophisticated infrastructure for metadata handling in event processing jobs. This infrastructure profits from a rich feature set provided by the ATLAS execution control framework, including standardized interfaces and invocation mechanisms for tools and services, segregation of transient data stores with concomitant object lifetime management, and mechanisms for handling occurrences asynchronous to the control framework's state machine transitions. This metadata infrastructure is evolving and being extended for Run 2 to allow its use and reuse in downstream physics analyses, analyses that may or may not utilize the ATLAS control framework. At the same time, multiprocessing versions of the control framework and the requirements of future multithreaded frameworks are leading to redesign of components that use an incident-handling approach to asynchrony. The increased use of scatter-gather architectures, both local and distributed, requires further enhancement of metadata infrastructure in order to ensure semantic coherence and robust bookkeeping. This paper describes the evolution of ATLAS metadata infrastructure for Run 2 and beyond, including the transition to dual-use tools—tools that can operate inside or outside the ATLAS control framework—and the implications thereof. It further examines how the design of this infrastructure is changing to accommodate the requirements of future frameworks and emerging event processing architectures.
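As an illustration of the bookkeeping problem that scatter-gather processing raises, the sketch below merges per-worker metadata (event counts and luminosity-block sets) into a single coherent record at the gather step. It is a generic example, not ATLAS framework code.

```python
# Minimal sketch of keeping bookkeeping metadata coherent in a scatter-gather
# workflow; this is a generic illustration, not ATLAS framework code.

def process_partition(events):
    """Worker ('scatter') step: return per-partition metadata alongside results."""
    return {
        "events_processed": len(events),
        "lumi_blocks": {e["lumi_block"] for e in events},
    }

def merge_metadata(partials):
    """Gather step: combine per-worker metadata into one coherent record."""
    merged = {"events_processed": 0, "lumi_blocks": set()}
    for p in partials:
        merged["events_processed"] += p["events_processed"]
        merged["lumi_blocks"] |= p["lumi_blocks"]
    return merged

partitions = [
    [{"lumi_block": 1}, {"lumi_block": 1}, {"lumi_block": 2}],
    [{"lumi_block": 2}, {"lumi_block": 3}],
]
print(merge_metadata([process_partition(p) for p in partitions]))
# {'events_processed': 5, 'lumi_blocks': {1, 2, 3}}
```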
Is strategic asset management applicable to small and medium utilities?
Alegre, Helena
2010-01-01
Urban water infrastructures provide essential services to modern societies and represent a major portion of the value of municipal physical assets. Managing these assets rationally is therefore fundamental to the sustainability of the services and to the economy of societies. "Asset Management" (AM) is a modern term for an old practice--assets have always been managed. In recent years, significant evolution has occurred in formal AM approaches, in monitoring and decision support tools, and in implementation success cases. However, most tools developed are too sophisticated and too data-demanding for small utilities. The European R&D network COST Action C18 (www.costc18.org) identified key research problems related to the management of urban water infrastructures, currently not covered by on-going projects of the European Framework Program. The top-ranked topic is "Efficient management of small community". This paper addresses challenges and opportunities for small and medium utilities with regard to infrastructure AM (IAM). To put this into context, the first sections discuss the need for IAM, highlight key recent developments, and present IAM drivers, as well as research and development gaps, priorities and products needed.
Code of Federal Regulations, 2013 CFR
2013-01-01
... important infrastructure components are threatened; (iv) When reviewing paragraphs (c)(3)(i) through (iii... containing federally designated critical habitat where the species or the critical habitat could be...
Code of Federal Regulations, 2014 CFR
2014-01-01
... important infrastructure components are threatened; (iv) When reviewing paragraphs (c)(3)(i) through (iii... containing federally designated critical habitat where the species or the critical habitat could be...
Transportation and the Bureau of Land Management
DOT National Transportation Integrated Search
2008-01-01
The Volpe Center prepared a promotional booklet featuring an overview of the BLM's transportation infrastructure. The booklet describes the BLM's roads, bridges, and trails; highlights their economic and recreational importance; and lists key transpo...
NASA Technical Reports Server (NTRS)
Srinivasan, J.; Farrington, A.; Gray, A.
2001-01-01
They present an overview of long-life reconfigurable processor technologies and of a specific architecture for implementing a software reconfigurable (software-defined) network processor for space applications.
Ocean Observatories and the Integrated Ocean Observing System, IOOS: Developing the Synergy
NASA Astrophysics Data System (ADS)
Altalo, M. G.
2006-05-01
The National Office for Integrated and Sustained Ocean Observations is responsible for the planning, coordination and development of the U.S. Integrated Ocean Observing System, IOOS, which is both the U.S. contribution to GOOS as well as the ocean component of GEOSS. The IOOS is comprised of global observations as well as regional coastal observations coordinated so as to provide environmental information to optimize societal management decisions including disaster resilience, public health, marine transport, national security, climate and weather impact, and natural resource and ecosystem management. Data comes from distributed sensor systems comprising Federal and state monitoring efforts as well as regional enhancements, which are managed through data management and communications (DMAC) protocols. At present, 11 regional associations oversee the development of the observing System components in their region and are the primary interface with the user community. The ocean observatories are key elements of this National architecture and provide the infrastructure necessary to test new technologies, platforms, methods, models, and practices which, when validated, can transition into the operational components of the IOOS. This allows the IOOS to remain "state of the art" through incorporation of research at all phases. Both the observatories as well as the IOOS will contribute to the enhanced understanding of the ocean and coastal system so as to transform science results into societal solutions.
2016-04-13
ordnance and munitions components; endangered species habitat; and protected marine resources.1 More recently, DOD stated in its 2014 Sustainable Ranges...House Report 113-446 accompanying a bill for the National Defense Authorization Act for Fiscal Year 2015 directed DOD to submit a report assessing... act on our 2014 recommendations, and we will continue to monitor DOD actions in this area. Page 4 GAO-16-381R Defense Infrastructure DOD Has
Infrastructure for the Geospatial Web
NASA Astrophysics Data System (ADS)
Lake, Ron; Farley, Jim
Geospatial data and geoprocessing techniques are now directly linked to business processes in many areas. Commerce, transportation and logistics, planning, defense, emergency response, health care, asset management and many other domains leverage geospatial information and the ability to model these data to achieve increased efficiencies and to develop better, more comprehensive decisions. However, the ability to deliver geospatial data and the capacity to process geospatial information effectively in these domains are dependent on infrastructure technology that facilitates basic operations such as locating data, publishing data, keeping data current and notifying subscribers and others whose applications and decisions are dependent on this information when changes are made. This chapter introduces the notion of infrastructure technology for the Geospatial Web. Specifically, the Geography Markup Language (GML) and registry technology developed using the ebRIM specification delivered from the OASIS consortium are presented as atomic infrastructure components in a working Geospatial Web.
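Since GML is central to the infrastructure components described above, a minimal parsing sketch may help make the idea concrete. The feature type, namespace usage and coordinates below are invented for illustration; only Python's standard library is assumed.

```python
# Minimal sketch: parsing a hypothetical GML feature with the standard library.
# The feature name, namespace usage, and coordinates are illustrative only.
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml"

sample = """
<RoadSegment xmlns:gml="http://www.opengis.net/gml">
  <gml:name>Example segment</gml:name>
  <gml:LineString srsName="EPSG:4326">
    <gml:posList>45.0 -122.0 45.1 -122.1 45.2 -122.2</gml:posList>
  </gml:LineString>
</RoadSegment>
"""

def parse_positions(gml_text: str):
    """Return (lat, lon) tuples from the first gml:posList in the fragment."""
    root = ET.fromstring(gml_text)
    pos_list = root.find(f".//{{{GML_NS}}}posList")
    values = [float(v) for v in pos_list.text.split()]
    # posList is a flat sequence of coordinates; pair them up.
    return list(zip(values[0::2], values[1::2]))

if __name__ == "__main__":
    print(parse_positions(sample))  # [(45.0, -122.0), (45.1, -122.1), (45.2, -122.2)]
```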
An Overview of the Distributed Space Exploration Simulation (DSES) Project
NASA Technical Reports Server (NTRS)
Crues, Edwin Z.; Chung, Victoria I.; Blum, Michael G.; Bowman, James D.
2007-01-01
This paper describes the Distributed Space Exploration Simulation (DSES) Project, a research and development collaboration between NASA centers which investigates technologies and processes related to integrated, distributed simulation of complex space systems in support of NASA's Exploration Initiative. In particular, it describes the three major components of DSES: network infrastructure, software infrastructure and simulation development. With regard to network infrastructure, DSES is developing a Distributed Simulation Network for use by all NASA centers. With regard to software, DSES is developing software models, tools and procedures that streamline distributed simulation development and provide an interoperable infrastructure for agency-wide integrated simulation. Finally, with regard to simulation development, DSES is developing an integrated end-to-end simulation capability to support NASA development of new exploration spacecraft and missions. This paper presents the current status and plans for these three areas, including examples of specific simulations.
Road maintenance and rehabilitation : funding and allocation strategies
DOT National Transportation Integrated Search
1994-01-24
With ageing road infrastructure and sustained traffic growth, the maintenance and rehabilitation of road and motorway networks require increased funding. Adequate allocation and distribution of available resources are therefore a key policy issue. Th...
75 FR 81249 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-27
...: By name, Social Security Number (SSN), and/or date of birth. Safeguards: System login is accomplished by DoD Common Access Card (CAC). Public Key Infrastructure (PKI) network login is required and allows...
NASA Technical Reports Server (NTRS)
Moore, Andrew J.; Schubert, Matthew; Rymer, Nicholas; Balachandran, Swee; Consiglio, Maria; Munoz, Cesar; Smith, Joshua; Lewis, Dexter; Schneider, Paul
2017-01-01
Flights at low altitudes in close proximity to electrical transmission infrastructure present serious navigational challenges: GPS and radio communication quality is variable and yet tight position control is needed to measure defects while avoiding collisions with ground structures. To advance unmanned aerial vehicle (UAV) navigation technology while accomplishing a task with economic and societal benefit, a high voltage electrical infrastructure inspection reference mission was designed. An integrated air-ground platform was developed for this mission and tested in two days of experimental flights to determine whether navigational augmentation was needed to successfully conduct a controlled inspection experiment. The airborne component of the platform was a multirotor UAV built from commercial off-the-shelf hardware and software, and the ground component was a commercial laptop running open source software. A compact ultraviolet sensor mounted on the UAV can locate 'hot spots' (potential failure points in the electric grid), so long as the UAV flight path adequately samples the airspace near the power grid structures. To improve navigation, the platform was supplemented with two navigation technologies: lidar-to-polyhedron preflight processing for obstacle demarcation and inspection distance planning, and trajectory management software to enforce inspection standoff distance. Both navigation technologies were essential to obtaining useful results from the hot spot sensor in this obstacle-rich, low-altitude airspace. Because the electrical grid extends into crowded airspaces, the UAV position was tracked with NASA unmanned aerial system traffic management (UTM) technology. The following results were obtained: (1) Inspection of high-voltage electrical transmission infrastructure to locate 'hot spots' of ultraviolet emission requires navigation methods that are not broadly available and are not needed at higher altitude flights above ground structures. (2) The sensing capability of a novel airborne UV detector was verified with a standard ground-based instrument. Flights with this sensor showed that UAV measurement operations and recording methods are viable. With improved sensor range, UAVs equipped with compact UV sensors could serve as the detection elements in a self-diagnosing power grid. (3) Simplification of rich lidar maps to polyhedral obstacle maps reduces data volume by orders of magnitude, so that computation with the resultant maps in real time is possible. This enables real-time obstacle avoidance autonomy. Stable navigation may be feasible in the GPS-deprived environment near transmission lines by a UAV that senses ground structures and compares them to these simplified maps. (4) A new, formally verified path conformance software system that runs onboard a UAV was demonstrated in flight for the first time. It successfully maneuvered the aircraft after a sudden lateral perturbation that models a gust of wind, and processed lidar-derived polyhedral obstacle maps in real time. (5) Tracking of the UAV in the national airspace using the NASA UTM technology was a key safety component of this reference mission, since the flights were conducted beneath the landing approach to a heavily used runway. Comparison to autopilot tracking showed that UTM tracking accurately records the UAV position throughout the flight path.
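The lidar-to-polyhedron reduction mentioned in point (3) can be illustrated with a small sketch that replaces a dense point cluster with its convex hull, one straightforward way to obtain a compact polyhedral obstacle map. The synthetic point cloud and the use of SciPy's ConvexHull are illustrative assumptions, not the mission's actual preprocessing pipeline.

```python
# Sketch of reducing a dense lidar point cluster to a compact convex polyhedron.
# This is an illustrative stand-in for lidar-to-polyhedron preprocessing,
# not the flight software described in the abstract.
import numpy as np
from scipy.spatial import ConvexHull

def polyhedron_from_points(points: np.ndarray):
    """Return hull vertices and triangular faces for one obstacle's point cluster."""
    hull = ConvexHull(points)
    vertices = points[hull.vertices]   # small subset of the original points
    faces = hull.simplices             # vertex indices into `points` for each face
    return vertices, faces

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "ground structure" cluster: thousands of returns inside a box.
    cloud = rng.uniform(low=[0, 0, 0], high=[5, 5, 30], size=(20_000, 3))
    verts, faces = polyhedron_from_points(cloud)
    print(f"{cloud.shape[0]} lidar points reduced to {len(verts)} hull vertices "
          f"and {len(faces)} faces")
```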
Subcarrier Wave Quantum Key Distribution in Telecommunication Network with Bitrate 800 kbit/s
NASA Astrophysics Data System (ADS)
Gleim, A. V.; Nazarov, Yu. V.; Egorov, V. I.; Smirnov, S. V.; Bannik, O. I.; Chistyakov, V. V.; Kynev, S. M.; Anisimov, A. A.; Kozlov, S. A.; Vasiliev, V. N.
2015-09-01
In the course of work on creating the first quantum communication network in Russia we demonstrated quantum key distribution in metropolitan optical network infrastructure. A single-pass subcarrier wave quantum cryptography scheme was used in the experiments. BB84 protocol with strong reference was chosen for performing key distribution. The registered sifted key rate in an optical cable with 1.5 dB loss was 800 Kbit/s. Signal visibility exceeded 98%, and quantum bit error rate value was 1%. The achieved result is a record for this type of systems.
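For readers unfamiliar with the sifting and quantum bit error rate (QBER) figures quoted above, the toy simulation below sketches how a BB84 key is sifted and its QBER estimated. The channel error probability and pulse count are arbitrary, and neither the subcarrier-wave optics nor an eavesdropper is modelled.

```python
# Toy BB84 sifting and QBER estimation; optics and eavesdropping are not modelled.
import random

def bb84_sift(n_pulses: int, error_prob: float = 0.01, seed: int = 1):
    rng = random.Random(seed)
    sifted_alice, sifted_bob = [], []
    for _ in range(n_pulses):
        bit = rng.randint(0, 1)
        basis_a = rng.randint(0, 1)          # Alice's preparation basis
        basis_b = rng.randint(0, 1)          # Bob's measurement basis
        if basis_a != basis_b:
            continue                         # discarded during sifting
        received = bit ^ (rng.random() < error_prob)  # channel flips bit with error_prob
        sifted_alice.append(bit)
        sifted_bob.append(received)
    errors = sum(a != b for a, b in zip(sifted_alice, sifted_bob))
    qber = errors / len(sifted_alice)
    return len(sifted_alice), qber

if __name__ == "__main__":
    sifted_len, qber = bb84_sift(100_000)
    print(f"sifted key length: {sifted_len} bits, QBER: {qber:.3%}")
```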
NASA Astrophysics Data System (ADS)
Argenti, M.; Giannini, V.; Averty, R.; Bigagli, L.; Dumoulin, J.
2012-04-01
The EC FP7 ISTIMES project has the goal of realizing an ICT-based system exploiting distributed and local sensors for non-destructive electromagnetic monitoring, in order to make critical transport infrastructures more reliable and safe. Higher situation awareness, thanks to real-time and detailed information and images of the controlled infrastructure status, allows improving decision capabilities for emergency management stakeholders. Web-enabled sensors and a service-oriented approach are used as the core of the architecture, providing a system that adopts open standards (e.g. OGC SWE, OGC CSW) and makes efforts to achieve full interoperability with other GMES and European Spatial Data Infrastructure initiatives as well as compliance with INSPIRE. The system exploits an open, easily scalable network architecture to accommodate a wide range of sensors, integrated with a set of tools for handling, analyzing and processing large data volumes from different organizations with different data models. Situation Awareness tools are also integrated in the system. Definition of sensor observations and services follows a metadata model based on the ISO 19115 Core set of metadata elements and the O&M model of OGC SWE. The ISTIMES infrastructure is based on an e-Infrastructure for geospatial data sharing, with a Data Catalog that implements the discovery services for sensor data retrieval, acting as a broker through static connections based on standard SOS and WNS interfaces; a Decision Support component which helps decision makers by providing support for data fusion, inference and the generation of situation indexes; a Presentation component which implements system-user interaction services for information publication and rendering, by means of a web portal using SOA design principles; and a security framework using the Shibboleth open source middleware, based on the Security Assertion Markup Language and supporting Single Sign-On (SSO). ACKNOWLEDGEMENT - The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 225663
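To make the sensor-web layer of such an architecture concrete, the sketch below issues an OGC SOS 2.0 GetObservation request over the key-value-pair binding using the requests library. The endpoint URL, offering and observed property are placeholders, not the actual ISTIMES services.

```python
# Sketch of querying an OGC Sensor Observation Service (SOS 2.0, KVP binding).
# The endpoint and identifiers below are placeholders, not ISTIMES services.
import requests

SOS_ENDPOINT = "https://example.org/sos"   # hypothetical SOS endpoint

def get_observations(offering: str, observed_property: str) -> str:
    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
        "responseFormat": "http://www.opengis.net/om/2.0",
    }
    response = requests.get(SOS_ENDPOINT, params=params, timeout=30)
    response.raise_for_status()
    return response.text   # O&M-encoded observations (XML)

if __name__ == "__main__":
    xml_doc = get_observations("bridge_deck_sensors", "structural_strain")
    print(xml_doc[:200])
```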
Quantifying habitat impacts of natural gas infrastructure to facilitate biodiversity offsetting
Jones, Isabel L; Bull, Joseph W; Milner-Gulland, Eleanor J; Esipov, Alexander V; Suttle, Kenwyn B
2014-01-01
Habitat degradation through anthropogenic development is a key driver of biodiversity loss. One way to compensate losses is “biodiversity offsetting” (wherein biodiversity impacted is “replaced” through restoration elsewhere). A challenge in implementing offsets, which has received scant attention in the literature, is the accurate determination of residual biodiversity losses. We explore this challenge for offsetting gas extraction in the Ustyurt Plateau, Uzbekistan. Our goal was to determine the landscape extent of habitat impacts, particularly how the footprint of “linear” infrastructure (i.e. roads, pipelines), often disregarded in compensation calculations, compares with “hub” infrastructure (i.e. extraction facilities). We measured vegetation cover and plant species richness using the line-intercept method, along transects running from infrastructure/control sites outward for 500 m, accounting for wind direction to identify dust deposition impacts. Findings from 24 transects were extrapolated to the broader plateau by mapping total landscape infrastructure network using GPS data and satellite imagery. Vegetation cover and species richness were significantly lower at development sites than controls. These differences disappeared within 25 m of the edge of the area physically occupied by infrastructure. The current habitat footprint of gas infrastructure is 220 ± 19 km2 across the Ustyurt (total ∼ 100,000 km2), 37 ± 6% of which is linear infrastructure. Vegetation impacts diminish rapidly with increasing distance from infrastructure, and localized dust deposition does not conspicuously extend the disturbance footprint. Habitat losses from gas extraction infrastructure cover 0.2% of the study area, but this reflects directly eliminated vegetation only. Impacts upon fauna pose a more difficult determination, as these require accounting for behavioral and demographic responses to disturbance by elusive mammals, including threatened species. This study demonstrates that impacts of linear infrastructure in regions such as the Ustyurt should be accounted for not just with respect to development sites but also associated transportation and delivery routes. PMID:24455163
Quantifying habitat impacts of natural gas infrastructure to facilitate biodiversity offsetting.
Jones, Isabel L; Bull, Joseph W; Milner-Gulland, Eleanor J; Esipov, Alexander V; Suttle, Kenwyn B
2014-01-01
Habitat degradation through anthropogenic development is a key driver of biodiversity loss. One way to compensate losses is "biodiversity offsetting" (wherein biodiversity impacted is "replaced" through restoration elsewhere). A challenge in implementing offsets, which has received scant attention in the literature, is the accurate determination of residual biodiversity losses. We explore this challenge for offsetting gas extraction in the Ustyurt Plateau, Uzbekistan. Our goal was to determine the landscape extent of habitat impacts, particularly how the footprint of "linear" infrastructure (i.e. roads, pipelines), often disregarded in compensation calculations, compares with "hub" infrastructure (i.e. extraction facilities). We measured vegetation cover and plant species richness using the line-intercept method, along transects running from infrastructure/control sites outward for 500 m, accounting for wind direction to identify dust deposition impacts. Findings from 24 transects were extrapolated to the broader plateau by mapping total landscape infrastructure network using GPS data and satellite imagery. Vegetation cover and species richness were significantly lower at development sites than controls. These differences disappeared within 25 m of the edge of the area physically occupied by infrastructure. The current habitat footprint of gas infrastructure is 220 ± 19 km2 across the Ustyurt (total ∼ 100,000 km2), 37 ± 6% of which is linear infrastructure. Vegetation impacts diminish rapidly with increasing distance from infrastructure, and localized dust deposition does not conspicuously extend the disturbance footprint. Habitat losses from gas extraction infrastructure cover 0.2% of the study area, but this reflects directly eliminated vegetation only. Impacts upon fauna pose a more difficult determination, as these require accounting for behavioral and demographic responses to disturbance by elusive mammals, including threatened species. This study demonstrates that impacts of linear infrastructure in regions such as the Ustyurt should be accounted for not just with respect to development sites but also associated transportation and delivery routes.
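The line-intercept arithmetic behind the vegetation-cover figures reported above is simple enough to sketch directly; the intercept lengths below are fabricated purely to show the calculation.

```python
# Line-intercept method: percent cover = total intercepted length / transect length.
# The intercept segments below are fabricated purely to illustrate the arithmetic.

def percent_cover(intercepts_m, transect_length_m=500.0):
    """intercepts_m: lengths (m) of the transect intercepted by live vegetation."""
    return 100.0 * sum(intercepts_m) / transect_length_m

if __name__ == "__main__":
    near_infrastructure = [0.4, 1.1, 0.6, 2.0]     # sparse cover close to a well pad
    control_site = [3.2, 5.5, 4.1, 6.8, 2.9]       # denser cover at a control transect
    print(f"near infrastructure: {percent_cover(near_infrastructure):.1f}% cover")
    print(f"control: {percent_cover(control_site):.1f}% cover")
```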
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonder, J.; Brooker, A.; Burton, E.
This presentation discusses current research at NREL on advanced wireless power transfer vehicle and infrastructure analysis. The potential benefits of E-roadway include more electrified driving miles from battery electric vehicles, plug-in hybrid electric vehicles, or even properly equipped hybrid electric vehicles (i.e., more electrified miles could be obtained from a given battery size, or electrified driving miles could be maintained while using smaller and less expensive batteries, thereby increasing cost competitiveness and potential market penetration). The system optimization aspect is key given the potential impact of this technology on the vehicles, the power grid and the road infrastructure.
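The "more electrified miles from a given battery size" argument can be put into rough numbers with the sketch below. The consumption rate, battery capacity, roadway coverage and wireless transfer power are all assumed values chosen only for illustration, and battery recharge while driving is ignored.

```python
# Back-of-the-envelope estimate of electrified driving range with and without
# dynamic wireless power transfer. All parameter values are illustrative assumptions.

def electrified_miles(battery_kwh, consumption_kwh_per_mile,
                      e_road_fraction=0.0, transfer_kw=0.0, speed_mph=60.0):
    """Range (miles) until the battery is depleted, with a fraction of the route
    electrified at a given wireless transfer power."""
    draw_off_road = consumption_kwh_per_mile
    # Net draw on electrified roadway; clamped at zero (recharging while driving ignored).
    draw_on_road = max(consumption_kwh_per_mile - transfer_kw / speed_mph, 0.0)
    avg_draw = (1 - e_road_fraction) * draw_off_road + e_road_fraction * draw_on_road
    return battery_kwh / avg_draw if avg_draw > 0 else float("inf")

if __name__ == "__main__":
    base = electrified_miles(40, 0.3)
    with_eroad = electrified_miles(40, 0.3, e_road_fraction=0.2, transfer_kw=20)
    print(f"without E-roadway: {base:.0f} miles; with 20% electrified roadway: {with_eroad:.0f} miles")
```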
Top Cyber: Developing the Top One Percent to Defeat the Advanced Persistent Threat
2014-02-13
Critical Infrastructure.” V3-CO-UK, 7 May 2013. http://www.v3.co.uk/v3-uk/news/2266397/us- government-chinese- hackers -have-the-skills-to-take-down...it is a source of great risk; and, for hackers , it is the super-highway to a target-rich environment. For the U.S. Government (USG), cyberspace is...worldwide. Ultimately, for all its benefits, the Internet presents an “Achilles’ Heel” in the defense of the nation’s Critical Infrastructure and Key
NASA Astrophysics Data System (ADS)
Bolton, Richard W.; Dewey, Allen; Horstmann, Paul W.; Laurentiev, John
1997-01-01
This paper examines the role virtual enterprises will have in supporting future business engagements and the resulting technology requirements. Two representative end-user scenarios are proposed that define the requirements for 'plug-and-play' information infrastructure frameworks and architectures necessary to enable 'virtual enterprises' in US manufacturing industries. The scenarios provide a high-level 'needs analysis' for identifying key technologies, defining a reference architecture, and developing compliant reference implementations. Virtual enterprises are short-term consortia or alliances of companies formed to address fast-changing opportunities. Members of a virtual enterprise carry out their tasks as if they all worked for a single organization under 'one roof', using 'plug-and-play' information infrastructure frameworks and architectures to access and manage all information needed to support the product cycle. 'Plug-and-play' information infrastructure frameworks and architectures are required to enhance collaboration between companies working together on different aspects of a manufacturing process. This new form of collaborative computing will decrease cycle time and increase responsiveness to change.
Creating infrastructure supportive of evidence-based nursing practice: leadership strategies.
Newhouse, Robin P
2007-01-01
Nursing leadership is the cornerstone of successful evidence-based practice (EBP) programs within health care organizations. The key to success is a strategic approach to building an EBP infrastructure, with allocation of appropriate human and material resources. This article indicates the organizational infrastructure that enables evidence-based nursing practice and strategies for leaders to enhance evidence-based practice using "the conceptual model for considering the determinants of diffusion, dissemination, and implementation of innovations in health service delivery and organization." Enabling EBP within organizations is important for promoting positive outcomes for nurses and patients. Fostering EBP is not a static or immediate outcome, but a long-term developmental process within organizations. Implementation requires multiple strategies to cultivate a culture of inquiry where nurses generate and answer important questions to guide practice. Organizations that can enable the culture and build infrastructure to help nurses develop EBP competencies will produce a professional environment that will result in both personal growth for their staff and improvements in quality that would not otherwise be possible.
Analysis of Pervasive Mobile Ad Hoc Routing Protocols
NASA Astrophysics Data System (ADS)
Qadri, Nadia N.; Liotta, Antonio
Mobile ad hoc networks (MANETs) are a fundamental element of pervasive networks and therefore of pervasive systems that truly support pervasive computing, where users can communicate anywhere, anytime and on-the-fly. In fact, future advances in pervasive computing rely on advancements in mobile communication, which includes both infrastructure-based wireless networks and non-infrastructure-based MANETs. MANETs introduce a new communication paradigm which does not require a fixed infrastructure - they rely on wireless terminals for routing and transport services. Due to the highly dynamic topology, the absence of an established infrastructure for centralized administration, bandwidth-constrained wireless links, and limited resources in MANETs, it is challenging to design an efficient and reliable routing protocol. This chapter reviews the key studies carried out so far on the performance of mobile ad hoc routing protocols. We discuss performance issues and metrics required for the evaluation of ad hoc routing protocols. This leads to a survey of existing work, which captures the performance of ad hoc routing algorithms and their behaviour from different perspectives and highlights avenues for future research.
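The evaluation metrics such surveys rely on (packet delivery ratio, average end-to-end delay and normalized routing overhead) can be computed from a simple packet event log, as in the sketch below; the log format is invented for illustration and is not tied to any particular simulator.

```python
# Sketch: computing common ad hoc routing evaluation metrics from a simple,
# invented event log (not any particular simulator's trace format).
from dataclasses import dataclass

@dataclass
class PacketEvent:
    kind: str        # "sent", "received", or "routing" (control packet)
    packet_id: int
    time_s: float

def routing_metrics(events):
    sent = {e.packet_id: e.time_s for e in events if e.kind == "sent"}
    recv = {e.packet_id: e.time_s for e in events if e.kind == "received"}
    routing_pkts = sum(1 for e in events if e.kind == "routing")
    delivered = [pid for pid in recv if pid in sent]
    pdr = len(delivered) / len(sent) if sent else 0.0
    delay = (sum(recv[p] - sent[p] for p in delivered) / len(delivered)) if delivered else 0.0
    overhead = routing_pkts / len(delivered) if delivered else float("inf")
    return {"packet_delivery_ratio": pdr,
            "avg_end_to_end_delay_s": delay,
            "normalized_routing_overhead": overhead}

if __name__ == "__main__":
    log = [PacketEvent("sent", 1, 0.00), PacketEvent("received", 1, 0.03),
           PacketEvent("sent", 2, 0.10), PacketEvent("routing", 0, 0.11),
           PacketEvent("sent", 3, 0.20), PacketEvent("received", 3, 0.26)]
    print(routing_metrics(log))
```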
ICAT: Integrating data infrastructure for facilities based science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flannery, Damian; Matthews, Brian; Griffin, Tom
2009-12-21
ICAT: Integrating data infrastructure for facilities based science Damian Flannery, Brian Matthews, Tom Griffin, Juan Bicarregui, Michael Gleaves, Laurent Lerusse, Roger Downing, Alun Ashton, Shoaib Sufi, Glen Drinkwater, Kerstin Kleese Abstract: Scientific facilities, in particular large-scale photon and neutron sources, have demanding requirements to manage the increasing quantities of experimental data they generate in a systematic and secure way. In this paper, we describe the ICAT infrastructure for cataloguing facility generated experimental data which has been in development within STFC and DLS for several years. We consider the factors which have influenced its design and describe its architecture and metadata model, a key tool in the management of data. We go on to give an outline of its current implementation and use, with plans for its future development.
A framework for considering externalities in urban water asset management.
Marlow, David; Pearson, Leonie; Macdonald, Darla Hatton; Whitten, Stuart; Burn, Stewart
2011-01-01
Urban communities rely on a complex network of infrastructure assets to connect them to water resources. There is considerable capital investment required to maintain, upgrade and extend this infrastructure. As the remit of a water utility is broader than just financial considerations, infrastructure investment decisions must be made in light of environmental and societal issues. One way of facilitating this is to integrate consideration of externalities into decision making processes. This paper considers the concept of externalities from an asset management perspective. A case study is provided to show the practical implications to a water utility and asset managers. A framework for the inclusion of externalities in asset management decision making is also presented. The potential for application of the framework is highlighted through a brief consideration of its key elements.
Architecture of the local spatial data infrastructure for regional climate change research
NASA Astrophysics Data System (ADS)
Titov, Alexander; Gordov, Evgeny
2013-04-01
Georeferenced datasets (meteorological databases, modeling and reanalysis results, etc.) are actively used in modeling and analysis of climate change on various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their size, which may reach tens of terabytes for a single dataset, studies in the area of climate and environmental change require special software support based on an SDI approach. A dedicated architecture of a local spatial data infrastructure aimed at regional climate change analysis using modern web mapping technologies is presented. A geoportal is a key element of any SDI, allowing searching of geoinformation resources (datasets and services) using metadata catalogs, producing geospatial data selections by their parameters (data access functionality), and managing services and applications for cartographical visualization. It should be noted that, for objective reasons such as large dataset volumes, the complexity of the data models used, and syntactic and semantic differences between datasets, the development of environmental geodata access, processing and visualization services turns out to be quite a complex task. Those circumstances were taken into account while developing the architecture of the local spatial data infrastructure as a universal framework providing geodata services. The architecture presented therefore includes: 1. A model for storing big sets of regional georeferenced data that is effective in terms of search, access, retrieval and subsequent statistical processing, allowing in particular the storage of frequently used values (like monthly and annual climate change indices, etc.), thus providing different temporal views of the datasets; 2. A general architecture of the corresponding software components handling geospatial datasets within the storage model; 3. A metadata catalog describing in detail, using the ISO 19115 and CF-convention standards, the datasets used in climate research, as a basic element of the spatial data infrastructure, together with its publication according to the OGC CSW (Catalogue Service for the Web) specification; 4. Computational and mapping web services to work with geospatial datasets based on the OWS (OGC Web Services) standards: WMS, WFS, WPS; 5. A geoportal as a key element of the thematic regional spatial data infrastructure, also providing a software framework for dedicated web application development. To realize the web mapping services, GeoServer software is used, since it provides a natural WPS implementation as a separate software module. To provide geospatial metadata services, the GeoNetwork opensource (http://geonetwork-opensource.org) product is planned to be used, as it supports the ISO 19115/ISO 19119/ISO 19139 metadata standards as well as the ISO CSW 2.0 profile for both client and server. To implement thematic applications based on geospatial web services within the framework of the local SDI geoportal, the following open source software has been selected: 1. The OpenLayers JavaScript library, providing basic web mapping functionality for a thin client such as a web browser; 2. The GeoExt/ExtJS JavaScript libraries for building client-side web applications working with geodata services. The web interface developed will be similar to the interface of popular desktop GIS applications such as uDig, QuantumGIS, etc. The work is partially supported by RF Ministry of Education and Science grant 8345, SB RAS Program VIII.80.2.1 and IP 131.
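A minimal example of the kind of OGC request such a geoportal client issues is sketched below for a WMS 1.3.0 GetMap call using the requests library. The server URL, layer name and bounding box are placeholders rather than the infrastructure's real services.

```python
# Sketch of an OGC WMS 1.3.0 GetMap request, as a geoportal client might issue it.
# The server URL, layer name, and bounding box are placeholders.
import requests

WMS_ENDPOINT = "https://example.org/geoserver/wms"   # hypothetical GeoServer instance

def get_map_png(layer: str, bbox: str, width: int = 800, height: int = 600) -> bytes:
    params = {
        "service": "WMS",
        "version": "1.3.0",
        "request": "GetMap",
        "layers": layer,
        "bbox": bbox,          # "min_lat,min_lon,max_lat,max_lon" for EPSG:4326 in WMS 1.3.0
        "crs": "EPSG:4326",
        "width": width,
        "height": height,
        "format": "image/png",
    }
    r = requests.get(WMS_ENDPOINT, params=params, timeout=60)
    r.raise_for_status()
    return r.content

if __name__ == "__main__":
    png = get_map_png("climate:annual_mean_temperature", "50,60,75,120")
    with open("regional_temperature.png", "wb") as f:
        f.write(png)
```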
NASA Astrophysics Data System (ADS)
Schaap, Dick M. A.; Fichaut, Michele
2014-05-01
The second phase of the SeaDataNet project has been well underway since October 2011 and is making good progress. The main objective is to improve operations and to progress towards an efficient data management infrastructure able to handle the diversity and large volume of data collected via research cruises and monitoring activities in European marine waters and global oceans. The SeaDataNet infrastructure comprises a network of interconnected data centres and a central SeaDataNet portal. The portal provides users a unified and transparent overview of the metadata and controlled access to the large collections of data sets managed by the interconnected data centres, as well as to the various SeaDataNet standards and tools. Recently the 1st Innovation Cycle has been completed, including upgrading of the CDI Data Discovery and Access service to ISO 19139 and making it fully INSPIRE compliant. The extensive SeaDataNet vocabularies have been upgraded too and implemented for all SeaDataNet European metadata directories. SeaDataNet is setting and governing marine data standards, and exploring and establishing interoperability solutions to connect to other e-infrastructures on the basis of standards from ISO (19115, 19139), OGC (WMS, WFS, CS-W and SWE), and OpenSearch. The population of the directories has also increased considerably through cooperation and involvement in associated EU projects and initiatives. SeaDataNet now gives an overview of and access to more than 1.4 million data sets for physical oceanography, chemistry, geology, geophysics, bathymetry and biology from more than 90 connected data centres in 30 countries riparian to European seas. Access to marine data is also a key issue for the implementation of the EU Marine Strategy Framework Directive (MSFD). The EU communication 'Marine Knowledge 2020' underpins the importance of data availability and of harmonising access to marine data from different sources. SeaDataNet qualified itself for leading the data management component of EMODNet (the European Marine Observation and Data Network) that is promoted in the EU Communication. In the past 4 years EMODNet portals have been initiated for the marine data themes digital bathymetry, chemistry, physical oceanography, geology, biology, and seabed habitat mapping. These portals are now being expanded to all European seas in successor projects, which started in mid-2013 under EU DG MARE. EMODNet encourages more data providers to come forward for data sharing and to participate in the process of making complete overviews and homogeneous data products. The EMODNet Bathymetry project is very illustrative of the synergy with SeaDataNet and the added value of generating public data products. The project develops and publishes Digital Terrain Models (DTM) for the European seas, produced from survey and aggregated data sets. The portal provides a versatile DTM viewing service with many relevant map layers and functions for retrieval. A further refinement is taking place in the new phase. The presentation will give information on present services of the SeaDataNet infrastructure, highlight key achievements in SeaDataNet II so far, and give further insight into the EMODNet Bathymetry progress.
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: aerospace system and component representation using a hierarchical object-oriented component model, which enables the use of multimodels and enforces component interoperability; a collaborative software environment that streamlines the process of developing, sharing and integrating aerospace design and analysis models; and development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.
Web-GIS platform for green infrastructure in Bucharest, Romania
NASA Astrophysics Data System (ADS)
Sercaianu, Mihai; Petrescu, Florian; Aldea, Mihaela; Oana, Luca; Rotaru, George
2015-06-01
In the last decade, reducing urban pollution and improving the quality of public spaces have become more and more important issues for public administration authorities in Romania. The paper describes the development of a web-GIS solution dedicated to monitoring the green infrastructure of Bucharest, Romania. The system allows urban residents (citizens) to themselves collect and directly report relevant information regarding the current status of the city's green infrastructure. Consequently, the citizens become an active component of the decision-support process within the public administration. Besides the usual technical characteristics of such geo-information processing systems, the complex legal and organizational problems that arise in collecting information directly from citizens required additional analysis concerning, for example, local government involvement, environmental protection agency regulations and public entity requirements. Designing and implementing the whole information exchange process, based on the active interaction between citizens and public administration bodies, required the use of the "citizen-sensor" concept deployed with GIS tools. The information collected and reported from the field is related to many factors, which are not always limited to the city level, providing the possibility to consider the green infrastructure as a whole. The "citizen-request" web-GIS solution for green infrastructure monitoring is characterized by very diverse urban information, because the green infrastructure itself is conditioned by many urban elements, such as urban infrastructures, urban infrastructure works and construction density.
Feasibility and costs of water fluoridation in remote Australian Aboriginal communities
Ehsani, Jonathon P; Bailie, Ross
2007-01-01
Background Fluoridation of public water supplies remains the key potential strategy for prevention of dental caries. The water supplies of many remote Indigenous communities do not contain adequate levels of natural fluoride. The small and dispersed nature of communities presents challenges for the provision of fluoridation infrastructure and until recently smaller settlements were considered unfavourable for cost-effective water fluoridation. Technological advances in water treatment and fluoridation are resulting in new and more cost-effective water fluoridation options and recent cost analyses support water fluoridation for communities of less than 1,000 people. Methods Small scale fluoridation plants were installed in two remote Northern Territory communities in early 2004. Fluoride levels in community water supplies were expected to be monitored by local staff and by a remote electronic system. Site visits were undertaken by project investigators at commissioning and approximately two years later. Interviews were conducted with key informants and documentation pertaining to costs of the plants and operational reports were reviewed. Results The fluoridation plants were operational for about 80% of the trial period. A number of technical features that interfered with plant operation were identified and addressed though redesign. Management systems and the attitudes and capacity of operational staff also impacted on the effective functioning of the plants. Capital costs for the wider implementation of these plants in remote communities is estimated at about $US94,000 with recurrent annual costs of $US11,800 per unit. Conclusion Operational issues during the trial indicate the need for effective management systems, including policy and funding responsibility. Reliable manufacturers and suppliers of equipment should be identified and contractual agreements should provide for ongoing technical assistance. Water fluoridation units should be considered as a potential priority component of health related infrastructure in at least the larger remote Indigenous communities which have inadequate levels of natural fluoride and high levels of dental caries. PMID:17555604
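The capital and recurrent figures above translate into a rough per-person cost as sketched below; the community sizes, plant lifetime and simple straight-line annualization (no discounting) are assumptions for illustration only.

```python
# Rough annual cost per person of a small-community fluoridation plant, using the
# capital and recurrent figures quoted above. The community sizes, plant lifetime,
# and straight-line annualization (no discounting) are illustrative assumptions.

CAPITAL_USD = 94_000
RECURRENT_USD_PER_YEAR = 11_800

def annual_cost_per_person(population: int, plant_life_years: int = 15) -> float:
    annualized_capital = CAPITAL_USD / plant_life_years   # simple straight-line spread
    return (annualized_capital + RECURRENT_USD_PER_YEAR) / population

if __name__ == "__main__":
    for pop in (500, 1_000, 2_000):
        print(f"community of {pop}: ~${annual_cost_per_person(pop):.0f} per person per year")
```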
Experience from the 1st Year running a Massive High Quality Videoconferencing Service for the LHC
NASA Astrophysics Data System (ADS)
Fernandes, Joao; Baron, Thomas; Bompastor, Bruno
2014-06-01
In the last few years, we have witnessed an explosion of visual collaboration initiatives in the industry. Several advances in video services and also in their underlying infrastructure are currently improving the way people collaborate globally. These advances are creating new usage paradigms: any device in any network can be used to collaborate, in most cases with an overall high quality. To keep pace with this technological progression, the CERN IT Department launched a service based on the Vidyo product. This new service architecture introduces Adaptive Video Layering, which dynamically optimizes the video for each endpoint by leveraging H.264 Scalable Video Coding (SVC)-based compression technology. It combines intelligent AV routing techniques with the flexibility of H.264 SVC video compression, in order to achieve resilient video collaboration over the Internet, 3G and WiFi. We present an overview of the results that have been achieved after this major change. In particular, the first year of operation of the CERN Vidyo service will be described in terms of performance and scale: the service became part of the daily activity of the LHC collaborations, reaching a monthly usage of more than 3200 meetings with a peak of 750 simultaneous connections. We also present some key features such as the integration with CERN Indico. LHC users can now join a Vidyo meeting either from their personal computer or a CERN videoconference room simply from an Indico event page, with the ease of a single click. The roadmap for future improvements, service extensions and core infrastructure tendencies such as cloud-based services and virtualization of system components will also be discussed. Vidyo's strengths allowed us to build a universal service (it is accessible from PCs, but also videoconference rooms, traditional phones, tablets and smartphones), developed with 3 key ideas in mind: ease of use, full integration and high quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaus, P.S.
This Configuration Management Implementation Plan (CMIP) was developed to assist in managing systems, structures, and components (SSCs), to facilitate the effective control and statusing of changes to SSCs, and to ensure technical consistency between design, performance, and operational requirements. Its purpose is to describe the approach Privatization Infrastructure will take in implementing a configuration management program, to identify the Program's products that need configuration management control, to determine the rigor of control, and to identify the mechanisms for that control.
Aligning business and information technology domains: strategic planning in hospitals.
Henderson, J C; Thomas, J B
1992-01-01
This article develops a framework for strategic information technology (IT) management in hospitals, termed the Strategic Alignment Model. This model is defined in terms of four domains--business strategy, IT strategy, organizational infrastructure, and IT infrastructure--each with its constituent components. The concept of strategic alignment is developed using two fundamental dimensions--strategic fit and integration. Different perspectives that hospitals use for aligning the various domains are discussed, and a prescriptive model of strategic IT planning is proposed.
A SCORM Thin Client Architecture for E-Learning Systems Based on Web Services
ERIC Educational Resources Information Center
Casella, Giovanni; Costagliola, Gennaro; Ferrucci, Filomena; Polese, Giuseppe; Scanniello, Giuseppe
2007-01-01
In this paper we propose an architecture of e-learning systems characterized by the use of Web services and a suitable middleware component. These technical infrastructures allow us to extend the system with new services as well as to integrate and reuse heterogeneous software e-learning components. Moreover, they let us better support the…
Partially Key Distribution with Public Key Cryptosystem Based on Error Control Codes
NASA Astrophysics Data System (ADS)
Tavallaei, Saeed Ebadi; Falahati, Abolfazl
Due to the low level of security of public key cryptosystems based on number theory, and to fundamental difficulties such as "key escrow" in Public Key Infrastructure (PKI) and the need for a secure channel in ID-based cryptography, a new key distribution cryptosystem based on Error Control Codes (ECC) is proposed. The idea is realized through modifications of the McEliece cryptosystem. The security of the ECC cryptosystem derives from the NP-completeness of general block-code decoding. Using ECC also provides the capability of generating public keys of variable lengths, suitable for different applications. Given the decreasing security of cryptosystems based on number theory and the increasing lengths of their keys, the use of such code-based cryptosystems seems likely to become unavoidable in the future.
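A toy, insecure illustration of the McEliece-style construction referred to above is sketched below using the Hamming(7,4) code; real designs use much larger Goppa codes, and every parameter here is chosen only to show the mechanics of scrambling, permuting and adding an intentional error.

```python
# Toy illustration of McEliece-style code-based encryption with the Hamming(7,4)
# code over GF(2). The sizes here are far too small to be secure; the sketch only
# demonstrates the mechanics of the scheme (public key G' = S*G*P, ciphertext m*G'+e).
import numpy as np

A = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=int)
G = np.hstack([np.eye(4, dtype=int), A])        # systematic generator matrix (4x7)
H = np.hstack([A.T, np.eye(3, dtype=int)])      # parity-check matrix (3x7)

def gf2_inverse(m):
    """Invert a square binary matrix over GF(2) by Gauss-Jordan elimination."""
    n = m.shape[0]
    aug = np.hstack([m % 2, np.eye(n, dtype=int)])
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r, col])   # raises if singular
        aug[[col, pivot]] = aug[[pivot, col]]
        for r in range(n):
            if r != col and aug[r, col]:
                aug[r] ^= aug[col]
    return aug[:, n:]

rng = np.random.default_rng(7)
while True:                                      # pick a random invertible scrambler S
    S = rng.integers(0, 2, size=(4, 4))
    try:
        S_inv = gf2_inverse(S)
        break
    except StopIteration:
        continue
perm = rng.permutation(7)                        # secret permutation P
G_pub = ((S @ G) % 2)[:, perm]                   # public key G' = S * G * P

def encrypt(message_bits):
    error = np.zeros(7, dtype=int)
    error[rng.integers(7)] = 1                   # one intentional error (Hamming corrects 1)
    return (message_bits @ G_pub + error) % 2

def decrypt(cipher):
    c = np.empty(7, dtype=int)
    c[perm] = cipher                             # undo the permutation
    syndrome = (H @ c) % 2
    if syndrome.any():                           # correct the single flipped bit
        for i in range(7):
            if np.array_equal(H[:, i], syndrome):
                c[i] ^= 1
                break
    return (c[:4] @ S_inv) % 2                   # unscramble the systematic part

if __name__ == "__main__":
    msg = np.array([1, 0, 1, 1])
    print("recovered:", decrypt(encrypt(msg)))   # should print [1 0 1 1]
```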
Applying Aspect-Oriented Programming to Intelligent Synthesis
NASA Technical Reports Server (NTRS)
Filman, Robert E.; Norvig, Peter (Technical Monitor)
2000-01-01
I discuss a component-centered, aspect-oriented system, the Object Infrastructure Framework (OIF), NASA's initiative on Intelligent Synthesis Environments (ISE), and the application of OIF to the architecture of ISE.
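OIF itself predates modern scripting ecosystems, but the flavour of weaving cross-cutting concerns around component methods can be illustrated generically with a Python decorator; the sketch below is an analogy, not code from OIF or ISE.

```python
# Generic illustration of aspect-oriented "advice" wrapped around a component method.
# This is an analogy in Python, not code from NASA's Object Infrastructure Framework.
import functools
import time

def with_logging_and_timing(func):
    """A cross-cutting concern (logging plus timing) woven around a component method."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"[aspect] {func.__name__} took {elapsed * 1000:.2f} ms")
        return result
    return wrapper

class SynthesisComponent:
    @with_logging_and_timing
    def run_analysis(self, design_parameters: dict) -> float:
        # Stand-in for a domain computation inside a synthesis-environment component.
        return sum(design_parameters.values()) / len(design_parameters)

if __name__ == "__main__":
    score = SynthesisComponent().run_analysis({"span": 12.0, "chord": 3.0})
    print("analysis result:", score)
```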
Alternative Fuels Data Center: California Ramps Up Biofuels Infrastructure
Fuel providers are able to offer station owners turn-key packages, complete with tanks and dispensers, and are helping to streamline permitting processes.
25 CFR 292.2 - How are key terms defined in this part?
Code of Federal Regulations, 2010 CFR
2010-04-01
... or a public road or right-of-way and includes parcels that touch at a point. Former reservation means... establish that its governmental functions, infrastructure or services will be directly, immediately and...
25 CFR 292.2 - How are key terms defined in this part?
Code of Federal Regulations, 2011 CFR
2011-04-01
... or a public road or right-of-way and includes parcels that touch at a point. Former reservation means... establish that its governmental functions, infrastructure or services will be directly, immediately and...
Transportation planning for electric vehicles and associated infrastructure.
DOT National Transportation Integrated Search
2017-05-01
Planning is the key to successful adoption and deployment of any new technology, and : it is particularly important when that advancement involves a paradigm shift such as : electrified transportation. At its core, electric transportation is largely ...
Infrastructure and technology for sustainable livable cities.
DOT National Transportation Integrated Search
2016-07-31
Providing access and mobility for key installations and businesses located in cities become a : challenge when there is limited public transport and non-motorized facilities. The challenges : are significant in cities that are subjected to severe win...
Takeda, Hiroshi; Matsumura, Yasushi; Nakagawa, Katsuhiko; Teratani, Tadamasa; Qiyan, Zhang; Kusuoka, Hideo; Matsuoka, Masami
2004-01-01
To share healthcare information and to promote cooperation among healthcare providers and customers (patients) in a computerized network environment, a non-profit organization (NPO) named OCHIS was established in Osaka, Japan in 2003. Since security and confidentiality issues on the Internet have been major concerns for OCHIS, the system has been based on a healthcare public key infrastructure (HPKI); it was found that problems remained to be solved technically and operationally. An experimental study was conducted to elucidate the central and local functions of a registration authority and a time stamp authority, under contract with the Ministry of Economy, Trade and Industry in 2003. This paper describes the experimental design with the NPO and the results of the study concerning message security and HPKI. The developed system has been operated practically in the Osaka urban area.
Schneider, Maria Victoria; Griffin, Philippa C; Tyagi, Sonika; Flannery, Madison; Dayalan, Saravanan; Gladman, Simon; Watson-Haigh, Nathan; Bayer, Philipp E; Charleston, Michael; Cooke, Ira; Cook, Rob; Edwards, Richard J; Edwards, David; Gorse, Dominique; McConville, Malcolm; Powell, David; Wilkins, Marc R; Lonie, Andrew
2017-06-30
EMBL Australia Bioinformatics Resource (EMBL-ABR) is a developing national research infrastructure, providing bioinformatics resources and support to life science and biomedical researchers in Australia. EMBL-ABR comprises 10 geographically distributed national nodes with one coordinating hub, with current funding provided through Bioplatforms Australia and the University of Melbourne for its initial 2-year development phase. The EMBL-ABR mission is to: (1) increase Australia's capacity in bioinformatics and data sciences; (2) contribute to the development of training in bioinformatics skills; (3) showcase Australian data sets at an international level; and (4) enable engagement in international programs. The activities of EMBL-ABR are focussed in six key areas, aligning with comparable international initiatives such as ELIXIR, CyVerse and NIH Commons. These key areas (Tools, Data, Standards, Platforms, Compute and Training) are described in this article. © The Author 2017. Published by Oxford University Press.
On-Site Fabrication Infrastructure to Enable Efficient Exploration and Utilization of Space
NASA Technical Reports Server (NTRS)
Howell, Joe T.; Fikes, John C.; McLemore, Carole A.; Good, James E.
2008-01-01
Unlike past one-at-a-time mission approaches, system-of-systems infrastructures will be needed to enable ambitious scenarios for sustainable future space exploration and utilization. So what do we do when we get to the Moon to sustain exploration? On-site fabrication infrastructure will be needed to support habitat structure development, tool and mechanical part fabrication, as well as repair and replacement of ground support and space mission hardware such as life support items, vehicle components and crew systems. The on-site fabrication infrastructure will need the In Situ Fabrication and Repair (ISFR) element, working in conjunction with the In Situ Resource Utilization (ISRU) element, to live off the land. The ISFR element has worked closely with the ISRU element in the past year to assess the ability to use lunar regolith as a viable feedstock for fabrication material. Preliminary work has shown promise and the ISFR element will continue to concentrate on this activity. Fabrication capabilities have been furthered with the process certification effort that, when completed, will allow space-qualified hardware to be manufactured. Materials being investigated include titanium and aluminum alloys as well as lunar regolith simulants with binders. This paper addresses the latest advancements made in fabrication infrastructure that supports efficient, affordable, reliable space exploration systems and logistics; infrastructure that allows sustained, affordable and highly effective operations on the Moon and beyond.
International Symposium on Grids and Clouds (ISGC) 2016
NASA Astrophysics Data System (ADS)
The International Symposium on Grids and Clouds (ISGC) 2016 will be held at Academia Sinica in Taipei, Taiwan from 13-18 March 2016, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). The theme of ISGC 2016 is "Ubiquitous e-infrastructures and Applications". Contemporary research is impossible without a strong IT component - researchers rely on the existence of stable and widely available e-infrastructures and their higher level functions and properties. As a result of these expectations, e-Infrastructures are becoming ubiquitous, providing an environment that supports large scale collaborations that deal with global challenges as well as smaller, temporary research communities focusing on particular scientific problems. To support those diversified communities and their needs, the e-Infrastructures themselves are becoming more layered and multifaceted, supporting larger groups of applications. Following the call of last year's conference, ISGC 2016 continues its aim to bring together users and application developers with those responsible for the development and operation of multi-purpose ubiquitous e-Infrastructures. Topics of discussion include Physics (including HEP) and Engineering Applications, Biomedicine & Life Sciences Applications, Earth & Environmental Sciences & Biodiversity Applications, Humanities, Arts, and Social Sciences (HASS) Applications, Virtual Research Environment (including Middleware, tools, services, workflow, etc.), Data Management, Big Data, Networking & Security, Infrastructure & Operations, Infrastructure Clouds and Virtualisation, Interoperability, Business Models & Sustainability, Highly Distributed Computing Systems, and High Performance & Technical Computing (HPTC), etc.
Infrastructure for large space telescopes
NASA Astrophysics Data System (ADS)
MacEwen, Howard A.; Lillie, Charles F.
2016-10-01
It is generally recognized (e.g., in the National Aeronautics and Space Administration response to recent congressional appropriations) that future space observatories must be serviceable, even if they are orbiting in deep space (e.g., around the Sun-Earth libration point, SEL2). On the basis of this legislation, we believe that budgetary considerations throughout the foreseeable future will require that large, long-lived astrophysics missions be designed as evolvable, semipermanent observatories that will be serviced using an operational, in-space infrastructure. We believe that the development of this infrastructure will include the design and development of a small to mid-sized servicing vehicle (MiniServ) as a key element of an affordable infrastructure for in-space assembly and servicing of future space vehicles. This can be accomplished by adapting technology developed over the past half-century into a vehicle approximately the size of the ascent stage of the Apollo Lunar Module, providing some of the servicing capabilities that will be needed by very large telescopes located in deep space in the near future (2020s and 2030s). We specifically address the need for a detailed study of these servicing requirements and of current proposals for using presently available technologies to provide the appropriate infrastructure.
NASA Astrophysics Data System (ADS)
Wiggins, H. V.; Warnick, W. K.; Hempel, L. C.; Henk, J.; Sorensen, M.; Tweedie, C. E.; Gaylord, A. G.
2007-12-01
As the creation and use of geospatial data in research, management, logistics, and education applications has proliferated, there is now a tremendous potential for advancing science through a variety of cyber-infrastructure applications, including Spatial Data Infrastructure (SDI) and related technologies. SDIs provide a necessary and common framework of standards, security, policies, procedures, and technology to support the effective acquisition, coordination, dissemination and use of geospatial data by multiple and distributed stakeholder and user groups. Despite the numerous research activities in the Arctic, there is no established SDI and, because of this lack of a coordinated infrastructure, there is inefficiency, duplication of effort, and reduced data quality and searchability of arctic geospatial data. The urgency for establishing this framework is significant considering the myriad of data that is being collected in celebration of the International Polar Year (IPY) in 2007-2008 and the current international momentum for an improved and integrated circum-arctic terrestrial-marine-atmospheric environmental observatories network. The key objective of this project is to lay the foundation for full implementation of an Arctic Spatial Data Infrastructure (ASDI) through an assessment of community needs, readiness, and resources and through the development of a prototype web-mapping portal.
Enabling fast charging - Infrastructure and economic considerations
NASA Astrophysics Data System (ADS)
Burnham, Andrew; Dufek, Eric J.; Stephens, Thomas; Francfort, James; Michelbacher, Christopher; Carlson, Richard B.; Zhang, Jiucai; Vijayagopal, Ram; Dias, Fernando; Mohanpurkar, Manish; Scoffield, Don; Hardy, Keith; Shirk, Matthew; Hovsapian, Rob; Ahmed, Shabbir; Bloom, Ira; Jansen, Andrew N.; Keyser, Matthew; Kreuzer, Cory; Markel, Anthony; Meintz, Andrew; Pesaran, Ahmad; Tanim, Tanvir R.
2017-11-01
The ability to charge battery electric vehicles (BEVs) on a time scale that is on par with the time to fuel an internal combustion engine vehicle (ICEV) would remove a significant barrier to the adoption of BEVs. However, for viability, fast charging at this time scale needs to also occur at a price that is acceptable to consumers. Therefore, the cost drivers for both BEV owners and charging station providers are analyzed. In addition, key infrastructure considerations are examined, including grid stability and delivery of power, the design of fast charging stations and the design and use of electric vehicle service equipment. Each of these aspects has technical barriers that need to be addressed, and is directly linked to economic impacts on use and implementation. This discussion focuses on both the economic and infrastructure issues which exist and need to be addressed for the effective implementation of fast charging at 400 kW and above. In so doing, it has been found that there is a distinct need to effectively manage the intermittent, high power demand of fast charging, strategically plan infrastructure corridors, and to further understand the cost of operation of charging infrastructure and BEVs.
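The station-side cost drivers discussed above, in particular demand charges driven by intermittent high-power draws and low utilization, can be made concrete with a simple levelized-cost sketch. Every tariff, utilization and capital figure below is an assumption for illustration, not a value from the paper.

```python
# Simple sketch of per-kWh cost at a fast-charging station, showing how demand
# charges and low utilization dominate. All tariff, utilization, and capital
# figures are illustrative assumptions, not values from the paper.

def cost_per_kwh(station_kw=400.0, sessions_per_day=8, kwh_per_session=50.0,
                 energy_rate=0.12, demand_charge_per_kw_month=15.0,
                 capital_usd=150_000, amortization_years=10):
    kwh_per_month = sessions_per_day * kwh_per_session * 30
    energy_cost = kwh_per_month * energy_rate
    demand_cost = station_kw * demand_charge_per_kw_month      # set by the peak draw
    capital_cost = capital_usd / (amortization_years * 12)
    return (energy_cost + demand_cost + capital_cost) / kwh_per_month

if __name__ == "__main__":
    for sessions in (2, 8, 24):
        print(f"{sessions:>2} sessions/day -> ${cost_per_kwh(sessions_per_day=sessions):.2f}/kWh")
```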
Enabling fast charging – Infrastructure and economic considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnham, Andrew; Dufek, Eric J.; Stephens, Thomas
The ability to charge battery electric vehicles (BEVs) on a time scale that is on par with the time to fuel an internal combustion engine vehicle (ICEV) would remove a significant barrier to the adoption of BEVs. However, for viability, fast charging at this time scale needs to also occur at a price that is acceptable to consumers. Therefore, the cost drivers for both BEV owners and charging station providers are analyzed. In addition, key infrastructure considerations are examined, including grid stability and delivery of power, the design of fast charging stations and the design and use of electric vehicle service equipment. Each of these aspects has technical barriers that need to be addressed, and is directly linked to economic impacts on use and implementation. This discussion focuses on both the economic and infrastructure issues which exist and need to be addressed for the effective implementation of fast charging at 400 kW and above. In so doing, it has been found that there is a distinct need to effectively manage the intermittent, high power demand of fast charging, strategically plan infrastructure corridors, and to further understand the cost of operation of charging infrastructure and BEVs.
Enabling fast charging – Infrastructure and economic considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnham, Andrew; Dufek, Eric J.; Stephens, Thomas
The ability to charge battery electric vehicles (BEVs) on a time scale that is on par with the time to fuel an internal combustion engine vehicle (ICEV) would remove a significant barrier to the adoption of BEVs. However, for viability, fast charging at this time scale needs to also occur at a price that is acceptable to consumers. Therefore, the cost drivers for both BEV owners and charging station providers are analyzed. In addition, key infrastructure considerations are examined, including grid stability and delivery of power, the design of fast charging stations and the design and use of electric vehicle service equipment. Each of these aspects has technical barriers that need to be addressed, and is directly linked to economic impacts on use and implementation. Here, this discussion focuses on both the economic and infrastructure issues which exist and need to be addressed for the effective implementation of fast charging up to 350 kW. In doing so, it has been found that there is a distinct need to effectively manage the intermittent, high power demand of fast charging, strategically plan infrastructure corridors, and to further understand the cost of operation of charging infrastructure and BEVs.
Enabling fast charging – Infrastructure and economic considerations
Burnham, Andrew; Dufek, Eric J.; Stephens, Thomas; ...
2017-10-23
The ability to charge battery electric vehicles (BEVs) on a time scale that is on par with the time to fuel an internal combustion engine vehicle (ICEV) would remove a significant barrier to the adoption of BEVs. However, for viability, fast charging at this time scale needs to also occur at a price that is acceptable to consumers. Therefore, the cost drivers for both BEV owners and charging station providers are analyzed. In addition, key infrastructure considerations are examined, including grid stability and delivery of power, the design of fast charging stations and the design and use of electric vehicle service equipment. Each of these aspects has technical barriers that need to be addressed, and is directly linked to economic impacts on use and implementation. Here, this discussion focuses on both the economic and infrastructure issues which exist and need to be addressed for the effective implementation of fast charging up to 350 kW. In doing so, it has been found that there is a distinct need to effectively manage the intermittent, high power demand of fast charging, strategically plan infrastructure corridors, and to further understand the cost of operation of charging infrastructure and BEVs.
Energy Theft in the Advanced Metering Infrastructure
NASA Astrophysics Data System (ADS)
McLaughlin, Stephen; Podkuiko, Dmitry; McDaniel, Patrick
Global energy generation and delivery systems are transitioning to a new computerized "smart grid". One of the principal components of the smart grid is an advanced metering infrastructure (AMI). AMI replaces the analog meters with computerized systems that report usage over digital communication interfaces, e.g., phone lines. However, with this infrastructure comes new risk. In this paper, we consider the means by which adversaries can defraud the electrical grid by manipulating AMI systems. We document the methods adversaries will use to attempt to manipulate energy usage data, and validate the viability of these attacks by performing penetration testing on commodity devices. Through these activities, we demonstrate that not only is theft still possible in AMI systems, but that current AMI devices introduce a myriad of new vectors for achieving it.
LHCb Build and Deployment Infrastructure for run 2
NASA Astrophysics Data System (ADS)
Clemencic, M.; Couturier, B.
2015-12-01
After the successful run 1 of the LHC, the LHCb Core software team has taken advantage of the long shutdown to consolidate and improve its build and deployment infrastructure. Several of the related projects have already been presented, such as the build system using Jenkins, as well as the LHCb Performance and Regression testing infrastructure. Some components are completely new, like the Software Configuration Database (using the graph database Neo4j) or the new package installation based on RPM packages. Furthermore, all those parts are integrated to allow easier and quicker releases of the LHCb software stack, thereby reducing the risk of operational errors. Integration and Regression tests are also now easier to implement, allowing the software checks to be improved further.
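The abstract mentions a Software Configuration Database built on the graph database Neo4j. As a purely illustrative sketch, such a database could be queried for the dependencies of a project release with the official Neo4j Python driver; the connection details, node labels, relationship types, and the project name/version below are assumptions, not the actual LHCb schema.

```python
from neo4j import GraphDatabase

# Connection details and graph schema are hypothetical placeholders.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (p:Project {name: $name, version: $version})-[:DEPENDS_ON]->(d:Project)
RETURN d.name AS name, d.version AS version
"""

with driver.session() as session:
    # Project name and version are placeholders for a release being built.
    for record in session.run(query, name="DaVinci", version="v42r0"):
        print(record["name"], record["version"])

driver.close()
```

Encoding inter-project dependencies as graph relationships is what makes questions like "what must be rebuilt if this package changes?" a single traversal query rather than a join over flat release lists.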
Federation for a Secure Enterprise
2016-09-10
12 October 2005 e. RFC Internet X.509 Public Key Infrastructure: Certification Path Building, 2005 f. Public Key Cryptography Standard, PKCS #1...v2.2: RSA Cryptography Standard, RSA Laboratories, October 27, 2012 g. PKCS#12 format PKCS #12 v1.0: Personal Information Exchange Syntax Standard, RSA...ClientHello padding extension, 2015-02-17 f. Elliptic Curve Cryptography (ECC) Cipher Suites for Transport Layer Security (TLS) Versions 1.2 and Earlier
Who is Responsible for Defending United States Interests in Cyberspace?
2013-03-01
The U.S. Chamber of Commerce worked openly to challenge legislation that potentially would have directed minimum standards for operators of key...of companies to enable private-public collaboration to protect critical infrastructure as the U.S. Chamber of Commerce actively identifies...cyberspace-lawmakers-say/50869/ (accessed November 12, 2012). 19 US Chamber of Commerce, “Key Vote letter on S. 3414, the Cybersecurity Act of 2012
Federated Ground Station Network Model and Interface Specification
2014-12-01
interface definition language JSON JavaScript Object Notation LEO low Earth orbit LNA low-noise amplifier MC3 Mobile CubeSat Command and Control...Naval Research Laboratory OQPSK offset quadrature phase-shift keying xviii P2P peer-to-peer PKI public key infrastructure REST Representational...enhanced our work being performed on the Mobile CubeSat Command and Control (MC3) ground station network. You also provided crucial guidance from
PKI-based secure mobile access to electronic health services and data.
Kambourakis, G; Maglogiannis, I; Rouskas, A
2005-01-01
Recent research works examine the potential employment of public-key cryptography schemes in e-health environments. In such systems, where a Public Key Infrastructure (PKI) is established beforehand, Attribute Certificates (ACs) and public-key-enabled protocols like TLS can provide the appropriate mechanisms to effectively support authentication, authorization and confidentiality services. In other words, mutual trust and secure communications between all the stakeholders, namely physicians, patients and e-health service providers, can be successfully established and maintained. Furthermore, as the recently introduced mobile devices with access to computer-based patient record systems become widespread, the need for physicians and nurses to interact increasingly with such systems arises. Considering public key infrastructure requirements for mobile online health networks, this paper discusses the potential use of Attribute Certificates (ACs) in an anticipated trust model. Typical trust interactions among doctors, patients and e-health providers are presented, indicating that resourceful security mechanisms and trust control can be obtained and implemented. The application of attribute certificates to support medical mobile service provision, along with the utilization of the de facto TLS protocol to offer competent confidentiality and authorization services, is also presented and evaluated through experimentation, using both 802.11 WLAN and General Packet Radio Service (GPRS) networks.
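The trust model described above relies on X.509 identity certificates and TLS for mutual authentication and confidentiality. As a minimal sketch of the transport-security part only, Python's standard ssl module can establish a mutually authenticated TLS session; the file names and host below are placeholders, and the attribute-certificate-based authorization discussed in the paper would be layered on top of this channel rather than handled by ssl itself.

```python
import socket
import ssl

# All paths and the host name are illustrative placeholders.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("ehealth_ca.pem")                      # CA trusted for the e-health servers
context.load_cert_chain("physician_cert.pem", "physician_key.pem")   # client (physician) identity certificate

with socket.create_connection(("records.ehealth.example", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="records.ehealth.example") as tls:
        print("negotiated:", tls.version())
        print("server subject:", tls.getpeercert().get("subject"))
```

With PROTOCOL_TLS_CLIENT the context verifies the server certificate and host name by default, while load_cert_chain supplies the client certificate the server needs for mutual authentication.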
EuroGEOSS/GENESIS ``e-Habitat'' AIP-3 Use Scenario
NASA Astrophysics Data System (ADS)
Mazzetti, P.; Dubois, G.; Santoro, M.; Peedell, S.; de Longueville, B.; Nativi, S.; Craglia, M.
2010-12-01
Natural ecosystems are in rapid decline. Major habitats are disappearing at a speed never observed before. The current rate of species extinction is several orders of magnitude higher than the background rate from the fossil record. Protected Areas (PAs) and Protected Area Systems are designed to conserve natural and cultural resources, to maintain biodiversity (ecosystems, species, genes) and ecosystem services. The scientific challenge of understanding how environmental and climatological factors impact ecosystems and habitats requires the use of information from different scientific domains. Thus, multidisciplinary interoperability is a crucial requirement for a framework aiming to support scientists. The Group on Earth Observations (GEO) is coordinating international efforts to build a Global Earth Observation System of Systems (GEOSS). This emerging public infrastructure is interconnecting a diverse and growing array of instruments and systems for monitoring and forecasting changes in the global environment. This “system of systems” supports multidisciplinary and cross-disciplinary scientific research. The presented GEOSS-based interoperability framework facilitates the discovery and exploitation of datasets and models from heterogeneous scientific domains and Information Technology services (data sources). The GEO Architecture and Data Committee (ADC) launched the Architecture Implementation Pilot (AIP) Initiative to develop and deploy new processes and infrastructure components for the GEOSS Common Infrastructure (GCI) and the broader GEOSS architecture. The current AIP Phase 3 (AIP-3) aims to increase GEOSS capacity to support several strategic Societal Benefit Areas (SBAs), including Disaster Management, Health/Air Quality, Biodiversity, Energy, Health/Disease and Water. For Biodiversity, the EC-funded EuroGEOSS (http://www.eurogeoss.eu) and GENESIS (http://www.genesis-fp7.eu) projects have developed a use scenario called “e-Habitat”. This scenario demonstrates how a GEOSS-based interoperability infrastructure can aid decision makers in assessing and possibly forecasting the irreplaceability of a given protected area, an essential indicator for assessing the criticality of the threats this protected area is exposed to. Building on the previous AIP Phase 2 experience, the EuroGEOSS and GENESIS projects enhanced the previously demonstrated interoperability infrastructure with: a) a discovery broker service which underpins semantics-enabled queries, the EuroGEOSS/GENESIS Discovery Augmentation Component (DAC); b) environmental modeling components (i.e. OGC WPS instances) implementing algorithms to predict the evolution of PA ecosystems; c) a workflow engine to: i) browse semantic repositories; ii) retrieve concepts of interest; iii) search for resources (i.e. datasets and models) related to such concepts; iv) execute WPS instances. This presentation introduces the enhanced infrastructure developed by the EuroGEOSS/GENESIS AIP-3 Pilot to implement the “e-Habitat” use scenario. The presented infrastructure is accessible through the GEO Portal and will be used to demonstrate the “e-Habitat” model at the GEO Ministerial Meeting in Beijing, November 2010.
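The modelling components in this scenario are exposed as OGC Web Processing Service (WPS) instances. A minimal, hedged sketch of how a client could discover the processes such a service offers is shown below; the endpoint URL is a placeholder, and real interaction with the e-Habitat services would go through the brokered GEOSS infrastructure rather than a direct call.

```python
import requests

# Placeholder endpoint; the actual EuroGEOSS/GENESIS service addresses are not reproduced here.
WPS_URL = "http://example.org/ehabitat/wps"

params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "GetCapabilities",
}
response = requests.get(WPS_URL, params=params, timeout=30)
response.raise_for_status()

# The response is a capabilities XML document listing the offered processes
# (for instance, habitat-similarity or irreplaceability models) with their identifiers,
# which a workflow engine can then invoke via WPS Execute requests.
print(response.text[:400])
```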
Mechanical Analysis of W78/88-1 Life Extension Program Warhead Design Options
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, Nathan
2014-09-01
Life Extension Program (LEP) is a program to repair/replace components of nuclear weapons to ensure the ability to meet military requirements. The W78/88-1 LEP encompasses the modernization of two major nuclear weapon reentry systems into an interoperable warhead. Several design concepts exist to provide different options for robust safety and security themes, maximum non-nuclear commonality, and cost. Simulation is one capability used to evaluate the mechanical performance of the designs in various operational environments, plan for system and component qualification efforts, and provide insight into the survivability of the warhead in environments that are not currently testable. The simulation efforts use several Sandia-developed tools through the Advanced Simulation and Computing program, including Cubit for mesh generation, the DART Model Manager, SIERRA codes running on the HPC TLCC2 platforms, DAKOTA, and ParaView. Several programmatic objectives were met using the simulation capability, including: (1) providing early environmental specification estimates that may be used by component designers to understand the severity of the loads their components will need to survive, (2) providing guidance for load levels and configurations for subassembly tests intended to represent operational environments, and (3) recommending design options including modified geometry and material properties. These objectives were accomplished through regular interactions with component, system, and test engineers while using the laboratory's computational infrastructure to effectively perform ensembles of simulations. Because NNSA has decided to defer the LEP program, simulation results are being documented and models are being archived for future reference. However, some advanced and exploratory efforts will continue to mature key technologies, using the results from these and ongoing simulations for design insights, test planning, and model validation.
NASA Astrophysics Data System (ADS)
Poursartip, B.
2015-12-01
Seismic hazard assessment to predict the behavior of infrastructure subjected to earthquakes relies on ground motion numerical simulation, because the analytical solution of seismic waves is limited to only a few simple geometries. Recent advances in numerical methods and computer architectures make it ever more practical to reliably and quickly obtain the near-surface response to seismic events. The key motivation stems from the need to assess the performance of sensitive components of the civil infrastructure (nuclear power plants, bridges, lifelines, etc.) when subjected to realistic scenarios of seismic events. We discuss an integrated approach that deploys best-practice tools for simulating seismic events in arbitrarily heterogeneous formations, while also accounting for topography. Specifically, we describe an explicit forward wave solver based on a hybrid formulation that couples a single-field formulation for the computational domain with an unsplit mixed-field formulation for the Perfectly-Matched-Layers (PMLs and/or M-PMLs) used to limit the computational domain. Due to the material heterogeneity and the contrasting discretization needs it imposes, an adaptive time solver is adopted. We use a Runge-Kutta-Fehlberg time-marching scheme that optimally adjusts the time step so that the local truncation error stays below a predefined tolerance. We use spectral elements for spatial discretization, and the Domain Reduction Method, in accordance with the double-couple method, to allow for the efficient prescription of the input seismic motion. Of particular interest to this development is the study of the effects idealized topographic features have on the surface motion when compared against motion results that are based on a flat-surface assumption. We discuss the components of the integrated approach we followed, and report the results of parametric studies in two and three dimensions, for various idealized topographic features, which show motion amplification that depends, as expected, on the relation between the topographic feature's characteristics and the dominant wavelength. Lastly, we report results involving three-dimensional simulations.
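The solver described above adjusts its time step with a Runge-Kutta-Fehlberg scheme so that the local truncation error stays below a tolerance. The sketch below illustrates the same error-controlled adaptive stepping on a toy damped oscillator using SciPy's embedded Runge-Kutta pair; note that SciPy's RK45 is a Dormand-Prince pair rather than Fehlberg's, and the single-degree-of-freedom system is only a stand-in for the discretized wave equation, but the step-control mechanism is the same idea.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy single-degree-of-freedom oscillator, u'' + 2*zeta*w*u' + w**2 * u = 0,
# written as a first-order system; parameters are illustrative only.
w = 2.0 * np.pi * 5.0      # 5 Hz natural frequency
zeta = 0.05                # 5% damping ratio

def rhs(t, y):
    u, v = y
    return [v, -2.0 * zeta * w * v - w**2 * u]

# rtol/atol play the role of the predefined tolerance on the local truncation error;
# the embedded pair estimates the error each step and shrinks or grows the step accordingly.
sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], method="RK45", rtol=1e-6, atol=1e-9)
print(f"accepted steps: {sol.t.size}, smallest step: {np.diff(sol.t).min():.2e} s")
```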
Geiling, James; Burkle, Frederick M; Amundson, Dennis; Dominguez-Cherit, Guillermo; Gomersall, Charles D; Lim, Matthew L; Luyckx, Valerie; Sarani, Babak; Uyeki, Timothy M; West, T Eoin; Christian, Michael D; Devereaux, Asha V; Dichter, Jeffrey R; Kissoon, Niranjan
2014-10-01
Planning for mass critical care (MCC) in resource-poor or constrained settings has been largely ignored, despite their large populations that are prone to suffer disproportionately from natural disasters. Addressing MCC in these settings has the potential to help vast numbers of people and also to inform planning for better-resourced areas. The Resource-Poor Settings panel developed five key question domains; defining the term resource poor and using the traditional phases of disaster (mitigation/preparedness/response/recovery), literature searches were conducted to identify evidence on which to answer the key questions in these areas. Given a lack of data upon which to develop evidence-based recommendations, expert-opinion suggestions were developed, and consensus was achieved using a modified Delphi process. The five key questions were then separated as follows: definition, infrastructure and capacity building, resources, response, and reconstitution/recovery of host nation critical care capabilities and research. Addressing these questions led the panel to offer 33 suggestions. Because of the large number of suggestions, the results have been separated into two sections: part 1, Infrastructure/Capacity in this article, and part 2, Response/Recovery/Research in the accompanying article. Lack of, or presence of, rudimentary ICU resources and limited capacity to enhance services further challenge resource-poor and constrained settings. Hence, capacity building entails preventative strategies and strengthening of primary health services. Assistance from other countries and organizations is needed to mount a surge response. Moreover, planning should include when to disengage and how the host nation can provide capacity beyond the mass casualty care event.
iSAW: Integrating Structure, Actors, and Water to study socio-hydro-ecological systems
NASA Astrophysics Data System (ADS)
Hale, Rebecca L.; Armstrong, Andrea; Baker, Michelle A.; Bedingfield, Sean; Betts, David; Buahin, Caleb; Buchert, Martin; Crowl, Todd; Dupont, R. Ryan; Ehleringer, James R.; Endter-Wada, Joanna; Flint, Courtney; Grant, Jacqualine; Hinners, Sarah; Horsburgh, Jeffery S.; Jackson-Smith, Douglas; Jones, Amber S.; Licon, Carlos; Null, Sarah E.; Odame, Augustina; Pataki, Diane E.; Rosenberg, David; Runburg, Madlyn; Stoker, Philip; Strong, Courtenay
2015-03-01
Urbanization, climate, and ecosystem change represent major challenges for managing water resources. Although water systems are complex, a need exists for a generalized representation of these systems to identify important components and linkages to guide scientific inquiry and aid water management. We developed an integrated Structure-Actor-Water framework (iSAW) to facilitate the understanding of and transitions to sustainable water systems. Our goal was to produce an interdisciplinary framework for water resources research that could address management challenges across scales (e.g., plot to region) and domains (e.g., water supply and quality, transitioning, and urban landscapes). The framework was designed to be generalizable across all human-environment systems, yet with sufficient detail and flexibility to be customized to specific cases. iSAW includes three major components: structure (natural, built, and social), actors (individual and organizational), and water (quality and quantity). Key linkages among these components include: (1) ecological/hydrologic processes, (2) ecosystem/geomorphic feedbacks, (3) planning, design, and policy, (4) perceptions, information, and experience, (5) resource access and risk, and (6) operational water use and management. We illustrate the flexibility and utility of the iSAW framework by applying it to two research and management problems: understanding urban water supply and demand in a changing climate and expanding use of green storm water infrastructure in a semi-arid environment. The applications demonstrate that a generalized conceptual model can identify important components and linkages in complex and diverse water systems and facilitate communication about those systems among researchers from diverse disciplines.
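Because iSAW is essentially a set of named components connected by typed linkages, a specific case study can be represented as a small labelled graph. The sketch below is only one possible encoding: the component and linkage labels come from the summary above, while the example edges are invented for illustration.

```python
# Components and linkage types follow the iSAW summary above; the example edges are illustrative.
COMPONENTS = {
    "structure": ["natural", "built", "social"],
    "actors": ["individual", "organizational"],
    "water": ["quality", "quantity"],
}

LINKAGE_TYPES = [
    "ecological/hydrologic processes",
    "ecosystem/geomorphic feedbacks",
    "planning, design, and policy",
    "perceptions, information, and experience",
    "resource access and risk",
    "operational water use and management",
]

# A hypothetical urban stormwater case encoded as (from, to, linkage) triples.
edges = [
    ("actors:organizational", "structure:built", "planning, design, and policy"),
    ("structure:built", "water:quality", "ecological/hydrologic processes"),
    ("water:quantity", "actors:individual", "perceptions, information, and experience"),
]

for src, dst, link in edges:
    print(f"{src} --[{link}]--> {dst}")
```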
Second-order schedules: discrimination of components
Squires, Nancy; Norborg, James; Fantino, Edmund
1975-01-01
Pigeons were exposed to a series of second-order schedules in which the completion of a fixed number of fixed-interval components produced food. In Experiment 1, brief (2 sec) stimulus presentations occurred as each fixed-interval component was completed. During the brief-stimulus presentation terminating the last fixed-interval component, a response was required on a second key, the brief-stimulus key, to produce food. Responses on the brief-stimulus key before the last brief-stimulus presentation had no scheduled consequences, but served as a measure of the extent to which the final component was discriminated from preceding components. Whether there were one, two, four, or eight fixed-interval components, responses on the brief-stimulus key occurred during virtually every brief-stimulus presentation. In Experiment 2, an attempt was made to punish unnecessary responses on the brief-stimulus key, i.e., responses on the brief-stimulus key that occurred before the last component. None of the pigeons learned to withhold these responses, even though they produced a 15-sec timeout and loss of primary reinforcement. In Experiment 3, different key colors were associated with each component of a second-order schedule (a chain schedule). In contrast to Experiment 1, brief-stimulus key responses were confined to the last component. It was concluded that pigeons do not discriminate well between components of second-order schedules unless a unique exteroceptive cue is provided for each component. The relative discriminability of the components may account for the observed differences in initial-component response rates between comparable brief-stimulus, tandem, and chain schedules. PMID:16811868
Weiler, Gabriele; Schröder, Christina; Schera, Fatima; Dobkowicz, Matthias; Kiefer, Stephan; Heidtke, Karsten R; Hänold, Stefanie; Nwankwo, Iheanyi; Forgó, Nikolaus; Stanulla, Martin; Eckert, Cornelia; Graf, Norbert
2014-01-01
Biobanks represent key resources for clinico-genomic research and are needed to pave the way to personalised medicine. To achieve this goal, it is crucial that scientists can securely access and share high-quality biomaterial and related data. Therefore, there is a growing interest in integrating biobanks into larger biomedical information and communication technology (ICT) infrastructures. The European project p-medicine is currently building an innovative ICT infrastructure to meet this need. This platform provides tools and services for conducting research and clinical trials in personalised medicine. In this paper, we describe one of its main components, the biobank access framework p-BioSPRE (p-medicine Biospecimen Search and Project Request Engine). This generic framework enables and simplifies access to existing biobanks, but also makes it possible to offer one's own biomaterial collections to research communities and to manage biobank specimens and related clinical data over the ObTiMA Trial Biomaterial Manager. p-BioSPRE takes into consideration all relevant ethical and legal standards, e.g., safeguarding donors’ personal rights and enabling biobanks to keep control over the donated material and related data. The framework thus enables secure sharing of biomaterial within open and closed research communities, while flexibly integrating related clinical and omics data. Although the development of the framework is mainly driven by user scenarios from the cancer domain, in this case, acute lymphoblastic leukaemia and Wilms tumour, it can be extended to further disease entities. PMID:24567758
X-33/RLV System Health Management/Vehicle Health Management
NASA Technical Reports Server (NTRS)
Mouyos, William; Wangu, Srimal
1998-01-01
To reduce operations costs, Reusable Launch Vehicles (RLVs) must include highly reliable, robust subsystems which are designed for simple repair access with a simplified servicing infrastructure, and which incorporate expedited decision-making about faults and anomalies. A key component for the Single Stage To Orbit (SSTO) RLV system used to meet these objectives is System Health Management (SHM). SHM incorporates Vehicle Health Management (VHM), ground processing associated with the vehicle fleet (GVHM), and Ground Infrastructure Health Management (GIHM). The primary objective of SHM is to provide an automated and paperless health decision, maintenance, and logistics system. Sanders, a Lockheed Martin Company, is leading the design, development, and integration of the SHM system for RLV and for X-33 (a sub-scale, sub-orbital Advanced Technology Demonstrator). Many critical technologies are necessary to make SHM (and more specifically VHM) practical, reliable, and cost effective. This paper will present the X-33 SHM design, which forms the baseline for the RLV SHM, and it will discuss applications of advanced technologies to future RLVs. In addition, this paper will describe a Virtual Design Environment (VDE) which is being developed for RLV. This VDE will allow system design engineering teams, as well as program management teams, to accurately and efficiently evaluate system designs, analyze the behavior of current systems, and predict the feasibility of making smooth and cost-efficient transitions from older technologies to newer ones. The RLV SHM design methodology will reduce program costs, decrease total program life-cycle time, and ultimately increase mission success.
Exposing the cancer genome atlas as a SPARQL endpoint
Deus, Helena F.; Veiga, Diogo F.; Freire, Pablo R.; Weinstein, John N.; Mills, Gordon B.; Almeida, Jonas S.
2011-01-01
The Cancer Genome Atlas (TCGA) is a multidisciplinary, multi-institutional effort to characterize several types of cancer. Datasets from biomedical domains such as TCGA present a particularly challenging task for those interested in dynamically aggregating its results because the data sources are typically both heterogeneous and distributed. The Linked Data best practices offer a solution to integrate and discover data with those characteristics, namely through exposure of data as Web services supporting SPARQL, the Resource Description Framework query language. Most SPARQL endpoints, however, cannot easily be queried by data experts. Furthermore, exposing experimental data as SPARQL endpoints remains a challenging task because, in most cases, data must first be converted to Resource Description Framework triples. In line with those requirements, we have developed an infrastructure to expose clinical, demographic and molecular data elements generated by TCGA as a SPARQL endpoint by assigning elements to entities of the Simple Sloppy Semantic Database (S3DB) management model. All components of the infrastructure are available as independent Representational State Transfer (REST) Web services to encourage reusability, and a simple interface was developed to automatically assemble SPARQL queries by navigating a representation of the TCGA domain. A key feature of the proposed solution that greatly facilitates assembly of SPARQL queries is the distinction between the TCGA domain descriptors and data elements. Furthermore, the use of the S3DB management model as a mediator enables queries to both public and protected data without the need for prior submission to a single data source. PMID:20851208
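As a hedged illustration of what querying such an endpoint looks like from client code, a SPARQL SELECT can be issued with the SPARQLWrapper library; the endpoint URL, prefix, and property names below are placeholders rather than the actual TCGA/S3DB vocabulary.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint and vocabulary; the real TCGA/S3DB terms are not reproduced here.
endpoint = SPARQLWrapper("http://example.org/tcga/sparql")
endpoint.setQuery("""
PREFIX ex: <http://example.org/tcga/schema#>
SELECT ?patient ?diagnosis
WHERE {
    ?patient a ex:Patient ;
             ex:diagnosis ?diagnosis .
}
LIMIT 10
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["patient"]["value"], row["diagnosis"]["value"])
```

Separating the domain descriptors (the schema terms in the PREFIX) from the data elements, as the abstract describes, is what lets a simple interface assemble queries like this one automatically by navigating the domain representation.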