Sample records for common infrastructure control

  1. Common Badging and Access Control System (CBACS)

    NASA Technical Reports Server (NTRS)

    Dischinger, Portia

    2005-01-01

    This slide presentation describes NASA's Common Badging and Access Control System. NASA began a Smart Card implementation in January 2004. Following site surveys, it was determined that NASA's badging and access control systems required upgrades to common infrastructure in order to provide flexibility, usability, and return on investment prior to a smart card implementation. The Common Badging and Access Control System (CBACS) provides the common infrastructure from which FIPS-201 compliant processes, systems, and credentials can be developed and used.

  2. Data distribution service-based interoperability framework for smart grid testbed infrastructure

    DOE PAGES

    Youssef, Tarek A.; Elsayed, Ahmed T.; Mohammed, Osama A.

    2016-03-02

    This study presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurements and control network. The advantages of utilizing the data-centric over the message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves the communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamically participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programming interface for the testbed infrastructure were developed in order to facilitate interoperability and remote access to the testbed. This interface allows experiments to be controlled, monitored, and performed remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).
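
    The data-centric bus this record describes can be sketched with a minimal topic-keyed publish/subscribe class. This is an illustrative sketch only: the class, topic names, and sample values are invented here, and a real deployment would use an actual DDS implementation rather than this toy.

```python
# Minimal sketch of a data-centric publish/subscribe bus, illustrating the
# DDS-style pattern described above. All names are illustrative, not the
# DDS API; a real smart grid deployment would use a DDS implementation.
from collections import defaultdict

class DataBus:
    """Topic-keyed bus: subscribers register interest in data, not in peers."""
    def __init__(self):
        self._subscribers = defaultdict(list)
        self._last_sample = {}          # latest value per topic (data-centric cache)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)
        # Late joiners immediately receive the last known sample.
        if topic in self._last_sample:
            callback(self._last_sample[topic])

    def publish(self, topic, sample):
        self._last_sample[topic] = sample
        for cb in self._subscribers[topic]:
            cb(sample)

bus = DataBus()
received = []
bus.subscribe("grid/bus1/voltage", received.append)
bus.publish("grid/bus1/voltage", 1.02)   # per-unit voltage measurement
bus.publish("grid/bus1/voltage", 0.98)
# A node joining later still sees the latest sample:
late = []
bus.subscribe("grid/bus1/voltage", late.append)
```

    Because state lives with the data (the last sample per topic) rather than with any one sender, a late-joining node catches up immediately, which is the property the abstract credits with avoiding a single point of failure.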

  4. Internal hydrological mechanism of permeable pavement and interaction with subsurface water

    EPA Science Inventory

    Many communities are implementing green infrastructure stormwater control measures (SCMs) in urban environments across the U.S. to mimic pre-urban, natural hydrology more closely. Permeable pavement is one SCM infrastructure that has been commonly selected for both new and retro...

  5. A national-scale authentication infrastructure.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, R.; Engert, D.; Foster, I.

    2000-12-01

    Today, individuals and institutions in science and industry are increasingly forming virtual organizations to pool resources and tackle a common goal. Participants in virtual organizations commonly need to share resources such as data archives, computer cycles, and networks - resources usually available only with restrictions based on the requested resource's nature and the user's identity. Thus, any sharing mechanism must have the ability to authenticate the user's identity and determine if the user is authorized to request the resource. Virtual organizations tend to be fluid, however, so authentication mechanisms must be flexible and lightweight, allowing administrators to quickly establish and change resource-sharing arrangements. However, because virtual organizations complement rather than replace existing institutions, sharing mechanisms cannot change local policies and must allow individual institutions to maintain control over their own resources. Our group has created and deployed an authentication and authorization infrastructure that meets these requirements: the Grid Security Infrastructure. GSI offers secure single sign-on and preserves site control over access policies and local security. It provides its own versions of common applications, such as FTP and remote login, and a programming interface for creating secure applications.
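
    The authenticate-then-authorize split described above can be sketched as two separate checks, with the authorization policy held by the site itself. Everything below (function names, the policy format, the identities) is hypothetical illustration, not the GSI API.

```python
# Hedged sketch of the two-step check the abstract describes: first verify
# an identity, then consult an access policy the local site controls.
# All names and the policy format are made up for illustration.

SITE_POLICY = {
    # Site-local policy: resource -> identities authorized to request it.
    "data-archive": {"alice@lab-a", "bob@lab-b"},
    "compute-cluster": {"alice@lab-a"},
}

def authenticate(credential):
    """Stand-in for credential verification; returns the identity or None."""
    return credential["identity"] if credential.get("signature_valid") else None

def authorize(identity, resource):
    """Site-local authorization: the site, not the virtual organization, owns this policy."""
    return identity is not None and identity in SITE_POLICY.get(resource, set())

cred = {"identity": "alice@lab-a", "signature_valid": True}
granted = authorize(authenticate(cred), "compute-cluster")
refused = authorize(authenticate(cred), "tape-silo")   # no policy entry -> deny
```

    Keeping the two steps separate is what lets a virtual organization add members quickly while each institution retains control over its own resources.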

  6. Influence of governance structure on green stormwater infrastructure investment

    USGS Publications Warehouse

    Hopkins, Kristina G.; Grimm, Nancy B.; York, Abigail M.

    2018-01-01

    Communities are faced with the challenge of meeting regulatory requirements mandating reductions in water pollution from stormwater and combined sewer overflows (CSO). Green stormwater infrastructure and gray stormwater infrastructure are two types of water management strategies communities can use to address water pollution. In this study, we used long-term control plans from 25 U.S. cities to synthesize: the types of gray and green infrastructure being used by communities to address combined sewer overflows; the types of goals set; biophysical characteristics of each city; and factors associated with the governance of stormwater management. These city characteristics were then used to identify common characteristics of “green leader” cities—those that dedicated >20% of the control plan budget to green infrastructure. Five “green leader” cities were identified: Milwaukee, WI; Philadelphia, PA; Syracuse, NY; New York City, NY; and Buffalo, NY. These five cities had explicit green infrastructure goals targeting the volume of stormwater or percentage of impervious cover managed by green infrastructure. Results suggested that the management scale and complexity of the management system are less important factors than the ability to harness a “policy window” to integrate green infrastructure into control plans. Two case studies—Philadelphia, PA, and Milwaukee, WI—indicated that green leader cities have a long history of building momentum for green infrastructure through a series of phases from experimentation, demonstration, and finally—in the case of Philadelphia—a full transition in the approach used to manage CSOs.
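
    The “green leader” criterion is a simple budget-share threshold. As a hedged illustration, it can be computed as follows; only the >20% criterion and the two case-study city names come from the study, while every budget figure below is made up.

```python
# Illustrative computation of the ">20% of control-plan budget on green
# infrastructure" criterion. Budget figures are hypothetical.

plans = {
    # city: (green infrastructure budget, total control-plan budget), made up
    "Philadelphia, PA": (1.2, 2.4),
    "Milwaukee, WI": (0.9, 3.0),
    "Example City": (0.1, 2.0),
}

green_leaders = sorted(
    city for city, (green, total) in plans.items() if green / total > 0.20
)
```
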

  7. State Transmission Infrastructure Authorities: The Story So Far; December 2007 - December 2008

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, K.; Fink, S.

    2008-05-01

    This report examines the status and future direction of state transmission infrastructure authorities. It summarizes common characteristics, discusses current transmission projects, and outlines common issues the state infrastructure authorities have faced.

  8. Measuring infrastructure: A key step in program evaluation and planning

    PubMed Central

    Schmitt, Carol L.; Glasgow, LaShawn; Lavinghouze, S. Rene; Rieker, Patricia P.; Fulmer, Erika; McAleer, Kelly; Rogers, Todd

    2016-01-01

    State tobacco prevention and control programs (TCPs) require a fully functioning infrastructure to respond effectively to the Surgeon General’s call for accelerating the national reduction in tobacco use. The literature describes common elements of infrastructure; however, a lack of valid and reliable measures has made it difficult for program planners to monitor relevant infrastructure indicators and address observed deficiencies, or for evaluators to determine the association among infrastructure, program efforts, and program outcomes. The Component Model of Infrastructure (CMI) is a comprehensive, evidence-based framework that facilitates TCP program planning efforts to develop and maintain their infrastructure. Measures of CMI components were needed to evaluate the model’s utility and predictive capability for assessing infrastructure. This paper describes the development of CMI measures and results of a pilot test with nine state TCP managers. Pilot test findings indicate that the tool has good face validity and is clear and easy to follow. The CMI tool yields data that can enhance public health efforts in a funding-constrained environment and provides insight into program sustainability. Ultimately, the CMI measurement tool could facilitate better evaluation and program planning across public health programs. PMID:27037655

  9. Measuring infrastructure: A key step in program evaluation and planning.

    PubMed

    Schmitt, Carol L; Glasgow, LaShawn; Lavinghouze, S Rene; Rieker, Patricia P; Fulmer, Erika; McAleer, Kelly; Rogers, Todd

    2016-06-01

    State tobacco prevention and control programs (TCPs) require a fully functioning infrastructure to respond effectively to the Surgeon General's call for accelerating the national reduction in tobacco use. The literature describes common elements of infrastructure; however, a lack of valid and reliable measures has made it difficult for program planners to monitor relevant infrastructure indicators and address observed deficiencies, or for evaluators to determine the association among infrastructure, program efforts, and program outcomes. The Component Model of Infrastructure (CMI) is a comprehensive, evidence-based framework that facilitates TCP program planning efforts to develop and maintain their infrastructure. Measures of CMI components were needed to evaluate the model's utility and predictive capability for assessing infrastructure. This paper describes the development of CMI measures and results of a pilot test with nine state TCP managers. Pilot test findings indicate that the tool has good face validity and is clear and easy to follow. The CMI tool yields data that can enhance public health efforts in a funding-constrained environment and provides insight into program sustainability. Ultimately, the CMI measurement tool could facilitate better evaluation and program planning across public health programs.

  10. Optimal design of green and grey stormwater infrastructure for small urban catchment based on life-cycle cost-effectiveness analysis

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Chui, T. F. M.

    2016-12-01

    Green infrastructure (GI) comprises sustainable, environmentally friendly alternatives to conventional grey stormwater infrastructure. Commonly used GI (e.g. green roofs, bioretention, porous pavement) can provide multifunctional benefits, e.g. mitigation of urban heat island effects and improvements in air quality. Therefore, to optimize the design of GI and grey drainage infrastructure, it is essential to account for their benefits together with the costs. In this study, a comprehensive simulation-optimization modelling framework that considers the economic and hydro-environmental aspects of GI and grey infrastructure for small urban catchment applications is developed. Several modelling tools (i.e., the EPA SWMM model and the WERF BMP and LID Whole Life Cycle Cost Modelling Tools) and optimization solvers are coupled together to assess the life-cycle cost-effectiveness of GI and grey infrastructure, and to further develop optimal stormwater drainage solutions. A typical residential lot in New York City is examined as a case study. The life-cycle cost-effectiveness of various GI and grey infrastructure options is first examined at different investment levels. The results together with the catchment parameters are then provided to the optimization solvers to derive the optimal investment and contributing area for each type of stormwater control. The relationship between the investment and the optimized environmental benefit is found to be nonlinear. The optimized drainage solutions demonstrate that grey infrastructure is preferred at low total investments, while more GI should be adopted at high investments. The sensitivity of the optimized solutions to the prices of the stormwater controls is evaluated and found to be highly associated with their utilization in the base optimization case. The overall simulation-optimization framework can be easily applied to other sites worldwide and further developed into powerful decision support systems.
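
    The budget-allocation question at the core of this record can be reduced to a toy example: maximize an environmental benefit subject to a life-cycle budget. All cost/benefit numbers and control names below are hypothetical; the study couples SWMM and the WERF whole-life-cost tools with optimization solvers, whereas this sketch just enumerates small integer quantities of each control.

```python
# Toy budget-constrained benefit maximization over stormwater controls.
# Numbers are invented for illustration only.
from itertools import product

controls = {
    # name: (life-cycle cost per unit, runoff volume managed per unit)
    "green_roof": (10.0, 6.0),
    "bioretention": (8.0, 5.5),
    "grey_storage": (12.0, 9.0),
}
BUDGET = 40.0
names = list(controls)

best_mix, best_benefit = None, -1.0
for qty in product(range(5), repeat=len(names)):   # 0..4 units of each control
    cost = sum(q * controls[n][0] for q, n in zip(qty, names))
    if cost > BUDGET:
        continue
    benefit = sum(q * controls[n][1] for q, n in zip(qty, names))
    if benefit > best_benefit:
        best_mix, best_benefit = dict(zip(names, qty)), benefit
```

    Even in this made-up instance the cheapest-per-unit option does not dominate: the best mix under the budget combines grey storage with bioretention, echoing the paper's finding that the preferred green/grey balance shifts with the investment level.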

  11. Evaluating the Accuracy of Common Runoff Estimation Methods for New Impervious Hot-Mix Asphalt

    EPA Science Inventory

    Accurately predicting runoff volume from impervious surfaces for water quality design events (e.g., 25.4 mm) is important for sizing green infrastructure stormwater control measures to meet water quality and infiltration design targets. The objective of this research was to quan...

  12. Project Integration Architecture: Implementation of the CORBA-Served Application Infrastructure

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The Project Integration Architecture (PIA) has been demonstrated in a single-machine C++ implementation prototype. The architecture is in the process of being migrated to a Common Object Request Broker Architecture (CORBA) implementation. The migration of the Foundation Layer interfaces is fundamentally complete. The implementation of the Application Layer infrastructure for that migration is reported. The Application Layer provides for distributed user identification and authentication, per-user/per-instance access controls, server administration, the formation of mutually-trusting application servers, a server locality protocol, and an ability to search for interface implementations through such trusted server networks.

  13. Model development, testing and experimentation in a CyberWorkstation for Brain-Machine Interface research.

    PubMed

    Rattanatamrong, Prapaporn; Matsunaga, Andrea; Raiturkar, Pooja; Mesa, Diego; Zhao, Ming; Mahmoudi, Babak; Digiovanna, Jack; Principe, Jose; Figueiredo, Renato; Sanchez, Justin; Fortes, Jose

    2010-01-01

    The CyberWorkstation (CW) is an advanced cyber-infrastructure for Brain-Machine Interface (BMI) research. It allows the development, configuration and execution of BMI computational models using high-performance computing resources. The CW's concept is implemented using a software structure in which an "experiment engine" is used to coordinate all software modules needed to capture, communicate and process brain signals and motor-control commands. A generic BMI-model template, which specifies a common interface to the CW's experiment engine, and a common communication protocol enable easy addition, removal or replacement of models without disrupting system operation. This paper reviews the essential components of the CW and shows how templates can facilitate the processes of BMI model development, testing and incorporation into the CW. It also discusses the ongoing work towards making this process infrastructure independent.
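
    The common-template idea described above can be sketched as a base class that fixes the interface the experiment engine calls, so concrete models can be swapped without changing engine code. Method and class names are illustrative, not the CyberWorkstation API.

```python
# Sketch of a common BMI-model template: the engine depends only on the
# template interface, so models can be added or replaced freely.

class BMIModelTemplate:
    """Common interface the experiment engine calls on any model."""
    def initialize(self, config):
        raise NotImplementedError
    def process(self, neural_sample):
        raise NotImplementedError

class MeanRateDecoder(BMIModelTemplate):
    """Toy model: motor command proportional to mean firing rate."""
    def initialize(self, config):
        self.gain = config.get("gain", 1.0)
    def process(self, neural_sample):
        return self.gain * sum(neural_sample) / len(neural_sample)

# The engine only knows the template, never the concrete model:
def run_engine(model, config, samples):
    model.initialize(config)
    return [model.process(s) for s in samples]

commands = run_engine(MeanRateDecoder(), {"gain": 2.0}, [[1, 2, 3], [4, 4, 4]])
```
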

  14. EuCARD 2010: European coordination of accelerator research and development

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2010-09-01

    Accelerators are basic tools of the experimental physics of elementary particles, nuclear physics, and fourth-generation light sources. They are also used in myriad other applications in research, industry and medicine. For example, transmutation techniques for nuclear waste from the nuclear power and atomic industries are being intensely developed. The European Union invests in the development of accelerator infrastructures inside the framework programs to build the European Research Area. The aim is to build new accelerator research infrastructures, develop the existing ones, and generally make the infrastructures more available to competent users. The paper summarizes the first year of activities of the EU FP7 Project Capacities EuCARD - European Coordination of Accelerator R&D. EuCARD is a common venture of 37 European accelerator laboratories, institutes, universities and industrial partners involved in accelerator sciences and technologies. The project, initiated by ESGARD, is an Integrating Activity co-funded by the European Commission under Framework Program 7 - Capacities for a duration of four years, starting April 1st, 2009. Several teams from Poland participate actively in this project. The contribution from Polish research teams concerns: photonic and electronic measurement-control systems, RF-gun co-design, thin-film superconducting technology, superconducting transport infrastructures, and photon and particle beam measurements and control.

  15. An extensible infrastructure for fully automated spike sorting during online experiments.

    PubMed

    Santhanam, Gopal; Sahani, Maneesh; Ryu, Stephen; Shenoy, Krishna

    2004-01-01

    When recording extracellular neural activity, it is often necessary to distinguish action potentials arising from distinct cells near the electrode tip, a process commonly referred to as "spike sorting." In a number of experiments, notably those that involve direct neuroprosthetic control of an effector, this cell-by-cell classification of the incoming signal must be achieved in real time. Several commercial offerings are available for this task, but all of these require some manual supervision per electrode, making each scheme cumbersome with large electrode counts. We present a new infrastructure that leverages existing unsupervised algorithms to sort and subsequently implement the resulting signal classification rules for each electrode using a commercially available Cerebus neural signal processor. We demonstrate an implementation of this infrastructure to classify signals from a cortical electrode array, using a probabilistic clustering algorithm (described elsewhere). The data were collected from a rhesus monkey performing a delayed center-out reach task. We used both sorted and unsorted (thresholded) action potentials from an array implanted in pre-motor cortex to "predict" the reach target, a common decoding operation in neuroprosthetic research. The use of sorted spikes led to an improvement in decoding accuracy of between 3.6 and 6.4%.
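
    The sort-then-classify flow this record describes (an unsupervised step yields per-unit templates offline; online classification then assigns each incoming spike) can be sketched with a nearest-centroid rule. The tiny waveform vectors and unit labels below are made up; real systems classify full sampled action-potential shapes on dedicated hardware.

```python
# Hedged sketch of online spike classification against centroids produced
# by an offline, unsupervised clustering step. Data are invented.

def nearest_centroid(waveform, centroids):
    """Return the unit label whose centroid is closest (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(waveform, centroids[label]))

# Centroids as produced by the unsupervised sorting step:
centroids = {"unit_a": [1.0, 3.0, 1.0], "unit_b": [2.0, -1.0, 0.0]}

labels = [nearest_centroid(w, centroids)
          for w in ([1.1, 2.8, 0.9], [1.9, -0.8, 0.2])]
```

    Because the classification rule is just a lookup against precomputed centroids, it is cheap enough to run per-electrode in real time, which is the constraint the abstract emphasizes.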

  16. Modernization of B-2 Data, Video, and Control Systems Infrastructure

    NASA Technical Reports Server (NTRS)

    Cmar, Mark D.; Maloney, Christian T.; Butala, Vishal D.

    2012-01-01

    The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) Plum Brook Station (PBS) Spacecraft Propulsion Research Facility, commonly referred to as B-2, is NASA's third largest thermal-vacuum facility with propellant systems capability. B-2 has completed a modernization effort of its legacy data, video and control systems infrastructure to accommodate modern integrated testing and Information Technology (IT) Security requirements. Integrated systems tests have been conducted to demonstrate the new data, video and control systems functionality and capability. Discrete analog signal conditioners have been replaced by new programmable signal processing hardware that is integrated with the data system. This integration supports automated calibration and verification of the analog subsystem. Modern measurement systems analysis (MSA) tools are being developed to help verify system health and measurement integrity. Legacy hard-wired digital data systems have been replaced by distributed Fibre Channel (FC) network-connected digitizers whose sampling rates have increased to 256,000 samples per second. Several analog video cameras have been replaced by digital image and storage systems. Hard-wired analog control systems have been replaced by Programmable Logic Controllers (PLC), fiber optic network (FON) infrastructure and human machine interface (HMI) operator screens. Modern IT Security procedures and schemes have been employed to control data access and process control flows. Due to the nature of testing possible at B-2, flexibility and configurability of systems have been central to the architecture during modernization.

  18. 76 FR 41111 - Approval and Promulgation of Implementation Plans; South Carolina; 110(a)(1) and (2...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-13

    ...EPA is taking final action to approve the December 13, 2007, submission from the State of South Carolina, through the South Carolina Department of Health and Environmental Control (SC DHEC), as demonstrating that the State meets the state implementation plan (SIP) requirements of sections 110(a)(1) and (2) of the Clean Air Act (CAA or the Act) for the 1997 8-hour ozone national ambient air quality standards (NAAQS). Section 110(a) of the CAA requires that each state adopt and submit a SIP for the implementation, maintenance, and enforcement of each NAAQS promulgated by the EPA; such a SIP is commonly referred to as an "infrastructure" SIP. South Carolina certified that the South Carolina SIP contains provisions that ensure the 1997 8-hour ozone NAAQS is implemented, enforced, and maintained in South Carolina (hereafter referred to as the "infrastructure submission"). South Carolina's infrastructure submission, provided to EPA on December 13, 2007, addressed all the required infrastructure elements for the 1997 8-hour ozone NAAQS. Additionally, EPA is correcting an inadvertent error and responding to adverse comments received on EPA's March 17, 2011, proposed approval of South Carolina's December 13, 2007, infrastructure submission.

  19. The INDIGO-Datacloud Authentication and Authorization Infrastructure

    NASA Astrophysics Data System (ADS)

    Ceccanti, A.; Hardt, M.; Wegh, B.; Millar, AP; Caberletti, M.; Vianello, E.; Licehammer, S.

    2017-10-01

    Contemporary distributed computing infrastructures (DCIs) are not easily and securely accessible by scientists. These computing environments are typically hard to integrate due to interoperability problems resulting from the use of different authentication mechanisms, identity negotiation protocols and access control policies. Such limitations have a big impact on the user experience, making it hard for user communities to port and run their scientific applications on resources aggregated from multiple providers. The INDIGO-DataCloud project aims to provide the services and tools needed to enable a secure composition of resources from multiple providers in support of scientific applications. In order to do so, a common AAI architecture has to be defined that supports multiple authentication mechanisms, supports delegated authorization across services, and can be easily integrated into off-the-shelf software. In this contribution we introduce the INDIGO Authentication and Authorization Infrastructure, describing its main components and their status, and how authentication, delegation and authorization flows are implemented across services.
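
    The harmonization problem described above is, at its simplest, a mapping from heterogeneous credential formats to one common internal identity record. The mechanism names below are real protocols, but the record layout and mapping code are invented for illustration and are not the INDIGO AAI.

```python
# Illustrative normalization of credentials from different authentication
# mechanisms into one common identity record. The field names are made up.

def to_common_identity(credential):
    """Map a mechanism-specific credential to a common internal record."""
    kind = credential["type"]
    if kind == "oidc":        # OpenID Connect token claims
        return {"subject": credential["sub"], "issuer": credential["iss"]}
    if kind == "x509":        # X.509 certificate distinguished names
        return {"subject": credential["subject_dn"], "issuer": credential["issuer_dn"]}
    raise ValueError(f"unsupported credential type: {kind}")

idp_user = to_common_identity({"type": "oidc", "sub": "alice", "iss": "https://idp.example"})
cert_user = to_common_identity({"type": "x509",
                                "subject_dn": "/C=IT/O=INFN/CN=Alice",
                                "issuer_dn": "/C=IT/O=INFN/CN=CA"})
```

    Once every mechanism maps into the same record, downstream services can apply a single authorization and delegation flow regardless of how the user logged in.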

  20. Data discovery and data processing for environmental research infrastructures

    NASA Astrophysics Data System (ADS)

    Los, Wouter; Beranzoli, Laura; Corriero, Giuseppe; Cossu, Roberto; Fiore, Nicola; Hardisty, Alex; Legré, Yannick; Pagano, Pasquale; Puglisi, Giuseppe; Sorvari, Sanna; Turunen, Esa

    2013-04-01

    The European ENVRI project (Common Operations of Environmental Research Infrastructures) is addressing common ICT solutions for the research infrastructures selected in the ESFRI Roadmap. More specifically, the project is looking for solutions that will assist interdisciplinary users who want to benefit from the data and other services of more than a single research infrastructure. However, the infrastructure architectures, the data, data formats, scales and granularity are very different. Indeed, they deal with diverse scientific disciplines, from plate tectonics, the deep sea, sea and land surface up to the atmosphere and troposphere, from the dead to the living environment, and with a variety of instruments producing increasingly larger amounts of data. One of the approaches in the ENVRI project is to design a common Reference Model that will serve to promote infrastructure interoperability at the data, technical and service levels. The analysis of the characteristics of the environmental research infrastructures assisted in developing the Reference Model, which also serves as an example for comparable infrastructures worldwide. Still, it is already important for users to have facilities available for multi-disciplinary data discovery and data processing. The rise of systems research, addressing Earth as a single, complex and coupled system, requires such capabilities. So, another approach in the project is to adapt existing ICT solutions to short-term applications. This is being tested for a few study cases. One of these is looking for possible coupled processes following a volcano eruption in the vertical column from the deep sea to the troposphere. Another one deals with volcanic or human impacts on atmospheric and sea CO2 pressure and the implications for sea acidification and marine biodiversity and their ecosystems. A third one deals with the variety of sensor and satellite data sensing the area around a volcano cone.
Preliminary results on these studies will be reported. The common results will assist in shaping more generic solutions to be adopted by the appropriate research infrastructures.

  21. The EGI-Engage EPOS Competence Center - Interoperating heterogeneous AAI mechanisms and Orchestrating distributed computational resources

    NASA Astrophysics Data System (ADS)

    Bailo, Daniele; Scardaci, Diego; Spinuso, Alessandro; Sterzel, Mariusz; Schwichtenberg, Horst; Gemuend, Andre

    2016-04-01

    The mission of the EGI-Engage project [1] is to accelerate the implementation of the Open Science Commons vision, where researchers from all disciplines have easy and open access to the innovative digital services, data, knowledge and expertise they need for collaborative and excellent research. The Open Science Commons is grounded on three pillars: the e-Infrastructure Commons, an ecosystem of services that constitute the foundation layer of distributed infrastructures; the Open Data Commons, where observations, results and applications are increasingly available for scientific research and for anyone to use and reuse; and the Knowledge Commons, in which communities have shared ownership of knowledge, participate in the co-development of software and are technically supported to exploit state-of-the-art digital services. To develop the Knowledge Commons, EGI-Engage is supporting the work of a set of community-specific Competence Centres, with participants from user communities (scientific institutes), National Grid Initiatives (NGIs), and technology and service providers. Competence Centres collect and analyse requirements, integrate community-specific applications into state-of-the-art services, foster interoperability across e-Infrastructures, and evolve services through a user-centric development model. One of these Competence Centres is focussed on the European Plate Observing System (EPOS) [2] as representative of the solid Earth science communities. EPOS is a pan-European long-term plan to integrate data, software and services from the distributed (and already existing) Research Infrastructures all over Europe in the domain of solid Earth science. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth's physical and chemical processes that control earthquakes, volcanic eruptions, ground instability and tsunamis, as well as the processes driving tectonics and Earth's surface dynamics.
EPOS will improve our ability to better manage the use of the subsurface of the Earth. EPOS started its Implementation Phase in October 2015 and is now actively working to integrate multidisciplinary data into a single e-infrastructure. Multidisciplinary data are organized and governed by the Thematic Core Services (TCS) - Europe-wide organizations and e-infrastructures providing community-specific data and data products - and are driven by various scientific communities encompassing a wide spectrum of Earth science disciplines. TCS data, data products and services will be integrated into the Integrated Core Services (ICS) system, which will ensure their interoperability and access to these services by the scientific community as well as other users within society. The goal of the EPOS Competence Centre (EPOS CC) is to tackle two of the main challenges that the ICS will face in the near future, by taking advantage of the technical solutions provided by EGI. To this end, we will present the two pilot use cases the EGI-EPOS CC is developing: 1) the AAI pilot, dealing with the provision of transparent and homogeneous access to the ICS infrastructure for users owning different kinds of credentials (e.g. eduGAIN, OpenID Connect, X.509 certificates, etc.); here the focus is on the mechanisms which allow credential delegation. 2) The computational pilot, improving the back-end services of an existing application in the field of computational seismology, developed in the context of the EC-funded project VERCE. The application allows the processing of data resulting from the simulation of seismic wave propagation following a real earthquake and its comparison with real measurements recorded by seismographs. While the simulation data is produced directly by the users and stored in a Data Management System, the observations need to be pre-staged from institutional data services, which are maintained by the community itself.
This use case aims at exploiting the EGI FedCloud e-infrastructure for data-intensive analysis, and also explores possible interactions with other Common Data Infrastructure initiatives such as EUDAT. The presentation will discuss the state of the art of the two use cases, together with the open challenges and future applications. Possible integration of EGI solutions with EPOS and other e-infrastructure providers will also be considered. [1] EGI-ENGAGE https://www.egi.eu/about/egi-engage/ [2] EPOS http://www.epos-eu.org/
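The AAI pilot above must map heterogeneous credentials (eduGAIN/SAML, OpenID Connect, X.509) onto a single internal identity before any delegation can take place. Below is a minimal sketch of such a normalization step in Python; all field names, credential layouts and the `Identity` record are purely illustrative assumptions, not the actual EPOS/EGI AAI implementation.

```python
from dataclasses import dataclass

@dataclass
class Identity:
    subject: str      # unique, stable user identifier
    issuer: str       # authority that vouched for the user
    auth_method: str  # original credential type

def normalize(credential: dict) -> Identity:
    """Map a community-specific credential onto one internal identity record."""
    kind = credential["type"]
    if kind == "oidc":   # OpenID Connect ID-token claims
        return Identity(credential["sub"], credential["iss"], "oidc")
    if kind == "saml":   # eduGAIN / SAML assertion attributes
        return Identity(credential["eduPersonPrincipalName"],
                        credential["idp_entity_id"], "saml")
    if kind == "x509":   # X.509 certificate distinguished names
        return Identity(credential["subject_dn"], credential["issuer_dn"], "x509")
    raise ValueError(f"unsupported credential type: {kind}")

print(normalize({"type": "oidc", "sub": "user123",
                 "iss": "https://idp.example.org"}))
```

A delegation service built on such a front-end would then issue its own short-lived token for the normalized identity, so back-end services never need to understand all three credential types.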

  2. 4-D COMMON OPERATIONAL PICTURE (COP) FOR MISSION ASSURANCE (4D COP) Task Order 0001: Air Force Research Laboratory (AFRL) Autonomy Collaboration in Intelligence, Surveillance, and Reconnaissance (ISR), Electronic Warfare (EW)/Cyber and Combat Identification (CID)

    DTIC Science & Technology

    2016-10-27

    Domain C2, Adaptive Domain Control, Global Integrated ISR, Rapid Global Mobility, and Global Precision Strike, organized within a framework of...mission needs. (Among the dozen implications) A more transparent, networked infrastructure that integrates ubiquitous sensors, automated systems...Conclusion 5.1 Common Technical Trajectory One of the most significant opportunities for AFRL is to develop and mobilize the qualitative roadmap

  3. Setting the stage for the EPOS ERIC: Integration of the legal, governance and financial framework

    NASA Astrophysics Data System (ADS)

    Atakan, Kuvvet; Bazin, Pierre-Louis; Bozzoli, Sabrina; Freda, Carmela; Giardini, Domenico; Hoffmann, Thomas; Kohler, Elisabeth; Kontkanen, Pirjo; Lauterjung, Jörn; Pedersen, Helle; Saleh, Kauzar; Sangianantoni, Agata

    2017-04-01

    EPOS - the European Plate Observing System - is the ESFRI infrastructure serving the needs of the solid Earth science community at large. The EPOS mission is to create a single, sustainable and distributed infrastructure that integrates the diverse European Research Infrastructures for solid Earth science under a common framework. Thematic Core Services (TCS) and Integrated Core Services (Central Hub, ICS-C and Distributed, ICS-D) are key elements, together with NRIs (National Research Infrastructures), in the EPOS architecture. Following the preparatory phase, EPOS has initiated formal steps to adopt an ERIC legal framework (European Research Infrastructure Consortium). The statutory seat of EPOS will be in Rome, Italy, while the ICS-C will be jointly operated by France, the UK and Denmark. The TCS planned so far cover: seismology, near-fault observatories, GNSS data and products, volcano observations, satellite data, geomagnetic observations, anthropogenic hazards, geological information modelling, multiscale laboratories and geo-energy test beds for low carbon energy. In the ERIC process, EPOS and all its services must achieve sustainability from a legal, governance, financial, and technical point of view, as well as full harmonization with national infrastructure roadmaps. As EPOS is a distributed infrastructure, the TCSs have to be linked to the future EPOS ERIC from legal and governance perspectives. For this purpose the TCSs have started to organize themselves as consortia and negotiate agreements to define the roles of the different actors in the consortium as well as their commitment to contribute to the EPOS activities. The link to the EPOS ERIC shall be made by service agreements of dedicated Service Providers.
A common EPOS data policy has also been developed, based on the general principles of Open Access and paying careful attention to licensing issues, quality control, and intellectual property rights, which shall apply to the data, data products, software and services (DDSS) accessible through EPOS. From a financial standpoint, EPOS elaborated common guidelines for all institutions providing services, and selected a costing model and funding approach which foresees mixed support of the services via national contributions and ERIC membership fees. In the EPOS multi-disciplinary environment, harmonization and integration are required at different levels and with a variety of different stakeholders; to this purpose, a Service Coordination Board (SCB) and technical Harmonization Groups (HGs) were established to develop the EPOS metadata standards with the EPOS Integrated Central Services, and to harmonize data and product standards with other projects at the European and international levels, including ENVRI+, EUDAT and EarthCube (US).

  4. caCORE: a common infrastructure for cancer informatics.

    PubMed

    Covitz, Peter A; Hartel, Frank; Schaefer, Carl; De Coronado, Sherri; Fragoso, Gilberto; Sahni, Himanso; Gustafson, Scott; Buetow, Kenneth H

    2003-12-12

    Sites with substantive bioinformatics operations are challenged to build data processing and delivery infrastructure that provides reliable access and enables data integration. Locally generated data must be processed and stored such that relationships to external data sources can be presented. Consistency and comparability across data sets require annotation with controlled vocabularies and, further, metadata standards for data representation. Programmatic access to the processed data should be supported to ensure the maximum possible value is extracted. Confronted with these challenges at the National Cancer Institute Center for Bioinformatics, we decided to develop a robust infrastructure for data management and integration that supports advanced biomedical applications. We have developed an interconnected set of software and services called caCORE. Enterprise Vocabulary Services (EVS) provide controlled vocabulary, dictionary and thesaurus services. The Cancer Data Standards Repository (caDSR) provides a metadata registry for common data elements. Cancer Bioinformatics Infrastructure Objects (caBIO) implements an object-oriented model of the biomedical domain and provides Java, Simple Object Access Protocol and HTTP-XML application programming interfaces. caCORE has been used to develop scientific applications that bring together data from distinct genomic and clinical science sources. caCORE downloads and web interfaces can be accessed from links on the caCORE web site (http://ncicb.nci.nih.gov/core). caBIO software is distributed under an open source license that permits unrestricted academic and commercial use. Vocabulary and metadata content in the EVS and caDSR, respectively, is similarly unrestricted, and is available through web applications and FTP downloads.
http://ncicb.nci.nih.gov/core/publications contains links to the caBIO 1.0 class diagram and the caCORE 1.0 Technical Guide, which provide detailed information on the present caCORE architecture, data sources and APIs. Updated information appears on a regular basis on the caCORE web site (http://ncicb.nci.nih.gov/core).

  5. Converging research needs across framework convention on tobacco control articles: making research relevant to global tobacco control practice and policy.

    PubMed

    Leischow, Scott J; Ayo-Yusuf, Olalekan; Backinger, Cathy L

    2013-04-01

    Much of the research used to support the ratification of the WHO Framework Convention on Tobacco Control (FCTC) was conducted in high-income countries or in highly controlled environments. Therefore, for the global tobacco control community to make informed decisions that will continue to effectively inform policy implementation, it is critical that the tobacco control community, policy makers, and funders have updated information on the state of the science as it pertains to provisions of the FCTC. Following the National Cancer Institute's process model used in identifying the research needs of the U.S. Food and Drug Administration's relatively new tobacco law, a core team of scientists from the Society for Research on Nicotine and Tobacco identified and commissioned internationally recognized scientific experts on the topics covered within the FCTC. These experts analyzed the relevant sections of the FCTC and identified critical gaps in the research that is needed to inform policy and practice requirements of the FCTC. This paper summarizes the process and the common themes from the experts' recommendations about the research and related infrastructural needs. Research priorities in common across Articles include improving surveillance, fostering research communication/collaboration across organizations and across countries, and tracking tobacco industry activities. In addition, expanding research relevant to low- and middle-income countries (LMIC) was also identified as a priority, including identification of what existing research findings are transferable, what new country-specific data are needed, and the infrastructure needed to implement and disseminate research so as to inform policy in LMIC.

  6. Real-Time Optimization and Control of Next-Generation Distribution

    Science.gov Websites

    This project develops innovative, real-time optimization and control methods for next-generation distribution infrastructure.

  7. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability.

    PubMed

    Komatsoulis, George A; Warzel, Denise B; Hartel, Francis W; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; Coronado, Sherri de; Reeves, Dianne M; Hadfield, Jillaine B; Ludet, Christophe; Covitz, Peter A

    2008-02-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service-Oriented Architecture (SSOA) for cancer research by the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG).

  9. The federal role in the health information infrastructure: a debate of the pros and cons of government intervention.

    PubMed Central

    Shortliffe, E H; Bleich, H L; Caine, C G; Masys, D R; Simborg, D W

    1996-01-01

    Some observers feel that the federal government should play a more active leadership role in educating the medical community and in coordinating and encouraging a more rapid and effective implementation of clinically relevant applications of wide-area networking. Other people argue that the private sector is recognizing the importance of these issues and will, when the market demands it, adopt and enhance the telecommunications systems that are needed to produce effective uses of the National Information Infrastructure (NII) by the healthcare community. This debate identifies five areas for possible government involvement: convening groups for the development of standards; providing funding for research and development; ensuring the equitable distribution of resources, particularly to places and people considered by private enterprise to provide low opportunities for profit; protecting rights of privacy, intellectual property, and security; and overcoming the jurisdictional barriers to cooperation, particularly when states offer conflicting regulations. Arguments against government involvement include the likely emergence of an adequate infrastructure under free market forces, the often stifling effect of regulation, and the need to avoid a command-and-control mentality in an infrastructure that is best promoted collaboratively. PMID:8816347

  10. [caCORE: core architecture of bioinformation on cancer research in America].

    PubMed

    Gao, Qin; Zhang, Yan-lei; Xie, Zhi-yun; Zhang, Qi-peng; Hu, Zhang-zhi

    2006-04-18

    A critical factor in the advancement of biomedical research is the ease with which data can be integrated, redistributed and analyzed both within and across domains. This paper summarizes the Biomedical Information Core Infrastructure built by the United States National Cancer Institute Center for Bioinformatics (NCICB). The main product from the Core Infrastructure is caCORE - the cancer Common Ontologic Reference Environment - which is the infrastructure backbone supporting data management and application development at NCICB. The paper explains the structure and function of caCORE: (1) Enterprise Vocabulary Services (EVS), which provide controlled vocabulary, dictionary and thesaurus services, and produce the NCI Thesaurus and the NCI Metathesaurus; (2) the Cancer Data Standards Repository (caDSR), which provides a metadata registry for common data elements; and (3) Cancer Bioinformatics Infrastructure Objects (caBIO), which provide Java, Simple Object Access Protocol and HTTP-XML application programming interfaces. The vision for caCORE is to provide a common data management framework that will support the consistency, clarity, and comparability of biomedical research data and information. In addition to providing facilities for data management and redistribution, caCORE helps solve problems of data integration. All NCICB-developed caCORE components are distributed under open-source licenses that support unrestricted usage by both non-profit and commercial entities, and caCORE has laid the foundation for a number of scientific and clinical applications. Building on this, the paper briefly describes caCORE-based applications in several NCI projects, one of which is CMAP (Cancer Molecular Analysis Project) and another caBIG (Cancer Biomedical Informatics Grid). Finally, the paper discusses the prospects of caCORE: although it was born out of the needs of the cancer research community, it is intended to serve as a general resource, and cancer research has historically contributed to many areas beyond tumor biology. The paper also offers some suggestions for current research on biomedical informatics in China.

  11. Convergence, Competition, Cooperation: The Report of the Governor's Blue Ribbon Telecommunications Infrastructure Task Force. Volume One.

    ERIC Educational Resources Information Center

    Wisconsin Governor's Office, Madison.

    This report by the Blue Ribbon Task Force on Wisconsin's Telecommunications Infrastructure considers infrastructure to be the common network that connects individual residences, businesses, and agencies, rather than the individual systems and equipment themselves. The task force recognizes that advances in telecommunications technologies and…

  12. Multi Infrastructure Control and Optimization Toolkit, Resilient Design Module (MICOT-RDT), version 2.X

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, Russell; Nagarajan, Harsha; Yamangil, Emre

    2016-06-24

    MICOT is a tool for optimizing and controlling infrastructure systems. It includes modules for optimizing the operations of an infrastructure system (for example, optimal dispatch), designing infrastructure systems, restoring infrastructure systems, resiliency, preparing for natural disasters, interdicting networks, state estimation, sensor placement, and simulation of infrastructure systems. It implements algorithms developed at LANL that have been published in the academic community. This is a release of the resilient design module of MICOT.
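To give a flavour of the "optimal dispatch" problems such a toolkit addresses, here is a minimal merit-order economic dispatch in pure Python. The generator data and the greedy cheapest-first method are illustrative assumptions for this sketch, not MICOT's actual algorithms (which handle networks, contingencies and resilience constraints).

```python
def merit_order_dispatch(generators, demand_mw):
    """Dispatch the cheapest generators first until demand is met.

    generators: list of (name, capacity_mw, cost_per_mwh) tuples.
    Returns {name: dispatched_mw} for the units actually used.
    """
    dispatch = {}
    remaining = demand_mw
    # Sort by marginal cost: cheapest units are committed first.
    for name, capacity, _cost in sorted(generators, key=lambda g: g[2]):
        take = min(capacity, remaining)
        dispatch[name] = take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return dispatch

gens = [("coal", 300, 25.0), ("gas", 200, 40.0), ("peaker", 100, 90.0)]
print(merit_order_dispatch(gens, 420))  # cheap units fill demand first
```

Real dispatch optimization adds network flow limits, losses and security constraints, which is why toolkits like MICOT formulate it as a mathematical program rather than a greedy loop.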

  13. Building a North American Spatial Data Infrastructure

    USGS Publications Warehouse

    Coleman, D.J.; Nebert, D.D.

    1998-01-01

    This paper addresses the state of spatial data infrastructures within North America in late 1997. After providing some background underlying the philosophy and development of the SDI concept, the authors discuss effects of technology, institutions, and standardization that confront the cohesive implementation of a common infrastructure today. The paper concludes with a comparative framework and specific examples of elements and initiatives defining respective spatial data infrastructure initiatives in the United States and Canada.

  14. Computer networks for financial activity management, control and statistics of databases of economic administration at the Joint Institute for Nuclear Research

    NASA Astrophysics Data System (ADS)

    Tyupikova, T. V.; Samoilov, V. N.

    2003-04-01

    Modern information technologies drive the further development of the natural sciences. This development, however, must be accompanied by an evolution of infrastructures, creating favorable conditions for the growth of science and its financial base and for proving and legally protecting new research. Any scientific development entails accounting and legal protection. In the report, we consider a new direction in the software, organization and control of common databases, using the example of the electronic document handling system that functions in some departments of the Joint Institute for Nuclear Research.

  15. Reusable experiment controllers, case studies

    NASA Astrophysics Data System (ADS)

    Buckley, Brian A.; Gaasbeck, Jim Van

    1996-03-01

    Congress has given NASA and the science community a reality check. The tight and ever shrinking budgets are trimming the fat from many space science programs. No longer can a Principal Investigator (PI) afford to waste development dollars on re-inventing spacecraft controllers, experiment/payload controllers, ground control systems, or test sets. Inheritance of the Ground Support Equipment (GSE) from one program to another is not a significant re-use of technology to develop a science mission in these times. Reduction of operational staff and highly autonomous experiments are needed to reduce the sustaining cost of a mission. The re-use of an infrastructure from one program to another is needed to truly attain the cost and time savings required. Interface and Control Systems, Inc. (ICS) has a long history of re-usable software. Navy, Air Force, and NASA programs have benefited from the re-use of a common control system from program to program. Several standardization efforts in the AIAA have adopted the Spacecraft Command Language (SCL) architecture as a point solution to satisfy requirements for re-use and autonomy. The Environmental Research Institute of Michigan (ERIM) has been a long-standing customer of ICS and is working on its 4th-generation system using SCL. Much of the hardware and software infrastructure has been re-used from mission to mission with little cost for re-hosting a new experiment. The same software infrastructure was successfully used on Clementine, and an end-to-end system is being deployed for the Far Ultraviolet Spectroscopic Explorer (FUSE) for Johns Hopkins University. Case studies of the ERIM programs, Clementine, and FUSE are detailed in this paper.

  16. SEE-GRID eInfrastructure for Regional eScience

    NASA Astrophysics Data System (ADS)

    Prnjat, Ognjen; Balaz, Antun; Vudragovic, Dusan; Liabotis, Ioannis; Sener, Cevat; Marovic, Branko; Kozlovszky, Miklos; Neagu, Gabriel

    In the past 6 years, a number of targeted initiatives, funded by the European Commission via its information society and RTD programmes and by Greek infrastructure development actions, have articulated successful regional development actions in South-East Europe that can serve as a role model for other international developments. The SEEREN (South-East European Research and Education Networking initiative) project, through its two phases, established the SEE segment of the pan-European GÉANT network and successfully connected the research and scientific communities in the region. Currently, the SEE-LIGHT project is working towards establishing a dark-fiber backbone that will interconnect most national Research and Education networks in the region. On the distributed computing and storage provisioning (i.e. Grid) plane, the SEE-GRID (South-East European GRID e-Infrastructure Development) project, similarly through its two phases, has established a strong human network in the area of scientific computing, has set up a powerful regional Grid infrastructure, and has attracted a number of applications from different fields in countries throughout South-East Europe. The current SEE-GRID-SCI project, ending in April 2010, empowers the regional user communities from the fields of meteorology, seismology and environmental protection in the common use and sharing of the regional e-Infrastructure. Current technical initiatives under formulation focus on a set of coordinated actions in the area of HPC and on application fields making use of HPC initiatives. Finally, the current SEERA-EI project brings together policy makers - programme managers from 10 countries in the region. The project aims to establish a communication platform between programme managers, pave the way towards a common e-Infrastructure strategy and vision, and implement concrete actions for common funding of electronic infrastructures on the regional level.
The regional vision of establishing an e-Infrastructure compatible with European developments, and of empowering the scientists in the region to participate equally in the use of pan-European infrastructures, is materializing through the above initiatives. This model has a number of concrete operational and organizational guidelines which can be adapted to help e-Infrastructure developments in other world regions. In this paper we review the most important developments and contributions of the SEE-GRID-SCI project.

  17. Data interoperability between European Environmental Research Infrastructures and their contribution to global data networks

    NASA Astrophysics Data System (ADS)

    Kutsch, W. L.; Zhao, Z.; Hardisty, A.; Hellström, M.; Chin, Y.; Magagna, B.; Asmi, A.; Papale, D.; Pfeil, B.; Atkinson, M.

    2017-12-01

    Environmental Research Infrastructures (ENVRIs) are expected to become important pillars not only for supporting their own scientific communities, but also a) for inter-disciplinary research and b) for the European Earth Observation Programme Copernicus as a contribution to the Global Earth Observation System of Systems (GEOSS) and global thematic data networks. As such, it is very important that the data-related activities of the ENVRIs are well integrated. This requires common policies, models and e-infrastructure to optimise technological implementation, define workflows, and ensure coordination, harmonisation, integration and interoperability of data, applications and other services. The key is interoperating common metadata systems (utilising a richer metadata model as the 'switchboard' for interoperation, with formal syntax and declared semantics). The metadata characterises data, services, users and ICT resources (including sensors and detectors). The European Cluster Project ENVRIplus has developed a reference model (ENVRI RM) for a common data infrastructure architecture to promote interoperability among ENVRIs. The presentation will provide an overview of recent progress and give examples of the integration of ENVRI data in global integration networks.
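The 'switchboard' idea above reduces, at its simplest, to mapping community-specific metadata fields onto one common schema. A hedged sketch in Python follows; the field names on both sides and the two-community setup are hypothetical illustrations, not the actual ENVRI RM vocabulary.

```python
# Common target schema that harvested records are mapped onto.
COMMON_FIELDS = ("title", "creator", "temporal_extent", "parameter")

# Per-community mappings: common field -> that community's field name.
MAPPINGS = {
    "community_a": {"title": "dataset_name", "creator": "pi_name",
                    "temporal_extent": "time_coverage", "parameter": "variable"},
    "community_b": {"title": "label", "creator": "owner",
                    "temporal_extent": "period", "parameter": "observed_property"},
}

def to_common(record: dict, community: str) -> dict:
    """Translate one community-specific metadata record to the common schema."""
    mapping = MAPPINGS[community]
    # Missing source fields become None rather than raising, so partial
    # records can still be harvested and flagged later.
    return {field: record.get(mapping[field]) for field in COMMON_FIELDS}

rec = {"label": "CO2 fluxes 2016", "owner": "ICOS", "period": "2016",
       "observed_property": "co2_flux"}
print(to_common(rec, "community_b"))
```

In a real interoperation layer the common model would also carry declared semantics (links into shared vocabularies) rather than bare strings, which is what makes cross-infrastructure queries possible.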

  18. NGScloud: RNA-seq analysis of non-model species using cloud computing.

    PubMed

    Mora-Márquez, Fernando; Vázquez-Poletti, José Luis; López de Heredia, Unai

    2018-05-03

    RNA-seq analysis usually requires large computing infrastructures. NGScloud is a bioinformatic system developed to analyze RNA-seq data using the cloud computing services of Amazon that permit the access to ad hoc computing infrastructure scaled according to the complexity of the experiment, so its costs and times can be optimized. The application provides a user-friendly front-end to operate Amazon's hardware resources, and to control a workflow of RNA-seq analysis oriented to non-model species, incorporating the cluster concept, which allows parallel runs of common RNA-seq analysis programs in several virtual machines for faster analysis. NGScloud is freely available at https://github.com/GGFHF/NGScloud/. A manual detailing installation and how-to-use instructions is available with the distribution. unai.lopezdeheredia@upm.es.
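The cluster concept described above amounts to running several analysis commands concurrently. A minimal local sketch using only the Python standard library is shown below; the commands are placeholders, not NGScloud's actual pipeline steps, and NGScloud itself distributes such runs across Amazon virtual machines rather than local threads.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run(cmd):
    """Run one shell command; return the command and its exit status."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return cmd, result.returncode

# Placeholder "analysis" commands, one per sample.
commands = ["echo sample_1", "echo sample_2", "echo sample_3"]

# Threads are enough here because the work happens in subprocesses;
# a cloud system would instead dispatch each command to its own VM.
with ThreadPoolExecutor(max_workers=3) as pool:
    for cmd, rc in pool.map(run, commands):
        print(f"{cmd!r} exited with {rc}")
```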

  19. Layer 1 VPN services in distributed next-generation SONET/SDH networks with inverse multiplexing

    NASA Astrophysics Data System (ADS)

    Ghani, N.; Muthalaly, M. V.; Benhaddou, D.; Alanqar, W.

    2006-05-01

    Advances in next-generation SONET/SDH along with GMPLS control architectures have enabled many new service provisioning capabilities. In particular, a key services paradigm is the emergent Layer 1 virtual private network (L1 VPN) framework, which allows multiple clients to utilize a common physical infrastructure and provision their own 'virtualized' circuit-switched networks. This precludes expensive infrastructure builds and increases resource utilization for carriers. Along these lines, a novel L1 VPN services resource management scheme for next-generation SONET/SDH networks is proposed that fully leverages advanced virtual concatenation and inverse multiplexing features. Additionally, both centralized and distributed GMPLS-based implementations are also tabled to support the proposed L1 VPN services model. Detailed performance analysis results are presented along with avenues for future research.
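The virtual concatenation feature leveraged above can be illustrated with a back-of-the-envelope sizing calculation: how many VC-4 members (a VC-4-Xv group) a client bandwidth request needs. The payload rate used is the approximate standard SDH VC-4 container capacity; the function is a simplification for illustration, not the paper's resource management scheme.

```python
import math

VC4_PAYLOAD_MBPS = 149.76  # approximate VC-4 payload capacity (SDH)

def vcat_members(requested_mbps: float) -> int:
    """Number of VC-4 members needed to carry the requested bandwidth."""
    return math.ceil(requested_mbps / VC4_PAYLOAD_MBPS)

# A 1 Gbit/s Ethernet client fits in a VC-4-7v group (~1048 Mbit/s of
# payload) instead of occupying a whole 2.5 Gbit/s STM-16 channel,
# which is the efficiency gain VCAT brings to L1 VPN provisioning.
print(vcat_members(1000.0))
```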

  20. Consolidation and development roadmap of the EMI middleware

    NASA Astrophysics Data System (ADS)

    Kónya, B.; Aiftimiei, C.; Cecchi, M.; Field, L.; Fuhrmann, P.; Nilsen, J. K.; White, J.

    2012-12-01

    Scientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often incompatible middleware solutions. The European Middleware Initiative (EMI) is a collaboration of the major European middleware providers ARC, dCache, gLite and UNICORE. EMI aims to: deliver a consolidated set of middleware components for deployment in EGI, PRACE and other Distributed Computing Infrastructures; extend the interoperability between grids and other computing infrastructures; strengthen the reliability of the services; establish a sustainable model to maintain and evolve the middleware; fulfil the requirements of the user communities. This paper presents the consolidation and development objectives of the EMI software stack covering the last two years. The EMI development roadmap is introduced along the four technical areas of compute, data, security and infrastructure. The compute area plan focuses on consolidation of standards and agreements through a unified interface for job submission and management, a common format for accounting, the wide adoption of GLUE schema version 2.0 and the provision of a common framework for the execution of parallel jobs. The security area is working towards a unified security model and lowering the barriers to Grid usage by allowing users to gain access with their own credentials. The data area is focusing on implementing standards to ensure interoperability with other grids and industry components and to reuse already existing clients in operating systems and open source distributions. 
One of the highlights of the infrastructure area is the consolidation of the information system services via the creation of a common information backbone.

  1. DOE/DHS INDUSTRIAL CONTROL SYSTEM CYBER SECURITY PROGRAMS: A MODEL FOR USE IN NUCLEAR FACILITY SAFEGUARDS AND SECURITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert S. Anderson; Mark Schanfein; Trond Bjornard

    2011-07-01

    Many critical infrastructure sectors have been investigating cyber security issues for several years, especially with the help of two primary government programs. The U.S. Department of Energy (DOE) National SCADA Test Bed and the U.S. Department of Homeland Security (DHS) Control Systems Security Program have both implemented activities aimed at securing the industrial control systems (ICS) that operate the North American electric grid along with several other critical infrastructure sectors. These programs have spent the last seven years working with industry, including asset owners, educational institutions, standards and regulating bodies, and control system vendors. The programs' common mission is to provide outreach, identification of cyber vulnerabilities to ICS, and mitigation strategies to enhance security postures. The success of these programs indicates that a similar approach can be successfully translated into other sectors including nuclear operations, safeguards, and security. The industry regulating bodies have included cyber security requirements and, in some cases, have incorporated sets of standards with penalties for non-compliance, such as the North American Electric Reliability Corporation Critical Infrastructure Protection standards. These DOE and DHS programs that address security improvements by both suppliers and end users provide an excellent model for nuclear facility personnel concerned with safeguards and security cyber vulnerabilities and countermeasures. It is not a stretch to imagine a complete surreptitious collapse of protection against the removal of nuclear material, or even the initiation of a criticality event as witnessed at Three Mile Island or Chernobyl, in a nuclear ICS inadequately protected against the cyber threat.

  2. Green Infrastructure 101

    EPA Science Inventory

    Green Infrastructure 101
    • What is it? What does it do? What doesn’t it do?
    • Green Infrastructure as a stormwater and combined sewer control
    • GI Controls and Best Management Practices that make sense for Yonkers
      o (Include operations and maintenance requirements for each)

  3. Assessing Socioeconomic Impacts of Cascading Infrastructure Disruptions in a Dynamic Human-Infrastructure Network

    DTIC Science & Technology

    2016-07-01

    CAC common access card DoD Department of Defense FOUO For Official Use Only GIS geographic information systems GUI graphical user interface HISA...as per requirements of this project, is UNCLASS/For Official Use Only (FOUO), with access re- stricted to DOD common access card (CAC) users. Key...Boko Haram Fuel Dump Discovered in Maiduguru.” Available: http://saharareporters.com/2015/10/01/another-boko-haram-fuel- dump - discovered-maiduguri

  4. Information Operations Team Training & Information Operations Training Aid, Information Warfare Effectiveness (IWE) Program, Delivery Order 8

    DTIC Science & Technology

    2010-03-01

    submenus and toolbar with icon buttons 4. The IFOTA shall conform to Defense Information Infrastructure Common Operating Environment ( DII COE) and...him my business card , but it might come in the package we request via AFRL). PSYOP Instructor IWST is now called IWT (??) SME MD MD Instructor...Engineering and Software Engineering CTA Cognitive Task Analysis DII COE Defense Information Infrastructure Common Operating Environment EJB Enterprise Java

  5. Decontamination of biological agents from drinking water infrastructure: a literature review and summary.

    PubMed

    Szabo, Jeff; Minamyer, Scott

    2014-11-01

    This report summarizes the current state of knowledge on the persistence of biological agents on drinking water infrastructure (such as pipes) along with information on decontamination should persistence occur. Decontamination options for drinking water infrastructure have been explored for some biological agents, but data gaps remain. Data on bacterial spore persistence on common water infrastructure materials such as iron and cement-mortar lined iron show that spores can be persistent for weeks after contamination. Decontamination data show that common disinfectants such as free chlorine have limited effectiveness. Decontamination results with germinant and alternate disinfectants such as chlorine dioxide are more promising. Persistence and decontamination data were collected on vegetative bacteria, such as coliforms, Legionella and Salmonella. Vegetative bacteria are less persistent than spores and more susceptible to disinfection, but the surfaces and water quality conditions in many studies were only marginally related to drinking water systems. However, results of real-world case studies on accidental contamination of water systems with E. coli and Salmonella contamination show that flushing and chlorination can help return a water system to service. Some viral persistence data were found, but decontamination data were lacking. Future research suggestions focus on expanding the available biological persistence data to other common infrastructure materials. Further exploration of non-traditional drinking water disinfectants is recommended for future studies. Published by Elsevier Ltd.

  6. Envri Cluster - a Community-Driven Platform of European Environmental Researcher Infrastructures for Providing Common E-Solutions for Earth Science

    NASA Astrophysics Data System (ADS)

    Asmi, A.; Sorvari, S.; Kutsch, W. L.; Laj, P.

    2017-12-01

    European long-term environmental research infrastructures (often referred to as ESFRI RIs) are the core facilities providing services for scientists in their quest to understand and predict the complex Earth system and its functioning, which requires long-term efforts to identify environmental changes (trends, thresholds and resilience, interactions and feedbacks). Many of the research infrastructures were originally developed to respond to the needs of their specific research communities; however, it is clear that strong collaboration among research infrastructures is needed to serve trans-boundary research, which requires exploring scientific questions at the intersection of different scientific fields, conducting joint research projects, and developing concepts, devices, and methods that can be used to integrate knowledge. European environmental research infrastructures have already worked together successfully for many years and have established a cluster - the ENVRI cluster - for their collaborative work. The ENVRI cluster acts as a collaborative platform where the RIs can jointly agree on common solutions for their operations, draft strategies and policies, and share best practices and knowledge. The supporting project for the ENVRI cluster, the ENVRIplus project, brings together 21 European research infrastructures and infrastructure networks to work on joint technical solutions, data interoperability, access management, training, strategies, and dissemination efforts. The ENVRI cluster acts as a one-stop shop for multidisciplinary RI users and other collaborative initiatives, projects, and programmes, and coordinates and implements jointly agreed RI strategies.

  7. Cycling transport safety quantification

    NASA Astrophysics Data System (ADS)

    Drbohlav, Jiri; Kocourek, Josef

    2018-05-01

    Growing interest in cycling transport brings the necessity to design safe cycling infrastructure. In the last few years, several norms with safety elements have been designed and suggested for cycling infrastructure, but these have not been fully examined. The main parameter of suitable and fully functional transport infrastructure is the evaluation of its safety. The common evaluation of transport infrastructure safety is based on accident statistics. These statistics are suitable for motor vehicle transport but unsuitable for cycling transport. For evaluating the safety of cycling infrastructure, monitoring of traffic conflicts is more suitable. The results of this method are fast, based on real traffic situations, and can be applied to any traffic situation.
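The traffic-conflict approach summarized in this record can be illustrated with a small calculation: safety is quantified by normalizing observed conflicts by exposure, rather than by waiting for (rare) accident statistics to accumulate. The sketch below is hypothetical; the figures and the per-1000-cyclists rate are invented for illustration, not taken from the paper.

```python
# Hypothetical illustration of the traffic-conflict technique: estimate
# safety from observed conflicts normalized by exposure. All numbers are
# invented for the sketch.

def conflict_rate(conflicts: int, passing_cyclists: int, per: int = 1000) -> float:
    """Conflicts per `per` passing cyclists observed during the survey."""
    if passing_cyclists <= 0:
        raise ValueError("exposure must be positive")
    return conflicts * per / passing_cyclists

# Two hypothetical survey sites with equal observation periods.
site_a = conflict_rate(conflicts=12, passing_cyclists=4800)  # 2.5 per 1000
site_b = conflict_rate(conflicts=9,  passing_cyclists=1500)  # 6.0 per 1000

# Despite fewer raw conflicts, site B is the less safe design once
# exposure is accounted for.
```

The point of the normalization is that raw conflict counts, like raw accident counts, mislead when traffic volumes differ between sites.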

  8. Supervisory Control and Data Acquisition (SCADA) Systems and Cyber-Security: Best Practices to Secure Critical Infrastructure

    ERIC Educational Resources Information Center

    Morsey, Christopher

    2017-01-01

    In the critical infrastructure world, many critical infrastructure sectors use a Supervisory Control and Data Acquisition (SCADA) system. The sectors that use SCADA systems are the electric power, nuclear power and water. These systems are used to control, monitor and extract data from the systems that give us all the ability to light our homes…

  9. !CHAOS: A cloud of controls

    NASA Astrophysics Data System (ADS)

    Angius, S.; Bisegni, C.; Ciuffetti, P.; Di Pirro, G.; Foggetta, L. G.; Galletti, F.; Gargana, R.; Gioscio, E.; Maselli, D.; Mazzitelli, G.; Michelotti, A.; Orrù, R.; Pistoni, M.; Spagnoli, F.; Spigone, D.; Stecchi, A.; Tonto, T.; Tota, M. A.; Catani, L.; Di Giulio, C.; Salina, G.; Buzzi, P.; Checcucci, B.; Lubrano, P.; Piccini, M.; Fattibene, E.; Michelotto, M.; Cavallaro, S. R.; Diana, B. F.; Enrico, F.; Pulvirenti, S.

    2016-01-01

    This paper presents the !CHAOS open source project, which aims to develop a prototype of a national private Cloud Computing infrastructure devoted to accelerator control systems and large experiments in High Energy Physics (HEP). The !CHAOS project has been financed by MIUR (Italian Ministry of Research and Education) and aims to develop a new concept of control system and data acquisition framework by providing, with a high level of abstraction, all the services needed for controlling and managing a large scientific, or non-scientific, infrastructure. A beta version of the !CHAOS infrastructure will be released at the end of December 2015 and will run on private Cloud infrastructures based on OpenStack.

  10. ATLAS Metadata Infrastructure Evolution for Run 2 and Beyond

    NASA Astrophysics Data System (ADS)

    van Gemmeren, P.; Cranshaw, J.; Malon, D.; Vaniachine, A.

    2015-12-01

    ATLAS developed and employed for Run 1 of the Large Hadron Collider a sophisticated infrastructure for metadata handling in event processing jobs. This infrastructure profits from a rich feature set provided by the ATLAS execution control framework, including standardized interfaces and invocation mechanisms for tools and services, segregation of transient data stores with concomitant object lifetime management, and mechanisms for handling occurrences asynchronous to the control framework's state machine transitions. This metadata infrastructure is evolving and being extended for Run 2 to allow its use and reuse in downstream physics analyses, analyses that may or may not utilize the ATLAS control framework. At the same time, multiprocessing versions of the control framework and the requirements of future multithreaded frameworks are leading to redesign of components that use an incident-handling approach to asynchrony. The increased use of scatter-gather architectures, both local and distributed, requires further enhancement of metadata infrastructure in order to ensure semantic coherence and robust bookkeeping. This paper describes the evolution of ATLAS metadata infrastructure for Run 2 and beyond, including the transition to dual-use tools—tools that can operate inside or outside the ATLAS control framework—and the implications thereof. It further examines how the design of this infrastructure is changing to accommodate the requirements of future frameworks and emerging event processing architectures.
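The "incident-handling approach to asynchrony" mentioned in this record can be sketched as a minimal publish/subscribe service: tools register listeners for named incidents (such as input-file boundaries) and are notified outside the framework's normal state-machine transitions. All names and the incident strings below are illustrative assumptions, not the actual ATLAS framework API.

```python
# Minimal sketch of an incident-handling pattern: handlers subscribe to
# named incidents and are fired asynchronously to the event loop. This is
# an illustration of the general technique, not ATLAS code.
from collections import defaultdict
from typing import Callable, DefaultDict, List

class IncidentService:
    def __init__(self) -> None:
        self._handlers: DefaultDict[str, List[Callable[[str], None]]] = defaultdict(list)

    def add_listener(self, incident: str, handler: Callable[[str], None]) -> None:
        self._handlers[incident].append(handler)

    def fire(self, incident: str, source: str) -> None:
        # Handlers run outside the normal state-machine transitions,
        # which is why metadata bookkeeping around them is delicate.
        for handler in self._handlers[incident]:
            handler(source)

seen = []
svc = IncidentService()
svc.add_listener("BeginInputFile", lambda src: seen.append(("begin", src)))
svc.add_listener("EndInputFile",   lambda src: seen.append(("end", src)))
svc.fire("BeginInputFile", "data.root")
svc.fire("EndInputFile",   "data.root")
```

In a multithreaded framework, the difficulty the abstract alludes to is that such callbacks can interleave with event processing, so metadata stores keyed on file boundaries need explicit synchronization.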

  11. Quantifying Online Learning Contact Hours

    ERIC Educational Resources Information Center

    Powell, Karan; Helm, Jennifer Stephens; Layne, Melissa; Ice, Phil

    2012-01-01

    Technological and pedagogical advances in distance education have accentuated the necessity for higher education to keep pace regarding institutional infrastructures. Each infrastructure--driven by a common mission to provide quality learning--interprets quality according to standards established by various governmental and accrediting entities.…

  12. 77 FR 30589 - SteelRiver Infrastructure Partners LP, SteelRiver Infrastructure Associates LLC, SteelRiver...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-23

    ... DEPARTMENT OF TRANSPORTATION Surface Transportation Board [Docket No. FD 35622] SteelRiver Infrastructure Partners LP, SteelRiver Infrastructure Associates LLC, SteelRiver Infrastructure Fund North America LP, and Patriot Funding LLC--Control Exemption--Patriot Rail Corp., et al. SteelRiver...

  13. IBEX: An open infrastructure software platform to facilitate collaborative work in radiomics

    PubMed Central

    Zhang, Lifei; Fried, David V.; Fave, Xenia J.; Hunter, Luke A.; Court, Laurence E.

    2015-01-01

    Purpose: Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. Methods: The IBEX software package was developed using the MATLAB and C/C++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate if their relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and C/C++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX's functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. 
    More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation results, are embedded in the IBEX workflow: image data, feature algorithms, and model validation, including newly developed ones from different users, can be easily and consistently shared so that results can be more easily reproduced between institutions. Results: Researchers with a variety of technical skill levels, including radiation oncologists, physicists, and computer scientists, have found the IBEX software to be intuitive, powerful, and easy to use. IBEX can be run on any computer with the Windows operating system and 1 GB of RAM. The authors fully validated the implementation of all importers, preprocessing algorithms, and feature extraction algorithms. Windows version 1.0 beta of stand-alone IBEX and IBEX's source code can be downloaded. Conclusions: The authors successfully implemented IBEX, an open infrastructure software platform that streamlines common radiomics workflow tasks. Its transparency, flexibility, and portability can greatly accelerate the pace of radiomics research and pave the way toward successful clinical translation. PMID:25735289

  14. IBEX: an open infrastructure software platform to facilitate collaborative work in radiomics.

    PubMed

    Zhang, Lifei; Fried, David V; Fave, Xenia J; Hunter, Luke A; Yang, Jinzhong; Court, Laurence E

    2015-03-01

    Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. The IBEX software package was developed using the MATLAB and C/C++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate if their relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and C/C++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX's functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation results, are embedded in the IBEX workflow: image data, feature algorithms, and model validation, including newly developed ones from different users, can be easily and consistently shared so that results can be more easily reproduced between institutions. 
    Researchers with a variety of technical skill levels, including radiation oncologists, physicists, and computer scientists, have found the IBEX software to be intuitive, powerful, and easy to use. IBEX can be run on any computer with the Windows operating system and 1 GB of RAM. The authors fully validated the implementation of all importers, preprocessing algorithms, and feature extraction algorithms. Windows version 1.0 beta of stand-alone IBEX and IBEX's source code can be downloaded. The authors successfully implemented IBEX, an open infrastructure software platform that streamlines common radiomics workflow tasks. Its transparency, flexibility, and portability can greatly accelerate the pace of radiomics research and pave the way toward successful clinical translation.
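The plug-in idea described in the two IBEX records above (feature-extraction algorithms registered behind a common interface, so new modules can be added without touching the core) can be sketched as follows. This is a hedged illustration in Python; the real IBEX is implemented in MATLAB and C/C++, and the registry, function names, and feature names here are invented.

```python
# Sketch of a function-handle-style plug-in registry for feature extraction,
# mirroring the extensibility concept the abstracts describe. Names are
# hypothetical, not the IBEX API.
from typing import Callable, Dict, List

FeatureFn = Callable[[List[float]], float]
_registry: Dict[str, FeatureFn] = {}

def register_feature(name: str):
    """Decorator that plugs a new feature algorithm into the registry."""
    def wrap(fn: FeatureFn) -> FeatureFn:
        _registry[name] = fn
        return fn
    return wrap

@register_feature("mean_intensity")
def mean_intensity(voxels: List[float]) -> float:
    return sum(voxels) / len(voxels)

@register_feature("intensity_range")
def intensity_range(voxels: List[float]) -> float:
    return max(voxels) - min(voxels)

def extract_all(voxels: List[float]) -> Dict[str, float]:
    # Run every registered algorithm over one region of interest.
    return {name: fn(voxels) for name, fn in _registry.items()}

roi = [10.0, 12.0, 14.0, 20.0]      # toy intensity values for one ROI
features = extract_all(roi)          # mean 14.0, range 10.0
```

Because extraction only consults the registry, a collaborator's new algorithm participates in the workflow simply by registering itself, which is the property that makes shared, reproducible feature pipelines possible.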

  15. Sustaining a Focus on Health Equity at the Centers for Disease Control and Prevention Through Organizational Structures and Functions.

    PubMed

    Dean, Hazel D; Roberts, George W; Bouye, Karen E; Green, Yvonne; McDonald, Marian

    2016-01-01

    The public health infrastructure required for achieving health equity is multidimensional and complex. The infrastructure should be responsive to current and emerging priorities and capable of providing the foundation for developing, planning, implementing, and evaluating health initiatives. This article discusses these infrastructure requirements by examining how they are operationalized in the organizational infrastructure for promoting health equity at the Centers for Disease Control and Prevention, utilizing the nation's premier public health agency as a lens. Examples from the history of the Centers for Disease Control and Prevention's work in health equity from its centers, institute, and offices are provided to identify those structures and functions that are critical to achieving health equity. Challenges and facilitators to sustaining a health equity organizational infrastructure, as gleaned from the Centers for Disease Control and Prevention's experience, are noted. Finally, we provide additional considerations for expanding and sustaining a health equity infrastructure, which the authors hope will serve as "food for thought" for practitioners in state, tribal, or local health departments, community-based organizations, or nongovernmental organizations striving to create or maintain an impactful infrastructure to achieve health equity.

  16. Infrastructure resources for clinical research in amyotrophic lateral sclerosis.

    PubMed

    Sherman, Alexander V; Gubitz, Amelie K; Al-Chalabi, Ammar; Bedlack, Richard; Berry, James; Conwit, Robin; Harris, Brent T; Horton, D Kevin; Kaufmann, Petra; Leitner, Melanie L; Miller, Robert; Shefner, Jeremy; Vonsattel, Jean Paul; Mitsumoto, Hiroshi

    2013-05-01

    Clinical trial networks, shared clinical databases, and human biospecimen repositories are examples of infrastructure resources aimed at enhancing and expediting clinical and/or patient oriented research to uncover the etiology and pathogenesis of amyotrophic lateral sclerosis (ALS), a rapidly progressive neurodegenerative disease that leads to the paralysis of voluntary muscles. The current status of such infrastructure resources, as well as opportunities and impediments, were discussed at the second Tarrytown ALS meeting held in September 2011. The discussion focused on resources developed and maintained by ALS clinics and centers in North America and Europe, various clinical trial networks, U.S. government federal agencies including the National Institutes of Health (NIH), the Agency for Toxic Substances and Disease Registry (ATSDR) and the Centers for Disease Control and Prevention (CDC), and several voluntary disease organizations that support ALS research activities. Key recommendations included 1) the establishment of shared databases among individual ALS clinics to enhance the coordination of resources and data analyses; 2) the expansion of quality-controlled human biospecimen banks; and 3) the adoption of uniform data standards, such as the recently developed Common Data Elements (CDEs) for ALS clinical research. The value of clinical trial networks such as the Northeast ALS (NEALS) Consortium and the Western ALS (WALS) Consortium was recognized, and strategies to further enhance and complement these networks and their research resources were discussed.

  17. Distinctions between intelligent manufactured and constructed systems and a new discipline for intelligent infrastructure hypersystems

    NASA Astrophysics Data System (ADS)

    Aktan, A. Emin

    2003-08-01

    Although the interconnected systems nature of the infrastructures, and the complexity of interactions between their engineered, socio-technical and natural constituents, have been recognized for some time, the principles of effectively operating, protecting and preserving such systems by taking full advantage of "modeling, simulations, optimization, control and decision making" tools developed by the systems engineering and operations research community have not been adequately studied or discussed by many engineers, including the writer. Differential and linear equation systems, numerical and finite element modeling techniques, and statistical and probabilistic representations are universal; however, different disciplines have developed their distinct approaches to conceptualizing, idealizing and modeling the systems they commonly deal with. The challenge is in adapting and integrating deterministic and stochastic, geometric and numerical, physics-based and "soft (data-or-knowledge based)", macroscopic or microscopic models developed by various disciplines for simulating infrastructure systems. There is a lot to be learned by studying how different disciplines have studied, improved and optimized the systems relating to various processes and products in their domains. Operations research has become a fifty-year-old discipline addressing complex systems problems. Its mathematical tools range from linear programming to decision processes and game theory. These tools are used extensively in management and finance, as well as by industrial engineers for optimization and quality control. Progressive civil engineering academic programs have adopted "systems engineering" as a focal area. 
    However, most of the civil engineering systems programs remain focused on constructing and analyzing highly idealized, often generic models relating to the planning or operation of transportation, water or waste systems, maintenance management, waste management or general infrastructure hazards risk management. We further note that in the last decade there have been efforts for "agent-based" modeling of synthetic infrastructure systems by taking advantage of supercomputers at various DOE Laboratories. However, whether there is any similitude between such synthetic and actual systems needs further investigation.

  18. SeaDataNet - Pan-European infrastructure for marine and ocean data management: Unified access to distributed data sets (www.seadatanet.org)

    NASA Astrophysics Data System (ADS)

    Schaap, Dick M. A.; Maudire, Gilbert

    2010-05-01

    SeaDataNet is a leading infrastructure in Europe for marine & ocean data management. It is actively operating and further developing a Pan-European infrastructure for managing, indexing and providing access to ocean and marine data sets and data products, acquired via research cruises and other observational activities, in situ and remote sensing. The basis of SeaDataNet is interconnecting 40 National Oceanographic Data Centres and Marine Data Centres from 35 countries around European seas into a distributed network of data resources with common standards for metadata, vocabularies, data transport formats, quality control methods and flags, and access. Most of the NODCs operate and/or are developing national networks with other institutes in their countries to ensure national coverage and long-term stewardship of available data sets. The majority of data managed by SeaDataNet partners concerns physical oceanography, marine chemistry, hydrography, and a substantial volume of marine biology and geology and geophysics. These are partly owned by the partner institutes themselves and for a major part also owned by other organizations from their countries. The SeaDataNet infrastructure is implemented with support of the EU via the EU FP6 SeaDataNet project to provide the Pan-European data management system adapted both to the fragmented observation system and the users' need for integrated access to data, meta-data, products and services. The SeaDataNet project has a duration of 5 years and started in 2006, but builds upon earlier data management infrastructure projects, undertaken over a period of 20 years by an expanding network of oceanographic data centres from the countries around all European seas. Its predecessor project Sea-Search had a strict focus on metadata. SeaDataNet maintains significant interest in the further development of the metadata infrastructure, extending its services with the provision of easy data access and generic data products. 
    Version 1 of its infrastructure upgrade was launched in April 2008 and is now well underway to include all 40 data centres at V1 level. It comprises the network of 40 interconnected data centres (NODCs) and a central SeaDataNet portal. V1 provides users a unified and transparent overview of the metadata and controlled access to the large collections of data sets that are managed at these data centres. The SeaDataNet V1 infrastructure comprises the following middleware services:
    • Discovery services = Metadata directories and User interfaces
    • Vocabulary services = Common vocabularies and Governance
    • Security services = Authentication, Authorization & Accounting
    • Delivery services = Requesting and Downloading of data sets
    • Viewing services = Mapping of metadata
    • Monitoring services = Statistics on system usage and performance and Registration of data requests and transactions
    • Maintenance services = Entry and updating of metadata by data centres
    Also good progress is being made with extending the SeaDataNet infrastructure with V2 services:
    • Viewing services = Quick views and Visualisation of data and data products
    • Product services = Generic and standard products
    • Exchange services = Transformation of SeaDataNet portal CDI output to INSPIRE compliance
    As a basis for the V1 services, common standards have been defined for metadata and data formats, common vocabularies, quality flags, and quality control methods, based on international standards, such as ISO 19115, OGC, NetCDF (CF), ODV, best practices from IOC and ICES, and following INSPIRE developments. An important objective of the SeaDataNet V1 infrastructure is to provide transparent access to the distributed data sets via a unique user interface and download service. In the SeaDataNet V1 architecture the Common Data Index (CDI) V1 metadata service provides the link between discovery and delivery of data sets. 
    The CDI user interface enables users to have a detailed insight into the availability and geographical distribution of marine data archived at the connected data centres. It provides sufficient information to allow the user to assess the data relevance. Moreover, the CDI user interface provides the means for downloading data sets in common formats via a transaction mechanism. The SeaDataNet portal provides registered users access to these distributed data sets via the CDI V1 Directory and a shopping basket mechanism. This allows registered users to locate data of interest and submit their data requests. The requests are forwarded automatically from the portal to the relevant SeaDataNet data centres. This process is controlled via the Request Status Manager (RSM) Web Service at the portal and a Download Manager (DM) Java software module implemented at each of the data centres. The RSM also enables registered users to check the status of their requests regularly and to download data sets after access has been granted. Data centres can follow all transactions for their data sets online and can handle requests which require their consent. The actual delivery of data sets is done between the user and the selected data centre. Very good progress is being made with connecting all SeaDataNet data centres and their data sets to the CDI V1 system. At present, the CDI V1 system provides users the functionality to discover and download more than 500,000 data sets, a number which is steadily increasing. The SeaDataNet architecture provides a coherent system of the various V1 services and inclusion of the V2 services. For the implementation, a range of technical components have been defined and developed. These make use of recent web technologies, and also comprise Java components, to provide multi-platform support and syntactic interoperability. 
    To facilitate sharing of resources and interoperability, SeaDataNet has adopted the technology of SOAP Web services for various communication tasks. The SeaDataNet architecture has been designed as a multi-disciplinary system from the beginning. It is able to support a wide variety of data types and to serve several sector communities. SeaDataNet is willing to share its technologies and expertise, to spread and expand its approach, and to build bridges to other well-established infrastructures in the marine domain. Therefore, SeaDataNet has developed a strategy of seeking active cooperation on a national scale with other data-holding organisations via its NODC networks, and on an international scale with other European and international data management initiatives and networks. This is done with the objective of achieving a wider coverage of data sources and an overall interoperability between data infrastructures in the marine and ocean domains. Recent examples include the EU FP7 projects Geo-Seas for geology and geophysical data sets, UpgradeBlackSeaScene for a Black Sea data management infrastructure, CaspInfo for a Caspian Sea data management infrastructure, and the EU EMODNET pilot projects for hydrographic, chemical, and biological data sets. All projects are adopting the SeaDataNet standards and extending its services. Active cooperation also takes place with EuroGOOS and MyOcean in the domain of real-time and delayed-mode metocean monitoring data. 
SeaDataNet Partners: IFREMER (France), MARIS (Netherlands), HCMR/HNODC (Greece), ULg (Belgium), OGS (Italy), NERC/BODC (UK), BSH/DOD (Germany), SMHI (Sweden), IEO (Spain), RIHMI/WDC (Russia), IOC (International), ENEA (Italy), INGV (Italy), METU (Turkey), CLS (France), AWI (Germany), IMR (Norway), NERI (Denmark), ICES (International), EC-DG JRC (International), MI (Ireland), IHPT (Portugal), RIKZ (Netherlands), RBINS/MUMM (Belgium), VLIZ (Belgium), MRI (Iceland), FIMR (Finland ), IMGW (Poland), MSI (Estonia), IAE/UL (Latvia), CMR (Lithuania), SIO/RAS (Russia), MHI/DMIST (Ukraine), IO/BAS (Bulgaria), NIMRD (Romania), TSU (Georgia), INRH (Morocco), IOF (Croatia), PUT (Albania), NIB (Slovenia), UoM (Malta), OC/UCY (Cyprus), IOLR (Israel), NCSR/NCMS (Lebanon), CNR-ISAC (Italy), ISMAL (Algeria), INSTM (Tunisia)
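The request workflow this record describes (shopping basket, Request Status Manager at the portal, data-centre consent before download) can be sketched with a toy stand-in. The class, states, and method names below are invented for illustration; the real SeaDataNet RSM and Download Manager are SOAP web services, not this Python API.

```python
# Toy sketch of a portal-side request-status workflow: submit a basket of
# CDI identifiers, wait for data-centre consent, then download. States and
# identifiers are hypothetical.
from typing import Dict, List

class RequestStatusManager:
    """Stand-in for an RSM-like service tracking data requests."""
    def __init__(self) -> None:
        self._status: Dict[int, str] = {}
        self._next_id = 1

    def submit(self, cdi_ids: List[str]) -> int:
        # A shopping-basket request covering several data sets.
        request_id = self._next_id
        self._next_id += 1
        self._status[request_id] = "PENDING"   # awaiting data-centre consent
        return request_id

    def grant(self, request_id: int) -> None:
        # In reality the data centre's Download Manager would flip this.
        self._status[request_id] = "READY"

    def status(self, request_id: int) -> str:
        return self._status[request_id]

rsm = RequestStatusManager()
req = rsm.submit(["cdi:12345", "cdi:67890"])
before = rsm.status(req)   # "PENDING": user polls until access is granted
rsm.grant(req)
after = rsm.status(req)    # "READY": user may now download from the data centre
```

The design point the abstract makes is that the portal only brokers and tracks requests; the actual data delivery happens directly between the user and the selected data centre.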

  19. Extensible Infrastructure for Browsing and Searching Abstracted Spacecraft Data

    NASA Technical Reports Server (NTRS)

    Wallick, Michael N.; Crockett, Thomas M.; Joswig, Joseph C.; Torres, Recaredo J.; Norris, Jeffrey S.; Fox, Jason M.; Powell, Mark W.; Mittman, David S.; Abramyan, Lucy; Shams, Khawaja S.; hide

    2009-01-01

    A computer program has been developed to provide a common interface for all space mission data, and allows different types of data to be displayed in the same context. This software provides an infrastructure for representing any type of mission data.

  20. Ocean Data Interoperability Platform (ODIP): developing a common framework for global marine data management

    NASA Astrophysics Data System (ADS)

    Glaves, H. M.

    2015-12-01

In recent years marine research has become increasingly multidisciplinary in its approach, with a corresponding rise in the demand for large quantities of high-quality interoperable data. This requirement for easily discoverable and readily available marine data is currently being addressed by a number of regional initiatives: projects such as SeaDataNet in Europe, Rolling Deck to Repository (R2R) in the USA and the Integrated Marine Observing System (IMOS) in Australia have implemented local infrastructures to facilitate the exchange of standardised marine datasets. However, each of these systems was developed to address local requirements and created in isolation from those in other regions. Multidisciplinary marine research on a global scale necessitates a common framework for marine data management built on existing data systems. The Ocean Data Interoperability Platform project seeks to address this requirement by bringing together selected regional marine e-infrastructures for the purpose of developing interoperability across them. By identifying the areas of commonality and incompatibility between these data infrastructures, and by leveraging the development activities and expertise of the individual systems, three prototype interoperability solutions are being created which demonstrate the effective sharing of marine data and associated metadata across the participating regional data infrastructures, as well as with other target international systems such as GEO and COPERNICUS. These interoperability solutions, combined with agreed best practice and approved standards, form the basis of a common global approach to marine data management which can be adopted by the wider marine research community.
To encourage implementation of these interoperability solutions by other regional marine data infrastructures, an impact assessment is being conducted to determine both the technical and financial implications of deploying them alongside existing services. The associated best practice and common standards are also being disseminated to the user community through relevant accreditation processes and related initiatives such as the Research Data Alliance and the Belmont Forum.

  1. Decontamination of chemical agents from drinking water infrastructure: a literature review and summary.

    PubMed

    Szabo, Jeff; Minamyer, Scott

    2014-11-01

This report summarizes the current state of knowledge on the persistence of chemical contamination on drinking water infrastructure (such as pipes), along with information on decontamination should persistence occur. Decontamination options for drinking water infrastructure have been explored for some chemical contaminants, but important data gaps remain. In general, data on chemical persistence on drinking water infrastructure are available for inorganics such as arsenic and mercury, as well as select organics such as petroleum products, pesticides and rodenticides. Data specific to chemical warfare agents and pharmaceuticals were not found, and data on toxins are scant. Future research suggestions focus on expanding the available chemical persistence data to other common drinking water infrastructure materials. Decontaminating agents that successfully removed persistent contamination from one infrastructure material should be used in further studies. Methods for sampling or extracting chemical agents from water infrastructure surfaces are also needed.

  2. Quantifying the benefits of urban forest systems as a component of the green infrastructure stormwater treatment network

    Treesearch

    Eric Kuehler; Jon Hathaway; Andrew Tirpak

    2017-01-01

    The use of green infrastructure for reducing stormwater runoff is increasingly common. One under‐studied component of the green infrastructure network is the urban forest system. Trees can play an important role as the “first line of defense” for restoring more natural hydrologic regimes in urban watersheds by intercepting rainfall, delaying runoff, infiltrating, and...

  3. Decontamination of Drinking Water Infrastructure ...

    EPA Pesticide Factsheets

Technical Brief. This study examines the effectiveness of decontaminating corroded iron and cement-mortar coupons that have been contaminated with spores of Bacillus atrophaeus subsp. globigii (B. globigii), which is often used as a surrogate for pathogenic B. anthracis (anthrax) in disinfection studies. Bacillus spores are persistent on common drinking water material surfaces such as corroded iron, requiring physical or chemical methods to decontaminate the infrastructure. In the United States, free chlorine and monochloramine are the primary chemical disinfectants used by the drinking water industry to inactivate microorganisms. Flushing is also a common, easily implemented practice in drinking water distribution systems, although large volumes of contaminated water needing treatment could be generated. Identifying readily available alternative disinfectant formulations for infrastructure decontamination could give water utilities options for responding to specific types of contamination events. In addition to presenting data on flushing alone, which demonstrated the persistence of spores on water infrastructure in the absence of high levels of disinfectants, data on acidified nitrite, chlorine dioxide, free chlorine, monochloramine, ozone, and peracetic acid, each followed by flushing, are provided.

  4. Strengthening the Security of ESA Ground Data Systems

    NASA Astrophysics Data System (ADS)

    Flentge, Felix; Eggleston, James; Garcia Mateos, Marc

    2013-08-01

A common approach to addressing information security has been implemented in ESA's Mission Operations Infrastructure (MOI) in recent years. This paper reports on the specific challenges to the Data Systems domain within the MOI and how security can be properly managed with an Information Security Management System (ISMS) according to ISO 27001. Results of an initial security risk assessment are reported, and the different types of security controls being implemented in order to reduce the risks are briefly described.

  5. Science in Action: National Stormwater Calculator (SWC) ...

    EPA Pesticide Factsheets

Stormwater discharges continue to cause impairment of our Nation's waterbodies. Regulations that require the retention and/or treatment of the frequent, small storms that dominate runoff volumes and pollutant loads are becoming more common. EPA has developed the National Stormwater Calculator (SWC) to help support local, state, and national stormwater management objectives to reduce runoff through infiltration and retention using green infrastructure practices as low impact development (LID) controls. The aim is to inform the public about what the Stormwater Calculator is used for.

  6. Anomaly-based intrusion detection for SCADA systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, D.; Usynin, A.; Hines, J. W.

    2006-07-01

Most critical infrastructure, such as chemical processing plants, electrical generation and distribution networks, and gas distribution, is monitored and controlled by Supervisory Control and Data Acquisition (SCADA) systems. These systems have been the focus of increased security attention, and there are concerns that they could be the target of international terrorists. With the constantly growing number of internet-related computer attacks, there is evidence that our critical infrastructure may also be vulnerable. Researchers estimate that malicious online actions may have caused $75 billion in damage as of 2007. One interesting countermeasure for enhancing information system security is intrusion detection. This paper briefly discusses the history of research in intrusion detection techniques and introduces the two basic detection approaches: signature detection and anomaly detection. Finally, it presents the application of techniques developed for monitoring critical process systems, such as nuclear power plants, to anomaly intrusion detection. The method uses an auto-associative kernel regression (AAKR) model coupled with the sequential probability ratio test (SPRT) and is applied to a simulated SCADA system. The results show that these methods can be used to detect a variety of common attacks.
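As a rough illustration of the AAKR-plus-SPRT scheme this record describes, the sketch below pairs a kernel-regression reconstruction of a sensor vector with a sequential test on the residuals. All parameters (kernel bandwidth, hypothesis means, error rates) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def aakr_estimate(X_mem, x_q, h=1.0):
    """Auto-associative kernel regression: reconstruct the query vector
    x_q as a Gaussian-kernel-weighted average of historical 'normal'
    memory vectors X_mem (one row per observation)."""
    d = np.linalg.norm(X_mem - x_q, axis=1)
    w = np.exp(-d**2 / (2 * h**2))
    w /= w.sum()
    return w @ X_mem

def sprt(residuals, m0=0.0, m1=1.0, var=1.0, alpha=0.01, beta=0.01):
    """Sequential probability ratio test on reconstruction residuals.
    Accumulates the log-likelihood ratio of 'faulted' (mean m1) vs
    'normal' (mean m0) Gaussian hypotheses and declares a decision
    once it crosses the Wald thresholds."""
    A = np.log((1 - beta) / alpha)   # upper (fault) threshold
    B = np.log(beta / (1 - alpha))   # lower (normal) threshold
    llr = 0.0
    for r in residuals:
        llr += ((r - m0)**2 - (r - m1)**2) / (2 * var)
        if llr >= A:
            return "fault"
        if llr <= B:
            return "normal"
    return "continue"
```

In use, the residual fed to `sprt` would be the per-sample distance between the observed SCADA measurement vector and its AAKR reconstruction; persistent large residuals drive the test to a "fault" decision.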

  7. Programmable logic controller optical fibre sensor interface module

    NASA Astrophysics Data System (ADS)

    Allwood, Gary; Wild, Graham; Hinckley, Steven

    2011-12-01

Most automated industrial processes use Distributed Control Systems (DCSs) or Programmable Logic Controllers (PLCs) for automated control. PLCs tend to be more common, as they have much of the functionality of DCSs while generally being cheaper to install and maintain. PLCs in conjunction with a human-machine interface, combined with communication infrastructure and Remote Terminal Units (RTUs), form the basis of Supervisory Control And Data Acquisition (SCADA) systems. RTUs convert different sensor measurands into digital data that is sent back to the PLC or supervisory system. Optical fibre sensors are becoming more common in industrial processes because of their many advantageous properties: being small, lightweight, highly sensitive, and immune to electromagnetic interference makes them an ideal solution for a variety of diverse sensing applications. Here, we have developed a PLC Optical Fibre Sensor Interface Module (OFSIM), in which an optical fibre is connected directly to the OFSIM located next to the PLC. The embedded fibre Bragg grating sensors are highly sensitive and can detect a number of different measurands, such as temperature, pressure and strain, without the need for a power supply.
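The fibre Bragg grating response mentioned in this record can be illustrated numerically: the fractional Bragg wavelength shift is approximately dλ/λ = (1 − p_e)·ε + k_T·ΔT. The constants below are typical order-of-magnitude values for silica fibre, assumed for illustration and not taken from the paper.

```python
# Assumed, typical silica-fibre constants (illustrative only)
LAMBDA_B = 1550.0e-9   # nominal Bragg wavelength (m)
P_E      = 0.22        # effective photo-elastic coefficient (dimensionless)
K_T      = 6.7e-6      # combined thermo-optic + thermal-expansion coeff (1/K)

def strain_from_shift(d_lambda, d_T=0.0):
    """Invert the FBG response  dλ/λ = (1 - p_e)·ε + k_T·ΔT  for strain,
    given a measured wavelength shift (m) and a known temperature
    change ΔT (K)."""
    return (d_lambda / LAMBDA_B - K_T * d_T) / (1.0 - P_E)
```

With these constants a 1 µε strain produces a shift of roughly 1.2 pm at 1550 nm, which is why interrogators for such sensors need picometre-scale wavelength resolution.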

  8. Collaborative Access Control For Critical Infrastructures

    NASA Astrophysics Data System (ADS)

    Baina, Amine; El Kalam, Anas Abou; Deswarte, Yves; Kaaniche, Mohamed

    A critical infrastructure (CI) can fail with various degrees of severity due to physical and logical vulnerabilities. Since many interdependencies exist between CIs, failures can have dramatic consequences on the entire infrastructure. This paper focuses on threats that affect information and communication systems that constitute the critical information infrastructure (CII). A new collaborative access control framework called PolyOrBAC is proposed to address security problems that are specific to CIIs. The framework offers each organization participating in a CII the ability to collaborate with other organizations while maintaining control of its resources and internal security policy. The approach is demonstrated on a practical scenario involving the electrical power grid.
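The organization-centric collaboration idea in this record can be sketched in a few lines. The class below is a hypothetical simplification for illustration, not the actual PolyOrBAC model: each organization evaluates requests against its own policy, and cross-organization requests are honoured only when a prior collaboration contract exists.

```python
class Organization:
    """Toy model of an organization that keeps control of its own
    security policy while collaborating with contracted partners."""

    def __init__(self, name):
        self.name = name
        self.policy = set()       # (role, action, resource) triples
        self.contracts = set()    # names of partner organizations

    def permit(self, role, action, resource):
        """Add a rule to this organization's internal policy."""
        self.policy.add((role, action, resource))

    def allows(self, requester_org, role, action, resource):
        """External requests require a collaboration contract; all
        requests are then checked against the local policy."""
        if requester_org != self.name and requester_org not in self.contracts:
            return False
        return (role, action, resource) in self.policy
```

The point of the structure is that revoking a contract, or changing a local rule, never requires coordinating a global policy across the whole critical information infrastructure.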

  9. Application of Smart Infrastructure Systems approach to precision medicine.

    PubMed

    Govindaraju, Diddahally R; Annaswamy, Anuradha M

    2015-12-01

All biological variation is a hierarchically organized, dynamic network system of genomic components, organelles, cells, tissues, organs, individuals, families, populations and metapopulations. Individuals are axial in this hierarchy, as they represent antecedent, attendant and anticipated aspects of health, disease, evolution and medical care. Humans show individual-specific genetic and clinical features, such as complexity, cooperation, resilience, robustness, vulnerability, self-organization, and latent and emergent behavior, during their development, growth and senescence. Accurate collection, measurement, organization and analysis of individual-specific data, embedded at all stratified levels of biological, demographic and cultural diversity (the big data), is necessary to make informed decisions on health, disease and longevity, which is a central theme of the precision medicine initiative (PMI). This initiative also calls for the development of novel analytical approaches to handle complex multidimensional data. Here we suggest the application of the Smart Infrastructure Systems (SIS) approach to accomplish some of the goals set forth by the PMI, on the premise that biological systems and SIS share many common features. The latter has been successfully employed in managing complex networks of non-linear adaptive controls, commonly encountered in smart engineering systems. We highlight their concordance and discuss the utility of the SIS approach in precision medicine programs.

  10. A Case for Data Commons

    PubMed Central

    Grossman, Robert L.; Heath, Allison; Murphy, Mark; Patterson, Maria; Wells, Walt

    2017-01-01

    Data commons collocate data, storage, and computing infrastructure with core services and commonly used tools and applications for managing, analyzing, and sharing data to create an interoperable resource for the research community. An architecture for data commons is described, as well as some lessons learned from operating several large-scale data commons. PMID:29033693

  11. Vehicle-to-infrastructure (V2I) safety applications : performance requirements, vol. 1, introduction and common requirements.

    DOT National Transportation Integrated Search

    2015-08-01

    This document is the first of a seven volume report that describes performance requirements for connected vehicle vehicle-to-infrastructure (V2I) Safety Applications developed for the U.S. Department of Transportation (U.S. DOT). The applications add...

  12. Arid Green Infrastructure for Water Control and Conservation ...

    EPA Pesticide Factsheets

    Green infrastructure is an approach to managing wet weather flows using systems and practices that mimic natural processes. It is designed to manage stormwater as close to its source as possible and protect the quality of receiving waters. Although most green infrastructure practices were first developed in temperate climates, green infrastructure also can be a cost-effective approach to stormwater management and water conservation in arid and semi-arid regions, such as those found in the western and southwestern United States. Green infrastructure practices can be applied at the site, neighborhood and watershed scales. In addition to water management and conservation, implementing green infrastructure confers many social and economic benefits and can address issues of environmental justice. The U.S. Environmental Protection Agency (EPA) commissioned a literature review to identify the state-of-the science practices dealing with water control and conservation in arid and semi-arid regions, with emphasis on these regions in the United States. The search focused on stormwater control measures or practices that slow, capture, treat, infiltrate and/or store runoff at its source (i.e., green infrastructure). The material in Chapters 1 through 3 provides background to EPA’s current activities related to the application of green infrastructure practices in arid and semi-arid regions. An introduction to the topic of green infrastructure in arid and semi-arid regions i

  13. Network information attacks on the control systems of power facilities belonging to the critical infrastructure

    NASA Astrophysics Data System (ADS)

    Loginov, E. L.; Raikov, A. N.

    2015-04-01

The largest accidents that occurred as a consequence of network information attacks on the control systems of power facilities belonging to the United States' critical infrastructure are analyzed in the context of the possibilities available in modern decision support systems. Trends in the development of technologies for inflicting damage on smart grids are formulated. A volume matrix of parameters characterizing attacks on facilities is constructed. A model describing the performance of a critical infrastructure's control system after an attack is developed. The recently adopted measures and legislative acts aimed at achieving more efficient protection of critical infrastructure are considered. Approaches to cognitive modeling and networked expertise of intricate situations for supporting the decision-making process, and to setting up a system of indicators for anticipatory monitoring of critical infrastructure, are proposed.

  14. 77 FR 5703 - Approval and Promulgation of Implementation Plans; North Carolina; 110(a)(1) and (2...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-06

    ... and Promulgation of Implementation Plans; North Carolina; 110(a)(1) and (2) Infrastructure... Carolina, through the Department of Environment and Natural Resources (NC DENR), Division of Air Quality... is commonly referred to as an ``infrastructure'' SIP. North Carolina certified that the North...

  15. The Anatomy of a Grid portal

    NASA Astrophysics Data System (ADS)

    Licari, Daniele; Calzolari, Federico

    2011-12-01

In this paper we introduce a new way to deal with Grid portals, referring to our implementation. L-GRID is a light portal for accessing the EGEE/EGI Grid infrastructure via the Web, allowing users to submit their jobs from a common Web browser in a few minutes, without any knowledge of the Grid infrastructure. It provides control over the complete lifecycle of a Grid job, from its submission and status monitoring to the output retrieval. The system, implemented as a client-server architecture, is based on the Globus Grid middleware. The client-side application is based on a Java applet; the server relies on a Globus User Interface. There is no need for user registration on the server side, and the user needs only his own X.509 personal certificate. The system is user-friendly, secure (it uses the SSL protocol and mechanisms for dynamic delegation and identity creation in public key infrastructures), highly customizable, open source, and easy to install. The X.509 personal certificate never leaves the local machine. The system reduces the time spent on job submission while granting higher efficiency and a better security level in proxy delegation and management.

  16. Application of large-scale computing infrastructure for diverse environmental research applications using GC3Pie

    NASA Astrophysics Data System (ADS)

    Maffioletti, Sergio; Dawes, Nicholas; Bavay, Mathias; Sarni, Sofiane; Lehning, Michael

    2013-04-01

    The Swiss Experiment platform (SwissEx: http://www.swiss-experiment.ch) provides a distributed storage and processing infrastructure for environmental research experiments. The aim of the second phase project (the Open Support Platform for Environmental Research, OSPER, 2012-2015) is to develop the existing infrastructure to provide scientists with an improved workflow. This improved workflow will include pre-defined, documented and connected processing routines. A large-scale computing and data facility is required to provide reliable and scalable access to data for analysis, and it is desirable that such an infrastructure should be free of traditional data handling methods. Such an infrastructure has been developed using the cloud-based part of the Swiss national infrastructure SMSCG (http://www.smscg.ch) and Academic Cloud. The infrastructure under construction supports two main usage models: 1) Ad-hoc data analysis scripts: These scripts are simple processing scripts, written by the environmental researchers themselves, which can be applied to large data sets via the high power infrastructure. Examples of this type of script are spatial statistical analysis scripts (R-based scripts), mostly computed on raw meteorological and/or soil moisture data. These provide processed output in the form of a grid, a plot, or a kml. 2) Complex models: A more intense data analysis pipeline centered (initially) around the physical process model, Alpine3D, and the MeteoIO plugin; depending on the data set, this may require a tightly coupled infrastructure. SMSCG already supports Alpine3D executions as both regular grid jobs and as virtual software appliances. A dedicated appliance with the Alpine3D specific libraries has been created and made available through the SMSCG infrastructure. 
The analysis pipelines are activated and supervised by simple control scripts that, depending on the data fetched from the meteorological stations, launch new instances of the Alpine3D appliance, execute location-based subroutines at each grid point, and store the results back into the central repository for post-processing. An optional extension of this infrastructure will be to provide a 'ring buffer'-type database infrastructure, such that model results (e.g. test runs made to check parameter dependency or for development) can be visualised and downloaded after completion without submitting them to a permanent storage infrastructure.
Data organization: Data collected from sensors are archived and classified in distributed sites connected with an open-source software middleware, GSN. Publicly available data are accessible through common web services and via a cloud storage server (based on Swift). Collocation of the data and processing in the cloud would eventually eliminate data transfer requirements.
Execution control logic: Execution of the data analysis pipelines (for both the R-based analysis and the Alpine3D simulations) has been implemented using the GC3Pie framework developed by UZH (https://code.google.com/p/gc3pie/). This allows large-scale, fault-tolerant execution of the pipelines, described in terms of software appliances. GC3Pie also allows supervision of the execution of large campaigns of appliances as a single simulation. This poster will present the fundamental architectural components of the data analysis pipelines together with initial experimental results.
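The control-script pattern described above (launch one appliance run per incoming dataset, retry failures, collect results) can be sketched generically. This is a hypothetical illustration of the supervision logic only; the command, dataset names, and retry policy are placeholders, and it does not use the actual GC3Pie API.

```python
import subprocess

def run_campaign(datasets, max_retries=2):
    """Launch one simulation run per dataset, retrying failed runs up
    to max_retries times, and collect each run's output (None marks a
    permanent failure). The 'echo' command stands in for the real
    appliance launcher."""
    results = {}
    for ds in datasets:
        for attempt in range(max_retries + 1):
            proc = subprocess.run(
                ["echo", f"alpine3d --input {ds}"],  # placeholder command
                capture_output=True, text=True)
            if proc.returncode == 0:
                results[ds] = proc.stdout.strip()
                break
        else:
            results[ds] = None  # failed on every attempt
    return results
```

Frameworks such as GC3Pie wrap exactly this kind of loop with scheduling, resubmission on remote failures, and bookkeeping across thousands of tasks, so the scientist only describes the per-dataset application.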

  17. Economic Analysis of Social Common Capital

    NASA Astrophysics Data System (ADS)

    Uzawa, Hirofumi

    2005-06-01

Social common capital provides members of society with those services and institutional arrangements that are crucial in maintaining human and cultural life. The term 'social common capital' comprises three categories: natural capital, social infrastructure, and institutional capital. Natural capital consists of the entire natural environment and natural resources, including the earth's atmosphere. Social infrastructure consists of roads, bridges, public transportation systems, electricity, and other public utilities. Institutional capital includes hospitals, educational institutions, judicial and police systems, public administrative services, financial and monetary institutions, and cultural capital. This book attempts to modify and extend the theoretical premises of orthodox economic theory to make them broad enough to analyze the economic implications of social common capital. It further aims to find the institutional arrangements and policy measures that will bring about the optimal state of affairs.

  18. Critical Infrastructure Protection- Los Alamos National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bofman, Ryan K.

Los Alamos National Laboratory (LANL) has been a key facet of Critical National Infrastructure since the nuclear bombing of Hiroshima exposed the nature of the Laboratory's work in 1945. Common knowledge of the sensitive information held here makes protecting this critical infrastructure a matter of national security. This protection takes multiple forms, beginning with physical security, followed by cybersecurity and the safeguarding of classified information, and concluding with the missions of the National Nuclear Security Administration.

  19. Common Criteria for Information Technology Security Evaluation: Department of Defense Public Key Infrastructure and Key Management Infrastructure Token Protection Profile (Medium Robustness)

    DTIC Science & Technology

    2002-03-22

    may be derived from detailed inspection of the IC itself or from illicit appropriation of design information. Counterfeit smart cards can be mass...Infrastructure (PKI) as the Internet to securely and privately exchange data and money through the use of a public and a private cryptographic key pair...interference devices (SQDIS), electrical testing, and electron beam testing. • Other attacks, such as UV or X-rays or high temperatures, could cause erasure

  20. The Mayor of EarthCube: Cities as an Analogue for Governing Cyberinfrastructure

    NASA Astrophysics Data System (ADS)

    Pearthree, G. M.; Allison, M. L.; Patten, K.

    2012-12-01

Historical development of national and global infrastructure follows common paths with common imperatives. The nascent development may be led by a champion, innovator, or incubating organization. Once the infrastructure reaches a tipping point and adoption spreads rapidly, the organization and governance evolve in concert. Ultimately, no widespread infrastructure (from canals to highways to the electric grid to radio/television or the Internet) operates with a single overarching governing body. The NSF EarthCube initiative is a prototype implementation of cyberinfrastructure, using the broad geoscience community as the testbed. Governance for EarthCube is emulating the pattern of other infrastructure, which we argue is a system of systems that can be described by organized complexity, emergent systems, and non-linear thermodynamics. As we consider governance of cyberinfrastructure in the geosciences, we might look to cities as analogs: cities provide services such as fire, police, water, and trash collection. Cities issue permits and often oversee zoning, but much of what defines cities is outside the direct control of city government. Businesses choose whether to locate there, where to operate, and what to build. Residents make similar decisions. State and federal agencies make decisions or impose criteria that greatly affect cities, without necessarily getting agreement from them. City government must thus operate at multiple levels: providing oversight and management of city services, interacting with residents, businesses, and visitors, and dealing with actions and decisions made by independent entities over which it has little or no control. Cities have a range of organizational and management models, ranging from city managers and councils to weak and strong mayors, some elected directly, some chosen from councils.
The range and complexity of governance issues in building, operating, and sustaining cyberinfrastructure in the geosciences and beyond, rival those of running a medium to large city. The range of organizational and management structures in meeting community needs and goals are also diverse and may embody a multi-faceted set of governing archetypes, best suited to carry out each of myriad functions. We envision cyberinfrastructure governance to be a community-driven enterprise empowered to carry out a dynamic set of functions, operating within a set of processes (comparable to a city charter) and guiding principles (constitution).

  1. 75 FR 67989 - Agency Information Collection Activities: Office of Infrastructure Protection; Infrastructure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-04

    ..., National Protection and Programs Directorate, Office of Infrastructure Protection (IP), will submit the... manner.'' DHS designated IP to lead these efforts. Given that the vast majority of the Nation's critical infrastructure and key resources in most sectors are privately owned or controlled, IP's success in achieving the...

  2. Emergence of a Common Modeling Architecture for Earth System Science (Invited)

    NASA Astrophysics Data System (ADS)

    Deluca, C.

    2010-12-01

    Common modeling architecture can be viewed as a natural outcome of common modeling infrastructure. The development of model utility and coupling packages (ESMF, MCT, OpenMI, etc.) over the last decade represents the realization of a community vision for common model infrastructure. The adoption of these packages has led to increased technical communication among modeling centers and newly coupled modeling systems. However, adoption has also exposed aspects of interoperability that must be addressed before easy exchange of model components among different groups can be achieved. These aspects include common physical architecture (how a model is divided into components) and model metadata and usage conventions. The National Unified Operational Prediction Capability (NUOPC), an operational weather prediction consortium, is collaborating with weather and climate researchers to define a common model architecture that encompasses these advanced aspects of interoperability and looks to future needs. The nature and structure of the emergent common modeling architecture will be discussed along with its implications for future model development.

  3. VERA 3.6 Release Notes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Richard L.; Kochunas, Brendan; Adams, Brian M.

    The Virtual Environment for Reactor Applications components included in this distribution include selected computational tools and supporting infrastructure that solve neutronics, thermal-hydraulics, fuel performance, and coupled neutronics-thermal hydraulics problems. The infrastructure components provide a simplified common user input capability and provide for the physics integration with data transfer and coupled-physics iterative solution algorithms.

  4. InterMine Webservices for Phytozome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, Joseph; Hayes, David; Goodstein, David

    2014-01-10

A data warehousing framework for biological information provides a useful infrastructure for both providers and users of genomic data. For providers, the infrastructure gives a consistent mechanism for extracting raw data, while for users, the web services supported by the software allow them to make either simple and common, or complex and unique, queries of the data.

  5. COOPEUS - connecting research infrastructures in environmental sciences

    NASA Astrophysics Data System (ADS)

    Koop-Jakobsen, Ketil; Waldmann, Christoph; Huber, Robert

    2015-04-01

The COOPEUS project was initiated in 2012, bringing together 10 research infrastructures (RIs) in environmental sciences from the EU and US in order to improve the discovery, access, and use of environmental information and data across scientific disciplines and across geographical borders. The COOPEUS mission is to facilitate readily accessible research infrastructure data to advance our understanding of Earth systems through an international community-driven effort, by: bringing together both user communities and top-down directives to address evolving societal and scientific needs; removing technical, scientific, cultural and geopolitical barriers to data use; and coordinating the flow, integrity and preservation of information. A survey of data availability was conducted among the COOPEUS research infrastructures for the purpose of discovering impediments to open international and cross-disciplinary sharing of environmental data. The survey showed that the majority of data offered by the COOPEUS research infrastructures is available via the internet (>90%), but the accessibility of these data differs significantly among research infrastructures; only 45% offer open access to their data, whereas the remaining infrastructures offer restricted access, e.g. they do not release raw or sensitive data, demand user registration, or require permission prior to the release of data. These rules and regulations are often installed as a form of standard practice, whereas formal data policies are lacking in 40% of the infrastructures, primarily in the EU. In order to improve this situation, COOPEUS has installed a common data-sharing policy agreed upon by all the COOPEUS research infrastructures. To investigate the existing opportunities for improving interoperability among environmental research infrastructures, COOPEUS explored the opportunities with the GEOSS common infrastructure (GCI) by holding a hands-on workshop.
Through exercises in directly registering resources, the first steps were taken to implement the GCI as a platform for documenting the capabilities of the COOPEUS research infrastructures. COOPEUS recognizes the potential for the GCI to become an important platform promoting cross-disciplinary approaches in the study of multifaceted environmental challenges. Recommendations from the workshop participants also revealed that, in order to attract research infrastructures to the GCI, the registration process must be simplified and accelerated. Moreover, the data policies of individual research infrastructures, or the lack thereof, can prevent use of the GCI or other portals because of unresolved questions of data management authority and data ownership. COOPEUS shall continue to promote cross-disciplinary data exchange in the environmental field and will in the future expand to include other geographical areas.

  6. Experimenting with C2 Applications and Federated Infrastructures for Integrated Full-Spectrum Operational Environments in Support of Collaborative Planning and Interoperable Execution

    DTIC Science & Technology

    2004-06-01

    Situation Understanding), Common Operational Pictures, Planning & Decision Support Capabilities, Message & Order Processing, Common Languages & Data Models, Modeling & Simulation Domain

  7. A Tale of Two Regimes: Instrumentality and Commons Access

    ERIC Educational Resources Information Center

    Toly, Noah J.

    2005-01-01

    Technical developments have profound social and environmental impacts. Both are observed in the implications of regimes of instrumentality for commons access regimes. Establishing social, material, ecological, intellectual, and moral infrastructures, technologies are partly constitutive of commons access and may militate against governance…

  8. Synthesis Study on Transitions in Signal Infrastructure and Control Algorithms for Connected and Automated Transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, H. M. Abdul; Wang, Hong; Young, Stan

    Documenting the existing state of practice is an initial step in developing future control infrastructure to be co-deployed for a heterogeneous mix of connected and automated vehicles and human drivers, while leveraging benefits to safety, congestion, and energy. With advances in information technology and extensive deployment of connected and automated vehicle technology anticipated over the coming decades, cities globally are making efforts to plan and prepare for these transitions. CAVs not only offer opportunities to improve transportation systems through enhanced safety and efficient vehicle operations; there are also significant needs in terms of exploring how best to leverage vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I) and vehicle-to-everything (V2X) technology. Both the Connected Vehicle (CV) and Connected and Automated Vehicle (CAV) paradigms feature bi-directional connectivity and share similar applications in terms of signal control algorithms and infrastructure implementation. The discussion in our synthesis study assumes the CAV/CV context, where connectivity exists with or without automated vehicles. Our synthesis study explores the current state of signal control algorithms and infrastructure; reports completed and newly proposed CV/CAV deployment studies regarding signal control schemes; reviews the deployment costs for CAV/AV signal infrastructure; and concludes with a discussion of opportunities, such as detector-free signal control schemes and dynamic performance management for intersections, and challenges, such as dependency on market adoption and the need to build a fault-tolerant signal system deployment in a CAV/CV environment. The study will serve as an initial critical assessment of existing signal control infrastructure (devices, control instruments, and firmware) and control schemes (actuated, adaptive, and coordinated green wave). 
Also, the report will help to identify future needs for the signal infrastructure to act as the nervous system of urban transportation networks, providing not only signaling but also observability, surveillance, and measurement capacity. The discussion of the opportunity space includes network optimization and control theory perspectives, the current state of observability for key system parameters (what can be detected, and how frequently it can be reported), and the controllability of dynamic parameters (including not only adjusting signal phase and timing, but also the ability to alter vehicle trajectories through information or direct control). The perspective of observability and controllability of dynamic systems provides an appropriate lens for discussing future directions as CAV/CV become more prevalent.
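The actuated control scheme assessed in this record can be illustrated with a minimal "gap-out" logic sketch (parameter names and values are illustrative, not taken from the report): the green phase is held from a minimum up to a maximum duration and ends early once the stop-bar detector sees no vehicle for a set gap.

```python
def actuated_green(detections, min_green=5, max_green=30, gap=3):
    """Simple gap-out actuated control: hold green from min_green up to
    max_green seconds, ending the phase once `gap` consecutive seconds
    pass with no detection. `detections` is a per-second list of booleans
    from a stop-bar detector."""
    green = min_green
    idle = 0
    for t in range(min_green, max_green):
        det = detections[t] if t < len(detections) else False
        idle = 0 if det else idle + 1
        green = t + 1
        if idle >= gap:
            break
    return green
```

With steady demand the phase runs to the 30 s maximum; with no demand it gaps out shortly after the 5 s minimum.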

  9. Integrated prevalence mapping of schistosomiasis, soil-transmitted helminthiasis and malaria in lakeside and island communities in Lake Victoria, Uganda.

    PubMed

    Kabatereine, Narcis B; Standley, Claire J; Sousa-Figueiredo, Jose C; Fleming, Fiona M; Stothard, J Russell; Talisuna, Ambrose; Fenwick, Alan

    2011-12-13

    It is widely advocated that integrated strategies for the control of neglected tropical diseases (NTDs) are cost-effective in comparison to vertical disease-specific programmes. A prerequisite for implementation of control interventions is the availability of baseline data of prevalence, including the population at risk and disease overlap. Despite extensive literature on the distribution of schistosomiasis on the mainland in Uganda, there has been a knowledge gap for the prevalence of co-infections with malaria, particularly for island communities in Lake Victoria. In this study, nine lakeshore and island districts were surveyed for the prevalence of NTDs and malaria, as well as educational and health infrastructure. A total of 203 communities were surveyed, including over 5000 school-age children. Varying levels of existing health infrastructure were observed between districts, with only Jinja District regularly treating people for NTDs. Community medicine distributors (CMD) were identified and trained in drug delivery to strengthen capacity. Prevalence levels of intestinal schistosomiasis and soil-transmitted helminthiasis were assessed via Kato-Katz thick smears of stool and malaria prevalence determined by microscopy of fingerprick blood samples. Prevalence levels were 40.8%, 26.04% and 46.4%, respectively, while the prevalence of co-infection by Schistosoma mansoni and Plasmodium spp. was 23.5%. Socio-economic status was strongly associated as a risk factor for positive infection status with one or more of these diseases. These results emphasise the challenges of providing wide-scale coverage of health infrastructure and drug distribution in remote lakeshore communities. The data further indicate that co-infections with malaria and NTDs are common, implying that integrated interventions for NTDs and malaria are likely to maximize cost-effectiveness and sustainability of disease control efforts.

  10. Integrated prevalence mapping of schistosomiasis, soil-transmitted helminthiasis and malaria in lakeside and island communities in Lake Victoria, Uganda

    PubMed Central

    2011-01-01

    Background It is widely advocated that integrated strategies for the control of neglected tropical diseases (NTDs) are cost-effective in comparison to vertical disease-specific programmes. A prerequisite for implementation of control interventions is the availability of baseline data of prevalence, including the population at risk and disease overlap. Despite extensive literature on the distribution of schistosomiasis on the mainland in Uganda, there has been a knowledge gap for the prevalence of co-infections with malaria, particularly for island communities in Lake Victoria. In this study, nine lakeshore and island districts were surveyed for the prevalence of NTDs and malaria, as well as educational and health infrastructure. Results A total of 203 communities were surveyed, including over 5000 school-age children. Varying levels of existing health infrastructure were observed between districts, with only Jinja District regularly treating people for NTDs. Community medicine distributors (CMD) were identified and trained in drug delivery to strengthen capacity. Prevalence levels of intestinal schistosomiasis and soil-transmitted helminthiasis were assessed via Kato-Katz thick smears of stool and malaria prevalence determined by microscopy of fingerprick blood samples. Prevalence levels were 40.8%, 26.04% and 46.4%, respectively, while the prevalence of co-infection by Schistosoma mansoni and Plasmodium spp. was 23.5%. Socio-economic status was strongly associated as a risk factor for positive infection status with one or more of these diseases. Conclusions These results emphasise the challenges of providing wide-scale coverage of health infrastructure and drug distribution in remote lakeshore communities. The data further indicate that co-infections with malaria and NTDs are common, implying that integrated interventions for NTDs and malaria are likely to maximize cost-effectiveness and sustainability of disease control efforts. PMID:22166365

  11. A technological review on electric vehicle DC charging stations using photovoltaic sources

    NASA Astrophysics Data System (ADS)

    Youssef, Cheddadi; Fatima, Errahimi; najia, Es-sbai; Chakib, Alaoui

    2018-05-01

    Within the next few years, electrified vehicles are destined to become an essential component of the transport sector. Consequently, the charging infrastructure should be developed at the same time. Within this substructure, photovoltaic-assisted charging stations are attracting substantial interest due to increased environmental awareness, cost reduction, and the rising efficiency of PV modules. The intention of this paper is to review the technological status of photovoltaic-electric vehicle (PV-EV) charging stations over the last decade. PV-EV charging stations fall into two categories: PV-grid and PV-standalone charging systems. From a practical point of view, the distinction between the two architectures is the bidirectional inverter, which is added to link the station to the smart grid. The technological infrastructure includes the common hardware components of every station, namely: PV array, dc-dc converter with MPPT control, energy storage unit, bidirectional dc charger and inverter. We investigate, compare and evaluate many valuable studies covering the design and control of PV-EV charging systems. Additionally, this concise overview reports studies that address charging standards, power converter topologies focused on the adoption of vehicle-to-grid technology, and control for both PV-grid and PV-standalone DC charging systems.
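The MPPT control mentioned among the common hardware components is very often the classic perturb-and-observe algorithm; a minimal sketch of that scheme follows (the `measure` callback and all numbers are hypothetical, standing in for real converter telemetry):

```python
def perturb_and_observe(measure, duty=0.5, step=0.01, iters=200):
    """Classic P&O MPPT: nudge the dc-dc converter duty cycle and keep
    perturbing in the direction that increases PV output power.
    `measure(duty)` returns the (voltage, current) pair at that duty cycle."""
    v, i = measure(duty)
    power = v * i
    direction = 1
    for _ in range(iters):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        v, i = measure(duty)
        new_power = v * i
        if new_power < power:      # power dropped: reverse the perturbation
            direction = -direction
        power = new_power
    return duty
```

In steady state the duty cycle oscillates within one or two steps of the maximum power point, which is the characteristic (and well-known) limitation of P&O.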

  12. European environmental research infrastructures are going for common 30 years strategy

    NASA Astrophysics Data System (ADS)

    Asmi, Ari; Konjin, Jacco; Pursula, Antti

    2014-05-01

    Environmental research infrastructures are facilities, resources, systems and related services that are used by research communities to conduct top-level research. Environmental research addresses processes at very different time scales, and supporting research infrastructures must be designed as long-term facilities in order to meet the requirements of continuous environmental observation, measurement and analysis. This longevity makes environmental research infrastructures ideal structures to support long-term development in environmental sciences. The ENVRI project is a collaborative action of the major European (ESFRI) environmental research infrastructures working towards increased co-operation and interoperability between the infrastructures. One of the key products of the ENVRI project is to combine the long-term plans of the individual infrastructures into a common strategy describing the vision and planned actions. The envisaged vision for environmental research infrastructures toward 2030 is to support a holistic understanding of our planet and its behavior. The development of a 'Standard Model of the Planet' is a common ambition: a challenge to define an environmental standard model, a framework of all interactions within the Earth system, from the solid earth to near space. Indeed, scientists feel challenged to contribute to a 'Standard Model of the Planet' with data, models, algorithms and discoveries. Understanding the Earth system as an interlinked system requires a systems approach. The environmental sciences are rapidly moving to become a system-level science, mainly because modern science, engineering and society increasingly face complex problems that can only be understood in the context of the full overall system. The strategy of the supporting collaborating research infrastructures is based on developing three key factors for the environmental sciences: the technological, the cultural and the human capital. 
The technological capital development concentrates on improving the capacities to measure, observe, preserve and compute. This requires staff, technologies, sensors, satellites, floats, and software for integration, analysis and modeling, including data storage, computing platforms and networks. The cultural capital development addresses issues such as open access to data, rules, licenses, citation agreements, IPR agreements, technologies for machine-machine interaction, workflows, metadata, and the RI community at the policy level. Human capital actions are based on the anticipated need for specialists, including data scientists and 'generalists' who oversee more than just their own discipline. Developing these as interrelated services should help the scientific community to undertake innovative and large projects contributing to a 'Standard Model of the Planet'. To achieve the overall goal, ENVRI will publish a set of action items containing intermediate aims and bigger and smaller steps towards the development of the 'Standard Model of the Planet' approach. This timeline of actions can be used as a reference and 'common denominator' in defining new projects and research programs, whether within the various environmental scientific disciplines, when cooperating among these disciplines, or when reaching out to other disciplines such as the social sciences, physics/chemistry, and medical/life sciences.

  13. SeaDataNet II - Second phase of developments for the pan-European infrastructure for marine and ocean data management

    NASA Astrophysics Data System (ADS)

    Schaap, Dick M. A.; Fichaut, Michele

    2013-04-01

    The second phase of the SeaDataNet project started in October 2011 for another 4 years, with the aim of upgrading the SeaDataNet infrastructure built during previous years. The numbers of the project are quite impressive: 59 institutions from 35 different countries are involved. In particular, 45 data centres are sharing human and financial resources in a common effort to sustain an operationally robust and state-of-the-art pan-European infrastructure providing up-to-date, high-quality access to ocean and marine metadata, data and data products. The main objective of SeaDataNet II is to improve operations and to progress towards an efficient data management infrastructure able to handle the diversity and large volume of data collected via the pan-European oceanographic fleet and the new observation systems, in both real-time and delayed mode. The infrastructure is based on a semi-distributed system that incorporates and enhances the existing NODC network. SeaDataNet aims at serving users from science, environmental management, policy making, and economic sectors. Better integrated data systems are vital for these users to achieve improved scientific research and results, to support marine environmental and integrated coastal zone management, to establish indicators of Good Environmental Status for sea basins, and to support offshore industry developments, shipping, fisheries, and other economic activities. The recent EU communication "MARINE KNOWLEDGE 2020 - marine data and observation for smart and sustainable growth" states that the creation of marine knowledge begins with observation of the seas and oceans. In addition, directives, policies and science programmes require reporting of the state of the seas and oceans in an integrated pan-European manner: of particular note are INSPIRE, MSFD, WISE-Marine and the GMES Marine Core Service. These underpin the importance of a well-functioning marine and ocean data management infrastructure. 
SeaDataNet is now one of the major players in oceanographic informatics, and collaborative relationships have been created with other EU and non-EU projects. In particular, SeaDataNet has recognised roles in the continuous serving of common vocabularies, the provision of tools for data management, and the provision of access to metadata, data sets and data products of importance for society. The SeaDataNet infrastructure comprises a network of interconnected data centres and a central SeaDataNet portal. The portal provides users not only with background information about SeaDataNet and the various SeaDataNet standards and tools, but also with a unified and transparent overview of the metadata and controlled access to the large collections of data sets managed by the interconnected data centres. The presentation will give information on the present services of the SeaDataNet infrastructure and highlight a number of key achievements in SeaDataNet II so far.

  14. Crowdsourced earthquake early warning.

    PubMed

    Minson, Sarah E; Brooks, Benjamin A; Glennie, Craig L; Murray, Jessica R; Langbein, John O; Owen, Susan E; Heaton, Thomas H; Iannucci, Robert A; Hauser, Darren L

    2015-04-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California's Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing.
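Event detection on noisy consumer-grade accelerometers of the kind described here is commonly done with a short-term-average/long-term-average (STA/LTA) trigger; the following is an illustrative sketch of that generic technique, not the authors' algorithm, and all window lengths and thresholds are invented:

```python
def sta_lta_trigger(samples, sta_n=10, lta_n=100, threshold=4.0):
    """Return the first sample index at which the short-term average of
    |acceleration| exceeds `threshold` times the long-term average of the
    preceding background noise, or None if no trigger occurs."""
    for k in range(lta_n + sta_n, len(samples)):
        lta = sum(abs(s) for s in samples[k - lta_n - sta_n:k - sta_n]) / lta_n
        sta = sum(abs(s) for s in samples[k - sta_n:k]) / sta_n
        if lta > 0 and sta / lta >= threshold:
            return k
    return None
```

In a crowdsourced setting, single-device triggers like this would be aggregated across many devices to suppress false alarms from individual phones being moved or dropped.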

  15. Crowdsourced earthquake early warning

    PubMed Central

    Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.

    2015-01-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing. PMID:26601167

  16. Optoelectronic Infrastructure for Radio Frequency and Optical Phased Arrays

    NASA Technical Reports Server (NTRS)

    Cai, Jianhong

    2015-01-01

    Optoelectronic integrated circuits offer radiation-hardened solutions for satellite systems in addition to improved size, weight, power, and bandwidth characteristics. ODIS, Inc., has developed optoelectronic integrated circuit technology for sensing and data transfer in phased arrays. The technology applies integrated components (lasers, amplifiers, modulators, detectors, and optical waveguide switches) to a radio frequency (RF) array with true time delay for beamsteering. Optical beamsteering is achieved by controlling the current in a two-dimensional (2D) array. In this project, ODIS integrated key components to produce common RF-optical aperture operation.

  17. The GEOSS User Requirement Registry (URR): A Cross-Cutting Service-Oriented Infrastructure Linking Science, Society and GEOSS

    NASA Astrophysics Data System (ADS)

    Plag, H.-P.; Foley, G.; Jules-Plag, S.; Ondich, G.; Kaufman, J.

    2012-04-01

    The Group on Earth Observations (GEO) is implementing the Global Earth Observation System of Systems (GEOSS) as a user-driven service infrastructure responding to the needs of users in nine interdependent Societal Benefit Areas (SBAs) of Earth observations (EOs). GEOSS applies an interdisciplinary scientific approach integrating observations, research, and knowledge in these SBAs in order to enable scientific interpretation of the collected observations and the extraction of actionable information. Using EOs to actually produce these societal benefits means getting the data and information to users, i.e., decision-makers. Thus, GEO needs to know what the users need and how they would use the information. The GEOSS User Requirements Registry (URR) is developed as a service-oriented infrastructure enabling a wide range of users, including science and technology (S&T) users, to express their needs in terms of EOs and to understand the benefits of GEOSS for their fields. S&T communities need to be involved in both the development and the use of GEOSS, and the development of the URR accounts for the special needs of these communities. The GEOSS Common Infrastructure (GCI) at the core of GEOSS includes system-oriented registries enabling users to discover, access, and use EOs and derived products and services available through GEOSS. In addition, the user-oriented URR is a place for the collection, sharing, and analysis of user needs and EO requirements, and it provides means for an efficient dialog between users and providers. The URR is a community-based infrastructure for the publishing, viewing, and analyzing of user-need related information. The data model of the URR has a core of seven relations for User Types, Applications, Requirements, Research Needs, Infrastructure Needs, Technology Needs, and Capacity Building Needs. The URR also includes a Lexicon, a number of controlled vocabularies, and
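The seven core relations described above (User Types, Applications, Requirements, Research Needs, Infrastructure Needs, Technology Needs, Capacity Building Needs) can be sketched as a relational schema; this is a guess at a plausible shape, with table and column names invented for illustration rather than taken from the actual URR data model:

```python
import sqlite3

# Hypothetical sketch of the URR's described core relations, plus one link
# table tying requirements to applications. Names are illustrative only.
schema = """
CREATE TABLE user_types              (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE applications            (id INTEGER PRIMARY KEY, name TEXT,
                                      user_type_id INTEGER REFERENCES user_types(id));
CREATE TABLE requirements            (id INTEGER PRIMARY KEY, description TEXT);
CREATE TABLE research_needs          (id INTEGER PRIMARY KEY, description TEXT);
CREATE TABLE infrastructure_needs    (id INTEGER PRIMARY KEY, description TEXT);
CREATE TABLE technology_needs        (id INTEGER PRIMARY KEY, description TEXT);
CREATE TABLE capacity_building_needs (id INTEGER PRIMARY KEY, description TEXT);
CREATE TABLE app_requirements        (app_id INTEGER REFERENCES applications(id),
                                      req_id INTEGER REFERENCES requirements(id));
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)
```

The point of such a structure is that a user need expressed once (a requirement) can be linked to many applications and traced back to the user types it serves, which is the kind of user-provider dialog the URR is described as enabling.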

  18. Fifty years of stroke researches in India

    PubMed Central

    Banerjee, Tapas Kumar; Das, Shyamal Kumar

    2016-01-01

    Currently, the stroke incidence in India is much higher than in Western industrialized countries. Large-vessel intracranial atherosclerosis is the commonest cause of ischemic stroke in India. The common risk factors, that is, hypertension, diabetes, smoking, and dyslipidemia, are quite prevalent and inadequately controlled, mainly because of poor public awareness and inadequate infrastructure. Only a small number of ischemic stroke cases are able to benefit from thrombolytic therapy. Benefits from stem cell therapy in established stroke cases are under evaluation. Presently, given the Indian scenario, prevention of stroke through control and/or avoidance of risk factors is the best option. Interventional studies are an important need in this scenario. PMID:27011621

  19. Enhancing infrastructure resilience through business continuity planning.

    PubMed

    Fisher, Ronald; Norman, Michael; Klett, Mary

    2017-01-01

    Critical infrastructure is crucial to the functionality and wellbeing of the world around us. It is a complex network whose parts work together to create an efficient society. The core components of critical infrastructure depend on one another to function at their full potential. Organisations face unprecedented environmental risks such as increased reliance on information technology and telecommunications, increased infrastructure interdependencies and globalisation. Successful organisations should integrate cyber-physical components and infrastructure interdependencies into a holistic risk framework. Physical security plans, cyber security plans and business continuity plans can help mitigate environmental risks. Cyber security plans are becoming the most crucial to have, yet are the least commonly found in organisations. As reliance on cyber systems continues to grow, it is imperative that organisations update their business continuity and emergency preparedness activities to include cyber risk.

  20. Use of Green Infrastructure Integrated with Conventional Gray Infrastructure for Combined Sewer Overflow Control: Kansas City, MO

    EPA Science Inventory

    Advanced design concepts such as Low Impact Development (LID) and Green Solutions (or upland runoff control techniques) are currently being encouraged by the United States Environmental Protection Agency (EPA) as a management practice to contain and control stormwater at the lot ...

  1. 76 FR 39797 - Approval and Promulgation of Implementation Plans; Connecticut; Infrastructure SIP for the 1997 8...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-07

    .... SUMMARY: EPA is proposing to conditionally approve one element of Connecticut's December 28, 2007... commonly referred to as an infrastructure SIP. The one element of the submittal that EPA is proposing to... conditionally approving one element of Connecticut's December 28, 2007 submittal to meet the Clean Air Act...

  2. The European Network of Analytical and Experimental Laboratories for Geosciences

    NASA Astrophysics Data System (ADS)

    Freda, Carmela; Funiciello, Francesca; Meredith, Phil; Sagnotti, Leonardo; Scarlato, Piergiorgio; Troll, Valentin R.; Willingshofer, Ernst

    2013-04-01

    Integrating Earth Sciences infrastructures in Europe is the mission of the European Plate Observing System (EPOS). The integration of European analytical, experimental, and analogue laboratories plays a key role in this context and is the task of the EPOS Working Group 6 (WG6). Despite the presence in Europe of high-performance infrastructures dedicated to geosciences, there is still limited collaboration in sharing facilities and best practices. The EPOS WG6 aims to overcome this limitation by pushing towards national and trans-national coordination, efficient use of current laboratory infrastructures, and future aggregation of facilities not yet included. This will be attained through the creation of common access and interoperability policies to foster and simplify personnel mobility. The EPOS ambition is to orchestrate European laboratory infrastructures with diverse, complementary tasks and competences into a single, but geographically distributed, infrastructure for rock physics, palaeomagnetism, analytical and experimental petrology and volcanology, and tectonic modeling. The WG6 is presently organizing its thematic core services within the EPOS distributed research infrastructure with the goal of joining the other EPOS communities (geologists, seismologists, volcanologists, etc.) 
and stakeholders (engineers, risk managers and other geosciences investigators) to: 1) develop tools and services to enhance visitor programs that will mutually benefit visitors and hosts (transnational access); 2) improve support and training activities to make facilities equally accessible to students, young researchers, and experienced users (training and dissemination); 3) collaborate in sharing technological and scientific know-how (transfer of knowledge); 4) optimize interoperability of distributed instrumentation by standardizing data collection, archiving, and quality control (data preservation and interoperability); 5) implement a unified e-Infrastructure for data analysis, numerical modelling, and the joint development and standardization of numerical tools (e-science implementation); 6) collect and store data in a flexible inventory database accessible within and beyond the Earth sciences community (open access and outreach); 7) connect to environmental and hazard protection agencies, stakeholders, and the public to raise awareness of geo-hazards and geo-resources (innovation for society). We will inform scientists and industrial stakeholders of the most recent WG6 achievements in EPOS and show how our community is proceeding to design the thematic core services.

  3. Evaluating SafeClub: can risk management training improve the safety activities of community soccer clubs?

    PubMed

    Abbott, K; Klarenaar, P; Donaldson, A; Sherker, S

    2008-06-01

    To evaluate a sports safety-focused risk-management training programme. Controlled before-and-after study. Four community soccer associations in Sydney, Australia. 76 clubs (32 intervention, 44 control) at baseline, and 67 clubs (27 intervention, 40 control) at post-season and 12-month follow-ups. SafeClub, a sports safety-focused risk-management training programme (3 × 2-hour sessions) based on adult-learning principles and injury-prevention concepts and models. Changes in mean policy, infrastructure and overall safety scores, as measured using a modified version of the Sports Safety Audit Tool. There was no significant difference in the mean policy, infrastructure and overall safety scores of intervention and control clubs at baseline. Intervention clubs achieved higher post-season mean policy (11.9 intervention vs 7.5 control), infrastructure (15.2 vs 10.3) and overall safety (27.0 vs 17.8) scores than did controls. These differences were greater at the 12-month follow-up: policy (16.4 vs 7.6); infrastructure (24.7 vs 10.7); and overall safety (41.1 vs 18.3). General linear modelling indicated that intervention clubs achieved statistically significantly higher policy (p<0.001), infrastructure (p<0.001) and overall safety (p<0.001) scores than control clubs at the post-season and 12-month follow-ups. There was also a significant linear interaction of time and group for all three scores: policy (p<0.001), infrastructure (p<0.001) and overall safety (p<0.001). SafeClub effectively assisted community soccer clubs to improve their sports safety activities, particularly the foundations and processes for good risk-management practice, in a sustainable way.

  4. Vehicle-to-infrastructure program cooperative adaptive cruise control.

    DOT National Transportation Integrated Search

    2015-03-01

    This report documents the work completed by the Crash Avoidance Metrics Partners LLC (CAMP) Vehicle to Infrastructure (V2I) Consortium during the project titled Cooperative Adaptive Cruise Control (CACC). Participating companies in the V2I Cons...

  5. Situating Green Infrastructure in Context: Adaptive Socio-Hydrology for Sustainable Cities - poster

    EPA Science Inventory

    The benefits of green infrastructure (GI) in controlling urban hydrologic processes have largely focused on practical matters like stormwater management, which drives the planning stage. Green Infrastructure design and implementation usually takes into account physical site chara...

  6. The costs of uncoordinated infrastructure management in multi-reservoir river basins

    NASA Astrophysics Data System (ADS)

    Jeuland, Marc; Baker, Justin; Bartlett, Ryan; Lacombe, Guillaume

    2014-10-01

    Though there are surprisingly few estimates of the economic benefits of coordinated infrastructure development and operations in international river basins, there is a widespread belief that improved cooperation is beneficial for managing water scarcity and variability. Hydro-economic optimization models are commonly used for identifying the efficient allocation of water across time and space, but such models typically assume full coordination. In the real world, investment and operational decisions for specific projects are often made without full consideration of potential downstream impacts. This paper describes a tractable methodology for evaluating the economic benefits of infrastructure coordination. We demonstrate its application over a range of water availability scenarios in a catchment of the Mekong located in Lao PDR, the Nam Ngum River Basin. Results from this basin suggest that coordination improves system net benefits from irrigation and hydropower by approximately 3-12% (or US$12-53 million/yr) assuming moderate levels of flood control, and that the magnitude of coordination benefits generally increases with the level of water availability and with inflow variability. Similar analyses would be useful for developing a systematic understanding of the factors that increase the costs of non-cooperation in river basin systems worldwide, and would likely help to improve the targeting of efforts to stimulate complicated negotiations over water resources.
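The coordination gap this record quantifies can be illustrated with a toy two-user model (all functions and numbers invented, far simpler than a real hydro-economic model): an upstream operator choosing releases to maximize only its own benefit leaves less total value than a basin-wide planner maximizing the joint benefit.

```python
def benefits(release):
    """Toy model: upstream irrigation benefit is concave in its own release,
    while downstream hydropower benefit grows with the water passed through."""
    upstream = 10 * release - release ** 2   # peaks at release = 5
    downstream = 3 * (10 - release)          # water held upstream is lost downstream
    return upstream, downstream

releases = [r / 10 for r in range(0, 101)]  # candidate releases 0.0 .. 10.0

# Uncoordinated: the upstream operator maximizes only its own benefit.
r_selfish = max(releases, key=lambda r: benefits(r)[0])
# Coordinated: a basin-wide planner maximizes the joint benefit.
r_joint = max(releases, key=lambda r: sum(benefits(r)))

selfish_total = sum(benefits(r_selfish))
joint_total = sum(benefits(r_joint))
```

Here the planner shifts the release away from the upstream optimum because the downstream marginal benefit of extra water exceeds the upstream marginal loss; the difference `joint_total - selfish_total` is the toy analogue of the coordination benefit the paper estimates.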

  7. Projecting the impact of a broadband communication infrastructure on printing, publishing and advertising

    NASA Astrophysics Data System (ADS)

    Smith, Ted

    1994-11-01

    A broadband communication infrastructure (over 150 megabits per second), deployed almost everywhere outside the third world within 20 years, is a common planning assumption of governments, communication carriers, and information providers. The "structure" of this infrastructure has been variously projected as being that of the telephone network, the cable system, or the Internet. An argument is made that the telephone model, with features borrowed from the other two, will prevail. This model is used to project broad features of printing, publishing, and advertising. In support of this projection, printing is modeled purposefully: a document is printed either to archive it, to give it to someone else, or to use it (read, mark up, take along, etc.). In the broadband future, only the last is sustainable. Publishing is modeled as a four-stage chain of commerce from creator to buyer. The progress of both the document and its chain of payments is considered today and in the broadband scenario. Finally, advertising today and tomorrow is modeled as a 2x2x2 cube. One dimension contrasts the "notify/inform" and "persuade" aspects of advertising; another contrasts the consumer's role as passive recipient vs. active controller of what s/he hears and sees; the third views the institution of advertising as reflecting or setting societal values.

  8. Urban gardens: catalysts for restorative commons infrastructure

    Treesearch

    John Seitz

    2009-01-01

    One of 18 articles inspired by the Meristem 2007 Forum, "Restorative Commons for Community Health." The articles include interviews, case studies, thought pieces, and interdisciplinary theoretical works that explore the relationship between human health and the urban...

  9. The Impact of a Carbapenem-Resistant Enterobacteriaceae Outbreak on Facilitating Development of a National Infrastructure for Infection Control in Israel.

    PubMed

    Schwaber, Mitchell J; Carmeli, Yehuda

    2017-11-29

    In 2006 the Israeli healthcare system faced an unprecedented outbreak of carbapenem-resistant Enterobacteriaceae, primarily involving KPC-producing Klebsiella pneumoniae clonal complex CC258. This public health crisis exposed major gaps in infection control. In response, Israel established a national infection control infrastructure. The steps taken to build this infrastructure and benefits realized from its creation are described here.

  10. Management of wetlands for wildlife

    USGS Publications Warehouse

    Matthew J. Gray,; Heath M. Hagy,; J. Andrew Nyman,; Stafford, Joshua D.

    2013-01-01

    Wetlands are highly productive ecosystems that provide habitat for a diversity of wildlife species and afford various ecosystem services. Managing wetlands effectively requires an understanding of basic ecosystem processes, animal and plant life history strategies, and principles of wildlife management. Management techniques that are used differ depending on target species, coastal versus interior wetlands, and available infrastructure, resources, and management objectives. Ideally, wetlands are managed as a complex, with many successional stages and hydroperiods represented in close proximity. Managing wetland wildlife typically involves manipulating water levels and vegetation in the wetland, and providing an upland buffer. Commonly, levees and water control structures are used to manipulate wetland hydrology in combination with other management techniques (e.g., disking, burning, herbicide application) to create desired plant and wildlife responses. In the United States, several conservation programs are available to assist landowners in developing wetland management infrastructure on their property. Managing wetlands to increase habitat quality for wildlife is critical, considering this ecosystem is one of the most imperiled in the world.

  11. Neighborhood Sociodemographics and Change in Built Infrastructure.

    PubMed

    Hirsch, Jana A; Green, Geoffrey F; Peterson, Marc; Rodriguez, Daniel A; Gordon-Larsen, Penny

    2017-01-01

    While increasing evidence suggests an association between physical infrastructure in neighbourhoods and health outcomes, relatively little research examines how neighbourhoods change physically over time and how these physical improvements are spatially distributed across populations. This paper describes the change over 25 years (1985-2010) in bicycle lanes, off-road trails, bus transit service, and parks, and spatial clusters of changes in these domains relative to neighbourhood sociodemographics in four U.S. cities that are diverse in terms of geography, size and population. Across all four cities, we identified increases in bicycle lanes, off-road trails, and bus transit service, with spatial clustering in these changes that related to neighbourhood sociodemographics. Overall, we found evidence of positive changes in physical infrastructure commonly identified as supportive of physical activity. However, the patterning of infrastructure change by sociodemographic change encourages attention to the equity in infrastructure improvements across neighbourhoods.

  12. Neighborhood Sociodemographics and Change in Built Infrastructure

    PubMed Central

    Hirsch, Jana A.; Green, Geoffrey F.; Peterson, Marc; Rodriguez, Daniel A.; Gordon-Larsen, Penny

    2016-01-01

    While increasing evidence suggests an association between physical infrastructure in neighbourhoods and health outcomes, relatively little research examines how neighbourhoods change physically over time and how these physical improvements are spatially distributed across populations. This paper describes the change over 25 years (1985–2010) in bicycle lanes, off-road trails, bus transit service, and parks, and spatial clusters of changes in these domains relative to neighbourhood sociodemographics in four U.S. cities that are diverse in terms of geography, size and population. Across all four cities, we identified increases in bicycle lanes, off-road trails, and bus transit service, with spatial clustering in these changes that related to neighbourhood sociodemographics. Overall, we found evidence of positive changes in physical infrastructure commonly identified as supportive of physical activity. However, the patterning of infrastructure change by sociodemographic change encourages attention to the equity in infrastructure improvements across neighbourhoods. PMID:28316645

  13. Controlled Hydrogen Fleet and Infrastructure Demonstration and Validation Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stottler, Gary

    General Motors, LLC and energy partner Shell Hydrogen, LLC, deployed a system of hydrogen fuel cell electric vehicles integrated with a hydrogen fueling station infrastructure to operate under real world conditions as part of the U.S. Department of Energy's Controlled Hydrogen Fleet and Infrastructure Validation and Demonstration Project. This technical report documents the performance and describes the learnings from progressive generations of vehicle fuel cell system technology and multiple approaches to hydrogen generation and delivery for vehicle fueling.

  14. Extending the littoral battlespace (ELB)

    NASA Astrophysics Data System (ADS)

    McKinney, Edward J.

    1999-07-01

    The ELB program is a joint Advanced Concept Technology Demonstration funded by the Navy, Marine Corps and the Office of the Secretary of Defense, and managed by the Naval Research. ELB is based on the new warfare paradigm defined by 'Joint Vision 2010', and on concepts developed by the Navy and Marine Corps in 'From the Sea', 'Forward...from the Sea', 'Ship to Objective Maneuver (STOM)', and 'Operational Maneuver from the Sea'. The objective of ELB is to demonstrate effective operation of dispersed forces in a variety of littoral environments, and to provide those forces timely remote fire support. Successful operation will depend on achieving common situational awareness across a mobile, distributed command and control structure, a shortened sensor-to-shooter timeline, and effective utilization of all information sources. The glue holding this system of systems together is a reliable wideband communications system and network infrastructure. This paper describes the overall architecture of ELB and focuses on the core command and control functions associated with achieving common situational awareness.

  15. Synthesis of common management concerns associated with dam removal

    USGS Publications Warehouse

    Tullos, Desiree D.; Collins, Mathias J.; Bellmore, J. Ryan; Bountry, Jennifer A.; Connolly, Patrick J.; Shafroth, Patrick B.; Wilcox, Andrew C.

    2016-01-01

    Managers make decisions regarding if and how to remove dams in spite of uncertainty surrounding physical and ecological responses, and stakeholders often raise concerns about certain negative effects, regardless of whether or not these concerns are warranted at a particular site. We used a dam-removal science database supplemented with other information sources to explore seven frequently-raised concerns, herein Common Management Concerns (CMCs). We investigate the occurrence of these concerns and the contributing biophysical controls. The CMCs addressed are: degree and rate of reservoir sediment erosion, excessive channel incision upstream of reservoirs, downstream sediment aggradation, elevated downstream turbidity, drawdown impacts on local water infrastructure, colonization of reservoir sediments by non-native plants, and expansion of invasive fish. Biophysical controls emerged for some of the concerns, providing managers with information to assess whether a given concern is likely to occur at a site. To fully assess CMC risk, managers should concurrently evaluate site conditions and identify the ecosystem or human uses that will be negatively affected if the biophysical phenomenon producing the CMC occurs. We show how many CMCs have one or more controls in common, facilitating the identification of multiple risks at a site, and demonstrate why CMC risks should be considered in the context of other factors like natural watershed variability and disturbance history.

  16. Information-theoretic characterization of dynamic energy systems

    NASA Astrophysics Data System (ADS)

    Bevis, Troy Lawson

    The latter half of the 20th century saw tremendous growth in nearly every aspect of civilization. From the internet to transportation, the various infrastructures relied upon by society have become exponentially more complex. Energy systems are no exception, and today the power grid is one of the largest infrastructures in the history of the world. The growing infrastructure has led to an increase not only in the amount of energy produced, but also in the expectations placed on the energy systems themselves. The need for a power grid that is reliable, secure, and efficient is apparent, and there have been several initiatives to provide such a system. These rising expectations have led to growth in the renewable energy sources being integrated into the grid, a change that increases efficiency and disperses generation throughout the system. Although this change in the grid infrastructure is beneficial, it leads to grand challenges in system-level control and operation. As the number of sources increases and becomes geographically distributed, the control systems are no longer local to the system. This means that communication networks must be enhanced to support multiple devices that must communicate reliably. A common solution for these new systems is to use wide area networks for the communication network, as opposed to point-to-point communication. Although a wide area network will support a large number of devices, it generally comes with a compromise in the form of latency in the communication system, so the device controller has latency injected into the feedback loop of the system. Also, renewable energy sources are largely non-dispatchable generation; that is, they are never guaranteed to be online and supplying the demanded energy. As renewable generation is typically modeled as a stochastic process, it would be useful to include this behavior in the control system algorithms. 
The combination of communication latency and stochastic sources is compounded by the dynamics of the grid itself. Loads are constantly changing, as well as the sources; this can sometimes lead to quick changes in system states. There is a need for a metric that takes all of the factors detailed above into consideration; it must account for the amount of information available in the system and the rate at which that information loses its value. In a dynamic system, information is only valid for a length of time, and the controller must take into account the decay of currently held information. This thesis presents information-theoretic metrics in a way that is useful for application to dynamic energy systems. A test case involving synchronization of several generators is presented for analysis and application of the theory. The objective is to synchronize all the generators and connect them to a common bus. As the phase shift of each generator is a random process, the effects of latency and information decay can be directly observed. The results of the experiments clearly show that the expected outcomes are observed and that entropy and information theory provide valid metrics for extracting timing requirements.
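
    The "information decay" idea can be made concrete with a minimal sketch, not drawn from the thesis itself: model a measured grid quantity (say, a generator's phase drift) as a stationary Gaussian AR(1) process, so that a measurement delayed by L steps shares less mutual information with the current state. The process model and the per-step correlation coefficient below are illustrative assumptions.

    ```python
    import math

    def mutual_information(a, lag):
        # For a stationary Gaussian AR(1) process x[k+1] = a*x[k] + noise,
        # corr(x[k], x[k+lag]) = a**lag and the mutual information between
        # the current state and a lag-delayed measurement is
        # I = -0.5 * ln(1 - rho**2) nats.
        rho = a ** lag
        return -0.5 * math.log(1.0 - rho * rho)

    a = 0.95  # hypothetical per-step correlation of the measured quantity
    for latency in (1, 5, 20, 50):
        bits = mutual_information(a, latency) / math.log(2)
        print(f"latency {latency:3d} steps: {bits:.3f} bits about the current state")
    ```

    The monotone drop in mutual information with latency is the quantity a controller would weigh when deciding how stale a wide-area-network measurement can be before it is no longer worth acting on.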

  17. Transforming Our Cities: High-Performance Green Infrastructure (WERF Report INFR1R11)

    EPA Science Inventory

    The objective of this project is to demonstrate that the highly distributed real-time control (DRTC) technologies for green infrastructure being developed by the research team can play a critical role in transforming our nation’s urban infrastructure. These technologies include a...

  18. Environmental Research Infrastructures providing shared solutions for science and society (ENVRIplus)

    NASA Astrophysics Data System (ADS)

    Kutsch, Werner Leo; Asmi, Ari; Laj, Paolo; Brus, Magdalena; Sorvari, Sanna

    2016-04-01

    ENVRIplus is a Horizon 2020 project bringing together Environmental and Earth System Research Infrastructures, projects and networks, along with technical specialist partners, to create a more coherent, interdisciplinary and interoperable cluster of Environmental Research Infrastructures (RIs) across Europe. The objective of ENVRIplus is to provide common solutions to shared challenges for these RIs in their efforts to deliver new services for science and society. To reach this overall goal, ENVRIplus brings together the current ESFRI roadmap environmental and associated-field RIs, leading I3 projects, key developing RI networks and specific technical specialist partners to build common synergic solutions for pressing issues in RI construction and implementation. ENVRIplus is organized along 6 main objectives, hereafter called "Themes": 1) Improve the RIs' abilities to observe the Earth System, particularly by developing and testing new sensor technologies, harmonizing observation methodologies and developing methods to overcome common problems associated with distributed remote observation networks; 2) Generate common solutions for the shared information technology and data-related challenges of the environmental RIs in data and service discovery and use, workflow documentation, data citation methodologies, service virtualization, and user characterization and interaction; 3) Develop harmonized policies for access (physical and virtual) to the environmental RIs, including access services for multidisciplinary users; 4) Investigate the interactions between RIs and society: find common approaches and methodologies for assessing the RIs' ability to answer economic and societal challenges, develop ethics guidelines for RIs and investigate the possibility of enhancing the use of Citizen Science approaches in RI products and services; 5) Ensure the cross-fertilisation and knowledge transfer of new technologies, best practices, approaches and policies of the RIs by generating training material for RI personnel on the new observational, technological and computational tools, and facilitate inter-RI knowledge transfer via a staff exchange program; 6) Create an RI communication and cooperation framework to coordinate activities of the environmental RIs towards common strategic development, improved user interaction and interdisciplinary cross-RI products and services. The produced solutions, services, systems and other project results are made available to all environmental research infrastructure initiatives.

  19. Reducing the Digital Divide among Children Who Received Desktop or Hybrid Computers for the Home

    ERIC Educational Resources Information Center

    Zilka, Gila Cohen

    2016-01-01

    Researchers and policy makers have been exploring ways to reduce the digital divide. Parameters commonly used to examine the digital divide worldwide, as well as in this study, are: (a) the digital divide in the accessibility and mobility of the ICT infrastructure and of the content infrastructure (e.g., sites used in school); and (b) the digital…

  20. Transaction-neutral implanted data collection interface as EMR driver: a model for emerging distributed medical technologies.

    PubMed

    Lorence, Daniel; Sivaramakrishnan, Anusha; Richards, Michael

    2010-08-01

    Electronic Medical Record (EMR) and Electronic Health Record (EHR) adoption continues to lag across the US. Cost, inconsistent formats, and concerns about control of patient information are among the most common reasons for non-adoption in physician practice settings. The emergence of wearable and implanted mobile technologies, employed in distributed environments, promises a fundamentally different information infrastructure, which could serve to minimize existing adoption resistance. Proposed here is one technology model for overcoming adoption inconsistency and high organization-specific implementation costs, using seamless, patient controlled data collection. While the conceptual applications employed in this technology set are provided by way of illustration, they may also serve as a transformative model for emerging EMR/EHR requirements.

  1. Sandia SCADA Program -- High Surety SCADA LDRD Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CARLSON, ROLF E.

    2002-04-01

    Supervisory Control and Data Acquisition (SCADA) systems are a part of the nation's critical infrastructure that is especially vulnerable to attack or disruption. Sandia National Laboratories is developing a high-security SCADA specification to increase the national security posture of the U.S. Because SCADA security is an international problem and is shaped by foreign and multinational interests, Sandia is working to develop a standards-based solution through committees such as the IEC TC 57 WG 15, the IEEE Substation Committee, and the IEEE P1547-related activity on communications and controls. The accepted standards are anticipated to take the form of a Common Criteria Protection Profile. This report provides the status of work completed and discusses several challenges ahead.

  2. Inventory on the dietary assessment tools available and needed in africa: a prerequisite for setting up a common methodological research infrastructure for nutritional surveillance, research, and prevention of diet-related non-communicable diseases.

    PubMed

    Pisa, Pedro T; Landais, Edwige; Margetts, Barrie; Vorster, Hester H; Friedenreich, Christine M; Huybrechts, Inge; Martin-Prevel, Yves; Branca, Francesco; Lee, Warren T K; Leclercq, Catherine; Jerling, Johann; Zotor, Francis; Amuna, Paul; Al Jawaldeh, Ayoub; Aderibigbe, Olaide Ruth; Amoussa, Waliou Hounkpatin; Anderson, Cheryl A M; Aounallah-Skhiri, Hajer; Atek, Madjid; Benhura, Chakare; Chifamba, Jephat; Covic, Namukolo; Dary, Omar; Delisle, Hélène; El Ati, Jalila; El Hamdouchi, Asmaa; El Rhazi, Karima; Faber, Mieke; Kalimbira, Alexander; Korkalo, Liisa; Kruger, Annamarie; Ledo, James; Machiweni, Tatenda; Mahachi, Carol; Mathe, Nonsikelelo; Mokori, Alex; Mouquet-Rivier, Claire; Mutie, Catherine; Nashandi, Hilde Liisa; Norris, Shane A; Onabanjo, Oluseye Olusegun; Rambeloson, Zo; Saha, Foudjo Brice U; Ubaoji, Kingsley Ikechukwu; Zaghloul, Sahar; Slimani, Nadia

    2018-01-02

    To carry out an inventory on the availability, challenges, and needs of dietary assessment (DA) methods in Africa as a pre-requisite to provide evidence, and set directions (strategies) for implementing common dietary methods and support web-research infrastructure across countries. The inventory was performed within the framework of the "Africa's Study on Physical Activity and Dietary Assessment Methods" (AS-PADAM) project. It involves international institutional and African networks. An inventory questionnaire was developed and disseminated through the networks. Eighteen countries responded to the dietary inventory questionnaire. Various DA tools were reported in Africa; 24-Hour Dietary Recall and Food Frequency Questionnaire were the most commonly used tools. Few tools were validated and tested for reliability. Face-to-face interview was the common method of administration. No computerized software or other new (web) technologies were reported. No tools were standardized across countries. The lack of comparable DA methods across represented countries is a major obstacle to implement comprehensive and joint nutrition-related programmes for surveillance, programme evaluation, research, and prevention. There is a need to develop new or adapt existing DA methods across countries by employing related research infrastructure that has been validated and standardized in other settings, with the view to standardizing methods for wider use.

  3. A data infrastructure for the assessment of health care performance: lessons from the BRIDGE-health project.

    PubMed

    Bernal-Delgado, Enrique; Estupiñán-Romero, Francisco

    2018-01-01

    The integration of different administrative data sources from a number of European countries has been shown useful in the assessment of unwarranted variations in health care performance. This essay describes the procedures used to set up a data infrastructure (e.g., data access and exchange, definition of the minimum common wealth of data required, and the development of the relational logic data model) and the methods to produce trustworthy healthcare performance measurements (e.g., ontologies standardisation and quality assurance analysis). The paper ends by providing some hints on how to use these lessons in an eventual European infrastructure on public health research and monitoring. Although the relational data infrastructure developed has proven accurate, effective in comparing health system performance across different countries, and efficient enough to deal with hundreds of millions of episodes, the logic data model might not be responsive if the European infrastructure aims at including electronic health records and carrying out multi-cohort multi-intervention comparative effectiveness research. The deployment of a distributed infrastructure based on semantic interoperability, where individual data remain in-country and open-access scripts for data management and analysis travel around the hubs composing the infrastructure, might be a sensible way forward.

  4. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.

  5. Crowdsourced earthquake early warning

    USGS Publications Warehouse

    Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.

    2015-01-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing.
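
    Why aggregation makes noisy consumer sensors usable can be shown with a simple quorum calculation. This is a hedged illustration, not the authors' detection algorithm, and all probabilities below are hypothetical: each phone is assumed to false-trigger independently with some small probability in a given time window, and the system declares an event only when at least k of n phones trigger together.

    ```python
    from math import comb

    def prob_at_least_k(n, k, p):
        # Binomial tail: probability that >= k of n independent sensors
        # trigger in the same window, each with trigger probability p.
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    p_false = 0.01  # assumed single-phone false-trigger probability per window
    p_quake = 0.90  # assumed single-phone trigger probability during shaking

    single_false = p_false
    quorum_false = prob_at_least_k(100, 10, p_false)  # 10-of-100 quorum
    quorum_detect = prob_at_least_k(100, 10, p_quake)

    print(f"single phone false alarm:  {single_false:.2%}")
    print(f"quorum false alarm:        {quorum_false:.2e}")
    print(f"quorum detection:          {quorum_detect:.4f}")
    ```

    Under these assumed numbers, the quorum drives the system false-alarm rate many orders of magnitude below that of a single device while keeping detection near certainty, which is the statistical leverage that makes crowdsourced EEW plausible despite inaccurate individual sensors.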

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Malley, Kathleen; Lopez, Hugo; Cairns, Julie

    An overview of the main North American codes and standards associated with hydrogen safety sensors is provided. The distinction between a code and a standard is defined, and the relationship between standards and codes is clarified, especially for those circumstances where a standard or a certification requirement is explicitly referenced within a code. The report identifies three main types of standards commonly applied to hydrogen sensors (interface and controls standards, shock and hazard standards, and performance-based standards). The certification process and a list and description of the main standards and model codes associated with the use of hydrogen safety sensors in hydrogen infrastructure are presented.

  7. The role of private developers in local infrastructure provision in Malaysia

    NASA Astrophysics Data System (ADS)

    Salleh, Dani; Okinono, Otega

    2016-08-01

    Globally, the challenge of local infrastructure provision has attracted much debate among nations, including Malaysia, on how to achieve effective and efficient infrastructure management. This challenge has intensified the efforts of local authorities to incorporate private developers into their development agendas in order to attain sustainable infrastructure development in local areas. The need for adequate provision of local infrastructure is well understood by both local authorities and private developers, though opinions diverge on the use of private delivery services. Notwithstanding this common understanding, significant gaps remain regarding the most appropriate approach and practices for enhancing local infrastructure development. The study therefore examined the role of private developers in local infrastructure provision and the procedures adopted by both local authorities and the private sector in local infrastructure development. Data were obtained through a questionnaire administered by purposive sampling to 22 local authorities and 16 developers, and analysed descriptively. From the findings, the practices most frequently approved by local authorities are joint ventures and complete public delivery systems, and negotiation was identified as a vital tool for stimulating local infrastructure provision. It was also found that one of the greatest challenges in promoting private sector involvement in local infrastructure development is unregulated procedure. The study therefore recommends that local authorities adopt a collective and integrated approach, giving priority to developing a well-structured and systematic process of local infrastructure provision and development.

  8. LAGUNA DESIGN STUDY, Underground infrastructures and engineering

    NASA Astrophysics Data System (ADS)

    Nuijten, Guido Alexander

    2011-07-01

    The European Commission awarded the LAGUNA project a grant of 1.7 million euro in 2008 for a Design Study under the seventh framework programme of research and technology development (FP7-INFRASTRUCTURES-2007-1). The purpose of this two-year work is to study the feasibility of the considered experiments and to prepare a conceptual design of the required underground infrastructure. It is due to deliver a report that allows the funding agencies to decide on the realization of the experiment and to select the site and the technology. The result of this work is the first step towards fulfilling the goals of LAGUNA, and the work will continue with EU funding to study the possibilities more thoroughly. The LAGUNA project is included in the future plans prepared by European funding organizations for astroparticle physics in Europe. It is recommended that a new large European infrastructure be put forward as a future international multi-purpose facility for improved studies of proton decay and of low-energy neutrinos of astrophysical origin. The three detection techniques being studied for such large detectors in Europe, water Cherenkov (like MEMPHYS), liquid scintillator (like LENA) and liquid argon (like GLACIER), are evaluated in the context of a common design study, which should also address the underground infrastructure and the possibility of an eventual detection of future accelerator neutrino beams. The design study is also to take into account worldwide efforts and converge, on a time scale of 2010, to a common proposal.

  9. A case study of physical and social barriers to hygiene and child growth in remote Australian Aboriginal communities

    PubMed Central

    McDonald, Elizabeth; Bailie, Ross; Grace, Jocelyn; Brewster, David

    2009-01-01

    Background Despite Australia's wealth, poor growth is common among Aboriginal children living in remote communities. An important underlying factor for poor growth is the unhygienic state of the living environment in these communities. This study explores the physical and social barriers to achieving safe levels of hygiene for these children. Methods A mixed qualitative and quantitative approach included a community level cross-sectional housing infrastructure survey, focus groups, case studies and key informant interviews in one community. Results We found that a combination of crowding, non-functioning essential housing infrastructure and poor standards of personal and domestic hygiene underlie the high burden of infection experienced by children in this remote community. Conclusion There is a need to address policy and the management of infrastructure, as well as key parenting and childcare practices that allow the high burden of infection among children to persist. The common characteristics of many remote Aboriginal communities in Australia suggest that these findings may be more widely applicable. PMID:19761623

  10. 76 FR 41088 - Approval and Promulgation of Implementation Plans; Kentucky; 110(a)(1) and (2) Infrastructure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-13

    ...EPA is taking final action to approve the December 13, 2007, submission by the Commonwealth of Kentucky, through the Kentucky Division of Air Quality (KDAQ), as demonstrating that the Commonwealth meets the state implementation plan (SIP) requirements of sections 110(a)(1) and (2) of the Clean Air Act (CAA or the Act) for the 1997 8-hour ozone national ambient air quality standards (NAAQS). Section 110(a) of the CAA requires that each state adopt and submit a SIP for the implementation, maintenance, and enforcement of each NAAQS promulgated by the EPA, which is commonly referred to as an ``infrastructure'' SIP. Kentucky certified that the Kentucky SIP contains provisions that ensure the 1997 8-hour ozone NAAQS is implemented, enforced, and maintained in Kentucky (hereafter referred to as ``infrastructure submission''). Kentucky's infrastructure submission, provided to EPA on December 13, 2007, addressed all the required infrastructure elements for the 1997 8-hour ozone NAAQS. Additionally, EPA is responding to adverse comments received on EPA's March 17, 2011, proposed approval of Kentucky's December 13, 2007, infrastructure submission.

  11. 76 FR 41100 - Approval and Promulgation of Implementation Plans; Alabama; 110(a)(1) and (2) Infrastructure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-13

    ...EPA is taking final action to approve the December 10, 2007, submission by the State of Alabama, through the Alabama Department of Environmental Management (ADEM), as demonstrating that the State meets the state implementation plan (SIP) requirements of sections 110(a)(1) and (2) of the Clean Air Act (CAA or the Act) for the 1997 8-hour ozone national ambient air quality standards (NAAQS). Section 110(a) of the CAA requires that each state adopt and submit a SIP for the implementation, maintenance, and enforcement of each NAAQS promulgated by the EPA, which is commonly referred to as an ``infrastructure'' SIP. Alabama certified that the Alabama SIP contains provisions that ensure the 1997 8-hour ozone NAAQS is implemented, enforced, and maintained in Alabama (hereafter referred to as ``infrastructure submission''). Alabama's infrastructure submission, provided to EPA on December 10, 2007, addressed all the required infrastructure elements for the 1997 8-hour ozone NAAQS. Additionally, EPA is responding to adverse comments received on EPA's March 17, 2011, proposed approval of Alabama's December 10, 2007, infrastructure submission.

  12. 76 FR 41123 - Approval and Promulgation of Implementation Plans; Mississippi; 110(a)(1) and (2) Infrastructure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-13

    ...EPA is taking final action to approve the December 7, 2007, submission by the State of Mississippi, through the Mississippi Department of Environmental Quality (MDEQ), as demonstrating that the State meets the state implementation plan (SIP) requirements of sections 110(a)(1) and (2) of the Clean Air Act (CAA or the Act) for the 1997 8-hour ozone national ambient air quality standards (NAAQS). Section 110(a) of the CAA requires that each state adopt and submit a SIP for the implementation, maintenance, and enforcement of each NAAQS promulgated by the EPA, which is commonly referred to as an ``infrastructure'' SIP. Mississippi certified that the Mississippi SIP contains provisions that ensure the 1997 8-hour ozone NAAQS is implemented, enforced, and maintained in Mississippi (hereafter referred to as ``infrastructure submission''). Mississippi's infrastructure submission, provided to EPA on December 7, 2007, addressed all the required infrastructure elements for the 1997 8-hour ozone NAAQS. Additionally, EPA is responding to adverse comments received on EPA's March 17, 2011, proposed approval of Mississippi's December 7, 2007, infrastructure submission.

  13. Demonstration of Green/Gray Infrastructure for Combined Sewer Overflow Control

    EPA Science Inventory

    This project is a major national demonstration of the integration of green and gray infrastructure for combined sewer overflow (CSO) control in a cost-effective and environmentally friendly manner. It will use Kansas City, MO, as a case example. The project will have a major in...

  14. Developing a data life cycle for carbon and greenhouse gas measurements: challenges, experiences and visions

    NASA Astrophysics Data System (ADS)

    Kutsch, W. L.

    2015-12-01

    Environmental research infrastructures and big data integration networks require common data policies, standardized workflows and sophisticated e-infrastructure to optimise the data life cycle. This presentation summarizes the experiences in developing the data life cycle for the Integrated Carbon Observation System (ICOS), a European Research Infrastructure. It will also outline challenges that still exist and visions for future development. Like many other environmental research infrastructures, the ICOS RI is built on a large number of distributed observational or experimental sites. Data from these sites are transferred to Thematic Centres, where they are quality checked, processed and integrated. Dissemination will be managed by the ICOS Carbon Portal. This complex data life cycle has been defined in detail by developing protocols and assigning responsibilities. Since data will be shared under an open access policy, there is a strong need for common data citation tracking systems that allow data providers to identify downstream usage of their data, so as to prove their importance and show the impact to stakeholders and the public. More challenges arise from interoperating with other infrastructures or providing data for global integration projects, as done, e.g., in the framework of GEOSS or in global integration approaches such as FLUXNET or SOCAT. Here, common metadata systems are the key solutions for data detection and harvesting. The metadata characterises data, services, users and ICT resources (including sensors and detectors). Risks may arise when data of high and low quality are mixed during this process, or when inexperienced data scientists without detailed knowledge of the data acquisition derive scientific theories through statistical analyses.
The vision of fully open data availability is expressed in a recent GEO flagship initiative that will address important issues needed to build a connected and interoperable global network for carbon cycle and greenhouse gas observations and aims to meet the most urgent needs for integration between different information sources and methodologies, between different regional networks and from data providers to users.
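    The data-citation tracking the ICOS abstract calls for can be sketched minimally as a registry that logs downstream accesses against a persistent identifier (PID). The class, the handle-style identifier, and the field names below are illustrative assumptions for this example, not part of the ICOS Carbon Portal.

```python
from collections import defaultdict

class UsageRegistry:
    """Log downstream accesses of data sets against their persistent
    identifiers, so providers can report usage to stakeholders."""

    def __init__(self):
        self._accesses = defaultdict(list)

    def record_access(self, pid, consumer):
        # Each download or harvest is recorded against the data set's PID.
        self._accesses[pid].append(consumer)

    def usage_count(self, pid):
        return len(self._accesses[pid])

registry = UsageRegistry()
registry.record_access("hdl:11676/example-flux-data", "downstream-study-1")
registry.record_access("hdl:11676/example-flux-data", "downstream-study-2")
```

In practice the registry would sit behind the portal's download service, but the core idea, attributing every access to a PID, is the same.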

  15. Arid Green Infrastructure for Water Control and Conservation State of the Science and Research Needs for Arid/Semi-Arid Regions

    EPA Science Inventory

    Green infrastructure is an approach to managing wet weather flows using systems and practices that mimic natural processes. It is designed to manage stormwater as close to its source as possible and protect the quality of receiving waters. Although most green infrastructure pract...

  16. Standing Naval Forces and Global Security

    DTIC Science & Technology

    1993-06-04

    standards and good engineering practices. The team submits a report to IPPC recommending that the project be accepted by NATO. 8. Audit. The ... established. A system of common funds and trailing audits must be in effect to pay for the infrastructure. NATO infrastructure appears to be a good example to ... Search and Rescue and maritime safety, monitor marine pollution, 6. sharing maritime intelligence. Commodore Bateman foresees coupling these activities or

  17. Security Economics and Critical National Infrastructure

    NASA Astrophysics Data System (ADS)

    Anderson, Ross; Fuloria, Shailendra

    There has been considerable effort and expenditure since 9/11 on the protection of ‘Critical National Infrastructure’ against online attack. This is commonly interpreted to mean preventing online sabotage against utilities such as electricity, oil and gas, water, and sewage - including pipelines, refineries, generators, storage depots and transport facilities such as tankers and terminals. A consensus is emerging that the protection of such assets is more a matter of business models and regulation - in short, of security economics - than of technology. We describe the problems, and the state of play, in this paper. Industrial control systems operate in a different world from systems previously studied by security economists; we find the same issues (lock-in, externalities, asymmetric information and so on) but in different forms. Lock-in is physical, rather than based on network effects, while the most serious externalities result from correlated failure, whether from cascade failures, common-mode failures or simultaneous attacks. There is also an interesting natural experiment happening, in that the USA is regulating cyber security in the electric power industry, but not in oil and gas, while the UK is not regulating at all but rather encouraging industry's own efforts. Some European governments are intervening, while others are leaving cybersecurity entirely to plant owners to worry about. We already note some perverse effects of the U.S. regulation regime as companies game the system, to the detriment of overall dependability.

  18. Control Systems Cyber Security:Defense in Depth Strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Kuipers; Mark Fabro

    2006-05-01

    Information infrastructures across many public and private domains share several common attributes regarding IT deployments and data communications. This is particularly true in the control systems domain. A majority of the systems use robust architectures to enhance business and reduce costs by increasing the integration of external, business, and control system networks. However, multi-network integration strategies often lead to vulnerabilities that greatly reduce the security of an organization, and can expose mission-critical control systems to cyber threats. This document provides guidance and direction for developing ‘defense-in-depth’ strategies for organizations that use control system networks while maintaining a multi-tier information architecture that requires: maintenance of various field devices, telemetry collection, and/or industrial-level process systems; access to facilities via remote data link or modem; public-facing services for customer or corporate operations; and a robust business environment that requires connections among the control system domain, the external Internet, and other peer organizations.
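    The tiered separation that defense-in-depth imposes on such a multi-tier architecture can be illustrated with a toy zone-segmentation check. The zone names and permitted flows below are assumptions made for this example, not taken from the DOE guidance document.

```python
# Toy model of tiered network zones: a connection may only cross
# explicitly permitted zone boundaries, so any path from the Internet
# to the control network must traverse every intermediate tier.
ALLOWED_FLOWS = {
    ("internet", "dmz"),            # public-facing services
    ("dmz", "corporate"),           # business network
    ("corporate", "control_dmz"),   # historian / data-exchange tier
    ("control_dmz", "control"),     # field devices and telemetry
}

def path_is_allowed(path):
    """Return True only if every hop in the path crosses a permitted
    zone boundary (no tier-skipping shortcuts)."""
    return all(hop in ALLOWED_FLOWS for hop in zip(path, path[1:]))
```

For example, `path_is_allowed(["internet", "control"])` is rejected because that hop skips the intermediate tiers, which is the kind of direct exposure the strategy is designed to prevent.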

  19. Control Systems Cyber Security: Defense-in-Depth Strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mark Fabro

    2007-10-01

    Information infrastructures across many public and private domains share several common attributes regarding IT deployments and data communications. This is particularly true in the control systems domain. A majority of the systems use robust architectures to enhance business and reduce costs by increasing the integration of external, business, and control system networks. However, multi-network integration strategies often lead to vulnerabilities that greatly reduce the security of an organization, and can expose mission-critical control systems to cyber threats. This document provides guidance and direction for developing ‘defense-in-depth’ strategies for organizations that use control system networks while maintaining a multi-tier information architecture that requires: • Maintenance of various field devices, telemetry collection, and/or industrial-level process systems • Access to facilities via remote data link or modem • Public-facing services for customer or corporate operations • A robust business environment that requires connections among the control system domain, the external Internet, and other peer organizations.

  20. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Data Archiving and Quality Control

    NASA Astrophysics Data System (ADS)

    He, B.; Cui, C.; Fan, D.; Li, C.; Xiao, J.; Yu, C.; Wang, C.; Cao, Z.; Chen, J.; Yi, W.; Li, S.; Mi, L.; Yang, S.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences) (Cui et al. 2014). To archive astronomical data in China, we present the implementation of the astronomical data archiving system (ADAS). Data archiving and quality control are the infrastructure for AstroCloud. Throughout the entire data life cycle, the archiving system standardizes data, transfers data, logs observational data, archives ambient data, and stores these data and metadata in a database. Quality control covers the whole process and all aspects of data archiving.
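    A quality-control gate of the kind described, applied before a record enters the archive, might look like the following sketch. The field names, the mandatory-metadata check, and the checksum stamping are illustrative assumptions for this example, not ADAS internals.

```python
import hashlib

def qc_and_archive(record, archive):
    """Reject records missing mandatory metadata; otherwise stamp the
    payload with a SHA-256 checksum (for later integrity verification)
    and append the record to the archive."""
    required = ("obs_id", "timestamp", "instrument", "payload")
    if any(not record.get(key) for key in required):
        return False          # fails metadata quality control
    record["sha256"] = hashlib.sha256(record["payload"]).hexdigest()
    archive.append(record)
    return True
```

Running every incoming record through a gate like this is one simple way to keep quality control covering "the whole process", as the abstract puts it: nothing reaches storage without complete metadata and a verifiable checksum.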

  1. An infrastructure with a unified control plane to integrate IP into optical metro networks to provide flexible and intelligent bandwidth on demand for cloud computing

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Hall, Trevor

    2012-12-01

    The Internet is entering an era of cloud computing to provide more cost effective, eco-friendly and reliable services to consumer and business users, and the nature of Internet traffic will undergo a fundamental transformation. Consequently, the current Internet will no longer suffice for serving cloud traffic in metro areas. This work proposes an infrastructure with a unified control plane that integrates simple packet aggregation technology with optical express through the interoperation between IP routers and electrical traffic controllers in optical metro networks. The proposed infrastructure provides flexible, intelligent, and eco-friendly bandwidth on demand for cloud computing in metro areas.

  2. Common Technologies for Environmental Research Infrastructures in ENVRIplus

    NASA Astrophysics Data System (ADS)

    Paris, Jean-Daniel

    2016-04-01

    Environmental and geoscientific research infrastructures (RIs) are dedicated to distinct aspects of ocean, atmosphere, ecosystem, or solid Earth research, yet there is significant commonality in the way they conceive, develop, operate and upgrade their observation systems and platforms. Many environmental RIs are distributed networks of observatories (be it drifting buoys, geophysical observatories, ocean-bottom stations, or atmospheric measurement sites) with needs for remote operations. Most RIs have to deal with calibration and standardization issues. RIs use a variety of measurement technologies, but this variety is based on a small, common set of physical principles. All RIs have set their own research and development priorities and developed their own solutions to their problems; however, many problems are common across RIs. Finally, RIs may overlap in terms of scientific perimeter. In ENVRIplus we aim, for the first time, to identify common opportunities for innovation, to support common research and development across RIs on promising issues, and more generally to create a forum to spread state-of-the-art techniques among participants. ENVRIplus activities include 1) measurement technologies: where are the common types of measurement for which we can share expertise or common development? 2) Metrology: how do we tackle together the diversified challenge of quality assurance and standardization? 3) Remote operations: can we address collectively the need for autonomy, robustness and distributed data handling? And 4) joint operations for research: are we able to demonstrate that, together, RIs can provide relevant information to support excellent research? In this process we need to nurture an ecosystem of key players. Can we involve all the key technologists of the European RIs for a greater mutual benefit? Can we pave the way to a growing common market for innovative European SMEs, with a common programmatic approach conducive to targeted R&D? 
Can we develop a common metrological language adapted to the observation of our environment? We aim at creating a space for exchange on the "hardware" issues of our networks of observatories, a forum that allows fast transmission across RIs of best practices and state of the art technology, a laboratory for joint research and co-development, where research infrastructures and their communities join efforts on well-identified objectives.

  3. Cost Comparison of Conventional Gray Combined Sewer Overflow Control Infrastructure versus a Green/Gray Combination

    EPA Science Inventory

    This paper outlines a life-cycle cost analysis comparing a green (rain gardens) and gray (tunnels) infrastructure combination to a gray-only option to control combined sewer overflow in the Turkey Creek Combined Sewer Overflow Basin, in Kansas City, MO. The plan area of this Bas...

  4. Controlling Infrastructure Costs: Right-Sizing the Mission Control Facility

    NASA Technical Reports Server (NTRS)

    Martin, Keith; Sen-Roy, Michael; Heiman, Jennifer

    2009-01-01

    Johnson Space Center's Mission Control Center is a space-vehicle- and space-program-agnostic facility. The current operational design is essentially identical to the original facility architecture that was developed and deployed in the mid-1990s. In an effort to streamline the support costs of the mission critical facility, the Mission Operations Division (MOD) of Johnson Space Center (JSC) has sponsored an exploratory project to evaluate and inject current state-of-the-practice Information Technology (IT) tools, processes and technology into legacy operations. The general push in the IT industry has been trending towards a data-centric computer infrastructure for the past several years. Organizations facing challenges with facility operations costs are turning to creative solutions combining hardware consolidation, virtualization and remote access to meet and exceed performance, security, and availability requirements. The Operations Technology Facility (OTF) organization at the Johnson Space Center has been chartered to build and evaluate a parallel Mission Control infrastructure, replacing the existing, thick-client distributed computing model and network architecture with a data center model utilizing virtualization to provide the MCC Infrastructure as a Service. The OTF will design a replacement architecture for the Mission Control Facility, leveraging hardware consolidation through the use of blade servers, increasing utilization rates for compute platforms through virtualization while expanding connectivity options through the deployment of secure remote access. The architecture demonstrates the maturity of the technologies generally available in industry today and the ability to successfully abstract the tightly coupled relationship between thick-client software and legacy hardware into a hardware-agnostic "Infrastructure as a Service" capability that can scale to meet future requirements of new space programs and spacecraft. 
This paper discusses the benefits and difficulties that a migration to cloud-based computing philosophies has uncovered when compared to the legacy Mission Control Center architecture. The team consists of system and software engineers with extensive experience with the MCC infrastructure and software currently used to support the International Space Station (ISS) and Space Shuttle program (SSP).

  5. Effect of infrastructure design on commons dilemmas in social-ecological system dynamics.

    PubMed

    Yu, David J; Qubbaj, Murad R; Muneepeerakul, Rachata; Anderies, John M; Aggarwal, Rimjhim M

    2015-10-27

    The use of shared infrastructure to direct natural processes for the benefit of humans has been a central feature of human social organization for millennia. Today, more than ever, people interact with one another and the environment through shared human-made infrastructure (the Internet, transportation, the energy grid, etc.). However, there has been relatively little work on how the design characteristics of shared infrastructure affect the dynamics of social-ecological systems (SESs) and the capacity of groups to solve social dilemmas associated with its provision. Developing such understanding is especially important in the context of global change where design criteria must consider how specific aspects of infrastructure affect the capacity of SESs to maintain vital functions in the face of shocks. Using small-scale irrigated agriculture (the most ancient and ubiquitous example of public infrastructure systems) as a model system, we show that two design features related to scale and the structure of benefit flows can induce fundamental changes in qualitative behavior, i.e., regime shifts. By relating the required maintenance threshold (a design feature related to infrastructure scale) to the incentives facing users under different regimes, our work also provides some general guidance on determinants of robustness of SESs under globalization-related stresses.
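    The threshold-induced regime shift described above can be illustrated with a deliberately minimal simulation. The functional forms, parameter values, and the single shock below are assumptions chosen for illustration, not the authors' model: infrastructure decays each step and is maintained only while its state sits above the required maintenance threshold, so the same shock is recoverable under a low threshold and fatal under a high one.

```python
def simulate(maintenance_threshold, contribution=0.12, decay=0.1,
             steps=200, shock_step=100, shock_factor=0.7):
    """Infrastructure state in [0, 1] decays each step and is restored
    by user contributions only while it stays above the maintenance
    threshold; a one-off shock tests recovery versus collapse."""
    state = 1.0
    for t in range(steps):
        if t == shock_step:
            state *= shock_factor               # external shock
        upkeep = contribution if state >= maintenance_threshold else 0.0
        state = max(0.0, min(1.0, state - decay + upkeep))
    return state

# A modest change in the required maintenance threshold flips the
# long-run outcome after the same shock - a regime shift:
resilient = simulate(maintenance_threshold=0.5)    # recovers from the shock
fragile = simulate(maintenance_threshold=0.95)     # maintenance stops, collapse
```

The point of the toy model mirrors the abstract's argument: the maintenance threshold is a design feature of the infrastructure itself, and it determines whether user incentives keep the system in a functioning regime after a shock.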

  6. Effect of infrastructure design on commons dilemmas in social−ecological system dynamics

    PubMed Central

    Yu, David J.; Qubbaj, Murad R.; Muneepeerakul, Rachata; Anderies, John M.; Aggarwal, Rimjhim M.

    2015-01-01

    The use of shared infrastructure to direct natural processes for the benefit of humans has been a central feature of human social organization for millennia. Today, more than ever, people interact with one another and the environment through shared human-made infrastructure (the Internet, transportation, the energy grid, etc.). However, there has been relatively little work on how the design characteristics of shared infrastructure affect the dynamics of social−ecological systems (SESs) and the capacity of groups to solve social dilemmas associated with its provision. Developing such understanding is especially important in the context of global change where design criteria must consider how specific aspects of infrastructure affect the capacity of SESs to maintain vital functions in the face of shocks. Using small-scale irrigated agriculture (the most ancient and ubiquitous example of public infrastructure systems) as a model system, we show that two design features related to scale and the structure of benefit flows can induce fundamental changes in qualitative behavior, i.e., regime shifts. By relating the required maintenance threshold (a design feature related to infrastructure scale) to the incentives facing users under different regimes, our work also provides some general guidance on determinants of robustness of SESs under globalization-related stresses. PMID:26460043

  7. A Common Communications, Navigation and Surveillance Infrastructure for Accommodating Space Vehicles in the Next Generation Air Transportation System

    NASA Technical Reports Server (NTRS)

    VanSuetendael, RIchard; Hayes, Alan; Birr, Richard

    2008-01-01

    Suborbital space flight and space tourism are new potential markets that could significantly impact the National Airspace System (NAS). Numerous private companies are developing space flight capabilities to capture a piece of an emerging commercial space transportation market. These entrepreneurs share a common vision that sees commercial space flight as a profitable venture. Additionally, U.S. space exploration policy and national defense will impose significant additional demands on the NAS. Air traffic service providers must allow all users fair access to limited airspace, while ensuring that the highest levels of safety, security, and efficiency are maintained. The FAA's Next Generation Air Transportation System (NextGen) will need to accommodate spacecraft transitioning to and from space through the NAS. To accomplish this, space and air traffic operations will need to be seamlessly integrated under some common communications, navigation and surveillance (CNS) infrastructure. As part of NextGen, the FAA has been developing the Automatic Dependent Surveillance Broadcast (ADS-B), which utilizes the Global Positioning System (GPS) to track and separate aircraft. Another key component of NextGen, System-Wide Information Management/Network Enabled Operations (SWIM/NEO), is an open architecture network that will provide NAS data to various customers, system tools and applications. NASA and DoD are currently developing a space-based range (SBR) concept that also utilizes GPS, communications satellites and other CNS assets. The future SBR will have very similar utility for space operations as ADS-B and SWIM have for air traffic. Perhaps the FAA, NASA, and DoD should consider developing a common space-based CNS infrastructure to support both aviation and space transportation operations. This paper suggests specific areas of research for developing a CNS infrastructure that can accommodate spacecraft and other new types of vehicles as an integrated part of NextGen.

  8. ECHO Services: Foundational Middleware for a Science Cyberinfrastructure

    NASA Technical Reports Server (NTRS)

    Burnett, Michael

    2005-01-01

    This viewgraph presentation describes ECHO, an interoperability middleware solution. It uses open, XML-based APIs and supports net-centric architectures and solutions. ECHO has a set of interoperable registries for both data (metadata) and services, and provides user accounts and a common infrastructure for the registries. It is built upon a layered architecture with extensible infrastructure for supporting community-unique protocols. It has been operational since November 2002 and is available as open source.

  9. Increasing the resilience and security of the United States' power infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Happenny, Sean F.

    2015-08-01

    The United States' power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off of existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power infrastructure control and distribution paradigms by utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Understanding how these systems behave in real-world conditions will lead to new ways to make our power infrastructure more resilient and secure. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security and the aging networks protecting them are becoming easier to attack.

  10. EUDAT and EPOS moving towards the efficient management of scientific data sets

    NASA Astrophysics Data System (ADS)

    Fiameni, Giuseppe; Bailo, Daniele; Cacciari, Claudio

    2016-04-01

    This abstract presents the collaboration between the European Collaborative Data Infrastructure (EUDAT) and the pan-European infrastructure for solid Earth science (EPOS), which draws on the management of scientific data sets through a reciprocal support agreement. EUDAT is a Consortium of European Data Centers and Scientific Communities whose focus is the development and realisation of the Collaborative Data Infrastructure (CDI), a common model for managing data spanning all European research data centres and data repositories and providing an interoperable layer of common data services. The EUDAT Service Suite is a set of a) implementations of the CDI model and b) standards, developed and offered by members of the EUDAT Consortium. These EUDAT Services include a baseline of CDI-compliant interface and API services - a "CDI Gateway" - plus a number of web-based GUIs and command-line client tools. On the other hand, the EPOS initiative aims at creating a pan-European infrastructure for solid Earth science to support a safe and sustainable society. In accordance with this scientific vision, the mission of EPOS is to integrate the diverse and advanced European Research Infrastructures for solid Earth Science, relying on new e-science opportunities to monitor and unravel the dynamic and complex Earth System. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth's physical and chemical processes that control earthquakes, volcanic eruptions, ground instability and tsunamis, as well as the processes driving tectonics and Earth's surface dynamics. Through the integration of data, models and facilities, EPOS will allow the Earth Science community to make a step change in developing new concepts and tools for key answers to scientific and socio-economic questions concerning geo-hazards and geo-resources, as well as Earth sciences applications to the environment and to human welfare. 
    To meet this integration challenge and achieve interoperability among all involved communities, EPOS has designed an architecture capable of organizing and managing distributed discipline-oriented centers (called Thematic Core Services - TCS). This design envisages the creation of an integrating e-Infrastructure called the Integrated Core Service (ICS), whose aim is to collect and integrate Data, Data Products, Software and Services, and provide homogeneous access to them for the end user, hiding all the complexity of the underlying network of TCS and National data centers. Therefore, EPOS can take advantage of the EUDAT CDI at different levels: at the TCS level, providing technologies, knowledge and B2* services to discipline-oriented communities, and at the ICS level, by facilitating the integration and interoperability of different communities with different levels of maturity in terms of technology expertise. EUDAT services are particularly suitable to facilitate this process as they can be deployed across the community centers to complement or augment existing services of more mature communities, as well as be used by less mature communities as a gateway towards the EPOS integration. To this purpose, a pilot is being carried out in the context of the EPOS Seismological community to foster the uptake of EUDAT services among centers and thus ensure the efficient and sustainable management of scientific data sets. Data sets, e.g. seismic waveforms, collected through the Italian Seismic Network and the ORFEUS organization, are currently replicated onto EUDAT resources to ensure their long-term preservation and accessibility. The pilot will be extended to cover other use cases such as the management of metadata and the fine-grained control of access.
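    Checksum-verified replication of the kind used for long-term preservation can be sketched as follows. The function, file layout, and digest choice are illustrative assumptions for this example, not the EUDAT replication interface.

```python
import hashlib
import pathlib
import shutil

def replicate(src: pathlib.Path, replica_dir: pathlib.Path) -> str:
    """Copy a data file to a replica store and verify integrity by
    comparing SHA-256 digests of source and replica; return the
    digest on success so it can be recorded alongside the replica."""
    def sha256(path: pathlib.Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    replica_dir.mkdir(parents=True, exist_ok=True)
    replica = replica_dir / src.name
    shutil.copy2(src, replica)          # copy preserving file metadata
    if sha256(src) != sha256(replica):
        raise IOError(f"replica of {src.name} failed checksum verification")
    return sha256(replica)
```

Recording the returned digest with the replica is what makes later integrity audits possible: the stored copy can be re-hashed at any time and compared against the value captured at replication.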

  11. Tank waste remediation system privatization infrastructure program, configuration management implementation plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaus, P.S.

    This Configuration Management Implementation Plan (CMIP) was developed to assist in managing systems, structures, and components (SSCs), to facilitate the effective control and statusing of changes to SSCs, and to ensure technical consistency between design, performance, and operational requirements. Its purpose is to describe the approach Privatization Infrastructure will take in implementing a configuration management program, to identify the Program's products that need configuration management control, to determine the rigor of control, and to identify the mechanisms for that control.

  12. Policy Model of Sustainable Infrastructure Development (Case Study : Bandarlampung City, Indonesia)

    NASA Astrophysics Data System (ADS)

    Persada, C.; Sitorus, S. R. P.; Marimin; Djakapermana, R. D.

    2018-03-01

    Infrastructure development affects not only the economic aspect but also the social and environmental aspects, which are the main dimensions of sustainable development. The many aspects and actors involved in urban infrastructure development require a comprehensive and integrated policy towards sustainability. Therefore, it is necessary to formulate an infrastructure development policy that considers the various dimensions of sustainable development. The main objective of this research is to formulate a policy for sustainable infrastructure development. In this research, urban infrastructure covers transportation, water systems (drinking water, storm water, wastewater), green open spaces, and solid waste. The research was conducted in Bandarlampung City. The study uses comprehensive modeling, namely Multi Dimensional Scaling (MDS) with the Rapid Appraisal of Infrastructure (Rapinfra), the Analytic Network Process (ANP), and a system dynamics model. The findings of the MDS analysis showed that the infrastructure sustainability status of Bandarlampung City is less sustainable. The ANP analysis produced the 8 indicators most influential in the development of sustainable infrastructure. The system dynamics model offered 4 scenarios for a sustainable urban infrastructure policy model. The best scenario was implemented as 3 policies: integrated infrastructure management, population control, and local economic development.
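The MDS step in this record can be illustrated with classical (Torgerson) multidimensional scaling, the textbook form of MDS. This is a generic sketch with hypothetical dissimilarity values for illustration, not the study's Rapinfra data or its leverage analysis:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) multidimensional scaling: embed points in
    k dimensions from a symmetric matrix D of pairwise distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]         # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Hypothetical pairwise dissimilarities among four objects; these
# particular values are exactly 2-D Euclidean, so the embedding
# reproduces them.
D = np.array([[0.0, 3.0, 5.0, 4.0],
              [3.0, 0.0, 4.0, 5.0],
              [5.0, 4.0, 0.0, 3.0],
              [4.0, 5.0, 3.0, 0.0]])
X = classical_mds(D)
recon = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(np.round(recon, 1))  # pairwise distances of the embedding match D
```

Rapinfra-style sustainability scoring adds domain-specific attribute scales and reference points on top of this basic ordination step.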

  13. Lessons learned from a practice-based, multi-site intervention study with nurse participants

    PubMed Central

    Friese, Christopher R.; Mendelsohn-Victor, Kari; Ginex, Pamela; McMahon, Carol M.; Fauer, Alex J.; McCullagh, Marjorie C.

    2016-01-01

    Purpose To identify challenges and solutions related to the efficient conduct of a multi-site, practice-based randomized controlled trial to improve nurses' adherence to personal protective equipment use in ambulatory oncology settings. Design The Drug Exposure Feedback and Education for Nurses' Safety (DEFENS) study is a clustered, randomized, controlled trial. Participating sites are randomized to web-based feedback on hazardous drug exposures in the sites plus tailored messages to address barriers, versus a control intervention of a web-based continuing education video. Approach The study principal investigator, the study coordinator, and two site leaders identified challenges to study implementation and potential solutions, as well as methods to prevent logistical challenges in future studies. Findings Noteworthy challenges included variation in human subjects protection policies, grants and contracts budgeting, infrastructure for nursing-led research, and information technology variation. Successful strategies included scheduled web conferences, site-based study champions, site visits by the principal investigator, and centrally-based document preparation. Strategies to improve efficiency in future studies include early and continued engagement with contract personnel at sites, and proposed changes to the Common Rule concerning human subjects. The DEFENS study successfully recruited 393 nurses across 12 sites. To date, 369 have completed surveys and 174 nurses have viewed educational materials. Conclusions Multi-site studies of nursing personnel are rare and pose challenges to existing infrastructure. These barriers can be overcome with strong engagement and planning. Clinical Relevance Leadership engagement, onsite staff support, and continuous communication can facilitate successful recruitment to a workplace-based randomized, controlled behavioral trial. PMID:28098951

  14. A Virtual Environment for Resilient Infrastructure Modeling and Design

    DTIC Science & Technology

    2015-09-01

    Security CI Critical Infrastructure CID Center for Infrastructure Defense CSV Comma Separated Value DAD Defender-Attacker-Defender DHS Department...responses to disruptive events (e.g., cascading failure behavior) in a context-rich, controlled environment for exercises, education, and training...The general attacker-defender (AD) and defender-attacker-defender (DAD) models for CI are defined in Brown et al. (2006). These models help

  15. Building an intellectual infrastructure for space commerce

    NASA Technical Reports Server (NTRS)

    Stone, Barbara A.; Struthers, Jeffrey L.

    1992-01-01

    Competition in commerce requires an 'intellectual infrastructure', that is, a work force with extensive scientific and technical knowledge and a thorough understanding of the business world. This paper focuses on the development of such intellectual infrastructure for space commerce. Special consideration is given to the contributions to this development by the 17 Centers for the Commercial Development of Space Program conducting commercially oriented research in eight specialized areas: automation and robotics, remote sensing, life sciences, materials processing in space, space power, space propulsion, space structures and materials, and advanced satellite communications. Attention is also given to the Space Business Development Center concept aimed at addressing a variety of barriers common to the development of space commerce.

  16. Effective Utilization of Resources and Infrastructure for a Spaceport Network Architecture

    NASA Technical Reports Server (NTRS)

    Gill, Tracy; Larson, Wiley; Mueller, Robert; Roberson, Luke

    2012-01-01

    Providing routine, affordable access to a variety of orbital and deep space destinations requires an intricate network of ground, planetary surface, and space-based spaceports like those on Earth (land and sea), in various Earth orbits, and on other extraterrestrial surfaces. Advancements in technology and international collaboration are critical to establish a spaceport network that satisfies the requirements for private and government research, exploration, and commercial objectives. Technologies, interfaces, assembly techniques, and protocols must be adapted to enable mission critical capabilities and interoperability throughout the spaceport network. The conceptual space mission architecture must address the full range of required spaceport services, from managing propellants for a variety of spacecraft to governance structure. In order to accomplish affordability and sustainability goals, the network architecture must consider deriving propellants from in situ planetary resources to the maximum extent possible. Water on the Moon and Mars, Mars' atmospheric CO2, and O2 extracted from lunar regolith are examples of in situ resources that could be used to generate propellants for various spacecraft, orbital stages and trajectories, and the commodities to support habitation and human operations at these destinations. The ability to use in-space fuel depots containing in situ derived propellants would drastically reduce the mass required to launch long-duration or deep space missions from Earth's gravity well. Advances in transformative technologies and common capabilities, interfaces, umbilicals, commodities, protocols, and agreements will facilitate a cost-effective, safe, reliable infrastructure for a versatile network of Earth- and extraterrestrial spaceports. 
Defining a common infrastructure on Earth, on planetary surfaces, and in space, together with deriving propellants from in situ planetary resources to construct in-space propellant depots serving the spaceport network, will reduce exploration costs through standardized, common infrastructure and a reduction in the number and types of interfaces and commodities.

  17. Resurrecting social infrastructure as a determinant of urban tuberculosis control in Delhi, India

    PubMed Central

    2014-01-01

    Background The key to universal coverage in tuberculosis (TB) management lies in community participation and empowerment of the population. Social infrastructure development generates social capital and addresses the crucial social determinants of TB, thereby improving program performance. Recently, there has been renewed interest in the concept of social infrastructure development for TB control in developing countries. This study aims to revive this concept and highlight the fact that documentation on ways to operationalize urban TB control is required from a holistic development perspective. Further, it explains how development of social infrastructure impacts health and development outcomes, especially with respect to TB in urban settings. Methods A wide range of published Government records pertaining to social development parameters and TB program surveillance between 2001 and 2011 in Delhi were studied. Social infrastructure development parameters like the human development index, along with other indicators reflecting patient profile and habitation in urban settings, were selected as social determinants of TB. These include adult literacy rates, per capita income, net migration rates, percentage growth in slum population, and percentage of urban population living in one-room dwelling units. The impact of the Revised National Tuberculosis Control Program on TB incidence was assessed as an annual decline in new TB cases notified under the program. Univariate linear regression was employed to examine the interrelationship between social development parameters and TB program outcomes. Results The decade saw significant growth in most of the social development parameters in the State. TB program performance showed a 46% increment in lives saved among all types of TB cases per 100,000 population. The 7% reduction in new TB case notifications from 2001 to 2011 translates to a logarithmic decline of 5.4 new TB cases per 100,000 population. Apart from per capita income, literacy, and net migration rates, the other social determinants showed a significant correlation with the decline in new TB cases per 100,000 population. Conclusions Social infrastructure development leads to social capital generation, which engenders positive growth in TB program outcomes. Strategies that promote social infrastructure development should find adequate weightage in the overall policy framework for urban TB control in developing countries. PMID:24438431

  18. Resurrecting social infrastructure as a determinant of urban tuberculosis control in Delhi, India.

    PubMed

    Chandra, Shivani; Sharma, Nandini; Joshi, Kulanand; Aggarwal, Nishi; Kannan, Anjur Tupil

    2014-01-17

    The key to universal coverage in tuberculosis (TB) management lies in community participation and empowerment of the population. Social infrastructure development generates social capital and addresses the crucial social determinants of TB, thereby improving program performance. Recently, there has been renewed interest in the concept of social infrastructure development for TB control in developing countries. This study aims to revive this concept and highlight the fact that documentation on ways to operationalize urban TB control is required from a holistic development perspective. Further, it explains how development of social infrastructure impacts health and development outcomes, especially with respect to TB in urban settings. A wide range of published Government records pertaining to social development parameters and TB program surveillance between 2001 and 2011 in Delhi were studied. Social infrastructure development parameters like the human development index, along with other indicators reflecting patient profile and habitation in urban settings, were selected as social determinants of TB. These include adult literacy rates, per capita income, net migration rates, percentage growth in slum population, and percentage of urban population living in one-room dwelling units. The impact of the Revised National Tuberculosis Control Program on TB incidence was assessed as an annual decline in new TB cases notified under the program. Univariate linear regression was employed to examine the interrelationship between social development parameters and TB program outcomes. The decade saw significant growth in most of the social development parameters in the State. TB program performance showed a 46% increment in lives saved among all types of TB cases per 100,000 population. The 7% reduction in new TB case notifications from 2001 to 2011 translates to a logarithmic decline of 5.4 new TB cases per 100,000 population. Apart from per capita income, literacy, and net migration rates, the other social determinants showed a significant correlation with the decline in new TB cases per 100,000 population. Social infrastructure development leads to social capital generation, which engenders positive growth in TB program outcomes. Strategies that promote social infrastructure development should find adequate weightage in the overall policy framework for urban TB control in developing countries.
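The univariate linear regression used in these two records can be sketched as follows. The yearly figures below are hypothetical illustrations, not the study's data; the determinant shown (percentage growth in slum population) stands in for one of the parameters the authors report as significantly correlated with case decline:

```python
import numpy as np

def univariate_fit(x, y):
    """Ordinary least-squares fit y = a*x + b for a single predictor.

    Returns slope, intercept, and the Pearson correlation coefficient,
    the quantities typically reported in a univariate regression screen.
    """
    a, b = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return a, b, r

# Hypothetical yearly values (NOT the study's actual figures):
# % growth in slum population vs new TB case notifications per 100,000.
slum_growth = np.array([3.1, 2.8, 2.6, 2.3, 2.1, 1.9, 1.6])
tb_cases = np.array([151.0, 149.5, 148.2, 147.0, 145.6, 144.3, 143.9])

slope, intercept, r = univariate_fit(slum_growth, tb_cases)
print(f"slope={slope:.2f}, r={r:.3f}")
```

A positive slope here means case notifications fall as slum-population growth slows, consistent with the direction of association the abstract describes.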

  19. A Study to Identify the Critical Success Factors for ERP Implementation in an Indian SME: A Case Based Approach

    NASA Astrophysics Data System (ADS)

    Upadhyay, Parijat; Dan, Pranab K.

    To achieve synergy across product lines, businesses are implementing a set of standard business applications and consistent data definitions across all business units. ERP packages are extremely useful in integrating a global company and provide a "common language" throughout the company. Companies are not only implementing a standardized application but are also moving to a common architecture and infrastructure. For many companies, a standardized software rollout is a good time to do some consolidation of their IT infrastructure across various locations. Companies are also finding that ERP solutions help them get rid of their legacy systems, most of which may not be compliant with modern-day business requirements.

  20. The development of network infrastructure in rural areas and problems in applying IT to the medical field.

    PubMed

    Ooe, Yosuke; Anamizu, Hiromitsu; Tatsumi, Haruyuki; Tanaka, Hiroshi

    2008-07-01

    The financial condition of the Japanese health insurance system is said to be worsening, compounded by the aging of the population. The government argues that the application of IT and networking is required in order to streamline health care services while avoiding their collapse. The Internet environment has been furnished with broadband connections and multimedia within the span of a year or less, and is becoming more and more convenient. It is true that the Internet is now part of Tokyo's infrastructure, along with the electricity and water supply, as Tokyo is the center of politics. However, in regional cities, development of the Internet environment is still insufficient. In order to use the network as a common infrastructure at health care facilities, we need to be aware of this digital divide. This study investigated the development status of network infrastructure in regional cities.

  1. Quantifying habitat impacts of natural gas infrastructure to facilitate biodiversity offsetting

    PubMed Central

    Jones, Isabel L; Bull, Joseph W; Milner-Gulland, Eleanor J; Esipov, Alexander V; Suttle, Kenwyn B

    2014-01-01

    Habitat degradation through anthropogenic development is a key driver of biodiversity loss. One way to compensate losses is “biodiversity offsetting” (wherein biodiversity impacted is “replaced” through restoration elsewhere). A challenge in implementing offsets, which has received scant attention in the literature, is the accurate determination of residual biodiversity losses. We explore this challenge for offsetting gas extraction in the Ustyurt Plateau, Uzbekistan. Our goal was to determine the landscape extent of habitat impacts, particularly how the footprint of “linear” infrastructure (i.e. roads, pipelines), often disregarded in compensation calculations, compares with “hub” infrastructure (i.e. extraction facilities). We measured vegetation cover and plant species richness using the line-intercept method, along transects running from infrastructure/control sites outward for 500 m, accounting for wind direction to identify dust deposition impacts. Findings from 24 transects were extrapolated to the broader plateau by mapping total landscape infrastructure network using GPS data and satellite imagery. Vegetation cover and species richness were significantly lower at development sites than controls. These differences disappeared within 25 m of the edge of the area physically occupied by infrastructure. The current habitat footprint of gas infrastructure is 220 ± 19 km2 across the Ustyurt (total ∼ 100,000 km2), 37 ± 6% of which is linear infrastructure. Vegetation impacts diminish rapidly with increasing distance from infrastructure, and localized dust deposition does not conspicuously extend the disturbance footprint. Habitat losses from gas extraction infrastructure cover 0.2% of the study area, but this reflects directly eliminated vegetation only. Impacts upon fauna pose a more difficult determination, as these require accounting for behavioral and demographic responses to disturbance by elusive mammals, including threatened species. 
This study demonstrates that impacts of linear infrastructure in regions such as the Ustyurt should be accounted for not just with respect to development sites but also associated transportation and delivery routes. PMID:24455163

  2. Quantifying habitat impacts of natural gas infrastructure to facilitate biodiversity offsetting.

    PubMed

    Jones, Isabel L; Bull, Joseph W; Milner-Gulland, Eleanor J; Esipov, Alexander V; Suttle, Kenwyn B

    2014-01-01

    Habitat degradation through anthropogenic development is a key driver of biodiversity loss. One way to compensate losses is "biodiversity offsetting" (wherein biodiversity impacted is "replaced" through restoration elsewhere). A challenge in implementing offsets, which has received scant attention in the literature, is the accurate determination of residual biodiversity losses. We explore this challenge for offsetting gas extraction in the Ustyurt Plateau, Uzbekistan. Our goal was to determine the landscape extent of habitat impacts, particularly how the footprint of "linear" infrastructure (i.e. roads, pipelines), often disregarded in compensation calculations, compares with "hub" infrastructure (i.e. extraction facilities). We measured vegetation cover and plant species richness using the line-intercept method, along transects running from infrastructure/control sites outward for 500 m, accounting for wind direction to identify dust deposition impacts. Findings from 24 transects were extrapolated to the broader plateau by mapping total landscape infrastructure network using GPS data and satellite imagery. Vegetation cover and species richness were significantly lower at development sites than controls. These differences disappeared within 25 m of the edge of the area physically occupied by infrastructure. The current habitat footprint of gas infrastructure is 220 ± 19 km2 across the Ustyurt (total ∼ 100,000 km2), 37 ± 6% of which is linear infrastructure. Vegetation impacts diminish rapidly with increasing distance from infrastructure, and localized dust deposition does not conspicuously extend the disturbance footprint. Habitat losses from gas extraction infrastructure cover 0.2% of the study area, but this reflects directly eliminated vegetation only. Impacts upon fauna pose a more difficult determination, as these require accounting for behavioral and demographic responses to disturbance by elusive mammals, including threatened species. 
This study demonstrates that impacts of linear infrastructure in regions such as the Ustyurt should be accounted for not just with respect to development sites but also associated transportation and delivery routes.
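The line-intercept method used in these two records reduces to a simple ratio: the summed length of canopy segments intercepting the transect line, divided by total transect length. A minimal sketch with hypothetical segment positions:

```python
def line_intercept_cover(intercepts, transect_length_m):
    """Percent vegetation cover from the line-intercept method:
    summed canopy intercept length over total transect length.

    intercepts: list of (start_m, end_m) canopy segments along the line.
    """
    covered = sum(end - start for start, end in intercepts)
    return 100.0 * covered / transect_length_m

# Hypothetical canopy intercepts (start, end) in metres along a 500 m
# transect running outward from an infrastructure edge.
segments = [(0.0, 1.2), (10.5, 13.0), (44.0, 47.5), (120.0, 128.0)]
print(line_intercept_cover(segments, 500.0))  # about 3 percent cover
```

Comparing this statistic between infrastructure-adjacent and control transects is what drives the study's significance tests.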

  3. Commercial Space with Technology Maturation

    NASA Technical Reports Server (NTRS)

    McCleskey, Carey M.; Rhodes, Russell E.; Robinson, John W.

    2013-01-01

    To provide affordable space transportation we must be capable of using common fixed assets and the infrastructure for multiple purposes simultaneously. The Space Shuttle was operated for thirty years, but was not able to establish an effective continuous improvement program because of the high risk to the crew on every mission. An unmanned capability is needed to provide an acceptable risk to the primary mission. This paper is intended to present a case where a commercial space venture could share the large fixed cost of operating the infrastructure with the government while the government provides new advanced technology that is focused on reduced operating cost to the common launch transportation system. A conceivable commercial space venture could provide educational entertainment for the country's youth that would stimulate their interest in the science, technology, engineering, and mathematics (STEM) through access at entertainment parks or the existing Space Visitor Centers. The paper uses this example to demonstrate how growing public-private space market demand will re-orient space transportation industry priorities in flight and ground system design and technology development, and how the infrastructure is used and shared.

  4. ROSE Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinlan, D.; Yi, Q.; Buduc, R.

    2005-02-17

    ROSE is an object-oriented software infrastructure for source-to-source translation that provides an interface for programmers to write their own specialized translators for optimizing scientific applications. ROSE is part of current research on telescoping languages, which provides optimization of the use of libraries in scientific applications. ROSE defines approaches to extend the optimization techniques common in well-defined languages to the optimization of scientific applications using well-defined libraries. ROSE includes a rich set of tools for generating customized transformations to support optimization of application codes. We currently support full C and C++ (including template instantiation, etc.), with Fortran 90 support under development as part of a collaboration and contract with Rice to use their version of the open-source Open64 F90 front-end. ROSE represents an attempt to define an open compiler infrastructure to handle the full complexity of full-scale DOE application codes using the languages common to scientific computing within DOE. We expect that such an infrastructure will also be useful for the development of numerous tools that may then realistically expect to work on DOE full-scale applications.
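ROSE itself is a C++ infrastructure for C/C++/Fortran, but the core idea of a source-to-source translator can be sketched compactly with Python's own ast module (an illustrative analogue, not ROSE's API): parse source into a tree, rewrite a pattern, and unparse back to source. The pass below performs a classic strength reduction, rewriting x ** 2 as x * x.

```python
import ast
import copy

class SquareToMult(ast.NodeTransformer):
    """Source-to-source pass: rewrite `expr ** 2` as `expr * expr`."""

    def visit_BinOp(self, node):
        self.generic_visit(node)  # transform children first
        if (isinstance(node.op, ast.Pow)
                and isinstance(node.right, ast.Constant)
                and node.right.value == 2):
            # Duplicate the base expression on both sides of a Mult.
            return ast.BinOp(left=node.left, op=ast.Mult(),
                             right=copy.deepcopy(node.left))
        return node

src = "y = (a + b) ** 2"
tree = SquareToMult().visit(ast.parse(src))
out = ast.unparse(ast.fix_missing_locations(tree))
print(out)  # y = (a + b) * (a + b)
```

A real ROSE translator works the same way in spirit, but on a full C/C++ AST with preprocessor, template, and layout fidelity that a toy pass does not attempt.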

  5. Systematic literature review of built environment effects on physical activity and active transport - an update and new findings on health equity.

    PubMed

    Smith, Melody; Hosking, Jamie; Woodward, Alistair; Witten, Karen; MacMillan, Alexandra; Field, Adrian; Baas, Peter; Mackie, Hamish

    2017-11-16

    Evidence is mounting to suggest a causal relationship between the built environment and people's physical activity behaviours, particularly active transport. The evidence base has been hindered to date by restricted consideration of cost and economic factors associated with built environment interventions, investigation of socioeconomic or ethnic differences in intervention effects, and an inability to isolate the effect of the built environment from other intervention types. The aims of this systematic review were to identify which environmental interventions increase physical activity in residents at the local level, and to build on the evidence base by considering intervention cost, and the differential effects of interventions by ethnicity and socioeconomic status. A systematic database search was conducted in June 2015. Articles were eligible if they reported a quantitative empirical study (natural experiment or a prospective, retrospective, experimental, or longitudinal research) investigating the relationship between objectively measured built environment feature(s) and physical activity and/or travel behaviours in children or adults. Quality assessment was conducted and data on intervention cost and whether the effect of the built environment differed by ethnicity or socioeconomic status were extracted. Twenty-eight studies were included in the review. Findings showed a positive effect of walkability components, provision of quality parks and playgrounds, and installation of or improvements in active transport infrastructure on active transport, physical activity, and visits or use of settings. There was some indication that infrastructure improvements may predominantly benefit socioeconomically advantaged groups. Studies were commonly limited by selection bias and insufficient controlling for confounders. Heterogeneity in study design and reporting limited comparability across studies or any clear conclusions to be made regarding intervention cost. 
Improving neighbourhood walkability, quality of parks and playgrounds, and providing adequate active transport infrastructure is likely to generate positive impacts on activity in children and adults. The possibility that the benefits of infrastructure improvements may be inequitably distributed requires further investigation. Opportunities to improve the quality of evidence exist, including strategies to improve response rates and representativeness, use of valid and reliable measurement tools, cost-benefit analyses, and adequate controlling for confounders.

  6. Experiences and Lessons Learnt with Collaborative e-Research Infrastructure and the application of Identity Management and Access Control for the Centre for Environmental Data Analysis

    NASA Astrophysics Data System (ADS)

    Kershaw, P.

    2016-12-01

    CEDA, the Centre for Environmental Data Analysis, hosts a range of services on behalf of NERC (the Natural Environment Research Council) for the UK environmental sciences community and its work with international partners. It hosts four data centres covering the atmospheric science, earth observation, climate, and space data domains. It holds this data on behalf of a number of different providers, each with their own data policies, which has required the development of a comprehensive system to manage access. With the advent of CMIP5, CEDA committed to being one of a number of centres hosting the climate model outputs and making them available through the Earth System Grid Federation, a globally distributed software infrastructure developed for this purpose. From the outset, a means for restricting access to datasets was required, necessitating the development of a federated system for authentication and authorisation so that access to data could be managed across multiple providers around the world. From 2012, CEDA has seen a further evolution with the development of JASMIN, a multi-petabyte data analysis facility. Hosted alongside the CEDA archive, it provides a range of services for users, including a batch compute cluster, group workspaces, and a community cloud. This has required significant changes and enhancements to the access control system. In common with many other examples in the research community, these experiences underline the difficulties of developing collaborative e-Research infrastructures. Drawing from them, some recurring themes emerge: clear requirements need to be established at the outset, recognising that implementing strict access policies can incur additional development and administrative overhead. An appropriate balance is needed between the ease of access desired by end users and the metrics and monitoring required by resource providers. 
The major technical challenge lies not with security technologies themselves but with their effective integration with the services and resources they must protect. Effective policy and governance structures are needed for ongoing operations. Federated identity infrastructures often exist only at the national level, making it difficult for international research collaborations to exploit them.

  7. Green Infrastructure Research and Demonstration at the Edison Environmental Center

    EPA Science Inventory

    This presentation will review the need for storm water control practices and will present a portion of the green infrastructure research and demonstration being performed at the Edison Environmental Center.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Ching-Yen; Chu, Peter; Gadh, Rajit

    Currently, when Electric Vehicles (EVs) are charging, they have only the option to charge at a selected current or not to charge at all. When there is a power shortage during the day, the charging infrastructure should have the option either to shut off power to the charging stations or to lower the power delivered to the EVs in order to satisfy the needs of the grid. There is a need for technology that controls the current being disbursed to these electric vehicles. This paper proposes a design for a smart charging infrastructure capable of providing power to several EVs from one circuit by multiplexing power and providing charge control. The smart charging infrastructure includes the server and the smart charging station. With this smart charging infrastructure, a shortage of energy in a local grid can be addressed by our EV management system.
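The charge-control idea in this record, lowering per-vehicle power when the shared circuit is constrained, can be sketched as a proportional allocation rule. This is a hypothetical policy for illustration, not the paper's actual multiplexing algorithm:

```python
def allocate_current(requests_a, circuit_limit_a):
    """Scale each EV's requested charging current so the total never
    exceeds the shared circuit's limit.

    requests_a: per-vehicle requested currents in amperes.
    Returns the granted currents (a hypothetical proportional policy).
    """
    total = sum(requests_a)
    if total <= circuit_limit_a:
        return list(requests_a)        # no shortage: grant everything
    scale = circuit_limit_a / total    # shortage: shrink proportionally
    return [r * scale for r in requests_a]

# Three EVs asking for 32 A, 16 A, and 16 A on a 40 A shared circuit.
print(allocate_current([32, 16, 16], 40))  # grants sum to exactly 40 A
```

Time-multiplexing, as described in the record, is an alternative to scaling: the controller rotates which vehicles draw full current in each interval rather than reducing everyone's current simultaneously.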

  9. Dynamic Collaboration Infrastructure for Hydrologic Science

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.

    2016-12-01

    Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that currently offers water scientists the ability to access modeling and data infrastructure in support of data intensive modeling and analysis. It supports the sharing of and collaboration around "resources" which are social objects defined to include both data and models in a structured standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud based computation for the execution of hydrologic models and analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure that can be accessed from environments like HydroShare are increasing. Storage infrastructure can range from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure can range from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this vast array of data and computing infrastructure without having to learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may need to process a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure to enable the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure." 
In this presentation we discuss the results of this proof-of-concept prototype which enabled HydroShare users to readily instantiate virtual infrastructure marshaling arbitrary combinations, varieties, and quantities of distributed data and computing infrastructure in addressing big problems in hydrology.

  10. Uncertainty in Predicted Neighborhood-Scale Green Stormwater Infrastructure Performance Informed by Field Monitoring of Hydrologic Abstractions

    NASA Astrophysics Data System (ADS)

    Smalls-Mantey, L.; Jeffers, S.; Montalto, F. A.

    2013-12-01

    Human alterations to the environment provide infrastructure for housing and transportation but have drastically changed local hydrology. Excess stormwater runoff from impervious surfaces generates erosion, overburdens sewer infrastructure, and can pollute receiving water bodies. Increased attention to green stormwater management controls is based on the premise that some of these issues can be mitigated by capturing or slowing the flow of stormwater. However, our ability to predict actual green infrastructure facility performance using physical or statistical methods needs additional validation, and efforts to incorporate green infrastructure controls into hydrologic models are still in their infancy. We use more than three years of field monitoring data to derive facility-specific probability density functions characterizing the hydrologic abstractions provided by a stormwater treatment wetland, a streetside bioretention facility, and a green roof. The monitoring results are normalized by impervious area treated and incorporated into a neighborhood-scale agent model, allowing probabilistic comparisons of the stormwater capture outcomes associated with alternative urban greening scenarios. Specifically, we compare the uncertainty introduced into the model by facility performance (as represented by the variability in the abstraction) to that introduced by precipitation variability and by the spatial patterns of emergence of different types of green infrastructure. The modeling results are used to update a discussion about the potential effectiveness of urban green infrastructure implementation plans.
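    The probabilistic comparison described above can be sketched with a small Monte Carlo simulation. Everything here is illustrative, not taken from the study: the facility names follow the abstract, but the Beta-distribution parameters standing in for the facility-specific abstraction PDFs and the storm depths are invented for the example.

    ```python
    import random
    import statistics

    # Hypothetical abstraction distributions (fraction of inflow retained),
    # standing in for the facility-specific PDFs derived from monitoring.
    # Beta parameters are illustrative, not fitted values from the paper.
    FACILITIES = {
        "wetland":      (6.0, 2.0),   # tends to retain most inflow
        "bioretention": (4.0, 3.0),
        "green_roof":   (3.0, 4.0),
    }

    def simulate_capture(facility, storms_mm, trials=5000, seed=42):
        """Propagate abstraction uncertainty: for each trial, sample a
        retention fraction per storm event and total the depth captured."""
        rng = random.Random(seed)
        a, b = FACILITIES[facility]
        totals = []
        for _ in range(trials):
            captured = sum(depth * rng.betavariate(a, b) for depth in storms_mm)
            totals.append(captured)
        return totals

    storms = [5.0, 12.0, 3.5, 20.0, 8.0]   # one hypothetical season, mm
    totals = simulate_capture("green_roof", storms)
    print(f"median capture: {statistics.median(totals):.1f} mm "
          f"of {sum(storms):.1f} mm rainfall")
    ```

    The spread of `totals` across trials is the uncertainty attributable to facility performance; repeating the exercise with resampled storm series would expose the precipitation-driven component the abstract contrasts it with.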

  11. Practical Use of the GEOSS Common Infrastructure by Environmental Research Infrastructures - Lessons from a COOPEUS-GEOSS Workshop

    NASA Astrophysics Data System (ADS)

    Waldmann, H. C.; Koop-Jakobsen, K.

    2014-12-01

    The GEOSS Common Infrastructure (GCI) enables earth observation data providers to make their resources available in a global context and allows users of earth observation data to search, access, and use the data, tools, and services available through the Global Earth Observation System of Systems. COOPEUS views the GCI as an important platform promoting cross-disciplinary approaches to the study of multifaceted environmental challenges, and the research infrastructures (RIs) in COOPEUS are currently in the process of registering resources and services within the GCI. To promote this work, COOPEUS and GEOSS held a joint workshop in July 2014, whose main scope was to involve the data managers of the COOPEUS RIs and establish the GCI as part of the COOPEUS interoperability framework. The workshop revealed that the data policies of the individual RIs can often be the first impediment to their use of the GCI: as many RIs administer data from many sources, permission to distribute the data must be in place before registration in the GCI. Through hands-on exercises registering resources from the COOPEUS RIs, the first steps were taken to implement the GCI as a platform for documenting the capabilities of the COOPEUS RIs. These exercises provided important feedback on the practical implementation of the GCI as well as the challenges lying ahead. For the COOPEUS RIs providing data, the benefits include improved discovery of and access to data and information, and increased visibility of available data, information, and services, which will help structure the existing environmental research infrastructure landscape and improve interoperability. However, in order to attract research infrastructures to the GCI, the registration process must be simplified and accelerated, for instance by allowing bulk data registration; resource registration and feedback by the COOPEUS partners can play an important role in these efforts.

  12. GEARS: An Enterprise Architecture Based On Common Ground Services

    NASA Astrophysics Data System (ADS)

    Petersen, S.

    2014-12-01

    Earth observation satellites collect a broad variety of data used in applications that range from weather forecasting to climate monitoring. Within NOAA the National Environmental Satellite Data and Information Service (NESDIS) supports these applications by operating satellites in both geosynchronous and polar orbits. Traditionally NESDIS has acquired and operated its satellites as stand-alone systems with their own command and control, mission management, processing, and distribution systems. As the volume, velocity, veracity, and variety of sensor data and products produced by these systems continues to increase, NESDIS is migrating to a new concept of operation in which it will operate and sustain the ground infrastructure as an integrated Enterprise. Based on a series of common ground services, the Ground Enterprise Architecture System (GEARS) approach promises greater agility, flexibility, and efficiency at reduced cost. This talk describes the new architecture and associated development activities, and presents the results of initial efforts to improve product processing and distribution.

  13. User-level framework for performance monitoring of HPC applications

    NASA Astrophysics Data System (ADS)

    Hristova, R.; Goranov, G.

    2013-10-01

    HP-SEE links the existing HPC facilities in South East Europe into a common infrastructure. Analysis of the performance monitoring of High-Performance Computing (HPC) applications in this infrastructure can serve the end user as a diagnostic of the overall performance of their applications. The existing monitoring tools for HP-SEE provide the end user with only aggregated information across all applications; users typically lack permissions to select only the information relevant to them and their applications. In this article we present a framework for performance monitoring of HPC applications in the HP-SEE infrastructure. The framework provides standardized performance metrics, which every user can use to monitor their applications. Furthermore, a programming interface is developed as part of the framework. The interface allows the user to publish metrics data from their application and to read and analyze the gathered information. Publishing and reading through the framework is possible only with a grid certificate valid for the infrastructure; the user is therefore authorized to access only the data for their own applications.
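    The publish/read interface with per-user authorization described above can be sketched as follows. This is a minimal in-memory stand-in, not the HP-SEE API: the class and method names are invented for illustration, and the grid-certificate check is stubbed out as a simple identity comparison.

    ```python
    import time
    from collections import defaultdict

    class MetricsFramework:
        """Sketch of a user-level metrics store: each user may publish and
        read only metrics belonging to their own applications, mirroring
        the certificate-based authorization in the abstract (the actual
        certificate validation is stubbed out here)."""

        def __init__(self):
            self._data = defaultdict(list)   # (user, app) -> [(ts, name, value)]

        def publish(self, user, app, name, value):
            self._data[(user, app)].append((time.time(), name, value))

        def read(self, user, app, requester):
            if requester != user:            # stand-in for grid-certificate check
                raise PermissionError("not authorized for this user's metrics")
            return list(self._data[(user, app)])

    fw = MetricsFramework()
    fw.publish("alice", "cfd_solver", "wallclock_s", 128.4)
    print(fw.read("alice", "cfd_solver", requester="alice"))
    ```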

  14. Soak Up the Rain New England Webinar Series: National ...

    EPA Pesticide Factsheets

    Presenters will provide an introduction to the most recent EPA green infrastructure tools for R1 stakeholders and their use in making decisions about implementing green infrastructure. We will discuss structuring your green infrastructure decision, finding appropriate information and tools, evaluating options, and selecting the right mix of Best Management Practices for your needs. WMOST (Watershed Management Optimization Support Tool): for screening a wide range of practices for cost-effectiveness in achieving watershed or water utility management goals. GIWiz (Green Infrastructure Wizard): a web application connecting communities to EPA green infrastructure tools and resources. Opti-Tool: designed to assist in developing technically sound, optimized, cost-effective stormwater management plans. National Stormwater Calculator: a desktop application for estimating the impact of land cover change and green infrastructure controls on stormwater runoff. DASEES-GI (Decision Analysis for a Sustainable Environment, Economy, and Society): a framework for linking objectives and measures with green infrastructure methods.

  15. The Component Model of Infrastructure: A Practical Approach to Understanding Public Health Program Infrastructure

    PubMed Central

    Snyder, Kimberly; Rieker, Patricia P.

    2014-01-01

    Functioning program infrastructure is necessary for achieving public health outcomes. It is what supports program capacity, implementation, and sustainability. The public health program infrastructure model presented in this article is grounded in data from a broader evaluation of 18 state tobacco control programs and previous work. The newly developed Component Model of Infrastructure (CMI) addresses the limitations of a previous model and contains 5 core components (multilevel leadership, managed resources, engaged data, responsive plans and planning, networked partnerships) and 3 supporting components (strategic understanding, operations, contextual influences). The CMI is a practical, implementation-focused model applicable across public health programs, enabling linkages to capacity, sustainability, and outcome measurement. PMID:24922125

  16. ENVRI PLUS: European initiative towards technical and research cultural solutions for across-disciplines accessible Research Infrastructure products

    NASA Astrophysics Data System (ADS)

    Asmi, A.; Kutsch, W. L.

    2015-12-01

    Environmental Research Infrastructures are often built as bottom-up initiatives providing products for a specific, often discipline-specific, target group. However, societal and environmental challenges are typically not confined to single disciplines and require data sets from many RIs. ENVRI PLUS is an initiative in which the European environmental RIs work together to provide a common technical background (in physical observation technologies and in data products and descriptions) that makes RI products more usable to user groups outside the original RI target groups. ENVRI PLUS also includes many policy and dissemination actions intended to make RI operations coherent and understandable to both scientists and other potential users. The actions include building up the common technological capital of the RIs (physical and data-oriented), creating common access procedures (especially for cross-disciplinary access), developing ethical guidelines and related policies, distributing know-how between RIs, and building a common communication and collaboration system for European environmental RIs. All ENVRI PLUS products are free to use, e.g., by new or existing environmental RIs worldwide.

  17. Testbeds for Assessing Critical Scenarios in Power Control Systems

    NASA Astrophysics Data System (ADS)

    Dondossola, Giovanna; Deconinck, Geert; Garrone, Fabrizio; Beitollahi, Hakem

    The paper presents a set of control system scenarios implemented in two testbeds developed in the context of the European project CRUTIAL (CRitical UTility InfrastructurAL Resilience). The selected scenarios concern power control systems and encompass the information and communication security of SCADA systems for grid teleoperation, the impact of attacks on inter-operator communications in power emergency conditions, and the impact of intentional faults on the secondary and tertiary control in power grids with distributed generators. The two testbeds were developed for assessing the effect of the attacks and for prototyping resilient architectures.

  18. Adaptive Harmonic Detection Control of Grid Interfaced Solar Photovoltaic Energy System with Power Quality Improvement

    NASA Astrophysics Data System (ADS)

    Singh, B.; Goel, S.

    2015-03-01

    This paper presents a grid-interfaced solar photovoltaic (SPV) energy system with a novel adaptive harmonic detection control for power quality improvement at the ac mains under balanced as well as unbalanced and distorted supply conditions. The SPV energy system is capable of compensating linear and nonlinear loads with the objectives of load balancing, harmonics elimination, power factor correction, and terminal voltage regulation. The proposed control increases the utilization of the PV infrastructure and reduces its effective cost through these additional benefits. The adaptive harmonic detection control algorithm detects the fundamental active power component of the load currents, which is subsequently used to estimate the reference source currents. An instantaneous symmetrical component theory is used to obtain the instantaneous positive-sequence point of common coupling (PCC) voltages, from which in-phase and quadrature-phase voltage templates are derived. The proposed grid-interfaced PV energy system is modelled and simulated in MATLAB Simulink and its performance is verified under various operating conditions.
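    The instantaneous symmetrical component step mentioned above follows the standard transform; a textbook-form sketch (not reproduced from the paper) of the positive-sequence PCC voltage extraction is:

    ```latex
    % Instantaneous positive-sequence component of the three-phase PCC voltages
    \mathbf{v}^{+} \;=\; \frac{1}{3}
    \begin{bmatrix}
      1   & a   & a^{2} \\
      a^{2} & 1 & a     \\
      a   & a^{2} & 1
    \end{bmatrix}
    \begin{bmatrix} v_a \\ v_b \\ v_c \end{bmatrix},
    \qquad a = e^{j2\pi/3}
    ```

    The in-phase templates are then the positive-sequence voltages normalized by their amplitude, and the quadrature-phase templates follow by a 90-degree phase shift, which is how the reference source currents can be synthesized from the detected fundamental active power components.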

  19. Integration of Water, Sanitation, and Hygiene for the Prevention and Control of Neglected Tropical Diseases: A Rationale for Inter-Sectoral Collaboration

    PubMed Central

    Jacobson, Julie; Abbott, Daniel; Addiss, David G.; Amnie, Asrat G.; Beckwith, Colin; Cairncross, Sandy; Callejas, Rafael; Colford, Jack M.; Emerson, Paul M.; Fenwick, Alan; Fishman, Rebecca; Gallo, Kerry; Grimes, Jack; Karapetyan, Gagik; Keene, Brooks; Lammie, Patrick J.; MacArthur, Chad; Lochery, Peter; Petach, Helen; Platt, Jennifer; Prabasi, Sarina; Rosenboom, Jan Willem; Roy, Sharon; Saywell, Darren; Schechtman, Lisa; Tantri, Anupama; Velleman, Yael; Utzinger, Jürg

    2013-01-01

    Improvements of water, sanitation, and hygiene (WASH) infrastructure and appropriate health-seeking behavior are necessary for achieving sustained control, elimination, or eradication of many neglected tropical diseases (NTDs). Indeed, the global strategies to fight NTDs include provision of WASH, but few programs have specific WASH targets and approaches. Collaboration between disease control programs and stakeholders in WASH is a critical next step. A group of stakeholders from the NTD control, child health, and WASH sectors convened in late 2012 to discuss opportunities for, and barriers to, collaboration. The group agreed on a common vision, namely “Disease-free communities that have adequate and equitable access to water and sanitation, and that practice good hygiene.” Four key areas of collaboration were identified, including (i) advocacy, policy, and communication; (ii) capacity building and training; (iii) mapping, data collection, and monitoring; and (iv) research. We discuss strategic opportunities and ways forward for enhanced collaboration between the WASH and the NTD sectors. PMID:24086781

  20. Integration of robotic resources into FORCEnet

    NASA Astrophysics Data System (ADS)

    Nguyen, Chinh; Carroll, Daniel; Nguyen, Hoa

    2006-05-01

    The Networked Intelligence, Surveillance, and Reconnaissance (NISR) project integrates robotic resources into Composeable FORCEnet to control and exploit unmanned systems over extremely long distances. The foundations are built upon FORCEnet, the U.S. Navy's process to define C4ISR for net-centric operations, and the Navy Unmanned Systems Common Control Roadmap to develop technologies and standards for interoperability, data sharing, publish-and-subscribe methodology, and software reuse. The paper defines the goals and boundaries for NISR with a focus on the system architecture, including the design tradeoffs necessary for unmanned systems in a net-centric model. Special attention is given to two specific scenarios demonstrating the integration of unmanned ground and water surface vehicles into the open-architecture, web-based command-and-control information-management system of Composeable FORCEnet. Planned spiral development for NISR will improve collaborative control, expand robotic sensor capabilities, address additional domains including underwater and aerial platforms, and extend the distributed communications infrastructure for battlespace optimization of unmanned systems in net-centric operations.

  1. Integration of water, sanitation, and hygiene for the prevention and control of neglected tropical diseases: a rationale for inter-sectoral collaboration.

    PubMed

    Freeman, Matthew C; Ogden, Stephanie; Jacobson, Julie; Abbott, Daniel; Addiss, David G; Amnie, Asrat G; Beckwith, Colin; Cairncross, Sandy; Callejas, Rafael; Colford, Jack M; Emerson, Paul M; Fenwick, Alan; Fishman, Rebecca; Gallo, Kerry; Grimes, Jack; Karapetyan, Gagik; Keene, Brooks; Lammie, Patrick J; Macarthur, Chad; Lochery, Peter; Petach, Helen; Platt, Jennifer; Prabasi, Sarina; Rosenboom, Jan Willem; Roy, Sharon; Saywell, Darren; Schechtman, Lisa; Tantri, Anupama; Velleman, Yael; Utzinger, Jürg

    2013-01-01

    Improvements of water, sanitation, and hygiene (WASH) infrastructure and appropriate health-seeking behavior are necessary for achieving sustained control, elimination, or eradication of many neglected tropical diseases (NTDs). Indeed, the global strategies to fight NTDs include provision of WASH, but few programs have specific WASH targets and approaches. Collaboration between disease control programs and stakeholders in WASH is a critical next step. A group of stakeholders from the NTD control, child health, and WASH sectors convened in late 2012 to discuss opportunities for, and barriers to, collaboration. The group agreed on a common vision, namely "Disease-free communities that have adequate and equitable access to water and sanitation, and that practice good hygiene." Four key areas of collaboration were identified, including (i) advocacy, policy, and communication; (ii) capacity building and training; (iii) mapping, data collection, and monitoring; and (iv) research. We discuss strategic opportunities and ways forward for enhanced collaboration between the WASH and the NTD sectors.

  2. Green infrastructure monitoring in Camden, NJ

    EPA Science Inventory

    The Camden County Municipal Utilities Authority (CCMUA) installed green infrastructure Stormwater Control Measures (SCMs) at multiple locations around the city of Camden, NJ. The SCMs include raised downspout planter boxes, rain gardens, and cisterns. The cisterns capture water ...

  3. Infrastructure Vulnerability Assessment Model (I-VAM).

    PubMed

    Ezell, Barry Charles

    2007-06-01

    Quantifying vulnerability to critical infrastructure has not been adequately addressed in the literature. Thus, the purpose of this article is to present a model that quantifies vulnerability. Vulnerability is defined as a measure of system susceptibility to threat scenarios. This article asserts that vulnerability is a condition of the system and it can be quantified using the Infrastructure Vulnerability Assessment Model (I-VAM). The model is presented and then applied to a medium-sized clean water system. The model requires subject matter experts (SMEs) to establish value functions and weights, and to assess protection measures of the system. Simulation is used to account for uncertainty in measurement, aggregate expert assessment, and to yield a vulnerability (Omega) density function. Results demonstrate that I-VAM is useful to decisionmakers who prefer quantification to qualitative treatment of vulnerability. I-VAM can be used to quantify vulnerability to other infrastructures, supervisory control and data acquisition systems (SCADA), and distributed control systems (DCS).
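    The simulation step in the abstract (weights and value functions elicited from SMEs, aggregated by Monte Carlo into an Omega density) can be sketched as below. The measure names, weights, and triangular-distribution parameters are invented for illustration and are not from the article; only the general weighted-aggregation-under-uncertainty pattern is being demonstrated.

    ```python
    import random
    import statistics

    # Illustrative SME inputs: each protection measure has a weight and an
    # uncertain effectiveness score in [0, 100], modelled as a triangular
    # distribution (low, mode, high) to capture the spread across experts.
    MEASURES = {
        "perimeter_fencing": (0.2, (40, 60, 80)),
        "scada_hardening":   (0.5, (30, 55, 70)),
        "backup_power":      (0.3, (50, 75, 90)),
    }

    def sample_omega(trials=10_000, seed=1):
        """Monte Carlo aggregation: compute a weighted protection score per
        trial, with vulnerability Omega = 100 - protection. The returned
        sample approximates the Omega density function."""
        rng = random.Random(seed)
        omegas = []
        for _ in range(trials):
            protection = sum(w * rng.triangular(lo, hi, mode)
                             for w, (lo, mode, hi) in MEASURES.values())
            omegas.append(100.0 - protection)
        return omegas

    omegas = sample_omega()
    print(f"Omega median: {statistics.median(omegas):.1f}, "
          f"90th pct: {sorted(omegas)[int(0.9 * len(omegas))]:.1f}")
    ```

    A decision maker reads the resulting density rather than a point score, which is the quantification-over-qualitative-treatment benefit the abstract highlights.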

  4. Security middleware infrastructure for DICOM images in health information systems.

    PubMed

    Kallepalli, Vijay N V; Ehikioya, Sylvanus A; Camorlinga, Sergio; Rueda, Jose A

    2003-12-01

    In health care, it is mandatory to maintain the privacy and confidentiality of medical data. To achieve this, a fine-grained access control and an access log for accessing medical images are two important aspects that need to be considered in health care systems. Fine-grained access control provides access to medical data only to authorized persons based on priority, location, and content. A log captures each attempt to access medical data. This article describes an overall middleware infrastructure required for secure access to Digital Imaging and Communication in Medicine (DICOM) images, with an emphasis on access control and log maintenance. We introduce a hybrid access control model that combines the properties of two existing models. A trust relationship between hospitals is used to make the hybrid access control model scalable across hospitals. We also discuss events that have to be logged and where the log has to be maintained. A prototype of security middleware infrastructure is implemented.
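    The two aspects emphasized above, fine-grained access control and logging of every access attempt, can be sketched together. The policy keys, roles, and locations below are hypothetical stand-ins, not the hybrid model's actual rules, and a real deployment would check content and priority as well.

    ```python
    import time

    # Audit log: every attempt to access an image is recorded, granted or not.
    ACCESS_LOG = []

    # Hypothetical fine-grained policy: (role, image_type) -> allowed locations.
    POLICY = {
        ("radiologist", "CT"): {"radiology_dept", "emergency"},
        ("nurse", "CT"):       {"emergency"},
    }

    def request_image(user, role, image_type, location):
        allowed = location in POLICY.get((role, image_type), set())
        ACCESS_LOG.append((time.time(), user, role, image_type, location, allowed))
        if not allowed:
            raise PermissionError(f"{role} may not view {image_type} from {location}")
        return f"<{image_type} image data>"

    request_image("dr_lee", "radiologist", "CT", "radiology_dept")
    try:
        request_image("nurse_kim", "nurse", "CT", "radiology_dept")
    except PermissionError:
        pass
    print(f"{len(ACCESS_LOG)} access attempts logged")
    ```

    Note that the denied attempt still lands in the log, which is the property the article's log-maintenance discussion depends on.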

  5. Agile Infrastructure Monitoring

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Ascenso, J.; Fedorko, I.; Fiorini, B.; Paladin, M.; Pigueiras, L.; Santos, M.

    2014-06-01

    At the present time, data centres are facing a massive rise in virtualisation and cloud computing. The Agile Infrastructure (AI) project is working to deliver new solutions to ease the management of CERN data centres. Part of the solution consists of a new "shared monitoring architecture" which collects and manages monitoring data from all data centre resources. In this article, we present the building blocks of this new monitoring architecture, the different open source technologies selected for each architecture layer, and how we are building a community around this common effort.

  6. MISSION: Mission and Safety Critical Support Environment. Executive overview

    NASA Technical Reports Server (NTRS)

    Mckay, Charles; Atkinson, Colin

    1992-01-01

    For mission and safety critical systems it is necessary to: improve definition, evolution and sustenance techniques; lower development and maintenance costs; support safe, timely and affordable system modifications; and support fault tolerance and survivability. The goal of the MISSION project is to lay the foundation for a new generation of integrated systems software providing a unified infrastructure for mission and safety critical applications and systems. This will involve the definition of a common, modular target architecture and a supporting infrastructure.

  7. The ORAC-DR data reduction pipeline

    NASA Astrophysics Data System (ADS)

    Cavanagh, B.; Jenness, T.; Economou, F.; Currie, M. J.

    2008-03-01

    The ORAC-DR data reduction pipeline has been used by the Joint Astronomy Centre since 1998. Originally developed for an infrared spectrometer and a submillimetre bolometer array, it has since expanded to support twenty instruments from nine different telescopes. By using shared code and a common infrastructure, rapid development of an automated data reduction pipeline for nearly any astronomical data is possible. This paper discusses the infrastructure available to developers and estimates the development timescales expected to reduce data for new instruments using ORAC-DR.
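    The shared-code-plus-common-infrastructure idea described above can be sketched as a registry of reusable reduction steps driven by a per-instrument recipe. This is a toy illustration of the pattern only; the primitive names and the frame layout are invented and do not reflect ORAC-DR's actual internals (which are written in Perl).

    ```python
    # Minimal recipe-of-primitives pipeline: shared primitives operate on a
    # frame dict, and an instrument-specific recipe is an ordered list of
    # primitive names. All names here are illustrative.
    PRIMITIVES = {}

    def primitive(fn):
        """Register a reusable reduction step under its function name."""
        PRIMITIVES[fn.__name__] = fn
        return fn

    @primitive
    def subtract_dark(frame):
        frame["data"] = [v - frame["dark"] for v in frame["data"]]

    @primitive
    def flat_field(frame):
        frame["data"] = [v / frame["flat"] for v in frame["data"]]

    def run_recipe(recipe, frame):
        for name in recipe:
            PRIMITIVES[name](frame)   # shared infrastructure dispatches steps
        return frame

    frame = {"data": [10.0, 12.0, 14.0], "dark": 2.0, "flat": 2.0}
    result = run_recipe(["subtract_dark", "flat_field"], frame)
    print(result["data"])   # [4.0, 5.0, 6.0]
    ```

    Supporting a new instrument then mostly means writing a new recipe (and any missing primitives), which is how shared infrastructure keeps per-instrument development timescales short.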

  8. Critical Infrastructure Protection II, The International Federation for Information Processing, Volume 290.

    NASA Astrophysics Data System (ADS)

    Papa, Mauricio; Shenoi, Sujeet

    The information infrastructure -- comprising computers, embedded devices, networks and software systems -- is vital to day-to-day operations in every sector: information and telecommunications, banking and finance, energy, chemicals and hazardous materials, agriculture, food, water, public health, emergency services, transportation, postal and shipping, government and defense. Global business and industry, governments, indeed society itself, cannot function effectively if major components of the critical information infrastructure are degraded, disabled or destroyed. Critical Infrastructure Protection II describes original research results and innovative applications in the interdisciplinary field of critical infrastructure protection. Also, it highlights the importance of weaving science, technology and policy in crafting sophisticated, yet practical, solutions that will help secure information, computer and network assets in the various critical infrastructure sectors. Areas of coverage include: - Themes and Issues - Infrastructure Security - Control Systems Security - Security Strategies - Infrastructure Interdependencies - Infrastructure Modeling and Simulation This book is the second volume in the annual series produced by the International Federation for Information Processing (IFIP) Working Group 11.10 on Critical Infrastructure Protection, an international community of scientists, engineers, practitioners and policy makers dedicated to advancing research, development and implementation efforts focused on infrastructure protection. The book contains a selection of twenty edited papers from the Second Annual IFIP WG 11.10 International Conference on Critical Infrastructure Protection held at George Mason University, Arlington, Virginia, USA in the spring of 2008.

  9. Advancing vector biology research: a community survey for future directions, research applications and infrastructure requirements

    PubMed Central

    Kohl, Alain; Pondeville, Emilie; Schnettler, Esther; Crisanti, Andrea; Supparo, Clelia; Christophides, George K.; Kersey, Paul J.; Maslen, Gareth L.; Takken, Willem; Koenraadt, Constantianus J. M.; Oliva, Clelia F.; Busquets, Núria; Abad, F. Xavier; Failloux, Anna-Bella; Levashina, Elena A.; Wilson, Anthony J.; Veronesi, Eva; Pichard, Maëlle; Arnaud Marsh, Sarah; Simard, Frédéric; Vernick, Kenneth D.

    2016-01-01

    Vector-borne pathogens impact public health, animal production, and animal welfare. Research on arthropod vectors such as mosquitoes, ticks, sandflies, and midges which transmit pathogens to humans and economically important animals is crucial for development of new control measures that target transmission by the vector. While insecticides are an important part of this arsenal, appearance of resistance mechanisms is increasingly common. Novel tools for genetic manipulation of vectors, use of Wolbachia endosymbiotic bacteria, and other biological control mechanisms to prevent pathogen transmission have led to promising new intervention strategies, adding to strong interest in vector biology and genetics as well as vector–pathogen interactions. Vector research is therefore at a crucial juncture, and strategic decisions on future research directions and research infrastructure investment should be informed by the research community. A survey initiated by the European Horizon 2020 INFRAVEC-2 consortium set out to canvass priorities in the vector biology research community and to determine key activities that are needed for researchers to efficiently study vectors, vector-pathogen interactions, as well as access the structures and services that allow such activities to be carried out. We summarize the most important findings of the survey which in particular reflect the priorities of researchers in European countries, and which will be of use to stakeholders that include researchers, government, and research organizations. PMID:27677378

  10. Control Infrastructure for a Pulsed Ion Accelerator

    NASA Astrophysics Data System (ADS)

    Persaud, A.; Regis, M. J.; Stettler, M. W.; Vytla, V. K.

    2016-10-01

    We report on updates to the accelerator controls for the Neutralized Drift Compression Experiment II, a pulsed induction-type accelerator for heavy ions. The control infrastructure is built around a LabVIEW interface combined with an Apache Cassandra backend for data archiving. Recent upgrades added the storing and retrieving of device settings into the database, as well as ZeroMQ as a message broker that replaces LabVIEW's shared variables. Converting to ZeroMQ also allows easy access via other programming languages, such as Python.

  11. Control Infrastructure for a Pulsed Ion Accelerator

    DOE PAGES

    Persaud, A.; Regis, M. J.; Stettler, M. W.; ...

    2016-07-27

    We report on updates to the accelerator controls for the Neutralized Drift Compression Experiment II, a pulsed induction-type accelerator for heavy ions. The control infrastructure is built around a LabVIEW interface combined with an Apache Cassandra backend for data archiving. Recent upgrades added the storing and retrieving of device settings into the database, as well as ZeroMQ as a message broker that replaces LabVIEW's shared variables. Converting to ZeroMQ also allows easy access via other programming languages, such as Python.

  12. The GEOSS solution for enabling data interoperability and integrative research.

    PubMed

    Nativi, Stefano; Mazzetti, Paolo; Craglia, Max; Pirrone, Nicola

    2014-03-01

    Global sustainability research requires an integrative research effort underpinned by digital infrastructures (systems) able to harness data and heterogeneous information across disciplines. Digital data and information sharing across systems and applications is achieved by implementing interoperability: a property of a product or system to work with other products or systems, present or future. There are at least three main interoperability challenges a digital infrastructure must address: technological, semantic, and organizational. In recent years, important international programs and initiatives have focused on this ambitious objective. This manuscript presents and combines the studies and experiences carried out by three relevant projects focusing on the heavy metal domain: the Global Mercury Observation System, the Global Earth Observation System of Systems (GEOSS), and INSPIRE. This research work identified a valuable interoperability service bus (i.e., a set of standard models, interfaces, and good practices) proposed to characterize the integrative research cyber-infrastructure of the heavy metal research community. The paper discusses how the GEOSS common infrastructure implements a multidisciplinary and participatory research infrastructure, and introduces a possible roadmap for the heavy metal pollution research community to join GEOSS as a new Group on Earth Observations community of practice and develop a research infrastructure for carrying out integrative research in its specific domain.

  13. Decontamination of radiological agents from drinking water infrastructure: a literature review and summary.

    PubMed

    Szabo, Jeff; Minamyer, Scott

    2014-11-01

    This report summarizes the current state of knowledge on the persistence of radiological agents on drinking water infrastructure (such as pipes), along with information on decontamination should persistence occur. Decontamination options for drinking water infrastructure have been explored for some important radiological agents (cesium, strontium, and cobalt), but important data gaps remain. Although some targeted experiments have been published on cesium, strontium, and cobalt persistence on drinking water infrastructure, most of the data come from nuclear clean-up sites, and the studies that focus on drinking water systems use non-radioactive surrogates. Non-radioactive cobalt was shown to be persistent on iron due to oxidation by free chlorine in drinking water and precipitation on the iron surface; decontamination by acidification was an effective removal method. Strontium persistence on iron was transient in tap water, but adherence to cement-mortar has been demonstrated and should be further explored. Cesium persistence on iron water infrastructure was observed when flow was stagnant, but not with water flow present. Future research suggestions focus on expanding the available cesium, strontium, and cobalt persistence data to other common infrastructure materials, specifically cement-mortar. Further exploration of chelating agents and low-pH treatment is recommended for future decontamination studies. Published by Elsevier Ltd.

  14. Integrating Automation into a Multi-Mission Operations Center

    NASA Technical Reports Server (NTRS)

    Surka, Derek M.; Jones, Lori; Crouse, Patrick; Cary, Everett A, Jr.; Esposito, Timothy C.

    2007-01-01

    NASA Goddard Space Flight Center's Space Science Mission Operations (SSMO) Project is currently tackling the challenge of minimizing ground operations costs for multiple satellites that have surpassed their prime mission phase and are well into extended mission. These missions are being reengineered into a multi-mission operations center built around modern information technologies and a common ground system infrastructure. The effort began with the integration of four SMEX missions into a similar architecture that provides command and control capabilities and demonstrates fleet automation and control concepts as a pathfinder for additional mission integrations. The reengineered ground system, called the Multi-Mission Operations Center (MMOC), is now undergoing a transformation to support other SSMO missions, which include SOHO, Wind, and ACE. This paper presents the automation principles and lessons learned to date for integrating automation into an existing operations environment for multiple satellites.

  15. Tailoring Green Infrastructure Implementation Scenarios based on Stormwater Management Objectives

    EPA Science Inventory

    Green infrastructure (GI) refers to stormwater management practices that mimic nature by soaking up, storing, and controlling stormwater onsite. GI practices can contribute measurable benefits towards meeting stormwater management objectives, such as runoff peak shaving, volume reduction, f...

  16. Green Infrastructure Models and Tools

    EPA Science Inventory

    The objective of this project is to modify and refine existing models and develop new tools to support decision making for the complete green infrastructure (GI) project lifecycle, including the planning and implementation of stormwater control in urban and agricultural settings,...

  17. A model for simulating adaptive, dynamic flows on networks: Application to petroleum infrastructure

    DOE PAGES

    Corbet, Thomas F.; Beyeler, Walt; Wilson, Michael L.; ...

    2017-10-03

    Simulation models can greatly improve decisions meant to control the consequences of disruptions to critical infrastructures. We describe a dynamic flow model on networks purposed to inform analyses by those concerned about consequences of disruptions to infrastructures and to help policy makers design robust mitigations. We conceptualize the adaptive responses of infrastructure networks to perturbations as market transactions and business decisions of operators. We approximate commodity flows in these networks by a diffusion equation, with nonlinearities introduced to model capacity limits. To illustrate the behavior and scalability of the model, we show its application first on two simple networks, then on petroleum infrastructure in the United States, where we analyze the effects of a hypothesized earthquake.
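    The core mechanism in this record (commodity levels diffusing along network edges, with a nonlinearity capping edge flows at a capacity limit) can be illustrated with a toy simulation. The network layout, time step, and capacity value below are invented for illustration; this is a sketch of the idea, not the paper's implementation.

```python
import numpy as np

# 4-node line network: symmetric adjacency matrix acts as per-edge conductance
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

x = np.array([10.0, 0.0, 0.0, 0.0])  # initial inventory, all at node 0
capacity = 0.8                        # max flow per edge per step (nonlinearity)
dt = 0.5

for _ in range(200):
    # raw diffusive flow on each edge, proportional to the level difference
    diff = x[:, None] - x[None, :]
    flow = dt * A * diff
    # clip each edge flow at the capacity limit
    flow = np.clip(flow, -capacity, capacity)
    # node i loses the net flow it sends out; antisymmetry conserves the total
    x = x - flow.sum(axis=1)

# inventory is conserved and spreads toward a uniform level
print(x.round(2))
```

    Because clipping preserves the antisymmetry of the flow matrix, total inventory is conserved exactly, while the capacity limit slows equilibration the way a pipeline constraint would.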

  18. A model for simulating adaptive, dynamic flows on networks: Application to petroleum infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corbet, Thomas F.; Beyeler, Walt; Wilson, Michael L.

    Simulation models can greatly improve decisions meant to control the consequences of disruptions to critical infrastructures. We describe a dynamic flow model on networks purposed to inform analyses by those concerned about consequences of disruptions to infrastructures and to help policy makers design robust mitigations. We conceptualize the adaptive responses of infrastructure networks to perturbations as market transactions and business decisions of operators. We approximate commodity flows in these networks by a diffusion equation, with nonlinearities introduced to model capacity limits. To illustrate the behavior and scalability of the model, we show its application first on two simple networks, then on petroleum infrastructure in the United States, where we analyze the effects of a hypothesized earthquake.

  19. INcreasing Security and Protection through Infrastructure REsilience: The INSPIRE Project

    NASA Astrophysics Data System (ADS)

    D'Antonio, Salvatore; Romano, Luigi; Khelil, Abdelmajid; Suri, Neeraj

    The INSPIRE project aims at enhancing the European potential in the field of security by ensuring the protection of critical information infrastructures through (a) the identification of their vulnerabilities and (b) the development of innovative techniques for securing networked process control systems. To increase the resilience of such systems, INSPIRE will develop traffic engineering algorithms, diagnostic processes and self-reconfigurable architectures along with recovery techniques. Hence, the core idea of the INSPIRE project is to protect critical information infrastructures by appropriately configuring, managing, and securing the communication network which interconnects the distributed control systems. A working prototype will be implemented as a final demonstrator of selected scenarios. Controls/communication experts will support project partners in the validation and demonstration activities. INSPIRE will also contribute to the standardization process in order to foster multi-operator interoperability and coordinated strategies for securing lifeline systems.

  20. International Convergence on Geoscience Cyberinfrastructure

    NASA Astrophysics Data System (ADS)

    Allison, M. L.; Atkinson, R.; Arctur, D. K.; Cox, S.; Jackson, I.; Nativi, S.; Wyborn, L. A.

    2012-04-01

    There is growing international consensus on addressing the challenges to cyber(e)-infrastructure for the geosciences. These challenges include: creating common standards and protocols; engaging the vast number of distributed data resources; establishing practices for recognition of and respect for intellectual property; developing simple data and resource discovery and access systems; building mechanisms to encourage development of web service tools and workflows for data analysis; brokering the diverse disciplinary service buses; creating sustainable business models for maintenance and evolution of information resources; and integrating the data management life-cycle into the practice of science. Efforts around the world are converging towards the de facto creation of an integrated global digital data network for the geosciences based on common standards and protocols for data discovery and access, and a shared vision of distributed, web-based, open-source interoperable data access and integration. Commonalities include use of Open Geospatial Consortium (OGC) and ISO specifications and standardized data interchange mechanisms. For multidisciplinarity, mediation, adaptation, and profiling services have been successfully introduced to leverage the geoscience standards commonly used by the different geoscience communities, introducing a brokering approach that extends the basic SOA archetype. The principal challenges are less technical than cultural, social, and organizational: before we can make data interoperable, we must make people interoperable. These challenges are being met by increased coordination of development activities (technical, organizational, social) among leaders and practitioners in national and international efforts across the geosciences to foster commonalities across disparate networks.
In doing so, we will 1) leverage and share resources and developments, 2) facilitate and enhance emerging technical and structural advances, 3) promote interoperability across scientific domains, 4) support the promulgation and institutionalization of agreed-upon standards, protocols, and practices, 5) enhance knowledge transfer not only across the community but also into the domain sciences, 6) lower existing entry barriers for users and data producers, and 7) build on the existing disciplinary infrastructures, leveraging their service buses. All of these objectives are required for establishing a permanent and sustainable cyber(e)-infrastructure for the geosciences. The rationale for this approach is well articulated in the AuScope mission statement: "Many of these problems can only be solved on a national, if not global scale. No single researcher, research institution, discipline or jurisdiction can provide the solutions. We increasingly need to embrace e-Research techniques and use the internet not only to access nationally distributed datasets, instruments and compute infrastructure, but also to build online, 'virtual' communities of globally dispersed researchers." Multidisciplinary interoperability can be successfully pursued by adopting a "system of systems" or "network of networks" philosophy. This approach aims to: (a) supplement but not supplant systems mandates and governance arrangements; (b) keep the existing capacities as autonomous as possible; (c) lower entry barriers; (d) build incrementally on existing infrastructures (information systems); and (e) incorporate heterogeneous resources by introducing distribution and mediation functionalities. This approach has been adopted by the European INSPIRE (Infrastructure for Spatial Information in the European Community) initiative and by the international GEOSS (Global Earth Observation System of Systems) programme.

  1. Control System Applicable Use Assessment of the Secure Computing Corporation - Secure Firewall (Sidewinder)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadley, Mark D.; Clements, Samuel L.

    2009-01-01

    Battelle’s National Security & Defense objective is "applying unmatched expertise and unique facilities to deliver homeland security solutions. From detection and protection against weapons of mass destruction to emergency preparedness/response and protection of critical infrastructure, we are working with industry and government to integrate policy, operational, technological, and logistical parameters that will secure a safe future." In an ongoing effort to meet this mission, engagements with industry intended to improve the operational and technical attributes of commercial solutions related to national security initiatives are necessary. This ensures that capabilities for protecting critical infrastructure assets are considered by commercial entities in their development, design, and deployment lifecycles, thus addressing the alignment of identified deficiencies and improvements needed to support national cyber security initiatives. The Secure Firewall (Sidewinder) appliance by Secure Computing was assessed for applicable use in critical infrastructure control system environments, such as electric power, nuclear and other facilities containing critical systems that require augmented protection from cyber threats. The testing was performed in the Pacific Northwest National Laboratory’s (PNNL) Electric Infrastructure Operations Center (EIOC). The Secure Firewall was tested in a network configuration that emulates a typical control center network and then evaluated. A number of observations and recommendations are included in this report relating to features currently included in the Secure Firewall that support critical infrastructure security needs.

  2. Autonomous watersheds: Reducing flooding and stream erosion through real-time control

    NASA Astrophysics Data System (ADS)

    Kerkez, B.; Wong, B. P.

    2017-12-01

    We introduce an analytical toolchain, based on dynamical system theory and feedback control, to determine how many control points (valves, gates, pumps, etc.) are needed to transform urban watersheds from static to adaptive. Advances in distributed sensing and control stand to fundamentally change how we manage urban watersheds. In lieu of new and costly infrastructure, the real-time control of stormwater systems will reduce flooding, mitigate stream erosion, and improve the treatment of polluted runoff. We discuss how open-source technologies, in the form of wireless sensor nodes and remotely-controllable valves (open-storm.org), have been deployed to build "smart" stormwater systems in the Midwestern US. Unlike "static" infrastructure, which cannot readily adapt to changing inputs and land uses, these distributed control assets allow entire watersheds to be reconfigured on a storm-by-storm basis. Our results show how the control of even just a few valves within urban catchments (1-10 km^2) allows for the real-time "shaping" of hydrographs, which reduces downstream erosion and flooding. We also introduce an equivalence framework that decision-makers can use to objectively compare investments in "smart" systems with more traditional solutions, such as gray and green stormwater infrastructure.
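    The hydrograph "shaping" idea in this record can be sketched with a minimal toy model: a single detention basin with a controllable valve holds back a storm pulse so the downstream release never exceeds a target rate. This is not the open-storm.org code; the storm pulse, valve rule, and all parameters are invented for illustration.

```python
def simulate(controlled, q_target=0.5, steps=120):
    """Route a storm pulse through one basin; return the peak release rate."""
    storage = 0.0
    peak_out = 0.0
    for t in range(steps):
        inflow = 2.0 if 10 <= t < 30 else 0.0   # idealized storm pulse
        if controlled:
            outflow = min(q_target, storage)     # valve throttles the release
        else:
            outflow = storage * 0.5              # passive orifice-like outlet
        storage += inflow - outflow
        peak_out = max(peak_out, outflow)
    return peak_out

print(simulate(controlled=False))  # passive: peak release tracks the storm
print(simulate(controlled=True))   # controlled: peak capped at q_target
```

    The controlled case trades higher transient storage for a flattened outflow hydrograph, which is the mechanism the record credits with reducing downstream erosion and flooding.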

  3. Army Corrosion Prevention and Control (CPC) Program for Facilities and Infrastructure

    DTIC Science & Technology

    2010-02-01

    FY2009 - 2011 • Benefits: Reduced corrosion due to elimination of metallic rebar, reduced weight equates to reduced dead load and increased dynamic...Decks as Replacement for Steel Reinforced Concrete Decks F09AR04: Corrosion Resistant Roofs with Integrated Sustainable PV Power Systems • Where...Army Corrosion Prevention and Control (CPC) Program for Facilities and Infrastructure Dr. Craig E. College Deputy Assistant Chief of Staff for

  4. Tank waste remediation system privatization infrastructure program requirements and document management process guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ROOT, R.W.

    1999-05-18

    This guide provides the Tank Waste Remediation System Privatization Infrastructure Program management with processes and requirements to appropriately control information and documents in accordance with the Tank Waste Remediation System Configuration Management Plan (Vann 1998b). This includes documents and information created by the program, as well as non-program generated materials submitted to the project. It provides appropriate approval/control, distribution and filing systems.

  5. A Flight Control System Architecture for the NASA AirSTAR Flight Test Infrastructure

    NASA Technical Reports Server (NTRS)

    Murch, Austin M.

    2008-01-01

    A flight control system architecture for the NASA AirSTAR infrastructure has been designed to address the challenges associated with safe and efficient flight testing of research control laws in adverse flight conditions. The AirSTAR flight control system provides a flexible framework that supports NASA Aviation Safety Program research objectives and increases operational efficiency; it includes the ability to rapidly integrate and test research control laws, emulate component or sensor failures, inject automated control surface perturbations, and provide a baseline control law for comparison with research control laws. The current baseline control law uses an angle-of-attack command augmentation system for the pitch axis and simple stability augmentation for the roll and yaw axes.

  6. Effects of stormwater management and stream restoration on watershed nitrogen retention

    EPA Science Inventory

    Restoring urban infrastructure and managing the nitrogen cycle represent emerging challenges for urban water quality. We investigated whether stormwater control measures (SCMs), a form of green infrastructure, integrated into restored and degraded urban stream networks can influ...

  7. First year update on green infrastructure monitoring in Camden, NJ

    EPA Science Inventory

    The Camden County Municipal Utilities Authority (CCMUA) installed green infrastructure Stormwater Control Measures (SCMs) at multiple locations around the city of Camden, NJ. The SCMs include raised downspout planter boxes, rain gardens, and cisterns. The cisterns capture water ...

  8. Characterization of the relative importance of human- and infrastructure-associated bacteria in grey water: a case study.

    PubMed

    Keely, S P; Brinkman, N E; Zimmerman, B D; Wendell, D; Ekeren, K M; De Long, S K; Sharvelle, S; Garland, J L

    2015-07-01

    Development of efficacious grey water (GW) treatment systems would benefit from detailed knowledge of the bacterial composition of GW. Thus, the aim of this study was to characterize the bacterial composition from (i) various points throughout a GW recycling system that collects shower and sink handwash (SH) water into an equalization tank (ET) prior to treatment and (ii) laundry (LA) water effluent of a commercial-scale washer. Bacterial composition was analysed by high-throughput pyrosequencing of the 16S rRNA gene. LA was dominated by skin-associated bacteria, with Corynebacterium, Staphylococcus, Micrococcus, Propionibacterium and Lactobacillus collectively accounting for nearly 50% of the total sequences. SH contained a more evenly distributed community than LA, with some overlap (e.g. Propionibacterium), but also contained distinct genera common to wastewater infrastructure (e.g. Zoogloea). The ET contained many of these same wastewater infrastructure-associated bacteria, but was dominated by genera adapted for anaerobic conditions. The data indicate that a relatively consistent set of skin-associated genera are the dominant human-associated bacteria in GW, but infrastructure-associated bacteria from the GW collection system and ET used for transient storage will be the most common bacteria entering GW treatment and reuse systems. This study is the first to use high-throughput sequencing to identify the bacterial composition of various GW sources. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  9. Ocean Data Interoperability Platform (ODIP): Developing a Common Framework for Marine Data Management on a Global Scale

    NASA Astrophysics Data System (ADS)

    Glaves, H. M.; Schaap, D.

    2014-12-01

    As marine research becomes increasingly multidisciplinary in its approach, there has been a corresponding rise in the demand for large quantities of high-quality interoperable data. A number of regional initiatives are already addressing this requirement through the establishment of e-infrastructures to improve the discovery and access of marine data. Projects such as Geo-Seas and SeaDataNet in Europe, Rolling Deck to Repository (R2R) in the USA and IMOS in Australia have implemented local infrastructures to facilitate the exchange of standardised marine datasets. However, each of these regional initiatives has been developed to address its own requirements, independently of other regions. To establish a common framework for marine data management on a global scale, there is a need to develop interoperability solutions that can be implemented across these initiatives. Through a series of workshops attended by the relevant domain specialists, the Ocean Data Interoperability Platform (ODIP) project has identified areas of commonality between the regional infrastructures and used these as the foundation for the development of three prototype interoperability solutions addressing: (i) the use of brokering services to provide access to the data available in the regional data discovery and access services, including via the GEOSS portal; (ii) the development of interoperability between cruise summary reporting systems in Europe, the USA and Australia for routine harvesting of cruise data for delivery via the Partnership for Observation of Global Oceans (POGO) portal; and (iii) the establishment of a Sensor Observation Service (SOS) for selected sensors installed on vessels and in real-time monitoring systems using sensor web enablement (SWE). These prototypes will be used to underpin the development of a common global approach to the management of marine data which can be promoted to the wider marine research community.
ODIP is a community-led project that is currently focused on regional initiatives in Europe, the USA and Australia, but which is seeking to expand this framework to include other regional marine data infrastructures.

  10. A database for TMT interface control documents

    NASA Astrophysics Data System (ADS)

    Gillies, Kim; Roberts, Scott; Brighton, Allan; Rogers, John

    2016-08-01

    The TMT Software System consists of software components that interact with one another through a software infrastructure called TMT Common Software (CSW). CSW consists of software services and library code that is used by developers to create the subsystems and components that participate in the software system. CSW also defines the types of components that can be constructed and their roles. The use of common component types and shared middleware services allows standardized software interfaces for the components. A software system called the TMT Interface Database System was constructed to support the documentation of the interfaces for components based on CSW. The programmer describes a subsystem and each of its components using JSON-style text files. A command interface file describes each command a component can receive and any commands a component sends. The event interface files describe status, alarms, and events a component publishes and status and events subscribed to by a component. A web application was created to provide a user interface for the required features. Files are ingested into the software system's database. The user interface allows browsing subsystem interfaces, publishing versions of subsystem interfaces, and constructing and publishing interface control documents that consist of the intersection of two subsystem interfaces. All published subsystem interfaces and interface control documents are versioned for configuration control and follow the standard TMT change control processes. Subsystem interfaces and interface control documents can be visualized in the browser or exported as PDF files.
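    The record above describes JSON-style text files that declare the commands a component receives and sends. A hypothetical example of what such a command interface description might look like is sketched below; the field names, subsystem names, and structure are invented for illustration and are not the actual TMT Interface Database schema.

```python
import json

# Invented command-interface description in the spirit of the record above
command_interface = {
    "subsystem": "tcs",
    "component": "mount-assembly",
    "receives": [
        {
            "name": "setAltAz",
            "description": "Slew the mount to an alt/az position",
            "parameters": [
                {"name": "alt", "type": "double", "units": "deg"},
                {"name": "az", "type": "double", "units": "deg"},
            ],
        }
    ],
    "sends": [
        {"subsystem": "m1cs", "name": "setFocusOffset"}
    ],
}

# Such files would be ingested into a database, then intersected pairwise
# to generate an interface control document between two subsystems.
print(json.dumps(command_interface, indent=2))
```

    Keeping the interface declarations as structured text makes them versionable under the same change-control processes the record describes for the published documents.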

  11. The Common Risk Model for Dams: A Portfolio Approach to Security Risk Assessments

    DTIC Science & Technology

    2013-06-01

    ...consequence, vulnerability, and threat estimates in a way that properly accounts for the relationships among these variables. The CRM-D can effectively quantify the benefits of... The Common Risk Model (CRM) for evaluating and comparing risks associated with the nation's critical infrastructure... This model incorporates commonly used risk...

  12. Modeling and Managing Risk in Billing Infrastructures

    NASA Astrophysics Data System (ADS)

    Baiardi, Fabrizio; Telmon, Claudio; Sgandurra, Daniele

    This paper discusses risk modeling and risk management in information and communications technology (ICT) systems for which the attack impact distribution is heavy tailed (e.g., power law distribution) and the average risk is unbounded. Systems with these properties include billing infrastructures used to charge customers for services they access. Attacks against billing infrastructures can be classified as peripheral attacks and backbone attacks. The goal of a peripheral attack is to tamper with user bills; a backbone attack seeks to seize control of the billing infrastructure. The probability distribution of the overall impact of an attack on a billing infrastructure also has a heavy-tailed curve. This implies that the probability of a massive impact cannot be ignored and that the average impact may be unbounded - thus, even the most expensive countermeasures would be cost effective. Consequently, the only strategy for managing risk is to increase the resilience of the infrastructure by employing redundant components.
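    The heavy-tail argument in this record (that the average impact may be unbounded, defeating expected-value risk planning) can be illustrated with a toy Monte Carlo experiment. The distribution, parameters, and sample sizes below are invented for illustration, assuming a Pareto impact model as a stand-in for the paper's power-law distribution.

```python
import random

random.seed(7)

def pareto_impact(alpha, xm=1.0):
    # inverse-CDF sampling for a Pareto distribution: X = xm / U^(1/alpha)
    return xm / random.random() ** (1.0 / alpha)

def sample_mean(alpha, n=100_000):
    # empirical "average risk" over n simulated attack impacts
    return sum(pareto_impact(alpha) for _ in range(n)) / n

# alpha = 3.0: light enough tail that the mean is finite (alpha/(alpha-1) = 1.5)
m_light = sample_mean(3.0)
# alpha = 0.9: tail exponent <= 1, so the true mean is infinite and the
# empirical average is dominated by rare, enormous impacts
m_heavy = sample_mean(0.9)

print(m_light, m_heavy)
```

    For the light-tailed case the estimate settles near the theoretical mean; for the heavy-tailed case rerunning with different seeds gives wildly different values, which is why the record concludes that resilience, not average-risk budgeting, is the only viable management strategy.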

  13. USGS perspectives on an integrated approach to watershed and coastal management

    USGS Publications Warehouse

    Larsen, Matthew C.; Hamilton, Pixie A.; Haines, John W.; Mason, Jr., Robert R.

    2010-01-01

    The writers discuss three critically important steps necessary for achieving the goal of improved integrated approaches to watershed and coastal protection and management. These steps involve modernization of monitoring networks, creation of common data and web services infrastructures, and development of modeling, assessment, and research tools. Long-term monitoring is needed for tracking the effectiveness of approaches for controlling land-based sources of nutrients, contaminants, and invasive species. The integration of mapping and monitoring with conceptual and mathematical models and multidisciplinary assessments is important in making well-informed decisions. Moreover, a better-integrated data network is essential for mapping, statistical, and modeling applications, and for timely dissemination of data and information products to a broad community of users.

  14. Technical Challenges and Opportunities of Centralizing Space Science Mission Operations (SSMO) at NASA Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Ido, Haisam; Burns, Rich

    2015-01-01

    The NASA Goddard Space Science Mission Operations project (SSMO) is performing a technical cost-benefit analysis for centralizing and consolidating operations of a diverse set of missions into a unified and integrated technical infrastructure. The presentation will focus on the notion of normalizing spacecraft operations processes, workflows, and tools. It will also show the processes of creating a standardized open architecture; creating common security models and implementations; and providing common interfaces, services, automations, notifications, alerts, logging, publish/subscribe, and middleware capabilities. The presentation will also discuss how to leverage traditional capabilities along with virtualization, cloud computing services, control groups and containers, and possibly Big Data concepts.

  15. The impact of range anxiety and home, workplace, and public charging infrastructure on simulated battery electric vehicle lifetime utility

    NASA Astrophysics Data System (ADS)

    Neubauer, Jeremy; Wood, Eric

    2014-07-01

    Battery electric vehicles (BEVs) offer the potential to reduce both oil imports and greenhouse gas emissions, but have a limited utility due to factors including driver range anxiety and access to charging infrastructure. In this paper we apply NREL's Battery Lifetime Analysis and Simulation Tool for Vehicles (BLAST-V) to examine the sensitivity of BEV utility to range anxiety and different charging infrastructure scenarios, including variable time schedules, power levels, and locations (home, work, and public installations). We find that the effects of range anxiety can be significant, but are reduced with access to additional charging infrastructure. We also find that (1) increasing home charging power above that provided by a common 15 A, 120 V circuit offers little added utility, (2) workplace charging offers significant utility benefits to select high mileage commuters, and (3) broadly available public charging can bring many lower mileage drivers to near-100% utility while strongly increasing the achieved miles of high mileage drivers.
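    The utility metric this record analyzes can be sketched as a back-of-the-envelope calculation: a day "fails" when its mileage exceeds the usable range, and workplace charging effectively extends that range mid-day. This is not BLAST-V; the driving profile, range, and refill values below are invented for illustration.

```python
# Ten sample days of driving (miles); a real study would use travel-survey data
daily_miles = [15, 30, 30, 45, 60, 90, 120, 30, 30, 15]

usable_range = 70        # miles per day, after a range-anxiety buffer
workplace_refill = 40    # extra miles recoverable at a workplace charger

def utility(workplace_charging):
    """Fraction of days the BEV covers the required mileage."""
    effective = usable_range + (workplace_refill if workplace_charging else 0)
    ok_days = sum(1 for miles in daily_miles if miles <= effective)
    return ok_days / len(daily_miles)

print(utility(False), utility(True))
```

    Even in this crude form, the calculation reproduces the record's qualitative findings: added charging access mainly rescues the high-mileage days, while low-mileage days are already near full utility on home charging alone.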

  16. Research infrastructure support to address ecosystem dynamics

    NASA Astrophysics Data System (ADS)

    Los, Wouter

    2014-05-01

    Predicting the evolution of ecosystems under climate change or human pressures is a challenge. Even understanding past or current processes is complicated as a result of the many interactions and feedbacks that occur within and between components of the system. This talk will present an example of current research on changes in landscape evolution, hydrology, soil biogeochemical processes, zoological food webs, and plant community succession, and how these affect feedbacks to components of the systems, including the climate system. Multiple observations, experiments, and simulations provide a wealth of data, but not necessarily understanding. Models of the coupled processes on different spatial and temporal scales are sensitive to variations in data and to parameter changes. Fast high-performance computing may help to visualize the effect of these changes and the potential stability (and reliability) of the models. This may then allow for iteration between data production and models, towards stable models that reduce uncertainty and improve the prediction of change. The role of research infrastructures becomes crucial in overcoming barriers to such research. Environmental infrastructures cover physical site facilities, dedicated instrumentation, and e-infrastructure. The LifeWatch infrastructure for biodiversity and ecosystem research will provide services for data integration, analysis, and modeling, but it has to cooperate intensively with other kinds of infrastructures in order to support the iteration between data production and model computation. The cooperation in the ENVRI project (Common Operations of Environmental Research Infrastructures) is one of the initiatives to foster such multidisciplinary research.

  17. Ground Penetrating Radar technique for railway track characterization in Portugal

    NASA Astrophysics Data System (ADS)

    De Chiara, Francesca; Fontul, Simona; Fortunato, Eduardo; D'Andrea, Antonio

    2013-04-01

    Maintenance actions are significant for transport infrastructures, but today costs must necessarily be limited. Proper quality control from the construction phase onward is a key factor for a long life cycle and a sound economic policy. For this reason, suitable techniques have to be chosen, and non-destructive tests represent an efficient solution, as they allow infrastructure characteristics to be evaluated in a continuous or quasi-continuous way, saving time and costs and enabling changes to be made if test results do not comply with the project requirements. Ground Penetrating Radar (GPR) is a quick and effective technique for evaluating infrastructure condition in a continuous manner, replacing or reducing the use of traditional drilling methods. GPR application to railway infrastructures, during the construction and monitoring phases, is relatively recent. It is based on the measurement of layer thicknesses and the detection of structural changes. It also enables the assessment of the properties of the materials that constitute the infrastructure and the evaluation of different types of defects, such as ballast pockets, fouled ballast, poor drainage, subgrade settlement, and transition problems. These deteriorations are generally the causes of vertical deviations in track geometry, and they cannot be detected by common monitoring procedures, namely measurements of track geometry. Moreover, the development of new GPR systems with higher antenna frequencies, better data acquisition systems, more user-friendly software, and new algorithms for the calculation of material properties can lead to regular use of GPR. Therefore, it represents a reliable technique for assessing track geometry problems and consequently improving maintenance planning. In Portugal, rail inspection is performed with Plasser & Theurer EM120 equipment, on which 400 MHz IDS antennas were recently installed.
GPR tests were performed on the Portuguese rail network and, as a case study in this paper, a renewed track was considered. The aim was to detect, along the track, changes in the layers in terms of both thicknesses and material characteristics, using dedicated software (Railwaydoctor). Test campaigns performed in different seasons were studied in order to determine and compare the dielectric constants of the materials, which can be influenced by water content.

  18. Evaluating Green/Gray Infrastructure for CSO/Stormwater Control

    EPA Science Inventory

    The NRMRL is conducting this project to evaluate the water quality and quantity benefits of a large-scale application of green infrastructure (low-impact development/best management practices) retrofits in an entire subcatchment. It will document ORD's effort to demonstrate the e...

  19. Community Needs Assessment and Portal Prototype Development for an Arctic Spatial Data Infrastructure (ASDI)

    NASA Astrophysics Data System (ADS)

    Wiggins, H. V.; Warnick, W. K.; Hempel, L. C.; Henk, J.; Sorensen, M.; Tweedie, C. E.; Gaylord, A. G.

    2007-12-01

    As the creation and use of geospatial data in research, management, logistics, and education applications has proliferated, there is now tremendous potential for advancing science through a variety of cyber-infrastructure applications, including Spatial Data Infrastructure (SDI) and related technologies. SDIs provide a necessary and common framework of standards, security, policies, procedures, and technology to support the effective acquisition, coordination, dissemination and use of geospatial data by multiple, distributed stakeholder and user groups. Despite the numerous research activities in the Arctic, there is no established SDI and, because of this lack of a coordinated infrastructure, there is inefficiency, duplication of effort, and reduced data quality and searchability of arctic geospatial data. The urgency for establishing this framework is significant considering the myriad of data being collected during the International Polar Year (IPY) in 2007-2008 and the current international momentum for an improved and integrated circum-arctic terrestrial-marine-atmospheric environmental observatory network. The key objective of this project is to lay the foundation for full implementation of an Arctic Spatial Data Infrastructure (ASDI) through an assessment of community needs, readiness, and resources and through the development of a prototype web-mapping portal.

  20. Managing Sustainable Data Infrastructures: The Gestalt of EOSDIS

    NASA Technical Reports Server (NTRS)

    Behnke, Jeanne; Lowe, Dawn; Lindsay, Francis; Lynnes, Chris; Mitchell, Andrew

    2016-01-01

    EOSDIS epitomizes a system of systems, whose many varied and distributed parts are integrated into a single, highly functional, organized science data system. A distributed architecture was adopted to ensure discipline-specific support for the science data, while also leveraging standards and establishing policies and tools to enable interdisciplinary research and analysis across multiple scientific instruments. EOSDIS is composed of system elements such as geographically distributed archive centers used to manage the stewardship of data. The infrastructure consists of the underlying capabilities and connections that enable the primary system elements to function together. For example, one key infrastructure component is the common metadata repository, which enables discovery of all data within the EOSDIS system. EOSDIS employs processes and standards to ensure partners can work together effectively and provide coherent services to users.

  1. The role of trees in urban stormwater management | Science ...

    EPA Pesticide Factsheets

    Urban impervious surfaces convert precipitation to stormwater runoff, which causes water quality and quantity problems. While traditional stormwater management has relied on gray infrastructure such as piped conveyances to collect and convey stormwater to wastewater treatment facilities or into surface waters, cities are exploring green infrastructure to manage stormwater at its source. Decentralized green infrastructure leverages the capabilities of soil and vegetation to infiltrate, redistribute, and otherwise store stormwater volume, with the potential to realize ancillary environmental, social, and economic benefits. To date, green infrastructure science and practice have largely focused on infiltration-based technologies that include rain gardens, bioswales, and permeable pavements. However, a narrow focus on infiltration overlooks other losses from the hydrologic cycle, and we propose that arboriculture – the cultivation of trees and other woody plants – deserves additional consideration as a stormwater control measure. Trees interact with the urban hydrologic cycle by intercepting incoming precipitation, removing water from the soil via transpiration, enhancing infiltration, and bolstering the performance of other green infrastructure technologies. However, many of these interactions are inadequately understood, particularly at spatial and temporal scales relevant to stormwater management. As such, the reliable use of trees for stormwater control depe

  2. Optical stabilization for time transfer infrastructure

    NASA Astrophysics Data System (ADS)

    Vojtech, Josef; Altmann, Michal; Skoda, Pavel; Horvath, Tomas; Slapak, Martin; Smotlacha, Vladimir; Havlis, Ondrej; Munster, Petr; Radil, Jan; Kundrat, Jan; Altmannova, Lada; Velc, Radek; Hula, Miloslav; Vohnout, Rudolf

    2017-08-01

    In this paper, we propose and present verification of all-optical methods for stabilizing the end-to-end delay of an optical fiber link. These methods are verified for deployment within an infrastructure for accurate time and stable frequency distribution, based on sharing fibers with a research and educational network carrying live data traffic. The methods range from path length control, through temperature conditioning, to transmit-wavelength control. Attention is given to achieving continuous control over a relatively broad range of delays. We summarize design rules for delay stabilization based on the character and the total delay jitter.

  3. Multi-Level Data-Security and Data-Protection in a Distributed Search Infrastructure for Digital Medical Samples.

    PubMed

    Witt, Michael; Krefting, Dagmar

    2016-01-01

    Human sample data are stored in biobanks, with software managing the digitally derived sample data. When these stand-alone components are connected and a search infrastructure is employed, users are able to collect the required research data from different data sources. Data protection, patient rights, data heterogeneity and access control are major challenges for such an infrastructure. This dissertation will investigate concepts for a multi-level security architecture to comply with these requirements.
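At its core, a multi-level security architecture of the kind the abstract proposes reduces to a dominance check between a user's clearance and a record's sensitivity; a toy sketch under assumed level names (none of which come from the dissertation):

```python
# Hypothetical sensitivity levels for medical sample data; the
# dissertation's actual lattice is not specified in the abstract.
LEVELS = {"public": 0, "pseudonymized": 1, "identifying": 2}

def may_access(user_clearance, record_level):
    """Grant access only if the user's clearance dominates the record's level."""
    return LEVELS[user_clearance] >= LEVELS[record_level]
```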

  4. Dynamic VM Provisioning for TORQUE in a Cloud Environment

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Boland, L.; Coddington, P.; Sevior, M.

    2014-06-01

    Cloud computing, also known as Infrastructure-as-a-Service (IaaS), is attracting more interest from the commercial and educational sectors as a way to provide cost-effective computational infrastructure. It is an ideal platform for researchers who must share common resources but need to be able to scale up to massive computational requirements for specific periods of time. This paper presents the tools and techniques developed to allow the open source TORQUE distributed resource manager and Maui cluster scheduler to dynamically integrate OpenStack cloud resources into existing high throughput computing clusters.
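The scale-up decision such tooling makes can be sketched as a simple rule on queue backlog; this is an illustrative stand-in, not the paper's actual TORQUE/Maui/OpenStack integration logic:

```python
def vms_to_provision(queued_jobs, idle_slots, slots_per_vm, max_vms, running_vms):
    """Toy scale-up rule: boot enough VMs to cover the queue backlog,
    capped by a cluster-wide VM limit. In the real system this decision
    would be driven by TORQUE/Maui queue state and an OpenStack API call."""
    backlog = max(0, queued_jobs - idle_slots)   # jobs with no free slot
    needed = -(-backlog // slots_per_vm)         # ceiling division
    return max(0, min(needed, max_vms - running_vms))
```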

  5. Common Capabilities for Trust and Security in Service Oriented Infrastructures

    NASA Astrophysics Data System (ADS)

    Brossard, David; Colombo, Maurizio

    In order to achieve agility of the enterprise and shorter concept-to-market timescales for new services, IT and communication providers and their customers increasingly use technologies and concepts which come together under the banner of the Service Oriented Infrastructure (SOI) approach. In this paper we focus on the challenges relating to SOI security. The solutions presented cover the following areas: i) identity federation, ii) distributed usage & access management, and iii) context-aware secure messaging, routing & transformation. We use a scenario from the collaborative engineering space to illustrate the challenges and the solutions.

  6. Performance measurement and modeling of component applications in a high performance computing environment : a case study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armstrong, Robert C.; Ray, Jaideep; Malony, A.

    2003-11-01

    We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and to construct performance models for two of them. Both computational and message-passing performance are addressed.

  7. Commuting and health in Cambridge: a study of a 'natural experiment' in the provision of new transport infrastructure

    PubMed Central

    2010-01-01

    Background Modifying transport infrastructure to support active travel (walking and cycling) could help to increase population levels of physical activity. However, there is limited evidence for the effects of interventions in this field, and to the best of our knowledge no study has convincingly demonstrated an increase in physical activity directly attributable to this type of intervention. We have therefore taken the opportunity presented by a 'natural experiment' in Cambridgeshire, UK to establish a quasi-experimental study of the effects of a major transport infrastructural intervention on travel behaviour, physical activity and related wider health impacts. Design and methods The Commuting and Health in Cambridge study comprises three main elements: a cohort study of adults who travel to work in Cambridge, using repeated postal questionnaires and basic objective measurement of physical activity using accelerometers; in-depth quantitative studies of physical activity energy expenditure, travel and movement patterns and estimated carbon emissions using household travel diaries, combined heart rate and movement sensors and global positioning system (GPS) receivers; and a longitudinal qualitative interview study to elucidate participants' attitudes, experiences and practices and to understand how environmental and social factors interact to influence travel behaviour, for whom and in what circumstances. The impacts of a specific intervention - the opening of the Cambridgeshire Guided Busway - and of other changes in the physical environment will be examined using a controlled quasi-experimental design within the overall cohort dataset. Discussion Addressing the unresolved research and policy questions in this area is not straightforward. 
The challenges include those of effectively combining different disciplinary perspectives on the research problems, developing common methodological ground in measurement and evaluation, implementing robust quantitative measurement of travel and physical activity behaviour in an unpredictable 'natural experiment' setting, defining exposure to the intervention, defining controls, and conceptualising an appropriate longitudinal analytical strategy. PMID:21080928

  8. Global sand trade is paving the way for a tragedy of the sand commons

    NASA Astrophysics Data System (ADS)

    Torres, A.; Brandt, J.; Lear, K.; Liu, J.

    2016-12-01

    In the first 40 years of the 21st century, planet Earth is highly likely to experience more urban land expansion than in all of history, an increase in transportation infrastructure by more than a third, and a great variety of land reclamation projects. While scientists are beginning to quantify the deep imprint of human infrastructure on biodiversity at large scales, its off-site impacts and linkages to sand mining and trade have been largely ignored. Sand is the most widely used building material in the world. With an ever-increasing demand for this resource, sand is being extracted at rates that far exceed its replenishment, and is becoming increasingly scarce. This has already led to conflicts around the world and will likely lead to a "tragedy of the sand commons" if sustainable sand mining and trade cannot be achieved. We investigate the environmental and socioeconomic interactions over large distances (telecouplings) of infrastructure development and sand mining and trade across diverse systems through transdisciplinary research and the recently proposed telecoupling framework. Our research is generating a thorough understanding of the telecouplings driven by an increasing demand for sand. In particular, we address three main research questions: 1) Where are the conflicts related to sand mining occurring?; 2) What are the major "sending" and "receiving" systems of sand?; and 3) What are the main components (e.g. causes, effects, agents, etc.) of telecoupled systems involving sand mining and trade? Our results highlight the role of global sand trade as a driver of environmental degradation that threatens the integrity of natural systems and their capacity to deliver key ecosystem services. In addition, infrastructure development and sand mining and trade have important implications for other sustainability challenges such as over-fishing and global warming. 
This knowledge will help to identify opportunities and tools to better promote a more sustainable use of sand, ultimately helping avoid a "tragedy of the sand commons".

  9. A Case Study on Nitrogen Uptake and Denitrification in a Restored Urban Stream in Baltimore, Maryland

    EPA Science Inventory

    Restoring urban infrastructure and managing the nitrogen cycle represent emerging challenges for urban water quality. We investigated whether stormwater control measures (SCMs), a form of green infrastructure, integrated into restored and degraded urban stream networks can influe...

  10. A flexible framework for process-based hydraulic and water quality modeling of stormwater green infrastructure performance

    EPA Science Inventory

    Background Models that allow for design considerations of green infrastructure (GI) practices to control stormwater runoff and associated contaminants have received considerable attention in recent years. While popular, generally, the GI models are relatively simplistic. However,...

  11. Cyber Vulnerabilities Within Critical Infrastructure: The Flaws of Industrial Control Systems in the Oil and Gas Industry

    NASA Astrophysics Data System (ADS)

    Alpi, Danielle Marie

    The 16 sectors of critical infrastructure in the US are susceptible to cyber-attacks. Potential attacks come from internal and external threats. These attacks target the industrial control systems (ICS) of companies within critical infrastructure. Weakness in the energy sector's ICS, specifically in the oil and gas industry, can result in economic and ecological disaster. The purpose of this study was to establish means for oil companies to identify and stop cyber-attacks, specifically advanced persistent threat (APT) attacks. This research reviewed current cyber vulnerabilities and ways in which a cyber-attack may be deterred. This research found that there are insecure devices within ICS that are not regularly updated. Therefore, security issues have amassed. Safety procedures and training thereof are often neglected. Jurisdiction is unclear in regard to critical infrastructure. The recommendations this research offers are further examination of information sharing methods, development of analytic platforms, and better methods for the implementation of defense-in-depth security measures.

  12. Status report of the SRT radiotelescope control software: the DISCOS project

    NASA Astrophysics Data System (ADS)

    Orlati, A.; Bartolini, M.; Buttu, M.; Fara, A.; Migoni, C.; Poppi, S.; Righini, S.

    2016-08-01

    The Sardinia Radio Telescope (SRT) is a 64-m fully-steerable radio telescope. It is provided with an active surface to correct for gravitational deformations, allowing observations from 300 MHz to 100 GHz. At present, three receivers are available: a coaxial LP-band receiver (305-410 MHz and 1.5-1.8 GHz), a C-band receiver (5.7-7.7 GHz) and a 7-feed K-band receiver (18-26.5 GHz). Several back-ends are also available in order to perform the different data acquisition and analysis procedures requested by scientific projects. The design and development of the SRT control software started in 2004, and now belongs to a wider project called DISCOS (Development of the Italian Single-dish COntrol System), which provides a common infrastructure to the three Italian radio telescopes (the Medicina, Noto and SRT dishes). DISCOS is based on the Alma Common Software (ACS) framework, and currently consists of more than 500k lines of code. It is organized into a common core and three specific product lines, one for each telescope. Recent developments, carried out after the conclusion of the technical commissioning of the instrument (October 2013), consisted of the addition of several new features in many parts of the observing pipeline, spanning from motion control to the digital back-ends for data acquisition and data formatting; we briefly describe such improvements. More importantly, in the last two years we have supported the astronomical validation of the SRT radio telescope, leading to the opening of the first public call for proposals in late 2015. During this period, while assisting both the engineering and the scientific staff, we massively employed the control software and were able to test all of its features: in this process we received our first feedback from the users and could verify how the system performed in a real-life scenario, drawing the first conclusions about overall system stability and performance. 
We examine how the system behaves in terms of network load and system load, how it reacts to failures and errors, and which components and services seem to be the most critical parts of our architecture, showing how the ACS framework impacts these aspects. Moreover, exposure to public utilization has highlighted the major flaws in our development and software management process, which had to be tuned and improved in order to achieve faster release cycles in response to user feedback and safer deployment operations. In this regard we show how the introduction of testing practices, along with continuous integration, helped us to meet higher quality standards. Having identified the most critical aspects of our software, we conclude by showing our intentions for the future development of DISCOS, both in terms of software features and software infrastructure.

  13. The Fundamental Spatial Data in the Public Administration Registers

    NASA Astrophysics Data System (ADS)

    Čada, V.; Janečka, K.

    2016-06-01

    The system of basic registers was launched in the Czech Republic in 2012. The system provides a unique solution for centralizing and keeping up to date the most common and widely used information as part of eGovernment. The basic registers are the central information source for the information systems of public authorities. In October 2014, the Czech government approved The Strategy for the Development of the Infrastructure for Spatial Information in the Czech Republic to 2020 (GeoInfoStrategy), which serves as a basis for the NSDI. The paper describes the challenges in building the National Spatial Data Infrastructure (NSDI) in the Czech Republic, with a focus on the fundamental spatial data and the related basic registers. The GeoInfoStrategy should also contribute to increasing the competitiveness of the economy. Therefore the paper also reflects Directive 2014/61/EU of the European Parliament and of the Council on measures to reduce the cost of deploying high-speed electronic communication networks. The Directive states that citizens as well as the private and public sectors must have the opportunity to be part of the digital economy. A high quality digital infrastructure underpins virtually all sectors of a modern and innovative economy. To ensure the development of such infrastructure in the Czech Republic, a Register of passive infrastructure, providing information on the features of passive infrastructure, has to be established.

  14. Infrastructure Commons in Economic Perspective

    NASA Astrophysics Data System (ADS)

    Frischmann, Brett M.

    This chapter briefly summarizes a theory (developed in substantial detail elsewhere)1 that explains why there are strong economic arguments for managing and sustaining infrastructure resources in an openly accessible manner. This theory facilitates a better understanding of two related issues: how society benefits from infrastructure resources and how decisions about how to manage or govern infrastructure resources affect a wide variety of public and private interests. The key insights from this analysis are that infrastructure resources generate value as inputs into a wide range of productive processes and that the outputs from these processes are often public goods and nonmarket goods that generate positive externalities that benefit society as a whole. Managing such resources in an openly accessible manner may be socially desirable from an economic perspective because doing so facilitates these downstream productive activities. For example, managing the Internet infrastructure in an openly accessible manner facilitates active citizen involvement in the production and sharing of many different public and nonmarket goods. Over the last decade, this has led to increased opportunities for a wide range of citizens to engage in entrepreneurship, political discourse, social network formation, and community building, among many other activities. The chapter applies these insights to the network neutrality debate and suggests how the debate might be reframed to better account for the wide range of private and public interests at stake.

  15. ENVRIplus - European collaborative development of environmental infrastructures

    NASA Astrophysics Data System (ADS)

    Asmi, A.; Brus, M.; Kutsch, W. L.; Laj, P.

    2016-12-01

    European Research Infrastructures (RIs) are built using the ESFRI process, which dictates the steps towards common European RIs. Building each RI separately creates unnecessary barriers for service users (e.g. through differing standards) and is inefficient in, for example, e-science tool or data system development. To address these inter-RI issues, the European Commission has funded several large-scale cluster projects to bring these RIs together already in the planning and development phases, to develop common tools, standards and methodologies, and to learn from existing systems. ENVRIplus is the cluster project for the environmental RIs in Europe, and provides a platform for common development and sharing within the RI community. The project is organized around different themes, each having several work packages with specific tasks. The major themes of ENVRIplus are: Technical innovation, including tasks such as RI technology transfer, new observation techniques, autonomous operation, etc.; Data for science, with tasks such as RI reference model development, data discovery and citation, data publication, processing, etc.; Access to RIs, with specific tasks on interdisciplinary and transnational access to RI services, and common access governance; Societal relevance and understanding, tackling ethical issues in RI operations and understanding of the human-environmental system and citizen science approaches, among others; Knowledge transfer, particularly between the RIs and with developing RI organizations, organizing training and staff exchange; and Communication and dissemination, working towards a common environmental RI community (the ENVRI community platform), creating a dedicated advisory RI discussion board (BEERi), and disseminating the ENVRIplus products globally. Importantly, all ENVRIplus results are open to users from any country. Collaboration with international RIs and user communities is also crucial to the success of the ENVRI initiatives. 
The overall goal is to do science globally, to answer global and regional critical challenges. The presentation will not only present the project and its state after nearly 2 years of operation, but will also present ideas towards building international and even more interdisciplinary collaboration among research infrastructures and their users.

  16. Data quality can make or break a research infrastructure

    NASA Astrophysics Data System (ADS)

    Pastorello, G.; Gunter, D.; Chu, H.; Christianson, D. S.; Trotta, C.; Canfora, E.; Faybishenko, B.; Cheah, Y. W.; Beekwilder, N.; Chan, S.; Dengel, S.; Keenan, T. F.; O'Brien, F.; Elbashandy, A.; Poindexter, C.; Humphrey, M.; Papale, D.; Agarwal, D.

    2017-12-01

    Research infrastructures (RIs) commonly support observational data provided by multiple, independent sources. Uniformity in the data distributed by such RIs is important in most applications, e.g., in comparative studies using data from two or more sources. Achieving uniformity in terms of data quality is challenging, especially considering that many data issues are unpredictable and cannot be detected until a first occurrence of the issue. As a result, many data quality control activities within RIs require a manual, human-in-the-loop element, making them expensive. Our motivating example is the FLUXNET2015 dataset - a collection of ecosystem-level carbon, water, and energy fluxes between land and atmosphere from over 200 sites around the world, some sites with over 20 years of data. About 90% of the human effort to create the dataset was spent on data quality related activities. Based on this experience, we have been working on solutions to increase the automation of data quality control procedures. Since it is nearly impossible to fully automate all quality related checks, we have been drawing from the experience with techniques used in software development, which shares a few common constraints. In both managing scientific data and writing software, human time is a precious resource; code bases, like science datasets, can be large, complex, and full of errors; both scientific and software endeavors can be pursued by individuals, but collaborative teams can accomplish a lot more. The lucrative and fast-paced nature of the software industry fueled the creation of methods and tools to increase automation and productivity within these constraints. Issue tracking systems, methods for translating problems into automated tests, and powerful version control tools are a few examples. Terrestrial and aquatic ecosystems research relies heavily on many types of observational data. 
As the volume of data collected increases, ensuring data quality is becoming an unwieldy challenge for RIs. Business-as-usual approaches to data quality do not work with larger data volumes. We believe RIs can benefit greatly from adapting this body of theory and practice from software quality to data quality, enabling systematic and reproducible safeguards against errors and mistakes in datasets as much as in software.
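Translating recurring data problems into automated checks, as the abstract suggests borrowing from software testing, can start with small functions that report offending sample indices; a hedged sketch (thresholds and names are illustrative, not FLUXNET2015's actual rules):

```python
def check_range(values, lo, hi):
    """Return indices of samples outside a physically plausible range.

    The bounds would come from the variable's known physics, e.g. an
    illustrative 0-50 degC band for air temperature at a flux site.
    """
    return [i for i, v in enumerate(values) if v is not None and not lo <= v <= hi]

def check_gaps(values):
    """Return indices of missing samples, so gap-filling can be targeted."""
    return [i for i, v in enumerate(values) if v is None]
```

Like unit tests in software, such checks run automatically on every new data submission, reserving human effort for the failures they flag.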

  17. Ketamine and international regulations.

    PubMed

    Liao, Yanhui; Tang, Yi-Lang; Hao, Wei

    2017-09-01

    Ketamine is an anesthetic commonly used in low-income countries and has recently been shown to be effective for treatment-resistant depression. However, the illicit manufacturing, trafficking, and nonmedical use of ketamine are increasing globally, and its illicit use poses major public health challenges in many countries. To review the nonmedical use of ketamine in selected countries and its regulatory control. We conducted a review of literature identified from searches of the China National Knowledge Infrastructure (CNKI) (1979-2016) and PubMed databases, supplemented by additional references identified by the authors. Special attention was given to the regulation of ketamine. Illicit manufacturing, trafficking, and use of ketamine appear to have begun on a large scale in several Asian nations, and it has subsequently spread to other regions. Regulations governing availability of ketamine vary across countries, but there is a clear trend toward tighter regulations. As nonmedical use of ketamine and its harmful consequences have worsened globally, stricter controls are necessary. Appropriate regulation of ketamine is important for international efforts to control ketamine's cross-border trafficking and its nonmedical use.

  18. Modeling inter-signal arrival times for accurate detection of CAN bus signal injection attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Michael Roy; Bridges, Robert A; Combs, Frank L

    Modern vehicles rely on hundreds of on-board electronic control units (ECUs) communicating over in-vehicle networks. As external interfaces to the car control networks (such as the on-board diagnostic (OBD) port, auxiliary media ports, etc.) become common, and vehicle-to-vehicle / vehicle-to-infrastructure technology is in the near future, the attack surface for vehicles grows, exposing control networks to potentially life-critical attacks. This paper addresses the need for securing the CAN bus by detecting anomalous traffic patterns via unusual refresh rates of certain commands. While previous works have identified signal frequency as an important feature for CAN bus intrusion detection, this paper provides the first such algorithm with experiments on five attack scenarios. Our data-driven anomaly detection algorithm requires only five seconds of training time (on normal data) and achieves true positive / false discovery rates of 0.9998/0.00298, respectively (micro-averaged across the five experimental tests).
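The frequency-based idea, learning the normal inter-arrival time of each CAN message ID and flagging gaps that deviate, can be sketched as follows; the z-score cutoff and model shape are illustrative, not the paper's actual detector:

```python
from statistics import mean, stdev

def train(timestamps):
    """Fit (mean, stdev) of inter-arrival gaps from normal traffic
    for one CAN message ID."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return mean(gaps), stdev(gaps)

def is_anomalous(prev_ts, ts, model, z_cut=4.0):
    """Flag an arrival whose gap deviates strongly from the learned rate."""
    mu, sigma = model
    gap = ts - prev_ts
    return abs(gap - mu) > z_cut * max(sigma, 1e-9)  # floor avoids zero-variance IDs
```

An injection attack that replays a command faster than its normal refresh rate shortens the inter-arrival gap, which is exactly what this check surfaces.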

  19. Wenxin Keli for atrial fibrillation

    PubMed Central

    He, Zhuogen; Zheng, Minan; Xie, Pingchang; Wang, Yuanping; Yan, Xia; Deng, Dingwei

    2018-01-01

    Abstract Background: Atrial fibrillation (AF) is the most common cardiac arrhythmia in clinical practice. In China, Wenxin Keli (WXKL) therapy is a common treatment for AF, but its effects and safety remain uncertain. This protocol provides the methods used to assess the effectiveness and safety of WXKL for the treatment of patients with AF. Methods: We will comprehensively search 4 English databases (EMBASE, the Cochrane Central Register of Controlled Trials (Cochrane Library), PubMed, and Medline) and 3 Chinese databases (China National Knowledge Infrastructure (CNKI), Chinese Biomedical Literature Database (CBM), and Chinese Science and Technology Periodical database (VIP)) in March 2018 for randomized controlled trials (RCTs) of WXKL for AF. The therapeutic effects according to sinus rhythm and p-wave dispersion (Pwd) will be accepted as the primary outcomes. We will use RevMan V.5.3 software to perform the data synthesis when a meta-analysis is possible. Results: This study will provide a high-quality synthesis of current evidence of WXKL for AF. Conclusion: The conclusion of our systematic review will provide evidence to judge whether WXKL is an effective intervention for patients with AF. PROSPERO registration number: PROSPERO CRD 42018082045. PMID:29702984

  20. 75 FR 81249 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-27

    ...: By name, Social Security Number (SSN), and/or date of birth. Safeguards: System login is accomplished by DoD Common Access Card (CAC). Public Key Infrastructure (PKI) network login is required and allows...

  1. 78 FR 42482 - Approval and Promulgation of Air Quality Implementation Plans; Pennsylvania; Infrastructure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-16

    ... Promulgation of Air Quality Implementation Plans; Pennsylvania; Infrastructure Requirements for the 2008 Lead National Ambient Air Quality Standards AGENCY: Environmental Protection Agency (EPA). ACTION: Proposed rule... Environmental Protection, Bureau of Air Quality Control, P.O. Box 8468, 400 Market Street, Harrisburg...

  2. High Resolution Sensing and Control of Urban Water Networks

    NASA Astrophysics Data System (ADS)

    Bartos, M. D.; Wong, B. P.; Kerkez, B.

    2016-12-01

    We present a framework to enable high-resolution sensing, modeling, and control of urban watersheds using (i) a distributed sensor network based on low-cost cellular-enabled motes, (ii) hydraulic models powered by a cloud computing infrastructure, and (iii) automated actuation valves that allow infrastructure to be controlled in real time. This platform enables two major advances. First, we achieve a high density of measurements in urban environments, with an anticipated 40+ sensors over each urban area of interest. In addition to new measurements, we also illustrate the design and evaluation of a "smart" control system for real-world hydraulic networks. This control system improves water quality and mitigates flooding by using real-time hydraulic models to adaptively control releases from retention basins. We evaluate the potential of this platform through two ongoing deployments: (i) a flood monitoring network in the Dallas-Fort Worth metropolitan area that detects and anticipates floods at the level of individual roadways, and (ii) a real-time hydraulic control system in the city of Ann Arbor, MI, soon to be one of the most densely instrumented urban watersheds in the United States. Through these applications, we demonstrate that distributed sensing and control of water infrastructure can improve flash flood predictions, emergency response, and stormwater contaminant mitigation.
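    The adaptive release control described above can be illustrated with a minimal rule-based sketch. The thresholds, the pre-release rule, and the function name are hypothetical, not values from the deployed system; a real controller would be driven by the hydraulic model's forecasts.

```python
def valve_command(level_m, forecast_inflow_m3s,
                  max_level_m=3.0, target_level_m=1.5):
    """Return a valve opening fraction in [0, 1] for a retention basin.

    Opens proportionally as the water level exceeds the target, and
    pre-releases ahead of forecast inflow to free storage for the event.
    All thresholds are illustrative only.
    """
    # Proportional response to the current level above target
    opening = (level_m - target_level_m) / (max_level_m - target_level_m)
    # Pre-release: open further when significant inflow is forecast
    if forecast_inflow_m3s > 0.5:
        opening += 0.25
    return min(1.0, max(0.0, opening))

# At target level with no forecast inflow, the valve stays closed;
# at the maximum level it is fully open.
closed = valve_command(1.5, 0.0)
full = valve_command(3.0, 0.0)
```

    The pre-release term is what makes the control "adaptive": the basin is drawn down before a storm arrives rather than reacting only to the current level.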

  3. Public Health System Response to Extreme Weather Events.

    PubMed

    Hunter, Mark D; Hunter, Jennifer C; Yang, Jane E; Crawley, Adam W; Aragón, Tomás J

    2016-01-01

    Extreme weather events, unpredictable and often far-reaching, constitute a persistent challenge for public health preparedness. The goal of this research is to inform public health systems improvement through examination of extreme weather events, comparing across cases to identify recurring patterns in event and response characteristics. Structured telephone-based interviews were conducted with representatives from health departments to assess characteristics of recent extreme weather events and agencies' responses. Response activities were assessed using the Centers for Disease Control and Prevention Public Health Emergency Preparedness Capabilities framework. Challenges that are typical of this response environment are reported. The study setting comprised 45 local health departments in 20 US states. Respondents described public health system responses to 45 events involving tornadoes, flooding, wildfires, winter weather, hurricanes, and other storms. Events of similar scale were infrequent for a majority (62%) of the communities involved; disruption to critical infrastructure was universal. Public Health Emergency Preparedness Capabilities considered most essential involved environmental health investigations, mass care and sheltering, surveillance and epidemiology, information sharing, and public information and warning. Unanticipated response activities or operational constraints were common. We characterize extreme weather events as a "quadruple threat" because (1) direct threats to population health are accompanied by damage to public health protective and community infrastructure, (2) event characteristics often impose novel and pervasive burdens on communities, (3) responses rely on critical infrastructures whose failure both creates new burdens and diminishes response capacity, and (4) their infrequency and scale further compromise response capacity. 
Given the challenges associated with extreme weather events, we suggest opportunities for organizational learning and preparedness improvements.

  4. Ocean Data Interoperability Platform (ODIP): developing a common framework for marine data management on a global scale

    NASA Astrophysics Data System (ADS)

    Glaves, Helen; Schaap, Dick

    2016-04-01

    The increasingly ocean basin level approach to marine research has led to a corresponding rise in the demand for large quantities of high quality interoperable data. This requirement for easily discoverable and readily available marine data is currently being addressed by initiatives such as SeaDataNet in Europe, Rolling Deck to Repository (R2R) in the USA and the Australian Ocean Data Network (AODN) with each having implemented an e-infrastructure to facilitate the discovery and re-use of standardised multidisciplinary marine datasets available from a network of distributed repositories, data centres etc. within their own region. However, these regional data systems have been developed in response to the specific requirements of their users and in line with the priorities of the funding agency. They have also been created independently of the marine data infrastructures in other regions often using different standards, data formats, technologies etc. that make integration of marine data from these regional systems for the purposes of basin level research difficult. Marine research at the ocean basin level requires a common global framework for marine data management which is based on existing regional marine data systems but provides an integrated solution for delivering interoperable marine data to the user. The Ocean Data Interoperability Platform (ODIP/ODIP II) project brings together those responsible for the management of the selected marine data systems and other relevant technical experts with the objective of developing interoperability across the regional e-infrastructures. The commonalities and incompatibilities between the individual data infrastructures are identified and then used as the foundation for the specification of prototype interoperability solutions which demonstrate the feasibility of sharing marine data across the regional systems and also with relevant larger global data services such as GEO, COPERNICUS, IODE, POGO etc. 
The potential impact for the individual regional data infrastructures of implementing these prototype interoperability solutions is also being evaluated to determine both the technical and financial implications of their integration within existing systems. These impact assessments form part of the strategy to encourage wider adoption of the ODIP solutions and approach beyond the current scope of the project which is focussed on regional marine data systems in Europe, Australia, the USA and, more recently, Canada.

  5. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

    PubMed

    Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

    2016-06-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. 
The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols.
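    One kind of inter-distribution relation that the Distributome catalogs can be checked empirically with a short Monte Carlo sketch (standard library only; the function name and sample size are illustrative):

```python
import random

def chi_square_moments(k, n=100_000, seed=7):
    """Monte Carlo check of a classic inter-distribution relation:
    the sum of squares of k independent standard normals follows a
    chi-square distribution with k degrees of freedom (mean k, variance 2k).
    """
    rng = random.Random(seed)
    xs = [sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(k)) for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean, var

m, v = chi_square_moments(4)
# m should be close to 4 and v close to 8, up to Monte Carlo error
```

    Simulations of this sort are exactly the "exploration of inter-distributional relations" use case: the empirical moments of the simulated sum match the analytical moments of the chi-square family.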

  6. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions

    PubMed Central

    Dinov, Ivo D.; Siegrist, Kyle; Pearl, Dennis K.; Kalinin, Alexandr; Christou, Nicolas

    2015-01-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. 
The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols. PMID:27158191

  7. Ocean Data Interoperability Platform (ODIP): developing a common global framework for marine data management through international collaboration

    NASA Astrophysics Data System (ADS)

    Glaves, Helen

    2015-04-01

    Marine research is rapidly moving away from traditional discipline specific science to a wider ecosystem level approach. This more multidisciplinary approach to ocean science requires large amounts of good quality, interoperable data to be readily available for use in an increasing range of new and complex applications. Significant amounts of marine data and information are already available throughout the world as a result of e-infrastructures being established at a regional level to manage and deliver marine data to the end user. However, each of these initiatives has been developed to address specific regional requirements and independently of those in other regions. Establishing a common framework for marine data management on a global scale necessitates that there is interoperability across these existing data infrastructures and active collaboration between the organisations responsible for their management. The Ocean Data Interoperability Platform (ODIP) project is promoting co-ordination between a number of these existing regional e-infrastructures including SeaDataNet and Geo-Seas in Europe, the Integrated Marine Observing System (IMOS) in Australia, the Rolling Deck to Repository (R2R) in the USA and the international IODE initiative. To demonstrate this co-ordinated approach the ODIP project partners are currently working together to develop several prototypes to test and evaluate potential interoperability solutions for solving the incompatibilities between the individual regional marine data infrastructures. However, many of the issues being addressed by the Ocean Data Interoperability Platform are not specific to marine science. For this reason many of the outcomes of this international collaborative effort are equally relevant and transferable to other domains.

  8. Transformational Spaceport and Range Concept of Operations: A Vision to Transform Ground and Launch Operations

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The Transformational Concept of Operations (CONOPS) provides a long-term, sustainable vision for future U.S. space transportation infrastructure and operations. This vision presents an interagency concept, developed cooperatively by the Department of Defense (DoD), the Federal Aviation Administration (FAA), and the National Aeronautics and Space Administration (NASA) for the upgrade, integration, and improved operation of major infrastructure elements of the nation's space access systems. The interagency vision described in the Transformational CONOPS would transform today's space launch infrastructure into a shared system that supports worldwide operations for a variety of users. The system concept is sufficiently flexible and adaptable to support new types of missions for exploration, commercial enterprise, and national security, as well as to endure further into the future when space transportation technology may be sufficiently advanced to enable routine public space travel as part of the global transportation system. The vision for future space transportation operations is based on a system-of-systems architecture that integrates the major elements of the future space transportation system - transportation nodes (spaceports), flight vehicles and payloads, tracking and communications assets, and flight traffic coordination centers - into a transportation network that concurrently accommodates multiple types of mission operators, payloads, and vehicle fleets. This system concept also establishes a common framework for defining a detailed CONOPS for the major elements of the future space transportation system. The resulting set of four CONOPS (see Figure 1 below) describes the common vision for a shared future space transportation system (FSTS) infrastructure from a variety of perspectives.

  9. Analysis of Instrumentation to Monitor the Hydrologic Performance of Green Infrastructure at the Edison Environmental Center

    EPA Science Inventory

    Infiltration is one of the primary functional mechanisms of green infrastructure stormwater controls, so this study explored selection and placement of embedded soil moisture and water level sensors to monitor surface infiltration and infiltration into the underlying soil for per...

  10. Green Infrastructure for Stormwater Control: Gauging its Effectiveness with Community Partners, Summary of EPA GI Reports

    EPA Science Inventory

    This document is a summary of the green infrastructure reports, journal articles, and conference proceedings published to date. This summary will be updated as more reports are completed. The Environmental Protection Agency’s Office of Research and Development has an ambitious ...

  11. Update on Kansas City Middle Blue River Green Infrastructure Pilot Project - seminar

    EPA Science Inventory

    In 2010, Kansas City, MO (KCMO) signed a consent decree with EPA on combined sewer overflows. The City decided to use adaptive management in order to extensively utilize green infrastructure (GI) in lieu of, and in addition to, structural controls. KCMO installed 130 GI storm con...

  12. Identifying green infrastructure BMPs for reducing nitrogen export to a Chesapeake Bay agricultural stream: model synthesis and extension of experimental data

    EPA Science Inventory

    Background/Question/Methods The effectiveness of riparian forest buffers and other green infrastructure for reducing nitrogen export to agricultural streams has been well described experimentally, but a clear understanding of process-level hydrological and biogeochemical control...

  13. Development of a queue warning system utilizing ATM infrastructure system development and field-testing : final report.

    DOT National Transportation Integrated Search

    2017-06-13

    MnDOT has already deployed an extensive infrastructure for Active Traffic Management (ATM) on I-35W and I-94 with plans to expand on other segments of the Twin Cities freeway network. The ATM system includes intelligent lane control signals (ILCS) sp...

  14. Update on Kansas City Middle Blue River Green Infrastructure Pilot Project

    EPA Science Inventory

    In 2010, Kansas City, MO (KCMO) signed a consent decree with EPA on combined sewer overflows. The City decided to use adaptive management in order to extensively utilize green infrastructure (GI) in lieu of, and in addition to, gray structural controls. KCMO installed 130 GI sto...

  15. Using the GlideinWMS System as a Common Resource Provisioning Layer in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balcas, J.; Belforte, S.; Bockelman, B.

    2015-12-23

    CMS will require access to more than 125k processor cores for the beginning of Run 2 in 2015 to carry out its ambitious physics program with more and higher complexity events. During Run 1 these resources were predominantly provided by a mix of grid sites and local batch resources. During the long shutdown, cloud infrastructures, diverse opportunistic resources and HPC supercomputing centers were made available to CMS, which further complicated the operations of the submission infrastructure. In this presentation we will discuss the CMS effort to adopt and deploy the glideinWMS system as a common resource provisioning layer to grid, cloud, local batch, and opportunistic resources and sites. We will address the challenges associated with integrating the various types of resources, the efficiency gains and simplifications associated with using a common resource provisioning layer, and discuss the solutions found. We will finish with an outlook of future plans for how CMS is moving forward on resource provisioning for more heterogeneous architectures and services.

  16. Developing a National-Level Concept Dictionary for EHR Implementations in Kenya.

    PubMed

    Keny, Aggrey; Wanyee, Steven; Kwaro, Daniel; Mulwa, Edwin; Were, Martin C

    2015-01-01

    The increasing adoption of Electronic Health Records (EHR) by developing countries comes with the need to develop common terminology standards to assure semantic interoperability. In Kenya, where the Ministry of Health has rolled out an EHR at 646 sites, several challenges have emerged including variable dictionaries across implementations, inability to easily share data across systems, lack of expertise in dictionary management, lack of central coordination and custody of a terminology service, inadequately defined policies and processes, insufficient infrastructure, among others. A Concept Working Group was constituted to address these challenges. The country settled on a common Kenya data dictionary, initially derived as a subset of the Columbia International eHealth Laboratory (CIEL)/Millennium Villages Project (MVP) dictionary. The initial dictionary scope largely focuses on clinical needs. Processes and policies around dictionary management are being guided by the framework developed by Bakhshi-Raiez et al. Technical and infrastructure-based approaches are also underway to streamline workflow for dictionary management and distribution across implementations. Kenya's approach on comprehensive common dictionary can serve as a model for other countries in similar settings.

  17. Intelligent transportation systems infrastructure initiative

    DOT National Transportation Integrated Search

    1997-01-01

    The three-quarter moving composite price index is the weighted average of the indices for three consecutive quarters. The Composite Bid Price Index is composed of six indicator items: common excavation, to indicate the price trend for all roadway exc...

  18. Near Fault Observatories (NFO) services and integration plan for European Plate Observing System (EPOS) Implementation Phase

    NASA Astrophysics Data System (ADS)

    Chiaraluce, Lauro

    2016-04-01

    Coherently with the EPOS vision aimed at creating a pan-European infrastructure for Earth Sciences supporting research for a more sustainable society, we are working on the integration of NFOs and services implementation facilitating their data and products discovery and usage. NFOs are National Research Infrastructures (NRI) consisting of advanced networks of multi-parametric sensors continuously monitoring the chemical and physical processes related to the common underlying Earth instabilities governing active faults evolution and the genesis of earthquakes. These infrastructures will enable advancements in understanding of earthquake generation processes and associated ground shaking due to their high-quality near-source multidisciplinary data. In EPOS-IP seven NFOs are going to be linked: 1) the Altotiberina and 2) Irpinia Observatories in Italy, 3) Corinth in Greece, 4) South-Iceland Seismic Zone, 5) Valais in Switzerland, 6) Marmara Sea (GEO Supersite) in Turkey and 7) Vrancea in Romania. EPOS-IP aims to implement integrated services from a technical, legal, governance and financial point of view. Accordingly, our first effort within this first core group of NFOs will be establishing legal governance for such a young community to ensure a long-term sustainability of the envisaged services, including the full adoption of the EPOS data policy. The establishment of a Board including representatives of each NFO formally appointed by the Institutions supporting the NRI is a basic requirement to provide and validate a stable governance mechanism supporting the initiatives aimed at service provision. Extremely dense networks and less common instruments demand extensive work on data quality control and description. We will work on linking all the NFOs in a single distributed network of observatories with instrumental and monitoring standards based on common protocols for observation, analysis, and data access and distributed channels. 
We will rely on the services provided by other Thematic Core Services for the standard data (e.g. seismic and geodetic) and on the direct access to the e-infrastructures of individual NFOs via the Integrated Core Services web services for access and distribution of non-standard data (e.g. strain- and tilt-meters, geochemical and electro- and magneto-telluric data). We will collaborate with the other groups possessing the same data on data harmonization in terms of both format and metadata description to optimise and facilitate the integration and interoperability processes. The services will include a Virtual Laboratory (VL), novel visualization tools for data and products describing the anatomy of active faults and the physical processes governing earthquake generation. The VL is an online engagement and knowledge-sharing initiative for communicating to other scientists, stakeholders and the public the state of scientific knowledge concerning earthquake source and tectonic processes generating catastrophic events. The availability of real-time data provides the unique opportunity of observing all phases of the earthquake rupture. It is thus of crucial importance to develop methodologies that follow the evolution of an event in real time (e.g. Earthquake Early Warning systems). NFOs are ideal infrastructures for hosting testing centers where a variety of scientific algorithms for real-time monitoring can be independently evaluated. Besides the interest for fundamental science, such developments have a societal impact and can attract new stakeholders such as industry partners who are interested in adopting such technologies (e.g. EEW).

  19. Flexible services for the support of research.

    PubMed

    Turilli, Matteo; Wallom, David; Williams, Chris; Gough, Steve; Curran, Neal; Tarrant, Richard; Bretherton, Dan; Powell, Andy; Johnson, Matt; Harmer, Terry; Wright, Peter; Gordon, John

    2013-01-28

    Cloud computing has been increasingly adopted by users and providers to promote flexible, scalable and tailored access to computing resources. Nonetheless, the consolidation of this paradigm has uncovered some of its limitations. Initially devised by corporations with direct control over large amounts of computational resources, cloud computing is now being endorsed by organizations with limited resources or with a more articulated, less direct control over these resources. The challenge for these organizations is to leverage the benefits of cloud computing while dealing with limited and often widely distributed computing resources. This study focuses on the adoption of cloud computing by higher education institutions and addresses two main issues: flexible and on-demand access to a large amount of storage resources, and scalability across a heterogeneous set of cloud infrastructures. The proposed solutions leverage a federated approach to cloud resources in which users access multiple and largely independent cloud infrastructures through a highly customizable broker layer. This approach allows for a uniform authentication and authorization infrastructure, a fine-grained policy specification and the aggregation of accounting and monitoring. Within a loosely coupled federation of cloud infrastructures, users can access vast amounts of data without copying them across cloud infrastructures and can scale their resource provisions when the local cloud resources become insufficient.
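    The policy-driven selection that such a broker layer performs might look like the sketch below. This is a hypothetical illustration of the idea, not the paper's broker API; the field names and policies are invented.

```python
def choose_cloud(clouds, required_storage_gb, policy="cheapest"):
    """Pick a cloud from a federation that can satisfy a storage request.

    clouds: list of dicts with 'name', 'free_storage_gb', 'cost_per_gb',
    a stand-in for what a real broker would learn from each provider.
    """
    eligible = [c for c in clouds if c["free_storage_gb"] >= required_storage_gb]
    if not eligible:
        raise RuntimeError("no cloud in the federation can satisfy the request")
    if policy == "cheapest":
        return min(eligible, key=lambda c: c["cost_per_gb"])
    # "most-headroom": spread load onto the provider with the most free space
    return max(eligible, key=lambda c: c["free_storage_gb"])

# Hypothetical two-site federation
federation = [
    {"name": "site-a", "free_storage_gb": 500, "cost_per_gb": 0.05},
    {"name": "site-b", "free_storage_gb": 2000, "cost_per_gb": 0.08},
]
```

    The policy parameter is where fine-grained, per-user rules would plug in; the broker keeps the federation loosely coupled because providers only need to expose capacity and cost, not a common internal API.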

  20. Impact modeling and prediction of attacks on cyber targets

    NASA Astrophysics Data System (ADS)

    Khalili, Aram; Michalk, Brian; Alford, Lee; Henney, Chris; Gilbert, Logan

    2010-04-01

    In most organizations, IT (information technology) infrastructure exists to support the organization's mission. The threat of cyber attacks poses risks to this mission. Current network security research focuses on the threat of cyber attacks to the organization's IT infrastructure; however, the risks to the overall mission are rarely analyzed or formalized. This connection of IT infrastructure to the organization's mission is often neglected or carried out ad-hoc. Our work bridges this gap and introduces analyses and formalisms to help organizations understand the mission risks they face from cyber attacks. Modeling an organization's mission vulnerability to cyber attacks requires a description of the IT infrastructure (network model), the organization mission (business model), and how the mission relies on IT resources (correlation model). With this information, proper analysis can show which cyber resources are of tactical importance in a cyber attack, i.e., controlling them enables a large range of cyber attacks. Such analysis also reveals which IT resources contribute most to the organization's mission, i.e., lack of control over them gravely affects the mission. These results can then be used to formulate IT security strategies and explore their trade-offs, which leads to better incident response. This paper presents our methodology for encoding IT infrastructure, organization mission and correlations, our analysis framework, as well as initial experimental results and conclusions.
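    The correlation model described above can be illustrated as a dependency graph over which mission impact is computed by reachability: a resource is tactically important when many mission functions transitively depend on it. The model, names, and helper below are hypothetical, not the paper's implementation.

```python
def mission_impact(dependencies, resource):
    """Return the set of mission functions that fail if `resource` is lost.

    dependencies: dict mapping each node to the nodes it depends on.
    A mission fails when any node on a required dependency path is lost
    (simple transitive dependency; redundancy is not modeled here).
    """
    def needs(node, lost, seen=None):
        seen = seen or set()
        if node == lost:
            return True
        if node in seen:
            return False
        seen.add(node)
        return any(needs(d, lost, seen) for d in dependencies.get(node, ()))

    missions = [n for n in dependencies if n.startswith("mission:")]
    return {m for m in missions if needs(m, resource)}

# Hypothetical correlation model: missions depend on services, services on hosts
deps = {
    "mission:payroll": ["service:db"],
    "mission:webshop": ["service:db", "service:web"],
    "service:db":      ["host:1"],
    "service:web":     ["host:2"],
}
```

    Ranking resources by the size of their impact set identifies the assets whose loss "gravely affects the mission" and so should anchor the IT security strategy.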

  1. Availability of health data: requirements and solutions.

    PubMed

    Espinosa, A L

    1998-03-01

    There is an increasing recognition of the importance of the health data available for the corporate healthcare system model with the electronic patient record as the central unit of the healthcare information systems. There is also increasing recognition of the importance of developing simple international standards for record components, including clinical and administrative requirements. Aspects of security and confidentiality have to be reviewed in detail. The advantages of having health data available when and where it is required will modify healthcare delivery and support cost control with economies of scale and sharing of resources. The infrastructure necessary to make this model a reality is being developed through different international initiatives, which have to be integrated and co-ordinated to have common disaster planning strategies and better funding alternatives.

  2. Research Infrastructure and Scientific Collections: The Supply and Demand of Scientific Research

    NASA Astrophysics Data System (ADS)

    Graham, E.; Schindel, D. E.

    2016-12-01

    Research infrastructure is essential in both experimental and observational sciences and is commonly thought of as single-sited facilities. In contrast, object-based scientific collections are distributed in nearly every way, including by location, taxonomy, geologic epoch, discipline, collecting processes, benefits sharing rules, and many others. These diffused collections may have been amassed for a particular discipline, but their potential for use and impact in other fields needs to be explored. Through a series of cross-disciplinary activities, Scientific Collections International (SciColl) has explored and developed new ways in which the supply of scientific collections can meet the demand of researchers in unanticipated ways. From cross-cutting workshops on emerging infectious diseases and food security, to an online portal of collections, SciColl aims to illustrate the scope and value of object-based scientific research infrastructure. As distributed infrastructure, the full impact of scientific collections to the research community is a result of discovering, utilizing, and networking these resources. Examples and case studies from infectious disease research, food security topics, and digital connectivity will be explored.

  3. A service-oriented approach to assessing the infrastructure value index.

    PubMed

    Amaral, R; Alegre, H; Matos, J S

    Many national and regional administrations are currently facing challenges to ensure long-term sustainability of urban water services, as infrastructures continue to accumulate alarming levels of deferred maintenance and rehabilitation. The infrastructure value index (IVI) has proven to be an effective tool to support long-term planning, in particular by facilitating the ability to communicate and to create awareness. It is given by the ratio between current value of an infrastructure and its replacement cost. Current value is commonly estimated according to an asset-oriented approach, which is based on the concept of useful life of individual components. The standard values assumed for the useful lives can vary significantly, which leads to valuations that are just as different. Furthermore, with water companies increasingly focused on the customer, effective service-centric asset management is essential now more than ever. This paper shows results of on-going research work, which aims to explore a service-oriented approach for assessing the IVI. The paper presents the fundamentals underlying this approach, discusses and compares results obtained from both perspectives and points to challenges that still need to be addressed.
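    The IVI definition above is directly computable. A minimal sketch of the asset-oriented approach the paper discusses, using straight-line depreciation over each component's useful life (all figures invented for illustration):

```python
def infrastructure_value_index(components):
    """IVI = (sum of current values) / (sum of replacement costs).

    Under the asset-oriented approach, a component's current value is its
    replacement cost scaled by the fraction of useful life remaining.
    components: list of (replacement_cost, age_years, useful_life_years).
    """
    current = sum(cost * max(0.0, 1.0 - age / life)
                  for cost, age, life in components)
    replacement = sum(cost for cost, _, _ in components)
    return current / replacement

# Hypothetical network: two pipe cohorts and a pumping station
ivi = infrastructure_value_index([
    (1_000_000, 30, 60),  # older pipes: half their useful life left
    (500_000, 10, 50),    # newer pipes: 80% of useful life left
    (250_000, 25, 25),    # pump at the end of its useful life
])
```

    The paper's point is visible even in this toy case: the result depends entirely on the assumed useful lives, which is why a service-oriented valuation can differ markedly from the asset-oriented one.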

  4. 78 FR 21281 - Approval and Promulgation of Implementation Plans; State of Missouri; Infrastructure SIP...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-10

    ...EPA is proposing action on four Missouri State Implementation Plan (SIP) submissions. First, EPA is proposing to approve portions of two SIP submissions from the State of Missouri addressing the applicable requirements of the Clean Air Act (CAA) for the 1997 and 2006 National Ambient Air Quality Standards (NAAQS) for fine particulate matter (PM2.5). The CAA requires that each state adopt and submit a SIP to support implementation, maintenance, and enforcement of each new or revised NAAQS promulgated by EPA. These SIPs are commonly referred to as ``infrastructure'' SIPs. The infrastructure requirements are designed to ensure that the structural components of each state's air quality management program are adequate to meet the state's responsibilities under the CAA. EPA is also proposing to approve two additional SIP submissions from Missouri, one addressing the Prevention of Significant Deterioration (PSD) program in Missouri, and another addressing the requirements applicable to any board or body that approves permits or enforcement orders under the CAA, both of which support requirements associated with infrastructure SIPs.

  5. Incorporating Quality Control Information in the Sensor Web

    NASA Astrophysics Data System (ADS)

    Devaraju, Anusuriya; Kunkel, Ralf; Bogena, Heye

    2013-04-01

    The rapid development of sensing technologies has led to the creation of large amounts of heterogeneous environmental observations. The Sensor Web provides wider access to sensors and observations via common protocols and specifications. Observations typically go through several levels of quality control and aggregation before they are made available to end-users. Raw data are usually inspected, and related quality flags are assigned. Data are gap-filled, and errors are removed. New data series may also be derived from one or more corrected data sets. Until now, it has been unclear how these kinds of information can be captured in the Sensor Web Enablement (SWE) framework. Apart from the quality measures (e.g., accuracy, precision, tolerance, or confidence), the levels of observational series, the changes applied, and the methods involved must be specified. It is important that this quality control information is well described and communicated to end-users to allow for better usage and interpretation of data products. In this paper, we describe how quality control information can be incorporated into the SWE framework. First, we introduce TERENO (TERrestrial ENvironmental Observatories), an initiative funded by the large research infrastructure program of the Helmholtz Association in Germany. The main goal of the initiative is to facilitate the study of long-term effects of climate and land use changes. The TERENO Online Data RepOsitORry (TEODOOR) is a software infrastructure that supports acquisition, provision, and management of observations within TERENO via SWE specifications and several other OGC web services. Next, we specify the changes made to the existing observational data model to incorporate quality control information. Here, we describe the underlying TERENO data policy in terms of provision and maintenance issues. We present data levels and their implementation within TEODOOR. The data levels are adapted from those used by other similar systems such as CUAHSI, EarthScope and WMO. Finally, we outline recommendations for future work.
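    The flag-and-level pipeline sketched in this abstract (inspect raw values, assign quality flags, gap-fill, promote to a higher level) could look roughly like the following. The thresholds, flag names, and interpolation rule here are invented for illustration and are not TEODOOR's actual scheme:

```python
# Hypothetical level-0 -> level-1 quality control step: flag raw readings,
# then gap-fill flagged points from valid neighbours (toy approach only).
import math

def quality_flag(value, lo=-40.0, hi=60.0):
    """Flag a raw reading: 'ok', 'range' (out of bounds), or 'missing'."""
    if value is None or (isinstance(value, float) and math.isnan(value)):
        return "missing"
    return "ok" if lo <= value <= hi else "range"

def to_level1(series):
    """Level 1: replace missing/out-of-range points by the mean of the
    nearest valid neighbours; leave None where no neighbours exist."""
    flagged = [(v, quality_flag(v)) for v in series]
    out = []
    for i, (v, f) in enumerate(flagged):
        if f == "ok":
            out.append(v)
        else:
            prev = next((x for x, g in reversed(flagged[:i]) if g == "ok"), None)
            nxt = next((x for x, g in flagged[i + 1:] if g == "ok"), None)
            out.append((prev + nxt) / 2 if prev is not None and nxt is not None else None)
    return out

print(to_level1([10.0, None, 12.0, 999.0, 14.0]))  # [10.0, 11.0, 12.0, 13.0, 14.0]
```

The point of recording which flags and methods were applied, as the paper argues, is that an end-user receiving the level-1 series can tell interpolated values apart from measured ones.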

  6. Cultured Construction: Global Evidence of the Impact of National Values on Piped-to-Premises Water Infrastructure Development.

    PubMed

    Kaminsky, Jessica A

    2016-07-19

    In 2016, the global community undertook the Sustainable Development Goals. One of these goals seeks to achieve universal and equitable access to safe and affordable drinking water for all people by the year 2030. In support of this undertaking, this paper seeks to discover the cultural work done by piped water infrastructure across 33 nations with developed and developing economies that have experienced change in the percentage of population served by piped-to-premises water infrastructure at the national level of analysis. To do so, I regressed the 1990-2012 change in piped-to-premises water infrastructure coverage against Hofstede's cultural dimensions, controlling for per capita GDP, the 1990 baseline level of coverage, percent urban population, overall 1990-2012 change in improved sanitation (all technologies), and per capita freshwater resources. Separate analyses were carried out for the urban, rural, and aggregate national contexts. Hofstede's dimensions provide a measure of cross-cultural difference; high or low scores are not in any way intended to represent better or worse but rather serve as a quantitative way to compare aggregate preferences for ways of being and doing. High scores in the cultural dimensions of Power Distance, Individualism-Collectivism, and Uncertainty Avoidance explain increased access to piped-to-premises water infrastructure in the rural context. Higher Power Distance and Uncertainty Avoidance scores are also statistically significant for increased coverage in the urban and national aggregate contexts. These results indicate that, as presently conceived, piped-to-premises water infrastructure fits best with spatial contexts that prefer hierarchy and centralized control. Furthermore, water infrastructure is understood to reduce uncertainty regarding the provision of individually valued benefits. 
The results of this analysis identify global trends that enable engineers and policy makers to design and manage more culturally appropriate and socially sustainable water infrastructure by better fitting technologies to user preferences.
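    The regression design described above can be sketched in miniature. This is an illustrative reconstruction with synthetic data, not the study's dataset or exact specification (only two of the listed controls are included, and all values are generated):

```python
# Illustrative only: coverage change regressed on Hofstede scores plus
# controls, with synthetic data and known true slopes of 0.2 and 0.1.
import numpy as np

rng = np.random.default_rng(0)
n = 33  # number of nations, as in the study

power_distance = rng.uniform(10, 100, n)
uncertainty_avoidance = rng.uniform(10, 100, n)
gdp_per_capita = rng.uniform(1, 60, n)      # control (hypothetical units)
baseline_coverage = rng.uniform(0, 90, n)   # 1990 baseline control

coverage_change = (0.2 * power_distance + 0.1 * uncertainty_avoidance
                   + rng.normal(0, 5, n))

# Ordinary least squares with an intercept, via numpy's solver.
X = np.column_stack([np.ones(n), power_distance, uncertainty_avoidance,
                     gdp_per_capita, baseline_coverage])
beta, *_ = np.linalg.lstsq(X, coverage_change, rcond=None)
print(beta.round(2))  # cultural-score slopes recover roughly 0.2 and 0.1
```

In the actual study the sign and significance of the estimated slopes on the cultural dimensions, not their raw magnitude, carry the interpretation.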

  7. Resilience in social insect infrastructure systems

    PubMed Central

    2016-01-01

    Both human and insect societies depend on complex and highly coordinated infrastructure systems, such as communication networks, supply chains and transportation networks. Like human-designed infrastructure systems, those of social insects are regularly subject to disruptions such as natural disasters, blockages or breaks in the transportation network, fluctuations in supply and/or demand, outbreaks of disease and loss of individuals. Unlike human-designed systems, there is no deliberate planning or centralized control system; rather, individual insects make simple decisions based on local information. How do these highly decentralized, leaderless systems deal with disruption? What factors make a social insect system resilient, and which factors lead to its collapse? In this review, we bring together literature on resilience in three key social insect infrastructure systems: transportation networks, supply chains and communication networks. We describe how systems differentially invest in three pathways to resilience: resistance, redirection or reconstruction. We suggest that investment in particular resistance pathways is related to the severity and frequency of disturbance. In the final section, we lay out a prospectus for future research. Human infrastructure networks are rapidly becoming decentralized and interconnected; indeed, more like social insect infrastructures. Human infrastructure management might therefore learn from social insect researchers, who can in turn make use of the mature analytical and simulation tools developed for the study of human infrastructure resilience. PMID:26962030

  8. Data Management challenges in Astronomy and Astroparticle Physics

    NASA Astrophysics Data System (ADS)

    Lamanna, Giovanni

    2015-12-01

    Astronomy and Astroparticle Physics domains are experiencing a deluge of data with the next generation of facilities prioritised in the European Strategy Forum on Research Infrastructures (ESFRI), such as SKA, CTA, KM3Net and with other world-class projects, namely LSST, EUCLID, EGO, etc. The new ASTERICS-H2020 project brings together the concerned scientific communities in Europe to work together to find common solutions to their Big Data challenges, their interoperability, and their data access. The presentation will highlight these new challenges and the work being undertaken also in cooperation with e-infrastructures in Europe.

  9. Guide to Nongovernmental Organizations for the Military. A primer for the military about private, voluntary, and nongovernmental organizations operating in humanitarian emergencies globally

    DTIC Science & Technology

    2009-01-01

    ...mission or charter; can often become too removed and lose influence over NGO officers and staff. Problematic if poor decision-making becomes common...$4,604.87 $11,565.24 40 Social infrastructure and services $1,216.31 $3,252.96 37 Economic infrastructure $3,121.84 $11,793.81 26 Agriculture, forestry...bibliography/en. "Public health leaders using social media to convey emergencies: New tools a boon." Social media tools such as Twitter and

  10. Towards a single seismological service infrastructure in Europe

    NASA Astrophysics Data System (ADS)

    Spinuso, A.; Trani, L.; Frobert, L.; Van Eck, T.

    2012-04-01

    In the last five years, services and data providers within the seismological community in Europe have focused their efforts on migrating their archives towards a Service Oriented Architecture (SOA). This process pragmatically follows technological trends and available solutions, aiming to effectively improve all data stewardship activities. These advancements are possible thanks to the cooperation and follow-ups of several EC infrastructural projects that, by looking at general-purpose techniques, combine their developments envisioning a multidisciplinary platform for earth observation as the final common objective (EPOS, the European Plate Observing System). One of the first results of this effort is the Earthquake Data Portal (http://www.seismicportal.eu), which provides a collection of tools to discover, visualize and access a variety of seismological data sets, such as seismic waveforms, accelerometric data, earthquake catalogs and parameters. The Portal offers a cohesive distributed search environment, linking data search and access across multiple data providers through interactive web services, map-based tools and diverse command-line clients. Our work continues under other EU FP7 projects; here we address initiatives in two of them. The NERA (Network of European Research Infrastructures for Earthquake Risk Assessment and Mitigation) project will implement a Common Services Architecture based on OGC service APIs, in order to provide resource-oriented common interfaces across the data access and processing services. This will improve interoperability between tools and across projects, enabling the development of higher-level applications that can uniformly access the data and processing services of all participants. This effort will be conducted jointly with the VERCE project (Virtual Earthquake and Seismology Research Community for Europe).
    VERCE aims to enable seismologists to exploit the wealth of seismic data within a data-intensive computation framework tailored to the specific needs of the community. It will provide a new interoperable infrastructure as the computational backbone lying behind the publicly available interfaces. VERCE will have to face the challenges of implementing a service-oriented architecture that provides an efficient layer between the data and grid infrastructures, coupling HPC data analysis and HPC data modeling applications through the execution of workflows and data sharing mechanisms. Online registries of interoperable workflow components, storage of intermediate results, and data provenance are aspects currently under investigation to make the VERCE facilities usable by a wide range of users, data and service providers. For these purposes, the adoption of a Digital Object Architecture, creating online catalogs that reference and semantically describe all these distributed resources (datasets, computational processes and derivative products), is seen as one of the viable solutions for monitoring and steering the usage of the infrastructure, increasing its efficiency and the cooperation within the community.

  11. THE EPA MULTIMEDIA INTEGRATED MODELING SYSTEM SOFTWARE SUITE

    EPA Science Inventory

    The U.S. EPA is developing a Multimedia Integrated Modeling System (MIMS) framework that will provide a software infrastructure or environment to support constructing, composing, executing, and evaluating complex modeling studies. The framework will include (1) common software ...

  12. Increasing accuracy of vehicle detection from conventional vehicle detectors - counts, speeds, classification, and travel time.

    DOT National Transportation Integrated Search

    2014-09-01

    Vehicle classification is an important traffic parameter for transportation planning and infrastructure : management. Length-based vehicle classification from dual loop detectors is among the lowest cost : technologies commonly used for collecting th...

  13. The Infrastructure Necessary to Support a Sustainable Material Protection, Control and Accounting (MPC&A) Program in Russia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bachner, Katherine M.; Mladineo, Stephen V.

    The NNSA Material Protection, Control, and Accounting (MPC&A) program has been engaged for fifteen years in upgrading the security of nuclear materials in Russia. Part of the effort has been to establish the conditions necessary to ensure the long-term sustainability of nuclear security. A sustainable program of nuclear security requires the creation of an indigenous infrastructure, starting with sustained high-level government commitment. This includes organizational development, training, maintenance, regulations, inspections, and a strong nuclear security culture. The provision of modern physical protection, control, and accounting equipment to the Russian Federation alone is not sufficient. Comprehensive infrastructure projects support the Russian Federation's ability to maintain the risk reduction achieved through upgrades to the equipment. To illustrate the contributions to security, and challenges of implementation, this paper discusses the history and next steps for an indigenous Tamper Indication Device (TID) program, and a Radiation Portal Monitoring (RPM) program.

  14. The Role of Transformative Leadership, ICT-Infrastructure and Learning Climate in Teachers' Use of Digital Learning Materials during Their Classes

    ERIC Educational Resources Information Center

    Vermeulen, Marjan; Kreijns, Karel; van Buuren, Hans; Van Acker, Frederik

    2017-01-01

    This study investigated whether school organizational variables (i.e., transformative leadership (TL), ICT-infrastructure (technical and social), and organizational learning climate) were related to teachers' dispositional variables (i.e., attitude, perceived norm, and perceived behavior control [PBC]). The direct and indirect influences of the…

  15. Creating Technology Infrastructures in a Rural School District: A Partnership Approach.

    ERIC Educational Resources Information Center

    Jensen, Dennis

    Rural schools face significant challenges in upgrading their technology infrastructures. Rural school districts tend to have older school buildings that have multiple problems and lack climate control, adequate space, and necessary wiring. In rural districts, it may be difficult to find the leadership and expertise needed to provide professional…

  16. 76 FR 19122 - Record of Decision (ROD) for Authorizing the Use of Outer Continental Shelf (OCS) Sand Resources...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-06

    ... Aeronautics and Space Administration's Wallops Flight Facility Shoreline Restoration and Infrastructure... authorize the use of OCS sand resources in National Aeronautics and Space Administration's (NASA's) Wallops... infrastructure on the WFF (such as rocket launch pads, runways, and launch control centers) valued at over $1...

  17. Performance results from Small- and Large-Scale System Monitoring and green Infrastructure in Kansas City - slides

    EPA Science Inventory

    In 2010, Kansas City, MO (KCMO) signed a consent decree with EPA on combined sewer overflows. The City decided to use adaptive management in order to extensively utilize green infrastructure (GI) in lieu of, and in addition to, structural controls. KCMO installed 130 GI storm co...

  18. 77 FR 65125 - Approval and Promulgation of Implementation Plans; Georgia 110(a)(1) and (2) Infrastructure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-25

    ... controls are enforced through the associated SIP rules or Federal Implementation Plans (FIPs). Any purchase... Promulgation of Implementation Plans; Georgia 110(a)(1) and (2) Infrastructure Requirements for the 1997 and... Agency (EPA). ACTION: Final rule. SUMMARY: EPA is taking final action to approve the State Implementation...

  19. Conception of the system for traffic measurements based on piezoelectric foils

    NASA Astrophysics Data System (ADS)

    Płaczek, M.

    2016-08-01

    A concept of a mechatronic system for traffic measurements based on piezoelectric transducers used as sensors is presented. The aim of the project is to theoretically and experimentally analyse the dynamic response of road infrastructure excited by vehicle motion. The subject of the project therefore lies on the borderline between civil and mechanical engineering and covers a wide range of issues in both areas. To measure the dynamic response of the tested pieces of road infrastructure, the application of piezoelectric transducers, in particular piezoelectric films (MFC, Macro Fiber Composite), is proposed. The purpose is to verify the possibility of using composite piezoelectric transducers as sensors in traffic surveillance systems, i.e. innovative methods of monitoring road infrastructure and traffic. This paper reports work done in order to obtain basic information about the analysed systems and their behaviour under excitation by passing vehicles. It is very important to verify whether such systems can be driven by analysis of the dynamic response of road infrastructure measured using piezoelectric transducers. Obtained results show that it could be possible.
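    A minimal sketch of how such a sensor signal might be turned into a traffic count: threshold crossings in the transducer voltage are taken as passage events. The threshold value and the sample trace below are invented for illustration, not measurements from the project:

```python
# Toy event counter for a piezoelectric strip signal (hypothetical values):
# each rising crossing of the threshold is counted as one passage event.

def count_passages(signal, threshold=0.5):
    """Count rising threshold crossings in a sampled voltage trace."""
    count = 0
    above = False
    for v in signal:
        if not above and v >= threshold:
            count += 1
            above = True
        elif above and v < threshold:
            above = False
    return count

reading = [0.0, 0.1, 0.9, 1.2, 0.3, 0.05, 0.8, 1.0, 0.2]
print(count_passages(reading))  # 2
```

A real system would additionally need debouncing, temperature compensation, and calibration of the piezoelectric response, which is precisely what the dynamic-response analysis in the project is meant to inform.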

  20. EUDAT: A New Cross-Disciplinary Data Infrastructure For Science

    NASA Astrophysics Data System (ADS)

    Lecarpentier, Damien; Michelini, Alberto; Wittenburg, Peter

    2013-04-01

    In recent years significant investments have been made by the European Commission and European member states to create a pan-European e-Infrastructure supporting multiple research communities. As a result, a European e-Infrastructure ecosystem is currently taking shape, with communication networks, distributed grids and HPC facilities providing European researchers from all fields with state-of-the-art instruments and services that support the deployment of new research facilities on a pan-European level. However, the accelerated proliferation of data - newly available from powerful new scientific instruments, simulations and the digitization of existing resources - has created a new impetus for increasing efforts and investments in order to tackle the specific challenges of data management, and to ensure a coherent approach to research data access and preservation. EUDAT is a pan-European initiative that started in October 2011 and which aims to help overcome these challenges by laying out the foundations of a Collaborative Data Infrastructure (CDI) in which centres offering community-specific support services to their users could rely on a set of common data services shared between different research communities. Although research communities from different disciplines have different ambitions and approaches - particularly with respect to data organization and content - they also share many basic service requirements. This commonality makes it possible for EUDAT to establish common data services, designed to support multiple research communities, as part of this CDI. During the first year, EUDAT has been reviewing the approaches and requirements of a first subset of communities from linguistics (CLARIN), solid earth sciences (EPOS), climate sciences (ENES), environmental sciences (LIFEWATCH), and biological and medical sciences (VPH), and shortlisted four generic services to be deployed as shared services on the EUDAT infrastructure. 
    These services are site-to-site data replication, data staging to compute facilities, metadata, and easy storage. A number of enabling services, such as distributed authentication and authorization, persistent identifiers, hosting of services, workspaces and a centre registry, were also discussed. The services being designed in EUDAT will thus be of interest to a broad range of communities that lack their own robust data infrastructures, or that are simply looking for additional storage and/or computing capacities to better access, use, re-use, and preserve their data. The first pilots were completed in 2012, and a pre-production operational infrastructure was established, comprising five sites (RZG, CINECA, SARA, CSC, FZJ), offering 480TB of online storage and 4PB of near-line (tape) storage, and initially serving four user communities (ENES, EPOS, CLARIN, VPH). These services shall be available to all communities in a production environment by 2014. Although EUDAT has initially focused on a subset of research communities, it aims to engage with other communities interested in adapting its solutions or contributing to the design of the infrastructure. Discussions with other research communities, belonging to the fields of environmental sciences, biomedical science, physics, social sciences and humanities, have already begun and are following a pattern similar to the one adopted with the initial communities. The next step will consist of integrating representatives from these communities into the existing pilots and task forces so as to include them in the process of designing the services and, ultimately, shaping the future CDI.

  1. UNH Data Cooperative: A Cyber Infrastructure for Earth System Studies

    NASA Astrophysics Data System (ADS)

    Braswell, B. H.; Fekete, B. M.; Prusevich, A.; Gliden, S.; Magill, A.; Vorosmarty, C. J.

    2007-12-01

    Earth system scientists and managers have a continuously growing demand for a wide array of earth observations derived from various data sources, including (a) modern satellite retrievals, (b) "in-situ" records, (c) various simulation outputs, and (d) assimilated data products combining model results with observational records. The sheer quantity of data and formatting inconsistencies make it difficult for users to take full advantage of this important information resource. Thus the system could benefit from a thorough retooling of our current data processing procedures and infrastructure. Emerging technologies, like OPeNDAP and OGC map services, open standard data formats (NetCDF, HDF), and data cataloging systems (NASA ECHO, the Global Change Master Directory, etc.), are providing the basis for a new approach to data management and processing, where web services are increasingly designed for computer-to-computer communication without human interaction, and complex analysis can be carried out over distributed computer resources interconnected via cyberinfrastructure. The UNH Earth System Data Collaborative is designed to utilize these emerging web technologies to offer new means of access to earth system data. While the UNH Data Collaborative serves a wide array of data, ranging from weather station data (Climate Portal) to ocean buoy records and ship tracks (Portsmouth Harbor Initiative) to land cover characteristics, the underlying data architecture shares common components for data mining and data dissemination via web services. Perhaps the most unique element of the UNH Data Cooperative's IT infrastructure is its prototype modeling environment for regional ecosystem surveillance over the Northeast corridor, which allows the integration of complex earth system model components with the Cooperative's data services.
    While the complexity of the IT infrastructure needed to perform complex computations is continuously increasing, scientists are often forced to spend a considerable amount of time solving basic data management and preprocessing tasks and dealing with low-level computational design problems such as parallelization of model codes. Our modeling infrastructure is designed to take care of the bulk of the common tasks found in complex earth system models, such as I/O handling, computational domain and time management, and parallel execution of modeling tasks. The modeling infrastructure allows scientists to focus on the numerical implementation of the physical processes on single computational objects (typically grid cells), while the framework takes care of preprocessing the input data, establishing the data exchange between computational objects, and executing the science code. In our presentation, we will discuss the key concepts of our modeling infrastructure. We will demonstrate integration of our modeling framework with data services offered by the UNH Earth System Data Collaborative via web interfaces. We will lay out the road map to turn our prototype modeling environment into a truly community framework for a wide range of earth system scientists and environmental managers.
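    The division of labour described above, where the framework owns the grid loop and data exchange while the scientist supplies only the per-cell rule, can be sketched in miniature. The function names, the 1-D grid, and the smoothing rule are illustrative, not the UNH framework's actual API:

```python
# Hypothetical miniature of the framework/science-code split: run_model
# handles the domain loop and neighbour exchange; cell_step is the only
# piece the scientist writes.

def run_model(grid, cell_step, steps=1):
    """Apply cell_step(value, neighbours) to every cell of a 1-D grid,
    using edge replication at the boundaries."""
    for _ in range(steps):
        grid = [
            cell_step(grid[i], (grid[i - 1] if i > 0 else grid[i],
                                grid[i + 1] if i < len(grid) - 1 else grid[i]))
            for i in range(len(grid))
        ]
    return grid

# 'Science code': simple relaxation toward the neighbour mean.
smooth = lambda v, nb: 0.5 * v + 0.25 * (nb[0] + nb[1])
print(run_model([0.0, 4.0, 0.0], smooth))  # [1.0, 2.0, 1.0]
```

In a real framework the same separation lets the loop be parallelized or distributed without touching the science code, which is the point made in the abstract.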

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Ching-Yen; Youn, Edward; Chynoweth, Joshua

    As Electric Vehicles (EVs) increase in number, charging infrastructure becomes more important. When there is a power shortage during the day, the charging infrastructure should have the option either to shut off power to the charging stations or to lower the power delivered to the EVs in order to satisfy the needs of the grid. This paper proposes a design for a smart charging infrastructure capable of providing power to several EVs from one circuit by multiplexing power and providing charge control and safety systems to prevent electric shock. The safety design is implemented at different levels that include both the server and the smart charging stations. With this smart charging infrastructure, a shortage of energy in a local grid could be addressed by our EV charging management system.
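    The multiplexing idea can be sketched as a simple allocation rule: one circuit's power budget is split among active sessions, and throttled or partially shut off when the grid constrains the budget. The power levels, the minimum charge rate, and the even-split policy below are assumptions for illustration, not the paper's algorithm:

```python
# Hypothetical power-sharing rule for one circuit feeding several EVs.

def allocate(circuit_limit_kw, requests_kw, min_kw=1.4):
    """Split a circuit's power budget evenly across charging sessions,
    capped by each vehicle's request; if even the minimum rate cannot be
    met for all, serve as many sessions as the budget allows."""
    if not requests_kw:
        return []
    share = circuit_limit_kw / len(requests_kw)
    if share < min_kw:
        served = int(circuit_limit_kw // min_kw)
        return [min_kw if i < served else 0.0 for i in range(len(requests_kw))]
    return [min(share, req) for req in requests_kw]

# A 6.6 kW circuit shared by three vehicles:
print([round(p, 2) for p in allocate(6.6, [6.6, 3.3, 6.6])])  # [2.2, 2.2, 2.2]
```

Lowering `circuit_limit_kw` at the server then models the grid-driven throttling the abstract describes, down to shutting sessions off entirely.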

  3. Innovative neuro-fuzzy system of smart transport infrastructure for road traffic safety

    NASA Astrophysics Data System (ADS)

    Beinarovica, Anna; Gorobetz, Mikhail; Levchenkov, Anatoly

    2017-09-01

    The proposed study describes the application of neural networks and fuzzy logic in transport control to improve safety by having intelligent infrastructure devices evaluate the risk of accidents. Risk evaluation is made according to multiple criteria: danger, changeability, and the influence of changes on risk increase. Neuro-fuzzy algorithms are described and proposed for the task. The novelty of the proposed system is supported by a deep analysis of known studies in the field. The structure of the neuro-fuzzy system for risk evaluation and its mathematical model are described in the paper. A simulation model of the intelligent devices for transport infrastructure is proposed to simulate different situations, assess the risks, and propose possible actions for the infrastructure or vehicles to minimize the risk of accidents.
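    The multi-criteria fuzzy part of such a system might be sketched as follows; this shows only the generic fuzzy-combination idea (membership functions plus a max-min rule), with invented membership parameters and a made-up rule, not the paper's actual neuro-fuzzy model:

```python
# Hypothetical fuzzy risk evaluation over the three criteria named above:
# danger, changeability, and influence of changes, each scored in [0, 1].

def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def risk_score(danger, changeability, influence):
    """Toy rule: risk is high if danger is high AND either changeability
    or influence is high (min for fuzzy AND, max for fuzzy OR)."""
    high = lambda v: triangular(v, 0.4, 1.0, 1.6)  # 'high' set on [0, 1]
    return min(high(danger), max(high(changeability), high(influence)))

print(round(risk_score(0.9, 0.5, 0.8), 3))  # 0.667
```

In the neuro-fuzzy setting described by the paper, the membership parameters and rule weights would be learned by the neural network rather than fixed by hand as here.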

  4. SeaDataNet: Pan-European infrastructure for ocean and marine data management

    NASA Astrophysics Data System (ADS)

    Fichaut, M.; Schaap, D.; Maudire, G.; Manzella, G. M. R.

    2012-04-01

    The overall objective of the SeaDataNet project is to upgrade the present SeaDataNet infrastructure into an operationally robust, state-of-the-art pan-European infrastructure that provides up-to-date, high-quality access to ocean and marine metadata, data and data products originating from the data acquisition activities of all engaged coastal states, by setting, adopting and promoting common data management standards and by realising technical and semantic interoperability with other relevant data management systems and initiatives, on behalf of science, environmental management, policy making, and the economy. SeaDataNet is undertaken by the National Oceanographic Data Centres (NODCs) and marine information services of major research institutes from 31 coastal states bordering the European seas, and also includes Satellite Data Centres, expert modelling centres and the international organisations IOC, ICES and EU-JRC in its network. Its 40 data centres are highly skilled, have been actively engaged in data management for many years, and have the essential capabilities and facilities for data quality control, long-term stewardship, retrieval and distribution. SeaDataNet undertakes activities to achieve data access and data product services that meet the requirements of end-users and intermediate user communities, such as the GMES Marine Core Services (e.g. MyOcean), establishing SeaDataNet as the core data management component of the EMODNet infrastructure and contributing on behalf of Europe to global portal initiatives, such as the IOC/IODE Ocean Data Portal (ODP) and GEOSS. Moreover, it aims to achieve INSPIRE compliance and to contribute to the INSPIRE process of developing implementing rules for oceanography.
    • As part of the SeaDataNet upgrading and capacity building, training courses will be organised for data managers and technicians at the data centres.
    For the data managers it is important that they learn to work with the upgraded common SeaDataNet formats, procedures and software tools for preparing and updating metadata, processing and quality control of data, presentation of data in viewing services, and production of data products.
    • SeaDataNet maintains and operates several discovery services giving overviews of marine organisations in Europe and their engagement in marine research projects, management of large datasets, and data acquisition by research vessels and monitoring programmes for the European seas and global oceans:
      o European Directory of Marine Environmental Data (EDMED) (at present > 4300 entries from more than 600 data holding centres in Europe) is a comprehensive reference to the marine data and sample collections held within Europe, providing marine scientists, engineers and policy makers with a simple discovery mechanism. It covers all marine environmental disciplines and needs regular maintenance.
      o European Directory of Marine Environmental Research Projects (EDMERP) (at present > 2200 entries from more than 300 organisations in Europe) gives an overview of research projects relating to the marine environment that are relevant in the context of the data sets and data acquisition activities (cruises, in situ monitoring networks, ...) covered in SeaDataNet. This needs regular updating, following activities by data holding institutes preparing metadata references for EDMED, EDIOS, CSR and CDI.
      o Cruise Summary Reports (CSR) directory (at present > 43000 entries) provides a coarse-grained inventory for tracking oceanographic data collected by research vessels.
      o European Directory of Oceanographic Observing Systems (EDIOS) (at present > 10000 entries) is an initiative of EuroGOOS and gives an overview of the ocean measuring and monitoring systems operated by European countries.
    • European Directory of Marine Organisations (EDMO) (at present > 2000 entries) contains the contact information and activity profiles of the organisations whose data and activities are described by the discovery services.
    • Common Vocabularies (at present > 120000 terms in > 100 lists) cover a broad spectrum of ocean and marine disciplines. The common terms are used to mark up metadata, data and data products in a consistent and coherent way. Governance is regulated by an international board.
    • Common Data Index (CDI) data discovery and access service: SeaDataNet provides online unified access, via its portal website, to the vast resources of marine and ocean datasets managed by all the connected distributed data centres. The Common Data Index (CDI) service is the key discovery and delivery service. It enables users to gain detailed insight into the availability and geographical distribution of marine data archived at the connected data centres, and it provides the means for downloading datasets in common formats via a transaction mechanism.

  5. The Contribution of the Geodetic Community (WG4) to EPOS

    NASA Astrophysics Data System (ADS)

    Fernandes, R. M. S.; Bastos, L. C.; Bruyninx, C.; D'Agostino, N.; Dousa, J.; Ganas, A.; Lidberg, M.; Nocquet, J.-M.

    2012-04-01

    WG4 - "EPOS Geodetic Data and Infrastructure" is the working group of the EPOS project responsible for defining and preparing the integration of the existing pan-European geodetic infrastructures into a single, consistent future infrastructure that supports the European geosciences, which is the ultimate goal of the EPOS project. WG4 is formed by representatives of the participating EPOS countries and of EUREF (European Reference Frame), which also ensures the inclusion of, and contact with, countries that are not formally part of the current phase of EPOS. In reality, the fact that Europe comprises many countries (with different laws and policies) and lacks an infrastructure similar to UNAVCO (which concentrates the effort of the local geoscience community) increases the difficulty of creating a common geodetic infrastructure serving not only the entire geoscience community but also many other areas of great socio-economic impact. The benefits of creating such an infrastructure (shared and easily accessed by all) are evident for optimizing existing and future geodetic resources. This presentation details the work being carried out within WG4 on defining strategies towards the implementation of solutions that will permit end-users, and in particular geoscientists, to access geodetic data, derived solutions, and associated metadata through transparent and uniform processes. Issues discussed include access to high-rate data in near real-time, storage and backup of historical and future data, the sustainability of the networks in order to achieve long-term stability of the observation infrastructure, seamless access to the data, open data policies, and processing tools.

  6. Collaboration and decision making tools for mobile groups

    NASA Astrophysics Data System (ADS)

    Abrahamyan, Suren; Balyan, Serob; Ter-Minasyan, Harutyun; Degtyarev, Alexander

    2017-12-01

    Nowadays the use of distributed collaboration tools is widespread in many areas of human activity, but a lack of mobility and dependency on specific equipment create difficulties and slow the development and integration of such technologies. Mobile technologies allow individuals to interact with each other without the need for traditional office space and regardless of location. Hence, realizing special infrastructures on mobile platforms with the help of ad hoc wireless local networks could eliminate hardware attachment and also be useful from a scientific standpoint. Implementations of tools based on mobile infrastructures range from basic internet messengers to complex software for online collaboration in large-scale workgroups. Despite the growth of mobile infrastructures, applied distributed solutions for group decision-making and e-collaboration are not common. In this article we propose a software complex for real-time collaboration and decision-making based on mobile devices, describe its architecture, and evaluate its performance.

  7. Modernized build and test infrastructure for control software at ESO: highly flexible building, testing, and automatic quality practices for telescope control software

    NASA Astrophysics Data System (ADS)

    Pellegrin, F.; Jeram, B.; Haucke, J.; Feyrin, S.

    2016-07-01

    The paper describes the introduction of a new automated build and test infrastructure, based on the open-source software Jenkins, into the ESO Very Large Telescope control software to replace the pre-existing in-house solution. A brief introduction to software quality practices is given, followed by a description of the previous solution, its limitations, and new upcoming requirements. The modifications required to adapt to the new system are described, along with how these were implemented in the current software and the results obtained. An overview of how the new system may be used in future projects is also presented.
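The staged, fail-fast build-and-test flow that systems like Jenkins automate can be sketched in miniature (stage names and steps are invented for illustration; this is not the ESO pipeline):

```python
# Minimal sketch of a CI-style staged build: run each stage in order,
# stop at the first failure, and record per-stage status (pass/fail).
# Stage names and steps are invented; this is not the ESO/Jenkins setup.

def run_pipeline(stages):
    """stages: list of (name, callable) pairs. Returns dict name -> status."""
    results = {}
    for name, step in stages:
        try:
            step()
            results[name] = "pass"
        except Exception:
            results[name] = "fail"
            break  # fail fast, as a typical build pipeline does
    return results

report = run_pipeline([
    ("build", lambda: None),             # pretend compilation succeeds
    ("unit-tests", lambda: None),        # pretend tests succeed
    ("static-analysis", lambda: 1 / 0),  # pretend this stage fails
    ("package", lambda: None),           # never reached after a failure
])
```

The fail-fast ordering is the point: later stages (packaging, deployment) never run on a build that has already failed a quality gate.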

  8. US-CERT Control System Center Input/Output (I/O) Conceptual Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2005-02-01

    This document was prepared for the US-CERT Control Systems Center of the National Cyber Security Division (NCSD) of the Department of Homeland Security (DHS). DHS has been tasked under the Homeland Security Act of 2002 to coordinate the overall national effort to enhance the protection of the national critical infrastructure. Homeland Security Presidential Directive HSPD-7 directs the federal departments to identify and prioritize critical infrastructure and protect it from terrorist attack. The US-CERT National Strategy for Control Systems Security was prepared by the NCSD to address the control system security component addressed in the National Strategy to Secure Cyberspace and the National Strategy for the Physical Protection of Critical Infrastructures and Key Assets. The US-CERT National Strategy for Control Systems Security identified five high-level strategic goals for improving cyber security of control systems; the I/O upgrade described in this document supports these goals. The vulnerability assessment Test Bed, located in the Information Operations Research Center (IORC) facility at Idaho National Laboratory (INL), consists of a cyber test facility integrated with multiple test beds that simulate the nation's critical infrastructure. The fundamental mission of the Test Bed is to provide industry owner/operators, system vendors, and multi-agency partners of the INL National Security Division a platform for vulnerability assessments of control systems. The Input/Output (I/O) upgrade to the Test Bed (see Work Package 3.1 of the FY-05 Annual Work Plan) will provide for the expansion of assessment capabilities within the IORC facility. It will also provide capabilities to connect test beds within the Test Range and other Laboratory resources.
This will allow real time I/O data input and communication channels for full replications of control systems (Process Control Systems [PCS], Supervisory Control and Data Acquisition Systems [SCADA], and components). This will be accomplished through the design and implementation of a modular infrastructure of control system, communications, networking, computing and associated equipment, and measurement/control devices. The architecture upgrade will provide a flexible patching system providing a quick "plug and play" configuration through various communication paths to gain access to live I/O running over specific protocols. This will allow for in-depth assessments of control systems in a true-to-life environment. The full I/O upgrade will be completed through a two-phased approach. Phase I, funded by DHS, expands the capabilities of the Test Bed by developing an operational control system in two functional areas, the Science & Technology Applications Research (STAR) Facility and the expansion of various portions of the Test Bed. Phase II (see Appendix A), funded by other programs, will complete the full I/O upgrade to the facility.

  9. Proof of concept for using unmanned aerial vehicles for high mast pole and bridge inspections.

    DOT National Transportation Integrated Search

    2015-06-01

    Bridges and high mast luminaires (HMLs) are key components of transportation infrastructures. Effective inspection processes are crucial to maintain the structural integrity of these components. The most common approach for inspections is visual ...

  10. Report of the workshop on nanotechnology for cement and concrete.

    DOT National Transportation Integrated Search

    2007-09-05

    "Concrete as a material is the most commonly used material (other than water) on the planet. Its significance to the basic infrastructure of modern civilization is immeasurable, and it is difficult to imagine life without it. However, concrete as...

  11. Re-naturing the city: a role for sustainable infrastructure and buildings

    Treesearch

    Hillary Brown

    2009-01-01

    One of 18 articles inspired by the Meristem 2007 Forum, "Restorative Commons for Community Health." The articles include interviews, case studies, thought pieces, and interdisciplinary theoretical works that explore the relationship between human health and the urban...

  12. Quantifying surgical and anesthetic availability at primary health facilities in Mongolia.

    PubMed

    Spiegel, David A; Choo, Shelly; Cherian, Meena; Orgoi, Sergelen; Kehrer, Beat; Price, Raymond R; Govind, Salik

    2011-02-01

    Significant barriers limit the safe and timely provision of surgical and anaesthetic care in low- and middle-income countries. Nearly one-half of Mongolia's population resides in rural areas where the austere geography makes travel for adequate surgical care very difficult. Our goal was to characterize the availability of surgical and anaesthetic services, in terms of infrastructure capability, physical resources (supplies and equipment), and human resources for health at primary level health facilities in Mongolia. A situational analysis of the capacity to deliver emergency and essential surgical care (EESC) was performed in a nonrandom sample of 44 primary health facilities throughout Mongolia. Significant shortfalls were noted in the capacity to deliver surgical and anesthetic services. Deficiencies in infrastructure and supplies were common, and there were no trained surgeons or anaesthesiologists at any of the health facilities sampled. Most procedures were performed by general doctors and paraprofessionals, and occasionally visiting surgeons from higher levels of the health system. While basic interventions such as suturing or abscess drainage were commonly performed, the availability of many essential interventions was absent at a significant number of facilities. This situational analysis of the availability of essential surgical and anesthetic services identified significant deficiencies in infrastructure, supplies, and equipment, as well as a lack of human resources at the primary referral level facilities in Mongolia. Given the significant travel distances to secondary level facilities for the majority of the rural population, there is an urgent need to strengthen the delivery of essential surgical and anaesthetic services at the primary referral level (soum and intersoum). 
This will require a multidisciplinary, multi-sectoral effort aimed to improve infrastructure, procure and maintain essential equipment and supplies, and train appropriate health professionals.

  13. The Forgotten Side of School Finance Equity: The Role of Infrastructure Funding in Student Success

    ERIC Educational Resources Information Center

    Crampton, Faith E.; Thompson, David C.; Vesely, Randall S.

    2004-01-01

    Traditionally, local school districts have shouldered the burden of funding school infrastructure in the name of local control, relying upon local property tax revenues and the willingness of local voters to approve bond issues. Given vast disparities in school districts' property wealth, gross inequities in school facilities will remain without…

  14. Developing an Information Infrastructure To Support Information Retrieval: Towards a Theory of Clustering Based in Classification.

    ERIC Educational Resources Information Center

    Micco, Mary; Popp, Rich

    Techniques for building a world-wide information infrastructure by reverse engineering existing databases to link them in a hierarchical system of subject clusters to create an integrated database are explored. The controlled vocabulary of the Library of Congress Subject Headings is used to ensure consistency and group similar items. Each database…

  15. Closing the Gap: Cybersecurity for U.S. Forces and Commands

    DTIC Science & Technology

    2017-03-30

    Dickson, Ph.D., Professor of Military Studies, JAWS Thesis Advisor; Kevin Therrien, Col, USAF, Committee Member; Stephen Rogers, Colonel, USA, Director... "...infrastructures, and includes the Internet, telecommunications networks, computer systems, and embedded processors and controllers in critical industries."

  16. Isothermal Recombinase Polymerase amplification (RPA) of Schistosoma haematobium DNA and oligochromatographic lateral flow detection.

    PubMed

    Rosser, A; Rollinson, D; Forrest, M; Webster, B L

    2015-09-04

    Accurate diagnosis of urogenital schistosomiasis is vital for surveillance/control programs. Amplification of schistosome DNA in urine by PCR is sensitive and specific but requires infrastructure, financial resources and skilled personnel, often not available in endemic areas. Recombinase Polymerase Amplification (RPA) is an isothermal DNA amplification/detection technology that is simple, rapid, portable and requires few resources. Here a Schistosoma haematobium RPA assay was developed and adapted so that DNA amplicons could be detected using oligochromatographic Lateral Flow (LF) strips. The assay successfully amplified S. haematobium DNA at 30-45 °C within 10 min and was sensitive to a lower limit of 100 fg of DNA. The assay also succeeded with the addition of crude urine, up to 5% of the total reaction volume. Cross-amplification occurred with other schistosome species but not with other common urine microorganisms. The LF-RPA assay developed here can amplify and detect low levels of S. haematobium DNA. Reactions are rapid, require low temperatures, and positive reactions are interpreted using lateral flow strips, reducing the need for infrastructure and resources. This, together with an ability to withstand inhibitors within urine, makes RPA a promising technology for further development as a molecular diagnostic tool for urogenital schistosomiasis.

  17. WeaselBoard :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mulder, John C.; Schwartz, Moses Daniel; Berg, Michael J.

    2013-10-01

    Critical infrastructures, such as electrical power plants and oil refineries, rely on programmable logic controllers (PLCs) to control essential processes. State of the art security cannot detect attacks on PLCs at the hardware or firmware level. This renders critical infrastructure control systems vulnerable to costly and dangerous attacks. WeaselBoard is a PLC backplane analysis system that connects directly to the PLC backplane to capture backplane communications between modules. WeaselBoard forwards inter-module traffic to an external analysis system that detects changes to process control settings, sensor values, module configuration information, firmware updates, and process control program (logic) updates. WeaselBoard provides zero-day exploit detection for PLCs by detecting changes in the PLC and the process. This approach to PLC monitoring is protected under U.S. Patent Application 13/947,887.
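The change detection described above can be illustrated, in much simplified form, as a diff between a trusted baseline and a live snapshot of controller state. Field names and values below are invented for illustration; this is not the WeaselBoard implementation:

```python
# Illustrative sketch of baseline change detection in the spirit described
# above: compare a live snapshot of controller settings against a trusted
# baseline and flag any added, removed, or altered keys.
# All field names and values are invented.

def detect_changes(baseline, snapshot):
    """Return a sorted list of (key, kind) findings."""
    findings = []
    for key in baseline.keys() | snapshot.keys():
        if key not in snapshot:
            findings.append((key, "removed"))
        elif key not in baseline:
            findings.append((key, "added"))
        elif baseline[key] != snapshot[key]:
            findings.append((key, "changed"))
    return sorted(findings)

baseline = {"setpoint": 75.0, "firmware": "1.4.2", "logic_crc": "a1b2"}
snapshot = {"setpoint": 95.0, "firmware": "1.4.2", "backdoor": "on"}

alerts = detect_changes(baseline, snapshot)
# flags the altered setpoint, the missing logic checksum, and the new key
```

A zero-day-capable monitor works on this principle: it needs no attack signatures, only a trustworthy record of what the controller state should be.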

  18. Software defined networking (SDN) over space division multiplexing (SDM) optical networks: features, benefits and experimental demonstration.

    PubMed

    Amaya, N; Yan, S; Channegowda, M; Rofoee, B R; Shu, Y; Rashidi, M; Ou, Y; Hugues-Salas, E; Zervas, G; Nejabati, R; Simeonidou, D; Puttnam, B J; Klaus, W; Sakaguchi, J; Miyazawa, T; Awaji, Y; Harai, H; Wada, N

    2014-02-10

    We present results from the first demonstration of a fully integrated SDN-controlled bandwidth-flexible and programmable SDM optical network utilizing sliceable self-homodyne spatial superchannels to support dynamic bandwidth and QoT provisioning, infrastructure slicing and isolation. Results show that SDN is a suitable control plane solution for the high-capacity flexible SDM network. It is able to provision end-to-end bandwidth and QoT requests according to user requirements, considering the unique characteristics of the underlying SDM infrastructure.

  19. Wireless intelligent network: infrastructure before services?

    NASA Astrophysics Data System (ADS)

    Chu, Narisa N.

    1996-01-01

    The Wireless Intelligent Network (WIN) intends to take advantage of the Advanced Intelligent Network (AIN) concepts and products developed for wireline communications. However, progress in AIN deployment has been slow due to the many barriers that exist in the traditional wireline carriers' deployment procedures and infrastructure, and the success of AIN has not been truly demonstrated. The AIN objectives and directions are applicable to the wireless industry, although the plans and implementations could be significantly different. This paper points out WIN characteristics in architecture, flexibility, deployment, and value to customers. In order to succeed, the technology-driven AIN concept has to be reinforced by market-driven WIN services. An infrastructure suitable for the WIN will contain elements that are foreign to the wireline network. The deployment process is expected to be seeded with revenue-generating services. Standardization will be achieved by simplifying and incorporating the IS-41C, AIN, and Intelligent Network CS-1 recommendations. Integration of existing and future systems imposes the biggest challenge of all. Service creation has to be complemented by a service deployment process, which heavily impacts the carriers' infrastructure. WIN deployment will likely start from an Intelligent Peripheral and a Service Control Point, and migrate to a Service Node when sufficient triggers are implemented in the mobile switch for distributed call control. The struggle to move forward will be based not on technology, but rather on the impact on existing infrastructure.

  20. Resilience in social insect infrastructure systems.

    PubMed

    Middleton, Eliza J T; Latty, Tanya

    2016-03-01

    Both human and insect societies depend on complex and highly coordinated infrastructure systems, such as communication networks, supply chains and transportation networks. Like human-designed infrastructure systems, those of social insects are regularly subject to disruptions such as natural disasters, blockages or breaks in the transportation network, fluctuations in supply and/or demand, outbreaks of disease and loss of individuals. Unlike human-designed systems, there is no deliberate planning or centralized control system; rather, individual insects make simple decisions based on local information. How do these highly decentralized, leaderless systems deal with disruption? What factors make a social insect system resilient, and which factors lead to its collapse? In this review, we bring together literature on resilience in three key social insect infrastructure systems: transportation networks, supply chains and communication networks. We describe how systems differentially invest in three pathways to resilience: resistance, redirection or reconstruction. We suggest that investment in particular resistance pathways is related to the severity and frequency of disturbance. In the final section, we lay out a prospectus for future research. Human infrastructure networks are rapidly becoming decentralized and interconnected; indeed, more like social insect infrastructures. Human infrastructure management might therefore learn from social insect researchers, who can in turn make use of the mature analytical and simulation tools developed for the study of human infrastructure resilience. © 2016 The Author(s).

  1. The Earth System Grid Federation (ESGF) Project

    NASA Astrophysics Data System (ADS)

    Carenton-Madiec, Nicolas; Denvil, Sébastien; Greenslade, Mark

    2015-04-01

    The Earth System Grid Federation (ESGF) Peer-to-Peer (P2P) enterprise system is a collaboration that develops, deploys and maintains software infrastructure for the management, dissemination, and analysis of model output and observational data. ESGF's primary goal is to facilitate advancements in Earth System Science. It is an interagency and international effort led by the US Department of Energy (DOE), and co-funded by the National Aeronautics and Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA), the National Science Foundation (NSF), the Infrastructure for the European Network of Earth System Modelling (IS-ENES) and international laboratories such as the Max Planck Institute for Meteorology (MPI-M), the German Climate Computing Centre (DKRZ), the Australian National University (ANU) National Computational Infrastructure (NCI), the Institut Pierre-Simon Laplace (IPSL), and the British Atmospheric Data Centre (BADC). Its main mission is to support current CMIP5 activities and prepare for future assessments. The ESGF architecture is based on a system of autonomous and distributed nodes, which interoperate through common acceptance of federation protocols and trust agreements. Data are stored at multiple nodes around the world and served through local data and metadata services. Nodes exchange information about their data holdings and services, and trust each other to register users and establish access control decisions. The net result is that a user can use a web browser, connect to any node, and seamlessly find and access data throughout the federation. This collaborative working organization and distributed architecture highlighted the need to define integration and testing processes to ensure the quality of software releases and interoperability. This presentation will introduce the ESGF project and demonstrate the range of tools and processes that have been set up to support release management activities.
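The "connect to any node, find data anywhere" behaviour described above can be sketched in miniature. This is an illustrative toy, not the ESGF software or its APIs; all node names and dataset identifiers are invented:

```python
# Toy sketch of federated search: each node holds local datasets and a list
# of peer nodes; a query to any node is fanned out across the federation so
# the user sees the union of all holdings. Names are invented, not ESGF APIs.

class Node:
    def __init__(self, name, datasets):
        self.name = name
        self.datasets = set(datasets)
        self.peers = []

    def federated_search(self, term):
        """Search this node and all its peers; return the union of matches."""
        hits = {d for d in self.datasets if term in d}
        for peer in self.peers:
            hits |= {d for d in peer.datasets if term in d}
        return hits

ipsl = Node("IPSL", {"cmip5.tas.ipsl", "cmip5.pr.ipsl"})
badc = Node("BADC", {"cmip5.tas.hadgem"})
ipsl.peers = [badc]
badc.peers = [ipsl]

# Connecting to either node finds data held anywhere in the federation.
results = ipsl.federated_search("tas")
```

In the real federation the same effect is achieved by nodes replicating each other's metadata indexes rather than querying peers live, so any single entry point can answer for the whole federation.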

  2. Defending networks against denial-of-service attacks

    NASA Astrophysics Data System (ADS)

    Gelenbe, Erol; Gellman, Michael; Loukas, George

    2004-11-01

    Denial of service attacks, viruses and worms are common tools for malicious adversarial behavior in networks. Experience shows that over the last few years several of these techniques have probably been used by governments to impair the Internet communications of various entities, and we can expect that these and other information warfare tools will be used increasingly as part of hostile behavior, either independently or in conjunction with other forms of attack in conventional or asymmetric warfare, as well as in other forms of malicious behavior. In this paper we concentrate on Distributed Denial of Service (DDoS) attacks, where one or more attackers generate flooding traffic and direct it from multiple sources towards a set of selected nodes or IP addresses in the Internet. We first briefly survey the literature on the subject and discuss some examples of DDoS incidents. We then present a technique that can be used for DDoS protection based on creating islands of protection around a critical information infrastructure. This technique, which we call CPN-DoS-DT (Cognitive Packet Networks DoS Defence Technique), creates a self-monitoring sub-network surrounding each critical infrastructure node. CPN-DoS-DT is triggered by a DDoS detection scheme and generates control traffic from the objects of the DDoS attack to the islands of protection, where DDoS packet flows are destroyed before they reach the critical infrastructure. We use mathematical modelling, simulation and experiments on our test-bed to show the positive and negative outcomes that may result from both the attack and the CPN-DoS-DT protection mechanism, due to imperfect detection and false alarms.
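The detection step that triggers a defence of this kind can be illustrated with a simple per-source rate threshold. This is only a sketch of one plausible detection scheme, not the one the paper uses; the thresholds and traffic values are invented:

```python
# Illustrative sketch of flood detection: flag sources whose packet rate
# over an observation window exceeds a threshold. Thresholds and traffic
# values are invented; real DDoS detection is considerably more involved.

from collections import Counter

def flag_flooders(packets, window_seconds, max_rate):
    """packets: list of source IDs observed during one window.

    Returns the set of sources exceeding max_rate packets/second.
    """
    counts = Counter(packets)
    return {src for src, n in counts.items() if n / window_seconds > max_rate}

# One 10-second observation window: host "a" floods, "b" and "c" are normal.
window = ["a"] * 500 + ["b"] * 20 + ["c"] * 5
suspects = flag_flooders(window, window_seconds=10, max_rate=10.0)
```

As the abstract notes, any such scheme has imperfect detection and false alarms: a legitimate burst can cross the threshold, and a distributed attack can stay under it per source.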

  3. Ocean Data Interoperability Platform (ODIP): developing a common framework for marine data management on a global scale

    NASA Astrophysics Data System (ADS)

    Schaap, D.

    2015-12-01

    Europe, the USA, and Australia are making significant progress in facilitating the discovery, access and long-term stewardship of ocean and marine data through the development, implementation, population and operation of national, regional and international distributed ocean and marine observing and data management infrastructures such as SeaDataNet, EMODnet, IOOS, R2R, and IMOS. All of these developments are resulting in standards and services implemented and used by their regional communities. The Ocean Data Interoperability Platform (ODIP) project is supported by the EU FP7 Research Infrastructures programme, the National Science Foundation (USA) and the Australian government, and was initiated on 1 October 2012. Recently the project was continued as ODIP 2 for another three years with EU HORIZON 2020 funding. ODIP includes all the major organisations engaged in ocean data management in the EU, US, and Australia. ODIP is also supported by the IOC-IODE, closely linking this activity with its Ocean Data Portal (ODP) and Ocean Data Standards Best Practices (ODSBP) projects. The ODIP platform aims to ease interoperability between the regional marine data management infrastructures. To that end it facilitates an organised dialogue between the key infrastructure representatives by publishing best practice, organising a series of international workshops and fostering the development of common standards and interoperability solutions, which are evaluated and tested by means of prototype projects.
The presentation will give further background on the ODIP projects and the latest information on the progress of three prototype projects addressing: establishing interoperability between the regional EU, USA and Australia data discovery and access services (SeaDataNet CDI, US NODC, and IMOS MCP) and contributing to the global GEOSS and IODE-ODP portals; establishing interoperability between cruise summary reporting systems in Europe, the USA and Australia for routine harvesting of cruise data for delivery via the Partnership for Observation of Global Oceans (POGO) global portal; and establishing common standards for a Sensor Observation Service (SOS) for selected sensors installed on vessels and in real-time monitoring systems using sensor web enablement (SWE).

  4. Ocean Data Interoperability Platform (ODIP): developing a common framework for marine data management on a global scale

    NASA Astrophysics Data System (ADS)

    Schaap, Dick M. A.; Glaves, Helen

    2016-04-01

    Europe, the USA, and Australia are making significant progress in facilitating the discovery, access and long-term stewardship of ocean and marine data through the development, implementation, population and operation of national, regional and international distributed ocean and marine observing and data management infrastructures such as SeaDataNet, EMODnet, IOOS, R2R, and IMOS. All of these developments are resulting in standards and services implemented and used by their regional communities. The Ocean Data Interoperability Platform (ODIP) project is supported by the EU FP7 Research Infrastructures programme, the National Science Foundation (USA) and the Australian government, and was initiated on 1 October 2012. Recently the project was continued as ODIP II for another three years with EU HORIZON 2020 funding. ODIP includes all the major organisations engaged in ocean data management in the EU, US, and Australia. ODIP is also supported by the IOC-IODE, closely linking this activity with its Ocean Data Portal (ODP) and Ocean Data Standards Best Practices (ODSBP) projects. The ODIP platform aims to ease interoperability between the regional marine data management infrastructures. To that end it facilitates an organised dialogue between the key infrastructure representatives by publishing best practice, organising a series of international workshops and fostering the development of common standards and interoperability solutions, which are evaluated and tested by means of prototype projects. The presentation will give further background on the ODIP projects and the latest information on the progress of three prototype projects addressing: 1. establishing interoperability between the regional EU, USA and Australia data discovery and access services (SeaDataNet CDI, US NODC, and IMOS MCP) and contributing to the global GEOSS and IODE-ODP portals; 2.
establishing interoperability between cruise summary reporting systems in Europe, the USA and Australia for routine harvesting of cruise data for delivery via the Partnership for Observation of Global Oceans (POGO) global portal; and 3. establishing common standards for a Sensor Observation Service (SOS) for selected sensors installed on vessels and in real-time monitoring systems using sensor web enablement (SWE).

  5. VDJServer: A Cloud-Based Analysis Portal and Data Commons for Immune Repertoire Sequences and Rearrangements.

    PubMed

    Christley, Scott; Scarborough, Walter; Salinas, Eddie; Rounds, William H; Toby, Inimary T; Fonner, John M; Levin, Mikhail K; Kim, Min; Mock, Stephen A; Jordan, Christopher; Ostmeyer, Jared; Buntzman, Adam; Rubelt, Florian; Davila, Marco L; Monson, Nancy L; Scheuermann, Richard H; Cowell, Lindsay G

    2018-01-01

    Recent technological advances in immune repertoire sequencing have created tremendous potential for advancing our understanding of adaptive immune response dynamics in various states of health and disease. Immune repertoire sequencing produces large, highly complex data sets, however, which require specialized methods and software tools for their effective analysis and interpretation. VDJServer is a cloud-based analysis portal for immune repertoire sequence data that provides access to a suite of tools for a complete analysis workflow, including modules for preprocessing and quality control of sequence reads, V(D)J gene segment assignment, repertoire characterization, and repertoire comparison. VDJServer also provides sophisticated visualizations for exploratory analysis. It is accessible through a standard web browser via a graphical user interface designed for use by immunologists, clinicians, and bioinformatics researchers. VDJServer provides a data commons for public sharing of repertoire sequencing data, as well as private sharing of data between users. We describe the main functionality and architecture of VDJServer and demonstrate its capabilities with use cases from cancer immunology and autoimmunity. VDJServer provides a complete analysis suite for human and mouse T-cell and B-cell receptor repertoire sequencing data. The combination of its user-friendly interface and high-performance computing allows large immune repertoire sequencing projects to be analyzed with no programming or software installation required. VDJServer is a web-accessible cloud platform that provides access through a graphical user interface to a data management infrastructure, a collection of analysis tools covering all steps in an analysis, and an infrastructure for sharing data along with workflows, results, and computational provenance. VDJServer is a free, publicly available, and open-source licensed resource.

  6. Development of CCHE2D embankment break model

    USDA-ARS?s Scientific Manuscript database

    Earthen embankment breach often results in detrimental impact on downstream residents and infrastructure, especially those located in the flooding zone. Embankment failures are most commonly caused by overtopping or internal erosion. This study is to develop a practical numerical model for simulat...

  7. FE-ANN based modeling of 3D simple reinforced concrete girders for objective structural health evaluation.

    DOT National Transportation Integrated Search

    2017-06-01

    The structural deterioration of aging infrastructure systems and the costs of repairing these systems are an increasingly important issue worldwide. Structural health monitoring (SHM), most commonly visual inspection and condition rating, has proven t...

  8. 42 CFR § 512.460 - Compliance enforcement.

    Code of Federal Regulations, 2010 CFR

    2017-10-01

    ... (CONTINUED) HEALTH CARE INFRASTRUCTURE AND MODEL PROGRAMS EPISODE PAYMENT MODEL Quality Measures, Beneficiary... regulations under this part must not be construed to affect the applicable payment, coverage, program..., commonly referred to as a CAP. (iii) Reducing or eliminating the EPM participant's reconciliation payment...

  9. Wireless Communications Infrastructure for Collaboration in Common Space

    DTIC Science & Technology

    2004-03-01

    creation tools accessible to a broad range of computer graphics professionals in the film, broadcast, industrial design, visualization, game ... development and web design industries. It is one of the leading full 3D production solutions. Maya Complete is available for Windows 2000 Professional

  10. REHABILITATION OF AGING WATER INFRASTRUCTURE SYSTEMS: KEY CHALLENGES AND ISSUES

    EPA Science Inventory

    Presented in this paper are the results of a state-of-the-practice survey on the rehabilitation of water distribution and wastewater collection systems. The survey identified several needs, including the need for rational and common design approaches for rehabilitation systems, ...

  11. Preparing for Sea-level Rise: Conflicts and Opportunities in Coastal Wetlands Coexisting with Infrastructure

    NASA Astrophysics Data System (ADS)

    Rodriguez, J. F.; Saco, P. M.; Sandi, S. G.; Saintilan, N.; Riccardi, G.

    2017-12-01

    Even though on a large scale the sustainability and resilience of coastal wetlands to sea-level rise depends on the slope of the landscape and a balance between the rates of soil accretion and the sea-level rise, local man-made flow disturbances can have comparable effects. Coastal infrastructure controlling flow in the wetlands can pose an additional constraint on the adaptive capacity of these ecosystems, but can also present opportunities for targeted flow management to increase their resilience. Coastal wetlands in SE Australia are heavily managed and typically include infrastructure such as flow control devices. The operation of these flow control structures responds to different ecological conservation objectives (i.e. bird, frog or fish habitat) that can sometimes be mutually exclusive. For example, promoting mangrove establishment to enhance fish habitat results in saltmarsh decline, thus affecting bird habitat. Moreover, sea-level rise will change hydraulic conditions in wetlands and may result in some flow control structures and strategies becoming obsolete or even counterproductive. In order to address these problems and in support of future management of flows in coastal wetlands, we have developed a predictive tool for long-term wetland evolution that incorporates the effects of infrastructure and other perturbations to the tidal flow within the wetland (i.e. vegetation resistance) and determines how these flow conditions affect vegetation establishment and survival. We use the model to support management and analyse different scenarios of sea-level rise and flow control measures aimed at preserving bird habitat. Our results show that sea-level rise affects the efficiency of management measures and in some cases may completely override their effect. They also show the potential of targeted flow management to compensate for the effects of sea-level rise.

  12. NASA World Wind: Infrastructure for Spatial Data

    NASA Technical Reports Server (NTRS)

    Hogan, Patrick

    2011-01-01

    The world has great need for analysis of Earth observation data, be it climate change, carbon monitoring, disaster response, national defense or simply local resource management. To best provide for spatial and time-dependent information analysis, the world benefits from an open standards and open source infrastructure for spatial data. In the spirit of NASA's motto "for the benefit of all" NASA invites the world community to collaboratively advance this core technology. The World Wind infrastructure for spatial data both unites and challenges the world for innovative solutions analyzing spatial data while also allowing absolute command and control over any respective information exchange medium.

  13. The National Biological Information Infrastructure: Coming of age

    USGS Publications Warehouse

    Cotter, G.; Frame, M.; Sepic, R.; Zolly, L.

    2000-01-01

    Coordinated by the US Geological Survey, the National Biological Information Infrastructure (NBII) is a Web-based system that provides increased access to data and information on the nation's biological resources. The NBII can be viewed from a variety of perspectives. This article - an individual case study and not a broad survey with extensive references to the literature - addresses the structure of the NBII related to thematic sections, infrastructure sections and place-based sections, and other topics such as the Integrated Taxonomic Information System (one of our more innovative tools) and the development of our controlled vocabulary.

  14. Towards Social Radiology as an Information Infrastructure: Reconciling the Local With the Global

    PubMed Central

    2014-01-01

    The current widespread use of medical images and imaging procedures in clinical practice and patient diagnosis has brought about an increase in the demand for sharing medical imaging studies among health professionals in an easy and effective manner. This article reveals the existence of a polarization between the local and global demands for radiology practice. While there are no major barriers for sharing such studies when access is made from a (local) picture archive and communication system (PACS) within the domain of a healthcare organization, there are a number of impediments for sharing studies among health professionals on a global scale. Social radiology as an information infrastructure involves the notion of a shared infrastructure as a public good, affording a social space where people, organizations and technical components may spontaneously form associations in order to share clinical information linked to patient care and radiology practice. This article shows, however, that such polarization establishes a tension between local and global demands, which hinders the emergence of social radiology as an information infrastructure. Based on an analysis of the social space for radiology practice, the present article has observed that this tension persists due to the inertia of a locally installed base in radiology departments, which common teleradiology models are not truly capable of reorganizing into a global social space for radiology practice. Reconciling the local with the global signifies integrating PACS and teleradiology into an evolving, secure, heterogeneous, shared, open information infrastructure where the conceptual boundaries between (local) PACS and (global) teleradiology are transparent, signaling the emergence of social radiology as an information infrastructure. PMID:25600710

  15. Telepsychiatry: assessment of televideo psychiatric interview reliability with present- and next-generation internet infrastructures.

    PubMed

    Yoshino, A; Shigemura, J; Kobayashi, Y; Nomura, S; Shishikura, K; Den, R; Wakisaka, H; Kamata, S; Ashida, H

    2001-09-01

    We assessed the reliability of remote video psychiatric interviews conducted via the internet using narrow and broad bandwidths. Televideo psychiatric interviews conducted with 42 in-patients with chronic schizophrenia using two bandwidths (narrow, 128 kilobits/s; broad, 2 megabits/s) were assessed in terms of agreement with face-to-face interviews in a test-retest fashion. As a control, agreement was assessed between face-to-face interviews. Psychiatric symptoms were rated using the Oxford version of the Brief Psychiatric Rating Scale (BPRS), and agreement between interviews was estimated as the intraclass correlation coefficient (ICC). The ICC was significantly lower in the narrow bandwidth than in the broad bandwidth and the control for both positive symptoms score and total score. While reliability of televideo psychiatric interviews is insufficient using the present narrow-band internet infrastructure, the next generation of infrastructure (broad-band) may permit reliable diagnostic interviews.
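Agreement in this study was estimated with the intraclass correlation coefficient (ICC). As a rough illustration of the statistic only (not the authors' exact computation, which used Oxford BPRS ratings from paired interviews), a one-way random-effects ICC(1,1) can be sketched as:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an n_subjects x k_raters matrix.

    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB and MSW are the
    between- and within-subject mean squares from a one-way ANOVA.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)               # between subjects
    msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)
```

With perfectly consistent paired ratings the ICC is 1; systematic offsets between the two interview conditions pull it down.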

  16. A Programmable SDN+NFV Architecture for UAV Telemetry Monitoring

    NASA Technical Reports Server (NTRS)

    White, Kyle J. S.; Pezaros, Dimitrios P.; Denney, Ewen; Knudson, Matt D.

    2017-01-01

    With the explosive growth in UAV numbers forecast worldwide, a core concern is how to manage the ad-hoc network configuration required for mobility management. As UAVs migrate among ground control stations, associated network services, routing and operational control must also rapidly migrate to ensure a seamless transition. In this paper, we present a novel, lightweight and modular architecture which supports high mobility, resilience and flexibility through the application of SDN and NFV principles on top of the UAV infrastructure. By combining SDN programmability and Network Function Virtualization we can achieve resilient infrastructure migration of network services, such as network monitoring and anomaly detection, coupled with migrating UAVs to enable high mobility management. Our container-based monitoring and anomaly detection Network Functions (NFs) can be tuned to specific UAV models, providing operators better insight during live, high-mobility deployments. We evaluate our architecture against telemetry from over 80 flights from a scientific research UAV infrastructure.
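The abstract does not specify the anomaly detection algorithm inside the NFs; as a minimal sketch of the general idea, a z-score detector flagging outlying samples in a telemetry stream could look like this (function name and threshold are illustrative, not from the paper):

```python
import numpy as np

def zscore_anomalies(telemetry, threshold=3.0):
    """Return indices of telemetry samples whose absolute z-score
    exceeds the threshold: a minimal stand-in for a monitoring NF."""
    x = np.asarray(telemetry, dtype=float)
    z = np.abs((x - x.mean()) / x.std())
    return np.flatnonzero(z > threshold)
```

A real NF would of course operate on streaming, multi-dimensional telemetry and be tuned per UAV model, as the record describes.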

  17. Commerce lab: Mission analysis and payload integration study

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Conceived as one or more arrays of carriers which would fly aboard the space shuttle, Commerce Lab can provide a point of focus for implementing a series of shuttle flights, co-sponsored by NASA and U.S. domestic concerns, for performing materials processing in research and pre-commercial investigations. As an orbiting facility for testing, developing, and implementing hardware and procedures, Commerce Lab can enhance space station development and hasten space platform production capability. Tasks considered include: (1) synthesis of user requirements and identification of common elements and voids; (2) definition of performance and infrastructure requirements and alternative approaches; and (3) carrier, mission model, and infrastructure development.

  18. Lithology and Bedrock Geotechnical Properties in Controlling Rock and Ice Mass Movements in Mountain Cryosphere

    NASA Astrophysics Data System (ADS)

    Karki, A.; Kargel, J. S.

    2017-12-01

    Landslides and ice avalanches kill >5000 people annually (D. Petley, 2012, Geology http://dx.doi.org/10.1130/G33217.1); destroy or damage homes and infrastructure; and create secondary hazards, such as flooding due to blocked rivers. Critical roles of surface slope, earthquake shaking, soil characteristics and saturation, river erosional undercutting, rainfall intensity, snow loading, permafrost thaw, freeze-thaw and frost shattering, debuttressing of unstable masses due to glacier thinning, and vegetation burn or removal are well-known factors affecting landslides and avalanches. Lithology-dependent bedrock physicochemical and mechanical properties (especially brittle elastic and shear strength, and chemical weathering properties that affect rock strength) are also recognized controls on landsliding and avalanching, but are not commonly considered in detail in landslide susceptibility assessment. Lithology controls the formation of weakened, weathered bedrock; the formation and accumulation of soils; soil saturation-related properties of grain size distribution, porosity, and permeability; and soil creep related to soil wetting-drying and freeze-thaw. Lithology controls bedrock abrasion, glacial erosion and debris production rates, and the formation of rough or smoothed bedrock surfaces by glaciation, fluvial, and freeze-thaw processes. Lithologic variability (e.g., bedding; fault and joint structure) affects contrasts in chemical weathering rates, porosity, and susceptibility to frost shattering and chemical weathering, hence formation of overhanging outcrops and weakened slip planes. The sudden failure of bedrock or sudden slip of ice on bedrock, and many other processes depend on rock lithology, microstructure (porosity and permeability), and macrostructure (bedding; faults).
These properties are sometimes considered in gross terms for landslide susceptibility assessment, but in detailed applications to specific development projects, and in detailed mapping over large areas, the details of rock lithology, weathering state, and structure are rarely considered. We have initiated a geological and rock mechanical properties approach to landslide susceptibility assessments in areas of high concern for human and infrastructure safety.

  19. Eutrophication and contaminant data management for EU marine policies: the EMODnet Chemistry infrastructure.

    NASA Astrophysics Data System (ADS)

    Vinci, Matteo; Lipizer, Marina; Giorgetti, Alessandra

    2016-04-01

    The European Marine Observation and Data Network (EMODnet) initiative has the following purposes: to assemble marine metadata, data and products, to make these fragmented resources more easily available to public and private users, and to provide quality-assured, standardised and harmonised marine data. EMODnet Chemistry was launched by DG MARE in 2009 to support the Marine Strategy Framework Directive (MSFD) requirements for the assessment of eutrophication and contaminants, following INSPIRE Directive rules. The aim is twofold: the first task is to make available and reusable the large amount of fragmented and inaccessible data hosted by European research institutes and environmental agencies. The second objective is to develop visualization services useful for the tasks of the MSFD. The technical set-up is based on the principle of adopting and adapting the SeaDataNet infrastructure for ocean and marine data, which is managed by National Oceanographic Data Centers and relies on a distributed network of data centers. Data centers contribute to data harvesting and enrichment with the relevant metadata. Data are processed into interoperable formats (using agreed standards such as ISO XML and ODV) with the use of common vocabularies and standardized quality control procedures. Data quality control is a key issue when merging heterogeneous data coming from different sources, and a data validation loop has been agreed within the EMODnet Chemistry community and is routinely performed. After data quality control done by the regional coordinators of the EU marine basins (Atlantic, Baltic, North, Mediterranean and Black Sea), validated regional datasets are used to develop data products useful for the requirements of the MSFD. EMODnet Chemistry provides interpolated seasonal maps of nutrients and services for the visualization of time series and profiles of several chemical parameters. All visualization services are developed following OGC standards such as WMS and WPS.
In order to test new strategies for data storage and reanalysis, and to upgrade the infrastructure's performance, EMODnet Chemistry has chosen the cloud environment offered by Cineca (the consortium of Italian universities and research institutes), where both regional aggregated datasets and the analysis and visualization services are hosted. Finally, besides the delivery of data and visualization products, the results of the data harvesting provide a useful tool to identify data gaps where future monitoring efforts should be focused.
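The visualization services mentioned above follow OGC standards such as WMS. In WMS 1.3.0 a GetMap request is simply a parameterized URL; as a sketch of what a client might build (the endpoint and layer name here are hypothetical, not actual EMODnet identifiers):

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, width=800, height=600):
    """Build an OGC WMS 1.3.0 GetMap request URL.

    bbox is (min_lat, min_lon, max_lat, max_lon) for EPSG:4326 in WMS 1.3.0
    axis order. The endpoint and layer are caller-supplied.
    """
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

url = wms_getmap_url("https://example.org/wms", "nutrients", (30, -10, 46, 36))
```

Any WMS-compliant viewer can consume such URLs, which is what makes OGC standards useful for interoperable map services.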

  20. NSLS-II HIGH LEVEL APPLICATION INFRASTRUCTURE AND CLIENT API DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, G.; Yang, L.

    2011-03-28

    The beam commissioning software framework of the NSLS-II project adopts a client/server based architecture to replace the more traditional monolithic high level application approach. It is an open structure platform, and we try to provide a narrow API set for client applications. With this narrow API, existing applications developed in different languages under different architectures could be ported to our platform with small modification. This paper describes the system infrastructure design, client API and system integration, and latest progress. As a new 3rd generation synchrotron light source with ultra low emittance, there are new requirements and challenges to control and manipulate the beam. A use case study and a theoretical analysis have been performed to clarify requirements and challenges to the high level applications (HLA) software environment. To satisfy those requirements and challenges, an adequate system architecture of the software framework is critical for beam commissioning, study and operation. The existing traditional approaches are self-consistent and monolithic. Some of them have adopted a concept of a middle layer to separate low level hardware processing from numerical algorithm computing, physics modelling, data manipulation, plotting, and error handling. However, none of the existing approaches can satisfy the requirements. A new design has been proposed by introducing service oriented architecture technology. The HLA is a combination of tools for accelerator physicists and operators, the same as in the traditional approach. In NSLS-II, they include monitoring applications and control routines. A scripting environment is very important for the latter part of the HLA, and both parts are designed based on a common set of APIs. Physicists and operators are users of these APIs, while control system engineers and a few accelerator physicists are the developers of these APIs.
With our client/server based approach, the developers of the APIs decide how to retrieve information, while the users decide how to combine the APIs into a physics application. For example, how the channels are related to a magnet, and what the current real-time setting of a magnet is in physics units, are internals of the APIs; a routine that measures chromaticities is a user of the APIs. All API users work with magnet and instrument names in physics units. The low level communications in current or voltage units are minimized. In this paper, we discuss the recent progress of our infrastructure development and the client API.

  1. The Chandra Source Catalog: Processing and Infrastructure

    NASA Astrophysics Data System (ADS)

    Evans, Janet; Evans, Ian N.; Glotfelty, Kenny J.; Hain, Roger; Hall, Diane M.; Miller, Joseph B.; Plummer, David A.; Zografou, Panagoula; Primini, Francis A.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.

    2009-09-01

    Chandra Source Catalog processing recalibrates each observation using the latest available calibration data, and employs a wavelet-based source detection algorithm to identify all the X-ray sources in the field of view. Source properties are then extracted from each detected source that is a candidate for inclusion in the catalog. Catalog processing is completed by matching sources across multiple observations, merging common detections, and applying quality assurance checks. The Chandra Source Catalog processing system shares a common processing infrastructure and utilizes much of the functionality that is built into the Standard Data Processing (SDP) pipeline system that provides calibrated Chandra data to end-users. Other key components of the catalog processing system have been assembled from the portable CIAO data analysis package. Minimal new software tool development has been required to support the science algorithms needed for catalog production. Since processing pipelines must be instantiated for each detected source, the number of pipelines that are run during catalog construction is a factor of order 100 times larger than for SDP. The increased computational load, and inherent parallel nature of the processing, is handled by distributing the workload across a multi-node Beowulf cluster. Modifications to the SDP automated processing application to support catalog processing, and extensions to Chandra Data Archive software to ingest and retrieve catalog products, complete the upgrades to the infrastructure to support catalog processing.

  2. Harmonization in preclinical epilepsy research: A joint AES/ILAE translational initiative.

    PubMed

    Galanopoulou, Aristea S; French, Jacqueline A; O'Brien, Terence; Simonato, Michele

    2017-11-01

    Among the priority next steps outlined during the first translational epilepsy research workshop in London, United Kingdom (2012), jointly organized by the American Epilepsy Society (AES) and the International League Against Epilepsy (ILAE), are the harmonization of research practices used in preclinical studies and the development of infrastructure that facilitates multicenter preclinical studies. The AES/ILAE Translational Task Force of the ILAE has been pursuing initiatives that advance these goals. In this supplement, we present the first reports of the working groups of the Task Force that aim to improve practices of performing rodent video-electroencephalography (vEEG) studies in experimental controls, generate systematic reviews of preclinical research data, and develop preclinical common data elements (CDEs) for epilepsy research in animals. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.

  3. Synthetic Proxy Infrastructure for Task Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Junghans, Christoph; Pavel, Robert

    The Synthetic Proxy Infrastructure for Task Evaluation is a proxy application designed to support application developers in gauging the performance of various task granularities when determining how best to utilize task based programming models. The infrastructure is designed to provide examples of common communication patterns with a synthetic workload intended to provide performance data to evaluate programming model and platform overheads for the purpose of determining task granularity for task decomposition purposes. This is presented as a reference implementation of a proxy application with run-time configurable input and output task dependencies, ranging from an embarrassingly parallel scenario to patterns with stencil-like dependencies upon their nearest neighbors. Once all inputs (if any) are satisfied, each task will execute a synthetic workload (a simple DGEMM in this case) of varying size and pass all outputs (if any) to the next tasks. The intent is for this reference implementation to be implemented as a proxy app in different programming models so as to provide the same infrastructure and to allow application developers to simulate their own communication needs to assist in task decomposition under various models on a given platform.
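The pattern described above (configurable task dependencies, each task running a synthetic DGEMM once its inputs are satisfied) can be sketched in a few lines. This is a toy serial illustration only; the actual proxy app is implemented per task-based programming model, and all names below are illustrative:

```python
import numpy as np

def synthetic_workload(size):
    """Stand-in DGEMM: multiply two size x size random matrices."""
    a = np.random.rand(size, size)
    b = np.random.rand(size, size)
    return a @ b

def run_tasks(deps, size=64):
    """deps maps a task name to its list of prerequisite tasks.
    Each task runs the synthetic workload once all of its inputs
    are satisfied (here, by a simple depth-first topological order)."""
    done, order = set(), []
    def visit(task):
        if task in done:
            return
        for prereq in deps.get(task, []):
            visit(prereq)                 # satisfy inputs first
        synthetic_workload(size)          # then execute the workload
        done.add(task)
        order.append(task)
    for task in deps:
        visit(task)
    return order
```

A task-based runtime would execute independent branches of this graph in parallel; the serial traversal here only demonstrates the dependency semantics.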

  4. A physical layer perspective on access network sharing

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Thomas

    2015-12-01

    Unlike in copper or wireless networks, there is no sharing of resources in fiber access networks yet, other than bit stream access or cable sharing, in which the fibers of a cable are let to one or multiple operators. Sharing optical resources on a single fiber among multiple operators or different services has not yet been applied. While this would allow for a better exploitation of installed infrastructures, there are operational issues which still need to be resolved, before this sharing model can be implemented in networks. Operating multiple optical systems and services over a common fiber plant, autonomously and independently from each other, can result in mutual distortions on the physical layer. These distortions will degrade the performance of the involved systems, unless precautions are taken in the infrastructure hardware to eliminate or to reduce them to an acceptable level. Moreover, the infrastructure needs to be designed such as to support different system technologies and to ensure a guaranteed quality of the end-to-end connections. In this paper, suitable means are proposed to be introduced in fiber access infrastructures that will allow for shared utilization of the fibers while safeguarding the operational needs and business interests of the involved parties.

  5. Progress Toward Cancer Data Ecosystems.

    PubMed

    Grossman, Robert L

    One of the recommendations of the Cancer Moonshot Blue Ribbon Panel report from 2016 was the creation of a national cancer data ecosystem. We review some of the approaches for building cancer data ecosystems and some of the progress that has been made. A data commons is the colocation of data with cloud computing infrastructure and commonly used software services, tools, and applications for managing, integrating, analyzing, and sharing data to create an interoperable resource for the research community. We discuss data commons and their potential role in cancer data ecosystems and, in particular, how multiple data commons can interoperate to form part of the foundation for a cancer data ecosystem.

  6. Interdependent Network Recovery Games.

    PubMed

    Smith, Andrew M; González, Andrés D; Dueñas-Osorio, Leonardo; D'Souza, Raissa M

    2017-10-30

    Recovery of interdependent infrastructure networks in the presence of catastrophic failure is crucial to the economy and welfare of society. Recently, centralized methods have been developed to address optimal resource allocation in postdisaster recovery scenarios of interdependent infrastructure systems that minimize total cost. In real-world systems, however, multiple independent, possibly noncooperative, utility network controllers are responsible for making recovery decisions, resulting in suboptimal decentralized processes. With the goal of minimizing recovery cost, a best-case decentralized model allows controllers to develop a full recovery plan and negotiate until all parties are satisfied (an equilibrium is reached). Such a model is computationally intensive for planning and negotiating, and time is a crucial resource in postdisaster recovery scenarios. Furthermore, in this work, we prove this best-case decentralized negotiation process could continue indefinitely under certain conditions. Accounting for network controllers' urgency in repairing their system, we propose an ad hoc sequential game-theoretic model of interdependent infrastructure network recovery represented as a discrete time noncooperative game between network controllers that is guaranteed to converge to an equilibrium. We further reduce the computation time needed to find a solution by applying a best-response heuristic and prove bounds on ε-Nash equilibrium, where ε depends on problem inputs. We compare best-case and ad hoc models on an empirical interdependent infrastructure network in the presence of simulated earthquakes to demonstrate the extent of the tradeoff between optimality and computational efficiency. Our method provides a foundation for modeling sociotechnical systems in a way that mirrors restoration processes in practice. © 2017 Society for Risk Analysis.
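The sequential game the authors describe converges through best responses between network controllers. As a toy stand-in for that mechanism (not the authors' infrastructure-recovery formulation), alternating best-response dynamics on a two-player bimatrix game can be sketched as:

```python
import numpy as np

def best_response_dynamics(payoff_a, payoff_b, max_iters=100):
    """Alternate pure-strategy best responses in a two-player game.

    payoff_a[i, j] is controller A's payoff when A plays i and B plays j;
    payoff_b[i, j] is controller B's. Returns the action pair (i, j) at a
    pure Nash equilibrium, or None if the dynamics cycle within the budget.
    """
    payoff_a = np.asarray(payoff_a, dtype=float)
    payoff_b = np.asarray(payoff_b, dtype=float)
    a, b = 0, 0
    for _ in range(max_iters):
        a_new = int(np.argmax(payoff_a[:, b]))     # A best-responds to B
        b_new = int(np.argmax(payoff_b[a_new, :])) # B best-responds to A's move
        if (a_new, b_new) == (a, b):
            return a, b                            # neither wants to deviate
        a, b = a_new, b_new
    return None
```

General best-response dynamics can cycle, which mirrors the paper's point that a naive negotiation process may not terminate; their ad hoc sequential model restricts the game so convergence is guaranteed.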

  7. National Infrastructure Protection Plan

    DTIC Science & Technology

    2006-01-01

    effective and efficient CI/KR protection; and • Provide a system for continuous measurement and improvement of CI/KR... information-based core processes, a top-down system-, network-, or function-based approach may be more appropriate. A bottom-up approach normally... e-commerce, e-mail, and R&D systems. • Control Systems: Cyber systems used within many infrastructures and industries to monitor and

  8. Security Engineering and Educational Initiatives for Critical Information Infrastructures

    DTIC Science & Technology

    2013-06-01

    standard for cryptographic protection of SCADA communications. The United Kingdom's National Infrastructure Security Co-ordination Centre (NISCC) ... has released a good practice guide on firewall deployment for SCADA systems and process control networks [17]. Meanwhile, National Institute for ... report. The SCADA gateway collects the data gathered by sensors, translates them from

  9. Assured communications and combat resiliency: the relationship between effective national communications and combat efficiency

    NASA Astrophysics Data System (ADS)

    Allgood, Glenn O.; Kuruganti, Phani Teja; Nutaro, James; Saffold, Jay

    2009-05-01

    Combat resiliency is the ability of a commander to prosecute, control, and consolidate his/her sphere of influence in adverse and changing conditions. To support this, an infrastructure must exist that allows the commander to view the world in varying degrees of granularity with sufficient levels of detail to permit confidence estimates to be levied against decisions and courses of action. An infrastructure such as this will include the ability to effectively communicate context and relevance within and across the battle space. Achieving this will require careful thought, planning, and understanding of a network and its capacity limitations in post-event command and control. Relevance and impact on any existing infrastructure must be fully understood prior to deployment to exploit the system's full capacity and capabilities. In this view, the combat communication network is considered an integral part of our national communication network and infrastructure. This paper will describe an analytical tool set developed at ORNL and RNI incorporating complexity theory, advanced communications modeling, simulation, and visualization technologies that could be used as a pre-planning tool or post-event reasoning application to support response and containment.

  10. Neural Network Based Intrusion Detection System for Critical Infrastructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Todd Vollmer; Ondrej Linda; Milos Manic

    2009-07-01

    Resiliency and security of control systems such as SCADA and nuclear plant systems are a relevant concern in today's world of hackers and malware. Computer systems used within critical infrastructures to control physical functions are not immune to the threat of cyber attacks and may be potentially vulnerable. Tailoring an intrusion detection system to the specifics of critical infrastructures can significantly improve the security of such systems. The IDS-NNM (Intrusion Detection System using Neural Network based Modeling) is presented in this paper. The main contributions of this work are: 1) the use and analysis of real network data (data recorded from an existing critical infrastructure); 2) the development of a specific window based feature extraction technique; 3) the construction of a training dataset using randomly generated intrusion vectors; 4) the use of a combination of two neural network learning algorithms, Error Back-Propagation and Levenberg-Marquardt, for normal behavior modeling. The presented algorithm was evaluated on previously unseen network data. The IDS-NNM algorithm proved to be capable of capturing all intrusion attempts present in the network communication while not generating any false alerts.
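The abstract does not detail the window-based feature extraction technique, so as a generic illustration of the shape of such features, a sliding window over a per-packet metric stream might compute simple statistics per window (the particular statistics and parameters here are illustrative, not the paper's):

```python
import numpy as np

def window_features(values, window, step):
    """Slide a fixed-size window over a packet-metric stream and emit
    simple statistical features (mean, std, min, max) for each window."""
    feats = []
    for start in range(0, len(values) - window + 1, step):
        w = np.asarray(values[start:start + window], dtype=float)
        feats.append([w.mean(), w.std(), w.min(), w.max()])
    return np.array(feats)
```

Feature vectors like these, rather than raw packets, are what a neural network model of normal behavior would typically be trained on.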

  11. Compound Formulas of Traditional Chinese Medicine for the Common Cold: Systematic Review of Randomized, Placebo-controlled Trials.

    PubMed

    Li, Guanhong; Cai, Linli; Jiang, Hongli; Dong, Shoujin; Fan, Tao; Liu, Wei; Xie, Li; Mao, Bing

    2015-01-01

    The common cold is one of the most frequent acute illnesses of the respiratory tract, affecting all age groups. The compound formulas of traditional Chinese medicine (TCM) are frequently used to treat the common cold in China and other parts of the world. Until now, however, the efficacy and safety of compound formulas of TCM for the common cold, studied in comparison with placebos, have not been systematically reviewed. This literature review intended to assess existing evidence of the effectiveness and safety of compound formulas of TCM for the common cold. Randomized, controlled trials (RCTs) comparing compound formulas of TCM with placebos in treating the common cold were included, regardless of publication status. The research team searched the Cochrane Library, PubMed, Embase, the Chinese Biomedical Literature Database, the Chinese Scientific and Technological Periodical Database, the Chinese National Knowledge Infrastructure and the Wanfang Database from their inceptions to December 2013. The team also searched Web sites listing ongoing trials and contacted experts in the field and relevant pharmaceutical companies to locate unpublished materials. Two review authors independently extracted data and assessed the methodological quality of included studies, using the Cochrane risk of bias tool. A total of 6 randomized, double-blind, placebo-controlled trials involving 1502 participants were included. Most trials had a low risk of bias. Five were conducted in mainland China and one in Hong Kong; five were multicenter clinical trials and one was a single-center trial; four were published in Chinese and two in English. Compound formulas of TCM were superior to placebos in reducing disease symptoms, inducing recovery from a TCM syndrome, and increasing quality of life. 
In addition, the formulas were superior in shortening the duration of the main symptoms, the amount of time for a decline in temperature of at least 0.5°C to occur, and the duration of any fever. The team did not perform a summary meta-analysis due to clinical heterogeneity. No serious adverse event (AE) occurred in either the treatment or the control groups. This systematic review indicated that compound formulas of TCM, compared with placebo, can provide benefits to patients with the common cold, with no serious side effects having been identified in the included trials. However, due to the small number of included studies and of participants and the unclear risk of some biases in the included studies, more high-quality, large-scale RCTs are still warranted to clarify fully the effectiveness and safety of compound formulas of TCM in treating the common cold.

  12. Integrating WEPP into the WEPS infrastructure

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) and the Water Erosion Prediction Project (WEPP) share a common modeling philosophy: moving away from primarily empirical models based on indices or "average conditions", and toward a more process-based approach which can be evaluated using ac...

  13. Developing a data infrastructure for a learning health system: the PORTAL network

    PubMed Central

    McGlynn, Elizabeth A; Lieu, Tracy A; Durham, Mary L; Bauck, Alan; Laws, Reesa; Go, Alan S; Chen, Jersey; Feigelson, Heather Spencer; Corley, Douglas A; Young, Deborah Rohm; Nelson, Andrew F; Davidson, Arthur J; Morales, Leo S; Kahn, Michael G

    2014-01-01

    The Kaiser Permanente & Strategic Partners Patient Outcomes Research To Advance Learning (PORTAL) network engages four healthcare delivery systems (Kaiser Permanente, Group Health Cooperative, HealthPartners, and Denver Health) and their affiliated research centers to create a new national network infrastructure that builds on existing relationships among these institutions. PORTAL is enhancing its current capabilities by expanding the scope of the common data model, paying particular attention to incorporating patient-reported data more systematically, implementing new multi-site data governance procedures, and integrating the PCORnet PopMedNet platform across our research centers. PORTAL is partnering with clinical research and patient experts to create cohorts of patients with a common diagnosis (colorectal cancer), a rare diagnosis (adolescents and adults with severe congenital heart disease), and adults who are overweight or obese, including those with pre-diabetes or diabetes, to conduct large-scale observational comparative effectiveness research and pragmatic clinical trials across diverse clinical care settings. PMID:24821738

  14. Key Lessons in Building "Data Commons": The Open Science Data Cloud Ecosystem

    NASA Astrophysics Data System (ADS)

    Patterson, M.; Grossman, R.; Heath, A.; Murphy, M.; Wells, W.

    2015-12-01

    Cloud computing technology has created a shift around data and data analysis by allowing researchers to push computation to data as opposed to having to pull data to an individual researcher's computer. Subsequently, cloud-based resources can provide unique opportunities to capture computing environments used both to access raw data in its original form and also to create analysis products which may be the source of data for tables and figures presented in research publications. Since 2008, the Open Cloud Consortium (OCC) has operated the Open Science Data Cloud (OSDC), which provides scientific researchers with computational resources for storing, sharing, and analyzing large (terabyte and petabyte-scale) scientific datasets. OSDC has provided compute and storage services to over 750 researchers in a wide variety of data intensive disciplines. Recently, internal users have logged about 2 million core hours each month. The OSDC also serves the research community by colocating these resources with access to nearly a petabyte of public scientific datasets in a variety of fields also accessible for download externally by the public. In our experience operating these resources, researchers are well served by "data commons," meaning cyberinfrastructure that colocates data archives, computing, and storage infrastructure and supports essential tools and services for working with scientific data. In addition to the OSDC public data commons, the OCC operates a data commons in collaboration with NASA and is developing a data commons for NOAA datasets. As cloud-based infrastructures for distributing and computing over data become more pervasive, we ask, "What does it mean to publish data in a data commons?" Here we present the OSDC perspective and discuss several services that are key in architecting data commons, including digital identifier services.

  15. Access Control Management for SCADA Systems

    NASA Astrophysics Data System (ADS)

    Hong, Seng-Phil; Ahn, Gail-Joon; Xu, Wenjuan

    The information technology revolution has transformed all aspects of our society, including critical infrastructures, and has led to a significant shift from their old and disparate business models based on proprietary and legacy environments to more open and consolidated ones. Supervisory Control and Data Acquisition (SCADA) systems have been widely used not only for industrial processes but also for some experimental facilities. Due to the nature of open environments, managing SCADA systems should meet various security requirements, since system administrators need to deal with a large number of entities and functions involved in critical infrastructures. In this paper, we identify necessary access control requirements in SCADA systems and articulate access control policies for the simulated SCADA systems. We also attempt to analyze and realize those requirements and policies in the context of role-based access control, which is suitable for simplifying administrative tasks in large-scale enterprises.
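The role-based model the authors advocate can be illustrated with a minimal sketch: permissions attach to roles, users acquire roles, and every action is checked against the caller's effective permissions. The role, permission, and user names below are hypothetical, not the access control policies articulated in the paper.

```python
# Minimal RBAC sketch (illustrative names only). Administration scales
# because permissions are assigned to a handful of roles rather than to
# each individual user.
ROLE_PERMS = {
    "operator": {"read_sensor", "ack_alarm"},
    "engineer": {"read_sensor", "ack_alarm", "change_setpoint"},
    "admin":    {"read_sensor", "ack_alarm", "change_setpoint", "manage_users"},
}
USER_ROLES = {"alice": {"operator"}, "bob": {"engineer", "operator"}}

def can(user, permission):
    """True if any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(can("alice", "change_setpoint"))  # False: operators cannot retune the plant
print(can("bob", "change_setpoint"))    # True: engineers may
```

Unknown users (or unknown roles) simply have no permissions, which gives a safe deny-by-default posture.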

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aderholdt, Ferrol; Caldwell, Blake A.; Hicks, Susan Elaine

    High performance computing environments are often used for a wide variety of workloads, ranging from simulation and data transformation and analysis to complex workflows, to name just a few. These systems may process data at various security levels but in so doing are often enclaved at the highest security posture. This approach places significant restrictions on the users of the system even when processing data at a lower security level, and exposes data at higher levels of confidentiality to a much broader population than otherwise necessary. The traditional approach of isolation, while effective in establishing security enclaves, poses significant challenges for the use of shared infrastructure in HPC environments. This report details the current state of the art in reconfigurable network enclaving through Software Defined Networking (SDN) and Network Function Virtualization (NFV) and their applicability to secure enclaves in HPC environments. SDN and NFV methods are based on a solid foundation of system-wide virtualization. Their purpose is straightforward: the system administrator can deploy networks that are more amenable to customer needs, and at the same time achieve increased scalability, making it easier to increase overall capacity as needed without negatively affecting functionality. The network administration of both the server system and the virtual sub-systems is simplified, allowing control of the infrastructure through well-defined APIs (Application Programming Interfaces). While SDN and NFV technologies offer significant promise in meeting these goals, they also provide the ability to address a significant component of the multi-tenant challenge in HPC environments, namely resource isolation. Traditional HPC systems are built upon scalable high-performance networking technologies designed to meet specific application requirements. Dynamic isolation of resources within these environments has remained difficult to achieve. 
SDN and NFV methodologies provide relevant concepts and available open, standards-based APIs that isolate compute and storage resources within an otherwise common networking infrastructure. Additionally, the integration of the networking APIs within larger system frameworks such as OpenStack provides the tools necessary to establish isolated enclaves dynamically, allowing the benefits of HPC while providing a controlled security structure surrounding these systems.

  17. A Study to Compare the Failure Rates of Current Space Shuttle Ground Support Equipment with the New Pathfinder Equipment and Investigate the Effect that the Proposed GSE Infrastructure Upgrade Might Have to Reduce GSE Infrastructure Failures

    NASA Technical Reports Server (NTRS)

    Kennedy, Barbara J.

    2004-01-01

    The purpose of this study is to compare the current Space Shuttle Ground Support Equipment (GSE) infrastructure with the proposed GSE infrastructure upgrade modification. The methodology includes analyzing the first prototype installation equipment at Launch Pad B, called the "Pathfinder". This study begins by comparing the failure rate of the current components associated with the Hardware Interface Module (HIM) at the Kennedy Space Center to the failure rate of the new Pathfinder components. Quantitative data were gathered specifically on HIM components and on the Pad B Hypergolic Fuel facility and Hypergolic Oxidizer facility areas, which have the upgraded Pathfinder equipment installed. The proposed upgrades include utilizing industrial control modules, software, and a fiber optic network. The results of this study provide evidence that there is a significant difference in the failure rates of the two studied infrastructure equipment components. There is also evidence that the support staff for each infrastructure system is not equal. A recommendation to continue with future upgrades is based on a significant reduction of failures in the newly installed ground system components.

  18. PIMMS tools for capturing metadata about simulations

    NASA Astrophysics Data System (ADS)

    Pascoe, Charlotte; Devine, Gerard; Tourte, Gregory; Pascoe, Stephen; Lawrence, Bryan; Barjat, Hannah

    2013-04-01

    PIMMS (Portable Infrastructure for the Metafor Metadata System) provides a method for consistent and comprehensive documentation of modelling activities that enables the sharing of simulation data and model configuration information. The aim of PIMMS is to package the metadata infrastructure developed by Metafor for CMIP5 so that it can be used by climate modelling groups in UK Universities. PIMMS tools capture information about simulations from the design of experiments to the implementation of experiments via simulations that run models. PIMMS uses the Metafor methodology, which consists of a Common Information Model (CIM), Controlled Vocabularies (CV) and software tools. PIMMS software tools provide for the creation and consumption of CIM content via a web services infrastructure and portal developed by the ES-DOC community. PIMMS metadata integrates with the ESGF data infrastructure via the mapping of vocabularies onto ESGF facets. There are three paradigms of PIMMS metadata collection: Model Intercomparison Projects (MIPs), where a standard set of questions is asked of all models which perform standard sets of experiments; disciplinary-level metadata collection, where a standard set of questions is asked of all models but experiments are specified by users; and bespoke metadata creation, where the users define questions about both models and experiments. Examples will be shown of how PIMMS has been configured to suit each of these three paradigms. In each case PIMMS allows users to provide additional metadata beyond that which is asked for in an initial deployment. The primary target for PIMMS is the UK climate modelling community, where it is common practice to reuse model configurations from other researchers. This culture of collaboration exists in part because climate models are very complex, with many variables that can be modified. 
Therefore it has become common practice to begin a series of experiments by using another climate model configuration as a starting point. Usually this other configuration is provided by a researcher in the same research group or by a previous collaborator with whom there is an existing scientific relationship. Some efforts have been made at the university department level to create documentation but there is a wide diversity in the scope and purpose of this information. The consistent and comprehensive documentation enabled by PIMMS will enable the wider sharing of climate model data and configuration information. The PIMMS methodology assumes an initial effort to document standard model configurations. Once these descriptions have been created users need only describe the specific way in which their model configuration is different from the standard. Thus the documentation burden on the user is specific to the experiment they are performing and fits easily into the workflow of doing their science. PIMMS metadata is independent of data and as such is ideally suited for documenting model development. PIMMS provides a framework for sharing information about failed model configurations for which data are not kept, the negative results that don't appear in scientific literature. PIMMS is a UK project funded by JISC, The University of Reading, The University of Bristol and STFC.
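The "describe only the difference from a standard configuration" workflow above can be sketched in a few lines. All field names and the model name below are illustrative placeholders, not actual CIM or Controlled Vocabulary terms.

```python
# A standard model configuration is documented once; each experiment
# records only its overrides, and the full metadata record is the merge
# of the two. Field names here are hypothetical, not real CIM terms.
STANDARD_CONFIG = {
    "model": "ExampleGCM",      # hypothetical model name
    "resolution": "N96",
    "ocean_coupling": True,
    "aerosol_scheme": "default",
}

def document_experiment(overrides):
    """Merge user-supplied overrides onto the standard configuration and
    record which fields differ, so the documentation burden is limited
    to what the experiment actually changed."""
    record = dict(STANDARD_CONFIG)
    record.update(overrides)
    record["differs_from_standard"] = sorted(overrides)
    return record

exp = document_experiment({"aerosol_scheme": "interactive"})
print(exp["aerosol_scheme"], exp["differs_from_standard"])
```

The `differs_from_standard` field is the key idea: a reader can reconstruct the full configuration from the standard description plus a short list of deviations.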

  19. A framework for quantifying and optimizing the value of seismic monitoring of infrastructure

    NASA Astrophysics Data System (ADS)

    Omenzetter, Piotr

    2017-04-01

    This paper outlines a framework for quantifying and optimizing the value of information from structural health monitoring (SHM) technology deployed on large infrastructure, which may sustain damage in a series of earthquakes (the main shock and the aftershocks). The evolution of the damage state of the infrastructure, without or with SHM, is presented as a time-dependent, stochastic, discrete-state, observable and controllable nonlinear dynamical system. Pre-posterior Bayesian analysis and a decision tree are used for quantifying and optimizing the value of SHM information. An optimization problem is then formulated: how to decide on the adoption of SHM, and how to optimally manage the usage and operations of the possibly damaged infrastructure and its repair schedule using the information from SHM. The objective function to minimize is the expected total cost or risk.
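The pre-posterior analysis can be illustrated with a toy two-state example: the value of (imperfect) SHM information is the drop in expected cost it enables. The costs, prior, and sensor likelihoods below are invented for illustration and are not taken from the paper.

```python
def expected_cost(p_damage, c_repair=50.0, c_fail=300.0):
    """Optimal-action cost for a given damage probability:
    either repair at a fixed cost, or operate and risk failure."""
    return min(c_repair, p_damage * c_fail)

# Prior decision (no monitoring): with P(damage)=0.2 it is cheaper to repair.
p_d = 0.2
cost_no_shm = expected_cost(p_d)               # min(50, 60) = 50 -> repair

# Imperfect SHM sensor: P(alarm | damage)=0.9, P(alarm | no damage)=0.1
p_alarm = 0.9 * p_d + 0.1 * (1 - p_d)          # 0.26
p_d_given_alarm = 0.9 * p_d / p_alarm          # posterior after an alarm
p_d_given_quiet = 0.1 * p_d / (1 - p_alarm)    # posterior after no alarm

# Pre-posterior expected cost: average the optimal posterior decisions
# over the possible monitoring outcomes, weighted by their probabilities.
cost_shm = (p_alarm * expected_cost(p_d_given_alarm)
            + (1 - p_alarm) * expected_cost(p_d_given_quiet))

value_of_information = cost_no_shm - cost_shm
print(round(value_of_information, 2))  # 31.0
```

If this value exceeds the cost of deploying and operating the SHM system, adoption is worthwhile; the paper's framework generalizes this one-shot calculation to a sequence of earthquakes and repair decisions.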

  20. Cycling infrastructure for reducing cycling injuries in cyclists.

    PubMed

    Mulvaney, Caroline A; Smith, Sherie; Watson, Michael C; Parkin, John; Coupland, Carol; Miller, Philip; Kendrick, Denise; McClintock, Hugh

    2015-12-10

    Cycling is an attractive form of transport. It is beneficial to the individual as a form of physical activity that may fit more readily into an individual's daily routine, such as cycling to work and to the shops, than other physical activities such as visiting a gym. Cycling is also beneficial to the wider community and the environment as a result of fewer motorised journeys. Cyclists are seen as vulnerable road users who are frequently in close proximity to larger and faster motorised vehicles. Cycling infrastructure aims to make cycling both more convenient and safer for cyclists. This review is needed to guide transport planning. The objectives were to: 1. evaluate the effects of different types of cycling infrastructure on reducing cycling injuries in cyclists, by type of infrastructure; 2. evaluate the effects of cycling infrastructure on reducing the severity of cycling injuries in cyclists; 3. evaluate the effects of cycling infrastructure on reducing cycling injuries in cyclists with respect to age, sex and social group. We ran the most recent search on 2nd March 2015. We searched the Cochrane Injuries Group Specialised Register, CENTRAL (The Cochrane Library), MEDLINE (OvidSP), Embase Classic + Embase (OvidSP), PubMed and 10 other databases. We searched websites, handsearched conference proceedings, screened reference lists of included studies and previously published reviews, and contacted relevant organisations. We included randomised controlled trials, cluster randomised controlled trials, controlled before-after studies, and interrupted time series studies which evaluated the effect of cycling infrastructure (such as cycle lanes, tracks or paths, speed management, or roundabout design) on cyclist injury or collision rates. Studies had to include a comparator, that is, either no infrastructure or a different type of infrastructure. We excluded studies that assessed collisions that occurred as a result of competitive cycling. 
Two review authors examined the titles and abstracts of papers obtained from searches to determine eligibility. Two review authors extracted data from the included trials and assessed the risk of bias. We carried out a meta-analysis using the random-effects model where at least three studies reported the same intervention and outcome. Where there were sufficient studies, as a secondary analysis we accounted for changes in cyclist exposure in the calculation of the rate ratios. We rated the quality of the evidence as 'high', 'moderate', 'low' or 'very low' according to the GRADE approach for the installation of cycle routes and networks. We identified 21 studies for inclusion in the review: 20 controlled before-after (CBA) studies and one interrupted time series (ITS) study. These evaluated a range of infrastructure including cycle lanes, advanced stop lines, use of colour, cycle tracks, cycle paths, management of the road network, speed management, cycle routes and networks, roundabout design and packages of measures. No studies reported medically-attended or self-reported injuries. There was no evidence that cycle lanes reduce the rate of cycle collisions (rate ratio 1.21, 95% CI 0.70 to 2.08). Taking into account cycle flow, there was no difference in collisions for cyclists using cycle routes and networks compared with cyclists not using cycle routes and networks (RR 0.40, 95% CI 0.15 to 1.05). There was statistically significant heterogeneity between the studies (I² = 75%, Chi² = 8.00, df = 2, P = 0.02) for the analysis adjusted for cycle flow. We judged the quality of the evidence regarding cycle routes and networks as very low, and we are very uncertain about the estimate. These analyses are based on findings from CBA studies. From data presented narratively, the use of 20 mph speed restrictions in urban areas may be effective at reducing cyclist collisions. 
Redesigning specific parts of cycle routes that may be particularly busy or complex in terms of traffic movement may be beneficial to cyclists in terms of reducing the risk of collision. Generally, the conversion of intersections to roundabouts may increase the number of cycle collisions. In particular, the conversion of intersections to roundabouts with cycle lanes marked as part of the circulating carriageway increased cycle collisions. However, the conversion of intersections with and without signals to roundabouts with cycle paths may reduce the odds of collision. Both continuing a cycle lane across the mouth of a side road with a give way line onto the main road, and cycle tracks, may increase the risk of injury collisions in cyclists. However, these conclusions are uncertain, being based on a narrative review of findings from included studies. There is a lack of evidence that cycle paths or advanced stop lines either reduce or increase injury collisions in cyclists. There is also insufficient evidence to draw any robust conclusions concerning the effect of cycling infrastructure on cycling collisions in terms of severity of injury, sex, age, and level of social deprivation of the casualty. In terms of quality of the evidence, there was little matching of intervention and control sites. In many studies, the comparability of the control area to the intervention site was unclear, and few studies provided information on other cycling infrastructure that may be in place in the control and intervention areas. The majority of studies analysed data routinely collected by organisations external to the study team, thus reducing the risk of bias in terms of systematic differences in assessing outcomes between the control and intervention groups. Some authors did not take regression-to-mean effects into account when examining changes in collisions. 
Longer data collection periods pre- and post-installation would allow for regression-to-mean effects and also seasonal and time trends in traffic volume to be observed. Few studies adjusted cycle collision rates for exposure. Generally, there is a lack of high quality evidence to be able to draw firm conclusions as to the effect of cycling infrastructure on cycling collisions. There is a lack of rigorous evaluation of cycling infrastructure.
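The random-effects pooling of rate ratios used in the review can be sketched with the standard DerSimonian-Laird estimator. The three (log rate ratio, standard error) pairs below are invented for illustration and are not data from the review's included studies.

```python
import math

# DerSimonian-Laird random-effects pooling of log rate ratios.
# The study-level inputs are illustrative, NOT the review's data.
studies = [(math.log(1.8), 0.15),
           (math.log(0.6), 0.15),
           (math.log(1.4), 0.15)]

y = [lr for lr, _ in studies]
w = [1 / se**2 for _, se in studies]                      # fixed-effect weights

ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))    # Cochran's Q
df = len(studies) - 1
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)                             # between-study variance

w_re = [1 / (se**2 + tau2) for _, se in studies]          # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))

rr = math.exp(pooled)                                     # pooled rate ratio
ci = (math.exp(pooled - 1.96 * se_pooled),
      math.exp(pooled + 1.96 * se_pooled))
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0       # I² heterogeneity
print(round(rr, 2), [round(x, 2) for x in ci], round(i2, 1))
```

With heterogeneous inputs like these, tau² is positive and the pooled confidence interval widens relative to a fixed-effect analysis, which is exactly why a confidence interval such as the review's 0.70 to 2.08 can straddle 1 even when individual studies look decisive.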

  1. Helix Nebula: Enabling federation of existing data infrastructures and data services to an overarching cross-domain e-infrastructure

    NASA Astrophysics Data System (ADS)

    Lengert, Wolfgang; Farres, Jordi; Lanari, Riccardo; Casu, Francesco; Manunta, Michele; Lassalle-Balier, Gerard

    2014-05-01

    Helix Nebula has established a growing public-private partnership of more than 30 commercial cloud providers, SMEs, and publicly funded research organisations and e-infrastructures. The Helix Nebula strategy is to establish a federated cloud service across Europe. Three high-profile flagships, sponsored by CERN (high energy physics), EMBL (life sciences) and ESA/DLR/CNES/CNR (earth science), have been deployed and extensively tested within this federated environment. The commitments behind these initial flagships have created a critical mass that attracts suppliers and users to the initiative, to work together towards an "Information as a Service" market place. Significant progress has been achieved in implementing the following 4 programmatic goals (as outlined in the strategic plan, Ref. 1): • Goal #1: Establish a Cloud Computing Infrastructure for the European Research Area (ERA) serving as a platform for innovation and evolution of the overall infrastructure. • Goal #2: Identify and adopt suitable policies for trust, security and privacy that can be provided on a European level by the European Cloud Computing framework and infrastructure. • Goal #3: Create a light-weight governance structure for the future European Cloud Computing Infrastructure that involves all the stakeholders and can evolve over time as the infrastructure, services and user base grow. • Goal #4: Define a funding scheme involving the three stakeholder groups (service suppliers, users, and EC and national funding agencies) in a Public-Private-Partnership model to implement a Cloud Computing Infrastructure that delivers a sustainable business environment adhering to European-level policies. Now, in 2014, a first version of this generic cross-domain e-infrastructure is ready to go into operations, building on a federation of European industry and contributors (data, tools, knowledge, ...). This presentation describes how Helix Nebula is being used in the domain of earth science, focusing on geohazards. 
The so-called "Supersite Exploitation Platform" (SSEP) provides scientists with an overarching federated e-infrastructure offering very fast access to (i) large volumes of data (EO/non-space data), (ii) computing resources (e.g. hybrid cloud/grid), (iii) processing software (e.g. toolboxes, RTMs, retrieval baselines, visualization routines), and (iv) general platform capabilities (e.g. user management and access control, accounting, information portal, collaborative tools, social networks, etc.). In this federation each data provider remains in full control of the implementation of its data policy. This presentation outlines the architecture (technical and services) supporting very heterogeneous science domains as well as the procedures for newcomers to join the Helix Nebula Market Place. Ref. 1: http://cds.cern.ch/record/1374172/files/CERN-OPEN-2011-036.pdf

  2. Center for Infrastructure Assurance and Security - Attack and Defense Exercises

    DTIC Science & Technology

    2010-06-01

    conclusion of the research funding under this program. 4.1. Steganography Detection Tools. Steganography is the art of hiding information in a cover image ... Some of the more common methods are altering the LSB (least significant bit) of the pixels of the image, altering the palette of an RGB image, or altering parts of the image in the transform domain. Algorithms that embed information in the transform domain are usually more robust to common ...
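The LSB method mentioned in the snippet can be sketched in a few lines: the least significant bit of each pixel value is overwritten with one payload bit, so each pixel changes by at most 1 and the cover looks visually unchanged. The toy "image" below is a flat list of byte values, not a real cover image.

```python
def embed_lsb(pixels, message_bits):
    """Hide a bit string in the least significant bits of a flat
    sequence of 8-bit pixel values (one payload bit per pixel)."""
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & 0xFE) | int(bit)   # clear the LSB, set the payload bit
    return out

def extract_lsb(pixels, n_bits):
    """Read the first n_bits LSBs back out of the cover."""
    return "".join(str(p & 1) for p in pixels[:n_bits])

cover = [200, 13, 37, 90, 255, 0, 128, 64]   # toy 8-pixel "image"
secret = "1011"
stego = embed_lsb(cover, secret)
print(extract_lsb(stego, 4))  # -> 1011
```

This per-pixel +-1 perturbation is also what makes spatial-domain LSB embedding statistically detectable, and why, as the snippet notes, transform-domain embedding tends to be more robust.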

  3. Common solutions for power, communication and robustness in operations of large measurement networks within Research Infrastructures

    NASA Astrophysics Data System (ADS)

    Huber, Robert; Beranzoli, Laura; Fiebig, Markus; Gilbert, Olivier; Laj, Paolo; Mazzola, Mauro; Paris, Jean-Daniel; Pedersen, Helle; Stocker, Markus; Vitale, Vito; Waldmann, Christoph

    2017-04-01

    European Environmental Research Infrastructures (RIs) frequently comprise in situ observatories, from large-scale networks of platforms or sites to local networks of various sensors. Network operation is usually a cumbersome aspect of these RIs, which face specific technological problems related to operations in remote areas, maintenance of the network, transmission of observation values, etc. Robust inter-connection within and across these networks is still in its infancy, and the burden increases with the remoteness of the station, the harshness of environmental conditions, and the unavailability of classic communication systems, which is a common feature here. Despite existing RIs having developed ad-hoc solutions to overcome specific problems, and innovative technologies becoming available, no common approach yet exists. Within the European project ENVRIplus, a dedicated work package aims to stimulate common network operation technologies and approaches in terms of power supply and storage, robustness, and data transmission. Major objectives of this task are to review existing technologies and RI requirements, propose innovative solutions and evaluate the standardization potential prior to wider deployment across networks. Focus areas within these efforts are: improving energy production and storage units, testing the robustness of RI equipment under extreme conditions, and methodologies for robust data transmission. We will introduce current project activities, which are coordinated at various levels including the engineering as well as the data management perspective, and explain how environmental RIs can benefit from the developments.

  4. Shear Bond Strength of Composite and Ceromer Superstructures to Direct Laser Sintered and Ni-Cr-Based Infrastructures Treated with KTP, Nd:YAG, and Er:YAG Lasers: An Experimental Study.

    PubMed

    Gorler, Oguzhan; Hubbezoglu, Ihsan; Ulgey, Melih; Zan, Recai; Guner, Kubra

    2018-04-01

    The aim of this study was to examine the shear bond strength (SBS) of ceromer and nanohybrid composite to direct laser sintered (DLS) Cr-Co and Ni-Cr-based metal infrastructures treated with erbium-doped yttrium aluminum garnet (Er:YAG), neodymium-doped yttrium aluminum garnet (Nd:YAG), and potassium titanyl phosphate (KTP) laser modalities in in vitro settings. Experimental specimens comprised four sets (n = 32): two DLS infrastructures with ceromer and nanohybrid composite superstructures and two Ni-Cr-based infrastructures with ceromer and nanohybrid composite superstructures. Within each infrastructure set, the specimens were randomized into four treatment modalities (n = 8): no treatment (controls) and Er:YAG, Nd:YAG, and KTP lasers. The infrastructures were prepared in the final dimensions of 7 × 3 mm. Ceromer and nanohybrid composite were applied to the infrastructures after their surface treatments according to randomization. The SBS of specimens was measured to test the efficacy of the surface treatments. Representative scanning electron microscopy (SEM) images after laser treatments were obtained. Overall, in the current experimental settings, Nd:YAG, KTP, and Er:YAG lasers, in order of efficacy, are effective in improving the bonding of ceromer and nanohybrid composite to the DLS and Ni-Cr-based infrastructures (p < 0.05). Nd:YAG laser is more effective in the DLS/ceromer infrastructures (p < 0.05). KTP laser, as the second most effective preparation, is more effective in the DLS/ceromer infrastructures (p < 0.05). SEM findings were in moderate accordance with these results. The results of this study supported the bonding of ceromer and nanohybrid composite superstructures to the DLS and Ni-Cr-based infrastructures, suggesting that laser modalities, in order of success Nd:YAG, KTP, and Er:YAG, are effective in increasing the bonding of these structures.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasquale, David A.; Hansen, Richard G.

    This paper discusses command and control issues relating to the operation of Incident Command Posts (ICPs) and Emergency Operations Centers (EOCs) in the surrounding area jurisdictions following the detonation of an Improvised Nuclear Device (IND). Although many aspects of command and control will be similar to what is considered to be normal operations using the Incident Command System (ICS) and the National Incident Management System (NIMS), the IND response will require many new procedures and associations in order to design and implement a successful response. The scope of this white paper is to address the following questions: • Would the current command and control framework change in the face of an IND incident? • What would the management of operations look like as the event unfolded? • How do neighboring and/or affected jurisdictions coordinate with the state? • If the target area's command and control infrastructure is destroyed or disabled, how could neighboring jurisdictions assist with command and control of the targeted jurisdiction? • How would public health and medical services fit into the command and control structure? • How can pre-planning and common policies improve coordination and response effectiveness? • Where can public health officials get federal guidance on radiation, contamination and other health and safety issues for IND response planning and operations?

  6. Centralized Monitoring of the Microsoft Windows-based computers of the LHC Experiment Control Systems

    NASA Astrophysics Data System (ADS)

    Varela Rodriguez, F.

    2011-12-01

The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors in, and troubleshoot such a large system. Although monitoring of the performance of the Linux computers and their processes had been available since the first versions of the tool, only recently has the software package been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and the functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and it has already proven to be very effective in optimizing the running systems and in detecting misbehaving processes or nodes.

  7. Methods for Determining Aircraft Surface State at Lesser-Equipped Airports

    NASA Technical Reports Server (NTRS)

    Roach, Keenan; Null, Jody

    2016-01-01

    Tactical departure scheduling within a terminal airspace must accommodate a wide spectrum of surveillance and communication capabilities at multiple airports. The success of such a scheduler is highly dependent upon the knowledge of a departure's state while it is still on the surface. Airports within a common Terminal RAdar CONtrol (TRACON) airspace possess varying levels of surface surveillance infrastructure which directly impacts uncertainties in wheels-off times. Large airports have access to surface surveillance data, which is shared with the TRACON, while lesser-equipped airports still rely solely on controllers in Air Traffic Control Towers (Towers). Coordination between TRACON and Towers can be greatly enhanced when the TRACON controller has access to the surface surveillance and the associated decision-support tools at well-equipped airports. Similar coordination at lesser-equipped airports is still based on verbal communications. This paper investigates possible methods to reduce the uncertainty in wheels-off time predictions at the lesser-equipped airports through the novel use of Over-the-Air (OTA) data transmissions. We also discuss the methods and equipment used to collect sample data at lesser-equipped airports within a large US TRACON, as well as the data evaluation to determine if meaningful information can be extracted from it.

  8. Decentral Smart Grid Control

    NASA Astrophysics Data System (ADS)

    Schäfer, Benjamin; Matthiae, Moritz; Timme, Marc; Witthaut, Dirk

    2015-01-01

Stable operation of complex flow and transportation networks requires balanced supply and demand. For the operation of electric power grids—due to their increasing fraction of renewable energy sources—a pressing challenge is to fit the fluctuations in decentralized supply to the distributed and temporally varying demands. To achieve this goal, common smart grid concepts suggest collecting consumer demand data, centrally evaluating them given current supply, and sending price information back to customers for them to decide about usage. Besides restrictions regarding cyber security, privacy protection and large required investments, it remains unclear how such central smart grid options guarantee overall stability. Here we propose a Decentral Smart Grid Control, where the price is directly linked to the local grid frequency at each customer. The grid frequency provides all necessary information about the current power balance, such that it is sufficient to match supply and demand without the need for a centralized IT infrastructure. We analyze the performance and the dynamical stability of the power grid with such a control system. Our results suggest that the proposed Decentral Smart Grid Control is feasible independent of effective measurement delays, if frequencies are averaged over sufficiently large time intervals.
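The frequency-linked pricing idea in this record can be sketched in a few lines of Python. This is a toy illustration under assumed constants (base price, gain, window size), not the paper's actual control law; all names are invented for the example:

```python
from collections import deque

NOMINAL_HZ = 50.0  # European nominal grid frequency

class DecentralPriceController:
    """Toy controller: the local electricity price tracks the deviation of the
    time-averaged grid frequency from its nominal value. Generation surplus
    raises frequency -> price drops, encouraging consumption; a deficit lowers
    frequency -> price rises, curbing demand. Constants are illustrative."""

    def __init__(self, base_price=0.30, gain=2.0, window=10):
        self.base_price = base_price         # EUR/kWh at nominal frequency
        self.gain = gain                     # price sensitivity (EUR/kWh per Hz)
        self.samples = deque(maxlen=window)  # averaging window, as the abstract suggests

    def observe(self, freq_hz):
        self.samples.append(freq_hz)

    def price(self):
        avg = sum(self.samples) / len(self.samples)
        # Under-frequency (demand > supply) => positive deviation => higher price
        return self.base_price + self.gain * (NOMINAL_HZ - avg)

ctrl = DecentralPriceController()
for f in [49.95, 49.9, 49.92]:  # sustained under-frequency
    ctrl.observe(f)
print(round(ctrl.price(), 4))   # price above base, since demand exceeds supply
```

Averaging over the `deque` window is what the abstract identifies as the key to stability under measurement delays.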

  9. Large funding inflows, limited local capacity and emerging disease control priorities: a situational assessment of tuberculosis control in Myanmar.

    PubMed

    Khan, Mishal S; Schwanke-Khilji, Sara; Yoong, Joanne; Tun, Zaw Myo; Watson, Samantha; Coker, Richard James

    2017-10-01

There are numerous challenges in planning and implementing effective disease control programmes in Myanmar, which is undergoing internal political and economic transformations whilst experiencing massive inflows of external funding. The objective of our study-involving key informant discussions, participant observations and linked literature reviews-was to analyse how tuberculosis (TB) control strategies in Myanmar are influenced by the broader political, economic, epidemiological and health systems context using the Systemic Rapid Assessment conceptual and analytical framework. Our findings indicate that the substantial influx of donor funding, in the order of one billion dollars over a 5-year period, may be too rapid for the country's infrastructure to effectively utilize. TB control strategies thus far have tended to favour medical or technological approaches rather than infrastructure development, and appear to be driven more by a perceived urgency to 'do something' than informed by evidence of cost-effectiveness and sustainable long-term impact. Progress has been made towards ambitious targets for scaling up treatment of drug-resistant TB, although there are concerns about ensuring quality of care. We also find substantial disparities in health and funding allocation between regions and ethnic groups, which are related to the political context and health system infrastructure. Our situational assessment of emerging TB control strategies in this transitioning health system indicates that large investments by international donors may be pushing Myanmar to scale up TB and drug-resistant TB services too quickly, without due consideration given to the health system (service delivery infrastructure, human resource capacity, quality of care, equity) and epidemiological (evidence of effectiveness of interventions, prevention of new cases) context. © The Author 2017. 
Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Relationships and trends of E. Coli, human-associated Bacteroides, and pathogens in the Proctor Creek Watershed

    EPA Science Inventory

Urban surface waters can be impacted by anthropogenic sources such as impervious surfaces, sanitary and storm sewers, and failing infrastructure. Fecal indicator bacteria (FIB) and microbial source tracking (MST) markers are common gauges of stream water quality; however, little...

  11. 47 CFR 59.3 - Information concerning deployment of new services and equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... services and equipment, including any software or upgrades of software integral to the use or operation of... services and equipment. 59.3 Section 59.3 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INFRASTRUCTURE SHARING § 59.3 Information concerning deployment of...

  12. Canada-U.S. Relations

    DTIC Science & Technology

    2009-05-12

    56 RBC Financial Group, Daily Forex Fundamentals, February 27, 2009. [ http...www.actionforex.com/fundamental- analysis/daily- forex -fundamentals/canada%27s-fourth%11quarter-current-account-moves-into-deficit-after-nine-years- of-surpluses...sharing, infrastructure improvements, improvement of compatible immigration databases , visa policy coordination, common biometric identifiers in

  13. HANDBOOK: SEWER SYSTEM INFRASTRUCTURE ANALYSIS AND REHABILITATION

    EPA Science Inventory

Many of our Nation's sewer systems date back to the 19th Century when brick sewers were common. These and more recent sewer systems can be expected to fail in time, but because they are placed underground, signs of accelerated deterioration and capacity limitations are not readily...

  14. Cyber-Critical Infrastructure Protection Using Real-Time Payload-Based Anomaly Detection

    NASA Astrophysics Data System (ADS)

    Düssel, Patrick; Gehl, Christian; Laskov, Pavel; Bußer, Jens-Uwe; Störmann, Christof; Kästner, Jan

With an increasing demand for inter-connectivity and protocol standardization, modern cyber-critical infrastructures are exposed to a multitude of serious threats that may give rise to severe damage for life and assets without the implementation of proper safeguards. Thus, we propose a method that is capable of reliably detecting unknown, exploit-based attacks on cyber-critical infrastructures carried out over the network. We illustrate the effectiveness of the proposed method by conducting experiments on network traffic that can be found in modern industrial control systems. Moreover, we provide results of throughput measurements which demonstrate the real-time capabilities of our system.
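A minimal sketch of the flavour of payload-based anomaly detection this record describes: build a byte n-gram profile from "normal" traffic and flag payloads far from the centroid. This is an illustrative centroid model in the spirit of such detectors, not the authors' exact method; the payloads and threshold logic are invented:

```python
from collections import Counter
import math

def ngrams(payload: bytes, n=2):
    """Count overlapping byte n-grams in a payload."""
    return Counter(payload[i:i + n] for i in range(len(payload) - n + 1))

def normalize(counts: Counter):
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def centroid(profiles):
    """Average the n-gram distributions of the training payloads."""
    keys = set().union(*profiles)
    m = len(profiles)
    return {k: sum(p.get(k, 0.0) for p in profiles) / m for k in keys}

def distance(p, q):
    """Euclidean distance between two sparse n-gram distributions."""
    keys = set(p) | set(q)
    return math.sqrt(sum((p.get(k, 0.0) - q.get(k, 0.0)) ** 2 for k in keys))

# "Train" on normal-looking control traffic, then score payloads.
normal = [b"READ COIL 17", b"READ COIL 18", b"READ COIL 19"]
model = centroid([normalize(ngrams(p)) for p in normal])
score_ok = distance(model, normalize(ngrams(b"READ COIL 20")))
score_bad = distance(model, normalize(ngrams(b"\x90\x90\x90\x90/bin/sh")))
print(score_ok < score_bad)  # the shellcode-like payload is farther from the centroid
```

Unknown exploits can be flagged this way because their byte statistics differ from the protocol's normal profile, without any attack signatures.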

  15. A Transparent Translation from Legacy System Model into Common Information Model: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Fei; Simpson, Jeffrey; Zhang, Yingchen

Advances in the smart grid are forcing utilities towards better monitoring, control and analysis of distribution systems, and require extensive cyber-based intelligent systems and applications to realize various functionalities. The ability of systems, or components within systems, to interact and exchange services or information with each other is the key to the success of smart grid technologies, and it requires an efficient information exchange and data sharing infrastructure. The Common Information Model (CIM) is a standard that allows different applications to exchange information about an electrical system, and it has become a widely accepted solution for information exchange among different platforms and applications. However, most existing legacy systems are not developed using CIM, but using their own languages. Integrating such legacy systems is a challenge for utilities, and the appropriate utilization of the integrated legacy systems is even more intricate. Thus, this paper has developed an approach and open-source tool to translate legacy system models into CIM format. The developed tool is tested on a commercial distribution management system and simulation results have proved its effectiveness.
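The kind of translation this record describes can be sketched as a field-by-field mapping with unit harmonization. The legacy field names below are invented for illustration, and real CIM exchanges use RDF/XML profiles rather than plain dictionaries; `ACLineSegment` and `mRID` are genuine CIM names:

```python
# Hypothetical legacy line record -> minimal CIM-style ACLineSegment mapping.
LEGACY_TO_CIM = {
    "id":     "mRID",
    "len_ft": "length",  # CIM expresses length in metres
    "r_ohm":  "r",
    "x_ohm":  "x",
}

FEET_TO_M = 0.3048

def translate_line(legacy: dict) -> dict:
    """Translate one legacy line record into a flat CIM-style dictionary."""
    cim = {"class": "ACLineSegment"}
    for src, dst in LEGACY_TO_CIM.items():
        value = legacy[src]
        if src == "len_ft":
            value = round(value * FEET_TO_M, 3)  # unit harmonization
        cim[dst] = value
    return cim

record = {"id": "L-204", "len_ft": 1000.0, "r_ohm": 0.12, "x_ohm": 0.38}
print(translate_line(record))
```

A real translator would also resolve topology (terminals, connectivity nodes) and validate against the target CIM profile, which is where most of the integration effort lies.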

  16. NATO initial common operational picture capability project

    NASA Astrophysics Data System (ADS)

    Fanti, Laura; Beach, David

    2002-08-01

    The Common Operational Picture (COP) capability can be defined as the ability to display on a single screen integrated views of the Recognized Maritime, Air and Ground Pictures, enriched by other tactical data, such as theater plans, assets, intelligence and logistics information. The purpose of the COP capability is to provide military forces a comprehensive view of the battle space, thereby enhancing situational awareness and the decision-making process across the military command and control spectrum. The availability of a COP capability throughout the command structure is a high priority operational requirement in NATO. A COP capability for NATO is being procured and implemented in an incremental way within the NATO Automated Information System (Bi-SC AIS) Functional Services programme under the coordination of the NATO Consultation, Command and Control Agency (NC3A) Integrated Programme Team 5 (IPT5). The NATO Initial COP (iCOP) capability project, first step of this evolutionary procurement, will provide an initial COP capability to NATO in a highly pragmatic and low-risk fashion, by using existing operational communications infrastructure and NATO systems, i.e. the NATO-Wide Integrated Command and Control Software for Air Operations (ICC), the Maritime Command and Control Information System (MCCIS), and the Joint Operations and Intelligence Information System (JOIIS), which will provide respectively the Recognized Air, Maritime and Ground Pictures. This paper gives an overview of the NATO Initial COP capability project, including its evolutionary implementation approach, and describes the technical solution selected to satisfy the urgent operational requirement in a timely and cost effective manner.

  17. A cyber infrastructure for the SKA Telescope Manager

    NASA Astrophysics Data System (ADS)

    Barbosa, Domingos; Barraca, João. P.; Carvalho, Bruno; Maia, Dalmiro; Gupta, Yashwant; Natarajan, Swaminathan; Le Roux, Gerhard; Swart, Paul

    2016-07-01

The Square Kilometre Array Telescope Manager (SKA TM) will be responsible for assisting the SKA Operations and Observation Management, carrying out System diagnosis and collecting Monitoring and Control data from the SKA subsystems and components. To provide adequate compute resources, scalability, operation continuity and high availability, as well as strict Quality of Service, the TM cyber-infrastructure (embodied in the Local Infrastructure - LINFRA) consists of COTS hardware and infrastructural software (for example: server monitoring software, host operating system, virtualization software, device firmware), providing a specially tailored Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solution. The TM infrastructure provides services in the form of computational power, software defined networking, power, storage abstractions, and high level, state of the art IaaS and PaaS management interfaces. This cyber platform will be tailored to each of the two SKA Phase 1 telescope (SKA_MID in South Africa and SKA_LOW in Australia) instances, each presenting different computational and storage infrastructures and conditioned by location. This cyber platform will provide a compute model enabling TM to manage the deployment and execution of its multiple components (observation scheduler, proposal submission tools, M&C components, forensic tools, several databases, etc.). In this sense, the TM LINFRA is primarily focused towards the provision of isolated instances, mostly resorting to virtualization technologies, while defaulting to bare hardware if specifically required due to performance, security, availability, or other requirements.

  18. Concept of intellectual charging system for electrical and plug-in hybrid vehicles in Russian Federation

    NASA Astrophysics Data System (ADS)

    Kolbasov, A.; Karpukhin, K.; Terenchenko, A.; Kavalchuk, I.

    2018-02-01

Electric vehicles have become the most common solution for improving the sustainability of transportation systems around the world. Despite all their benefits, wide adoption of electric vehicles requires major changes in the infrastructure, including grid adaptation to the rapidly increased power demand and development of the Connected Car concept. This paper discusses approaches to improve the usability of electric vehicles by creating suitable web services with possible vehicle-to-vehicle, vehicle-to-infrastructure, and vehicle-to-grid connections. The developed concept combines information about electrical loads on the grid in a specific direction, navigation information from the on-board system, existing and empty charging slots, and power availability. In addition, this paper presents a universal concept for photovoltaic-integrated charging stations connected to the developed information systems. It helps to achieve rapid adaptation of the overall infrastructure to the needs of electric vehicle users with minor changes to the existing grid and loads.

  19. Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) Technology Infrastructure for a Distributed Data Network

    PubMed Central

    Schilling, Lisa M.; Kwan, Bethany M.; Drolshagen, Charles T.; Hosokawa, Patrick W.; Brandt, Elias; Pace, Wilson D.; Uhrich, Christopher; Kamerick, Michael; Bunting, Aidan; Payne, Philip R.O.; Stephens, William E.; George, Joseph M.; Vance, Mark; Giacomini, Kelli; Braddy, Jason; Green, Mika K.; Kahn, Michael G.

    2013-01-01

    Introduction: Distributed Data Networks (DDNs) offer infrastructure solutions for sharing electronic health data from across disparate data sources to support comparative effectiveness research. Data sharing mechanisms must address technical and governance concerns stemming from network security and data disclosure laws and best practices, such as HIPAA. Methods: The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) deploys TRIAD grid technology, a common data model, detailed technical documentation, and custom software for data harmonization to facilitate data sharing in collaboration with stakeholders in the care of safety net populations. Data sharing partners host TRIAD grid nodes containing harmonized clinical data within their internal or hosted network environments. Authorized users can use a central web-based query system to request analytic data sets. Discussion: SAFTINet DDN infrastructure achieved a number of data sharing objectives, including scalable and sustainable systems for ensuring harmonized data structures and terminologies and secure distributed queries. Initial implementation challenges were resolved through iterative discussions, development and implementation of technical documentation, governance, and technology solutions. PMID:25848567

  20. Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) Technology Infrastructure for a Distributed Data Network.

    PubMed

    Schilling, Lisa M; Kwan, Bethany M; Drolshagen, Charles T; Hosokawa, Patrick W; Brandt, Elias; Pace, Wilson D; Uhrich, Christopher; Kamerick, Michael; Bunting, Aidan; Payne, Philip R O; Stephens, William E; George, Joseph M; Vance, Mark; Giacomini, Kelli; Braddy, Jason; Green, Mika K; Kahn, Michael G

    2013-01-01

    Distributed Data Networks (DDNs) offer infrastructure solutions for sharing electronic health data from across disparate data sources to support comparative effectiveness research. Data sharing mechanisms must address technical and governance concerns stemming from network security and data disclosure laws and best practices, such as HIPAA. The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) deploys TRIAD grid technology, a common data model, detailed technical documentation, and custom software for data harmonization to facilitate data sharing in collaboration with stakeholders in the care of safety net populations. Data sharing partners host TRIAD grid nodes containing harmonized clinical data within their internal or hosted network environments. Authorized users can use a central web-based query system to request analytic data sets. SAFTINet DDN infrastructure achieved a number of data sharing objectives, including scalable and sustainable systems for ensuring harmonized data structures and terminologies and secure distributed queries. Initial implementation challenges were resolved through iterative discussions, development and implementation of technical documentation, governance, and technology solutions.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

A new report from the National Renewable Energy Laboratory (NREL) explores the role of alternative fuels and energy efficient vehicles in motor fuel taxes. Throughout the United States, it is common practice for federal, state, and local governments to tax motor fuels on a per gallon basis to fund construction and maintenance of our transportation infrastructure. In recent years, however, expenses have outpaced revenues, creating substantial funding shortfalls that have required supplemental funding sources. While rising infrastructure costs and the decreasing purchasing power of the gas tax are significant factors contributing to the shortfall, the increased use of alternative fuels and more stringent fuel economy standards are also exacerbating revenue shortfalls. The current dynamic places vehicle efficiency and petroleum use reduction policies at direct odds with policies promoting robust transportation infrastructure. Understanding the energy, transportation, and environmental tradeoffs of motor fuel tax policies can be complicated, but recent experiences at the state level are helping policymakers align their energy and environmental priorities with highway funding requirements.
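The revenue dynamic this record describes is easy to see in a back-of-envelope calculation. All numbers below (tax rate, mileage, fuel economies, EV share) are hypothetical illustrations, not figures from the NREL report:

```python
# Per-vehicle gas-tax revenue under a per-gallon tax; EVs pay no per-gallon tax.
TAX_PER_GALLON = 0.30   # USD (hypothetical)
MILES_PER_YEAR = 12_000  # hypothetical average annual mileage

def annual_fuel_tax(mpg: float, ev_fraction: float = 0.0) -> float:
    """Expected annual per-vehicle fuel-tax revenue for a fleet where a
    fraction of vehicles are electric and the rest achieve the given mpg."""
    return (1 - ev_fraction) * MILES_PER_YEAR / mpg * TAX_PER_GALLON

baseline = annual_fuel_tax(mpg=25)                     # older, less efficient fleet
efficient = annual_fuel_tax(mpg=40, ev_fraction=0.10)  # higher mpg plus 10% EVs
print(round(baseline, 2), round(efficient, 2))
```

Higher fuel economy and EV adoption each shrink taxed gallons while road use (and maintenance cost) stays constant, which is exactly the shortfall the report examines.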

  2. Modeling plug-in electric vehicle charging demand with BEAM: the framework for behavior energy autonomy mobility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheppard, Colin; Waraich, Rashid; Campbell, Andrew

This report summarizes the BEAM modeling framework (Behavior, Energy, Mobility, and Autonomy) and its application to simulating plug-in electric vehicle (PEV) mobility, energy consumption, and spatiotemporal charging demand. BEAM is an agent-based model of PEV mobility and charging behavior designed as an extension to MATSim (the Multi-Agent Transportation Simulation model). We apply BEAM to the San Francisco Bay Area and conduct a preliminary calibration and validation of its prediction of charging load based on observed charging infrastructure utilization for the region in 2016. We then explore the impact of a variety of common modeling assumptions in the literature regarding charging infrastructure availability and driver behavior. We find that accurately reproducing observed charging patterns requires an explicit representation of spatially disaggregated charging infrastructure as well as a more nuanced model of the decision to charge that balances tradeoffs people make with regards to time, cost, convenience, and range anxiety.
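The "nuanced decision to charge" can be sketched as a utility score that trades off cost, time, and range anxiety. The weights, functional form, and threshold below are invented for illustration and are not BEAM's actual behavioral model:

```python
# Hypothetical utility score for a PEV driver choosing whether/where to charge.
def charge_utility(price_per_kwh, wait_min, walk_min, soc, *,
                   w_cost=1.0, w_time=0.05, w_anxiety=3.0):
    cost_term = -w_cost * price_per_kwh * 30   # assume a ~30 kWh session
    time_term = -w_time * (wait_min + walk_min)
    anxiety_term = w_anxiety * (1.0 - soc)     # low state of charge -> charge now
    return cost_term + time_term + anxiety_term

def should_charge(options, soc, threshold=-10.0):
    """Pick the best charger option if its utility beats skipping the charge."""
    best = max(options, key=lambda o: charge_utility(soc=soc, **o))
    u = charge_utility(soc=soc, **best)
    return (best, u) if u > threshold else (None, u)

options = [
    {"price_per_kwh": 0.20, "wait_min": 15, "walk_min": 5},  # cheap but slow to reach
    {"price_per_kwh": 0.45, "wait_min": 0,  "walk_min": 1},  # convenient but pricey
]
choice, u = should_charge(options, soc=0.15)  # nearly empty battery
print(choice["price_per_kwh"])
```

With these illustrative weights, an anxious (low state-of-charge) driver still prefers the cheaper charger because the time penalty is small relative to the cost difference; changing the weights flips that choice, which is the sensitivity the report explores.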

  3. Enabling Data Intensive Science through Service Oriented Science: Virtual Laboratories and Science Gateways

    NASA Astrophysics Data System (ADS)

    Lescinsky, D. T.; Wyborn, L. A.; Evans, B. J. K.; Allen, C.; Fraser, R.; Rankine, T.

    2014-12-01

We present collaborative work on a generic, modular infrastructure for virtual laboratories (VLs, similar to science gateways) that combine online access to data, scientific code, and computing resources as services that support multiple data intensive scientific computing needs across a wide range of science disciplines. We are leveraging access to 10+ PB of earth science data on Lustre filesystems at Australia's National Computational Infrastructure (NCI) Research Data Storage Infrastructure (RDSI) node, co-located with NCI's 1.2 PFlop Raijin supercomputer and a 3000 CPU core research cloud. The development, maintenance and sustainability of VLs is best accomplished through modularisation and standardisation of interfaces between components. Our approach has been to break up tightly-coupled, specialised application packages into modules, with identified best techniques and algorithms repackaged either as data services or scientific tools that are accessible across domains. The data services can be used to manipulate, visualise and transform multiple data types whilst the scientific tools can be used in concert with multiple scientific codes. We are currently designing a scalable generic infrastructure that will handle scientific code as modularised services and thereby enable the rapid/easy deployment of new codes or versions of codes. The goal is to build open source libraries/collections of scientific tools, scripts and modelling codes that can be combined in specially designed deployments. Additional services in development include: provenance, publication of results, monitoring, workflow tools, etc. The generic VL infrastructure will be hosted at NCI, but can access alternative computing infrastructures (i.e., public/private cloud, HPC). The Virtual Geophysics Laboratory (VGL) was developed as a pilot project to demonstrate the underlying technology. 
This base is now being redesigned and generalised to develop a Virtual Hazards Impact and Risk Laboratory (VHIRL); any enhancements and new capabilities will be incorporated into a generic VL infrastructure. At the same time, we are scoping seven new VLs and, in the process, identifying other common components to prioritise and focus development.

  4. A Spatial Data Infrastructure for Environmental Noise Data in Europe.

    PubMed

    Abramic, Andrej; Kotsev, Alexander; Cetl, Vlado; Kephalopoulos, Stylianos; Paviotti, Marco

    2017-07-06

    Access to high quality data is essential in order to better understand the environmental and health impact of noise in an increasingly urbanised world. This paper analyses how recent developments of spatial data infrastructures in Europe can significantly improve the utilization of data and streamline reporting on a pan-European scale. The Infrastructure for Spatial Information in the European Community (INSPIRE), and Environmental Noise Directive (END) described in this manuscript provide principles for data management that, once applied, would lead to a better understanding of the state of environmental noise. Furthermore, shared, harmonised and easily discoverable environmental spatial data, required by the INSPIRE, would also support the data collection needed for the assessment and development of strategic noise maps. Action plans designed by the EU Member States to reduce noise and mitigate related effects can be shared to the public through already established nodes of the European spatial data infrastructure. Finally, data flows regarding reporting on the state of environment and END implementation to the European level can benefit by applying a decentralised e-reporting service oriented infrastructure. This would allow reported data to be maintained, frequently updated and enable pooling of information from/to other relevant and interrelated domains such as air quality, transportation, human health, population, marine environment or biodiversity. We describe those processes and provide a use case in which noise data from two neighbouring European countries are mapped to common data specifications, defined by INSPIRE, thus ensuring interoperability and harmonisation.
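The cross-border harmonisation described in the use case can be sketched as mapping two national record formats onto one common (INSPIRE-like) schema. The field names, unit conventions, and source codes below are entirely invented for illustration; real INSPIRE data specifications define far richer models:

```python
# Map two hypothetical national noise records onto one common schema
# so that Lden values from neighbouring countries become comparable.
COMMON_FIELDS = ("location", "lden_db", "source")

def from_country_a(rec):
    """Country A already reports Lden in dB with named sources."""
    return {"location": rec["site"], "lden_db": rec["lden"], "source": rec["src"]}

def from_country_b(rec):
    """Country B reports tenths of dB and coded noise sources."""
    source_codes = {1: "road", 2: "rail", 3: "air"}
    return {"location": rec["station_id"],
            "lden_db": rec["lden_tenths"] / 10.0,   # unit harmonisation
            "source": source_codes[rec["src_code"]]}  # vocabulary harmonisation

harmonised = [
    from_country_a({"site": "A-01", "lden": 68.5, "src": "road"}),
    from_country_b({"station_id": "B-17", "lden_tenths": 702, "src_code": 2}),
]
assert all(set(r) == set(COMMON_FIELDS) for r in harmonised)
print(harmonised[1]["lden_db"], harmonised[1]["source"])
```

Once both feeds share one schema and vocabulary, pan-European aggregation and e-reporting become a query rather than a per-country data-cleaning project, which is the benefit the paper argues for.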

  5. Infrastructural requirements for local implementation of safety policies: the discordance between top-down and bottom-up systems of action.

    PubMed

    Timpka, Toomas; Nordqvist, Cecilia; Lindqvist, Kent

    2009-03-09

Safety promotion is planned and practised not only by public health organizations, but also by other welfare state agencies, private companies and non-governmental organizations. The term 'infrastructure' originally denoted the underlying resources needed for warfare, e.g. roads, industries, and an industrial workforce. Today, 'infrastructure' refers to the physical elements, organizations and people needed to run projects in different societal arenas. The aim of this study was to examine associations between infrastructure and local implementation of safety policies in injury prevention and safety promotion programs. Qualitative data on municipalities in Sweden designated as Safe Communities were collected from focus group interviews with municipal politicians and administrators, as well as from policy documents and materials published on the Internet. Actor network theory was used to identify weaknesses in the present infrastructure and determine strategies that can be used to resolve these. The weakness identification analysis revealed that the factual infrastructure available for effectuating national strategies varied between safety areas and approaches, basically reflecting differences between bureaucratic and network-based organizational models. At the local level, a contradiction between safety promotion and the existence of quasi-markets for local public service providers was found to predispose to a poor local infrastructure, diminishing the interest in integrated inter-agency activities. The weakness resolution analysis showed that development of an adequate infrastructure for safety promotion would require adjustment of the legal framework regulating injury data exchange, and would also require rational financial models for multi-party investments in local infrastructures. We found that the "silo" structure of government organization and assignment of resources was a barrier to collaborative action for safety at a community level. 
It may therefore be overly optimistic to take for granted that different approaches to injury control, such as injury prevention and safety promotion, can share infrastructure. Similarly, it may be unrealistic to presuppose that safety promotion can reach its potential in terms of injury rate reductions unless the critical infrastructure for this is in place. Such an alignment of the infrastructure to organizational processes requires more than financial investments.

  6. Prototyping the E-ELT M1 local control system communication infrastructure

    NASA Astrophysics Data System (ADS)

    Argomedo, J.; Kornweibel, N.; Grudzien, T.; Dimmler, M.; Andolfato, L.; Barriga, P.

    2016-08-01

    The primary mirror of the E-ELT is composed of 798 hexagonal segments of about 1.45 meters across. Each segment can be moved in piston and tip-tilt using three position actuators. Inductive edge sensors are used to provide feedback for global reconstruction of the mirror shape. The E-ELT M1 Local Control System will provide a deterministic infrastructure for collecting edge sensor and actuators readings and distribute the new position actuators references while at the same time providing failure detection, isolation and notification, synchronization, monitoring and configuration management. The present paper describes the prototyping activities carried out to verify the feasibility of the E-ELT M1 local control system communication architecture design and assess its performance and potential limitations.

  7. National Stormwater Calculator: Low Impact Development ...

    EPA Pesticide Factsheets

Stormwater discharges continue to cause impairment of our Nation’s waterbodies. EPA has developed the National Stormwater Calculator (SWC) to help support local, state, and national stormwater management objectives to reduce runoff through infiltration and retention using green infrastructure practices as low impact development (LID) controls. The primary focus of the SWC is to inform site developers on how well they can meet a desired stormwater retention target with and without the use of green infrastructure. It can also be used by landscapers and homeowners. Platform. The SWC is a Windows-based desktop program that requires an internet connection. A mobile web application version that will be compatible with all operating systems is currently being developed and is expected to be released in the fall of 2017. Cost Module. An LID cost estimation module within the application allows planners and managers to evaluate LID controls based on comparison of regional and national project planning level cost estimates (capital and average annual maintenance) and predicted LID control performance. Cost estimation is accomplished based on user-identified size configuration of the LID control infrastructure and other key project and site-specific variables. This includes whether the project is being applied as part of new development or redevelopment and if there are existing site constraints. Climate Scenarios. The SWC allows users to consider how runoff may vary based
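The kind of estimate a stormwater calculator produces can be sketched with the Simple Method runoff coefficient (Rv = 0.05 + 0.9 · impervious fraction) plus a per-event LID capture depth. This is a minimal sketch with illustrative numbers, not the SWC's actual hydrology model:

```python
# Annual runoff from a site, with and without an LID control,
# using the Simple Method runoff coefficient and an assumed capture depth.
def runoff_inches(rain_in, impervious_frac, lid_capture_in=0.0, events=40):
    """Annual runoff depth (inches); LID captures up to a fixed depth per storm."""
    rv = 0.05 + 0.9 * impervious_frac            # Simple Method runoff coefficient
    captured = min(lid_capture_in * events, rain_in * rv)
    return rain_in * rv - captured

no_lid = runoff_inches(rain_in=40.0, impervious_frac=0.6)
with_lid = runoff_inches(rain_in=40.0, impervious_frac=0.6, lid_capture_in=0.2)
print(round(no_lid, 2), round(with_lid, 2))  # LID retains part of the annual runoff
```

Comparing the two figures against a retention target is exactly the with/without-green-infrastructure question the SWC is built to answer, with real precipitation records in place of the assumed totals here.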

  8. Keeping It Simple: Can We Estimate Malting Quality Potential Using an Isothermal Mashing Protocol and Common Laboratory Instrumentation?

    USDA-ARS?s Scientific Manuscript database

    Current methods for generating malting quality metrics have been developed largely to support commercial malting and brewing operations, providing accurate, reproducible analytical data to guide malting and brewing production. Infrastructure to support these analytical operations often involves sub...

  9. Harmful Algae Bloom Occurrence in Urban Ponds: Relationship of Toxin Levels with Cell Density and Species Composition

    EPA Science Inventory

    Retention ponds constructed within urban watershed areas of high density populations are common as a result of green infrastructure applications. Several urban ponds in the Northern Kentucky area were monitored for algal community (algae and cyanobacteria) from October 2012 to Se...

  10. Relationships and trends of E. Coli, human-associated bacteroides, and pathogens in the Proctor Creek watershed (GWRC 2017)

    EPA Science Inventory

    Urban surface waters can be impacted by anthropogenic sources such as impervious surfaces, sanitary and storm sewers, and failing infrastructure. Fecal indicator bacteria (FIB) and microbial source tracking (MST) markers are common gauges of stream water quality, however, litt...

  11. Policy Perspectives on Social, Agricultural, and Rural Sustainability.

    ERIC Educational Resources Information Center

    Wimberley, Ronald C.

    1993-01-01

    Introduces three types of agricultural policy dealing with the sustainability of society, the agricultural sector, and rural people and places. Outlines sustainability issues and special interest groups related to each policy type, common ground, and the impact on rural policy of the environment, economic change, physical infrastructure, social…

  12. Collaborative Knowledge Creation in the Higher Education Academic Library

    ERIC Educational Resources Information Center

    Lee, Young S.; Schottenfeld, Matthew A.

    2014-01-01

    Collaboration has become a core competency of the 21st century workforce. Thus, the need for collaboration is reshaping the academic library in higher education to produce a competent future workforce. To encourage collaboration in the academic library, knowledge commons that integrate technology into infrastructure and systems furniture are introduced.…

  13. 75 FR 6180 - Mission Statement; Secretarial China Clean Energy Business Development Mission; May 16-21, 2010

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-08

    ... addition, Hong Kong has an efficient, transparent legal system based on common law principles that offer... 2020. The current grid infrastructure system is unable to support greater electricity movement from... sector, including traditional transmission/distribution systems and smart grid technologies, offers huge...

  14. Pedagogical Applications of Smartphone Integration in Teaching - Lecturers', Students' & Pupils' Perspectives

    ERIC Educational Resources Information Center

    Seifert, Tami

    2014-01-01

    As the disparity between educational standards and reality outside educational institutions is increasing, alternative learning infrastructures such as mobile technologies are becoming more common, and are challenging long-held, traditional modes of teaching. Educators' attitudes toward wireless devices are mixed. Wireless devices are perceived by…

  15. Continuous Improvement in Online Education: Documenting Teaching Effectiveness in the Online Environment through Observations

    ERIC Educational Resources Information Center

    Purcell, Jennifer W.; Scott, Heather I.; Mixson-Brookshire, Deborah

    2017-01-01

    Teaching observations are commonly used among educators to document and improve teaching effectiveness. Unfortunately, the necessary protocols and supporting infrastructure are not consistently available for faculty who teach online. This paper presents a brief literature review and reflective narratives of educators representing online education…

  16. The role of assessment infrastructures in crafting project-based science classrooms

    NASA Astrophysics Data System (ADS)

    D'Amico, Laura Marie

    In project-based science teaching, teachers engage students in the practice of conducting meaningful investigations and explanations of natural phenomena, often in collaboration with fellow students or adults. Reformers suggest that this approach can provide students with more profitable learning experiences; but for many teachers, a shift to such instruction can be difficult to manage. As some reform-minded teachers have discovered, classroom assessment can serve as a vital tool for meeting the challenges associated with project science activity. In this research, classroom assessment was viewed as an infrastructure that both students and teachers rely upon as a mediational tool for classroom activity and communications. The study explored the classroom assessment infrastructures created by three teachers involved in the Learning through Collaborative Visualization (CoVis) Project from 1993-94 to 1995-96. Each of the three teachers under study either created a new course or radically reformulated an old one in an effort to incorporate project-based science pedagogy and supporting technologies. Data in the form of interviews, classroom observations, surveys, student work, and teacher records were collected. From these data, an interpretive case study was developed for each course and its accompanying assessment infrastructure. A set of cross-case analyses was also constructed, based upon common themes that emerged from all three cases. These themes included: the assessment challenges based on the nature of project activity, the role of technology in the teachers' assessment infrastructure designs, and the influence of the wider assessment infrastructure on their course and assessment designs. In combination, the case studies and cross-case analyses describe the synergistic relationship between the design of pedagogical reforms and classroom assessment infrastructures, as well as the effectiveness of all three assessment designs.
This work contributes to research and practice associated with assessment and pedagogical reform in three ways. First, it provides a theoretical frame for the relationship between assessment and pedagogical reform. Second, it provides a set of taxonomies which outline both the challenges of project-based science activity and typical assessment strategies to meet them. Finally, it provides a set of cautions and recommendations for designing classroom assessment infrastructures in support of project-based science.

  17. Reference Avionics Architecture for Lunar Surface Systems

    NASA Technical Reports Server (NTRS)

    Somervill, Kevin M.; Lapin, Jonathan C.; Schmidt, Oron L.

    2010-01-01

    Developing and delivering infrastructure capable of supporting long-term manned operations on the lunar surface has been a primary objective of the Constellation Program in the Exploration Systems Mission Directorate. Several concepts have been developed related to the development and deployment of lunar exploration vehicles and assets that provide critical functionality such as transportation, habitation, and communication, to name a few. Together, these systems perform complex safety-critical functions and depend largely on avionics for the control and behavior of system functions. These functions are implemented using interchangeable, modular avionics designed for lunar transit and lunar surface deployment. Systems are optimized towards reuse and commonality of form and interface and can be configured via software or component integration for special purpose applications. There are two core concepts in the reference avionics architecture described in this report. The first concept uses distributed, smart systems to manage complexity, simplify integration, and facilitate commonality. The second core concept is to employ extensive commonality between elements and subsystems. These two concepts are used in the context of developing reference designs for many lunar surface exploration vehicles and elements, and recur as architectural patterns within a conceptual architectural framework. This report describes the use of these architectural patterns in a reference avionics architecture for lunar surface systems elements.

  18. CERN's Common Unix and X Terminal Environment

    NASA Astrophysics Data System (ADS)

    Cass, Tony

    The Desktop Infrastructure Group of CERN's Computing and Networks Division has developed a Common Unix and X Terminal Environment to ease the migration to Unix-based Interactive Computing. The CUTE architecture relies on a distributed filesystem—currently Transarc's AFS—to enable essentially interchangeable client workstations to access both "home directory" and program files transparently. Additionally, we provide a suite of programs to configure workstations for CUTE and to ensure continued compatibility. This paper describes the different components and the development of the CUTE architecture.

  19. A new algorithm for grid-based hydrologic analysis by incorporating stormwater infrastructure

    NASA Astrophysics Data System (ADS)

    Choi, Yosoon; Yi, Huiuk; Park, Hyeong-Dong

    2011-08-01

    We developed a new algorithm, the Adaptive Stormwater Infrastructure (ASI) algorithm, to incorporate ancillary data sets related to stormwater infrastructure into the grid-based hydrologic analysis. The algorithm simultaneously considers the effects of the surface stormwater collector network (e.g., diversions, roadside ditches, and canals) and underground stormwater conveyance systems (e.g., waterway tunnels, collector pipes, and culverts). The surface drainage flows controlled by the surface runoff collector network are superimposed onto the flow directions derived from a DEM. After examining the connections between inlets and outfalls in the underground stormwater conveyance system, the flow accumulation and delineation of watersheds are calculated based on recursive computations. Application of the algorithm to the Sangdong tailings dam in Korea revealed superior performance to that of a conventional D8 single-flow algorithm in terms of providing reasonable hydrologic information on watersheds with stormwater infrastructure.
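The conventional D8 single-flow scheme that the ASI algorithm is compared against routes each cell's flow to the neighbour with the steepest downward slope. A minimal sketch of that baseline (illustrative only; it omits the ASI algorithm's superposition of surface collectors and underground conveyance):

```python
import math

def d8_flow_directions(dem, cell_size=1.0):
    """Compute D8 flow directions for a DEM given as a list of rows.

    Each cell drains to the neighbour with the steepest downward
    slope; diagonal neighbours are sqrt(2) * cell_size away.
    Returns a grid of (drow, dcol) offsets, or None where no
    neighbour is lower (pits and flat edge cells).
    """
    rows, cols = len(dem), len(dem[0])
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    directions = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            best_slope, best_dir = 0.0, None
            for dr, dc in neighbours:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    dist = cell_size * (math.sqrt(2) if dr and dc else 1.0)
                    slope = (dem[r][c] - dem[nr][nc]) / dist
                    if slope > best_slope:
                        best_slope, best_dir = slope, (dr, dc)
            directions[r][c] = best_dir
    return directions
```

On a plane tilted to the east, every interior cell drains one cell to the east; ASI would then overwrite these DEM-derived directions wherever stormwater infrastructure intercepts the flow.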

  20. Real-time contaminant sensing and control in civil infrastructure systems

    NASA Astrophysics Data System (ADS)

    Rimer, Sara; Katopodes, Nikolaos

    2014-11-01

    A laboratory-scale prototype has been designed and implemented to test the feasibility of real-time contaminant sensing and control in civil infrastructure systems. A blower wind tunnel is the basis of the prototype design, with propylene glycol smoke as the ``contaminant.'' A camera sensor and compressed-air vacuum nozzle system is set up at the test section portion of the prototype to visually sense and then control the contaminant; a real-time controller is programmed to read in data from the camera sensor and administer pressure to regulators controlling the compressed air operating the vacuum nozzles. A computational fluid dynamics model is being integrated with this prototype to inform the correct pressure to supply to the regulators in order to optimally control the contaminant's removal from the prototype. The performance of the prototype has been evaluated against the computational fluid dynamics model and is discussed in this presentation. Furthermore, the initial performance of the sensor-control system implemented in the test section of the prototype is discussed. NSF-CMMI 0856438.
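The sense-then-actuate loop described above maps a sensed contaminant level to a pressure command for the regulators. A minimal proportional-control sketch (the gain, units, and clamping range are illustrative assumptions, not the prototype's actual calibration):

```python
def control_step(measured_density, target_density, gain, p_min=0.0, p_max=100.0):
    """One controller step: convert the contaminant-density error
    reported by the sensor into a regulator pressure command,
    clamped to the regulator's admissible range.

    All numeric values (gain, pressure bounds) are illustrative.
    """
    error = measured_density - target_density
    pressure = gain * error            # proportional action only
    return max(p_min, min(p_max, pressure))

# Denser smoke than the target -> higher commanded pressure.
cmd = control_step(measured_density=0.8, target_density=0.2, gain=50)
```

In the actual prototype the CFD model, rather than a fixed gain, informs the pressure choice; this sketch only shows the shape of the real-time loop.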

  1. Is the work flow model a suitable candidate for an observatory supervisory control infrastructure?

    NASA Astrophysics Data System (ADS)

    Daly, Philip N.; Schumacher, Germán.

    2016-08-01

    This paper reports on an early investigation of using the work flow model for observatory infrastructure software. We researched several work flow engines and identified three for further detailed study: Bonita BPM, Activiti and Taverna. We discuss the business process model and how it relates to observatory operations and identify a path finder exercise to further evaluate the applicability of these paradigms.

  2. Critical Infrastructure Protection: How to Assess and Provide Remedy to Vulnerabilities in Telecom Hotels

    DTIC Science & Technology

    2006-09-01

    Telecommunications and Information Administration Telecom Telecommunications Telco Telecommunications Company VBIED Vehicle Borne Improvised Explosive... effect the damage to one system or sector would have on another. These concentrations of the sector’s key assets are becoming attractive targets even...critical U.S. infrastructures, such as the nation’s telephone system . Companies make it easier to control their networks from remote locations to save

  3. Systems engineering considerations for operational support systems

    NASA Technical Reports Server (NTRS)

    Aller, Robert O.

    1993-01-01

    Operations support as considered here is the infrastructure of people, procedures, facilities and systems that provide NASA with the capability to conduct space missions. This infrastructure involves most of the Centers but is concentrated principally at the Johnson Space Center, the Kennedy Space Center, the Goddard Space Flight Center, and the Jet Propulsion Laboratory. It includes mission training and planning, launch and recovery, mission control, tracking, communications, data retrieval and data processing.

  4. Essential levels of health information in Europe: an action plan for a coherent and sustainable infrastructure.

    PubMed

    Carinci, Fabrizio

    2015-04-01

    The European Union needs a common health information infrastructure to support policy and governance on a routine basis. A stream of initiatives conducted in Europe during the last decade resulted in several success stories, but did not specify a unified framework that could be broadly implemented on a continental level. The recent debate raised a potential controversy on the different roles and responsibilities of policy makers vs the public health community in the construction of such a pan-European health information system. While institutional bodies shall clarify the statutory conditions under which such an endeavour is to be carried out, researchers should define a common framework for optimal cross-border information exchange. This paper conceptualizes a general solution emerging from past experiences, introducing a governance structure and overarching framework that can be realized through four main action lines, underpinned by the key principle of "Essential Levels of Health Information" for Europe. The proposed information model can be applied in a consistent manner at both national and EU levels. If realized, the four action lines outlined here will allow the development of an EU health information infrastructure that would effectively integrate best practices emerging from EU public health initiatives, including projects and joint actions carried out during the last ten years. The proposed approach adds new content to the ongoing debate on the future activity of the European Commission in the area of health information. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. The Electronic Data Methods (EDM) forum for comparative effectiveness research (CER).

    PubMed

    Holve, Erin; Segal, Courtney; Lopez, Marianne Hamilton; Rein, Alison; Johnson, Beth H

    2012-07-01

    AcademyHealth convened the Electronic Data Methods (EDM) Forum to collect, synthesize, and share lessons from eleven projects that are building infrastructure and using electronic clinical data for comparative effectiveness research (CER) and patient-centered outcomes research (PCOR). This paper provides a brief review of participating projects and provides a framework of common challenges. EDM Forum staff conducted a text review of relevant grant programs' funding opportunity announcements; projects' research plans; and available information on projects' websites. Additional information was obtained from presentations provided by each project; phone calls with project principal investigators, affiliated partners, and staff from the Agency for Healthcare Research and Quality (AHRQ); and six site visits. Projects participating in the EDM Forum are building infrastructure and developing innovative strategies to address a set of methodological, data, and informatics challenges, identified here in a common framework. The eleven networks represent more than 20 states and include a range of partnership models. Projects vary substantially in size, from 11,000 to more than 7.5 million individuals. Nearly all of the AHRQ priority populations and conditions are addressed. In partnership with the projects, the EDM Forum is focused on identifying and sharing lessons learned to advance the national dialogue on the use of electronic clinical data to conduct CER and PCOR. These efforts have the shared goal of addressing challenges in traditional research studies and data sources, and aim to build infrastructure and generate evidence to support a learning health care system that can improve patient outcomes.

  6. The ESPAS e-infrastructure: Access to data from near-Earth space

    NASA Astrophysics Data System (ADS)

    Belehaki, Anna; James, Sarah; Hapgood, Mike; Ventouras, Spiros; Galkin, Ivan; Lembesis, Antonis; Tsagouri, Ioanna; Charisi, Anna; Spogli, Luca; Berdermann, Jens; Häggström, Ingemar; ESPAS Consortium

    2016-10-01

    ESPAS, the "near-Earth space data infrastructure for e-science", is a data e-infrastructure facilitating discovery and access to observations, ground-based and space borne, and to model predictions of the near-Earth space environment, a region extending from the Earth's atmosphere up to the outer radiation belts. ESPAS provides access to metadata and/or data from an extended network of data providers distributed globally. The interoperability of the heterogeneous data collections is achieved with the adoption and adaptation of the ESPAS data model, which is built entirely on ISO 19100 series geographic information standards. The ESPAS data portal manages a vocabulary of space physics keywords that can be used to narrow down data searches to observations of specific physical content. Such content-targeted search is an ESPAS innovation provided in addition to the commonly practiced data selection by time, location, and instrument. The article presents an overview of the architectural design of the ESPAS system, of its data model and ontology, and of interoperable services that allow the discovery, access and download of registered data. Emphasis is given to the standardization and expandability concepts, which represent also the main elements that support the building of long-term sustainability activities of the ESPAS e-infrastructure.
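Content-targeted search of the kind described, filtering registered metadata by physical-content keyword on top of the usual instrument and time criteria, can be sketched as follows. The record fields and vocabulary terms here are illustrative assumptions, not the actual ESPAS data model:

```python
from datetime import date

# Illustrative metadata records; field names and keyword vocabulary
# are made up for the sketch, not taken from the ESPAS schema.
records = [
    {"id": "iono-01", "keywords": {"electron density", "ionosphere"},
     "instrument": "ionosonde", "start": date(2014, 1, 1)},
    {"id": "mag-02", "keywords": {"magnetic field"},
     "instrument": "magnetometer", "start": date(2015, 6, 1)},
]

def search(records, keyword=None, instrument=None, after=None):
    """Narrow a data search by physical content (keyword) in
    addition to the conventional instrument/time filters."""
    hits = []
    for rec in records:
        if keyword and keyword not in rec["keywords"]:
            continue
        if instrument and rec["instrument"] != instrument:
            continue
        if after and rec["start"] < after:
            continue
        hits.append(rec["id"])
    return hits
```

The point of the vocabulary is that `keyword` refers to the physics of the observation rather than to the observing instrument, so data of the same physical content from heterogeneous providers can be found in one query.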

  7. A zebra or a painted horse? Are hospital PPPs infrastructure partnerships with stripes or a separate species?

    PubMed

    Montagu, Dominic; Harding, April

    2012-01-01

    Public Private Partnerships (PPP) have been common in infrastructure for many years and are increasingly being considered as a means to finance, build, and manage hospitals. However, the growth of hospital PPPs in the past two decades has led to confusion about what sorts of contractual arrangements between public and private partners constitute a PPP, and what key differences distinguish public private partnerships for hospitals from PPPs for infrastructure. Based on experiences from around the world, we identify six key areas where hospital PPPs differ from infrastructure partnerships. We draw upon the hospital partnerships that have been documented in OECD countries and a growing number of middle-income countries to identify four distinct types of hospital PPPs: service focused partnerships in which private partners manage operations within publicly constructed facilities; facilities and finance PPPs, focused on mobilizing capital and creating new hospitals; combined PPPs, involving both facility and clinical operations; and co-located PPPs where privately operated services are developed within the grounds of a public hospital. These four types of hospital PPPs have differing goals, and therefore different contractual and functional aspects, as well as differing risks to both public and private partners. By clarifying these, we provide a base upon which hospital PPPs can be assessed against appropriate goals and benchmarks.

  8. Integrating Data and Networks: Human Factors

    NASA Astrophysics Data System (ADS)

    Chen, R. S.

    2012-12-01

    The development of technical linkages and interoperability between scientific networks is a necessary but not sufficient step towards integrated use and application of networked data and information for scientific and societal benefit. A range of "human factors" must also be addressed to ensure the long-term integration, sustainability, and utility of both the interoperable networks themselves and the scientific data and information to which they provide access. These human factors encompass the behavior of both individual humans and human institutions, and include system governance, a common framework for intellectual property rights and data sharing, consensus on terminology, metadata, and quality control processes, agreement on key system metrics and milestones, the compatibility of "business models" in the short and long term, harmonization of incentives for cooperation, and minimization of disincentives. Experience with several national and international initiatives and research programs such as the International Polar Year, the Group on Earth Observations, the NASA Earth Observing Data and Information System, the U.S. National Spatial Data Infrastructure, the Global Earthquake Model, and the United Nations Spatial Data Infrastructure provides a range of lessons regarding these human factors. Ongoing changes in science, technology, institutions, relationships, and even culture are creating both opportunities and challenges for expanded interoperability of scientific networks and significant improvement in data integration to advance science and the use of scientific data and information to achieve benefits for society as a whole.

  9. Watershed Scale Impacts of Stormwater Green Infrastructure ...

    EPA Pesticide Factsheets

    Despite the increasing use of urban stormwater green infrastructure (SGI), including detention ponds and rain gardens, few studies have quantified the cumulative effects of multiple SGI projects on hydrology and water quality at the watershed scale. To assess the effects of SGI, Baltimore County, MD, Montgomery County, MD, and Washington, DC, were selected based on the availability of data on SGI, water quality, and stream flow. The watershed scale impact of SGI was evaluated by assessing how increased spatial density of SGI correlates with stream hydrology and nitrogen exports over space and time. The most common SGI types were detention ponds (58%), followed by marshes (12%), sand filters (9%), wet ponds (7%), infiltration trenches (4%), and rain gardens (2%). When controlling for watershed size and percent impervious surface cover, watersheds with greater amounts of SGI (>10% SGI) have 44% lower peak runoff, 26% less frequent runoff events, and 26% less variable runoff than watersheds with lower SGI. Watersheds with more SGI also show 44% less NO3− and 48% less total nitrogen exports compared to watersheds with minimal SGI. There was no significant reduction in combined sewer overflows in watersheds with greater SGI. Based on specific SGI types, infiltration trenches (R2 = 0.35) showed the strongest correlation with hydrologic metrics, likely due to their ability to attenuate flow, while bioretention (R2 = 0.19) and wet ponds (R2 = 0.12) showed stronger

  10. Brazil's Cuiabá-Santarém (BR-163) Highway: the environmental cost of paving a soybean corridor through the Amazon.

    PubMed

    Fearnside, Philip M

    2007-05-01

    Brazil's Cuiabá-Santarém (BR-163) Highway provides a valuable example of ways in which decision-making procedures for infrastructure projects in tropical forest areas need to be reformulated in order to guarantee that environmental concerns are properly weighed. BR-163, which is slated to be paved as an export corridor for soybeans via the Amazon River, traverses an area that is largely outside of Brazilian government control. A climate of generalized lawlessness and impunity prevails, and matters related to environment and to land tenure are especially unregulated. Deforestation and illegal logging have accelerated in anticipation of highway paving. Paving would further speed forest loss in the area, as well as stimulate migration of land thieves (grileiros) to other frontiers. An argument is made that the highway should not be reconstructed and paved until after a state of law has been established and it has been independently certified that sufficient governance prevails to secure protected areas and enforce environmental legislation. A waiting period is needed after this is achieved before proceeding with the highway paving. Above all, the logical sequence of steps must be followed, whereby environmental costs are assessed, reported, and weighed prior to making de facto decisions on implementation of infrastructure projects. Deviation from this logical sequence is a common occurrence in many parts of the world, especially in tropical areas.

  11. Controlling factors of the parental safety perception on children's travel mode choice.

    PubMed

    Nevelsteen, Kristof; Steenberghen, Thérèse; Van Rompaey, Anton; Uyttersprot, Liesbeth

    2012-03-01

    The travel mode of children changed significantly over the last 20 years, with a decrease of children travelling as pedestrians or cyclists. This study focuses on six- to twelve-year-old children. Parents determine to a large extent the mode choice of children in this age category. Based on the analysis of an extensive survey, the research shows that traffic infrastructure has a significant impact on parental decision making concerning children's travel mode choice, by affecting both the real and the perceived traffic safety. Real traffic safety is quantified in terms of numbers of accidents and road infrastructure. For the perceived traffic safety a parental allowance probability is calculated per road type to show that infrastructure characteristics influence parental decision making on the children's mode choice. A binary logistic model shows that this allowance is determined by age, gender and traffic infrastructure near the child's home or near destinations frequently visited by children. Since both real and perceived traffic safety are influenced by infrastructure characteristics, a spatial analysis of parental perception and accident statistics can be used to indicate the locations where infrastructure improvements will be most effective to increase the number of children travelling - safely - as pedestrians or cyclists. Copyright © 2011 Elsevier Ltd. All rights reserved.
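A binary logistic model of the kind used for the parental allowance probability maps predictors (age, gender, road type) onto a probability between 0 and 1 via the logistic function. A minimal sketch; the predictor names and every coefficient value below are invented for illustration and are not the study's fitted estimates:

```python
import math

def allowance_probability(coeffs, intercept, features):
    """Probability that parents allow a given travel mode, from a
    binary logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))).

    `coeffs` maps predictor names to coefficients, `features` maps
    them to values for one child/road combination. All numbers
    here are illustrative, not the paper's fitted model.
    """
    z = intercept + sum(coeffs[k] * features[k] for k in coeffs)
    return 1.0 / (1.0 + math.exp(-z))

# Made-up coefficients: allowance rises with age, falls on busy roads.
coeffs = {"age": 0.6, "busy_road": -1.2}
p = allowance_probability(coeffs, intercept=-5.0,
                          features={"age": 10, "busy_road": 1})
```

The model's usefulness in the study is that the same fitted coefficients can be evaluated per road type, turning infrastructure characteristics into a mapped allowance probability.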

  12. Joint road safety operations in tunnels and open roads

    NASA Astrophysics Data System (ADS)

    Adesiyun, Adewole; Avenoso, Antonio; Dionelis, Kallistratos; Cela, Liljana; Nicodème, Christophe; Goger, Thierry; Polidori, Carlo

    2017-09-01

    The objective of the ECOROADS project is to overcome the barrier established by the formal interpretation of the two Directives 2008/96/EC and 2004/54/EC, which in practice do not allow the same Road Safety Audits/Inspections to be performed inside tunnels. The project aims at the establishment of a common enhanced approach to road infrastructure and tunnel safety management by using the concepts and criteria of the Directive 2008/96/CE on road infrastructure safety management and the results of related European Commission (EC) funded projects. ECOROADS has already implemented an analysis of national practices regarding Road Safety Inspections (RSI), two Workshops with the stakeholders, and an exchange of best practices between European tunnel experts and road safety professionals, which led to the definition of commonly agreed safety procedures. In the second phase of the project, different groups of experts and observers applied the above common procedures by inspecting five European road sections featuring both open roads and tunnels in Belgium, Albania, Germany, Serbia and Former Yugoslav Republic of Macedonia. This paper shows the feedback from the five joint safety operations and how they are being used for a set of recommendations and guidelines for the application of the RSA and RSI concepts within the tunnel safety operations.

  13. Mission possible: creating a technology infrastructure to help reduce administrative costs.

    PubMed

    Alper, Michael

    2003-01-01

    Controlling administrative costs associated with managed care benefits has traditionally been considered a "mission impossible" in healthcare, with the unreasonably high cost of paperwork and administration pushing past the $420 billion mark. Why administrative costs remain a critical problem in healthcare while other industries have alleviated their administrative burdens must be carefully examined. This article looks at the key factors contributing to high administrative costs and how these costs can be controlled in the future with "mission possible" tools, including business process outsourcing, IT outsourcing, technology that helps to bring "consumerism" to managed care, and an IT infrastructure that improves quality and outcomes.

  14. REACTOR - a Concept for establishing a System-of-Systems

    NASA Astrophysics Data System (ADS)

    Haener, Rainer; Hammitzsch, Martin; Wächter, Joachim

    2014-05-01

    REACTOR is a working title for activities implementing reliable, emergent, adaptive, and concurrent collaboration on the basis of transactional object repositories. It aims at establishing federations of autonomous yet interoperable systems (Systems-of-Systems), which are able to expose emergent behaviour. Following the principles of event-driven service-oriented architectures (SOA 2.0), REACTOR enables adaptive re-organisation by dynamic delegation of responsibilities and novel yet coherent monitoring strategies by combining information from different domains. Thus it allows collaborative decision-processes across system, discipline, and administrative boundaries. Interoperability is based on two approaches that implement interconnection and communication between existing heterogeneous infrastructures and information systems: coordinated (orchestration-based) communication and publish/subscribe (choreography-based) communication. Choreography-based communication ensures the autonomy of the participating systems to the highest possible degree but requires the implementation of adapters, which provide functional access to information (publishing/consuming events) via a Message Oriented Middleware (MOM). Any interconnection of the systems (composition of service and message cascades) is established on the basis of global conversations that are enacted by choreographies specifying the expected behaviour of the participating systems with respect to agreed Service Level Agreements (SLA) required by e.g. national authorities. The specification of conversations, maintained in commonly available repositories, also enables the utilisation of systems for purposes other than those initially intended, including evolving ones. Orchestration-based communication additionally requires a central component that controls the information transfer via service requests or event processing and also takes responsibility for managing business processes.
Commonly available transactional object repositories are well suited to establish brokers, which mediate metadata and semantic information about the resources of all involved systems. This concept has been developed within the project Collaborative, Complex, and Critical Decision-Support in Evolving Crises (TRIDEC) on the basis of semantic registries describing all facets of events and services utilisable for crisis management systems. The implementation utilises an operative infrastructure including an Enterprise Service Bus (ESB), adapters to proprietary sensor systems, a workflow engine, and a broker-based MOM. It also applies current technologies like actor-based frameworks for highly concurrent, distributed, and fault-tolerant event-driven applications. Therefore, REACTOR implementations are well suited to be hosted in a cloud that provides Infrastructure as a Service (IaaS). To provide low entry barriers for legacy and future systems, REACTOR adapts the principles of Design by Contract (DbC) as well as standardised and common information models like the Sensor Web Enablement (SWE) or the JavaScript Object Notation for geographic features (GeoJSON). REACTOR has been applied, by way of example, in two different scenarios: Natural Crisis Management and Industrial Subsurface Development.
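
The choreography-based interconnection described above, in which adapters publish and consume events over a MOM without a central controller, can be sketched in miniature. This is a hypothetical, in-process illustration only; a real REACTOR deployment uses a broker-based MOM behind an ESB, and the topic name and event fields here are invented for the example:

```python
from collections import defaultdict
from typing import Callable

class MessageBus:
    """Minimal in-memory stand-in for a broker-based MOM (illustrative only)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        # Choreography style: systems register interest; no central orchestrator.
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Deliver the event to every subscriber of the topic.
        for handler in self._subscribers[topic]:
            handler(event)

# An adapter wrapping a legacy sensor system publishes events; a consumer
# subscribes without knowing anything about the publisher.
bus = MessageBus()
received = []
bus.subscribe("sensors/seismic", received.append)
bus.publish("sensors/seismic", {"station": "ST01", "magnitude": 4.2})
```

Orchestration-based communication would add a central component on top of this bus that routes service requests and manages business processes, which is the key trade-off the abstract describes.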

  15. Validation of Storm Water Management Model Storm Control Measures Modules

    NASA Astrophysics Data System (ADS)

    Simon, M. A.; Platz, M. C.

    2017-12-01

    EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities are relying on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970s, EPA and others have developed five major releases, the most recent ones containing storm control measures modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID Module performance for permeable pavements, infiltration units and green roofs; these improvements were implemented and are reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.
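
A "mathematical comparison of outflow hydrographs" is often done with the Nash-Sutcliffe efficiency (NSE); the abstract does not name the metric, so both the metric choice and the data below are illustrative assumptions, not taken from the study:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1.0 is a perfect fit; values <= 0 mean the
    model predicts no better than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical outflow hydrographs (e.g. in L/s) at matching time steps.
observed = [0.0, 1.2, 3.5, 2.1, 0.4]
simulated = [0.1, 1.0, 3.2, 2.4, 0.5]
nse = nash_sutcliffe(observed, simulated)
```

A calibration tool such as PEST++ would repeatedly adjust unmeasured parameters, re-run SWMM, and score each run with an objective like this one.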

  16. Access to emergency and surgical care in sub-Saharan Africa: the infrastructure gap.

    PubMed

    Hsia, Renee Y; Mbembati, Naboth A; Macfarlane, Sarah; Kruk, Margaret E

    2012-05-01

    The effort to increase access to emergency and surgical care in low-income countries has received global attention. While most of the literature on this issue focuses on workforce challenges, it is critical to recognize infrastructure gaps that hinder the ability of health systems to make emergency and surgical care a reality. This study reviews key barriers to the provision of emergency and surgical care in sub-Saharan Africa using aggregate data from the Service Provision Assessments and Demographic and Health Surveys of five countries: Ghana, Kenya, Rwanda, Tanzania and Uganda. For hospitals and health centres, competency was assessed in six areas: basic infrastructure, equipment, medicine storage, infection control, education and quality control. The percentage of compliant facilities in each country was calculated for each of the six areas to facilitate comparison of hospitals and health centres across the five countries. The percentage of hospitals with dependable running water and electricity ranged from 22% to 46%. In the countries analysed, only 19-50% of hospitals had the ability to provide 24-hour emergency care. For storage of medication, only 18% to 41% of facilities had unexpired drugs and current inventories. Availability of supplies to control infection and safely dispose of hazardous waste was generally poor (less than 50%) across all facilities. As few as 14% of hospitals (and as many as 76%) among those surveyed had training and supervision in place. No surveyed hospital had enough infrastructure to follow minimum standards and practices that the World Health Organization has deemed essential for the provision of emergency and surgical care. The countries where these hospitals are located may be representative of other low-income countries in sub-Saharan Africa. Thus, the results suggest that increased attention to building up the infrastructure within struggling health systems is necessary for improvements in global access to medical care.
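
The percentage-of-compliant-facilities calculation described in the methods can be sketched directly. The facility records and area names below are invented for illustration and do not reproduce the survey data:

```python
# Each facility reports pass/fail per assessment area (hypothetical data).
facilities = [
    {"infrastructure": True,  "equipment": True,  "infection_control": False},
    {"infrastructure": False, "equipment": True,  "infection_control": False},
    {"infrastructure": True,  "equipment": False, "infection_control": True},
    {"infrastructure": False, "equipment": True,  "infection_control": False},
]

def percent_compliant(records, area):
    """Share of facilities meeting the standard in one assessment area."""
    compliant = sum(1 for r in records if r[area])
    return 100.0 * compliant / len(records)

rates = {area: percent_compliant(facilities, area) for area in facilities[0]}
```

Computing the same rate per country and per area yields the comparison table the study describes.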

  17. Advanced Metering Infrastructure based on Smart Meters

    NASA Astrophysics Data System (ADS)

    Suzuki, Hiroshi

    By specifically designating penetration rates of advanced meters and communication technologies, devices, and systems, this paper shows that the penetration of advanced metering is important for the future development of electric power system infrastructure. It examines the state of the technology and the economic benefits of advanced metering. One result of the survey is that advanced metering currently has a penetration of about six percent of total installed electric meters in the United States. Applications to the infrastructure differ by type of organization. When integrated with emerging communication technologies, smart meters enable features beyond automatic meter reading, such as distribution management control, outage management, and remote switching.

  18. Towards a Multi-Mission, Airborne Science Data System Environment

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Hardman, S.; Law, E.; Freeborn, D.; Kay-Im, E.; Lau, G.; Oswald, J.

    2011-12-01

    NASA earth science instruments are increasingly relying on airborne missions. However, traditionally, there has been limited common infrastructure support available to principal investigators in the area of science data systems. As a result, each investigator has been required to develop their own computing infrastructures for the science data system. Typically there is little software reuse and many projects lack sufficient resources to provide a robust infrastructure to capture, process, distribute and archive the observations acquired from airborne flights. At NASA's Jet Propulsion Laboratory (JPL), we have been developing a multi-mission data system infrastructure for airborne instruments called the Airborne Cloud Computing Environment (ACCE). ACCE encompasses the end-to-end lifecycle covering planning, provisioning of data system capabilities, and support for scientific analysis in order to improve the quality, cost effectiveness, and capabilities to enable new scientific discovery and research in earth observation. This includes improving data system interoperability across each instrument. A principal characteristic is being able to provide an agile infrastructure that is architected to allow for a variety of configurations of the infrastructure from locally installed compute and storage services to provisioning those services via the "cloud" from cloud computing vendors such as Amazon.com. Investigators often have different needs that require a flexible configuration. The data system infrastructure is built on Apache's Object Oriented Data Technology (OODT) suite of components which has been used for a number of spaceborne missions and provides a rich set of open source software components and services for constructing science processing and data management systems.
In 2010, a partnership was formed between the ACCE team and the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) mission to support the data processing and data management needs. A principal goal is to provide support for the Fourier Transform Spectrometer (FTS) instrument which will produce over 700,000 soundings over the life of their three-year mission. The cost to purchase and operate a cluster-based system in order to generate Level 2 Full Physics products from this data was prohibitive. Through an evaluation of cloud computing solutions, Amazon's Elastic Compute Cloud (EC2) was selected for the CARVE deployment. As the ACCE infrastructure is developed and extended to form an infrastructure for airborne missions, the experience of working with CARVE has provided a number of lessons learned and has proven to be important in reinforcing the unique aspects of airborne missions and the importance of the ACCE infrastructure in developing a cost effective, flexible multi-mission capability that leverages emerging capabilities in cloud computing, workflow management, and distributed computing.

  19. Biomedical Waste Management : An Infrastructural Survey of Hospitals.

    PubMed

    Rao, Skm; Ranyal, R K; Bhatia, S S; Sharma, V R

    2004-10-01

    The Ministry of Environment & Forests notified the Biomedical Waste (Management & Handling) Rules, 1998 (BMW Mgt) in July 1998. In accordance with the rules, every hospital generating BMW needs to set up requisite BMW treatment facilities on site or ensure requisite treatment of waste at a common treatment facility. No untreated BMW shall be kept stored beyond a period of 48 hours. The cost of construction, operation and maintenance of a system for managing BMW represents a significant part of the overall budget of a hospital if the BMW rules have to be implemented in their true spirit. Hospitals incur two types of costs for BMW Mgt: internal and external. Internal cost is the cost of segregation, mutilation, disinfection, internal storage and transportation, including the hidden cost of protective equipment. External costs are off-site transportation, treatment and final disposal. A study of hospitals from various sectors (government, private, charitable institutions, etc.) was carried out to assess the infrastructural requirement for BMW Mgt. Cost was worked out for a hospital where all the infrastructure required by the BMW rules had been implemented, and then compared with hospitals that have made compromises at each stage of BMW Mgt. The capital cost incurred by the benchmarked hospital of 1047 beds was Rs. 3 lakh 59 thousand, excluding the cost of the incinerator, and the hospital incurs Rs. 656 per day as recurring expenditure. Pune city has a common regional facility for final BMW disposal, which charges Rs. 20 per kg of infectious waste. As of Dec 2001, 400 institutions, including nursing homes, labs and blood banks, were registered. Analysis of the results suggests an urgent need to standardize the infrastructural requirements so that hospitals following the BMW rules strictly do not suffer additional costs.
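
The internal/external cost split above lends itself to a simple annualized cost sketch. The recurring Rs. 656/day and Rs. 20/kg figures are taken from the abstract, but the daily infectious-waste load is a hypothetical input, so the total is illustrative only:

```python
# Internal recurring cost reported for the benchmarked 1047-bed hospital.
recurring_per_day_rs = 656
annual_internal_rs = recurring_per_day_rs * 365

# External disposal charged by the common regional facility in Pune.
rate_per_kg_rs = 20
daily_infectious_waste_kg = 50  # hypothetical load, not from the survey
annual_external_rs = rate_per_kg_rs * daily_infectious_waste_kg * 365

annual_total_rs = annual_internal_rs + annual_external_rs
```

Varying the assumed waste load shows how quickly external disposal charges can dominate a hospital's BMW budget.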

  20. Testbed-based Performance Evaluation of Attack Resilient Control for AGC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashok, Aditya; Sridhar, Siddharth; McKinnon, Archibald D.

    The modern electric power grid is a complex cyber-physical system whose reliable operation is enabled by a wide-area monitoring and control infrastructure. This infrastructure, supported by an extensive communication backbone, enables several control applications functioning at multiple time scales to ensure the grid is maintained within stable operating limits. Recent events have shown that vulnerabilities in this infrastructure may be exploited to manipulate the data being exchanged. Such a scenario could cause the associated control application to mis-operate, potentially causing system-wide instabilities. There is a growing emphasis on looking beyond traditional cybersecurity solutions to mitigate such threats. In this paper we perform a testbed-based validation of one such solution - Attack Resilient Control (ARC) - on Iowa State University's PowerCyber testbed. ARC is a cyber-physical security solution that combines domain-specific anomaly detection and model-based mitigation to detect stealthy attacks on Automatic Generation Control (AGC). In this paper, we first describe the implementation architecture of the experiment on the testbed. Next, we demonstrate the capability of stealthy attack templates to cause forced under-frequency load shedding in a 3-area test system. We then validate the performance of ARC by measuring its ability to detect and mitigate these attacks. Our results reveal that ARC is efficient in detecting stealthy attacks and enables AGC to maintain system operating frequency close to its nominal value during an attack. Our studies also highlight the importance of testbed-based experimentation for evaluating the performance of cyber-physical security and control applications.
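
ARC itself combines domain-specific anomaly detection with model-based mitigation; the sketch below is not ARC's algorithm, only a deliberately simplified threshold detector on AGC frequency telemetry to illustrate the basic flag-and-act idea. The nominal frequency, alarm band, and sample values are all assumptions for the example:

```python
NOMINAL_HZ = 60.0
DEVIATION_LIMIT_HZ = 0.05  # hypothetical alarm band, not ARC's actual threshold

def detect_anomalies(frequency_samples, limit=DEVIATION_LIMIT_HZ):
    """Return indices of samples whose deviation from nominal exceeds the band."""
    return [i for i, f in enumerate(frequency_samples)
            if abs(f - NOMINAL_HZ) > limit]

# A stealthy attack slowly biases measurements before a sharper excursion.
samples = [60.00, 60.01, 60.02, 60.04, 60.08, 59.90]
alarms = detect_anomalies(samples)
```

A real detector must do far better than a fixed band (stealthy attacks are designed to stay inside it), which is why ARC adds a model of expected AGC behavior.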

  1. Coiled-coil protein composition of 22 proteomes--differences and common themes in subcellular infrastructure and traffic control.

    PubMed

    Rose, Annkatrin; Schraegle, Shannon J; Stahlberg, Eric A; Meier, Iris

    2005-11-16

    Long alpha-helical coiled-coil proteins are involved in diverse organizational and regulatory processes in eukaryotic cells. They provide cables and networks in the cyto- and nucleoskeleton, molecular scaffolds that organize membrane systems and tissues, motors, levers, rotating arms, and possibly springs. Mutations in long coiled-coil proteins have been implicated in a growing number of human diseases. Using the coiled-coil prediction program MultiCoil, we have previously identified all long coiled-coil proteins from the model plant Arabidopsis thaliana and have established a searchable Arabidopsis coiled-coil protein database. Here, we have identified all proteins with long coiled-coil domains from 21 additional fully sequenced genomes. Because regions predicted to form coiled-coils interfere with sequence homology determination, we have developed a sequence comparison and clustering strategy based on masking predicted coiled-coil domains. Comparing and grouping all long coiled-coil proteins from 22 genomes, the kingdom-specificity of coiled-coil protein families was determined. At the same time, a number of proteins with unknown function could be grouped with already characterized proteins from other organisms. MultiCoil predicts proteins with extended coiled-coil domains (more than 250 amino acids) to be largely absent from bacterial genomes, but present in archaea and eukaryotes. The structural maintenance of chromosomes proteins and their relatives are the only long coiled-coil protein family clearly conserved throughout all kingdoms, indicating their ancient nature. Motor proteins, membrane tethering and vesicle transport proteins are the dominant eukaryote-specific long coiled-coil proteins, suggesting that coiled-coil proteins have gained functions in the increasingly complex processes of subcellular infrastructure maintenance and trafficking control of the eukaryotic cell.
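
The masking step in the comparison strategy, replacing predicted coiled-coil residues before similarity scoring so the repetitive heptad pattern does not dominate homology detection, can be sketched as follows. The sequence and the predicted region are hypothetical, and the masking character is a common convention rather than the paper's stated choice:

```python
def mask_coiled_coils(sequence, predicted_regions, mask_char="X"):
    """Replace residues inside predicted coiled-coil regions with a mask
    character so they do not dominate sequence-similarity scoring."""
    seq = list(sequence)
    for start, end in predicted_regions:  # 0-based, end-exclusive intervals
        for i in range(start, end):
            seq[i] = mask_char
    return "".join(seq)

# Hypothetical protein with one predicted coiled-coil span at residues 4-14.
masked = mask_coiled_coils("MSDLEELLRKAEEQLKKVNPD", [(4, 15)])
```

The masked sequences would then be fed to a standard pairwise-comparison and clustering pipeline, as the abstract describes.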

  2. Strengthening Software Authentication with the ROSE Software Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, G

    2006-06-15

    Many recent nonproliferation and arms control software projects include a software authentication regime. These include U.S. Government-sponsored projects both in the United States and in the Russian Federation (RF). This trend toward requiring software authentication is only accelerating. Demonstrating assurance that software performs as expected without hidden "backdoors" is crucial to a project's success. In this context, "authentication" is defined as determining that a software package performs only its intended purpose and performs said purpose correctly and reliably over the planned duration of an agreement. In addition to visual inspections by knowledgeable computer scientists, automated tools are needed to highlight suspicious code constructs, both to aid visual inspection and to guide program development. While many commercial tools are available for portions of the authentication task, they are proprietary and not extensible. An open-source, extensible tool can be customized to the unique needs of each project (projects can have both common and custom rules to detect flaws and security holes). Any such extensible tool has to be based on a complete language compiler. ROSE is precisely such a compiler infrastructure developed within the Department of Energy (DOE) and targeted at the optimization of scientific applications and user-defined libraries within large-scale applications (typically applications of a million lines of code). ROSE is a robust, source-to-source analysis and optimization infrastructure currently addressing large, million-line DOE applications in C and C++ (handling the full C, C99, C++ languages and with current collaborations to support Fortran90). We propose to extend ROSE to address a number of security-specific requirements, and apply it to software authentication for nonproliferation and arms control projects.
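
ROSE operates on C/C++ through a full compiler front end; as a language-neutral illustration of the same "flag suspicious constructs for human review" pattern, here is an analogous rule applied to Python's own abstract syntax tree. The watch list is an invented example rule, not one of ROSE's:

```python
import ast

SUSPICIOUS_CALLS = {"eval", "exec"}  # illustrative rule set

def find_suspicious_calls(source):
    """Walk the AST and report (line, name) for every call on the watch list,
    so a human reviewer can inspect each occurrence."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in SUSPICIOUS_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

code = "x = 1\ny = eval('x + 1')\n"
hits = find_suspicious_calls(code)
```

Because the rule operates on the parsed program rather than raw text, it cannot be fooled by formatting, which is the advantage of building such tools on a complete compiler infrastructure.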

  3. Coiled-coil protein composition of 22 proteomes – differences and common themes in subcellular infrastructure and traffic control

    PubMed Central

    Rose, Annkatrin; Schraegle, Shannon J; Stahlberg, Eric A; Meier, Iris

    2005-01-01

    Background Long alpha-helical coiled-coil proteins are involved in diverse organizational and regulatory processes in eukaryotic cells. They provide cables and networks in the cyto- and nucleoskeleton, molecular scaffolds that organize membrane systems and tissues, motors, levers, rotating arms, and possibly springs. Mutations in long coiled-coil proteins have been implicated in a growing number of human diseases. Using the coiled-coil prediction program MultiCoil, we have previously identified all long coiled-coil proteins from the model plant Arabidopsis thaliana and have established a searchable Arabidopsis coiled-coil protein database. Results Here, we have identified all proteins with long coiled-coil domains from 21 additional fully sequenced genomes. Because regions predicted to form coiled-coils interfere with sequence homology determination, we have developed a sequence comparison and clustering strategy based on masking predicted coiled-coil domains. Comparing and grouping all long coiled-coil proteins from 22 genomes, the kingdom-specificity of coiled-coil protein families was determined. At the same time, a number of proteins with unknown function could be grouped with already characterized proteins from other organisms. Conclusion MultiCoil predicts proteins with extended coiled-coil domains (more than 250 amino acids) to be largely absent from bacterial genomes, but present in archaea and eukaryotes. The structural maintenance of chromosomes proteins and their relatives are the only long coiled-coil protein family clearly conserved throughout all kingdoms, indicating their ancient nature. Motor proteins, membrane tethering and vesicle transport proteins are the dominant eukaryote-specific long coiled-coil proteins, suggesting that coiled-coil proteins have gained functions in the increasingly complex processes of subcellular infrastructure maintenance and trafficking control of the eukaryotic cell. PMID:16288662

  4. Control and Information Systems for the National Ignition Facility

    DOE PAGES

    Brunton, Gordon; Casey, Allan; Christensen, Marvin; ...

    2017-03-23

    Orchestration of every National Ignition Facility (NIF) shot cycle is managed by the Integrated Computer Control System (ICCS), which uses a scalable software architecture running code on more than 1950 front-end processors, embedded controllers, and supervisory servers. The ICCS operates laser and industrial control hardware containing 66 000 control and monitor points to ensure that all of NIF’s laser beams arrive at the target within 30 ps of each other and are aligned to a pointing accuracy of less than 50 μm root-mean-square, while ensuring that a host of diagnostic instruments record data in a few billionths of a second. NIF’s automated control subsystems are built from a common object-oriented software framework that distributes the software across the computer network and achieves interoperation between different software languages and target architectures. A large suite of business and scientific software tools supports experimental planning, experimental setup, facility configuration, and post-shot analysis. Standard business services using open-source software, commercial workflow tools, and database and messaging technologies have been developed. An information technology infrastructure consisting of servers, network devices, and storage provides the foundation for these systems. Thus, this work is an overview of the control and information systems used to support a wide variety of experiments during the National Ignition Campaign.

  5. Control and Information Systems for the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunton, Gordon; Casey, Allan; Christensen, Marvin

    Orchestration of every National Ignition Facility (NIF) shot cycle is managed by the Integrated Computer Control System (ICCS), which uses a scalable software architecture running code on more than 1950 front-end processors, embedded controllers, and supervisory servers. The ICCS operates laser and industrial control hardware containing 66 000 control and monitor points to ensure that all of NIF’s laser beams arrive at the target within 30 ps of each other and are aligned to a pointing accuracy of less than 50 μm root-mean-square, while ensuring that a host of diagnostic instruments record data in a few billionths of a second. NIF’s automated control subsystems are built from a common object-oriented software framework that distributes the software across the computer network and achieves interoperation between different software languages and target architectures. A large suite of business and scientific software tools supports experimental planning, experimental setup, facility configuration, and post-shot analysis. Standard business services using open-source software, commercial workflow tools, and database and messaging technologies have been developed. An information technology infrastructure consisting of servers, network devices, and storage provides the foundation for these systems. Thus, this work is an overview of the control and information systems used to support a wide variety of experiments during the National Ignition Campaign.

  6. 31 CFR 800.503 - Determination of whether to undertake an investigation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... government-controlled transaction; or (2) Would result in control by a foreign person of critical infrastructure of or within the United States, if the Committee determines that the transaction could impair the...

  7. 31 CFR 800.503 - Determination of whether to undertake an investigation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... government-controlled transaction; or (2) Would result in control by a foreign person of critical infrastructure of or within the United States, if the Committee determines that the transaction could impair the...

  8. 31 CFR 800.503 - Determination of whether to undertake an investigation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... government-controlled transaction; or (2) Would result in control by a foreign person of critical infrastructure of or within the United States, if the Committee determines that the transaction could impair the...

  9. 31 CFR 800.503 - Determination of whether to undertake an investigation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... government-controlled transaction; or (2) Would result in control by a foreign person of critical infrastructure of or within the United States, if the Committee determines that the transaction could impair the...

  10. 31 CFR 800.503 - Determination of whether to undertake an investigation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... government-controlled transaction; or (2) Would result in control by a foreign person of critical infrastructure of or within the United States, if the Committee determines that the transaction could impair the...

  11. Route Infrastructure and the Risk of Injuries to Bicyclists: A Case-Crossover Study

    PubMed Central

    Harris, M. Anne; Reynolds, Conor C. O.; Winters, Meghan; Babul, Shelina; Chipman, Mary; Cusimano, Michael D.; Brubacher, Jeff R.; Hunte, Garth; Friedman, Steven M.; Monro, Melody; Shen, Hui; Vernich, Lee; Cripton, Peter A.

    2012-01-01

    Objectives. We compared cycling injury risks of 14 route types and other route infrastructure features. Methods. We recruited 690 city residents injured while cycling in Toronto or Vancouver, Canada. A case-crossover design compared route infrastructure at each injury site to that of a randomly selected control site from the same trip. Results. Of 14 route types, cycle tracks had the lowest risk (adjusted odds ratio [OR] = 0.11; 95% confidence interval [CI] = 0.02, 0.54), about one ninth the risk of the reference: major streets with parked cars and no bike infrastructure. Risks on major streets were lower without parked cars (adjusted OR = 0.63; 95% CI = 0.41, 0.96) and with bike lanes (adjusted OR = 0.54; 95% CI = 0.29, 1.01). Local streets also had lower risks (adjusted OR = 0.51; 95% CI = 0.31, 0.84). Other infrastructure characteristics were associated with increased risks: streetcar or train tracks (adjusted OR = 3.0; 95% CI = 1.8, 5.1), downhill grades (adjusted OR = 2.3; 95% CI = 1.7, 3.1), and construction (adjusted OR = 1.9; 95% CI = 1.3, 2.9). Conclusions. The lower risks on quiet streets and with bike-specific infrastructure along busy streets support the route-design approach used in many northern European countries. Transportation infrastructure with lower bicycling injury risks merits public health support to reduce injuries and promote cycling. PMID:23078480
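
The study's odds ratios come from adjusted (conditional logistic) models suited to the case-crossover design; as a simpler illustration of the underlying quantity, the sketch below computes a crude odds ratio with a Woolf-type 95% confidence interval from a 2x2 table. The counts are hypothetical and do not reproduce the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a Woolf (log-scale) 95% CI from a 2x2 table:
    a, b = exposed/unexposed among injury sites;
    c, d = exposed/unexposed among control sites."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for one infrastructure feature (e.g. train tracks).
or_, lo, hi = odds_ratio_ci(30, 660, 10, 680)
```

A CI that excludes 1.0, as here, indicates an association between the feature and injury risk; the matched design then controls for rider and trip characteristics that a crude table cannot.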

  12. Ocean Data Interoperability Platform: developing a common global framework for marine data management

    NASA Astrophysics Data System (ADS)

    Glaves, Helen; Schaap, Dick

    2017-04-01

    In recent years there has been a paradigm shift in marine research, moving from the traditional discipline-based methodology employed at the national level by one or more organizations to a multidisciplinary, ecosystem-level approach conducted on an international scale. This increasingly holistic approach to marine research is in part being driven by policy and legislation. For example, the European Commission's Blue Growth strategy promotes sustainable growth in the marine environment, including the development of sea-basin strategies (European Commission 2014). As well as this policy-driven shift to ecosystem-level marine research, there are also scientific and economic drivers for a basin-level approach. Marine monitoring is essential for assessing the health of an ecosystem and determining the impacts of specific factors and activities on it. The availability of large volumes of good quality data is fundamental to this increasingly holistic approach to ocean research, but there are significant barriers to its re-use. These are due to the heterogeneity of the data, resulting from having been collected by many organizations around the globe using a variety of sensors mounted on a range of different platforms. The data is then delivered and archived in a range of formats, using various spatial coordinate systems and aligned with different standards. This heterogeneity, coupled with organizational and national policies on data sharing, makes access and re-use of marine data problematic. In response to the need for greater sharing of marine data, a number of e-infrastructures have been developed, but these have different levels of granularity, with the majority having been developed at the regional level to address specific requirements for data, e.g. SeaDataNet in Europe and the Australian Ocean Data Network (AODN).
These data infrastructures are also frequently aligned with the priorities of the local funding agencies and have been created in isolation from those developed elsewhere. To add a further layer of complexity, there are also global initiatives providing marine data infrastructures, e.g. IOC-IODE and POGO, as well as those with a wider remit that includes environmental data, e.g. GEOSS and COPERNICUS. Ecosystem-level marine research requires a common framework for marine data management that supports the sharing of data across these regional and global data systems, and provides the user with access to the data available from these services via a single point of access. This framework must be based on existing data systems and established by developing interoperability between them. The Ocean Data Interoperability Platform (ODIP/ODIP II) project brings together the organisations responsible for maintaining selected regional data infrastructures, along with other relevant experts, in order to identify the common standards and best practice necessary to underpin this framework, and to evaluate the differences and commonalities between the regional data infrastructures in order to establish interoperability between them for the purposes of data sharing. This coordinated approach is being demonstrated and validated through the development of a series of prototype interoperability solutions that demonstrate the mechanisms and standards necessary to facilitate the sharing of marine data across these existing data infrastructures.

  13. CDP - Adaptive Supervisory Control and Data Acquisition (SCADA) Technology for Infrastructure Protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marco Carvalho; Richard Ford

    2012-05-14

    Supervisory Control and Data Acquisition (SCADA) systems are a type of industrial control system characterized by the centralized (or hierarchical) monitoring and control of geographically dispersed assets. SCADA systems combine acquisition and network components to provide data gathering, transmission, and visualization for centralized monitoring and control. However, these integrated capabilities, especially when built over legacy systems and protocols, generally result in vulnerabilities that can be exploited by attackers, with potentially disastrous consequences. Our research project proposal was to investigate new approaches for secure and survivable SCADA systems. In particular, we were interested in the resilience and adaptability of large-scale mission-critical monitoring and control infrastructures. Our research proposal was divided into two main tasks. The first task was centered on the design and investigation of algorithms for survivable SCADA systems and a prototype framework demonstration. The second task was centered on the characterization and demonstration of the proposed approach in illustrative scenarios (simulated or emulated).

  14. Information Technology and Community Restoration Studies/Task 1: Information Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upton, Jaki F.; Lesperance, Ann M.; Stein, Steven L.

    2009-11-19

    Executive Summary The Interagency Biological Restoration Demonstration—a program jointly funded by the Department of Defense's Defense Threat Reduction Agency and the Department of Homeland Security's (DHS's) Science and Technology Directorate—is developing policies, methods, plans, and applied technologies to restore large urban areas, critical infrastructures, and Department of Defense installations following the intentional release of a biological agent (anthrax) by terrorists. There is a perception that there should be a common system that can share information both vertically and horizontally amongst participating organizations as well as support analyses. A key question is: "How far away from this are we?" As part of this program, Pacific Northwest National Laboratory conducted research to identify the current information technology tools that would be used by organizations in the greater Seattle urban area in such a scenario, to define criteria for use in evaluating information technology tools, and to identify current gaps. Researchers interviewed 28 individuals representing 25 agencies in civilian and military organizations to identify the tools they currently use to capture data needed to support operations and decision making. The organizations can be grouped into five broad categories: defense (Department of Defense), environmental/ecological (Environmental Protection Agency/Ecology), public health and medical services, emergency management, and critical infrastructure. The types of information that would be communicated in a biological terrorism incident include critical infrastructure and resource status, safety and protection information, laboratory test results, and general emergency information.
The most commonly used tools are WebEOC (a web-enabled crisis information management system with real-time information sharing), mass notification software, resource tracking software, and NW WARN (web-based information to protect critical infrastructure systems). The current information management tools appear to be used primarily for information gathering and sharing, not decision making. Respondents identified the following criteria for a future software system: it should be easy to learn, update information in real time, work with all agencies, be secure, include a visualization or geographic information system (GIS) feature, enable varying permission levels, flow information from one stage to another, work with other databases, feed decision support tools, comply with appropriate standards, and be reasonably priced. Current tools have security issues, lack visual/mapping functions and critical infrastructure status, and do not integrate with other tools. There is a clear need for an integrated, common operating system, accessible by all the organizations that would have a role in managing an anthrax incident, to enable regional decision making. The most useful tool would feature a GIS visualization providing a common operating picture that is updated in real time. To capitalize on information gained from the interviews, the following activities are recommended: • Rate emergency management decision tools against the criteria specified by the interviewees. • Identify and analyze other current activities focused on information sharing in the greater Seattle urban area. • Identify and analyze information sharing systems/tools used in other regions.
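The first recommended activity, rating tools against the interviewees' criteria, amounts to a weighted-scoring exercise. A hedged sketch follows; the tool names, criteria weights, and ratings are all invented for illustration, not drawn from the study.

```python
# Weighted scoring of candidate tools against (a subset of) the
# interviewees' criteria. Weights and 0-5 ratings are made up.

CRITERIA = {
    "easy_to_learn": 2, "real_time": 3, "secure": 3,
    "gis_visualization": 3, "integrates_with_other_tools": 2,
}

def rate(tools):
    """Return tool names sorted by weighted score, best first."""
    def score(ratings):  # ratings: criterion -> 0..5
        return sum(CRITERIA[c] * ratings.get(c, 0) for c in CRITERIA)
    return sorted(tools, key=lambda t: score(tools[t]), reverse=True)

tools = {
    "WebEOC-like": {"easy_to_learn": 4, "real_time": 5, "secure": 3},
    "GIS COP":     {"real_time": 4, "gis_visualization": 5,
                    "secure": 4, "integrates_with_other_tools": 3},
}
print(rate(tools))
```

Criteria the interviewees did not weight (e.g. price) would extend `CRITERIA` the same way; the interesting design work is in eliciting defensible weights, not in the arithmetic.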

  15. Indicators and protocols for monitoring impacts of formal and informal trails in protected areas

    USGS Publications Warehouse

    Marion, Jeffrey L.; Leung, Yu-Fai

    2011-01-01

    Trails are a common recreation infrastructure in protected areas, and their conditions affect the quality of natural resources and visitor experiences. Various trail impact indicators and assessment protocols have been developed in support of monitoring programs, which are often used for management decision-making or as part of visitor capacity management frameworks. This paper reviews common indicators and assessment protocols for three types of trails: surfaced formal trails, unsurfaced formal trails, and informal (visitor-created) trails. Monitoring methods and selected data from three U.S. National Park Service units are presented to illustrate some common trail impact indicators and assessment options.

  16. Cyber-physical security of Wide-Area Monitoring, Protection and Control in a smart grid environment

    PubMed Central

    Ashok, Aditya; Hahn, Adam; Govindarasu, Manimaran

    2013-01-01

    Smart grid initiatives will produce a grid that is increasingly dependent on its cyber infrastructure in order to support the numerous power applications necessary to provide improved grid monitoring and control capabilities. However, recent findings documented in government reports and other literature indicate a growing threat of cyber-based attacks, in both number and sophistication, targeting the nation’s electric grid and other critical infrastructures. Specifically, this paper discusses the cyber-physical security of Wide-Area Monitoring, Protection and Control (WAMPAC) from a coordinated cyber attack perspective and introduces a game-theoretic approach to address the issue. Finally, the paper briefly describes how cyber-physical testbeds can be used to evaluate security research and perform realistic attack-defense studies for smart grid environments. PMID:25685516
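The game-theoretic framing mentioned above can be illustrated with a minimal attacker-defender game: the defender protects one asset, the attacker picks a target, and the defender minimizes worst-case damage. The substation names and damage scores below are invented; this is a generic minimax sketch, not the paper's actual model.

```python
# Toy zero-sum defense game: DAMAGE[attacked][defended] is an invented
# grid-impact score. The defender picks the pure strategy that
# minimizes the attacker's best response (pure-strategy minimax).

DAMAGE = {
    "sub_A": {"sub_A": 1, "sub_B": 9, "sub_C": 7},
    "sub_B": {"sub_A": 6, "sub_B": 1, "sub_C": 6},
    "sub_C": {"sub_A": 8, "sub_B": 8, "sub_C": 1},
}

def best_pure_defense(damage):
    """Return (defense choice, worst-case damage under that choice)."""
    worst = {
        d: max(damage[a][d] for a in damage)  # attacker best-responds
        for d in damage
    }
    choice = min(worst, key=worst.get)
    return choice, worst[choice]

choice, loss = best_pure_defense(DAMAGE)
print(choice, loss)
```

Richer models (mixed strategies, multi-stage attacks, WAMPAC-specific payoffs) build on the same structure: enumerate strategies, evaluate payoffs, and solve for an equilibrium rather than a pure minimax.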

  17. Cyber-physical security of Wide-Area Monitoring, Protection and Control in a smart grid environment.

    PubMed

    Ashok, Aditya; Hahn, Adam; Govindarasu, Manimaran

    2014-07-01

    Smart grid initiatives will produce a grid that is increasingly dependent on its cyber infrastructure in order to support the numerous power applications necessary to provide improved grid monitoring and control capabilities. However, recent findings documented in government reports and other literature indicate a growing threat of cyber-based attacks, in both number and sophistication, targeting the nation's electric grid and other critical infrastructures. Specifically, this paper discusses the cyber-physical security of Wide-Area Monitoring, Protection and Control (WAMPAC) from a coordinated cyber attack perspective and introduces a game-theoretic approach to address the issue. Finally, the paper briefly describes how cyber-physical testbeds can be used to evaluate security research and perform realistic attack-defense studies for smart grid environments.

  18. Applications of CCSDS recommendations to Integrated Ground Data Systems (IGDS)

    NASA Technical Reports Server (NTRS)

    Mizuta, Hiroshi; Martin, Daniel; Kato, Hatsuhiko; Ihara, Hirokazu

    1993-01-01

    This paper describes an application of the CCSDS Principal Network (CPN) service model to the communications network elements of a postulated Integrated Ground Data System (IGDS). Functions are drawn principally from COSMICS (Cosmic Information and Control System), an integrated space control infrastructure, and the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). From functional requirements, this paper derives a set of five communications network partitions which, taken together, support proposed space control infrastructures and data distribution systems. Our functional analysis indicates that the five network partitions derived in this paper should effectively interconnect the users, centers, processors, and other architectural elements of an IGDS. This paper illustrates a useful application of the CCSDS (Consultative Committee for Space Data Systems) Recommendations to ground data system development.

  19. Cyber Security and Resilient Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert S. Anderson

    2009-07-01

    The Department of Energy (DOE) Idaho National Laboratory (INL) has become a center of excellence for critical infrastructure protection, particularly in the field of cyber security. It is one of only a few national laboratories that have enhanced the nation’s cyber security posture by performing industrial control system (ICS) vendor assessments as well as user on-site assessments. Not only are vulnerabilities discovered, but actions for enhancing security are also suggested, both on a system-specific basis and from the general perspective of identifying common weaknesses and their corresponding corrective actions. These cyber security programs have performed over 40 assessments to date, which have led to more robust, secure, and resilient monitoring and control systems for the US electrical grid, oil and gas, chemical, transportation, and many other sectors. In addition to the cyber assessments themselves, the INL has been engaged in outreach to the ICS community through vendor forums, technical conferences, vendor user groups, and other special engagements as requested. Training programs have been created to help educate managers and workers at all levels, with an emphasis on real, everyday cyber hacking methods and techniques, including typical exploits that are used. The asset owner or end user has many products available that were created from these programs. One outstanding product is the US Department of Homeland Security (DHS) Cyber Security Procurement Language for Control Systems document, which provides insight to the user when specifying a new monitoring and control system, particularly concerning security requirements. Employing some of the top cyber researchers in the nation, the INL can leverage this talent for many applications beyond critical infrastructure.
Monitoring and control systems are used throughout the world to perform tasks ranging from simple ones, such as cooking in a microwave, to complex ones, such as the monitoring and control of next-generation fighter jets or nuclear material safeguards systems in complex nuclear fuel cycle facilities. It is the intent of this paper to describe the cyber security programs currently in place, the experiences and successes achieved in industry, including outreach and training, and suggestions about how other sectors and organizations can leverage this national expertise to help their monitoring and control systems become more secure.

  20. The Governors Propose: State Policy and Public Education, 1998.

    ERIC Educational Resources Information Center

    Hertert, Linda

    In the early months of 1998, state governors delivered state-of-the-state addresses that touched on concerns transcending state lines: taxes, infrastructure development and maintenance, crime, health, and public education. Some of these common concerns, with an emphasis on proposed K-12 public education policies, are discussed. The paper provides a…

  1. Community Services for the Aged: The View from Eight Countries

    ERIC Educational Resources Information Center

    Kamerman, Sheila B.

    1976-01-01

    A country-by-country, case-descriptive methodology was employed in a cross-national study of social service systems. The major findings with regard to the aged are: (1) countries must establish a firm infrastructure of basic social provision for community services to function adequately; and (2) a common core of "personal social services" is emerging…

  2. The multiple resource inventory decision-making process

    Treesearch

    Victor A. Rudis

    1993-01-01

    A model of the multiple resource inventory decision-making process is presented that identifies the steps in conducting inventories, describes the infrastructure, and points out knowledge gaps common to many interdisciplinary studies. Successful efforts to date suggest the need to bridge the gaps by sharing elements, maintaining dialogue among stakeholders in multiple...

  3. DYNER: A DYNamic ClustER for Education and Research

    ERIC Educational Resources Information Center

    Kehagias, Dimitris; Grivas, Michael; Mamalis, Basilis; Pantziou, Grammati

    2006-01-01

    Purpose: The purpose of this paper is to evaluate the use of an inexpensive dynamic computing resource, consisting of a Beowulf-class cluster and a NoW, as an educational and research infrastructure. Design/methodology/approach: Clusters, built using commodity-off-the-shelf (COTS) hardware components and free, or commonly used, software, provide…

  4. Developing a data infrastructure for a learning health system: the PORTAL network.

    PubMed

    McGlynn, Elizabeth A; Lieu, Tracy A; Durham, Mary L; Bauck, Alan; Laws, Reesa; Go, Alan S; Chen, Jersey; Feigelson, Heather Spencer; Corley, Douglas A; Young, Deborah Rohm; Nelson, Andrew F; Davidson, Arthur J; Morales, Leo S; Kahn, Michael G

    2014-01-01

    The Kaiser Permanente & Strategic Partners Patient Outcomes Research To Advance Learning (PORTAL) network engages four healthcare delivery systems (Kaiser Permanente, Group Health Cooperative, HealthPartners, and Denver Health) and their affiliated research centers to create a new national network infrastructure that builds on existing relationships among these institutions. PORTAL is enhancing its current capabilities by expanding the scope of the common data model, paying particular attention to incorporating patient-reported data more systematically, implementing new multi-site data governance procedures, and integrating the PCORnet PopMedNet platform across our research centers. PORTAL is partnering with clinical research and patient experts to create cohorts of patients with a common diagnosis (colorectal cancer), a rare diagnosis (adolescents and adults with severe congenital heart disease), and adults who are overweight or obese, including those with pre-diabetes or diabetes, to conduct large-scale observational comparative effectiveness research and pragmatic clinical trials across diverse clinical care settings.

  5. On Quantitative Comparative Research in Communication and Language Evolution

    PubMed Central

    Oller, D. Kimbrough; Griebel, Ulrike

    2014-01-01

    Quantitative comparison of human language and natural animal communication requires improved conceptualizations. We argue that an infrastructural approach to development and evolution incorporating an extended interpretation of the distinctions among illocution, perlocution, and meaning (Austin 1962; Oller and Griebel 2008) can help place the issues relevant to quantitative comparison in perspective. The approach can illuminate the controversy revolving around the notion of functional referentiality as applied to alarm calls, for example in the vervet monkey. We argue that referentiality offers a poor point of quantitative comparison across language and animal communication in the wild. Evidence shows that even newborn human cry could be deemed to show functional referentiality according to the criteria typically invoked by advocates of referentiality in animal communication. Exploring the essence of the idea of illocution, we illustrate an important realm of commonality among animal communication systems and human language, a commonality that opens the door to more productive, quantifiable comparisons. Finally, we delineate two examples of infrastructural communicative capabilities that should be particularly amenable to direct quantitative comparison across humans and our closest relatives. PMID:25285057

  6. On Quantitative Comparative Research in Communication and Language Evolution.

    PubMed

    Oller, D Kimbrough; Griebel, Ulrike

    2014-09-01

    Quantitative comparison of human language and natural animal communication requires improved conceptualizations. We argue that an infrastructural approach to development and evolution incorporating an extended interpretation of the distinctions among illocution, perlocution, and meaning (Austin 1962; Oller and Griebel 2008) can help place the issues relevant to quantitative comparison in perspective. The approach can illuminate the controversy revolving around the notion of functional referentiality as applied to alarm calls, for example in the vervet monkey. We argue that referentiality offers a poor point of quantitative comparison across language and animal communication in the wild. Evidence shows that even newborn human cry could be deemed to show functional referentiality according to the criteria typically invoked by advocates of referentiality in animal communication. Exploring the essence of the idea of illocution, we illustrate an important realm of commonality among animal communication systems and human language, a commonality that opens the door to more productive, quantifiable comparisons. Finally, we delineate two examples of infrastructural communicative capabilities that should be particularly amenable to direct quantitative comparison across humans and our closest relatives.

  7. In Situ High Pressure Hydrogen Tribological Testing of Common Polymer Materials Used in the Hydrogen Delivery Infrastructure.

    PubMed

    Duranty, Edward R; Roosendaal, Timothy J; Pitman, Stan G; Tucker, Joseph C; Owsley, Stanley L; Suter, Jonathan D; Alvine, Kyle James

    2018-03-31

    High pressure hydrogen gas is known to adversely affect metallic components of compressors, valves, hoses, and actuators. However, relatively little is known about the effects of high pressure hydrogen on the polymer sealing and barrier materials also found within these components. More study is required in order to determine the compatibility of common polymer materials found in the components of the hydrogen fuel delivery infrastructure with high pressure hydrogen. As a result, it is important to consider the changes in physical properties such as friction and wear in situ while the polymer is exposed to high pressure hydrogen. In this protocol, we present a method for testing the friction and wear properties of ethylene propylene diene monomer (EPDM) elastomer samples in a 28 MPa high pressure hydrogen environment using a custom-built in situ pin-on-flat linear reciprocating tribometer. Representative results from this testing are presented which indicate that the coefficient of friction between the EPDM sample coupon and steel counter surface is increased in high pressure hydrogen as compared to the coefficient of friction similarly measured in ambient air.
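The quantity reported above, the coefficient of friction, is derived from tribometer force data in a standard way: mu = tangential force / normal force, averaged over the reciprocating stroke. The sketch below shows that arithmetic with invented sample forces, not measurements from the study.

```python
# Mean coefficient of friction from pin-on-flat tribometer data.
# Tangential forces alternate sign over a reciprocating stroke, so we
# average their magnitudes. Sample values are invented.

def mean_cof(tangential_forces_n, normal_force_n):
    """Average |F_t| / F_n over one reciprocating stroke."""
    if normal_force_n <= 0:
        raise ValueError("normal load must be positive")
    return sum(abs(f) for f in tangential_forces_n) / (
        len(tangential_forces_n) * normal_force_n
    )

# e.g. an elastomer pin under a hypothetical 10 N load,
# sampled tangential forces in newtons over one back-and-forth stroke
stroke = [4.0, 4.4, 4.2, -4.1, -4.3, -4.0]
print(round(mean_cof(stroke, 10.0), 3))
```

Comparing such averages between ambient-air and high-pressure-hydrogen runs is what lets the authors claim an increased coefficient of friction in hydrogen.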

  8. A case analysis of INFOMED: the Cuban national health care telecommunications network and portal.

    PubMed

    Séror, Ann C

    2006-01-27

    The Internet and telecommunications technologies contribute to national health care system infrastructures and extend global health care services markets. The Cuban national health care system offers a model to show how a national information portal can contribute to system integration, including research, education, and service delivery as well as international trade in products and services. The objectives of this paper are (1) to present the context of the Cuban national health care system since the revolution in 1959, (2) to identify virtual institutional infrastructures of the system associated with the Cuban National Health Care Telecommunications Network and Portal (INFOMED), and (3) to show how they contribute to Cuban trade in international health care service markets. Qualitative case research methods were used to identify the integrated virtual infrastructure of INFOMED and to show how it reflects socialist ideology. Virtual institutional infrastructures include electronic medical and information services and the structure of national networks linking such services. Analysis of INFOMED infrastructures shows integration of health care information, research, and education as well as the interface between Cuban national information networks and the global Internet. System control mechanisms include horizontal integration and coordination through virtual institutions linked through INFOMED, and vertical control through the Ministry of Public Health and the government hierarchy. Telecommunications technology serves as a foundation for a dual market structure differentiating domestic services from international trade. INFOMED is a model of interest for integrating health care information, research, education, and services. The virtual infrastructures linked through INFOMED support the diffusion of Cuban health care products and services in global markets. 
Transferability of this model is contingent upon ideology and interpretation of values such as individual intellectual property and confidentiality of individual health information. Future research should focus on examination of these issues and their consequences for global markets in health care.

  9. A Case Analysis of INFOMED: The Cuban National Health Care Telecommunications Network and Portal

    PubMed Central

    2006-01-01

    Background The Internet and telecommunications technologies contribute to national health care system infrastructures and extend global health care services markets. The Cuban national health care system offers a model to show how a national information portal can contribute to system integration, including research, education, and service delivery as well as international trade in products and services. Objective The objectives of this paper are (1) to present the context of the Cuban national health care system since the revolution in 1959, (2) to identify virtual institutional infrastructures of the system associated with the Cuban National Health Care Telecommunications Network and Portal (INFOMED), and (3) to show how they contribute to Cuban trade in international health care service markets. Methods Qualitative case research methods were used to identify the integrated virtual infrastructure of INFOMED and to show how it reflects socialist ideology. Virtual institutional infrastructures include electronic medical and information services and the structure of national networks linking such services. Results Analysis of INFOMED infrastructures shows integration of health care information, research, and education as well as the interface between Cuban national information networks and the global Internet. System control mechanisms include horizontal integration and coordination through virtual institutions linked through INFOMED, and vertical control through the Ministry of Public Health and the government hierarchy. Telecommunications technology serves as a foundation for a dual market structure differentiating domestic services from international trade. Conclusions INFOMED is a model of interest for integrating health care information, research, education, and services. The virtual infrastructures linked through INFOMED support the diffusion of Cuban health care products and services in global markets. 
Transferability of this model is contingent upon ideology and interpretation of values such as individual intellectual property and confidentiality of individual health information. Future research should focus on examination of these issues and their consequences for global markets in health care. PMID:16585025

  10. Traffic signal control enhancements under vehicle infrastructure integration systems.

    DOT National Transportation Integrated Search

    2011-12-01

    Most current traffic signal systems are operated using archaic binary traffic-detection logic (vehicle presence/non-presence information). The logic was originally developed to provide input for old electro-mechanical controllers th...

  11. Stormwater management and ecosystem services: a review

    NASA Astrophysics Data System (ADS)

    Prudencio, Liana; Null, Sarah E.

    2018-03-01

    Researchers and water managers have turned to green stormwater infrastructure, such as bioswales, retention basins, wetlands, rain gardens, and urban green spaces to reduce flooding, augment surface water supplies, recharge groundwater, and improve water quality. It is increasingly clear that green stormwater infrastructure not only controls stormwater volume and timing, but also promotes ecosystem services, which are the benefits that ecosystems provide to humans. Yet there has been little synthesis focused on understanding how green stormwater management affects ecosystem services. The objectives of this paper are to review and synthesize published literature on ecosystem services and green stormwater infrastructure and identify gaps in research and understanding, establishing a foundation for research at the intersection of ecosystems services and green stormwater management. We reviewed 170 publications on stormwater management and ecosystem services, and summarized the state-of-the-science categorized by the four types of ecosystem services. Major findings show that: (1) most research was conducted at the parcel-scale and should expand to larger scales to more closely understand green stormwater infrastructure impacts, (2) nearly a third of papers developed frameworks for implementing green stormwater infrastructure and highlighted barriers, (3) papers discussed ecosystem services, but less than 40% quantified ecosystem services, (4) no geographic trends emerged, indicating interest in applying green stormwater infrastructure across different contexts, (5) studies increasingly integrate engineering, physical science, and social science approaches for holistic understanding, and (6) standardizing green stormwater infrastructure terminology would provide a more cohesive field of study than the diverse and often redundant terminology currently in use. 
We recommend that future research provide metrics and quantify ecosystem services, integrate disciplines to measure ecosystem services from green stormwater infrastructure, and better incorporate stormwater management into environmental policy. Our conclusions outline promising future research directions at the intersection of stormwater management and ecosystem services.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaver, Justin M; Borges, Raymond Charles; Buckner, Mark A

    Critical infrastructure Supervisory Control and Data Acquisition (SCADA) systems were designed to operate on closed, proprietary networks where a malicious insider posed the greatest threat potential. The centralization of control and the movement towards open systems and standards has improved the efficiency of industrial control, but has also exposed legacy SCADA systems to security threats that they were not designed to mitigate. This work explores the viability of machine learning methods in detecting the new threat scenarios of command and data injection. Similar to network intrusion detection systems in the cyber security domain, the command and control communications in a critical infrastructure setting are monitored, and vetted against examples of benign and malicious command traffic, in order to identify potential attack events. Multiple learning methods are evaluated using a dataset of Remote Terminal Unit communications, which included both normal operations and instances of command and data injection attack scenarios.
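Vetting command traffic against examples of benign behavior, as described above, can be sketched with a toy distance-from-profile check. This is not the paper's method or dataset: the feature vectors, threshold rule, and register semantics below are invented for illustration.

```python
# Toy anomaly check over synthetic RTU command records: commands far
# from the centroid of observed benign traffic are flagged. Features
# (function_code, register_addr, payload_value) are invented.

import math

def centroid(rows):
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

benign = [(3, 100, 50), (3, 102, 55), (3, 101, 52), (3, 100, 48)]
profile = centroid(benign)
threshold = 3 * max(dist(r, profile) for r in benign)

def is_injection(cmd):
    """Flag commands far outside the benign traffic profile."""
    return dist(cmd, profile) > threshold

print(is_injection((3, 101, 51)))     # close to the benign profile
print(is_injection((16, 900, 9999)))  # unusual register and payload
```

Real detectors in this setting would use supervised learners over richer protocol features, but the monitor-profile-flag pipeline is the same shape.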

  13. Grid-based HPC astrophysical applications at INAF Catania.

    NASA Astrophysics Data System (ADS)

    Costa, A.; Calanducci, A.; Becciani, U.; Capuzzo Dolcetta, R.

    The research activity in the grid area at INAF Catania has been devoted to two main goals: the integration of a multiprocessor supercomputer (IBM SP4) within the INFN-GRID middleware, and the development of a web portal, Astrocomp-G, for the submission of astrophysical jobs to the grid infrastructure. Most of the actual grid implementation infrastructure is based on common hardware, i.e. i386-architecture machines (Intel Celeron, Pentium III, IV, AMD Duron, Athlon) running Linux RedHat OS. We were the first institute to integrate a totally different machine, an IBM SP with RISC architecture and AIX OS, as a powerful Worker Node inside a grid infrastructure. We identified and ported to AIX the grid components dealing with job monitoring and execution, and properly tuned the Computing Element to deliver jobs to this special Worker Node. For testing purposes we used MARA, an astrophysical application for the analysis of light curve sequences. Astrocomp-G is a user-friendly front end to our grid site. Users who want to submit the astrophysical applications already available in the portal need to own a valid personal X509 certificate in addition to a username and password released by the grid portal webmaster. The personal X509 certificate is a prerequisite for the creation of a short- or long-term proxy certificate that allows the grid infrastructure services to clearly identify whether the owner of the job has permission to use resources and data. X509 and proxy certificates are part of GSI (Grid Security Infrastructure), a standard security tool adopted by all major grid sites around the world.

  14. INFN-Pisa scientific computation environment (GRID, HPC and Interactive Analysis)

    NASA Astrophysics Data System (ADS)

    Arezzini, S.; Carboni, A.; Caruso, G.; Ciampa, A.; Coscetti, S.; Mazzoni, E.; Piras, S.

    2014-06-01

    The INFN-Pisa Tier2 infrastructure is described, optimized not only for GRID CPU and storage access but also for more interactive use of the resources, in order to provide good solutions for the final data-analysis step. The Data Center, equipped with about 6700 production cores, permits the use of modern analysis techniques realized via advanced statistical tools (like RooFit and RooStats) implemented on multicore systems. In particular, POSIX file storage access integrated with standard SRM access is provided. The unified storage infrastructure, based on GPFS and Xrootd and used both for the SRM data repository and for interactive POSIX access, is therefore described. This common infrastructure gives users transparent access to the Tier2 data for their interactive analysis. The organization of a specialized many-core CPU facility devoted to interactive analysis is also described, along with a login mechanism integrated with the INFN-AAI (national INFN infrastructure) to extend site access and use to a geographically distributed community. The infrastructure also serves as a national computing facility for the INFN theoretical community, enabling a synergic use of computing and storage resources. Our center, initially developed for the HEP community, is now growing and includes fully integrated HPC resources. In recent years a cluster facility (1000 cores, parallel use via InfiniBand connection) has been installed and managed, and we are now updating this facility to provide resources for all the intermediate-level HPC computing needs of the INFN theoretical national community.

  15. Considerations for the implementation and operation of stormwater control measure (SCM) performance monitoring systems

    EPA Science Inventory

    Green infrastructure (GI) studies are needed to make informed decisions about whether or not to select GI technologies over traditional urban drainage control methods and to assist in the timing of effective maintenance. Two permeable pavement infiltration stormwater control meas...

  16. Virtual Control Systems Environment (VCSE)

    ScienceCinema

    Atkins, Will

    2018-02-14

    Will Atkins, a Sandia National Laboratories computer engineer discusses cybersecurity research work for process control systems. Will explains his work on the Virtual Control Systems Environment project to develop a modeling and simulation framework of the U.S. electric grid in order to study and mitigate possible cyberattacks on infrastructure.

  17. Epos Working Group 10 Infrastructure for Georesources

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanisław; Kwiatek, Grzegorz

    2013-04-01

    Working Group 10 "Infrastructure for Georesources" deals primarily with induced seismicity (IS) infrastructure. Established during the EPOS Annual Meeting in Utrecht, November 2011, WG10 aims to integrate the research infrastructure in the area of seismicity induced by human activity: tremors and rockbursts in underground mines; seismicity associated with conventional and unconventional oil and gas production; seismicity induced by geothermal energy extraction and by underground reposition and storage of liquids (e.g. water disposal associated with energy extraction) and gases (CO2 sequestration, inter alia); and seismicity triggered by filling surface water reservoirs, etc. Until now, research in the area of IS has been organized around the inducing technologies rather than the physical problems common to these shallow seismic processes. This has hampered the integration of the IS research community and slowed research progress. WG10 intends to work out a first step towards changing the IS research perspective from the present technology-oriented one to a physical-problems-oriented one without, however, losing touch with the technological conditions of IS generation. This will be achieved through the integration of the IS Research Infrastructure (ISRI) and the creation of an Induced Seismicity Node within EPOS. The ISRI to be integrated has three components: data, software, and reports. The IS data consist of seismic data and auxiliary data (geological, displacement, geomechanical, geodetic, etc.) and, last but by no means least, technological data; research in the field of IS cannot do without this last data class. The IS software comprises common software tools for data handling and visualisation, standard and advanced software for research, and software based on newly proposed algorithms for tests and development. The IS reports are both peer-reviewed and unreviewed, and include an internet forum.
The IS Node will not only play a significant role in integrating the IS community and accelerating research; it will also help to develop synergy between the research community and industrial partners. WG10 is working out strategic solutions for integration and the core services to be provided by the future IS Node to European and other research groups, industrial partners, educational centers, and central and local administration bodies. A measurable benefit of the integrated ISRI will be the intensification of studies on the hazard and risk associated with anthropogenic seismicity and on methods of anthropogenic seismic risk mitigation. Best practices will be disseminated to industrial partners and relevant bodies of public administration. An information node for public use is also planned.

  18. A new vision of the post-NIST civil infrastructure program: the challenges of next-generation construction materials and processes

    NASA Astrophysics Data System (ADS)

    Wu, H. Felix; Wan, Yan

    2014-03-01

Our nation's infrastructural systems are crumbling, and the deterioration worsens over time. The physical aging of these vital facilities and the remediation of their current critical state pose a key societal challenge to the United States. Current sensing technologies, while well developed in controlled laboratory environments, have not yet yielded tools for producing real-time, in-situ data that are adequately comprehensible for infrastructure decision-makers. The need for advanced sensing technologies is national because every municipality and state in the nation faces infrastructure management challenges. The need is critical because portions of the infrastructure are reaching the end of their life-spans and there are few cost-effective means to monitor infrastructure integrity and to prioritize the renovation and replacement of infrastructure elements. New advanced sensing technologies that produce cost-effective inspection and real-time monitoring data, and that also aid meaningful interpretation of the acquired data, will therefore enhance public safety by issuing timely and accurate alerts that enable effective maintenance before disasters occur. They will also allow more informed management of infrastructure investments by avoiding premature replacement and by identifying structures in need of immediate action to prevent catastrophic failure. Infrastructure management requires that once a structural defect is detected, an economical and efficient repair be made. Advancing the technologies for repairing infrastructure elements that are in contact with water and road salt and subjected to thermal changes requires innovative research to significantly extend the service life of repairs, lower their costs, and provide repair technologies suitable for a wide range of conditions.
All of these technologies will increase the lifetime, security, and safety of critical elements of the Nation's already deteriorating civil infrastructure. It is envisioned that the Nation should look further still: not only should we efficiently and effectively address the current problems of aging infrastructure, but we must also develop next-generation construction materials and processes for new construction. To accomplish this ambitious goal, process efficiency must help select the most reliable and cost-effective materials in construction processes; performance and cost, assessed through life-cycle cost and materials performance, must be the prime considerations in selecting construction materials; energy efficiency must drive energy consumption down from current levels by 50% per unit of output; and environmental responsiveness must achieve net-zero waste from construction materials and their constituents. If successfully implemented, this vision will transform current infrastructure into 21st-century infrastructure systems that enable the vital functioning of society and improve the competitiveness of the economy, ensuring that our quality of life remains high.

  19. Data Updating Methods for Spatial Data Infrastructure that Maintain Infrastructure Quality and Enable its Sustainable Operation

    NASA Astrophysics Data System (ADS)

    Murakami, S.; Takemoto, T.; Ito, Y.

    2012-07-01

The Japanese government, local governments and businesses are working closely together to establish spatial data infrastructures in accordance with the Basic Act on the Advancement of Utilizing Geospatial Information (NSDI Act, established in August 2007). Spatial data infrastructures are urgently required not only to accelerate the computerization of public administration, but also to help the restoration and reconstruction of the areas struck by the Great East Japan Earthquake and future disaster prevention and reduction. Various guidelines have been formulated for constructing a spatial data infrastructure, but once an infrastructure is constructed, maintaining it becomes a problem. In one case, an organization updates its spatial data only once every several years because of budget constraints. Departments and sections update the data on their own without careful consideration, which upsets the quality control of the entire data system; the system then loses the integrity that is crucial to a spatial data infrastructure. To ensure quality, it would ideally be desirable to update the data for the entire area every year, but that is virtually impossible given the recent budget crunch. The method we suggest is to update only the spatial data items of higher importance, rather than updating all items across the board. We have explored a method of partially updating the data for two such features, roads and buildings, while ensuring locational accuracy. Using this method, data on roads and buildings, which change greatly over time, can be updated almost in real time, or at least within a year, which will help increase the availability of a spatial data infrastructure. We have conducted an experiment on the spatial data infrastructure of a municipality using those data and found that it is possible to update data for both features almost in real time.
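The importance-driven partial-update strategy described in this abstract can be sketched in Python. This is a minimal illustration; the feature layers, importance weights, change rates, and update budget are hypothetical, not values from the study:

```python
# Minimal sketch of importance-driven partial updating of a spatial data
# infrastructure: instead of refreshing every layer on a multi-year cycle,
# only high-importance, fast-changing layers are updated each cycle.
# Feature classes, scores, and the budget are illustrative assumptions.

FEATURE_LAYERS = {
    # layer: (importance 0-1, annual rate of change 0-1, update cost)
    "roads":         (0.9, 0.8, 40),
    "buildings":     (0.9, 0.7, 50),
    "contour_lines": (0.4, 0.05, 30),
    "land_use":      (0.6, 0.2, 35),
}

def plan_updates(layers, budget):
    """Pick layers to update this cycle, highest priority first."""
    ranked = sorted(layers.items(),
                    key=lambda kv: kv[1][0] * kv[1][1],  # importance x change rate
                    reverse=True)
    plan, spent = [], 0
    for name, (_imp, _rate, cost) in ranked:
        if spent + cost <= budget:
            plan.append(name)
            spent += cost
    return plan

print(plan_updates(FEATURE_LAYERS, budget=100))  # roads and buildings fit first
```

Slow-changing layers such as contours simply wait for a later, cheaper cycle, which is the trade-off the abstract argues keeps overall infrastructure quality acceptable under a fixed budget.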

  20. Effects of landscape-based green infrastructure on stormwater ...

    EPA Pesticide Factsheets

The development of impervious surfaces in urban and suburban catchments affects their hydrological behavior by decreasing infiltration, increasing peak hydrograph response following rainfall events, and ultimately increasing the total volume of water and mass of pollutants reaching streams. These changes have deleterious effects on downstream surface waters. Consequently, strategies to mitigate these impacts are now components of contemporary urban development and stormwater management. This study evaluates the effectiveness of landscape green infrastructure (GI) in reducing stormwater runoff volumes and controlling peak flows in four subdivision-scale suburban catchments (1.88–12.97 acres) in Montgomery County, MD, USA. Stormwater flow rates during runoff events were measured at five-minute intervals at each catchment outlet. One catchment was built with GI vegetated swales on all parcels with the goal of intercepting, conveying, and infiltrating stormwater before it enters the sewer network. The remaining catchments were constructed with traditional gray infrastructure and “end-of-pipe” best management practices (BMPs) that treat stormwater before it enters streams. This study compared the characteristics of rainfall-runoff events at the green and gray infrastructure sites to understand their effects on suburban hydrology. The landscape GI strategy generally reduced rainfall-runoff ratios compared to gray infrastructure because of increased infiltration.
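The rainfall-runoff ratio comparison at the center of this study can be illustrated with a short Python sketch. All flow samples, rainfall depths, and catchment areas below are invented for illustration and are not the study's data:

```python
# Sketch of a rainfall-runoff ratio: runoff volume integrated from
# 5-minute discharge measurements at the catchment outlet, divided by
# the rainfall volume delivered to the catchment during the event.

def runoff_volume(flow_cfs, interval_s=300):
    """Integrate discharge samples (cubic ft/s) over fixed intervals -> cubic ft."""
    return sum(q * interval_s for q in flow_cfs)

def rainfall_volume(depth_in, area_acres):
    """Rainfall depth (inches) over a catchment area (acres) -> cubic ft."""
    return (depth_in / 12.0) * area_acres * 43560.0  # 43,560 sq ft per acre

flows = [0.0, 0.4, 1.2, 0.9, 0.3, 0.1]   # one event, 5-minute samples
ratio = runoff_volume(flows) / rainfall_volume(1.0, 5.0)
print(round(ratio, 3))
```

A lower ratio at the GI site than at a gray-infrastructure site for comparable storms is the signature of increased infiltration that the abstract reports.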

  1. A National Strategy to Develop Pragmatic Clinical Trials Infrastructure

    PubMed Central

    Guise, Jeanne‐Marie; Dolor, Rowena J.; Meissner, Paul; Tunis, Sean; Krishnan, Jerry A.; Pace, Wilson D.; Saltz, Joel; Hersh, William R.; Michener, Lloyd; Carey, Timothy S.

    2014-01-01

An important challenge in comparative effectiveness research is the lack of infrastructure to support pragmatic clinical trials, which compare interventions in usual practice settings and subjects. These trials present challenges that differ from those of classical efficacy trials, which are conducted under ideal circumstances, in patients selected for their suitability, and with highly controlled protocols. In 2012, we launched a 1‐year learning network to identify high‐priority pragmatic clinical trials and to deploy research infrastructure through the NIH Clinical and Translational Science Awards Consortium that could be used to launch and sustain them. The network and infrastructure were initiated as a learning ground and shared resource for investigators and communities interested in developing pragmatic clinical trials. We followed a three‐stage process of developing the network, prioritizing proposed trials, and implementing learning exercises that culminated in a 1‐day network meeting at the end of the year. The year‐long project resulted in five recommendations related to developing the network, enhancing community engagement, addressing regulatory challenges, advancing information technology, and developing research methods. The recommendations can be implemented within 24 months and are designed to lead toward a sustained national infrastructure for pragmatic trials. PMID:24472114

  2. Network Computing Infrastructure to Share Tools and Data in Global Nuclear Energy Partnership

    NASA Astrophysics Data System (ADS)

    Kim, Guehee; Suzuki, Yoshio; Teshima, Naoya

CCSE/JAEA (Center for Computational Science and e-Systems/Japan Atomic Energy Agency) integrated a prototype system of a network computing infrastructure for sharing tools and data to support the U.S.-Japan collaboration in GNEP (Global Nuclear Energy Partnership). We focused on three technical issues in applying our information process infrastructure: accessibility, security, and usability. In designing the prototype system, we integrated and improved both network and Web technologies. For accessibility, we adopted SSL-VPN (Secure Sockets Layer-Virtual Private Network) technology for access across firewalls. For security, we developed an authentication gateway based on the PKI (Public Key Infrastructure) authentication mechanism, set a fine-grained access control policy for shared tools and data, and used a shared-key encryption method to protect tools and data against leakage to third parties. For usability, we chose Web browsers as the user interface and developed a Web application providing functions to support the sharing of tools and data. Using WebDAV (Web-based Distributed Authoring and Versioning), users can manipulate shared tools and data through a Windows-like folder environment. We implemented the prototype system on the Grid infrastructure for atomic energy research, AEGIS (Atomic Energy Grid Infrastructure), developed by CCSE/JAEA. The prototype system was applied for trial use in the first period of GNEP.
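As a rough illustration of the WebDAV mechanism the prototype relies on, the sketch below constructs (but does not send) a standard WebDAV PROPFIND request, the operation a client uses to list a shared folder. The endpoint, credentials, and paths a real client would use are outside this sketch:

```python
# Sketch of how a WebDAV client asks a server to enumerate a shared
# collection: a PROPFIND request with a Depth header and an XML body
# requesting all properties (RFC 4918). Only constructed here, not sent.

from xml.etree import ElementTree as ET

DAV_NS = "DAV:"

def build_propfind(depth=1):
    """Build a WebDAV PROPFIND request as (method, headers, XML body)."""
    root = ET.Element(f"{{{DAV_NS}}}propfind")
    ET.SubElement(root, f"{{{DAV_NS}}}allprop")   # request all properties
    body = ET.tostring(root, encoding="unicode")
    headers = {"Depth": str(depth), "Content-Type": "application/xml"}
    return "PROPFIND", headers, body

method, headers, body = build_propfind()
print(method, headers["Depth"])
```

Depth 1 lists the immediate members of a collection, which is what gives users the "Windows-like folder" view the abstract describes.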

  3. SLURM: Simple Linux Utility for Resource Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jette, M; Dunlap, C; Garlick, J

    2002-04-24

Simple Linux Utility for Resource Management (SLURM) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for Linux clusters of thousands of nodes. Components include machine status, partition management, job management, and scheduling modules. The design also includes a scalable, general-purpose communication infrastructure. Development will take place in four phases: Phase I results in a solid infrastructure; Phase II produces a functional but limited interactive job initiation capability without use of the interconnect/switch; Phase III provides switch support and documentation; Phase IV provides job status, fault tolerance, and job queuing and control through Livermore's Distributed Production Control System (DPCS), a meta-batch and resource management system.
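A minimal Python sketch of how a job is described to a SLURM-style scheduler: a batch script with #SBATCH directives, which would normally be handed to the real `sbatch` command. The directives are standard SLURM; the partition and resource values are illustrative assumptions:

```python
# Sketch of composing a SLURM batch script. The #SBATCH directives are
# standard; job name, task count, walltime, and partition are examples.

def make_batch_script(job_name, ntasks, walltime, partition, command):
    """Compose a SLURM batch script with common #SBATCH directives."""
    return "\n".join([
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --ntasks={ntasks}",
        f"#SBATCH --time={walltime}",
        f"#SBATCH --partition={partition}",
        command,
    ])

script = make_batch_script("demo", 4, "00:10:00", "debug", "srun hostname")
print(script.splitlines()[1])  # -> #SBATCH --job-name=demo
```

On a real cluster, the resulting file would be submitted with `sbatch script.sh`, after which SLURM's partition and job management components take over as the abstract outlines.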

  4. Communications: Critical Infrastructure and Key Resources Sector-Specific Plan as Input to the National Infrastructure Protection Plan

    DTIC Science & Technology

    2007-05-01

Commission maintains an expert staff of engineers and statisticians to analyze this data in an attempt to reveal troublesome trends in network reliability...

  5. Water Infrastructure Needs and Investment: Review and Analysis of Key Issues

    DTIC Science & Technology

    2008-11-24

    the Rural Development Act of 1972, as amended (7 U.S.C. § 1926). The purpose of these USDA programs is to provide basic amenities, alleviate health...nonregulatory costs (e.g., routine replacement of basic infrastructure).12 Wastewater Needs. The most recent wastewater survey, conducted in 2004 and issued...1.6 billion just to implement the most basic steps needed to improve security (such as better controlling access to facilities with fences, locks

  6. Adapting Digital Libraries to Continual Evolution

    NASA Technical Reports Server (NTRS)

    Barkstrom, Bruce R.; Finch, Melinda; Ferebee, Michelle; Mackey, Calvin

    2002-01-01

    In this paper, we describe five investment streams (data storage infrastructure, knowledge management, data production control, data transport and security, and personnel skill mix) that need to be balanced against short-term operating demands in order to maximize the probability of long-term viability of a digital library. Because of the rapid pace of information technology change, a digital library cannot be a static institution. Rather, it has to become a flexible organization adapted to continuous evolution of its infrastructure.

  7. Transportation Systems: Critical Infrastructure and Key Resources Sector-Specific Plan as Input to the National Infrastructure Protection Plan

    DTIC Science & Technology

    2007-05-01

partners will be encouraged to use the assessment methodologies referenced above, or ISO 27001 and ISO 17799, which are intended to be used together...the Information Systems Audit and Control Association (ISACA), the International Organization for Standardization (ISO), and a number of other...programs are aligned with NCSD's goals for the IT sector and follow best practices developed by NIST and the ISO. The cyber protective programs

  8. 40 CFR 52.523 - Control strategy: Ozone

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 3 2013-07-01 2013-07-01 false Control strategy: Ozone 52.523 Section...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Florida § 52.523 Control strategy: Ozone (a) Disapproval. EPA is disapproving portions of Florida's infrastructure SIP for the 1997 8-hour ozone NAAQS regarding...

  9. 40 CFR 52.523 - Control strategy: Ozone

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 3 2014-07-01 2014-07-01 false Control strategy: Ozone 52.523 Section...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS Florida § 52.523 Control strategy: Ozone (a) Disapproval. EPA is disapproving portions of Florida's infrastructure SIP for the 1997 8-hour ozone NAAQS regarding...

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, George

VICE 2.0 is the second generation of the VICE financial model developed by the National Renewable Energy Laboratory for fleet managers to assess the financial soundness of converting their fleets to run on CNG. VICE 2.0 uses a number of variables for infrastructure and vehicles to estimate the business case for decision-makers when considering CNG as a vehicle fuel. Enhancements in version 2.0 include the ability to select the project type (vehicles and infrastructure, or vehicle acquisitions only) and to decouple vehicle acquisition from the infrastructure investment, so the two investments may be made independently. Outputs now include graphical presentations of investment cash flow, payback period (simple and discounted), petroleum displacement (annual and cumulative), and annual greenhouse gas reductions. Also, the Vehicle Data are now built around several common conventionally fueled (gasoline and diesel) fleet vehicles. Descriptions of the various model sections and available inputs follow. Each description includes default values for the base-case business model, which was created so economic sensitivities can be investigated by altering various project parameters one at a time.
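The payback-period outputs mentioned above rest on simple arithmetic that can be sketched as follows. The incremental investment, annual fuel savings, and discount rate are illustrative defaults, not VICE 2.0's actual inputs:

```python
# Sketch of simple vs. discounted payback, the two payback figures a
# fleet-conversion model like the one above reports. All inputs invented.

def simple_payback(investment, annual_savings):
    """Years to recover the investment, ignoring the time value of money."""
    return investment / annual_savings

def discounted_payback(investment, annual_savings, rate):
    """Whole years until cumulative discounted savings cover the investment."""
    balance, year = investment, 0
    while balance > 0:
        year += 1
        balance -= annual_savings / (1 + rate) ** year
        if year > 100:
            return None   # never pays back at this rate
    return year

print(simple_payback(120000, 30000))            # -> 4.0
print(discounted_payback(120000, 30000, 0.05))  # -> 5
```

Discounting pushes the payback out because later savings are worth less, which is why models present both figures to decision-makers.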

  11. Open | SpeedShop: An Open Source Infrastructure for Parallel Performance Analysis

    DOE PAGES

    Schulz, Martin; Galarowicz, Jim; Maghrak, Don; ...

    2008-01-01

Over the last decades a large number of performance tools have been developed to analyze and optimize high performance applications. Their acceptance by end users, however, has been slow: each tool alone is often limited in scope and comes with widely varying interfaces and workflow constraints, requiring different changes in the often complex build and execution infrastructure of the target application. We started the Open | SpeedShop project about 3 years ago to overcome these limitations and provide efficient, easy-to-apply, and integrated performance analysis for parallel systems. Open | SpeedShop has two different faces: it provides an interoperable tool set covering the most common analysis steps as well as a comprehensive plugin infrastructure for building new tools. In both cases, the tools can be deployed to large-scale parallel applications using DPCL/Dyninst for distributed binary instrumentation. Further, all tools developed within or on top of Open | SpeedShop are accessible through multiple fully equivalent interfaces, including an easy-to-use GUI as well as an interactive command line interface, reducing the usage threshold for those tools.

  12. The Earth System Grid Federation: An Open Infrastructure for Access to Distributed Geospatial Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ananthakrishnan, Rachana; Bell, Gavin; Cinquini, Luca

    2013-01-01

The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).

  13. The Earth System Grid Federation: An Open Infrastructure for Access to Distributed Geo-Spatial Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinquini, Luca; Crichton, Daniel; Miller, Neill

    2012-01-01

The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).

  14. A social science data-fusion tool and the Data Management through e-Social Science (DAMES) infrastructure.

    PubMed

    Warner, Guy C; Blum, Jesse M; Jones, Simon B; Lambert, Paul S; Turner, Kenneth J; Tan, Larry; Dawson, Alison S F; Bell, David N F

    2010-08-28

    The last two decades have seen substantially increased potential for quantitative social science research. This has been made possible by the significant expansion of publicly available social science datasets, the development of new analytical methodologies, such as microsimulation, and increases in computing power. These rich resources do, however, bring with them substantial challenges associated with organizing and using data. These processes are often referred to as 'data management'. The Data Management through e-Social Science (DAMES) project is working to support activities of data management for social science research. This paper describes the DAMES infrastructure, focusing on the data-fusion process that is central to the project approach. It covers: the background and requirements for provision of resources by DAMES; the use of grid technologies to provide easy-to-use tools and user front-ends for several common social science data-management tasks such as data fusion; the approach taken to solve problems related to data resources and metadata relevant to social science applications; and the implementation of the architecture that has been designed to achieve this infrastructure.
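The data-fusion step at the heart of the DAMES approach amounts to linking records across datasets on a common identifier. A toy Python sketch, with invented records and variable names, looks like this:

```python
# Sketch of social-science data fusion: person-level survey records are
# linked to household-level attributes via a shared household identifier.
# All records and field names below are invented for illustration.

households = {"h01": {"region": "Fife"}, "h02": {"region": "Stirling"}}
persons = [
    {"pid": "p1", "hid": "h01", "occ": "teacher"},
    {"pid": "p2", "hid": "h02", "occ": "nurse"},
]

def fuse(persons, households):
    """Attach household attributes to each person record."""
    return [{**p, **households[p["hid"]]} for p in persons]

print(fuse(persons, households)[0]["region"])  # -> Fife
```

Wrapping repetitive linkage tasks like this in shared, documented tools, rather than ad-hoc scripts, is the kind of data-management support the DAMES infrastructure aims to provide.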

  15. The Earth System Grid Federation : an Open Infrastructure for Access to Distributed Geospatial Data

    NASA Technical Reports Server (NTRS)

Cinquini, Luca; Crichton, Daniel; Mattmann, Chris; Harney, John; Shipman, Galen; Wang, Feiyi; Ananthakrishnan, Rachana; Miller, Neill; Denvil, Sebastian; Morgan, Mark

    2012-01-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).
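The federated search across independently administered peer nodes described in these ESGF records can be sketched as follows. The node names, record layout, and stubbed per-node "API" are hypothetical stand-ins for the real search service:

```python
# Sketch of federated search: every peer node answers the same search
# API, and a client merges the results, de-duplicating replicas by
# dataset id. Nodes, catalogs, and ids below are invented.

def search_node(node, query):
    """Stand-in for a per-node search API call."""
    catalog = {
        "node-a": [{"id": "cmip5.tas.r1", "node": "node-a"},
                   {"id": "obs4MIPs.pr",  "node": "node-a"}],
        "node-b": [{"id": "cmip5.tas.r1", "node": "node-b"},   # replica
                   {"id": "ana4MIPs.ua",  "node": "node-b"}],
    }
    return [r for r in catalog[node] if query in r["id"]]

def federated_search(nodes, query):
    """Query every peer and keep one entry per dataset id."""
    merged = {}
    for node in nodes:
        for record in search_node(node, query):
            merged.setdefault(record["id"], record)   # first copy wins
    return sorted(merged)

print(federated_search(["node-a", "node-b"], "MIPs"))
```

Because every node implements the same search API, a client can treat the whole federation as one catalog even though the nodes are run by different institutions, which is the interoperability point the abstracts emphasize.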

  16. Network Randomization and Dynamic Defense for Critical Infrastructure Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavez, Adrian R.; Martin, Mitchell Tyler; Hamlet, Jason

    2015-04-01

Critical Infrastructure control systems continue to foster predictable communication paths, static configurations, and unpatched systems that allow easy access to our nation's most critical assets. This makes them attractive targets for cyber intrusion. We seek to address these attack vectors by automatically randomizing network settings, randomizing applications on the end devices themselves, and dynamically defending these systems against active attacks. Applying these protective measures will convert control systems into moving targets that proactively defend themselves against attack. Sandia National Laboratories has led this effort by gathering operational and technical requirements from the Tennessee Valley Authority (TVA) and performing research and development to create a proof-of-concept solution. Our proof-of-concept has been tested in a laboratory environment with over 300 nodes. The vision of this project is to enhance control system security by converting existing control systems into moving targets and building these security measures into future systems while meeting the unique constraints that control systems face.
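The network-randomization idea can be illustrated with a toy Python sketch that remaps service ports each epoch so previously reconnoitered endpoints go stale. The service names, port range, and seed are illustrative and are not part of the Sandia implementation:

```python
# Toy sketch of a moving-target defense: each scheduling epoch, every
# service is assigned a fresh, distinct random port, so a mapping an
# attacker scouted earlier is no longer valid. All values invented.

import random

def randomize_mappings(services, port_range=(20000, 60000), seed=None):
    """Assign each service a fresh random port, all distinct."""
    rng = random.Random(seed)
    ports = rng.sample(range(*port_range), len(services))
    return dict(zip(services, ports))

# One "epoch" of the moving target: re-running with a new seed/schedule
# yields a different mapping, invalidating previously scouted ports.
mapping = randomize_mappings(["scada-hmi", "historian", "plc-gw"], seed=7)
print(sorted(mapping))  # -> ['historian', 'plc-gw', 'scada-hmi']
```

A real deployment must also redistribute the new mapping to legitimate peers atomically and within control-system timing constraints, which is the hard part the abstract alludes to.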

  17. Ecohydrology frameworks for green infrastructure design and ecosystem service provision

    NASA Astrophysics Data System (ADS)

    Pavao-Zuckerman, M.; Knerl, A.; Barron-Gafford, G.

    2014-12-01

Urbanization is a dominant form of landscape change that affects the structure and function of ecosystems and alters control points in biogeochemical and hydrologic cycles. Green infrastructure (GI) has been proposed as a solution to many urban environmental challenges and may be a way to manage biogeochemical control points. Despite this promise, there has been relatively limited empirical work to evaluate the efficacy of GI, the relationships between design and function, and the ability of GI to provide ecosystem services in cities. This work has been driven by the goals of adapting GI approaches to dryland cities and of harvesting rain and storm water to provide ecosystem services related to storm water management and urban heat island mitigation, as well as other co-benefits. We will present a modification of ecohydrologic theory for guiding the design and function of green infrastructure for dryland systems that highlights how GI functions in the context of the Trigger-Transfer-Reserve-Pulse (TTRP) dynamic framework. Here we also apply this TTRP framework to observations of established streetscape green infrastructure in Tucson, AZ, and to an experimental installation of green infrastructure basins on the campus of Biosphere 2 (Oracle, AZ), where we have been measuring plant performance and soil biogeochemical functions. We found variable sensitivity of microbial activity, soil respiration, N-mineralization, photosynthesis, and respiration, mediated both by elements of basin design (soil texture and composition, choice of surface mulches) and by antecedent precipitation inputs and soil moisture conditions. The adapted TTRP framework and field studies suggest that there are strong connections between design and function that have implications for stormwater management and ecosystem service provision in dryland cities.

  18. Structural Monitoring of Metro Infrastructure during Shield Tunneling Construction

    PubMed Central

    Ran, L.; Ye, X. W.; Ming, G.; Dong, X. B.

    2014-01-01

Shield tunneling construction of metro infrastructure continuously disturbs the soils. The ground surface is subjected to uplift or subsidence due to the deep excavation and the extrusion and consolidation of the soils. Monitoring carried out simultaneously with shield tunnel construction provides an effective reference for controlling the shield driving, so how to design and implement a safe, economical, and effective structural monitoring system for metro infrastructure is of great importance and necessity. This paper presents the general architecture of the shield construction of metro tunnels as well as the procedure of the artificial ground freezing construction of the metro-tunnel cross-passages. The design principles for metro infrastructure monitoring of the shield tunnel intervals in Hangzhou Metro Line 1 are introduced. The detailed monitoring items and the specified alarming indices for construction monitoring of the shield tunneling are addressed, and the measured settlement variations at different monitoring locations are also presented. PMID:25032238
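The alarming-index checks described above can be sketched as a simple threshold classifier. The warning and alarm thresholds and the monitoring-point readings below are illustrative, not the values used on Hangzhou Metro Line 1:

```python
# Sketch of settlement-based alarming: each monitoring point's cumulative
# settlement is compared against warning and alarm thresholds. All
# threshold values and readings are invented for illustration.

THRESHOLDS_MM = {"warning": 10.0, "alarm": 30.0}   # cumulative settlement

def classify(settlement_mm):
    """Map a cumulative settlement reading to a monitoring status."""
    if settlement_mm >= THRESHOLDS_MM["alarm"]:
        return "alarm"
    if settlement_mm >= THRESHOLDS_MM["warning"]:
        return "warning"
    return "normal"

readings = {"DB-03": 4.2, "DB-07": 12.5, "DB-11": 31.0}
print({pt: classify(s) for pt, s in readings.items()})
```

In practice such indices also cover settlement rates and differential settlement between adjacent points, but the same compare-against-threshold logic applies.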

  19. Building Thematic and Integrated Services for European Solid Earth Sciences: the EPOS Integrated Approach

    NASA Astrophysics Data System (ADS)

    Harrison, M.; Cocco, M.

    2017-12-01

EPOS (European Plate Observing System) has been designed with the vision of creating a pan-European infrastructure for solid Earth science to support a safe and sustainable society. In accordance with this scientific vision, the EPOS mission is to integrate the diverse and advanced European research infrastructures for solid Earth science, relying on new e-science opportunities to monitor and unravel the dynamic and complex Earth system. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth's physical and chemical processes that control earthquakes, volcanic eruptions, ground instability and tsunamis, as well as the processes driving tectonics and Earth surface dynamics. To accomplish its mission, EPOS is engaging different stakeholders to allow the Earth sciences to open new horizons in our understanding of the planet. EPOS also aims to contribute to preparing society for geo-hazards and to responsibly managing the exploitation of geo-resources. Through the integration of data, models and facilities, EPOS will allow the Earth science community to make a step change in developing new concepts and tools for key answers to scientific and socio-economic questions concerning geo-hazards and geo-resources, as well as Earth science applications to the environment and human welfare. The research infrastructures (RIs) that EPOS is coordinating include: i) distributed geophysical observing systems (seismological and geodetic networks); ii) local observatories (including geomagnetic, near-fault and volcano observatories); iii) analytical and experimental laboratories; iv) integrated satellite data and geological information services; v) new services for natural and anthropogenic hazards; vi) access to geo-energy test beds. Here we present the activities planned for the implementation phase, focusing on the Thematic Core Services (TCS) and the Integrated Core Services (ICS) and on their interoperability.
We will discuss the data, data products, software and services (DDSS) presently under implementation, which will be validated and tested during 2018. Particular attention in this talk will be given to connecting EPOS with similar global initiatives and to identifying common best practices and approaches.

  20. The Swedish Research Infrastructure for Ecosystem Science - SITES

    NASA Astrophysics Data System (ADS)

    Lindroth, A.; Ahlström, M.; Augner, M.; Erefur, C.; Jansson, G.; Steen Jensen, E.; Klemedtsson, L.; Langenheder, S.; Rosqvist, G. N.; Viklund, J.

    2017-12-01

    The vision of SITES is to promote long-term field-based ecosystem research at a world class level by offering an infrastructure with excellent technical and scientific support and services attracting both national and international researchers. In addition, SITES will make data freely and easily available through an advanced data portal which will add value to the research. During the first funding period, three innovative joint integrating facilities were established through a researcher-driven procedure: SITES Water, SITES Spectral, and SITES AquaNet. These new facilities make it possible to study terrestrial and limnic ecosystem processes across a range of ecosystem types and climatic gradients, with common protocols and similar equipment. In addition, user-driven development at the nine individual stations has resulted in e.g. design of a long-term agricultural systems experiment, and installation of weather stations, flux systems, etc. at various stations. SITES, with its integrative approach and broad coverage of climate and ecosystem types across Sweden, constitutes an excellent platform for state-of-the-art research projects. SITES' support the development of: A better understanding of the way in which key ecosystems function and interact with each other at the landscape level and with the climate system in terms of mass and energy exchanges. A better understanding of the role of different organisms in controlling different processes and ultimately the functioning of ecosystems. New strategies for forest management to better meet the many and varied requirements from nature conservation, climate and wood, fibre, and energy supply points of view. Agricultural systems that better utilize resources and minimize adverse impacts on the environment. Collaboration with other similar infrastructures and networks is a high priority for SITES. 
This will enable us to make use of each other's experiences, harmonize metadata for easier exchange of data, and support each other to widen the user community.

  1. What metrology can do to improve the quality of your atmospheric ammonia measurements

    NASA Astrophysics Data System (ADS)

    Leuenberger, Daiana; Martin, Nicholas A.; Pascale, Céline; Guillevic, Myriam; Ackermann, Andreas; Ferracci, Valerio; Cassidy, Nathan; Hook, Josh; Battersby, Ross M.; Tang, Yuk S.; Stevens, Amy C. M.; Jones, Matthew R.; Braban, Christine F.; Gates, Linda; Hangartner, Markus; Sacco, Paolo; Pagani, Diego; Hoffnagle, John A.; Niederhauser, Bernhard

    2017-04-01

Measuring ammonia in ambient air is a sensitive and priority issue due to its harmful effects on human health and ecosystems. The European Directive 2001/81/EC on "National Emission Ceilings for Certain Atmospheric Pollutants (NEC)" regulates ammonia emissions in the member states. However, there is a lack of regulation to ensure reliable ammonia measurements, namely in applicable analytical technology, maximum allowed uncertainty, quality assurance and quality control (QA/QC) procedures, as well as in the infrastructure to attain metrological traceability, i.e. that the results of measurements are traceable to SI units through an unbroken chain of calibrations. In the framework of the European Metrology Research Programme (EMRP) project on the topic "Metrology for Ammonia in Ambient Air" (MetNH3), European national metrology institutes (NMIs) have joined forces to tackle the issue of generating SI-traceable reference material, i.e. reference gas mixtures containing known amount fractions of NH3. This requires special infrastructure and analytical techniques: measurements of ambient ammonia are commonly carried out with diffusive samplers or by active sampling with denuders, but such techniques have not yet been extensively validated. Improvements in metrological traceability may be achieved through the determination of NH3 diffusive sampling rates using ammonia Primary Standard Gas Mixtures (PSMs), developed by gravimetry at the National Physical Laboratory (NPL), and a controlled-atmosphere test facility in combination with on-line monitoring with a cavity ring-down spectrometer. The Federal Institute of Metrology METAS has developed an infrastructure to generate SI-traceable NH3 reference gas mixtures dynamically in the amount fraction range 0.5-500 nmol/mol (atmospheric concentrations) and with uncertainties U(NH3) < 3%.
The infrastructure consists of a stationary as well as a mobile device for full flexibility for calibrations in the laboratory and in the field. Both devices apply the method of temperature- and pressure-dependent permeation of a pure substance through a membrane into a stream of pre-purified matrix gas, with subsequent dilution to the required amount fractions. All relevant parameters are fully traceable to SI units. Extractive optical analysers can be connected directly to both the stationary and the mobile system for calibration. Moreover, the resulting gas mixture can also be pressurised into coated cylinders by cryo-filling. The mobile system as well as these cylinders can be used to calibrate optical instruments in other laboratories and in the field. In addition, an SI-traceable dilution system based on a cascade of critical orifices has been established to dilute NH3 mixtures in the order of μmol/mol stored in cylinders. It is planned to apply this system to calibrate and re-sample gas mixtures in cylinders due to its very economical gas use. Here we present insights into the development of this infrastructure and results of performance tests. Moreover, we include results of a study on adsorption/desorption effects in dry as well as humidified matrix gas in the discussion of the generation of reference gas mixtures. Acknowledgement: This work was supported by the European Metrology Research Programme (EMRP). The EMRP is jointly funded by the EMRP participating countries within EURAMET and the European Union.
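The dynamic generation principle described above (permeation followed by dilution) can be sketched numerically: the generated amount fraction is the molar release rate of the permeation tube divided by the molar flow of the matrix gas. The permeation rate and flow below are hypothetical values chosen only to land inside the stated 0.5-500 nmol/mol range, not METAS operating parameters:

```python
# Toy calculation of the NH3 amount fraction produced by a permeation
# device diluted into a matrix-gas stream. All numbers are hypothetical;
# the METAS infrastructure targets 0.5-500 nmol/mol.

R = 8.314462618      # molar gas constant, J/(mol K)
M_NH3 = 17.031       # molar mass of ammonia, g/mol

def amount_fraction_nmol_per_mol(perm_rate_ng_per_min,
                                 flow_l_per_min,
                                 pressure_pa=101325.0,
                                 temperature_k=293.15):
    """Trace-level approximation: x = (q / M) / n_matrix."""
    # molar release rate of the permeation tube, mol/min
    n_nh3 = perm_rate_ng_per_min * 1e-9 / M_NH3
    # molar flow of the matrix gas, from the ideal gas law, mol/min
    n_matrix = pressure_pa * flow_l_per_min * 1e-3 / (R * temperature_k)
    return n_nh3 / n_matrix * 1e9

# e.g. 250 ng/min permeation rate diluted into 2.0 L/min of matrix gas
x = amount_fraction_nmol_per_mol(250.0, 2.0)
print(round(x, 1))   # about 176.6 nmol/mol, inside the stated range
```

In practice the uncertainty budget would combine the gravimetrically determined permeation rate, the flow, pressure and temperature measurements, which is what makes the output SI-traceable.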

  2. The European seismological waveform framework EIDA

    NASA Astrophysics Data System (ADS)

    Trani, Luca; Koymans, Mathijs; Quinteros, Javier; Heinloo, Andres; Euchner, Fabian; Strollo, Angelo; Sleeman, Reinoud; Clinton, John; Stammler, Klaus; Danecek, Peter; Pedersen, Helle; Ionescu, Constantin; Pinar, Ali; Evangelidis, Christos

    2017-04-01

The ORFEUS1 European Integrated Data Archive (EIDA2) currently federates 11 major European seismological data centres into a common organisational and operational framework which offers: (a) transparent and uniform access tools, advanced services and products for seismological waveform data; (b) a platform for establishing common policies for the curation of seismological waveform data and the description of waveform data by standardised quality metrics; (c) proper attribution and citation (e.g. data ownership). Since its establishment in 2013, EIDA has been seamlessly collecting and distributing large amounts of seismological data and products to the research community and beyond. A major task of EIDA is the ongoing improvement of its services, tools and products portfolio in order to meet increasingly demanding user requirements. At present EIDA is entering a new operational phase and will become the reference infrastructure for seismological waveform data in the pan-European infrastructure for solid-Earth science: EPOS (European Plate Observing System)3. The EIDA Next Generation (EIDA NG) developments, initiated within the H2020 project EPOS-IP, will provide a new infrastructure that supports the seismological and multidisciplinary EPOS community, facilitating interoperability in a broader context. EIDA NG comprises a number of new services and products, e.g. the Routing Service, Authentication Service, WFCatalog, Mediator and Station Book, with more to come in the near future. In this contribution we present the current status of the EIDA NG developments and provide an overview of the usage of the new services and their impact on the user community. 1 www.orfeus-eu.org/ 2 www.orfeus-eu.org/eida/eida.html 3 www.epos-ip.org
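EIDA nodes expose their holdings through the standard FDSN web services, which is what makes the federated access uniform: a client only has to assemble a key-value query against a node's HTTP endpoint. A minimal sketch of such a station-metadata query, with an invented host name (real EIDA node endpoints are listed in the ORFEUS documentation), assuming the published FDSN fdsnws-station URL scheme:

```python
# Sketch: build an FDSN fdsnws-station query URL of the kind EIDA
# nodes serve. The host below is illustrative, not a real endpoint.
from urllib.parse import urlencode

def station_query(host, network, station, level="channel"):
    params = {
        "network": network,    # SEED network code, e.g. "GE"
        "station": station,    # station code, wildcards allowed
        "level": level,        # network / station / channel / response
        "format": "xml",       # FDSN StationXML
    }
    return f"https://{host}/fdsnws/station/1/query?" + urlencode(params)

url = station_query("eida.example.org", "GE", "APE")
print(url)
```

The EIDA NG Routing Service sits in front of queries like this one, telling the client which node actually holds the requested network so the federation stays transparent to the user.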

  3. Visualizing common operating picture of critical infrastructure

    NASA Astrophysics Data System (ADS)

    Rummukainen, Lauri; Oksama, Lauri; Timonen, Jussi; Vankka, Jouko

    2014-05-01

This paper presents a solution for visualizing the common operating picture (COP) of critical infrastructure (CI). The purpose is to improve the situational awareness (SA) of the strategic-level actor and the source system operator in order to support decision making. The information is obtained through the Situational Awareness of Critical Infrastructure and Networks (SACIN) framework. The system consists of an agent-based solution for gathering, storing, and analyzing the information, together with a user interface (UI), which is presented in this paper. The UI consists of multiple views visualizing information from the CI in different ways. The CI actors are categorized into 11 separate sectors, and events are used to represent meaningful incidents. Past and current states, together with geographical distribution and logical dependencies, are presented to the user. The current states are visualized as segmented circles representing event categories. The geographical distribution of assets is displayed with a well-known map tool. Logical dependencies are presented in a simple directed graph, and users also have a timeline to review past events. The objective of the UI is to provide an easily understandable overview of the CI status. Therefore, testing methods such as a walkthrough, an informal walkthrough, and the Situation Awareness Global Assessment Technique (SAGAT) were used in the evaluation of the UI. Results showed that users were able to gain an understanding of the current state of the CI, and the usability of the UI was rated as good. In particular, the designated display for the CI overview and the timeline were found to be efficient.

  4. An electronic infrastructure for research and treatment of the thalassemias and other hemoglobinopathies: the Euro-mediterranean ITHANET project.

    PubMed

    Lederer, Carsten W; Basak, A Nazli; Aydinok, Yesim; Christou, Soteroula; El-Beshlawy, Amal; Eleftheriou, Androulla; Fattoum, Slaheddine; Felice, Alex E; Fibach, Eitan; Galanello, Renzo; Gambari, Roberto; Gavrila, Lucian; Giordano, Piero C; Grosveld, Frank; Hassapopoulou, Helen; Hladka, Eva; Kanavakis, Emmanuel; Locatelli, Franco; Old, John; Patrinos, George P; Romeo, Giovanni; Taher, Ali; Traeger-Synodinos, Joanne; Vassiliou, Panayiotis; Villegas, Ana; Voskaridou, Ersi; Wajcman, Henri; Zafeiropoulos, Anastasios; Kleanthous, Marina

    2009-01-01

Hemoglobin (Hb) disorders are common, potentially lethal monogenic diseases, posing a global health challenge. With worldwide migration and intermixing of carriers demanding flexible health planning and patient care, hemoglobinopathies may serve as a paradigm for the use of electronic infrastructure tools in the collection of data, the dissemination of knowledge, the harmonization of treatment, and the coordination of research and preventive programs. ITHANET, a network covering thalassemias and other hemoglobinopathies, comprises 26 organizations from 16 countries, including non-European countries of origin for these diseases (Egypt, Israel, Lebanon, Tunisia and Turkey). Using electronic infrastructure tools, ITHANET aims to strengthen cross-border communication and data transfer, cooperative research and treatment of thalassemia, and to improve support and information for those affected by hemoglobinopathies. Moreover, the consortium has established the ITHANET Portal, a novel web-based instrument for the dissemination of information on hemoglobinopathies to researchers, clinicians and patients. The ITHANET Portal is a growing public resource, providing forums for discussion and research coordination, and giving access to courses and databases organized by ITHANET partners. Already a popular repository for diagnostic protocols and news related to hemoglobinopathies, the ITHANET Portal also provides a searchable, extendable database of thalassemia mutations and associated background information. The experience of ITHANET is exemplary for a consortium bringing together disparate organizations from heterogeneous partner countries to face a common health challenge. The ITHANET Portal, as a web-based tool born out of this experience, amends some of the problems encountered and facilitates education and international exchange of data and expertise for hemoglobinopathies.

  5. Lack of association between the P413L variant of chromogranin B and ALS risk or age at onset: a meta-analysis.

    PubMed

    Yang, Xinglong; Li, Shimei; Xing, Dongmei; Li, Peiyun; Li, Ci; Qi, Ling; Xu, Yanming; Ren, Hui

    2018-02-01

Amyotrophic lateral sclerosis (ALS), the most common motor neuron disease, is thought to result from the interaction of genetic and environmental risk factors. Whether the potentially functional exonic P413L variant in the chromogranin B gene influences ALS risk and age at onset is controversial. We meta-analysed studies assessing the association between the P413L variant and ALS risk or age at ALS onset indexed in the Web of Science, PubMed, Embase, Chinese National Knowledge Infrastructure, Wanfang, and SinoMed databases. Five case-control studies were analysed, involving 2639 patients with sporadic ALS, 201 with familial ALS and 3381 controls. No association was detected between risk of either ALS type and the CT + TT genotype or T allele of the P413L variant. Age at ALS onset was similar between carriers and non-carriers of the T allele. The available evidence suggests that the P413L variant of chromogranin B is not associated with ALS risk or age at ALS onset. These results should be validated in large, well-designed studies.
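The pooling step behind such a case-control meta-analysis can be sketched with a fixed-effect inverse-variance model on the per-study odds ratios. The 2x2 counts below are invented purely for illustration; they are not the counts from the five studies analysed in this abstract:

```python
# Fixed-effect inverse-variance pooling of odds ratios from 2x2 tables
# (cases/controls x T-allele carrier/non-carrier). Counts are hypothetical.
import math

# (cases_T, cases_nonT, controls_T, controls_nonT) per study
studies = [(30, 170, 35, 215),
           (25, 155, 28, 182)]

num = den = 0.0
for a, b, c, d in studies:
    log_or = math.log((a * d) / (b * c))   # per-study log odds ratio
    var = 1/a + 1/b + 1/c + 1/d            # Woolf's variance of the log OR
    w = 1 / var                            # inverse-variance weight
    num += w * log_or
    den += w

pooled_log_or = num / den
se = math.sqrt(1 / den)
pooled_or = math.exp(pooled_log_or)
ci = (math.exp(pooled_log_or - 1.96 * se),
      math.exp(pooled_log_or + 1.96 * se))
print(round(pooled_or, 2), [round(x, 2) for x in ci])
# A 95% CI straddling 1 corresponds to "no association detected".
```

A real analysis would also test between-study heterogeneity (e.g. Cochran's Q) before committing to the fixed-effect model.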

  6. The Materials Commons: A Collaboration Platform and Information Repository for the Global Materials Community

    NASA Astrophysics Data System (ADS)

Puchala, Brian; Tarcea, Glenn; Marquis, Emmanuelle A.; Hedstrom, Margaret; Jagadish, H. V.; Allison, John E.

    2016-08-01

    Accelerating the pace of materials discovery and development requires new approaches and means of collaborating and sharing information. To address this need, we are developing the Materials Commons, a collaboration platform and information repository for use by the structural materials community. The Materials Commons has been designed to be a continuous, seamless part of the scientific workflow process. Researchers upload the results of experiments and computations as they are performed, automatically where possible, along with the provenance information describing the experimental and computational processes. The Materials Commons website provides an easy-to-use interface for uploading and downloading data and data provenance, as well as for searching and sharing data. This paper provides an overview of the Materials Commons. Concepts are also outlined for integrating the Materials Commons with the broader Materials Information Infrastructure that is evolving to support the Materials Genome Initiative.

  7. Integrating sea floor observatory data: the EMSO data infrastructure

    NASA Astrophysics Data System (ADS)

    Huber, Robert; Azzarone, Adriano; Carval, Thierry; Doumaz, Fawzi; Giovanetti, Gabriele; Marinaro, Giuditta; Rolin, Jean-Francois; Beranzoli, Laura; Waldmann, Christoph

    2013-04-01

The European research infrastructure EMSO is a European network of fixed-point, deep-seafloor and water-column observatories deployed at key sites of the European continental margin and the Arctic. It aims to provide the technological and scientific framework for the investigation of environmental processes related to the interaction between the geosphere, biosphere, and hydrosphere, and for sustainable management through long-term monitoring, including real-time data transmission. EMSO has been on the ESFRI (European Strategy Forum on Research Infrastructures) roadmap since 2006 and entered its construction phase in 2012. Within this framework, EMSO is contributing to large infrastructure integration projects such as ENVRI and COOPEUS. The EMSO infrastructure is geographically distributed across key sites in European waters, spanning from the Arctic, through the Atlantic and Mediterranean Sea, to the Black Sea. It presently consists of thirteen sites which have been identified by the scientific community according to their importance with respect to marine ecosystems, climate change and marine geohazards. The data infrastructure for EMSO is being designed as a distributed system. Presently, EMSO data collected during experiments at each EMSO site are locally stored and organized in catalogues or relational databases run by the responsible regional EMSO nodes. Three major institutions and their data centers currently offer access to EMSO data: PANGAEA, INGV and IFREMER. In continuation of the IT activities performed during EMSO's twin project ESONET, EMSO is now implementing the ESONET data architecture within an operational EMSO data infrastructure. EMSO aims to be compliant with relevant marine initiatives such as MyOceans, EUROSITES, EuroARGO, SEADATANET and EMODNET, as well as to meet the requirements of international and interdisciplinary projects such as COOPEUS, ENVRI, EUDAT and iCORDI.
A major focus is therefore set on standardization and interoperability of the EMSO data infrastructure. Besides common standards for metadata exchange such as OpenSearch and OAI-PMH, EMSO has chosen to implement core Open Geospatial Consortium (OGC) standards, including the Catalogue Service for the Web (CS-W) and, from the Sensor Web Enablement (SWE) suite, the Sensor Observation Service (SOS) and Observations and Measurements (O&M). Furthermore, strong integration efforts are currently being undertaken to harmonize data formats, e.g. NetCDF, as well as the ontologies and terminologies used. The presentation will also inform users about the discovery and visualization procedures for the EMSO data presently available.
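Of the metadata-exchange standards mentioned, OAI-PMH is the simplest to illustrate: a harvester issues plain HTTP GET requests built from a small set of verbs, so federated catalogues can be kept in sync with very little machinery. The base URL and set name below are hypothetical; the actual OAI-PMH endpoints of the EMSO data centres are published in their own documentation:

```python
# Sketch of an OAI-PMH ListRecords harvesting request, as used for
# metadata exchange between data centres. Base URL and set are invented.
from urllib.parse import urlencode

def list_records(base_url, metadata_prefix="oai_dc", oai_set=None):
    """Build a ListRecords request; oai_dc (Dublin Core) is mandatory
    for every OAI-PMH repository, richer formats are optional."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if oai_set:
        params["set"] = oai_set   # optional selective harvesting
    return base_url + "?" + urlencode(params)

url = list_records("https://oai.example.org/oai", oai_set="EMSO")
print(url)
```

A harvester would fetch this URL, parse the XML response, and follow the resumptionToken the protocol returns for paging through large result sets.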

  8. The data access infrastructure of the Wadden Sea Long Term Ecosystem Research (WaLTER) project

    NASA Astrophysics Data System (ADS)

    De Bruin, T.

    2011-12-01

The Wadden Sea, north of the Netherlands, Germany and Denmark, is one of the most important tidal areas in the world. In 2009, the Wadden Sea was added to the UNESCO World Heritage list. The area is noted for its ecological diversity and value, being a stopover for large numbers of migrating birds. The Wadden Sea is also used intensively for economic activities by inhabitants of the surrounding coasts and islands, as well as by the many tourists visiting the area every year. A whole series of monitoring programmes is carried out by a range of governmental bodies and institutes to study the natural processes occurring in the Wadden Sea ecosystems as well as the influence of human activities on those ecosystems. Yet the monitoring programmes are scattered, and it is difficult to get an overview of the monitoring activities or to gain access to the resulting data. The Wadden Sea Long Term Ecosystem Research (WaLTER) project aims: 1. To provide a base set of consistent, standardized, long-term data on changes in the Wadden Sea ecological and socio-economic system in order to model and understand interrelationships with human use, climate variation and possible other drivers. 2. To provide a research infrastructure, open access to commonly shared databases, educational facilities and one or more field sites in which experimental, innovative and process-driven research can be carried out. This presentation will introduce the WaLTER project and explain its rationale. The presentation will focus on the data access infrastructure that will be used for WaLTER. This infrastructure is part of the existing and operational infrastructure of the National Oceanographic Data Committee (NODC) in the Netherlands. The NODC forms the Dutch node in the European SeaDataNet consortium, which has built a European distributed data access infrastructure.
WaLTER, NODC and SeaDataNet all use the same technology, developed within the SeaDataNet project, resulting in a high level of standardization across Europe. Benefits and pitfalls of using this infrastructure will be addressed.

  9. The Impact of Water, Sanitation and Hygiene Interventions to Control Cholera: A Systematic Review.

    PubMed

    Taylor, Dawn L; Kahawita, Tanya M; Cairncross, Sandy; Ensink, Jeroen H J

    2015-01-01

    Cholera remains a significant threat to global public health with an estimated 100,000 deaths per year. Water, sanitation and hygiene (WASH) interventions are frequently employed to control outbreaks though evidence regarding their effectiveness is often missing. This paper presents a systematic literature review investigating the function, use and impact of WASH interventions implemented to control cholera. The review yielded eighteen studies and of the five studies reporting on health impact, four reported outcomes associated with water treatment at the point of use, and one with the provision of improved water and sanitation infrastructure. Furthermore, whilst the reporting of function and use of interventions has become more common in recent publications, the quality of studies remains low. The majority of papers (>60%) described water quality interventions, with those at the water source focussing on ineffective chlorination of wells, and the remaining being applied at the point of use. Interventions such as filtration, solar disinfection and distribution of chlorine products were implemented but their limitations regarding the need for adherence and correct use were not fully considered. Hand washing and hygiene interventions address several transmission routes but only 22% of the studies attempted to evaluate them and mainly focussed on improving knowledge and uptake of messages but not necessarily translating this into safer practices. The use and maintenance of safe water storage containers was only evaluated once, under-estimating the considerable potential for contamination between collection and use. This problem was confirmed in another study evaluating methods of container disinfection. One study investigated uptake of household disinfection kits which were accepted by the target population. 
A single study in an endemic setting compared a combination of interventions to improve water and sanitation infrastructure, and the resulting reductions in cholera incidence. This review highlights a focus on particular routes of transmission, and the limited number of interventions tested during outbreaks. There is a distinct gap in knowledge of which interventions are most appropriate for a given context and as such a clear need for more robust impact studies evaluating a wider array of WASH interventions, in order to ensure effective cholera control and the best use of limited resources.

  10. Design for Connecting Spatial Data Infrastructures with Sensor Web (sensdi)

    NASA Astrophysics Data System (ADS)

    Bhattacharya, D.; M., M.

    2016-06-01

Integrating Sensor Web with Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and to implement test cases with sensor data and SDI. The research seeks to harness the sensed environment by utilizing domain-specific sensor data to create a generalized sensor web framework. The challenges are semantic enablement for Spatial Data Infrastructures and connecting the interfaces of the SDI with the interfaces of the Sensor Web. The proposed research plan is to identify sensor data sources, set up an open-source SDI, match the APIs and functions between the Sensor Web and the SDI, and carry out case studies such as hazard and urban applications. We take up cooperative development of SDI best practices to enable a new realm of a location-enabled and semantically enriched World Wide Web - the "Geospatial Web" or "Geosemantic Web" - by setting up a one-to-one correspondence between WMS, WFS, WCS and Metadata on one side and, on the other, the Sensor Observation Service (SOS); the Sensor Planning Service (SPS); the Sensor Alert Service (SAS); and a service that facilitates asynchronous message interchange between users and services, and between two OGC-SWE services, called the Web Notification Service (WNS). In conclusion, it is important for geospatial studies to integrate SDIs with the Sensor Web. The integration can be done by merging the common OGC interfaces of the SDI and the Sensor Web. Multi-usability studies to validate the integration have to be undertaken as future research.
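What makes the interface-matching above tractable is that the SWE services share the same OGC key-value-pair request style as the classic SDI services. A hypothetical SOS 2.0 GetObservation request can be assembled as below; the endpoint, offering and observed property are invented for illustration, while the parameter names follow the published SOS 2.0 KVP binding:

```python
# Illustrative OGC key-value-pair request for a Sensor Observation
# Service (SOS) 2.0 endpoint. Endpoint and identifiers are invented.
from urllib.parse import urlencode

def sos_get_observation(endpoint, offering, observed_property):
    params = {
        "service": "SOS",            # same pattern as WMS/WFS/WCS requests
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
    }
    return endpoint + "?" + urlencode(params)

url = sos_get_observation("https://sos.example.org/service",
                          "urn:offering:flood-gauges",
                          "WaterLevel")
print(url)
```

Because a WFS GetFeature request has the same service/version/request shape, an SDI client can be taught to speak to an SOS with only a thin adapter layer, which is the crux of the proposed SDI-Sensor Web merge.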

  11. Report of the Interagency Optical Network Testbeds Workshop 2, NASA Ames Research Center, September 12-14, 2005

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The Optical Network Testbeds Workshop 2 (ONT2), held on September 12-14, 2005, was cosponsored by the Department of Energy Office of Science (DOE/SC) and the National Aeronautics and Space Administration (NASA), in cooperation with the Joint Engineering Team (JET) of the Federal Networking and Information Technology Research and Development (NITRD) Program's Large Scale Networking (LSN) Coordinating Group. The ONT2 workshop was a follow-on to an August 2004 Workshop on Optical Network Testbeds (ONT1). ONT1 recommended actions by the Federal agencies to assure timely development and implementation of optical networking technologies and infrastructure. Hosted by the NASA Ames Research Center in Mountain View, California, the ONT2 workshop brought together representatives of the U.S. advanced research and education (R&E) networks, regional optical networks (RONs), service providers, international networking organizations, and senior engineering and R&D managers from Federal agencies and national research laboratories. Its purpose was to develop a common vision of the optical network technologies, services, infrastructure, and organizations needed to enable widespread use of optical networks; recommend activities for transitioning the optical networking research community and its current infrastructure to leading-edge optical networks over the next three to five years; and present information enabling commercial network infrastructure providers to plan for and use leading-edge optical network services in that time frame.

  12. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.

    2014-06-01

In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs such as EC2 and by using mainstream contextualization tools such as CloudInit.
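The rolling-update pattern mentioned above reduces, in essence, to replacing virtual machines a few at a time so that most of the farm stays in service. The toy loop below is illustrative only, not the INFN-Torino tooling; in a real cloud the "reimage" step would drain the node and instantiate a fresh VM from the updated image:

```python
# Toy rolling-update loop: re-image worker VMs in small batches so that
# at most batch_size machines are out of service at any moment.

def rolling_update(vms, batch_size=2, reimage=lambda vm: vm + "-v2"):
    """Return (updated VM list, max VMs simultaneously down)."""
    updated = list(vms)
    max_down = 0
    for start in range(0, len(vms), batch_size):
        batch = range(start, min(start + batch_size, len(vms)))
        max_down = max(max_down, len(batch))   # VMs out of service now
        for i in batch:
            updated[i] = reimage(updated[i])   # drain + replace image
    return updated, max_down

vms = [f"worker{i:02d}" for i in range(5)]
new_vms, max_down = rolling_update(vms)
print(new_vms, max_down)
```

With virtual images and contextualization, the per-VM replacement is cheap, which is why the update can proceed batch by batch instead of taking the whole farm down.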

  13. The size effect in corrosion greatly influences the predicted life span of concrete infrastructures.

    PubMed

    Angst, Ueli M; Elsener, Bernhard

    2017-08-01

Forecasting the life of concrete infrastructures in corrosive environments presents a long-standing and socially relevant challenge in science and engineering. Chloride-induced corrosion of reinforcing steel in concrete is the main cause for premature degradation of concrete infrastructures worldwide. Since the middle of the past century, this challenge has been tackled by using a conceptual approach relying on a threshold chloride concentration for corrosion initiation (Ccrit). All state-of-the-art models for forecasting chloride-induced steel corrosion in concrete are based on this concept. We present an experiment that shows that Ccrit depends strongly on the exposed steel surface area. The smaller the tested specimen is, the higher and the more variable Ccrit becomes. This size effect in the ability of reinforced concrete to withstand corrosion can be explained by the local conditions at the steel-concrete interface, which exhibit pronounced spatial variability. The size effect has major implications for the future use of the common concept of Ccrit. It questions the applicability of laboratory results to engineering structures and the reproducibility of typically small-scale laboratory testing. Finally, we show that the weakest link theory is suitable to transform Ccrit from small to large dimensions, which lays the basis for taking the size effect into account in the science and engineering of forecasting the durability of infrastructures.
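The weakest-link transformation the abstract alludes to can be written down directly: if F_l(c) is the probability that a specimen with exposed steel length l has initiated corrosion at chloride content c, a structure of length L fails at its weakest element, so F_L(c) = 1 - (1 - F_l(c))^(L/l). The sketch below assumes a Weibull distribution for the laboratory-scale Ccrit with invented parameters, purely to show the direction of the size effect:

```python
# Weakest-link scaling of the critical chloride content Ccrit:
# F_L(c) = 1 - (1 - F_l(c)) ** (L / l). A larger exposed steel surface
# gives a lower, less variable Ccrit. Parameters are hypothetical.
import math

SHAPE, SCALE = 4.0, 1.0   # assumed lab-scale Weibull parameters (arbitrary units)

def cdf_lab(c):
    """Weibull CDF of Ccrit for a small laboratory specimen."""
    return 1.0 - math.exp(-((c / SCALE) ** SHAPE))

def cdf_structure(c, size_ratio):
    """Ccrit CDF for an element size_ratio times the lab specimen."""
    return 1.0 - (1.0 - cdf_lab(c)) ** size_ratio

def median(cdf, lo=0.0, hi=10.0, tol=1e-9):
    """Bisection for the chloride content with cdf(c) = 0.5."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if cdf(mid) < 0.5 else (lo, mid)
    return lo

m_lab = median(cdf_lab)
m_big = median(lambda c: cdf_structure(c, 100.0))
print(round(m_lab, 3), round(m_big, 3))   # median Ccrit drops with size
```

Under this model the median Ccrit of an element 100 times the laboratory size falls well below the laboratory median, which is exactly why small-scale test results cannot be applied to engineering structures without the transformation.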

  14. The size effect in corrosion greatly influences the predicted life span of concrete infrastructures

    PubMed Central

    Angst, Ueli M.; Elsener, Bernhard

    2017-01-01

    Forecasting the life of concrete infrastructures in corrosive environments presents a long-standing and socially relevant challenge in science and engineering. Chloride-induced corrosion of reinforcing steel in concrete is the main cause for premature degradation of concrete infrastructures worldwide. Since the middle of the past century, this challenge has been tackled by using a conceptual approach relying on a threshold chloride concentration for corrosion initiation (Ccrit). All state-of-the-art models for forecasting chloride-induced steel corrosion in concrete are based on this concept. We present an experiment that shows that Ccrit depends strongly on the exposed steel surface area. The smaller the tested specimen is, the higher and the more variable Ccrit becomes. This size effect in the ability of reinforced concrete to withstand corrosion can be explained by the local conditions at the steel-concrete interface, which exhibit pronounced spatial variability. The size effect has major implications for the future use of the common concept of Ccrit. It questions the applicability of laboratory results to engineering structures and the reproducibility of typically small-scale laboratory testing. Finally, we show that the weakest link theory is suitable to transform Ccrit from small to large dimensions, which lays the basis for taking the size effect into account in the science and engineering of forecasting the durability of infrastructures. PMID:28782038

  15. EUPOS - Satellite multifunctional system of reference stations in Central and Eastern Europe

    NASA Astrophysics Data System (ADS)

    Sledzinski, J.

    2003-04-01

The European project EUPOS (European Position Determination System), establishing a system of multifunctional satellite reference stations in Central and Eastern Europe, is described in the paper. Fifteen countries intend to participate in the project: Bulgaria, Croatia, Czech Republic, Estonia, Germany, Hungary, Latvia, Lithuania, Macedonia, Poland, Romania, Russia, Serbia, Slovak Republic and Slovenia. One common project will be prepared for all countries; however, it will incorporate the existing or developing infrastructure of the particular countries. The experience of establishing and operating the German network SAPOS, as well as experience gained by other countries, will be used. The European network of stations will be compatible with the SAPOS system and the future European system Galileo. The network of reference stations will provide signals both for the positioning of geodetic control points and for land, air and marine navigation. Several levels of positioning accuracy will be delivered.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jonathan Gray; Robert Anderson; Julio G. Rodriguez

Abstract: Identifying and understanding digital instrumentation and control (I&C) cyber vulnerabilities within nuclear power plants and other nuclear facilities is critical if nation states desire to operate nuclear facilities safely, reliably, and securely. In order to demonstrate objective evidence that cyber vulnerabilities have been adequately identified and mitigated, a testbed representing a facility's critical nuclear equipment must be replicated. Idaho National Laboratory (INL) has built and operated similar testbeds for common critical infrastructure I&C for over ten years. This experience developing, operating, and maintaining an I&C testbed in support of research identifying cyber vulnerabilities has led the Korea Atomic Energy Research Institute (KAERI) of the Republic of Korea to solicit the experiences of INL to help mitigate problems early in the design, development, operation, and maintenance of a similar testbed. The following information will discuss I&C testbed lessons learned and the impact of these experiences for KAERI.

  17. Continuity planning for workplace infectious diseases.

    PubMed

    Welch, Nancy; Miller, Pamela Blair; Engle, Lisa

    2016-01-01

    Traditionally, business continuity plans prepare for worst-case scenarios; people plan for the exception rather than the common. Plans focus on infrastructure damage and recovery wrought by such disasters as hurricanes, terrorist events or tornadoes. Yet, another very real threat looms present every day, every season and can strike without warning, wreaking havoc on the major asset -- human capital. Each year, millions of dollars are lost in productivity, healthcare costs, absenteeism and services due to infectious, communicable diseases. Sound preventive risk management and recovery strategies can avert this annual decimation of staff and ensure continuous business operation. This paper will present a strong economic justification for the recognition, prevention and mitigation of communicable diseases as a routine part of continuity planning for every business. Recommendations will also be provided for environmental/engineering controls as well as personnel policies that address employee and customer protection, supply chain contacts and potential legal issues.

  18. The caBIG Terminology Review Process

    PubMed Central

    Cimino, James J.; Hayamizu, Terry F.; Bodenreider, Olivier; Davis, Brian; Stafford, Grace A.; Ringwald, Martin

    2009-01-01

    The National Cancer Institute (NCI) is developing an integrated biomedical informatics infrastructure, the cancer Biomedical Informatics Grid (caBIG®), to support collaboration within the cancer research community. A key part of the caBIG architecture is the establishment of terminology standards for representing data. In order to evaluate the suitability of existing controlled terminologies, the caBIG Vocabulary and Data Elements Workspace (VCDE WS) working group has developed a set of criteria that serve to assess a terminology's structure, content, documentation, and editorial process. This paper describes the evolution of these criteria and the results of their use in evaluating four standard terminologies: the Gene Ontology (GO), the NCI Thesaurus (NCIt), the Common Terminology Criteria for Adverse Events (known as CTCAE), and the laboratory portion of the Logical Observation Identifiers Names and Codes (LOINC). The resulting caBIG criteria are presented as a matrix that may be applicable to any terminology standardization effort. PMID:19154797

  19. [Quality assurance in interventional cardiology].

    PubMed

    Gülker, H

    2009-10-01

    Quality assurance in clinical studies aiming at approval of pharmaceutical products is subject to strict rules, controls and auditing regulations. Comparable instruments to ensure the quality of diagnostic and therapeutic procedures are not available in interventional cardiology, nor in other fields of cardiovascular medicine. Quality assurance consists simply of "quality registers" containing basic data that are not externally controlled. Based on the experience of clinical studies and their long history of standardization, it must be assumed that these data may be severely flawed and thus inappropriate for setting standards for diagnostic and therapeutic strategies. The precondition for quality assurance is quality data. In invasive coronary angiography and intervention, the medical indications, the decision between interventional and surgical revascularization, the technical performance and the after-care are essential aspects affecting the quality of diagnostics and therapy. Quality data are externally controlled data. Collecting quality data requires an appropriate infrastructure, which does not currently exist. Investments have to be made both to build up and to sustain such an infrastructure. As long as there is no infrastructure and no investment, there will be no "quality data". There exist simply registers of data that have not been shown to provide a basis for significant assurance and enhancement of quality in interventional coronary cardiology. Georg Thieme Verlag KG Stuttgart, New York.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hale, Rebecca L.; Turnbull, Laura; Earl, Stevan

    Urban watersheds are often sources of nitrogen (N) to downstream systems, contributing to poor water quality. However, it is unknown which components (e.g., land cover and stormwater infrastructure type) of urban watersheds contribute to N export and which may be sites of retention. In this study we investigated which watershed characteristics control N sourcing, biogeochemical processing of nitrate (NO3–) during storms, and the amount of rainfall N that is retained within urban watersheds. We used triple isotopes of NO3– (δ15N, δ18O, and Δ17O) to identify sources and transformations of NO3– during storms from 10 nested arid urban watersheds that varied in stormwater infrastructure type and drainage area. Stormwater infrastructure and land cover—retention basins, pipes, and grass cover—dictated the sourcing of NO3– in runoff. Urban watersheds can be strong sinks or sources of N to stormwater depending on the proportion of rainfall that leaves the watershed as runoff, but we found no evidence that denitrification occurred during storms. Our results suggest that watershed characteristics control the sources and transport of inorganic N in urban stormwater but that retention of inorganic N at the timescale of individual runoff events is controlled by hydrologic, rather than biogeochemical, mechanisms.
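    The Δ17O tracer mentioned above is commonly used in a two-endmember mass balance to estimate the atmospheric fraction of NO3– in a sample, since only atmospherically derived NO3– carries a large mass-independent Δ17O anomaly. A minimal sketch of that calculation follows; the function name and the atmospheric endmember value (~26‰) are illustrative assumptions, not figures taken from this study.

```python
def atmospheric_fraction(d17o_sample, d17o_atm=26.0):
    """Two-endmember mixing: terrestrial/microbial NO3- has Delta17O
    of ~0 permil, while atmospheric NO3- carries a large positive
    anomaly (endmember value here is an assumed ~26 permil).
    Returns the estimated fraction of atmospheric NO3- in the sample."""
    if d17o_atm <= 0:
        raise ValueError("atmospheric endmember must be positive")
    f = d17o_sample / d17o_atm
    return max(0.0, min(1.0, f))  # clamp to the physical range [0, 1]

# Example: a runoff sample with a measured Delta17O of 6.5 permil
f_atm = atmospheric_fraction(6.5)  # -> 0.25, i.e. ~25% atmospheric NO3-
```

Because biological cycling erases the anomaly mass-dependently, this fraction is insensitive to fractionation during transport, which is why the tracer can separate sourcing from processing.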

  1. Intelligent systems technology infrastructure for integrated systems

    NASA Technical Reports Server (NTRS)

    Lum, Henry, Jr.

    1991-01-01

    Significant advances have occurred during the last decade in intelligent systems technologies (a.k.a. knowledge-based systems, KBS) including research, feasibility demonstrations, and technology implementations in operational environments. Evaluation and simulation data obtained to date in real-time operational environments suggest that cost-effective utilization of intelligent systems technologies can be realized for Automated Rendezvous and Capture applications. The successful implementation of these technologies involve a complex system infrastructure integrating the requirements of transportation, vehicle checkout and health management, and communication systems without compromise to systems reliability and performance. The resources that must be invoked to accomplish these tasks include remote ground operations and control, built-in system fault management and control, and intelligent robotics. To ensure long-term evolution and integration of new validated technologies over the lifetime of the vehicle, system interfaces must also be addressed and integrated into the overall system interface requirements. An approach for defining and evaluating the system infrastructures including the testbed currently being used to support the on-going evaluations for the evolutionary Space Station Freedom Data Management System is presented and discussed. Intelligent system technologies discussed include artificial intelligence (real-time replanning and scheduling), high performance computational elements (parallel processors, photonic processors, and neural networks), real-time fault management and control, and system software development tools for rapid prototyping capabilities.

  2. MaNIDA: an operational infrastructure for shipborne data

    NASA Astrophysics Data System (ADS)

    Macario, Ana; Scientific MaNIDA Team

    2013-04-01

    The Marine Network for Integrated Data Access (MaNIDA) aims to build a sustainable e-Infrastruture to support discovery and re-use of data archived in a distributed network of data providers in Germany (see related abstracts in session ESSI1.2 and session ESSI2.2). Because one of the primary focus of MaNIDA is the underway data acquired on board of German academic research vessels, we will be addressing various issues related to cruise-level metadata, shiptrack navigation, sampling events conducted during the cruise (event logs), standardization of device-related (type, name, parameters) and place-related (gazetteer) vocabularies, QA/QC procedures (near real time and post-cruise validation, corrections, quality flags) as well as ingestion and management of contextual information (e.g. various types of cruise-related reports and project-related information). One of MaNIDA's long-term goal is to be able to offer an integrative "one-stop-shop" framework for management and access of ship-related information based on international standards and interoperability. This access framework will be freely available and is intended for scientists, funding agencies and the public. The master "catalog" we are building currently contains information from 13 German academic research vessels and respective cruises (to date ~1900 cruises with expected growing rate of ~150 cruises annually). Moreover, MaNIDA's operational infrastructure will additionally provide a direct pipeline to SeaDataNet Cruise Summary Report Inventory, among others. In this presentation, we will focus on the extensions we are currently implementing to support automated acquisition and standardized transfer of various types of data from German research vessels to hosts on land. Our concept towards nationwide common QA/QC procedures for various types of underway data (including versioning concept) and common workflows will also be presented. 
The "linking" of cruise-related information with quality-controlled data and data products (e.g., digital terrain models), publications, cruise-related reports, people and other contextual information will be additionally shown in the framework of a prototype for R.V. Polarstern.

  3. 76 FR 52363 - Tortoise Power and Energy Infrastructure Fund, Inc. and Tortoise Capital Advisors, L.L.C.; Notice...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-22

    ... relation to market price and net asset value (``NAV'') per common share) and the relationship between the... relation to NAV per share). Applicants state that the Independent Directors also considered what conflicts... appropriate in the public interest and consistent with the protection of investors and the purposes fairly...

  4. U.S. - African Partnerships: Advancing Common Interests

    DTIC Science & Technology

    2017-12-01

    discussions on: – Governance, institutions, and effective partnerships – Prospects for enhancing economic partnerships – Opportunities and challenges in...U.S. administrations, emphasizing peace and security, countering terrorism, increasing economic growth, and promoting democracy and good governance...often focused on short- term security or economic objectives, while neglecting infrastructure projects and longer term programs that would empower

  5. Information Infrastructures for Integrated Enterprises

    DTIC Science & Technology

    1993-05-01

    PROCESSING demographic CAM realization; ule leveling; studies; prelimi- rapid tooling; con- accounting/admin- nary CAFE and tinuous cost istrative reports...nies might consider franchising some facets of indirect labor, such as selected functions of administration, finance, and human resources. Incorporate as...vices CAFE Corporate Average Fuel Economy CAD Computer-Aided Design CAE Computer-Aided Engineering CAIS Common Ada Programming Support Environment

  6. Providing the Tools for Information Sharing: Net-Centric Enterprise Services

    DTIC Science & Technology

    2007-07-01

    The Department of Defense (DoD) is establishing a net-centric environment that increasingly leverages shared services and Service-Oriented...transformational program that delivers a set of shared services as part of the DoD’s common infrastructure to enable networked joint force capabilities, improved interoperability, and increased information sharing across mission area services.

  7. Boosting a Low-Cost Smart Home Environment with Usage and Access Control Rules.

    PubMed

    Barsocchi, Paolo; Calabrò, Antonello; Ferro, Erina; Gennaro, Claudio; Marchetti, Eda; Vairo, Claudio

    2018-06-08

    Smart Home has gained widespread attention due to its flexible integration into everyday life. Pervasive sensing technologies are used to recognize and track the activities that people perform during the day, and to allow communication and cooperation among physical objects. Usually, the available infrastructures and applications leveraging these smart environments have a critical impact on the overall cost of Smart Home construction, preferably need to be installed during home construction, and are still not user-centric. In this paper, we propose a low-cost, easy-to-install, user-friendly, dynamic and flexible infrastructure able to perform runtime resource management by decoupling the different levels of control rules. The basic idea relies on the use of off-the-shelf sensors and technologies to guarantee the regular exchange of critical information, without requiring the user to develop accurate models for managing resources or regulating their access/usage. This allows us to simplify continuous updating and improvement, to reduce the maintenance effort and to improve residents' living and security. A first validation of the proposed infrastructure on a case study is also presented.
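    The idea of decoupling levels of control rules can be sketched as independent rule layers that a request must pass in sequence, so each layer can be updated without touching the others. The following minimal sketch is an illustration of that pattern only; the rule names, resources and policy values are hypothetical and are not taken from the paper's implementation.

```python
# Access layer: who may touch which resource (hypothetical policy).
ACCESS_RULES = {
    ("alice", "thermostat"): True,
    ("guest", "thermostat"): False,
}

def usage_ok(resource, hour):
    """Usage layer: e.g. the thermostat is adjustable only 06:00-23:00.
    Kept separate from the access layer so either can change alone."""
    if resource == "thermostat":
        return 6 <= hour < 23
    return True

def request_allowed(user, resource, hour):
    """A request passes only if every layer allows it: first the
    access-control layer, then the usage-control layer."""
    return ACCESS_RULES.get((user, resource), False) and usage_ok(resource, hour)
```

Separating the layers in this way is what allows runtime updates to usage policies without redeploying the access-control configuration, which is the kind of flexibility the abstract describes.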

  8. Cistern and planter box monitoring in Camden, NJ revisited

    EPA Science Inventory

    The Camden County Municipal Utilities Authority installed green infrastructure Stormwater Control Measures at multiple locations around the city of Camden, NJ. The Stormwater Control Measures include raised downspout planter boxes and cisterns. EPA is monitoring a subset of the ...

  9. Making Culverts Great Again: Modeling Road Culvert Vulnerability to Assist Prioritization of Local Infrastructure Investment

    NASA Astrophysics Data System (ADS)

    Gold, D.; Walter, M. T.; Watkins, L.; Kaufman, Z.; Meyer, A.; Mahaney, M.

    2016-12-01

    The concurrent threats posed by climate change and aging infrastructure have become of increasing concern in recent years. In the Northeastern US, storms such as Hurricane Irene and Super Storm Sandy have highlighted the vulnerability of infrastructure to extreme weather events, which are projected to become more frequent under future climate change scenarios. Road culverts are one type of infrastructure that is particularly vulnerable to such threats. Culverts allow roads to safely traverse small streams or drainage ditches, and their proper design is critical to ensuring a safe and reliable transportation network. Much of the responsibility for designing and maintaining road culverts lies at the local level, but many local governments lack the resources to quantify the vulnerability of their culverts to major storms. This study contributes a model designed to assist local governments in rapidly assessing the vulnerability of large numbers of culverts and identifies common characteristics of vulnerable culverts. Model inputs include culvert geometry and location data collected by trained local field teams. The model uses custom tools created in ArcGIS and Python to determine the maximum return-period storm that each culvert can safely convey under current and projected future rainfall regimes. As a demonstration, over 1000 culverts in New York State were modeled. A significant percentage of modeled culverts failed to convey the current 5-year return-period storm event (deemed a failure), and this percentage increased under projected future rainfall conditions. The model results were analyzed to determine correlations between culvert characteristics and failure. Characteristics investigated included watershed size, road type (state, county or local), affluence of the surrounding area and suitability for aquatic organism passage.
Results from this study can be used by local governments to quantify and characterize the vulnerability of current infrastructure and prioritize future infrastructure investment.

  10. A Service Oriented Infrastructure for Earth Science exchange

    NASA Astrophysics Data System (ADS)

    Burnett, M.; Mitchell, A.

    2008-12-01

    NASA's Earth Science Distributed Information System (ESDIS) program has developed an infrastructure for the exchange of Earth Observation related resources. Fundamentally a platform for Service Oriented Architectures, ECHO provides standards-based interfaces built on the basic interactions of an SOA pattern: Publish, Find and Bind. This infrastructure enables the benefits of Service Oriented Architectures to be realized on a global scale, namely the reduction of stove-piped systems, opportunities for reuse, and the flexibility to meet dynamic business needs. ECHO is the result of the infusion of IT technologies, including Web Services standards and Service Oriented Architecture technologies. The infrastructure is based on standards and leverages registries for data, services, clients and applications. As an operational system, ECHO currently represents over 110 million Earth Observation resources from a wide range of provider organizations. These partner organizations each have a primary mission: serving a particular facet of the Earth Observation community. Through ECHO, those partners can serve not only their target portion of the community but also enable a wider range of users to discover and leverage their data resources, thereby increasing the value of their offerings. The Earth Observation community benefits from this infrastructure because it provides a set of common mechanisms for the discovery of and access to resources from a much wider range of data and service providers. ECHO enables innovative clients to be built for targeted user types and missions; several such clients are already in development. Applications built on this infrastructure can include user-driven GUI clients (web-based or thick clients), analysis programs (as intermediate components of larger systems), models, and decision support systems. 
This paper will provide insight into the development of ECHO, as technologies were evaluated for infusion, and a summary of how technologies were leveraged into a significant operational system for the Earth Observation community.

  11. Next generation information communication infrastructure and case studies for future power systems

    NASA Astrophysics Data System (ADS)

    Qiu, Bin

    As the power industry enters the new century, powerful driving forces, uncertainties and new functions are compelling electric utilities to make dramatic changes in their information communication infrastructure. Expanding network services such as real-time measurement and monitoring are also driving the need for more bandwidth in the communication network. These needs will grow further as new remote real-time protection and control applications become more feasible and pervasive. This dissertation addresses two main issues for the future power system information infrastructure: the communication network infrastructure and the associated power system applications. Optical networks will no doubt become the predominant data transmission media for next-generation power system communication. The rapid development of fiber optic network technology poses new challenges in the areas of topology design, network management and real-time applications. Based on advanced fiber optic technologies, an all-fiber network is investigated and proposed. The study covers both the system architecture and the data exchange protocol. High-bandwidth, robust optical networks could provide great opportunities for the power system to deliver better service and more efficient operation. In the dissertation, different applications are investigated. One typical application is the SCADA information accessing system. An Internet-based application for the substation automation system will be presented. VLSI (Very Large Scale Integration) technology is also used for the auto-generation of one-line diagrams. A high-transmission-rate, low-latency optical network is especially suitable for power system real-time control. In the dissertation, a new local area network based Load Shedding Controller (LSC) for isolated power systems will be presented. Using PMUs (Phasor Measurement Units) and a fiber optic network, an AGE (Area Generation Error) based accurate wide-area load shedding scheme will also be proposed. 
The objective is to shed the load in the limited area with minimum disturbance.
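    The decision logic of an AGE-style load-shedding scheme can be sketched as: compute the area's generation deficit from wide-area measurements, then drop the lowest-priority feeders until the deficit is covered, which confines the disturbance to the smallest possible area. The sketch below is a hypothetical illustration of that idea; the feeder names, priorities and MW figures are invented, and real controllers would work from PMU phasor streams rather than static snapshots.

```python
def area_generation_error(gen_mw, load_mw):
    """AGE sketch: a positive value means the area has a generation
    deficit (load exceeds available generation) and must shed load."""
    return load_mw - gen_mw

def select_feeders_to_shed(deficit_mw, feeders):
    """Shed lowest-priority feeders first (a higher priority number
    here means lower importance) until the cumulative shed load
    covers the deficit, keeping the disturbed area minimal."""
    shed = []
    covered = 0.0
    for name, load_mw, priority in sorted(feeders, key=lambda f: -f[2]):
        if covered >= deficit_mw:
            break
        shed.append(name)
        covered += load_mw
    return shed

# Hypothetical area snapshot: 90 MW of generation against 101 MW of load.
feeders = [("hospital", 5.0, 1), ("industry", 8.0, 2), ("lighting", 4.0, 3)]
deficit = area_generation_error(gen_mw=90.0, load_mw=101.0)  # 11 MW short
to_shed = select_feeders_to_shed(deficit, feeders)
```

In this example the controller sheds the lighting and industrial feeders (12 MW in total) and leaves the hospital feeder untouched, matching the stated objective of shedding load in a limited area with minimum disturbance.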

  12. Comparing the effects of infrastructure on bicycling injury at intersections and non-intersections using a case–crossover design

    PubMed Central

    Harris, M Anne; Reynolds, Conor C O; Winters, Meghan; Cripton, Peter A; Shen, Hui; Chipman, Mary L; Cusimano, Michael D; Babul, Shelina; Brubacher, Jeffrey R; Friedman, Steven M; Hunte, Garth; Monro, Melody; Vernich, Lee; Teschke, Kay

    2013-01-01

    Background This study examined the impact of transportation infrastructure at intersection and non-intersection locations on bicycling injury risk. Methods In Vancouver and Toronto, we studied adult cyclists who were injured and treated at a hospital emergency department. A case–crossover design compared the infrastructure of injury and control sites within each injured bicyclist's route. Intersection injury sites (N=210) were compared to randomly selected intersection control sites (N=272). Non-intersection injury sites (N=478) were compared to randomly selected non-intersection control sites (N=801). Results At intersections, the types of routes meeting and the intersection design influenced safety. Intersections of two local streets (no demarcated traffic lanes) had approximately one-fifth the risk (adjusted OR 0.19, 95% CI 0.05 to 0.66) of intersections of two major streets (more than two traffic lanes). Motor vehicle speeds less than 30 km/h also reduced risk (adjusted OR 0.52, 95% CI 0.29 to 0.92). Traffic circles (small roundabouts) on local streets increased the risk of these otherwise safe intersections (adjusted OR 7.98, 95% CI 1.79 to 35.6). At non-intersection locations, very low risks were found for cycle tracks (bike lanes physically separated from motor vehicle traffic; adjusted OR 0.05, 95% CI 0.01 to 0.59) and local streets with diverters that reduce motor vehicle traffic (adjusted OR 0.04, 95% CI 0.003 to 0.60). Downhill grades increased risks at both intersections and non-intersections. Conclusions These results provide guidance for transportation planners and engineers: at local street intersections, traditional stops are safer than traffic circles, and at non-intersections, cycle tracks alongside major streets and traffic diversion from local streets are safer than no bicycle infrastructure. PMID:23411678
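    The adjusted odds ratios above come from conditional logistic regression on matched case-crossover data; as a simpler illustration of where such numbers come from, the sketch below computes an unadjusted odds ratio with a Woolf (log-based) 95% confidence interval from a 2x2 table. The counts in the example are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Woolf 95% CI from a 2x2 table:
    a = exposed injury sites, b = unexposed injury sites,
    c = exposed control sites, d = unexposed control sites."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 10 injury sites and 40 control sites on a
# protected facility, versus 200 and 160 on unprotected road segments.
or_, lo, hi = odds_ratio_ci(10, 200, 40, 160)  # OR = 0.2 (protective)
```

An OR below 1 with a CI excluding 1, as for cycle tracks in the study, indicates a protective association; the matched design additionally controls for rider-level confounders by comparing sites within the same trip.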

  13. The Space Communications Protocol Standards Program

    NASA Technical Reports Server (NTRS)

    Jeffries, Alan; Hooke, Adrian J.

    1994-01-01

    In the fall of 1992 NASA and the Department of Defense chartered a technical team to explore the possibility of developing a common set of space data communications standards for potential dual use across the U.S. national space mission support infrastructure. The team focused on the data communications needs of those activities associated with on-line control of civil and military spacecraft. A two-pronged approach was adopted: a top-down survey of representative civil and military space data communications requirements was conducted, and a bottom-up analysis of available standard data communications protocols was performed. A striking intersection of civil and military space mission requirements emerged, and an equally striking consensus on the approach towards joint civil and military space protocol development was reached. The team concluded that wide segments of the U.S. civil and military space communities have common needs for: (1) an efficient file transfer protocol; (2) various flavors of underlying data transport service; (3) an optional data protection mechanism to assure end-to-end security of message exchange; and (4) an efficient internetworking protocol. These recommendations led to the initiation of a program to develop a suite of protocols based on these findings. This paper describes the current status of this program.

  14. Development and Demonstration of Sustainable Surface Infrastructure for Moon/Mars Exploration

    NASA Technical Reports Server (NTRS)

    Sanders, Gerald B.; Larson, William E.; Picard, Martin

    2011-01-01

    For long-term human exploration of the Moon and Mars to be practical, affordable, and sustainable, future missions must be able to identify and utilize resources at the site of exploration. The ability to characterize, extract, process, and separate products from local material, known as In-Situ Resource Utilization (ISRU), can provide significant reductions in launch mass, logistics, and development costs while reducing risk through increased mission flexibility and protection, as well as increased mission capabilities in the areas of power and transportation. Making mission-critical consumables like propellants, fuel cell reagents and life support gases, as well as providing in-situ crew/hardware protection and energy storage capabilities, can significantly enhance robotic and human science and exploration missions; however, other mission systems need to be designed from the start to interface with and utilize these in-situ products and services, or the benefits will be minimized or eliminated. This requires a level of surface and transportation system development coordination not typically utilized during early technology and system development activities. An approach taken by the US National Aeronautics and Space Administration and the Canadian Space Agency has been to use joint analogue field demonstrations to focus technology development activities on demonstrating and integrating new and potentially game-changing, mission-critical capabilities that would enable an affordable and sustainable surface infrastructure for lunar and Mars robotic and human exploration. Two analogue field tests performed in November 2008 and February 2010 demonstrated first-generation capabilities for lunar resource prospecting, exploration site preparation, and oxygen extraction from regolith while initiating integration with the mobility, science, fuel cell power, and propulsion disciplines. 
A third analogue field test, currently planned for June 2012, will continue and expand the fidelity and integration of these surface exploration and infrastructure capabilities while adding Mars exploration technologies, improving remote operation and control of hardware, and promoting the use of common software, interfaces, and standards for the control and operation of surface exploration and science. The next field test will also attempt to include greater involvement by industry, academia, and other countries/space agencies. This paper will provide an overview of the development and demonstration approach utilized to date, the results of the previous two ISRU-focused field analogue tests in Hawaii, and the current objectives and plans for the third international Hawaii analogue field test.

  15. Towards European organisation for integrated greenhouse gas observation system

    NASA Astrophysics Data System (ADS)

    Kaukolehto, Marjut; Vesala, Timo; Sorvari, Sanna; Juurola, Eija; Paris, Jean-Daniel

    2013-04-01

    Climate change is one of the most challenging problems that humanity will have to cope with in the coming decades. The perturbed global biogeochemical cycles of the greenhouse gases (carbon dioxide, methane and nitrous oxide) are a major driving force of current and future climate change. Deeper understanding of the driving forces of climate change requires full quantification of greenhouse gas emissions and sinks and their evolution. Regional greenhouse gas budgets, tipping points, vulnerabilities and the controlling mechanisms can be assessed by long-term, high-precision observations in the atmosphere and at the ocean and land surface. ICOS RI is a distributed infrastructure for on-line, in-situ monitoring of the greenhouse gases (GHG), necessary to understand their present and future sinks and sources. ICOS RI provides the long-term observations required to understand the present state and predict future behaviour of the global carbon cycle and greenhouse gas emissions. Linking research, education and innovation promotes technological development and demonstrations related to greenhouse gases. The first objective of ICOS RI is to provide effective access to coherent and precise data and to provide assessments of GHG inventories with high temporal and spatial resolution. The second objective is to provide profound information for research and understanding of regional budgets of greenhouse gas sources and sinks, their human and natural drivers, and the controlling mechanisms. ICOS is one of several ESFRI initiatives in the environmental science domain. There is significant potential for structural and synergetic interaction with several other ESFRI initiatives. ICOS RI is relevant for Joint Programming by providing data access for researchers and acting as a contact point for developing joint strategic research agendas among European member states. 
The preparatory phase ends in March 2013 and there will be an interim period before the legal entity will be set up. International negotiations have been going on for two years during which the constitutional documents have been processed and adopted. The instrument for the ICOS legal entity is the ERIC (European Research Infrastructure Consortium) steered by the General Assembly of its Members. ICOS is a highly distributed research infrastructure where three operative levels (ICOS National Networks, ICOS Central Facilities and ICOS ERIC) interact on several fields of research and governance. The governance structure of ICOS RI needs to reflect this complexity while maintaining the common vision, strategy and principles.

  16. The University of Washington Health Sciences Library BioCommons: an evolving Northwest biomedical research information support infrastructure

    PubMed Central

    Minie, Mark; Bowers, Stuart; Tarczy-Hornoch, Peter; Roberts, Edward; James, Rose A.; Rambo, Neil; Fuller, Sherrilynne

    2006-01-01

    Setting: The University of Washington Health Sciences Libraries and Information Center BioCommons serves the bioinformatics needs of researchers at the university and in the vibrant for-profit and not-for-profit biomedical research sector in the Washington area and region. Program Components: The BioCommons comprises services addressing internal University of Washington, not-for-profit, for-profit, and regional and global clientele. The BioCommons is maintained and administered by the BioResearcher Liaison Team. The BioCommons architecture provides a highly flexible structure for adapting to rapidly changing resources and needs. Evaluation Mechanisms: BioCommons uses Web-based pre- and post-course evaluations and periodic user surveys to assess service effectiveness. Recent surveys indicate substantial usage of BioCommons services and a high level of effectiveness and user satisfaction. Next Steps/Future Directions: BioCommons is developing novel collaborative Web resources to distribute bioinformatics tools and is experimenting with Web-based competency training in bioinformation resource use. PMID:16888667

  17. Proliferation risks from nuclear power infrastructure

    NASA Astrophysics Data System (ADS)

    Squassoni, Sharon

    2017-11-01

    Certain elements of nuclear energy infrastructure are inherently dual-use, which makes the promotion of nuclear energy fraught with uncertainty. Are current restraints on the materials, equipment, and technology that can be used either to produce fuel for nuclear electricity generation or material for nuclear explosive devices adequate? Technology controls, supply side restrictions, and fuel market assurances have been used to dissuade countries from developing sensitive technologies but the lack of legal restrictions is a continued barrier to permanent reduction of nuclear proliferation risks.

  18. Cyber-Threat Assessment for the Air Traffic Management System: A Network Controls Approach

    NASA Technical Reports Server (NTRS)

    Roy, Sandip; Sridhar, Banavar

    2016-01-01

Air transportation networks are being disrupted with increasing frequency by failures in their cyber (computing, communication, and control) systems. Whether these cyber failures arise from deliberate attacks or incidental errors, they can have far-reaching impact on the performance of the air traffic control and management systems. For instance, a computer failure in the Washington, DC Air Route Traffic Control Center (ZDC) on August 15, 2015, caused nearly complete closure of the Center's airspace for several hours. This closure had a propagative impact across the United States National Airspace System, changing congestion patterns and requiring a suite of traffic management initiatives to address the capacity reduction and congestion. A snapshot of traffic on that day clearly shows the closure of the ZDC airspace and the resulting congestion at its boundary, which required augmented traffic management at multiple locations. Cyber events also have important ramifications for private stakeholders, particularly the airlines. During the last few months, computer-system issues have grounded several airlines' fleets for significant periods of time, including those of United Airlines (twice), LOT Polish Airlines, and American Airlines. Delays and regional stoppages due to cyber events are even more common and may have myriad causes (e.g., failure of the Department of Homeland Security systems needed for security screening of passengers; see [3]). The growing frequency of cyber disruptions in the air transportation system reflects a much broader trend in modern society: cyber failures and threats are becoming increasingly pervasive, varied, and impactful. In consequence, an intense effort is underway to develop secure and resilient cyber systems that can protect against, detect, and remove threats.
The outcomes of this wide effort on cyber security are applicable to the air transportation infrastructure, and indeed security solutions are being implemented in the current system. While these solutions are important, they provide only a piecemeal defense: particular computers or communication channels are protected from particular attacks, without a holistic view of the air transportation infrastructure. The incidents listed above highlight that a holistic approach is needed, for several reasons. First, the air transportation infrastructure is a large-scale cyber-physical system with multiple stakeholders and diverse legacy assets. It is impractical to protect every cyber asset from known and unknown disruptions; instead, a strategic view of security is needed. Second, disruptions to the cyber system can incur complex propagative impacts across the air transportation network, including its physical and human assets. These implications of cyber events are also exacerbated or modulated by other disruptions and operational specifics, e.g., severe weather or operator fatigue or error. These characteristics motivate a holistic and strategic perspective on protecting the air transportation infrastructure from cyber events. The analysis of cyber threats to the air traffic system is also inextricably tied to the integration of new autonomy into the airspace. Replacing human operators with cyber functions leaves the network open to new cyber threats, which must be modeled and managed. Paradoxically, mitigating cyber events in the airspace will also likely require additional autonomy, given the fast time scale and myriad pathways of cyber attacks that must be managed. Assessing new vulnerabilities upon the integration of new autonomy is a further key motivation for a holistic perspective on cyber threats.
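The propagative impact described above can be illustrated with a minimal sketch: a toy directed graph of hand-offs between traffic-control centers (the center identifiers ZNY, ZOB, and ZTL and their connectivity here are illustrative assumptions, not the actual NAS topology), where closing one center backs traffic up into every center that feeds it.

```python
from collections import deque

# Toy hand-off graph: an edge u -> v means center u hands traffic off to v,
# so a closure at v backs traffic up into u. Topology is illustrative only.
handoffs = {
    "ZNY": ["ZDC"],   # hypothetical feed into Washington Center
    "ZOB": ["ZDC"],   # hypothetical feed into Washington Center
    "ZDC": ["ZTL"],   # hypothetical feed into Atlanta Center
    "ZTL": [],
}

def propagate_closure(failed, handoffs):
    """Return the set of centers disrupted when `failed` closes,
    by following hand-off edges upstream (breadth-first)."""
    # Build reverse adjacency: which centers feed traffic into each center.
    upstream = {c: [] for c in handoffs}
    for u, outs in handoffs.items():
        for v in outs:
            upstream[v].append(u)
    disrupted, queue = {failed}, deque([failed])
    while queue:
        center = queue.popleft()
        for feeder in upstream[center]:
            if feeder not in disrupted:
                disrupted.add(feeder)
                queue.append(feeder)
    return disrupted

print(sorted(propagate_closure("ZDC", handoffs)))  # ['ZDC', 'ZNY', 'ZOB']
```

Even in this toy model, a single closure disrupts every upstream feeder, which is the qualitative pattern the ZDC incident exhibited; a holistic assessment would layer capacities, delays, and traffic management initiatives onto such a network model.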

  19. Constellation-X Microcalorimeter Development

    NASA Technical Reports Server (NTRS)

    Kelly, Richard (Technical Monitor); Silver, Eric

    2004-01-01

Discussion topics: Review of concept for NTD array construction. Tested revised fabrication technique: simpler construction, more rugged, and easier-to-control thermal properties. Development status. Related cryogenic infrastructure.

  20. Flight Test of Composite Model Reference Adaptive Control (CMRAC) Augmentation Using NASA AirSTAR Infrastructure

    NASA Technical Reports Server (NTRS)

Gregory, Irene M.; Gadient, Ross; Lavretsky, Eugene

    2011-01-01

    This paper presents flight test results of a robust linear baseline controller with and without composite adaptive control augmentation. The flight testing was conducted using the NASA Generic Transport Model as part of the Airborne Subscale Transport Aircraft Research system at NASA Langley Research Center.

Top