Optimizing Mexico’s Water Distribution Services
2011-10-28
The government pursued a decentralization policy in the water distribution infrastructure sector. This is evident in Article 115 of the Mexican Constitution ... infrastructure, monitoring water ...
US EPA/ORD Condition Assessment Research for Drinking Water Conveyance Infrastructure
This presentation describes research on condition assessment for drinking water transmission and distribution systems that the U.S. Environmental Protection Agency (EPA) is conducting under its Aging Water Infrastructure (AWI) Research Program. This research program will help U.S. ...
Romanian contribution to research infrastructure database for EPOS
NASA Astrophysics Data System (ADS)
Ionescu, Constantin; Craiu, Andreea; Tataru, Dragos; Balan, Stefan; Muntean, Alexandra; Nastase, Eduard; Oaie, Gheorghe; Asimopolos, Laurentiu; Panaiotu, Cristian
2014-05-01
The European Plate Observing System (EPOS) is a long-term plan to facilitate integrated use of data, models and facilities from mainly existing, but also new, distributed research infrastructures for solid Earth science. During the EPOS Preparatory Phase, national research infrastructures were integrated at the pan-European level to create the EPOS distributed research infrastructure, a structure in which Romania currently participates through the Earth science research infrastructures of national interest declared on the National Roadmap. The mission of EPOS is to build an efficient and comprehensive multidisciplinary research platform for solid Earth sciences in Europe and to allow the scientific community to study the same phenomena from different points of view, over different time periods and spatial scales (laboratory and field experiments). At the national scale, research and monitoring infrastructures have gathered a vast amount of geological and geophysical data, which research networks have used to underpin our understanding of the Earth. EPOS promotes the creation of comprehensive national and regional consortia, as well as the organization of collective actions. To serve the EPOS goals, a group of Romanian national research institutes, together with their infrastructures, formed an EPOS National Consortium, as follows: 1. National Institute for Earth Physics: seismic, strong-motion, GPS and geomagnetic networks and an experimental laboratory; 2. National Institute of Marine Geology and Geoecology: marine research infrastructure and the Euxinus integrated regional Black Sea observation and early-warning system; 3. Geological Institute of Romania: the Surlari National Geomagnetic Observatory and the national lithotheque (the latter as part of the National Museum of Geology); 4. University of Bucharest: paleomagnetic laboratory. Following national dissemination of the EPOS initiative, other research institutes and companies from the potential stakeholder group have also shown interest in joining the EPOS National Consortium.
e-Infrastructures supporting research into depression, self-harm and suicide.
McCafferty, S; Doherty, T; Sinnott, R O; Watt, J
2010-08-28
The Economic and Social Research Council (ESRC)-funded Data Management through e-Social Sciences (DAMES) project is investigating, as one of its four research themes, how research into depression, self-harm and suicide may be enhanced through the adoption of e-Science infrastructures and techniques. In this paper, we explore the challenges in supporting such research infrastructures and describe the distributed and heterogeneous datasets that need to be provisioned to support such research. We describe and demonstrate the application of an advanced user and security-driven infrastructure that has been developed specifically to meet these challenges in an on-going study into depression, self-harm and suicide.
Online catalog access and distribution of remotely sensed information
NASA Astrophysics Data System (ADS)
Lutton, Stephen M.
1997-09-01
Remote sensing is providing voluminous data and value added information products. Electronic sensors, communication electronics, computer software, hardware, and network communications technology have matured to the point where a distributed infrastructure for remotely sensed information is a reality. The amount of remotely sensed data and information is making distributed infrastructure almost a necessity. This infrastructure provides data collection, archiving, cataloging, browsing, processing, and viewing for applications from scientific research to economic, legal, and national security decision making. The remote sensing field is entering a new exciting stage of commercial growth and expansion into the mainstream of government and business decision making. This paper overviews this new distributed infrastructure and then focuses on describing a software system for on-line catalog access and distribution of remotely sensed information.
A European perspective--the European clinical research infrastructures network.
Demotes-Mainard, J; Kubiak, C
2011-11-01
Evaluating research outcomes requires multinational cooperation in clinical research for optimization of treatment strategies and comparative effectiveness research, leading to evidence-based practice and healthcare cost containment. The European Clinical Research Infrastructures Network (ECRIN) is a distributed ESFRI (European Strategy Forum on Research Infrastructures) roadmap pan-European infrastructure designed to support multinational clinical research, making Europe a single area for clinical studies, taking advantage of its population size to access patients, and unlocking latent scientific potential. Servicing multinational trials started during its preparatory phase, and ECRIN will now apply for an ERIC (European Research Infrastructures Consortium) status by 2011. By creating a single area for clinical research in Europe, this achievement will contribute to the implementation of the Europe flagship initiative 2020 'Innovation Union', whose objectives include defragmentation of the research and education capacity, tackling the major societal challenges starting with the area of healthy ageing, and removing barriers to bring ideas to the market.
Research Infrastructure and Scientific Collections: The Supply and Demand of Scientific Research
NASA Astrophysics Data System (ADS)
Graham, E.; Schindel, D. E.
2016-12-01
Research infrastructure is essential in both experimental and observational sciences and is commonly thought of as single-sited facilities. In contrast, object-based scientific collections are distributed in nearly every way, including by location, taxonomy, geologic epoch, discipline, collecting processes, benefits sharing rules, and many others. These diffused collections may have been amassed for a particular discipline, but their potential for use and impact in other fields needs to be explored. Through a series of cross-disciplinary activities, Scientific Collections International (SciColl) has explored and developed new ways in which the supply of scientific collections can meet the demand of researchers in unanticipated ways. From cross-cutting workshops on emerging infectious diseases and food security, to an online portal of collections, SciColl aims to illustrate the scope and value of object-based scientific research infrastructure. As distributed infrastructure, the full impact of scientific collections to the research community is a result of discovering, utilizing, and networking these resources. Examples and case studies from infectious disease research, food security topics, and digital connectivity will be explored.
Transforming Our Cities: High-Performance Green Infrastructure (WERF Report INFR1R11)
The objective of this project is to demonstrate that the highly distributed real-time control (DRTC) technologies for green infrastructure being developed by the research team can play a critical role in transforming our nation’s urban infrastructure. These technologies include a...
An Overview of the Distributed Space Exploration Simulation (DSES) Project
NASA Technical Reports Server (NTRS)
Crues, Edwin Z.; Chung, Victoria I.; Blum, Michael G.; Bowman, James D.
2007-01-01
This paper describes the Distributed Space Exploration Simulation (DSES) Project, a research and development collaboration between NASA centers which investigates technologies and processes related to the integrated, distributed simulation of complex space systems in support of NASA's Exploration Initiative. In particular, it describes the three major components of DSES: network infrastructure, software infrastructure and simulation development. With regard to network infrastructure, DSES is developing a Distributed Simulation Network for use by all NASA centers. With regard to software, DSES is developing software models, tools and procedures that streamline distributed simulation development and provide an interoperable infrastructure for agency-wide integrated simulation. Finally, with regard to simulation development, DSES is developing an integrated end-to-end simulation capability to support NASA's development of new exploration spacecraft and missions. This paper presents the current status and plans for these three areas, including examples of specific simulations.
Boutin, Natalie; Holzbach, Ana; Mahanta, Lisa; Aldama, Jackie; Cerretani, Xander; Embree, Kevin; Leon, Irene; Rathi, Neeta; Vickers, Matilde
2016-01-01
The Biobank and Translational Genomics core at Partners Personalized Medicine requires robust software and hardware. This Information Technology (IT) infrastructure enables the storage and transfer of large amounts of data, drives efficiencies in the laboratory, maintains data integrity from the time of consent to the time that genomic data is distributed for research, and enables the management of complex genetic data. Here, we describe the functional components of the research IT infrastructure at Partners Personalized Medicine and how they integrate with existing clinical and research systems, review some of the ways in which this IT infrastructure maintains data integrity and security, and discuss some of the challenges inherent to building and maintaining such infrastructure. PMID:26805892
ACTRIS Aerosol, Clouds and Trace Gases Research Infrastructure
NASA Astrophysics Data System (ADS)
Pappalardo, Gelsomina
2018-04-01
The Aerosols, Clouds and Trace gases Research Infrastructure (ACTRIS) is a distributed infrastructure dedicated to high-quality observation of aerosols, clouds, trace gases and exploration of their interactions. It will deliver precision data, services and procedures regarding the 4D variability of clouds, short-lived atmospheric species and the physical, optical and chemical properties of aerosols to improve the current capacity to analyse, understand and predict past, current and future evolution of the atmospheric environment.
The European Research Infrastructure for Heritage Science (erihs)
NASA Astrophysics Data System (ADS)
Striova, J.; Pezzati, L.
2017-08-01
The European Research Infrastructure for Heritage Science (E-RIHS) entered the European strategic roadmap for research infrastructures (ESFRI Roadmap [1]) in 2016 as one of its six new projects. E-RIHS supports research on heritage interpretation, preservation, documentation and management. Both cultural and natural heritage are addressed: collections, artworks, buildings, monuments and archaeological sites. E-RIHS aims to become a distributed research infrastructure with a multi-level star structure: facilities from individual countries will be organized in national nodes, coordinated by National Hubs. The E-RIHS Central Hub will provide the unique access point to all E-RIHS services through coordination of the National Hubs. E-RIHS activities have already started in some of its national nodes; in Italy, access to some E-RIHS services started in 2015. A case study concerning the diagnostics of a hypogeal cave is presented.
Reframing the Dissemination Challenge: A Marketing and Distribution Perspective
Kreuter, Matthew W; Bernhardt, Jay M
2009-12-01
A fundamental obstacle to successful dissemination and implementation of evidence-based public health programs is the near-total absence of systems and infrastructure for marketing and distribution. We describe the functions of a marketing and distribution system, and we explain how it would help move effective public health programs from research to practice. Then we critically evaluate the 4 dominant strategies now used to promote dissemination and implementation, and we explain how each would be enhanced by marketing and distribution systems. Finally, we make 6 recommendations for building the needed system infrastructure and discuss the responsibility within the public health community for implementation of these recommendations. Without serious investment in such infrastructure, application of proven solutions in public health practice will continue to occur slowly and rarely. PMID:19833993
Condition Assessment Technologies for Water Transmission and Distribution Systems
As part of the U.S. Environmental Protection Agency’s (EPA’s) Aging Water Infrastructure Research Program, this research was conducted to identify and characterize the state of the technology for structural condition assessment of drinking water transmission and distribution syst...
The Czech National Grid Infrastructure
NASA Astrophysics Data System (ADS)
Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.
2017-10-01
The Czech National Grid Infrastructure is operated by MetaCentrum, the CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a network fast enough to provide transparent access to all resources. We describe in more detail the computing infrastructure, which is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest share of CPUs is still accessible via distributed Torque servers, providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided to the EGI FedCloud environment with a cloud interface, and a Hadoop cluster is offered by the same e-infrastructure. A broad spectrum of computing servers is available; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFSv4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics will be given.
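The heterogeneous pool described above (standard 2-CPU servers, large SMP machines, GPU nodes, owner-restricted resources) can be illustrated with a toy matchmaking sketch. The resource names, fields and policy below are hypothetical illustrations, not MetaCentrum's actual scheduler interface:

```python
# Toy resource matchmaking over a heterogeneous pool, loosely modelled on
# the mix of servers described above. All names and fields are illustrative.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    cpus: int
    ram_gb: int
    gpus: int = 0
    exclusive_to: str = ""  # owner group with exclusive access, if any

@dataclass
class Job:
    owner: str
    cpus: int
    ram_gb: int
    gpus: int = 0

def match(job: Job, pool: list[Resource]):
    """Return the first resource satisfying the job's requirements,
    honouring exclusive-access restrictions; None if nothing fits."""
    for r in pool:
        if r.exclusive_to and r.exclusive_to != job.owner:
            continue  # owner-only resource, different group: skip
        if r.cpus >= job.cpus and r.ram_gb >= job.ram_gb and r.gpus >= job.gpus:
            return r
    return None

pool = [
    Resource("std-2cpu", cpus=2, ram_gb=16),
    Resource("smp-large", cpus=288, ram_gb=6144),            # up to 6 TB of RAM
    Resource("gpu-node", cpus=16, ram_gb=128, gpus=4),
    Resource("owner-only", cpus=64, ram_gb=512, exclusive_to="groupA"),
]

print(match(Job("groupB", cpus=4, ram_gb=64), pool).name)            # smp-large
print(match(Job("groupB", cpus=8, ram_gb=64, gpus=1), pool).name)    # gpu-node
```

A real batch system layers fair-share priorities and queue policies on top of this kind of requirement matching; the sketch only shows the matching step.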
NASA Astrophysics Data System (ADS)
Wyborn, L. A.; Woodcock, R.
2013-12-01
One of the greatest drivers for change in the way scientific research is undertaken in Australia was the development of the Australian eResearch Infrastructure, coordinated by the then Australian Government Department of Innovation, Industry, Science and Research. There were two main tranches of funding: the 2007-2013 National Collaborative Research Infrastructure Strategy (NCRIS) and the 2009 Education Investment Fund (EIF) Super Science Initiative. Investments were made in two areas, the Australian eResearch Infrastructure and domain-specific capabilities; the combined investment in both is 1,452M, with at least 456M invested in eResearch infrastructure. NCRIS was specifically designed as a community-guided process to provide researchers, both academic and government, with the major research facilities, supporting infrastructures and networks necessary for world-class research. Extensive community engagement was sought to inform decisions on where Australia could best make strategic infrastructure investments to further develop its research capacity and improve research outcomes over the next 5 to 10 years. The current (2007-2014) Australian eResearch Infrastructure has two components: 1. The national eResearch physical infrastructure, which includes two petascale HPC facilities (one in Canberra and one in Perth), a 10 Gbps national network (the National Research Network), a national data storage infrastructure comprising 8 multi-petabyte data stores, and shared access methods (the Australian Access Federation). 2. The research integration infrastructures, which include the Australian National Data Service (ANDS), concerned with better management, description of and access to distributed research data in Australia, and the National eResearch Collaboration Tools and Resources (NeCTAR) project.
NeCTAR is centred on developing problem-oriented digital laboratories which provide better and coordinated access to research tools, data environments and workflows. The eResearch infrastructure stack is designed to support 12 individual domain-specific capabilities. Four are relevant to the Earth and space sciences: (1) AuScope (a national Earth science infrastructure program), (2) the Integrated Marine Observing System (IMOS), (3) the Terrestrial Ecosystem Research Network (TERN) and (4) the Australian Urban Research Infrastructure Network (AURIN). The two main research integration infrastructures, ANDS and NeCTAR, are seen as pivotal to the success of the Australian eResearch Infrastructure. Without them, there was a risk that the investments in new computers and data storage would provide physical infrastructure, but few would come to use it because the skills barriers to entry were too high. ANDS focused on transforming Australia's research data environment. Its flagship is Research Data Australia, an Internet-based discovery service designed to provide rich connections between data, projects, researchers and institutions, and to promote the visibility of Australian research data collections in search engines. NeCTAR focused on building eResearch infrastructure in four areas: virtual laboratories, tools, a federated research cloud and a hosting service. Combined, ANDS and NeCTAR are ensuring that people are indeed coming and using the physical infrastructures that were built.
The Distributed Space Exploration Simulation (DSES)
NASA Technical Reports Server (NTRS)
Crues, Edwin Z.; Chung, Victoria I.; Blum, Mike G.; Bowman, James D.
2007-01-01
The paper describes the Distributed Space Exploration Simulation (DSES) Project, a research and development collaboration between NASA centers which focuses on the investigation and development of technologies, processes and integrated simulations related to the collaborative distributed simulation of complex space systems in support of NASA's Exploration Initiative. This paper describes the three major components of DSES: network infrastructure, software infrastructure and simulation development. In the network work area, DSES is developing a Distributed Simulation Network that will provide agency-wide support for distributed simulation between all NASA centers. In the software work area, DSES is developing a collection of software models, tools and procedures that ease the burden of developing distributed simulations and provide a consistent interoperability infrastructure for agency-wide participation in integrated simulation. Finally, for simulation development, DSES is developing an integrated end-to-end simulation capability to support NASA's development of new exploration spacecraft and missions. This paper presents the current status and plans for each of these work areas, with specific examples of simulations that support NASA's exploration initiatives.
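The lock-step exchange at the heart of this kind of distributed simulation can be sketched in miniature: two "federates" advance in fixed time steps and exchange state at each step boundary. The classes and numbers below are invented for illustration and are not the DSES middleware itself:

```python
# Minimal sketch of time-managed co-simulation: federate A (a vehicle model)
# advances its own state each step; federate B (a tracker) receives A's
# published attributes at every step boundary, keeping the two in sync.
class Vehicle:
    def __init__(self):
        self.pos, self.vel, self.accel = 0.0, 0.0, 2.0  # made-up dynamics

    def step(self, dt):
        self.vel += self.accel * dt
        self.pos += self.vel * dt
        return {"t_pos": self.pos}  # published attributes for this step

class Tracker:
    def __init__(self):
        self.track = []

    def receive(self, update):
        self.track.append(update["t_pos"])

def run_federation(steps, dt=1.0):
    a, b = Vehicle(), Tracker()
    for _ in range(steps):
        b.receive(a.step(dt))  # state exchange at each step boundary
    return b.track

print(run_federation(steps=4))  # [2.0, 6.0, 12.0, 20.0]
```

In a real distributed simulation the exchange crosses a network and a time-management service ensures no federate advances past its peers; the sketch keeps only the step/exchange pattern.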
Consolidation and development roadmap of the EMI middleware
NASA Astrophysics Data System (ADS)
Kónya, B.; Aiftimiei, C.; Cecchi, M.; Field, L.; Fuhrmann, P.; Nilsen, J. K.; White, J.
2012-12-01
Scientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often incompatible middleware solutions. The European Middleware Initiative (EMI) is a collaboration of the major European middleware providers ARC, dCache, gLite and UNICORE. EMI aims to: deliver a consolidated set of middleware components for deployment in EGI, PRACE and other Distributed Computing Infrastructures; extend the interoperability between grids and other computing infrastructures; strengthen the reliability of the services; establish a sustainable model to maintain and evolve the middleware; fulfil the requirements of the user communities. This paper presents the consolidation and development objectives of the EMI software stack covering the last two years. The EMI development roadmap is introduced along the four technical areas of compute, data, security and infrastructure. The compute area plan focuses on consolidation of standards and agreements through a unified interface for job submission and management, a common format for accounting, the wide adoption of GLUE schema version 2.0 and the provision of a common framework for the execution of parallel jobs. The security area is working towards a unified security model and lowering the barriers to Grid usage by allowing users to gain access with their own credentials. The data area is focusing on implementing standards to ensure interoperability with other grids and industry components and to reuse already existing clients in operating systems and open source distributions. 
One of the highlights of the infrastructure area is the consolidation of the information system services via the creation of a common information backbone.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-23
... (DHS), Science and Technology, Protected Repository for the Defense of Infrastructure Against Cyber... the Defense of Infrastructure against Cyber Threats (PREDICT) program, and is a revision of a... operational data for use in cyber security research and development through the establishment of distributed...
Managing Sustainable Data Infrastructures: The Gestalt of EOSDIS
NASA Technical Reports Server (NTRS)
Behnke, Jeanne; Lowe, Dawn; Lindsay, Francis; Lynnes, Chris; Mitchell, Andrew
2016-01-01
EOSDIS epitomizes a system of systems, whose many varied and distributed parts are integrated into a single, highly functional, organized science data system. A distributed architecture was adopted to ensure discipline-specific support for the science data, while also leveraging standards and establishing policies and tools to enable interdisciplinary research and analysis across multiple scientific instruments. EOSDIS is composed of system elements such as geographically distributed archive centers used to manage the stewardship of data. The infrastructure consists of underlying capabilities and connections that enable the primary system elements to function together. For example, one key infrastructure component is the common metadata repository, which enables discovery of all data within the EOSDIS system. EOSDIS employs processes and standards to ensure partners can work together effectively and provide coherent services to users.
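The role of a common metadata repository can be sketched in a few lines: each distributed archive center registers collection metadata, and a single search interface spans all of them. The record fields below are invented for illustration and do not reflect the actual repository schema:

```python
# Sketch of a common metadata repository: distributed archive centers
# register collection records; one search interface covers them all.
# Field names are illustrative, not the real metadata schema.
records = [
    {"center": "ASDC", "collection": "CERES_EBAF", "keywords": {"radiation", "clouds"}},
    {"center": "GES DISC", "collection": "GPM_IMERG", "keywords": {"precipitation"}},
    {"center": "NSIDC DAAC", "collection": "MOD10A1", "keywords": {"snow", "cryosphere"}},
]

def search(keyword: str) -> list:
    """Discover matching collections across all archive centers at once,
    without the user needing to know which center holds which data."""
    return [r["collection"] for r in records if keyword in r["keywords"]]

print(search("precipitation"))  # ['GPM_IMERG']
print(search("snow"))           # ['MOD10A1']
```

The point of the design is exactly this decoupling: stewardship stays discipline-specific at each center, while discovery is centralized.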
De La Flor, Grace; Ojaghi, Mobin; Martínez, Ignacio Lamata; Jirotka, Marina; Williams, Martin S; Blakeborough, Anthony
2010-09-13
When transitioning local laboratory practices into distributed environments, the interdependent relationship between experimental procedure and the technologies used to execute experiments becomes highly visible and a focal point for system requirements. We present an analysis of ways in which this reciprocal relationship is reconfiguring laboratory practices in earthquake engineering as a new computing infrastructure is embedded within three laboratories in order to facilitate the execution of shared experiments across geographically distributed sites. The system has been developed as part of the UK Network for Earthquake Engineering Simulation e-Research project, which links together three earthquake engineering laboratories at the universities of Bristol, Cambridge and Oxford. We consider the ways in which researchers have successfully adapted their local laboratory practices through the modification of experimental procedure so that they may meet the challenges of coordinating distributed earthquake experiments.
Walking behavior on Lapangan Merdeka district in Medan city
NASA Astrophysics Data System (ADS)
Zahrah, W.; Mandai, A. J. O.; Nasution, A. D.
2018-03-01
Lapangan Merdeka district in Medan City is an area with many functions and activities. Pedestrians in this area exhibit particular walking behavior, which can be shaped by certain factors. This study aimed to identify walking behavior and motivation, as well as to determine pedestrians' perceptions of pedestrian facilities and infrastructure. This is a qualitative descriptive study, conducted on five streets that have pedestrian lanes, collecting data through observation of pedestrian facilities and infrastructure and through the distribution of questionnaires investigating pedestrian characteristics, walking behavior and motivation, and perceptions of pedestrian facilities and infrastructure. The research found that walking behavior differs according to certain pedestrian characteristics as well as the specific conditions of the facilities and infrastructure. The most dominant motivation for walking in this area is easy access to transportation. Pedestrians' perceptions also show that the pedestrian facilities and infrastructure in this area are good.
Increasing the resilience and security of the United States' power infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Happenny, Sean F.
2015-08-01
The United States' power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off of existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power infrastructure control and distribution paradigms by utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Understanding how these systems behave in real-world conditions will lead to new ways to make our power infrastructure more resilient and secure. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security, and the aging networks protecting them are becoming easier to attack.
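The kind of what-if testing such a simulation tool enables can be illustrated with a toy time-stepped feeder model: a naive feeder drops demand above its capacity, while a "smart" feeder shifts the excess to later hours. The demand profile and control rule are invented for illustration; this is not PNNL's actual simulator:

```python
# Toy hourly simulation of a distribution feeder. A naive feeder simply
# drops load above capacity; a smart feeder defers the excess to the next
# hour. Purely illustrative of what-if testing of control paradigms.
def simulate(demand, capacity, smart=False):
    served, deferred, unmet = [], 0.0, 0.0
    for load in demand:
        total = load + deferred
        if smart:
            deliver = min(total, capacity)
            deferred = total - deliver   # shift excess load to the next hour
        else:
            deliver = min(load, capacity)
            unmet += load - deliver      # naive feeder drops the excess
            deferred = 0.0
        served.append(deliver)
    return served, unmet

demand = [3.0, 5.0, 9.0, 6.0, 2.0]       # MW per hour (made-up profile)
naive_served, naive_unmet = simulate(demand, capacity=7.0)
smart_served, smart_unmet = simulate(demand, capacity=7.0, smart=True)
print(naive_unmet)        # 2.0 MW dropped at the peak hour
print(sum(smart_served))  # 25.0 MW: all demand eventually served
```

Running both control paradigms against the same demand profile and comparing outcomes is the basic experiment pattern the abstract describes, scaled down to a few lines.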
NASA Astrophysics Data System (ADS)
McKee, Shawn; Kissel, Ezra; Meekhof, Benjeman; Swany, Martin; Miller, Charles; Gregorowicz, Michael
2017-10-01
We report on the first year of the OSiRIS project (NSF Award #1541335; UM, IU, MSU and WSU), which is targeting the creation of a distributed Ceph storage infrastructure coupled with software-defined networking to provide high-performance access for well-connected locations on any participating campus. The project's goal is to provide a single scalable, distributed storage infrastructure that allows researchers at each campus to read, write, manage and share data directly from their own computing locations. The NSF CC*DNI DIBBs program which funded OSiRIS is seeking solutions to the challenges of multi-institutional collaborations involving large amounts of data, and we are exploring the creative use of Ceph and networking to address those challenges. While OSiRIS will eventually serve a broad range of science domains, its first adopter is the LHC ATLAS detector project via the ATLAS Great Lakes Tier-2 (AGLT2), jointly located at the University of Michigan and Michigan State University. Part of our presentation will cover how ATLAS is using the OSiRIS infrastructure and our experiences integrating our first user community. The presentation will also review the motivations for and goals of the project, the technical details of the OSiRIS infrastructure, the challenges in providing such an infrastructure, and the technical choices made to address those challenges. We will conclude with our plans for the remaining four years of the project and our vision for what we hope to deliver by the project's end.
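A key property of Ceph-style storage is that object placement is computed deterministically by every client rather than looked up in a central table. The sketch below uses simple rendezvous hashing to convey the idea; it is far simpler than Ceph's actual CRUSH algorithm, and the node names are hypothetical:

```python
# Simplified illustration of deterministic object placement across
# distributed storage nodes, in the spirit of (but much simpler than)
# Ceph's CRUSH: any client can compute where an object lives without
# consulting a central lookup service. Node names are hypothetical.
import hashlib

NODES = ["um-osd1", "msu-osd1", "wsu-osd1", "iu-osd1"]
REPLICAS = 3

def place(obj_name, nodes=NODES, replicas=REPLICAS):
    """Rank nodes by a per-(object, node) hash and keep the top `replicas`.
    The same inputs always yield the same placement (rendezvous hashing)."""
    ranked = sorted(
        nodes,
        key=lambda n: hashlib.sha256(f"{obj_name}/{n}".encode()).hexdigest(),
    )
    return ranked[:replicas]

placement = place("atlas/dataset-001/event.root")
print(placement)                  # three node names, always the same three
```

Because placement is a pure function of the object name and the node list, clients at every campus agree on where data lives, which is what makes direct read/write from each site possible.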
ERIC Educational Resources Information Center
Yellowlees, Peter M.; Hogarth, Michael; Hilty, Donald M.
2006-01-01
Objective: This article highlights the importance of distributed broadband networks as part of the core infrastructure necessary to deliver academic research and education programs. Method: The authors review recent developments in the field and present the University of California, Davis, environment as a case study of a future virtual regional…
Witt, Michael; Krefting, Dagmar
2016-01-01
Human sample data is stored in biobanks, with software managing the derived digital sample data. When these stand-alone components are connected and a search infrastructure is employed, users become able to collect the required research data from different data sources. Data protection, patient rights, data heterogeneity and access control are major challenges for such an infrastructure. This dissertation will investigate concepts for a multi-level security architecture to comply with these requirements.
Learning from LANCE: Developing a Web Portal Infrastructure for NASA Earth Science Data (Invited)
NASA Astrophysics Data System (ADS)
Murphy, K. J.
2013-12-01
NASA developed the Land, Atmosphere Near real-time Capability for EOS (LANCE) in response to a growing need for timely satellite observations by applications users, operational agencies and researchers. EOS capabilities originally intended for long-term Earth science research were modified to deliver satellite data products with latencies sufficient to meet the needs of the NRT user communities. LANCE products are primarily distributed as HDF data files for analysis; however, novel capabilities for the distribution of NRT imagery for visualization have been added, which have expanded the user base. Additionally, systems that convert data to information, such as the MODIS hotspot/active fire data, are provided through the Fire Information for Resource Management System (FIRMS). LANCE services include: FTP/HTTP file distribution, Rapid Response (RR), Worldview, the Global Imagery Browse Services (GIBS) and FIRMS. This paper discusses how NASA has developed services specifically for LANCE and is taking the lessons learned through these activities to develop an Earthdata web infrastructure. This infrastructure is being used as a platform to support the development of data portals that address specific science issues for much of EOSDIS data.
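Browse-imagery services of the GIBS kind are typically consumed through WMTS-style tile requests, where a client assembles a URL from layer, date and tile coordinates. The sketch below builds such a URL from the generic WMTS REST pattern; the endpoint, layer name, and matrix-set value are assumptions for illustration and should be checked against the service's documentation:

```python
# Sketch of assembling a WMTS REST tile request of the kind used by
# imagery browse services such as GIBS. The base endpoint, layer name and
# tile-matrix-set below are assumed values, not verified service config.
def wmts_tile_url(base, layer, time, matrix_set, z, row, col, ext="jpg"):
    """Build a {base}/{layer}/default/{time}/{set}/{z}/{row}/{col}.{ext} URL."""
    return f"{base}/{layer}/default/{time}/{matrix_set}/{z}/{row}/{col}.{ext}"

url = wmts_tile_url(
    "https://gibs.earthdata.nasa.gov/wmts/epsg4326/best",   # assumed endpoint
    "MODIS_Terra_CorrectedReflectance_TrueColor",           # assumed layer id
    "2013-12-01",
    "250m", 3, 2, 5,
)
print(url)
```

Because every tile address is computable client-side, any mapping library can pull NRT imagery directly without a query API in the request path, which is much of why imagery services broadened the LANCE user base.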
Benkner, Siegfried; Arbona, Antonio; Berti, Guntram; Chiarini, Alessandro; Dunlop, Robert; Engelbrecht, Gerhard; Frangi, Alejandro F; Friedrich, Christoph M; Hanser, Susanne; Hasselmeyer, Peer; Hose, Rod D; Iavindrasana, Jimison; Köhler, Martin; Iacono, Luigi Lo; Lonsdale, Guy; Meyer, Rodolphe; Moore, Bob; Rajasekaran, Hariharan; Summers, Paul E; Wöhrer, Alexander; Wood, Steven
2010-11-01
The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data presents a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data that is distributed across multiple sites in a secure way in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.
A Grid Infrastructure for Supporting Space-based Science Operations
NASA Technical Reports Server (NTRS)
Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)
2002-01-01
Emerging technologies for computational grid infrastructures have the potential to revolutionize the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide a large-scale, dynamic, and secure research and engineering environment based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing at less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.
Accelerator infrastructure in Europe: EuCARD 2011
NASA Astrophysics Data System (ADS)
Romaniuk, Ryszard S.
2011-10-01
The paper presents a digest of the research results in the domain of accelerator science and technology in Europe, shown during the annual meeting of EuCARD - European Coordination of Accelerator Research and Development. The conference concerns the building of research infrastructure, including advanced photonic and electronic systems for servicing large high-energy physics experiments. A few basic groups of such systems are debated, including: measurement and control networks of large geometrical extent, multichannel systems for the acquisition of large amounts of metrological data, and precision photonic networks for the distribution of reference time, frequency and phase.
Research Activities at Fermilab for Big Data Movement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mhashilkar, Parag; Wu, Wenji; Kim, Hyun W
2013-01-01
Adaptation of 100GE networking infrastructure is the next step towards management of big data. As the US Tier-1 Center for the Large Hadron Collider's (LHC) Compact Muon Solenoid (CMS) experiment and the central data center for several other large-scale research collaborations, Fermilab constantly deals with the scaling and wide-area distribution challenges of big data. In this paper, we describe some of the challenges involved in moving big data over 100GE infrastructure and the research activities at Fermilab that address these challenges.
Comparative-effectiveness research in distributed health data networks.
Toh, S; Platt, R; Steiner, J F; Brown, J S
2011-12-01
Comparative-effectiveness research (CER) can be conducted within a distributed health data network. Such networks allow secure access to separate data sets from different data partners and overcome many practical obstacles related to patient privacy, data security, and proprietary concerns. A scalable network architecture supports a wide range of CER activities and meets the data infrastructure needs envisioned by the Federal Coordinating Council for Comparative Effectiveness Research.
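The privacy-preserving property of such networks comes from returning only aggregates: each data partner answers a query with counts, and patient-level rows never leave the site. A toy sketch of that pattern, with invented site names and a deliberately simplified query shape, might look like this:

```python
# Toy sketch of a distributed-network query: each data partner answers
# with aggregate counts only, so row-level data never leaves a site.
def local_count(records, drug, outcome):
    """Run at the data partner: count exposed patients and observed events."""
    exposed = [r for r in records if r["drug"] == drug]
    events = sum(1 for r in exposed if r["outcome"] == outcome)
    return {"exposed": len(exposed), "events": events}

def network_query(sites, drug, outcome):
    """Run by the coordinating center: combine per-site aggregates."""
    totals = {"exposed": 0, "events": 0}
    for records in sites.values():
        local = local_count(records, drug, outcome)
        totals["exposed"] += local["exposed"]
        totals["events"] += local["events"]
    return totals

sites = {
    "site_a": [{"drug": "A", "outcome": 1}, {"drug": "A", "outcome": 0}],
    "site_b": [{"drug": "A", "outcome": 1}, {"drug": "B", "outcome": 1}],
}
print(network_query(sites, "A", 1))   # aggregates only; no rows shared
```

Real networks add authentication, audit logging, and minimum-cell-size suppression on top of this basic fan-out-and-sum structure.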
International Symposium on Grids and Clouds (ISGC) 2016
NASA Astrophysics Data System (ADS)
The International Symposium on Grids and Clouds (ISGC) 2016 will be held at Academia Sinica in Taipei, Taiwan from 13-18 March 2016, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). The theme of ISGC 2016 is "Ubiquitous e-Infrastructures and Applications". Contemporary research is impossible without a strong IT component: researchers rely on the existence of stable and widely available e-infrastructures and their higher-level functions and properties. As a result of these expectations, e-infrastructures are becoming ubiquitous, providing an environment that supports large-scale collaborations dealing with global challenges as well as smaller, temporary research communities focusing on particular scientific problems. To support these diverse communities and their needs, the e-infrastructures themselves are becoming more layered and multifaceted, supporting larger groups of applications. Following on from last year's conference, ISGC 2016 continues its aim of bringing together users and application developers with those responsible for the development and operation of multi-purpose ubiquitous e-infrastructures. Topics of discussion include: Physics (including HEP) and Engineering Applications; Biomedicine & Life Sciences Applications; Earth & Environmental Sciences & Biodiversity Applications; Humanities, Arts, and Social Sciences (HASS) Applications; Virtual Research Environments (including middleware, tools, services, workflows, etc.); Data Management; Big Data; Networking & Security; Infrastructure & Operations; Infrastructure Clouds and Virtualisation; Interoperability; Business Models & Sustainability; Highly Distributed Computing Systems; and High Performance & Technical Computing (HPTC).
NASA Astrophysics Data System (ADS)
Archibong, Belinda
While previous literature has emphasized the importance of energy and public infrastructure services for economic development, questions surrounding the implications of their unequal spatial distribution remain, particularly in the developing-country context. This dissertation provides evidence on the nature, origins and implications of this distribution, uniting three strands of research from the development and political economy, regional science and energy economics fields. The dissertation unites three papers: on the nature of spatial inequality in access to energy and infrastructure and its implications for conflict risk; on the historical institutional and biogeographical determinants of the current distribution of access to energy and public infrastructure services; and on the response of households to fuel price changes over time. Chapter 2 uses a novel survey dataset to provide evidence for spatial clustering of public infrastructure non-functionality at schools by geopolitical zone in Nigeria, with further implications for armed conflict risk in the region. Chapter 3 investigates the drivers of the results in chapter 2, exploiting variation in the spatial distribution of precolonial institutions and geography in the region, to provide evidence for the long-term impacts of these factors on the current heterogeneity of access to public services. Chapter 4 addresses the policy implications of energy access, providing the first multi-year evidence on firewood demand elasticities in India, using spatial variation in prices for estimation.
Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas
2016-06-01
Probability distributions are useful for modeling, simulation, analysis, and inference on a variety of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data analysis and inference, model fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods.
The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols.
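One classic example of the inter-distribution relations the Distributome catalogues is the convergence of Binomial(n, p) to Poisson(np) as n grows with np held fixed. The sketch below is not Distributome code; it just illustrates that relation numerically with standard-library functions.

```python
# Illustrating a Binomial -> Poisson inter-distribution relation.
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def binom_poisson_gap(n, lam, k_max=20):
    # Largest pointwise PMF difference between Binomial(n, lam/n)
    # and Poisson(lam) over k = 0..k_max.
    p = lam / n
    return max(abs(binom_pmf(k, n, p) - poisson_pmf(k, lam))
               for k in range(k_max + 1))

# The approximation tightens as n grows with n*p fixed.
print(binom_poisson_gap(10, 2.0) > binom_poisson_gap(1000, 2.0))
```

The same pattern (compute both PMFs, compare pointwise) works for other limiting relations, such as the normal approximation to the binomial.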
Deist, Timo M; Jochems, A; van Soest, Johan; Nalbantov, Georgi; Oberije, Cary; Walsh, Seán; Eble, Michael; Bulens, Paul; Coucke, Philippe; Dries, Wim; Dekker, Andre; Lambin, Philippe
2017-06-01
Machine learning applications for personalized medicine are highly dependent on access to sufficient data. For personalized radiation oncology, datasets representing the variation in the entire cancer patient population need to be acquired and used to learn prediction models. Ethical and legal boundaries to ensure data privacy hamper collaboration between research institutes. We hypothesize that data sharing is possible without identifiable patient data leaving the radiation clinics and that building machine learning applications on distributed datasets is feasible. We developed and implemented an IT infrastructure in five radiation clinics across three countries (Belgium, Germany, and The Netherlands). We present here a proof-of-principle for future 'big data' infrastructures and distributed learning studies. Lung cancer patient data was collected in all five locations and stored in local databases. Exemplary support vector machine (SVM) models were learned using the Alternating Direction Method of Multipliers (ADMM) from the distributed databases to predict post-radiotherapy dyspnea grade [Formula: see text]. The discriminative performance was assessed by the area under the curve (AUC) in a five-fold cross-validation (learning on four sites and validating on the fifth). The performance of the distributed learning algorithm was compared to centralized learning where datasets of all institutes are jointly analyzed. The euroCAT infrastructure has been successfully implemented in five radiation clinics across three countries. SVM models can be learned on data distributed over all five clinics. Furthermore, the infrastructure provides a general framework to execute learning algorithms on distributed data. The ongoing expansion of the euroCAT network will facilitate machine learning in radiation oncology. The resulting access to larger datasets with sufficient variation will pave the way for generalizable prediction models and personalized medicine.
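The consensus learning scheme described above can be sketched in miniature: each "clinic" performs a local ADMM x-update against its own records, and only model vectors travel to the coordinator. This is a toy re-implementation under simplifying assumptions (a plain subgradient inner solver, synthetic separable data, invented site data), not the euroCAT code.

```python
# Toy consensus ADMM for a linear SVM trained across sites without pooling rows.
import random

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def axpy(a, x, y):
    # elementwise a*x + y
    return [a * xi + yi for xi, yi in zip(x, y)]

def local_update(data, z, u, rho, steps=60, lr=0.02):
    # One site approximately minimises hinge(x) + (rho/2)||x - z + u||^2
    # by subgradient descent; raw records never leave the site.
    x = list(z)
    for _ in range(steps):
        g = [rho * (xi - zi + ui) for xi, zi, ui in zip(x, z, u)]
        for feats, label in data:
            if label * dot(feats, x) < 1:          # hinge margin violated
                g = axpy(-label / len(data), feats, g)
        x = axpy(-lr, g, x)
    return x

def consensus_admm(sites, dim, rho=1.0, lam=0.1, iters=25):
    us = [[0.0] * dim for _ in sites]
    z = [0.0] * dim
    for _ in range(iters):
        # local x-updates: run in parallel at the clinics
        xs = [local_update(s, z, u, rho) for s, u in zip(sites, us)]
        # global z-update: averaging with L2 shrinkage (lam/2)||z||^2
        acc = [0.0] * dim
        for x, u in zip(xs, us):
            acc = axpy(1.0, axpy(1.0, x, u), acc)
        z = [rho * a / (lam + rho * len(sites)) for a in acc]
        # dual updates pull the sites toward consensus
        us = [axpy(1.0, x, axpy(-1.0, z, u)) for x, u in zip(xs, us)]
    return z

random.seed(0)
w_true = [1.0, -2.0, 0.5, 1.5]

def make_site(n):
    data = []
    for _ in range(n):
        feats = [random.uniform(-1, 1) for _ in w_true]
        data.append((feats, 1 if dot(feats, w_true) > 0 else -1))
    return data

sites = [make_site(40) for _ in range(5)]     # five "clinics"
w = consensus_admm(sites, dim=len(w_true))
accuracy = sum(1 for s in sites for f, y in s
               if (1 if dot(f, w) > 0 else -1) == y) / 200
print(round(accuracy, 2))
```

The key privacy property mirrors the paper's claim: `local_update` sees only its own site's rows, while the coordinator sees only the per-site weight and dual vectors.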
NASA Technical Reports Server (NTRS)
Murphy, James R.; Otto, Neil M.
2017-01-01
NASA's Unmanned Aircraft Systems Integration in the National Airspace System Project is conducting human-in-the-loop simulations and flight testing intended to reduce barriers associated with enabling routine airspace access for unmanned aircraft. The primary focus of these tests is the interaction of the unmanned aircraft pilot with the display of detect-and-avoid alerting and guidance information. The project's integrated test and evaluation team was charged with developing the test infrastructure. As with any development effort, compromises in the underlying system architecture and design were made to allow for the rapid prototyping and open-ended nature of the research. In order to accommodate these design choices, a distributed test environment was developed incorporating Live, Virtual, Constructive (LVC) concepts. The LVC components form the core infrastructure that supports simulation of UAS operations by integrating live and virtual aircraft in a realistic air traffic environment. This LVC infrastructure enables efficient testing by leveraging existing assets distributed across multiple NASA Centers. Using standard LVC concepts enables future integration with existing simulation infrastructure.
INDIGO-DataCloud solutions for Earth Sciences
NASA Astrophysics Data System (ADS)
Aguilar Gómez, Fernando; de Lucas, Jesús Marco; Fiore, Sandro; Monna, Stephen; Chen, Yin
2017-04-01
INDIGO-DataCloud (https://www.indigo-datacloud.eu/) is a European Commission funded project aiming to develop a data and computing platform targeting scientific communities, deployable on multiple hardware platforms and provisioned over hybrid (private or public) e-infrastructures. The development of INDIGO solutions covers the different layers in cloud computing (IaaS, PaaS, SaaS) and provides tools to exploit resources like HPC or GPGPUs. INDIGO is oriented to support European scientific research communities, which are well represented in the project. Twelve different Case Studies have been analyzed in detail from different fields: Biological & Medical sciences, Social sciences & Humanities, Environmental and Earth sciences, and Physics & Astrophysics. INDIGO-DataCloud provides solutions to emerging challenges in Earth Science, such as: -Enabling easy deployment of community services at different cloud sites. Many Earth Science research infrastructures involve observation stations distributed across countries, and also have distributed data centers to support the corresponding data acquisition and curation. There is a need to easily deploy new data center services as the research infrastructure continues to expand. As an example, LifeWatch (ESFRI, Ecosystems and Biodiversity) uses INDIGO solutions to manage the deployment of services that perform complex hydrodynamics and water quality modelling in a cloud computing environment, predicting algae blooms, using Docker technology: TOSCA requirement description, Docker repository, Orchestrator for deployment, AAI (AuthN, AuthZ) and OneData (distributed storage system). -Supporting Big Data analysis. Nowadays, many Earth Science research communities produce large amounts of data and are challenged by the difficulties of processing and analysing it.
A climate model intercomparison data analysis case study for the European Network for Earth System Modelling (ENES) community has been set up, based on the Ophidia big data analysis framework and the Kepler workflow management system. Such services normally involve a large and distributed set of data and computing resources. In this regard, this case study exploits the INDIGO PaaS for a flexible and dynamic allocation of resources at the infrastructural level. -Providing distributed data storage solutions. In order to allow scientific communities to perform heavy computation on huge datasets, INDIGO provides global data access solutions allowing researchers to access data in a distributed environment regardless of its location, and also to publish and share their research results with public or closed communities. INDIGO solutions that support access to distributed data storage (OneData) are being tested on data from the EMSO infrastructure (Ocean Sciences and Geohazards). Another aspect of interest for the EMSO community is efficient data processing by exploiting INDIGO services like the PaaS Orchestrator. Further, for HPC exploitation, a new solution named Udocker has been implemented, enabling users to execute Docker containers on supercomputers without requiring administration privileges. This presentation will overview INDIGO solutions that are interesting and useful for Earth science communities and will show how they can be applied to other Case Studies.
Schneider, Maria Victoria; Griffin, Philippa C; Tyagi, Sonika; Flannery, Madison; Dayalan, Saravanan; Gladman, Simon; Watson-Haigh, Nathan; Bayer, Philipp E; Charleston, Michael; Cooke, Ira; Cook, Rob; Edwards, Richard J; Edwards, David; Gorse, Dominique; McConville, Malcolm; Powell, David; Wilkins, Marc R; Lonie, Andrew
2017-06-30
EMBL Australia Bioinformatics Resource (EMBL-ABR) is a developing national research infrastructure, providing bioinformatics resources and support to life science and biomedical researchers in Australia. EMBL-ABR comprises 10 geographically distributed national nodes with one coordinating hub, with current funding provided through Bioplatforms Australia and the University of Melbourne for its initial 2-year development phase. The EMBL-ABR mission is to: (1) increase Australia's capacity in bioinformatics and data sciences; (2) contribute to the development of training in bioinformatics skills; (3) showcase Australian data sets at an international level and (4) enable engagement in international programs. The activities of EMBL-ABR are focussed in six key areas, aligning with comparable international initiatives such as ELIXIR, CyVerse and NIH Commons. These key areas-Tools, Data, Standards, Platforms, Compute and Training-are described in this article. © The Author 2017. Published by Oxford University Press.
Outlook for grid service technologies within the @neurIST eHealth environment.
Arbona, A; Benkner, S; Fingberg, J; Frangi, A F; Hofmann, M; Hose, D R; Lonsdale, G; Ruefenacht, D; Viceconti, M
2006-01-01
The aim of the @neurIST project is to create an IT infrastructure for the management of all processes linked to research, diagnosis and treatment development for complex and multi-factorial diseases. The IT infrastructure will be developed for one such disease, cerebral aneurysm and subarachnoid haemorrhage, but its core technologies will be transferable to meet the needs of other medical areas. Since the IT infrastructure for @neurIST will need to encompass data repositories, computational analysis services and information systems handling multi-scale, multi-modal information at distributed sites, the natural basis for the IT infrastructure is a Grid Service middleware. The project will adopt a service-oriented architecture because it aims to provide a system addressing the needs of medical researchers, clinicians and health care specialists (and their IT providers/systems) and medical supplier/consulting industries.
Holub, P; Greplova, K; Knoflickova, D; Nenutil, R; Valik, D
2012-01-01
We introduce the national research biobanking infrastructure, BBMRI_CZ. The infrastructure was founded by the Ministry of Education and became a partner of the European biobanking infrastructure BBMRI.eu. It is designed as a network of individual biobanks where each biobank stores samples obtained from associated healthcare providers. The biobanks comprise long-term storage (various types of tissues classified by diagnosis, serum at surgery, genomic DNA and RNA) and short-term storage (longitudinally sampled patient sera). We discuss the operation workflow of the infrastructure, which needs to be a distributed system: transfer of samples to the biobank must be accompanied by extraction of data from the hospital information systems, and these data must be stored in a central index serving mainly for sample lookup. Since BBMRI_CZ is designed solely for research purposes, the data are anonymised prior to their integration into the central BBMRI_CZ index. The index is then available for registered researchers to search for samples of interest and to request the samples from biobank managers. The paper provides an overview of the structure of data stored in the index. We also discuss the monitoring system incorporated into the biobanks to ensure the quality of the stored samples.
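The anonymise-then-index workflow described above can be sketched as follows. The pseudonymisation scheme (a salted SHA-256 digest) and all record fields are illustrative assumptions, not the actual BBMRI_CZ design.

```python
# Toy sketch of the BBMRI_CZ-style pattern: biobanks push anonymised
# sample metadata to a central index that serves only for sample lookup.
import hashlib

SALT = "local-secret"   # kept inside the biobank, never published

def anonymise(record):
    # Replace the patient identifier with a one-way pseudonym and drop
    # direct identifiers before the record leaves the hospital system.
    pseudonym = hashlib.sha256((SALT + record["patient_id"]).encode()).hexdigest()[:12]
    return {"pseudonym": pseudonym,
            "diagnosis": record["diagnosis"],
            "material": record["material"],
            "biobank": record["biobank"]}

def build_index(records):
    # Central index keyed by diagnosis, for researcher sample lookup.
    index = {}
    for r in map(anonymise, records):
        index.setdefault(r["diagnosis"], []).append(r)
    return index

records = [
    {"patient_id": "P001", "diagnosis": "C18", "material": "tissue", "biobank": "MMCI"},
    {"patient_id": "P002", "diagnosis": "C18", "material": "serum", "biobank": "MMCI"},
]
index = build_index(records)
print([r["material"] for r in index["C18"]])
```

A researcher querying the index learns which biobank holds matching material, but recovering a patient identity would require the salt held only by the originating biobank.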
NASA Technical Reports Server (NTRS)
Hart, Andrew F.; Verma, Rishi; Mattmann, Chris A.; Crichton, Daniel J.; Kelly, Sean; Kincaid, Heather; Hughes, Steven; Ramirez, Paul; Goodale, Cameron; Anton, Kristen;
2012-01-01
For the past decade, the NASA Jet Propulsion Laboratory, in collaboration with Dartmouth University has served as the center for informatics for the Early Detection Research Network (EDRN). The EDRN is a multi-institution research effort funded by the U.S. National Cancer Institute (NCI) and tasked with identifying and validating biomarkers for the early detection of cancer. As the distributed network has grown, increasingly formal processes have been developed for the acquisition, curation, storage, and dissemination of heterogeneous research information assets, and an informatics infrastructure has emerged. In this paper we discuss the evolution of EDRN informatics, its success as a mechanism for distributed information integration, and the potential sustainability and reuse benefits of emerging efforts to make the platform components themselves open source. We describe our experience transitioning a large closed-source software system to a community driven, open source project at the Apache Software Foundation, and point to lessons learned that will guide our present efforts to promote the reuse of the EDRN informatics infrastructure by a broader community.
Bernal-Delgado, Enrique; Estupiñán-Romero, Francisco
2018-01-01
The integration of different administrative data sources from a number of European countries has been shown to be useful in the assessment of unwarranted variations in health care performance. This essay describes the procedures used to set up a data infrastructure (e.g., data access and exchange, definition of the minimum common set of data required, and development of the relational logic data model) and the methods used to produce trustworthy healthcare performance measurements (e.g., ontology standardisation and quality assurance analysis). The paper ends by providing some hints on how to use these lessons in an eventual European infrastructure for public health research and monitoring. Although the relational data infrastructure developed has proven accurate, effective for comparing health system performance across different countries, and efficient enough to deal with hundreds of millions of episodes, the logic data model might not be responsive if the European infrastructure aims to include electronic health records and carry out multi-cohort, multi-intervention comparative effectiveness research. The deployment of a distributed infrastructure based on semantic interoperability, where individual data remain in-country and open-access scripts for data management and analysis travel around the hubs composing the infrastructure, might be a sensible way forward.
2007-04-30
4th Annual Acquisition Research Symposium of the Naval Postgraduate School. Approved for public release, distribution unlimited. "... for the Program Management Infrastructure", published 30 April 2007 by Rene G. Rendon, Lecturer, and Uday Apte, Professor, Naval Postgraduate School, where Rendon teaches graduate acquisition and contract management courses.
NASA Astrophysics Data System (ADS)
Kałamucki, Krzysztof; Kamińska, Anna; Buk, Dorota
2012-01-01
The aim of the research was to demonstrate changes in tourist trails and in the distribution of tourist infrastructure spots in the area of Roztoczański National Park and its vicinity. Another, equally important aim was to assess the usefulness of cartographic research methods and cartographic presentation methods in studying tourist infrastructure. The research covered the region of Roztoczański National Park. The following elements of tourist infrastructure were selected for the analysis: linear elements (walking trails, education paths) and spot elements (accommodation, eating places and accompanying facilities). In order to recreate the state of the infrastructure over the last 50 years, the following source materials were analysed: tourist maps issued as independent publications, maps issued as supplements to tour guides, and aerial photography. Information from text sources was also used, e.g. tourist guides, leaflets and monographs. The temporal framework was defined as the 50 years from the 1960s until 2009, divided into five 10-year periods. To present the state of tourist infrastructure and its spatial and qualitative changes, six maps were produced (maps of states and of types of changes). The spatial analyses and the interpretation of the maps of states and changes made it possible to capture both qualitative and quantitative changes. The changes in the trails were not regular: some sections did not change for 40 years, while others were constructed only during the last decade. Presently, the area is densely covered with tourist trails and education paths. Measuring the lengths of tourist trails and their parts with regard to land cover and road category made it possible to determine the character of the trails and the scope of the changes.
The conducted analyses proved the usefulness of cartographic methods for researching tourist infrastructure in its spatial and quantitative aspects.
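The length measurements behind such change maps reduce to summing segment lengths of digitised polylines per survey period. The sketch below uses invented grid coordinates purely to illustrate the computation; real trail geometries would come from the digitised maps.

```python
# Hypothetical sketch of per-period trail-length measurement and net change.
import math

def polyline_length(points):
    # Sum of straight-line segment lengths along the digitised trail.
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

surveys = {
    1969: [(0, 0), (3, 4), (6, 4)],          # km grid coordinates (assumed)
    2009: [(0, 0), (3, 4), (6, 4), (6, 8)],  # a segment added in later decades
}
lengths = {year: polyline_length(pts) for year, pts in surveys.items()}
print(lengths[2009] - lengths[1969])   # net growth of this trail, in km
```

Grouping the same per-segment sums by land-cover class or road category gives the breakdowns the study reports.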
Schilling, Lisa M.; Kwan, Bethany M.; Drolshagen, Charles T.; Hosokawa, Patrick W.; Brandt, Elias; Pace, Wilson D.; Uhrich, Christopher; Kamerick, Michael; Bunting, Aidan; Payne, Philip R.O.; Stephens, William E.; George, Joseph M.; Vance, Mark; Giacomini, Kelli; Braddy, Jason; Green, Mika K.; Kahn, Michael G.
2013-01-01
Introduction: Distributed Data Networks (DDNs) offer infrastructure solutions for sharing electronic health data from across disparate data sources to support comparative effectiveness research. Data sharing mechanisms must address technical and governance concerns stemming from network security and data disclosure laws and best practices, such as HIPAA. Methods: The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) deploys TRIAD grid technology, a common data model, detailed technical documentation, and custom software for data harmonization to facilitate data sharing in collaboration with stakeholders in the care of safety net populations. Data sharing partners host TRIAD grid nodes containing harmonized clinical data within their internal or hosted network environments. Authorized users can use a central web-based query system to request analytic data sets. Discussion: SAFTINet DDN infrastructure achieved a number of data sharing objectives, including scalable and sustainable systems for ensuring harmonized data structures and terminologies and secure distributed queries. Initial implementation challenges were resolved through iterative discussions, development and implementation of technical documentation, governance, and technology solutions. PMID:25848567
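The harmonization step a SAFTINet-style node performs before it can answer federated queries amounts to mapping local field names and code values onto a shared common data model. The mapping tables below are invented for illustration; the real system uses a formally specified CDM and terminology services.

```python
# Minimal sketch of harmonizing a partner's local schema to a common data model.
LOCAL_TO_CDM = {"pat_sex": "gender", "dx_code": "condition", "enc_dt": "visit_date"}
VALUE_MAPS = {"gender": {"M": "male", "F": "female"}}

def harmonise(row):
    out = {}
    for local_field, value in row.items():
        cdm_field = LOCAL_TO_CDM.get(local_field)
        if cdm_field is None:
            continue                      # drop fields the CDM does not cover
        # Translate coded values where a terminology mapping exists.
        out[cdm_field] = VALUE_MAPS.get(cdm_field, {}).get(value, value)
    return out

row = {"pat_sex": "F", "dx_code": "E11", "enc_dt": "2013-01-07", "mrn": "12345"}
print(harmonise(row))
```

Note that the local medical record number (`mrn`) is simply dropped, mirroring the disclosure-law concerns the abstract raises: only CDM-conformant, de-identified fields become queryable.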
Schilling, Lisa M; Kwan, Bethany M; Drolshagen, Charles T; Hosokawa, Patrick W; Brandt, Elias; Pace, Wilson D; Uhrich, Christopher; Kamerick, Michael; Bunting, Aidan; Payne, Philip R O; Stephens, William E; George, Joseph M; Vance, Mark; Giacomini, Kelli; Braddy, Jason; Green, Mika K; Kahn, Michael G
2013-01-01
Distributed Data Networks (DDNs) offer infrastructure solutions for sharing electronic health data from across disparate data sources to support comparative effectiveness research. Data sharing mechanisms must address technical and governance concerns stemming from network security and data disclosure laws and best practices, such as HIPAA. The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) deploys TRIAD grid technology, a common data model, detailed technical documentation, and custom software for data harmonization to facilitate data sharing in collaboration with stakeholders in the care of safety net populations. Data sharing partners host TRIAD grid nodes containing harmonized clinical data within their internal or hosted network environments. Authorized users can use a central web-based query system to request analytic data sets. SAFTINet DDN infrastructure achieved a number of data sharing objectives, including scalable and sustainable systems for ensuring harmonized data structures and terminologies and secure distributed queries. Initial implementation challenges were resolved through iterative discussions, development and implementation of technical documentation, governance, and technology solutions.
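The central-query-over-distributed-nodes pattern that SAFTINet describes can be sketched in miniature. The node names, record layout, and query interface below are hypothetical stand-ins for illustration, not the actual TRIAD grid API:

```python
# Hypothetical sketch of a federated query: a central coordinator sends the
# same selection criteria to each partner node, which evaluates it locally
# over its harmonized data and returns only the resulting analytic rows.
from concurrent.futures import ThreadPoolExecutor

# Invented harmonized records held by each partner node (never pooled centrally).
NODES = {
    "clinic_a": [{"age": 34, "dx": "asthma"}, {"age": 61, "dx": "copd"}],
    "clinic_b": [{"age": 45, "dx": "asthma"}, {"age": 29, "dx": "diabetes"}],
}

def query_node(rows, predicate):
    # Runs inside the node's trust boundary; only matching rows leave the node.
    return [r for r in rows if predicate(r)]

def federated_query(nodes, predicate):
    # Fan the query out to all nodes in parallel, then collect per-node results.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(query_node, rows, predicate)
                   for name, rows in nodes.items()}
        return {name: fut.result() for name, fut in futures.items()}

result = federated_query(NODES, lambda r: r["dx"] == "asthma")
```

The design point mirrored here is that harmonization happens before querying, so one predicate is meaningful at every node.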
A distributed telerobotics construction set
NASA Technical Reports Server (NTRS)
Wise, James D.
1994-01-01
During the course of our research on distributed telerobotic systems, we have assembled a collection of generic, reusable software modules and an infrastructure for connecting them to form a variety of telerobotic configurations. This paper describes the structure of this 'Telerobotics Construction Set' and lists some of the components which comprise it.
Nuclear Energy Infrastructure Database Fitness and Suitability Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heidrich, Brenden
In 2014, the Deputy Assistant Secretary for Science and Technology Innovation (NE-4) initiated the Nuclear Energy-Infrastructure Management Project by tasking the Nuclear Science User Facilities (NSUF) to create a searchable and interactive database of all pertinent NE-supported or -related infrastructure. This database will be used for analyses to establish needs, redundancies, efficiencies, distributions, etc., in order to best understand the utility of NE’s infrastructure and inform the content of the infrastructure calls. The NSUF developed the database by utilizing data and policy direction from a wide variety of reports from the Department of Energy, the National Research Council, the International Atomic Energy Agency and various other federal and civilian resources. The NEID contains data on 802 R&D instruments housed in 377 facilities at 84 institutions in the US and abroad. A Database Review Panel (DRP) was formed to review and provide advice on the development, implementation and utilization of the NEID. The panel comprises five members with expertise in nuclear energy-associated research, representing the major constituencies of nuclear energy research: academia, industry, research reactors, national laboratories, and Department of Energy program management. The Nuclear Energy Infrastructure Database Review Panel concludes that the NSUF has succeeded in creating a capability and infrastructure database that identifies and documents the major nuclear energy research and development capabilities across the DOE complex. The effort to maintain and expand the database will be ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements.
The European Network of Analytical and Experimental Laboratories for Geosciences
NASA Astrophysics Data System (ADS)
Freda, Carmela; Funiciello, Francesca; Meredith, Phil; Sagnotti, Leonardo; Scarlato, Piergiorgio; Troll, Valentin R.; Willingshofer, Ernst
2013-04-01
Integrating Earth Sciences infrastructures in Europe is the mission of the European Plate Observing System (EPOS). The integration of European analytical, experimental, and analogue laboratories plays a key role in this context and is the task of the EPOS Working Group 6 (WG6). Despite the presence in Europe of high-performance infrastructures dedicated to geosciences, there is still limited collaboration in sharing facilities and best practices. The EPOS WG6 aims to overcome this limitation by pushing towards national and trans-national coordination, efficient use of current laboratory infrastructures, and future aggregation of facilities not yet included. This will be attained through the creation of common access and interoperability policies to foster and simplify personnel mobility. The EPOS ambition is to orchestrate European laboratory infrastructures with diverse, complementary tasks and competences into a single, but geographically distributed, infrastructure for rock physics, palaeomagnetism, analytical and experimental petrology and volcanology, and tectonic modeling. The WG6 is presently organizing its thematic core services within the EPOS distributed research infrastructure with the goal of joining the other EPOS communities (geologists, seismologists, volcanologists, etc.) and stakeholders (engineers, risk managers, and other geosciences investigators) to: 1) develop tools and services to enhance visitor programs that will mutually benefit visitors and hosts (transnational access); 2) improve support and training activities to make facilities equally accessible to students, young researchers, and experienced users (training and dissemination); 3) collaborate in sharing technological and scientific know-how (transfer of knowledge); 4) optimize interoperability of distributed instrumentation by standardizing data collection, archiving, and quality control (data preservation and interoperability); 5) implement a unified e-Infrastructure for data analysis, numerical modelling, and joint development and standardization of numerical tools (e-science implementation); 6) collect and store data in a flexible inventory database accessible within and beyond the Earth Sciences community (open access and outreach); 7) connect to environmental and hazard protection agencies, stakeholders, and the public to raise awareness of geo-hazards and geo-resources (innovation for society). We will inform scientists and industrial stakeholders of the most recent WG6 achievements in EPOS and show how our community is proceeding to design the thematic core services.
The International Symposium on Grids and Clouds
NASA Astrophysics Data System (ADS)
The International Symposium on Grids and Clouds (ISGC) 2012 will be held at Academia Sinica in Taipei from 26 February to 2 March 2012, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). 2012 marks the tenth anniversary of the ISGC, which over the last decade has tracked the convergence, collaboration, and innovation of individual researchers across the Asia-Pacific region into a coherent community. With the continuous support and dedication of the delegates, ISGC has provided the primary international distributed computing platform where distinguished researchers and collaboration partners from around the world share their knowledge and experiences. The last decade has seen the wide-scale emergence of e-Infrastructure as a critical asset for the modern e-Scientist. The emergence of large-scale research infrastructures and instruments that produce a torrent of electronic data is forcing a generational change in the scientific process and the mechanisms used to analyse the resulting data deluge. No longer can the processing of these vast amounts of data and the production of relevant scientific results be undertaken by a single scientist. Virtual Research Communities that span organisations around the world, through an integrated digital infrastructure that connects the trust and administrative domains of multiple resource providers, have become critical in supporting these analyses. Topics covered in ISGC 2012 include: High Energy Physics, Biomedicine & Life Sciences, Earth Science, Environmental Changes and Natural Disaster Mitigation, Humanities & Social Sciences, Operations & Management, Middleware & Interoperability, Security and Networking, Infrastructure Clouds & Virtualisation, Business Models & Sustainability, Data Management, Distributed Volunteer & Desktop Grid Computing, High Throughput Computing, and High Performance, Manycore & GPU Computing.
Development of the AuScope Australian Earth Observing System
NASA Astrophysics Data System (ADS)
Rawling, T.
2017-12-01
Advances in monitoring technology and significant investment in new national research initiatives will provide significant new opportunities for the delivery of novel geoscience data streams from across the Australian continent over the next decade. The AuScope Australian Earth Observing System (AEOS) is linking field and laboratory infrastructure across Australia to form a national sensor array focusing on the Solid Earth. As such, AuScope is working with these programs to deploy observational infrastructure, including MT, passive seismic, and GNSS networks across the entire Australian continent. Where possible, the observational grid will be co-located with strategic basement drilling in areas of shallow cover and tied to national reflection seismic and sampling transects. This integrated suite of distributed earth observation and imaging sensors will provide unprecedented imaging fidelity of our crust, across all length and time scales, to fundamental and applied researchers in the earth, environmental, and geospatial sciences. The AEOS will be the Earth Science community's Square Kilometer Array (SKA) - a distributed telescope that looks INTO the earth rather than away from it - a 10 million SKA. The AEOS is strongly aligned with other community strategic initiatives, including the UNCOVER research program, as well as other National Collaborative Research Infrastructure programs such as the Terrestrial Environmental Research Network (TERN) and the Integrated Marine Observing System (IMOS), providing an interdisciplinary collaboration platform across the earth and environmental sciences. There is also very close alignment between AuScope and similar international programs such as EPOS, the USArray and EarthCube - potential collaborative linkages we are currently in the process of pursuing more formally.
The AuScope AEOS Infrastructure System is ultimately designed to enable the progressive construction, refinement and ongoing enrichment of a live, "FAIR" four-dimensional Earth Model for the Australian Continent and its immediate environs.
NHERI: Advancing the Research Infrastructure of the Multi-Hazard Community
NASA Astrophysics Data System (ADS)
Blain, C. A.; Ramirez, J. A.; Bobet, A.; Browning, J.; Edge, B.; Holmes, W.; Johnson, D.; Robertson, I.; Smith, T.; Zuo, D.
2017-12-01
The Natural Hazards Engineering Research Infrastructure (NHERI), supported by the National Science Foundation (NSF), is a distributed, multi-user national facility that provides the natural hazards research community with access to an advanced research infrastructure. NHERI comprises a Network Coordination Office (NCO), a cloud-based cyberinfrastructure (DesignSafe-CI), a computational modeling and simulation center (SimCenter), and eight Experimental Facilities (EFs), including a post-disaster, rapid response research facility (RAPID). Ultimately, NHERI enables researchers to explore and test ground-breaking concepts to protect homes, businesses, and infrastructure lifelines from earthquakes, windstorms, tsunamis, and storm surge, enabling innovations that help prevent natural hazards from becoming societal disasters. When coupled with education and community outreach, NHERI will facilitate research and educational advances that contribute knowledge and innovation toward improving the resiliency of the nation's civil infrastructure to withstand natural hazards. The unique capabilities and coordinating activities over Year 1 between NHERI's DesignSafe-CI, the SimCenter, and individual EFs will be presented. Basic descriptions of each component are also found at https://www.designsafe-ci.org/facilities/. Also discussed are the various roles of the NCO in leading development of a 5-year multi-hazard science plan, coordinating facility scheduling and fostering the sharing of technical knowledge and best practices, leading education and outreach programs such as the recent Summer Institute and multi-facility REU program, ensuring a platform for technology transfer to practicing engineers, and developing strategic national and international partnerships to support a diverse multi-hazard research and user community.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-16
... Partners, Inc., Corporate Center Division, Group Technology Infrastructure Services, Infrastructure Service... Infrastructure Services, Distributed Systems and Storage Group, Chicago, Illinois. The workers provide... unit formerly known as Group Technology Infrastructure Services, Distributed Systems and Storage is...
S3DB core: a framework for RDF generation and management in bioinformatics infrastructures
2010-01-01
Background Biomedical research is set to greatly benefit from the use of semantic web technologies in the design of computational infrastructure. However, beyond well-defined research initiatives, substantial issues of data heterogeneity, source distribution, and privacy currently stand in the way of the personalization of Medicine. Results A computational framework for bioinformatic infrastructure was designed to deal with the heterogeneous data sources and the sensitive mixture of public and private data that characterizes the biomedical domain. This framework consists of a logical model built with semantic web tools, coupled with a Markov process that propagates user operator states. An accompanying open source prototype was developed to meet a series of applications that range from collaborative multi-institution data acquisition efforts to data analysis applications that need to quickly traverse complex data structures. This report describes the two abstractions underlying the S3DB-based infrastructure, logical and numerical, and discusses its generality beyond the immediate confines of existing implementations. Conclusions The emergence of the "web as a computer" requires a formal model for the different functionalities involved in reading and writing to it. The S3DB core model proposed was found to address the design criteria of biomedical computational infrastructure, such as those supporting large-scale multi-investigator research, clinical trials, and molecular epidemiology. PMID:20646315
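The RDF-generation idea behind such frameworks can be illustrated by mapping tabular records to subject-predicate-object triples. This is a rough sketch, not the actual S3DB API; the prefix, subject identifier, and field names are invented for the example:

```python
# Hypothetical sketch: mapping a tabular biomedical record to RDF-style
# subject-predicate-object triples, the representation that RDF-generating
# infrastructures build on. Prefix and field names are invented.
def row_to_triples(subject, row, prefix="ex"):
    # One triple per (column, value) pair, e.g. (patient:1, ex:age, "42").
    return [(subject, f"{prefix}:{key}", str(value)) for key, value in row.items()]

def to_ntriples(triples):
    # Simplified N-Triples-like serialization (string literals only).
    return "\n".join(f'<{s}> <{p}> "{o}" .' for s, p, o in triples)

triples = row_to_triples("patient:1", {"age": 42, "cohort": "trial-A"})
```

Because every record becomes uniform triples, heterogeneous sources can be merged by simple concatenation and queried with one vocabulary.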
Pan, Jeng-Jong; Nahm, Meredith; Wakim, Paul; Cushing, Carol; Poole, Lori; Tai, Betty; Pieper, Carl F
2009-02-01
Clinical trial networks (CTNs) were created to provide a sustaining infrastructure for the conduct of multisite clinical trials. As such, they must withstand changes in membership. Centralization of infrastructure including knowledge management, portfolio management, information management, process automation, work policies, and procedures in clinical research networks facilitates consistency and ultimately research. In 2005, the National Institute on Drug Abuse (NIDA) CTN transitioned from a distributed data management model to a centralized informatics infrastructure to support the network's trial activities and administration. We describe the centralized informatics infrastructure and discuss our challenges to inform others considering such an endeavor. During the migration of a clinical trial network from a decentralized to a centralized data center model, descriptive data were captured and are presented here to assess the impact of centralization. We present the framework for the informatics infrastructure and evaluative metrics. The network has decreased the time from last patient-last visit to database lock from an average of 7.6 months to 2.8 months. The average database error rate decreased from 0.8% to 0.2%, with a corresponding decrease in the interquartile range from 0.04%-1.0% before centralization to 0.01-0.27% after centralization. Centralization has provided the CTN with integrated trial status reporting and the first standards-based public data share. A preliminary cost-benefit analysis showed a 50% reduction in data management cost per study participant over the life of a trial. A single clinical trial network comprising addiction researchers and community treatment programs was assessed. The findings may not be applicable to other research settings. The identified informatics components provide the information and infrastructure needed for our clinical trial network. 
Post centralization data management operations are more efficient and less costly, with higher data quality.
Real-Time Optimization and Control of Next-Generation Distribution Infrastructure | Grid Modernization | NREL
This project develops innovative, real-time optimization and control methods for next-generation distribution infrastructure.
Challenges for the Protection of Critical ICT-Based Financial Infrastructures
NASA Astrophysics Data System (ADS)
Hämmerli, Bernhard M.; Arendt, Henning H.
A workshop was held in Frankfurt during September 24-25, 2007, in order to initiate a dialogue between financial industry (FI) stakeholders and Europe’s top-level research community. The workshop focused on identifying research and development challenges for the protection of critical ICT-based financial infrastructures over the next 5 years: “Protection of Massively Distributed Critical Financial Services” and “Trust in New Value Added Business Chains”. The outcome of the workshop contributed to the development of the research agenda from the perspectives of three working groups. A number of project ideas were spawned by the workshop, including a coordination action project entitled PARSIFAL, which this paper will focus on.
Cronin, Matthew A.; Amstrup, Steven C.; Durner, George M.; Noel, Lynn E.; McDonald, Trent L.; Ballard, Warren B.
1998-01-01
There is concern that caribou (Rangifer tarandus) may avoid roads and facilities (i.e., infrastructure) in the Prudhoe Bay oil field (PBOF) in northern Alaska, and that this avoidance can have negative effects on the animals. We quantified the relationship between caribou distribution and PBOF infrastructure during the post-calving period (mid-June to mid-August) with aerial surveys from 1990 to 1995. We conducted four to eight surveys per year with complete coverage of the PBOF. We identified active oil field infrastructure and used a geographic information system (GIS) to construct ten 1 km wide concentric intervals surrounding the infrastructure. We tested whether caribou distribution is related to distance from infrastructure with a chi-squared habitat utilization-availability analysis and log-linear regression. We considered bulls, calves, and total caribou of all sex/age classes separately. The habitat utilization-availability analysis indicated there was no consistent trend of attraction to or avoidance of infrastructure. Caribou frequently were more abundant than expected in the intervals close to infrastructure, and this trend was more pronounced for bulls and for total caribou of all sex/age classes than for calves. Log-linear regression (with Poisson error structure) of numbers of caribou on distance from infrastructure was also performed, with and without combining data into the 1 km distance intervals. The analysis without intervals revealed no relationship between caribou distribution and distance from oil field infrastructure, or between caribou distribution and Julian date, year, or distance from the Beaufort Sea coast. The log-linear regression with caribou combined into distance intervals showed that the density of bulls and total caribou of all sex/age classes declined with distance from infrastructure.
Our results indicate that during the post-calving period: 1) caribou distribution is largely unrelated to distance from infrastructure; 2) caribou regularly use habitats in the PBOF; 3) caribou often occur close to infrastructure; and 4) caribou do not appear to avoid oil field infrastructure.
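A log-linear regression with Poisson error structure, as used in the analysis above, can be sketched from first principles with a Newton-Raphson fit; the count data below are synthetic illustrations, not the caribou survey data:

```python
# Sketch of log-linear (Poisson) regression of counts on one covariate:
# log(mu_i) = b0 + b1 * x_i, fitted by Newton-Raphson on the Poisson
# log-likelihood. Data below are synthetic, generated with slope -0.2.
import math

def poisson_loglinear_fit(x, y, iters=25):
    b0, b1 = math.log(max(sum(y) / len(y), 1e-9)), 0.0  # start at log mean count
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # Gradient of the log-likelihood: X^T (y - mu).
        g0 = sum(yi - mi for yi, mi in zip(y, mu))
        g1 = sum(xi * (yi - mi) for xi, yi, mi in zip(x, y, mu))
        # Observed information (negative Hessian): X^T diag(mu) X.
        h00 = sum(mu)
        h01 = sum(xi * mi for xi, mi in zip(x, mu))
        h11 = sum(xi * xi * mi for xi, mi in zip(x, mu))
        det = h00 * h11 - h01 * h01
        # Newton step via explicit 2x2 solve.
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Synthetic counts declining with distance (true intercept 3.0, slope -0.2).
x = list(range(1, 11))
y = [round(math.exp(3.0 - 0.2 * xi)) for xi in x]
b0, b1 = poisson_loglinear_fit(x, y)
```

A negative fitted slope corresponds to counts declining with distance from infrastructure, the pattern the interval analysis reported for bulls and total caribou.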
Setting the stage for the EPOS ERIC: Integration of the legal, governance and financial framework
NASA Astrophysics Data System (ADS)
Atakan, Kuvvet; Bazin, Pierre-Louis; Bozzoli, Sabrina; Freda, Carmela; Giardini, Domenico; Hoffmann, Thomas; Kohler, Elisabeth; Kontkanen, Pirjo; Lauterjung, Jörn; Pedersen, Helle; Saleh, Kauzar; Sangianantoni, Agata
2017-04-01
EPOS - the European Plate Observing System - is the ESFRI infrastructure serving the needs of the solid Earth science community at large. The EPOS mission is to create a single, sustainable, and distributed infrastructure that integrates the diverse European Research Infrastructures for solid Earth science under a common framework. Thematic Core Services (TCS) and Integrated Core Services (Central Hub, ICS-C and Distributed, ICS-D) are key elements, together with NRIs (National Research Infrastructures), in the EPOS architecture. Following the preparatory phase, EPOS has initiated formal steps to adopt an ERIC legal framework (European Research Infrastructure Consortium). The statutory seat of EPOS will be in Rome, Italy, while the ICS-C will be jointly operated by France, the UK, and Denmark. The TCS planned so far cover: seismology, near-fault observatories, GNSS data and products, volcano observations, satellite data, geomagnetic observations, anthropogenic hazards, geological information modelling, multiscale laboratories, and geo-energy test beds for low-carbon energy. In the ERIC process, EPOS and all its services must achieve sustainability from a legal, governance, financial, and technical point of view, as well as full harmonization with national infrastructure roadmaps. As EPOS is a distributed infrastructure, the TCSs have to be linked to the future EPOS ERIC from legal and governance perspectives. For this purpose the TCSs have started to organize themselves as consortia and to negotiate agreements that define the roles of the different actors in each consortium as well as their commitment to contribute to EPOS activities. The link to the EPOS ERIC shall be made through service agreements with dedicated Service Providers.
A common EPOS data policy has also been developed, based on the general principles of Open Access and paying careful attention to licensing issues, quality control, and intellectual property rights, which shall apply to the data, data products, software and services (DDSS) accessible through EPOS. From a financial standpoint, EPOS elaborated common guidelines for all institutions providing services, and selected a costing model and funding approach which foresees a mixed support of the services via national contributions and ERIC membership fees. In the EPOS multi-disciplinary environment, harmonization and integration are required at different levels and with a variety of different stakeholders; to this purpose, a Service Coordination Board (SCB) and technical Harmonization Groups (HGs) were established to develop the EPOS metadata standards with the EPOS Integrated Central Services, and to harmonize data and product standards with other projects at European and international level, including e.g. ENVRI+, EUDAT and EarthCube (US).
Neighborhood Sociodemographics and Change in Built Infrastructure.
Hirsch, Jana A; Green, Geoffrey F; Peterson, Marc; Rodriguez, Daniel A; Gordon-Larsen, Penny
2017-01-01
While increasing evidence suggests an association between physical infrastructure in neighbourhoods and health outcomes, relatively little research examines how neighbourhoods change physically over time and how these physical improvements are spatially distributed across populations. This paper describes the change over 25 years (1985-2010) in bicycle lanes, off-road trails, bus transit service, and parks, and spatial clusters of changes in these domains relative to neighbourhood sociodemographics in four U.S. cities that are diverse in terms of geography, size and population. Across all four cities, we identified increases in bicycle lanes, off-road trails, and bus transit service, with spatial clustering in these changes that related to neighbourhood sociodemographics. Overall, we found evidence of positive changes in physical infrastructure commonly identified as supportive of physical activity. However, the patterning of infrastructure change by sociodemographic change encourages attention to the equity in infrastructure improvements across neighbourhoods.
Neighborhood Sociodemographics and Change in Built Infrastructure
Hirsch, Jana A.; Green, Geoffrey F.; Peterson, Marc; Rodriguez, Daniel A.; Gordon-Larsen, Penny
2016-01-01
While increasing evidence suggests an association between physical infrastructure in neighbourhoods and health outcomes, relatively little research examines how neighbourhoods change physically over time and how these physical improvements are spatially distributed across populations. This paper describes the change over 25 years (1985–2010) in bicycle lanes, off-road trails, bus transit service, and parks, and spatial clusters of changes in these domains relative to neighbourhood sociodemographics in four U.S. cities that are diverse in terms of geography, size and population. Across all four cities, we identified increases in bicycle lanes, off-road trails, and bus transit service, with spatial clustering in these changes that related to neighbourhood sociodemographics. Overall, we found evidence of positive changes in physical infrastructure commonly identified as supportive of physical activity. However, the patterning of infrastructure change by sociodemographic change encourages attention to the equity in infrastructure improvements across neighbourhoods. PMID:28316645
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: (1) aerospace system and component representation using a hierarchical object-oriented component model, which enables the use of multimodels and enforces component interoperability; (2) a collaborative software environment that streamlines the process of developing, sharing, and integrating aerospace design and analysis models; and (3) development of a distributed infrastructure that enables Web-based exchange of models to simplify the collaborative design process and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.
Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L
2008-01-15
The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
Hasson, Uri; Skipper, Jeremy I.; Wilde, Michael J.; Nusbaum, Howard C.; Small, Steven L.
2007-01-01
The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812
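The "queries as an integral part of data analysis" idea above can be sketched with a stdlib relational store; the schema and values below are a toy stand-in for a real fMRI time-series layout, not the system the abstract describes:

```python
# Hypothetical sketch: storing time-series samples in a relational database
# so that selection and aggregation become SQL queries rather than ad hoc
# binary/text file parsing. Schema and values are invented toy examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bold (subject TEXT, region TEXT, t INTEGER, signal REAL)")
rows = [
    ("s01", "STG", 0, 0.8), ("s01", "STG", 1, 1.1), ("s01", "STG", 2, 1.4),
    ("s01", "IFG", 0, 0.2), ("s01", "IFG", 1, 0.3), ("s01", "IFG", 2, 0.1),
]
conn.executemany("INSERT INTO bold VALUES (?, ?, ?, ?)", rows)

# The analysis step is itself a query: mean signal per region for one subject.
means = dict(conn.execute(
    "SELECT region, AVG(signal) FROM bold WHERE subject = ? GROUP BY region",
    ("s01",),
))
```

The same table can back sharing and parallelism: collaborators issue their own queries against one schema instead of exchanging per-analysis binary files.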
NASA Astrophysics Data System (ADS)
Calignano, Elisa; Freda, Carmela; Baracchi, Laura
2017-04-01
Women are outnumbered by men in geosciences senior research positions, but what is the situation if we consider large pan-European Research Infrastructures? With this contribution we want to show an analysis of the role of women in the implementation of the European Plate Observing System (EPOS): a planned research infrastructure for European Solid Earth sciences, integrating national and transnational research infrastructures to enable innovative multidisciplinary research. EPOS involves 256 national research infrastructures, 47 partners (universities and research institutes) from 25 European countries and 4 international organizations. The EPOS integrated platform demands significant coordination between diverse solid Earth disciplinary communities, national research infrastructures and the policies and initiatives they drive, geoscientists and information technologists. The EPOS architecture takes into account governance, legal, financial and technical issues and is designed so that the enterprise works as a single, but distributed, sustainable research infrastructure. A solid management structure is vital for the successful implementation and sustainability of EPOS. The internal organization relies on community-specific Working Packages (WPs), Transversal WPs in charge of the overall EPOS integration and implementation, several governing, executive and advisory bodies, a Project Management Office (PMO) and the Project Coordinator. Driven by the timely debate on gender balance and commitment of the European Commission to promote gender equality in research and innovation, we decided to conduct a mapping exercise on a project that crosses European national borders and that brings together diverse geoscience disciplines under one management structure. 
We present an analysis of women representation in decision-making positions in each EPOS Working Package (WP Leader, proxy, legal, financial and IT contact persons), in the Boards and Councils and in the PMO, together with statistics on women participation based on the project intranet, which counts more than 500 users. The analysis allows us not only to assess the gender balance in decision-making positions in a pan-European research infrastructure, but also to investigate how women's participation varies with different aspects of the project implementation (management, coordination, legal, financial or technical). Most of the women in EPOS are active geoscientists (academic or in national research institutes), or have a scientific background. By interviewing some of them we report also on how being involved in the project affects their careers. We believe this kind of analysis is an important starting point to promote awareness and achieve gender equality in research and innovation.
Nuclear Energy Infrastructure Database Description and User’s Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heidrich, Brenden
In 2014, the Deputy Assistant Secretary for Science and Technology Innovation initiated the Nuclear Energy (NE)–Infrastructure Management Project by tasking the Nuclear Science User Facilities, formerly the Advanced Test Reactor National Scientific User Facility, to create a searchable and interactive database of all pertinent NE-supported and -related infrastructure. This database, known as the Nuclear Energy Infrastructure Database (NEID), is used for analyses to establish needs, redundancies, efficiencies, distributions, etc., to best understand the utility of NE's infrastructure and inform the content of infrastructure calls. The Nuclear Science User Facilities developed the database by utilizing data and policy direction from a variety of reports from the U.S. Department of Energy, the National Research Council, the International Atomic Energy Agency, and various other federal and civilian resources. The NEID currently contains data on 802 research and development instruments housed in 377 facilities at 84 institutions in the United States and abroad. The effort to maintain and expand the database is ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements. This document provides a short tutorial on the navigation of the NEID web portal at NSUF-Infrastructure.INL.gov.
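A database like NEID is, at its core, a catalog of instruments that can be filtered by institution, facility, or capability. As a rough illustration only (the record fields and example entries below are invented for this sketch, not taken from the actual NEID schema), such a search might look like:

```python
from dataclasses import dataclass

@dataclass
class Instrument:
    name: str
    facility: str
    institution: str
    category: str

# Hypothetical in-memory slice of NEID-style records (entries invented for illustration).
CATALOG = [
    Instrument("Hot Cell Gamma Scanner", "Irradiated Materials Lab", "INL",
               "post-irradiation examination"),
    Instrument("Neutron Radiography Station", "Research Reactor", "MURR",
               "irradiation"),
    Instrument("SEM with EDS", "Microscopy Suite", "INL",
               "post-irradiation examination"),
]

def search(catalog, institution=None, category=None):
    """Filter instrument records the way a searchable infrastructure portal might."""
    results = catalog
    if institution is not None:
        results = [i for i in results if i.institution == institution]
    if category is not None:
        results = [i for i in results if i.category == category]
    return results
```

Analyses of redundancy or distribution then reduce to grouping and counting over such filtered views.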
A service-based BLAST command tool supported by cloud infrastructures.
Carrión, Abel; Blanquer, Ignacio; Hernández, Vicente
2012-01-01
Notwithstanding the benefits of distributed-computing infrastructures for empowering bioinformatics analysis tools with the needed computing and storage capability, the actual use of these infrastructures is still low. Learning curves and deployment difficulties have reduced the impact on the wider research community. This article presents a porting strategy for BLAST based on a multiplatform client and a service that provides the same interface as sequential BLAST, thus reducing the learning curve and minimizing the impact on integration into existing workflows. The porting has been done using the execution and data access components from the EC project Venus-C and the Windows Azure infrastructure provided in this project. The results obtained demonstrate a low overhead on the global execution framework and reasonable speed-up and cost-efficiency with respect to a sequential version.
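The core of such a porting strategy, keeping the sequential BLAST command-line interface while executing remotely, can be sketched as a translation from familiar BLAST arguments into a job description for an execution service. This is a minimal illustration under assumptions: the job-description field names are invented, not the actual Venus-C job format.

```python
def build_blast_job(program, query_path, db, evalue=10.0, outfmt=5):
    """Translate sequential-BLAST-style arguments into a job description
    that a remote execution service could accept (field names illustrative)."""
    if program not in {"blastn", "blastp", "blastx", "tblastn", "tblastx"}:
        raise ValueError(f"unknown BLAST program: {program}")
    return {
        "executable": program,
        "arguments": ["-query", query_path, "-db", db,
                      "-evalue", str(evalue), "-outfmt", str(outfmt)],
        "inputs": [query_path],       # staged to cloud storage before execution
        "outputs": ["results.xml"],   # staged back once the job completes
    }
```

Because the user-facing arguments mirror the sequential tool, existing pipelines can swap in the service-backed client with little or no modification.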
Development of a Free-Flight Simulation Infrastructure
NASA Technical Reports Server (NTRS)
Miles, Eric S.; Wing, David J.; Davis, Paul C.
1999-01-01
In anticipation of a projected rise in demand for air transportation, NASA and the FAA are researching new air-traffic-management (ATM) concepts that fall under the paradigm known broadly as "free flight". This paper documents the software development and engineering efforts in progress by Seagull Technology to develop a free-flight simulation (FFSIM) that is intended to help NASA researchers test mature-state concepts for free flight, otherwise referred to in this paper as distributed air/ground traffic management (DAG TM). Under development is a distributed, human-in-the-loop simulation tool that is comprehensive in its consideration of current and envisioned communication, navigation and surveillance (CNS) components, and will allow evaluation of critical air and ground traffic management technologies from an overall systems perspective. The FFSIM infrastructure is designed to incorporate all three major components of the ATM triad: aircraft flight decks, air traffic control (ATC), and (eventually) airline operational control (AOC) centers.
Data Publishing Services in a Scientific Project Platform
NASA Astrophysics Data System (ADS)
Schroeder, Matthias; Stender, Vivien; Wächter, Joachim
2014-05-01
Data-intensive science lives from data. More and more interdisciplinary projects are aligned to mutually gain access to their data, models and results. To achieve this, the umbrella project GLUES was established in the context of the "Sustainable Land Management" (LAMA) initiative funded by the German Federal Ministry of Education and Research (BMBF). The GLUES (Global Assessment of Land Use Dynamics, Greenhouse Gas Emissions and Ecosystem Services) project supports several different regional projects of the LAMA initiative: within the framework of GLUES, a Spatial Data Infrastructure (SDI) is implemented to facilitate publishing, sharing and maintenance of distributed global and regional scientific data sets as well as model results. The GLUES SDI supports several OGC web services, such as the Catalogue Service for the Web (CSW), which enables it to harvest metadata from the various regional projects. One of these regional projects is SuMaRiO (Sustainable Management of River Oases along the Tarim River), which aims to support oasis management along the Tarim River (PR China) under conditions of climatic and societal change. SuMaRiO itself is an interdisciplinary and spatially distributed project. Working groups from twelve German institutes and universities are collecting data and driving their research in disciplines such as Hydrology, Remote Sensing, and Agricultural Sciences, among others. Each working group is dependent on the results of another working group. Due to the spatial distribution of the participating institutes, data distribution is handled by the eSciDoc infrastructure at the German Research Centre for Geosciences (GFZ). Furthermore, the metadata-based data exchange platform PanMetaDocs will be used collaboratively by the participants. PanMetaDocs supports an OAI-PMH interface, which enables an open-source metadata portal such as GeoNetwork to harvest the information.
The data added to PanMetaDocs can be labeled with a DOI (Digital Object Identifier) to publish the data, so that this information can subsequently be harvested by the GLUES SDI. Our contribution will show the architecture of this newly established SuMaRiO infrastructure node in a superordinate network of the GLUES infrastructure.
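Harvesting over OAI-PMH, as a portal like GeoNetwork does against PanMetaDocs, amounts to paging through XML responses, collecting record identifiers, and following the resumption token. A minimal sketch with Python's standard library (the sample response is fabricated for illustration):

```python
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def list_identifiers(xml_text):
    """Extract record identifiers and any resumption token from an
    OAI-PMH ListIdentifiers response."""
    root = ET.fromstring(xml_text)
    ids = [h.findtext(f"{OAI_NS}identifier")
           for h in root.iter(f"{OAI_NS}header")]
    token = root.findtext(f".//{OAI_NS}resumptionToken")
    return ids, token

# Fabricated sample response for demonstration purposes.
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListIdentifiers>
    <header><identifier>oai:example.org:ds-001</identifier></header>
    <header><identifier>oai:example.org:ds-002</identifier></header>
    <resumptionToken>page2</resumptionToken>
  </ListIdentifiers>
</OAI-PMH>"""
```

A harvester would repeat the request with the returned token until the token is empty, which is exactly how incremental metadata synchronization between the regional nodes and the GLUES SDI can proceed.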
NASA Astrophysics Data System (ADS)
Wiggins, H. V.; Warnick, W. K.; Hempel, L. C.; Henk, J.; Sorensen, M.; Tweedie, C. E.; Gaylord, A. G.
2007-12-01
As the creation and use of geospatial data in research, management, logistics, and education applications has proliferated, there is now a tremendous potential for advancing science through a variety of cyber-infrastructure applications, including Spatial Data Infrastructure (SDI) and related technologies. SDIs provide a necessary and common framework of standards, security, policies, procedures, and technology to support the effective acquisition, coordination, dissemination and use of geospatial data by multiple and distributed stakeholder and user groups. Despite the numerous research activities in the Arctic, there is no established SDI, and because of this lack of a coordinated infrastructure there is inefficiency, duplication of effort, and reduced quality and searchability of arctic geospatial data. The urgency of establishing this framework is significant considering the myriad of data being collected in celebration of the International Polar Year (IPY) in 2007-2008 and the current international momentum for an improved and integrated circum-arctic terrestrial-marine-atmospheric environmental observatories network. The key objective of this project is to lay the foundation for full implementation of an Arctic Spatial Data Infrastructure (ASDI) through an assessment of community needs, readiness, and resources and through the development of a prototype web-mapping portal.
Integrated cloud infrastructure of the LIT JINR, PE "NULITS" and INP's Astana branch
NASA Astrophysics Data System (ADS)
Mazhitova, Yelena; Balashov, Nikita; Baranov, Aleksandr; Kutovskiy, Nikolay; Semenov, Roman
2018-04-01
The article describes the distributed cloud infrastructure deployed on the basis of the resources of the Laboratory of Information Technologies of the Joint Institute for Nuclear Research (LIT JINR) and some JINR Member State organizations. It explains the motivation for that work and the approach it is based on, and lists its participants, among which are the private entity "Nazarbayev University Library and IT services" (PE "NULITS") of the Autonomous Education Organization "Nazarbayev University" (AO NU) and the Institute of Nuclear Physics' (INP's) Astana branch.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Happenny, Sean F.
The United States’ power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off of existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power distribution networks utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security, and the networks protecting them are becoming easier to breach. Providing a virtual power substation network to each student team at the National Collegiate Cyber Defense Competition, thereby supporting the education of future cyber security professionals, is another way PNNL is helping to strengthen the security of the nation’s power infrastructure.
NASA Astrophysics Data System (ADS)
Jasiulewicz-Kaczmarek, Małgorzata; Wyczółkowski, Ryszard; Gładysiak, Violetta
2017-12-01
Water distribution systems are one of the basic elements of the contemporary technical infrastructure of urban and rural areas. A water distribution system is a complex engineering system composed of transmission networks and auxiliary equipment (e.g. controllers, checkouts, etc.), scattered territorially over a large area. From the point of view of its operation, its basic features are functional variability, resulting from the need to adjust the system to temporary fluctuations in demand for water, and territorial dispersion. The main research questions are: What external factors should be taken into account when developing an effective water distribution policy? Do the size and nature of the water distribution system significantly affect the exploitation policy implemented? These questions have shaped the objectives of the research and the method of its implementation.
NASA Astrophysics Data System (ADS)
The CHAIN-REDS Project is organising a workshop on "e-Infrastructures for e-Sciences" focusing on Cloud Computing and Data Repositories under the aegis of the European Commission and in co-location with the International Conference on e-Science 2013 (IEEE2013) that will be held in Beijing, P.R. of China on October 17-22, 2013. The core objective of the CHAIN-REDS project is to promote, coordinate and support the effort of a critical mass of non-European e-Infrastructures for Research and Education to collaborate with Europe addressing interoperability and interoperation of Grids and other Distributed Computing Infrastructures (DCI). From this perspective, CHAIN-REDS will optimise the interoperation of European infrastructures with those present in 6 other regions of the world, both from a development and use point of view, and catering to different communities. Overall, CHAIN-REDS will provide input for future strategies and decision-making regarding collaboration with other regions on e-Infrastructure deployment and availability of related data; it will raise the visibility of e-Infrastructures towards intercontinental audiences, covering most of the world and will provide support to establish globally connected and interoperable infrastructures, in particular between the EU and the developing regions. Organised by IHEP, INFN and Sigma Orionis with the support of all project partners, this workshop will aim at: - Presenting the state of the art of Cloud computing in Europe and in China and discussing the opportunities offered by having interoperable and federated e-Infrastructures; - Exploring the existing initiatives of Data Infrastructures in Europe and China, and highlighting the Data Repositories of interest for the Virtual Research Communities in several domains such as Health, Agriculture, Climate, etc.
NASA Astrophysics Data System (ADS)
Holmen, K. J.; Lønne, O. J.
2016-12-01
The Svalbard Integrated Earth Observing System (SIOS) is a regional response to the Earth System Science (ESS) challenges posed by the Amsterdam Declaration on Global Change. SIOS is intended to develop and implement methods for how observational networks in the Arctic should be designed in order to address such issues on a regional scale. SIOS builds on the extensive observation capacity and research installations already in place from many international institutions and will provide upgraded and relevant world-class Observing Systems and Research Facilities in and around Svalbard. It is a distributed research infrastructure set up to provide a regional observational system for long-term measurements under a joint framework. As one of the large-scale research infrastructure initiatives on the ESFRI roadmap (European Strategy Forum on Research Infrastructures), SIOS is now being implemented. The new research infrastructure organization, the SIOS Knowledge Center (SIOS-KC), is instrumental in developing methods and solutions for setting up its regional contribution to a systematically constructed Arctic observational network useful for global change studies. We will discuss cross-disciplinary research experiences, some case studies, and lessons learned so far. SIOS aims to provide an effective, easily accessible data management system which makes use of existing data handling systems in the thematic fields covered by SIOS. SIOS will implement a data policy which matches the ambitions that are set for the new European research infrastructures, while at the same time being flexible enough to accommodate 'historical' legacies. Given the substantial international presence in the Svalbard archipelago and the pan-Arctic nature of the issue, there is an opportunity to build SIOS further into a wider regional network and pan-Arctic context, ideally under the umbrella of the Sustaining Arctic Observing Networks (SAON) initiative.
It is necessary to anchor SIOS strongly in a European context and connect it to extra-EU initiatives, in order to establish a pan-Arctic perspective. SIOS must develop and secure a robust communication with other bodies carrying out and funding research activities in the Arctic (observational as well as modelling) and actively promote a sustained Arctic observing network.
NASA Astrophysics Data System (ADS)
Maffioletti, Sergio; Dawes, Nicholas; Bavay, Mathias; Sarni, Sofiane; Lehning, Michael
2013-04-01
The Swiss Experiment platform (SwissEx: http://www.swiss-experiment.ch) provides a distributed storage and processing infrastructure for environmental research experiments. The aim of the second phase project (the Open Support Platform for Environmental Research, OSPER, 2012-2015) is to develop the existing infrastructure to provide scientists with an improved workflow. This improved workflow will include pre-defined, documented and connected processing routines. A large-scale computing and data facility is required to provide reliable and scalable access to data for analysis, and it is desirable that such an infrastructure should be free of traditional data handling methods. Such an infrastructure has been developed using the cloud-based part of the Swiss national infrastructure SMSCG (http://www.smscg.ch) and Academic Cloud. The infrastructure under construction supports two main usage models: 1) Ad-hoc data analysis scripts: These scripts are simple processing scripts, written by the environmental researchers themselves, which can be applied to large data sets via the high power infrastructure. Examples of this type of script are spatial statistical analysis scripts (R-based scripts), mostly computed on raw meteorological and/or soil moisture data. These provide processed output in the form of a grid, a plot, or a kml. 2) Complex models: A more intense data analysis pipeline centered (initially) around the physical process model, Alpine3D, and the MeteoIO plugin; depending on the data set, this may require a tightly coupled infrastructure. SMSCG already supports Alpine3D executions as both regular grid jobs and as virtual software appliances. A dedicated appliance with the Alpine3D specific libraries has been created and made available through the SMSCG infrastructure. 
The analysis pipelines are activated and supervised by simple control scripts that, depending on the data fetched from the meteorological stations, launch new instances of the Alpine3D appliance, execute location-based subroutines at each grid point and store the results back into the central repository for post-processing. An optional extension of this infrastructure will be to provide a 'ring buffer'-type database infrastructure, such that model results (e.g. test runs made to check parameter dependency or for development) can be visualised and downloaded after completion without submitting them to a permanent storage infrastructure.
Data organization: Data collected from sensors are archived and classified in distributed sites connected with an open-source software middleware, GSN. Publicly available data are accessible through common web services and via a cloud storage server (based on Swift). Collocation of the data and processing in the cloud would eventually eliminate data transfer requirements.
Execution control logic: Execution of the data analysis pipelines (for both the R-based analysis and the Alpine3D simulations) has been implemented using the GC3Pie framework developed by UZH (https://code.google.com/p/gc3pie/). This allows large-scale, fault-tolerant execution of the pipelines to be described in terms of software appliances. GC3Pie also allows supervision of the execution of large campaigns of appliances as a single simulation. This poster will present the fundamental architectural components of the data analysis pipelines together with initial experimental results.
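The proposed 'ring buffer' behaviour, keeping only recent model results available for visualisation and download without committing them to permanent storage, can be sketched as a fixed-capacity store that evicts the oldest run. This is an illustrative sketch under assumptions, not the SwissEx/OSPER implementation:

```python
from collections import OrderedDict

class ResultRingBuffer:
    """Keep only the N most recent model runs; older runs are evicted
    rather than being committed to permanent storage."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._runs = OrderedDict()  # run_id -> result, in insertion order

    def store(self, run_id, result):
        self._runs[run_id] = result
        self._runs.move_to_end(run_id)          # treat re-stores as most recent
        while len(self._runs) > self.capacity:
            self._runs.popitem(last=False)      # drop the oldest run

    def fetch(self, run_id):
        """Return the stored result, or None if the run has been evicted."""
        return self._runs.get(run_id)
```

Test runs for parameter-dependency checks would thus remain downloadable for a while, then silently age out instead of accumulating in the central repository.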
Sustainable infrastructure system modeling under uncertainties and dynamics
NASA Astrophysics Data System (ADS)
Huang, Yongxi
Infrastructure systems support human activities in transportation, communication, water use, and energy supply. This dissertation research focuses on critical transportation infrastructure and renewable energy infrastructure systems. The goal of the research efforts is to improve the sustainability of the infrastructure systems, with an emphasis on economic viability, system reliability and robustness, and environmental impacts. The research efforts in critical transportation infrastructure concern the development of strategic, robust resource allocation strategies in an uncertain decision-making environment, considering both uncertain service availability and accessibility. The study explores the performance of different modeling approaches (i.e., deterministic, stochastic programming, and robust optimization) to reflect various risk preferences. The models are evaluated in a case study of Singapore, and the results demonstrate that stochastic modeling methods in general offer more robust allocation strategies than deterministic approaches in achieving high coverage of critical infrastructures under risk. This general modeling framework can be applied to other emergency service applications, such as locating medical emergency services. The research on renewable energy infrastructure systems aims to answer the following key research questions: (1) Is renewable energy an economically viable solution? (2) What are the energy distribution and infrastructure system requirements to support such energy supply systems while hedging against potential risks? (3) How does the energy system adapt to the dynamics of evolving technology and societal needs in the transition to a renewable-energy-based society? The study of Renewable Energy System Planning with Risk Management incorporates risk management into the strategic planning of the supply chains.
The physical design and operational management are integrated as a whole in seeking mitigations against the potential risks caused by feedstock seasonality and demand uncertainty. Facility spatiality, time variation of feedstock yields, and demand uncertainty are integrated into a two-stage stochastic programming (SP) framework. In the study of Transitional Energy System Modeling under Uncertainty, a multistage stochastic dynamic programming model is established to optimize the process of building and operating fuel production facilities during the transition. Dynamics due to evolving technologies and societal changes and uncertainty due to demand fluctuations are the major issues to be addressed.
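The flavour of a two-stage stochastic program can be shown with a deliberately tiny example: a first-stage capacity decision, followed by a second-stage recourse penalty for unmet demand in each scenario. The costs, penalty, and demand scenarios below are invented for illustration and are not taken from the dissertation's case studies:

```python
def expected_cost(capacity, scenarios, build_cost=1.0, shortage_penalty=3.0):
    """Two-stage cost: first-stage capacity cost plus the expected
    second-stage recourse (penalty for unmet demand) over scenarios.
    `scenarios` is a list of (probability, demand) pairs."""
    recourse = sum(p * shortage_penalty * max(d - capacity, 0.0)
                   for p, d in scenarios)
    return build_cost * capacity + recourse

def best_capacity(scenarios, candidates, **kwargs):
    """Enumerate candidate first-stage decisions and pick the one
    minimizing expected total cost (a stand-in for an SP solver)."""
    return min(candidates, key=lambda x: expected_cost(x, scenarios, **kwargs))
```

With a shortage penalty higher than the build cost, the optimum hedges toward the high-demand scenario; real feedstock/demand models replace this enumeration with a structured SP solver, but the objective has the same two-stage shape.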
Review of EuCARD project on accelerator infrastructure in Europe
NASA Astrophysics Data System (ADS)
Romaniuk, Ryszard S.
2013-01-01
The aim of big infrastructural and research programs (like the pan-European Framework Programs) and of the individual projects realized inside these programs is to structure the European Research Area (ERA) in such a way as to be competitive with the leaders of the world. One of these projects is EuCARD (European Coordination of Accelerator Research and Development), whose aim is to structure and modernize accelerator research infrastructure (including accelerators for big free-electron laser machines). This article presents the development of EuCARD in the period between the annual meeting in Warsaw, April 2012, and the SC meeting in Uppsala, December 2012. The background of all these efforts is the achievements of the LHC machine and associated detectors in the race for new physics. The LHC machine works in the p-p, Pb-p and Pb-Pb regimes (protons and lead ions). Recently, the discovery by the LHC of a Higgs-like boson has started vivid debates on the further potential of this machine and its future. The periodic EuCARD conferences, workshops and meetings concern the building of the research infrastructure, including advanced photonic and electronic systems for servicing large high-energy physics experiments. A few basic groups of such systems are debated: measurement-control networks of large geometrical extent, multichannel systems for the acquisition of large amounts of metrological data, and precision photonic networks for the distribution of reference time, frequency and phase. The aim of the discussion is not only to summarize the current status but also to make plans and prepare practically for building new infrastructures. Accelerator science and technology is one of the key enablers of developments in particle physics, photon physics and also of applications in medicine and industry. Accelerator technology is intensely developed in all developed nations and regions of the world.
The EuCARD project contains many subjects related directly and indirectly to photon physics and photonics, as well as optoelectronics, electronics and the integration of these with large research infrastructure.
Grethe, Jeffrey S; Baru, Chaitan; Gupta, Amarnath; James, Mark; Ludaescher, Bertram; Martone, Maryann E; Papadopoulos, Philip M; Peltier, Steven T; Rajasekar, Arcot; Santini, Simone; Zaslavsky, Ilya N; Ellisman, Mark H
2005-01-01
Through support from the National Institutes of Health's National Center for Research Resources, the Biomedical Informatics Research Network (BIRN) is pioneering the use of advanced cyberinfrastructure for medical research. By synchronizing developments in advanced wide area networking, distributed computing, distributed database federation, and other emerging capabilities of e-science, the BIRN has created a collaborative environment that is paving the way for biomedical research and clinical information management. The BIRN Coordinating Center (BIRN-CC) is orchestrating the development and deployment of key infrastructure components for immediate and long-range support of biomedical and clinical research being pursued by domain scientists in three neuroimaging test beds.
Data management in Oceanography at SOCIB
NASA Astrophysics Data System (ADS)
Joaquin, Tintoré; March, David; Lora, Sebastian; Sebastian, Kristian; Frontera, Biel; Gómara, Sonia; Pau Beltran, Joan
2014-05-01
SOCIB, the Balearic Islands Coastal Ocean Observing and Forecasting System (http://www.socib.es), is a Marine Research Infrastructure: a multiplatform, distributed and integrated system, a facility of facilities that extends from the nearshore to the open sea and provides free, open and quality-controlled data. SOCIB has three major infrastructure components: (1) a distributed multiplatform observing system, (2) a numerical forecasting system, and (3) a data management and visualization system. We present the spatial data infrastructure and applications developed at SOCIB. One of the major goals of the SOCIB Data Centre is to provide users with a system to locate and download the data of interest (near real-time and delayed mode) and to visualize and manage the information. Following SOCIB principles, data need to be (1) discoverable and accessible, (2) freely available, and (3) interoperable and standardized. In consequence, the SOCIB Data Centre Facility is implementing a general data management system to guarantee international standards, quality assurance and interoperability. The combination of different sources and types of information requires appropriate methods to ingest, catalogue, display, and distribute this information. The SOCIB Data Centre is responsible for directing the different stages of data management, ranging from data acquisition to its distribution and visualization through web applications. The system implemented relies on open-source solutions. In other words, the data life cycle consists of the following stages:
• Acquisition: The data managed by SOCIB mostly come from its own observation platforms, numerical models or information generated from the activities in the SIAS Division.
• Processing: Applications developed at SOCIB handle all collected platform data, performing data calibration, derivation, quality control and standardization.
• Archival: Storage in netCDF files and spatial databases.
• Distribution: Data web services using Thredds, GeoServer and SOCIB's own RESTful services.
• Catalogue: Metadata are provided through the ncISO plugin in Thredds and through GeoNetwork.
• Visualization: Web and mobile applications to present SOCIB data to different user profiles.
SOCIB data services and applications have been developed in response to science and society needs (e.g. European initiatives such as EMODnet or Copernicus), by targeting different user profiles (e.g. researchers, technicians, policy and decision makers, educators, students, and society in general). For example, SOCIB has developed applications to: 1) allow researchers and technicians to access oceanographic information; 2) provide decision support for oil spill response; 3) disseminate information about the coastal state for tourists and recreational users; 4) present coastal research in educational programs; and 5) offer easy and fast access to marine information through mobile devices. In conclusion, the organizational and conceptual structure of SOCIB's Data Centre and the components developed provide an example of marine information systems within the framework of new ocean observatories and/or marine research infrastructures.
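One small but representative piece of the Processing stage is automated quality control. A minimal range-check sketch is shown below; the flag values follow common oceanographic QC conventions (1 = good, 4 = bad), but this is an illustration, not SOCIB's actual implementation:

```python
def range_check(values, valid_min, valid_max):
    """Assign simple quality-control flags to a series of measurements:
    1 = good (within the valid range), 4 = bad (out of range).
    Flag semantics follow common oceanographic QC usage, for illustration."""
    return [1 if valid_min <= v <= valid_max else 4 for v in values]
```

In a real pipeline, such flags would be written alongside each variable into the archived netCDF files so that downstream services can filter on data quality.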
Demotes-Mainard, Jacques
2010-12-01
Clinical research plays a key role both in the development of innovative health products and in the optimisation of medical strategies, leading to evidence-based practice and healthcare cost containment. ECRIN is a distributed, ESFRI-roadmap, pan-European infrastructure designed to support multinational clinical research, making Europe a single area for clinical studies, taking advantage of its population size to access patients, and unlocking latent scientific potential by providing services to multinational studies. Servicing of multinational trials started during the preparatory phase, and ECRIN has applied for ERIC status in 2011. In parallel, ECRIN has also proposed an FP7 integrating activity project to further develop, upgrade and expand the ECRIN infrastructure built up during the past FP6 and FP7 projects, facilitating an efficient organization of clinical research in Europe, with ECRIN developing generic tools and providing generic services for multinational studies, and supporting the construction of pan-European disease-oriented networks that will in turn act as ECRIN users. This organization will improve Europe's attractiveness for industry trials, boost its scientific competitiveness, and result in better healthcare for European citizens. The three medical areas supported in this project (rare diseases, medical devices, and nutrition) will serve as pilots for other biomedical research fields. By creating a single area for clinical research in Europe, this structure will contribute to the implementation of the Europe 2020 flagship initiative 'Innovation Union', whose objectives include the defragmentation of research and educational capacities, tackling the major societal challenges (starting with healthy ageing), and removing barriers to bringing ideas to the market.
Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Liu, Shu-Guang; Nichols, Erin; Haga, Jim; Maddox, Brian; Bilderback, Chris; Feller, Mark; Homer, George
2001-01-01
The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting information science research into parallel computing systems and applications.
Redefining Tactical Operations for MER Using Cloud Computing
NASA Technical Reports Server (NTRS)
Joswig, Joseph C.; Shams, Khawaja S.
2011-01-01
The Mars Exploration Rover (MER) mission includes the twin rovers, Spirit and Opportunity, which have been performing geological research and surface exploration since early 2004. The rovers' durability well beyond their original prime mission (90 sols, or Martian days) has allowed them to be a valuable platform for scientific research for well over 2000 sols, but as a by-product it has produced new challenges in providing efficient and cost-effective tactical operational planning. An early process adaptation was the move to distributed operations as mission scientists returned to their places of work in the summer of 2004, but they would still come together via teleconference and connected software to plan rover activities a few times a week. This distributed model has worked well since, but it requires the purchase, operation, and maintenance of a dedicated infrastructure at the Jet Propulsion Laboratory. This server infrastructure is costly to operate, and the periodic nature of its usage (typically heavy usage for 8 hours every 2 days) has made moving to a cloud-based tactical infrastructure an extremely tempting proposition. In this paper we review both past and current implementations of the tactical planning application, focusing on remote plan saving, and discuss the unique challenges presented by long-latency, distributed operations. We then detail the motivations behind our move to cloud-based computing services, as well as our system design and implementation. We also discuss security and reliability concerns and how they were addressed.
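A central concern of remote plan saving with geographically distributed operators is preventing two planners from silently overwriting each other's work. A common pattern in cloud object stores is optimistic locking on a version number (or ETag): a save succeeds only if the caller holds the latest version. The sketch below illustrates that pattern in memory; it is an assumption-laden illustration, not the MER tool's actual design:

```python
class PlanStoreConflict(Exception):
    """Raised when a save is attempted against a stale plan version."""

class PlanStore:
    """In-memory stand-in for a cloud object store with optimistic locking."""

    def __init__(self):
        self._plans = {}  # plan name -> (version, payload)

    def load(self, name):
        """Return (version, payload); version 0 means the plan does not exist yet."""
        return self._plans.get(name, (0, None))

    def save(self, name, payload, expected_version):
        """Save only if the caller's version matches the stored version."""
        current, _ = self._plans.get(name, (0, None))
        if current != expected_version:
            raise PlanStoreConflict(
                f"{name}: stored version {current}, caller had {expected_version}")
        self._plans[name] = (current + 1, payload)
        return current + 1
```

On a conflict, a planning client would reload the latest plan, merge or re-apply its edits, and retry, which tolerates the long latencies of distributed operations without a held lock.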
EUDAT: A New Cross-Disciplinary Data Infrastructure For Science
NASA Astrophysics Data System (ADS)
Lecarpentier, Damien; Michelini, Alberto; Wittenburg, Peter
2013-04-01
In recent years significant investments have been made by the European Commission and European member states to create a pan-European e-Infrastructure supporting multiple research communities. As a result, a European e-Infrastructure ecosystem is currently taking shape, with communication networks, distributed grids and HPC facilities providing European researchers from all fields with state-of-the-art instruments and services that support the deployment of new research facilities on a pan-European level. However, the accelerated proliferation of data - newly available from powerful new scientific instruments, simulations and the digitization of existing resources - has created a new impetus for increasing efforts and investments in order to tackle the specific challenges of data management, and to ensure a coherent approach to research data access and preservation. EUDAT is a pan-European initiative that started in October 2011 and which aims to help overcome these challenges by laying out the foundations of a Collaborative Data Infrastructure (CDI) in which centres offering community-specific support services to their users could rely on a set of common data services shared between different research communities. Although research communities from different disciplines have different ambitions and approaches - particularly with respect to data organization and content - they also share many basic service requirements. This commonality makes it possible for EUDAT to establish common data services, designed to support multiple research communities, as part of this CDI. During the first year, EUDAT has been reviewing the approaches and requirements of a first subset of communities from linguistics (CLARIN), solid earth sciences (EPOS), climate sciences (ENES), environmental sciences (LIFEWATCH), and biological and medical sciences (VPH), and shortlisted four generic services to be deployed as shared services on the EUDAT infrastructure. 
These services are data replication from site to site, data staging to compute facilities, metadata, and easy storage. A number of enabling services, such as distributed authentication and authorization, persistent identifiers, hosting of services, workspaces and a centre registry, were also discussed. The services being designed in EUDAT will thus be of interest to a broad range of communities that lack their own robust data infrastructures, or that are simply looking for additional storage and/or computing capacity to better access, use, re-use, and preserve their data. The first pilots were completed in 2012, and a pre-production operational infrastructure comprising five sites (RZG, CINECA, SARA, CSC, FZJ), offering 480 TB of online storage and 4 PB of near-line (tape) storage and initially serving four user communities (ENES, EPOS, CLARIN, VPH), was established. These services shall be available to all communities in a production environment by 2014. Although EUDAT has initially focused on a subset of research communities, it aims to engage with other communities interested in adapting its solutions or contributing to the design of the infrastructure. Discussions with other research communities - belonging to the fields of environmental sciences, biomedical science, physics, social sciences and humanities - have already begun and are following a pattern similar to the one we adopted with the initial communities. The next step will consist of integrating representatives from these communities into the existing pilots and task forces so as to include them in the process of designing the services and, ultimately, shaping the future CDI.
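Site-to-site replication, the first of the shared services listed above, is only trustworthy if each replica is verified. A minimal sketch of checksum-verified copying follows; the function names and paths are hypothetical, not EUDAT's actual service API:

```python
import hashlib
import shutil

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def replicate(src, dst):
    """Copy src to dst, then verify the replica by checksum comparison."""
    shutil.copyfile(src, dst)
    digest = sha256_of(dst)
    if sha256_of(src) != digest:
        raise IOError("replica checksum mismatch: %s" % dst)
    return digest
```

A production replication service would additionally record the digest with the file's persistent identifier so later audits can re-verify the replica.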
Spacing and length of passing sidings and the incremental capacity of single track.
DOT National Transportation Integrated Search
2016-02-18
The objective of this study is to evaluate the effect of initial siding spacing and distribution of siding length on the incremental capacity of infrastructure investments on single-track railway lines. Previous research has shown a linear reduction ...
Condition Assessment for Drinking Water Transmission and Distribution Mains
This project seeks to improve the capability to characterize the condition of water infrastructure. The integrity of buried drinking water mains is critical, as it influences water quality, losses, pressure and cost. This research complements the U.S. Environmental Protection A...
NASA Astrophysics Data System (ADS)
Turrini, Diego; de Sanctis, Maria Cristina; Carraro, Francesco; Fonte, Sergio; Giacomini, Livia; Politi, Romolo
In the framework of the Sixth Framework Programme (FP6) for Research and Technological Development of the European Community, the Europlanet project started the Integrated and Distributed Information Service (IDIS) initiative. The goal of this initiative was to "...offer to the planetary science community a common and user-friendly access to the data and information produced by the various types of research activities: earth-based observations, space observations, modelling and theory, laboratory experiments...". Four scientific nodes, representative of a significant fraction of the scientific themes covered by planetary sciences, were created: the Interiors and Surfaces node, the Atmospheres node, the Plasma node and the Small Bodies and Dust node. The original Europlanet programme evolved into the Europlanet Research Infrastructure project, funded by the Seventh Framework Programme (FP7) for Research and Technological Development, and the IDIS initiative has been renewed with the addition of a new scientific node, the Planetary Dynamics node. Here we present the Small Bodies and Dust node (SBDN) and the services it already provides to the scientific community, i.e. a searchable database of resources related to its thematic domains, an online, searchable catalogue of emission lines observed in the visible spectrum of comet 153P/2002 C1 Ikeya-Zhang supplemented by a visualization facility, a set of models of the simulated evolution of comet 67P/Churyumov-Gerasimenko with a particular focus on the effects of the distribution of dust, and an information system on meteors through the Virtual Meteor Observatory. We will also introduce the new services that will be implemented and made available in the course of the Europlanet Research Infrastructure project.
Optical stabilization for time transfer infrastructure
NASA Astrophysics Data System (ADS)
Vojtech, Josef; Altmann, Michal; Skoda, Pavel; Horvath, Tomas; Slapak, Martin; Smotlacha, Vladimir; Havlis, Ondrej; Munster, Petr; Radil, Jan; Kundrat, Jan; Altmannova, Lada; Velc, Radek; Hula, Miloslav; Vohnout, Rudolf
2017-08-01
In this paper, we propose and present verification of all-optical methods for stabilization of the end-to-end delay of an optical fiber link. These methods are verified for deployment within an infrastructure for accurate time and stable frequency distribution, based on sharing fibers with a research and educational network carrying live data traffic. The methods range from path-length control, through a temperature-conditioning method, to transmit-wavelength control. Attention is given to achieving continuous control over a relatively broad range of delays. We summarize design rules for delay stabilization based on the character and total jitter of the delay.
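The closed-loop idea behind such delay stabilization can be illustrated with a toy discrete-time model: a proportional controller removes a fixed fraction of the observed delay error each step while a linear drift keeps adding to it. This is an assumption-laden sketch, not the paper's actual control law:

```python
def stabilized_error(drift, gain=0.5, steps=200):
    """Residual delay error (arbitrary units) under proportional control.

    Each step the uncorrected drift adds to the error, and the actuator
    (path-length, temperature, or wavelength adjustment) removes a
    fraction `gain` of it. The error settles near drift / gain.
    """
    error = 0.0
    for _ in range(steps):
        error = (1.0 - gain) * error + drift
    return error
```

The steady-state residual of roughly drift/gain shows why a stronger control gain tracks a drifting fiber delay more tightly, at the usual cost of noise sensitivity.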
Dynamic VM Provisioning for TORQUE in a Cloud Environment
NASA Astrophysics Data System (ADS)
Zhang, S.; Boland, L.; Coddington, P.; Sevior, M.
2014-06-01
Cloud computing, also known as an Infrastructure-as-a-Service (IaaS), is attracting more interest from the commercial and educational sectors as a way to provide cost-effective computational infrastructure. It is an ideal platform for researchers who must share common resources but need to be able to scale up to massive computational requirements for specific periods of time. This paper presents the tools and techniques developed to allow the open source TORQUE distributed resource manager and Maui cluster scheduler to dynamically integrate OpenStack cloud resources into existing high throughput computing clusters.
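The core scaling decision in such dynamic provisioning, how many cloud VMs to start for a given queue backlog, can be sketched as follows. The policy and its parameters are illustrative, not the actual TORQUE/Maui integration:

```python
def vms_to_start(queued_jobs, running_vms, jobs_per_vm=4, max_vms=20):
    """Scale out just enough VMs to cover the backlog, capped by quota.

    queued_jobs: jobs waiting in the scheduler queue
    running_vms: cloud worker VMs already provisioned
    jobs_per_vm: job slots each VM contributes (hypothetical sizing)
    max_vms:     quota on total cloud VMs (hypothetical)
    """
    needed = -(-queued_jobs // jobs_per_vm)  # ceiling division
    return max(0, min(needed - running_vms, max_vms - running_vms))
```

A real integration would also scale back in, terminating idle VMs once the queue drains, and respect cloud-side scheduling latency.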
SEE-GRID eInfrastructure for Regional eScience
NASA Astrophysics Data System (ADS)
Prnjat, Ognjen; Balaz, Antun; Vudragovic, Dusan; Liabotis, Ioannis; Sener, Cevat; Marovic, Branko; Kozlovszky, Miklos; Neagu, Gabriel
In the past 6 years, a number of targeted initiatives, funded by the European Commission via its information society and RTD programmes and by Greek infrastructure development actions, have articulated successful regional development actions in South East Europe that can be used as a role model for other international developments. The SEEREN (South-East European Research and Education Networking initiative) project, through its two phases, established the SEE segment of the pan-European GÉANT network and successfully connected the research and scientific communities in the region. Currently, the SEE-LIGHT project is working towards establishing a dark-fiber backbone that will interconnect most national Research and Education networks in the region. On the distributed computing and storage provisioning (i.e., Grid) plane, the SEE-GRID (South-East European GRID e-Infrastructure Development) project, similarly through its two phases, has established a strong human network in the area of scientific computing, has set up a powerful regional Grid infrastructure, and has attracted a number of applications from different fields from countries throughout South-East Europe. The current SEEGRID-SCI project, ending in April 2010, empowers the regional user communities from the fields of meteorology, seismology and environmental protection in common use and sharing of the regional e-Infrastructure. Current technical initiatives in formulation are focusing on a set of coordinated actions in the area of HPC and on application fields making use of HPC initiatives. Finally, the current SEERA-EI project brings together policy makers - programme managers from 10 countries in the region. The project aims to establish a communication platform between programme managers, pave the way towards a common e-Infrastructure strategy and vision, and implement concrete actions for common funding of electronic infrastructures at the regional level.
The regional vision of establishing an e-Infrastructure compatible with European developments, and empowering the scientists in the region to participate equally in the use of pan-European infrastructures, is materializing through the above initiatives. This model has a number of concrete operational and organizational guidelines which can be adapted to help e-Infrastructure developments in other world regions. In this paper we review the most important developments and contributions by the SEEGRID-SCI project.
e!DAL - a framework to store, share and publish research data
2014-01-01
Background The life-science community faces a major challenge in handling “big data”, highlighting the need for high quality infrastructures capable of sharing and publishing research data. Data preservation, analysis, and publication are the three pillars in the “big data life cycle”. The infrastructures currently available for managing and publishing data are often designed to meet domain-specific or project-specific requirements, resulting in the repeated development of proprietary solutions and lower quality data publication and preservation overall. Results e!DAL is a lightweight software framework for publishing and sharing research data. Its main features are version tracking, metadata management, information retrieval, registration of persistent identifiers (DOI), an embedded HTTP(S) server for public data access, access as a network file system, and a scalable storage backend. e!DAL is available as an API for local non-shared storage and as a remote API featuring distributed applications. It can be deployed “out-of-the-box” as an on-site repository. Conclusions e!DAL was developed based on experiences coming from decades of research data management at the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK). Initially developed as a data publication and documentation infrastructure for the IPK’s role as a data center in the DataCite consortium, e!DAL has grown towards being a general data archiving and publication infrastructure. The e!DAL software has been deployed into the Maven Central Repository. Documentation and Software are also available at: http://edal.ipk-gatersleben.de. PMID:24958009
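The version-tracking feature described above can be illustrated with a toy store in which every save keeps the prior revisions. This is a hypothetical sketch of the idea, not the real e!DAL Java API:

```python
class VersionedStore:
    """Minimal version tracking: each store() appends a new revision."""

    def __init__(self):
        self._versions = {}  # name -> list of payloads, oldest first

    def store(self, name, payload):
        """Save a new revision and return its 1-based version number."""
        self._versions.setdefault(name, []).append(payload)
        return len(self._versions[name])

    def retrieve(self, name, version=None):
        """Return the latest revision, or a specific 1-based version."""
        revisions = self._versions[name]
        return revisions[-1] if version is None else revisions[version - 1]
```

In a repository like the one described, each published version would additionally carry metadata and a persistent identifier (e.g. a DOI) so citations resolve to an immutable revision.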
e!DAL--a framework to store, share and publish research data.
Arend, Daniel; Lange, Matthias; Chen, Jinbo; Colmsee, Christian; Flemming, Steffen; Hecht, Denny; Scholz, Uwe
2014-06-24
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-11
... Known as Brinson Partners, Inc., Corporate Center Division; Group Technology Infrastructure Services... Division, Group Technology Infrastructure Services, Distributed Systems and Storage Group, Chicago... Infrastructure Services, Distributed Systems and Storage Group have their wages reported under a separate...
International Convergence on Geoscience Cyberinfrastructure
NASA Astrophysics Data System (ADS)
Allison, M. L.; Atkinson, R.; Arctur, D. K.; Cox, S.; Jackson, I.; Nativi, S.; Wyborn, L. A.
2012-04-01
There is growing international consensus on addressing the challenges to cyber(e)-infrastructure for the geosciences. These challenges include: Creating common standards and protocols; Engaging the vast number of distributed data resources; Establishing practices for recognition of and respect for intellectual property; Developing simple data and resource discovery and access systems; Building mechanisms to encourage development of web service tools and workflows for data analysis; Brokering the diverse disciplinary service buses; Creating sustainable business models for maintenance and evolution of information resources; Integrating the data management life-cycle into the practice of science. Efforts around the world are converging towards de facto creation of an integrated global digital data network for the geosciences based on common standards and protocols for data discovery and access, and a shared vision of distributed, web-based, open source interoperable data access and integration. Commonalities include use of Open Geospatial Consortium (OGC) and ISO specifications and standardized data interchange mechanisms. For multidisciplinarity, mediation, adaptation, and profiling services have been successfully introduced to leverage the geoscience standards commonly used by the different geoscience communities, introducing a brokering approach that extends the basic SOA archetype. Principal challenges are less technical than cultural, social, and organizational. Before we can make data interoperable, we must make people interoperable. These challenges are being met by increased coordination of development activities (technical, organizational, social) among leaders and practitioners in national and international efforts across the geosciences to foster commonalities across disparate networks.
In doing so, we will 1) leverage and share resources and developments, 2) facilitate and enhance emerging technical and structural advances, 3) promote interoperability across scientific domains, 4) support the promulgation and institutionalization of agreed-upon standards, protocols, and practice, 5) enhance knowledge transfer not only across the community but into the domain sciences, 6) lower existing entry barriers for users and data producers, and 7) build on the existing disciplinary infrastructures, leveraging their service buses. All of these objectives are required for establishing a permanent and sustainable cyber(e)-infrastructure for the geosciences. The rationale for this approach is well articulated in the AuScope mission statement: "Many of these problems can only be solved on a national, if not global scale. No single researcher, research institution, discipline or jurisdiction can provide the solutions. We increasingly need to embrace e-Research techniques and use the internet not only to access nationally distributed datasets, instruments and compute infrastructure, but also to build online, 'virtual' communities of globally dispersed researchers." Multidisciplinary interoperability can be successfully pursued by adopting a "system of systems" or "network of networks" philosophy. This approach aims to: (a) supplement but not supplant systems mandates and governance arrangements; (b) keep the existing capacities as autonomous as possible; (c) lower entry barriers; (d) build incrementally on existing infrastructures (information systems); (e) incorporate heterogeneous resources by introducing distribution and mediation functionalities. This approach has been adopted by the European INSPIRE (Infrastructure for Spatial Information in the European Community) initiative and by the international GEOSS (Global Earth Observation System of Systems) programme.
The challenge of developing ethical guidelines for a research infrastructure
NASA Astrophysics Data System (ADS)
Kutsch, Werner Leo
2016-04-01
The mission of the Integrated Carbon Observation System (ICOS RI) is to enable research to understand greenhouse gas (GHG) budgets and perturbations. The ICOS RI provides the long-term observations required to understand the present state and predict the future behaviour of the global carbon cycle and GHG emissions. Technological developments and implementations related to GHGs will be promoted by linking research, education and innovation. To provide these data, ICOS RI is organized as a distributed research infrastructure. The backbone of ICOS RI is the national measurement stations, such as the ICOS atmosphere, ecosystem and ocean stations. The ICOS Central Facilities are the European-level ICOS RI centres, which have the specific tasks of collecting and processing the data and samples received from the national measurement networks. During the establishment of ICOS RI, ethical guidelines were developed. These guidelines describe principles of ethics in the research activities that should be applied within ICOS RI. They should be acknowledged and followed by all researchers affiliated to ICOS RI and should be supported by all participating institutions. The presentation (1) describes the general challenge of developing ethical guidelines in a complex international infrastructure and (2) gives an overview of the content, which includes different kinds of conflicts of interest, data ethics and social responsibility.
Layer 1 VPN services in distributed next-generation SONET/SDH networks with inverse multiplexing
NASA Astrophysics Data System (ADS)
Ghani, N.; Muthalaly, M. V.; Benhaddou, D.; Alanqar, W.
2006-05-01
Advances in next-generation SONET/SDH along with GMPLS control architectures have enabled many new service provisioning capabilities. In particular, a key services paradigm is the emergent Layer 1 virtual private network (L1 VPN) framework, which allows multiple clients to utilize a common physical infrastructure and provision their own 'virtualized' circuit-switched networks. This obviates expensive infrastructure builds and increases resource utilization for carriers. Along these lines, a novel L1 VPN services resource management scheme for next-generation SONET/SDH networks is proposed that fully leverages advanced virtual concatenation and inverse multiplexing features. Additionally, both centralized and distributed GMPLS-based implementations are presented to support the proposed L1 VPN services model. Detailed performance analysis results are presented along with avenues for future research.
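Virtual concatenation sizes a 'virtualized' circuit by bundling N identical members rather than jumping to the next rigid contiguous rate. Assuming the commonly cited VC-4 payload rate of roughly 149.76 Mb/s (an assumption for illustration; exact figures depend on the mapping), the member count for a requested bandwidth is a simple ceiling:

```python
import math

VC4_PAYLOAD_MBPS = 149.76  # approximate VC-4 payload rate (assumed here)

def vcat_members(requested_mbps, member_mbps=VC4_PAYLOAD_MBPS):
    """Number of virtually concatenated members needed for a request."""
    return math.ceil(requested_mbps / member_mbps)
```

For example, a 1 Gb/s client request maps to a VC-4-7v group instead of wasting an entire 2.5 Gb/s contiguous container, which is the utilization gain the abstract alludes to.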
European distributed seismological data archives infrastructure: EIDA
NASA Astrophysics Data System (ADS)
Clinton, John; Hanka, Winfried; Mazza, Salvatore; Pederson, Helle; Sleeman, Reinoud; Stammler, Klaus; Strollo, Angelo
2014-05-01
The European Integrated waveform Data Archive (EIDA) is a distributed data center system within ORFEUS that (a) securely archives seismic waveform data and related metadata gathered by European research infrastructures, and (b) provides transparent access to the archives for the geosciences research communities. EIDA was founded in 2013 by the ORFEUS Data Center, GFZ, RESIF, ETH, INGV and BGR to ensure the sustainability of a distributed archive system, to implement standards (e.g. FDSN StationXML, FDSN web services) and to coordinate new developments. Under the mandate of the ORFEUS Board of Directors and Executive Committee, the founding group is responsible for steering and maintaining the technical developments and organization of the European distributed seismic waveform data archive and its integration within broader multidisciplinary frameworks like EPOS. EIDA currently offers uniform access to unrestricted data from 8 European archives (www.orfeus-eu.org/eida), linked by the Arclink protocol, hosting data from 75 permanent networks (1,800+ stations) and 33 temporary networks (1,200+ stations). Moreover, each archive may also provide unique, restricted datasets. A web interface, developed at GFZ, offers interactive access to different catalogues (EMSC, GFZ, USGS) and EIDA waveform data. Clients and toolboxes like arclink_fetch and ObsPy can connect directly to any EIDA node to collect data. Current developments are directed towards the implementation of quality parameters and strong-motion parameters.
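EIDA nodes expose standard FDSN web services alongside Arclink; composing a dataselect query is mostly URL construction. A sketch follows: the host name is hypothetical, and support for the `format` parameter varies by implementation:

```python
from urllib.parse import urlencode

def dataselect_url(host, net, sta, loc, cha, start, end):
    """Compose an FDSN dataselect query URL for a waveform time window."""
    params = urlencode({
        "net": net, "sta": sta, "loc": loc, "cha": cha,
        "starttime": start, "endtime": end,
    })
    return "https://%s/fdsnws/dataselect/1/query?%s" % (host, params)
```

In practice a client like ObsPy wraps exactly this kind of request and parses the returned miniSEED into waveform objects.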
Distributed Stress Sensing and Non-Destructive Tests Using Mechanoluminescence Materials
NASA Astrophysics Data System (ADS)
Rahimi, Mohammad Reza
Rapid aging of infrastructure systems is currently pervasive in the US, and the anticipated cost of rehabilitating aging lifelines through 2020 will reach 3.6 trillion US dollars (ASCE 2013). Reliable condition or serviceability assessment is critically important for decision-making in the economic and timely maintenance of infrastructure systems. Advanced sensors and nondestructive test (NDT) methods are the key technologies for structural health monitoring (SHM) applications that can provide information on the current state of structures. There are many traditional sensors and NDT methods, for example strain gauges, ultrasound, radiography and other X-ray techniques, to detect defects in infrastructure. Considering that civil infrastructure is typically large-scale and exhibits complex behavior, estimating structural conditions with local sensing and NDT methods is a challenging task. Non-contact, distributed (or full-field) sensing and NDT methods are desirable because they can provide rich information on the state of civil infrastructure. Materials with the ability to emit light, especially in the visible range, are known as luminescent materials. The mechanoluminescence (ML) phenomenon is light emission from luminescent materials in response to an induced mechanical stress. ML materials offer new opportunities for SHM because they can directly visualize stress and crack distributions on the surface of structures through ML light emission. Although substantial materials research on ML phenomena has been conducted, applications of ML sensors to full-field stress and crack visualization are still at an early stage and have yet to mature. Moreover, practical applications of ML sensors for SHM of civil infrastructure are difficult because numerous challenging problems (e.g. environmental effects) arise in actual applications.
In order to realize a practical SHM system employing ML sensors, more research needs to be conducted on, for example, fundamental understanding of the physics of the ML phenomenon, methods for quantitative stress measurement, calibration methods for ML sensors, improvement of sensitivity, optimal manufacturing and design of ML sensors, environmental effects on the ML phenomenon (e.g. temperature), and image processing and analysis. In this research, the fundamental ML phenomena of the two most promising ML sensing materials were experimentally studied, and a methodology for full-field quantitative strain measurement was proposed for the first time, along with a standardized calibration method. The characteristics and behavior of ML composites and thin films coated on structures were studied under various material tests, including compression, tension, pure shear, and bending. In addition, the sensitivity of ML emission to manufacturing parameters and experimental conditions was addressed in order to find the optimal design of the ML sensor. A phenomenological stress-optics transduction model for predicting the ML light intensity from a thin-film ML coating sensor subjected to in-plane stresses was proposed. A new full-field quantitative strain measurement methodology based on an ML thin-film sensor was developed, for the first time, in order to visualize and measure the strain field. The results from the ML sensor were compared with and verified against finite element simulation results. For NDT applications of ML sensors, experimental tests were conducted to visualize cracks on structural surfaces and detect damage in structural components. In summary, this research proposes and realizes a new distributed stress sensor and NDT method using ML sensing materials. The proposed method is experimentally validated to be effective for stress measurement and crack visualization.
Successful completion of this research provides a step toward a commercial light-intensity-based optical sensor to be used as a new full-field stress measurement technology and NDT method.
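Quantitative strain measurement with an ML sensor hinges on a calibration curve mapping emitted light intensity to strain. Assuming a simple linear response (an assumption made here for illustration, not the dissertation's transduction model), calibration and inversion reduce to a least-squares fit:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b over calibration pairs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def strain_from_intensity(intensity, a, b):
    """Invert the calibration: strain = (intensity - b) / a."""
    return (intensity - b) / a
```

With calibration pairs recorded under controlled loading, the fitted (a, b) converts each pixel's intensity in an ML image into a strain estimate, giving a full-field map.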
Modeling and Managing Risk in Billing Infrastructures
NASA Astrophysics Data System (ADS)
Baiardi, Fabrizio; Telmon, Claudio; Sgandurra, Daniele
This paper discusses risk modeling and risk management in information and communications technology (ICT) systems for which the attack impact distribution is heavy-tailed (e.g., a power-law distribution) and the average risk is unbounded. Systems with these properties include billing infrastructures used to charge customers for the services they access. Attacks against billing infrastructures can be classified as peripheral attacks and backbone attacks. The goal of a peripheral attack is to tamper with user bills; a backbone attack seeks to seize control of the billing infrastructure. The probability distribution of the overall impact of an attack on a billing infrastructure also has a heavy-tailed curve. This implies that the probability of a massive impact cannot be ignored and that the average impact may be unbounded - thus, even the most expensive countermeasures would be cost-effective. Consequently, the only strategy for managing risk is to increase the resilience of the infrastructure by employing redundant components.
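The unbounded-average claim can be made concrete with a Pareto tail: for tail exponent alpha <= 1 the mean diverges, so the expected impact capped at c keeps growing as c is raised. A numerical check, integrating the survival function of a Pareto(alpha, x_m) variable (the distribution choice is illustrative, not taken from the paper):

```python
def truncated_mean(cap, alpha=1.0, xm=1.0, n=100000):
    """E[min(X, cap)] for a Pareto(alpha, xm) impact distribution,
    via midpoint-rule integration of the survival function.
    For alpha <= 1 this grows without bound as cap increases."""
    def survival(x):
        return 1.0 if x < xm else (xm / x) ** alpha
    dx = cap / n
    return sum(survival((i + 0.5) * dx) for i in range(n)) * dx
```

With alpha = 1 and x_m = 1 the truncated mean is 1 + ln(cap), so every doubling of the cap adds the same increment: no finite average impact exists, which is why resilience rather than expected-loss optimization drives the risk strategy.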
Integrating sea floor observatory data: the EMSO data infrastructure
NASA Astrophysics Data System (ADS)
Huber, Robert; Azzarone, Adriano; Carval, Thierry; Doumaz, Fawzi; Giovanetti, Gabriele; Marinaro, Giuditta; Rolin, Jean-Francois; Beranzoli, Laura; Waldmann, Christoph
2013-04-01
The European research infrastructure EMSO is a European network of fixed-point, deep-seafloor and water-column observatories deployed at key sites of the European continental margin and the Arctic. It aims to provide the technological and scientific framework for investigating environmental processes related to the interaction between the geosphere, biosphere, and hydrosphere, and for sustainable management through long-term monitoring, including real-time data transmission. Since 2006, EMSO has been on the ESFRI (European Strategy Forum on Research Infrastructures) roadmap, and it entered its construction phase in 2012. Within this framework, EMSO is contributing to large infrastructure integration projects such as ENVRI and COOPEUS. The EMSO infrastructure is geographically distributed at key sites in European waters, spanning from the Arctic, through the Atlantic and Mediterranean Sea, to the Black Sea. It presently consists of thirteen sites identified by the scientific community according to their importance with respect to marine ecosystems, climate change and marine geohazards. The data infrastructure for EMSO is being designed as a distributed system. Presently, EMSO data collected during experiments at each EMSO site are locally stored and organized in catalogues or relational databases run by the responsible regional EMSO nodes. Three major institutions and their data centers currently offer access to EMSO data: PANGAEA, INGV and IFREMER. In continuation of the IT activities performed during EMSO's twin project ESONET, EMSO is now implementing the ESONET data architecture within an operational EMSO data infrastructure. EMSO aims to be compliant with relevant marine initiatives such as MyOceans, EUROSITES, EuroARGO, SEADATANET and EMODNET, as well as to meet the requirements of international and interdisciplinary projects such as COOPEUS, ENVRI, EUDAT and iCORDI.
A major focus is therefore set on standardization and interoperability of the EMSO data infrastructure. Besides common standards for metadata exchange such as OpenSearch and OAI-PMH, EMSO has chosen to implement core Open Geospatial Consortium (OGC) standards, such as Catalogue Service for the Web (CS-W), and standards from the Sensor Web Enablement (SWE) suite, such as Sensor Observation Service (SOS) and Observations and Measurements (O&M). Furthermore, strong integration efforts are currently being undertaken to harmonize data formats (e.g. NetCDF) as well as the ontologies and terminologies used. The presentation will also inform users about the discovery and visualization procedures for the EMSO data presently available.
NASA Astrophysics Data System (ADS)
van Hemert, Jano; Vilotte, Jean-Pierre
2010-05-01
Research in earthquakes and seismology addresses fundamental problems in understanding Earth's internal wave sources and structures, and augments applications addressing societal concerns about natural hazards, energy resources and environmental change. This community is central to the European Plate Observing System (EPOS) - the ESFRI initiative in solid Earth sciences. Global and regional seismology monitoring systems are continuously operated and are transmitting a growing wealth of data from Europe and from around the world. These tremendous volumes of seismograms, i.e., records of ground motion as a function of time, have a definite multi-use attribute, which puts a great premium on open-access data infrastructures that are integrated globally. In Europe, the earthquake and seismology community is part of the European Integrated Data Archives (EIDA) infrastructure and is structured as "horizontal" data services. On top of this distributed data archive system, the community has recently developed, within the EC project NERIES, advanced SOA-based web services and a unified portal system. Enabling advanced analysis of these data by utilising a data-aware distributed computing environment is instrumental to fully exploiting this cornucopia of data and to guaranteeing optimal operation of the high-cost monitoring facilities. The strategy of VERCE is driven by the needs of data-intensive applications in data mining and modelling and will be illustrated through a set of applications. It aims to provide a comprehensive architecture and framework adapted to the scale and diversity of these applications, and to integrate the community data infrastructure with Grid and HPC infrastructures. A first novel aspect is a service-oriented architecture that provides well-equipped integrated workbenches, with an efficient communication layer between data and Grid infrastructures, augmented with bridges to the HPC facilities.
A second novel aspect is the coupling between Grid data analysis and HPC data modelling applications through workflow and data sharing mechanisms. VERCE will develop important interactions with the European infrastructure initiatives in Grid and HPC computing. The VERCE team: CNRS-France (IPG Paris, LGIT Grenoble), UEDIN (UK), KNMI-ORFEUS (Holland), EMSC, INGV (Italy), LMU (Germany), ULIV (UK), BADW-LRZ (Germany), SCAI (Germany), CINECA (Italy)
Sharing Data and Analytical Resources Securely in a Biomedical Research Grid Environment
Langella, Stephen; Hastings, Shannon; Oster, Scott; Pan, Tony; Sharma, Ashish; Permar, Justin; Ervin, David; Cambazoglu, B. Barla; Kurc, Tahsin; Saltz, Joel
2008-01-01
Objectives To develop a security infrastructure to support controlled and secure access to data and analytical resources in a biomedical research Grid environment, while facilitating resource sharing among collaborators. Design A Grid security infrastructure, called Grid Authentication and Authorization with Reliably Distributed Services (GAARDS), is developed as a key architecture component of the NCI-funded cancer Biomedical Informatics Grid (caBIG™). GAARDS is designed to support, in a distributed environment: 1) efficient provisioning and federation of user identities and credentials; 2) group-based access control, with which resource providers can enforce policies based on community-accepted groups and local groups; and 3) management of a trust fabric, so that policies can be enforced based on required levels of assurance. Measurements GAARDS is implemented as a suite of Grid services and administrative tools. It provides three core services: Dorian for management and federation of user identities, Grid Trust Service for maintaining and provisioning a federated trust fabric within the Grid environment, and Grid Grouper for enforcing authorization policies based on both local and Grid-level groups. Results The GAARDS infrastructure is available as a stand-alone system and as a component of the caGrid infrastructure. More information about GAARDS can be accessed at http://www.cagrid.org. Conclusions GAARDS provides a comprehensive system to address the security challenges associated with environments in which resources may be located at different sites, requests to access the resources may cross institutional boundaries, and user credentials are created, managed, and revoked dynamically in a de-centralized manner. PMID:18308979
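The group-based access control the abstract describes, where a resource provider honors both locally defined groups and Grid-level (federated) groups, can be sketched in a few lines. This is an illustrative toy, not the actual GAARDS or Grid Grouper API; all class, group and resource names are invented.

```python
# Toy sketch of group-based authorization in the style GAARDS describes:
# a resource's policy names allowed groups, and a user is authorized if
# they belong to any allowed group, whether local or Grid-level.

class GroupAuthorizer:
    """Grants access if the user belongs to any group the policy allows."""

    def __init__(self):
        self.local_groups = {}  # group name -> set of user identities
        self.grid_groups = {}   # federated group name -> set of user identities
        self.policy = {}        # resource -> set of allowed group names

    def add_member(self, group, user, federated=False):
        store = self.grid_groups if federated else self.local_groups
        store.setdefault(group, set()).add(user)

    def allow(self, resource, group):
        self.policy.setdefault(resource, set()).add(group)

    def is_authorized(self, user, resource):
        for group in self.policy.get(resource, set()):
            members = (self.local_groups.get(group, set())
                       | self.grid_groups.get(group, set()))
            if user in members:
                return True
        return False

auth = GroupAuthorizer()
auth.add_member("caBIG-researchers", "alice", federated=True)  # Grid-level group
auth.add_member("site-admins", "bob")                          # local group
auth.allow("/images/study42", "caBIG-researchers")
auth.allow("/images/study42", "site-admins")
print(auth.is_authorized("alice", "/images/study42"))  # True
print(auth.is_authorized("carol", "/images/study42"))  # False
```

The point of the design is that the policy check is indifferent to where a group is administered, which is what lets local sites and the federation share one enforcement path.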
NASA Astrophysics Data System (ADS)
De Bruin, T.
2012-12-01
The Wadden Sea, a UNESCO World Heritage Site along the northern coasts of The Netherlands, Germany and Denmark, is a very valuable, yet also highly vulnerable tidal flats area. Knowledge is key to the sustainable management of the Wadden Sea. This knowledge should be reliable, founded on promptly accessible information and sufficiently broad to integrate both ecological and economic analyses. The knowledge is gained from extensive monitoring of both ecological and socio-economic parameters. Even though many organisations, research institutes, government agencies and NGOs carry out monitoring, there is no central overview of monitoring activities, nor easy access to the resulting data. The 'Wadden Sea Long-Term Ecosystem Research' (WaLTER) project (2011-2015) aims to set up an integrated monitoring plan for the main environmental and management issues relevant to the Wadden Sea, such as sea-level rise, fisheries management, recreation and industry activities. The WaLTER data access infrastructure will be a distributed system of data providers, with a centralized data access portal. It is based on and makes use of the existing data access infrastructure of the Netherlands National Oceanographic Data Committee (NL-NODC), which has been operational since early 2009. The NL-NODC system is identical to and in fact developed by the European SeaDataNet project, furthering standardisation on a pan-European scale. The presentation will focus on the use of a distributed data access infrastructure to address the needs of different user groups such as policy makers, scientists and the general public.
An EMSO data case study within the INDIGO-DC project
NASA Astrophysics Data System (ADS)
Monna, Stephen; Marcucci, Nicola M.; Marinaro, Giuditta; Fiore, Sandro; D'Anca, Alessandro; Antonacci, Marica; Beranzoli, Laura; Favali, Paolo
2017-04-01
We present our experience based on a case study within the INDIGO-DataCloud (INtegrating Distributed data Infrastructures for Global ExplOitation) project (www.indigo-datacloud.eu). The aim of INDIGO-DC is to develop a data and computing platform targeting scientific communities. Our case study is an example of activities performed by INGV using data from seafloor observatories that are nodes of the infrastructure EMSO (European Multidisciplinary Seafloor and water column Observatory)-ERIC (www.emso-eu.org). EMSO is composed of several deep-seafloor and water column observatories, deployed at key sites in the European waters, thus forming a widely distributed pan-European infrastructure. In our case study we consider data collected by the NEMO-SN1 observatory, one of the EMSO nodes used for geohazard monitoring, located in the Western Ionian Sea in proximity of Etna volcano. Starting from the case study, through an agile approach, we defined some requirements for INDIGO developers, and tested some of the proposed INDIGO solutions that are of interest for our research community. Given that EMSO is a distributed infrastructure, we are interested in INDIGO solutions that allow access to distributed data storage. Access should be both user-oriented and machine-oriented, and with the use of a common identity and access system. For this purpose, we have been testing: - ONEDATA (https://onedata.org), as global data management system. - INDIGO-IAM as Identity and Access Management system. Another aspect we are interested in is the efficient data processing, and we have focused on two types of INDIGO products: - Ophidia (http://ophidia.cmcc.it), a big data analytics framework for eScience for the analysis of multidimensional data. - A collection of INDIGO Services to run processes for scientific computing through the INDIGO Orchestrator.
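Ophidia, mentioned above, is a big data analytics framework for multidimensional (datacube) data. As a rough, self-contained analogue of the kind of reduction such a framework performs on observatory time series, the sketch below collapses a synthetic (time, lat, lon) cube of seafloor readings to one mean per time step; the variable names, shapes and values are invented for illustration and do not use the Ophidia API.

```python
# Datacube-style reduction: average a (time, lat, lon) array over its
# spatial axes, keeping the time axis, using plain NumPy.
import numpy as np

rng = np.random.default_rng(seed=0)
# 4 time steps, 3x3 spatial grid of synthetic readings (degrees C)
cube = 13.0 + rng.normal(0.0, 0.1, size=(4, 3, 3))

# Reduce over axes 1 and 2 (the spatial grid): one mean per time step
per_step_mean = cube.mean(axis=(1, 2))

print(per_step_mean.shape)  # (4,)
```

A production framework adds chunking, parallel execution and provenance on top, but the core operation is this kind of axis-wise reduction.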
Managing Sustainable Data Infrastructures: The Gestalt of EOSDIS
NASA Astrophysics Data System (ADS)
Behnke, J.; Lindsay, F. E.; Lowe, D. R.; Mitchell, A. E.; Lynnes, C.
2016-12-01
NASA's Earth Observing System Data and Information System (EOSDIS) has been a central component of the NASA Earth observation program since the 1990s. The data collected by NASA's remote sensing instruments represent a significant public investment in research. EOSDIS provides free and open access to these data to a worldwide public research community. From the very beginning, EOSDIS was conceived as a system built on partnerships between NASA Centers, US agencies and academia. EOSDIS manages a wide range of Earth science discipline data, including cryosphere, land cover change, polar processes, field campaigns, ocean surface, digital elevation, atmosphere dynamics and composition, and inter-disciplinary research, among many others. Over the years, EOSDIS has evolved to support increasingly complex and diverse NASA Earth Science data collections. EOSDIS epitomizes a System of Systems, whose many varied and distributed parts are integrated into a single, highly functional, organized science data system. A distributed architecture was adopted to ensure discipline-specific support for the science data, while also leveraging standards and establishing policies and tools to enable interdisciplinary research and analysis across multiple scientific instruments. EOSDIS is composed of system elements such as geographically distributed archive centers that manage the stewardship of data. The infrastructure consists of underlying capabilities and connections that enable the primary system elements to function together. For example, one key infrastructure component is the common metadata repository, which enables discovery of all data within the EOSDIS system. EOSDIS employs processes and standards to ensure partners can work together effectively and provide coherent services to users. 
While the separation into domain-specific science archives helps to manage the wide variety of missions and datasets, the common services and practices serve to knit the overall system together into a coherent whole, with sharing of data, metadata, information and software making EOSDIS more than the simple sum of its parts. This paper will describe those parts and how the whole system works together to deliver Earth science data to millions of users.
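The common metadata repository described above is exposed through a public search API at cmr.earthdata.nasa.gov. The sketch below only assembles a keyword-search URL for collections (no network call is made); the endpoint and the `keyword`/`page_size` parameter names follow the CMR search interface as documented, but treat them as assumptions to verify against the current API reference.

```python
# Build a CMR collection-search URL; the query is not executed here.
from urllib.parse import urlencode

CMR_SEARCH = "https://cmr.earthdata.nasa.gov/search/collections.json"

def cmr_collection_query(keyword, page_size=10):
    """Return a CMR collection-search URL for a free-text keyword."""
    params = {"keyword": keyword, "page_size": page_size}
    return f"{CMR_SEARCH}?{urlencode(params)}"

url = cmr_collection_query("sea surface temperature", page_size=5)
print(url)
```

Issuing an HTTP GET against such a URL returns JSON metadata records, which is the "single point of discovery" role the repository plays across the distributed archive centers.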
Barriers to the conduct of randomised clinical trials within all disease areas.
Djurisic, Snezana; Rath, Ana; Gaber, Sabrina; Garattini, Silvio; Bertele, Vittorio; Ngwabyt, Sandra-Nadia; Hivert, Virginie; Neugebauer, Edmund A M; Laville, Martine; Hiesmayr, Michael; Demotes-Mainard, Jacques; Kubiak, Christine; Jakobsen, Janus C; Gluud, Christian
2017-08-01
Randomised clinical trials are key to advancing medical knowledge and to enhancing patient care, but major barriers to their conduct exist. The present paper presents some of these barriers. We performed systematic literature searches and internal European Clinical Research Infrastructure Network (ECRIN) communications during face-to-face meetings and telephone conferences from 2013 to 2017 within the context of the ECRIN Integrating Activity (ECRIN-IA) project. The following barriers to randomised clinical trials were identified: inadequate knowledge of clinical research and trial methodology; lack of funding; excessive monitoring; restrictive privacy law and lack of transparency; complex regulatory requirements; and inadequate infrastructures. There is a need for more pragmatic randomised clinical trials conducted with low risks of systematic and random errors, and multinational cooperation is essential. The present paper presents major barriers to randomised clinical trials. It also underlines the value of using a pan-European-distributed infrastructure to help investigators overcome barriers for multi-country trials in any disease area.
INDIGO: Building a DataCloud Framework to support Open Science
NASA Astrophysics Data System (ADS)
Chen, Yin; de Lucas, Jesus Marco; Aguilar, Fernando; Fiore, Sandro; Rossi, Massimiliano; Ferrari, Tiziana
2016-04-01
New solutions are required to support Data Intensive Science in the emerging panorama of e-infrastructures, including Grid, Cloud and HPC services. The architecture proposed by the INDIGO-DataCloud (INtegrating Distributed data Infrastructures for Global ExplOitation) (https://www.indigo-datacloud.eu/) H2020 project, provides the path to integrate IaaS resources and PaaS platforms to provide SaaS solutions, while satisfying the requirements posed by different Research Communities, including several in Earth Science. This contribution introduces the INDIGO DataCloud architecture, describes the methodology followed to assure the integration of the requirements from different research communities, including examples like ENES, LifeWatch or EMSO, and how they will build their solutions using different INDIGO components.
Advanced e-Infrastructures for Civil Protection applications: the CYCLOPS Project
NASA Astrophysics Data System (ADS)
Mazzetti, P.; Nativi, S.; Verlato, M.; Ayral, P. A.; Fiorucci, P.; Pina, A.; Oliveira, J.; Sorani, R.
2009-04-01
During the full cycle of emergency management, Civil Protection operative procedures involve many actors belonging to several institutions (civil protection agencies, public administrations, research centers, etc.) playing different roles (decision-makers, data and service providers, emergency squads, etc.). In this context the sharing of information is a vital requirement for making correct and effective decisions. Therefore, a European-wide technological infrastructure providing distributed and coordinated access to different kinds of resources (data, information, services, expertise, etc.) could enhance existing Civil Protection applications and even enable new ones. Such a European Civil Protection e-Infrastructure should be designed taking into account the specific requirements of Civil Protection applications and the state of the art in the scientific and technological disciplines that could make emergency management more effective. In recent years Grid technologies have reached a mature state, providing a platform for secure and coordinated resource sharing among participants grouped into so-called Virtual Organizations. Moreover, Earth and Space Science Informatics provides the conceptual tools for modeling the geospatial information shared in Civil Protection applications during its entire lifecycle. Therefore, a European Civil Protection e-infrastructure might be based on a Grid platform enhanced with Earth Sciences services. In the context of the 6th Framework Programme, the EU co-funded Project CYCLOPS (CYber-infrastructure for CiviL protection Operative ProcedureS), ended in December 2008, has addressed the problem of defining the requirements and identifying the research strategies and innovation guidelines towards an advanced e-Infrastructure for Civil Protection. Starting from the requirement analysis, CYCLOPS has proposed an architectural framework for a European Civil Protection e-Infrastructure. 
This architectural framework has been evaluated through the development of prototypes of two operative applications used by the Italian Civil Protection for Wild Fires Risk Assessment (RISICO) and by the French Civil Protection for Flash Flood Risk Management (SPC-GD). The results of these studies and proofs of concept have been used as the basis for the definition of research and innovation strategies aiming at the detailed design and implementation of the infrastructure. In particular, the main research themes and topics to be addressed have been identified and detailed. Finally, the obstacles to the innovation required for the adoption of this infrastructure, and possible strategies to overcome them, have been discussed.
The Microbial Resource Research Infrastructure MIRRI: Strength through Coordination
Stackebrandt, Erko; Schüngel, Manuela; Martin, Dunja; Smith, David
2015-01-01
Microbial resources have been recognized as essential raw materials for the advancement of health and later for biotechnology, agriculture, food technology and for research in the life sciences, as their enormous abundance and diversity offer an unparalleled source of unexplored solutions. Microbial domain biological resource centres (mBRC) provide live cultures and associated data to foster and support the development of basic and applied science in countries worldwide and especially in Europe, where the density of highly advanced mBRCs is high. The not-for-profit and distributed project MIRRI (Microbial Resource Research Infrastructure) aims to coordinate access to hitherto individually managed resources by developing a pan-European platform which takes the interoperability and accessibility of resources and data to a higher level. Providing a wealth of additional information and linking to datasets such as literature, environmental data, sequences and chemistry will enable researchers to select organisms suitable for their research and enable innovative solutions to be developed. The current independent policies and managed processes will be adapted by partner mBRCs to harmonize holdings, services, training, and accession policy and to share expertise. The infrastructure will improve access to enhanced quality microorganisms in an appropriate legal framework and to resource-associated data in a more interoperable way. PMID:27682123
The United States’ water and wastewater infrastructure is large (i.e., 16,000 publicly owned treatment works, 59,000 community water supplies, 600,000 miles of sewer, 1,000,000 miles of drinking water distribution piping), complex and expensive. The reliable and efficient functio...
NASA Astrophysics Data System (ADS)
Waldmann, H. C.; Koop-Jakobsen, K.
2014-12-01
The GEOSS Common Infrastructure (GCI) enables earth observations data providers to make their resources available in a global context and allows users of earth observations data to search, access and use data, tools and services available through the Global Earth Observation System of Systems. COOPEUS views the GCI as an important platform promoting cross-disciplinary approaches in the study of multifaceted environmental challenges, and the research infrastructures (RIs) in COOPEUS are currently in the process of registering resources and services within the GCI. To promote this work, COOPEUS and GEOSS held a joint workshop in July 2014, whose main scope was to get data managers of the COOPEUS RIs involved and establish the GCI as part of the COOPEUS interoperability framework. The workshop revealed that data policies of the individual RIs can often be the first impediment to their use of the GCI. As many RIs administer data from many sources, permission to distribute the data must be in place before registration in the GCI. Through hands-on exercises registering resources from the COOPEUS RIs, the first steps were taken to implement the GCI as a platform for documenting the capabilities of the COOPEUS RIs. These exercises gave important feedback on the practical implementation of the GCI as well as the challenges lying ahead. For the COOPEUS RIs providing data, the benefits include improved discovery of and access to data and information, and increased visibility of available data, information and services, which will promote the structuring of the existing environmental research infrastructure landscape and improve interoperability. However, in order to attract research infrastructures to use the GCI, the registration process must be simplified and accelerated, for instance by allowing bulk data registration; resource registration and feedback by COOPEUS partners can play an important role in these efforts.
Grid computing technology for hydrological applications
NASA Astrophysics Data System (ADS)
Lecca, G.; Petitdidier, M.; Hluchy, L.; Ivanovic, M.; Kussul, N.; Ray, N.; Thieron, V.
2011-06-01
Advances in e-Infrastructure promise to revolutionize sensing systems and the way in which data are collected and assimilated, and complex water systems are simulated and visualized. According to the EU Infrastructure 2010 work-programme, data and compute infrastructures and their underlying technologies, whether oriented to tackling scientific challenges or to complex problem solving in engineering, are expected to converge into so-called knowledge infrastructures, leading to more effective research, education and innovation in the next decade and beyond. Grid technology is recognized as a fundamental component of e-Infrastructures. Nevertheless, this emerging paradigm highlights several topics, including data management, algorithm optimization, security, performance (speed, throughput, bandwidth, etc.), and scientific cooperation and collaboration issues, that require further examination to fully exploit it and to better inform future research policies. The paper illustrates the results of six different surface and subsurface hydrology applications that have been deployed on the Grid. All the applications aim to answer strong requirements from civil society at large relating to natural and anthropogenic risks. Grid technology has been successfully tested to improve flood prediction, groundwater resources management and Black Sea hydrological survey, by providing large computing resources. It is also shown that Grid technology facilitates e-cooperation among partners by means of services for authentication and authorization, seamless access to distributed data sources, data protection and access rights, and standardization.
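The pattern behind applications like flood prediction on the Grid is farming out many independent model runs (an ensemble of scenarios) to parallel workers. The sketch below reproduces that pattern locally with a thread pool; the "simulation" is a made-up stand-in function, not a real hydrological model, and a real Grid deployment would dispatch each run to a remote compute element instead.

```python
# Ensemble dispatch pattern: independent scenario runs executed in
# parallel, results collected in submission order.
from concurrent.futures import ThreadPoolExecutor

def run_simulation(rainfall_mm):
    """Placeholder model: peak discharge grows linearly with rainfall."""
    return {"rainfall_mm": rainfall_mm, "peak_discharge": 2.5 * rainfall_mm + 10.0}

def run_ensemble(scenarios):
    # Runs are independent, so they can be dispatched concurrently;
    # Executor.map preserves the input order of the scenarios.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(run_simulation, scenarios))

results = run_ensemble([10, 20, 40])
worst = max(results, key=lambda r: r["peak_discharge"])
print(worst["rainfall_mm"])  # 40
```

The Grid services mentioned in the abstract (authentication, data access, access rights) wrap exactly this dispatch step so that the workers can live at different institutions.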
cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design
Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R.; Xu, Wei
2016-01-01
Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to the widely used protein design software OSPREY, to allow the original design framework to scale to commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches. PMID:27154509
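The GMEC problem the abstract refers to can be stated on a toy scale: pick one rotamer per residue position so that the sum of self-energies and pairwise interaction energies is minimal. The energies below are invented and the space is tiny enough to enumerate exhaustively; real CPD uses physics-based energy functions and the pruning/partitioning machinery cOSPREY adds precisely because enumeration is infeasible.

```python
# Toy GMEC search: exhaustive minimization over per-position rotamer
# choices with self + pairwise energies (all values made up).
from itertools import product

# self-energy per (position, rotamer)
self_e = {(0, 0): 1.0, (0, 1): 0.5,
          (1, 0): 0.2, (1, 1): 0.9, (1, 2): 0.4,
          (2, 0): 0.3, (2, 1): 0.1}
# pairwise energy for (pos_i, rot_i, pos_j, rot_j), i < j; missing -> 0
pair_e = {(0, 1, 1, 0): -0.6, (1, 0, 2, 1): -0.2, (0, 0, 2, 0): 0.8}

choices = [[0, 1], [0, 1, 2], [0, 1]]  # candidate rotamers per position

def total_energy(conf):
    e = sum(self_e[(i, r)] for i, r in enumerate(conf))
    for i in range(len(conf)):
        for j in range(i + 1, len(conf)):
            e += pair_e.get((i, conf[i], j, conf[j]), 0.0)
    return e

gmec = min(product(*choices), key=total_energy)
print(gmec)  # (1, 0, 1)
```

GMEC-specific pruning (e.g., dead-end elimination) discards rotamers that provably cannot appear in the optimum before any enumeration, and state-search partitioning splits the remaining space across workers, which is what makes the cloud-distributed version scale.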
The data access infrastructure of the Wadden Sea Long Term Ecosystem Research (WaLTER) project
NASA Astrophysics Data System (ADS)
De Bruin, T.
2011-12-01
The Wadden Sea, north of The Netherlands, Germany and Denmark, is one of the most important tidal areas in the world. In 2009, the Wadden Sea was listed on the UNESCO World Heritage list. The area is noted for its ecological diversity and value, being a stopover for large numbers of migrating birds. The Wadden Sea is also used intensively for economic activities by inhabitants of the surrounding coasts and islands, as well as by the many tourists visiting the area every year. A whole series of monitoring programmes is carried out by a range of governmental bodies and institutes to study the natural processes occurring in the Wadden Sea ecosystems as well as the influence of human activities on those ecosystems. Yet the monitoring programmes are scattered, and it is difficult to get an overview of those monitoring activities or to get access to the resulting data. The Wadden Sea Long Term Ecosystem Research (WaLTER) project aims: 1. To provide a base set of consistent, standardized, long-term data on changes in the Wadden Sea ecological and socio-economic system in order to model and understand interrelationships with human use, climate variation and possible other drivers. 2. To provide a research infrastructure, open access to commonly shared databases, educational facilities and one or more field sites in which experimental, innovative and process-driven research can be carried out. This presentation will introduce the WaLTER project and explain the rationale for this project. The presentation will focus on the data access infrastructure which will be used for WaLTER. This infrastructure is part of the existing and operational infrastructure of the National Oceanographic Data Committee (NODC) in the Netherlands. The NODC forms the Dutch node in the European SeaDataNet consortium, which has built a European, distributed data access infrastructure. 
WaLTER, NODC and SeaDataNet all use the same technology, developed within the SeaDataNet-project, resulting in a high level of standardization across Europe. Benefits and pitfalls of using this infrastructure will be addressed.
Characterizing Crowd Participation and Productivity of Foldit Through Web Scraping
2016-03-01
BOINC: Berkeley Open Infrastructure for Network Computing; CDF: Cumulative Distribution Function; CPU: Central Processing Unit; CSSG: Crowdsourced Serious Game ... computers at once can create a similar capacity. According to Anderson [6], principal investigator for the Berkeley Open Infrastructure for Network Computing ... extraterrestrial life. From this project, a software-based distributed computing platform called the Berkeley Open Infrastructure for Network Computing
NASA Astrophysics Data System (ADS)
Wyborn, L. A.; Fraser, R.; Evans, B. J. K.; Friedrich, C.; Klump, J. F.; Lescinsky, D. T.
2017-12-01
Virtual Research Environments (VREs) are now part of academic infrastructures. Online research workflows can be orchestrated whereby data can be accessed from multiple external repositories with processing taking place on public or private clouds, and centralised supercomputers using a mixture of user codes, and well-used community software and libraries. VREs enable distributed members of research teams to actively work together to share data, models, tools, software, workflows, best practices, infrastructures, etc. These environments and their components are increasingly able to support the needs of undergraduate teaching. External to the research sector, they can also be reused by citizen scientists, and be repurposed for industry users to help accelerate the diffusion and hence enable the translation of research innovations. The Virtual Geophysics Laboratory (VGL) in Australia was started in 2012, built using a collaboration between CSIRO, the National Computational Infrastructure (NCI) and Geoscience Australia, with support funding from the Australian Government Department of Education. VGL comprises three main modules that provide an interface to enable users to first select their required data; to choose a tool to process that data; and then access compute infrastructure for execution. VGL was initially built to enable a specific set of researchers in government agencies access to specific data sets and a limited number of tools. Over the years it has evolved into a multi-purpose Earth science platform with access to an increased variety of data (e.g., Natural Hazards, Geochemistry), a broader range of software packages, and an increasing diversity of compute infrastructures. This expansion has been possible because of the approach to loosely couple data, tools and compute resources via interfaces that are built on international standards and accessed as network-enabled services wherever possible. 
Originally built for researchers who were not particular about general usability, VGL is now placing increasing emphasis on User Interfaces (UIs) and stability, which should lead to increased uptake in the education and industry sectors. Simultaneously, improvements are being added to facilitate access to data and tools by experienced researchers who want direct access to both data and flexible workflows.
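VGL's three-module flow (select data, choose a tool, submit to compute) relies on loose coupling: each stage communicates with the next only through a standard description rather than a shared implementation. The sketch below mimics that shape with plain dictionaries; every service, dataset and tool name is a stand-in invented for illustration, not the VGL API.

```python
# Loosely coupled three-stage pipeline in the shape of VGL's modules:
# data selection -> tool choice -> job submission/execution.

def select_data(region):
    # Stage 1: describe the data to fetch (stand-in dataset name).
    return {"dataset": "gravity_grid", "region": region}

def choose_tool(name):
    # Stage 2: describe the processing tool (stand-in identifier).
    return {"tool": name, "version": "stand-in"}

def submit_job(data, tool):
    # Stage 3: a real VGL submission targets cloud or HPC back-ends;
    # here we just assemble the job description and mark it complete.
    job = {"data": data, "tool": tool, "status": "submitted"}
    job["status"] = "done"
    return job

job = submit_job(select_data("Yilgarn"), choose_tool("escript"))
print(job["status"])  # done
```

Because the stages exchange only descriptions, any one of them can be swapped (new dataset, new tool, new compute back-end) without touching the others, which is the property that let VGL grow from a single-purpose tool into a multi-purpose platform.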
Towards Portable Large-Scale Image Processing with High-Performance Computing.
Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A
2018-05-03
High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. 
The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.
2014-01-01
The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computer tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019
GIS-and Web-based Water Resource Geospatial Infrastructure for Oil Shale Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Wei; Minnick, Matthew; Geza, Mengistu
2012-09-30
The Colorado School of Mines (CSM) was awarded a grant by the National Energy Technology Laboratory (NETL), Department of Energy (DOE) to conduct a research project entitled GIS- and Web-based Water Resource Geospatial Infrastructure for Oil Shale Development in October 2008. The ultimate goal of this research project is to develop a water resource geospatial infrastructure that serves as "baseline data" for creating solutions for water resource management and for supporting decision making on oil shale resource development. The project came to an end on September 30, 2012. This final project report presents the key findings from the project activity, major accomplishments, and expected impacts of the research. In the meantime, the gamma version (also known as Version 4.0) of the geodatabase, as well as various other deliverables stored on digital storage media, will be sent to the program manager at NETL, DOE via express mail. The key findings from the project activity include the quantitative spatial and temporal distribution of the water resource throughout the Piceance Basin, water consumption with respect to oil shale production, and data gaps identified. Major accomplishments of this project include the creation of a relational geodatabase, automated data processing scripts (Matlab) for linking the database with the surface water and geological models, an ArcGIS model for hydrogeologic data processing for groundwater model input, a 3D geological model, surface water/groundwater models, an energy resource development systems model, as well as a web-based geospatial infrastructure for data exploration, visualization and dissemination. This research will have broad impacts on the development of oil shale resources in the US. The geodatabase provides "baseline" data for further study of oil shale development and identification of further data collection needs.
The 3D geological model provides better understanding, through data interpolation and visualization techniques, of the Piceance Basin structure and the spatial distribution of the oil shale resources. The surface water/groundwater models quantify the water shortage and improve understanding of the spatial distribution of the available water resources. The energy resource development systems model reveals the phase shift between water usage and oil shale production, which will facilitate better planning for oil shale development. Detailed descriptions of the key findings from the project activity, major accomplishments, and expected impacts of the research are given in the section "ACCOMPLISHMENTS, RESULTS, AND DISCUSSION" of this report.
NASA Astrophysics Data System (ADS)
Bailo, Daniele; Scardaci, Diego; Spinuso, Alessandro; Sterzel, Mariusz; Schwichtenberg, Horst; Gemuend, Andre
2016-04-01
The mission of EGI-Engage project [1] is to accelerate the implementation of the Open Science Commons vision, where researchers from all disciplines have easy and open access to the innovative digital services, data, knowledge and expertise they need for collaborative and excellent research. The Open Science Commons is grounded on three pillars: the e-Infrastructure Commons, an ecosystem of services that constitute the foundation layer of distributed infrastructures; the Open Data Commons, where observations, results and applications are increasingly available for scientific research and for anyone to use and reuse; and the Knowledge Commons, in which communities have shared ownership of knowledge, participate in the co-development of software and are technically supported to exploit state-of-the-art digital services. To develop the Knowledge Commons, EGI-Engage is supporting the work of a set of community-specific Competence Centres, with participants from user communities (scientific institutes), National Grid Initiatives (NGIs), technology and service providers. Competence Centres collect and analyse requirements, integrate community-specific applications into state-of-the-art services, foster interoperability across e-Infrastructures, and evolve services through a user-centric development model. One of these Competence Centres is focussed on the European Plate Observing System (EPOS) [2] as representative of the solid earth science communities. EPOS is a pan-European long-term plan to integrate data, software and services from the distributed (and already existing) Research Infrastructures all over Europe, in the domain of the solid earth science. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth's physical and chemical processes that control earthquakes, volcanic eruptions, ground instability and tsunami as well as the processes driving tectonics and Earth's surface dynamics. 
EPOS will improve our ability to better manage the use of the subsurface of the Earth. EPOS started its Implementation Phase in October 2015 and is now actively working to integrate multidisciplinary data into a single e-infrastructure. Multidisciplinary data are organized and governed by the Thematic Core Services (TCS) - European-wide organizations and e-Infrastructures providing community-specific data and data products - and are driven by various scientific communities encompassing a wide spectrum of Earth science disciplines. TCS data, data products and services will be integrated into the Integrated Core Services (ICS) system, which will ensure their interoperability and access to these services by the scientific community as well as other users within society. The EPOS Competence Centre (EPOS CC) goal is to tackle two of the main challenges that the ICS are going to face in the near future, by taking advantage of the technical solutions provided by EGI. To this end, we will present the two pilot use cases the EGI-EPOS CC is developing: 1) The AAI pilot, dealing with the provision of transparent and homogeneous access to the ICS infrastructure for users owning different kinds of credentials (e.g. eduGAIN, OpenID Connect, X509 certificates etc.). Here the focus is on the mechanisms which allow credential delegation. 2) The computational pilot, which improves the back-end services of an existing application in the field of computational seismology, developed in the context of the EC-funded project VERCE. The application allows the processing and comparison of data resulting from the simulation of seismic wave propagation following a real earthquake and real measurements recorded by seismographs. While the simulation data is produced directly by the users and stored in a Data Management System, the observations need to be pre-staged from institutional data services, which are maintained by the community itself.
This use case aims at exploiting the EGI FedCloud e-infrastructure for Data Intensive analysis and also explores possible interaction with other Common Data Infrastructure initiatives as EUDAT. In the presentation, the state of the art of the two use cases, together with the open challenges and the future application will be discussed. Also, possible integration of EGI solutions with EPOS and other e-infrastructure providers will be considered. [1] EGI-ENGAGE https://www.egi.eu/about/egi-engage/ [2] EPOS http://www.epos-eu.org/
NASA Technical Reports Server (NTRS)
Limaye, Ashutosh S.; Molthan, Andrew L.; Srikishen, Jayanthi
2010-01-01
The development of the Nebula Cloud Computing Platform at NASA Ames Research Center provides an open-source solution for the deployment of scalable computing and storage capabilities relevant to the execution of real-time weather forecasts and the distribution of high resolution satellite data to the operational weather community. Two projects at Marshall Space Flight Center may benefit from use of the Nebula system. The NASA Short-term Prediction Research and Transition (SPoRT) Center facilitates the use of unique NASA satellite data and research capabilities in the operational weather community by providing datasets relevant to numerical weather prediction, and satellite data sets useful in weather analysis. SERVIR provides satellite data products for decision support, emphasizing environmental threats such as wildfires, floods, landslides, and other hazards, with interests in numerical weather prediction in support of disaster response. The Weather Research and Forecast (WRF) model Environmental Modeling System (WRF-EMS) has been configured for Nebula cloud computing use via the creation of a disk image and deployment of repeated instances. Given the available infrastructure within Nebula and the "infrastructure as a service" concept, the system appears well-suited for the rapid deployment of additional forecast models over different domains, in response to real-time research applications or disaster response. Future investigations into Nebula capabilities will focus on the development of a web mapping server and load balancing configuration to support the distribution of high resolution satellite data sets to users within the National Weather Service and international partners of SERVIR.
Public key infrastructure for DOE security research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aiken, R.; Foster, I.; Johnston, W.E.
This document summarizes the Department of Energy's Second Joint Energy Research/Defense Programs Security Research Workshop. The workshop built on the results of the first Joint Workshop, which reviewed security requirements represented in a range of mission-critical ER and DP applications, discussed commonalities and differences in ER/DP requirements and approaches, and identified an integrated common set of security research priorities. One significant conclusion of the first workshop was that progress in a broad spectrum of DOE-relevant security problems and applications could best be addressed through public-key cryptography based systems, and therefore depended upon the existence of a robust, broadly deployed public-key infrastructure. Hence, public-key infrastructure ("PKI") was adopted as a primary focus for the second workshop. The Second Joint Workshop covered a range of DOE security research and deployment efforts, as well as summaries of the state of the art in various areas relating to public-key technologies. Key findings were that a broad range of DOE applications can benefit from security architectures and technologies built on a robust, flexible, widely deployed public-key infrastructure; that there exists a collection of specific requirements for missing or undeveloped PKI functionality, together with a preliminary assessment of how these requirements can be met; that, while commercial developments can be expected to provide many relevant security technologies, there are important capabilities that commercial developments will not address, due to the unique scale, performance, diversity, distributed nature, and sensitivity of DOE applications; and that DOE should encourage and support research activities intended to increase understanding of security technology requirements, and to develop critical components not forthcoming from other sources in a timely manner.
Vibration Monitoring of Power Distribution Poles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark Scott; Gail Heath; John Svoboda
2006-04-01
Some of the most visible and least monitored elements of our national security infrastructure are the poles and towers used for the distribution of our nation's electrical power. Issues surrounding these elements within the United States include safety concerns such as unauthorized climbing and access, minor vandalism such as nut/bolt removal or destructive small-arms fire, and major vandalism such as the downing of power poles and towers by cutting the poles with a chainsaw or torches. The Idaho National Laboratory (INL) has an ongoing research program working to develop inexpensive and sensitive sensor platforms for the monitoring and characterization of damage to the power distribution infrastructure. This presentation covers the results from the instrumentation of a variety of power poles and wires with geophone assemblies and the recording of vibration data when power poles were subjected to a variety of stimuli. Initial results indicate that, for the majority of attacks against power poles, the resulting signal can be seen not only on the targeted pole but also on sensors several poles away in the distribution network, and that a distributed sensor system can be used to monitor remote and critical structures.
EGI-EUDAT integration activity - Pair data and high-throughput computing resources together
NASA Astrophysics Data System (ADS)
Scardaci, Diego; Viljoen, Matthew; Vitlacil, Dejan; Fiameni, Giuseppe; Chen, Yin; sipos, Gergely; Ferrari, Tiziana
2016-04-01
EGI (www.egi.eu) is a publicly funded e-infrastructure put together to give scientists access to more than 530,000 logical CPUs, 200 PB of disk capacity and 300 PB of tape storage to drive research and innovation in Europe. The infrastructure provides both high-throughput computing and cloud compute/storage capabilities. Resources are provided by about 350 resource centres distributed across 56 countries in Europe, the Asia-Pacific region, Canada and Latin America. EUDAT (www.eudat.eu) is a collaborative pan-European infrastructure providing research data services, training and consultancy for researchers, research communities, research infrastructures and data centres. EUDAT's vision is to enable European researchers and practitioners from any research discipline to preserve, find, access, and process data in a trusted environment, as part of a Collaborative Data Infrastructure (CDI) conceived as a network of collaborating, cooperating centres, combining the richness of numerous community-specific data repositories with the permanence and persistence of some of Europe's largest scientific data centres. EGI and EUDAT, in the context of their flagship projects, EGI-Engage and EUDAT2020, started a collaboration in March 2015 to harmonise the two infrastructures, including technical interoperability, authentication, authorisation and identity management, policy and operations. The main objective of this work is to provide end users with seamless access to an integrated infrastructure offering both EGI and EUDAT services, thereby pairing data and high-throughput computing resources together. To define the roadmap of this collaboration, EGI and EUDAT selected a set of relevant user communities, already collaborating with both infrastructures, which could bring requirements and help assign the right priorities to each of them. In this way, from the beginning, this activity has been genuinely driven by the end users.
The identified user communities are relevant European research infrastructures in the fields of Earth Science (EPOS and ICOS), Bioinformatics (BBMRI and ELIXIR) and Space Physics (EISCAT-3D). The first outcome of this activity has been the definition of a generic use case that captures the typical user scenario with respect to the integrated use of the EGI and EUDAT infrastructures. This generic use case allows a user to instantiate a set of virtual machine images on the EGI Federated Cloud to perform computational jobs that analyse data previously stored on EUDAT long-term storage systems. The results of such analysis can be staged back to EUDAT storage and, if needed, assigned persistent identifiers (PIDs) for future use. The implementation of this generic use case requires the following integration activities between EGI and EUDAT: (1) harmonisation of the user authentication and authorisation models, and (2) implementation of interface connectors between the relevant EGI and EUDAT services, particularly the EGI cloud compute facilities and the EUDAT long-term storage and PID systems. In the presentation, the collected user requirements and the implementation status of the generic use case will be shown. Furthermore, how the generic use case is currently applied to satisfy EPOS and ICOS needs will be described.
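The generic use case above follows a stage-in, compute, stage-out pattern with PID minting at the end. The sketch below captures that control flow with local stand-in functions; the storage dictionary, function names, and PID format are assumptions for illustration, not the actual EGI or EUDAT service APIs.

```python
import uuid

def stage_in(storage, object_id):
    """Fetch input data from long-term storage (stand-in for a EUDAT service)."""
    return storage[object_id]

def run_analysis(data):
    """Placeholder for a computational job run on a cloud VM instance."""
    return {"mean": sum(data) / len(data)}

def stage_out(storage, result):
    """Store the result back and mint a persistent identifier (illustrative format)."""
    pid = f"pid/{uuid.uuid4().hex[:8]}"
    storage[pid] = result
    return pid

# One pass through the workflow against an in-memory stand-in store.
eudat_store = {"obj-1": [1.0, 2.0, 3.0]}
data = stage_in(eudat_store, "obj-1")
pid = stage_out(eudat_store, run_analysis(data))
```

The integration activities named in the abstract map onto this skeleton: harmonised authentication governs who may call each step, and the interface connectors implement the two stage functions against the real services.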
AstroCloud, a Cyber-Infrastructure for Astronomy Research: Cloud Computing Environments
NASA Astrophysics Data System (ADS)
Li, C.; Wang, J.; Cui, C.; He, B.; Fan, D.; Yang, Y.; Chen, J.; Zhang, H.; Yu, C.; Xiao, J.; Wang, C.; Cao, Z.; Fan, Y.; Hong, Z.; Li, S.; Mi, L.; Wan, W.; Wang, J.; Yin, S.
2015-09-01
AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on CloudStack, an open-source software platform, we set up the cloud computing environment for the AstroCloud project. It consists of five distributed nodes across mainland China. Users can use and analyse data in this cloud computing environment. Based on GlusterFS, we built a scalable cloud storage system. Each user has a private space, which can be shared among different virtual machines and desktop systems. With this environment, astronomers can easily access astronomical data collected by different telescopes and data centers, and data producers can archive their datasets safely.
Effective Team Support: From Modeling to Software Agents
NASA Technical Reports Server (NTRS)
Remington, Roger W. (Technical Monitor); John, Bonnie; Sycara, Katia
2003-01-01
The purpose of this research contract was to perform multidisciplinary research between CMU psychologists, computer scientists and engineers and NASA researchers to design a next-generation collaborative system to support a team of human experts and intelligent agents. To achieve robust performance enhancement of such a system, we had proposed to perform task and cognitive modeling to thoroughly understand the impact technology makes on the organization and on key individual personnel. Guided by cognitively inspired requirements, we would then develop software agents that support the human team in decision making, information filtering, and information distribution and integration to enhance team situational awareness. During the period covered by this final report, we made substantial progress in modeling infrastructure and task infrastructure. Work is continuing under a different contract to complete empirical data collection, cognitive modeling, and the building of software agents to support the team's task.
NASA's Participation in the National Computational Grid
NASA Technical Reports Server (NTRS)
Feiereisen, William J.; Zornetzer, Steve F. (Technical Monitor)
1998-01-01
Over the last several years it has become evident that the character of NASA's supercomputing needs has changed. One of the major missions of the agency is to support the design and manufacture of aero- and space-vehicles with technologies that will significantly reduce their cost. It is becoming clear that improvements in the process of aerospace design and manufacturing will require a high-performance information infrastructure that allows geographically dispersed teams to draw upon resources that are broader than traditional supercomputing. A computational grid draws together our information resources into one system. We can foresee the time when a Grid will allow engineers and scientists to use the tools of supercomputers, databases and online experimental devices in a virtual environment to collaborate with distant colleagues. The concept of a computational grid has been spoken of for many years, but several events in recent times are conspiring to allow us to actually build one. In late 1997 the National Science Foundation initiated the Partnerships for Advanced Computational Infrastructure (PACI) program, which is built around the idea of distributed high-performance computing. The Alliance, led by the National Computational Science Alliance (NCSA), and the National Partnership for Advanced Computational Infrastructure (NPACI), led by the San Diego Supercomputing Center, have been instrumental in drawing together the "Grid Community" to identify the technology bottlenecks and propose a research agenda to address them. During the same period NASA has begun to reformulate parts of two major high-performance computing research programs to concentrate on distributed high-performance computing and has banded together with the PACI centers to address the research agenda in common.
Scalable collaborative risk management technology for complex critical systems
NASA Technical Reports Server (NTRS)
Campbell, Scott; Torgerson, Leigh; Burleigh, Scott; Feather, Martin S.; Kiper, James D.
2004-01-01
We describe here our project and plans to develop methods, software tools, and infrastructure tools to address challenges relating to geographically distributed software development. Specifically, this work is creating an infrastructure that supports applications working over distributed geographical and organizational domains and is using this infrastructure to develop a tool that supports project development using risk management and analysis techniques where the participants are not collocated.
A PKI Approach for Deploying Modern Secure Distributed E-Learning and M-Learning Environments
ERIC Educational Resources Information Center
Kambourakis, Georgios; Kontoni, Denise-Penelope N.; Rouskas, Angelos; Gritzalis, Stefanos
2007-01-01
While public key cryptography is continuously evolving and its installed base is growing significantly, recent research works examine its potential use in e-learning or m-learning environments. Public key infrastructure (PKI) and attribute certificates (ACs) can provide the appropriate framework to effectively support authentication and…
The web-based EnviroAtlas is an easy-to-use mapping and analysis tool built by the U.S. Environmental Protection Agency and its partners to provide information, data, and research on the relationships between ecosystems, built infrastructure, and societal well-being. The tool is ...
This project will contribute valuable information on the performance characteristics of new technology for use in infrastructure rehabilitation, and will provide additional credibility to the U.S. Environment Protection Agency’s (EPA) Office of Research and Development’s (ORD) fo...
Anomaly-based intrusion detection for SCADA systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, D.; Usynin, A.; Hines, J. W.
2006-07-01
Most critical infrastructure, such as chemical processing plants, electrical generation and distribution networks, and gas distribution systems, is monitored and controlled by Supervisory Control and Data Acquisition (SCADA) systems. These systems have been the focus of increased security concern, and there are fears that they could be the target of international terrorists. With the constantly growing number of internet-related computer attacks, there is evidence that our critical infrastructure may also be vulnerable. Researchers estimate that malicious online actions may have caused $75 billion in damage in 2007. One of the interesting countermeasures for enhancing information system security is intrusion detection. This paper briefly discusses the history of research in intrusion detection techniques and introduces the two basic detection approaches: signature detection and anomaly detection. Finally, it presents the application of techniques developed for monitoring critical process systems, such as nuclear power plants, to anomaly-based intrusion detection. The method uses an auto-associative kernel regression (AAKR) model coupled with the sequential probability ratio test (SPRT) and is applied to a simulated SCADA system. The results show that these methods can be used to detect a variety of common attacks.
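The AAKR-plus-SPRT scheme described above reconstructs each new observation from a memory matrix of normal operating data and then tests the residuals sequentially for a mean shift. A minimal sketch is shown below; the kernel bandwidth, hypothesized mean shift, and error rates are illustrative parameters, not values from the paper.

```python
import numpy as np

def aakr_predict(X_mem, x_query, h=1.0):
    """Auto-associative kernel regression: reconstruct a query vector as a
    Gaussian-kernel-weighted average of 'normal' memory vectors. A query far
    from all memory vectors reconstructs poorly, yielding large residuals."""
    d2 = np.sum((X_mem - x_query) ** 2, axis=1)   # squared distances to memory rows
    w = np.exp(-d2 / (2.0 * h ** 2))              # Gaussian kernel weights
    return (w / w.sum()) @ X_mem                  # weighted reconstruction

def sprt(residuals, m0=0.0, m1=1.0, var=1.0, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test on a Gaussian mean shift:
    H0 (mean m0) = normal operation, H1 (mean m1) = anomalous drift."""
    lo = np.log(beta / (1 - alpha))               # accept-H0 threshold
    hi = np.log((1 - beta) / alpha)               # accept-H1 (anomaly) threshold
    llr = 0.0
    for i, r in enumerate(residuals):
        llr += (m1 - m0) * (r - (m0 + m1) / 2.0) / var  # log-likelihood ratio update
        if llr >= hi:
            return "anomaly", i
        if llr <= lo:
            return "normal", i
    return "undecided", len(residuals)

# Memory matrix of normal observations, one sensor vector per row.
X_mem = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
recon = aakr_predict(X_mem, np.array([0.5, 0.5]))
```

In this setup, persistent large residuals drive the log-likelihood ratio to the upper threshold and flag an anomaly after only a few samples, while residuals consistent with normal variation drive it to the lower threshold.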
The Icelandic volcanological data node and data service
NASA Astrophysics Data System (ADS)
Vogfjord, Kristin; Sigmundsson, Freysteinn; Futurevolc Team
2013-04-01
Through funding from the European FP7 programme, the International Civil Aviation Authority (ICAO), as well as the local Icelandic government and the RANNÍS research fund, the establishment of the Icelandic volcano observatory (VO) as a cross-disciplinary, international volcanological data node and data service is starting to materialize. At the core of this entity is the close collaboration between the Icelandic Meteorological Office (IMO), a natural hazard monitoring and research institution, and researchers at the Earth Science Institute of the University of Iceland, ensuring long-term sustainable access to research-quality data and products. Existing Icelandic Earth science monitoring and research infrastructures are being prepared for integration with the European EPOS infrastructure. Because the VO is located at a Met Office, this infrastructure also includes meteorological infrastructures relevant to volcanology. Furthermore, the FP7 supersite project FUTUREVOLC cuts across disciplines to bring together European researchers from Earth science, atmospheric science, remote sensing and space science, focussed on combined processing of the different data sources and results to generate a multiparametric volcano monitoring and early warning system. Integration with atmospheric and space science is to meet the need for better estimates of the volcanic eruption source term and dispersion, which depend not only on the magma flow rate and composition, but also on atmosphere-plume interaction and dispersion. This should lead to better estimates of the distribution of ash in the atmosphere. FUTUREVOLC will significantly expand the existing Icelandic EPOS infrastructure to an even more multidisciplinary volcanological infrastructure. A central and sustainable part of the project is the establishment of a research-quality data centre at the VO.
This data centre will be able to serve as a volcanological data node within EPOS, making multidisciplinary data accessible to scientists and stakeholders, and enabling the generation of products and services useful for civil protection, societal infrastructure and international aviation. The 2010 Eyjafjallajökull eruption demonstrated that eruption and dispersion of volcanic ash in the atmosphere can have far-reaching detrimental effects on aviation. The aviation community is therefore an important stakeholder in volcano monitoring, but interaction between the two communities is not well established. Traditionally Met Offices provide services vital to aviation safety and therefore have strong ties to the aviation community, with internationally established protocols for interaction. The co-habitation of a Met Office with a VO establishes a firm connection between these communities and allows adaptation of already established protocols to facilitate access to information and development of services for aviation, as well as sources of support for the VO.
Towards an integrated European strong motion data distribution
NASA Astrophysics Data System (ADS)
Luzi, Lucia; Clinton, John; Cauzzi, Carlo; Puglia, Rodolfo; Michelini, Alberto; Van Eck, Torild; Sleeman, Reinhoud; Akkar, Sinan
2013-04-01
Recent decades have seen a significant increase in the quality and quantity of strong motion data collected in Europe, as dense and often real-time and continuously monitored broadband strong motion networks have been constructed in many nations. There has been a concurrent increase in demand for access to strong motion data not only from researchers for engineering and seismological studies, but also from civil authorities and seismic networks for the rapid assessment of ground motion and shaking intensity following significant earthquakes (e.g. ShakeMaps). Aside from a few notable exceptions on the national scale, databases providing access to strong motion data have not kept pace with these developments. In the framework of the EC infrastructure project NERA (2010-2014), which integrates key research infrastructures in Europe for monitoring earthquakes and assessing their hazard and risk, the network activity NA3 deals with the networking of acceleration networks and strong motion data. Within the NA3 activity two infrastructures are being constructed: i) a Rapid Response Strong Motion (RRSM) database, which, following a strong event, automatically parameterises all available on-scale waveform data within the European Integrated waveform Data Archives (EIDA) and makes the waveforms easily available to the seismological community within minutes of an event; and ii) a European Strong Motion (ESM) database of accelerometric records, with associated metadata relevant to the earthquake engineering and seismology research communities, using standard, manual processing that reflects the state of the art and research needs in these fields. These two separate repositories form the core infrastructures being built to distribute strong motion data in Europe in order to guarantee rapid and long-term availability of high-quality waveform data to both the international scientific community and the hazard mitigation communities.
These infrastructures will provide the access to strong motion data in an eventual EPOS seismological service. A working group on Strong Motion data is being created at ORFEUS in 2013. This body, consisting of experts in strong motion data collection, processing and research from across Europe, will provide the umbrella organisation that will 1) have the political clout to negotiate data sharing agreements with strong motion data providers and 2) manage the software during a transition from the end of NERA to the EPOS community. We expect the community providing data to the RRSM and ESM will gradually grow, under the supervision of ORFEUS, and eventually include strong motion data from networks from all European countries that can have an open data policy.
Adriaens, Peter; Goovaerts, Pierre; Skerlos, Steven; Edwards, Elizabeth; Egli, Thomas
2003-12-01
Recent commercial and residential development has substantially impacted the fluxes and quality of water that recharges aquifers, discharges to streams, lakes and wetlands and, ultimately, is recycled for potable use. While the contaminant sources may be varied in scope and composition, these issues of urban water sustainability are of public health concern at all levels of economic development worldwide, and require cheap and innovative environmental sensing capabilities and interactive monitoring networks, as well as tailored distributed water treatment technologies. To address this need, a roundtable was organized to explore the potential role of advances in biotechnology and bioengineering in developing causative relationships between spatial and temporal changes in urbanization patterns and groundwater and surface water quality parameters, and to address aspects of socioeconomic constraints in implementing sustainable exploitation of water resources. An interactive framework for quantitative analysis of the coupling between human and natural systems requires integrating information derived from online and offline point measurements with Geographic Information Systems (GIS)-based remote sensing imagery analysis, groundwater-surface water hydrologic fluxes and water quality data to assess the vulnerability of potable water supplies. Spatially referenced data to inform uncertainty-based dynamic models can be used to rank watershed-specific stressors and receptors to guide researchers and policymakers in the development of targeted sensing and monitoring technologies, as well as tailored control measures for risk mitigation of potable water from microbial and chemical environmental contamination. The enabling technologies encompass: (i) distributed sensing approaches for microbial and chemical contamination (e.g.
pathogens, endocrine disruptors); (ii) distributed, application-specific and infrastructure-adaptive water treatment systems; (iii) geostatistical integration of monitoring data and GIS layers; and (iv) systems analysis of microbial and chemical proliferation in distribution systems. This operational framework is aimed at technology implementation while maximizing economic and public health benefits. The outcomes of the roundtable will further research agendas in information technology-based monitoring infrastructure development, integration of processes and spatial analysis, as well as new educational and training platforms for students, practitioners and regulators. The potential for technology diffusion to emerging economies with limited financial resources is substantial.
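To make item (iii) concrete, a minimal sketch of geostatistical integration of point monitoring data is inverse-distance weighting, a simple stand-in for the kriging methods usually applied; the well coordinates and nitrate values below are hypothetical, and the function name is ours.

```python
import math

def idw_estimate(samples, target, power=2.0):
    """Inverse-distance-weighted estimate of a water-quality value at
    `target` (x, y) from point measurements `samples`, given as a list
    of ((x, y), value) tuples."""
    num, den = 0.0, 0.0
    for (x, y), value in samples:
        d = math.hypot(target[0] - x, target[1] - y)
        if d == 0.0:
            return value  # target coincides with a monitoring station
        w = d ** -power   # nearer stations get larger weights
        num += w * value
        den += w
    return num / den

# Three hypothetical monitoring wells reporting nitrate (mg/L)
wells = [((0.0, 0.0), 4.0), ((1.0, 0.0), 6.0), ((0.0, 1.0), 8.0)]
estimate = idw_estimate(wells, (0.2, 0.2))
```

In a GIS workflow, such interpolated surfaces would then be overlaid with land-use layers to rank watershed-specific stressors.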
DRIHM: Distributed Research Infrastructure for Hydro-Meteorology
NASA Astrophysics Data System (ADS)
Parodi, A.; Rebora, N.; Kranzlmueller, D.; Schiffers, M.; Clematis, A.; Tafferner, A.; Garrote, L. M.; Llasat Botija, M.; Caumont, O.; Richard, E.; Cros, P.; Dimitrijevic, V.; Jagers, B.; Harpham, Q.; Hooper, R. P.
2012-12-01
Hydro-Meteorology Research (HMR) is an area of critical scientific importance and of high societal relevance. It plays a key role in guiding predictions relevant to the safety and prosperity of humans and ecosystems, from highly urbanized areas to coastal zones and agricultural landscapes. Of special interest and urgency within HMR is the problem of understanding and predicting the impacts of severe hydro-meteorological events, such as flash floods and landslides in areas of complex orography, on humans and the environment under the effects of ongoing climate change. At the heart of this challenge lies easy access to hydrometeorological data and models, and collaboration between meteorologists, hydrologists, and Earth science experts to accelerate scientific advances in this field. To address these problems, the DRIHM (Distributed Research Infrastructure for Hydro-Meteorology) project is developing a prototype e-Science environment to facilitate this collaboration and provide end-to-end HMR services (models, datasets and post-processing tools) at the European level, with the ability to expand to global scale (e.g. cooperation with EarthCube-related initiatives). The objectives of DRIHM are to lead the definition of a common long-term strategy, to foster the development of new HMR models and observational archives for the study of severe hydrometeorological events, to promote the execution and analysis of high-end simulations, and to support the dissemination of predictive models as decision analysis tools. DRIHM combines European expertise in HMR, Grid computing and High Performance Computing (HPC). Joint research activities will improve the efficient use of the European e-Infrastructures, notably Grid and HPC, for HMR modelling and observational databases, model evaluation tool sets and access to HMR model results.
Networking activities will disseminate DRIHM results at the European and global levels in order to increase the cohesion of European and possibly worldwide HMR communities and increase the awareness of ICT potential for HMR. Service activities will deploy the end-to-end DRIHM services and tools in support of HMR networks and virtual organizations on top of the existing European e-Infrastructures.
The UK DNA banking network: a "fair access" biobank.
Yuille, Martin; Dixon, Katherine; Platt, Andrew; Pullum, Simon; Lewis, David; Hall, Alistair; Ollier, William
2010-08-01
The UK DNA Banking Network (UDBN) is a secondary biobank: it aggregates and manages resources (samples and data) originated by others. The network comprises, on the one hand, investigator groups led by clinicians each with a distinct disease specialism and, on the other hand, a research infrastructure to manage samples and data. The infrastructure addresses the problem of providing secure quality-assured accrual, storage, replenishment and distribution capacities for samples and of facilitating access to DNA aliquots and data for new peer-reviewed studies in genetic epidemiology. 'Fair access' principles and practices have been pragmatically developed that, unlike open access policies in this area, are not cumbersome but, rather, are fit for the purpose of expediting new study designs and their implementation. UDBN has so far distributed >60,000 samples for major genotyping studies yielding >10 billion genotypes. It provides a working model that can inform progress in biobanking nationally, across Europe and internationally.
PRACE - The European HPC Infrastructure
NASA Astrophysics Data System (ADS)
Stadelmeyer, Peter
2014-05-01
The mission of PRACE (Partnership for Advanced Computing in Europe) is to enable high-impact scientific discovery and engineering research and development across all disciplines to enhance European competitiveness for the benefit of society. PRACE seeks to realize this mission by offering world-class computing and data management resources and services through a peer review process. This talk gives a general overview of PRACE and the PRACE research infrastructure (RI). PRACE is established as an international not-for-profit association, and the PRACE RI is a pan-European supercomputing infrastructure which offers access to computing and data management resources at partner sites distributed throughout Europe. Besides a short summary of the organization, history, and activities of PRACE, it is explained how scientists and researchers from academia and industry around the world can access PRACE systems and which education and training activities are offered by PRACE. The overview also contains a selection of PRACE contributions to societal challenges and ongoing activities. Examples of the latter include, among others, petascaling, an application benchmark suite, best practice guides for efficient use of key architectures, application enabling and scaling, new programming models, and industrial applications. The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure provides a persistent world-class high performance computing service for scientists and researchers from academia and industry in Europe. The computer systems and their operations accessible through PRACE are provided by 4 PRACE members (BSC representing Spain, CINECA representing Italy, GCS representing Germany and GENCI representing France).
The Implementation Phase of PRACE receives funding from the EU's Seventh Framework Programme (FP7/2007-2013) under grant agreements RI-261557, RI-283493 and RI-312763. For more information, see www.prace-ri.eu
Building the European Seismological Research Infrastructure: results from 4 years NERIES EC project
NASA Astrophysics Data System (ADS)
van Eck, T.; Giardini, D.
2010-12-01
The EC Research Infrastructure (RI) project, Network of Research Infrastructures for European Seismology (NERIES), implemented a comprehensive, scalable and sustainable European integrated RI for earthquake seismological data. NERIES opened a significant amount of additional seismological data, integrated different distributed data archives, and implemented advanced analysis tools and software packages. A seismic data portal provides a single access point and overview for European seismological data available to the earth science research community. Additional data access tools and sites have been implemented to meet user and robustness requirements, notably those at the EMSC and ORFEUS. The datasets compiled in NERIES and available through the portal include, among others: - The expanded Virtual European Broadband Seismic Network (VEBSN) with real-time access to more than 500 stations from more than 53 observatories. These data are continuously monitored, quality controlled and archived in the European Integrated Distributed waveform Archive (EIDA). - A unique integration of acceleration datasets from seven networks in seven European or associated countries, centrally accessible in a homogeneous format, thus forming the core of a comprehensive European acceleration database. Standardized parameter analysis and the associated software are included in the database. - A Distributed Archive of Historical Earthquake Data (AHEAD) for research purposes, containing among others a comprehensive European Macroseismic Database and Earthquake Catalogue (1000 - 1963, M ≥ 5.8), including analysis tools. - Data from three one-year OBS deployments at three sites (Atlantic, Ionian and Ligurian Sea) in the general SEED format, thus creating the core integrated database for ocean, sea and land based seismological observatories.
Tools to facilitate analysis and data mining of the RI datasets are: - A comprehensive set of European seismological velocity reference models, including a standardized model description with several visualisation tools, currently being adapted to a global scale. - An integrated approach to seismic hazard modelling and forecasting, a community-accepted forecasting testing and model validation approach, and the core hazard portal developed with the same technologies as the NERIES data portal. - Homogeneous shakemap estimation tools implemented at several large European observatories and a complementary new loss estimation software tool. - A comprehensive set of new techniques for geotechnical site characterization with relevant software packages documented and maintained (www.geopsy.org). - A set of software packages for data mining, data reduction, data exchange and information management in seismology as research and observatory analysis tools. NERIES has a long-term impact and is coordinated with the related US initiatives IRIS and EarthScope. The follow-up EC project of NERIES, NERA (2010 - 2014), is funded and will integrate the seismological and earthquake engineering infrastructures. NERIES further provided the proof of concept for the ESFRI 2008 initiative: the European Plate Observing System (EPOS). Its preparatory phase (2010 - 2014) is also funded by the EC.
Common Technologies for Environmental Research Infrastructures in ENVRIplus
NASA Astrophysics Data System (ADS)
Paris, Jean-Daniel
2016-04-01
Environmental and geoscientific research infrastructures (RIs) are dedicated to distinct aspects of ocean, atmosphere, ecosystem, or solid Earth research, yet there is significant commonality in the way they conceive, develop, operate and upgrade their observation systems and platforms. Many environmental RIs are distributed networks of observatories (be they drifting buoys, geophysical observatories, ocean-bottom stations, or atmospheric measurement sites) with needs for remote operations. Most RIs have to deal with calibration and standardization issues. RIs use a variety of measurement technologies, but this variety rests on a small, common set of physical principles. All RIs have set their own research and development priorities and developed their own solutions to their problems - yet many problems are common across RIs. Finally, RIs may overlap in scientific perimeter. In ENVRIplus we aim, for the first time, to identify common opportunities for innovation, to support common research and development across RIs on promising issues, and more generally to create a forum to spread state-of-the-art techniques among participants. ENVRIplus activities include 1) measurement technologies: where are the common types of measurement for which we can share expertise or common development? 2) Metrology: how do we tackle together the diversified challenge of quality assurance and standardization? 3) Remote operations: can we address collectively the need for autonomy, robustness and distributed data handling? And 4) joint operations for research: are we able to demonstrate that, together, RIs can provide relevant information to support excellent research? In this process we need to nurture an ecosystem of key players. Can we involve all the key technologists of the European RIs for a greater mutual benefit? Can we pave the way to a growing common market for innovative European SMEs, with a common programmatic approach conducive to targeted R&D?
Can we develop a common metrological language adapted to the observation of our environment? We aim to create a space for exchange on the "hardware" issues of our networks of observatories: a forum that allows fast transmission of best practices and state-of-the-art technology across RIs, and a laboratory for joint research and co-development where research infrastructures and their communities join efforts on well-identified objectives.
802.11 Wireless Infrastructure To Enhance Medical Response to Disasters
Arisoylu, Mustafa; Mishra, Rajesh; Rao, Ramesh; Lenert, Leslie A.
2005-01-01
802.11 (WiFi) is a well-established network communications protocol with wide applicability in civil infrastructure. This paper describes research that explores the design of 802.11 networks enhanced to support data communications in disaster environments. The focus of these efforts is to create network infrastructure to support operations by Metropolitan Medical Response System (MMRS) units and federally sponsored regional teams that respond to mass casualty events caused by a terrorist attack with chemical, biological, nuclear or radiological weapons or by a hazardous materials spill. In this paper, we describe an advanced WiFi-based network architecture designed to meet the needs of MMRS operations. This architecture combines a Wireless Distribution System for peer-to-peer multihop connectivity between access points with flexible and shared access to multiple cellular backhauls for robust connectivity to the Internet. The architecture offers a high-bandwidth data communications infrastructure that can penetrate into buildings and structures while also supporting commercial off-the-shelf end-user equipment such as PDAs. It is self-configuring and self-healing in the event of a loss of a portion of the infrastructure. Testing of prototype units is ongoing. PMID:16778990
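The self-healing behaviour described above can be illustrated with a toy routing sketch: a breadth-first search over the access-point mesh that simply ignores failed nodes and finds the surviving multihop path. The topology, node names, and function are hypothetical illustrations, not the paper's actual protocol.

```python
from collections import deque

def shortest_path(links, src, dst, failed=frozenset()):
    """Breadth-first search over a peer-to-peer access-point mesh.
    `links` maps each node to its neighbours; nodes in `failed` are
    skipped, modelling a reroute after part of the infrastructure
    is lost."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable

# Hypothetical deployment: three APs and one cellular-backhaul gateway
mesh = {"AP1": ["AP2", "AP3"], "AP2": ["AP1", "GW"],
        "AP3": ["AP1", "GW"], "GW": ["AP2", "AP3"]}
normal = shortest_path(mesh, "AP1", "GW")                  # via AP2
rerouted = shortest_path(mesh, "AP1", "GW", failed={"AP2"})  # via AP3
```

A real WDS mesh would run a distributed routing protocol rather than a centralized search, but the failure-masking idea is the same.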
NASA Astrophysics Data System (ADS)
Farooq, Umer; Schank, Patricia; Harris, Alexandra; Fusco, Judith; Schlager, Mark
Community computing has recently grown to become a major research area in human-computer interaction. One of the objectives of community computing is to support computer-supported cooperative work among distributed collaborators working toward shared professional goals in online communities of practice. A core issue in designing and developing community computing infrastructures - the underlying sociotechnical layer that supports communitarian activities - is sustainability. Many community computing initiatives fail because the underlying infrastructure does not meet end user requirements; the community is unable to maintain a critical mass of users consistently over time; it generates insufficient social capital to support significant contributions by members of the community; or, as typically happens with funded initiatives, financial and human capital resources become unavailable to further maintain the infrastructure. On the basis of more than 9 years of design experience with Tapped In, an online community of practice for education professionals, we present a case study that discusses four design interventions that have sustained the Tapped In infrastructure and its community to date. These interventions represent broader design strategies for developing online environments for professional communities of practice.
NASA Astrophysics Data System (ADS)
Cole, M.; Bambacus, M.; Lynnes, C.; Sauer, B.; Falke, S.; Yang, W.
2007-12-01
NASA's vast array of scientific data within its Distributed Active Archive Centers (DAACs) is especially valuable to traditional research scientists as well as the emerging market of Earth Science Information Partners. For example, the air quality science and management communities are increasingly using satellite-derived observations in their analyses and decision making. The Air Quality Cluster in the Federation of Earth Science Information Partners (ESIP) uses web infrastructures of interoperability, or Service Oriented Architecture (SOA), to extend data exploration, use, and analysis, and provides a user environment for DAAC products. In an effort to continually offer these NASA data to the broadest research community audience, reusing emerging technologies, both NASA's Goddard Earth Sciences (GES) and Land Processes (LP) DAACs have engaged in a web services pilot project. Through these projects both GES and LP have exposed data through the Open Geospatial Consortium's (OGC) Web Services standards. Reusing several different existing applications and implementation techniques, GES and LP successfully exposed a variety of data through distributed systems to be ingested into multiple end-user systems. The results of this project will enable researchers worldwide to access some of NASA's GES and LP DAAC data through OGC protocols. This functionality encourages interdisciplinary research while increasing data use through advanced technologies. This paper concentrates on the implementation and use of OGC Web Services, specifically Web Map and Web Coverage Services (WMS, WCS), at the GES and LP DAACs, and the value of these services within scientific applications, including integration with the DataFed air quality web infrastructure and the development of data analysis web applications.
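As a sketch of how a client consumes such an OGC service, the snippet below assembles a standard WMS 1.3.0 GetMap request URL. The endpoint and layer name are placeholders; real DAAC service URLs and layer identifiers come from each service's GetCapabilities response.

```python
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, size=(512, 512),
                   fmt="image/png", crs="EPSG:4326"):
    """Build an OGC WMS 1.3.0 GetMap request URL. For EPSG:4326 in
    WMS 1.3.0 the BBOX axis order is (min lat, min lon, max lat,
    max lon)."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": fmt,
    }
    return endpoint + "?" + urlencode(params)

# Hypothetical DAAC endpoint and layer name, global extent
url = wms_getmap_url("https://example-daac.nasa.gov/wms",
                     "aerosol_optical_depth", (-90, -180, 90, 180))
```

A WCS GetCoverage request follows the same pattern with coverage- rather than image-oriented parameters.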
NASA Astrophysics Data System (ADS)
Hernández Ernst, Vera; Poigné, Axel; Los, Walter
2010-05-01
Understanding and managing the complexity of the biodiversity system in relation to global changes in land use and climate, with their social and economic implications, is crucial to mitigate species loss and biodiversity change in general. The sustainable development and exploitation of existing biodiversity resources require flexible and powerful infrastructures offering, on the one hand, access to large-scale databases of observations and measurements, to advanced analytical and modelling software, and to high performance computing environments and, on the other hand, the interlinkage of European scientific communities with each other and with national policies. The European Strategy Forum on Research Infrastructures (ESFRI) selected the "LifeWatch e-science and technology infrastructure for biodiversity research" as a promising development to construct facilities that help meet those challenges. LifeWatch collaborates with other selected initiatives (e.g. ICOS, ANAEE, NOHA, and LTER-Europe) to achieve the integration of the infrastructures at landscape and regional scales. This should result in a cooperating cluster of such infrastructures supporting an integrated approach for data capture and transmission, data management and harmonisation. In addition, facilities for exploration, forecasting, and presentation using heterogeneous and distributed data and tools should allow interdisciplinary scientific research at any spatial and temporal scale. LifeWatch is an example of a new generation of interoperable research infrastructures based on standards and a service-oriented architecture that allows linkage with external resources and associated infrastructures.
External data sources will include established data aggregators such as the Global Biodiversity Information Facility (GBIF) for species occurrences, and other EU networks of excellence such as the Long-Term Ecological Research Network (LTER), GMES, and GEOSS for terrestrial monitoring, the MARBEF network for marine data, and the Consortium of European Taxonomic Facilities (CETAF) and its European Distributed Institute of Taxonomy (EDIT) for taxonomic data. But "smaller" networks and "volunteer scientists" may also send data (e.g. GPS-supported species observations) to a LifeWatch repository. Autonomously operating wireless environmental sensors and other smart hand-held devices will contribute to increased data capture activities. In this way LifeWatch will directly underpin the development of GEOBON, the biodiversity component of GEOSS, the Global Earth Observation System of Systems. To overcome the major technical difficulties imposed by the variety of current and future technologies, protocols, data formats, etc., LifeWatch will define and use common open interfaces. For this purpose, the LifeWatch Reference Model was developed during the preparatory phase, specifying the service-oriented architecture underlying the ICT infrastructure. The Reference Model identifies key requirements and key architectural concepts to support workflows for scientific in-silico experiments, tracking of provenance, and semantic enhancement, besides meeting the functional requirements mentioned before. It provides guidelines for the specification and implementation of services and information models, defining as well a number of generic services and models. Another key issue addressed by the Reference Model is that the cooperation of many developer teams residing in many European countries has to be organized to obtain compatible results; conformance with the specifications and policies of the Reference Model will therefore be required.
The LifeWatch Reference Model is based on the ORCHESTRA Reference Model for geospatial-oriented architectures and service networks, which provides a generic framework and has been endorsed as best practice by the Open Geospatial Consortium (OGC). The LifeWatch infrastructure will allow (interdisciplinary) scientific researchers to collaborate by creating e-Laboratories or by composing e-Services which can be shared and jointly developed. To this end, a long-term vision for the LifeWatch Biodiversity Workbench Portal has been developed as a one-stop application for the LifeWatch infrastructure based on existing and emerging technologies. There the user can find all available resources such as data, workflows and tools, and access LifeWatch applications that integrate different resources and provide key capabilities such as resource discovery and visualisation, creation of workflows, creation and management of provenance, and support for collaborative activities. While LifeWatch developers will construct components for solving generic LifeWatch tasks, users may add their own facilities to fulfil individual needs. Examples of the application of the LifeWatch Reference Model and the LifeWatch Biodiversity Workbench Portal will be given.
A Development of Lightweight Grid Interface
NASA Astrophysics Data System (ADS)
Iwai, G.; Kawai, Y.; Sasaki, T.; Watase, Y.
2011-12-01
To support the rapid development of Grid- and Cloud-aware applications, we have developed an API that abstracts distributed computing infrastructures based on SAGA (A Simple API for Grid Applications). SAGA, which is standardized in the OGF (Open Grid Forum), defines API specifications for access to distributed computing infrastructures, such as Grid, Cloud and local computing resources. The Universal Grid API (UGAPI), a set of command line interfaces (CLIs) and APIs, aims to offer a simpler API that combines several SAGA interfaces with richer functionality. The UGAPI CLIs offer the typical functionality required by end users for job management and file access on different distributed computing infrastructures as well as local computing resources. We have also built a web interface for particle therapy simulation and demonstrated large-scale calculation using different infrastructures at the same time. In this paper, we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources across different infrastructures, with technical details and practical experiences.
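The abstraction pattern behind SAGA-style APIs can be sketched as follows: a uniform job-management interface hides whether a backend is a Grid site, a Cloud, or a local batch system, and a convenience call fans one task out to several infrastructures at once. All class, method, and job names here are illustrative inventions, not the actual UGAPI or SAGA signatures.

```python
class ComputeBackend:
    """Minimal stand-in for one infrastructure adaptor. A SAGA-style
    API would dispatch these calls to Grid middleware, a Cloud API,
    or a local scheduler; here jobs are just tracked in memory."""
    def __init__(self, name):
        self.name = name
        self._jobs = {}
        self._next_id = 0

    def submit(self, executable, args):
        self._next_id += 1
        job_id = f"{self.name}-{self._next_id}"
        self._jobs[job_id] = {"exe": executable, "args": list(args),
                              "state": "Running"}
        return job_id

    def state(self, job_id):
        return self._jobs[job_id]["state"]

def submit_everywhere(backends, executable, args):
    """Fan one task description out to several infrastructures at
    once, as in the particle-therapy demonstration described above."""
    return {b.name: b.submit(executable, args) for b in backends}

grid = ComputeBackend("grid")
local = ComputeBackend("local")
jobs = submit_everywhere([grid, local],
                         "simulate_therapy", ["--events", "1000"])
```

The value of the abstraction is that `submit_everywhere` never needs to know which middleware sits behind each backend.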
Climate Science's Globally Distributed Infrastructure
NASA Astrophysics Data System (ADS)
Williams, D. N.
2016-12-01
The Earth System Grid Federation (ESGF) is primarily funded by the Department of Energy's (DOE's) Office of Science (the Office of Biological and Environmental Research [BER] Climate Data Informatics Program and the Office of Advanced Scientific Computing Research Next Generation Network for Science Program), the National Oceanic and Atmospheric Administration (NOAA), the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), the European Infrastructure for the European Network for Earth System Modeling (IS-ENES), and the Australian National University (ANU). Support also comes from other U.S. federal and international agencies. The federation works across multiple worldwide data centers and spans seven international network organizations to provide users with the ability to access, analyze, and visualize data using a globally federated collection of networks, computers, and software. Its architecture employs a series of geographically distributed peer nodes that are independently administered and united by common federation protocols and application programming interfaces (APIs). The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP; output used by the Intergovernmental Panel on Climate Change assessment reports), multiple model intercomparison projects (MIPs; endorsed by the World Climate Research Programme [WCRP]), and the Accelerated Climate Modeling for Energy project (ACME; ESGF is included in the overarching ACME workflow process to store model output). ESGF is a successful example of the integration of disparate open-source technologies into a cohesive functional system that serves the needs of the global climate science community. Data served by ESGF include not only model output but also observational data from satellites and instruments, reanalyses, and generated images.
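The peer-node architecture can be illustrated by how a client might query the federation's search API. The node hostnames below are placeholders, and the parameter names follow the general shape of ESGF-style faceted search rather than a guaranteed current endpoint; consult the federation's own API documentation for the live interface.

```python
from urllib.parse import urlencode

# Hypothetical peer index nodes; real node hostnames change over time
# and should be taken from the federation's current node list.
PEER_NODES = ["esgf-node-a.example.org", "esgf-node-b.example.org"]

def search_urls(facets, distrib=True):
    """Build faceted search queries against each peer index node.
    With a distributed query flag set, a single node fans the query
    out across the federation, so one request usually suffices."""
    query = dict(facets)
    query["format"] = "application/solr+json"
    query["distrib"] = "true" if distrib else "false"
    return [f"https://{node}/esg-search/search?{urlencode(query)}"
            for node in PEER_NODES]

urls = search_urls({"project": "CMIP6", "variable": "tas"})
```

Because every peer speaks the same protocol, the client code is identical regardless of which node it targets; that uniformity is the point of the federation APIs.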
NASA Astrophysics Data System (ADS)
Parodi, A.; Craig, G. C.; Clematis, A.; Kranzlmueller, D.; Schiffers, M.; Morando, M.; Rebora, N.; Trasforini, E.; D'Agostino, D.; Keil, K.
2010-09-01
Hydrometeorological science has made strong progress over the last decade at the European and worldwide level: new modeling tools, post-processing methodologies, observational data and corresponding ICT (Information and Communication Technology) technologies are available. Recent European efforts to develop platforms for e-Science, such as EGEE (Enabling Grids for E-sciencE), SEE-GRID-SCI (South East Europe GRID e-Infrastructure for regional e-Science), and the German C3-Grid, have demonstrated their ability to provide an ideal basis for sharing complex hydrometeorological data sets and tools. Despite these early initiatives, however, awareness of the potential of Grid technology as a catalyst for future hydrometeorological research is still low, and both adoption and exploitation have been astonishingly slow, not only within individual EC member states but also on a European scale. With this background in mind, and given that European ICT infrastructures are transitioning to a sustainable and permanent service utility as underlined by the European Grid Initiative (EGI) and the Partnership for Advanced Computing in Europe (PRACE), the Distributed Research Infrastructure for Hydro-Meteorology Study (DRIHMS, co-funded by the EC under the 7th Framework Programme) project has been initiated. The goal of DRIHMS is the promotion of Grids in particular and e-Infrastructures in general within the European hydrometeorological research (HMR) community through the diffusion of a Grid platform for e-collaboration in this earth science sector: the idea is to further boost European research excellence and competitiveness in the fields of hydrometeorological research and Grid research by bridging the gaps between these two scientific communities.
Furthermore, the project is intended to transfer the results to areas beyond hydrometeorological science proper, supporting the assessment of the effects of extreme hydrometeorological events on society and the development of tools that improve the adaptation and resilience of society to the challenges of climate change. This paper provides an overview of the DRIHMS ideas and presents the results of the DRIHMS HMR and ICT surveys.
NASA Astrophysics Data System (ADS)
Cofino, A. S.; Fernández Quiruelas, V.; Blanco Real, J. C.; García Díez, M.; Fernández, J.
2013-12-01
Grid computing is now a powerful computational tool ready for use by the scientific community in different areas (such as biomedicine, astrophysics, climate, etc.). However, the use of these distributed computing infrastructures (DCIs) is not yet common practice in climate research, and only a few teams and applications in this area take advantage of them. Thus, the objective of the WRF4G project is to popularize the use of this technology in the atmospheric sciences. To achieve this objective, one of the most widely used applications has been chosen (WRF, a limited-area model and successor of the MM5 model), which has a user community of more than 8000 researchers worldwide. This community carries out its research in different areas and could benefit from the advantages of Grid resources (case study simulations, regional hindcasts/forecasts, sensitivity studies, etc.). The WRF model is used by many groups in the climate research community to carry out downscaling simulations, so this community will also benefit. However, Grid infrastructures have some drawbacks for the execution of applications that make intensive use of CPU and memory for long periods of time, which makes it necessary to develop a specific framework (middleware). This middleware encapsulates the application and provides appropriate services for the monitoring and management of the simulations and the data. Thus, another objective of the WRF4G project is the development of a generic adaptation of WRF to DCIs. It should simplify access to the DCIs for researchers and free them from the technical and computational aspects of using these DCIs.
Finally, in order to demonstrate the ability of WRF4G to solve actual scientific challenges of interest and relevance to climate science (implying a high computational cost), we will show results from different kinds of downscaling experiments, such as ERA-Interim re-analysis, CMIP5 models, or seasonal forecasts. WRF4G is being used to run WRF simulations that contribute to the CORDEX initiative and other projects such as SPECS and EUPORIAS. This work has been partially funded by the European Regional Development Fund (ERDF) and the Spanish National R&D Plan 2008-2011 (CGL2011-28864).
Using Predictive Analytics to Predict Power Outages from Severe Weather
NASA Astrophysics Data System (ADS)
Wanik, D. W.; Anagnostou, E. N.; Hartman, B.; Frediani, M. E.; Astitha, M.
2015-12-01
The distribution of reliable power is essential to businesses, public services, and our daily lives. With the growing abundance of data being collected and created by industry (e.g. outage data), government agencies (e.g. land cover), and academia (e.g. weather forecasts), we can begin to tackle problems that previously seemed too complex to solve. In this session, we will present newly developed tools to aid decision-support challenges at electric distribution utilities that must mitigate, prepare for, respond to and recover from severe weather. We will show a performance evaluation of outage predictive models built for Eversource Energy (formerly Connecticut Light & Power) for storms of all types (e.g. blizzards, thunderstorms and hurricanes) and magnitudes (from 20 to >15,000 outages). High-resolution weather simulations (generated with the Weather Research and Forecasting Model) were joined with utility outage data to calibrate four types of models: a decision tree (DT), random forest (RF), boosted gradient tree (BT) and an ensemble (ENS) decision tree regression that combined predictions from the DT, RF and BT. The study shows that the ENS model forced with weather, infrastructure and land cover data was superior to the other models we evaluated, especially in terms of predicting the spatial distribution of outages. This research has the potential to be applied to other critical infrastructure systems (such as telecommunications, drinking water and gas distribution networks), and can be readily expanded to the entire New England region to facilitate better planning and coordination among decision-makers when severe weather strikes.
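The combining step of an ENS-style model can be sketched as a simple averaging ensemble over fitted base regressors. The member models below are toy stand-in callables and the storm features are invented; the study's actual members were calibrated tree-based regressors.

```python
class AveragingEnsemble:
    """Combine predictions from several fitted regressors by
    (optionally weighted) averaging, in the spirit of an ensemble
    that blends DT, RF, and BT outage predictions."""
    def __init__(self, models, weights=None):
        self.models = models
        self.weights = weights or [1.0 / len(models)] * len(models)

    def predict(self, features):
        return sum(w * m(features)
                   for m, w in zip(self.models, self.weights))

# Toy stand-ins for DT / RF / BT predictions of outage counts
dt = lambda f: 120.0 * f["gust_ms"] / 30.0
rf = lambda f: 100.0 + 2.0 * f["tree_cover_pct"]
bt = lambda f: 90.0 * f["gust_ms"] / 25.0

ens = AveragingEnsemble([dt, rf, bt])
storm = {"gust_ms": 28.0, "tree_cover_pct": 40.0}
predicted_outages = ens.predict(storm)
```

Averaging tends to cancel the uncorrelated errors of the individual learners, which is one plausible reason the blended model predicted the spatial distribution of outages best.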
Meeker, Daniella; Jiang, Xiaoqian; Matheny, Michael E; Farcas, Claudiu; D'Arcy, Michel; Pearlman, Laura; Nookala, Lavanya; Day, Michele E; Kim, Katherine K; Kim, Hyeoneui; Boxwala, Aziz; El-Kareh, Robert; Kuo, Grace M; Resnic, Frederic S; Kesselman, Carl; Ohno-Machado, Lucila
2015-11-01
Centralized and federated models for sharing data in research networks currently exist. To build multivariate data analysis for centralized networks, transfer of patient-level data to a central computation resource is necessary. The authors implemented distributed multivariate models for federated networks in which patient-level data is kept at each site and data exchange policies are managed in a study-centric manner. The objective was to implement infrastructure that supports the functionality of some existing research networks (e.g., cohort discovery, workflow management, and estimation of multivariate analytic models on centralized data) while adding additional important new features, such as algorithms for distributed iterative multivariate models, a graphical interface for multivariate model specification, synchronous and asynchronous response to network queries, investigator-initiated studies, and study-based control of staff, protocols, and data sharing policies. Based on the requirements gathered from statisticians, administrators, and investigators from multiple institutions, the authors developed infrastructure and tools to support multisite comparative effectiveness studies using web services for multivariate statistical estimation in the SCANNER federated network. The authors implemented massively parallel (map-reduce) computation methods and a new policy management system to enable each study initiated by network participants to define the ways in which data may be processed, managed, queried, and shared. The authors illustrated the use of these systems among institutions with highly different policies and operating under different state laws. Federated research networks need not limit distributed query functionality to count queries, cohort discovery, or independently estimated analytic models. 
Multivariate analyses can be efficiently and securely conducted without patient-level data transport, allowing institutions with strict local data storage requirements to participate in sophisticated analyses based on federated research networks. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
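The core idea of distributed iterative multivariate estimation, in which each site shares only aggregate statistics while patient-level records stay local, can be sketched as follows. This is a generic gradient-sharing illustration with invented names, not SCANNER's actual API:

```python
# Hypothetical sketch of federated estimation: each site computes a
# logistic-regression gradient on its own patient-level data; only the
# aggregated gradient (not the records) crosses institutional boundaries.
import numpy as np

def local_gradient(X, y, beta):
    """Per-site gradient; the raw X, y never leave the site."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return X.T @ (p - y)

def federated_fit(sites, n_features, lr=0.1, iters=200):
    beta = np.zeros(n_features)
    for _ in range(iters):
        # Only summed gradients are exchanged -- a map-reduce style step.
        grad = sum(local_gradient(X, y, beta) for X, y in sites)
        beta -= lr * grad / sum(len(y) for _, y in sites)
    return beta

# Three hypothetical institutions, each holding private data
rng = np.random.default_rng(1)
true_beta = np.array([1.5, -2.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = (rng.random(100) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)
    sites.append((X, y))

beta_hat = federated_fit(sites, n_features=2)
```

Because the summed gradient over all sites equals the gradient on the pooled data, the iteration converges to the same estimate a centralized fit would produce, which is the property that lets federated networks go beyond count queries.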
Contingency theoretic methodology for agent-based web-oriented manufacturing systems
NASA Astrophysics Data System (ADS)
Durrett, John R.; Burnell, Lisa J.; Priest, John W.
2000-12-01
The development of distributed, agent-based, web-oriented, N-tier Information Systems (IS) must be supported by a design methodology capable of responding to the convergence of shifts in business process design, organizational structure, computing, and telecommunications infrastructures. We introduce a contingency theoretic model for the use of open, ubiquitous software infrastructure in the design of flexible organizational IS. Our basic premise is that developers should change how they view the software design process: from solving a problem to dynamically creating teams of software components. We postulate that developing effective, efficient, flexible, component-based distributed software requires reconceptualizing the current development model. The basic concepts of distributed software design are merged with the environment-causes-structure relationship from contingency theory; the task-uncertainty of organizational information-processing relationships from information processing theory; and the concept of inter-process dependencies from coordination theory. Software processes are considered as employees, groups of processes as software teams, and distributed systems as software organizations. Design techniques already used in the design of flexible business processes and well researched in the organizational sciences are presented. Guidelines that can be utilized in the creation of component-based distributed software are discussed.
NASA Astrophysics Data System (ADS)
Longo, S.; Nativi, S.; Leone, C.; Migliorini, S.; Mazari Villanova, L.
2012-04-01
The Italian Antarctic Research Programme (PNRA) is a government initiative funding and coordinating scientific research activities in polar regions. PNRA manages two scientific Stations in Antarctica - Concordia (Dome C), jointly operated with the French Polar Institute "Paul Emile Victor", and Mario Zucchelli (Terra Nova Bay, Southern Victoria Land). In addition, the National Research Council of Italy (CNR) manages one scientific Station in the Arctic Circle (Ny-Alesund, Svalbard Islands), named Dirigibile Italia. PNRA started in 1985 with the first Italian Expedition in Antarctica. Since then each research group has collected data regarding biology and medicine, geodetic observatory, geophysics, geology, glaciology, physics and atmospheric chemistry, earth-sun relationships and astrophysics, oceanography and the marine environment, chemical contamination, law and geographic science, technology, and multi- and interdisciplinary research, autonomously and in different formats. In 2010 the Italian Ministry of Research assigned the scientific coordination of the Programme to CNR, which is in charge of managing and sharing the scientific results produced in the framework of the PNRA. Therefore, CNR is establishing a new distributed cyber(e)-infrastructure to collect, manage, publish and share polar research results. This is a service-based infrastructure building on Web technologies to implement resource (i.e. data, services and documents) discovery, access and visualization; in addition, semantic-enabled functionalities will be provided. The architecture applies "System of Systems" principles to build incrementally on the existing systems by supplementing but not supplanting their mandates and governance arrangements, keeping the existing capacities as autonomous as possible.
This cyber(e)-infrastructure implements multi-disciplinary interoperability following a Brokering approach and supporting the relevant standards recognized by European and international bodies, including GEO/GEOSS, INSPIRE and SCAR. The Brokering approach is empowered by a technology developed by CNR, advanced by the FP7 EuroGEOSS project, and recently adopted by the GEOSS Common Infrastructure (GCI).
Code of Federal Regulations, 2012 CFR
2012-01-01
... distribution system means any system of community infrastructure whose primary function is the distribution of... communication system means any system of community infrastructure whose primary function is the provision of... primary function is the supplying of water and/or the collection and treatment of waste water and whose...
New security infrastructure model for distributed computing systems
NASA Astrophysics Data System (ADS)
Dubenskaya, J.; Kryukov, A.; Demichev, A.; Prikhodko, N.
2016-02-01
In this paper we propose a new approach to setting up a user-friendly and yet secure authentication and authorization procedure in a distributed computing system. The security concept of most heterogeneous distributed computing systems is based on a public key infrastructure along with proxy certificates, which are used for rights delegation. In practice, the contradiction between the limited lifetime of proxy certificates and the unpredictable duration of request processing is a big issue for the end users of the system. We propose to replace proxy certificates with hashes that are individual to each request and unlimited in time. Our approach avoids the use of proxy certificates altogether, making the security infrastructure of a distributed computing system easier to develop, support and use.
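The per-request credential idea above can be illustrated with a keyed hash: the authorization service derives one token per request, which stays valid until the request completes rather than expiring after a fixed proxy lifetime. This is a generic sketch under assumed names, not the authors' actual protocol:

```python
# Illustrative sketch: a per-request HMAC token in place of a
# time-limited proxy certificate. Names and flow are hypothetical.
import hmac
import hashlib
import secrets

SERVER_KEY = secrets.token_bytes(32)  # held by the authorization service

def issue_request_token(user_id: str, request_id: str) -> str:
    """One token per request; bound to the request, not to a lifetime."""
    msg = f"{user_id}:{request_id}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()

def verify_request_token(user_id: str, request_id: str, token: str) -> bool:
    """A resource checks the token against the same shared key."""
    expected = issue_request_token(user_id, request_id)
    return hmac.compare_digest(expected, token)

tok = issue_request_token("alice", "job-42")
assert verify_request_token("alice", "job-42", tok)       # valid for its request
assert not verify_request_token("alice", "job-43", tok)   # useless for any other
```

Because each token authorizes exactly one request, there is no lifetime to outlive, which is the property the paper contrasts with expiring proxy certificates.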
NASA Astrophysics Data System (ADS)
De Bruin, T.; Thijsse, P.
2013-12-01
The Wadden Sea, a UNESCO World Heritage Site along the Northern coasts of The Netherlands, Germany and Denmark, is a very valuable, yet also highly vulnerable tidal flats area. It is noted for its ecological diversity and value, being a stopover for large numbers of migrating birds. The Wadden Sea is also used intensively for economic activities by inhabitants of the surrounding coasts and islands, as well as by the many tourists visiting the area every year. A whole series of monitoring programmes of both ecological and socio-economic parameters is carried out by a range of governmental bodies and institutes, to study the natural processes occurring in the Wadden Sea ecosystems as well as the influence of human activities on those ecosystems. Yet, the monitoring programmes are scattered and it is difficult to get an overview of those monitoring activities or to get access to the data resulting from those monitoring programmes. The Wadden Sea Long Term Ecosystem Research (WaLTER) project aims to: 1. Provide access through one data portal to a base set of consistent, standardized, long-term data on changes in the Wadden Sea ecological and socio-economic systems, in order to model and understand interrelationships with human use, climate variation and possible other drivers. 2. Provide a research infrastructure, open access to commonly shared databases, educational facilities and one or more field sites in which experimental, innovative and process-driven research can be carried out. This presentation will, after a short introduction of the WaLTER project (2011-2015), focus on the distributed data access infrastructure which is being developed and used for WaLTER. This is based on and makes use of the existing data access infrastructure of the Netherlands National Oceanographic Data Committee (NL-NODC), which has been operational since early 2009.
The NL-NODC system is identical to and in fact developed by the European SeaDataNet project, furthering standardisation on a pan-European scale. The WaLTER data portal will provide a centralized overview of all relevant Wadden Sea data, both from environmental as well as socio-economic disciplines and it will provide access to a system of distributed data sources. Much emphasis is given to address the different needs of various groups of users, such as policy makers, scientists and the general public. Benefits and pitfalls (and ways to circumvent the latter) of using this infrastructure with data from widely different disciplines will be addressed.
LXtoo: an integrated live Linux distribution for the bioinformatics community
2012-01-01
Background Recent advances in high-throughput technologies have dramatically increased biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Findings Unlike most existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics analysis, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high-performance computing. Conclusions LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of effort in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo. PMID:22813356
LXtoo: an integrated live Linux distribution for the bioinformatics community.
Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu
2012-07-19
Recent advances in high-throughput technologies have dramatically increased biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics analysis, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high-performance computing. LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of effort in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.
Advanced Optical Burst Switched Network Concepts
NASA Astrophysics Data System (ADS)
Nejabati, Reza; Aracil, Javier; Castoldi, Piero; de Leenheer, Marc; Simeonidou, Dimitra; Valcarenghi, Luca; Zervas, Georgios; Wu, Jian
In recent years, as the bandwidth and the speed of networks have increased significantly, a new generation of network-based applications using the concept of distributed computing and collaborative services is emerging (e.g., Grid computing applications). The use of the available fiber and DWDM infrastructure for these applications is a logical choice, offering huge amounts of cheap bandwidth and ensuring global reach of computing resources [230]. Currently, there is a great deal of interest in deploying optical circuit (wavelength) switched network infrastructure for distributed computing applications that require long-lived wavelength paths and address the specific needs of a small number of well-known users. Typical users are particle physicists who, due to their international collaborations and experiments, generate enormous amounts of data (Petabytes per year). These users require a network infrastructure that can support processing and analysis of large datasets through globally distributed computing resources [230]. However, providing wavelength-granularity bandwidth services is not an efficient and scalable solution for applications and services that address a wider base of user communities with different traffic profiles and connectivity requirements. Examples of such applications may be: scientific collaboration on a smaller scale (e.g., bioinformatics, environmental research), distributed virtual laboratories (e.g., remote instrumentation), e-health, national security and defense, personalized learning environments and digital libraries, and evolving broadband user services (i.e., high-resolution home video editing, real-time rendering, high-definition interactive TV). As a specific example, in e-health services, and in particular mammography applications, the size and quantity of images produced by remote mammography impose stringent network requirements.
Initial calculations have shown that for 100 patients to be screened remotely, the network would have to securely transport 1.2 GB of data every 30 s [230]. It is clear from the above that these types of applications need a new network infrastructure and transport technology that makes large amounts of bandwidth at subwavelength granularity, together with storage, computation, and visualization resources, potentially available to a wide user base for specified time durations. As these types of collaborative and network-based applications evolve to address a wide range and large number of users, it is infeasible to build dedicated networks for each application type or category. Consequently, there should be an adaptive network infrastructure able to support all application types, each with their own access, network, and resource usage patterns. This infrastructure should offer flexible and intelligent network elements and control mechanisms able to deploy new applications quickly and efficiently.
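The quoted mammography requirement translates into a concrete sustained-throughput figure, which can be checked with a back-of-the-envelope calculation (decimal units assumed):

```python
# Sustained throughput implied by "1.2 GB of data every 30 s" [230].
data_gb = 1.2       # gigabytes per screening window
window_s = 30       # seconds

throughput_MBps = data_gb * 1000 / window_s   # megabytes per second
throughput_Mbps = throughput_MBps * 8         # megabits per second
print(throughput_MBps, throughput_Mbps)       # 40.0 MB/s, i.e. 320 Mbit/s
```

A sustained 320 Mbit/s per 100-patient cohort is well below a wavelength's capacity, which illustrates why the text argues for subwavelength-granularity bandwidth services rather than dedicated wavelength paths.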
Building sustainable multi-functional prospective electronic clinical data systems.
Randhawa, Gurvaneet S; Slutsky, Jean R
2012-07-01
A better alignment of the goals of the biomedical research enterprise and the health care delivery system can help fill the large gaps in our knowledge of the impact of clinical interventions on patient outcomes in the real world. There are several initiatives underway to align the research priorities of patients, providers, researchers, and policy makers. These include Agency for Healthcare Research and Quality (AHRQ)-supported projects to build a flexible prospective clinical electronic data infrastructure that meets the needs of these diverse users. AHRQ has previously supported the creation of 2 distributed research networks as a new approach to conduct comparative effectiveness research (CER) while protecting a patient's confidential information and the proprietary needs of a clinical organization. It has applied its experience in building these networks in directing the American Recovery and Reinvestment Act funds for CER to support new clinical electronic infrastructure projects that can be used for several purposes including CER, quality improvement, clinical decision support, and disease surveillance. In addition, AHRQ has funded a new Electronic Data Methods forum to advance the methods in clinical informatics, research analytics, and governance by actively engaging investigators from the American Recovery and Reinvestment Act-funded projects and external stakeholders.
The Sunrise project: An R&D project for a national information infrastructure prototype
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Juhnyoung
1995-02-01
Sunrise is a Los Alamos National Laboratory (LANL) project started in October 1993. It is intended to be a prototype National Information Infrastructure (NII) development project. A main focus of Sunrise is to tie together enabling technologies (networking, object-oriented distributed computing, graphical interfaces, security, multimedia technologies, and data mining technologies) with several specific applications. A diverse set of application areas was chosen to ensure that the solutions developed in the project are as generic as possible. Some of the application areas are materials modeling, medical records and image analysis, transportation simulations, and education. This paper provides a description of Sunrise and a view of the architecture and objectives of this evolving project. The primary objectives of Sunrise are three-fold: (1) to develop common information-enabling tools for advanced scientific research and its applications to industry; (2) to enhance the capabilities of important research programs at the Laboratory; and (3) to define a new way of collaboration between computer science and industrially relevant research.
Network Computing Infrastructure to Share Tools and Data in Global Nuclear Energy Partnership
NASA Astrophysics Data System (ADS)
Kim, Guehee; Suzuki, Yoshio; Teshima, Naoya
CCSE/JAEA (Center for Computational Science and e-Systems/Japan Atomic Energy Agency) integrated a prototype system of a network computing infrastructure for sharing tools and data to support the U.S.-Japan collaboration in GNEP (Global Nuclear Energy Partnership). We focused on three technical issues in applying our information process infrastructure: accessibility, security, and usability. In designing the prototype system, we integrated and improved both network and Web technologies. For the accessibility issue, we adopted SSL-VPN (Secure Sockets Layer-Virtual Private Network) technology for access beyond firewalls. For the security issue, we developed an authentication gateway based on the PKI (Public Key Infrastructure) authentication mechanism to strengthen security. We also set a fine-grained access control policy for shared tools and data and used a shared-key encryption method to protect them against leakage to third parties. For the usability issue, we chose Web browsers as the user interface and developed a Web application providing functions to support the sharing of tools and data. By using the WebDAV (Web-based Distributed Authoring and Versioning) function, users can manipulate shared tools and data through a Windows-like folder environment. We implemented the prototype system in a Grid infrastructure for atomic energy research: AEGIS (Atomic Energy Grid Infrastructure), developed by CCSE/JAEA. The prototype system was applied for trial use in the first period of GNEP.
The Infrastructure of Academic Research.
ERIC Educational Resources Information Center
Davey, Ken
1996-01-01
Canadian university infrastructures have eroded as seen in aging equipment, deteriorating facilities, and fewer skilled personnel to maintain and operate research equipment. Research infrastructure includes administrative overhead, facilities and equipment, and research personnel including faculty, technicians, and students. The biggest erosion of…
Security and Policy for Group Collaboration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ian Foster; Carl Kesselman
2006-07-31
“Security and Policy for Group Collaboration” was a Collaboratory Middleware research project aimed at providing the fundamental security and policy infrastructure required to support the creation and operation of distributed, computationally enabled collaborations. The project developed infrastructure that exploits innovative new techniques to address challenging issues of scale, dynamics, distribution, and role. To greatly reduce the cost of adding new members to a collaboration, we developed and evaluated new techniques for creating and managing credentials based on public key certificates, including support for online certificate generation, online certificate repositories, and support for multiple certificate authorities. To facilitate the integration of new resources into a collaboration, we significantly improved the integration of local security environments. To make it easy to create and change the role and associated privileges of both resources and participants of a collaboration, we developed community-wide authorization services that provide distributed, scalable means for specifying policy. These services make possible the delegation of capability from the community to a specific user, class of user, or resource. Finally, we instantiated our research results in a framework that makes them usable by a wide range of collaborative tools. The resulting mechanisms and software have been widely adopted within DOE projects and in many other scientific projects. The widespread adoption of our Globus Toolkit technology has provided, and continues to provide, a natural dissemination and technology transfer vehicle for our results.
IsoMAP (Isoscape Modeling, Analysis, and Prediction)
NASA Astrophysics Data System (ADS)
Miller, C. C.; Bowen, G. J.; Zhang, T.; Zhao, L.; West, J. B.; Liu, Z.; Rapolu, N.
2009-12-01
IsoMAP is a TeraGrid-based web portal aimed at building the infrastructure that brings together distributed multi-scale and multi-format geospatial datasets to enable statistical analysis and modeling of environmental isotopes. A typical workflow enabled by the portal includes (1) data source exploration and selection; (2) statistical analysis and model development; (3) predictive simulation of isotope distributions using models developed in (1) and (2); and (4) analysis and interpretation of simulated spatial isotope distributions (e.g., comparison with independent observations, pattern analysis). The gridded models and data products created by one user can be shared and reused among users within the portal, enabling collaboration and knowledge transfer. This infrastructure and the research it fosters can lead to fundamental changes in our knowledge of the water cycle and ecological and biogeochemical processes through analysis of network-based isotope data, but it will be important A) that those with whom the data and models are shared can be sure of the origin, quality, inputs, and processing history of these products, and B) that the system is agile and intuitive enough to facilitate this sharing (rather than just ‘allow’ it). IsoMAP researchers are therefore building into the portal’s architecture several components meant to increase the amount of metadata about users’ products and to repurpose those metadata to make sharing and discovery more intuitive and robust for both expected, professional users and unforeseeable populations from other sectors.
An interoperable research data infrastructure to support climate service development
NASA Astrophysics Data System (ADS)
De Filippis, Tiziana; Rocchi, Leandro; Rapisardi, Elena
2018-02-01
Accessibility, availability, re-use and re-distribution of scientific data are prerequisites for building climate services across Europe. From this perspective the Institute of Biometeorology of the National Research Council (IBIMET-CNR), aiming to contribute to the sharing and integration of research data, has developed a research data infrastructure to support the scientific activities conducted in several national and international research projects. The proposed architecture uses open-source tools to ensure sustainability in the development and deployment of Web applications with geographic features and data analysis functionalities. The spatial data infrastructure components are organized in a typical client-server architecture and interact at every stage, from the data-provider download process to the representation of results to end users. The availability of structured raw data as customized information paves the way for building climate service purveyors to support adaptation, mitigation and risk management at different scales.
This work is a bottom-up collaborative initiative between different IBIMET-CNR research units (e.g. geomatics and information and communication technology - ICT; agricultural sustainability; international cooperation in least developed countries - LDCs) that embrace the same approach for sharing and re-use of research data and informatics solutions based on co-design, co-development and co-evaluation among different actors to support the production and application of climate services. During the development phase of Web applications, different users (internal and external) were involved in the whole process so as to better define user needs and suggest the implementation of specific custom functionalities. Indeed, the services are addressed to researchers, academics, public institutions and agencies - practitioners who can access data and findings from recent research in the field of applied meteorology and climatology.
Data distribution service-based interoperability framework for smart grid testbed infrastructure
Youssef, Tarek A.; Elsayed, Ahmed T.; Mohammed, Osama A.
2016-03-02
This study presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurements and control network. The advantages of utilizing the data-centric over the message-centric communication approach are discussed in the context of smart grid applications. The data distribution service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamically participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programming interface for the testbed infrastructure were developed in order to facilitate interoperability and remote access to the testbed. This interface allows control, monitoring, and performing of experiments remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).
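The data-centric pattern underlying DDS can be contrasted with message-centric designs in a few lines: publishers write samples to a named topic rather than to named receivers, and any node that has joined that topic receives them. The sketch below is plain Python for illustration only, not the DDS API, and a real DDS bus delivers peer-to-peer with no central broker:

```python
# Illustrative topic-based data bus, sketching the data-centric pattern
# that DDS provides (publishers and subscribers are decoupled via topics).
from collections import defaultdict

class DataBus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        # "Automatic discovery": a node may join a topic at any time.
        self._subs[topic].append(callback)

    def publish(self, topic, sample):
        # Every subscriber on the topic receives the sample; the
        # publisher never addresses receivers by name.
        for cb in self._subs[topic]:
            cb(sample)

bus = DataBus()
readings = []
bus.subscribe("grid/voltage", readings.append)
bus.subscribe("grid/voltage", lambda s: None)  # a second, independent node
bus.publish("grid/voltage", {"bus_id": 7, "volts": 118.2})
```

Decoupling producers from consumers in this way is what lets new measurement or control nodes join a smart grid bus without reconfiguring existing participants.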
Integrating Data Distribution and Data Assimilation Between the OOI CI and the NOAA DIF
NASA Astrophysics Data System (ADS)
Meisinger, M.; Arrott, M.; Clemesha, A.; Farcas, C.; Farcas, E.; Im, T.; Schofield, O.; Krueger, I.; Klacansky, I.; Orcutt, J.; Peach, C.; Chave, A.; Raymer, D.; Vernon, F.
2008-12-01
The Ocean Observatories Initiative (OOI) is an NSF-funded program to establish the ocean observing infrastructure of the 21st century, benefiting research and education. It is currently approaching final design and promises to deliver cyber and physical observatory infrastructure components as well as substantial core instrumentation to study environmental processes of the ocean at various scales, from coastal shelf-slope exchange processes to the deep ocean. The OOI's data distribution network lies at the heart of its cyberinfrastructure, which enables a multitude of science and education applications, ranging from data analysis to processing, visualization and ontology-supported query and mediation. In addition, it fundamentally supports a class of applications exploiting the knowledge gained from analyzing observational data for objective-driven ocean observing applications, such as automatically triggered response to episodic environmental events and interactive instrument tasking and control. The U.S. Department of Commerce through NOAA operates the Integrated Ocean Observing System (IOOS), providing continuous data in various formats, rates and scales on open oceans and coastal waters to scientists, managers, businesses, governments, and the public to support research and inform decision-making. The NOAA IOOS program initiated development of the Data Integration Framework (DIF) to improve management and delivery of an initial subset of ocean observations with the expectation of achieving improvements in a select set of NOAA's decision-support tools. Both OOI and NOAA through DIF collaborate on an effort to integrate the data distribution, access and analysis needs of both programs.
We present details and early findings from this collaboration; one part of it is the development of a demonstrator combining web-based user access to oceanographic data through ERDDAP, efficient science data distribution, and scalable, self-healing deployment in a cloud computing environment. ERDDAP is a web-based front-end application integrating oceanographic data sources of various formats, for instance CDF data files as aggregated through NcML or presented using a THREDDS server. The OOI-designed data distribution network provides global traffic management and computational load balancing for observatory resources; it makes use of the OpenDAP Data Access Protocol (DAP) for efficient canonical science data distribution over the network. A cloud computing strategy is the basis for scalable, self-healing organization of an observatory's computing and storage resources, independent of the physical location and technical implementation of these resources.
Towards a Multi-Mission, Airborne Science Data System Environment
NASA Astrophysics Data System (ADS)
Crichton, D. J.; Hardman, S.; Law, E.; Freeborn, D.; Kay-Im, E.; Lau, G.; Oswald, J.
2011-12-01
NASA earth science instruments are increasingly relying on airborne missions. However, traditionally, there has been limited common infrastructure support available to principal investigators in the area of science data systems. As a result, each investigator has been required to develop their own computing infrastructures for the science data system. Typically, there is little software reuse and many projects lack sufficient resources to provide a robust infrastructure to capture, process, distribute and archive the observations acquired from airborne flights. At NASA's Jet Propulsion Laboratory (JPL), we have been developing a multi-mission data system infrastructure for airborne instruments called the Airborne Cloud Computing Environment (ACCE). ACCE encompasses the end-to-end lifecycle covering planning, provisioning of data system capabilities, and support for scientific analysis in order to improve the quality, cost-effectiveness, and capabilities to enable new scientific discovery and research in earth observation. This includes improving data system interoperability across each instrument. A principal characteristic is being able to provide an agile infrastructure that is architected to allow for a variety of configurations, from locally installed compute and storage services to provisioning those services via the "cloud" from cloud computing vendors such as Amazon.com. Investigators often have different needs that require a flexible configuration. The data system infrastructure is built on the Apache Object Oriented Data Technology (OODT) suite of components, which has been used for a number of spaceborne missions and provides a rich set of open source software components and services for constructing science processing and data management systems. In 2010, a partnership was formed between the ACCE team and the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) mission to support the data processing and data management needs. 
A principal goal is to provide support for the Fourier Transform Spectrometer (FTS) instrument, which will produce over 700,000 soundings over the life of its three-year mission. The cost to purchase and operate a cluster-based system in order to generate Level 2 Full Physics products from this data was prohibitive. Through an evaluation of cloud computing solutions, Amazon's Elastic Compute Cloud (EC2) was selected for the CARVE deployment. As the ACCE infrastructure is developed and extended to form an infrastructure for airborne missions, the experience of working with CARVE has provided a number of lessons learned and has reinforced the unique aspects of airborne missions and the importance of the ACCE infrastructure in developing a cost-effective, flexible multi-mission capability that leverages emerging capabilities in cloud computing, workflow management, and distributed computing.
Flexible services for the support of research.
Turilli, Matteo; Wallom, David; Williams, Chris; Gough, Steve; Curran, Neal; Tarrant, Richard; Bretherton, Dan; Powell, Andy; Johnson, Matt; Harmer, Terry; Wright, Peter; Gordon, John
2013-01-28
Cloud computing has been increasingly adopted by users and providers to promote flexible, scalable and tailored access to computing resources. Nonetheless, the consolidation of this paradigm has uncovered some of its limitations. Initially devised by corporations with direct control over large amounts of computational resources, cloud computing is now being endorsed by organizations with limited resources or with a more articulated, less direct control over these resources. The challenge for these organizations is to leverage the benefits of cloud computing while dealing with limited and often widely distributed computing resources. This study focuses on the adoption of cloud computing by higher education institutions and addresses two main issues: flexible and on-demand access to a large amount of storage resources, and scalability across a heterogeneous set of cloud infrastructures. The proposed solutions leverage a federated approach to cloud resources in which users access multiple and largely independent cloud infrastructures through a highly customizable broker layer. This approach allows for a uniform authentication and authorization infrastructure, a fine-grained policy specification and the aggregation of accounting and monitoring. Within a loosely coupled federation of cloud infrastructures, users can access vast amounts of data without copying them across cloud infrastructures and can scale their resource provisions when the local cloud resources become insufficient.
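The broker-layer idea described above can be caricatured in a few lines: a single entry point authenticates the request once, applies a per-user access policy, and dispatches to whichever federated site holds the data, so the data is never copied between clouds. All class and method names below are invented for illustration; a real federation broker would sit in front of actual cloud storage APIs.

```python
class CloudSite:
    """One member of the federation, holding named datasets locally."""
    def __init__(self, name):
        self.name = name
        self.datasets = {}

    def put(self, key, blob):
        self.datasets[key] = blob

    def get(self, key):
        return self.datasets.get(key)


class FederationBroker:
    """Uniform authorization plus routing across independent cloud sites (illustrative)."""
    def __init__(self, sites, acl):
        self.sites = sites   # name -> CloudSite
        self.acl = acl       # user -> set of dataset keys the user may read

    def fetch(self, user, key):
        # The fine-grained policy check happens once, at the broker layer
        if key not in self.acl.get(user, set()):
            raise PermissionError(f"{user} may not read {key}")
        # Data is served from whichever site holds it -- no cross-cloud copy
        for site in self.sites.values():
            blob = site.get(key)
            if blob is not None:
                return blob, site.name
        raise KeyError(key)


eu = CloudSite("eu-cloud")
us = CloudSite("us-cloud")
us.put("ocean/sst-2010", b"...")
broker = FederationBroker({"eu": eu, "us": us}, {"alice": {"ocean/sst-2010"}})
blob, origin = broker.fetch("alice", "ocean/sst-2010")
print(origin)
```

Accounting and monitoring aggregation would hook into the same `fetch` path, which is what makes the broker layer a natural place for federation-wide policy.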
Distributed data networks: a blueprint for Big Data sharing and healthcare analytics.
Popovic, Jennifer R
2017-01-01
This paper defines the attributes of distributed data networks and outlines the data and analytic infrastructure needed to build and maintain a successful network. We use examples from one successful implementation of a large-scale, multisite, healthcare-related distributed data network, the U.S. Food and Drug Administration-sponsored Sentinel Initiative. Analytic infrastructure-development concepts are discussed from the perspective of promoting six pillars of analytic infrastructure: consistency, reusability, flexibility, scalability, transparency, and reproducibility. This paper also introduces one use case for machine learning algorithm development to fully utilize and advance the portfolio of population health analytics, particularly those using multisite administrative data sources. © 2016 New York Academy of Sciences.
The Ocean Observatories Initiative: Data, Data and More Data
NASA Astrophysics Data System (ADS)
Crowley, M. F.; Vardaro, M.; Belabbassi, L.; Smith, M. J.; Garzio, L. M.; Knuth, F.; Glenn, S. M.; Schofield, O.; Lichtenwalner, C. S.; Kerfoot, J.
2016-02-01
The Ocean Observatories Initiative (OOI), a project funded by the National Science Foundation (NSF) and managed by the Consortium for Ocean Leadership, is a networked infrastructure of science-driven sensor systems that measure the physical, chemical, geological, and biological variables in the ocean and seafloor on coastal, regional, and global scales. OOI long term research arrays have been installed off the Washington coast (Cabled), Massachusetts and Oregon coasts (Coastal) and off Alaska, Greenland, Chile and Argentina (Global). Woods Hole Oceanographic Institution and Oregon State University are responsible for the coastal and global moorings and their autonomous vehicles. The University of Washington is responsible for cabled seafloor systems and moorings. Rutgers University operates the Cyberinfrastructure (CI) portion of the OOI, which acquires, processes and distributes data to the scientists, researchers, educators and the public. It also provides observatory mission command and control, data assessment and distribution, and long-term data management. This talk will present an overview of the OOI infrastructure and its three primary websites which include: 1) An OOI overview website offering technical information on the infrastructure ranging from instruments to science goals, news, deployment updates, and information on the proposal process, 2) The Education and Public Engagement website where students can view and analyze exactly the same data that scientists have access to at exactly the same time, but with simple visualization tools and compartmentalized lessons that lead them through complex science questions, and 3) The primary data access website and machine to machine interface where anyone can plot or download data from the over 700 instruments within the OOI Network.
NASA Astrophysics Data System (ADS)
Dawes, N.; Salehi, A.; Clifton, A.; Bavay, M.; Aberer, K.; Parlange, M. B.; Lehning, M.
2010-12-01
It has long been known that environmental processes are cross-disciplinary, but data has continued to be acquired and held for a single purpose. Swiss Experiment is a rapidly evolving cross-disciplinary, distributed sensor data infrastructure, where tools for the environmental science community stem directly from computer science research. The platform uses the bleeding edge of computer science to acquire, store and distribute data and metadata from all environmental science disciplines at a variety of temporal and spatial resolutions. SwissEx is simultaneously developing new technologies to allow low cost, high spatial and temporal resolution measurements such that small areas can be intensely monitored. This data is then combined with existing widespread, low density measurements in the cross-disciplinary platform to provide well documented datasets, which are of use to multiple research disciplines. We present a flexible, generic infrastructure at an advanced stage of development. The infrastructure makes the most of Web 2.0 technologies for a collaborative working environment and as a user interface for a metadata database. This environment is already closely integrated with GSN, an open-source database middleware developed under Swiss Experiment for acquisition and storage of generic time-series data (2D and 3D). GSN can be queried directly by common data processing packages and makes data available in real-time to models and 3rd party software interfaces via its web service interface. It also provides real-time push or pull data exchange between instances, a user management system which leaves data owners in charge of their data, advanced real-time processing and much more. The SwissEx interface is increasingly gaining users and supporting environmental science in Switzerland. 
It is also an integral part of environmental education projects ClimAtscope and O3E, where the technologies can provide rapid feedback of results for children of all ages and where the data from their own stations can be compared to national data networks.
NASA Astrophysics Data System (ADS)
Garcia, Oscar; Mihai Toma, Daniel; Dañobeitia, Juanjo; del Rio, Joaquin; Bartolome, Rafael; Martínez, Enoc; Nogueras, Marc; Bghiel, Ikram; Lanteri, Nadine; Rolin, Jean Francois; Beranzoli, Laura; Favali, Paolo
2017-04-01
The EMSODEV project (EMSO implementation and operation: DEVelopment of instrument module) is a Horizon 2020 EU project whose overall objective is the operation of eleven seafloor observatories and four test sites. These infrastructures are distributed throughout European seas, from the Arctic across the Atlantic and the Mediterranean to the Black Sea, and are managed by the European consortium EMSO-ERIC (European Research Infrastructure Consortium) with the participation of 8 European countries and other associated partners. Recently, we have implemented the EMSO Generic Instrument Module (EGIM) within the EMSO-ERIC distributed marine research infrastructure. EGIM is able to operate on any EMSO observatory node: mooring line, seabed station, cabled or non-cabled, and surface buoy. The main role of EGIM is to measure a set of core variables homogeneously, using the same hardware, sensor references, qualification methods, calibration methods, data format and access, and maintenance procedures, in several European ocean locations. The EGIM module acquires a wide range of ocean parameters in a long-term, consistent, accurate and comparable manner, serving disciplines such as biology, geology, chemistry, physics, engineering, and computer science, from polar to subtropical environments, through the water column down to the deep sea. Our work includes developing standard-compliant generic software for Sensor Web Enablement (SWE) on EGIM and performing the first onshore and offshore bench tests, to support sensor data acquisition on a new interoperable EGIM system. EGIM is in turn linked to acquisition driver processes, a centralized Sensor Observation Service (SOS) server, and a laboratory monitoring system (LabMonitor) that records events and alarms during acquisition. The measurements recorded at EMSO nodes are essential to respond accurately to social and scientific challenges such as climate change, changes in marine ecosystems, and marine hazards. 
This presentation shows the first EGIM deployment and the SWE infrastructure, developed to manage the data acquisition from the underwater sensors and their insertion to the SOS interface.
Big Data from Europe's Natural Science Collections through DiSSCo
NASA Astrophysics Data System (ADS)
Addink, Wouter; Koureas, Dimitris; Casino, Ana
2017-04-01
DiSSCo, a Distributed System of Scientific Collections, will be a Research Infrastructure delivering big data describing the history of Planet Earth. Approximately 1.5 billion biological and geological specimens, representing the last 300 years of scientific study on the natural world, reside in collections all over Europe. These span 4.5 billion years of history, from the formation of the solar system to the present day. In the European landscape of environmental Research Infrastructures, different projects and landmarks describe services that aim at aggregating, monitoring, analysing and modelling geo-diversity information. The effectiveness of these services, however, depends on the quality and availability of primary reference data that today is scattered and incomplete. DiSSCo provides the required bio-geographical, taxonomic and species trait data at the level of precision and accuracy required to enable and speed up research on the seven grand societal challenges that are priorities of the Europe 2020 strategy. DiSSCo enables better connections between collection data and observations in biodiversity observation networks, such as EU BON and GEOBON. This supports research areas like long-term ecological research, for which continuity over long time spans is a strength of biological collections.
NASA Astrophysics Data System (ADS)
Testor, Pierre
2013-04-01
In the 1990s, while gliders were being developed and successfully passing first tests, their potential use for ocean research started to be discussed in international conferences because they could help us improve the cost-effectiveness, sampling, and distribution of ocean observations (see OceanObs'99 Conference Statement - UNESCO). After the prototype phase, in the 2000s, one could only witness the growing glider activity throughout the world. The first glider experiments in Europe brought together several teams that were interested in the technology, and a consortium formed naturally from these informal collaborations. Since 2006, Everyone's Gliding Observatories (EGO - http://www.ego-network.org) Workshops and Glider Schools have been organized, becoming the international forum for glider activities. Some key challenges have emerged from the expansion of the glider system and now require setting up a sustainable European as well as global system to operate gliders and to ensure a smooth and sustained link to the Global Ocean Observing System (GOOS). Glider technology faces many scientific, technological and logistical issues. In particular, it approaches the challenge of controlling many steerable probes in a variable environment for better sampling. It also needs the development of new formats and procedures in order to build glider observatories at a global level. Several geographically distributed teams of oceanographers now operate gliders, and there is a risk of fragmentation. We will here present results from our consortium, which intends to solve most of these issues through scientific and technological coordination and networking. This approach is supported by the ESF through Cooperation in the field of Scientific and Technical Research (COST). 
The COST Action ES0904 "EGO" started in July 2010 aiming to build international cooperation and capacities at the scientific, technological, and organizational levels, for sustained observations of the oceans with gliders. A major impact of this Action was the elaboration of the EU Collaborative Project GROOM, Gliders for Research, Ocean Observation and Management for the FP7 call "Capacities - Research Infrastructures", which addresses the topic "design studies for research infrastructures in all S&T fields" (see http://www.groom-fp.eu).
The EPOS Architecture: Integrated Services for solid Earth Science
NASA Astrophysics Data System (ADS)
Cocco, Massimo; Consortium, Epos
2013-04-01
The European Plate Observing System (EPOS) represents a scientific vision and an IT approach in which innovative multidisciplinary research is made possible for a better understanding of the physical processes controlling earthquakes, volcanic eruptions, unrest episodes and tsunamis as well as those driving tectonics and Earth surface dynamics. EPOS has a long-term plan to facilitate integrated use of data, models and facilities from existing (but also new) distributed research infrastructures for solid Earth science. One primary purpose of EPOS is to take full advantage of the new e-science opportunities becoming available. The aim is to obtain an efficient and comprehensive multidisciplinary research platform for the Earth sciences in Europe. The EPOS preparatory phase (EPOS PP), funded by the European Commission within the Capacities program, started on November 1st, 2010, and has completed its first two years of activity. EPOS is presently mid-way through its preparatory phase and to date has achieved all the objectives, milestones and deliverables planned in its roadmap towards construction. The EPOS mission is to integrate the existing research infrastructures (RIs) in solid Earth science, ensuring increased accessibility and usability of multidisciplinary data from monitoring networks, laboratory experiments and computational simulations. This is expected to enhance worldwide interoperability in the Earth sciences and establish a leading, integrated European infrastructure offering services to researchers and other stakeholders. The Preparatory Phase aims at bringing the project to the level of maturity required to implement the EPOS construction phase, with a defined legal structure, detailed technical planning and a financial plan. 
We will present the EPOS architecture, which relies on the integration of the main outcomes from legal, governance and financial work following the strategic EPOS roadmap and according to the technical work done during the first two years in order to establish an effective implementation plan guaranteeing long term sustainability for the infrastructure and the associated services. We plan to describe the RIs to be integrated in EPOS and to illustrate the initial suite of integrated and thematic core services to be offered to the users. We will present examples of combined data analyses and we will address the importance of opening our research infrastructures to users from different communities. We will describe the use-cases identified so far in order to allow stakeholders and potential future users to understand and interact with the EPOS infrastructure. In this framework, we also discuss the global perspectives for data infrastructures in order to verify the coherency of the EPOS plans and present the EPOS contributions. We also discuss the international cooperation initiatives in which EPOS is involved emphasizing the implications for solid Earth data infrastructures. In particular, EPOS and the satellite Earth Observation communities are collaborating in order to promote the integration of data from in-situ monitoring networks and satellite observing systems. Finally, we will also discuss the priorities for the third year of activity and the key actions planned to better involve users in EPOS. In particular, we will discuss the work done to finalize the design phase as well as the activities to start the validation and testing phase of the EPOS infrastructure.
MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.
Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui
A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the Data Distribution Service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter and transport priority, as well as experiments on real robots, validate the effectiveness of this work.
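The loosely coupled publish-subscribe model with time-related constraints that micROS-drt builds on DDS can be sketched in plain Python. This is a toy in-process dispatcher, not the DDS API: the `deadline` check mimics, in a deliberately simplified way, the DDS deadline QoS policy, which flags topics whose samples arrive too far apart.

```python
import time

class Topic:
    """Toy topic with a DDS-like 'deadline' constraint: interested parties are
    warned when the gap between consecutive samples exceeds the deadline."""
    def __init__(self, name, deadline_s=None):
        self.name = name
        self.deadline_s = deadline_s
        self.subscribers = []    # callables invoked on each sample
        self.missed = []         # callables notified of deadline misses
        self._last = None        # timestamp of the previous sample

    def subscribe(self, on_data, on_deadline_missed=None):
        self.subscribers.append(on_data)
        if on_deadline_missed is not None:
            self.missed.append(on_deadline_missed)

    def publish(self, sample, now=None):
        now = time.monotonic() if now is None else now
        if (self._last is not None and self.deadline_s is not None
                and now - self._last > self.deadline_s):
            for cb in self.missed:
                cb(self.name)
        self._last = now
        for cb in self.subscribers:
            cb(sample)           # publisher never references subscribers directly


received, misses = [], []
odom = Topic("/odom", deadline_s=0.1)
odom.subscribe(received.append, misses.append)
odom.publish({"x": 0.0}, now=0.00)
odom.publish({"x": 0.1}, now=0.05)   # within the 0.1 s deadline
odom.publish({"x": 0.2}, now=0.30)   # 0.25 s gap -> deadline missed
print(len(received), misses)
```

Real DDS adds much more (discovery, transport priorities, history and reliability QoS), but the decoupling shown here, where publisher and subscribers know only the topic, is the core of the model.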
Facilities | Hydrogen and Fuel Cells | NREL
integration research. Photo of the Hydrogen Infrastructure Testing and Research Facility building, with hydrogen fueling station and fuel cell vehicles. Hydrogen Infrastructure Testing and Research Facility The Hydrogen Infrastructure Testing and Research Facility (HITRF) at the ESIF combines electrolyzers, a
The UK DNA banking network: a “fair access” biobank
Dixon, Katherine; Platt, Andrew; Pullum, Simon; Lewis, David; Hall, Alistair; Ollier, William
2009-01-01
The UK DNA Banking Network (UDBN) is a secondary biobank: it aggregates and manages resources (samples and data) originated by others. The network comprises, on the one hand, investigator groups led by clinicians each with a distinct disease specialism and, on the other hand, a research infrastructure to manage samples and data. The infrastructure addresses the problem of providing secure quality-assured accrual, storage, replenishment and distribution capacities for samples and of facilitating access to DNA aliquots and data for new peer-reviewed studies in genetic epidemiology. ‘Fair access’ principles and practices have been pragmatically developed that, unlike open access policies in this area, are not cumbersome but, rather, are fit for the purpose of expediting new study designs and their implementation. UDBN has so far distributed >60,000 samples for major genotyping studies yielding >10 billion genotypes. It provides a working model that can inform progress in biobanking nationally, across Europe and internationally. PMID:19672698
Peek, N; Holmes, J H; Sun, J
2014-08-15
To review technical and methodological challenges for big data research in biomedicine and health. We discuss sources of big datasets, survey infrastructures for big data storage and big data processing, and describe the main challenges that arise when analyzing big data. The life and biomedical sciences are massively contributing to the big data revolution through secondary use of data that were collected during routine care and through new data sources such as social media. Efficient processing of big datasets is typically achieved by distributing computation over a cluster of computers. Data analysts should be aware of pitfalls related to big data such as bias in routine care data and the risk of false-positive findings in high-dimensional datasets. The major challenge for the near future is to transform analytical methods that are used in the biomedical and health domain, to fit the distributed storage and processing model that is required to handle big data, while ensuring confidentiality of the data being analyzed.
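The "distribute computation over a cluster of computers" pattern the review refers to is essentially map-reduce; a single-machine sketch using a worker pool shows the shape. Here partial counts of diagnosis codes are computed per partition and then merged; the records and codes are invented, and threads stand in for cluster nodes.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

# Hypothetical partitions of patient records, as they might be split across nodes
partitions = [
    [{"code": "I10"}, {"code": "E11"}, {"code": "I10"}],
    [{"code": "E11"}, {"code": "J45"}],
    [{"code": "I10"}],
]

def map_partition(records):
    """Map step: each node counts codes in its local partition only."""
    return Counter(r["code"] for r in records)

def merge(a, b):
    """Reduce step: combine partial counts from two nodes."""
    return a + b

# A thread pool stands in for the cluster in this sketch
with ThreadPoolExecutor() as pool:
    partial_counts = list(pool.map(map_partition, partitions))

totals = reduce(merge, partial_counts, Counter())
print(totals["I10"], totals["E11"], totals["J45"])
```

The point of the pattern is that no node ever needs the whole dataset in memory, which is what makes it fit the distributed storage and processing model the authors describe.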
NASA Astrophysics Data System (ADS)
Hereld, Mark; Hudson, Randy; Norris, John; Papka, Michael E.; Uram, Thomas
2009-07-01
The Computer Supported Collaborative Work research community has identified that the technology used to support distributed teams of researchers, such as email, instant messaging, and conferencing environments, is not enough. Building from a list of areas where it is believed technology can help support distributed teams, we have divided our efforts into support of asynchronous and synchronous activities. This paper will describe two of our recent efforts to improve the productivity of distributed science teams. One effort focused on supporting the management and tracking of milestones and results, with the hope of helping manage information overload. The second effort focused on providing an environment that supports real-time analysis of data. Both of these efforts are seen as add-ons to the existing collaborative infrastructure, developed to enhance the experience of teams working at a distance by removing barriers to effective communication.
AGING WATER INFRASTRUCTURE RESEARCH PROGRAM: ADDRESSING THE CHALLENGE THROUGH INNOVATION
A driving force behind the Sustainable Water Infrastructure (SI) initiative and the Aging Water Infrastructure (AWI) research program is the Clean Water and Drinking Water Infrastructure Gap Analysis. In this report, EPA estimated that if operation, maintenance, and capital inves...
States of Cybersecurity: Electricity Distribution System Discussions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pena, Ivonne; Ingram, Michael; Martin, Maurice
State and local entities that oversee the reliable, affordable provision of electricity are faced with growing and evolving threats from cybersecurity risks to our nation's electricity distribution system. All-hazards system resilience is a shared responsibility among electric utilities and their regulators or policy-setting boards of directors. Cybersecurity presents new challenges and should be a focus for states, local governments, and Native American tribes that are developing energy-assurance plans to protect critical infrastructure. This research sought to investigate the implementation of governance and policy at the distribution utility level that facilitates cybersecurity preparedness to inform the U.S. Department of Energy (DOE), Office of Energy Policy and Systems Analysis; states; local governments; and other stakeholders on the challenges, gaps, and opportunities that may exist for future analysis. The need is urgent to identify the challenges and inconsistencies in how cybersecurity practices are being applied across the United States to inform the development of best practices, mitigations, and future research and development investments in securing the electricity infrastructure. By examining the current practices and applications of cybersecurity preparedness, this report seeks to identify the challenges and persistent gaps between policy and execution and reflect the underlying motivations of distinct utility structures as they play out at the local level. This study aims to create an initial baseline of cybersecurity preparedness within the distribution electricity sector. The focus of this study is on distribution utilities not bound by the cybersecurity guidelines of the North American Electric Reliability Corporation (NERC) to examine the range of mechanisms taken by state regulators, city councils that own municipal utilities, and boards of directors of rural cooperatives.
COOPEUS - connecting research infrastructures in environmental sciences
NASA Astrophysics Data System (ADS)
Koop-Jakobsen, Ketil; Waldmann, Christoph; Huber, Robert
2015-04-01
The COOPEUS project was initiated in 2012, bringing together 10 research infrastructures (RIs) in environmental sciences from the EU and US in order to improve the discovery, access, and use of environmental information and data across scientific disciplines and across geographical borders. The COOPEUS mission is to facilitate readily accessible research infrastructure data to advance our understanding of Earth systems through an international community-driven effort, by: bringing together both user communities and top-down directives to address evolving societal and scientific needs; removing technical, scientific, cultural and geopolitical barriers for data use; and coordinating the flow, integrity and preservation of information. A survey of data availability was conducted among the COOPEUS research infrastructures for the purpose of discovering impediments to open international and cross-disciplinary sharing of environmental data. The survey showed that the majority of data offered by the COOPEUS research infrastructures is available via the internet (>90%), but the accessibility of these data differs significantly among research infrastructures: only 45% offer open access to their data, whereas the remaining infrastructures offer restricted access, e.g., they do not release raw or sensitive data, demand user registration or require permission prior to release of data. These rules and regulations are often installed as a form of standard practice, whereas formal data policies are lacking in 40% of the infrastructures, primarily in the EU. In order to improve this situation COOPEUS has installed a common data-sharing policy, which is agreed upon by all the COOPEUS research infrastructures. To investigate the existing opportunities for improving interoperability among environmental research infrastructures, COOPEUS explored the opportunities of the GEOSS common infrastructure (GCI) by holding a hands-on workshop. 
Through exercises directly registering resources, the first steps were taken to implement the GCI as a platform for documenting the capabilities of the COOPEUS research infrastructures. COOPEUS recognizes the potential for the GCI to become an important platform promoting cross-disciplinary approaches in the studies of multifaceted environmental challenges. Recommendations from the workshop participants also revealed that in order to attract research infrastructures to use the GCI, the registration process must be simplified and accelerated. However, the data policies of the individual research infrastructures, or lack thereof, can also prevent the use of the GCI or other portals, due to ambiguities regarding data management authority and data ownership. COOPEUS shall continue to promote cross-disciplinary data exchange in the environmental field and will in the future expand to also include other geographical areas.
Aging Water Infrastructure Research Program Update: Innovation & Research for the 21st Century
This slide presentation summarizes key elements of EPA's Office of Research and Development (ORD) Aging Water Infrastructure (AWI) Research Program. An overview of the national problems posed by aging water infrastructure is followed by a brief description of EPA's overall...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, Hyunju; Pandit, Arka; Crittenden, John
The population growth coupled with increasing urbanization is predicted to exert a huge demand on the growth and retrofit of urban infrastructure, particularly in water and energy systems. The U.S. population is estimated to grow by 23% (UN, 2009) between 2005 and 2030. The corresponding increases in energy and water demand were predicted as 14% (EIA, 2009) and 20% (Elcock, 2008), respectively. The water-energy nexus needs to be better understood to satisfy the increased demand in a sustainable manner without conflicting with environmental and economic constraints. Overall, 4% of U.S. power generation is used for water distribution (80%) and treatment (20%). 3% of U.S. water consumption (100 billion gallons per day, or 100 BGD) and 40% of U.S. water withdrawal (340 BGD) are for thermoelectric power generation (Goldstein and Smith, 2002). The water demand for energy production is predicted to increase most significantly among the water consumption sectors by 2030. On the other hand, due to the dearth of conventional water sources, energy-intensive technologies are increasingly in use to treat seawater and brackish groundwater for water supply. Thus comprehending the interrelation and interdependency between water and energy systems is imperative to evaluate sustainable water and energy supply alternatives for cities. In addition to the water-energy nexus, the decentralized or distributed concept is also beneficial for designing sustainable water and energy infrastructure, as these alternatives require fewer distribution lines and less space in a compact urban area. In particular, distributed energy infrastructure is better suited to interconnect various large and small scale renewable energy producers, which can be expected to mitigate greenhouse gas (GHG) emissions. In the case of decentralized water infrastructure, an on-site wastewater treatment facility can provide multiple benefits. 
First, it reduces the potable water demand by reusing the treated water for non-potable uses; second, it reduces the wastewater load on the central facility. In addition, reduced dependency on the distribution network contributes to increased reliability and resiliency of the infrastructure. The goal of this research is to develop a framework that seeks an optimal combination of decentralized water and energy alternatives and centralized infrastructures based on the physical and socio-economic environments of a region. Centralized and decentralized options related to water, wastewater, and stormwater, and distributed energy alternatives including photovoltaic (PV) generators, fuel cells, and microturbines are investigated. In the context of the water-energy nexus, water recovery from energy alternatives and energy recovery from water alternatives are reflected. Alternatives recapturing nutrients from wastewater are also considered to conserve depleting resources. The alternatives are evaluated in terms of their life-cycle environmental impact and economic performance using a hybrid life cycle assessment (LCA) tool and cost-benefit analysis, respectively. To meet the increasing demand of a test bed, an optimal combination of the alternatives is designed to minimize environmental and economic impacts including CO2 emissions, human health risk, natural resource use, and construction and operation cost. The framework determines the optimal combination depending on urban density, transmission or conveyance distance or network, geology, climate, etc. Therefore, it will also be able to evaluate infrastructure resiliency against physical and socio-economic challenges such as population growth, severe weather, energy and water shortages, and economic crises.
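The kind of optimal-combination search the abstract describes can be illustrated with a toy weighted-sum selection. This sketch is not from the study; the alternative names, capacities, costs, and CO2 figures are invented for illustration, and the brute-force subset search merely stands in for the framework's actual optimization.

```python
from itertools import combinations

# Hypothetical alternatives: (name, capacity served, cost, CO2 emissions).
ALTERNATIVES = [
    ("centralized_plant", 100, 50.0, 80.0),
    ("onsite_reuse",       30, 20.0, 10.0),
    ("pv_generation",      40, 30.0,  5.0),
    ("microturbine",       25, 15.0, 20.0),
]

def best_mix(demand, w_cost=0.5, w_co2=0.5):
    """Brute-force the subset of alternatives that meets the demand
    with the lowest weighted cost/CO2 score."""
    best, best_score = None, float("inf")
    for r in range(1, len(ALTERNATIVES) + 1):
        for mix in combinations(ALTERNATIVES, r):
            if sum(a[1] for a in mix) < demand:
                continue  # this subset cannot serve the demand
            score = (w_cost * sum(a[2] for a in mix)
                     + w_co2 * sum(a[3] for a in mix))
            if score < best_score:
                best, best_score = mix, score
    return [a[0] for a in best]
```

With these made-up numbers, a small demand favors the decentralized mix, while a large demand falls back to the centralized plant, mirroring the framework's dependence on scale.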
An authentication infrastructure for today and tomorrow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.
1996-06-01
The Open Software Foundation's Distributed Computing Environment (OSF/DCE) was originally designed to provide a secure environment for distributed applications. By combining it with Kerberos Version 5 from MIT, it can be extended to provide network security as well. This combination can be used to build both an inter- and intra-organizational infrastructure while providing single sign-on for the user with overall improved security. The ESnet community of the Department of Energy is building just such an infrastructure. ESnet has modified these systems to improve their interoperability, while encouraging the developers to incorporate these changes and work more closely together to continue to improve the interoperability. The success of this infrastructure depends on its flexibility to meet the needs of many applications and network security requirements. The open nature of Kerberos, combined with the vendor support of OSF/DCE, provides the infrastructure for today and tomorrow.
[caCORE: core architecture of bioinformation on cancer research in America].
Gao, Qin; Zhang, Yan-lei; Xie, Zhi-yun; Zhang, Qi-peng; Hu, Zhang-zhi
2006-04-18
A critical factor in the advancement of biomedical research is the ease with which data can be integrated, redistributed and analyzed both within and across domains. This paper summarizes the Biomedical Information Core Infrastructure built by the National Cancer Institute Center for Bioinformatics (NCICB) in America. The main product of the Core Infrastructure is caCORE, the cancer Common Ontologic Reference Environment, which is the infrastructure backbone supporting data management and application development at NCICB. The paper explains the structure and function of caCORE: (1) Enterprise Vocabulary Services (EVS), which provide controlled vocabulary, dictionary, and thesaurus services; EVS produces the NCI Thesaurus and the NCI Metathesaurus. (2) The Cancer Data Standards Repository (caDSR), which provides a metadata registry for common data elements. (3) Cancer Bioinformatics Infrastructure Objects (caBIO), which provide Java, Simple Object Access Protocol, and HTTP-XML application programming interfaces. The vision for caCORE is to provide a common data management framework that will support the consistency, clarity, and comparability of biomedical research data and information. In addition to providing facilities for data management and redistribution, caCORE helps solve problems of data integration. All NCICB-developed caCORE components are distributed under open-source licenses that support unrestricted usage by both non-profit and commercial entities, and caCORE has laid the foundation for a number of scientific and clinical applications. Building on this, the paper briefly describes caCORE-based applications in several NCI projects, among them CMAP (Cancer Molecular Analysis Project) and caBIG (Cancer Biomedical Informatics Grid). Finally, the paper discusses the prospects of caCORE: while it was born out of the needs of the cancer research community, it is intended to serve as a general resource.
Cancer research has historically contributed to many areas beyond tumor biology. The paper also offers some suggestions for current biomedical informatics research in China.
Co-location and Self-Similar Topologies of Urban Infrastructure Networks
NASA Astrophysics Data System (ADS)
Klinkhamer, Christopher; Zhan, Xianyuan; Ukkusuri, Satish; Elisabeth, Krueger; Paik, Kyungrock; Rao, Suresh
2016-04-01
The co-location of urban infrastructure is too obvious to be ignored. For reasons of practicality, reliability, and eminent domain, the spatial locations of many urban infrastructure networks, including drainage, sanitary sewers, and road networks, are well correlated. However, important questions dealing with correlations in the network topologies of differing infrastructure types remain unanswered. Here, we have extracted randomly distributed, nested subnets from the urban drainage, sanitary sewer, and road networks in two distinctly different cities: Amman, Jordan, and Indianapolis, USA. Network analyses were performed for each randomly chosen subnet (location and size), using a dual-mapping approach (Hierarchical Intersection Continuity Negotiation). Topological metrics for each infrastructure type were calculated and compared for all subnets in a given city. Despite large differences in the climate, governance, and populace of the two cities, and in the functional properties of the different infrastructure types, these infrastructure networks are shown to be highly spatially homogeneous. Furthermore, strong correlations are found between topological metrics of differing types of surface and subsurface infrastructure networks. Also, the network topologies of each infrastructure type for both cities are shown to exhibit self-similar characteristics, i.e., power-law node-degree distributions, p(k) = a*k^(-gamma). These findings can be used to assist city planners and engineers either expanding or retrofitting existing infrastructure or, in the case of developing countries, building new cities from the ground up. In addition, the self-similar nature of these infrastructure networks holds significant implications for the vulnerability of these critical infrastructure networks to external hazards and for ways in which network resilience can be improved.
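The exponent gamma in a power-law node-degree distribution p(k) = a*k^(-gamma) is commonly estimated by maximum likelihood rather than by fitting a line to a log-log histogram. The sketch below is not from the study; it uses the standard continuous MLE with the k_min - 0.5 offset for discrete degree data, and the sample degrees in the test are invented.

```python
import math

def powerlaw_mle_gamma(degrees, k_min=1):
    """Continuous maximum-likelihood estimate of gamma for
    p(k) ~ k^(-gamma); k_min - 0.5 is the usual discrete-data offset."""
    ks = [k for k in degrees if k >= k_min]
    return 1.0 + len(ks) / sum(math.log(k / (k_min - 0.5)) for k in ks)
```

For road or sewer networks mapped to graphs, `degrees` would be the node-degree sequence of a subnet; comparing the estimated gamma across subnets is one way to check the self-similarity the abstract reports.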
Development of a public health nursing data infrastructure.
Monsen, Karen A; Bekemeier, Betty; P Newhouse, Robin; Scutchfield, F Douglas
2012-01-01
An invited group of national public health nursing (PHN) scholars, practitioners, policymakers, and other stakeholders met in October 2010 identifying a critical need for a national PHN data infrastructure to support PHN research. This article summarizes the strengths, limitations, and gaps specific to PHN data and proposes a research agenda for development of a PHN data infrastructure. Future implications are suggested, such as issues related to the development of the proposed PHN data infrastructure and future research possibilities enabled by the infrastructure. Such a data infrastructure has potential to improve accountability and measurement, to demonstrate the value of PHN services, and to improve population health. © 2012 Wiley Periodicals, Inc.
Waggle: A Framework for Intelligent Attentive Sensing and Actuation
NASA Astrophysics Data System (ADS)
Sankaran, R.; Jacob, R. L.; Beckman, P. H.; Catlett, C. E.; Keahey, K.
2014-12-01
Advances in sensor-driven computation and computationally steered sensing will greatly enable future research in fields including environmental and atmospheric sciences. We will present "Waggle," an open-source hardware and software infrastructure developed with two goals: (1) reducing the separation and latency between sensing and computing and (2) improving the reliability and longevity of sensing-actuation platforms in challenging and costly deployments. Inspired by "deep-space probe" systems, the Waggle platform design includes features that can support longitudinal studies, deployments with varying communication links, and remote management capabilities. Waggle lowers the barrier for scientists to incorporate real-time data from their sensors into their computations and to manipulate the sensors or provide feedback through actuators. A standardized software and hardware design allows quick addition of new sensors/actuators and associated software in the nodes and enables them to be coupled with computational codes both in situ and on external compute infrastructure. The Waggle framework currently drives the deployment of two observational systems - a portable and self-sufficient weather platform for the study of small-scale effects in Chicago's urban core and an open-ended distributed instrument in Chicago that aims to support several research pursuits across a broad range of disciplines including urban planning, microbiology, and computer science. Built around open-source software, hardware, and the Linux OS, the Waggle system comprises two components - the Waggle field-node and the Waggle cloud-computing infrastructure. The Waggle field-node affords a modular, scalable, fault-tolerant, secure, and extensible platform for hosting sensors and actuators in the field. It supports in situ computation and data storage, and integration with cloud-computing infrastructure.
The Waggle cloud infrastructure is designed with the goal of scaling to several hundreds of thousands of Waggle nodes. It supports aggregating data from sensors hosted by the nodes, staging computation, relaying feedback to the nodes and serving data to end-users. We will discuss the Waggle design principles and their applicability to various observational research pursuits, and demonstrate its capabilities.
NASA Astrophysics Data System (ADS)
Wee, B.; Car, N.; Percivall, G.; Allen, D.; Fitch, P. G.; Baumann, P.; Waldmann, H. C.
2014-12-01
The Belmont Forum E-Infrastructure and Data Management Cooperative Research Agreement (CRA) is designed to foster a global community to collaborate on e-infrastructure challenges. One of the deliverables is an implementation plan to address global data infrastructure interoperability challenges and align existing domestic and international capabilities. Work package three (WP3) of the CRA focuses on the harmonization of global data infrastructure for sharing environmental data. One of the subtasks under WP3 is the development of user scenarios that guide the development of applicable deliverables. This paper describes the proposed protocol for user scenario development. It enables the solicitation of user scenarios from a broad constituency, and exposes the mechanisms by which those solicitations are evaluated against requirements that map to the Belmont Challenge. The underlying principle of traceability forms the basis for a structured, requirements-driven approach resulting in work products amenable to trade-off analyses and objective prioritization. The protocol adopts the ISO Reference Model for Open Distributed Processing (RM-ODP) as a top level framework. User scenarios are developed within RM-ODP's "Enterprise Viewpoint". To harmonize with existing frameworks, the protocol utilizes the conceptual constructs of "scenarios", "use cases", "use case categories", and use case templates as adopted by recent GEOSS Architecture Implementation Project (AIP) deliverables and CSIRO's eReefs project. These constructs are encapsulated under the larger construct of "user scenarios". Once user scenarios are ranked by goodness-of-fit to the Belmont Challenge, secondary scoring metrics may be generated, like goodness-of-fit to FutureEarth science themes. The protocol also facilitates an assessment of the ease of implementing a given user scenario using existing GEOSS AIP deliverables.
In summary, the protocol results in a traceability graph that can be extended to coordinate across research programmes. If implemented using appropriate technologies and harmonized with existing ontologies, this approach enables queries, sensitivity analyses, and visualization of complex relationships.
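A traceability graph of the kind described above can be queried with a simple transitive closure. This is a minimal sketch, not part of the CRA deliverables; the node names are hypothetical examples of a requirement, its scenarios, use cases, and supporting deliverables.

```python
# Hypothetical traceability edges: each node maps to the artifacts derived from it.
EDGES = {
    "belmont_challenge": ["scenario_coastal_risk"],
    "scenario_coastal_risk": ["use_case_data_discovery", "use_case_model_coupling"],
    "use_case_data_discovery": ["geoss_aip_catalog_service"],
}

def trace(node, edges=EDGES):
    """Depth-first transitive closure: every artifact reachable from `node`."""
    seen, stack = set(), [node]
    while stack:
        cur = stack.pop()
        for child in edges.get(cur, []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen
```

Queries like this support the trade-off analyses the protocol aims for, e.g. "which deliverables does this scenario ultimately depend on?".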
NASA Astrophysics Data System (ADS)
Kutsch, W. L.
2015-12-01
Environmental research infrastructures and big-data integration networks require common data policies, standardized workflows, and sophisticated e-infrastructure to optimise the data life cycle. This presentation summarizes the experiences in developing the data life cycle for the Integrated Carbon Observation System (ICOS), a European Research Infrastructure. It will also outline challenges that still exist and visions for future development. Like many other environmental research infrastructures, ICOS RI builds on a large number of distributed observational or experimental sites. Data from these sites are transferred to Thematic Centres, where they are quality checked, processed, and integrated. Dissemination will be managed by the ICOS Carbon Portal. This complex data life cycle has been defined in detail by developing protocols and assigning responsibilities. Since data will be shared under an open access policy, there is a strong need for common data citation tracking systems that allow data providers to identify downstream usage of their data so as to prove their importance and show the impact to stakeholders and the public. More challenges arise from interoperating with other infrastructures or providing data for global integration projects, as done e.g. in the framework of GEOSS or in global integration approaches such as FLUXNET or SOCAT. Here, common metadata systems are the key solutions for data detection and harvesting. The metadata characterise data, services, users, and ICT resources (including sensors and detectors). Risks may arise when data of high and low quality are mixed during this process, or when inexperienced data scientists without detailed knowledge of the data acquisition derive scientific theories through statistical analyses.
The vision of fully open data availability is expressed in a recent GEO flagship initiative that will address important issues needed to build a connected and interoperable global network for carbon cycle and greenhouse gas observations and aims to meet the most urgent needs for integration between different information sources and methodologies, between different regional networks and from data providers to users.
ERIC Educational Resources Information Center
Moore, Corey L.; Manyibe, Edward O.; Sanders, Perry; Aref, Fariborz; Washington, Andre L.; Robertson, Cherjuan Y.
2017-01-01
Purpose: The purpose of this multimethod study was to evaluate the institutional research capacity building and infrastructure model (IRCBIM), an emerging innovative and integrated approach designed to build, strengthen, and sustain adequate disability and health research capacity (i.e., research infrastructure and investigators' research skills)…
A parallel-processing approach to computing for the geographic sciences
Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Haga, Jim; Maddox, Brian; Feller, Mark
2001-01-01
The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting research into various areas, such as advanced computer architecture, algorithms to meet the processing needs for real-time image and data processing, the creation of custom datasets from seamless source data, rapid turn-around of products for emergency response, and support for computationally intense spatial and temporal modeling.
NASA Astrophysics Data System (ADS)
Hassan, A. H.; Fluke, C. J.; Barnes, D. G.
2012-09-01
Upcoming and future astronomy research facilities will systematically generate terabyte-sized data sets, moving astronomy into the petascale data era. While such facilities will provide astronomers with unprecedented levels of accuracy and coverage, the increases in dataset size and dimensionality will pose serious computational challenges for many current astronomy data analysis and visualization tools. With such data sizes, even simple data analysis tasks (e.g. calculating a histogram or computing data minimum/maximum) may not be achievable without access to a supercomputing facility. To effectively handle such dataset sizes, which exceed today's single-machine memory and processing limits, we present a framework that exploits the distributed power of GPUs and many-core CPUs, with a goal of providing data analysis and visualization tasks as a service for astronomers. By mixing shared and distributed memory architectures, our framework effectively utilizes the underlying hardware infrastructure, handling both batched and real-time data analysis and visualization tasks. Offering such functionality as a service in a "software as a service" manner will reduce the total cost of ownership, provide an easy-to-use tool to the wider astronomical community, and enable a more optimized utilization of the underlying hardware infrastructure.
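The reason a histogram parallelizes well across distributed hardware is that per-chunk counts can be computed independently and then summed. The sketch below shows only that chunk-and-merge reduction in pure Python; the framework itself targets GPUs and many-core CPUs, and the bin settings here are arbitrary.

```python
def partial_histogram(chunk, bins, lo, hi):
    """Per-node histogram of one data chunk over the range [lo, hi]."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for x in chunk:
        if lo <= x < hi:
            counts[int((x - lo) / width)] += 1
        elif x == hi:          # close the top edge of the last bin
            counts[-1] += 1
    return counts

def merge_counts(parts):
    """Reduce per-node count vectors into the global histogram."""
    return [sum(col) for col in zip(*parts)]
```

Each `partial_histogram` call could run on a separate GPU or node; only the small count vectors, not the raw data, need to travel over the network for the final merge.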
A Virtual Bioinformatics Knowledge Environment for Early Cancer Detection
NASA Technical Reports Server (NTRS)
Crichton, Daniel; Srivastava, Sudhir; Johnsey, Donald
2003-01-01
Discovery of disease biomarkers for cancer is a leading focus of early detection. The National Cancer Institute created a network of collaborating institutions focused on the discovery and validation of cancer biomarkers called the Early Detection Research Network (EDRN). Informatics plays a key role in enabling a virtual knowledge environment that provides scientists real time access to distributed data sets located at research institutions across the nation. The distributed and heterogeneous nature of the collaboration makes data sharing across institutions very difficult. EDRN has developed a comprehensive informatics effort focused on developing a national infrastructure enabling seamless access, sharing and discovery of science data resources across all EDRN sites. This paper will discuss the EDRN knowledge system architecture, its objectives and its accomplishments.
NASA Astrophysics Data System (ADS)
Strogen, Bret Michael
Production of fuel ethanol in the United States has increased ten-fold since 1993, largely as a result of government programs motivated by goals to improve domestic energy security, economic development, and environmental impacts. Over the next decade, the growth, and eventually the total production, of second-generation cellulosic biofuels is projected to exceed that of first-generation (e.g., corn-based) biofuels, which will require continued expansion of infrastructure for producing and distributing ethanol and perhaps other biofuels. In addition to identifying potential differences in tailpipe emissions from vehicles operating with ethanol-blended or ethanol-free gasoline, environmental comparison of ethanol to petroleum fuels requires a comprehensive accounting of life-cycle environmental effects. Hundreds of published studies evaluate the life-cycle emissions from biofuels and petroleum, but the operation and maintenance of storage, handling, and distribution infrastructure and equipment for fuels and fuel feedstocks had not been adequately addressed. Little attention has been paid to estimating and minimizing emissions from these complex systems, presumably because they are believed to contribute a small fraction of total emissions for petroleum and first-generation biofuels. This research aims to quantify the environmental impacts associated with the major components of fuel distribution infrastructure, and the impacts that will be introduced by expanding the parallel infrastructure needed to accommodate more biofuels in our existing systems. First, the components used in handling, storing, and transporting feedstocks and fuels are physically characterized by typical operating throughput, utilization, and lifespan.
US-specific life-cycle GHG emission and water withdrawal factors are developed for each major distribution chain activity by applying a hybrid life-cycle assessment methodology to the manufacturing, construction, maintenance and operation of each component. In order to apply the new emission factors to policy-relevant scenarios, a projection is made for the fleet inventory of infrastructure components necessary to distribute 21 billion gallons of ethanol (the 2022 federal mandate for advanced biofuels under the Energy Independence and Security Act of 2007) derived entirely from Miscanthus grass, for comparison to the baseline petroleum system. Due to geographic, physical and chemical properties of biomass and alcohols, the distribution system for Miscanthus-based ethanol is more capital- and energy-intensive than petroleum per unit of fuel energy delivered. The transportation of biofuels away from producer regions poses environmental, health, and economic trade-offs that are herein evaluated using a simplified national distribution network model. In just the last ten years, ethanol transportation within the contiguous United States is estimated to have increased more than ten-fold in total t-km as ethanol has increasingly been transported away from Midwest producers due to air quality regulations pertaining to gasoline, renewable fuel mandates, and the 10% blending limit (i.e., the E10 blend wall). From 2004 to 2009, approximately 10 billion t-km of ethanol transportation are estimated to have taken place annually for reasons other than the E10 blend wall, leading to annual freight costs greater than $240 million and more than 300,000 tonnes of CO2-e emissions and significant emissions of criteria air pollutants from the combustion of more than 90 million liters of diesel. Although emissions from distribution activities are small when normalized to each unit of fuel, they are large in scale. 
Archetypal fuel distribution routes by rail and by truck are created to evaluate the significance of mode choice and route location on the severity of public health impacts from locomotive and truck emissions, by calculating the average PM2.5 pollution intake fraction along each route. Exposure to pollution resulting from trucking is found to be approximately twice as harmful as rail (while trucking is five times more energy intensive). Transporting fuel from the Midwest to California would result in slightly lower human health impacts than transportation to New Jersey, even though California is more than 50% farther from the Midwest than most coastal Northeast states. In summary, this dissertation integrated concepts from infrastructure management, climate and renewable fuel policy, fuel chemistry and combustion science, air pollution modeling, public health impact assessment, network optimization, and geospatial analysis. In identifying and quantifying opportunities to minimize damage to the global climate and regional air quality from fuel distribution, the results in this dissertation lend credence to the urgency of harmonizing policies and programs that address national and global energy and environmental goals. Under optimal future policy and economic conditions, infrastructure will be highly utilized and transportation minimized in order to reduce the total economic, health, and environmental burdens associated with the entire supply and distribution chain for transportation fuels. (Abstract shortened by UMI.)
Highways of the future : a strategic plan for highway infrastructure research and development
DOT National Transportation Integrated Search
2008-07-01
This Highways of the Future: A Strategic Plan for Highway Infrastructure Research and Development was developed in response to a need expressed by the staff of the Federal Highway Administration (FHWA) Office of Infrastructure Research and Developme...
Geels, Mark J; Thøgersen, Regitze L; Guzman, Carlos A; Ho, Mei Mei; Verreck, Frank; Collin, Nicolas; Robertson, James S; McConkey, Samuel J; Kaufmann, Stefan H E; Leroy, Odile
2015-10-05
TRANSVAC was a collaborative infrastructure project aimed at enhancing European translational vaccine research and training. The objective of this four year project (2009-2013), funded under the European Commission's (EC) seventh framework programme (FP7), was to support European collaboration in the vaccine field, principally through the provision of transnational access (TNA) to critical vaccine research and development (R&D) infrastructures, as well as by improving and harmonising the services provided by these infrastructures through joint research activities (JRA). The project successfully provided all available services to advance 29 projects and, through engaging all vaccine stakeholders, successfully laid down the blueprint for the implementation of a permanent research infrastructure for early vaccine R&D in Europe. Copyright © 2015. Published by Elsevier Ltd.
Assessing equitable access to urban green space: the role of engineered water infrastructure.
Wendel, Heather E Wright; Downs, Joni A; Mihelcic, James R
2011-08-15
Urban green space and water features provide numerous social, environmental, and economic benefits, yet disparities often exist in their distribution and accessibility. This study examines the link between issues of environmental justice and urban water management to evaluate potential improvements in green space and surface water access through the revitalization of existing engineered water infrastructures, namely stormwater ponds. First, relative access to green space and water features were compared for residents of Tampa, Florida, and an inner-city community of Tampa (East Tampa). Although disparities were not found in overall accessibility between Tampa and East Tampa, inequalities were apparent when quality, diversity, and size of green spaces were considered. East Tampa residents had significantly less access to larger, more desirable spaces and water features. Second, this research explored approaches for improving accessibility to green space and natural water using three integrated stormwater management development scenarios. These scenarios highlighted the ability of enhanced water infrastructures to increase access equality at a variety of spatial scales. Ultimately, the "greening" of gray urban water infrastructures is advocated as a way to address environmental justice issues while also reconnecting residents with issues of urban water management.
NASA Astrophysics Data System (ADS)
Kuscahyadi, Febriana; Meilano, Irwan; Riqqi, Akhmad
2017-07-01
The Special Region of Yogyakarta Province (DIY) is one of the Indonesian regions frequently harmed by a variety of natural disasters with huge negative impacts. The most catastrophic was the earthquake of May 27th, 2006, with a moment magnitude of 6.3 [1], which killed 5,716 people and caused economic losses of Rp. 29.1 trillion [2]. These impacts could be minimized by implementing disaster risk reduction programs. It is therefore necessary to measure natural disaster resilience within a region. Since infrastructure can serve as the facilities for evacuation, supply distribution, and post-disaster recovery [3], this research establishes a spatial model of natural disaster resilience using infrastructure components based on BRIC in DIY Province. Three types of infrastructure are used in this model: schools, health facilities, and roads. Distance analysis is used to determine the level of each resilience zone. The result gives spatial understanding in the form of a map showing that urban areas have better disaster resilience than rural areas. The coastal and mountain areas, which are vulnerable to disaster, have less resilience since there are not enough facilities to increase disaster resilience.
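The distance analysis behind such a resilience map reduces to computing each location's distance to its nearest facility and binning it into zones. This is a minimal sketch, not the paper's method; the haversine formula is standard, but the 2 km / 5 km zone thresholds are invented for illustration.

```python
import math

def nearest_facility_km(point, facilities):
    """Great-circle (haversine) distance, in km, from a (lat, lon)
    point in degrees to the nearest facility."""
    def haversine(a, b):
        r = 6371.0  # mean Earth radius, km
        la1, lo1, la2, lo2 = map(math.radians, (*a, *b))
        h = (math.sin((la2 - la1) / 2) ** 2
             + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(h))
    return min(haversine(point, f) for f in facilities)

def resilience_zone(point, facilities, thresholds_km=(2.0, 5.0)):
    """Classify a location by distance to its nearest facility;
    the cut-offs are hypothetical, not the study's values."""
    d = nearest_facility_km(point, facilities)
    if d <= thresholds_km[0]:
        return "high"
    if d <= thresholds_km[1]:
        return "medium"
    return "low"
```

In a GIS workflow the same classification would typically be run per raster cell or administrative unit against the combined school, health-facility, and road layers.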
Shortliffe, E H; Bleich, H L; Caine, C G; Masys, D R; Simborg, D W
1996-01-01
Some observers feel that the federal government should play a more active leadership role in educating the medical community and in coordinating and encouraging a more rapid and effective implementation of clinically relevant applications of wide-area networking. Other people argue that the private sector is recognizing the importance of these issues and will, when the market demands it, adopt and enhance the telecommunications systems that are needed to produce effective uses of the National Information Infrastructure (NII) by the healthcare community. This debate identifies five areas for possible government involvement: convening groups for the development of standards; providing funding for research and development; ensuring the equitable distribution of resources, particularly to places and people considered by private enterprise to provide low opportunities for profit; protecting rights of privacy, intellectual property, and security; and overcoming the jurisdictional barriers to cooperation, particularly when states offer conflicting regulations. Arguments against government involvement include the likely emergence of an adequate infrastructure under free market forces, the often stifling effect of regulation, and the need to avoid a command-and-control mentality in an infrastructure that is best promoted collaboratively. PMID:8816347
Atomic and Molecular Databases, VAMDC (Virtual Atomic and Molecular Data Centre)
NASA Astrophysics Data System (ADS)
Dubernet, Marie-Lise; Zwölf, Carlo Maria; Moreau, Nicolas; Awa Ba, Yaya; VAMDC Consortium
2015-08-01
The "Virtual Atomic and Molecular Data Centre Consortium" (VAMDC Consortium, http://www.vamdc.eu) is a consortium bound by a Memorandum of Understanding aiming at ensuring the sustainability of the VAMDC e-infrastructure. The current VAMDC e-infrastructure interconnects about 30 atomic and molecular databases, with the number of connected databases increasing every year: some are well-known databases such as CDMS, JPL, HITRAN, and VALD, while others have been created since the start of VAMDC. About 90% of our databases are used for astrophysical applications. The data can be queried, retrieved, and visualized in a single format from a general portal (http://portal.vamdc.eu), and VAMDC is also developing standalone tools in order to retrieve and handle the data. VAMDC provides software and support in order to include databases within the VAMDC e-infrastructure. One current feature of VAMDC is the constrained environment for the description of data, which ensures a higher quality of data distribution; a future feature is the link of VAMDC with evaluation/validation groups. The talk will present the VAMDC Consortium and the VAMDC e-infrastructure with its underlying technology, its services, its science use cases, and its extension towards communities other than the academic research community.
Identifying Audiences of E-Infrastructures - Tools for Measuring Impact
van den Besselaar, Peter
2012-01-01
Research evaluation should take into account the intended scholarly and non-scholarly audiences of the research output. This holds too for research infrastructures, which often aim at serving a large variety of audiences. With research and research infrastructures moving to the web, new possibilities are emerging for evaluation metrics. This paper proposes a feasible indicator for measuring the scope of audiences who use web-based e-infrastructures, as well as the frequency of use. In order to apply this indicator, a method is needed for classifying visitors to e-infrastructures into relevant user categories. The paper proposes such a method, based on an inductive logic program and a Bayesian classifier. The method is tested, showing that the visitors are efficiently classified with 90% accuracy into the selected categories. Consequently, the method can be used to evaluate the use of the e-infrastructure within and outside academia. PMID:23239995
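The classification step described in the abstract (a Bayesian classifier assigning e-infrastructure visitors to user categories) can be sketched as a minimal naive Bayes model; the session features and the three categories below are illustrative placeholders, not the paper's actual taxonomy, features, or data:

```python
from collections import Counter, defaultdict
import math

# Hypothetical training data: each visitor session is a bag of simple features
# (referrer domain, requested resource type); labels are illustrative only.
TRAIN = [
    (["university.edu", "dataset", "api"], "academic"),
    (["university.edu", "paper", "dataset"], "academic"),
    (["gov.example", "report", "dataset"], "government"),
    (["gov.example", "report", "policy"], "government"),
    (["news.example", "article", "summary"], "public"),
    (["news.example", "summary", "article"], "public"),
]

def train(samples):
    """Estimate class priors and per-class feature counts (add-one smoothing at query time)."""
    class_counts = Counter(label for _, label in samples)
    feat_counts = defaultdict(Counter)
    vocab = set()
    for feats, label in samples:
        feat_counts[label].update(feats)
        vocab.update(feats)
    return class_counts, feat_counts, vocab, len(samples)

def classify(feats, model):
    """Return the most probable category under the naive Bayes model."""
    class_counts, feat_counts, vocab, n = model
    best, best_lp = None, float("-inf")
    for label, cc in class_counts.items():
        lp = math.log(cc / n)  # log prior
        total = sum(feat_counts[label].values())
        for f in feats:
            # add-one (Laplace) smoothed log likelihood
            lp += math.log((feat_counts[label][f] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train(TRAIN)
print(classify(["university.edu", "dataset"], model))  # → academic
```

A real deployment would derive features from web server logs and, as the paper does, combine this with an inductive logic program to reach the reported 90% accuracy.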
IT Infrastructure Projects: A Framework for Analysis. ECAR Research Bulletin
ERIC Educational Resources Information Center
Grochow, Jerrold M.
2014-01-01
Just as maintaining a healthy infrastructure of water delivery and roads is essential to the functioning of cities and towns, maintaining a healthy infrastructure of information technology is essential to the functioning of universities. Deterioration in IT infrastructure can lead to deterioration in research, teaching, and administration. Given…
Commonwealth Infrastructure Funding for Australian Universities: 2004 to 2011
ERIC Educational Resources Information Center
Koshy, Paul; Phillimore, John
2013-01-01
This paper provides an overview of recent trends in the provision of general infrastructure funding by the Commonwealth for Australian universities (Table A providers) over the period 2004 to 2011. It specifically examines general infrastructure development and excludes funding for research infrastructure through the Australian Research Council or…
OOI CyberInfrastructure - Next Generation Oceanographic Research
NASA Astrophysics Data System (ADS)
Farcas, C.; Fox, P.; Arrott, M.; Farcas, E.; Klacansky, I.; Krueger, I.; Meisinger, M.; Orcutt, J.
2008-12-01
Software has become a key enabling technology for scientific discovery, observation, modeling, and exploitation of natural phenomena. New value emerges from the integration of individual subsystems into networked federations of capabilities exposed to the scientific community. Such data-intensive interoperability networks are crucial for future scientific collaborative research, as they open up new ways of fusing data from different sources and across various domains, and enable analysis over wide geographic areas. The recently established NSF OOI program, through its CyberInfrastructure component, addresses this challenge by providing broad access from sensor networks for data acquisition up to computational grids for massive computations, and a binding infrastructure facilitating policy management and governance of the emerging system-of-scientific-systems. We provide insight into the integration core of this effort, namely a hierarchical service-oriented architecture for a robust, performant, and maintainable implementation. We first discuss the relationship between data management and CI crosscutting concerns such as identity management, policy, and governance, which define the organizational contexts for data access and usage. Next, we detail critical services including data ingestion, transformation, preservation, inventory, and presentation. To address interoperability issues between data represented in various formats, we employ a semantic framework derived from the Earth System Grid technology, a canonical representation for scientific data based on DAP/OPeNDAP, and related data publishers such as ERDDAP. Finally, we briefly present the underlying transport based on a messaging infrastructure over the AMQP protocol, and the preservation based on a distributed file system through SDSC iRODS.
A Cloud-based Infrastructure and Architecture for Environmental System Research
NASA Astrophysics Data System (ADS)
Wang, D.; Wei, Y.; Shankar, M.; Quigley, J.; Wilson, B. E.
2016-12-01
The present availability of high-capacity networks, low-cost computers and storage devices, and the widespread adoption of hardware virtualization and service-oriented architecture provide a great opportunity to enable data and computing infrastructure sharing between closely related research activities. By taking advantage of these approaches, along with the world-class high-performance computing and data infrastructure located at Oak Ridge National Laboratory, a cloud-based infrastructure and architecture has been developed to efficiently deliver essential data and informatics services and utilities to the environmental system research community, and will provide unique capabilities that allow terrestrial ecosystem research projects to share their software utilities (tools), data, and even data submission workflows in a straightforward fashion. The infrastructure will minimize large disruptions to current project-based data submission workflows, improving acceptance among existing projects, since many ecosystem research projects already have their own requirements or preferences for data submission and collection. The infrastructure will eliminate scalability problems with current project silos by providing unified data services and infrastructure. The infrastructure consists of two key components: (1) a collection of configurable virtual computing environments and user management systems that expedite data submission and collection from the environmental system research community, and (2) scalable data management services and systems, originated and developed by ORNL data centers.
Towards a distributed infrastructure for research drilling in Europe
NASA Astrophysics Data System (ADS)
Mevel, C.; Gatliff, R.; Ludden, J.; Camoin, G.; Horsfield, B.; Kopf, A.
2012-04-01
The EC-funded project "Deep Sea and Sub-Seafloor Frontier" (DS3F) aims at developing seafloor and sub-seafloor sampling strategies for enhanced understanding of deep-sea and sub-seafloor processes by connecting marine research in life and geosciences, climate and environmental change, with socio-economic issues and policy building. DS3F has identified access to sub-seafloor sampling and instrumentation as a key element of this approach. There is strong expertise in Europe concerning direct access to the sub-seafloor. Within the international program IODP (Integrated Ocean Drilling Program), ECORD (European Consortium for Ocean Research Drilling) has successfully developed the concept of mission-specific platforms (MSPs), contracted on a project basis to drill in ice-covered and shallow-water areas. The ECORD Science Operator, led by the British Geological Survey (BGS), has built an internationally recognized expertise in scientific ocean drilling, from coring in challenging environments, through downhole measurements and laboratory analysis, to core curation and data management. MARUM, at the University of Bremen in Germany, is one of the three IODP core repositories. Europe is also at the forefront of scientific seabed drills, with the MeBo developed by MARUM as well as the BGS seabed rock drills. Europe also plays an important role in continental scientific drilling, and the European component of ICDP (International Continental Scientific Drilling Program) is strengthening, with the recent addition of France and the foreseen addition of the UK. Oceanic and continental drilling have very similar scientific objectives. Moreover, they share not only common technologies but also common data handling systems. To develop an integrated approach to technology development and usage, a move towards a distributed infrastructure for research drilling in Europe has been initiated by these different groups.
Built on existing research & operational groups across Europe, it will facilitate the sharing of technological and scientific expertise for the benefit of the science community. It will link with other relevant infrastructure initiatives such as EMSO (European Marine Seafloor Observatories). It will raise the profile of scientific drilling in Europe and hopefully lead to better funding opportunities.
NiftyNet: a deep-learning platform for medical imaging.
Gibson, Eli; Li, Wenqi; Sudre, Carole; Fidon, Lucas; Shakir, Dzhoshkun I; Wang, Guotai; Eaton-Rosen, Zach; Gray, Robert; Doel, Tom; Hu, Yipeng; Whyntie, Tom; Nachev, Parashkev; Modat, Marc; Barratt, Dean C; Ourselin, Sébastien; Cardoso, M Jorge; Vercauteren, Tom
2018-05-01
Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning applications. Components of the NiftyNet pipeline including data loading, data augmentation, network architectures, loss functions and evaluation metrics are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses.
The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wiggins, H. V.; Warnick, W. K.; Hempel, L. C.; Henk, J.; Sorensen, M.; Tweedie, C. E.; Gaylord, A.; Behr, S.
2006-12-01
As the creation and use of geospatial data in research, management, logistics, and education applications has proliferated, there is now tremendous potential for advancing the IPY initiative through a variety of cyberinfrastructure applications, including Spatial Data Infrastructure (SDI) and related technologies. SDIs provide a necessary and common framework of standards, securities, policies, procedures, and technology to support the effective acquisition, coordination, dissemination, and use of geospatial data by multiple and distributed stakeholder and user groups. Despite the numerous research activities in the Arctic, there is no established SDI and, because of this lack of a coordinated infrastructure, there is inefficiency, duplication of effort, and reduced data quality and searchability of arctic geospatial data. The urgency of establishing this framework is significant considering the myriad data likely to be collected in celebration of the International Polar Year (IPY) in 2007-2008 and the current international momentum for an improved and integrated circumarctic terrestrial-marine-atmospheric environmental observatories network. The key objective of this project is to lay the foundation for full implementation of an Arctic Spatial Data Infrastructure (ASDI) through two related activities: (1) an assessment - via interviews, questionnaires, a workshop, and other means - of community needs, readiness, and resources, and (2) the development of a prototype web mapping portal to demonstrate the purpose and function of an arctic geospatial one-stop portal and to solicit community input on design and function. The results of this project will be compiled into a comprehensive report guiding the research community and funding agencies in the design and implementation of an ASDI to contribute to a robust IPY data cyberinfrastructure.
Progress in satellite quantum key distribution
NASA Astrophysics Data System (ADS)
Bedington, Robert; Arrazola, Juan Miguel; Ling, Alexander
2017-08-01
Quantum key distribution (QKD) is a family of protocols for growing a private encryption key between two parties. Despite much progress, all ground-based QKD approaches have a distance limit due to atmospheric losses or in-fibre attenuation. These limitations make purely ground-based systems impractical for a global distribution network. However, the range of communication may be extended by employing satellites equipped with high-quality optical links. This manuscript summarizes research and development which is beginning to enable QKD with satellites. It includes a discussion of protocols, infrastructure, and the technical challenges involved with implementing such systems, as well as a top level summary of on-going satellite QKD initiatives around the world.
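As context for the protocols this review surveys, the sifting step of the canonical BB84 prepare-and-measure protocol can be sketched as an idealized toy simulation (no channel noise, no eavesdropper, and no error correction or privacy amplification, all of which a real ground- or satellite-based system requires):

```python
import random

def bb84_sift(n_bits, seed=0):
    """Simulate BB84 sifting over an ideal channel: Alice sends random bits in
    random bases (X or Z); Bob measures in random bases; both keep only the
    positions where their basis choices happened to match."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("XZ") for _ in range(n_bits)]
    bob_bases   = [rng.choice("XZ") for _ in range(n_bits)]
    # With matching bases Bob recovers Alice's bit exactly; with mismatched
    # bases his outcome is random, and that position is discarded anyway.
    bob_bits = [b if ab == bb else rng.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    return [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]

key = bb84_sift(64)
print(len(key))  # roughly half of the sent bits survive sifting
```

On average half the positions survive, which is why raw-key rate (and hence channel loss, the limitation the abstract discusses) is such a critical parameter for satellite links.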
Requirements Engineering in Building Climate Science Software
NASA Astrophysics Data System (ADS)
Batcheller, Archer L.
Software has an important role in supporting scientific work. This dissertation studies teams that build scientific software, focusing on the way that they determine what the software should do. These requirements engineering processes are investigated through three case studies of climate science software projects. The Earth System Modeling Framework assists modeling applications, the Earth System Grid distributes data via a web portal, and the NCAR (National Center for Atmospheric Research) Command Language is used to convert, analyze and visualize data. Document analysis, observation, and interviews were used to investigate the requirements-related work. The first research question is about how and why stakeholders engage in a project, and what they do for the project. Two key findings arise. First, user counts are a vital measure of project success, which makes adoption important and makes counting tricky and political. Second, despite the importance of quantities of users, a few particular "power users" develop a relationship with the software developers and play a special role in providing feedback to the software team and integrating the system into user practice. The second research question focuses on how project objectives are articulated and how they are put into practice. The team seeks to both build a software system according to product requirements but also to conduct their work according to process requirements such as user support. Support provides essential communication between users and developers that assists with refining and identifying requirements for the software. It also helps users to learn and apply the software to their real needs. User support is a vital activity for scientific software teams aspiring to create infrastructure. The third research question is about how change in scientific practice and knowledge leads to changes in the software, and vice versa. 
The "thickness" of a layer of software infrastructure impacts whether the software team or users have control and responsibility for making changes in response to new scientific ideas. Thick infrastructure provides more functionality for users, but gives them less control of it. The stability of infrastructure trades off against the responsiveness that the infrastructure can have to user needs.
Kim, Katherine K; Browe, Dennis K; Logan, Holly C; Holm, Roberta; Hack, Lori; Ohno-Machado, Lucila
2014-01-01
There is currently limited information on best practices for the development of governance requirements for distributed research networks (DRNs), an emerging model that promotes clinical data reuse and improves timeliness of comparative effectiveness research. Much of the existing information is based on a single type of stakeholder such as researchers or administrators. This paper reports on a triangulated approach to developing DRN data governance requirements based on a combination of policy analysis with experts, interviews with institutional leaders, and patient focus groups. This approach is illustrated with an example from the Scalable National Network for Effectiveness Research, which resulted in 91 requirements. These requirements were analyzed against the Fair Information Practice Principles (FIPPs) and Health Insurance Portability and Accountability Act (HIPAA) protected versus non-protected health information. The requirements addressed all FIPPs, showing how a DRN's technical infrastructure is able to fulfill HIPAA regulations, protect privacy, and provide a trustworthy platform for research. PMID:24302285
The Climate-G Portal: a Grid Enabled Scientifc Gateway for Climate Change
NASA Astrophysics Data System (ADS)
Fiore, Sandro; Negro, Alessandro; Aloisio, Giovanni
2010-05-01
Grid portals are web gateways aiming at concealing the underlying infrastructure through pervasive, transparent, user-friendly, ubiquitous, and seamless access to heterogeneous and geographically spread resources (i.e. storage, computational facilities, services, sensors, networks, databases). In short, they provide an enhanced problem-solving environment able to deal with modern, large-scale scientific and engineering problems. Scientific gateways can introduce a revolution in the way scientists and researchers organize and carry out their activities. Access to distributed resources, complex workflow capabilities, and community-oriented functionalities are just some of the features that can be provided by such a web-based environment. In the context of the EGEE NA4 Earth Science Cluster, Climate-G is a distributed testbed focusing on climate change research topics. The Euro-Mediterranean Center for Climate Change (CMCC) is actively participating in the testbed, providing the scientific gateway (Climate-G Portal) for access to the entire infrastructure. The Climate-G Portal faces important and critical challenges and has to satisfy key requirements. In the following, the most relevant ones are presented and discussed. Transparency: the portal has to provide transparent access to the underlying infrastructure, shielding users from low-level details and the complexity of a distributed grid environment. Security: users must be authenticated and authorized on the portal to access and exploit portal functionalities. A wide set of roles is needed so that the proper one can be clearly assigned to each user. Access to the computational grid must be completely secured, since the target infrastructure for running jobs is a production grid environment. A security infrastructure (based on X509v3 digital certificates) is strongly needed. Pervasiveness and ubiquity: access to the system must be pervasive and ubiquitous.
This follows naturally from the web-based approach. Usability and simplicity: the portal has to provide simple, high-level, and user-friendly interfaces to ease access to and exploitation of the entire system. Coexistence of general-purpose and domain-oriented services: along with general-purpose services (file transfer, job submission, etc.), the portal has to provide domain-based services and functionalities. Subsetting of data, visualization of 2D maps around a virtual globe, and delivery of maps through OGC-compliant interfaces (i.e. Web Map Service - WMS) are just some examples. Since April 2009, about 70 users (85% coming from the climate change community) have been granted access to the portal. A key challenge of this work is the idea of providing users with an integrated working environment, that is, a place where scientists can find huge amounts of data, complete metadata support, a wide set of data access services, data visualization and analysis tools, easy access to the underlying grid infrastructure, and advanced monitoring interfaces.
Pavement Technology and Airport Infrastructure Expansion Impact
NASA Astrophysics Data System (ADS)
Sabib; Setiawan, M. I.; Kurniasih, N.; Ahmar, A. S.; Hasyim, C.
2018-01-01
This research aims to analyze the potential contribution of construction and infrastructure development activities to Airport Performance. It is a correlation study whose research variables comprise Airport Performance as the X variable and construction and infrastructure development activities as the Y variable. The population in this research is 148 airports in Indonesia. The sampling technique uses total sampling: all 148 airports in the population were taken as the sample. The results of the coefficient of correlation (R) test showed that the construction and infrastructure development activities variable has a relatively strong relationship with the Airport Performance variable, but the value of Adjusted R Square shows that an increase in construction and infrastructure development activities is influenced by factors other than Airport Performance.
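The gap the abstract draws between the correlation coefficient R and Adjusted R Square follows from the standard adjustment for sample size and number of predictors, adj R² = 1 − (1 − R²)(n − 1)/(n − k − 1). The sketch below uses an illustrative R of 0.6 ("relatively strong"), not a figure reported in the paper:

```python
def adjusted_r2(r2, n, k):
    """Adjusted R-squared for n observations and k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Illustrative only: R = 0.6 (so R^2 = 0.36) over the study's n = 148 airports
# with k = 1 predictor; the paper does not report these exact values.
r = 0.6
print(round(adjusted_r2(r**2, n=148, k=1), 4))  # → 0.3556
```

Note that even a "relatively strong" R leaves most of the variance unexplained (here roughly 64%), which is consistent with the abstract's conclusion that other factors are at work.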
NASA Astrophysics Data System (ADS)
Asmi, A.; Sorvari, S.; Kutsch, W. L.; Laj, P.
2017-12-01
European long-term environmental research infrastructures (often referred to as ESFRI RIs) are the core facilities providing services for scientists in their quest to understand and predict the complex Earth system and its functioning, which requires long-term efforts to identify environmental changes (trends, thresholds and resilience, interactions and feedbacks). Many of the research infrastructures were originally developed to respond to the needs of their specific research communities; however, it is clear that strong collaboration among research infrastructures is needed to serve trans-boundary research, which requires exploring scientific questions at the intersection of different scientific fields, conducting joint research projects, and developing concepts, devices, and methods that can be used to integrate knowledge. European environmental research infrastructures have already worked together successfully for many years and have established a cluster - the ENVRI cluster - for their collaborative work. The ENVRI cluster acts as a collaborative platform where the RIs can jointly agree on common solutions for their operations, draft strategies and policies, and share best practices and knowledge. The supporting project for the ENVRI cluster, the ENVRIplus project, brings together 21 European research infrastructures and infrastructure networks to work on joint technical solutions, data interoperability, access management, training, strategies, and dissemination efforts. The ENVRI cluster acts as a one-stop shop for multidisciplinary RI users and other collaborative initiatives, projects, and programmes, and coordinates and implements jointly agreed RI strategies.
Efficient On-Demand Operations in Large-Scale Infrastructures
ERIC Educational Resources Information Center
Ko, Steven Y.
2009-01-01
In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…
Europlanet-RI IDIS - A Data Network in Support of Planetary Research
NASA Astrophysics Data System (ADS)
Schmidt, Walter; Capria, Maria Teresa; Chanteur, Gérard
2010-05-01
The "Europlanet Research Infrastructure - Europlanet RI", supported by the European Commission's Framework Program 7, aims at integrating major parts of the distributed European planetary research infrastructure, with components as diverse as space exploration, ground-based observations, laboratory experiments, and numerical modeling teams. A central part of Europlanet RI is the "Integrated and Distributed Information Service" (IDIS), a network of data and information access facilities in Europe via which information relevant for planetary research can be easily found and retrieved. This covers the wide range from contact addresses of possible research partners, laboratories, and test facilities to access to data collected during space missions or laboratory and simulation tests, and to model software useful for their interpretation. During the following three years the capabilities of the network will be extended to allow the combination of many different data sources for comparative studies, including the results of modeling calculations and simulations of instrument observations. Together with access to complex databases for spectra of atmospheric molecules and planetary surface material, IDIS will offer a versatile working environment, making the scientific exploitation of the resources put into planetary research, past and future, more effective. Many of the mentioned capabilities are already available now. List of contact web-sites: Technical node for support and management aspects: http://www.idis.europlanet-ri.eu/ Planetary Surfaces and Interiors node: http://www.idis-interiors.europlanet-ri.eu/ Planetary Plasma node: http://www.idis-plasma.europlanet-ri.eu/ Planetary Atmospheres node: http://www.idis-atmos.europlanet-ri.eu/ Small Bodies and Dust node: http://www.idis-sbdn.europlanet-ri.eu/ Planetary Dynamics and Extraterrestrial Matter node: http://www.idis-dyn.europlanet-ri.eu/
Distributed telemedicine for the National Information Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forslund, D.W.; Lee, Seong H.; Reverbel, F.C.
1997-08-01
TeleMed is an advanced system that provides a distributed multimedia electronic medical record available over a wide area network. It uses object-based computing, distributed data repositories, advanced graphical user interfaces, and visualization tools along with innovative concept extraction of image information for storing and accessing medical records developed in a separate project from 1994-5. In 1996, we began the transition to Java, extended the infrastructure, and worked to begin deploying TeleMed-like technologies throughout the nation. Other applications are mentioned.
PKI security in large-scale healthcare networks.
Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos
2012-06-01
During the past few years, many PKIs (Public Key Infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI addresses the trust issues that arise in a large-scale healthcare network spanning multiple PKI domains.
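The multi-domain trust issue the paper addresses can be illustrated with a toy sketch: two healthcare domains whose certificate authorities both chain up to a shared regional root. This is not a real X.509 implementation; certificates are reduced to subject→issuer pairs, signature verification is assumed to succeed, and all names are hypothetical:

```python
# Toy certificate store: subject -> issuer (signatures assumed valid).
CERTS = {
    "dr.smith@hospital-a": "hospital-a-ca",
    "hospital-a-ca":       "region-root-ca",
    "lab.tech@clinic-b":   "clinic-b-ca",
    "clinic-b-ca":         "region-root-ca",
}
TRUST_ANCHORS = {"region-root-ca"}  # the regional root both domains chain to

def trusted(subject, certs=CERTS, anchors=TRUST_ANCHORS, max_depth=8):
    """Walk issuer links from `subject` until a trust anchor or a dead end."""
    for _ in range(max_depth):
        if subject in anchors:
            return True
        if subject not in certs:
            return False
        subject = certs[subject]  # step up one level in the chain
    return False  # chain too long (or a cycle)

print(trusted("dr.smith@hospital-a"))  # → True, via hospital-a-ca to the root
```

In a real multi-domain PKI the same idea is realized through X.509 certification-path validation (plus revocation checking and cross-certification), which is precisely where the scaling challenges the abstract mentions arise.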
Policy Model of Sustainable Infrastructure Development (Case Study : Bandarlampung City, Indonesia)
NASA Astrophysics Data System (ADS)
Persada, C.; Sitorus, S. R. P.; Marimin; Djakapermana, R. D.
2018-03-01
Infrastructure development affects not only the economic aspect but also the social and environmental ones, the main dimensions of sustainable development. The many aspects and actors involved in urban infrastructure development require a comprehensive and integrated policy towards sustainability. Therefore, it is necessary to formulate an infrastructure development policy that considers the various dimensions of sustainable development. The main objective of this research is to formulate a policy for sustainable infrastructure development. In this research, urban infrastructure covers transportation, water systems (drinking water, storm water, wastewater), green open spaces, and solid waste. The research was conducted in Bandarlampung City. The study uses comprehensive modeling, namely Multi-Dimensional Scaling (MDS) with Rapid Appraisal of Infrastructure (Rapinfra), the Analytic Network Process (ANP), and a system dynamics model. The findings of the MDS analysis showed that the sustainability status of Bandarlampung City's infrastructure is less than sustainable. The ANP analysis produced the 8 main indicators most influential in the development of sustainable infrastructure. The system dynamics model offered 4 scenarios for the sustainable urban infrastructure policy model. The best scenario was implemented as 3 policies: integrated infrastructure management, population control, and local economy development.
NASA Astrophysics Data System (ADS)
Asmi, A.; Kutsch, W. L.
2015-12-01
Environmental research infrastructures are often built as bottom-up initiatives to provide products for a specific target group, which is often very discipline-specific. However, societal or environmental challenges are typically not concentrated in specific disciplines and require the use of data sets from many RIs. ENVRI PLUS is an initiative in which the European environmental RIs work together to provide a common technical background (in physical observation technologies and in data products and descriptions) to make RI products more usable to user groups outside the original RI target groups. ENVRI PLUS also includes many policy- and dissemination-focused actions to make RI operations coherent and understandable to both scientists and other potential users. The actions include building the common technological capital of the RIs (physical and data-oriented), creating common access procedures (especially for cross-disciplinary access), developing ethical guidelines and related policies, distributing know-how between RIs, and building a common communication and collaboration system for European environmental RIs. All ENVRI PLUS products are free for use by, e.g., new or existing environmental RIs worldwide.
Integrating thematic web portal capabilities into the NASA Earthdata Web Infrastructure
NASA Astrophysics Data System (ADS)
Wong, M. M.; McLaughlin, B. D.; Huang, T.; Baynes, K.
2015-12-01
The National Aeronautics and Space Administration (NASA) acquires and distributes an abundance of Earth science data on a daily basis to a diverse user community worldwide. To assist the scientific community and general public in achieving a greater understanding of the interdisciplinary nature of Earth science and of key environmental and climate change topics, the NASA Earthdata web infrastructure is integrating new methods of presenting and providing access to Earth science information, data, research, and results. This poster will present the process of integrating thematic web portal capabilities into the NASA Earthdata web infrastructure, with examples from the Sea Level Change Portal. The Sea Level Change Portal will be a source of current NASA research, data, and information regarding sea level change. The portal will provide sea level change information through articles, graphics, videos, and animations, an interactive tool to view and access sea level change data, and a dashboard showing sea level change indicators. Earthdata is a part of the Earth Observing System Data and Information System (EOSDIS) project. EOSDIS is a key core capability in NASA's Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA's Earth science data from various sources - satellites, aircraft, field measurements, and various other programs. It comprises twelve Distributed Active Archive Centers (DAACs), Science Computing Facilities (SCFs), data discovery and service access clients (Reverb and Earthdata Search), a dataset directory (Global Change Master Directory - GCMD), near real-time data (Land Atmosphere Near real-time Capability for EOS - LANCE), Worldview (an imagery visualization interface), Global Imagery Browse Services, the Earthdata Code Collaborative, and a host of other discipline-specific data discovery, access, subsetting, and visualization tools.
Towards European organisation for integrated greenhouse gas observation system
NASA Astrophysics Data System (ADS)
Kaukolehto, Marjut; Vesala, Timo; Sorvari, Sanna; Juurola, Eija; Paris, Jean-Daniel
2013-04-01
Climate change is one of the most challenging problems that humanity will have to cope with in the coming decades. The perturbed global biogeochemical cycles of the greenhouse gases (carbon dioxide, methane and nitrous oxide) are a major driving force of current and future climate change. Deeper understanding of the driving forces of climate change requires full quantification of the greenhouse gas emissions and sinks and their evolution. Regional greenhouse gas budgets, tipping points, vulnerabilities and the controlling mechanisms can be assessed by long-term, high-precision observations in the atmosphere and at the ocean and land surface. ICOS RI is a distributed infrastructure for on-line, in-situ monitoring of greenhouse gases (GHG), necessary to understand their present and future sinks and sources. ICOS RI provides the long-term observations required to understand the present state and predict future behaviour of the global carbon cycle and greenhouse gas emissions. Linking research, education and innovation promotes technological development and demonstrations related to greenhouse gases. The first objective of ICOS RI is to provide effective access to coherent and precise data and to provide assessments of GHG inventories with high temporal and spatial resolution. The second objective is to provide profound information for research and understanding of regional budgets of greenhouse gas sources and sinks, their human and natural drivers, and the controlling mechanisms. ICOS is one of several ESFRI initiatives in the environmental science domain. There is significant potential for structural and synergetic interaction with several other ESFRI initiatives. ICOS RI is relevant for Joint Programming by providing data access for researchers and acting as a contact point for developing joint strategic research agendas among European member states. 
The preparatory phase ends in March 2013 and there will be an interim period before the legal entity is set up. International negotiations have been going on for two years, during which the constitutional documents have been processed and adopted. The instrument for the ICOS legal entity is the ERIC (European Research Infrastructure Consortium), steered by the General Assembly of its Members. ICOS is a highly distributed research infrastructure in which three operative levels (ICOS National Networks, ICOS Central Facilities and ICOS ERIC) interact on several fields of research and governance. The governance structure of ICOS RI needs to reflect this complexity while maintaining the common vision, strategy and principles.
Building the scholarly society infrastructure in physics in interwar America
NASA Astrophysics Data System (ADS)
Scheiding, Tom
2013-11-01
Starting in the interwar years, both the quantity and quality of physics research conducted within the United States increased dramatically. To accommodate these increases there needed to be significant changes to the infrastructure within the scholarly society, and particularly to the organization's ability to publish and distribute scholarly journals. Significant changes to the infrastructure in physics in the United States began with the formation of the American Institute of Physics as an umbrella organization for the major scholarly societies in American physics in 1931. The American Institute of Physics played a critical role in bringing about an expansion in the size and breadth of coverage of scholarly journals in physics. The priority the American Institute of Physics placed on establishing a strong publication program, and the creation of the American Institute of Physics itself, were stimulated by extensive involvement and financial investments from the Chemical Foundation. Journals of sufficient size, providing an appropriate level of coverage, were essential after World War II as physicists made use of increased patronage and public support to conduct even more research. The account offered here suggests that in important respects the significant government patronage that resulted from World War II accelerated changes that were already underway.
Shewan, Louise G; Glatz, Jane A; Bennett, Christine C; Coats, Andrew J S
To investigate the perceptions of Australian health and medical researchers 4 years after the Wills Report recommended and led to a substantial increase in health and medical research funding in Australia. A telephone poll of 501 active health and medical researchers, conducted between 28 April and 5 May, 2003. Researchers' views on the adequacy of funding, infrastructure and support, salary, community recognition, the excitement of discovery and research outcomes such as publication and patenting in research. Research funding was the most important concern: 91% of researchers (455/498) viewed funding as "very" or "extremely" important to their role, but only 10% (52/500) were "very" or "extremely" satisfied with the level of funding. Research infrastructure and support were seen as "very" or "extremely" important by 90% of researchers (449/501), while only 21% (104/501) were "very" or "extremely" satisfied. Researchers in medical research institutes were significantly more likely to be satisfied (27% [56/205] "very" or "extremely" satisfied) with the level of infrastructure and support than those working in universities (15% [41/268] "very" or "extremely" satisfied; P = 0.001). Among the factors that motivate researchers, the excitement of discovery stood out in terms of both high importance and satisfaction. Publications were viewed as more important research outcomes than patenting or commercial ventures. Funding and infrastructure support remain overwhelmingly researchers' greatest concerns. University-based researchers were less satisfied with infrastructure and support than those in independent medical research institutes.
The International Symposium on Grids and Clouds and the Open Grid Forum
NASA Astrophysics Data System (ADS)
The International Symposium on Grids and Clouds (ISGC) 2011 [1] was held at Academia Sinica in Taipei, Taiwan, on 19th to 25th March 2011. A series of workshops and tutorials preceded the symposium. The aim of ISGC is to promote the use of grid and cloud computing in the Asia Pacific region. Over the 9 years that ISGC has been running, the programme has evolved to become more user-community focused, with subjects reaching out to a larger population. Research communities are making widespread use of distributed computing facilities. Linking together data centers, production grids, desktop systems or public clouds, many researchers are able to do more research and produce results more quickly. They could do much more if the computing infrastructures they use worked together more effectively. Changes in the way we approach distributed computing, and new services from commercial providers, mean that boundaries are starting to blur. This opens the way for hybrid solutions that make it easier for researchers to get their job done. Consequently the theme for ISGC 2011 was the opportunities that better integrated computing infrastructures can bring, and the steps needed to achieve the vision of a seamless global research infrastructure. 2011 is a year of firsts for ISGC. First, the title: while the acronym remains the same, its meaning has changed to reflect the evolution of computing: The International Symposium on Grids and Clouds. Second, the programming: ISGC has always included topical workshops and tutorials, but 2011 is the first year that ISGC has been held in conjunction with the Open Grid Forum [2], which held its 31st meeting with a series of working-group sessions. The ISGC plenary session included keynote speakers from OGF who highlighted the relevance of standards for the research community. ISGC, with its focus on applications and operational aspects, complemented OGF's focus on standards development. 
ISGC brought to OGF real-life use cases and needs to be addressed, while OGF exposed the state of current developments and the issues to be resolved if commonalities are to be exploited. Another first concerns the Proceedings: for 2011, an open-access online publishing scheme will ensure the Proceedings appear more quickly and that more people have access to the results, providing a long-term online archive of the event. The symposium attracted more than 212 participants from 29 countries spanning Asia, Europe and the Americas. Coming so soon after the earthquake and tsunami in Japan, the participation of our Japanese colleagues was particularly appreciated. Keynotes by invited speakers highlighted the impact of distributed computing infrastructures in the social sciences and humanities, high-energy physics, and the earth and life sciences. Plenary sessions entitled Grid Activities in Asia Pacific surveyed the state of grid deployment across 11 Asian countries. Through the parallel sessions, the impact of distributed computing infrastructures in a range of research disciplines was highlighted. Operational procedures, middleware and security aspects were addressed in dedicated sessions. The symposium was covered online in real time by the GridCast team from the GridTalk project: a running blog included summaries of specific sessions, video interviews with keynote speakers and personalities, and photos. As in all regions of the world, grid and cloud computing has to prove it is adding value to researchers if it is to be accepted by them, and demonstrate its impact on society as a whole if it is to be supported by national governments, funding agencies and the general public. ISGC has helped foster the emergence of a strong regional interest in the earth and life sciences, notably for natural disaster mitigation and bioinformatics studies. Prof. Simon C. 
Lin organised an intense social programme with a gastronomic tour of Taipei, culminating with a banquet for all the symposium's participants at the hotel Palais de Chine. I would like to thank all the members of the programme committee, the participants and above all our hosts, Prof. Simon C. Lin and his excellent support team at Academia Sinica. Dr. Bob Jones, Programme Chair. [1] http://event.twgrid.org/isgc2011/ [2] http://www.gridforum.org/
Auscope: Australian Earth Science Information Infrastructure using Free and Open Source Software
NASA Astrophysics Data System (ADS)
Woodcock, R.; Cox, S. J.; Fraser, R.; Wyborn, L. A.
2013-12-01
Since 2005 the Australian Government has supported a series of initiatives providing researchers with access to major research facilities and information networks necessary for world-class research. Starting with the National Collaborative Research Infrastructure Strategy (NCRIS) the Australian earth science community established an integrated national geoscience infrastructure system called AuScope. AuScope is now in operation, providing a number of components to assist in understanding the structure and evolution of the Australian continent. These include the acquisition of subsurface imaging, earth composition and age analysis, a virtual drill core library, geological process simulation, and a high resolution geospatial reference framework. To draw together information from across the earth science community in academia, industry and government, AuScope includes a nationally distributed information infrastructure. Free and Open Source Software (FOSS) has been a significant enabler in building the AuScope community and providing a range of interoperable services for accessing data and scientific software. A number of FOSS components have been created, adopted or upgraded to create a coherent, OGC compliant Spatial Information Services Stack (SISS). SISS is now deployed at all Australian Geological Surveys, many Universities and the CSIRO. Comprising a set of OGC catalogue and data services, and augmented with new vocabulary and identifier services, the SISS provides a comprehensive package for organisations to contribute their data to the AuScope network. This packaging and a variety of software testing and documentation activities enabled greater trust and notably reduced barriers to adoption. FOSS selection was important, not only for technical capability and robustness, but also for appropriate licensing and community models to ensure sustainability of the infrastructure in the long term. 
Government agencies were sensitive to these issues, and AuScope's careful selection has been rewarded by adoption. In some cases the features provided by the SISS solution are now significantly in advance of COTS offerings, which will create expectations that can be passed back from users to their preferred vendors. Using FOSS, AuScope has addressed the challenge of data exchange across organisations nationally. The data standards (e.g. GeosciML) and platforms that underpin AuScope provide important new datasets and multi-agency links independent of underlying software and hardware differences. AuScope has created an infrastructure, a platform of technologies and the opportunity for new ways of working with and integrating disparate data at much lower cost. Research activities are now exploiting the information infrastructure to create virtual laboratories for research ranging from geophysics through water and the environment. Once again the AuScope community is making heavy use of FOSS to provide access to processing software, cloud computing and HPC. The successful use of FOSS by AuScope, and the efforts made to ensure it is suitable for adoption, have resulted in the SISS being selected as a reference implementation for a number of Australian Government initiatives beyond AuScope in environmental information and bioregional assessments.
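The SISS deployments described above expose data through OGC-standard interfaces. As a hedged sketch, the snippet below composes a WFS GetFeature request of the kind such a service answers; the endpoint URL and the GeosciML feature-type name are hypothetical placeholders, while the query parameters follow the WFS 1.1.0 key-value convention.

```python
# Sketch: building an OGC WFS GetFeature request URL. The endpoint and
# feature-type name below are illustrative, not a real AuScope service.
from urllib.parse import urlencode

def wfs_get_feature_url(base_url, type_name, max_features=10):
    """Compose a GetFeature URL for an OGC Web Feature Service (WFS 1.1.0)."""
    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": type_name,        # e.g. a GeosciML feature type
        "maxFeatures": max_features,  # limit the response size
    }
    return f"{base_url}?{urlencode(params)}"

# Hypothetical endpoint and GeosciML feature type:
url = wfs_get_feature_url("http://example.org/geoserver/wfs", "gsml:GeologicUnit")
print(url)
```

Any WFS-compliant server, regardless of vendor, accepts a request shaped this way, which is precisely the interoperability property the SISS packaging relies on.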
NASA Astrophysics Data System (ADS)
Chen, R. S.; Yetman, G.; de Sherbinin, A. M.
2015-12-01
Understanding the interactions between environmental and human systems, and in particular supporting the applications of Earth science data and knowledge in place-based decision making, requires systematic assessment of the distribution and dynamics of human population and the built human infrastructure in conjunction with environmental variability and change. The NASA Socioeconomic Data and Applications Center (SEDAC) operated by the Center for International Earth Science Information Network (CIESIN) at Columbia University has had a long track record in developing reference data layers for human population and settlements and is expanding its efforts on topics such as intercity roads, reservoirs and dams, and energy infrastructure. SEDAC has set as a strategic priority the acquisition, development, and dissemination of data resources derived from remote sensing and socioeconomic data on urban land use change, including temporally and spatially disaggregated data on urban change and rates of change, the built infrastructure, and critical facilities. We report here on a range of past and ongoing activities, including the Global Human Settlements Layer effort led by the European Commission's Joint Research Centre (JRC), the Global Exposure Database for the Global Earthquake Model (GED4GEM) project, the Global Roads Open Access Data Working Group (gROADS) of the Committee on Data for Science and Technology (CODATA), and recent work with ImageCat, Inc. to improve estimates of the exposure and fragility of buildings, road and rail infrastructure, and other facilities with respect to selected natural hazards. New efforts such as the proposed Global Human Settlement indicators initiative of the Group on Earth Observations (GEO) could help fill critical gaps and link potential reference data layers with user needs. 
We highlight key sectors and themes that require further attention, and the many significant challenges that remain in developing comprehensive, high quality, up-to-date, and well maintained reference data layers on population and built infrastructure. The need for improved indicators of sustainable development in the context of the post-2015 development framework provides an opportunity to link data efforts directly with international development needs and investments.
NASA Astrophysics Data System (ADS)
Spinuso, A.; Trani, L.; Rives, S.; Thomy, P.; Euchner, F.; Schorlemmer, D.; Saul, J.; Heinloo, A.; Bossu, R.; van Eck, T.
2009-04-01
The Network of Research Infrastructures for European Seismology (NERIES) is a European Commission (EC) project whose focus is networking together seismological observatories and research institutes into one integrated European infrastructure that provides access to data and data products for research. Seismological institutes and organizations in European and Mediterranean countries maintain large, geographically distributed data archives; this scenario suggested a design approach based on the concept of an internet service-oriented architecture (SOA) to establish a cyberinfrastructure for distributed and heterogeneous data streams and services. Moreover, one of the goals of NERIES is to design and develop a Web portal that acts as the uppermost layer of the infrastructure and provides rendering capabilities for the underlying sets of data. The Web services that are currently being designed and implemented will deliver data that has been adapted to appropriate formats. The parametric information about a seismic event is delivered using a seismology-specific Extensible Markup Language (XML) format called QuakeML (https://quake.ethz.ch/quakeml), which has been formalized and implemented in coordination with global earthquake-information agencies. Uniform Resource Identifiers (URIs) are used to assign identifiers to (1) seismic-event parameters described by QuakeML, and (2) generic resources, for example authorities, location providers, location methods, adopted software, and so on, described by means of a data model constructed with the Resource Description Framework (RDF) and accessible as a service. The European-Mediterranean Seismological Center (EMSC) has implemented a unique event identifier (UNID) that will create the seismic-event URI used by the QuakeML data model. 
Access to data such as broadband waveforms, accelerometric data and station inventories will also be provided through a set of Web services that wrap the middleware used by the seismological observatory or institute supplying the data. Each application of the portal consists of a Java-based JSR-168-standard portlet (often provided with interactive maps for data discovery). In specific cases it will be possible to distribute the deployment of the portlets among the data providers, such as seismological agencies, thanks to the adoption, within the distributed architecture of the NERIES portal, of the Web Services for Remote Portlets (WSRP) standard for presentation-oriented Web services. The purpose of the portal is to give each user an environment in which to browse and retrieve the data of interest, offering a set of shopping carts with storage and management facilities. This approach has the user interact with dedicated tools to compose personalized datasets that can be downloaded or combined with other information available either through the NERIES network of Web services or through the user's own carts. Administrative applications are also provided to perform monitoring tasks such as retrieving service statistics or scheduling submitted data requests. An administrative tool is included that allows the RDF model to be extended, within certain constraints, with new classes and properties.
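As a hedged illustration of the QuakeML delivery described above, the sketch below extracts event identifiers and magnitudes from a hand-written QuakeML-style fragment. The element names and the BED namespace follow the QuakeML 1.2 convention, but the sample document and its identifier values are illustrative, not taken from NERIES.

```python
# Sketch: consuming QuakeML-style event parameters with the standard library.
# The sample below is a hand-written illustrative fragment, not a
# schema-complete QuakeML document.
import xml.etree.ElementTree as ET

BED = "{http://quakeml.org/xmlns/bed/1.2}"  # QuakeML basic-event-description namespace

sample = """
<q:quakeml xmlns:q="http://quakeml.org/xmlns/quakeml/1.2"
           xmlns="http://quakeml.org/xmlns/bed/1.2">
  <eventParameters publicID="smi:example.org/eventParameters/1">
    <event publicID="smi:example.org/event/2009abcd">
      <magnitude publicID="smi:example.org/magnitude/1">
        <mag><value>5.4</value></mag>
      </magnitude>
    </event>
  </eventParameters>
</q:quakeml>
"""

def extract_events(xml_text):
    """Return (event URI, magnitude) pairs from a QuakeML-style document."""
    root = ET.fromstring(xml_text)
    events = []
    for ev in root.iter(BED + "event"):
        uri = ev.get("publicID")  # URI-style identifier, as assigned via a UNID
        mag = ev.find(BED + "magnitude/" + BED + "mag/" + BED + "value")
        events.append((uri, float(mag.text) if mag is not None else None))
    return events

print(extract_events(sample))  # [('smi:example.org/event/2009abcd', 5.4)]
```

Because every event carries a publicID URI, a client can join the parsed records against the RDF resource descriptions mentioned above without any format-specific coupling.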
The GEOSS solution for enabling data interoperability and integrative research.
Nativi, Stefano; Mazzetti, Paolo; Craglia, Max; Pirrone, Nicola
2014-03-01
Global sustainability research requires an integrative research effort underpinned by digital infrastructures (systems) able to harness data and heterogeneous information across disciplines. Digital data and information sharing across systems and applications is achieved by implementing interoperability: a property of a product or system to work with other products or systems, present or future. There are at least three main interoperability challenges a digital infrastructure must address: technological, semantic, and organizational. In recent years, important international programs and initiatives have focused on this ambitious objective. This manuscript presents and combines the studies and experiences carried out by three relevant projects focusing on the heavy metal domain: the Global Mercury Observation System, the Global Earth Observation System of Systems (GEOSS), and INSPIRE. This research work identified a valuable interoperability service bus (i.e., a set of standards, models, interfaces, and good practices) proposed to characterize the integrative research cyber-infrastructure of the heavy metal research community. The paper discusses how the GEOSS common infrastructure implements a multidisciplinary and participatory research infrastructure, introducing a possible roadmap for the heavy metal pollution research community to join GEOSS as a new Group on Earth Observations community of practice and develop a research infrastructure for carrying out integrative research in its specific domain.
JINR cloud infrastructure evolution
NASA Astrophysics Data System (ADS)
Baranov, A. V.; Balashov, N. A.; Kutovskiy, N. A.; Semenov, R. N.
2016-09-01
To fulfil JINR commitments in different national and international projects related to the use of modern information technologies such as cloud and grid computing, as well as to provide a modern tool for JINR users for their scientific research, a cloud infrastructure was deployed at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research. OpenNebula software was chosen as the cloud platform. Initially it was set up in a simple configuration with a single front-end host and a few cloud nodes. Some custom development was done to tune the JINR cloud installation to local needs: a web form in the cloud web interface for resource requests, a menu item with cloud utilization statistics, user authentication via Kerberos, and a custom driver for OpenVZ containers. Because of high demand for the cloud service and over-utilization of its resources, it was re-designed to cover users' increasing needs in capacity, availability and reliability. Recently a new cloud instance has been deployed in a high-availability configuration with a distributed network file system and additional computing power.
Community-driven computational biology with Debian Linux.
Möller, Steffen; Krabbenhöft, Hajo Nils; Tille, Andreas; Paleino, David; Williams, Alan; Wolstencroft, Katy; Goble, Carole; Holland, Richard; Belhachemi, Dominique; Plessy, Charles
2010-12-21
The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers.
NASA Astrophysics Data System (ADS)
Lavric, J. V.; Juurola, E.; Vermeulen, A. T.; Kutsch, W. L.
2016-12-01
In a world that is undergoing climate change and is increasingly impacted by human influence, the need for globally integrated observations of greenhouse gases (GHG) and independent evaluation of their fluxes is becoming increasingly pressing. Since the 2015 COP21 meeting in Paris, such observation systems have also been demanded by global stakeholders and policy makers. For successful monitoring and implementation of mitigation measures, the behavior of natural carbon pools must be well understood, the human carbon emission inventories better constrained, and the interaction between the two better studied. The Integrated Carbon Observation System (ICOS), currently comprising 12 member countries, is a European, domain-overarching, distributed research infrastructure dedicated to providing freely accessible, long-term, high-quality data and data products on GHG budgets and their evolution in terrestrial ecosystems, oceans and the atmosphere. ICOS was built on the foundations of nationally operated in-situ measurement facilities and modelling efforts. Today, it consists of National Networks, Central Facilities, and the European Research Infrastructure Consortium (ICOS ERIC), founded in November 2015. The long-term objective of ICOS is to remain independent, sustainable, and at the forefront of scientific and technological development, and to find a good balance between scientific interests on one side and the expectations of policy makers and society on the other. On the global scale, ICOS seeks to interlink with complementary research infrastructures (e.g. ACTRIS, IAGOS) to form partnerships that maximize the output and effect of invested resources to the benefit of all stakeholders. Much attention will also be given to network design and to attracting new partners from regions where such observations are still lacking, in order to fill the gaps in the global observation network. 
This presentation covers the latest developments concerning ICOS and its roadmap for the near future.
NASA Astrophysics Data System (ADS)
Bandaragoda, C.; Castronova, A. M.; Phuong, J.; Istanbulluoglu, E.; Strauch, R. L.; Nudurupati, S. S.; Tarboton, D. G.; Wang, S. W.; Yin, D.; Barnhart, K. R.; Tucker, G. E.; Hutton, E.; Hobley, D. E. J.; Gasparini, N. M.; Adams, J. M.
2017-12-01
The ability to test hypotheses about hydrology, geomorphology and atmospheric processes is invaluable to research in the era of big data. Although community resources are available, there remain significant educational, logistical and time-investment barriers to their use. Knowledge infrastructure is an emerging intellectual framework for understanding how people create, share and distribute knowledge, a process that has been dramatically transformed by Internet technologies. In addition to the technical and social components in a cyberinfrastructure system, knowledge infrastructure considers the educational, institutional, and open-source governance components required to advance knowledge. We are designing an infrastructure environment that lowers common barriers to reproducing modeling experiments for earth surface investigation. Landlab is an open-source modeling toolkit for building, coupling, and exploring two-dimensional numerical models. HydroShare is an online collaborative environment for sharing hydrologic data and models. CyberGIS-Jupyter is an innovative cyberGIS framework for achieving data-intensive, reproducible, and scalable geospatial analytics using the Jupyter Notebook, based on ROGER, the first cyberGIS supercomputer, so that models can be elastically reproduced through cloud computing approaches. Our team of geomorphologists, hydrologists, and computer geoscientists has created a new infrastructure environment that combines these three pieces of software to enable knowledge discovery. Through this novel integration, any user can interactively execute and explore their shared data and model resources. Landlab on HydroShare with CyberGIS-Jupyter supports the modeling continuum from fully developed modeling applications to prototyping new science tools, hands-on research demonstrations for training workshops, and classroom applications. 
Computational geospatial models based on big data and high performance computing can now be more efficiently developed, improved, scaled, and seamlessly reproduced among multidisciplinary users, thereby expanding the active learning curriculum and research opportunities for students in earth surface modeling and informatics.
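Landlab's own API is not reproduced in the abstract; as a hedged, dependency-free sketch of the kind of two-dimensional grid computation such models compose, the snippet below applies one explicit linear-diffusion step to a small raster. All names here are illustrative and are not Landlab's API.

```python
# Sketch: one explicit linear-diffusion step on a 2D raster grid, the kind
# of surface-process calculation a Landlab-style model composes.
# Illustrative only; not Landlab's API.

def diffuse(z, dx, dt, kappa):
    """Return a copy of grid z after one diffusion step on interior nodes."""
    rows, cols = len(z), len(z[0])
    out = [row[:] for row in z]  # boundary nodes stay fixed
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            # Five-point Laplacian of the elevation field
            lap = (z[i-1][j] + z[i+1][j] + z[i][j-1] + z[i][j+1]
                   - 4 * z[i][j]) / dx**2
            out[i][j] = z[i][j] + kappa * dt * lap
    return out

# A small bump on a flat surface spreads outward over one step:
grid = [[0.0] * 5 for _ in range(5)]
grid[2][2] = 1.0
grid = diffuse(grid, dx=1.0, dt=0.1, kappa=1.0)
print(grid[2][2])  # centre lowered: 1.0 + 0.1 * (0+0+0+0-4)/1 = 0.6
```

Wrapping such a step in a Jupyter notebook shared on HydroShare is exactly the reproducibility pattern the abstract describes: anyone re-running the notebook recomputes the same grids from the same inputs.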
Biodiversity analysis in the digital era
2016-01-01
This paper explores what the virtual biodiversity e-infrastructure will look like as it takes advantage of advances in ‘Big Data’ biodiversity informatics and e-research infrastructure, which allow integration of various taxon-level data types (genome, morphology, distribution and species interactions) within a phylogenetic and environmental framework. By overcoming the data scaling problem in ecology, this integrative framework will provide richer information and fast learning to enable a deeper understanding of biodiversity evolution and dynamics in a rapidly changing world. The Atlas of Living Australia is used as one example of the advantages of progressing towards this future. Living in this future will require the adoption of new ways of integrating scientific knowledge into societal decision making. This article is part of the themed issue ‘From DNA barcodes to biomes’. PMID:27481789
ChemCalc: a building block for tomorrow's chemical infrastructure.
Patiny, Luc; Borel, Alain
2013-05-24
Web services, as an aspect of cloud computing, are becoming an important part of the general IT infrastructure, and scientific computing is no exception to this trend. We propose a simple approach to develop chemical Web services, through which servers could expose the essential data manipulation functionality that students and researchers need for chemical calculations. These services return their results as JSON (JavaScript Object Notation) objects, which facilitates their use for Web applications. The ChemCalc project http://www.chemcalc.org demonstrates this approach: we present three Web services related with mass spectrometry, namely isotopic distribution simulation, peptide fragmentation simulation, and molecular formula determination. We also developed a complete Web application based on these three Web services, taking advantage of modern HTML5 and JavaScript libraries (ChemDoodle and jQuery).
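The style of service described above can be illustrated with a small self-contained computation: an isotopic distribution built by convolving per-atom isotope patterns, serialized as JSON the way such a web service would return it. This is an illustrative sketch only, not ChemCalc's actual algorithm or API; the isotope masses and abundances for chlorine are standard published values.

```python
import json

# Sketch: build the isotopic pattern of Cl2 by convolving the chlorine
# isotope pattern with itself, then serialize as JSON (as a web service
# returning a JSON object would). Illustrative, not ChemCalc's code.

CL = [(34.96885, 0.7576), (36.96590, 0.2424)]  # 35Cl / 37Cl: mass, abundance

def convolve(pat_a, pat_b):
    """Combine two isotope patterns: masses add, probabilities multiply."""
    out = {}
    for ma, pa in pat_a:
        for mb, pb in pat_b:
            m = round(ma + mb, 5)
            out[m] = out.get(m, 0.0) + pa * pb
    return sorted(out.items())

cl2 = convolve(CL, CL)  # three peaks: 35+35, 35+37, 37+37
payload = json.dumps([{"mass": m, "abundance": round(p, 4)}
                      for m, p in cl2])
```

Returning JSON objects in this way is what makes the services easy to consume directly from HTML5/JavaScript front ends such as the one the authors built.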
Holve, Erin; Segal, Courtney
2014-11-01
The 11 big health data networks participating in the AcademyHealth Electronic Data Methods Forum represent cutting-edge efforts to harness the power of big health data for research and quality improvement. This paper is a comparative case study based on site visits conducted with a subset of these large infrastructure grants funded through the Recovery Act, in which four key issues emerge that can inform the evolution of learning health systems, including the importance of acknowledging the challenges of scaling specialized expertise needed to manage and run CER networks; the delicate balance between privacy protections and the utility of distributed networks; emerging community engagement strategies; and the complexities of developing a robust business model for multi-use networks.
NREL Serves as the Energy Department's Showcase for Cutting-Edge Fuel Cell
vehicle on loan from Hyundai through a one-year Cooperative Research and Development Agreement, and a ... produced at the Hydrogen Infrastructure Testing and Research Facility (HITRF) located at NREL's Energy ... infrastructure as part of the Energy Department's Hydrogen Fueling Infrastructure Research and ...
This slide presentation summarizes key elements of the EPA Office of Research and Development’s (ORD) Aging Water Infrastructure (AWI) Research program. An overview of the national problems posed by aging water infrastructure is followed by a brief description of EPA’s overall r...
Increasing Road Infrastructure Capacity Through the Use of Autonomous Vehicles
2016-12-01
NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. Master's thesis: Increasing Road Infrastructure Capacity Through the Use of Autonomous Vehicles. Approved for public release; distribution is unlimited. Keywords: driverless vehicles, road infrastructure. 65 pages.
FOSS Tools for Research Infrastructures - A Success Story?
NASA Astrophysics Data System (ADS)
Stender, V.; Schroeder, M.; Wächter, J.
2015-12-01
Established initiatives and mandated organizations, e.g. the Initiative for Scientific Cyberinfrastructures (NSF, 2007) or the European Strategy Forum on Research Infrastructures (ESFRI, 2008), promote and foster the development of sustainable research infrastructures. The basic idea behind these infrastructures is the provision of services supporting scientists to search, visualize and access data, to collaborate and exchange information, as well as to publish data and other results. The management of research data in particular is gaining increasing importance. In the geosciences these developments have to be merged with the enhanced data management approaches of Spatial Data Infrastructures (SDI). The Centre for GeoInformationTechnology (CeGIT) at the GFZ German Research Centre for Geosciences aims to establish concepts and standards of SDIs as an integral part of research infrastructure architectures. In different projects, solutions to manage research data for land- and water management or environmental monitoring have been developed based on a framework consisting of Free and Open Source Software (FOSS) components. The framework provides basic components supporting the import and storage of data, discovery and visualization, as well as data documentation (metadata). In our contribution, we present the data management solutions developed in three projects, Central Asian Water (CAWa), Sustainable Management of River Oases (SuMaRiO) and Terrestrial Environmental Observatories (TERENO), where FOSS components build the backbone of the data management platform. The multiple use and validation of these tools helped establish a standardized architectural blueprint serving as a contribution to research infrastructures. We examine whether FOSS tools are really a sustainable choice and whether the increased maintenance effort is justified.
Finally, this should help answer the question of whether the use of FOSS for research infrastructures is a success story.
Infrastructures for Distributed Computing: the case of BESIII
NASA Astrophysics Data System (ADS)
Pellegrino, J.
2018-05-01
BESIII is an electron-positron collision experiment hosted at BEPCII in Beijing and aimed at investigating tau-charm physics. BESIII has now been running for several years and has gathered more than 1 PB of raw data. In order to analyze these data and perform massive Monte Carlo simulations, a large amount of computing and storage resources is needed. The distributed computing system is based upon DIRAC and has been in production since 2012. It integrates computing and storage resources from different institutes and a variety of resource types such as cluster, grid, cloud and volunteer computing. About 15 sites from the BESIII Collaboration all over the world have joined this distributed computing infrastructure, giving a significant contribution to the IHEP computing facility. Nowadays cloud computing is playing a key role in the HEP computing field, due to its scalability and elasticity. Cloud infrastructures take advantage of several tools, such as VMDirac, to manage virtual machines through cloud managers according to the job requirements. With the virtually unlimited resources of commercial clouds, the computing capacity can scale accordingly to deal with burst demands. General computing models are addressed herewith, with particular focus on the BESIII infrastructure, along with new computing tools and upcoming infrastructures.
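The elasticity described above rests on a simple decision: compare the job backlog to running capacity and boot additional cloud VMs up to a cap. The sketch below is a toy version of that logic under stated assumptions (a fixed jobs-per-VM ratio and a hard instance cap); VMDirac's real scheduling against cloud managers is far richer.

```python
# Toy sketch of elastic provisioning logic, illustrative only:
# boot enough cloud VMs to cover the queued-job backlog, subject
# to a maximum-instance cap.

def vms_to_boot(queued_jobs, running_vms, jobs_per_vm, max_vms):
    """Number of additional VMs needed for the current backlog."""
    needed = -(-queued_jobs // jobs_per_vm)      # ceiling division
    shortfall = max(0, needed - running_vms)     # VMs still missing
    headroom = max(0, max_vms - running_vms)     # room under the cap
    return min(shortfall, headroom)

# 90 queued jobs, 2 VMs running, 8 jobs per VM, cap of 10 VMs:
extra = vms_to_boot(90, 2, 8, 10)
```

Run periodically against the job queue, a rule like this scales capacity up for burst demands and (with a symmetric rule for idle VMs) back down afterwards.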
A Serviced-based Approach to Connect Seismological Infrastructures: Current Efforts at the IRIS DMC
NASA Astrophysics Data System (ADS)
Ahern, Tim; Trabant, Chad
2014-05-01
As part of the COOPEUS initiative to build infrastructure that connects European and US research infrastructures, IRIS has advocated the development of federated services based upon internationally recognized standards using web services. By deploying International Federation of Digital Seismograph Networks (FDSN)-endorsed web services at multiple data centers in the US and Europe, we have shown that integration within the seismological domain can be realized. Deploying identical methods of invoking the web services at multiple centers significantly eases the way a scientist accesses seismic data (time series, metadata, and earthquake catalogs) from distributed federated centers. IRIS has developed a federator that helps a user identify where seismic data from global seismic networks can be accessed. The web-services-based federator builds the appropriate URLs and returns them to client software running on the scientist's own computer. These URLs are then used to pull data directly from the distributed centers in a peer-based fashion. IRIS is also involved in deploying web services across horizontal domains. As part of the US National Science Foundation's (NSF) EarthCube effort, an IRIS-led EarthCube Building Blocks project is underway. When completed, this project will aid the discovery, access, and usability of data across multiple geoscience domains. This presentation will summarize current IRIS efforts in building vertical integration infrastructure within seismology, working closely with 5 centers in Europe and 2 centers in the US, as well as first steps toward horizontal integration of data from 14 different domains in the US, in Europe, and around the world.
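Because the federated centers expose identically structured endpoints, the URLs the federator returns can be assembled mechanically. The sketch below follows the public fdsnws-dataselect path convention; the base URL and parameter values are illustrative examples, and a real client would consult the federator for the list of centers holding the requested networks.

```python
from urllib.parse import urlencode

# Sketch: assemble an FDSN-style dataselect query URL for one data
# centre, following the fdsnws web-service path convention. The
# centre and channel values here are illustrative.

def dataselect_url(base, net, sta, loc, cha, start, end):
    """Build an fdsnws-dataselect query URL."""
    params = urlencode({"net": net, "sta": sta, "loc": loc, "cha": cha,
                        "start": start, "end": end})
    return f"{base}/fdsnws/dataselect/1/query?{params}"

url = dataselect_url("http://service.iris.edu", "IU", "ANMO", "00", "BHZ",
                     "2014-01-01T00:00:00", "2014-01-01T01:00:00")
```

Because every federated center accepts the same query shape, client software can fan the same parameters out across the URLs the federator returns and merge the results.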
The iPlant Collaborative: Cyberinfrastructure for Enabling Data to Discovery for the Life Sciences.
Merchant, Nirav; Lyons, Eric; Goff, Stephen; Vaughn, Matthew; Ware, Doreen; Micklos, David; Antin, Parker
2016-01-01
The iPlant Collaborative provides life science research communities access to comprehensive, scalable, and cohesive computational infrastructure for data management; identity management; collaboration tools; and cloud, high-performance, high-throughput computing. iPlant provides training, learning material, and best practice resources to help all researchers make the best use of their data, expand their computational skill set, and effectively manage their data and computation when working as distributed teams. iPlant's platform permits researchers to easily deposit and share their data and deploy new computational tools and analysis workflows, allowing the broader community to easily use and reuse those data and computational analyses.
CloudMan as a platform for tool, data, and analysis distribution.
Afgan, Enis; Chapman, Brad; Taylor, James
2012-11-27
Cloud computing provides an infrastructure that facilitates large-scale computational analysis in a scalable, democratized fashion. However, in this context it is difficult to ensure sharing of an analysis environment and associated data in a scalable and precisely reproducible way. CloudMan (usecloudman.org) enables individual researchers to easily deploy, customize, and share their entire cloud analysis environment, including data, tools, and configurations. With the enabled customization and sharing of instances, CloudMan can be used as a platform for collaboration. The presented solution improves the accessibility of cloud resources, tools, and data to the level of an individual researcher and contributes toward reproducibility and transparency of research solutions.
Next Generation Distributed Computing for Cancer Research
Agarwal, Pankaj; Owzar, Kouros
2014-01-01
Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing. PMID:25983539
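The Hadoop model the review centers on has a simple core: map each input record to key-value pairs, shuffle and sort by key, then reduce each key's values. The sketch below imitates that flow in-process, with toy "reads" binned by chromosome standing in for NGS data; it is an illustration of the programming model, not of Hadoop itself or of the authors' benchmark cluster.

```python
from itertools import groupby
from operator import itemgetter

# Minimal in-process imitation of the MapReduce model (illustrative
# only): map records to (key, value) pairs, shuffle/sort by key,
# then reduce each key's value list.

def map_phase(records, mapper):
    return [pair for rec in records for pair in mapper(rec)]

def reduce_phase(pairs, reducer):
    pairs.sort(key=itemgetter(0))                      # shuffle/sort by key
    return {k: reducer(k, [v for _, v in grp])
            for k, grp in groupby(pairs, key=itemgetter(0))}

# Toy "reads" tagged by chromosome; count reads per chromosome.
reads = [("chr1", "ACGT"), ("chr2", "TTAA"), ("chr1", "GGCC")]
counts = reduce_phase(map_phase(reads, lambda r: [(r[0], 1)]),
                      lambda k, vs: sum(vs))
```

Hadoop's value is that the same two functions run unchanged while the framework distributes the map tasks, the shuffle, and the reduce tasks across a cluster with fault tolerance.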
Toward Information Infrastructure Studies: Ways of Knowing in a Networked Environment
NASA Astrophysics Data System (ADS)
Bowker, Geoffrey C.; Baker, Karen; Millerand, Florence; Ribes, David
This article presents Information Infrastructure Studies, a research area that takes up some core issues in digital information and organization research. Infrastructure Studies simultaneously addresses the technical, social, and organizational aspects of the development, usage, and maintenance of infrastructures in local communities as well as global arenas. While infrastructure is understood as a broad category referring to a variety of pervasive, enabling network resources such as railroad lines, plumbing and pipes, electrical power plants and wires, this article focuses on information infrastructure, such as computational services and help desks, or federating activities such as scientific data repositories and archives spanning the multiple disciplines needed to address such issues as climate warming and the biodiversity crisis. These are elements associated with the internet and, frequently today, associated with cyberinfrastructure or e-science endeavors. We argue that a theoretical understanding of infrastructure provides the context for needed dialogue between design, use, and sustainability of internet-based infrastructure services. This article outlines the research area and its overarching themes. Part one of the paper presents definitions for infrastructure and cyberinfrastructure, reviewing salient previous work. Part two portrays key ideas from infrastructure studies (knowledge work, social and political values, new forms of sociality, etc.). In closing, the character of the field today is considered.
Mishra, Amrita
2014-01-01
Omics research infrastructure such as databases and bio-repositories requires effective governance to support pre-competitive research. Governance includes the use of legal agreements, such as Material Transfer Agreements (MTAs). We analyze the use of such agreements in the mouse research commons, including by two large-scale resource development projects: the International Knockout Mouse Consortium (IKMC) and International Mouse Phenotyping Consortium (IMPC). We combine an analysis of legal agreements and semi-structured interviews with 87 members of the mouse model research community to examine legal agreements in four contexts: (1) between researchers; (2) deposit into repositories; (3) distribution by repositories; and (4) exchanges between repositories, especially those that are consortium members of the IKMC and IMPC. We conclude that legal agreements for the deposit and distribution of research reagents should be kept as simple and standard as possible, especially when minimal enforcement capacity and resources exist. Simple and standardized legal agreements reduce transactional bottlenecks and facilitate the creation of a vibrant and sustainable research commons, supported by repositories and databases. PMID:24552652
Intelligent Agents for the Digital Battlefield
1998-11-01
A specific outcome of our long-term research will be the development of a collaborative agent technology system, CATS, that will provide the underlying ... software infrastructure needed to build large, heterogeneous, distributed agent applications. CATS will provide a software environment through which multiple ... intelligent agents may interact with other agents, both human and computational. In addition, CATS will contain a number of intelligent agent components that will be useful for a wide variety of applications.
GEMSS: grid-infrastructure for medical service provision.
Benkner, S; Berti, G; Engelbrecht, G; Fingberg, J; Kohring, G; Middleton, S E; Schmidt, R
2005-01-01
The European GEMSS Project is concerned with the creation of medical Grid service prototypes and their evaluation in a secure service-oriented infrastructure for distributed on demand/supercomputing. Key aspects of the GEMSS Grid middleware include negotiable QoS support for time-critical service provision, flexible support for business models, and security at all levels in order to ensure privacy of patient data as well as compliance to EU law. The GEMSS Grid infrastructure is based on a service-oriented architecture and is being built on top of existing standard Grid and Web technologies. The GEMSS infrastructure offers a generic Grid service provision framework that hides the complexity of transforming existing applications into Grid services. For the development of client-side applications or portals, a pluggable component framework has been developed, providing developers with full control over business processes, service discovery, QoS negotiation, and workflow, while keeping their underlying implementation hidden from view. A first version of the GEMSS Grid infrastructure is operational and has been used for the set-up of a Grid test-bed deploying six medical Grid service prototypes including maxillo-facial surgery simulation, neuro-surgery support, radio-surgery planning, inhaled drug-delivery simulation, cardiovascular simulation and advanced image reconstruction. The GEMSS Grid infrastructure is based on standard Web Services technology with an anticipated future transition path towards the OGSA standard proposed by the Global Grid Forum. GEMSS demonstrates that the Grid can be used to provide medical practitioners and researchers with access to advanced simulation and image processing services for improved preoperative planning and near real-time surgical support.
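The negotiable QoS support described above ultimately comes down to matching a client's time and cost constraints against competing service offers. The sketch below is a toy version of that selection step under assumed data shapes (offers as name, price, predicted turnaround tuples); GEMSS's actual negotiation protocol and business models are considerably richer.

```python
# Toy sketch of deadline-driven QoS selection (illustrative only):
# given service offers of (name, price, predicted turnaround in
# minutes), choose the cheapest offer that meets the deadline.

def select_offer(offers, deadline_min):
    """Cheapest feasible offer, or None if no offer meets the deadline."""
    feasible = [o for o in offers if o[2] <= deadline_min]
    return min(feasible, key=lambda o: o[1]) if feasible else None

offers = [("site-a", 40.0, 25),   # fast but expensive
          ("site-b", 15.0, 90),   # cheap but slow
          ("site-c", 22.0, 50)]   # middle ground
chosen = select_offer(offers, deadline_min=60)
```

For time-critical clinical uses such as near real-time surgical support, the deadline constraint dominates and the negotiation would weight predicted turnaround far above price.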
International Symposium on Grids and Clouds (ISGC) 2014
NASA Astrophysics Data System (ADS)
The International Symposium on Grids and Clouds (ISGC) 2014 will be held at Academia Sinica in Taipei, Taiwan from 23-28 March 2014, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). “Bringing the data scientist to global e-Infrastructures” is the theme of ISGC 2014. The last decade has seen phenomenal growth in the production of data in all forms by all research communities, producing a deluge of data from which information and knowledge need to be extracted. Key to this success will be the data scientist - educated to use advanced algorithms, applications and infrastructures - collaborating internationally to tackle society’s challenges. ISGC 2014 will bring together researchers working in all aspects of data science from different disciplines around the world to collaborate and educate themselves in the latest achievements and techniques being used to tackle the data deluge. In addition to the regular workshops, technical presentations and plenary keynotes, ISGC this year will focus on how to grow the data science community by considering the educational foundation needed for tomorrow’s data scientist. Topics of discussion include Physics (including HEP) and Engineering Applications, Biomedicine & Life Sciences Applications, Earth & Environmental Sciences & Biodiversity Applications, Humanities & Social Sciences Applications, Virtual Research Environments (including middleware, tools, services, workflow, etc.), Data Management, Big Data, Infrastructure & Operations Management, Infrastructure Clouds and Virtualisation, Interoperability, Business Models & Sustainability, Highly Distributed Computing Systems, and High Performance & Technical Computing (HPTC).
NASA Astrophysics Data System (ADS)
Wilson, J. L.; Dressler, K.; Hooper, R. P.
2005-12-01
The river basin is a fundamental unit of the landscape and water in that defined landscape plays a central role in shaping the land surface, in dissolving minerals, in transporting chemicals, and in determining species distribution. Therefore, the river basin is a natural observatory for examining hydrologic phenomena and the complex interaction of physical, chemical, and biological processes that control them. CUAHSI, incorporated in 2001, is a community-based research infrastructure initiative formed to mobilize the hydrologic community through addressing key science questions and leveraging nationwide hydrologic resources from its member institutions and collaborative partners. Through an iterative community-based process, it has been previously proposed to develop a network of hydrologic infrastructure that organizes around scales on the order of 10,000 km2 to examine critical interfaces such as the land-surface, atmosphere, and human impact. Data collection will characterize the stores, fluxes, physical pathways, and residence time distributions of water, sediment, nutrients, and contaminants coherently at nested scales. These fundamental properties can be used by a wide range of scientific disciplines to address environmental questions. This more complete characterization will enable new linkages to be identified and hypotheses to be tested more incisively. With such a research platform, hydrologic science can advance beyond measuring streamflow or precipitation input to understanding how the river basin functions in both its internal processes and in responding to environmental stressors. That predictive understanding is needed to make informed decisions as development and even natural pressures stress existing water supplies and competing demands for water require non-traditional solutions that take into consideration economic, environmental, and social factors. 
Advanced hydrologic infrastructure will enable research for a broad range of multidisciplinary science questions. The CUAHSI science agenda has evolved through community input and research into several unifying theme areas, or categories. Three example categories are: forcing, internal processing, and evolution. Within each category, coherent (integrated in space and time) physical, chemical and biological data are needed to answer specific science questions. For example, in the case of "forcing": How do patterns in rainfall influence predictability of floods and droughts? Floods and droughts have long been considered random events. However, we now know that there are decadal patterns in rainfall and that rainfall recycles within the basin thereby intensifying floods and droughts. How does the internal state of the system combine with external forcing to determine the occurrence of hydrologic extremes?
Green Infrastructure Research at EPA's Edison Environmental Center
The presentation outline includes: (1) green infrastructure research objectives; (2) introduction to ongoing research projects - aspects of design, construction, and maintenance that affect function, and real-world applications of GI research.
Distributed generation of shared RSA keys in mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Liu, Yi-Liang; Huang, Qin; Shen, Ying
2005-12-01
A mobile ad hoc network is a network in which mobile nodes communicate over wireless links in a self-organizing manner, independent of fixed physical infrastructure and centralized administration. However, the nature of ad hoc networks makes them very vulnerable to security threats. Generation and distribution of shared keys for a CA (Certification Authority) is challenging for security solutions based on a distributed PKI (Public-Key Infrastructure)/CA. The solutions that have been proposed in the literature and some related issues are discussed in this paper, and a scheme for distributed generation of shared threshold RSA keys for the CA is proposed. During the process of creating an RSA private-key share, every CA node holds only its own private share. Distributed arithmetic is used to create the CA's private shares locally, eliminating the requirement for a centralized management institution. By fully exploiting the mobile ad hoc network's self-organizing character, the scheme avoids the security risk of any single node holding the CA's entire private key, enhancing the security and robustness of the system.
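The threshold principle behind a distributed CA can be illustrated with Shamir (t, n) secret sharing over a prime field: any t shares reconstruct the secret, fewer reveal nothing. Note this is only an illustration of thresholding; the paper's protocol goes further by generating shared RSA keys such that no dealer ever holds the whole key, which this dealer-based sketch does not capture.

```python
import random

# Illustrative Shamir (t, n) secret sharing over a prime field.
# Shows the threshold principle behind distributed CA keys; NOT the
# paper's dealerless distributed RSA key generation protocol.

P = 2**127 - 1  # a Mersenne prime as the field modulus

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = make_shares(123456789, t=3, n=5)   # 3-of-5 threshold
recovered = reconstruct(shares[:3])
```

In a CA setting, each node would hold one share and contribute a partial signature, so compromising fewer than t nodes never exposes the CA key.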
Soga, Kenichi; Schooling, Jennifer
2016-08-06
Design, construction, maintenance and upgrading of civil engineering infrastructure requires fresh thinking to minimize use of materials, energy and labour. This can only be achieved by understanding the performance of the infrastructure, both during its construction and throughout its design life, through innovative monitoring. Advances in sensor systems offer intriguing possibilities to radically alter methods of condition assessment and monitoring of infrastructure. In this paper, it is hypothesized that the future of infrastructure relies on smarter information; the rich information obtained from embedded sensors within infrastructure will act as a catalyst for new design, construction, operation and maintenance processes for integrated infrastructure systems linked directly with user behaviour patterns. Some examples of emerging sensor technologies for infrastructure sensing are given. They include distributed fibre-optics sensors, computer vision, wireless sensor networks, low-power micro-electromechanical systems, energy harvesting and citizens as sensors.
Transport Infrastructure Surveillance and Monitoring by Electromagnetic Sensing: The ISTIMES Project
Proto, Monica; Bavusi, Massimo; Bernini, Romeo; Bigagli, Lorenzo; Bost, Marie; Bourquin, Frédéric; Cottineau, Louis-Marie; Cuomo, Vincenzo; Vecchia, Pietro Della; Dolce, Mauro; Dumoulin, Jean; Eppelbaum, Lev; Fornaro, Gianfranco; Gustafsson, Mats; Hugenschmidt, Johannes; Kaspersen, Peter; Kim, Hyunwook; Lapenna, Vincenzo; Leggio, Mario; Loperte, Antonio; Mazzetti, Paolo; Moroni, Claudio; Nativi, Stefano; Nordebo, Sven; Pacini, Fabrizio; Palombo, Angelo; Pascucci, Simone; Perrone, Angela; Pignatti, Stefano; Ponzo, Felice Carlo; Rizzo, Enzo; Soldovieri, Francesco; Taillade, Frédéric
2010-01-01
The ISTIMES project, funded by the European Commission in the frame of a joint Call “ICT and Security” of the Seventh Framework Programme, is presented and preliminary research results are discussed. The main objective of the ISTIMES project is to design, assess and promote an Information and Communication Technologies (ICT)-based system, exploiting distributed and local sensors, for non-destructive electromagnetic monitoring of critical transport infrastructures. The integration of electromagnetic technologies with new ICT information and telecommunications systems enables remotely controlled monitoring and surveillance and real-time data imaging of the critical transport infrastructures. The project exploits different non-invasive imaging technologies based on electromagnetic sensing (optic fiber sensors, satellite-based Synthetic Aperture Radar, hyperspectral spectroscopy, infrared thermography, Ground Penetrating Radar, low-frequency geophysical techniques, and ground-based systems for displacement monitoring). In this paper, we show the preliminary results arising from the GPR and infrared thermographic measurements carried out on the Musmeci bridge in Potenza, located in a highly seismic area of the Apennine chain (Southern Italy) and representing one of the test beds of the project. PMID:22163489
NASA Astrophysics Data System (ADS)
Mazzetti, P.; Nativi, S.; Verlato, M.; Angelini, V.
2009-04-01
In the context of the EU co-funded project CYCLOPS (http://www.cyclops-project.eu), the problem of designing an advanced e-Infrastructure for Civil Protection (CP) applications has been addressed. As a preliminary step, studies of European CP systems and operational applications were performed in order to define their specific system requirements. At a higher level it was verified that CP applications are usually conceived to map CP business processes involving different levels of processing, including data access, data processing, and output visualization. At their core they usually run one or more Earth Science models for information extraction. The traditional approach based on the development of monolithic applications presents some limitations related to flexibility (e.g. the possibility of running the same models with different input data sources, or different models with the same data sources) and scalability (e.g. launching several runs for different scenarios, or implementing more accurate and computing-demanding models). Flexibility can be addressed by adopting a modular design based on an SOA and standard services and models, such as OWS and ISO standards for geospatial services. Distributed computing and storage solutions can improve scalability. Based on these considerations, an architectural framework has been defined. It consists of a Web Service layer providing advanced services for CP applications (e.g. standard geospatial data sharing and processing services) working on the underlying Grid platform. This framework has been tested through the development of prototypes as proof-of-concept. These theoretical studies and proofs-of-concept demonstrated that although Grid and geospatial technologies would be able to provide significant benefits to CP applications in terms of scalability and flexibility, current platforms are designed taking into account requirements different from CP.
In particular, CP applications have strict requirements in terms of: a) real-time capabilities, privileging time-of-response over accuracy; b) security services to support complex data policies and trust relationships; c) interoperability with existing or planned infrastructures (e.g. e-Government, INSPIRE-compliant, etc.). These requirements are the main reason why CP applications differ from Earth Science applications. Therefore further research is required to design and implement an advanced e-Infrastructure satisfying those specific requirements. Five themes requiring further research were identified: Grid Infrastructure Enhancement, Advanced Middleware for CP Applications, Security and Data Policies, CP Applications Enablement, and Interoperability. For each theme several research topics were proposed and detailed. They are targeted at solving specific problems for the implementation of an effective operational European e-Infrastructure for CP applications.
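The flexibility argument above (running the same model with different data sources, or different models with the same source) is exactly the decoupling a modular SOA provides. A minimal sketch in Python; all class names are invented for illustration, since CYCLOPS defines its services in terms of OWS/Grid interfaces rather than code:

```python
from abc import ABC, abstractmethod

# Hypothetical interfaces standing in for standard geospatial services.
class DataSource(ABC):
    @abstractmethod
    def read(self) -> list[float]: ...

class Model(ABC):
    @abstractmethod
    def run(self, data: list[float]) -> float: ...

class SensorFeed(DataSource):
    def read(self) -> list[float]:
        return [1.2, 3.4, 2.2]   # stand-in for a live observation stream

class ArchiveFeed(DataSource):
    def read(self) -> list[float]:
        return [0.9, 1.1, 1.0]   # stand-in for an archived dataset

class MeanHazardModel(Model):
    def run(self, data: list[float]) -> float:
        return sum(data) / len(data)

def run_pipeline(source: DataSource, model: Model) -> float:
    # Any source can be paired with any model: the decoupling that a
    # monolithic application lacks.
    return model.run(source.read())

live = run_pipeline(SensorFeed(), MeanHazardModel())
replay = run_pipeline(ArchiveFeed(), MeanHazardModel())
```

Swapping `SensorFeed` for `ArchiveFeed` re-runs the same model on a different source without touching the model code, which is the scenario-replay use case the abstract describes.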
Aging Water Infrastructure Research Program Innovation & Research for the 21st Century
The U.S. infrastructure is critical for providing essential services: protecting public health and the environment and supporting and sustaining our economy. Significant investment in water infrastructure: over 16,000 WWTPs serving 190 million people; about 54,000 community water syste...
Application of GIS in exploring spatial dimensions of Efficiency in Competitiveness of Regions
NASA Astrophysics Data System (ADS)
Rahmat, Shahid; Sen, Joy
2017-04-01
Infrastructure is an important component in building the competitiveness of a region. The present global economic slowdown is marked by a slump in demand for goods and services and a decreasing capacity of government institutions to invest in public infrastructure. A strategy for augmenting the competitiveness of a region can therefore be built around a more efficient distribution of public infrastructure. This efficiency in the distribution of infrastructure will reduce the burden on government institutions and improve the relative output of the region for a comparatively smaller investment. A rigorous literature study followed by an expert opinion survey (RIDIT scores) reveals that railway, road, ICT and electricity infrastructure are crucial for the competitiveness of a region. Discussions with experts in the ICT, railway and electricity sectors were conducted to identify the issues, hurdles and possible solutions for the development of these sectors. In a developing country like India, there is a large constraint on financial resources for investment in the infrastructure sector, so judicious planning for the allocation of resources for infrastructure provision becomes very important for efficient and sustainable development. Data Envelopment Analysis (DEA) is a mathematical programming optimization tool that measures technical efficiency in the multiple-input and/or multiple-output case by constructing a relative technical efficiency score. This paper utilizes DEA to identify the efficiency with which the present level of selected components of infrastructure (railway, road, ICT and electricity) is utilized to build the competitiveness of the region. It then identifies spatial patterns in infrastructure efficiency with the help of spatial autocorrelation and hot-spot analysis in ArcGIS.
This analysis leads to policy implications for the efficient allocation of financial resources for the provision of infrastructure in the region, building a prerequisite for boosting efficient regional competitiveness.
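As a sketch of the DEA machinery described above: the input-oriented CCR model in multiplier form solves one linear program per decision-making unit (DMU), maximizing the weighted output of the unit subject to its weighted input equalling one and no unit exceeding efficiency one. The data below are invented single-input, single-output figures, not values from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 DMUs (e.g. districts), 1 input column, 1 output column.
X = np.array([[2.0], [4.0], [3.0], [5.0]])  # inputs
Y = np.array([[2.0], [3.0], [4.0], [3.0]])  # outputs

def ccr_efficiency(k: int) -> float:
    """Input-oriented CCR efficiency of DMU k (multiplier form)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: output weights u (s of them), input weights v (m).
    c = np.concatenate([-Y[k], np.zeros(m)])          # maximize u'y_k
    A_ub = np.hstack([Y, -X])                          # u'y_j - v'x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[k]])[None]   # v'x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun

scores = [ccr_efficiency(k) for k in range(len(X))]
```

Here DMU 2, with the best output/input ratio (4/3), comes out efficient with score 1, and the others are scored relative to it.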
Meeting the research infrastructure needs of micropolitan and rural communities.
Strasburger, Janette F
2009-05-01
In the 1800s, this country chose to establish land-grant colleges to see that the working class could attain higher education, and that the research needs of the agricultural and manufacturing segments of this country could be met. It seems contrary to our origins to see so little support at present for research infrastructure going to the very communities that need such research to sustain their populations, grow their economies, to attract physicians, to provide adequate health care, and to educate, retain, and employ their youth. Cities are viewed as sources for high-paying jobs, yet many of these same jobs could be translated to rural and micropolitan areas, provided that the resources are established to support them. One of the fastest growing economic periods in this country's history was during World War II, when even the smallest and most remote towns contributed substantially to the innovations, manufacture, and production of goods benefiting our nation as a whole. Rural areas have always lagged somewhat behind metropolitan areas in acquisition of new technology. Rural electricity and rural phone access are examples from the past. Testing our universities' abilities to grow distributive research networks beyond their campuses will create a competitive edge regionally, against global workplace, educational, and research competition, and will lay the groundwork for efficiency in research and for new innovation.
A Multi-Operator Simulation for Investigation of Distributed Air Traffic Management Concepts
NASA Technical Reports Server (NTRS)
Peters, Mark E.; Ballin, Mark G.; Sakosky, John S.
2002-01-01
This paper discusses the current development of an air traffic operations simulation that supports feasibility research for advanced air traffic management concepts. The Air Traffic Operations Simulation (ATOS) supports the research of future concepts that provide a much greater role for the flight crew in traffic management decision-making. ATOS provides representations of the future communications, navigation, and surveillance (CNS) infrastructure, a future flight deck systems architecture, and advanced crew interfaces. ATOS also provides a platform for the development of advanced flight guidance and decision support systems that may be required for autonomous operations.
NASA Hydrogen Research for Spaceport and Space Based Applications
NASA Technical Reports Server (NTRS)
Anderson, Tim
2006-01-01
The activities presented are a broad based approach to advancing key hydrogen related technologies in areas such as hydrogen production, distributed sensors for hydrogen-leak detection, laser instrumentation for hydrogen-leak detection, and cryogenic transport and storage. Presented are the results from 15 research projects, education, and outreach activities, system and trade studies, and project management. The work will aid in advancing the state-of-the-art for several critical technologies related to the implementation of a hydrogen infrastructure. Activities conducted are relevant to a number of propulsion and power systems for terrestrial, aeronautics, and aerospace applications.
NASA Astrophysics Data System (ADS)
Giardini, D.; van Eck, T.; Bossu, R.; Wiemer, S.
2009-04-01
The EC Research infrastructure project NERIES, an Integrated Infrastructure Initiative in seismology for 2006-2010, has passed its mid-term point. We will present a short, concise overview of the current state of the project, established cooperation with other European and global projects, and the planning for the last year of the project. Earthquake data archiving and access within Europe has dramatically improved during the last two years. This concerns earthquake parameters, digital broadband and acceleration waveforms, and historical data. The Virtual European Broadband Seismic Network (VEBSN) currently consists of more than 300 stations. A new distributed data archive concept, the European Integrated Waveform Data Archive (EIDA), has been implemented in Europe, connecting the larger European seismological waveform data archives. Global standards for earthquake parameter data (QuakeML) and tomography models have been developed and are being established. Web application technology has been and is being developed to make a jump start to the next generation of data services. A NERIES data portal provides a number of services testing the potential capacities of new open-source web technologies. Data application tools like shakemaps, lossmaps, site response estimation and tools for data processing and visualisation are currently available, although some of these tools are still in an alpha version. A European tomography reference model will be discussed at a special workshop in June 2009. Shakemaps, coherent with the NEIC application, are implemented in several countries, including Turkey, Italy, Romania, and Switzerland. The comprehensive site response software is being distributed and used both inside and outside the project.
NERIES organises several workshops inviting both consortium and non-consortium participants and covering a wide range of subjects: ‘Seismological observatory operation tools', ‘Tomography', ‘Ocean bottom observatories', 'Site response software training', ‘Historical earthquake catalogues', ‘Distribution of acceleration data', etc. Some of these workshops are coordinated with other organisations/projects, like ORFEUS, ESONET, IRIS, etc. NERIES still offers grants to individual researchers or groups to work at facilities such as the Swiss national seismological network (SED/ETHZ, Switzerland), the CEA/DASE facilities in France, the data scanning facilities at INGV (SISMOS), the array facilities of NORSAR (Norway) and the new Conrad Facility in Austria.
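For readers unfamiliar with the QuakeML standard mentioned above: it is XML-based, so earthquake parameters can be read with nothing more than the standard library. The fragment below is hand-written for illustration (a minimal subset of the QuakeML 1.2 basic event description; real catalogue documents carry many more elements):

```python
import xml.etree.ElementTree as ET

# Minimal illustrative QuakeML 1.2 fragment; publicIDs are invented.
QUAKEML = """<q:quakeml xmlns:q="http://quakeml.org/xmlns/quakeml/1.2"
                        xmlns="http://quakeml.org/xmlns/bed/1.2">
  <eventParameters publicID="smi:example/catalog">
    <event publicID="smi:example/event/1">
      <origin publicID="smi:example/origin/1">
        <time><value>2009-04-06T01:32:39Z</value></time>
        <latitude><value>42.33</value></latitude>
        <longitude><value>13.33</value></longitude>
      </origin>
      <magnitude publicID="smi:example/mag/1">
        <mag><value>6.3</value></mag>
      </magnitude>
    </event>
  </eventParameters>
</q:quakeml>"""

BED = "{http://quakeml.org/xmlns/bed/1.2}"  # default (body) namespace
root = ET.fromstring(QUAKEML)
event = root.find(f"{BED}eventParameters/{BED}event")
lat = float(event.find(f"{BED}origin/{BED}latitude/{BED}value").text)
mag = float(event.find(f"{BED}magnitude/{BED}mag/{BED}value").text)
```

The point of the standard is exactly this: any consumer that knows the schema and namespaces can extract parameters from any provider's catalogue.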
Distributed Multisensor Data Fusion under Unknown Correlation and Data Inconsistency
Abu Bakr, Muhammad; Lee, Sukhan
2017-01-01
The paradigm of multisensor data fusion has evolved from a centralized architecture to a decentralized or distributed architecture along with advances in sensor and communication technologies. These days, distributed state estimation and data fusion are widely explored in diverse fields of engineering and control due to their superior performance over the centralized approach in terms of flexibility, robustness to failure, and cost effectiveness in infrastructure and communication. However, distributed multisensor data fusion is not without technical challenges to overcome: namely, dealing with cross-correlation and inconsistency among state estimates and sensor data. In this paper, we review the key theories and methodologies of distributed multisensor data fusion available to date with a specific focus on handling unknown correlation and data inconsistency. We aim at providing readers with a unifying view out of individual theories and methodologies by presenting a formal analysis of their implications. Finally, several directions of future research are highlighted. PMID:29077035
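One of the standard answers to unknown cross-correlation surveyed in this literature is covariance intersection, which fuses two estimates through a convex combination of their information matrices and so never claims more certainty than is justified. A compact sketch of the textbook method (this is generic, not code from the paper; the weight is found by a simple grid search):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, steps=200):
    """Fuse (x1, P1) and (x2, P2) with unknown cross-correlation.

    Covariance intersection: P^-1 = w*P1^-1 + (1-w)*P2^-1, with the
    weight w chosen here by grid search to minimize trace(P).
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, steps + 1):
        P = np.linalg.inv(w * I1 + (1 - w) * I2)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Two estimates that are confident about different components.
x1, P1 = np.array([1.0, 0.0]), np.diag([4.0, 1.0])
x2, P2 = np.array([1.5, 0.2]), np.diag([1.0, 4.0])
x, P = covariance_intersection(x1, P1, x2, P2)
```

The fused covariance is guaranteed consistent for any true correlation, and its trace never exceeds that of the better input estimate.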
NASA Astrophysics Data System (ADS)
Beauregard, Stéphane; Therrien, Marie-Christine; Normandin, Julie-Maude
2010-05-01
Organizational Strategies for Critical Transportation Infrastructure: Characteristics of Urban Resilience. The Case of Montreal. (Stéphane Beauregard, M.Sc. candidate; Julie-Maude Normandin, Ph.D. candidate; Marie-Christine Therrien, professor; all at the École nationale d'administration publique.) The proposed paper presents preliminary results on the resilience of organizations managing critical infrastructure in the Metropolitan Montreal area (Canada). A resilient city is characterized by a network of infrastructures and individuals capable of maintaining their activities in spite of a disturbance (Godschalk, 2002). Critical infrastructures provide essential services for the functioning of society. In a crisis situation, the interruption of, or a decrease in the performance of, critical infrastructures could have important impacts on the population. They are also vulnerable to accidents and cascading effects because of their complexity and tight interdependence (Perrow, 1984). For these reasons, protection and security of essential assets and networks are among the objectives of organizations and governments. But prevention and recovery are two endpoints of a continuum which also includes intermediate concerns: ensuring organizational robustness, or failing with elegance rather than catastrophically. This continuum also includes organizational (or system) resilience: the ability to recover quickly after an interruption has occurred. Wildavsky (1988) proposes that anticipation strategies work better against known problems, while resilience strategies focus on unknown problems. Anticipation policies can unnecessarily immobilize investments against risks, while resilience strategies accept a certain sacrifice in the interest of longer-term survival and adaptation to changing threats.
In addition, too great a confidence in anticipation strategies can erode an organization's capacity to adapt to changing conditions. Each strategy must be adapted to specific conditions. Where uncertainties are important, resilience is probably the most appropriate; where conditions are stable and future projections are generally fair, anticipation works better, although it should be used judiciously (Fiksel, 2003). Anticipation strategies immobilize specific or tangible resources and can eventually be costly in the long term. On the other hand, resilient systems and organizations are those that quickly acquire information about their environments and quickly change their behaviour and their structures, even if the circumstances are chaotic. They communicate easily and openly, and largely mobilize networks of expertise and support (Perrow, 1999). We conducted qualitative research to assess different variables that positively affect organizational resilience in the management of critical infrastructure. We preferred a methodology that retains the complexity of the phenomenon without affecting the nature of the system studied, and that allows us to create pragmatic theoretical concepts (grounded theory) (Glaser and Strauss, 1967); our main concern is not to separate the phenomenon studied from its context. This methodology allows us to better understand the coordination among the organizations of the critical infrastructure network through a process of "sweeping-in" (Dewey, 1938). After conducting a literature review of the various concepts of our research (Comfort, 2002; Lagadec and Michel-Kerjan, 2004; Perrow, 1999; Weick and Sutcliffe, 2001; and more), we conducted numerous interviews and distributed a questionnaire to highlight significant indicators.
For the first part of this research, we targeted the critical transportation infrastructure of the Montreal area because it is crucial and because it includes public, parapublic and private organisations. The first results of this research demonstrate the contribution of different structural and functional factors that influence intraorganizational and interorganizational resilience in the transportation sector of Montreal.
Dynamic Collaboration Infrastructure for Hydrologic Science
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.
2016-12-01
Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that currently offers water scientists the ability to access modeling and data infrastructure in support of data-intensive modeling and analysis. It supports the sharing of and collaboration around "resources", which are social objects defined to include both data and models in a structured, standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud-based computation for the execution of hydrologic models and the analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure that can be accessed from environments like HydroShare are increasing. Storage infrastructure can range from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure can range from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this vast array of data and computing infrastructure without having to learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may require processing a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure to enable the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure."
In this presentation we discuss the results of this proof-of-concept prototype which enabled HydroShare users to readily instantiate virtual infrastructure marshaling arbitrary combinations, varieties, and quantities of distributed data and computing infrastructure in addressing big problems in hydrology.
Kania-Richmond, Ania; Menard, Martha B; Barberree, Beth; Mohring, Marvin
2017-04-01
Conducting research on massage therapy (MT) continues to be a significant challenge. This study set out to explore and identify the structures, processes, and resources required to enable viable, sustainable and high-quality MT research activities in the Canadian context. Participants were academically based researchers and MT professionals involved in research; formative evaluation and a descriptive qualitative approach were applied. Five main themes regarding the requirements of a productive and sustainable MT research infrastructure in Canada were identified: 1) core components, 2) variable components, 3) varying perspectives of stakeholder groups, 4) barriers to creating research infrastructure, and 5) negative metaphors. In addition, participants offered a number of recommendations on how to develop such an infrastructure. While barriers exist that require attention, participants' insights suggest there are various pathways through which a productive and sustainable MT research infrastructure can be achieved. Copyright © 2016 Elsevier Ltd. All rights reserved.
Research infrastructure support to address ecosystem dynamics
NASA Astrophysics Data System (ADS)
Los, Wouter
2014-05-01
Predicting the evolution of ecosystems under climate change or human pressures is a challenge. Even understanding past or current processes is complicated as a result of the many interactions and feedbacks that occur within and between components of the system. This talk will present an example of current research on changes in landscape evolution, hydrology, soil biogeochemical processes, zoological food webs, and plant community succession, and how these affect feedbacks to components of the systems, including the climate system. Multiple observations, experiments, and simulations provide a wealth of data, but not necessarily understanding. Model development on the coupled processes at different spatial and temporal scales is sensitive to variations in data and to parameter changes. Fast high-performance computing may help to visualize the effect of these changes and the potential stability (and reliability) of the models. This may then allow for iteration between data production and models towards stable models, reducing uncertainty and improving the prediction of change. The role of research infrastructures becomes crucial in overcoming barriers to such research. Environmental infrastructures cover physical site facilities, dedicated instrumentation and e-infrastructure. The LifeWatch infrastructure for biodiversity and ecosystem research will provide services for data integration, analysis and modeling, but it has to cooperate intensively with the other kinds of infrastructures in order to support the iteration between data production and model computation. The cooperation in the ENVRI project (Common operations of environmental research infrastructures) is one of the initiatives to foster such multidisciplinary research.
NASA Astrophysics Data System (ADS)
Kershaw, P.
2016-12-01
CEDA, the Centre for Environmental Data Analysis, hosts a range of services on behalf of NERC (Natural Environment Research Council) for the UK environmental sciences community and its work with international partners. It is host to four data centres covering the atmospheric science, earth observation, climate and space data domains. It holds this data on behalf of a number of different providers, each with its own data policy, which has required the development of a comprehensive system to manage access. With the advent of CMIP5, CEDA committed to being one of a number of centres to host the climate model outputs and make them available through the Earth System Grid Federation, a globally distributed software infrastructure developed for this purpose. From the outset, a means of restricting access to datasets was required, necessitating the development of a federated system for authentication and authorisation so that access to data could be managed across multiple providers around the world. From 2012, CEDA has seen a further evolution with the development of JASMIN, a multi-petabyte data analysis facility. Hosted alongside the CEDA archive, it provides a range of services for users including a batch compute cluster, group workspaces and a community cloud. This has required significant changes and enhancements to the access control system. In common with many other examples in the research community, the experiences above underline the difficulties of developing collaborative e-Research infrastructures. Drawing from these, there are some recurring themes: clear requirements need to be established at the outset, recognising that implementing strict access policies can incur additional development and administrative overhead; an appropriate balance is needed between the ease of access desired by end users and the metrics and monitoring required by resource providers.
The major technical challenge is not with the security technologies themselves but with their effective integration into the services and resources they must protect. Effective policy and governance structures are needed for ongoing operations. Federated identity infrastructures often exist only at the national level, making it difficult for international research collaborations to exploit them.
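The per-provider data policies discussed above can be illustrated with a toy authorisation check. The policy-table layout and group names below are invented; real federations such as ESGF resolve user attributes through dedicated attribute and authorisation services rather than an in-memory table:

```python
# Hypothetical policy table: each dataset is either public or restricted
# to one or more access groups set by its provider.
POLICIES = {
    "cmip5_output1": {"public": False, "groups": {"cmip5_research"}},
    "surface_obs": {"public": True, "groups": set()},
}

def may_access(dataset: str, user_groups: set) -> bool:
    # Grant access if the dataset is public or the user shares at least
    # one group with the dataset's policy.
    policy = POLICIES[dataset]
    return policy["public"] or bool(policy["groups"] & user_groups)
```

Even this toy version shows where the administrative overhead noted above comes from: every restricted dataset needs its group memberships maintained across all participating providers.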
Web Services and Handle Infrastructure - WDCC's Contributions to International Projects
NASA Astrophysics Data System (ADS)
Föll, G.; Weigelt, T.; Kindermann, S.; Lautenschlager, M.; Toussaint, F.
2012-04-01
Climate science demands on data management are growing rapidly as climate models grow in the precision with which they depict spatial structures and in the completeness with which they describe a vast range of physical processes. The ExArch project is exploring the challenges of developing a software management infrastructure which will scale to the multi-exabyte archives of climate data that are likely to be crucial to major policy decisions by the end of the decade. The ExArch approach to future integration of exascale climate archives is based, on the one hand, on a distributed web service architecture providing data analysis and quality control functionality across archives; on the other hand, a consistent persistent identifier infrastructure is deployed to support distributed data management and data replication. Distributed data analysis functionality is based on the CDO (Climate Data Operators) package. The CDO tool is used for processing of the archived data and metadata. CDO is a collection of command-line operators to manipulate and analyse climate and forecast model data. A range of formats is supported and over 500 operators are provided. CDO is presently designed to work in a scripting environment with local files. ExArch will extend the tool to support efficient usage in an exascale archive with distributed data and computational resources by providing flexible scheduling capabilities. Quality control will become increasingly important in an exascale computing context. Researchers will be dealing with millions of data files from multiple sources and will need to know whether the files satisfy a range of basic quality criteria. Hence ExArch will provide a flexible and extensible quality control system. The data will be held at more than 30 computing centres and data archives around the world, but to users it will appear as a single archive thanks to a standardized ExArch Web Processing Service.
Data infrastructures such as the one built by ExArch can greatly benefit from assigning persistent identifiers (PIDs) to the main entities, such as data and metadata records. A PID should then not only consist of a globally unique identifier, but also support built-in facilities to relate PIDs to each other, to build multi-hierarchical virtual collections and to enable attaching basic metadata directly to PIDs. With such a toolset, PIDs can support crucial data management tasks. For example, data replication performed in ExArch can be supported through PIDs as they help to establish durable links between identical copies. By linking derivative data objects together, their provenance can be traced with a level of detail and reliability currently unavailable in the Earth system modelling domain. Regarding data transfers, virtual collections of PIDs may be used to package data prior to transmission. If the PID of such a collection is used as the primary key in data transfers, the safety of transfer and the traceability of data objects across repositories increase. End users can benefit from PIDs as well, since they make data discovery independent of particular storage sites and enable user-friendly communication about primary research objects. A generic PID system can in fact be a fundamental building block for scientific e-infrastructures across projects and domains.
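The PID facilities described (virtual collections, provenance links, replica tracking) can be sketched with a toy in-memory registry. The handle prefix and record layout below are invented for illustration; a real deployment would use the Handle System with its own record types:

```python
import uuid

class PIDRegistry:
    """Toy registry: each PID maps to a location, an optional provenance
    link, and an optional list of member PIDs (a virtual collection)."""

    def __init__(self):
        self.records = {}

    def mint(self, location, derived_from=None, members=None):
        # "21.EXAMPLE" is a made-up handle prefix.
        pid = f"hdl:21.EXAMPLE/{uuid.uuid4().hex[:8]}"
        self.records[pid] = {
            "location": location,
            "derived_from": derived_from,  # provenance link
            "members": members or [],      # virtual collection
        }
        return pid

    def resolve(self, pid):
        return self.records[pid]

    def provenance_chain(self, pid):
        # Follow derived_from links back to the original object.
        chain = [pid]
        while self.records[pid]["derived_from"]:
            pid = self.records[pid]["derived_from"]
            chain.append(pid)
        return chain

reg = PIDRegistry()
raw = reg.mint("archive-a:/cmip/raw.nc")
replica = reg.mint("archive-b:/cmip/raw.nc", derived_from=raw)
coll = reg.mint("virtual", members=[raw, replica])
```

Transferring the collection PID `coll` instead of individual file lists is the packaging idea described above: the receiver resolves one identifier and recovers the durable links to every member and its provenance.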
Infrastructure for Automatic Dynamic Deployment of J2EE Applications in Distributed Environments
2005-01-01
Performing organization: Defense Advanced Research Projects Agency, 3701 North Fairfax Drive, Arlington, VA 22203-1714. ...the paper is organized as follows. Section 2 provides necessary background for understanding the specifics of the J2EE component technology which are
Modeling of the Space Station Freedom data management system
NASA Technical Reports Server (NTRS)
Johnson, Marjory J.
1990-01-01
The Data Management System (DMS) is the information and communications system onboard Space Station Freedom (SSF). Extensive modeling of the DMS is being conducted throughout NASA to aid in the design and development of this vital system. Modeling activities at NASA Ames Research Center on the DMS network infrastructure are discussed, with focus on the modeling of the Fiber Distributed Data Interface (FDDI) token-ring protocol and experimental testbedding of networking aspects of the DMS.
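The core rule that models of FDDI's timed-token protocol capture can be sketched in a few lines: a station may send asynchronous traffic only when the token arrives ahead of the target token rotation time (TTRT), which bounds the worst-case rotation near twice the TTRT. The parameters below are arbitrary, and real FDDI adds late counters, claim frames and other machinery this sketch omits:

```python
TTRT = 8.0           # target token rotation time (ms), arbitrary
SYNC = [0.5] * 4     # per-station synchronous allocation (ms)
HOP = 0.1            # token walk time between stations (ms)

def simulate(rotations_wanted=40):
    """Greedy stations under the simplified timed-token rule."""
    n = len(SYNC)
    last = [0.0] * n         # last token arrival time at each station
    t, i = 0.0, 0
    rotations = []
    for _ in range(rotations_wanted * n):
        trt = t - last[i]    # measured token rotation time
        rotations.append(trt)
        last[i] = t
        t += SYNC[i]         # synchronous traffic is always allowed
        if trt < TTRT:
            # Token is early: asynchronous traffic may use the slack.
            t += TTRT - trt
        t += HOP
        i = (i + 1) % n
    return rotations[n:]     # drop the start-up rotations

rots = simulate()
```

Under this rule the measured rotation times stay below 2 × TTRT, which is the classic bound that makes the protocol attractive for real-time traffic like the DMS.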
Data discovery and data processing for environmental research infrastructures
NASA Astrophysics Data System (ADS)
Los, Wouter; Beranzoli, Laura; Corriero, Giuseppe; Cossu, Roberto; Fiore, Nicola; Hardisty, Alex; Legré, Yannick; Pagano, Pasquale; Puglisi, Giuseppe; Sorvari, Sanna; Turunen, Esa
2013-04-01
The European ENVRI project (Common operations of Environmental Research Infrastructures) is addressing common ICT solutions for the research infrastructures selected in the ESFRI Roadmap. More specifically, the project is looking for solutions that will assist interdisciplinary users who want to benefit from the data and other services of more than a single research infrastructure. However, the infrastructure architectures, the data, data formats, scales and granularity are very different. Indeed, they deal with diverse scientific disciplines, from plate tectonics, the deep sea, sea and land surface up to atmosphere and troposphere, and from the dead to the living environment, with a variety of instruments producing increasingly large amounts of data. One of the approaches in the ENVRI project is to design a common Reference Model that will serve to promote infrastructure interoperability at the data, technical and service levels. The analysis of the characteristics of the environmental research infrastructures assisted in developing the Reference Model, which is also an example for comparable infrastructures worldwide. Still, it is already important for users to have facilities available for multi-disciplinary data discovery and data processing; the rise of systems research, addressing Earth as a single complex and coupled system, requires such capabilities. So, another approach in the project is to adapt existing ICT solutions to short-term applications. This is being tested in a few case studies. One of these looks for possible coupled processes following a volcanic eruption in the vertical column from deep sea to troposphere. Another deals with either volcanic or human impacts on atmospheric and sea CO2 pressure and the implications for sea acidification, marine biodiversity and their ecosystems. A third deals with the variety of sensor and satellite data sensing the area around a volcanic cone.
Preliminary results on these studies will be reported. The common results will assist in shaping more generic solutions to be adopted by the appropriate research infrastructures.
Cybersecurity Intrusion Detection and Monitoring for Field Area Network: Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pietrowicz, Stanley
This report summarizes the key technical accomplishments, industry impact and performance of the I2-CEDS grant entitled “Cybersecurity Intrusion Detection and Monitoring for Field Area Network”. Led by Applied Communication Sciences (ACS/Vencore Labs) in conjunction with its utility partner Sacramento Municipal Utility District (SMUD), the project accelerated research on a first-of-its-kind cybersecurity monitoring solution for Advanced Meter Infrastructure and Distribution Automation field networks. It advanced the technology to a validated, full-scale solution that detects anomalies and intrusion events and improves utility situational awareness and visibility. The solution was successfully transitioned and commercialized for production use as SecureSmart™ Continuous Monitoring. Discoveries made with SecureSmart™ Continuous Monitoring led to tangible and demonstrable improvements in the security posture of the US national electric infrastructure.
The open black box: The role of the end-user in GIS integration
Poore, B.S.
2003-01-01
Formalist theories of knowledge that underpin GIS scholarship on integration neglect the importance and creativity of end-users in knowledge construction. This has practical consequences for the success of large distributed databases that contribute to spatial-data infrastructures. Spatial-data infrastructures depend on participation at local levels, such as counties and watersheds, and they must be developed to support feedback from local users. Looking carefully at the work of scientists in a watershed in Puget Sound, Washington, USA during the salmon crisis reveals that the work of these end-users articulates different worlds of knowledge. This view of the user is consonant with recent work in science and technology studies and research into computer-supported cooperative work. GIS theory will be enhanced when it makes room for these users and supports their practical work. © Canadian Association of Geographers.
Distribution of green infrastructure along walkable roads
Low-income and minority neighborhoods frequently lack healthful resources to which wealthier communities have access. Though important, the addition of facilities such as recreation centers can be costly and take time to implement. Urban green infrastructure, such as street trees...
Adapting Water Infrastructure to Non-stationary Climate Changes
Water supply and sanitation are carried out by three major types of water infrastructure: drinking water treatment and distribution, wastewater collection and treatment, and storm water collection and management. Their sustainability is measured by resilience against and adapta...
Kim, Katherine K; Browe, Dennis K; Logan, Holly C; Holm, Roberta; Hack, Lori; Ohno-Machado, Lucila
2014-01-01
There is currently limited information on best practices for the development of governance requirements for distributed research networks (DRNs), an emerging model that promotes clinical data reuse and improves timeliness of comparative effectiveness research. Much of the existing information is based on a single type of stakeholder such as researchers or administrators. This paper reports on a triangulated approach to developing DRN data governance requirements based on a combination of policy analysis with experts, interviews with institutional leaders, and patient focus groups. This approach is illustrated with an example from the Scalable National Network for Effectiveness Research, which resulted in 91 requirements. These requirements were analyzed against the Fair Information Practice Principles (FIPPs) and Health Insurance Portability and Accountability Act (HIPAA) protected versus non-protected health information. The requirements addressed all FIPPs, showing how a DRN's technical infrastructure is able to fulfill HIPAA regulations, protect privacy, and provide a trustworthy platform for research. Published by the BMJ Publishing Group Limited.
EPA Office of Research and Development Green Infrastructure Research
This presentation provides an overview introduction to the USEPA Office of Research and Development (ORD)'s ongoing green infrastructure (GI) research efforts for stormwater management. GI approaches that increase infiltration, evapotranspiration, and rainwater harvesting offer ...
Trinh-Shevrin, Chau; Ro, Marguerite; Tseng, Winston; Islam, Nadia Shilpi; Rey, Mariano J; Kwon, Simona C
2012-01-01
Considerable progress in Asian American health research has occurred over the last two decades. However, greater and sustained federal support is needed to reduce health disparities in Asian American communities. PURPOSE OF THE ARTICLE: This paper reviews federal policies that support infrastructure for conducting minority health research and highlights one model for strengthening research capacity and infrastructure in Asian American communities. Research center infrastructures can play a significant role in addressing pipeline/workforce challenges, fostering campus-community research collaborations, engaging communities in health, disseminating evidence-based strategies and health information, and supporting policy development. Research centers provide the capacity needed for academic institutions and communities to work together synergistically toward the goal of reducing health disparities in the Asian American community. Policies that support the development of concentrated and targeted research for Asian Americans must continue so that these centers can reach their full potential.
In support of the Agency's Sustainable Water Infrastructure Initiative, EPA's Office of Research and Development initiated the Aging Water Infrastructure Research Program in 2007. The program, with its core focus on the support of strategic asset management, is designed to facili...
Towards a single seismological service infrastructure in Europe
NASA Astrophysics Data System (ADS)
Spinuso, A.; Trani, L.; Frobert, L.; Van Eck, T.
2012-04-01
In the last five years, services and data providers within the seismological community in Europe have focused their efforts on migrating their archives towards a Service Oriented Architecture (SOA). This process pragmatically follows technological trends and available solutions, aiming to improve all data stewardship activities. These advancements are possible thanks to the cooperation and follow-ups of several EC infrastructural projects that, by looking at general-purpose techniques, combine their developments with a multidisciplinary platform for Earth observation as the final common objective (EPOS, Earth Plate Observation System). One of the first results of this effort is the Earthquake Data Portal (http://www.seismicportal.eu), which provides a collection of tools to discover, visualize and access a variety of seismological data sets such as seismic waveforms, accelerometric data, earthquake catalogs and parameters. The Portal offers a cohesive distributed search environment, linking data search and access across multiple data providers through interactive web services, map-based tools and diverse command-line clients. Our work continues under other EU FP7 projects; here we address initiatives in two of them. The NERA project (Network of European Research Infrastructures for Earthquake Risk Assessment and Mitigation) will implement a Common Services Architecture based on OGC service APIs, in order to provide resource-oriented common interfaces across the data access and processing services. This will improve interoperability between tools and across projects, enabling the development of higher-level applications that can uniformly access the data and processing services of all participants. This effort will be conducted jointly with the VERCE project (Virtual Earthquake and Seismology Research Community for Europe).
VERCE aims to enable seismologists to exploit the wealth of seismic data within a data-intensive computation framework tailored to the specific needs of the community. It will provide a new interoperable infrastructure as the computational backbone lying behind the publicly available interfaces. VERCE will face the challenges of implementing a service-oriented architecture that provides an efficient layer between the data and Grid infrastructures, coupling HPC data-analysis and HPC data-modeling applications through the execution of workflows and data-sharing mechanisms. Online registries of interoperable workflow components, storage of intermediate results, and data provenance are the aspects currently under investigation to make the VERCE facilities usable by a wide range of users, data providers and service providers. For these purposes, the adoption of a Digital Object Architecture, creating online catalogs that reference and semantically describe all these distributed resources, such as datasets, computational processes and derivative products, is seen as one viable solution to monitor and steer the usage of the infrastructure, increasing its efficiency and the cooperation within the community.
NASA Astrophysics Data System (ADS)
Zhong, L.; Takano, K.; Ji, Y.; Yamada, S.
2015-12-01
The disruption of telecommunications is one of the most critical consequences of natural hazards. With the rapid expansion of mobile communications, the mobile communication infrastructure plays a fundamental role in disaster response and recovery activities. For this reason, its disruption leads to loss of life and property due to information delays and errors, so disaster preparedness and response for the mobile communication infrastructure itself is quite important. In many experienced disasters, the disruption of mobile communication networks is caused by network congestion and subsequent long-term power outages. To reduce this disruption, knowledge of communication demands during disasters is necessary, and big data analytics provides a promising way to predict those demands by analyzing large amounts of operational data from mobile users in a large-scale mobile network. Under the US-Japan collaborative project on 'Big Data and Disaster Research (BDD)', supported by the Japan Science and Technology Agency (JST) and the National Science Foundation (NSF), we are investigating the application of big data techniques to the disaster preparedness and response of mobile communication infrastructure. Specifically, in this research we exploit the large volume of operational information on mobile users to predict communication needs at different times and locations. By incorporating other data, such as the shake distribution of an estimated major earthquake and the power outage map, we are able to predict the number of stranded people who cannot confirm their safety or ask for help due to network disruption. This result could further help network operators assess the vulnerability of their infrastructure and make suitable decisions for disaster preparedness and response.
In this presentation, we introduce the results obtained from big data analytics of mobile-user statistical information and discuss their implications.
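The demand-prediction step described above can be illustrated with a minimal sketch: aggregate per-cell, per-hour activity counts from operational records, then combine an outage map with cell occupancy to estimate stranded users. The record format, cell identifiers, and occupancy figures below are illustrative assumptions, not the project's actual data model.

```python
from collections import defaultdict

def hourly_demand(records):
    """Aggregate mobile activity records into per-cell, per-hour event counts.

    records: iterable of (cell_id, hour) tuples, one per user event
             (an illustrative stand-in for real operator logs).
    """
    counts = defaultdict(int)
    for cell, hour in records:
        counts[(cell, hour)] += 1
    return dict(counts)

def stranded_estimate(outage_cells, occupancy):
    """Estimate people unable to communicate: occupants of cells whose
    serving infrastructure lies in the predicted outage area."""
    return sum(n for cell, n in occupancy.items() if cell in outage_cells)
```

In practice the occupancy map would itself be estimated from historical activity at the predicted disaster time, then intersected with the power-outage and shake-distribution layers.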
Role of EPA in Asset Management Research – The Aging Water Infrastructure Research Program
This slide presentation provides an overview of the EPA Office of Research and Development’s Aging Water infrastructure Research Program (AWIRP). The research program origins, goals, products, and plans are described. The research program focuses on four areas: condition asses...
3D WindScanner lidar measurements of wind and turbulence around wind turbines, buildings and bridges
NASA Astrophysics Data System (ADS)
Mikkelsen, T.; Sjöholm, M.; Angelou, N.; Mann, J.
2017-12-01
WindScanner is a distributed research infrastructure developed at DTU with the participation of a number of European countries. It consists of a mobile, technically advanced facility for remote measurement of wind and turbulence in 3D. The WindScanners provide coordinated measurements of the entire wind and turbulence fields, with all three wind components scanned in 3D space. Although primarily developed for research related to on- and offshore wind turbines and wind farms, the facility is also well suited for scanning turbulent wind fields around buildings, bridges, aviation structures and flows in urban environments. The mobile WindScanner facility enables 3D scanning of wind and turbulence fields in full scale within the atmospheric boundary layer at ranges from 10 meters to 5 (10) kilometers. Measurements of turbulent coherent structures are used to investigate flow patterns and dynamic loads on turbines, building structures and bridges, and to optimize the siting of, for example, wind farms and suspension bridges. This paper presents our achievements to date and briefly reviews the state of the art of WindScanner measurement technology, with examples of wind engineering applications.
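The coordinated multi-lidar measurement principle can be sketched as follows: each scanner measures only the radial (line-of-sight) velocity v_r = n · u, so the full vector (u, v, w) is recovered by solving the 3x3 linear system formed by three non-coplanar beams. The beam geometry below is an illustrative assumption, not the actual facility configuration.

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def retrieve_wind(beam_dirs, radial_vels):
    """Solve n_k . (u, v, w) = v_r,k for three unit line-of-sight vectors
    n_k via Cramer's rule; requires non-coplanar beams (det != 0)."""
    D = det3(beam_dirs)
    wind = []
    for j in range(3):
        M = [row[:] for row in beam_dirs]
        for k in range(3):
            M[k][j] = radial_vels[k]   # replace column j with observations
        wind.append(det3(M) / D)
    return tuple(wind)
```

For example, a true wind of (5, 2, 0) m/s seen by beams along (1, 0, 0), (0.6, 0.8, 0) and (0, 0, 1) produces radial velocities 5.0, 4.6 and 0.0, from which the full vector is recovered.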
NASA Astrophysics Data System (ADS)
Hasbi, M.; Darma, R.; Yamin, M.; Nurdin, M.; Rizal, M.
2018-05-01
Cocoa is an important commodity: 90% of farmers are involved, it is easily marketed, and it can potentially be harvested throughout the year. However, cocoa productivity has tended to decrease, averaging only 300 kg hectare-1 year-1, far below the potential productivity of two tons. Water management is an alternative way to increase productivity by harvesting rainwater in the hilly cocoa farm area and distributing the water by gravity. The research objective was to describe how to manage rainwater in the hilly cocoa farm area so that the water needs of the cocoa farm are met during the dry season. An important implication of this management is water availability that supports cocoa cultivation throughout the year. This research used a qualitative method with a descriptive approach to explain the appropriate technical specification of the infrastructure needed to support rainwater management, and it generated several mathematical formulas to support rainwater management infrastructure. Implementing appropriate rainwater utilization management for a cocoa farm will ensure the availability of water during the dry season, allowing the farm to produce cacao fruit throughout the year.
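The kind of sizing formula the paper refers to can be sketched with a simple monthly water balance: convert rainfall depth on the catchment to an inflow volume, then find the largest cumulative shortfall of inflow against dry-season demand (a sequent-peak-style estimate of required storage). The runoff coefficient and figures here are illustrative assumptions, not the paper's actual formulas.

```python
def monthly_inflow_m3(rain_mm, area_m2, runoff_coeff=0.8):
    """Harvested volume per month: rainfall depth (mm) x catchment area x
    runoff coefficient, converted from litres per m2 to cubic metres."""
    return [r * area_m2 * runoff_coeff / 1000.0 for r in rain_mm]

def required_storage_m3(inflow_m3, demand_m3):
    """Largest cumulative deficit of inflow versus demand over the season;
    a full reservoir is assumed to spill, so only deficits accumulate."""
    balance, need = 0.0, 0.0
    for q_in, q_out in zip(inflow_m3, demand_m3):
        balance += q_in - q_out
        balance = min(balance, 0.0)      # surplus spills over the weir
        need = max(need, -balance)
    return need
```

With one wet month followed by two dry months of constant demand, the required storage equals the two dry months' unmet demand, which is the quantity a gravity-fed reservoir on the hilltop must hold.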
Enabling BOINC in infrastructure as a service cloud system
NASA Astrophysics Data System (ADS)
Montes, Diego; Añel, Juan A.; Pena, Tomás F.; Uhe, Peter; Wallom, David C. H.
2017-02-01
Volunteer or crowd computing is becoming increasingly popular for solving complex research problems from an increasingly diverse range of areas. The majority of such projects have been built using the Berkeley Open Infrastructure for Network Computing (BOINC) platform, which provides a range of services to manage all computational aspects of a project. The BOINC system is ideal where the research community involved needs low-cost access to massive computing resources and there is significant public interest in the research being done. We discuss the way in which cloud services can help BOINC-based projects deliver results in a fast, on-demand manner. This is difficult to achieve using volunteers alone, and at the same time, using scalable cloud resources for short on-demand projects can optimize the use of the available resources. We show how this design can be used as an efficient distributed computing platform within the cloud, and outline new approaches that could open up new possibilities in this field, using Climateprediction.net (http://www.climateprediction.net/) as a case study.
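The trade-off the authors describe (volunteer throughput versus on-demand cloud capacity) can be illustrated with a back-of-the-envelope sizing helper: given a batch of independent workunits and a deadline, how many cloud instances must be attached to the project. The parameters are hypothetical; a real deployment would also weigh cost, spot availability and task heterogeneity.

```python
import math

def instances_needed(n_tasks, task_core_hours, deadline_hours, cores_per_instance):
    """Instances required to finish an embarrassingly parallel batch by a
    deadline, assuming perfect load balancing across all cores."""
    total_core_hours = n_tasks * task_core_hours
    cores = math.ceil(total_core_hours / deadline_hours)
    return math.ceil(cores / cores_per_instance)
```

For example, 1000 two-core-hour workunits due in 50 hours need 40 concurrent cores, i.e. five 8-core instances; the same batch left to volunteers completes only as fast as attached hosts happen to return results.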
Wyoming Landscape Conservation Initiative data management and integration
Latysh, Natalie; Bristol, R. Sky
2011-01-01
Six Federal agencies, two State agencies, and two local entities formally support the Wyoming Landscape Conservation Initiative (WLCI) and work together on a landscape scale to manage fragile habitats and wildlife resources amidst growing energy development in southwest Wyoming. The U.S. Geological Survey (USGS) was tasked with implementing targeted research and providing scientific information about southwest Wyoming to inform the development of WLCI habitat enhancement and restoration projects conducted by land management agencies. Many WLCI researchers and decisionmakers representing the Bureau of Land Management, U.S. Fish and Wildlife Service, the State of Wyoming, and others have overwhelmingly expressed the need for a stable, robust infrastructure to promote sharing of data resources produced by multiple entities, including metadata adequately describing the datasets. Descriptive metadata facilitates use of the datasets by users unfamiliar with the data. Agency representatives advocate development of common data handling and distribution practices among WLCI partners to enhance availability of comprehensive and diverse data resources for use in scientific analyses and resource management. The USGS Core Science Informatics (CSI) team is developing and promoting data integration tools and techniques across USGS and partner entity endeavors, including a data management infrastructure to aid WLCI researchers and decisionmakers.
Parallel digital forensics infrastructure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebrock, Lorie M.; Duggan, David Patrick
2009-10-01
This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the implementation of the parallel digital forensics (PDF) infrastructure architecture and implementation.
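A minimal sketch of the kind of data parallelism such an infrastructure supports: hashing evidence chunks concurrently so that terabyte-scale images can be fingerprinted faster than in a sequential pass. A thread pool is used here for portability; a real deployment would shard across processes or cluster nodes. The chunking scheme is an illustrative assumption, not the report's architecture.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def sha256_hex(chunk):
    """Fingerprint one evidence chunk (bytes)."""
    return hashlib.sha256(chunk).hexdigest()

def parallel_hashes(chunks, workers=4):
    """Hash chunks concurrently; results come back in input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(sha256_hex, chunks))
```

Because each chunk is independent, the same pattern scales out to processes or nodes with no change to the per-chunk logic, which is exactly the property parallel forensics algorithms exploit.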
pSCANNER: patient-centered Scalable National Network for Effectiveness Research
Ohno-Machado, Lucila; Agha, Zia; Bell, Douglas S; Dahm, Lisa; Day, Michele E; Doctor, Jason N; Gabriel, Davera; Kahlon, Maninder K; Kim, Katherine K; Hogarth, Michael; Matheny, Michael E; Meeker, Daniella; Nebeker, Jonathan R
2014-01-01
This article describes the patient-centered Scalable National Network for Effectiveness Research (pSCANNER), which is part of the recently formed PCORnet, a national network composed of learning healthcare systems and patient-powered research networks funded by the Patient Centered Outcomes Research Institute (PCORI). It is designed to be a stakeholder-governed federated network that uses a distributed architecture to integrate data from three existing networks covering over 21 million patients in all 50 states: (1) VA Informatics and Computing Infrastructure (VINCI), with data from Veteran Health Administration's 151 inpatient and 909 ambulatory care and community-based outpatient clinics; (2) the University of California Research exchange (UC-ReX) network, with data from UC Davis, Irvine, Los Angeles, San Francisco, and San Diego; and (3) SCANNER, a consortium of UCSD, Tennessee VA, and three federally qualified health systems in the Los Angeles area supplemented with claims and health information exchange data, led by the University of Southern California. Initial use cases will focus on three conditions: (1) congestive heart failure; (2) Kawasaki disease; (3) obesity. Stakeholders, such as patients, clinicians, and health service researchers, will be engaged to prioritize research questions to be answered through the network. We will use a privacy-preserving distributed computation model with synchronous and asynchronous modes. The distributed system will be based on a common data model that allows the construction and evaluation of distributed multivariate models for a variety of statistical analyses. PMID:24780722
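The privacy-preserving distributed computation model can be illustrated in miniature: each site computes local sufficient statistics behind its own firewall, and only those aggregates (never patient-level rows) are pooled by the coordinator. This is a generic sketch of the idea, not pSCANNER's actual protocol or data model.

```python
def local_summary(values):
    """Runs inside one site; only counts and sums leave the firewall."""
    return {"n": len(values),
            "sum": sum(values),
            "sum_sq": sum(v * v for v in values)}

def pooled_mean_variance(summaries):
    """Coordinator combines site aggregates into exact pooled statistics."""
    n = sum(s["n"] for s in summaries)
    total = sum(s["sum"] for s in summaries)
    sum_sq = sum(s["sum_sq"] for s in summaries)
    mean = total / n
    variance = sum_sq / n - mean * mean      # population variance
    return mean, variance
```

The same pattern extends to distributed multivariate models: sites exchange gradients or covariance blocks instead of raw records, and synchronous or asynchronous rounds determine how the coordinator aggregates them.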
Critical Infrastructure Protection: EMP Impacts on the U.S. Electric Grid
NASA Astrophysics Data System (ADS)
Boston, Edwin J., Jr.
The purpose of this research is to identify the vulnerabilities of the United States electric grid infrastructure to electromagnetic pulse attacks and the cyber-based impacts of those vulnerabilities on the electric grid. Additionally, the research identifies multiple defensive strategies designed to harden the electric grid against electromagnetic pulse attack, including prevention, mitigation and recovery postures. Research results confirm the importance of the electric grid to the United States critical infrastructure system and that an electromagnetic pulse attack against the electric grid could result in grid degradation, damage to critical infrastructures and the potential for societal collapse. The conclusions of this research indicate that while an electromagnetic pulse attack against the United States electric grid could have catastrophic impacts on American society, many defensive strategies designed to prevent, mitigate and/or recover from such an attack are currently under consideration. However, additional research is essential to further identify future target-hardening opportunities, efficient implementation strategies and funding resources.
The iPlant Collaborative: Cyberinfrastructure for Enabling Data to Discovery for the Life Sciences
Merchant, Nirav; Lyons, Eric; Goff, Stephen; Vaughn, Matthew; Ware, Doreen; Micklos, David; Antin, Parker
2016-01-01
The iPlant Collaborative provides life science research communities access to comprehensive, scalable, and cohesive computational infrastructure for data management; identity management; collaboration tools; and cloud, high-performance, high-throughput computing. iPlant provides training, learning material, and best practice resources to help all researchers make the best use of their data, expand their computational skill set, and effectively manage their data and computation when working as distributed teams. iPlant’s platform permits researchers to easily deposit and share their data and deploy new computational tools and analysis workflows, allowing the broader community to easily use and reuse those data and computational analyses. PMID:26752627
CloudMan as a platform for tool, data, and analysis distribution
2012-01-01
Background Cloud computing provides an infrastructure that facilitates large scale computational analysis in a scalable, democratized fashion, However, in this context it is difficult to ensure sharing of an analysis environment and associated data in a scalable and precisely reproducible way. Results CloudMan (usecloudman.org) enables individual researchers to easily deploy, customize, and share their entire cloud analysis environment, including data, tools, and configurations. Conclusions With the enabled customization and sharing of instances, CloudMan can be used as a platform for collaboration. The presented solution improves accessibility of cloud resources, tools, and data to the level of an individual researcher and contributes toward reproducibility and transparency of research solutions. PMID:23181507
BioenergyKDF: Enabling Spatiotemporal Data Synthesis and Research Collaboration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, Aaron T; Movva, Sunil; Karthik, Rajasekar
2014-01-01
The Bioenergy Knowledge Discovery Framework (BioenergyKDF) is a scalable, web-based collaborative environment for scientists working on bioenergy related research in which the connections between data, literature, and models can be explored and more clearly understood. The fully-operational and deployed system, built on multiple open source libraries and architectures, stores contributions from the community of practice and makes them easy to find, but that is just its base functionality. The BioenergyKDF provides a national spatiotemporal decision support capability that enables data sharing, analysis, modeling, and visualization as well as fosters the development and management of the U.S. bioenergy infrastructure, which is an essential component of the national energy infrastructure. The BioenergyKDF is built on a flexible, customizable platform that can be extended to support the requirements of any user community, especially those that work with spatiotemporal data. While there are several community data-sharing software platforms available, some developed and distributed by national governments, none of them have the full suite of capabilities available in BioenergyKDF. For example, this component-based platform and database-independent architecture allows it to be quickly deployed to existing infrastructure and to connect to existing data repositories (spatial or otherwise). As new data, analysis, and features are added, the BioenergyKDF will help lead research and support decisions concerning bioenergy into the future, but will also enable the development and growth of additional communities of practice both inside and outside of the Department of Energy. These communities will be able to leverage the substantial investment the agency has made in the KDF platform to quickly stand up systems that are customized to their data and research needs.
Water and Carbon Footprints for Sustainability Analysis of Urban Infrastructure
Water and transportation infrastructures define spatial distribution of urban population and economic activities. In this context, energy and water consumed per capita are tangible measures of how efficient water and transportation systems are constructed and operated. At a hig...
COST MODELS FOR WATER SUPPLY DISTRIBUTION SYSTEMS
A major challenge for society in the twenty-first century will be replacement, design and optimal management of urban infrastructure. It is estimated that the current world wide demand for infrastructure investment is approximately three trillion dollars annually. A Drinking Wate...
Centre for Research Infrastructure of Polish GNSS Data - response and possible contribution to EPOS
NASA Astrophysics Data System (ADS)
Araszkiewicz, Andrzej; Rohm, Witold; Bosy, Jaroslaw; Szolucha, Marcin; Kaplon, Jan; Kroszczynski, Krzysztof
2017-04-01
In the frame of the first call under Action 4.2 (Development of modern research infrastructure of the science sector) in the Smart Growth Operational Programme 2014-2020, the "EPOS-PL" project was launched in late 2016. The following institutes are responsible for its implementation: Institute of Geophysics, Polish Academy of Sciences (Project Leader); Academic Computer Centre Cyfronet, AGH University of Science and Technology; Central Mining Institute; the Institute of Geodesy and Cartography; Wrocław University of Environmental and Life Sciences; and the Military University of Technology. In addition, resources constituting the entrepreneur's own contribution will come from the Polish Mining Group. The EPOS-PL Research Infrastructure will integrate both existing and newly built National Research Infrastructures (Theme Centre for Research Infrastructures), which, under the premises of the EPOS programme, are financed exclusively from national funds. In addition, an e-science platform will be developed. The Centre for Research Infrastructure of GNSS Data (CIBDG - Task 5) will be built on the experience and facilities of two institutions: the Military University of Technology and Wrocław University of Environmental and Life Sciences. The project includes the construction of the National GNSS Repository with data QC procedures and the adaptation of two Regional GNSS Analysis Centres for rapid and long-term geodynamic monitoring.
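One routine QC procedure for such a repository is checking the epoch completeness of daily observation files before they are accepted. A minimal sketch follows; the 30 s sampling interval and 95% acceptance threshold are illustrative assumptions, not the project's specification.

```python
def completeness(epochs_present, interval_s=30, day_s=86400):
    """Fraction of expected observation epochs actually present in a
    daily GNSS file (2880 epochs at 30 s sampling)."""
    expected = day_s // interval_s
    return len(set(epochs_present)) / expected

def passes_qc(epochs_present, threshold=0.95):
    """Flag files whose completeness falls below the acceptance threshold."""
    return completeness(epochs_present) >= threshold
```

A repository would typically log the completeness value alongside other QC metrics (cycle slips, multipath) so analysis centres can filter stations before geodynamic processing.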
Network and computing infrastructure for scientific applications in Georgia
NASA Astrophysics Data System (ADS)
Kvatadze, R.; Modebadze, Z.
2016-09-01
The status of the network and computing infrastructure and the services available to the research and education community of Georgia are presented. The Research and Educational Networking Association (GRENA) provides the following services: Internet connectivity, network services, cyber security, technical support, etc. Computing resources used by the research teams are located at GRENA and at major state universities. The GE-01-GRENA site is included in the European Grid Infrastructure. The paper also contains information about the programs of the Learning Center and the research and development projects in which GRENA participates.
Research Practices, Evaluation and Infrastructure in the Digital Environment
ERIC Educational Resources Information Center
Houghton, John W.
2004-01-01
This paper examines changing research practices in the digital environment and draws out implications for research evaluation and the development of research infrastructure. Reviews of the literature, quantitative indicators of research activities and our own field research in Australia suggest that a new mode of knowledge production is emerging,…
Changing Research Practices and Research Infrastructure Development
ERIC Educational Resources Information Center
Houghton, John W.
2005-01-01
This paper examines changing research practices in the digital environment and draws out implications for the development of research infrastructure. Reviews of the literature, quantitative indicators of research activities and our own field research in Australia suggest that there is a new mode of knowledge production emerging, changing research…
A Drupal-Based Collaborative Framework for Science Workflows
NASA Astrophysics Data System (ADS)
Pinheiro da Silva, P.; Gandara, A.
2010-12-01
Cyber-infrastructure combines technical infrastructure with organizational practices and social norms to support scientific teams that work together, or depend on each other, to conduct scientific research. Such cyber-infrastructure enables the sharing of information and data so that scientists can leverage knowledge and expertise through automation. Scientific workflow systems have been used to build automated scientific systems that scientists use to conduct research and, as a result, create artifacts in support of scientific discoveries. These complex systems are often developed by teams of scientists located in different places, e.g., working in distinct buildings, and sometimes in different time zones, e.g., working in distinct national laboratories. The sharing of workflow specifications is currently supported by version control systems such as CVS or Subversion. Discussions about the design, improvement, and testing of these specifications, however, often happen elsewhere, e.g., through the exchange of email messages and IM chats. Carrying on a discussion about these specifications is challenging because comments and specifications are not necessarily connected. For instance, a person reading a comment about a given workflow specification may not be able to see the workflow, and even if they can, they may not know to which part of the workflow a given comment applies. In this paper, we discuss the design, implementation and use of CI-Server, a Drupal-based infrastructure supporting the collaboration of both local and distributed teams of scientists using scientific workflows.
CI-Server has three primary goals: to enable information sharing by providing tools that scientists can use within their scientific research to process data, publish and share artifacts; to build community by providing tools that support discussions between scientists about artifacts used or created through scientific processes; and to leverage the knowledge collected within the artifacts and scientific collaborations to support scientific discoveries.
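The key design point above, keeping comments attached to the workflow elements they discuss, can be illustrated with a small sketch. This is a hypothetical model, not CI-Server's actual Drupal schema; the class and method names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowNode:
    node_id: str
    label: str

@dataclass
class Comment:
    author: str
    text: str
    node_id: str  # anchor: the workflow node this comment applies to

@dataclass
class WorkflowDiscussion:
    nodes: dict = field(default_factory=dict)
    comments: list = field(default_factory=list)

    def add_node(self, node):
        self.nodes[node.node_id] = node

    def comment_on(self, author, text, node_id):
        # refuse comments that are not anchored to a known node,
        # so discussion and specification cannot drift apart
        if node_id not in self.nodes:
            raise KeyError(f"unknown workflow node: {node_id}")
        self.comments.append(Comment(author, text, node_id))

    def comments_for(self, node_id):
        return [c for c in self.comments if c.node_id == node_id]

d = WorkflowDiscussion()
d.add_node(WorkflowNode("filter-1", "Remove outliers"))
d.comment_on("alice", "Should the threshold be 3 sigma?", "filter-1")
print(len(d.comments_for("filter-1")))  # 1
```

The point of the anchored `node_id` field is that a reader of any comment can always navigate back to the exact workflow element under discussion, which email and IM threads cannot guarantee.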
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roach, Dennis Patrick; Jauregui, David Villegas; Daumueller, Andrew Nicholas
2012-02-01
Recent structural failures such as the I-35W Mississippi River Bridge in Minnesota have underscored the urgent need for improved methods and procedures for evaluating our aging transportation infrastructure. This research seeks to develop the basis for a Structural Health Monitoring (SHM) system that provides quantitative information on the structural integrity of metallic structures, supporting appropriate management decisions and ensuring public safety. The research employs advanced structural analysis and nondestructive testing (NDT) methods for an accurate fatigue analysis. Metal railroad bridges in New Mexico are the focus, since many of these structures are over 100 years old and classified as fracture-critical. The term fracture-critical indicates that failure of a single component may result in complete collapse of the structure, such as the one experienced by the I-35W Bridge. Failure may originate from sources such as loss of section due to corrosion or cracking caused by fatigue loading. Because standard inspection practice is primarily visual, these types of defects can go undetected due to oversight, lack of access to critical areas, or, in riveted members, hidden defects beneath fasteners or connection angles. Another issue is that it is difficult to determine the fatigue damage that a structure has experienced and the rate at which damage is accumulating, due to uncertain history and load distribution in supporting members. A SHM system has several advantages that can overcome these limitations. SHM allows critical areas of the structure to be monitored more quantitatively under actual loading. The research needed to apply SHM to metallic structures was performed, and a case study was carried out to show the potential of SHM-driven fatigue evaluation to assess the condition of critical transportation infrastructure and to guide inspectors to potential problem areas.
This project combines the expertise in transportation infrastructure at New Mexico State University with the expertise at Sandia National Laboratories in the emerging field of SHM.
Stormwater management and ecosystem services: a review
NASA Astrophysics Data System (ADS)
Prudencio, Liana; Null, Sarah E.
2018-03-01
Researchers and water managers have turned to green stormwater infrastructure, such as bioswales, retention basins, wetlands, rain gardens, and urban green spaces to reduce flooding, augment surface water supplies, recharge groundwater, and improve water quality. It is increasingly clear that green stormwater infrastructure not only controls stormwater volume and timing, but also promotes ecosystem services, which are the benefits that ecosystems provide to humans. Yet there has been little synthesis focused on understanding how green stormwater management affects ecosystem services. The objectives of this paper are to review and synthesize published literature on ecosystem services and green stormwater infrastructure and identify gaps in research and understanding, establishing a foundation for research at the intersection of ecosystems services and green stormwater management. We reviewed 170 publications on stormwater management and ecosystem services, and summarized the state-of-the-science categorized by the four types of ecosystem services. Major findings show that: (1) most research was conducted at the parcel-scale and should expand to larger scales to more closely understand green stormwater infrastructure impacts, (2) nearly a third of papers developed frameworks for implementing green stormwater infrastructure and highlighted barriers, (3) papers discussed ecosystem services, but less than 40% quantified ecosystem services, (4) no geographic trends emerged, indicating interest in applying green stormwater infrastructure across different contexts, (5) studies increasingly integrate engineering, physical science, and social science approaches for holistic understanding, and (6) standardizing green stormwater infrastructure terminology would provide a more cohesive field of study than the diverse and often redundant terminology currently in use. 
We recommend that future research provide metrics and quantify ecosystem services, integrate disciplines to measure ecosystem services from green stormwater infrastructure, and better incorporate stormwater management into environmental policy. Our conclusions outline promising future research directions at the intersection of stormwater management and ecosystem services.
The research overview of the US EPA Aging Water Infrastructure Research Program includes: Research areas: condition assessment; rehabilitation; advanced design/treatment concepts and Research project focused on innovative rehabilitation technologies to reduce costs and increase...
The Impact of Airport Performance towards Construction and Infrastructure Expansion in Indonesia
NASA Astrophysics Data System (ADS)
Laksono, T. D.; Kurniasih, N.; Hasyim, C.; Setiawan, M. I.; Ahmar, A. S.
2018-01-01
Development generated around airport areas includes construction and infrastructure development. This research examines how material management is implemented in particular construction projects, and the relationship between development, especially construction and infrastructure development, and airport performance. The method used in this research is a mixed method. The population comprises the 297 airports in Indonesia; from these, the 148 airports with the most complete construction-project data were selected. The correlation coefficient (R) test shows that construction and infrastructure development has a relatively strong relationship with the airport performance variable, although other factors also influence construction and infrastructure development.
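The correlation coefficient (R) test mentioned above is, in essence, a Pearson correlation between two variables. The sketch below shows the computation on invented numbers; the study's actual 148-airport dataset is not reproduced here.

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation: covariance divided by the product
    # of the two standard deviations
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical per-airport scores, purely for illustration
development = [2.0, 3.5, 1.0, 4.0, 2.5]   # construction/infrastructure score
performance = [55, 70, 40, 78, 60]        # airport performance score
r = pearson_r(development, performance)
print(round(r, 3))
```

An R close to 1 indicates a strong positive relationship, as the abstract reports; values near 0 would indicate that other factors dominate.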
Approach to sustainable e-Infrastructures - The case of the Latin American Grid
NASA Astrophysics Data System (ADS)
Barbera, Roberto; Diacovo, Ramon; Brasileiro, Francisco; Carvalho, Diego; Dutra, Inês; Faerman, Marcio; Gavillet, Philippe; Hoeger, Herbert; Lopez Pourailly, Maria Jose; Marechal, Bernard; Garcia, Rafael Mayo; Neumann Ciuffo, Leandro; Ramos Pollan, Paul; Scardaci, Diego; Stanton, Michael
2010-05-01
The EELA (E-Infrastructure shared between Europe and Latin America) and EELA-2 (E-science grid facility for Europe and Latin America) projects, co-funded by the European Commission under FP6 and FP7, respectively, have been successful in building a high capacity, production-quality, scalable Grid Facility for a wide spectrum of applications (e.g. Earth & Life Sciences, High Energy Physics, etc.) from several European and Latin American user communities. This paper presents the 4-year experience of EELA and EELA-2 in: • Providing each member institution the unique opportunity to benefit from a huge distributed computing platform for its research activities, in particular through initiatives such as OurGrid, which proposes a so-called Opportunistic Grid Computing well adapted to small and medium research laboratories such as most of those in Latin America and Africa; • Developing a realistic strategy to ensure the long-term continuity of the e-Infrastructure on the Latin American continent, beyond the term of the EELA-2 project, in association with CLARA and in collaboration with EGI. Previous interactions between EELA and African Grid members at events such as IST Africa'07, '08 and '09, the International Conference on Open Access'08 and EuroAfriCa-ICT'08, to which EELA and EELA-2 contributed, have shown that the e-Infrastructure situation in Africa compares well with the Latin American one. This means that African Grids are likely to face the same problems that EELA and EELA-2 experienced, especially in obtaining the User and Decision Maker support necessary to create NGIs and, later, a possible continent-wide African Grid Initiative (AGI). The hope is that the EELA-2 endeavour towards sustainability, as described in this presentation, can help the progress of African Grids.
Astronomy: On the Bleeding Edge of Scholarly Infrastructure
NASA Astrophysics Data System (ADS)
Borgman, Christine; Sands, A.; Wynholds, L. A.
2013-01-01
The infrastructure for scholarship has moved online, making data, articles, papers, journals, catalogs, and other scholarly resources nodes in a deeply interconnected network. Astronomy has led the way on several fronts, developing tools such as ADS to provide unified access to astronomical publications and reaching agreement on a common data file format, FITS. Astronomy was also among the first fields to establish open access to substantial amounts of observational data. We report on the first three years of a long-term research project to study knowledge infrastructures in astronomy, funded by the NSF and the Alfred P. Sloan Foundation. Early findings indicate that the availability and use of networked technologies for integrating scholarly resources vary widely within astronomy. Substantial differences arise in the management of data between ground-based and space-based missions and between subfields of astronomy, for example. While large databases such as SDSS and MAST are essential resources for many researchers, much pointed, ground-based observational data exist only on local servers, with minimal curation. Some astronomy data are easily discoverable and usable, but many are not. International coordination activities such as the IVOA and distributed access to high-level data product servers such as SIMBAD and NED are enabling further integration of published data. Astronomers are tackling yet more challenges in new forms of publishing data, algorithms, and visualizations, and in assuring interoperability with parallel infrastructure efforts in related fields. New issues include data citation, attribution, and provenance. Substantial concerns remain about the long-term discoverability, accessibility, usability, and curation of astronomy data and other scholarly resources.
The presentation will outline these challenges, how they are being addressed by astronomy and related fields, and identify concerns and accomplishments expressed by the astronomers we have interviewed and observed.
About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture
NASA Astrophysics Data System (ADS)
Grauer, Manfred; Barth, Thomas
2004-06-01
The permanently increasing complexity of products and their manufacturing processes, combined with a shorter "time-to-market", leads to ever greater use of simulation and optimization software systems in product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can comprise software systems, hardware systems, or communication networks. An appropriate IT infrastructure must provide the means to integrate all these resources and enable their use even across a network, to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating experts' knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, the utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation, the CAD system CATIA is used, coupled with the FEM simulation system INDEED for the simulation of sheet-metal forming processes and with the problem solving environment OpTiX for distributed optimization.
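The simulation-based optimization loop described above can be sketched as follows. This is a toy illustration, not the actual OpTiX/INDEED coupling: a cheap quadratic function stands in for an expensive FEM run, and a thread pool stands in for the distributed workstation and cluster resources.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(design):
    # placeholder for an expensive simulation (e.g. a sheet-metal
    # forming FEM run); returns a cost, lower is better
    thickness, radius = design
    return (thickness - 1.2) ** 2 + (radius - 4.0) ** 2

def optimize(candidates, generations=20, step=0.5):
    best = None
    for _ in range(generations):
        # evaluate all candidate designs in parallel on the worker pool
        with ThreadPoolExecutor(max_workers=4) as pool:
            scores = list(pool.map(simulate, candidates))
        best = min(zip(scores, candidates))
        # next generation: sample around the current best design,
        # including the best design itself, with a shrinking step
        t, r = best[1]
        candidates = [(t + dt * step, r + dr * step)
                      for dt in (-1, 0, 1) for dr in (-1, 0, 1)]
        step *= 0.7
    return best

score, design = optimize([(0.5, 2.0), (2.0, 5.0), (1.0, 3.0)])
print(round(design[0], 2), round(design[1], 2))
```

Because each candidate evaluation is independent, the pattern distributes naturally over a network of compute nodes; in a real deployment each `simulate` call would be dispatched to a remote cluster job rather than a local thread.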
Water and Carbon Footprints for Sustainability Analysis of Urban Infrastructure - abstract
Water and transportation infrastructures define spatial distribution of urban population and economic activities. In this context, energy and water consumed per capita are tangible measures of how efficient water and transportation systems are constructed and operated. At a hig...
Advanced Decentralized Water/Energy Network Design for Sustainable Infrastructure
In order to provide a water infrastructure that is more sustainable into and beyond the 21st century, drinking water distribution systems and wastewater collection systems must account for our diminishing water supply, increasing demands, climate change, energy cost and availabil...
Community-driven computational biology with Debian Linux
2010-01-01
Background The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates of software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. Results The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Conclusions Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure that they are working with the same versions of tools, under the same conditions. This contributes to the world-wide networking of researchers. PMID:21210984
Parallel Processing of Images in Mobile Devices using BOINC
NASA Astrophysics Data System (ADS)
Curiel, Mariela; Calle, David F.; Santamaría, Alfredo S.; Suarez, David F.; Flórez, Leonardo
2018-04-01
Medical image processing helps health professionals make decisions for the diagnosis and treatment of patients. Since some algorithms for processing images require substantial amounts of resources, one can take advantage of distributed or parallel computing. A mobile grid can be an adequate computing infrastructure for this problem. A mobile grid is a grid that includes mobile devices as resource providers. In a previous step of this research, we selected BOINC as the infrastructure to build our mobile grid. However, parallel processing of images on mobile devices poses at least two important challenges: executing standard image-processing libraries, and obtaining adequate performance compared to desktop computer grids. By the time we started our research, the use of BOINC on mobile devices also involved two issues: a) executing programs on mobile devices required modifying the code to insert calls to the BOINC API, and b) dividing the image among the mobile devices and merging the results required additional code in some BOINC components. This article presents answers to these four challenges.
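Challenge (b) above, dividing an image among devices and merging the partial results, can be sketched with a generic split/process/merge pattern. This is not the authors' BOINC modification; the image is a plain 2D list and `invert` is a stand-in for any per-pixel medical-imaging filter run on one device.

```python
def split_rows(image, n_workers):
    # cut the image into n_workers horizontal strips of near-equal size
    k, r = divmod(len(image), n_workers)
    strips, start = [], 0
    for i in range(n_workers):
        end = start + k + (1 if i < r else 0)
        strips.append(image[start:end])
        start = end
    return strips

def invert(strip):
    # stand-in filter: per-pixel intensity inversion
    return [[255 - px for px in row] for row in strip]

def merge(strips):
    # concatenate the processed strips back into one image
    return [row for strip in strips for row in strip]

image = [[i * 10 + j for j in range(4)] for i in range(5)]  # 5x4 "image"
strips = split_rows(image, 3)
result = merge([invert(s) for s in strips])
assert result == invert(image)  # same as processing the whole image at once
print(len(strips))  # 3
```

Row-wise splitting works for any filter that touches one pixel at a time; filters with spatial neighborhoods would additionally need overlapping halo rows at strip boundaries.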
Hydrogen Infrastructure Testing and Research Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2017-04-10
Learn about the Hydrogen Infrastructure Testing and Research Facility (HITRF), where NREL researchers are working on vehicle and hydrogen infrastructure projects that aim to enable more rapid inclusion of fuel cell and hydrogen technologies in the market to meet consumer and national goals for emissions reduction, performance, and energy security. As part of NREL’s Energy Systems Integration Facility (ESIF), the HITRF is designed for collaboration with a wide range of hydrogen, fuel cell, and transportation stakeholders.
Facilities and Infrastructure FY 2017 Budget At-A-Glance
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2016-03-01
The Facilities and Infrastructure Program includes EERE’s capital investments, operations and maintenance, and site-wide support of the National Renewable Energy Laboratory (NREL). It is the nation’s only national laboratory with a primary mission dedicated to the research, development and demonstration (RD&D) of energy efficiency, renewable energy and related technologies. EERE is NREL’s steward, primary client and sponsor of NREL’s designation as a Federally Funded Research and Development Center. The Facilities and Infrastructure (F&I) budget maintains NREL’s research and support infrastructure, ensures availability for EERE’s use, and provides a safe and secure workplace for employees.
TEODOOR, a blueprint for distributed terrestrial observation data infrastructures
NASA Astrophysics Data System (ADS)
Kunkel, Ralf; Sorg, Jürgen; Abbrent, Martin; Borg, Erik; Gasche, Rainer; Kolditz, Olaf; Neidl, Frank; Priesack, Eckart; Stender, Vivien
2017-04-01
TERENO (TERrestrial ENvironmental Observatories) is an initiative funded by the large research infrastructure program of the Helmholtz Association of Germany. Four observation platforms to facilitate the investigation of the consequences of global change for terrestrial ecosystems, and their socioeconomic implications, were implemented and equipped from 2007 until 2013. Data collection, however, is planned to continue for at least 30 years. TERENO provides series of system variables (e.g. precipitation, runoff, groundwater level, soil moisture, water vapor and trace gas fluxes) for the analysis and prognosis of global change consequences using integrated model systems, which will be used to derive efficient prevention, mitigation and adaptation strategies. Each platform is operated by a different Helmholtz institution, which maintains its local data infrastructure. Within the individual observatories, areas with intensive measurement programs have been implemented. Different sensors provide information on various physical parameters such as soil moisture, temperature, groundwater levels or gas fluxes. Sensor data from more than 900 stations are collected automatically, at sampling intervals ranging from 20 s to 2 h, summing to about 2,500,000 data values per day. In addition, three weather radar devices create raster data 12 to 60 times per hour. The data are automatically imported into local relational database systems using a common data quality assessment framework, used to handle the processing and assessment of heterogeneous environmental observation data. Starting with the way data are imported into the data infrastructure, custom workflows are developed. Data levels implying the underlying data processing, stages of quality assessment and data accessibility are defined.
In order to facilitate the acquisition, provision, integration, management and exchange of heterogeneous geospatial resources within a scientific and non-scientific environment, the distributed spatial data infrastructure TEODOOR (TEreno Online Data RepOsitORry) has been built up. The individual observatories are connected via OGC-compliant web services, while the TERENO Data Discovery Portal (DDP) enables data discovery, visualization and data access. Currently, free access to data from more than 900 monitoring stations is provided.
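A quick consistency check on the data volumes quoted in the abstract, using only the numbers given there:

```python
# 900 stations producing ~2,500,000 values per day implies roughly one
# value every ~31 s per station on average, which is plausible for
# stations carrying several sensors sampled at intervals of 20 s to 2 h.
stations = 900
values_per_day = 2_500_000

per_station_per_day = values_per_day / stations
seconds_between_values = 86_400 / per_station_per_day

print(round(per_station_per_day))     # ~2778 values per station per day
print(round(seconds_between_values))  # ~31 s between values, on average
```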
A prototype Infrastructure for Cloud-based distributed services in High Availability over WAN
NASA Astrophysics Data System (ADS)
Bulfon, C.; Carlino, G.; De Salvo, A.; Doria, A.; Graziosi, C.; Pardi, S.; Sanchez, A.; Carboni, M.; Bolletta, P.; Puccio, L.; Capone, V.; Merola, L.
2015-12-01
In this work we present architectural and performance studies concerning a prototype of a distributed Tier2 infrastructure for HEP, instantiated between the two Italian sites of INFN-Roma1 and INFN-Napoli. The network infrastructure is based on a Layer-2 geographical link, provided by the Italian NREN (GARR), directly connecting the two remote LANs of the named sites. By exploiting the possibilities offered by new distributed file systems, a shared storage area with synchronous copy has been set up. The computing infrastructure, based on an OpenStack facility, uses a set of distributed hypervisors installed at both sites. The main parameter to be taken into account when managing two remote sites within a single framework is the effect of latency, due to the distance and the end-to-end service overhead. In order to understand the capabilities and limits of our setup, the impact of latency has been investigated by means of a set of stress tests, including data I/O throughput, metadata access performance evaluation and network occupancy, during the life cycle of a Virtual Machine. A set of resilience tests has also been performed, in order to verify the stability of the system in the event of hardware or software faults. The results of this work show that the reliability and robustness of the chosen architecture are sufficient to build a production system and to provide common services. This prototype can also be extended to multiple sites with small changes to the network topology, thus creating a National Network of Cloud-based distributed services, in HA over WAN.
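Why latency is the dominant parameter can be seen from a very simple model, offered here as an illustration rather than as the paper's measurement methodology: with synchronous replication over the WAN link, every write must wait for one round trip, so the sustainable operation rate is bounded by the sum of local service time and round-trip time (RTT). All numbers below are invented.

```python
def sync_ops_per_second(service_time_ms, rtt_ms):
    # each synchronous operation costs service time plus one round trip
    return 1000.0 / (service_time_ms + rtt_ms)

local = sync_ops_per_second(service_time_ms=2.0, rtt_ms=0.1)   # same-LAN RTT
remote = sync_ops_per_second(service_time_ms=2.0, rtt_ms=5.0)  # WAN-scale RTT

print(round(local))   # ~476 ops/s
print(round(remote))  # ~143 ops/s
```

Even a few milliseconds of extra RTT cuts the synchronous operation rate by more than a factor of three in this toy model, which is why metadata-heavy workloads are the ones stress-tested in such prototypes.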
Integration of Cloud resources in the LHCb Distributed Computing
NASA Astrophysics Data System (ADS)
Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel
2014-06-01
This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb uses its specific DIRAC extension (LHCbDirac) as an interware for its Distributed Computing. So far, it has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs, based on the idea of a Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Software as a Service (SaaS) model.
Safety impacts of bicycle infrastructure: A critical review.
DiGioia, Jonathan; Watkins, Kari Edison; Xu, Yanzhi; Rodgers, Michael; Guensler, Randall
2017-06-01
This paper takes a critical look at the present state of bicycle infrastructure treatment safety research, highlighting data needs. Safety literature relating to 22 bicycle treatments is examined, including findings, study methodologies, and data sources used in the studies. Some preliminary conclusions related to research efficacy are drawn from the available data and findings in the research. While the current body of bicycle safety literature points toward some defensible conclusions regarding the safety and effectiveness of certain bicycle treatments, such as bike lanes and removal of on-street parking, the vast majority of treatments are still in need of rigorous research. Fundamental questions arise regarding appropriate exposure measures, crash measures, and crash data sources. This research will aid transportation departments in making decisions about bicycle infrastructure and guide future research efforts toward understanding the safety impacts of bicycle infrastructure. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.
NASA Astrophysics Data System (ADS)
Bandrés, Candela; Robador, María Dolores; Albardonedo, Antonio
2017-10-01
The aqueduct of the Caños de Carmona was in operation from 1172 until its demolition in 1912. Its infrastructure was an essential resource for supplying water to the city of Seville. This study analyses the supply and distribution system used in the city in the Modern Age. The research focuses mainly on the capture of water at the Santa Lucia spring in Alcala de Guadaira, 19 km away, its route along the aqueduct, its division among different users at the general partition chamber, and its subsequent distribution to the final destinations. The study develops a hypothesis about the principles of water distribution through the city and estimates the percentage of water going to each client based on the theoretical concession that should reach each home.
EPA Research Highlights: EPA Studies Aging Water Infrastructure
The nation's extensive water infrastructure has the capacity to treat, store, and transport trillions of gallons of water and wastewater per day through millions of miles of pipelines. However, some infrastructure components are more than 100 years old, and as the infrastructure ...
Sea Level Rise Impacts On Infrastructure Vulnerability
NASA Astrophysics Data System (ADS)
Pasqualini, D.; Mccown, A. W.; Backhaus, S.; Urban, N. M.
2015-12-01
An increase in global sea level is one of the potential consequences of climate change and represents a threat to U.S. coastal regions, which are highly populated and home to critical infrastructure. The potential danger posed by sea level rise may escalate if it is coupled with an increase in the frequency and intensity of storms striking these regions. These coupled threats present a clear risk to population and critical infrastructure and are concerns for Federal, State, and particularly local response and recovery planners. Understanding the effect of sea level rise on the risk to critical infrastructure is crucial for long-term planning and for mitigating potential damages. In this work we quantify how infrastructure vulnerability to a range of storms changes as sea level rises. Our study focuses on the Norfolk area of the U.S. We assess the direct damage to drinking water and wastewater facilities and the power sector caused by a distribution of synthetic hurricanes. In addition, our analysis estimates the indirect consequences of these damages for population and economic activities, accounting also for interdependencies across infrastructures. While projections unanimously indicate an increase in the rate of sea level rise, the scientific community does not agree on the size of this rate. Our risk assessment accounts for this uncertainty by simulating a distribution of sea level rise for a specific climate scenario. Using our impact-assessment results and assuming an increase in future hurricane frequencies and intensities, we also estimate the expected benefits for critical infrastructure.
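The core risk-assessment idea, combining a distribution of sea level rise with synthetic storms, can be sketched as a small Monte Carlo simulation. All distributions and parameters below are invented placeholders, not the study's calibrated values.

```python
import random

random.seed(42)  # reproducible draws for this sketch

def exceedance_probability(protection_m, trials=100_000):
    # sample sea level rise and a synthetic storm surge, and count how
    # often the combined water level tops the protection height
    hits = 0
    for _ in range(trials):
        slr = random.gauss(0.5, 0.15)        # assumed sea level rise (m)
        surge = random.expovariate(1 / 1.2)  # assumed storm surge, mean 1.2 m
        if slr + surge > protection_m:
            hits += 1
    return hits / trials

p = exceedance_probability(protection_m=3.0)
print(0.0 < p < 0.2)  # a small but non-negligible flooding probability
```

Sweeping `protection_m` (or shifting the sea-level-rise distribution upward) then shows directly how exceedance probability, and hence expected damage, responds to different climate scenarios, which is the essence of the uncertainty treatment described above.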
Initiative for the creation of an integrated infrastructure of European Volcano Observatories
NASA Astrophysics Data System (ADS)
Puglisi, G.; Bachelery, P.; Ferreira, T. J. L.; Vogfjörd, K. S.
2012-04-01
Active volcanic areas in Europe constitute a direct threat to millions of European citizens. The recent Eyjafjallajökull eruption also demonstrated that indirect effects of volcanic activity can threaten the economy and the lives of hundreds of millions of people living across the whole continental area, even when the activity comes from volcanoes with only sporadic eruptions. Furthermore, due to the wide political distribution of European territories, major activity of "European" volcanoes may have a worldwide impact (e.g. on the North Atlantic Ocean, West Indies included, and the Indian Ocean). Our ability to understand volcanic unrest and forecast eruptions depends both on the capability of monitoring systems to effectively detect the signals generated by rising magma and on the scientific knowledge necessary to interpret these signals unambiguously. Monitoring of volcanoes is the main focus of volcano observatories, which are Research Infrastructures in the ESFRI vision because they represent the basic resource for research in volcanology. In addition, their facilities are needed for the design, implementation and testing of new monitoring techniques. Volcano observatories produce a large amount of monitoring data and represent extraordinary, multidisciplinary laboratories for carrying out innovative joint research. The current distribution of volcano observatories in Europe and their technological state of the art are heterogeneous, owing to different types of volcanoes, social requirements, operational structures and scientific backgrounds in the different volcanic areas; as a result, in some active volcanic areas, observatories are lacking or poorly instrumented. Moreover, as the recent crisis of ash in the skies over Europe confirms, the assessment of volcanic hazard cannot be limited to the immediate areas surrounding active volcanoes.
The whole European Community would therefore benefit from the creation of a network of volcano observatories, which would enable strengthening and sharing the technological and scientific level of current infrastructures. Such a network could help to achieve the minimum goal of deploying an observatory in each active volcanic area, and lay the foundation for an efficient and effective volcanic monitoring system at the European level.
NASA Astrophysics Data System (ADS)
Burkhart, John F.; Decker, Sven; Filhol, Simon; Hulth, John; Nesje, Atle; Schuler, Thomas V.; Sobolowski, Stefan; Tallaksen, Lena M.
2017-04-01
The Finse Alpine Research Station provides convenient access to the Hardangervidda mountain plateau in Southern Norway (60 deg N, 1222 m asl). The station is located above the tree-line, in the vicinity of the west-east mountain water divide, and is easily accessible by train from Bergen and Oslo. The station itself offers housing and basic laboratory facilities and has been used for ecological monitoring. Over the past years, studies of small-scale snow distribution and ground temperature have been performed, accompanied by a suite of meteorological measurements. Supported by strategic investments by the University of Oslo and ongoing research projects, these activities are currently being expanded and the site is being developed into a mountain field laboratory for studies of Land-Atmosphere Interaction in Cold Environments, facilitated by the LATICE project (www.mn.uio.no/latice). Additional synergy comes from close collaboration with a range of institutions that perform operational monitoring close to Finse, including long-term time series of meteorological data and global radiation. Through our activities, this infrastructure has been complemented by a permanent tower for continuous eddy-covariance measurements of the associated gas fluxes. A second, mobile covariance system is in preparation and will become operational in 2017. In addition, a wireless sensor network has been set up to capture the spatial distributions of basic meteorological variables, snow depth and glacier mass balance on the nearby Hardangerjøkulen ice cap. While the research focus has so far been on small-scale processes (snow redistribution), it is now being expanded to cover hydrological processes at the catchment and regional scales. To this end, two gauging stations have been installed to measure discharge from two contrasting catchments (glacier-dominated and non-glacierized).
In this presentation, we provide an overview of existing and planned infrastructure, field campaigns and research activities, present the available data and the results of some preliminary analyses, and discuss opportunities for future collaboration.
An evaluation of the status of living collections for plant, environmental, and microbial research.
McCluskey, Kevin; Parsons, Jill P; Quach, Kimberly; Duke, Clifford S
2017-06-01
While living collections are critical for biological research, support for these foundational infrastructure elements is inconsistent, which makes quality control, regulatory compliance, and reproducibility difficult. In recent years, the Ecological Society of America has hosted several National Science Foundation-sponsored workshops to explore and enhance the sustainability of biological research infrastructure. At the same time, the United States Culture Collection Network has brought together managers of living collections to foster collaboration and information exchange within a specific living collections community. To assess the sustainability of collections, a survey was distributed to collection scientists whose responses provide a benchmark for evaluating the resiliency of these collections. Among the key observations were that plant collections have larger staffing requirements and that living microbe collections were the most vulnerable to retirements or other disruptions. Many higher plant and vertebrate collections have institutional support and several have endowments. Other collections depend on competitive grant support in an era of intense competition for these resources. Opportunities for synergy among living collections depend upon complementing the natural strong engagement with the research communities that depend on these collections with enhanced information sharing, communication, and collective action to keep them sustainable for the future. External efforts by funding agencies and publishers could reinforce the advantages of having professional management of research resources across every discipline.
Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud
NASA Astrophysics Data System (ADS)
Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde
2014-06-01
The Australian Government is making an AUD 100 million investment in compute and storage for the academic community. The compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed, virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general-purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocation of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier-2 and Tier-3 computing infrastructure using the national Research Cloud and storage facilities.
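The dynamic allocation of VM instances described above can be illustrated with a toy sizing policy. This is only a sketch of the general idea, not the paper's Torque extension; the function name, slot accounting, and quota handling are all assumptions:

```python
# Illustrative sketch: decide how many new VM instances a dynamic batch
# cluster should launch, given the current queue backlog. All names and the
# policy itself are assumptions for illustration, not the paper's code.

def vms_to_launch(queued_jobs, idle_slots, slots_per_vm, max_vms, running_vms):
    """Return the number of new VMs needed to absorb the queued jobs."""
    unmet = max(queued_jobs - idle_slots, 0)       # jobs no idle slot can take
    needed = -(-unmet // slots_per_vm)             # ceiling division
    return min(needed, max_vms - running_vms)      # respect the cloud quota

# Example: 10 queued jobs, 2 idle slots, 4 slots per VM, quota of 10 VMs,
# 3 VMs already running -> launch 2 more.
print(vms_to_launch(10, 2, 4, 10, 3))
```

A real deployment would feed this from the batch system's queue state and hand the result to the OpenStack API; the point here is only the backlog-driven scaling decision.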
DOT National Transportation Integrated Search
2001-07-01
This working paper has been prepared to provide new estimates of the costs to deploy Intelligent Transportation System (ITS) infrastructure elements in the largest metropolitan areas in the United States. It builds upon estimates that were distribute...
DOT National Transportation Integrated Search
2000-08-01
This working paper has been prepared to provide new estimates of the costs to deploy Intelligent Transportation System (ITS) infrastructure elements in the largest metropolitan areas in the United States. It builds upon estimates that were distribute...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Youssef, Tarek A.; Elsayed, Ahmed T.; Mohammed, Osama A.
This study presents the design and implementation of a communication and control infrastructure for smart grid operation. The proposed infrastructure enhances the reliability of the measurement and control network. The advantages of the data-centric over the message-centric communication approach are discussed in the context of smart grid applications. The Data Distribution Service (DDS) is used to implement a data-centric common data bus for the smart grid. This common data bus improves communication reliability, enabling distributed control and smart load management. These enhancements are achieved by avoiding a single point of failure while enabling peer-to-peer communication and an automatic discovery feature for dynamically participating nodes. The infrastructure and ideas presented in this paper were implemented and tested on the smart grid testbed. A toolbox and application programming interface for the testbed infrastructure were developed in order to facilitate interoperability and remote access to the testbed. This interface allows control, monitoring, and performing of experiments remotely. Furthermore, it could be used to integrate multidisciplinary testbeds to study complex cyber-physical systems (CPS).
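The data-centric idea behind such a bus is that publishers and subscribers share named topics rather than addressing each other directly, so nodes can join or leave without reconfiguration. The following plain-Python sketch illustrates only that concept; it does not use DDS itself, and all class and topic names are illustrative:

```python
# Conceptual sketch of a topic-based data bus (NOT the DDS API): publishers
# write samples to a topic; subscribers register interest in the topic and
# need no knowledge of who publishes. All names here are illustrative.
from collections import defaultdict

class DataBus:
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # A new node simply registers for a topic -- the loose analogue of
        # DDS automatic discovery of dynamically participating nodes.
        self._subscribers[topic].append(callback)

    def publish(self, topic, sample):
        # Deliver the sample to every current subscriber of the topic.
        for cb in self._subscribers[topic]:
            cb(sample)

bus = DataBus()
readings = []
bus.subscribe("grid/voltage", readings.append)
bus.publish("grid/voltage", {"bus_id": 7, "volts": 119.8})
```

In a real DDS deployment the middleware additionally handles transport, quality-of-service policies, and peer discovery over the network; the sketch captures only the decoupling of producers from consumers.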
Education Potential of the National Virtual Observatory
NASA Astrophysics Data System (ADS)
Christian, Carol
2006-12-01
Research in astronomy is blossoming with the availability of sophisticated instrumentation and tools aimed at breakthroughs in our understanding of the physical universe. Researchers can take advantage of the astronomical infrastructure, the National Virtual Observatory (NVO), for their investigations. Data and tools available to the public are also increasing through the distributed resources of observatories, academic institutions, computing facilities and educational organizations. Because astronomy holds the public interest through engaging content and strikes a chord with fundamental questions of human interest, it is a perfect context for science and technical education. Through partnerships we are cultivating, the NVO can be tuned for educational purposes.
NASA Astrophysics Data System (ADS)
Kutsch, Werner Leo; Asmi, Ari; Laj, Paolo; Brus, Magdalena; Sorvari, Sanna
2016-04-01
ENVRIplus is a Horizon 2020 project bringing together Environmental and Earth System Research Infrastructures, projects and networks, together with technical specialist partners, to create a more coherent, interdisciplinary and interoperable cluster of Environmental Research Infrastructures (RIs) across Europe. The objective of ENVRIplus is to provide common solutions to shared challenges for these RIs in their efforts to deliver new services for science and society. To reach this overall goal, ENVRIplus brings together the current ESFRI-roadmap environmental RIs and those of associated fields, leading I3 projects, key developing RI networks and specific technical specialist partners to build common, synergistic solutions to pressing issues in RI construction and implementation. ENVRIplus is organized around six main objectives, referred to as "Themes": 1) Improve the RIs' abilities to observe the Earth system, particularly by developing and testing new sensor technologies, harmonizing observation methodologies and developing methods to overcome common problems associated with distributed remote observation networks; 2) Generate common solutions to shared information technology and data-related challenges of the environmental RIs in data and service discovery and use, workflow documentation, data citation methodologies, service virtualization, and user characterization and interaction; 3) Develop harmonized policies for access (physical and virtual) to the environmental RIs, including access services for multidisciplinary users; 4) Investigate the interactions between RIs and society: find common approaches and methodologies for assessing the RIs' ability to answer economic and societal challenges, develop ethics guidelines for RIs and investigate the possibility of enhancing the use of Citizen Science approaches in RI products and services; 5) Ensure the cross-fertilisation and knowledge transfer of new technologies, best practices, approaches and policies of the 
RIs by generating training material for RI personnel on the new observational, technological and computational tools, and facilitate inter-RI knowledge transfer via a staff exchange program; 6) Create an RI communication and cooperation framework to coordinate the activities of the environmental RIs towards common strategic development, improved user interaction and interdisciplinary cross-RI products and services. The solutions, services, systems and other project results produced are made available to all environmental research infrastructure initiatives.
Gales, Sydney; Tanaka, Kazuo A; Balabanski, D L; Negoita, Florin; Stutman, D; Ur, Calin Alexander; Tesileanu, Ovidiu; Ursescu, Daniel; Ghita, Dan Gabriel; Andrei, I; Ataman, Stefan; Cernaianu, M O; D'Alessi, L; Dancus, I; Diaconescu, B; Djourelov, N; Filipescu, D; Ghenuche, P; Matei, C; Seto Kei, K; Zeng, M; Zamfir, Victor Nicolae
2018-06-28
The European Strategy Forum on Research Infrastructures (ESFRI) selected in 2006 a proposal based on ultra-intense laser fields with intensities reaching up to 10^22-10^23 W/cm^2, called "ELI" for Extreme Light Infrastructure. The construction of a large-scale, laser-centred, distributed pan-European research infrastructure, involving beyond-the-state-of-the-art ultra-short and ultra-intense laser technologies, received approval for funding in 2011-2012. The three pillars of the ELI facility are being built in the Czech Republic, Hungary and Romania. The Romanian pillar is ELI-Nuclear Physics (ELI-NP). The new facility is intended to serve a broad national, European and international science community. Its mission covers scientific research at the frontier of knowledge involving two domains. The first is laser-driven experiments related to nuclear physics, strong-field quantum electrodynamics and associated vacuum effects. The second is based on a Compton-backscattering high-brilliance, intense, low-energy gamma beam (< 20 MeV), a marriage of laser and accelerator technology which will allow us to investigate nuclear structure and reactions, as well as nuclear astrophysics, with unprecedented resolution and accuracy. In addition to fundamental themes, a large number of applications with significant societal impact are being developed. The ELI-NP research centre will be located in Magurele near Bucharest, Romania. The project is implemented by the "Horia Hulubei" National Institute for Physics and Nuclear Engineering (IFIN-HH). The project started in January 2013 and the new facility will be fully operational by the end of 2019. After a short introduction to multi-PW lasers and the multi-MeV brilliant gamma beam, a scientific and technical description of the future ELI-NP facility, as well as the present status of its implementation, will be presented. 
The science and examples of societal applications within reach with these new probes will be discussed, with a special focus on day-one experiments and associated novel instrumentation. © 2018 IOP Publishing Ltd.
ForM@Ter: a French Solid Earth Research Infrastructure Project
NASA Astrophysics Data System (ADS)
Mandea, M.; Diament, M.; Jamet, O.; Deschamps-Ostanciaux, E.
2017-12-01
Recently, some noteworthy initiatives to develop efficient research e-infrastructures for the study of the Earth system have been set up. However, gaps between data availability and its scientific use still exist, either for technical reasons (big-data issues) or because of the lack of dedicated support in terms of expert knowledge of the data, software availability, or data cost. The need for thematic cooperative platforms has been underlined in recent years, as has the need to create thematic centres designed to federate the scientific community of Earth observation. Four thematic data centres have been developed in France, covering the domains of ocean, atmosphere, land, and solid Earth sciences. For the solid Earth science community, a research infrastructure project named ForM@Ter was launched by the French Space Agency (CNES) and the National Centre for Scientific Research (CNRS), with the active participation of the National Institute for Geographical and Forestry Information (IGN). Currently, it relies on the contributions of scientists from more than 20 French Earth science laboratories. Preliminary analyses have shown that a focus on the determination of the shape and movements of the Earth's surface (ForM@Ter: Formes et Mouvements de la Terre) can federate a wide variety of scientific areas (earthquake cycle, tectonics, morphogenesis, volcanism, erosion dynamics, mantle rheology, geodesy) and offers many interfaces with other geoscience domains, such as glaciology or snow evolution. This choice motivates the design of an ambitious data distribution scheme, including a wide variety of sources - optical imagery, SAR, GNSS, gravity, satellite altimetry data, in situ observations (inclinometers, seismometers, etc.) - as well as a wide variety of processing techniques. 
In the evolving context of current and forthcoming national and international e-infrastructures, the challenge of the project is to design a non-redundant service based on interoperation with existing services, and to cope with highly complex data flows due to the granularity of the data and its associated knowledge. Here, a presentation of the project status and of the first available operational functionalities is foreseen.
Software and hardware infrastructure for research in electrophysiology
Mouček, Roman; Ježek, Petr; Vařeka, Lukáš; Řondík, Tomáš; Brůha, Petr; Papež, Václav; Mautner, Pavel; Novotný, Jiří; Prokop, Tomáš; Štěbeták, Jan
2014-01-01
As in other areas of experimental science, the operation of an electrophysiological laboratory, the design and performance of electrophysiological experiments, the collection, storage and sharing of experimental data and metadata, the analysis and interpretation of these data, and the publication of results are time-consuming activities. If these activities are well organized and supported by a suitable infrastructure, the work efficiency of researchers increases significantly. This article deals with the main concepts, design, and development of software and hardware infrastructure for research in electrophysiology. The described infrastructure has been developed primarily for the needs of the neuroinformatics laboratory at the University of West Bohemia, Czech Republic. However, from the beginning it has also been designed and developed to be open and applicable in laboratories that do similar research. After introducing the laboratory and the overall architectural concept, the individual parts of the infrastructure are described. The central element of the software infrastructure is a web-based portal that enables community researchers to store, share, download and search data and metadata from electrophysiological experiments. The data model, domain ontology and usage of semantic web languages and technologies are described. The current data publication policy used in the portal is briefly introduced. The registration of the portal within the Neuroscience Information Framework is described. Then the methods used for processing electrophysiological signals are presented. The specific modifications of these methods introduced by laboratory researchers are summarized; the methods are organized into a laboratory workflow. Other parts of the software infrastructure include mobile and offline solutions for data/metadata storage and a hardware stimulator communicating with an EEG amplifier and recording software. PMID:24639646
The GÉANT network: addressing current and future needs of the HEP community
NASA Astrophysics Data System (ADS)
Capone, Vincenzo; Usman, Mian
2015-12-01
The GÉANT infrastructure is the backbone that serves the scientific communities in Europe for their data movement needs and their access to international research and education networks. Using its extensive fibre footprint and infrastructure in Europe, the GÉANT network delivers a portfolio of services aimed at best fitting the specific needs of its users, including Authentication and Authorization Infrastructure, end-to-end performance monitoring, and advanced network services (dynamic circuits, L2-L3VPN, MD-VPN). This talk will outline the factors that help the GÉANT network respond to the needs of the High Energy Physics community, both in Europe and worldwide. The pan-European network provides connectivity between 40 European national research and education networks (NRENs). In addition, GÉANT connects the European NRENs to R&E networks in other world regions and reaches over 110 NRENs worldwide, making GÉANT the best-connected research and education network, with multiple intercontinental links to North and South America, Africa and Asia-Pacific. The High Energy Physics community's computational needs have always had (and will keep having) a leading role among the scientific user groups of the GÉANT network: the LHCONE overlay network has been built, in collaboration with the other major world RENs, specifically to address the particular needs of LHC data movement. Recently, as a result of a series of coordinated efforts, the LHCONE network has been expanded to the Asia-Pacific area and is going to include some of the main regional R&E networks there. The LHC community is not the only one actively using a distributed computing model (hence the need for a high-performance network); new communities are arising, such as Belle II. GÉANT is also deeply involved with the Belle II experiment, providing full support for its distributed computing model, along with a perfSONAR-based network monitoring system. 
GÉANT has also coordinated the setup of the network infrastructure to perform the Belle II Trans-Atlantic Data Challenge, and has been active in helping the Belle II community sort out end-to-end performance issues. In this talk we will provide information about the current GÉANT network architecture and international connectivity, along with the upcoming upgrades and the planned and foreseeable improvements. We will also describe the implementation of the solutions provided to support the LHC and Belle II experiments.
Designing and validating the joint battlespace infosphere
NASA Astrophysics Data System (ADS)
Peterson, Gregory D.; Alexander, W. Perry; Birdwell, J. Douglas
2001-08-01
Fielding and managing the dynamic, complex information systems infrastructure necessary for defense operations presents significant opportunities for revolutionary improvements in capabilities. An example of this technology trend is the creation and validation of the Joint Battlespace Infosphere (JBI) being developed by the Air Force Research Lab. The JBI is a system of systems that integrates, aggregates, and distributes information to users at all echelons, from the command center to the battlefield. The JBI is a key enabler of meeting the Air Force's Joint Vision 2010 core competencies such as Information Superiority, by providing increased situational awareness, planning capabilities, and dynamic execution. At the same time, creating this new operational environment introduces significant risk due to an increased dependency on computational and communications infrastructure combined with more sophisticated and frequent threats. Hence, the challenge facing the nation is the most effective means to exploit new computational and communications technologies while mitigating the impact of attacks, faults, and unanticipated usage patterns.
Applications of the pipeline environment for visual informatics and genomics computations
2011-01-01
Background Contemporary informatics and genomics research require efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the visual graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols. Results This paper reports on the applications of the LONI Pipeline environment to address two informatics challenges - graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures, the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface, and integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources which provide a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls. Conclusions The LONI Pipeline environment http://pipeline.loni.ucla.edu provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. 
The Pipeline client-server model provides computational power to a broad spectrum of informatics investigators - experienced developers and novice users, users with or without access to advanced computational resources (e.g., Grid, data), as well as basic and translational scientists. The open development, validation and dissemination of computational networks (pipeline workflows) facilitates the sharing of knowledge, tools, protocols and best practices, and enables the unbiased validation and replication of scientific findings by the entire community. PMID:21791102
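The core of any such workflow environment is dependency-ordered execution: a step runs only after the steps it depends on have produced their outputs. The following sketch illustrates that idea in miniature; it is not the LONI Pipeline's implementation, and the step names and toy graph are illustrative:

```python
# Conceptual sketch of dependency-ordered workflow execution: resolve a
# topological order over the step graph, then run each step with the results
# of its predecessors available. Step names and callables are illustrative.
from graphlib import TopologicalSorter

def run_workflow(steps, deps):
    """steps: name -> callable(results_so_far); deps: name -> prerequisites."""
    order = TopologicalSorter(deps).static_order()  # prerequisites come first
    results = {}
    for name in order:
        results[name] = steps[name](results)
    return results

steps = {
    "align": lambda r: "aligned",
    "call":  lambda r: r["align"] + "+called",      # consumes align's output
}
out = run_workflow(steps, {"align": set(), "call": {"align"}})
```

A production system like the Pipeline additionally validates the graph, dispatches steps to remote compute resources, and records provenance; the sketch shows only the ordering logic.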
Cloud Computing and Its Applications in GIS
NASA Astrophysics Data System (ADS)
Kang, Cao
2011-12-01
Cloud computing is a novel computing paradigm that offers highly scalable and highly available distributed computing services. The objectives of this research are to: 1. analyze and understand cloud computing and its potential for GIS; 2. discover the feasibilities of migrating truly spatial GIS algorithms to distributed computing infrastructures; 3. explore a solution to host and serve large volumes of raster GIS data efficiently and speedily. These objectives thus form the basis for three professional articles. The first article is entitled "Cloud Computing and Its Applications in GIS". This paper introduces the concept, structure, and features of cloud computing. Features of cloud computing such as scalability, parallelization, and high availability make it a very capable computing paradigm. Unlike High Performance Computing (HPC), cloud computing uses inexpensive commodity computers. The uniform administration systems in cloud computing make it easier to use than GRID computing. Potential advantages of cloud-based GIS systems such as lower barrier to entry are consequently presented. Three cloud-based GIS system architectures are proposed: public cloud- based GIS systems, private cloud-based GIS systems and hybrid cloud-based GIS systems. Public cloud-based GIS systems provide the lowest entry barriers for users among these three architectures, but their advantages are offset by data security and privacy related issues. Private cloud-based GIS systems provide the best data protection, though they have the highest entry barriers. Hybrid cloud-based GIS systems provide a compromise between these extremes. The second article is entitled "A cloud computing algorithm for the calculation of Euclidian distance for raster GIS". Euclidean distance is a truly spatial GIS algorithm. 
Classical algorithms such as the pushbroom and growth ring techniques require computational propagation through the entire raster image, which makes them incompatible with the distributed nature of cloud computing. This paper presents a parallel Euclidean distance algorithm that works seamlessly with the distributed nature of cloud computing infrastructures. The mechanism of this algorithm is to subdivide a raster image into sub-images and wrap them with a one-pixel-deep edge layer of individually computed distance information. Each sub-image is then processed by a separate node, after which the resulting sub-images are reassembled into the final output. It is shown that while any rectangular sub-image shape can be used, those approximating squares are computationally optimal. This study also serves as a demonstration of this subdivide-and-layer-wrap strategy, which would enable the migration of many truly spatial GIS algorithms to cloud computing infrastructures. However, this research also indicates that certain spatial GIS algorithms such as cost distance cannot be migrated by adopting this mechanism, which presents significant challenges for the development of cloud-based GIS systems. The third article is entitled "A Distributed Storage Schema for Cloud Computing based Raster GIS Systems". This paper proposes a NoSQL Database Management System (NDDBMS) based raster GIS data storage schema. NDDBMS has good scalability and is able to use distributed commodity computers, which make it superior to Relational Database Management Systems (RDBMS) in a cloud computing environment. In order to provide optimized data service performance, the proposed storage schema analyzes the nature of commonly used raster GIS data sets. It distinguishes two categories of commonly used data sets, and then designs corresponding data storage models for both categories. 
As a result, the proposed storage schema is capable of hosting and serving enormous volumes of raster GIS data speedily and efficiently on cloud computing infrastructures. In addition, the scheme also takes advantage of the data compression characteristics of Quadtrees, thus promoting efficient data storage. Through this assessment of cloud computing technology, the exploration of the challenges and solutions to the migration of GIS algorithms to cloud computing infrastructures, and the examination of strategies for serving large amounts of GIS data in a cloud computing infrastructure, this dissertation lends support to the feasibility of building a cloud-based GIS system. However, there are still challenges that need to be addressed before a full-scale functional cloud-based GIS system can be successfully implemented. (Abstract shortened by UMI.)
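The subdivide-and-layer-wrap mechanism described in the second article can be sketched as plain tiling with a one-cell halo plus reassembly. This is a conceptual illustration only: the per-tile distance computation itself is omitted, and all function names are assumptions, not the dissertation's code:

```python
# Conceptual sketch of the subdivide-and-layer-wrap strategy: cut a raster
# into square tiles, give each tile a one-cell halo of neighbouring values
# (the "edge layer" each node would need), then reassemble the tile cores.

def split_with_halo(raster, tile):
    """Yield (r0, c0, top, left, sub); sub carries a 1-cell halo where available."""
    rows, cols = len(raster), len(raster[0])
    for r0 in range(0, rows, tile):
        for c0 in range(0, cols, tile):
            top, left = max(r0 - 1, 0), max(c0 - 1, 0)
            bottom, right = min(r0 + tile + 1, rows), min(c0 + tile + 1, cols)
            yield r0, c0, top, left, [row[left:right] for row in raster[top:bottom]]

def reassemble(shape, tiles, tile):
    """Write each tile's core cells (halo excluded) back into a full raster."""
    rows, cols = shape
    out = [[None] * cols for _ in range(rows)]
    for r0, c0, top, left, sub in tiles:
        for r in range(r0, min(r0 + tile, rows)):
            for c in range(c0, min(c0 + tile, cols)):
                out[r][c] = sub[r - top][c - left]
    return out

raster = [[r * 4 + c for c in range(4)] for r in range(4)]
tiles = list(split_with_halo(raster, 2))   # 4 tiles, each with halo
restored = reassemble((4, 4), tiles, 2)    # round-trip recovers the raster
```

In the distributed setting, each `(r0, c0, ..., sub)` tuple would be shipped to a separate node for the per-tile distance computation before reassembly.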
The computing and data infrastructure to interconnect EEE stations
NASA Astrophysics Data System (ADS)
Noferini, F.; EEE Collaboration
2016-07-01
The Extreme Energy Event (EEE) experiment is devoted to the search for high-energy cosmic rays through a network of telescopes installed in about 50 high schools distributed throughout the Italian territory. This project requires a dedicated data management infrastructure to collect data registered in stations very far from each other and to allow a coordinated analysis. Such an infrastructure is realized at INFN-CNAF, which operates a Cloud facility based on the OpenStack open-source Cloud framework and provides Infrastructure as a Service (IaaS) for its users. In 2014 EEE started to use it for collecting, monitoring and reconstructing the data acquired in all the EEE stations. For synchronization between the stations and the INFN-CNAF infrastructure we used BitTorrent Sync, a free peer-to-peer tool designed to optimize data synchronization between distributed nodes. All data folders are synchronized with the central repository in real time to allow immediate reconstruction of the data and their publication on a monitoring webpage. We present the architecture and the functionalities of this data management system, which provides a flexible environment for the specific needs of the EEE project.
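The basic check behind any such folder synchronization (which files at a station still differ from the central repository) can be sketched by comparing content hashes. This is a conceptual illustration, not how BitTorrent Sync works internally; the function and file names are assumptions:

```python
# Illustrative sketch: decide which station files still need to be
# transferred to the central repository by comparing SHA-256 digests.
# All names are assumptions for illustration.
import hashlib

def needs_sync(station_files, central_files):
    """Both arguments map a relative path to its bytes content."""
    pending = []
    for path, data in station_files.items():
        central = central_files.get(path)
        # Transfer if the file is missing centrally or its content differs.
        if central is None or hashlib.sha256(central).digest() != hashlib.sha256(data).digest():
            pending.append(path)
    return sorted(pending)

pending = needs_sync(
    {"run1.dat": b"a", "run2.dat": b"b"},   # station folder
    {"run1.dat": b"a"},                     # central repository
)
```

Real peer-to-peer sync tools refine this with chunk-level hashing and change notifications so that only modified blocks, not whole files, cross the network.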
Modeling the resilience of critical infrastructure: the role of network dependencies.
Guidotti, Roberto; Chmielewski, Hana; Unnikrishnan, Vipin; Gardoni, Paolo; McAllister, Therese; van de Lindt, John
2016-01-01
Water and wastewater networks, electric power networks, transportation networks, communication networks, and information technology networks are among the critical infrastructure in our communities; their disruption during and after hazard events greatly affects communities' well-being, economic security, social welfare, and public health. In addition, a disruption in one network may cause disruption to other networks and lead to their reduced functionality. This paper presents a unified theoretical methodology for the modeling of dependent/interdependent infrastructure networks and incorporates it in a six-step probabilistic procedure to assess their resilience. Both the methodology and the procedure are general, can be applied to any infrastructure network and hazard, and can model different types of dependencies between networks. As an illustration, the paper models the direct effects of seismic events on the functionality of a potable water distribution network and the cascading effects of the damage of the electric power network (EPN) on the potable water distribution network (WN). The results quantify the loss of functionality and delay in the recovery process due to dependency of the WN on the EPN. The results show the importance of capturing the dependency between networks in modeling the resilience of critical infrastructure.
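The cascading-dependency idea (a water node loses functionality when its supporting power node fails, even if the water node itself is undamaged) can be illustrated with a toy dependency map. This is not the paper's six-step probabilistic procedure; the node IDs and the one-to-one dependency structure are assumptions for illustration:

```python
# Toy sketch of network dependency: a water node is functional only if it is
# undamaged AND the power node it depends on is functional. Node IDs and the
# dependency map are illustrative, not the paper's model.

def functional_water_nodes(water_damaged, power_damaged, depends_on):
    """Return the set of water nodes that remain functional."""
    return {
        w for w, p in depends_on.items()
        if w not in water_damaged and p not in power_damaged
    }

deps = {"W1": "P1", "W2": "P1", "W3": "P2"}   # water node -> power node
ok = functional_water_nodes(set(), {"P1"}, deps)  # P1 fails; W1, W2 cascade
```

Here no water node is directly damaged, yet the failure of power node P1 takes W1 and W2 out of service, which is exactly the kind of cascading loss the paper quantifies.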
Integration of end-user Cloud storage for CMS analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez
End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit end-user Cloud storage for distributed data analysis, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities, as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model for Cloud storage in the Grid, implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and the infrastructure changes needed to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.
DOT National Transportation Integrated Search
2003-05-01
The Department of Transportation's (DOT) Research and Special Programs Administration (RSPA) began research to assess the vulnerabilities of the nation's transportation infrastructure and develop needed improvements in security in June 2001. The g...
A network-based distributed, media-rich computing and information environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, R.L.
1995-12-31
Sunrise is a Los Alamos National Laboratory (LANL) project started in October 1993. It is intended to be a prototype National Information Infrastructure development project. A main focus of Sunrise is to tie together enabling technologies (networking, object-oriented distributed computing, graphical interfaces, security, multi-media technologies, and data-mining technologies) with several specific applications. A diverse set of application areas was chosen to ensure that the solutions developed in the project are as generic as possible. Some of the application areas are materials modeling, medical records and image analysis, transportation simulations, and K-12 education. This paper provides a description of Sunrise and a view of the architecture and objectives of this evolving project. The primary objectives of Sunrise are three-fold: (1) to develop common information-enabling tools for advanced scientific research and its applications to industry; (2) to enhance the capabilities of important research programs at the Laboratory; (3) to define a new way of collaboration between computer science and industrially relevant research.
Some recent advances of intelligent health monitoring systems for civil infrastructures in HIT
NASA Astrophysics Data System (ADS)
Ou, Jinping
2005-06-01
Intelligent health monitoring systems are increasingly becoming a technique for ensuring the health and safety of civil infrastructures, as well as an important approach for studying the damage-accumulation and even disaster-evolution characteristics of civil infrastructures, and they attract prodigious research and development interest from scientists and engineers, since a great number of civil infrastructures are planned and built each year in mainland China. In this paper, some recent advances in the research, development and implementation of intelligent health monitoring systems for civil infrastructures in mainland China, especially at the Harbin Institute of Technology (HIT), P.R. China, are presented. The main contents include smart sensors such as optical fiber Bragg grating (OFBG) and polyvinylidene fluoride (PVDF) sensors, fatigue life gauges, self-sensing mortar and carbon fiber reinforced polymer (CFRP), wireless sensor networks, and their implementation in practical infrastructures such as offshore platform structures, hydraulic engineering structures, large-span bridges and large-space structures. Finally, the related research projects supported by the national foundation agencies of China are briefly introduced.
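To make the OFBG sensing concrete, the sketch below applies the standard first-order relation between Bragg wavelength shift and axial strain, Δλ/λ_B = (1 − p_e)·ε. The photoelastic coefficient p_e ≈ 0.22 is a textbook value for silica fibre, used here as an assumption; real sensors would use a device-specific calibration.

```python
# Illustrative sketch: converting an optical fiber Bragg grating (OFBG)
# wavelength shift into axial strain via the first-order relation
#   delta_lambda / lambda_B = (1 - p_e) * strain,
# with p_e ~ 0.22 assumed for silica fibre (textbook value, not a
# sensor-specific calibration).

def fbg_strain(lambda_b_nm, shift_nm, p_e=0.22):
    """Axial strain (dimensionless) from a Bragg wavelength shift."""
    return shift_nm / (lambda_b_nm * (1.0 - p_e))

# A 1550 nm grating shifting by 12 pm (0.012 nm):
strain = fbg_strain(1550.0, 0.012)
print(f"{strain * 1e6:.1f} microstrain")
```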
Likumahuwa, Sonja; Song, Hui; Singal, Robbie; Weir, Rosy Chang; Crane, Heidi; Muench, John; Sim, Shao-Chee; DeVoe, Jennifer E
2013-01-01
This article introduces the Community Health Applied Research Network (CHARN), a practice-based research network of community health centers (CHCs). Established by the Health Resources and Services Administration in 2010, CHARN is a network of 4 community research nodes, each with multiple affiliated CHCs and an academic center. The four nodes (18 individual CHCs and 4 academic partners in 9 states) are supported by a data coordinating center. Here we provide case studies detailing how CHARN is building research infrastructure and capacity in CHCs, with a particular focus on how community practice-academic partnerships were facilitated by the CHARN structure. The examples provided by the CHARN nodes include many of the building blocks of research capacity: communication capacity and "matchmaking" between providers and researchers; technology transfer; research methods tailored to community practice settings; and community institutional review board infrastructure to enable community oversight. We draw lessons learned from these case studies that we hope will serve as examples for other networks, with special relevance for community-based networks seeking to build research infrastructure in primary care settings.
State of Technology for Rehabilitation of Water Distribution Systems
The impact that the lack of investment in water infrastructure will have on the performance of aging underground infrastructure over time is well documented, and estimates of the needed funding range as high as $325 billion over the next 20 years. With the current annual replacement...
DOT National Transportation Integrated Search
2015-06-01
This document serves as an Operational Concept for the Transit Traveler Information Infrastructure Mobility Application. The purpose of this document is to provide an operational description of how the Transit Traveler Information Infrastructur...
Storing and using health data in a virtual private cloud.
Regola, Nathan; Chawla, Nitesh V
2013-03-13
Electronic health records are being adopted at a rapid rate due to increased funding from the US federal government. Health data provide the opportunity to identify possible improvements in health care delivery by applying data mining and statistical methods to the data, and will also enable a wide variety of new applications that will be meaningful to patients and medical professionals. Researchers are often granted access to health care data to assist in the data mining process, but HIPAA regulations mandate comprehensive safeguards to protect the data. Often universities (and presumably other research organizations) have an enterprise information technology infrastructure and a research infrastructure. Unfortunately, both of these infrastructures are generally not appropriate for sensitive research data such as HIPAA-regulated health data, as such data require special accommodations on the part of the enterprise information technology infrastructure (or increased security on the part of the research computing environment). Cloud computing, a concept that allows organizations to build complex infrastructures on leased resources, is rapidly evolving to the point that it is possible to build sophisticated network architectures with advanced security capabilities. We present a prototype infrastructure in Amazon's Virtual Private Cloud to allow researchers and practitioners to utilize the data in a HIPAA-compliant environment.
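One core design rule in such a virtual-private-cloud layout is that the subnet holding protected data should have no route to an internet gateway. The sketch below checks that property over a route-table description held as plain data; it is a hypothetical configuration check, not the authors' implementation, and the subnet and route names are illustrative.

```python
# Hypothetical sketch: in a VPC layout for protected health data, flag
# any subnet whose route table contains an internet-gateway ("igw-")
# route. The isolated data subnet should never appear in the result.

def exposed_subnets(route_tables):
    """Subnets whose route table contains an internet-gateway route."""
    return sorted(subnet for subnet, routes in route_tables.items()
                  if any(r.startswith("igw-") for r in routes))

routes = {
    "app_subnet":  ["local", "igw-12345"],   # reachable from outside
    "data_subnet": ["local"],                # isolated: HIPAA data here
}
print(exposed_subnets(routes))
```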
Shiramizu, Bruce; Shambaugh, Vicki; Petrovich, Helen; Seto, Todd B.; Ho, Tammy; Mokuau, Noreen; Hedges, Jerris R.
2016-01-01
Building research infrastructure capacity to address clinical and translational gaps has been a focus of funding agencies and foundations. Programs funded by the United States (US) government to support clinical translational research, such as the Clinical and Translational Science Awards, the Research Centers in Minority Institutions Infrastructure for Clinical and Translational Research (RCTR), and the Institutional Development Award Infrastructure for Clinical and Translational Research, have existed for over a decade to address racial and ethnic health disparities across the US. While the impact on the nation's health cannot be assessed over a short period, assessment of a program's impact can serve as a litmus test of its effectiveness at the institution and in its communities. We report the success of a Pilot Project Program in the University of Hawaii RCTR Award in advancing the careers of emerging investigators and community collaborators. Our findings demonstrate that the investment has had a far-reaching impact on engagement with community-based research collaborators, the career advancement of health-disparities investigators, and health policy. PMID:27797013
GREEN INFRASTRUCTURE RESEARCH PROGRAM: Rain Gardens
the National Risk Management Research Laboratory (NRMRL) rain garden evaluation is part of a larger collection of long-term research that evaluates a variety of stormwater management practices. The U.S. EPA recognizes the potential of rain gardens as a green infrastructure manag...
White Paper on Condition Assessment of Wastewater Collection Systems
The Office of Research and Development’s National Risk Management Research Laboratory has published this report in support of the Aging Water Infrastructure (AWI) Research Program, which directly supports the Office of Water’s Sustainable Water Infrastructure Initiative. Scienti...
NASA Astrophysics Data System (ADS)
Lim, Theodore C.; Welty, Claire
2017-09-01
Green infrastructure (GI) is an approach to stormwater management that promotes natural processes of infiltration and evapotranspiration, reducing surface runoff to conventional stormwater drainage infrastructure. As more urban areas incorporate GI into their stormwater management plans, greater understanding is needed on the effects of spatial configuration of GI networks on hydrological performance, especially in the context of potential subsurface and lateral interactions between distributed facilities. In this research, we apply a three-dimensional, coupled surface-subsurface, land-atmosphere model, ParFlow.CLM, to a residential urban sewershed in Washington DC that was retrofitted with a network of GI installations between 2009 and 2015. The model was used to test nine additional GI and imperviousness spatial network configurations for the site and was compared with monitored pipe-flow data. Results from the simulations show that GI located in higher flow-accumulation areas of the site intercepted more surface runoff, even during wetter and multiday events. However, a comparison of the differences between scenarios and levels of variation and noise in monitored data suggests that the differences would only be detectable between the most and least optimal GI/imperviousness configurations.
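The finding that GI in high flow-accumulation areas intercepts more runoff can be illustrated with a minimal D8 flow-accumulation pass over a toy elevation grid; this is a sketch of the standard terrain-analysis algorithm, not the paper's ParFlow.CLM model, and the example grid is invented.

```python
# Illustrative D8 flow accumulation: each cell drains to its steepest
# lower 8-neighbour; a cell's accumulation counts itself plus all
# upstream contributing cells. Cells with high accumulation are where
# GI would intercept the most surface runoff.

def flow_accumulation(elev):
    rows, cols = len(elev), len(elev[0])
    acc = [[1] * cols for _ in range(rows)]
    # Visit cells from highest to lowest so upstream totals are final
    # before they are passed downstream.
    order = sorted(((elev[r][c], r, c) for r in range(rows)
                    for c in range(cols)), reverse=True)
    for z, r, c in order:
        best, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    drop = z - elev[nr][nc]
                    if drop > best:          # steepest-descent neighbour
                        best, target = drop, (nr, nc)
        if target:                           # pass flow downstream
            acc[target[0]][target[1]] += acc[r][c]
    return acc

elev = [[9, 8, 7],
        [8, 6, 4],
        [7, 4, 1]]
print(flow_accumulation(elev))   # the lowest cell collects all the flow
```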
Storing Water in California's Hidden Reservoirs
NASA Astrophysics Data System (ADS)
Perrone, D.; Rohde, M. M.; Szeptycki, L.; Freyberg, D. L.
2014-12-01
California is experiencing one of its worst droughts in history; in early 2014, the Governor released the Water Action Plan outlining opportunities to secure reliable water supplies. Groundwater recharge and storage is suggested as an alternative to surface storage, but little research has been conducted to see if groundwater recharge is a competitive alternative to other water-supply infrastructure projects. Although groundwater recharge and storage data are not readily available, several voter-approved bonds have helped finance groundwater recharge and storage projects and can be used as a proxy for costs, geographic distribution, and interest in such projects. We mined and analyzed available grant applications submitted to the Department of Water Resources that include groundwater recharge and storage elements. We found that artificial recharge can be cheaper than other water-supply infrastructure, but the cost was dependent on the source of water, the availability and accessibility of infrastructure used to capture and convey water, and the method of recharge. Bond applications and funding awards were concentrated in the Central Valley and southern California - both are regions of high water demand. With less than 60% of proposals funded, there are opportunities for groundwater recharge and storage to play a bigger role in securing California's water supplies.
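The cost comparison the study performs amounts to computing a unit cost per acre-foot of yield for each candidate project. The sketch below shows this arithmetic with invented numbers; the figures are purely illustrative and are not taken from the bond applications the study mined.

```python
# Illustrative sketch with hypothetical numbers: comparing water-supply
# options by a simple (undiscounted) unit cost in dollars per acre-foot
# of yield over the project lifetime.

def unit_cost(capital_cost, annual_yield_af, lifetime_yr):
    """Cost per acre-foot delivered over the project lifetime."""
    return capital_cost / (annual_yield_af * lifetime_yr)

options = {
    "groundwater recharge": unit_cost(5_000_000, 2_000, 30),
    "surface reservoir":    unit_cost(60_000_000, 10_000, 30),
}
cheapest = min(options, key=options.get)
print(cheapest, round(options[cheapest], 2))
```

In practice, as the abstract notes, the comparison also hinges on the source of water, conveyance infrastructure, and recharge method, which a single unit-cost number hides.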
The ELIXIR channel in F1000Research.
Blomberg, Niklas; Oliveira, Arlindo; Mons, Barend; Persson, Bengt; Jonassen, Inge
2015-01-01
ELIXIR, the European life science infrastructure for biological information, is a unique initiative to consolidate Europe's national centres, services, and core bioinformatics resources into a single, coordinated infrastructure. ELIXIR brings together Europe's major life-science data archives and connects these with national bioinformatics infrastructures - the ELIXIR Nodes. This editorial introduces the ELIXIR channel in F1000Research; the aim of the channel is to collect and present ELIXIR's scientific and operational output, engage with the broad life science community and encourage discussion on proposed infrastructure solutions. Submissions will be assessed by the ELIXIR channel Advisory Board to ensure they are relevant to the ELIXIR community, and subjected to the F1000Research open peer-review process.
The EPOS Implementation Phase: building thematic and integrated services for solid Earth sciences
NASA Astrophysics Data System (ADS)
Cocco, Massimo; Epos Consortium, the
2015-04-01
The European Plate Observing System (EPOS) has a scientific vision and approach aimed at creating a pan-European infrastructure for Earth sciences to support a safe and sustainable society. To follow this vision, the EPOS mission is integrating a suite of diverse and advanced Research Infrastructures (RIs) in Europe relying on new e-science opportunities to monitor and understand the dynamic and complex Earth system. To this goal, the EPOS Preparatory Phase has designed a long-term plan to facilitate integrated use of data and products as well as access to facilities from mainly distributed existing and new research infrastructures for solid Earth Science. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth's physical processes that control earthquakes, volcanic eruptions, ground instability and tsunami as well as the processes driving tectonics and Earth surface dynamics. Through integration of data, models and facilities EPOS will allow the Earth Science community to make a step change in developing new concepts and tools for key answers to scientific and socio-economic questions concerning geo-hazards and geo-resources as well as Earth sciences applications to the environment and to human welfare. Since its conception EPOS has been built as "a single, Pan-European, sustainable and distributed infrastructure". EPOS is, indeed, the sole infrastructure for solid Earth Science in ESFRI and its pan-European dimension is demonstrated by the participation of 23 countries in its preparatory phase. EPOS is presently moving into its implementation phase further extending its pan-European dimension. The EPOS Implementation Phase project (EPOS IP) builds on the achievements of the successful EPOS preparatory phase project. The EPOS IP objectives are synergetic and coherent with the establishment of the new legal subject (the EPOS-ERIC in Italy). 
EPOS coordinates the existing and new solid Earth RIs within Europe and builds the integrating RI elements. This integration requires significant coordination between, among others, disciplinary (thematic) communities, national RI policies and initiatives, as well as geo- and IT-scientists. The RIs that EPOS is coordinating include: i) regionally distributed geophysical observing systems (seismological and geodetic networks); ii) local observatories (including geomagnetic, near-fault and volcano observatories); iii) analytical and experimental laboratories; iv) integrated satellite data and geological information services; v) new services for natural and anthropogenic hazards. Here we present the successful story of the EPOS Preparatory Phase and the progress towards the implementation of both integrated core services (ICS) and thematic core services (TCS) for the different communities participating in the integration plan. We aim to discuss the achieved results and the approach followed to design the implementation phase. The goal is to present and discuss the strategies adopted to foster the implementation of TCS, clarifying their crucial role as domain-specific service hubs for coordinating and harmonizing national resources/plans with the European dimension of EPOS, and their integration to develop the new ICS. We will present the prototype of the ICS central hub as a key contribution for providing multidisciplinary services for solid Earth sciences as well as the glue that keeps ICT aspects integrated and rationalized across EPOS. Finally, we will discuss the well-defined role of the EPOS-ERIC Headquarters in coordinating and harmonizing national RIs and EPOS services (through ICS and TCS), looking for an effective commitment by national governments. It will be an important and timely opportunity to discuss the EPOS roadmap toward the operation of this novel multidisciplinary platform for discoveries to foster scientific excellence in solid Earth sciences.
Bottom-up capacity building for data providers in RITMARE
NASA Astrophysics Data System (ADS)
Pepe, Monica; Basoni, Anna; Bastianini, Mauro; Fugazza, Cristiano; Menegon, Stefano; Oggioni, Alessandro; Pavesi, Fabio; Sarretta, Alessandro; Carrara, Paola
2014-05-01
RITMARE is a Flagship Project of the Italian Ministry of Research, coordinated by the National Research Council (CNR). It aims at the interdisciplinary integration of Italian marine research. Sub-project 7 shall create an interoperable infrastructure for the project, capable of interconnecting the whole community of researchers involved. It will allow coordinating and sharing of data, processes, and information produced by the other sub-projects [1]. Spatial Data Infrastructures (SDIs) allow for interoperable sharing among heterogeneous, distributed spatial content providers. The INSPIRE Directive [2] regulates the development of a pan-European SDI despite the great variety of national approaches to managing spatial data. However, six years after its adoption, its growth is still hampered by technological, cultural, and methodological gaps. In particular, in the research sector, actors may be reluctant to comply with INSPIRE (or may not feel compelled to) because they are too concentrated on domain-specific activities or are hindered by technological issues. Indeed, the available technologies and tools for enabling standard-based discovery and access services are far from user-friendly and require time-consuming activities, such as metadata creation. Moreover, the INSPIRE implementation guidelines do not accommodate an essential component of environmental research, that is, in situ observations. In order to overcome most of the aforementioned issues and to enable researchers to actively contribute to the creation of the project infrastructure, a bottom-up approach has been adopted: a software suite has been developed, called Starter Kit, which is offered to research data production units so that they can become autonomous, independent nodes of data provision.
The Starter Kit enables the provision of geospatial resources, either geodata (e.g., maps and layers) or observations pulled from sensors, which are made accessible according to the OGC standards defined for the specific category of data (WMS, WFS, WCS, and SOS). Resources are annotated by fine-grained metadata that is compliant with standards (e.g., INSPIRE, SensorML) and also semantically enriched by leveraging controlled vocabularies and RDF-based data structures (e.g., the FOAF description of the project's organisation). The Starter Kit is packaged as an off-the-shelf virtual machine and is made available under an open license (GPL v.3) and with extensive support tools. Among the most innovative features of the architecture is the user-friendly, extensible approach to metadata creation. On the one hand, the number of metadata items that need to be provided by the user is reduced to the minimum by recourse to controlled vocabularies and context information. The semantic underpinning of these data structures enables advanced discovery functionalities. On the other hand, the templating mechanism adopted in metadata editing allows further schemata to be easily plugged in. The Starter Kit provides a consistent framework for capacity building that brings the heterogeneous actors in the project under the same umbrella, while preserving their individual practices, formats, and workflows. At the same time, users are empowered with standard-compliant web services that can be discovered and accessed both locally and remotely, e.g., by the RITMARE infrastructure itself. [1] Carrara, P., Sarretta, A., Giorgetti, A., Ribera D'Alcalà, M., Oggioni, A., & Partescano, E. (2013). An interoperable infrastructure for the Italian Marine Research. IMDIS 2013. [2] European Commission, "Establishing an Infrastructure for Spatial Information in the European Community (INSPIRE)", Directive 2007/2/EC, Official J. European Union, vol. 50, no. L 108, 2007, pp. 1-14.
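The metadata-reduction idea can be sketched as follows: the researcher supplies only a few fields, and the rest are filled from context information and a controlled vocabulary. This is an illustration of the approach, not the Starter Kit's actual code; the field names, vocabulary, and defaults are hypothetical.

```python
# Illustrative sketch of context-driven metadata creation: merge the
# few user-supplied items with defaults from context information, and
# expand a controlled-vocabulary code into its full term.

CONTEXT = {"provider": "CNR", "project": "RITMARE", "license": "CC-BY"}
VOCAB = {"sst": "Sea Surface Temperature", "chl": "Chlorophyll-a"}

def make_record(user_fields):
    """Build a full metadata record from minimal user input."""
    record = dict(CONTEXT)                        # context fills the gaps
    record.update(user_fields)
    record["keyword"] = VOCAB[record["keyword"]]  # expand vocabulary code
    return record

rec = make_record({"title": "Adriatic SST, July 2013", "keyword": "sst"})
print(rec["keyword"], "-", rec["provider"])
```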
INNOVATION AND RESEARCH FOR WATER INFRASTRUCTURE FOR THE 21ST CENTURY RESEARCH PLAN
This plan has been developed to provide the Office of Research and Development (ORD) with a guide for implementing a research program that addresses high priority needs of the Nation relating to its drinking water and wastewater infrastructure. By identifying these critical need...
Risk management of infrastructure development in border area Indonesia - Malaysia
NASA Astrophysics Data System (ADS)
Fitri, Suryani; Trikariastoto, Reinita, Ita
2017-11-01
A border area is geographically adjacent to a neighboring country, with the primary functions of maintaining state sovereignty and public welfare. The area in question comprises the provinces, districts, or cities that directly adjoin national boundaries (or territory) and/or have a functional relationship (linkage) with them, and it has strategic value for the state. The border area is considered strategic because it involves the national lives of many people in terms of political, economic, social, and cultural interests as well as defense and security (poleksosbudhankam), whether on land, at sea, or in the air. Realizing such development requires research on the area, drawing on good practices from cities in other countries that have met these challenges and that can be applied with at most minor changes or adjustments; furthermore, the application must be supported by the availability of funds. This study discusses the obstacles to, and drivers of, developing an ideal border area with major supporting infrastructure for housing, transportation, energy availability, and clean-water distribution, strengthening its function across five pillars: central community service; trade and distribution center; financial center; tourism center; and community development. Articulation between key stakeholders such as government, the private sector, and the community is a major concern of this study, including in determining appropriate financing schemes.
The results of this study will be recommended to the government to improve the reliability of infrastructure development in the border area, particularly for housing, transport, energy, and clean-water infrastructure projects.
Development Model for Research Infrastructures
NASA Astrophysics Data System (ADS)
Wächter, Joachim; Hammitzsch, Martin; Kerschke, Dorit; Lauterjung, Jörn
2015-04-01
Research infrastructures (RIs) are platforms integrating facilities, resources and services used by research communities to conduct research and foster innovation. RIs include scientific equipment, e.g., sensor platforms, satellites or other instruments, but also scientific data, sample repositories and archives. E-infrastructures, on the other hand, provide the technological substratum and middleware to interlink distributed RI components with computing systems and communication networks. The resulting platforms provide the foundation for the design and implementation of RIs and play an increasing role in the advancement and exploitation of knowledge and technology. RIs are regarded as essential for achieving and maintaining excellence in research and innovation, which is crucial for the European Research Area (ERA). The implementation of RIs has to be considered a long-term, complex development process, often spanning a period of 10 or more years. The ongoing construction of Spatial Data Infrastructures (SDIs) provides a good example of the general complexity of infrastructure development processes, especially in system-of-systems environments. A set of directives issued by the European Commission provided a framework of guidelines for the implementation processes, addressing the relevant content and encoding of data as well as the standards for service interfaces and the integration of these services into networks. Additionally, a time schedule for the overall construction process was specified. As a result, this process advances with strong participation by member states and responsible organisations. Today, SDIs provide the operational basis for new digital business processes in both national and local authorities. Currently, the development of integrated RIs in Earth and Environmental Sciences is characterised by the following properties:
• A high number of parallel activities on European and national levels, with numerous institutes and organisations participating.
• The maturity of individual scientific domains differs considerably.
• Technologically and organisationally, many different RI components have to be integrated. Individual systems are often complex and have a long-term history. Existing approaches are at different maturity levels, e.g. in relation to the standardisation of interfaces.
• The concrete implementation process consists of independent and often parallel development activities. In many cases no detailed architectural blueprint for the envisioned system exists.
• Most of the funding currently available for RI implementation is provided on a project basis.
To increase synergies in infrastructure development, the authors propose a specific RI Maturity Model (RIMM) that is specifically qualified for open system-of-systems environments. RIMM is based on the concepts of Capability Maturity Models for organisational development, concretely the Levels of Conceptual Interoperability Model (LCIM) specifying the technical, syntactic, semantic, pragmatic, dynamic, and conceptual layers of interoperation [1]. The model is complemented by the identification and integration of growth factors (according to the Nolan Stages Theory [2]). These factors include supply and demand factors. Supply factors comprise available resources, e.g., data, services and IT-management capabilities, including organisations and IT personnel. Demand factors are the overall application portfolio for RIs but also the skills and requirements of the scientists and communities using the infrastructure. RIMM thus enables a balanced development process of RIs and RI components by evaluating the status of the supply and demand factors in relation to specific levels of interoperability. [1] Tolk, A., Diallo, A., Turnitsa, C. (2007): Applying the Levels of Conceptual Interoperability Model in Support of Integratability, Interoperability, and Composability for System-of-Systems Engineering. Systemics, Cybernetics and Informatics, Volume 5 - Number 5.
[2] Mutsaers, E.-J., van der Zee, H., and Giertz, H. (1998): The evolution of information technology. Information Management & Computer Security, Volume 6 - Issue 3.
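The layered interoperability assessment described in this abstract can be sketched as a small data model. The level names follow LCIM [1]; the "balanced maturity" rule below (effective maturity capped by the weaker of supply and demand) is an illustrative reading of RIMM, not the authors' formal definition.

```python
from enum import IntEnum

class LcimLevel(IntEnum):
    """Layers of interoperation per the Levels of Conceptual
    Interoperability Model (LCIM) [1], ordered from none to conceptual."""
    NONE = 0
    TECHNICAL = 1
    SYNTACTIC = 2
    SEMANTIC = 3
    PRAGMATIC = 4
    DYNAMIC = 5
    CONCEPTUAL = 6

def balanced_maturity(supply: LcimLevel, demand: LcimLevel) -> LcimLevel:
    """Illustrative RIMM-style rule: an RI component's effective maturity
    is capped by the weaker of its supply factors (data, services, IT
    management capability) and demand factors (application portfolio,
    user skills and requirements)."""
    return min(supply, demand)

# Example: services expose semantic metadata, but the user community
# only exploits syntax-level access, so the effective level is lower.
print(balanced_maturity(LcimLevel.SEMANTIC, LcimLevel.SYNTACTIC).name)  # prints SYNTACTIC
```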
Assessing Research Interest and Capacity in Community Health Centers
Bhuiya, Nazmim; Pernice, Joan; Khan, Sami M.; Sequist, Thomas D.; Tendulkar, Shalini A.
2013-01-01
Abstract Objective Community health centers (CHCs) have great potential to participate in the development of evidence‐based primary care but face obstacles to engagement in clinical translational research. Methods To understand factors associated with CHC interest in building research infrastructure, Harvard Catalyst and the Massachusetts League of Community Health Centers conducted an online survey of medical directors in all 50 Massachusetts CHC networks. Results Thirty‐two (64%) medical directors completed the survey, representing 126 clinical sites. Over 80% reported that their primary care providers (PCPs) were slightly to very interested in future clinical research and that they were interested in building research infrastructure at their CHC. Frequently cited barriers to participation in research included financial issues, lack of research skills, and lack of research infrastructure. In bivariate analyses, PCP interest in future clinical research and a belief that involvement in research contributed to PCP retention were significantly associated with interest in building research infrastructure. Conclusion CHCs' critical role in caring for vulnerable populations ideally positions them to raise relevant research questions and translate evidence into practice. Our findings suggest high interest in engagement in research among CHC leadership. CTSAs have a unique opportunity to support local CHCs in this endeavor. PMID:24127928
Accelerator science and technology in Europe: EuCARD 2012
NASA Astrophysics Data System (ADS)
Romaniuk, Ryszard S.
2012-05-01
Accelerator science and technology is one of the key enablers of developments in particle physics, photon physics, and applications in medicine and industry. The paper presents a digest of research results in the domain of accelerator science and technology in Europe, presented during the third annual meeting of EuCARD - the European Coordination for Accelerator Research and Development. The conference concerned the building of research infrastructure, including advanced photonic and electronic systems for servicing large high-energy physics experiments. A few basic groups of such systems were debated, including: measurement-control networks of large geometrical extent, multichannel systems for the acquisition of large amounts of metrological data, and precision photonic networks for the distribution of reference time, frequency and phase.
ERIC Educational Resources Information Center
Perz, Stephen G.; Shenkin, Alexander; Barnes, Grenville; Cabrera, Liliana; Carvalho, Lucas A.; Castillo, Jorge
2012-01-01
Infrastructure is a worldwide policy priority for national development via regional integration into the global economy. However, economic, ecological and social research draws contrasting conclusions about the consequences of infrastructure. We present a synthetic approach to the study of infrastructure, focusing on a multidimensional treatment…
Research Challenges and Opportunities for Clinically Oriented Academic Radiology Departments.
Decker, Summer J; Grajo, Joseph R; Hazelton, Todd R; Hoang, Kimberly N; McDonald, Jennifer S; Otero, Hansel J; Patel, Midhir J; Prober, Allen S; Retrouvey, Michele; Rosenkrantz, Andrew B; Roth, Christopher G; Ward, Robert J
2016-01-01
Between 2004 and 2012, US funding for the biomedical sciences decreased to historic lows. Health-related research was crippled, receiving only 1/20th of overall federal scientific funding. Despite the current funding climate, there is increased pressure on academic radiology programs to establish productive research programs. Whereas larger programs have resources that can be utilized at their institutions, small to medium-sized programs often struggle with a lack of infrastructure and support. To address these concerns, the Association of University Radiologists' Radiology Research Alliance formed a task force to explore untapped research productivity potential in these smaller radiology departments. We conducted an online survey of faculty at smaller clinically funded programs and found that, while they were interested in doing research and felt it was important to the success of the field, barriers such as lack of resources and time were proving difficult to overcome. One potential solution proposed by this task force is a collaborative structured research model in which multiple participants from multiple institutions come together in well-defined roles that allow for an equitable distribution of research tasks and pooling of resources and expertise. Under this model, smaller programs will have an opportunity to share their unique perspectives on how to address research topics and make a measurable impact on the field of radiology as a whole. Through a health services focus, projects are more likely to succeed in the context of limited funding and infrastructure while simultaneously providing value to the field. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wiedensohler, A.; Birmili, W.; Nowak, A.; Sonntag, A.; Weinhold, K.; Merkel, M.; Wehner, B.; Tuch, T.; Pfeifer, S.; Fiebig, M.; Fjäraa, A. M.; Asmi, E.; Sellegri, K.; Depuy, R.; Venzac, H.; Villani, P.; Laj, P.; Aalto, P.; Ogren, J. A.; Swietlicki, E.; Williams, P.; Roldin, P.; Quincey, P.; Hüglin, C.; Fierz-Schmidhauser, R.; Gysel, M.; Weingartner, E.; Riccobono, F.; Santos, S.; Grüning, C.; Faloon, K.; Beddows, D.; Harrison, R.; Monahan, C.; Jennings, S. G.; O'Dowd, C. D.; Marinoni, A.; Horn, H.-G.; Keck, L.; Jiang, J.; Scheckman, J.; McMurry, P. H.; Deng, Z.; Zhao, C. S.; Moerman, M.; Henzing, B.; de Leeuw, G.; Löschau, G.; Bastian, S.
2012-03-01
Mobility particle size spectrometers, often referred to as DMPS (Differential Mobility Particle Sizers) or SMPS (Scanning Mobility Particle Sizers), have found a wide range of applications in atmospheric aerosol research. However, the comparability of measurements conducted world-wide is hampered by a lack of generally accepted technical standards and guidelines with respect to the instrumental set-up, measurement mode, data evaluation and quality control. Technical standards defining minimum requirements for mobility size spectrometry in long-term atmospheric aerosol measurements were therefore developed. Technical recommendations include continuous monitoring of flow rates, temperature, pressure, and relative humidity for the sheath and sample air in the differential mobility analyzer. We compared commercial and custom-made inversion routines that calculate particle number size distributions from the measured electrical mobility distributions. All inversion routines agree within a few per cent for a given set of raw data. Furthermore, this work summarizes the results from several instrument intercomparison workshops conducted within the European infrastructure projects EUSAAR (European Supersites for Atmospheric Aerosol Research) and ACTRIS (Aerosols, Clouds, and Trace gases Research InfraStructure Network) to determine present uncertainties, especially of custom-built mobility particle size spectrometers. Under controlled laboratory conditions, the particle number size distributions from 20 to 200 nm determined by mobility particle size spectrometers of different design agree within an uncertainty range of around ±10% after correcting for internal particle losses, while below and above this size range the discrepancies increased. For particles larger than 200 nm, the uncertainty range increased to 30%, which could not be explained.
The network reference mobility spectrometers of identical design agreed within ±4% in the peak particle number concentration when all settings were made carefully. The agreement of these reference instruments on total particle number concentration was demonstrated to be within 5%. Additionally, a new data structure for particle number size distributions was introduced to store and disseminate the data at EMEP (European Monitoring and Evaluation Programme). This structure contains three levels: raw data, processed data, and final particle size distributions. Importantly, we recommend reporting raw measurements, including all relevant instrument parameters, as well as complete documentation of all data transformation and correction steps. These technical and data-structure standards aim to enhance the quality of long-term size distribution measurements, their comparability between different networks and sites, and their transparency and traceability back to raw data.
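The three-level reporting structure described above (raw data, processed data, final size distributions, plus a log of all correction steps for traceability) might be modelled roughly as follows. All field names here are illustrative assumptions, not the actual EMEP schema.

```python
from dataclasses import dataclass, field

@dataclass
class RawRecord:
    """Level 0: raw electrical mobility counts plus the continuously
    monitored instrument parameters the recommendations call for."""
    timestamp: str
    mobility_counts: list        # raw counts per mobility channel
    sheath_flow_lpm: float       # monitored sheath air flow rate
    sample_flow_lpm: float       # monitored sample air flow rate
    temperature_k: float
    pressure_hpa: float
    relative_humidity: float

@dataclass
class ProcessedRecord:
    """Level 1: inverted number size distribution, before final corrections."""
    timestamp: str
    diameters_nm: list
    dn_dlogdp: list              # number size distribution dN/dlogDp

@dataclass
class FinalRecord:
    """Level 2: final distribution after internal particle-loss correction,
    carrying a log of every transformation step back to the raw data."""
    timestamp: str
    diameters_nm: list
    dn_dlogdp: list
    correction_log: list = field(default_factory=list)
```

Keeping all three levels, rather than only the final product, is what makes the reported distributions traceable back to raw measurements as the abstract recommends.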
NASA Astrophysics Data System (ADS)
Papa, Mauricio; Shenoi, Sujeet
The information infrastructure -- comprising computers, embedded devices, networks and software systems -- is vital to day-to-day operations in every sector: information and telecommunications, banking and finance, energy, chemicals and hazardous materials, agriculture, food, water, public health, emergency services, transportation, postal and shipping, government and defense. Global business and industry, governments, indeed society itself, cannot function effectively if major components of the critical information infrastructure are degraded, disabled or destroyed. Critical Infrastructure Protection II describes original research results and innovative applications in the interdisciplinary field of critical infrastructure protection. Also, it highlights the importance of weaving science, technology and policy in crafting sophisticated, yet practical, solutions that will help secure information, computer and network assets in the various critical infrastructure sectors. Areas of coverage include: - Themes and Issues - Infrastructure Security - Control Systems Security - Security Strategies - Infrastructure Interdependencies - Infrastructure Modeling and Simulation This book is the second volume in the annual series produced by the International Federation for Information Processing (IFIP) Working Group 11.10 on Critical Infrastructure Protection, an international community of scientists, engineers, practitioners and policy makers dedicated to advancing research, development and implementation efforts focused on infrastructure protection. The book contains a selection of twenty edited papers from the Second Annual IFIP WG 11.10 International Conference on Critical Infrastructure Protection held at George Mason University, Arlington, Virginia, USA in the spring of 2008.
This webinar describes the use of VELMA, a spatially-distributed ecohydrological model, to identify green infrastructure (GI) best management practices for protecting water quality in intensively managed watersheds. The seminar will include a brief description of VELMA and an ex...
The current problem in the United States is that the water infrastructure is aging and spending has not been adequate to repair, replace, or rehabilitate drinking water distribution systems and wastewater collection systems. The American Society of Civil Engineers Report Card in...
Modernization of B-2 Data, Video, and Control Systems Infrastructure
NASA Technical Reports Server (NTRS)
Cmar, Mark D.; Maloney, Christian T.; Butala, Vishal D.
2012-01-01
The National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) Plum Brook Station (PBS) Spacecraft Propulsion Research Facility, commonly referred to as B-2, is NASA's third largest thermal-vacuum facility with propellant systems capability. B-2 has completed a modernization effort of its facility's legacy data, video and control systems infrastructure to accommodate modern integrated testing and Information Technology (IT) Security requirements. Integrated systems tests have been conducted to demonstrate the new data, video and control systems' functionality and capability. Discrete analog signal conditioners have been replaced by new programmable signal-processing hardware that is integrated with the data system. This integration supports automated calibration and verification of the analog subsystem. Modern measurement systems analysis (MSA) tools are being developed to help verify system health and measurement integrity. Legacy hard-wired digital data systems have been replaced by distributed Fibre Channel (FC) network-connected digitizers, where high-speed sampling rates have increased to 256,000 samples per second. Several analog video cameras have been replaced by digital image and storage systems. Hard-wired analog control systems have been replaced by Programmable Logic Controllers (PLC), fiber optic network (FON) infrastructure and human-machine interface (HMI) operator screens. New IT Security procedures and schemes have been employed to control data access and process control flows. Due to the nature of testing possible at B-2, flexibility and configurability of systems has been central to the architecture during modernization.
ForM@Ter: a solid Earth thematic pole
NASA Astrophysics Data System (ADS)
Ostanciaux, Emilie; Jamet, Olivier; Mandea, Mioara; Diament, Michel
2014-05-01
Over the last years, several notable initiatives have been developed to provide the Solid Earth sciences with an efficient research e-infrastructure. The EPOS project (European Plate Observing System) was included in the ESFRI roadmap in 2008. The 7th European Framework Programme funded e-science environments such as the Virtual Earthquake and Seismology Research Community in Europe (VERCE). GEO supports the development of the Geohazard Supersites and Natural Laboratories portal, while the ESA SSEP project (SuperSites Exploitation Platform) is developing as a Helix Nebula use case. Meanwhile, operational use of space data for emergency management is in constant progress within the Copernicus services. This rich activity still leaves some gaps between data availability and its scientific use, either for technical reasons (big data issues) or due to the need for better support in terms of expert knowledge of the data, software availability, or data cost. French infrastructures for data distribution are organised around National Observatory Services (in situ data), scientific services participating in the International Association of Geodesy data centres, and wider research infrastructures such as the Réseau Sismologique et géodésique Français (RESIF), which contributes to EPOS. The need for thematic cooperative platforms has been underlined over the last years. In 2009, following a scientific prospective exercise of the French national space agency (CNES), the urgent need became clear to create thematic centres designed to federate the scientific community of Earth observation. Four thematic data centres are currently developing in France, in the fields of ocean, atmosphere, critical zone and solid Earth sciences. For Solid Earth research, the project - named ForM@Ter - was initiated at the beginning of 2012 to design, with the scientific community, the perimeter, structure and functions of such a thematic centre.
It was launched by CNES and the National Centre for Scientific Research (CNRS), with the active participation of the National Institute for Geographical and Forestry Information (IGN). Currently, it relies on the contributions of scientists from more than 20 French Earth science laboratories. Preliminary analysis showed that a focus on the determination of the shape and movements of the Earth's surface (ForM@Ter: Formes et Mouvements de la Terre) can federate a wide variety of scientific areas (earthquake cycle, tectonics, morphogenesis, volcanism, erosion dynamics, mantle rheology, geodesy) and offers many interfaces with other thematic fields, such as glaciology or snow evolution. This choice motivates the design of an ambitious data distribution scheme, including a wide variety of sources - optical imagery, SAR, GNSS, gravity, satellite altimetry data, in situ observations (inclinometers, seismometers, topometry, etc.) - as well as a wide variety of processing techniques. The challenge of the project, in the evolving context of current and forthcoming national and international e-infrastructures, is to design a non-redundant service based on interoperation with existing services, and to cope with highly complex data flows due to the granularity of the data and its associated knowledge.
Services supporting collaborative alignment of engineering networks
NASA Astrophysics Data System (ADS)
Jansson, Kim; Uoti, Mikko; Karvonen, Iris
2015-08-01
Large-scale facilities such as power plants, process factories, ships and communication infrastructures are often engineered and delivered through geographically distributed operations. The competencies required are usually distributed across several contributing organisations. In these complicated projects, it is of key importance that all partners work coherently towards a common goal. VTT and a number of industrial organisations in the marine sector have participated in a national collaborative research programme addressing these needs. The main output of this programme was the development of the Innovation and Engineering Maturity Model for Marine-Industry Networks. The recently completed European Union Framework Programme 7 project COIN developed innovative solutions and software services for enterprise collaboration and enterprise interoperability, with one area of focus being services for collaborative project management. This article first addresses a number of central underlying research themes and previous results that have influenced the development work mentioned above. It then presents two approaches to developing services that support distributed engineering work, analyses experience from use of the services, and identifies potential for development. The article concludes with a proposal for consolidating the two methodologies and outlines the characteristics and requirements of future services supporting collaborative alignment of engineering networks.
Optimal infrastructure maintenance scheduling problem under budget uncertainty.
DOT National Transportation Integrated Search
2010-05-01
This research addresses a general class of infrastructure asset management problems. Infrastructure : agencies usually face budget uncertainties that will eventually lead to suboptimal planning if : maintenance decisions are made without taking the u...
EPA NRMRL green Infrastructure research
Green Infrastructure is an engineering approach to wet weather flow management that uses infiltration, evapotranspiration, capture and reuse to better mimic the natural drainage processes than traditional gray systems. Green technologies supplement gray infrastructure to red...
The Aging Water Infrastructure (AWI) research program is part of EPA’s larger effort called the Sustainable Water Infrastructure (SI) initiative. The SI initiative brings together drinking water and wastewater utility managers; trade associations; local watershed protection organ...
NASA Astrophysics Data System (ADS)
Samors, R. J.; Allison, M. L.
2016-12-01
An e-infrastructure that supports data-intensive, multidisciplinary research is being organized under the auspices of the Belmont Forum consortium of national science funding agencies to accelerate the pace of science to address 21st century global change research challenges. The pace and breadth of change in information management across the data lifecycle means that no one country or institution can unilaterally provide the leadership and resources required to use data and information effectively, or needed to support a coordinated, global e-infrastructure. The five action themes adopted by the Belmont Forum are:
1. Adopt and make enforceable Data Principles that establish a global, interoperable e-infrastructure.
2. Foster communication, collaboration and coordination between the wider research community and the Belmont Forum and its projects through an e-Infrastructure Coordination, Communication, & Collaboration Office.
3. Promote effective data planning and stewardship in all Belmont Forum agency-funded research, with a goal to make it enforceable.
4. Determine international and community best practice to inform Belmont Forum research e-infrastructure policy through identification and analysis of cross-disciplinary research case studies.
5. Support the development of a cross-disciplinary training curriculum to expand human capacity in technology and data-intensive analysis methods.
The Belmont Forum is ideally poised to play a vital and transformative leadership role in establishing a sustained human and technical international data e-infrastructure to support global change research. In 2016, members of the 23-nation Belmont Forum began a collaborative implementation phase. Four multi-national teams are undertaking Action Themes based on the recommendations above.
Tasks include mapping the landscape, identifying and documenting existing data management plans, and scheduling a series of workshops that analyse trans-disciplinary applications of existing Belmont Forum projects to identify best practices and critical gaps that may be uniquely or best addressed by the Belmont Forum funding model. Concurrent work will define challenges in conducting international and interdisciplinary data management implementation plans and identify sources of relevant expertise and knowledge.
Clauzel, Céline; Girardet, Xavier; Foltête, Jean-Christophe
2013-09-30
The aim of the present work is to assess the potential long-distance effect of a high-speed railway line on the distribution of the European tree frog (Hyla arborea) in eastern France by combining graph-based analysis and species distribution models. This combination is a way to integrate patch-level connectivity metrics on different scales into a predictive model. The approach used is put in place before the construction of the infrastructure and allows areas potentially affected by isolation to be mapped. Through a diachronic analysis, comparing species distribution before and after the construction of the infrastructure, we identify changes in the probability of species presence and we determine the maximum distance of impact. The results show that the potential impact decreases with distance from the high-speed railway line and the largest disturbances occur within the first 500 m. Between 500 m and 3500 m, the infrastructure generates a moderate decrease in the probability of presence with maximum values close to -40%. Beyond 3500 m the average disturbance is less than -10%. The spatial extent of the impact is greater than the dispersal distance of the tree frog, confirming the assumption of the long-distance effect of the infrastructure. This predictive modelling approach appears to be a useful tool for environmental impact assessment and strategic environmental assessment. The results of the species distribution assessment may provide guidance for field surveys and support for conservation decisions by identifying the areas most affected. Copyright © 2013 Elsevier Ltd. All rights reserved.
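The distance-decay pattern reported above (strongest disturbance within the first 500 m, a moderate decrease with maximum values close to -40% between 500 m and 3500 m, and an average under -10% beyond) can be summarised as a simple piecewise function. The band boundaries come from the abstract; the single value returned per band is an illustrative upper bound, not a curve fitted to the study's data.

```python
def max_presence_change(distance_m: float) -> float:
    """Illustrative upper bound (in %) on the decrease in probability of
    European tree frog presence at a given distance from the high-speed
    railway line, following the distance bands reported in the abstract."""
    if distance_m < 500:
        return -40.0   # largest disturbances occur within the first 500 m
    elif distance_m <= 3500:
        return -40.0   # moderate decrease, maximum values close to -40%
    else:
        return -10.0   # beyond 3500 m, average disturbance below -10%
```

Such a banded summary is the kind of output an environmental impact assessment could map directly onto habitat patches around a planned infrastructure corridor.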
NASA Astrophysics Data System (ADS)
Allison, M. L.; Gurney, R. J.
2015-12-01
An e-infrastructure that supports data-intensive, multidisciplinary research is needed to accelerate the pace of science to address 21st century global change challenges. Data discovery, access, sharing and interoperability collectively form core elements of an emerging shared vision of e-infrastructure for scientific discovery. The pace and breadth of change in information management across the data lifecycle means that no one country or institution can unilaterally provide the leadership and resources required to use data and information effectively, or needed to support a coordinated, global e-infrastructure. An 18-month process involving ~120 experts in domain, computer, and social sciences from more than a dozen countries resulted in a formal set of recommendations to the Belmont Forum collaboration of national science funding agencies and others on what they are best suited to implement for the development of an e-infrastructure in support of global change research, including: adoption of data principles that promote a global, interoperable e-infrastructure; establishment of information and data officers for coordination of global data management and e-infrastructure efforts; promotion of effective data planning; determination of best practices; and development of a cross-disciplinary training curriculum on data management and curation. The Belmont Forum is ideally poised to play a vital and transformative leadership role in establishing a sustained human and technical international data e-infrastructure to support global change research. The international collaborative process that went into forming these recommendations is contributing to national governments, funding agencies, and international bodies working together to execute them.
Supporting NEESPI with Data Services - The SIB-ESS-C e-Infrastructure
NASA Astrophysics Data System (ADS)
Gerlach, R.; Schmullius, C.; Frotscher, K.
2009-04-01
Data discovery and retrieval is commonly among the first steps performed in any Earth science study. The way scientific data is searched and accessed has changed significantly over the past two decades. In particular, the development of the World Wide Web and the technologies that evolved along with it shortened the data discovery and data exchange process. At the same time, the amount of data collected and distributed by Earth scientists has increased exponentially, requiring new concepts for data management and sharing. One such concept for meeting this demand is to build Spatial Data Infrastructures (SDIs) or e-infrastructures. These infrastructures usually contain components for data discovery that allow users (or other systems) to query a catalogue or registry and retrieve metadata on available data holdings and services. Data access is typically granted via FTP/HTTP protocols or, more advanced, through Web Services. A Service Oriented Architecture (SOA) approach based on standardized services enables users to benefit from interoperability among different systems and to integrate distributed services into their applications. The Siberian Earth System Science Cluster (SIB-ESS-C) being established at the University of Jena (Germany) is such a spatial data infrastructure, following these principles and implementing standards published by the Open Geospatial Consortium (OGC) and the International Organization for Standardization (ISO). The prime objective is to provide researchers with a focus on Siberia with the technical means for data discovery, data access, data publication and data analysis. The region of interest covers the entire Asian part of the Russian Federation, from the Urals to the Pacific Ocean, including the Ob, Lena and Yenisei river catchments. The aim of SIB-ESS-C is to provide a comprehensive set of data products for Earth system science in this region.
Although SIB-ESS-C will be equipped with processing capabilities for in-house data generation (mainly from Earth observation), the current data holdings of SIB-ESS-C have been created in collaboration with a number of partners in previous and ongoing research projects (e.g. SIBERIA-II, SibFORD, IRIS). At the current development stage, the SIB-ESS-C system comprises a federated metadata catalogue accessible through the SIB-ESS-C Web Portal or from any OGC-CSW compliant client. Owing to full interoperability with other metadata catalogues, users of the SIB-ESS-C Web Portal are also able to search external metadata repositories. The Web Portal also contains a simple visualization component, which will be extended to a comprehensive visualization and analysis tool in the near future. All data products are already accessible as a Web Map Service (WMS) and will soon be made available as Web Feature and Web Coverage Services, allowing users to directly incorporate the data into their applications. The SIB-ESS-C infrastructure will be further developed as one node in a network of similar systems (e.g. NASA GIOVANNI) in the NEESPI region.
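A federated catalogue like the one described can typically be queried with a standard OGC CSW 2.0.2 GetRecords request over HTTP GET. The sketch below builds such a request URL; the endpoint is a placeholder (the real SIB-ESS-C catalogue URL is not given in the abstract), and individual catalogues may restrict which of these standard parameters they honour.

```python
from urllib.parse import urlencode

def csw_getrecords_url(endpoint: str, cql_filter: str, max_records: int = 10) -> str:
    """Build a KVP (HTTP GET) OGC CSW 2.0.2 GetRecords request URL.
    Any OGC-CSW compliant client or catalogue should understand these
    standard key-value parameters."""
    params = {
        "service": "CSW",
        "version": "2.0.2",
        "request": "GetRecords",
        "typeNames": "csw:Record",
        "resultType": "results",
        "elementSetName": "summary",       # return summary metadata records
        "constraintLanguage": "CQL_TEXT",  # filter expressed in OGC CQL
        "constraint": cql_filter,
        "maxRecords": str(max_records),
    }
    return endpoint + "?" + urlencode(params)

# Hypothetical endpoint and filter: search record free text for "Lena".
url = csw_getrecords_url("https://example.org/csw", "AnyText LIKE '%Lena%'")
```

In an SOA setting, the same URL pattern is what lets an external portal federate this catalogue simply by rewriting the endpoint.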
Managing Critical Infrastructures C.I.M. Suite
Dudenhoeffer, Donald
2018-05-23
See how a new software package developed by INL researchers could help protect infrastructure during natural disasters, terrorist attacks and electrical outages. For more information about INL research, visit http://www.facebook.com/idahonationallaboratory.
Green Infrastructure Research and Demonstration at the Edison Environmental Center
This presentation will review the need for storm water control practices and will present a portion of the green infrastructure research and demonstration being performed at the Edison Environmental Center.
Satellite Communications for Aeronautical Applications: Recent research and Development Results
NASA Technical Reports Server (NTRS)
Kerczewski, Robert J.
2001-01-01
Communications systems have always been a critical element in aviation. Until recently, nearly all communications between the ground and aircraft have been based on analog voice technology. But the future of global aviation requires a more sophisticated "information infrastructure" which not only provides more and better communications, but integrates the key information functions (communications, navigation, and surveillance) into a modern, network-based infrastructure. Satellite communications will play an increasing role in providing information infrastructure solutions for aviation. Developing and adapting satellite communications technologies for aviation use is now receiving increased attention as the urgency to develop information infrastructure solutions grows. The NASA Glenn Research Center is actively involved in research and development activities for aeronautical satellite communications, with a key emphasis on air traffic management communications needs. This paper describes the recent results and status of NASA Glenn's research program.
NASA Astrophysics Data System (ADS)
Glaves, Helen; Schaap, Dick
2016-04-01
The increasing adoption of an ocean-basin-level approach to marine research has led to a corresponding rise in the demand for large quantities of high-quality, interoperable data. This requirement for easily discoverable and readily available marine data is currently being addressed by initiatives such as SeaDataNet in Europe, Rolling Deck to Repository (R2R) in the USA and the Australian Ocean Data Network (AODN), each of which has implemented an e-infrastructure to facilitate the discovery and re-use of standardised multidisciplinary marine datasets available from a network of distributed repositories, data centres etc. within its own region. However, these regional data systems have been developed in response to the specific requirements of their users and in line with the priorities of the funding agency. They have also been created independently of the marine data infrastructures in other regions, often using different standards, data formats, technologies etc., which makes the integration of marine data from these regional systems for the purposes of basin-level research difficult. Marine research at the ocean basin level requires a common global framework for marine data management that is based on existing regional marine data systems but provides an integrated solution for delivering interoperable marine data to the user. The Ocean Data Interoperability Platform (ODIP/ODIP II) project brings together those responsible for the management of the selected marine data systems and other relevant technical experts with the objective of developing interoperability across the regional e-infrastructures. The commonalities and incompatibilities between the individual data infrastructures are identified and then used as the foundation for the specification of prototype interoperability solutions which demonstrate the feasibility of sharing marine data across the regional systems and also with relevant larger global data services such as GEO, COPERNICUS, IODE, POGO etc.
The potential impact for the individual regional data infrastructures of implementing these prototype interoperability solutions is also being evaluated to determine both the technical and financial implications of their integration within existing systems. These impact assessments form part of the strategy to encourage wider adoption of the ODIP solutions and approach beyond the current scope of the project which is focussed on regional marine data systems in Europe, Australia, the USA and, more recently, Canada.
Integrating biodiversity distribution knowledge: toward a global map of life.
Jetz, Walter; McPherson, Jana M; Guralnick, Robert P
2012-03-01
Global knowledge about the spatial distribution of species is orders of magnitude coarser in resolution than other geographically-structured environmental datasets such as topography or land cover. Yet such knowledge is crucial in deciphering ecological and evolutionary processes and in managing global change. In this review, we propose a conceptual and cyber-infrastructure framework for refining species distributional knowledge that is novel in its ability to mobilize and integrate diverse types of data such that their collective strengths overcome individual weaknesses. The ultimate aim is a public, online, quality-vetted 'Map of Life' that for every species integrates and visualizes available distributional knowledge, while also facilitating user feedback and dynamic biodiversity analyses. First milestones toward such an infrastructure have now been implemented. Copyright © 2011 Elsevier Ltd. All rights reserved.
The U.S. Environmental Protection Agency’s (EPA) Office of Research and Development (ORD) has long recognized the need for research and development in the area of drinking water and wastewater infrastructure. Most recently in support of the Agency’s Sustainable Water Infrastructu...
Implementation Practice and Implementation Research: A Report from the Field
ERIC Educational Resources Information Center
Brekke, John S.; Phillips, Elizabeth; Pancake, Laura; O, Anne; Lewis, Jenebah; Duke, Jessica
2009-01-01
The Interventions and Practice Research Infrastructure Program (IPRISP) funding mechanism was introduced by the National Institute of Mental Health (NIMH) to bridge the gap between the worlds of services research and the usual care practice in the community. The goal was to build infrastructure that would provide a platform for research to…
NASA Astrophysics Data System (ADS)
Akbardin, J.; Parikesit, D.; Riyanto, B.; TMulyono, A.
2018-05-01
Zones that produce land-based fishery commodities have limited distribution capability because of the condition of the available infrastructure. High demand for fishery commodities has led to growing distribution over inefficient distribution distances. Developing gravity theory with a constraint on the movement generation of the production zone can increase inter-zonal interaction effectively and efficiently through shorter movement distribution distances. A multiple-variable regression analysis of transportation infrastructure condition, based on service level and quantitative capacity, was used to estimate the 'mass' of the movement generation that is formed. The resulting movement distribution (Tid) model has the equation Tid = 27.04 - 0.49 tid, based on a power-model barrier function with calibration value β = 0.0496. Developing the movement generation 'mass' boundary at the production zone will effectively shorten the distribution distance. Shorter distribution distances will increase inter-zonal accessibility, allowing zones to interact according to the magnitude of the movement generation 'mass'.
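As a sketch of the gravity-model idea behind this abstract, the following minimal singly-constrained gravity distribution uses a power-form barrier (deterrence) function with the paper's calibration value β = 0.0496. The zone 'masses', attractions and distances are made-up illustrative numbers; this is not the authors' actual model specification.

```python
def deterrence(cost, beta=0.0496):
    # Power-form barrier function f(c) = c ** (-beta); beta from the paper's calibration.
    return cost ** (-beta)

def gravity_distribution(origins, destinations, costs, beta=0.0496):
    """Singly (origin-) constrained gravity model:
    T_ij = O_i * D_j * f(c_ij) / sum_k D_k * f(c_ik)."""
    trips = []
    for i, o in enumerate(origins):
        weights = [d * deterrence(costs[i][j], beta)
                   for j, d in enumerate(destinations)]
        total = sum(weights)
        trips.append([o * w / total for w in weights])
    return trips

O = [100.0, 80.0]              # movement generation "mass" of each production zone
D = [60.0, 120.0]              # attraction of each consumption zone
C = [[5.0, 12.0], [9.0, 4.0]]  # distribution distances between zones
T = gravity_distribution(O, D, C)
```

Because the model is origin-constrained, each row of the trip matrix sums to the zone's generated movement, while the deterrence function shifts trips toward nearer destinations.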
Clinical research: business opportunities for pharmacy-based investigational drug services.
Marnocha, R M
1999-02-01
The application by an academic health center of business principles to the conduct of clinical research is described. Re-engineering of the infrastructure for clinical research at the University of Wisconsin and University of Wisconsin Hospital and Clinics began in 1990 with the creation of the Center for Clinical Trials (CCT) and the restructuring of the investigational drug services (IDS). Strategies to further improve the institution's clinical research activities have been continually assessed and most recently have centered on the adaptation of a business philosophy within the institution's multidisciplinary research infrastructure. Toward that end, the CCT and IDS have introduced basic business principles into operational activities. Four basic business concepts have been implemented: viewing the research protocol as a commodity, seeking payment for services rendered, tracking investments, and assessing performance. It is proposed that incorporation of these basic business concepts is not only compatible with the infrastructure for clinical research but beneficial to that infrastructure. The adaptation of a business mindset is likely to enable an academic health center to reach its clinical research goals.
Recovery Act-SmartGrid regional demonstration transmission and distribution (T&D) Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hedges, Edward T.
This document represents the Final Technical Report for the Kansas City Power & Light Company (KCP&L) Green Impact Zone SmartGrid Demonstration Project (SGDP). The KCP&L project is partially funded by Department of Energy (DOE) Regional Smart Grid Demonstration Project cooperative agreement DE-OE0000221 in the Transmission and Distribution Infrastructure application area. This Final Technical Report summarizes the KCP&L SGDP as of April 30, 2015 and includes summaries of the project design, implementation, operations, and analysis performed as of that date.
NASA Astrophysics Data System (ADS)
Romaniuk, Ryszard S.
2013-10-01
Accelerator science and technology is one of the key enablers of developments in particle physics, photon physics and applications in medicine and industry. The paper presents a digest of research results in the domain of accelerator science and technology in Europe, shown during the realization of CARE (Coordinated Accelerator R&D) and EuCARD (European Coordination of Accelerator R&D) and during the national annual review meeting of TIARA - Test Infrastructure of the European Research Area in Accelerator R&D. The European projects on accelerator technology started in 2003 with CARE. TIARA is a European collaboration on accelerator technology which, by running research, technical, networking and infrastructural projects, has the duty to integrate the research and technical communities and infrastructures at the European scale. The Collaboration gathers all research centres with large accelerator infrastructures; other institutions, such as universities, are affiliated as associate members. TIARA-PP (preparatory phase) is a European infrastructural project run by this Consortium and realized inside EU FP7. The paper presents a general overview of CARE, EuCARD and especially TIARA activities, with an introduction containing a portrait of contemporary accelerator technology and a digest of its applications in modern society. The CARE, EuCARD and TIARA activities have integrated the European accelerator community in a very effective way, and these projects are very much expected to be continued.
National Study of Nursing Research Characteristics at Magnet®-Designated Hospitals.
Pintz, Christine; Zhou, Qiuping Pearl; McLaughlin, Maureen Kirkpatrick; Kelly, Katherine Patterson; Guzzetta, Cathie E
2018-05-01
To describe the research infrastructure, culture, and characteristics of building a nursing research program in Magnet®-designated hospitals. Magnet recognition requires hospitals to conduct research and implement evidence-based practice (EBP), yet the essential characteristics of productive nursing research programs are not well described. We surveyed 181 nursing research leaders at Magnet-designated hospitals to assess the characteristics in their hospitals associated with research infrastructure, research culture, and building a nursing research program. Magnet hospitals provide most of the needed research infrastructure and have a culture that supports nursing research. Higher scores for the 3 categories were found when hospitals had a nursing research director, a research department, and more than 10 nurse-led research studies in the past 5 years. While some respondents indicated that their nurse executives and leaders support the enculturation of EBP and research, there continue to be barriers to full implementation of these characteristics in practice.
Impact of electric vehicles on the IEEE 34 node distribution infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Zeming; Shalalfel, Laith; Beshir, Mohammed J.
2014-10-01
With the growing penetration of electric vehicles into daily life owing to their economic and environmental benefits, there will be both opportunities and challenges for utilities when connecting plug-in electric vehicles (PEV) to the distribution network. In this study, a thorough analysis based on a real-world project is conducted to evaluate the impacts of electric vehicle infrastructure on the grid with respect to system load flow, load factor, and voltage stability. The IEEE 34 node test feeder was selected and tested under different case scenarios, utilizing the electrical distribution design (EDD) software to determine the potential impacts to the grid.
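One of the impact metrics named in this study, load factor, is straightforward to compute from a load profile. The sketch below uses made-up illustrative numbers, not data from the IEEE 34 node study, to show how uncoordinated PEV charging that coincides with the evening peak lowers a feeder's load factor.

```python
def load_factor(profile):
    """Load factor = average demand / peak demand for a load profile (kW)."""
    return sum(profile) / (len(profile) * max(profile))

# Illustrative six-point daily feeder profile (kW); values are invented.
base = [400, 350, 500, 650, 900, 600]
# Hypothetical uncoordinated PEV charging concentrated around the evening peak.
ev = [0, 0, 0, 50, 200, 150]
combined = [b + e for b, e in zip(base, ev)]

lf_base = load_factor(base)   # load factor without EV charging
lf_ev = load_factor(combined) # load factor with peak-coincident EV charging
```

Because the added EV load raises the peak faster than the average, the load factor falls, which is one reason studies like this examine coordinated charging.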
Computers, the Internet and medical education in Africa.
Williams, Christopher D; Pitchforth, Emma L; O'Callaghan, Christopher
2010-05-01
OBJECTIVES This study aimed to explore the use of information and communications technology (ICT) in undergraduate medical education in developing countries. METHODS Educators (deans and heads of medical education) in English-speaking countries across Africa were sent a questionnaire to establish the current state of ICT at medical schools. Non-respondents were contacted firstly by e-mail, subsequently by two postal mailings at 3-month intervals, and finally by telephone. Main outcome measures included cross-sectional data about the availability of computers, specifications, Internet connection speeds, use of ICT by students, and teaching of ICT and computerised research skills, presented by country or region. RESULTS The mean computer : student ratio was 0.123. Internet speeds were rated as 'slow' or 'very slow' on a 5-point Likert scale by 25.0% of respondents overall, but by 58.3% in East Africa and 33.3% in West Africa (including Cameroon). Mean estimates showed that campus computers more commonly supported CD-ROM (91.4%) and sound (87.3%) than DVD-ROM (48.1%) and Internet (72.5%). The teaching of ICT and computerised research skills, and the use of computers by medical students for research, assignments and personal projects were common. CONCLUSIONS It is clear that ICT infrastructure in Africa lags behind that in other regions. Poor download speeds limit the potential of Internet resources (especially videos, sound and other large downloads) to benefit students, particularly in East and West (including Cameroon) Africa. CD-ROM capability is more widely available, but has not yet gained momentum as a means of distributing materials. Despite infrastructure limitations, ICT is already being used and there is enthusiasm for developing this further. Priority should be given to developing partnerships to improve ICT infrastructure and maximise the potential of existing technology.
NASA Astrophysics Data System (ADS)
Favali, Paolo; Beranzoli, Laura; Best, Mairi; Franceschini, PierLuigi; Materia, Paola; Peppoloni, Silvia; Picard, John
2014-05-01
EMSO (European Multidisciplinary Seafloor and Water Column Observatory) is a large-scale European Research Infrastructure (RI). It is a geographically distributed infrastructure composed of several deep-seafloor and water-column observatories, which will be deployed at key sites in European waters, spanning from the Arctic, through the Atlantic and Mediterranean, to the Black Sea, with the basic scientific objective of real-time, long-term monitoring of environmental processes related to the interaction between the geosphere, biosphere and hydrosphere. EMSO is one of the environmental RIs on the ESFRI roadmap. The ESFRI Roadmap identifies new RIs of pan-European importance that correspond to the long-term needs of European research communities. EMSO will be the sub-sea segment of the EU's large-scale Earth observation programme, Copernicus (previously known as GMES - Global Monitoring for Environment and Security), and will significantly enhance the observational capabilities of European member states. An open data policy compliant with the recommendations being developed within the GEOSS initiative (Global Earth Observation System of Systems) will allow for shared use of the infrastructure and the exchange of scientific information and knowledge. The processes that occur in the oceans have a direct impact on human societies, so it is crucial to improve our understanding of how they operate and interact.
To encompass the breadth of these major processes, sustained and integrated observations are required that appreciate the interconnectedness of atmospheric, surface ocean, biological pump, deep-sea, and solid-Earth dynamics and that can address: • natural and anthropogenic change; • interactions between ecosystem services, biodiversity, biogeochemistry, physics, and climate; • impacts of exploration and extraction of energy, minerals, and living resources; • geo-hazard early warning capability for earthquakes, tsunamis, gas-hydrate release, and slope instability and failure; • connecting scientific outcomes to stakeholders and policy makers, including government decision-makers. The development of large research infrastructure initiatives like EMSO must continuously take into account wide-reaching environmental and socio-economic implications and objectives. For this reason, an Ethics Committee was established early in EMSO's initial Preparatory Phase with responsibility for overseeing the key ethical and social aspects of the project.
These include: • promoting inclusive science communication and data dissemination services to civil society according to Open Access principles; • guaranteeing top-quality scientific information and data as the results of top-quality research; • promoting the increased adoption of eco-friendly, sustainable technologies through the dissemination of advanced scientific knowledge and best practices to the private sector and to policy makers; • developing education strategies in cooperation with academia and industry aimed at informing and sensitizing the general public to the environmental and socio-economic implications and benefits of large research infrastructure initiatives such as EMSO; • carrying out excellent science following strict criteria of research integrity, as expressed in the Montreal Statement (2013); • promoting geo-ethical awareness and innovation by spurring innovative approaches in the management of environmental aspects of large research projects; • supporting technological innovation by working closely in support of SMEs; • providing a constant, qualified and authoritative one-stop Reference Point and Advisory Service for politicians and decision-makers. The paper shows how geoethics is an essential tool for guiding methodological and operational choices and the management of a European project with great impact on the environment and society.
The Heliophysics Integrated Observatory
NASA Astrophysics Data System (ADS)
Csillaghy, A.; Bentley, R. D.
2009-12-01
HELIO is a new Europe-wide, FP7-funded distributed network of services that will address the needs of a broad community of researchers in heliophysics. This new research field explores the “Sun-Solar System Connection” and requires the joint exploitation of solar, heliospheric, magnetospheric and ionospheric observations. HELIO will provide the most comprehensive integrated information system in this domain; it will coordinate access to the distributed resources needed by the community and will provide access to services to mine and analyse the data. HELIO will be designed as a Service-oriented Architecture. The initial infrastructure will include services based on the metadata and data servers deployed by the European Grid of Solar Observations (EGSO). We will extend these to address observations from all the disciplines of heliophysics; differences in the way the domains describe and handle the data will be resolved using semantic mapping techniques. Processing and storage services will allow the user to explore the data and create products that meet stringent standards of interoperability. These capabilities will be orchestrated with the data and metadata services using the Taverna workflow tool. HELIO will address the challenges along the FP7 I3 activities model: (1) Networking: we will cooperate closely with the community to define new standards for heliophysics and the required capabilities of the HELIO system. (2) Services: we will integrate the services developed by the project and other groups to produce an infrastructure that can easily be extended to satisfy the growing and changing needs of the community. (3) Joint Research: we will develop search tools that span disciplinary boundaries and explore new types of user-friendly interfaces. HELIO will be a key component of a worldwide effort to integrate heliophysics data and will coordinate closely with international organizations to exploit synergies with complementary domains.
Towards a 3d Spatial Urban Energy Modelling Approach
NASA Astrophysics Data System (ADS)
Bahu, J.-M.; Koch, A.; Kremers, E.; Murshed, S. M.
2013-09-01
Today's need to reduce the environmental impact of energy use imposes dramatic changes on energy infrastructure and existing demand patterns (e.g. buildings) in their specific context. In addition, future energy systems are expected to integrate a considerable share of fluctuating power sources and equally a high share of distributed generation of electricity. Energy system models capable of describing such future systems and allowing the simulation of the impact of these developments therefore require a spatial representation in order to reflect the local context and the boundary conditions. This paper describes two recent research approaches developed at EIFER in the fields of (a) geo-localised simulation of heat energy demand in cities based on 3D morphological data and (b) spatially explicit Agent-Based Models (ABM) for the simulation of smart grids. 3D city models were used to assess the solar potential and heat energy demand of residential buildings, enabling cities to target building refurbishment potentials. Distributed energy systems require innovative modelling techniques in which individual components are represented and can interact. With this approach, several smart grid demonstrators were simulated, in which heterogeneous models are spatially represented. Coupling 3D geodata with energy system ABMs holds different advantages for both approaches. On the one hand, energy system models can be enhanced with high-resolution data from 3D city models and their semantic relations; furthermore, they allow for spatial analysis and visualisation of the results, with emphasis on spatial and structural correlations among the different layers (e.g. infrastructure, buildings, administrative zones) to provide an integrated approach. On the other hand, 3D models can benefit from a more detailed system description of energy infrastructure, representing dynamic phenomena, and from high-resolution models of energy use at component level.
The proposed modelling strategies conceptually and practically integrate urban spatial and energy planning approaches. The combined modelling approach that will be developed based on the described sectorial models holds the potential to represent hybrid energy systems coupling distributed generation of electricity with thermal conversion systems.
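A minimal sketch of the kind of geo-localised heat-demand estimation described above: from 3D morphological data one can derive a building's heat-loss envelope and apply a degree-day estimate. The U-value and heating-degree-day figures below are generic assumptions for illustration, not EIFER's model.

```python
def envelope_area(footprint_area, perimeter, height):
    """Heat-loss envelope of a simple box building: roof plus walls
    (ground slab ignored in this sketch). Geometry as derivable from a
    3D city model (footprint, perimeter, building height)."""
    return footprint_area + perimeter * height

def annual_heat_demand_kwh(footprint_area, perimeter, height,
                           u_value=1.2, heating_degree_days=3000):
    """Degree-day estimate Q = U * A * HDD * 24 h / 1000 (kWh per year).
    The mean U-value (W/m^2K) and HDD (K*day) are illustrative assumptions."""
    area = envelope_area(footprint_area, perimeter, height)
    return u_value * area * heating_degree_days * 24 / 1000.0

# A hypothetical 10 m x 10 m, three-storey residential building.
q = annual_heat_demand_kwh(footprint_area=100.0, perimeter=40.0, height=9.0)
```

Run per building over a whole 3D city model, such an estimate is what lets cities rank refurbishment potential spatially.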
GEOSS AIP-2 Climate Change and Biodiversity Use Scenarios: Interoperability Infrastructures
NASA Astrophysics Data System (ADS)
Nativi, Stefano; Santoro, Mattia
2010-05-01
In recent years, the scientific community has devoted great effort to studying the effects of climate change on life on Earth. In this general framework, a key role is played by the impact of climate change on biodiversity. To assess this, several use scenarios require modeling the impact of climatological change on the regional distribution of biodiversity species. Designing and developing interoperability infrastructures that enable scientists to search, discover, access and use multi-disciplinary resources (i.e. datasets, services, models, etc.) is currently one of the main research fields in Earth and Space Science Informatics. This presentation introduces and discusses an interoperability infrastructure which implements the discovery, access, and chaining of loosely coupled resources in the climatology and biodiversity domains, making it possible to set up and run forecast and processing models. The presented framework was successfully developed and tested in the context of the GEOSS AIP-2 (Global Earth Observation System of Systems, Architecture Implementation Pilot - Phase 2) Climate Change & Biodiversity thematic Working Group. The interoperability infrastructure comprises the following main components and services: a) GEO Portal: through this component the end user is able to search, find and access the services needed for the scenario execution; b) Graphical User Interface (GUI): this component provides user-interaction functionality and controls the workflow manager to perform the operations required for the scenario implementation; c) Use Scenario controller: this component acts as a workflow controller implementing the scenario business process, i.e. a typical climate change & biodiversity projection scenario; d) Service Broker implementing Mediation Services: this component realizes a distributed catalogue which federates several discovery and access components (exposing them through a unique CSW standard interface).
Federated components publish climate, environmental and biodiversity datasets; e) Ecological Niche Model Server: this component is able to run one or more Ecological Niche Models (ENM) on selected biodiversity and climate datasets; f) Data Access Transaction server: this component publishes the model outputs. The framework was assessed in two use scenarios of the GEOSS AIP-2 Climate Change and Biodiversity WG. Both scenarios concern the prediction of species distributions driven by climatological change forecasts. The first scenario dealt with the regional distribution of the pika species in the Great Basin area (North America), while the second concerned the modeling of Arctic food chain species in the North Pole area, in which the relationships between different environmental parameters and the distribution of polar bears were analyzed. Scientific patronage was provided by the University of Colorado and the University of Alaska, respectively. Results are published on the GEOSS AIP-2 web site: http://www.ogcnetwork.net/AIP2develop.
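As an illustration of the Ecological Niche Model step (component e), the sketch below implements a rectilinear climate-envelope model in the spirit of BIOCLIM: it fits per-variable min/max bounds from climate values at occurrence points and predicts suitability where a cell's climate falls inside the envelope. This is a toy example with invented values, not the ENM actually run in the AIP-2 scenarios.

```python
def fit_envelope(occurrences):
    """Fit a rectilinear climate envelope (min/max per variable) from
    climate values sampled at known species occurrence points."""
    n_vars = len(occurrences[0])
    return [(min(p[v] for p in occurrences), max(p[v] for p in occurrences))
            for v in range(n_vars)]

def predict(envelope, point):
    """A grid cell is predicted suitable if every climate variable
    falls inside the fitted envelope."""
    return all(lo <= x <= hi for (lo, hi), x in zip(envelope, point))

# Variables: (mean annual temperature degC, annual precipitation mm); made-up.
occ = [(2.0, 400.0), (4.0, 550.0), (3.0, 480.0)]
env = fit_envelope(occ)

inside = predict(env, (3.5, 500.0))   # climate within the fitted envelope
outside = predict(env, (12.0, 500.0)) # too warm for the envelope
```

Re-running `predict` on a forecast climate grid rather than present-day values is what turns such a model into the climate change projection the scenarios describe.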
The Australian Integrated Marine Observing System
NASA Astrophysics Data System (ADS)
Proctor, R.; Meyers, G.; Roughan, M.; Operators, I.
2008-12-01
The Integrated Marine Observing System (IMOS) is a 92M project established with 50M from the National Collaborative Research Infrastructure Strategy (NCRIS) and co-investments from 10 operators including Universities and government agencies (see below). It is a nationally distributed set of equipment established and maintained at sea, oceanographic data and information services that collectively will contribute to meeting the needs of marine research in both open oceans and over the continental shelf around Australia. In particular, if sustained in the long term, it will permit identification and management of climate change in the marine environment, an area of research that is as yet almost a blank page, studies relevant to conservation of marine biodiversity and research on the role of the oceans in the climate system. While as an NCRIS project IMOS is intended to support research, the data streams are also useful for many societal, environmental and economic applications, such as management of offshore industries, safety at sea, management of marine ecosystems and fisheries and tourism. The infrastructure also contributes to Australia's commitments to international programs of ocean observing and international conventions, such as the 1982 Law of the Sea Convention that established the Australian Exclusive Economic Zone, the United Nations Framework Convention on Climate Change, the Global Ocean Observing System and the intergovernmental coordinating activity Global Earth Observation System of Systems. IMOS is made up of nine national facilities that collect data, using different components of infrastructure and instruments, and two facilities that manage and provide access to data and enhanced data products, one for in situ data and a second for remotely sensed satellite data. 
The observing facilities include three for the open (bluewater) ocean (Argo Australia, Enhanced Ships of Opportunity and Southern Ocean Time Series), three facilities for coastal currents and water properties (Moorings, Ocean Gliders and HF Radar) and three for coastal ecosystems (Acoustic Tagging and Tracking, Autonomous Underwater Vehicle and a biophysical sensor network on the Great Barrier Reef). The value from this infrastructure investment lies in the coordinated deployment of a wide range of equipment aimed at deriving critical data sets that serve multiple applications. Additional information on IMOS is available at the website (http://www.imos.org.au). The IMOS Operators are Australian Institute of Marine Science, James Cook University, Sydney Institute of Marine Science, Geoscience Australia, Bureau of Meteorology, South Australia Research and Development Institute, University of Western Australia, Curtin University of Technology, CSIRO Marine and Atmospheric Research, University of Tasmania.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Postigo Marcos, Fernando E.; Domingo, Carlos Mateo; San Roman, Tomas Gomez
2017-11-18
Under the increasing penetration of distributed energy resources and new smart network technologies, distribution utilities face new challenges and opportunities to ensure reliable operations, manage service quality, and reduce operational and investment costs. Simultaneously, the research community is developing algorithms for advanced controls and distribution automation that can help to address some of these challenges. However, there is a shortage of realistic test systems that are publicly available for the development, testing, and evaluation of such new algorithms. Concerns around revealing critical infrastructure details and customer privacy have severely limited the number of actual networks published and available for testing. In recent decades, several distribution test feeders and US-featured representative networks have been published, but their scale, complexity, and control data vary widely. This paper presents a first-of-a-kind structured literature review of published distribution test networks, with a special emphasis on classifying their main characteristics and identifying the types of studies for which they have been used. As a result, this both aids researchers in choosing suitable test networks for their needs and highlights opportunities and directions for further test system development. In particular, we highlight the need for building large-scale synthetic networks to overcome the identified drawbacks of current distribution test feeders.
2017-11-18
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stamber, Kevin L.; Unis, Carl J.; Shirah, Donald N.
Research into modeling of the quantification and prioritization of resources used in the recovery of lifeline critical infrastructure following disruptive incidents, such as hurricanes and earthquakes, has shown several factors to be important. Among these are population density and infrastructure density, event effects on infrastructure, and existence of an emergency response plan. The social sciences literature has a long history of correlating the population density and infrastructure density at a national scale, at a country-to-country level, mainly focused on transportation networks. This effort examines whether these correlations can be repeated at smaller geographic scales, for a variety of infrastructure types, so as to be able to use population data as a proxy for infrastructure data where infrastructure data is either incomplete or insufficiently granular. Using the best data available, this effort shows that strong correlations between infrastructure density for multiple types of infrastructure (e.g. miles of roads, hospital beds, miles of electric power transmission lines, and number of petroleum terminals) and population density do exist at known geographic boundaries (e.g. counties, service area boundaries), with exceptions that are explainable within the social sciences literature. Furthermore, the correlations identified provide a useful basis for ongoing research into the larger resource utilization problem.
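The county-level correlation test described in the abstract above can be sketched with a Pearson coefficient over paired density figures. The numbers below are invented for illustration only; they are not from the study.

```python
import math

# Hypothetical county-level data (illustrative, not from the study):
# population density (people per sq mi) and road miles per county.
pop_density = [50, 120, 300, 800, 1500, 2400]
road_miles = [210, 450, 900, 2100, 3900, 6100]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(pop_density, road_miles)
print(f"r = {r:.3f}")  # strongly positive for this toy data
```

A real replication would also need the explanatory step the paper mentions: examining the counties where the correlation breaks down.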
2016-04-01
Nasir, Zaheer Ahmad; Campos, Luiza Cintra; Christie, Nicola; Colbeck, Ian
2016-08-01
Exposure to airborne biological hazards in an ever-expanding urban transport infrastructure, with a highly diverse mobile population, is of growing concern in terms of both public health and biosecurity. Existing policies and practices on the design, construction, and operation of these infrastructures may have severe implications for airborne disease transmission, particularly in the event of a pandemic or intentional release of biological agents. This paper reviews existing knowledge on airborne disease transmission in different modes of transport, highlights the factors enhancing the vulnerability of transport infrastructures to airborne disease transmission, discusses potential protection measures, and identifies the research gaps that must be closed in order to build a bioresilient transport infrastructure. The unification of security and public health research, the inclusion of public health security concepts at the design and planning phase, and a holistic system approach involving all stakeholders over the life cycle of transport infrastructure hold the key to mitigating the challenges posed by biological hazards in twenty-first-century transport infrastructure.
42 CFR § 512.510 - Downstream distribution arrangements under the EPM.
Code of Federal Regulations, 2010 CFR
2017-10-01
... HEALTH AND HUMAN SERVICES (CONTINUED) HEALTH CARE INFRASTRUCTURE AND MODEL PROGRAMS EPISODE PAYMENT MODEL... distribution payment it receives from the EPM collaborator only in accordance with a downstream distribution... make or receive a downstream distribution payment must not be conditioned directly or indirectly on the...
Detection of Frauds and Other Non-technical Losses in Power Utilities using Smart Meters: A Review
NASA Astrophysics Data System (ADS)
Ahmad, Tanveer; Ul Hasan, Qadeer
2016-06-01
Analysis of losses in power distribution systems, and techniques to mitigate them, are two active areas of research, especially in energy-scarce countries such as Pakistan, where reducing losses increases the availability of power without installing new generation. Total energy losses comprise both technical losses (TL) and non-technical losses (NTL). Utility companies in developing countries incur major financial losses due to NTL, which also lead to a series of secondary losses, such as damage to the network infrastructure and reduced network reliability. The purpose of this paper is to perform an introductory investigation of non-technical losses in power distribution systems. Additionally, an analysis of NTL using consumer energy consumption data and linear regression has been carried out. The data cover the low-voltage (LV) distribution network, including residential, commercial, agricultural, and industrial consumers, using monthly kWh interval data acquired over a one-month period with smart meters. Different techniques to deter the illegal use of electricity in power distribution systems are also discussed.
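A regression-based NTL screen of the general kind the abstract describes can be sketched as follows: fit a linear trend to a consumer's monthly kWh history and flag a new reading that falls far below the fitted trend. The consumer IDs, readings, and drop threshold are hypothetical, not the paper's actual model.

```python
# Illustrative sketch only; data and threshold are invented.

def fit_line(ys):
    """Least-squares slope/intercept for y over x = 0..n-1."""
    n = len(ys)
    mx = (n - 1) / 2
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    den = sum((x - mx) ** 2 for x in range(n))
    b = num / den
    return my - b * mx, b  # intercept, slope

def flag_suspicious(history, latest, drop_ratio=0.5):
    """Flag if the latest reading is below drop_ratio * the trend forecast."""
    a, b = fit_line(history)
    predicted = a + b * len(history)
    return latest < drop_ratio * predicted

readings = {"C-101": ([310, 320, 305, 330, 315, 325], 318),  # normal
            "C-102": ([410, 405, 420, 415, 425, 418], 95)}   # sudden drop
for cid, (hist, latest) in readings.items():
    if flag_suspicious(hist, latest):
        print(cid, "flagged for inspection")
```

In practice a flag like this would only prioritize meters for physical inspection, since legitimate causes (vacancy, efficiency upgrades) also reduce consumption.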
Orion Navigation Sensitivities to Ground Station Infrastructure for Lunar Missions
NASA Technical Reports Server (NTRS)
Getchius, Joel; Kukitschek, Daniel; Crain, Timothy
2008-01-01
The Orion Crew Exploration Vehicle (CEV) will replace the Space Shuttle and serve as the next-generation spaceship to carry humans to the International Space Station and back to the Moon for the first time since the Apollo program. As in the Apollo and Space Shuttle programs, the Mission Control Navigation team will utilize radiometric measurements to determine the position and velocity of the CEV. For lunar missions, however, the ground station infrastructure that supported Apollo, approximately twelve stations distributed about the Earth known as the Apollo Manned Spaceflight Network, no longer exists. Therefore, additional tracking resources will have to be allocated or constructed to support mission operations for Orion lunar missions. This paper examines the sensitivity of Orion navigation for lunar missions to the number and distribution of tracking sites that form the ground station infrastructure.
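The dependence of a radiometric fix on station geometry can be illustrated with a toy Gauss-Newton range solution. The 2D station layout, the noise-free measurements, and the initial guess below are all invented for illustration; they bear no relation to Orion's actual tracking configuration.

```python
import numpy as np

# Toy Gauss-Newton range fix: estimate a vehicle position from range
# measurements to ground stations. Adding or relocating stations changes
# the Jacobian geometry and hence the quality of the solution.
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
truth = np.array([30.0, 40.0])
ranges = np.linalg.norm(stations - truth, axis=1)  # noise-free measurements

x = np.array([50.0, 50.0])           # initial guess
for _ in range(10):                  # Gauss-Newton iterations
    diffs = x - stations             # (N, 2)
    pred = np.linalg.norm(diffs, axis=1)
    H = diffs / pred[:, None]        # Jacobian of range w.r.t. position
    dx, *_ = np.linalg.lstsq(H, ranges - pred, rcond=None)
    x = x + dx
print(np.round(x, 3))  # → [30. 40.]
```

With measurement noise added, the covariance of the fix would grow as the station geometry degrades, which is the sensitivity the paper studies.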
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finnell, Joshua Eugene; Klein, Martin; Cain, Brian J.
2017-05-09
The proposal is to provide institutional infrastructure that facilitates management of research projects, research collaboration, and the management, preservation, and discovery of data. Deploying such infrastructure will amplify the effectiveness, efficiency, and impact of research, and will assist researchers in complying with both data management mandates and LANL security policy. It will also facilitate discoverability of LANL research both within the lab and externally.
NASA Astrophysics Data System (ADS)
Low, W. W.; Wong, K. S.; Lee, J. L.
2018-04-01
With the growth of economy and population, there is an increase in infrastructure construction projects, and it is unavoidable that some will be built on soft soil. Without a proper risk management plan, construction projects are vulnerable to different types of risks that negatively affect a project's time, cost, and quality. A literature review showed that little research has focused on risk assessment for infrastructure projects on soft soil. Hence, the aim of this research is to propose a risk assessment framework for infrastructure projects on soft soil during the construction stage. This research focused on the impact of risks on project time and on internal risk factors. The research method was the Analytical Hierarchy Process, and the sample population consisted of industry experts experienced in infrastructure projects. Analysis showed that, among internal factors, the five most significant risks to the time element are lack of special equipment, potential contractual disputes and claims, shortage of skilled workers, delay or lack of materials supply, and insolvency of the contractor or sub-contractor. Results indicated that resource risk factors play a critical role in the time frame of infrastructure projects on soft soil during the construction stage.
Storing and Using Health Data in a Virtual Private Cloud
Regola, Nathan
2013-01-01
Electronic health records are being adopted at a rapid rate due to increased funding from the US federal government. Health data provide the opportunity to identify possible improvements in health care delivery by applying data mining and statistical methods to the data, and will also enable a wide variety of new applications that will be meaningful to patients and medical professionals. Researchers are often granted access to health care data to assist in the data mining process, but HIPAA regulations mandate comprehensive safeguards to protect the data. Often universities (and presumably other research organizations) have both an enterprise information technology infrastructure and a research infrastructure. Unfortunately, neither of these infrastructures is generally appropriate for sensitive data subject to HIPAA, as such data require special accommodations on the part of the enterprise information technology infrastructure (or increased security on the part of the research computing environment). Cloud computing, a concept that allows organizations to build complex infrastructures on leased resources, is rapidly evolving to the point that it is possible to build sophisticated network architectures with advanced security capabilities. We present a prototype infrastructure in Amazon’s Virtual Private Cloud to allow researchers and practitioners to utilize the data in a HIPAA-compliant environment. PMID:23485880
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Dean N.
The climate and weather data science community gathered December 3–5, 2013, at Lawrence Livermore National Laboratory, in Livermore, California, for the third annual Earth System Grid Federation (ESGF) and Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) Face-to-Face (F2F) Meeting, which was hosted by the Department of Energy, National Aeronautics and Space Administration, National Oceanic and Atmospheric Administration, the European Infrastructure for the European Network of Earth System Modelling, and the Australian Department of Education. Both ESGF and UV-CDAT are global collaborations designed to develop a new generation of open-source software infrastructure that provides distributed access and analysis to observed and simulated data from the climate and weather communities. The tools and infrastructure developed under these international multi-agency collaborations are critical to understanding extreme weather conditions and long-term climate change, while the F2F meetings help to build a stronger climate and weather data science community and stronger federated software infrastructure. The 2013 F2F meeting determined requirements for existing and impending national and international community projects; enhancements needed for data distribution, analysis, and visualization infrastructure; and standards and resources needed for better collaborations.
NASA Astrophysics Data System (ADS)
Seo, Y.; Hwang, J.; Kwon, Y.
2017-12-01
The existence of impervious areas is one of the most distinguishing characteristics of urban catchments: it decreases infiltration and increases direct runoff. The recent introduction of green infrastructure in urban catchments for sustainable development reduces the directly connected impervious area (DCIA) by isolating existing impervious areas, and consequently contributes to flood risk mitigation. This study coupled the width function-based instantaneous unit hydrograph (WFIUH), which can handle the spatial distribution of impervious areas, with the concept of the DCIA to assess the impact of decreasing DCIA on the shape of direct runoff hydrographs. Using several scenarios for typical green infrastructure and the corresponding changes in DCIA in a test catchment, this study evaluated the effect of green infrastructure on the shape of the resulting direct runoff hydrographs and on peak flows. The results showed that changes in the DCIA immediately affect the shape of the direct runoff hydrograph and decrease peak flows, depending on the spatial implementation scenario. The quantitative assessment of the spatial distribution of impervious areas and of changes to the DCIA suggests that effective, well-planned green infrastructure can be introduced in urban environments for flood risk management.
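The DCIA effect on the hydrograph can be sketched by scaling rainfall by a DCIA fraction before convolving it with a unit hydrograph. This is a deliberately simplified stand-in for the WFIUH approach; the rainfall, unit hydrograph ordinates, and DCIA fractions are invented for illustration.

```python
# Minimal sketch: direct runoff approximated as
# (DCIA fraction) x rainfall, convolved with a unit hydrograph.

def direct_runoff(rain, unit_hydrograph, dcia_fraction):
    """Convolve DCIA-scaled effective rainfall with a unit hydrograph."""
    n = len(rain) + len(unit_hydrograph) - 1
    q = [0.0] * n
    for i, p in enumerate(rain):
        for j, u in enumerate(unit_hydrograph):
            q[i + j] += dcia_fraction * p * u
    return q

rain = [10.0, 20.0, 5.0]          # rainfall depth per time step
uh = [0.1, 0.4, 0.3, 0.15, 0.05]  # unit hydrograph ordinates (sum = 1)
before = direct_runoff(rain, uh, dcia_fraction=0.45)
after = direct_runoff(rain, uh, dcia_fraction=0.30)  # after green infrastructure
print(max(before), max(after))    # peak flow drops with DCIA
```

The study's point is stronger than this sketch: because the WFIUH is spatially distributed, *where* the DCIA is reduced changes the hydrograph shape, not just its peak.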
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-19
... market participants that the program is experimental and that NASDAQ may choose not to continue the... not only the costs of the data distribution infrastructure, but also the costs of designing... infrastructure--that have risen. The same holds true for execution services; despite numerous enhancements to...
ERIC Educational Resources Information Center
Maryland State Dept. of Education, Baltimore. School Facilities Branch.
Telecommunications infrastructure has the dual challenges of maintaining quality while accommodating change, issues that have long been met through a series of implementation standards. This document is designed to ensure that telecommunications systems within the Maryland public school system are also capable of meeting both challenges and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, Richard L.; Kochunas, Brendan; Adams, Brian M.
The Virtual Environment for Reactor Applications components included in this distribution include selected computational tools and supporting infrastructure that solve neutronics, thermal-hydraulics, fuel performance, and coupled neutronics-thermal hydraulics problems. The infrastructure components provide a simplified common user input capability and provide for the physics integration with data transfer and coupled-physics iterative solution algorithms.
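The coupled-physics iterative solution algorithms mentioned above can be illustrated with a toy Picard (fixed-point) loop between two scalar surrogates: a neutronics model whose power depends on fuel temperature feedback, and a thermal model whose temperature depends on power. The models and constants are invented for illustration and are not VERA's actual physics.

```python
# Toy Picard iteration coupling two single-variable surrogate models.

def power_from_temperature(T):
    """Neutronics surrogate: negative temperature feedback on power."""
    return 100.0 * (1.0 - 1.0e-4 * (T - 300.0))

def temperature_from_power(P):
    """Thermal surrogate: coolant temperature rises linearly with power."""
    return 300.0 + 2.0 * P

T = 300.0
for it in range(100):
    P = power_from_temperature(T)      # neutronics solve
    T_new = temperature_from_power(P)  # thermal-hydraulics solve
    if abs(T_new - T) < 1e-8:          # converged coupled solution
        break
    T = T_new
print(f"converged: P = {P:.3f}, T = {T_new:.3f}")
```

The loop converges because the combined feedback is a contraction; real coupled solvers add relaxation or Anderson acceleration when it is not.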
Map Matching and Real World Integrated Sensor Data Warehousing (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burton, E.
2014-02-01
The inclusion of interlinked temporal and spatial elements within integrated sensor data enables a tremendous degree of flexibility when analyzing multi-component datasets. The presentation illustrates how to warehouse, process, and analyze high-resolution integrated sensor datasets to support complex system analysis at the entity and system levels. The example cases presented utilize in-vehicle sensor system data to assess vehicle performance, while integrating a map matching algorithm to link vehicle data to roads to demonstrate the enhanced analysis possible via interlinking data elements. Furthermore, in addition to the flexibility provided, the examples presented illustrate concepts of maintaining proprietary operational information (Fleet DNA) and privacy of study participants (Transportation Secure Data Center) while producing widely distributed data products. Should real-time operational data be logged at high resolution across multiple infrastructure types, map matched to their associated infrastructure, and distributed using a similar approach, dependencies between urban infrastructure components could be better understood. This understanding is especially crucial for the cities of the future, where transportation will rely more on grid infrastructure to support its energy demands.
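A minimal map-matching step of the kind the example cases rely on snaps each GPS fix to the nearest road segment. The road geometry and point coordinates below are hypothetical; production map matching would also use heading, speed, and route continuity.

```python
import math

# Snap a GPS fix to the closest of a set of road segments.

def project_to_segment(p, a, b):
    """Closest point on segment a-b to point p, and its distance to p."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    t = 0.0 if seg_len2 == 0 else max(
        0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return (cx, cy), math.hypot(px - cx, py - cy)

def match(point, roads):
    """Return the road id whose segment lies closest to the GPS point."""
    return min(roads, key=lambda rid: project_to_segment(point, *roads[rid])[1])

roads = {"Main St": ((0.0, 0.0), (10.0, 0.0)),
         "1st Ave": ((5.0, -5.0), (5.0, 5.0))}
print(match((3.0, 0.4), roads))  # → Main St
```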
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ladendorff, Marlene Z.
Considerable money and effort have been expended by generation, transmission, and distribution entities in North America to implement the North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) standards for the bulk electric system. Assumptions have been made that, as a result of implementing the standards, the grid is more cyber-secure than it was pre-NERC CIP, but are there data supporting these claims, or only speculation? Has the implementation of the standards had a measurable effect on the grid? A research study developed to address these and other questions produced surprising results.
FDA's Activities Supporting Regulatory Application of "Next Gen" Sequencing Technologies.
Wilson, Carolyn A; Simonyan, Vahan
2014-01-01
Applications of next-generation sequencing (NGS) technologies require availability and access to an information technology (IT) infrastructure and bioinformatics tools for large amounts of data storage and analyses. The U.S. Food and Drug Administration (FDA) anticipates that the use of NGS data to support regulatory submissions will continue to increase as the scientific and clinical communities become more familiar with the technologies and identify more ways to apply these advanced methods to support development and evaluation of new biomedical products. FDA laboratories are conducting research on different NGS platforms and developing the IT infrastructure and bioinformatics tools needed to enable regulatory evaluation of the technologies and the data sponsors will submit. A High-performance Integrated Virtual Environment, or HIVE, has been launched, and development and refinement continue as a collaborative effort between the FDA and George Washington University to provide the tools to support these needs. The use of a highly parallelized environment, facilitated by distributed cloud storage and computation, has resulted in a platform that is both rapid and responsive to changing scientific needs. The FDA plans to further develop in-house capacity in this area, while also supporting engagement by the external community, by sponsoring an open public workshop in September 2014 to discuss NGS technologies and data format standardization and to promote the adoption of interoperability protocols.
Next-generation sequencing (NGS) technologies are enabling breakthroughs in how the biomedical community is developing and evaluating medical products. One example is the potential application of this method to the detection and identification of microbial contaminants in biologic products. In order for the U.S. Food and Drug Administration to be able to evaluate the utility of this technology, we need the information technology infrastructure and bioinformatics tools to store and analyze large amounts of data. To address this need, we have developed the High-performance Integrated Virtual Environment, or HIVE. HIVE uses a combination of distributed cloud storage and distributed cloud computation to provide a platform that is both rapid and responsive to support the growing and increasingly diverse scientific and regulatory needs of FDA scientists in their evaluation of NGS in research and, ultimately, for evaluation of NGS data in regulatory submissions. © PDA, Inc. 2014.
Cyberinfrastructure for the digital brain: spatial standards for integrating rodent brain atlases
Zaslavsky, Ilya; Baldock, Richard A.; Boline, Jyl
2014-01-01
Biomedical research entails capture and analysis of massive data volumes and new discoveries arise from data-integration and mining. This is only possible if data can be mapped onto a common framework such as the genome for genomic data. In neuroscience, the framework is intrinsically spatial and based on a number of paper atlases. This cannot meet today's data-intensive analysis and integration challenges. A scalable and extensible software infrastructure that is standards based but open for novel data and resources, is required for integrating information such as signal distributions, gene-expression, neuronal connectivity, electrophysiology, anatomy, and developmental processes. Therefore, the International Neuroinformatics Coordinating Facility (INCF) initiated the development of a spatial framework for neuroscience data integration with an associated Digital Atlasing Infrastructure (DAI). A prototype implementation of this infrastructure for the rodent brain is reported here. The infrastructure is based on a collection of reference spaces to which data is mapped at the required resolution, such as the Waxholm Space (WHS), a 3D reconstruction of the brain generated using high-resolution, multi-channel microMRI. The core standards of the digital atlasing service-oriented infrastructure include Waxholm Markup Language (WaxML): XML schema expressing a uniform information model for key elements such as coordinate systems, transformations, points of interest (POI)s, labels, and annotations; and Atlas Web Services: interfaces for querying and updating atlas data. The services return WaxML-encoded documents with information about capabilities, spatial reference systems (SRSs) and structures, and execute coordinate transformations and POI-based requests. Key elements of INCF-DAI cyberinfrastructure have been prototyped for both mouse and rat brain atlas sources, including the Allen Mouse Brain Atlas, UCSD Cell-Centered Database, and Edinburgh Mouse Atlas Project. 
PMID:25309417
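A coordinate transformation of the kind the atlas web services execute can be sketched as a homogeneous affine transform applied to a point of interest (POI). The 4x4 matrix below is a made-up example, not a real atlas-to-WHS registration.

```python
import numpy as np

# Hypothetical affine transform between two atlas coordinate systems:
# uniform 0.5 scaling plus a translation, in homogeneous form.
affine = np.array([[0.5, 0.0, 0.0, 10.0],
                   [0.0, 0.5, 0.0, -5.0],
                   [0.0, 0.0, 0.5,  2.0],
                   [0.0, 0.0, 0.0,  1.0]])

def transform_poi(poi, matrix):
    """Apply a homogeneous affine transform to a 3D point of interest."""
    x, y, z = poi
    out = matrix @ np.array([x, y, z, 1.0])
    return tuple(float(v) for v in out[:3])

print(transform_poi((20.0, 30.0, 40.0), affine))  # → (20.0, 10.0, 22.0)
```

Real registrations between atlas spaces are often nonlinear warps; the service interface, however, can expose them through the same transform-a-POI request pattern.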
Critical Infrastructure for Ocean Research and Societal Needs in 2030
NASA Astrophysics Data System (ADS)
Glickson, D.; Barron, E. J.; Fine, R. A.; Bellingham, J. G.; Boss, E.; Boyle, E. A.; Edwards, M.; Johnson, K. S.; Kelley, D. S.; Kite-Powell, H.; Ramberg, S. E.; Rudnick, D. L.; Schofield, O.; Tamburri, M.; Wiebe, P. H.; Wright, D. J.; Committee on an Ocean Infrastructure Strategy for U.S. Ocean Research in 2030
2011-12-01
At the request of the Subcommittee on Ocean Science and Technology, an expert committee was convened by the National Research Council to identify major research questions anticipated to be at the forefront of ocean science in 2030, define categories of infrastructure that should be included in planning, provide advice on criteria and processes that could be used to set priorities, and recommend ways to maximize the value of investments in ocean infrastructure. The committee identified 32 future ocean research questions in four themes: enabling stewardship of the environment, protecting life and property, promoting economic vitality, and increasing fundamental scientific understanding. Many of the questions reflect challenging, multidisciplinary science questions that are clearly relevant now and are likely to take decades to solve. U.S. ocean research will require a growing suite of ocean infrastructure for a range of activities, such as high quality, sustained time series observations and autonomous monitoring at a broad range of spatial and temporal scales. A coordinated national plan for making future strategic investments will be needed and should be based upon known priorities and reviewed every 5-10 years. After assessing trends in ocean infrastructure and technology development, the committee recommended implementing a comprehensive, long-term research fleet plan in order to retain access to the sea; continuing U.S. capability to access fully and partially ice-covered seas; supporting innovation, particularly the development of biogeochemical sensors; enhancing computing and modeling capacity and capability; establishing broadly accessible data management facilities; and increasing interdisciplinary education and promoting a technically-skilled workforce. They also recommended that development, maintenance, or replacement of ocean research infrastructure assets should be prioritized in terms of societal benefit. 
Particular consideration should be given to usefulness for addressing important science questions; affordability, efficiency, and longevity; and ability to contribute to other missions or applications. Estimating the economic costs and benefits of each potential infrastructure investment using these criteria would allow funding of investments that produce the largest expected net benefit over time.
Thinking about Distributed Learning? Issues and Questions To Ponder.
ERIC Educational Resources Information Center
Sorg, Steven
2001-01-01
Introduces other articles in this issue devoted to distributed learning at metropolitan universities. Discusses issues that institutions should address if considering distributed learning: institutional goals and strategic plans, faculty development needs and capabilities, student support services, technical and personnel infrastructure, policies,…
WATER INFRASTRUCTURE IN THE 21ST CENTURY: U.S. EPA’S RESEARCH PLANS FOR GRAVITY SEWERS
The U.S. Environmental Protection Agency’s (EPA) Office of Research and Development (ORD) has long recognized the need for research and development in the area of drinking water and wastewater infrastructure. Most recently in support of the Agency’s Sustainable Water Infrastruct...
A green infrastructure experimental site for developing and evaluating models
The Ecosystems Research Division (ERD) of the U.S. EPA’s National Exposure Research Laboratory (NERL) in Athens, GA has a 14-acre urban watershed which has become an experimental research site for green infrastructure studies. About half of the watershed is covered by pervious la...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-06
... DEPARTMENT OF TRANSPORTATION Enabling a Secure Environment for Vehicle-to-Infrastructure Research Workshop; Notice of Public Meeting AGENCY: ITS Joint Program Office, Research and Innovative Technology Administration, U.S. Department of Transportation. ACTION: Notice. The U.S. Department of Transportation (USDOT...
Raising Virtual Laboratories in Australia onto global platforms
NASA Astrophysics Data System (ADS)
Wyborn, L. A.; Barker, M.; Fraser, R.; Evans, B. J. K.; Moloney, G.; Proctor, R.; Moise, A. F.; Hamish, H.
2016-12-01
Across the globe, Virtual Laboratories (VLs), Science Gateways (SGs), and Virtual Research Environments (VREs) are being developed that enable users who are not co-located to actively work together at various scales to share data, models, tools, software, workflows, best practices, etc. Outcomes range from enabling 'long tail' researchers to more easily access specific data collections, to facilitating complex workflows on powerful supercomputers. In Australia, government funding has facilitated the development of a range of VLs through the National eResearch Collaborative Tools and Resources (NeCTAR) program. The VLs provide highly collaborative, research-domain-oriented, integrated software infrastructures that meet user community needs. Twelve VLs have been funded since 2012, including the Virtual Geophysics Laboratory (VGL); Virtual Hazards, Impact and Risk Laboratory (VHIRL); Climate and Weather Science Laboratory (CWSLab); Marine Virtual Laboratory (MarVL); and Biodiversity and Climate Change Virtual Laboratory (BCCVL). These VLs share similar technical challenges, with common issues emerging in the integration of tools, applications, and access to data collections via both cloud-based environments and other distributed resources. While each VL began with a focus on a specific research domain, communities of practice have now formed across the VLs around common issues, facilitating the identification of best-practice case studies and new standards. As a result, tools are now being shared where the VLs access data via data services using international standards such as ISO, OGC, and W3C. The sharing of these approaches is starting to facilitate re-usability of infrastructure and is a step towards supporting interdisciplinary research. Whilst the focus of the VLs is Australia-centric, by using standards these environments can be extended to analysis of other international datasets.
Many VL datasets are subsets of global datasets and so extension to global is a small (and often requested) step. Similarly, most of the tools, software, and other technologies could be shared across infrastructures globally. Therefore, it is now time to better connect the Australian VLs with similar initiatives elsewhere to create international platforms that can contribute to global research challenges.
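The standards-based data access described above can be illustrated with a short sketch of building a KVP-encoded OGC web service request, the convention these VLs rely on. The endpoint and layer names below are invented for illustration; only the key-value request style (OGC Web Services Common) is assumed:

```python
from urllib.parse import urlencode

# Hypothetical endpoint; real VL data services would differ.
BASE = "https://example.org/geoserver/wfs"

def ogc_request_url(base, service, request, version, **extra):
    """Build a KVP-encoded OGC request URL (OGC Web Services Common style)."""
    params = {"service": service, "request": request, "version": version}
    params.update(extra)  # operation-specific parameters, e.g. typeNames
    return base + "?" + urlencode(params)

# WFS 2.0.0 GetFeature with an invented feature type name.
url = ogc_request_url(BASE, "WFS", "GetFeature", "2.0.0",
                      typeNames="gs:boreholes", outputFormat="application/json")
print(url)
```

Because every VL speaks the same request convention, the same client code can target any compliant service, which is what makes re-use across infrastructures (and extension to international datasets) a small step.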
NASA Technical Reports Server (NTRS)
Shoham, Yoav
1994-01-01
The goal of our research is a methodology for creating robust software in distributed and dynamic environments. The approach taken is to endow software objects with explicit information about one another, to have them interact through a commitment mechanism, and to equip them with a speech-act-based communication language. System-level applications include software interoperation and compositionality. A government application of specific interest is an infrastructure for coordination among multiple planners. Daily activity applications include personal software assistants, such as programmable email, scheduling, and newsgroup agents. Research topics include the definition of the mental state of agents, the design of agent languages as well as interpreters for those languages, and mechanisms for coordination within agent societies, such as artificial social laws and conventions.
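As a rough illustration of the commitment mechanism and speech-act-style messages the abstract describes, here is a minimal sketch. The performatives, class names and update rules are invented for illustration and are not Shoham's actual agent language:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    performative: str  # the speech act type, e.g. "inform" or "request"
    sender: str
    receiver: str
    content: str

@dataclass
class Agent:
    name: str
    beliefs: set = field(default_factory=set)
    commitments: list = field(default_factory=list)

    def receive(self, msg: Message):
        if msg.performative == "inform":
            self.beliefs.add(msg.content)            # update mental state
        elif msg.performative == "request":
            # record a commitment to the requesting agent
            self.commitments.append((msg.sender, msg.content))

a = Agent("scheduler")
a.receive(Message("inform", "mail-agent", "scheduler", "meeting at 10"))
a.receive(Message("request", "user", "scheduler", "book room"))
```

The point of the pattern is that an agent's observable behaviour is driven by an explicit mental state (beliefs, commitments) rather than by opaque method calls, which is what makes coordination conventions expressible.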
Racializing drug design: implications of pharmacogenomics for health disparities.
Lee, Sandra Soo-Jin
2005-12-01
Current practices of using "race" in pharmacogenomics research demand consideration of the ethical and social implications for understandings of group difference and for efforts to eliminate health disparities. This discussion focuses on an "infrastructure of racialization" created by current trajectories of research on genetic differences among racially identified groups, the use of race as a proxy for risk in clinical practice, and increasing interest in new market niches by the pharmaceutical industry. The confluence of these factors has resulted in the conflation of genes, disease, and race. I argue that public investment in pharmacogenomics requires careful consideration of current inequities in health status, social and ethical concerns over reifying race, and issues of distributive justice.
Network-based collaborative research environment LDRD final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davies, B.R.; McDonald, M.J.
1997-09-01
The Virtual Collaborative Environment (VCE) and Distributed Collaborative Workbench (DCW) are new technologies that make it possible for diverse users to synthesize and share mechatronic, sensor, and information resources. Using these technologies, university researchers, manufacturers, design firms, and others can directly access and reconfigure systems located throughout the world. The architecture for implementing VCE and DCW has been developed based on the proposed National Information Infrastructure or Information Highway and a tool kit of Sandia-developed software. Further enhancements to the VCE and DCW technologies will facilitate access to other mechatronic resources. This report describes characteristics of VCE and DCW and also includes background information about the evolution of these technologies.
NASA Aeropropulsion Research: Looking Forward
NASA Technical Reports Server (NTRS)
Seidel, Jonathan A.; Sehra, Arun K.; Colantonio, Renato O.
2001-01-01
NASA has been researching new technology and system concepts to meet the requirements of aeropropulsion for 21st Century aircraft. Air transportation in the new millennium will require revolutionary solutions to meet public demand for improved safety, reliability, environmental compatibility, and affordability. Whereas the turbine engine revolution will continue during the next two decades, several new revolutions are required to achieve the dream of an affordable, emissionless, and silent aircraft. This paper reviews the continuing turbine engine revolution and explores the propulsion system impact of future revolutions in propulsion configuration, fuel infrastructure, and alternate energy systems. A number of promising concepts, ranging from ultrahigh-bypass engines to fuel cell-powered distributed propulsion, are also reviewed.
European grid services for global earth science
NASA Astrophysics Data System (ADS)
Brewer, S.; Sipos, G.
2012-04-01
This presentation will provide an overview of the distributed computing services that the European Grid Infrastructure (EGI) offers to the Earth Sciences community and also explain the processes whereby Earth Science users can engage with the infrastructure. One of the main overarching goals for EGI over the coming year is to diversify its user-base. EGI therefore - through the National Grid Initiatives (NGIs) that provide the bulk of resources that make up the infrastructure - offers a number of routes whereby users, either individually or as communities, can make use of its services. At one level there are two approaches to working with EGI: either users can make use of existing resources and contribute to their evolution and configuration; or alternatively they can work with EGI, and hence the NGIs, to incorporate their own resources into the infrastructure to take advantage of EGI's monitoring, networking and managing services. Adopting this approach does not imply a loss of ownership of the resources. Both of these approaches are entirely applicable to the Earth Sciences community. The former because researchers within this field have been involved with EGI (and previously EGEE) as a Heavy User Community and the latter because they have very specific needs, such as incorporating HPC services into their workflows, and these will require multi-skilled interventions to fully provide such services. In addition to the technical support services that EGI has been offering for the last year or so - the applications database, the training marketplace and the Virtual Organisation services - there now exists a dynamic short-term project framework that can be utilised to establish and operate services for Earth Science users. 
During this talk we will present a summary of various on-going projects that will be of interest to Earth Science users with the intention that suggestions for future projects will emerge from the subsequent discussions: • The Federated Cloud Task Force is already providing a cloud infrastructure through a few committed NGIs. This is being made available to research communities participating in the Task Force and the long-term aim is to integrate these national clouds into a pan-European infrastructure for scientific communities. • The MPI group provides support for application developers to port and scale up parallel applications to the global European Grid Infrastructure. • A lively portal developer and provider community that is able to setup and operate custom, application and/or community specific portals for members of the Earth Science community to interact with EGI. • A project to assess the possibilities for federated identity management in EGI and the readiness of EGI member states for federated authentication and authorisation mechanisms. • Operating resources and user support services to process data with new types of services and infrastructures, such as desktop grids, map-reduce frameworks, GPU clusters.
Pilots 2.0: DIRAC pilots for all the skies
NASA Astrophysics Data System (ADS)
Stagni, F.; Tsaregorodtsev, A.; McNab, A.; Luzzi, C.
2015-12-01
In the last few years, new types of computing infrastructures, such as IAAS (Infrastructure as a Service) and IAAC (Infrastructure as a Client), have gained popularity. New resources may come as part of pledged resources, while others are opportunistic. Most of these new infrastructures are based on virtualization techniques. Meanwhile, some concepts, such as distributed queues, have lost appeal while still supporting a vast amount of resources. Virtual Organizations therefore face heterogeneity of the available resources, and the use of an interware software layer like DIRAC to hide the diversity of underlying resources has become essential. The DIRAC WMS is based on the concept of pilot jobs, introduced back in 2004. A pilot is what creates the possibility to run jobs on a worker node. Within DIRAC, we developed a new generation of pilot jobs that we dubbed Pilots 2.0. Pilots 2.0 are not tied to a specific infrastructure; rather, they are generic, fully configurable and extensible pilots. A Pilot 2.0 can be sent as a script to be run, or it can be fetched from a remote location. A Pilot 2.0 can run on every computing resource, e.g. on CREAM Computing Elements, on DIRAC Computing Elements, on Virtual Machines as part of the contextualization script, or on IAAC resources, provided that these machines are properly configured, hiding all the details of the Worker Node (WN) infrastructure. Pilots 2.0 can be generated server- and client-side. Pilots 2.0 are the "pilots to fly in all the skies", aiming at easy use of computing power in whatever form it is presented. Another aim is the unification and simplification of the monitoring infrastructure for all kinds of computing resources, by using pilots as a network of distributed sensors coordinated by a central resource monitoring system. Pilots 2.0 have been developed using the command pattern. VOs using DIRAC can tune Pilots 2.0 as they need, and can extend or replace each and every pilot command in an easy way.
In this paper we describe how Pilots 2.0 work with distributed and heterogeneous resources providing the necessary abstraction to deal with different kind of computing resources.
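A minimal sketch of the command pattern as applied to pilot commands might look as follows. The command names and the context dictionary are hypothetical, not DIRAC's actual classes; the sketch only shows why the pattern makes pilots extensible, since a VO can reorder, extend or replace any command in the list:

```python
class PilotCommand:
    """Base class: every pilot step is a command acting on shared context."""
    def execute(self, ctx: dict) -> None:
        raise NotImplementedError

class CheckEnvironment(PilotCommand):
    def execute(self, ctx):
        ctx["env_ok"] = True          # e.g. probe the worker node

class ConfigureCE(PilotCommand):
    def execute(self, ctx):
        ctx["ce"] = ctx.get("ce", "generic")  # resource-specific setup

class LaunchJobAgent(PilotCommand):
    def execute(self, ctx):
        # start the payload agent only if the environment checked out
        ctx["agent_started"] = ctx.get("env_ok", False)

def run_pilot(commands, ctx=None):
    """Run a configurable sequence of pilot commands over one context."""
    ctx = ctx or {}
    for cmd in commands:
        cmd.execute(ctx)
    return ctx

state = run_pilot([CheckEnvironment(), ConfigureCE(), LaunchJobAgent()])
```

Because each step is an interchangeable object rather than inline logic, the same pilot script can run unchanged on very different resources, with only the command list varying.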
Designing a concept for an IT-infrastructure for an integrated research and treatment center.
Stäubert, Sebastian; Winter, Alfred; Speer, Ronald; Löffler, Markus
2010-01-01
Healthcare and medical research in Germany are heading toward more interconnected systems. New initiatives are funded by the German government to encourage the development of Integrated Research and Treatment Centers (IFB). Within an IFB, new organizational structures and infrastructures for interdisciplinary, translational and trans-sectoral working relationships between existing, rigidly separated sectors are intended and needed. This paper describes what an IT infrastructure for an IFB could look like, which major challenges have to be solved, and what methods can be used to plan such a complex IT infrastructure in the field of healthcare. By means of project management, system analyses, process models, 3LGM2 models and resource plans, an appropriate concept with different views is created. This concept supports information management in its enterprise architecture planning activities and constitutes a first step toward implementing a connected healthcare and medical research platform.
New Geodetic Infrastructure for Australia: The NCRIS / AuScope Geospatial Component
NASA Astrophysics Data System (ADS)
Tregoning, P.; Watson, C. S.; Coleman, R.; Johnston, G.; Lovell, J.; Dickey, J.; Featherstone, W. E.; Rizos, C.; Higgins, M.; Priebbenow, R.
2009-12-01
In November 2006, the Australian Federal Government announced AU$15.8M in funding for geospatial research infrastructure through the National Collaborative Research Infrastructure Strategy (NCRIS). Funded within a broader capability area titled ‘Structure and Evolution of the Australian Continent’, NCRIS has provided a significant investment across Earth imaging, geochemistry, numerical simulation and modelling, the development of a virtual core library, and geospatial infrastructure. Known collectively as AuScope (www.auscope.org.au), this capability area has brought together Australia's leading Earth scientists to decide upon the most pressing scientific issues and infrastructure needs for studying Earth systems and their impact on the Australian continent. Importantly, and at the same time, the investment in geospatial infrastructure offers the opportunity to raise Australian geodetic science capability to the highest international level into the future. The geospatial component of AuScope builds on the AU$15.8M of direct funding through the NCRIS process with significant in-kind and co-investment from universities and State/Territory and Federal government departments. The infrastructure to be acquired includes an FG5 absolute gravimeter, three gPhone relative gravimeters, three 12.1 m radio telescopes for geodetic VLBI, a continent-wide network of continuously operating geodetic-quality GNSS receivers, a trial of a mobile SLR system, and access to updated cluster computing facilities. We present an overview of the AuScope geospatial capability, review the current status of the infrastructure procurement and discuss some examples of the scientific research that will utilise the new geospatial infrastructure.
Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Andrade, P.; Babik, M.; Bhatt, K.; Chand, P.; Collados, D.; Duggal, V.; Fuente, P.; Hayashi, S.; Imamagic, E.; Joshi, P.; Kalmady, R.; Karnani, U.; Kumar, V.; Lapka, W.; Quick, R.; Tarragon, J.; Teige, S.; Triantafyllidis, C.
2012-12-01
The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO[1] managers, service managers, management), from different middleware providers (ARC[2], dCache[3], gLite[4], UNICORE[5] and VDT[6]), consortiums (WLCG[7], EMI[11], EGI[15], OSG[13]), and operational teams (GOC[16], OMB[8], OTAG[9], CSIRT[10]). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG[27] portal where it is exposed to other clients. This monitoring workflow profits from the interoperability established between the SAM[19] and RSV[20] frameworks. We show how these two distributed structures are capable of uniting technologies and hiding the complexity around them, making them easy to be used by the community. Finally, the different supported deployment strategies, tailored not only for monitoring the entire infrastructure but also for monitoring sites and virtual organizations, are presented and the associated operational benefits highlighted.
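The aggregation step described above, turning raw probe execution results into an availability figure, can be caricatured in a few lines. The real SAM/RSV computation (status weighting, scheduled downtimes, time windows) is far richer; this sketch only shows the basic idea, with invented status strings:

```python
def availability(samples):
    """Fraction of probe samples that returned OK (toy aggregation)."""
    if not samples:
        return 0.0
    ok = sum(1 for s in samples if s == "OK")
    return ok / len(samples)

# Hypothetical probe results collected for one service over a reporting period.
probes = ["OK", "OK", "CRITICAL", "OK", "WARNING", "OK"]
print(f"availability = {availability(probes):.2%}")
```

In the production workflow, figures like this are computed per service, site and VO, then rolled up into the availability reports exposed through the MyWLCG portal.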
Cloud Environment Automation: from infrastructure deployment to application monitoring
NASA Astrophysics Data System (ADS)
Aiftimiei, C.; Costantini, A.; Bucchi, R.; Italiano, A.; Michelotto, D.; Panella, M.; Pergolesi, M.; Saletta, M.; Traldi, S.; Vistoli, C.; Zizzi, G.; Salomoni, D.
2017-10-01
The potential offered by the cloud paradigm is often limited by technical issues, rules and regulations. In particular, the activities related to the design and deployment of the Infrastructure as a Service (IaaS) cloud layer can be difficult to apply and time-consuming for infrastructure maintainers. In this paper the research activity, carried out during the Open City Platform (OCP) research project [1] and aimed at designing and developing an automatic tool for cloud-based IaaS deployment, is presented. Open City Platform is an industrial research project funded by the Italian Ministry of University and Research (MIUR), started in 2014. It intends to research, develop and test new technological solutions that are open, interoperable and usable on demand in the field of Cloud Computing, along with new sustainable organizational models that can be deployed for and adopted by Public Administrations (PA). The presented work and the related outcomes are aimed at simplifying the deployment and maintenance of a complete IaaS cloud-based infrastructure.
NASA Astrophysics Data System (ADS)
Argenti, M.; Giannini, V.; Averty, R.; Bigagli, L.; Dumoulin, J.
2012-04-01
The EC FP7 ISTIMES project has the goal of realizing an ICT-based system exploiting distributed and local sensors for non-destructive electromagnetic monitoring, in order to make critical transport infrastructures more reliable and safe. Higher situation awareness, thanks to real-time, detailed information and images of the monitored infrastructure status, improves decision capabilities for emergency management stakeholders. Web-enabled sensors and a service-oriented approach form the core of the architecture, providing a system that adopts open standards (e.g. OGC SWE, OGC CSW) and strives for full interoperability with other GMES and European Spatial Data Infrastructure initiatives as well as compliance with INSPIRE. The system exploits an open, easily scalable network architecture to accommodate a wide range of sensors, integrated with a set of tools for handling, analyzing and processing large data volumes from different organizations with different data models. Situation Awareness tools are also integrated in the system. The definition of sensor observations and services follows a metadata model based on the ISO 19115 core set of metadata elements and the O&M model of OGC SWE. The ISTIMES infrastructure is based on an e-Infrastructure for geospatial data sharing, with: a Data Catalog that implements the discovery services for sensor data retrieval, acting as a broker through static connections based on standard SOS and WNS interfaces; a Decision Support component which helps decision makers by providing support for data fusion, inference and the generation of situation indexes; a Presentation component which implements system-user interaction services for information publication and rendering, by means of a Web Portal using SOA design principles; and a security framework using the Shibboleth open-source middleware, based on the Security Assertion Markup Language and supporting Single Sign-On (SSO).
ACKNOWLEDGEMENT - The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 225663
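The metadata model the ISTIMES abstract mentions (ISO 19115 core elements plus the OGC O&M observation model) can be suggested with a toy observation record. The field names loosely follow O&M's procedure/observed-property/result structure, but the URN, property names and values below are invented, not the project's actual schema:

```python
import json

# Hedged sketch of an O&M-style observation record; identifiers are invented.
observation = {
    "procedure": "urn:example:sensor:thermo-cam-01",   # hypothetical sensor URN
    "observedProperty": "surfaceTemperature",
    "featureOfInterest": "bridge-deck-A",
    "phenomenonTime": "2012-04-01T10:00:00Z",
    "result": {"value": 14.2, "uom": "Cel"},           # value plus unit of measure
}

# Round-trip through JSON, as a stand-in for exchange via catalog/SOS services.
encoded = json.dumps(observation)
decoded = json.loads(encoded)
```

Keeping every sensor's output in one such record shape is what lets a single Data Catalog broker discovery across organizations with otherwise different data models.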
Cooperative high-performance storage in the accelerated strategic computing initiative
NASA Technical Reports Server (NTRS)
Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark
1996-01-01
The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed, storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.
ERIC Educational Resources Information Center
Mogge, Dru, Ed.; And Others
The topic of the 123rd meeting of the Association of Research Libraries (ARL) is the information infrastructure. The ARL is seeking to influence the policies that will form the backbone of the emerging information infrastructure. The first session concentrated on government roles and initiatives and included the following papers: "Opening…
Design and Development of a 200-kW Turbo-Electric Distributed Propulsion Testbed
NASA Technical Reports Server (NTRS)
Papathakis, Kurt V.; Kloesel, Kurt J.; Lin, Yohan; Clarke, Sean; Ediger, Jacob J.; Ginn, Starr
2016-01-01
The National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC) (Edwards, California) is developing a Hybrid-Electric Integrated Systems Testbed (HEIST) Testbed as part of the HEIST Project, to study power management and transition complexities, modular architectures, and flight control laws for turbo-electric distributed propulsion technologies using representative hardware and piloted simulations. Capabilities are being developed to assess the flight readiness of hybrid electric and distributed electric vehicle architectures. Additionally, NASA will leverage experience gained and assets developed from HEIST to assist in flight-test proposal development, flight-test vehicle design, and evaluation of hybrid electric and distributed electric concept vehicles for flight safety. The HEIST test equipment will include three trailers supporting a distributed electric propulsion wing, a battery system and turbogenerator, dynamometers, and supporting power and communication infrastructure, all connected to the AFRC Core simulation. Plans call for 18 high performance electric motors that will be powered by batteries and the turbogenerator, and commanded by a piloted simulation. Flight control algorithms will be developed on the turbo-electric distributed propulsion system.
Thibault, J. C.; Roe, D. R.; Eilbeck, K.; Cheatham, T. E.; Facelli, J. C.
2015-01-01
Biomolecular simulations aim to simulate the structure, dynamics, interactions, and energetics of complex biomolecular systems. With the recent advances in hardware, it is now possible to use more complex and accurate models and also to reach time scales that are biologically significant. Molecular simulations have become a standard tool for toxicology and pharmacology research, but organizing and sharing data, both within the same organization and among different ones, remains a substantial challenge. In this paper we review our recent work leading to the development of a comprehensive informatics infrastructure to facilitate the organization and exchange of biomolecular simulation data. Our efforts include the design of data models and dictionary tools that allow the standardization of the metadata used to describe biomolecular simulations, the development of a thesaurus and an ontology for computational reasoning when searching for biomolecular simulations in distributed environments, and the development of systems based on these models that take advantage of this metadata standardization to manage and share the data at large scale (iBIOMES) and within smaller groups of researchers at laboratory scale (iBIOMES Lite). PMID:26387907
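The idea of a standardized metadata dictionary can be hinted at with a toy validator: records are only shareable across sites if they carry an agreed minimum set of descriptors. The required field names below are invented for illustration and are not the actual iBIOMES dictionary:

```python
# Hypothetical minimum descriptor set for a simulation-metadata record.
REQUIRED = {"method", "force_field", "timestep_fs", "temperature_K"}

def missing_fields(record: dict) -> list:
    """Return the sorted list of required descriptors absent from a record."""
    return sorted(REQUIRED - record.keys())

# A partially filled record, as a depositing lab might first submit it.
rec = {"method": "MD", "force_field": "ff99SB", "timestep_fs": 2}
print(missing_fields(rec))
```

Enforcing a check like this at deposit time is what makes later cross-repository search (large-scale iBIOMES or laboratory-scale iBIOMES Lite) possible, since every record is guaranteed to answer the same basic questions.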
NASA Astrophysics Data System (ADS)
Liu, Yaolong; Kreimeier, Michael; Stumpf, Eike; Zhou, Yaoming; Liu, Hu
2017-05-01
Personal aerial vehicles, an innovative transport mode to bridge the niche between scheduled airliners and ground transport, are seen by aviation researchers and engineers as a solution for fast urban on-demand mobility. This paper reviews recent research efforts on the personal aerial vehicle (PAV), with a focus on US- and Europe-led research activities. As an extension of the programmatic-level overview, several enabling technologies that might promote the deployment of PAVs, such as vertical/short take-off and landing (V/STOL), automation, and distributed electric propulsion, are introduced and discussed. Despite the dramatic innovation in PAV concept development and related technologies, some challenging issues remain, especially safety, infrastructure and public acceptance. As such, further efforts by many stakeholders are required to enable the real implementation and application of PAVs.
Forging new, non-traditional partnerships among physicists, teachers and students
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bardeen, Marjorie; Adams, Mark; Wayne, Mitchell
The QuarkNet collaboration has forged new, nontraditional relationships among particle physicists, high school teachers and their students. QuarkNet provides professional development for teachers and creates opportunities for teachers and students to engage in particle physics data investigations and join research teams. Embedded in the U.S. particle research community, QuarkNet leverages the nature of particle physics research: the long duration of the experiments, with extensive lead times, construction periods, and data collection and analysis periods. QuarkNet is patterned after the large collaborations, with a central management infrastructure and a workload distributed across university- and lab-based research groups. As a result, we describe the important benefits of the QuarkNet outreach program that flow to university faculty and present successful strategies that others can adapt for use in their countries.
Forging new, non-traditional partnerships among physicists, teachers and students
Bardeen, Marjorie; Adams, Mark; Wayne, Mitchell; ...
2016-10-26
The QuarkNet collaboration has forged new, nontraditional relationships among particle physicists, high school teachers and their students. QuarkNet provides professional development for teachers and creates opportunities for teachers and students to engage in particle physics data investigations and join research teams. Embedded in the U.S. particle research community, QuarkNet leverages the nature of particle physics research: the long duration of the experiments, with extensive lead times, construction periods, and data collection and analysis periods. QuarkNet is patterned after the large collaborations, with a central management infrastructure and a workload distributed across university- and lab-based research groups. As a result, we describe the important benefits of the QuarkNet outreach program that flow to university faculty and present successful strategies that others can adapt for use in their countries.
Functionalized multi-walled carbon nanotube based sensors for distributed methane leak detection
This paper presents a highly sensitive, energy efficient and low-cost distributed methane (CH4) sensor system (DMSS) for continuous monitoring, detection and localization of CH4 leaks in natural gas infrastructure such as transmission and distribution pipelines, wells, and produc...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, R.L.; Hamilton, V.A.; Istrail, G.G.
1997-11-01
This report describes the results of a Sandia-funded laboratory-directed research and development project titled "Integrated and Robust Security Infrastructure" (IRSI). IRSI was to provide a broad range of commercial-grade security services to any software application. IRSI has two primary goals: application transparency and a manageable public key infrastructure. IRSI must provide its security services to any application without the need to modify the application to invoke the security services. Public key mechanisms are well suited for a network with many end users and systems. There are many issues that make it difficult to deploy and manage a public key infrastructure. IRSI addressed some of these issues to create a more manageable public key infrastructure.
Utilizing an integrated infrastructure for outcomes research: a systematic review.
Dixon, Brian E; Whipple, Elizabeth C; Lajiness, John M; Murray, Michael D
2016-03-01
To explore the ability of an integrated health information infrastructure to support outcomes research, a systematic review was performed of articles published from 1983 to 2012 by Regenstrief Institute investigators using data from an integrated electronic health record infrastructure involving multiple provider organisations. Articles were independently assessed and classified by study design, disease and other metadata including bibliometrics. A total of 190 articles were identified. Diseases included cognitive (16), cardiovascular (16), infectious (15), chronic illness (14) and cancer (12). Publications grew steadily (26 in the first decade vs. 100 in the last), as did the number of investigators (from 15 in 1983 to 62 in 2012). The proportion of articles involving non-Regenstrief authors also expanded, from 54% in the first decade to 72% in the last decade. During this period, the infrastructure grew from a single health system into a health information exchange network covering more than 6 million patients. Analysis of journal and article metrics reveals high impact for clinical trials and comparative effectiveness research studies that utilised data available in the integrated infrastructure. Integrated information infrastructures support growth in high-quality observational studies and diverse collaboration consistent with the goals of the learning health system. More recent publications demonstrate growing external collaborations facilitated by greater access to the infrastructure and improved opportunities to study broader disease and health outcomes. Integrated information infrastructures can stimulate learning from electronic data captured during routine clinical care but require time and collaboration to reach full potential. © 2015 Health Libraries Group.
OGC and Grid Interoperability in enviroGRIDS Project
NASA Astrophysics Data System (ADS)
Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas
2010-05-01
EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure of the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science oriented applications, as does the Grid-oriented technology that is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is the interoperability between geospatial and Grid infrastructures, providing the basic and the extended features of both technologies. Geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields but especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the additional issues introduced (data management, secure data transfer, data distribution and data computation), the need for an infrastructure capable of managing all those problems becomes an important aspect. The Grid promotes and facilitates secure interoperation of heterogeneous distributed geospatial data within a distributed environment, the creation and management of large distributed computational jobs, and assures a security level for communication and transfer of messages based on certificates. This presentation analyses and discusses the most significant use cases for enabling OGC Web service interoperability with the Grid environment, and focuses on the description and implementation of the most promising one.
In these use cases we pay special attention to issues such as: the relations between the computational Grid and the OGC Web service protocols; the advantages offered by Grid technology, such as secure interoperability between distributed geospatial resources; and the issues introduced by the integration of distributed geospatial data in a secure environment: data and service discovery, management, access and computation. The enviroGRIDS project proposes a new architecture that allows a flexible and scalable approach for integrating the geospatial domain, represented by OGC Web services, with the Grid domain, represented by the gLite middleware. The parallelism offered by Grid technology is discussed and explored at the data, management and computation levels. The analysis is carried out for OGC Web service interoperability in general, but specific details are given for the Web Map Service (WMS), Web Feature Service (WFS), Web Coverage Service (WCS), Web Processing Service (WPS) and Catalogue Service for the Web (CSW). Issues regarding the mapping and interoperability between OGC and Grid standards and protocols are analyzed, as they are the basis for solving the communication problems between the two environments, Grid and geospatial. The presentation mainly highlights how the Grid environment and Grid application capabilities can be extended and utilized for geospatial interoperability. Interoperability between geospatial and Grid infrastructures provides features such as specialized geospatial functionality combined with the high-performance computation and security of the Grid, high spatial model resolution and wide geographical coverage, and flexible combination and interoperability of geographical models.
In accordance with Service Oriented Architecture concepts and the interoperability requirements between geospatial and Grid infrastructures, each main piece of functionality is exposed through the enviroGRIDS Portal and, consequently, through end-user applications such as Decision Maker/Citizen oriented applications. The enviroGRIDS portal is the user's single entry point into the system and presents a uniform graphical user interface. Main reference for further information: [1] enviroGRIDS Project, http://www.envirogrids.net/
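The abstract discusses bridging OGC Web services to the Grid; as an illustrative sketch only, an OGC WMS 1.3.0 GetMap request of the kind mentioned above is a plain key-value HTTP URL. The endpoint, layer name and bounding box below are hypothetical:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width=512, height=512,
                   crs="EPSG:4326", fmt="image/png"):
    """Assemble a WMS 1.3.0 GetMap request URL from its standard parameters."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        # For EPSG:4326 in WMS 1.3.0 the axis order is lat,lon
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer for the Black Sea catchment region:
url = wms_getmap_url("http://example.org/wms", "landcover",
                     bbox=(40.0, 27.0, 48.0, 42.0))
```

A Grid-enabled deployment would wrap such requests inside Grid jobs; the URL structure itself is what the OGC specification fixes.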
ERIC Educational Resources Information Center
Chval, Kathryn B.; Nossaman, Larry D.
2014-01-01
Administrators seek faculty who have the expertise to secure external funding to support their research agenda. Administrators also seek strategies to support and enhance faculty productivity across different ranks. In this manuscript, we describe the infrastructure we established and strategies we implemented to enhance the research enterprise at…
NASA Astrophysics Data System (ADS)
Yurkovich, E. S.; Howell, D. G.
2002-12-01
Exploding population and unprecedented urban development within the last century have helped fuel an increase in the severity of natural disasters. Not only has the world become more populated, but people, information and commodities now travel greater distances to serve larger concentrations of people. While many of the earth's natural hazards remain relatively constant, understanding the risk to increasingly interconnected and large populations requires an expanded analysis. To improve mitigation planning we propose a model that is accessible to planners and implemented with public-domain data and industry-standard GIS software. The model comprises 1) the potential impact of five significant natural hazards (earthquake, flood, tropical storm, tsunami and volcanic eruption) assessed by a comparative index of risk, 2) population density, 3) infrastructure distribution represented by a proxy, 4) the vulnerability of the elements at risk (population density and infrastructure distribution) and 5) the connections and dependencies of our increasingly 'globalized' world, portrayed by a relative linkage index. We depict this model with the equation Risk = f(H, E, V, I), where H is an index normalizing the impact of the five major categories of natural hazards; E is an element at risk, population or infrastructure; V is a measure of the vulnerability of the elements at risk; and I is a measure of the interconnectivity of the elements at risk as a result of economic and social globalization. We propose that future risk analyses include the variable I to better define and quantify risk. Each assessment reflects different repercussions from natural disasters: losses of life or economic activity. Because population and infrastructure are distributed heterogeneously across the Pacific region, two contrasting representations of risk emerge from this study.
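The abstract leaves the functional form of f unspecified; a common convention for composite risk indices is a multiplicative combination of normalized terms. A minimal sketch under that assumption (all index values hypothetical):

```python
def risk_index(h, e, v, i):
    """Multiplicative sketch of Risk = f(H, E, V, I), each index in [0, 1].
    h: combined natural-hazard index; e: element at risk (population or
    infrastructure proxy); v: vulnerability; i: interconnectivity/linkage."""
    for name, x in (("h", h), ("e", e), ("v", v), ("i", i)):
        if not 0.0 <= x <= 1.0:
            raise ValueError(f"{name} must be in [0, 1]")
    return h * e * v * i

# Two hypothetical grid cells under the same hazard exposure:
# a dense, highly linked city and a sparse, weakly linked rural area.
urban = risk_index(h=0.8, e=0.9, v=0.6, i=0.9)
rural = risk_index(h=0.8, e=0.2, v=0.6, i=0.3)
```

The multiplicative form captures the paper's point that adding I changes the ranking: identical hazard and vulnerability yield very different risk once interconnectivity differs.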
A web-portal for interactive data exploration, visualization, and hypothesis testing
Bartsch, Hauke; Thompson, Wesley K.; Jernigan, Terry L.; Dale, Anders M.
2014-01-01
Clinical research studies generate data that need to be shared and statistically analyzed by their participating institutions. The distributed nature of research and the different domains involved present major challenges to data sharing, exploration, and visualization. The Data Portal infrastructure was developed to support ongoing research in the areas of neurocognition, imaging, and genetics. Researchers benefit from the integration of data sources across domains, the explicit representation of knowledge from domain experts, and user interfaces providing convenient access to project specific data resources and algorithms. The system provides an interactive approach to statistical analysis, data mining, and hypothesis testing over the lifetime of a study and fulfills a mandate of public sharing by integrating data sharing into a system built for active data exploration. The web-based platform removes barriers for research and supports the ongoing exploration of data. PMID:24723882
Collaboration and decision making tools for mobile groups
NASA Astrophysics Data System (ADS)
Abrahamyan, Suren; Balyan, Serob; Ter-Minasyan, Harutyun; Degtyarev, Alexander
2017-12-01
Nowadays the use of distributed collaboration tools is widespread in many areas of human activity, but a lack of mobility and dependence on particular equipment create difficulties and slow the development and integration of such technologies. Mobile technologies also allow individuals to interact with each other without the need for traditional office spaces and regardless of location. Hence, realizing special infrastructures on mobile platforms, with the help of ad-hoc wireless local networks, could eliminate hardware attachment and also be useful in terms of scientific approach. Implementations of tools based on mobile infrastructures range from basic internet messengers to complex software for online collaboration in large-scale workgroups. Despite the growth of mobile infrastructures, applied distributed solutions for group decision-making and e-collaboration are not common. In this article we propose a software complex for real-time collaboration and decision-making based on mobile devices, describe its architecture and evaluate its performance.
Hadoop distributed batch processing for Gaia: a success story
NASA Astrophysics Data System (ADS)
Riello, Marco
2015-12-01
The DPAC Cambridge Data Processing Centre (DPCI) is responsible for the photometric calibration of the Gaia data, including the low-resolution spectra. The large data volume produced by Gaia (~26 billion transits/year), the complexity of its data stream and the self-calibrating approach pose unique challenges for the scalability, reliability and robustness of both the software pipelines and the operations infrastructure. DPCI was the first in DPAC to realise the potential of Hadoop and Map/Reduce and to adopt them as the core technologies for its infrastructure. This has proven a winning choice, giving DPCI unmatched processing throughput and reliability within DPAC, to the point that other DPCs have started to follow in its footsteps. In this talk we will present the software infrastructure developed to build the distributed and scalable batch data processing system that is currently used in production at DPCI, and the excellent performance results of the system.
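The Map/Reduce model the abstract refers to splits a batch job into a map phase that emits key-value pairs and a reduce phase that aggregates per key. A minimal pure-Python sketch of the idea, here averaging photometric magnitudes per source (the source IDs and values are invented for illustration; a Hadoop deployment distributes the same two phases across a cluster):

```python
from collections import defaultdict

def map_phase(transits):
    # Map: emit one (source_id, magnitude) pair per transit observation.
    for source_id, mag in transits:
        yield source_id, mag

def reduce_phase(pairs):
    # Shuffle: group values by key; Reduce: collapse each group to a mean.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

transits = [("gaia_1", 12.1), ("gaia_1", 12.3), ("gaia_2", 15.0)]
calibrated = reduce_phase(map_phase(transits))
# calibrated["gaia_1"] ≈ 12.2
```

Because the map output is a flat stream of independent key-value pairs, the framework can partition both phases freely, which is what makes the approach scale to billions of transits.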
Scientific Services on the Cloud
NASA Astrophysics Data System (ADS)
Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong
Scientific computing was one of the first applications of parallel and distributed computation. To this date, scientific applications remain some of the most compute-intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reasons as businesses and other professionals: the hardware is provided, maintained, and administered by a third party; software abstraction and virtualization provide reliability and fault tolerance; and graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and by far the easiest high-performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.
Data Sharing in DHT Based P2P Systems
NASA Astrophysics Data System (ADS)
Roncancio, Claudia; Del Pilar Villamil, María; Labbé, Cyril; Serrano-Alvarado, Patricia
The evolution of peer-to-peer (P2P) systems triggered the building of large-scale distributed applications. The main application domain is data sharing across a very large number of highly autonomous participants. Building such data sharing systems is particularly challenging because of the "extreme" characteristics of P2P infrastructures: massive distribution, high churn rate, no global control, potentially untrusted participants... This article focuses on declarative querying support, query optimization and data privacy in a major class of P2P systems, the one based on Distributed Hash Tables (P2P DHT). The usual approaches and algorithms used by classic distributed systems and databases for providing data privacy and querying services are not well suited to P2P DHT systems, and a considerable amount of work was required to adapt them to the new challenges such systems present. This paper describes the most important solutions found. It also identifies important future research trends in data management for P2P DHT systems.
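The core mechanism of the DHT systems surveyed above is deterministic key-to-node placement: every participant can compute which peer owns a key without global coordination. A minimal consistent-hashing sketch in the style of Chord-like DHTs (node names are hypothetical; real systems add finger tables, replication and churn handling):

```python
import hashlib
from bisect import bisect_right

def _h(key: str) -> int:
    # Stable hash into a large identifier space.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class DHTRing:
    """Each key is owned by the first node clockwise from its hash
    on the ring; any peer can compute this locally."""
    def __init__(self, nodes):
        self.ring = sorted((_h(n), n) for n in nodes)

    def lookup(self, key):
        hashes = [h for h, _ in self.ring]
        idx = bisect_right(hashes, _h(key)) % len(self.ring)
        return self.ring[idx][1]

ring = DHTRing(["node-a", "node-b", "node-c"])
owner = ring.lookup("some-file.dat")
```

This placement rule is exactly why declarative querying is hard on DHTs: only exact-key lookups are native, so range and join queries need the adapted algorithms the paper discusses.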
Addressing Data Access Needs of the Long-tail Distribution of Geoscientists
NASA Astrophysics Data System (ADS)
Malik, T.; Foster, I.
2012-12-01
Geoscientists must increasingly consider data from multiple disciplines and make intelligent connections between the data in order to advance research frontiers on mission-critical problems. As a first step towards making timely and relevant connections, scientists require data and resource access made available through simple and efficient protocols and web services that allow them to conveniently transmit, acquire, process, and inspect data and metadata. The last decade witnessed some vital data and resource access barriers being crossed. "Big iron" data infrastructures provided geoscientists with large volumes of simulation and observational datasets, protocols made data access convenient, and strong governing bodies ensured standards for interoperability, repeatability and auditability. All this remarkable growth in access, however, addresses the needs of publishers of large data and ignores consumers of that data. To date, limited access mechanisms exist for the consumers, who fetch subsets, analyze them, and, more often than not, generate new data and analysis, which finally get published in scientific articles. In this session, we will highlight the data access needs of the long-tail distribution of geoscientists and state-of-the-art cyber-infrastructure approaches proposed to address those needs. The needs and the state of the art arose from discussions held with geoscientists as part of the EarthCube Data Access Workshop, which was coordinated by the authors. Our presentation will summarize the proceedings of the Data Access workshop and present qualifying characteristics of solutions that will continue to serve the needs of these scientists in the long term. Finally, we will present some cyber-infrastructure efforts in building such solutions and also provide a vision of the future CI in which such solutions can be useful.
Overview of Ongoing NRMRL GI Research
This presentation is an overview of ongoing NRMRL Green Infrastructure research and addresses the question: What do we need to know to present a cogent estimate of the value of Green Infrastructure? Discussions included are: stormwater well study, rain gardens and permeable su...
The Satellite Data Thematic Core Service within the EPOS Research Infrastructure
NASA Astrophysics Data System (ADS)
Manunta, Michele; Casu, Francesco; Zinno, Ivana; De Luca, Claudio; Buonanno, Sabatino; Zeni, Giovanni; Wright, Tim; Hooper, Andy; Diament, Michel; Ostanciaux, Emilie; Mandea, Mioara; Walter, Thomas; Maccaferri, Francesco; Fernandez, Josè; Stramondo, Salvatore; Bignami, Christian; Bally, Philippe; Pinto, Salvatore; Marin, Alessandro; Cuomo, Antonio
2017-04-01
EPOS, the European Plate Observing System, is a long-term plan to facilitate the integrated use of data, data products, software and services, available from distributed Research Infrastructures (RI), for solid Earth science in Europe. Indeed, EPOS integrates a large number of existing European RIs belonging to several fields of the Earth science, from seismology to geodesy, near fault and volcanic observatories as well as anthropogenic hazards. The EPOS vision is that the integration of the existing national and trans-national research infrastructures will increase access and use of the multidisciplinary data recorded by the solid Earth monitoring networks, acquired in laboratory experiments and/or produced by computational simulations. The establishment of EPOS will foster the interoperability of products and services in the Earth science field to a worldwide community of users. Accordingly, the EPOS aim is to integrate the diverse and advanced European Research Infrastructures for solid Earth science, and build on new e-science opportunities to monitor and understand the dynamic and complex solid-Earth System. One of the EPOS Thematic Core Services (TCS), referred to as Satellite Data, aims at developing, implementing and deploying advanced satellite data products and services, mainly based on Copernicus data (namely Sentinel acquisitions), for the Earth science community. This work intends to present the technological enhancements, fostered by EPOS, to deploy effective satellite services in a harmonized and integrated way. In particular, the Satellite Data TCS will deploy five services, EPOSAR, GDM, COMET, 3D-Def and MOD, which are mainly based on the exploitation of SAR data acquired by the Sentinel-1 constellation and designed to provide information on Earth surface displacements. 
In particular, the planned services will provide both advanced DInSAR products (deformation maps, velocity maps, deformation time series) and value-added measurements (source models, 3D displacement maps, seismic hazard maps). Moreover, the services will release both on-demand and systematic products. The latter will be generated and made available to users on a continuous basis, by processing each Sentinel-1 acquisition as soon as it arrives, over a defined number of areas of interest; the former will allow users to select data, areas, and time periods to carry out their own analyses via an on-line platform. The satellite components will be integrated within the EPOS infrastructure through a common and harmonized interface that will allow users to search, process and share remote sensing images and results. This gateway to the satellite services will be the ESA Geohazards Exploitation Platform (GEP), a new cloud-based platform for satellite Earth Observation designed to support the scientific community in understanding high-impact natural disasters. The Satellite Data TCS will use GEP as the common interface toward the main EPOS portal, providing EPOS users not only with data products but also with relevant processing and visualisation software, thus allowing them to gather and process large datasets on a cloud-computing infrastructure without any need to download them locally.
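One of the products named above, a velocity map, reduces each pixel's deformation time series to a mean line-of-sight rate, typically via a least-squares linear fit. A stdlib-only sketch of that reduction (the sample displacement record is invented for illustration):

```python
def mean_velocity(times_yr, displ_mm):
    """Ordinary least-squares slope of a deformation time series:
    mean line-of-sight velocity in mm/yr."""
    n = len(times_yr)
    t_bar = sum(times_yr) / n
    d_bar = sum(displ_mm) / n
    num = sum((t - t_bar) * (d - d_bar) for t, d in zip(times_yr, displ_mm))
    den = sum((t - t_bar) ** 2 for t in times_yr)
    return num / den

# Hypothetical subsidence record sampled every half year:
t = [0.0, 0.5, 1.0, 1.5, 2.0]
d = [0.0, -2.6, -5.1, -7.4, -10.0]
v = mean_velocity(t, d)  # ≈ -5 mm/yr (subsidence)
```

Production DInSAR chains fit per pixel over stacks of hundreds of interferograms, but the per-pixel reduction is this same slope estimate.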
Damage identification in highway bridges using distribution factors
NASA Astrophysics Data System (ADS)
Gangone, Michael V.; Whelan, Matthew J.
2017-04-01
The U.S. infrastructure system is well behind the needs of the 21st century and in dire need of improvements. The American Society of Civil Engineers (ASCE) graded America's infrastructure as a "D+" in its 2013 Report Card. Bridges are a major component of the infrastructure system and were awarded a "C+". Nearly 25 percent of the nation's bridges are categorized as deficient by the Federal Highway Administration (FHWA). Most bridges were designed with an expected service life of roughly 50 years, and today the average age of a bridge is 42 years. Finding alternative methods of condition assessment that capture the true performance of a bridge is of high importance. This paper discusses the monitoring of two multi-girder/stringer bridges at different ages of service life. Normal strain measurements were used to calculate the load distribution factor at the midspan of each bridge under controlled loading conditions. Controlled progressive damage was applied to one of the superstructures to determine whether the damage could be detected using the distribution factor. An uncertainty analysis, based on the accuracy and precision of the normal strain measurement, was undertaken to determine how effective the distribution factor measurement is as a damage indicator. The analysis indicates that this load testing parameter may be an effective measure for detecting damage.
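A common strain-based estimate of the load distribution factor divides each girder's midspan strain by the sum over all girders, so the factors sum to one; damage in one girder shifts the pattern. A minimal sketch under that convention (the microstrain readings are hypothetical):

```python
def distribution_factors(strains):
    """Girder load distribution factors from midspan normal strains:
    DF_i = eps_i / sum(eps_j). Assumes comparable girder stiffness/geometry."""
    total = sum(strains)
    if total <= 0:
        raise ValueError("strains must sum to a positive value")
    return [s / total for s in strains]

# Hypothetical readings across five girders under the same test truck,
# before and after simulated stiffness loss in girder 3:
before = distribution_factors([120, 210, 260, 200, 110])
after = distribution_factors([120, 180, 310, 190, 100])
# Largest per-girder change in distribution factor:
shift = max(abs(a - b) for a, b in zip(after, before))
```

Because the factors are a normalized pattern rather than absolute strains, the comparison is insensitive to the exact truck weight, which is part of why the paper considers it as a damage indicator.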
42 CFR § 512.505 - Distribution arrangements under the EPM.
Code of Federal Regulations, 2010 CFR
2017-10-01
... SERVICES (CONTINUED) HEALTH CARE INFRASTRUCTURE AND MODEL PROGRAMS EPISODE PAYMENT MODEL Financial... distribute all or a portion of any gainsharing payment it receives from the EPM participant only in... with all applicable laws and regulations. (4) The opportunity to make or receive a distribution payment...
Sustainability considerations for health research and analytic data infrastructures.
Wilcox, Adam; Randhawa, Gurvaneet; Embi, Peter; Cao, Hui; Kuperman, Gilad J
2014-01-01
The United States has made recent large investments in creating data infrastructures to support the important goals of patient-centered outcomes research (PCOR) and comparative effectiveness research (CER), with still more investment planned. These initial investments, while critical to the creation of the infrastructures, are not expected to sustain them much beyond the initial development. To provide the maximum benefit, the infrastructures need to be sustained through innovative financing models while providing value to PCOR and CER researchers. Based on our experience with creating flexible sustainability strategies (i.e., strategies that are adaptive to the different characteristics and opportunities of a resource or infrastructure), we define specific factors that are important considerations in developing a sustainability strategy. These factors include assets, expansion, complexity, and stakeholders. Each factor is described, with examples of how it is applied. These factors are dimensions of variation in different resources, to which a sustainability strategy should adapt. We also identify specific important considerations for maintaining an infrastructure, so that the long-term intended benefits can be realized. These observations are presented as lessons learned, to be applied to other sustainability efforts. We define the lessons learned, relating them to the defined sustainability factors as interactions between factors. Using perspectives and experiences from a diverse group of experts, we define broad characteristics of sustainability strategies and important observations, which can vary for different projects. Other descriptions of adaptive, flexible, and successful models of collaboration between stakeholders and data infrastructures can expand this framework by identifying other factors for sustainability, and give more concrete directions on how sustainability can be best achieved.
European environmental research infrastructures are going for common 30 years strategy
NASA Astrophysics Data System (ADS)
Asmi, Ari; Konjin, Jacco; Pursula, Antti
2014-05-01
Environmental research infrastructures are facilities, resources, systems and related services used by research communities to conduct top-level research. Environmental research addresses processes at very different time scales, and supporting research infrastructures must be designed as long-term facilities in order to meet the requirements of continuous environmental observation, measurement and analysis. This longevity makes environmental research infrastructures ideal structures to support long-term development in the environmental sciences. The ENVRI project is a collaborative action of the major European (ESFRI) environmental research infrastructures working towards increased co-operation and interoperability between the infrastructures. One of the key products of the ENVRI project is to combine the long-term plans of the individual infrastructures into a common strategy describing the vision and planned actions. The envisaged vision for environmental research infrastructures toward 2030 is to support a holistic understanding of our planet and its behavior. The development of a 'Standard Model of the Planet' is a common ambition: a challenge to define an environmental standard model, a framework of all interactions within the Earth System, from the solid earth to near space. Indeed, scientists feel challenged to contribute to a 'Standard Model of the Planet' with data, models, algorithms and discoveries. Understanding the Earth System as an interlinked system requires a systems approach, and the environmental sciences are rapidly moving to become a system-level science, mainly because modern science, engineering and society increasingly face complex problems that can only be understood in the context of the full overall system. The strategy of the supporting collaborating research infrastructures is based on developing three key factors for the environmental sciences: the technological, the cultural and the human capital.
Technological capital development concentrates on improving the capacities to measure, observe, preserve and compute. This requires staff, technologies, sensors, satellites, floats, and software for integration, analysis and modeling, including data storage, computing platforms and networks. Cultural capital development addresses issues such as open access to data, rules, licenses, citation agreements, IPR agreements, technologies for machine-machine interaction, workflows, metadata, and the RI community at the policy level. Human capital actions are based on the anticipated need for specialists, including data scientists and 'generalists' who oversee more than just their own discipline. Developing these as interrelated services should help the scientific community to undertake innovative and large projects contributing to a 'Standard Model of the Planet'. To achieve the overall goal, ENVRI will publish a set of action items containing intermediate aims and bigger and smaller steps towards the development of the 'Standard Model of the Planet' approach. This timeline of actions can be used as a reference and 'common denominator' in defining new projects and research programs, whether within the various environmental scientific disciplines, in cooperation among these disciplines, or in outreach towards other disciplines such as the social sciences, physics/chemistry, and the medical/life sciences.
DOT National Transportation Integrated Search
2017-01-22
The objective of this study is to develop new railway capacity evaluation tools and infrastructure planning techniques to address infrastructure or operations planning challenges under different operating styles. Three main research questions will be...
Field Evaluation of Innovative Wastewater Collection System Condition Assessment Technologies
As part of an effort to address aging infrastructure needs, the U.S. Environmental Protection Agency (USEPA) initiated research under the Aging Water Infrastructure program, part of the USEPA Office of Water’s Sustainable Infrastructure Initiative. This presentation discusses fi...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-13
... Deployment Analysis Report Review; Notice of Public Meeting AGENCY: Research and Innovative Technology... discuss the Connected Vehicle Infrastructure Deployment Analysis Report. The webinar will provide an... and Transportation Officials (AASHTO) Connected Vehicle Infrastructure Deployment Analysis Report...
Aerosol classification using EARLINET measurements for an intensive observational period
NASA Astrophysics Data System (ADS)
Papagiannopoulos, Nikolaos; Mona, Lucia; Pappalardo, Gelsomina
2016-04-01
ACTRIS (Aerosols, Clouds and Trace gases Research Infrastructure Network) organized an intensive observation period during summer 2012. This campaign aimed at the provision of advanced observations of physical and chemical aerosol properties, the delivery of information about the 3D distribution of European atmospheric aerosols, and the monitoring of Saharan dust intrusion events. EARLINET (European Aerosol Research Lidar Network) participated in the ACTRIS campaign through additional measurements according to the EARLINET schedule as well as daily lidar-profiling measurements around sunset by 11 selected lidar stations for the period 8 June to 17 July. EARLINET observations during this almost two-month period are used to characterize the optical properties and vertical distribution of long-range transported aerosol over the broader Mediterranean basin. The lidar measurements of aerosol intensive parameters (lidar ratio, depolarization, Angstrom exponents) are shown to vary with location and aerosol type. A methodology based on EARLINET observations of frequently observed aerosol types is used to classify aerosols into seven separate types. The summertime Mediterranean basin is prone to African dust aerosols, and two major dust events were studied: the first occurred from 18 to 21 June and the second lasted from 28 June to 6 July. The lidar ratio within the dust layer was found to be wavelength independent, with mean values of 58±14 sr at 355 nm and 57±11 sr at 532 nm. For the particle linear depolarization ratio, mean values of 0.27±0.04 at 532 nm were found. Acknowledgements. The financial support for EARLINET in the ACTRIS Research Infrastructure Project by the European Union's Horizon 2020 research and innovation programme under grant agreement no. 654169 and previously under grant agreement no. 262254 in the Seventh Framework Programme (FP7/2007-2013) is gratefully acknowledged.
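One of the intensive parameters named above, the Angstrom exponent, measures the wavelength dependence of aerosol optical properties between the two lidar wavelengths (355 and 532 nm): near zero for coarse desert dust, well above one for fine particles. A minimal sketch (the optical-depth values are hypothetical):

```python
import math

def angstrom_exponent(tau1, tau2, lam1=355.0, lam2=532.0):
    """Angstrom exponent between two wavelengths:
    a = -ln(tau1 / tau2) / ln(lam1 / lam2)."""
    return -math.log(tau1 / tau2) / math.log(lam1 / lam2)

# Coarse desert dust: nearly wavelength-independent extinction -> a near 0
a_dust = angstrom_exponent(0.31, 0.30)
# Fine particles (e.g. smoke): strong wavelength dependence -> a above 1
a_fine = angstrom_exponent(0.60, 0.30)
```

Together with the lidar ratio and depolarization, this wavelength dependence is what lets a classification scheme of the kind described separate dust from other aerosol types.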
Unidata: A geoscience e-infrastructure for International Data Sharing
NASA Astrophysics Data System (ADS)
Ramamurthy, Mohan
2017-04-01
The Internet and its myriad manifestations, including the World Wide Web, have amply demonstrated the compounding benefits of a global cyberinfrastructure and the power of networked communities as institutions and people exchange knowledge, ideas, and resources. The Unidata Program recognizes those benefits, and over the past several years it has developed a growing portfolio of international data distribution activities, conducted in close collaboration with academic, research and operational institutions on several continents, to advance earth system science education and research. The portfolio includes provision of data, tools, support and training as well as outreach activities that bring various stakeholders together to address important issues, all toward the goal of building a community with a shared vision. The overarching goals of Unidata's international data sharing activities include:
• democratization of access to and use of data that describe the dynamic earth system, by facilitating access to a broad spectrum of observations and forecasts;
• building capacity and empowering geoscientists and educators worldwide by encouraging local communities where data, tools, and best practices in education and research are shared;
• strengthening international science partnerships for exchanging knowledge and expertise;
• supporting faculty and students at research and educational institutions in the use of Unidata systems;
• building regional and global communities around specific geoscientific themes.
In this presentation, I will present Unidata's ongoing data sharing activities in Latin America, Europe, Africa and Antarctica that are enabling linkages to existing and emergent e-infrastructures and operational networks, including recent advances in developing interoperable data systems, tools, and services that benefit the geosciences.
Particular emphasis in the presentation will be placed on examples of the use of Unidata's International Data Distribution Network, Local Data Manager, and THREDDS in various settings, as well as on experiences and lessons learned from the implementation and benefits of the myriad data sharing efforts.
DNAseq Workflow in a Diagnostic Context and an Example of a User Friendly Implementation.
Wolf, Beat; Kuonen, Pierre; Dandekar, Thomas; Atlan, David
2015-01-01
Over recent years, next-generation sequencing (NGS) technologies evolved from costly tools used by very few into a much more accessible and economically viable technology. Through this recently gained popularity, its use cases expanded from research environments into clinical settings. But the technical know-how and infrastructure required to analyze the data remain an obstacle to wider adoption of this technology, especially in smaller laboratories. We present GensearchNGS, a commercial DNAseq software suite distributed by Phenosystems SA. The focus of GensearchNGS is the optimal usage of already existing infrastructure while keeping its use simple. This is achieved through the integration of existing tools in a comprehensive software environment, as well as custom algorithms developed with the restrictions of limited infrastructures in mind, including the possibility of connecting multiple computers to speed up compute-intensive parts of the analysis such as sequence alignments. We present a typical DNAseq workflow for NGS data analysis and the approach GensearchNGS takes to implement it. The presented workflow goes from raw data quality control to the final variant report, and includes features such as gene panels and the integration of online databases, like Ensembl for annotations or Cafe Variome for variant sharing.
Higashi, Takahiro; Nakamura, Fumiaki; Shibata, Akiko; Emori, Yoshiko; Nishimoto, Hiroshi
2014-01-01
Monitoring the current status of cancer care is essential for effective cancer control and high-quality cancer care. To address the information needs of patients and physicians in Japan, hospital-based cancer registries are operated in 397 hospitals designated as cancer care hospitals by the national government. These hospitals collect information on all cancer cases encountered in each hospital according to precisely defined coding rules. The Center for Cancer Control and Information Services at the National Cancer Center supports the management of the hospital-based cancer registry by providing training for tumor registrars and by developing and maintaining the standard software and continuing communication, which includes mailing lists, a customizable web site and site visits. Data from the cancer care hospitals are submitted annually to the Center, compiled, and distributed as the National Cancer Statistics Report. The report reveals the national profiles of patient characteristics, route to discovery, stage distribution, and first-course treatments of the five major cancers in Japan. A system designed to follow up on patient survival will soon be established. Findings from the analyses will reveal characteristics of designated cancer care hospitals nationwide and will show how characteristics of patients with cancer in Japan differ from those of patients with cancer in other countries. The database will provide an infrastructure for future clinical and health services research and will support quality measurement and improvement of cancer care. Researchers and policy-makers in Japan are encouraged to take advantage of this powerful tool to enhance cancer control and their clinical practice.
NASA Astrophysics Data System (ADS)
Marcus, Kelvin
2014-06-01
The U.S. Army Research Laboratory (ARL) has built a "Network Science Research Lab" to support research that aims to improve its ability to analyze, predict, design, and govern complex systems that interweave the social/cognitive, information, and communication network genres. Researchers at ARL and the Network Science Collaborative Technology Alliance (NS-CTA), a collaborative research alliance funded by ARL, conducted experimentation to determine whether automated network monitoring tools and task-aware agents deployed within an emulated tactical wireless network could increase the retrieval of relevant data from heterogeneous distributed information nodes. ARL and the NS-CTA required the capability to perform this experimentation over clusters of heterogeneous nodes with emulated wireless tactical networks, where each node could contain different operating systems, application sets, and physical hardware attributes. Researchers utilized the Dynamically Allocated Virtual Clustering Management System (DAVC) to address the infrastructure support requirements of their experimentation. The DAVC is an experimentation infrastructure that provides the means to dynamically create, deploy, and manage virtual clusters of heterogeneous nodes within a cloud computing environment, based upon resource utilization metrics such as CPU load, available RAM, and hard disk space. The DAVC uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex private networks. Clusters created by the DAVC can be utilized for software development, experimentation, and integration with existing hardware and software. The goal of this paper is to explore how ARL and the NS-CTA leveraged the DAVC to create, deploy, and manage multiple experimentation clusters in support of their experimentation goals.
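The resource-based placement idea behind a DAVC-style scheduler, as summarized above, can be sketched as follows. The data model and function names are invented for illustration only; the abstract does not disclose the actual DAVC scheduling algorithm.

```python
from dataclasses import dataclass

@dataclass
class Host:
    """A physical host with its currently free resources."""
    name: str
    free_cpu: float   # fraction of CPU free, 0.0 to 1.0
    free_ram: int     # free RAM in MB
    free_disk: int    # free disk in GB

def place(hosts, need_cpu, need_ram, need_disk):
    """Pick a host for one virtual node: among hosts with enough free
    CPU, RAM, and disk, prefer the least-loaded one. Reserves the
    resources on the chosen host and returns its name, or None if no
    host can satisfy the request."""
    candidates = [h for h in hosts
                  if h.free_cpu >= need_cpu
                  and h.free_ram >= need_ram
                  and h.free_disk >= need_disk]
    if not candidates:
        return None
    best = max(candidates, key=lambda h: (h.free_cpu, h.free_ram))
    best.free_cpu -= need_cpu
    best.free_ram -= need_ram
    best.free_disk -= need_disk
    return best.name
```

A real scheduler would also track VLAN assignments per cluster (to provide the crosstalk isolation the abstract attributes to 802.1Q) and re-read live utilization rather than mutating cached numbers, but the greedy best-fit selection above captures the core decision.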
Napoles, Anna; Cook, Elise; Ginossar, Tamar; Knight, Kendrea D.; Ford, Marvella E.
2017-01-01
The underrepresentation of ethnically diverse populations in cancer clinical trials results in the inequitable distribution of the risks and benefits of this research. Using a case study approach, we apply a conceptual framework of factors associated with the participation of diverse population groups in cancer clinical trials developed by Dr. Jean Ford and colleagues to increase understanding of the specific strategies, and barriers and promoters addressed by these strategies, that resulted in marked success in accrual of racially and ethnically diverse populations in cancer clinical research. Results indicate that the studies presented were able to successfully engage minority participants due to the creation and implementation of multi-level, multifaceted strategies that included: culturally and linguistically appropriate outreach, education, and research studies that were accessible in local communities; infrastructure to support engagement of key stakeholders, clinicians, and organizations serving minority communities; testimonials by ethnically diverse cancer survivors; availability of medical interpretation services; and providing infrastructure that facilitated the engagement in clinical research of clinicians who care for minority patient populations. These strategic efforts were effective in addressing limited awareness of trials, lack of opportunities to participate, and acceptance of engagement in cancer clinical trials. Careful attention to the context and population characteristics in which cancer clinical trials are conducted will be necessary to address disparities in research participation and cancer outcomes. These studies illustrate that progress on minority accrual into clinical research requires intentional efforts to overcome barriers at all three stages of the accrual process: awareness, opportunity and acceptance of participation. PMID:28052822
Holve, Erin; Segal, Courtney; Hamilton Lopez, Marianne
2012-07-01
The Electronic Data Methods (EDM) Forum brings together perspectives from the Prospective Outcome Systems using Patient-specific Electronic data to Compare Tests and therapies (PROSPECT) studies, the Scalable Distributed Research Networks, and the Enhanced Registries projects. This paper discusses challenges faced by the research teams as part of their efforts to develop electronic clinical data (ECD) infrastructure to support comparative effectiveness research (CER). The findings reflect a set of opportunities for transdisciplinary learning, and will ideally enhance the transparency and generalizability of CER using ECD. Findings are based on six exploratory site visits conducted using naturalistic inquiry in the spring of 2011. Themes, challenges, and innovations were identified in the visit summaries through coding, keyword searches, and review for complex concepts. The identified overarching challenges and emerging opportunities include: the substantial level of effort required to establish and sustain data sharing partnerships; the importance of understanding the strengths and limitations of clinical informatics tools, platforms, and models that have emerged to enable research with ECD; the need for rigorous methods to assess data validity, quality, and context for multisite studies; and emerging opportunities to achieve meaningful patient and consumer engagement and to work collaboratively with multidisciplinary teams. The new infrastructure must evolve to serve a diverse set of potential users and must scale to address a range of CER or patient-centered outcomes research (PCOR) questions. To achieve this aim-to improve the quality, transparency, and reproducibility of CER and PCOR-a high level of collaboration and support is necessary to foster partnership and best practices as part of the EDM Forum.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Committee on Science and Technology.
The state of university science and engineering research capabilities is considered. Attention is directed to the need for improving and enhancing the research infrastructure, including support for instrumentation, buildings, and other related research facilities. U.S. universities and colleges are encountering severe facilities and…
Infrastructure development for radioactive materials at the NSLS-II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprouster, D. J.; Weidner, R.; Ghose, S. K.
2018-02-01
The X-ray Powder Diffraction (XPD) Beamline at the National Synchrotron Light Source-II is a multipurpose instrument designed for high-resolution, high-energy X-ray scattering techniques. In this article, the capabilities, opportunities and recent developments in the characterization of radioactive materials at XPD are described. The overarching goal of this work is to provide researchers access to advanced synchrotron techniques suited to the structural characterization of materials for advanced nuclear energy systems. XPD is a new beamline providing high photon flux for X-ray Diffraction, Pair Distribution Function analysis and Small Angle X-ray Scattering. The infrastructure and software described here extend the existing capabilities at XPD to accommodate radioactive materials. Such techniques will contribute crucial information to the characterization and quantification of advanced materials for nuclear energy applications. We describe the automated radioactive sample collection capabilities and recent X-ray Diffraction and Small Angle X-ray Scattering results from neutron irradiated reactor pressure vessel steels and oxide dispersion strengthened steels.
Vogel, Jason R; Moore, Trisha L; Coffman, Reid R; Rodie, Steven N; Hutchinson, Stacy L; McDonough, Kelsey R; McLemore, Alex J; McMaine, John T
2015-09-01
Since its inception, Low Impact Development (LID) has become part of urban stormwater management across the United States, marking progress in the gradual transition from centralized to distributed runoff management infrastructure. The ultimate goal of LID is full, cost-effective implementation to maximize watershed-scale ecosystem services and enhance resilience. To reach that goal in the Great Plains, the multi-disciplinary author team presents this critical review based on thirteen technical questions within the context of regional climate and socioeconomics across increasing complexities in scale and function. Although some progress has been made, much remains to be done including continued basic and applied research, development of local LID design specifications, local demonstrations, and identifying funding mechanisms for these solutions. Within the Great Plains and beyond, by addressing these technical questions within a local context, the goal of widespread acceptance of LID can be achieved, resulting in more effective and resilient stormwater management.
Ouyang, Min; Tian, Hui; Wang, Zhenghua; Hong, Liu; Mao, Zijun
2017-01-17
This article studies a general type of initiating event in critical infrastructures, called spatially localized failures (SLFs), defined as the failure of a set of infrastructure components distributed in a spatially localized area due to damage sustained, while other components outside the area do not directly fail. These failures can be regarded as a special type of intentional attack, such as a bomb or explosive assault, or as a generalized model of the impact of localized natural hazards on large-scale systems. This article introduces three SLF models: node-centered SLFs, district-based SLFs, and circle-shaped SLFs, and proposes an SLF-induced vulnerability analysis method with three aspects: identification of critical locations; comparison of infrastructure vulnerability to random failures, topologically localized failures, and SLFs; and quantification of infrastructure information value. The proposed SLF-induced vulnerability analysis method is finally applied to the Chinese railway system and can also be easily adapted to analyze other critical infrastructures for valuable protection suggestions. © 2017 Society for Risk Analysis.
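The circle-shaped SLF model named in this abstract lends itself to a minimal sketch: fail every node within a radius of a chosen center, then score the damage. The vulnerability metric used below (fraction of nodes remaining in the largest connected component) and the data structures are illustrative assumptions, not the metrics used in the article.

```python
from collections import deque
from math import hypot

def circle_slf(coords, center, radius):
    """Circle-shaped SLF: return the set of nodes whose (x, y)
    coordinates lie within `radius` of `center`."""
    cx, cy = center
    return {n for n, (x, y) in coords.items()
            if hypot(x - cx, y - cy) <= radius}

def largest_component_fraction(coords, edges, failed):
    """Fraction of all nodes that remain in the largest connected
    component after the failed nodes (and their edges) are removed."""
    alive = set(coords) - failed
    adj = {n: set() for n in alive}
    for u, v in edges:
        if u in alive and v in alive:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        seen.add(start)
        size, queue = 0, deque([start])
        while queue:  # BFS over one surviving component
            n = queue.popleft()
            size += 1
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
        best = max(best, size)
    return best / len(coords) if coords else 0.0
```

Sweeping the center over candidate locations and keeping those that minimize this fraction corresponds to the "identification of critical locations" step the abstract describes; the district-based and node-centered variants differ only in how the failed set is chosen.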
URBAN INFRASTRUCTURE RESEARCH PLAN WATER AND WASTEWATER ISSUES
As we approach the twenty-first century, we should be considering where we are today and where the consequences of our actions will place us tomorrow. This is especially true in the management of our aging and growing infrastructure. Infrastructure facilitates movement of people ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Dean N.
2015-01-27
The climate and weather data science community met December 9-11, 2014, in Livermore, California, for the fourth annual Earth System Grid Federation (ESGF) and Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT) Face-to-Face (F2F) Conference, hosted by the Department of Energy, National Aeronautics and Space Administration, National Oceanic and Atmospheric Administration, the European Infrastructure for the European Network of Earth System Modelling, and the Australian Department of Education. Both ESGF and UV-CDAT remain global collaborations committed to developing a new generation of open-source software infrastructure that provides distributed access to, and analysis of, simulated and observed data from the climate and weather communities. The tools and infrastructure created under these international multi-agency collaborations are critical to understanding extreme weather conditions and long-term climate change. In addition, the F2F conference fosters a stronger climate and weather data science community and facilitates a stronger federated software infrastructure. The 2014 F2F conference detailed the progress of ESGF, UV-CDAT, and other community efforts over the year and set new priorities and requirements for existing and impending national and international community projects, such as the Coupled Model Intercomparison Project Phase Six. Specifically discussed at the conference were project capabilities and enhancement needs for data distribution, analysis, visualization, hardware and network infrastructure, standards, and resources.