Sample records for user support infrastructure

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trott, Christian Robert; Lopez, Graham; Shipman, Galen

    This report documents the completion of milestone STPM12-2 Kokkos User Support Infrastructure. The goal of this milestone was to develop and deploy an initial Kokkos support infrastructure, which facilitates communication and growth of the user community, adds a central place for user documentation and manages access to technical experts. Multiple possible support infrastructure venues were considered and a solution was put into place by Q1 of FY18, consisting of (1) a Wiki programming guide, (2) GitHub issues and projects for development planning and bug tracking and (3) a "Slack" channel for low-latency support communications with the Kokkos user community. Furthermore, the desirability of a cloud-based training infrastructure was recognized, and one was put in place in order to support training events.
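
    The report names GitHub issues as one pillar of the support setup. As an illustration only (not taken from the report), here is a minimal Python sketch, assuming the public GitHub REST API and the kokkos/kokkos repository, of how a support team might poll open issues to gauge support load:

      # Sketch: list open issues on the Kokkos repository via the public
      # GitHub REST API v3. Unauthenticated requests are rate-limited.
      import requests

      def open_issue_titles(repo="kokkos/kokkos", per_page=20):
          url = f"https://api.github.com/repos/{repo}/issues"
          resp = requests.get(url, params={"state": "open", "per_page": per_page})
          resp.raise_for_status()
          # The issues feed also contains pull requests; filter them out.
          return [i["title"] for i in resp.json() if "pull_request" not in i]

      if __name__ == "__main__":
          for title in open_issue_titles():
              print(title)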

  2. Requirements Engineering in Building Climate Science Software

    NASA Astrophysics Data System (ADS)

    Batcheller, Archer L.

    Software has an important role in supporting scientific work. This dissertation studies teams that build scientific software, focusing on the way that they determine what the software should do. These requirements engineering processes are investigated through three case studies of climate science software projects. The Earth System Modeling Framework assists modeling applications, the Earth System Grid distributes data via a web portal, and the NCAR (National Center for Atmospheric Research) Command Language is used to convert, analyze and visualize data. Document analysis, observation, and interviews were used to investigate the requirements-related work. The first research question is about how and why stakeholders engage in a project, and what they do for the project. Two key findings arise. First, user counts are a vital measure of project success, which makes adoption important and makes counting tricky and political. Second, despite the importance of quantities of users, a few particular "power users" develop a relationship with the software developers and play a special role in providing feedback to the software team and integrating the system into user practice. The second research question focuses on how project objectives are articulated and how they are put into practice. The team seeks not only to build a software system according to product requirements but also to conduct its work according to process requirements such as user support. Support provides essential communication between users and developers that assists with refining and identifying requirements for the software. It also helps users to learn and apply the software to their real needs. User support is a vital activity for scientific software teams aspiring to create infrastructure. The third research question is about how change in scientific practice and knowledge leads to changes in the software, and vice versa. The "thickness" of a layer of software infrastructure impacts whether the software team or users have control and responsibility for making changes in response to new scientific ideas. Thick infrastructure provides more functionality for users, but gives them less control of it. The stability of infrastructure trades off against the responsiveness that the infrastructure can have to user needs.

  3. The open black box: The role of the end-user in GIS integration

    USGS Publications Warehouse

    Poore, B.S.

    2003-01-01

    Formalist theories of knowledge that underpin GIS scholarship on integration neglect the importance and creativity of end-users in knowledge construction. This has practical consequences for the success of large distributed databases that contribute to spatial-data infrastructures. Spatial-data infrastructures depend on participation at local levels, such as counties and watersheds, and they must be developed to support feedback from local users. Looking carefully at the work of scientists in a watershed in Puget Sound, Washington, USA during the salmon crisis reveals that the work of these end-users articulates different worlds of knowledge. This view of the user is consonant with recent work in science and technology studies and research into computer-supported cooperative work. GIS theory will be enhanced when it makes room for these users and supports their practical work. © Canadian Association of Geographers.

  4. A Cis-Lunar Propellant Infrastructure for Flexible Path Exploration and Space Commerce

    NASA Technical Reports Server (NTRS)

    Oeftering, Richard C.

    2012-01-01

    This paper describes a space infrastructure concept that exploits lunar water for propellant production and delivers it to users in cis-lunar space. The goal is to provide responsive economical space transportation to destinations beyond low Earth orbit (LEO) and enable in-space commerce. This is a game changing concept that could fundamentally affect future space operations, provide greater access to space beyond LEO, and broaden participation in space exploration. The challenge is to minimize infrastructure development cost while achieving a low operational cost. This study discusses the evolutionary development of the infrastructure from a very modest robotic operation to one that is capable of supporting human operations. The cis-lunar infrastructure involves a mix of technologies including cryogenic propellant production, reusable lunar landers, propellant tankers, orbital transfer vehicles, aerobraking technologies, and electric propulsion. This cis-lunar propellant infrastructure replaces Earth-launched propellants for missions beyond LEO. It enables users to reach destinations with smaller launchers or effectively multiplies the user's existing payload capacity. Users can exploit the expanded capacity to launch logistics material that can then be traded with the infrastructure for propellants. This mutually beneficial trade between the cis-lunar infrastructure and propellant users forms the basis of in-space commerce.

  5. Interactive Model-Centric Systems Engineering (IMCSE) Phase Two

    DTIC Science & Technology

    2015-02-28

    Interactive Epoch-Era Analysis leverages humans-in-the-loop analysis and supporting infrastructure... preliminary supporting infrastructure. This will inform the transition strategies, additional case application and prototype user testing.

  6. The TENCompetence Infrastructure: A Learning Network Implementation

    NASA Astrophysics Data System (ADS)

    Vogten, Hubert; Martens, Harrie; Lemmers, Ruud

    The TENCompetence project developed a first release of a Learning Network infrastructure to support individuals, groups and organisations in professional competence development. This Learning Network infrastructure was released as open source to the community, thereby allowing users and organisations to use and contribute to this development as they see fit. The infrastructure consists of client applications providing the user experience and server components that provide the services to these clients. These services implement the domain model (Koper 2006) by provisioning the entities of the domain model (see also Sect. 18.4) and henceforth will be referenced as domain entity services.

  7. Authentication and Authorisation Infrastructure for the Mobility of Users of Academic Libraries: An Overview of Developments

    ERIC Educational Resources Information Center

    Hudomalj, Emil; Jauk, Avgust

    2006-01-01

    Purpose: To give an overview of the current state and trends in authentication and authorisation in satisfying academic library users' mobility and instant access to digital information resources, and to propose that libraries strongly support efforts to establish a global authentication and authorisation infrastructure.…

  8. InterMine Webservices for Phytozome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, Joseph; Hayes, David; Goodstein, David

    2014-01-10

    A data warehousing framework for biological information provides a useful infrastructure for providers and users of genomic data. For providers, the infrastructure gives them a consistent mechanism for extracting raw data, while for users, the web services supported by the software allow them to make either simple and common, or complex and unique, queries of the data.
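
    A hedged sketch of the kind of query such web services enable, assuming the intermine Python client package and PhytoMine (the InterMine instance serving Phytozome); the service URL, class name and paths are assumptions based on the standard InterMine data model, not taken from this record:

      # Sketch: query PhytoMine (InterMine web services for Phytozome)
      # with the `intermine` client package. Service URL, class name and
      # paths are assumptions based on the standard InterMine data model.
      from intermine.webservice import Service

      service = Service("https://phytozome.jgi.doe.gov/phytomine/service")
      query = service.new_query("Gene")
      query.add_view("primaryIdentifier", "organism.shortName")
      query.add_constraint("organism.shortName", "=", "A. thaliana")
      for row in query.rows(start=0, size=10):
          print(row["primaryIdentifier"], row["organism.shortName"])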

  9. Sustaining a Community Computing Infrastructure for Online Teacher Professional Development: A Case Study of Designing Tapped In

    NASA Astrophysics Data System (ADS)

    Farooq, Umer; Schank, Patricia; Harris, Alexandra; Fusco, Judith; Schlager, Mark

    Community computing has recently grown to become a major research area in human-computer interaction. One of the objectives of community computing is to support computer-supported cooperative work among distributed collaborators working toward shared professional goals in online communities of practice. A core issue in designing and developing community computing infrastructures — the underlying sociotechnical layer that supports communitarian activities — is sustainability. Many community computing initiatives fail because the underlying infrastructure does not meet end user requirements; the community is unable to maintain a critical mass of users consistently over time; it generates insufficient social capital to support significant contributions by members of the community; or, as typically happens with funded initiatives, financial and human capital resources become unavailable to further maintain the infrastructure. On the basis of more than 9 years of design experience with Tapped In, an online community of practice for education professionals, we present a case study that discusses four design interventions that have sustained the Tapped In infrastructure and its community to date. These interventions represent broader design strategies for developing online environments for professional communities of practice.

  10. European grid services for global earth science

    NASA Astrophysics Data System (ADS)

    Brewer, S.; Sipos, G.

    2012-04-01

    This presentation will provide an overview of the distributed computing services that the European Grid Infrastructure (EGI) offers to the Earth Sciences community and also explain the processes whereby Earth Science users can engage with the infrastructure. One of the main overarching goals for EGI over the coming year is to diversify its user-base. EGI therefore - through the National Grid Initiatives (NGIs) that provide the bulk of resources that make up the infrastructure - offers a number of routes whereby users, either individually or as communities, can make use of its services. At one level there are two approaches to working with EGI: either users can make use of existing resources and contribute to their evolution and configuration; or alternatively they can work with EGI, and hence the NGIs, to incorporate their own resources into the infrastructure to take advantage of EGI's monitoring, networking and managing services. Adopting this approach does not imply a loss of ownership of the resources. Both of these approaches are entirely applicable to the Earth Sciences community. The former because researchers within this field have been involved with EGI (and previously EGEE) as a Heavy User Community and the latter because they have very specific needs, such as incorporating HPC services into their workflows, and these will require multi-skilled interventions to fully provide such services. In addition to the technical support services that EGI has been offering for the last year or so - the applications database, the training marketplace and the Virtual Organisation services - there now exists a dynamic short-term project framework that can be utilised to establish and operate services for Earth Science users. During this talk we will present a summary of various on-going projects that will be of interest to Earth Science users with the intention that suggestions for future projects will emerge from the subsequent discussions:
    • The Federated Cloud Task Force is already providing a cloud infrastructure through a few committed NGIs. This is being made available to research communities participating in the Task Force and the long-term aim is to integrate these national clouds into a pan-European infrastructure for scientific communities.
    • The MPI group provides support for application developers to port and scale up parallel applications to the global European Grid Infrastructure.
    • A lively portal developer and provider community that is able to setup and operate custom, application and/or community specific portals for members of the Earth Science community to interact with EGI.
    • A project to assess the possibilities for federated identity management in EGI and the readiness of EGI member states for federated authentication and authorisation mechanisms.
    • Operating resources and user support services to process data with new types of services and infrastructures, such as desktop grids, map-reduce frameworks, GPU clusters.

  11. e-Infrastructures supporting research into depression, self-harm and suicide.

    PubMed

    McCafferty, S; Doherty, T; Sinnott, R O; Watt, J

    2010-08-28

    The Economic and Social Research Council (ESRC)-funded Data Management through e-Social Sciences (DAMES) project is investigating, as one of its four research themes, how research into depression, self-harm and suicide may be enhanced through the adoption of e-Science infrastructures and techniques. In this paper, we explore the challenges in supporting such research infrastructures and describe the distributed and heterogeneous datasets that need to be provisioned to support such research. We describe and demonstrate the application of an advanced user and security-driven infrastructure that has been developed specifically to meet these challenges in an on-going study into depression, self-harm and suicide.

  12. The INDIGO-Datacloud Authentication and Authorization Infrastructure

    NASA Astrophysics Data System (ADS)

    Ceccanti, A.; Hardt, M.; Wegh, B.; Millar, AP; Caberletti, M.; Vianello, E.; Licehammer, S.

    2017-10-01

    Contemporary distributed computing infrastructures (DCIs) are not easily and securely accessible by scientists. These computing environments are typically hard to integrate due to interoperability problems resulting from the use of different authentication mechanisms, identity negotiation protocols and access control policies. Such limitations have a big impact on the user experience, making it hard for user communities to port and run their scientific applications on resources aggregated from multiple providers. The INDIGO-DataCloud project aims to provide the services and tools needed to enable a secure composition of resources from multiple providers in support of scientific applications. In order to do so, a common AAI architecture has to be defined that supports multiple authentication mechanisms, supports delegated authorization across services and can be easily integrated in off-the-shelf software. In this contribution we introduce the INDIGO Authentication and Authorization Infrastructure, describing its main components and their status, and how authentication, delegation and authorization flows are implemented across services.
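
    The flow described (authenticate once, then exercise delegated authorization across services) resembles a standard OAuth2 bearer-token pattern. A generic sketch under that assumption follows; all URLs and names are placeholders, not actual INDIGO IAM endpoints:

      # Generic OAuth2-style flow: obtain a token once, then present it
      # as a bearer credential to multiple services. URLs, client ID and
      # secret are placeholders, not actual INDIGO IAM endpoints.
      import requests

      TOKEN_URL = "https://iam.example.org/token"          # hypothetical
      SERVICE_URL = "https://storage.example.org/api/v1"   # hypothetical

      def get_token(client_id: str, client_secret: str) -> str:
          resp = requests.post(TOKEN_URL, data={
              "grant_type": "client_credentials",
              "client_id": client_id,
              "client_secret": client_secret,
          })
          resp.raise_for_status()
          return resp.json()["access_token"]

      def list_resources(token: str):
          resp = requests.get(f"{SERVICE_URL}/resources",
                              headers={"Authorization": f"Bearer {token}"})
          resp.raise_for_status()
          return resp.json()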

  13. Scaling the CERN OpenStack cloud

    NASA Astrophysics Data System (ADS)

    Bell, T.; Bompastor, B.; Bukowiec, S.; Castro Leon, J.; Denis, M. K.; van Eldik, J.; Fermin Lobo, M.; Fernandez Alvarez, L.; Fernandez Rodriguez, D.; Marino, A.; Moreira, B.; Noel, B.; Oulevey, T.; Takase, W.; Wiebalck, A.; Zilli, S.

    2015-12-01

    CERN has been running a production OpenStack cloud since July 2013 to support physics computing and infrastructure services for the site. In the past year, CERN Cloud Infrastructure has seen a constant increase in nodes, virtual machines, users and projects. This paper will present what has been done in order to make the CERN cloud infrastructure scale out.
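
    A minimal sketch of the kind of per-project inventory such scaling work depends on, assuming the openstacksdk Python client and a configured clouds.yaml entry named "mycloud"; this is illustrative, not CERN's actual tooling:

      # Sketch: enumerate compute instances with openstacksdk, assuming
      # a clouds.yaml entry named "mycloud". Illustrative of the kind of
      # inventory a growing cloud must track, not CERN's actual tooling.
      import openstack

      conn = openstack.connect(cloud="mycloud")
      for server in conn.compute.servers():
          print(server.name, server.status)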

  14. Dynamic Collaboration Infrastructure for Hydrologic Science

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.

    2016-12-01

    Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that currently offers water scientists the ability to access modeling and data infrastructure in support of data-intensive modeling and analysis. It supports the sharing of and collaboration around "resources," which are social objects defined to include both data and models in a structured standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud-based computation for the execution of hydrologic models and analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure that can be accessed from environments like HydroShare are increasing. Storage infrastructure can range from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure can range from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this vast array of data and computing infrastructure without having to learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may require processing a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure to enable the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure." In this presentation we discuss the results of this proof-of-concept prototype, which enabled HydroShare users to readily instantiate virtual infrastructure marshaling arbitrary combinations, varieties, and quantities of distributed data and computing infrastructure in addressing big problems in hydrology.
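
    HydroShare publishes a REST API (hsapi); a minimal sketch of listing public resources follows. The endpoint path and response field names are assumptions based on the publicly documented API, not on this abstract:

      # Sketch: list public HydroShare resources via the hsapi REST API.
      # Endpoint path and field names are assumptions based on the
      # publicly documented API.
      import requests

      resp = requests.get("https://www.hydroshare.org/hsapi/resource/",
                          params={"count": 5})
      resp.raise_for_status()
      for res in resp.json().get("results", []):
          print(res.get("resource_title"), res.get("resource_type"))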

  15. Decision Tools for Transportation Infrastructure Reinvestment: User Guidelines for Microcomputer Decision Support System (DSS)

    DOT National Transportation Integrated Search

    1988-07-01

    This report is intended to improve the quality of decisions about reinvestments, and modest new investments, in highway transportation infrastructure. Decisions of this type comprise the majority of planning actions taken in the field of public...

  16. A genome-wide association study platform built on iPlant cyber-infrastructure

    USDA-ARS?s Scientific Manuscript database

    We demonstrated a flexible Genome-Wide Association (GWA) Study (GWAS) platform built upon the iPlant Collaborative Cyber-infrastructure. The platform supports big data management, sharing, and large scale study of both genotype and phenotype data on clusters. End users can add their own analysis too...

  17. ECHO Services: Foundational Middleware for a Science Cyberinfrastructure

    NASA Technical Reports Server (NTRS)

    Burnett, Michael

    2005-01-01

    This viewgraph presentation describes ECHO, an interoperability middleware solution. It uses open, XML-based APIs and supports net-centric architectures and solutions. ECHO has a set of interoperable registries for both data (metadata) and services, and provides user accounts and a common infrastructure for the registries. It is built upon a layered architecture with extensible infrastructure for supporting community-unique protocols. It has been operational since November 2002 and is available as open source.

  18. Cloud Computing in Support of Applied Learning: A Baseline Study of Infrastructure Design at Southern Polytechnic State University

    ERIC Educational Resources Information Center

    Conn, Samuel S.; Reichgelt, Han

    2013-01-01

    Cloud computing represents an architecture and paradigm of computing designed to deliver infrastructure, platforms, and software as constructible computing resources on demand to networked users. As campuses are challenged to better accommodate academic needs for applications and computing environments, cloud computing can provide an accommodating…

  19. VERA 3.6 Release Notes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Richard L.; Kochunas, Brendan; Adams, Brian M.

    The Virtual Environment for Reactor Applications (VERA) components included in this distribution comprise selected computational tools and supporting infrastructure that solve neutronics, thermal-hydraulics, fuel performance, and coupled neutronics-thermal hydraulics problems. The infrastructure components provide a simplified common user input capability and provide for the physics integration with data transfer and coupled-physics iterative solution algorithms.

  20. Nuclear Energy Infrastructure Database Description and User’s Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heidrich, Brenden

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation initiated the Nuclear Energy (NE)–Infrastructure Management Project by tasking the Nuclear Science User Facilities, formerly the Advanced Test Reactor National Scientific User Facility, to create a searchable and interactive database of all pertinent NE-supported and -related infrastructure. This database, known as the Nuclear Energy Infrastructure Database (NEID), is used for analyses to establish needs, redundancies, efficiencies, distributions, etc., to best understand the utility of NE's infrastructure and inform the content of infrastructure calls. The Nuclear Science User Facilities developed the database by utilizing data and policy direction from a variety of reports from the U.S. Department of Energy, the National Research Council, the International Atomic Energy Agency, and various other federal and civilian resources. The NEID currently contains data on 802 research and development instruments housed in 377 facilities at 84 institutions in the United States and abroad. The effort to maintain and expand the database is ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements. This document provides a short tutorial on the navigation of the NEID web portal at NSUF-Infrastructure.INL.gov.

  1. Cooperative high-performance storage in the accelerated strategic computing initiative

    NASA Technical Reports Server (NTRS)

    Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark

    1996-01-01

    The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed, storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.

  2. The new NHGRI-EBI Catalog of published genome-wide association studies (GWAS Catalog)

    PubMed Central

    MacArthur, Jacqueline; Bowler, Emily; Cerezo, Maria; Gil, Laurent; Hall, Peggy; Hastings, Emma; Junkins, Heather; McMahon, Aoife; Milano, Annalisa; Morales, Joannella; Pendlington, Zoe May; Welter, Danielle; Burdett, Tony; Hindorff, Lucia; Flicek, Paul; Cunningham, Fiona; Parkinson, Helen

    2017-01-01

    The NHGRI-EBI GWAS Catalog has provided data from published genome-wide association studies since 2008. In 2015, the database was redesigned and relocated to EMBL-EBI. The new infrastructure includes a new graphical user interface (www.ebi.ac.uk/gwas/), ontology supported search functionality and an improved curation interface. These developments have improved the data release frequency by increasing automation of curation and providing scaling improvements. The range of available Catalog data has also been extended with structured ancestry and recruitment information added for all studies. The infrastructure improvements also support scaling for larger arrays, exome and sequencing studies, allowing the Catalog to adapt to the needs of evolving study design, genotyping technologies and user needs in the future. PMID:27899670
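
    The redesigned Catalog can also be queried programmatically. A hedged sketch against the GWAS Catalog REST API follows; the base URL is the published one, but the example accession and response field names are assumptions:

      # Sketch: fetch one study record from the GWAS Catalog REST API.
      # GCST000001 is an arbitrary example accession; the response field
      # names are assumptions.
      import requests

      BASE = "https://www.ebi.ac.uk/gwas/rest/api"
      resp = requests.get(f"{BASE}/studies/GCST000001")
      resp.raise_for_status()
      study = resp.json()
      print(study.get("publicationInfo", {}).get("title"))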

  3. New security infrastructure model for distributed computing systems

    NASA Astrophysics Data System (ADS)

    Dubenskaya, J.; Kryukov, A.; Demichev, A.; Prikhodko, N.

    2016-02-01

    In this paper we propose a new approach to setting up a user-friendly and yet secure authentication and authorization procedure in a distributed computing system. The security concept of most heterogeneous distributed computing systems is based on a public key infrastructure along with proxy certificates, which are used for rights delegation. In practice, the contradiction between the limited lifetime of proxy certificates and the unpredictable time of request processing is a big issue for the end users of the system. We propose to use hashes that are unlimited in time and individual for each request instead of proxy certificates. Our approach avoids the use of proxy certificates, so the security infrastructure of a distributed computing system becomes easier to develop, support and use.
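
    The paper does not spell out the hash construction, so the following is only one plausible realization: a per-request HMAC over a unique request identifier, valid indefinitely but for exactly one request. A minimal sketch, assuming a secret shared between user agent and service:

      # Minimal sketch of per-request tokens: an HMAC over a unique
      # request ID never expires but authorizes exactly one request.
      # One possible realization; the paper does not specify the scheme.
      import hashlib, hmac, os

      SECRET = os.urandom(32)  # assumed shared between user and service

      def issue_token(request_id: str) -> str:
          return hmac.new(SECRET, request_id.encode(), hashlib.sha256).hexdigest()

      def verify_token(request_id: str, token: str) -> bool:
          return hmac.compare_digest(issue_token(request_id), token)

      token = issue_token("job-42")
      assert verify_token("job-42", token)       # valid for this request
      assert not verify_token("job-43", token)   # useless for any other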

  4. Improving global flood risk awareness through collaborative research: Id-Lab

    NASA Astrophysics Data System (ADS)

    Weerts, A.; Zijderveld, A.; Cumiskey, L.; Buckman, L.; Verlaan, M.; Baart, F.

    2015-12-01

    Scientific and end-user collaboration on operational flood risk modelling and forecasting requires an environment where scientists and end-users can physically work together and demonstrate, enhance and learn about new tools, methods and models for forecasting and warning purposes. Therefore, Deltares has built a real-time demonstration, training and research infrastructure (an 'operational' room and ICT backend). This research infrastructure supports various functions, including (1) real-time response and disaster management, (2) training, (3) collaborative research, and (4) demonstration. The research infrastructure will be used for a mixture of these functions on a regular basis by Deltares and a multitude of scientists and end users, such as universities, research institutes, consultants, governments and aid agencies. This infrastructure facilitates emergency advice and support during international and national disasters caused by rainfall, tropical cyclones or tsunamis. It hosts research flood and storm surge forecasting systems at global/continental/regional scale. It facilitates training for emergency and disaster management (along with hosting forecasting-system user trainings in, for instance, the forecasting platform Delft-FEWS) both internally and externally. The facility is expected to inspire and initiate creative innovations by bringing together different experts from various organizations. The room hosts interactive modelling developments, participatory workshops and stakeholder meetings. State-of-the-art tools, models and software, applied across the globe, are available and on display within the facility. We will present the Id-Lab in detail, with particular focus on the global operational forecasting systems GLOFFIS (Global Flood Forecasting Information System) and GLOSSIS (Global Storm Surge Information System).

  5. Sharing Data and Analytical Resources Securely in a Biomedical Research Grid Environment

    PubMed Central

    Langella, Stephen; Hastings, Shannon; Oster, Scott; Pan, Tony; Sharma, Ashish; Permar, Justin; Ervin, David; Cambazoglu, B. Barla; Kurc, Tahsin; Saltz, Joel

    2008-01-01

    Objectives: To develop a security infrastructure to support controlled and secure access to data and analytical resources in a biomedical research Grid environment, while facilitating resource sharing among collaborators. Design: A Grid security infrastructure, called Grid Authentication and Authorization with Reliably Distributed Services (GAARDS), is developed as a key architecture component of the NCI-funded cancer Biomedical Informatics Grid (caBIG™). GAARDS is designed to support, in a distributed environment: 1) efficient provisioning and federation of user identities and credentials; 2) group-based access control, with which resource providers can enforce policies based on community-accepted groups and local groups; and 3) management of a trust fabric so that policies can be enforced based on required levels of assurance. Measurements: GAARDS is implemented as a suite of Grid services and administrative tools. It provides three core services: Dorian for management and federation of user identities, Grid Trust Service for maintaining and provisioning a federated trust fabric within the Grid environment, and Grid Grouper for enforcing authorization policies based on both local and Grid-level groups. Results: The GAARDS infrastructure is available as a stand-alone system and as a component of the caGrid infrastructure. More information about GAARDS can be accessed at http://www.cagrid.org. Conclusions: GAARDS provides a comprehensive system to address the security challenges associated with environments in which resources may be located at different sites, requests to access the resources may cross institutional boundaries, and user credentials are created, managed, and revoked dynamically in a de-centralized manner. PMID:18308979
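
    A toy sketch of group-based authorization in the spirit of Grid Grouper: a resource policy names the (local or Grid-level) groups whose members may perform an operation. All names below are illustrative, not caGrid APIs:

      # Toy group-based access control: a policy maps operations to the
      # groups allowed to perform them; a user is authorized if any of
      # their groups intersects the policy. Names are illustrative.
      POLICY = {
          "read_images":  {"grid:researchers", "local:radiology"},
          "write_images": {"local:radiology"},
      }

      MEMBERSHIP = {
          "alice": {"grid:researchers"},
          "bob":   {"grid:researchers", "local:radiology"},
      }

      def authorized(user: str, operation: str) -> bool:
          return bool(MEMBERSHIP.get(user, set()) & POLICY.get(operation, set()))

      assert authorized("bob", "write_images")
      assert not authorized("alice", "write_images")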

  6. Network Computing Infrastructure to Share Tools and Data in Global Nuclear Energy Partnership

    NASA Astrophysics Data System (ADS)

    Kim, Guehee; Suzuki, Yoshio; Teshima, Naoya

    CCSE/JAEA (Center for Computational Science and e-Systems/Japan Atomic Energy Agency) integrated a prototype system of a network computing infrastructure for sharing tools and data to support the U.S.-Japan collaboration in GNEP (Global Nuclear Energy Partnership). We focused on three technical issues in applying our information process infrastructure: accessibility, security, and usability. In designing the prototype system, we integrated and improved both network and Web technologies. For the accessibility issue, we adopted SSL-VPN (Secure Sockets Layer-Virtual Private Network) technology for access beyond firewalls. For the security issue, we developed an authentication gateway based on the PKI (Public Key Infrastructure) authentication mechanism to strengthen security. We also set fine-grained access control policies for shared tools and data and used a shared-key-based encryption method to protect tools and data against leakage to third parties. For the usability issue, we chose Web browsers as the user interface and developed a Web application providing functions to support sharing tools and data. Using the WebDAV (Web-based Distributed Authoring and Versioning) protocol, users can manipulate shared tools and data through a Windows-like folder environment. We implemented the prototype system in the Grid infrastructure for atomic energy research, AEGIS (Atomic Energy Grid Infrastructure), developed by CCSE/JAEA. The prototype system was applied for trial use in the first period of GNEP.
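
    WebDAV is an open extension of HTTP, so the folder-style operations described can be exercised with a generic HTTP client. A minimal sketch using Python's requests library, with a hypothetical server URL and credentials:

      # Sketch: WebDAV operations with the generic requests library.
      # Server URL and credentials are placeholders.
      import requests

      BASE = "https://share.example.org/dav"   # hypothetical WebDAV server
      AUTH = ("user", "password")

      # List the immediate children of a collection (Depth: 1).
      resp = requests.request("PROPFIND", f"{BASE}/tools/",
                              headers={"Depth": "1"}, auth=AUTH)
      print(resp.status_code)  # 207 Multi-Status on success

      # Upload a file into the shared area.
      with open("dataset.dat", "rb") as f:
          requests.put(f"{BASE}/data/dataset.dat", data=f, auth=AUTH)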

  7. Multi-Sensor Distributive On-Line Processing, Visualization, and Analysis Infrastructure for an Agricultural Information System at the NASA Goddard Earth Sciences DAAC

    NASA Technical Reports Server (NTRS)

    Teng, William; Berrick, Steve; Leptuokh, Gregory; Liu, Zhong; Rui, Hualan; Pham, Long; Shen, Suhung; Zhu, Tong

    2004-01-01

    The Goddard Space Flight Center Earth Sciences Data and Information Services Center (GES DISC) Distributed Active Archive Center (DAAC) is developing an Agricultural Information System (AIS), evolved from an existing TRMM On-line Visualization and Analysis System for precipitation and other satellite data products and services. AIS outputs will be integrated into existing operational decision support systems for global crop monitoring, such as that of the U.N. World Food Program. The ability to use the raw data stored in the GES DAAC archives is highly dependent on having a detailed understanding of the data's internal structure and physical implementation. Gaining this understanding is a time-consuming process and not a productive investment of the user's time. This is an especially difficult challenge when users need to deal with multi-sensor data that usually are of different structures and resolutions. The AIS has taken a major step towards meeting this challenge by incorporating an underlying infrastructure, called the GES-DISC Interactive Online Visualization and Analysis Infrastructure or "Giovanni," that integrates various components to support web interfaces that allow users to perform interactive analysis on-line without downloading any data. Several instances of the Giovanni-based interface have been or are being created to serve users of TRMM precipitation, MODIS aerosol, and SeaWiFS ocean color data, as well as agricultural applications users. Giovanni-based interfaces are simple to use but powerful. The user selects geophysical parameters, area of interest, and time period; the system then generates an output on screen in a matter of seconds.
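
    Schematically, a Giovanni-style request bundles parameter, region, and time range into one call and returns a rendered result. The sketch below illustrates that pattern only; the endpoint and parameter names are hypothetical, not the actual Giovanni API:

      # Schematic Giovanni-style request: variable, bounding box and time
      # range in, rendered result out. Endpoint and parameter names are
      # hypothetical, for illustration only.
      import requests

      resp = requests.get("https://giovanni.example.org/analysis",
                          params={
                              "variable": "precipitation",
                              "bbox": "-20,-10,55,25",   # W,S,E,N
                              "start": "2004-01-01",
                              "end": "2004-01-31",
                              "operation": "time_averaged_map",
                          })
      resp.raise_for_status()
      with open("result.png", "wb") as f:
          f.write(resp.content)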

  8. Technical support for Life Sciences communities on a production grid infrastructure.

    PubMed

    Michel, Franck; Montagnat, Johan; Glatard, Tristan

    2012-01-01

    Production operation of large distributed computing infrastructures (DCI) still requires a lot of human intervention to reach acceptable quality of service. This may be achievable for scientific communities with solid IT support, but it remains a show-stopper for others. Some application execution environments are used to hide runtime technical issues from end users. But they mostly aim at fault-tolerance rather than incident resolution, and their operation still requires substantial manpower. A longer-term support activity is thus needed to ensure sustained quality of service for Virtual Organisations (VO). This paper describes how the biomed VO has addressed this challenge by setting up a technical support team. Its organisation, tooling, daily tasks, and procedures are described. Results are shown in terms of resource usage by end users, amount of reported incidents, and developed software tools. Based on our experience, we suggest ways to measure the impact of the technical support, perspectives to decrease its human cost and make it more community-specific.

  9. Advanced Optical Burst Switched Network Concepts

    NASA Astrophysics Data System (ADS)

    Nejabati, Reza; Aracil, Javier; Castoldi, Piero; de Leenheer, Marc; Simeonidou, Dimitra; Valcarenghi, Luca; Zervas, Georgios; Wu, Jian

    In recent years, as the bandwidth and the speed of networks have increased significantly, a new generation of network-based applications using the concept of distributed computing and collaborative services is emerging (e.g., Grid computing applications). The use of the available fiber and DWDM infrastructure for these applications is a logical choice offering huge amounts of cheap bandwidth and ensuring global reach of computing resources [230]. Currently, there is a great deal of interest in deploying optical circuit (wavelength) switched network infrastructure for distributed computing applications that require long-lived wavelength paths and address the specific needs of a small number of well-known users. Typical users are particle physicists who, due to their international collaborations and experiments, generate enormous amounts of data (Petabytes per year). These users require a network infrastructure that can support processing and analysis of large datasets through globally distributed computing resources [230]. However, providing wavelength granularity bandwidth services is not an efficient and scalable solution for applications and services that address a wider base of user communities with different traffic profiles and connectivity requirements. Examples of such applications may be: scientific collaboration in smaller scale (e.g., bioinformatics, environmental research), distributed virtual laboratories (e.g., remote instrumentation), e-health, national security and defense, personalized learning environments and digital libraries, evolving broadband user services (e.g., high resolution home video editing, real-time rendering, high definition interactive TV). As a specific example, in e-health services, and in particular mammography applications, stringent network requirements arise due to the size and quantity of images produced by remote mammography. Initial calculations have shown that for 100 patients to be screened remotely, the network would have to securely transport 1.2 GB of data every 30 s [230]. According to the above explanation it is clear that these types of applications need a new network infrastructure and transport technology that makes large amounts of bandwidth at subwavelength granularity, storage, computation, and visualization resources potentially available to a wide user base for specified time durations. As these types of collaborative and network-based applications evolve to address a wide range and large number of users, it is infeasible to build dedicated networks for each application type or category. Consequently, there should be an adaptive network infrastructure able to support all application types, each with their own access, network, and resource usage patterns. This infrastructure should offer flexible and intelligent network elements and control mechanisms able to deploy new applications quickly and efficiently.
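
    The cited mammography requirement pins down the sustained throughput involved; a one-line check of the arithmetic (taking 1 GB = 10^9 bytes):

      # 1.2 GB every 30 s, with 1 GB = 1e9 bytes:
      rate_bytes_per_s = 1.2e9 / 30
      print(rate_bytes_per_s / 1e6)      # 40.0  -> 40 MB/s sustained
      print(rate_bytes_per_s * 8 / 1e6)  # 320.0 -> ~0.32 Gbit/s before overhead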

  10. Support for designing waste sorting systems: A mini review.

    PubMed

    Rousta, Kamran; Ordoñez, Isabel; Bolton, Kim; Dahlén, Lisa

    2017-11-01

    This article presents a mini review of research aimed at understanding material recovery from municipal solid waste. It focuses on two areas, waste sorting behaviour and collection systems, so that research on the link between these areas could be identified and evaluated. The main results presented and the methods used in the articles are categorised and appraised. The mini review reveals that most of the work that offered design guidelines for waste management systems was based on optimising technical aspects only. In contrast, most of the work that focused on user involvement did not consider developing the technical aspects of the system, but was limited to studies of user behaviour. The only clear consensus among the articles that link user involvement with the technical system is that convenient waste collection infrastructure is crucial for supporting source separation. This mini review reveals that even though the connection between sorting behaviour and technical infrastructure has been explored and described in some articles, there is still a gap when using this knowledge to design waste sorting systems. Future research in this field would benefit from being multidisciplinary and from using complementary methods, so that holistic solutions for material recirculation can be identified. It would be beneficial to actively involve users when developing sorting infrastructures, to be sure to provide a waste management system that will be properly used by them.

  11. Quality of service provision assessment in the healthcare information and telecommunications infrastructures.

    PubMed

    Babulak, Eduard

    2006-01-01

    The continuous increase in the complexity and the heterogeneity of corporate and healthcare telecommunications infrastructures will require new assessment methods of quality of service (QoS) provision that are capable of addressing all engineering and social issues with much faster speeds. Speed and accessibility to any information at any time from anywhere will create global communications infrastructures with great performance bottlenecks that may put in danger human lives, power supplies, national economy and security. Regardless of the technology supporting the information flows, the final verdict on the QoS is made by the end user. The users' perception of telecommunications' network infrastructure QoS provision is critical to the successful business management operation of any organization. As a result, it is essential to assess the QoS Provision in the light of user's perception. This article presents a cost effective methodology to assess the user's perception of quality of service provision utilizing the existing Staffordshire University Network (SUN) by adding a component of measurement to the existing model presented by Walker. This paper presents the real examples of CISCO Networking Solutions for Health Care givers and offers a cost effective approach to assess the QoS provision within the campus network, which could be easily adapted to any health care organization or campus network in the world.

  12. Audited credential delegation: a usable security solution for the virtual physiological human toolkit.

    PubMed

    Haidar, Ali N; Zasada, Stefan J; Coveney, Peter V; Abdallah, Ali E; Beckles, Bruce; Jones, Mike A S

    2011-06-06

    We present applications of audited credential delegation (ACD), a usable security solution for authentication, authorization and auditing in distributed virtual physiological human (VPH) project environments that removes the use of digital certificates from end-users' experience. Current security solutions are based on public key infrastructure (PKI). While PKI offers strong security for VPH projects, it suffers from serious usability shortcomings in terms of end-user acquisition and management of credentials which deter scientists from exploiting distributed VPH environments. By contrast, ACD supports the use of local credentials. Currently, a local ACD username-password combination can be used to access grid-based resources while Shibboleth support is underway. Moreover, ACD provides seamless and secure access to shared patient data, tools and infrastructure, thus supporting the provision of personalized medicine for patients, scientists and clinicians participating in e-health projects from a local to the widest international scale.

  13. Environmentally-Preferable Launch Coatings

    NASA Technical Reports Server (NTRS)

    Kessel, Kurt R.

    2015-01-01

    The Ground Systems Development and Operations (GSDO) Program at NASA Kennedy Space Center (KSC), Florida, has the primary objective of modernizing and transforming the launch and range complex at KSC to benefit current and future NASA programs along with other emerging users. Described as the launch support and infrastructure modernization program in the NASA Authorization Act of 2010, the GSDO Program will develop and implement shared infrastructure and process improvements to provide more flexible, affordable, and responsive capabilities to a multi-user community. In support of NASA and the GSDO Program, the objective of this project is to determine the feasibility of environmentally friendly corrosion protecting coatings for launch facilities and ground support equipment (GSE). The focus of the project is corrosion resistance and survivability with the goal to reduce the amount of maintenance required to preserve the performance of launch facilities while reducing mission risk. The project compares coating performance of the selected alternatives to existing coating systems or standards.

  14. Audited credential delegation: a usable security solution for the virtual physiological human toolkit

    PubMed Central

    Haidar, Ali N.; Zasada, Stefan J.; Coveney, Peter V.; Abdallah, Ali E.; Beckles, Bruce; Jones, Mike A. S.

    2011-01-01

    We present applications of audited credential delegation (ACD), a usable security solution for authentication, authorization and auditing in distributed virtual physiological human (VPH) project environments that removes the use of digital certificates from end-users' experience. Current security solutions are based on public key infrastructure (PKI). While PKI offers strong security for VPH projects, it suffers from serious usability shortcomings in terms of end-user acquisition and management of credentials which deter scientists from exploiting distributed VPH environments. By contrast, ACD supports the use of local credentials. Currently, a local ACD username–password combination can be used to access grid-based resources while Shibboleth support is underway. Moreover, ACD provides seamless and secure access to shared patient data, tools and infrastructure, thus supporting the provision of personalized medicine for patients, scientists and clinicians participating in e-health projects from a local to the widest international scale. PMID:22670214

  15. Flexible services for the support of research.

    PubMed

    Turilli, Matteo; Wallom, David; Williams, Chris; Gough, Steve; Curran, Neal; Tarrant, Richard; Bretherton, Dan; Powell, Andy; Johnson, Matt; Harmer, Terry; Wright, Peter; Gordon, John

    2013-01-28

    Cloud computing has been increasingly adopted by users and providers to promote a flexible, scalable and tailored access to computing resources. Nonetheless, the consolidation of this paradigm has uncovered some of its limitations. Initially devised by corporations with direct control over large amounts of computational resources, cloud computing is now being endorsed by organizations with limited resources or with a more articulated, less direct control over these resources. The challenge for these organizations is to leverage the benefits of cloud computing while dealing with limited and often widely distributed computing resources. This study focuses on the adoption of cloud computing by higher education institutions and addresses two main issues: flexible and on-demand access to a large amount of storage resources, and scalability across a heterogeneous set of cloud infrastructures. The proposed solutions leverage a federated approach to cloud resources in which users access multiple and largely independent cloud infrastructures through a highly customizable broker layer. This approach allows for a uniform authentication and authorization infrastructure, a fine-grained policy specification and the aggregation of accounting and monitoring. Within a loosely coupled federation of cloud infrastructures, users can access vast amount of data without copying them across cloud infrastructures and can scale their resource provisions when the local cloud resources become insufficient.

  16. Building a Generic Virtual Research Environment Framework for Multiple Earth and Space Science Domains and a Diversity of Users.

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Fraser, R.; Evans, B. J. K.; Friedrich, C.; Klump, J. F.; Lescinsky, D. T.

    2017-12-01

    Virtual Research Environments (VREs) are now part of academic infrastructures. Online research workflows can be orchestrated whereby data can be accessed from multiple external repositories with processing taking place on public or private clouds, and centralised supercomputers using a mixture of user codes, and well-used community software and libraries. VREs enable distributed members of research teams to actively work together to share data, models, tools, software, workflows, best practices, infrastructures, etc. These environments and their components are increasingly able to support the needs of undergraduate teaching. External to the research sector, they can also be reused by citizen scientists, and be repurposed for industry users to help accelerate the diffusion and hence enable the translation of research innovations. The Virtual Geophysics Laboratory (VGL) in Australia was started in 2012, built using a collaboration between CSIRO, the National Computational Infrastructure (NCI) and Geoscience Australia, with support funding from the Australian Government Department of Education. VGL comprises three main modules that provide an interface to enable users to first select their required data; to choose a tool to process that data; and then access compute infrastructure for execution. VGL was initially built to enable a specific set of researchers in government agencies access to specific data sets and a limited number of tools. Over the years it has evolved into a multi-purpose Earth science platform with access to an increased variety of data (e.g., Natural Hazards, Geochemistry), a broader range of software packages, and an increasing diversity of compute infrastructures. This expansion has been possible because of the approach to loosely couple data, tools and compute resources via interfaces that are built on international standards and accessed as network-enabled services wherever possible. Built originally for researchers who were not fussy about general usability, increasing emphasis on User Interfaces (UIs) and stability will lead to increased uptake in the education and industry sectors. Simultaneously, improvements are being added to facilitate access to data and tools by experienced researchers who want direct access to both data and flexible workflows.

  17. The SCIDIP-ES project - towards an international collaboration strategy for long term preservation of earth science data

    NASA Astrophysics Data System (ADS)

    Riddick, Andrew; Glaves, Helen; Marelli, Fulvio; Albani, Mirko; Tona, Calogera; Marketakis, Yannis; Tzitzikas, Yannis; Guarino, Raffaele; Giaretta, David; Di Giammatteo, Ugo

    2013-04-01

    The capability for long term preservation of earth science data is a key requirement to support on-going research and collaboration within and between many earth science disciplines. A number of critically important current research directions (e.g. understanding climate change, and ensuring sustainability of natural resources) rely on the preservation of data often collected over several decades in a form in which it can be accessed and used easily. Another key driver for strategic long term data preservation is that key research challenges (such as those described above) frequently require cross disciplinary research utilising raw and interpreted data from a number of earth science disciplines. Effective data preservation strategies can support this requirement for interoperability and collaboration, and thereby stimulate scientific innovation. The SCIDIP-ES project (EC FP7 grant agreement no. 283401) seeks to address these and other data preservation challenges by developing a Europe wide infrastructure for long term data preservation comprising appropriate software tools and infrastructure services to enable and promote long term preservation of earth science data. Because we define preservation in terms of continued usability of the digitally encoded information, the generic infrastructure services will allow a wide variety of data to be made usable by researchers from many different domains. This approach promotes international collaboration between researchers and will enable the cost for long-term usability across disciplines to be shared supporting the creation of strong business cases for the long term support of that data. This paper will describe our progress to date, including the results of community engagement and user consultation exercises designed to specify and scope the required tools and services. Our user engagement methodology, ensuring that we are capturing the views of a representative sample of institutional users, will be described. Key results of an in-depth user requirements exercise, and also the conclusions from a survey of existing technologies and policies for earth science data preservation involving almost five hundred respondents across Europe and beyond will also be outlined. A key aim of the project will also be to create harmonised data preservation and access policies for earth science data in Europe, taking into account the requirements of relevant earth science data users and archive providers across Europe, and liaising appropriately with other European data integration and e-infrastructure projects to ensure a collaborative strategy.

  18. Crowdsourced Contributions to the Nation's Geodetic Elevation Infrastructure

    NASA Astrophysics Data System (ADS)

    Stone, W. A.

    2014-12-01

    NOAA's National Geodetic Survey (NGS), a United States Department of Commerce agency, is engaged in providing the nation's fundamental positioning infrastructure - the National Spatial Reference System (NSRS) - which includes the framework for latitude, longitude, and elevation determination as well as various geodetic models, tools, and data. Capitalizing on Global Navigation Satellite System (GNSS) technology for improved access to the nation's precise geodetic elevation infrastructure requires use of a geoid model, which relates GNSS-derived heights (ellipsoid heights) with traditional elevations (orthometric heights). NGS is facilitating the use of crowdsourced GNSS observations collected at published elevation control stations by the professional surveying, geospatial, and scientific communities to help improve NGS' geoid modeling capability. This collocation of published elevation data and newly collected GNSS data integrates together the two height systems. This effort in turn supports enhanced access to accurate elevation information across the nation, thereby benefiting all users of geospatial data. By partnering with the public in this collaborative effort, NGS is not only helping facilitate improvements to the elevation infrastructure for all users but also empowering users of NSRS with the capability to do their own high-accuracy positioning. The educational outreach facet of this effort helps inform the public, including the scientific community, about the utility of various NGS tools, including the widely used Online Positioning User Service (OPUS). OPUS plays a key role in providing user-friendly and high accuracy access to NSRS, with optional sharing of results with NGS and the public. All who are interested in helping evolve and improve the nationwide elevation determination capability are invited to participate in this nationwide partnership and to learn more about the geodetic infrastructure which is a vital component of viable spatial data for many disciplines, including the geosciences.
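
    The geoid model enters through the standard relation between ellipsoid height h (from GNSS), geoid undulation N, and orthometric height H, namely H ≈ h − N; a one-function sketch (the example values are illustrative only):

      # Standard conversion from GNSS ellipsoid height to orthometric
      # height: H = h - N, with geoid undulation N from a geoid model.
      def orthometric_height(h_ellipsoid_m: float, n_geoid_m: float) -> float:
          return h_ellipsoid_m - n_geoid_m

      # Example values only: h = 100.00 m, N = -28.50 m  ->  H = 128.50 m
      print(orthometric_height(100.00, -28.50))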

  19. The new NHGRI-EBI Catalog of published genome-wide association studies (GWAS Catalog).

    PubMed

    MacArthur, Jacqueline; Bowler, Emily; Cerezo, Maria; Gil, Laurent; Hall, Peggy; Hastings, Emma; Junkins, Heather; McMahon, Aoife; Milano, Annalisa; Morales, Joannella; Pendlington, Zoe May; Welter, Danielle; Burdett, Tony; Hindorff, Lucia; Flicek, Paul; Cunningham, Fiona; Parkinson, Helen

    2017-01-04

    The NHGRI-EBI GWAS Catalog has provided data from published genome-wide association studies since 2008. In 2015, the database was redesigned and relocated to EMBL-EBI. The new infrastructure includes a new graphical user interface (www.ebi.ac.uk/gwas/), ontology supported search functionality and an improved curation interface. These developments have improved the data release frequency by increasing automation of curation and providing scaling improvements. The range of available Catalog data has also been extended with structured ancestry and recruitment information added for all studies. The infrastructure improvements also support scaling for larger arrays, exome and sequencing studies, allowing the Catalog to adapt to the needs of evolving study design, genotyping technologies and user needs in the future. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. Citric Acid Alternative to Nitric Acid Passivation

    NASA Technical Reports Server (NTRS)

    Lewis, Pattie L. (Compiler)

    2013-01-01

    The Ground Systems Development and Operations (GSDO) Program at NASA John F. Kennedy Space Center (KSC) has the primary objective of modernizing and transforming the launch and range complex at KSC to benefit current and future NASA programs along with other emerging users. Described as the launch support and infrastructure modernization program in the NASA Authorization Act of 2010, the GSDO Program will develop and implement shared infrastructure and process improvements to provide more flexible, affordable, and responsive capabilities to a multi-user community. In support of the GSDO Program, the purpose of this project is to demonstrate and validate citric acid as a passivation agent for stainless steel. Successful completion of this project will result in citric acid being qualified for use as an environmentally preferable alternative to nitric acid for passivation of stainless steel alloys in NASA and DoD applications.

  1. Strategic behaviors and governance challenges in social-ecological systems

    NASA Astrophysics Data System (ADS)

    Muneepeerakul, Rachata; Anderies, John M.

    2017-08-01

    The resource management and environmental policy literature focuses on devising regulations and incentive structures to achieve desirable goals. It often presumes the existence of public infrastructure that actualizes these incentives and regulations through a process loosely referred to as `governance.' In many cases, it is not clear if and how such governance infrastructure can be created and supported. Here, we take a complex systems view in which `governance' is an emergent phenomenon generated by interactions between social, economic, and environmental (both built and natural) factors. We present a framework and formal stylized model to explore under what circumstances stable governance structures may emerge endogenously in coupled infrastructure systems comprising shared natural, social, and built infrastructures of which social-ecological systems are specific examples. The model allows us to derive general conditions for a sustainable coupled infrastructure system in which critical infrastructure (e.g., canals) is provided by a governing entity that enables resource users (e.g., farmers) to produce outputs from natural infrastructure (e.g., water) to meet their needs while supporting the governing entity.
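
    To make the modeling style concrete, here is a purely illustrative toy (not the authors' model): a logistic resource stock is harvested by users, a tax on the harvest funds a public-infrastructure state that sets harvesting efficiency, and the pair is integrated with forward Euler. All parameters are invented assumptions.

    ```python
    # Toy coupled-infrastructure sketch (NOT the authors' model): resource
    # stock R is harvested by users; a fraction `tax` of the harvest funds
    # public infrastructure I, whose state sets harvesting efficiency.
    # Forward-Euler integration; every parameter is an invented assumption.

    def simulate(steps=2000, dt=0.01, r=0.5, K=100.0, tax=0.3,
                 efficiency=0.02, decay=0.1, invest=0.5):
        R, I = 50.0, 1.0
        for _ in range(steps):
            harvest = efficiency * I * R             # output enabled by infrastructure
            dR = r * R * (1 - R / K) - harvest       # logistic regrowth minus harvest
            dI = invest * tax * harvest - decay * I  # funded upkeep minus depreciation
            R, I = max(R + dR * dt, 0.0), max(I + dI * dt, 0.0)
        return R, I

    R, I = simulate()
    print(f"end state: resource={R:.1f}, infrastructure={I:.1f}")
    ```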

  2. Universal Design and the Smart Home.

    PubMed

    Pennick, Tim; Hessey, Sue; Craigie, Roland

    2016-01-01

    The related concepts of Universal Design, Inclusive Design, and Design For All, all recognise that no one solution will fit the requirements of every possible user. This paper considers the extent to which current developments in smart home technology can help to reduce the numbers of users for whom mainstream technology is not sufficiently inclusive, proposing a flexible approach to user interface (UI) implementation focussed on the capabilities of the user. This implies development of the concepts underlying Universal Design to include the development of a flexible inclusive support infrastructure, servicing the requirements of individual users and their personalised user interface devices.

  3. A Modular Repository-based Infrastructure for Simulation Model Storage and Execution Support in the Context of In Silico Oncology and In Silico Medicine.

    PubMed

    Christodoulou, Nikolaos A; Tousert, Nikolaos E; Georgiadi, Eleni Ch; Argyri, Katerina D; Misichroni, Fay D; Stamatakos, Georgios S

    2016-01-01

    The plethora of available disease prediction models and the ongoing process of their application in clinical practice - following their clinical validation - have created new needs regarding their efficient handling and exploitation. Consolidation of software implementations, descriptive information, and supportive tools in a single place, offering persistent storage as well as proper management of execution results, is a priority, especially with respect to the needs of large healthcare providers. At the same time, modelers should be able to access these storage facilities under special rights, in order to upgrade and maintain their work. In addition, end users should be provided with all the necessary interfaces for model execution and effortless result retrieval. We therefore propose a software infrastructure, based on a tool, model and data repository, that handles the storage of models and pertinent execution-related data, along with functionalities for execution management, communication with third-party applications, user-friendly interfaces for accessing and using the infrastructure with minimal effort, and basic security features.
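
    A minimal in-memory sketch of the repository pattern described here - persistent model storage, execution management, and result retrieval - follows; the class and method names are illustrative assumptions, not the paper's API.

    ```python
    # Minimal in-memory sketch of a model repository with execution-result
    # management, loosely following the ideas in the abstract. Names and the
    # toy "model" are invented for illustration.
    import uuid
    from typing import Any, Callable, Dict

    class ModelRepository:
        def __init__(self):
            self._models: Dict[str, Dict[str, Any]] = {}
            self._results: Dict[str, Any] = {}

        def register(self, name: str, version: str, run: Callable, meta: dict) -> str:
            """Store a model implementation alongside descriptive information."""
            model_id = f"{name}:{version}"
            self._models[model_id] = {"run": run, "meta": meta}
            return model_id

        def execute(self, model_id: str, params: dict) -> str:
            """Run a stored model and persist the result under an execution id."""
            exec_id = str(uuid.uuid4())
            self._results[exec_id] = self._models[model_id]["run"](**params)
            return exec_id

        def result(self, exec_id: str) -> Any:
            return self._results[exec_id]

    repo = ModelRepository()
    mid = repo.register("tumor-growth", "1.0",
                        run=lambda volume, rate, days: volume * (1 + rate) ** days,
                        meta={"discipline": "in silico oncology"})
    eid = repo.execute(mid, {"volume": 1.0, "rate": 0.02, "days": 30})
    print(repo.result(eid))  # ~1.81
    ```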

  4. A Modular Repository-based Infrastructure for Simulation Model Storage and Execution Support in the Context of In Silico Oncology and In Silico Medicine

    PubMed Central

    Christodoulou, Nikolaos A.; Tousert, Nikolaos E.; Georgiadi, Eleni Ch.; Argyri, Katerina D.; Misichroni, Fay D.; Stamatakos, Georgios S.

    2016-01-01

    The plethora of available disease prediction models and the ongoing process of their application in clinical practice – following their clinical validation – have created new needs regarding their efficient handling and exploitation. Consolidation of software implementations, descriptive information, and supportive tools in a single place, offering persistent storage as well as proper management of execution results, is a priority, especially with respect to the needs of large healthcare providers. At the same time, modelers should be able to access these storage facilities under special rights, in order to upgrade and maintain their work. In addition, end users should be provided with all the necessary interfaces for model execution and effortless result retrieval. We therefore propose a software infrastructure, based on a tool, model and data repository, that handles the storage of models and pertinent execution-related data, along with functionalities for execution management, communication with third-party applications, user-friendly interfaces for accessing and using the infrastructure with minimal effort, and basic security features. PMID:27812280

  5. SCIDIP-ES - A science data e-infrastructure for preservation of earth science data

    NASA Astrophysics Data System (ADS)

    Riddick, Andrew; Glaves, Helen; Marelli, Fulvio; Albani, Mirko; Tona, Calogera; Marketakis, Yannis; Tzitzikas, Yannis; Guarino, Raffaele; Giaretta, David; Di Giammatteo, Ugo

    2013-04-01

    The capability for long-term preservation of earth science data is a key requirement to support ongoing research and collaboration within and between many earth science disciplines. A number of critically important current research directions (e.g. understanding climate change, and ensuring the sustainability of natural resources) rely on the preservation of data, often collected over several decades, in a form in which it can be accessed and used easily. In many branches of the earth sciences the capture of key observational data may be difficult or impossible to repeat. For example, a specific geological exposure or subsurface borehole may be only temporarily available, and deriving earth observation data from a particular satellite mission is clearly often a unique opportunity. At the same time such unrepeatable observations may be a critical input to environmental, economic and political decision making. Another key driver for strategic long-term data preservation is that key research challenges (such as those described above) frequently require cross-disciplinary research utilising raw and interpreted data from a number of earth science disciplines. Effective data preservation strategies can support this requirement for interoperability, and thereby stimulate scientific innovation. The SCIDIP-ES project (EC FP7 grant agreement no. 283401) seeks to address these and other data preservation challenges by developing a Europe-wide e-infrastructure for long-term data preservation comprising appropriate software tools and infrastructure services to enable and promote long-term preservation of earth science data. Because we define preservation in terms of continued usability of the digitally encoded information, the generic infrastructure services will allow a wide variety of data to be made usable by researchers from many different domains. This approach will enable the cost of long-term usability across disciplines to be shared, supporting the creation of strong business cases for the long-term support of that data. This paper will describe our progress to date, including the results of community engagement and user consultation exercises designed to specify and scope the required tools and services. Our user engagement methodology, ensuring that we are capturing the views of a representative sample of institutional users, will be described. Key results of an in-depth user requirements exercise, and also the conclusions from a survey of existing technologies and policies for earth science data preservation involving almost five hundred respondents across Europe and beyond, will also be outlined. A key aim of the project is also to create harmonised data preservation and access policies for earth science data in Europe, taking into account the requirements of relevant earth science data users and archive providers across Europe and liaising appropriately with other European e-infrastructure projects; progress on this will also be explained.

  6. FermiGrid—experience and future plans

    NASA Astrophysics Data System (ADS)

    Chadwick, K.; Berman, E.; Canal, P.; Hesselroth, T.; Garzoglio, G.; Levshina, T.; Sergeev, V.; Sfiligoi, I.; Sharma, N.; Timm, S.; Yocum, D. R.

    2008-07-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid (OSG) and the Worldwide LHC Computing Grid Collaboration (WLCG). FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the OSG, EGEE, and the WLCG. Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure - the successes and the problems.
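
    One of the core grid services named above is grid user mapping. As a concrete illustration, the sketch below parses the classic Globus grid-mapfile format (a quoted certificate DN followed by one or more comma-separated local accounts); the file contents are invented and the parser is deliberately simplified.

    ```python
    # Sketch of the classic Globus grid-mapfile lookup that underpins grid
    # user mapping: each line maps a quoted certificate DN to one or more
    # local accounts. DNs and accounts below are invented examples.
    import re

    MAPFILE = '''
    "/DC=org/DC=osg/O=Fermilab/CN=Alice Example" uscms01
    "/DC=org/DC=osg/O=Fermilab/CN=Bob Example" fermilab,nova
    '''

    def parse_gridmap(text):
        mapping = {}
        for line in text.strip().splitlines():
            m = re.match(r'\s*"([^"]+)"\s+(\S+)', line)
            if m:
                mapping[m.group(1)] = m.group(2).split(",")
        return mapping

    gridmap = parse_gridmap(MAPFILE)
    print(gridmap["/DC=org/DC=osg/O=Fermilab/CN=Alice Example"])  # ['uscms01']
    ```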

  7. FermiGrid - experience and future plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chadwick, K.; Berman, E.; Canal, P.

    2007-09-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and movement of work (jobs, data) between the Campus Grid and National Grids such as Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site-wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure: the successes and the problems.

  8. A twenty-first century perspective. [NASA space communication infrastructure to support space missions]

    NASA Technical Reports Server (NTRS)

    Aller, Robert O.; Miller, Albert

    1990-01-01

    The status of the NASA assets which are operated by the Office of Space Operations is briefly reviewed. These assets include the ground network, the space network, and communications and data handling facilities. The current plans for each element are examined, and a projection of each is made to meet the user needs in the 21st century. The following factors are noted: increasingly responsive support will be required by the users; operational support concepts must be cost-effective to serve future missions; and a high degree of system reliability and availability will be required to support manned exploration and increasingly complex missions.

  9. Learning from LANCE: Developing a Web Portal Infrastructure for NASA Earth Science Data (Invited)

    NASA Astrophysics Data System (ADS)

    Murphy, K. J.

    2013-12-01

    NASA developed the Land Atmosphere Near real-time Capability for EOS (LANCE) in response to a growing need for timely satellite observations by applications users, operational agencies and researchers. EOS capabilities originally intended for long-term Earth science research were modified to deliver satellite data products with latencies sufficient to meet the needs of the NRT user communities. LANCE products are primarily distributed as HDF data files for analysis; however, novel capabilities for distributing NRT imagery for visualization have been added, which have expanded the user base. Additionally, systems to convert data into information, such as the MODIS hotspot/active fire data, are provided through the Fire Information for Resource Management System (FIRMS). LANCE services include: FTP/HTTP file distribution, Rapid Response (RR), Worldview, Global Imagery Browse Services (GIBS) and FIRMS. This paper discusses how NASA has developed services specifically for LANCE and is taking the lessons learned through these activities to develop an Earthdata Web Infrastructure. This infrastructure is being used as a platform to support the development of data portals that address specific science issues for much of EOSDIS data.
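
    To illustrate how GIBS-style imagery services are consumed, the sketch below fetches a single tile via a WMTS REST URL. The layer name and URL template follow the public GIBS conventions as best recalled here and should be treated as assumptions, not a verified endpoint.

    ```python
    # Hedged sketch: fetching one NRT imagery tile from GIBS over its WMTS
    # REST pattern. The URL template, layer, matrix set and tile indices are
    # assumptions for illustration; consult the GIBS documentation for the
    # authoritative form.
    import urllib.request

    TEMPLATE = ("https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/"
                "{layer}/default/{date}/{matrix_set}/{z}/{y}/{x}.jpg")

    url = TEMPLATE.format(layer="MODIS_Terra_CorrectedReflectance_TrueColor",
                          date="2013-12-01", matrix_set="250m", z=2, y=1, x=2)

    with urllib.request.urlopen(url) as resp:
        tile = resp.read()
    print(f"fetched {len(tile)} bytes from {url}")
    ```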

  10. An interoperable research data infrastructure to support climate service development

    NASA Astrophysics Data System (ADS)

    De Filippis, Tiziana; Rocchi, Leandro; Rapisardi, Elena

    2018-02-01

    Accessibility, availability, re-use and re-distribution of scientific data are prerequisites to build climate services across Europe. From this perspective the Institute of Biometeorology of the National Research Council (IBIMET-CNR), aiming to contribute to the sharing and integration of research data, has developed a research data infrastructure to support the scientific activities conducted in several national and international research projects. The proposed architecture uses open-source tools to ensure sustainability in the development and deployment of Web applications with geographic features and data analysis functionalities. The spatial data infrastructure components are organized in a typical client-server architecture and interact across the whole workflow, from data download at the provider to the presentation of results to end users. The availability of structured raw data as customized information paves the way for building climate service purveyors to support adaptation, mitigation and risk management at different scales.

    This work is a bottom-up collaborative initiative between different IBIMET-CNR research units (e.g. geomatics and information and communication technology - ICT; agricultural sustainability; international cooperation in least developed countries - LDCs) that embrace the same approach for sharing and re-use of research data and informatics solutions based on co-design, co-development and co-evaluation among different actors to support the production and application of climate services. During the development phase of Web applications, different users (internal and external) were involved in the whole process so as to better define user needs and suggest the implementation of specific custom functionalities. Indeed, the services are addressed to researchers, academics, public institutions and agencies - practitioners who can access data and findings from recent research in the field of applied meteorology and climatology.
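
    A minimal sketch of the kind of web service described here - exposing station data with geographic features as GeoJSON - is given below. The abstract does not name a framework, so Flask is an assumed stand-in, and the endpoint path and fields are invented.

    ```python
    # Minimal sketch (assumed stack: Flask) of a web endpoint serving
    # research data with geographic features as GeoJSON, in the spirit of
    # the infrastructure described above. All names and values are invented.
    from flask import Flask, jsonify

    app = Flask(__name__)

    STATIONS = [{"id": "fi-001", "lon": 11.25, "lat": 43.77, "t_mean_c": 15.2}]

    @app.route("/api/stations")
    def stations():
        features = [{
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [s["lon"], s["lat"]]},
            "properties": {"id": s["id"], "t_mean_c": s["t_mean_c"]},
        } for s in STATIONS]
        return jsonify({"type": "FeatureCollection", "features": features})

    if __name__ == "__main__":
        app.run()  # GET /api/stations returns a GeoJSON FeatureCollection
    ```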

  11. Scaling the PuNDIT project for wide area deployments

    NASA Astrophysics Data System (ADS)

    McKee, Shawn; Batista, Jorge; Carcassi, Gabriele; Dovrolis, Constantine; Lee, Danny

    2017-10-01

    In today’s world of distributed scientific collaborations, there are many challenges to providing reliable inter-domain network infrastructure. Network operators use a combination of active monitoring and trouble tickets to detect problems, but these are often ineffective at identifying issues that impact wide-area network users. Additionally, these approaches do not scale to wide area inter-domain networks due to unavailability of data from all the domains along typical network paths. The Pythia Network Diagnostic InfrasTructure (PuNDIT) project aims to create a scalable infrastructure for automating the detection and localization of problems across these networks. The project goal is to gather and analyze metrics from existing perfSONAR monitoring infrastructures to identify the signatures of possible problems, locate affected network links, and report them to the user in an intuitive fashion. Simply put, PuNDIT seeks to convert complex network metrics into easily understood diagnoses in an automated manner. We present our progress in creating the PuNDIT system and our status in developing, testing and deploying PuNDIT. We report on the project progress to-date, describe the current implementation architecture and demonstrate some of the various user interfaces it will support. We close by discussing the remaining challenges and next steps and where we see the project going in the future.
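
    A toy version of the metrics-to-diagnosis idea follows: a baseline-plus-threshold rule over perfSONAR-style delay and loss series yields a plain-language diagnosis. The detection rule and all numbers are invented and far simpler than PuNDIT's actual analysis.

    ```python
    # Illustrative sketch of turning raw perfSONAR-style metrics into a
    # human-readable diagnosis, as PuNDIT aims to do. The baseline+threshold
    # rule and the sample data are invented for illustration.
    from statistics import mean, stdev

    def diagnose(link, delays_ms, loss_rates):
        baseline, spread = mean(delays_ms[:10]), stdev(delays_ms[:10])
        problems = []
        if mean(delays_ms[-5:]) > baseline + 3 * spread:
            problems.append("sustained latency increase (possible congestion)")
        if max(loss_rates[-5:]) > 0.01:
            problems.append("packet loss above 1% (possible faulty link)")
        return f"{link}: " + ("; ".join(problems) if problems else "healthy")

    delays = [10.1, 10.3, 9.9, 10.0, 10.2, 10.1, 10.0, 10.2, 9.8, 10.1,
              10.0, 10.1, 24.7, 25.3, 26.1, 25.8, 24.9]
    loss = [0.0] * 12 + [0.02, 0.03, 0.02, 0.02, 0.03]
    print(diagnose("site-A -> site-B", delays, loss))
    ```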

  12. E-DECIDER Decision Support Gateway For Earthquake Disaster Response

    NASA Astrophysics Data System (ADS)

    Glasscoe, M. T.; Stough, T. M.; Parker, J. W.; Burl, M. C.; Donnellan, A.; Blom, R. G.; Pierce, M. E.; Wang, J.; Ma, Y.; Rundle, J. B.; Yoder, M. R.

    2013-12-01

    Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) is a NASA-funded project developing capabilities for decision-making utilizing remote sensing data and modeling software in order to provide decision support for earthquake disaster management and response. E-DECIDER incorporates earthquake forecasting methodology and geophysical modeling tools developed through NASA's QuakeSim project in order to produce standards-compliant map data products to aid in decision-making following an earthquake. Remote sensing and geodetic data, in conjunction with modeling and forecasting tools, help provide both long-term planning information for disaster management decision makers as well as short-term information following earthquake events (i.e. identifying areas where the greatest deformation and damage has occurred and emergency services may need to be focused). E-DECIDER utilizes a service-based GIS model for its cyber-infrastructure in order to produce standards-compliant products for different user types with multiple service protocols (such as KML, WMS, WFS, and WCS). The goal is to make complex GIS processing and domain-specific analysis tools more accessible to general users through software services as well as provide system sustainability through infrastructure services. The system comprises several components, which include: a GeoServer for thematic mapping and data distribution, a geospatial database for storage and spatial analysis, web service APIs, including simple-to-use REST APIs for complex GIS functionalities, and geoprocessing tools including python scripts to produce standards-compliant data products. These are then served to the E-DECIDER decision support gateway (http://e-decider.org), the E-DECIDER mobile interface, and to the Department of Homeland Security decision support middleware UICDS (Unified Incident Command and Decision Support). The E-DECIDER decision support gateway features a web interface that delivers map data products including deformation modeling results (slope change and strain magnitude) and aftershock forecasts, with remote sensing change detection results under development. These products are event triggered (from the USGS earthquake feed) and will be posted to event feeds on the E-DECIDER webpage and accessible via the mobile interface and UICDS. E-DECIDER also features a KML service that provides infrastructure information from the FEMA HAZUS database through UICDS and the mobile interface. The back-end GIS service architecture and front-end gateway components form a decision support system that is designed for ease-of-use and extensibility for end-users.
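
    As a concrete, heavily simplified example of a standards-compliant product of the kind served to the gateway and UICDS, the sketch below emits a KML placemark for a modeled deformation result using only the Python standard library; the field names and values are illustrative assumptions.

    ```python
    # Minimal sketch of emitting a KML placemark for a modeled deformation
    # result. The placemark name, coordinates and slope-change value are
    # invented for illustration.
    import xml.etree.ElementTree as ET

    KML_NS = "http://www.opengis.net/kml/2.2"

    def deformation_placemark(name, lon, lat, slope_change):
        ET.register_namespace("", KML_NS)
        kml = ET.Element(f"{{{KML_NS}}}kml")
        doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
        pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
        ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
        ET.SubElement(pm, f"{{{KML_NS}}}description").text = (
            f"slope change: {slope_change:.4f}")
        point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
        ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},0"
        return ET.tostring(kml, encoding="unicode")

    print(deformation_placemark("M7.2 aftershock zone", -118.15, 34.20, 0.0123))
    ```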

  13. Converged Infrastructure for Emerging Regions - A Research Agenda

    NASA Astrophysics Data System (ADS)

    Chevrollier, Nicolas; Zidbeck, Juha; Ntlatlapa, Ntsibane; Simsek, Burak; Marikar, Achim

    In remote parts of Africa, the lack of energy supply, wired infrastructure and trained personnel, together with tight limits on OPEX and CAPEX, imposes stringent requirements on the network building blocks that support the communication infrastructure. Consequently, in this promising but untapped market, the research aims at designing and implementing energy-efficient, robust, reliable and affordable wide-area heterogeneous wireless mesh networks to connect geographically very large areas in a challenged environment. This paper proposes a solution aimed at enhancing the usability of Internet services in the harsh target environment, in particular the reliability of these services as experienced by end-users.

  14. Commercial Maritime Industry: Updated Information on Federal Assessments

    DOT National Transportation Integrated Search

    1999-09-16

    One of the means by which the federal government generates revenue to support America's maritime infrastructure is to enable federal agencies to levy assessments - user fees, taxes, and other charges - upon the commercial maritime industry. As of the...

  15. 802.11 Wireless Infrastructure To Enhance Medical Response to Disasters

    PubMed Central

    Arisoylu, Mustafa; Mishra, Rajesh; Rao, Ramesh; Lenert, Leslie A.

    2005-01-01

    802.11 (WiFi) is a well-established network communications protocol that has wide applicability in civil infrastructure. This paper describes research that explores the design of 802.11 networks enhanced to support data communications in disaster environments. The focus of these efforts is to create network infrastructure to support operations by Metropolitan Medical Response System (MMRS) units and Federally-sponsored regional teams that respond to mass casualty events caused by a terrorist attack with chemical, biological, nuclear or radiological weapons or by a hazardous materials spill. In this paper, we describe an advanced WiFi-based network architecture designed to meet the needs of MMRS operations. This architecture combines a Wireless Distribution System, for peer-to-peer multihop connectivity between access points, with flexible and shared access to multiple cellular backhauls, for robust connectivity to the Internet. The architecture offers a high-bandwidth data communications infrastructure that can penetrate into buildings and structures while also supporting commercial off-the-shelf end-user equipment such as PDAs. It is self-configuring and self-healing in the event of a loss of a portion of the infrastructure. Testing of prototype units is ongoing. PMID:16778990

  16. International Symposium on Grids and Clouds (ISGC) 2016

    NASA Astrophysics Data System (ADS)

    The International Symposium on Grids and Clouds (ISGC) 2016 will be held at Academia Sinica in Taipei, Taiwan from 13-18 March 2016, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). The theme of ISGC 2016 focuses on “Ubiquitous e-infrastructures and Applications”. Contemporary research is impossible without a strong IT component - researchers rely on the existence of stable and widely available e-infrastructures and their higher-level functions and properties. As a result of these expectations, e-infrastructures are becoming ubiquitous, providing an environment that supports large-scale collaborations that deal with global challenges, as well as smaller and temporary research communities focusing on particular scientific problems. To support those diversified communities and their needs, the e-infrastructures themselves are becoming more layered and multifaceted, supporting larger groups of applications. Following on from last year's conference, ISGC 2016 continues its aim to bring together users and application developers with those responsible for the development and operation of multi-purpose ubiquitous e-infrastructures. Topics of discussion include Physics (including HEP) and Engineering Applications, Biomedicine & Life Sciences Applications, Earth & Environmental Sciences & Biodiversity Applications, Humanities, Arts, and Social Sciences (HASS) Applications, Virtual Research Environment (including Middleware, tools, services, workflow, etc.), Data Management, Big Data, Networking & Security, Infrastructure & Operations, Infrastructure Clouds and Virtualisation, Interoperability, Business Models & Sustainability, Highly Distributed Computing Systems, and High Performance & Technical Computing (HPTC), etc.

  17. A Team Approach to Managing Technology: Despite Our Differences--We Had To Make IT Work!

    ERIC Educational Resources Information Center

    Giuliani, Peter R.

    Franklin University, a private urban university with 4500 students located in Columbus, Ohio, completed the initial phase of a long-range, campus-wide technology plan. The plan creates a well supported and managed computing and communications infrastructure focusing on: user support systems; classrooms and laboratories; offices; outside access;…

  18. Multi-Sector Sustainability Browser (MSSB) User Manual: A Decision Support Tool (DST) for Supporting Sustainability Efforts in Four Areas - Land Use, Transportation, Buildings and Infrastructure, and Materials Management

    EPA Science Inventory

    EPA’s Sustainable and Healthy Communities (SHC) Research Program is developing methodologies, resources, and tools to assist community members and local decision makers in implementing policy choices that facilitate sustainable approaches in managing their resources affecti...

  19. ESA SSA Space Weather Services Supporting Space Surveillance and Tracking

    NASA Astrophysics Data System (ADS)

    Luntama, Juha-Pekka; Glover, Alexi; Hilgers, Alain; Fletcher, Emmet

    2012-07-01

    The ESA Space Situational Awareness (SSA) Preparatory Programme was started in 2009. The objective of the programme is to support independent European utilisation of, and access to, space research and services. This is achieved by providing timely and high-quality data, information, services and knowledge regarding the environment, the threats and the sustainable exploitation of the outer space surrounding the planet Earth. SSA serves the implementation of the strategic missions of the European Space Policy, based on the peaceful use of outer space by all states, by supporting the autonomous capacity to securely and safely operate the critical European space infrastructures. The Space Weather (SWE) Segment of the SSA will provide user services related to the monitoring of the Sun, the solar wind, the radiation belts, the magnetosphere and the ionosphere. These services will include near-real-time information and forecasts about the characteristics of the space environment and predictions of space weather impacts on sensitive spaceborne and ground-based infrastructure. The SSA SWE system will also include a permanent database for analysis, model development and scientific research. These services will support a wide variety of user domains including spacecraft designers, spacecraft operators, human space flight, users and operators of transionospheric radio links, and the space weather research community. The precursor SWE services began to be established in 2010. This presentation provides an overview of the ESA SSA SWE services focused on supporting Space Surveillance and Tracking users. These services include estimates of atmospheric drag, as well as archives and forecasts of geomagnetic and solar indices. In addition, the SSA SWE system will provide nowcasts of the ionospheric group delay to support mitigation of the ionospheric impact on radar signals. The paper will discuss the user requirements for the services, the data requirements and the foreseen development needs for the ESA SSA SWE system before the full service capability is available.
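
    The ionospheric group delay mentioned above has a standard first-order form: delay ≈ 40.3 · TEC / f², with the total electron content TEC in electrons per square metre and the frequency f in Hz. The sketch below evaluates it for two illustrative frequencies; the TEC value is an invented example.

    ```python
    # First-order ionospheric group delay, the quantity such nowcasts help
    # mitigate for radar signals: delay_m = 40.3 * TEC / f**2, with TEC in
    # electrons/m^2 and f in Hz (standard approximation; example values mine).

    def group_delay_m(tec_el_per_m2: float, freq_hz: float) -> float:
        return 40.3 * tec_el_per_m2 / freq_hz ** 2

    TECU = 1e16  # 1 TEC unit = 1e16 electrons/m^2
    for f_hz, label in [(435e6, "UHF radar"), (1.57542e9, "GPS L1")]:
        d = group_delay_m(20 * TECU, f_hz)  # a moderate 20 TECU ionosphere
        print(f"{label}: {d:.2f} m extra path at 20 TECU")
    ```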

  20. Solar research with ALMA: Czech node of European ARC as your user-support infrastructure

    NASA Astrophysics Data System (ADS)

    Bárta, M.; Skokić, I.; Brajša, R.; Czech ARC Node Team

    2017-08-01

    ALMA (Atacama Large Millimeter/sub-millimeter Array) is by far the largest project among current ground-based observational facilities in astronomy and astrophysics. It is built and operated in worldwide cooperation (ESO, NRAO, NAOJ) at an altitude of 5000 m in the desert of Atacama, Chile. Because of its unprecedented capabilities, ALMA is considered a cutting-edge research device in astrophysics, with potential for many breakthrough discoveries in the next decade and beyond. Although it is not an exclusively solar-research-dedicated instrument, science observations of the Sun are now possible and recently started in observing Cycle 4 (2016-2017). In order to facilitate access to this top-class, but at the same time very complicated, device for researchers lacking technical expertise, a network of three ALMA Regional Centers (ARCs) has been formed in Europe, North America, and East Asia as a user-support infrastructure and interface between the observatory and the user community. After a short introduction to ALMA, the roles of the ARCs and hints on how to utilize their services will be presented, with emphasis on the specific (and in Europe unique) mission of the Czech ARC node in solar research with ALMA. Finally, peculiarities of solar observations that demanded the development of the specific Solar ALMA Observing Modes will be discussed.

  1. Evolution of user analysis on the grid in ATLAS

    NASA Astrophysics Data System (ADS)

    Dewhurst, A.; Legger, F.; ATLAS Collaboration

    2017-10-01

    More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and system capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Typical user workflows on the grid, and their associated metrics, are discussed. Measurements of user job performance and typical requirements are also shown.

  2. Kennedy Space Center: Apollo to Multi-User Spaceport

    NASA Technical Reports Server (NTRS)

    Weber, Philip J.; Kanner, Howard S.

    2017-01-01

    NASA Kennedy Space Center (KSC) was established as the gateway to exploring beyond Earth. Since the establishment of KSC in December 1963, the Center has been critical in the execution of the United States of America's bold mission to send astronauts beyond the grasp of terra firma. On May 25, 1961, a few weeks after a Soviet cosmonaut became the first person to fly in space, President John F. Kennedy laid out the ambitious goal of landing a man on the moon and returning him safely to the Earth by the end of the decade. The resultant Apollo program was a massive endeavor, driven by the Cold War Space Race and supported with a robust budget. The Apollo program consisted of 18 launches from newly developed infrastructure, including 12 manned missions and six lunar landings, ending with Apollo 17, which launched on December 7, 1972. Continuing to use this infrastructure, the Skylab program launched four missions. During the Skylab program, KSC infrastructure was redesigned to meet the needs of the Space Shuttle program, which launched its first vehicle (STS-1) on April 12, 1981. The Space Shuttle required significant modifications to the Apollo launch pads and assembly facilities, as well as new infrastructure such as the Orbiter and Payload Processing Facilities and the Shuttle Landing Facility. The Space Shuttle was a workhorse that supported many satellite deployments, but was key to the construction and maintenance of the International Space Station, which required additional facilities at KSC to support processing of the flight hardware. After the turn of the millennium, United States policymakers searched for new ways to reduce the cost of space exploration. The Constellation Program was initiated in 2005 with the goal of providing a crewed lunar landing on a much smaller budget. The very successful Space Shuttle made its last launch on July 8, 2011, after 135 missions. In the subsequent years, KSC has continued to evolve, and this paper will address past and future efforts to transform the KSC Apollo and Space Shuttle heritage infrastructure into a more versatile, multi-user spaceport. The paper will also discuss the US Congressional and NASA initiatives for developing and supporting multiple commercial partners while simultaneously supporting NASA's human exploration initiative, consisting of the Space Launch System (SLS), the Orion spacecraft and the associated ground launch systems. In addition, the paper explains, with examples, the approach for NASA KSC to leverage new technologies and innovative capabilities developed to reduce the cost to individual users.

  3. NHERI: Advancing the Research Infrastructure of the Multi-Hazard Community

    NASA Astrophysics Data System (ADS)

    Blain, C. A.; Ramirez, J. A.; Bobet, A.; Browning, J.; Edge, B.; Holmes, W.; Johnson, D.; Robertson, I.; Smith, T.; Zuo, D.

    2017-12-01

    The Natural Hazards Engineering Research Infrastructure (NHERI), supported by the National Science Foundation (NSF), is a distributed, multi-user national facility that provides the natural hazards research community with access to an advanced research infrastructure. Components of NHERI comprise a Network Coordination Office (NCO), a cloud-based cyberinfrastructure (DesignSafe-CI), a computational modeling and simulation center (SimCenter), and eight Experimental Facilities (EFs), including a post-disaster, rapid response research facility (RAPID). Ultimately, NHERI enables researchers to explore and test ground-breaking concepts to protect homes, businesses and infrastructure lifelines from earthquakes, windstorms, tsunamis, and storm surge, enabling innovations that help prevent natural hazards from becoming societal disasters. When coupled with education and community outreach, NHERI will facilitate research and educational advances that contribute knowledge and innovation toward improving the resiliency of the nation's civil infrastructure to withstand natural hazards. The unique capabilities of, and Year 1 coordinating activities between, NHERI's DesignSafe-CI, the SimCenter, and the individual EFs will be presented. Basic descriptions of each component are also found at https://www.designsafe-ci.org/facilities/. Also discussed are the various roles of the NCO in leading development of a 5-year multi-hazard science plan, coordinating facility scheduling and fostering the sharing of technical knowledge and best practices, leading education and outreach programs such as the recent Summer Institute and the multi-facility REU program, ensuring a platform for technology transfer to practicing engineers, and developing strategic national and international partnerships to support a diverse multi-hazard research and user community.

  4. Open Data in Global Environmental Research: The Belmont Forum's Open Data Survey.

    PubMed

    Schmidt, Birgit; Gemeinholzer, Birgit; Treloar, Andrew

    2016-01-01

    This paper presents the findings of the Belmont Forum's survey on Open Data which targeted the global environmental research and data infrastructure community. It highlights users' perceptions of the term "open data", expectations of infrastructure functionalities, and barriers and enablers for the sharing of data. A wide range of good practice examples was pointed out by the respondents which demonstrates a substantial uptake of data sharing through e-infrastructures and a further need for enhancement and consolidation. Among all policy responses, funder policies seem to be the most important motivator. This supports the conclusion that stronger mandates will strengthen the case for data sharing.

  5. The RISC-V Instruction Set Manual. Volume 1: User-Level ISA, Version 2.0

    DTIC Science & Technology

    2014-05-06

    x86 architecture (or for that matter, almost every other architecture) is not well supported in the mobile space, though both Intel and ARM are...certain domains, and has to be built for others. • Commercial ISAs come and go. Previous research infrastructures have been built around commercial ISAs...more effort than we had planned at the outset. We have now invested considerable effort in building up the RISC-V ISA infrastructure, including

  6. Requirements for plug and play information infrastructure frameworks and architectures to enable virtual enterprises

    NASA Astrophysics Data System (ADS)

    Bolton, Richard W.; Dewey, Allen; Horstmann, Paul W.; Laurentiev, John

    1997-01-01

    This paper examines the role virtual enterprises will have in supporting future business engagements and the resulting technology requirements. Two representative end-user scenarios are proposed that define the requirements for 'plug-and-play' information infrastructure frameworks and architectures necessary to enable 'virtual enterprises' in US manufacturing industries. The scenarios provide a high-level 'needs analysis' for identifying key technologies, defining a reference architecture, and developing compliant reference implementations. Virtual enterprises are short-term consortia or alliances of companies formed to address fast-changing opportunities. Members of a virtual enterprise carry out their tasks as if they all worked for a single organization under 'one roof', using 'plug-and-play' information infrastructure frameworks and architectures to access and manage all information needed to support the product cycle. 'Plug-and-play' information infrastructure frameworks and architectures are required to enhance collaboration between companies working together on different aspects of a manufacturing process. This new form of collaborative computing will decrease cycle time and increase responsiveness to change.

  7. Defense Technology Plan

    DTIC Science & Technology

    1994-09-01

    implementation of the services necessary to support transparent "information pull" operation of decision support systems. This infrastructure will be implemented...technology. Some aspects of this area, such as user-pull, mobile and highly distributed operation, bandwidth needs and degree of security, are DoD-driven...by a variety of statutory requirements. R&D will provide enhanced mission effectiveness and maintenance of fragile ecosystems. The goal is to develop

  8. The HyperSkript Authoring Environment--An Integrated Approach for Producing, Maintaining, and Using Multimedia Lecture Material.

    ERIC Educational Resources Information Center

    Brennecke, Andreas; Selke, Harald

    Based on a technical infrastructure that supports face-to-face university teaching, an environment that enables small groups of lecturers to develop and maintain lecture material cooperatively was developed. In order to allow for a flexible use, only a few formal workshops are imposed on the users while cooperation is supported by easy-to-use…

  9. User-level framework for performance monitoring of HPC applications

    NASA Astrophysics Data System (ADS)

    Hristova, R.; Goranov, G.

    2013-10-01

    HP-SEE is an infrastructure that links the existing HPC facilities in South East Europe into a common infrastructure. Analysis of the performance monitoring of High-Performance Computing (HPC) applications in the infrastructure can be useful to the end user as a diagnostic of the overall performance of his applications. The existing monitoring tools for HP-SEE provide the end user with only aggregated information across all applications. Usually, the user does not have permission to select only the information relevant to him and his applications. In this article we present a framework for performance monitoring of HPC applications in the HP-SEE infrastructure. The framework provides standardized performance metrics, which every user can use to monitor his applications. Furthermore, a programming interface has been developed as part of the framework. The interface allows the user to publish metrics data from his application and to read and analyze the gathered information. Publishing and reading through the framework are possible only with a grid certificate valid for the infrastructure, so each user is authorized to access only the data for his own applications.
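
    The publish/read pattern with certificate-gated access might look like the following minimal sketch; the class, the mocked certificate check, and all names are illustrative assumptions rather than the actual HP-SEE interface.

    ```python
    # Sketch of the publish/read pattern described above: users publish
    # per-application metrics and read back only their own data, gated by a
    # (mocked) grid-certificate check. All names and DNs are invented.
    from collections import defaultdict

    class MetricsStore:
        def __init__(self, valid_certs):
            self._valid = set(valid_certs)   # stands in for real X.509 validation
            self._data = defaultdict(list)   # (owner, app) -> metric samples

        def _auth(self, cert_dn):
            if cert_dn not in self._valid:
                raise PermissionError("certificate not valid for this infrastructure")

        def publish(self, cert_dn, app, metric, value):
            self._auth(cert_dn)
            self._data[(cert_dn, app)].append((metric, value))

        def read(self, cert_dn, app):
            self._auth(cert_dn)              # users see only their own applications
            return list(self._data[(cert_dn, app)])

    store = MetricsStore(valid_certs={"/C=BG/O=HP-SEE/CN=R Hristova"})
    store.publish("/C=BG/O=HP-SEE/CN=R Hristova", "mhd-sim", "walltime_s", 5420)
    print(store.read("/C=BG/O=HP-SEE/CN=R Hristova", "mhd-sim"))
    ```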

  10. NFFA-Europe: enhancing European competitiveness in nanoscience research and innovation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Carsughi, Flavio; Fonseca, Luis

    2017-06-01

    NFFA-EUROPE is a European open-access resource for experimental and theoretical nanoscience that sets out a platform to carry out comprehensive projects for multidisciplinary research at the nanoscale, extending from synthesis to nanocharacterization to theory and numerical simulation. Advanced infrastructures specialized in growth, nano-lithography, nano-characterization, theory and simulation, and fine analysis with Synchrotron, FEL and Neutron radiation sources are integrated in a multi-site combination to develop frontier research on methods for reproducible nanoscience research and to enable European and international researchers from diverse disciplines to carry out advanced proposals impacting science and innovation. NFFA-EUROPE will enable coordinated access to infrastructures covering different aspects of nanoscience research that is not currently available at single specialized sites, without duplicating their specific scopes. Approved user projects will have access to the best-suited instruments and support competences for performing the research, including access to analytical large-scale facilities, theory and simulation, and high-performance computing facilities. Access is offered free of charge to European users, who will receive a financial contribution towards their travel, accommodation and subsistence costs. User access will include several "installations" and will be coordinated through a single entry-point portal that will activate an advanced user-infrastructure dialogue to build a personalized access programme with an increasing return on science and innovation production. NFFA-EUROPE's own research activity will address key bottlenecks of nanoscience research: nanostructure traceability, protocol reproducibility, in-operando nano-manipulation and analysis, and open data.

  11. International Space Station Alpha user payload operations concept

    NASA Technical Reports Server (NTRS)

    Schlagheck, Ronald A.; Crysel, William B.; Duncan, Elaine F.; Rider, James W.

    1994-01-01

    International Space Station Alpha (ISSA) will accommodate a variety of user payloads investigating diverse scientific and technology disciplines on behalf of five international partners: Canada, Europe, Japan, Russia, and the United States. A combination of crew, automated systems, and ground operations teams will control payload operations that require complementary on-board and ground systems. This paper presents the current planning for the ISSA U.S. user payload operations concept and the functional architecture supporting the concept. It describes various NASA payload operations facilities, their interfaces, user facility flight support, the payload planning system, the onboard and ground data management system, and payload operations crew and ground personnel training. This paper summarizes the payload operations infrastructure and architecture developed at the Marshall Space Flight Center (MSFC) to prepare and conduct ISSA on-orbit payload operations from the Payload Operations Integration Center (POIC), and from various user operations locations. The authors pay particular attention to user data management, which includes interfaces with both the onboard data management system and the ground data system. Discussion covers the functional disciplines that define and support POIC payload operations: Planning, Operations Control, Data Management, and Training. The paper describes potential interfaces between users and the POIC disciplines, from the U.S. user perspective.

  12. Combining Interactive Infrastructure Modeling and Evolutionary Algorithm Optimization for Sustainable Water Resources Design

    NASA Astrophysics Data System (ADS)

    Smith, R.; Kasprzyk, J. R.; Zagona, E. A.

    2013-12-01

    Population growth and climate change, combined with difficulties in building new infrastructure, motivate portfolio-based solutions to ensuring sufficient water supply. Powerful simulation models with graphical user interfaces (GUI) are often used to evaluate infrastructure portfolios; these GUI based models require manual modification of the system parameters, such as reservoir operation rules, water transfer schemes, or system capacities. Multiobjective evolutionary algorithm (MOEA) based optimization can be employed to balance multiple objectives and automatically suggest designs for infrastructure systems, but MOEA based decision support typically uses a fixed problem formulation (i.e., a single set of objectives, decisions, and constraints). This presentation suggests a dynamic framework for linking GUI-based infrastructure models with MOEA search. The framework begins with an initial formulation which is solved using a MOEA. Then, stakeholders can interact with candidate solutions, viewing their properties in the GUI model. This is followed by changes in the formulation which represent users' evolving understanding of exigent system properties. Our case study is built using RiverWare, an object-oriented, data-centered model that facilitates the representation of a diverse array of water resources systems. Results suggest that assumptions within the initial MOEA search are violated after investigating tradeoffs and reveal how formulations should be modified to better capture stakeholders' preferences.
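
    The sketch below illustrates the MOEA idea in miniature: random search with a Pareto filter (standing in for a real MOEA such as NSGA-II) over two-decision infrastructure portfolios, with an invented water-balance function standing in for a GUI simulator like RiverWare; all costs, yields and the demand figure are assumptions.

    ```python
    # Tiny two-objective portfolio search sketch: minimize cost, minimize
    # shortage. Random sampling plus a Pareto filter stands in for a real
    # MOEA; the "model" below is an invented stand-in for a simulator.
    import random

    def evaluate(capacity_mcm, transfer_mcm):
        cost = 2.0 * capacity_mcm + 5.0 * transfer_mcm   # invented cost curve
        supply = 0.8 * capacity_mcm + transfer_mcm       # invented yield model
        shortage = max(0.0, 120.0 - supply)              # assumed demand: 120 MCM
        return cost, shortage

    def pareto(points):
        """Keep points not dominated in both objectives by any other point."""
        return [p for p in points
                if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

    random.seed(1)
    designs = [(random.uniform(0, 150), random.uniform(0, 60)) for _ in range(500)]
    front = pareto([evaluate(c, t) for c, t in designs])
    for cost, short in sorted(front)[:5]:
        print(f"cost={cost:7.1f}  shortage={short:6.1f}")
    ```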

  13. The StratusLab cloud distribution: Use-cases and support for scientific applications

    NASA Astrophysics Data System (ADS)

    Floros, E.

    2012-04-01

    The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, the Claudia service manager and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely Computing (life-cycle management of virtual machines), Storage, Appliance management and Networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites; a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities like elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. In this direction, we have developed and currently provide support for setting up general-purpose computing solutions like Hadoop, MPI and Torque clusters. As concerns scientific applications, the project is collaborating closely with the bioinformatics community in order to prepare VM appliances and deploy optimized services for bioinformatics applications. In a similar manner, additional scientific disciplines like Earth Science can take advantage of StratusLab cloud solutions. Interested users are welcome to join StratusLab's user community by getting access to the reference cloud services deployed by the project and offered to the public.

  14. On-Line Databases in Mexico.

    ERIC Educational Resources Information Center

    Molina, Enzo

    1986-01-01

    Use of online bibliographic databases in Mexico is provided through Servicio de Consulta a Bancos de Informacion, a public service that provides information retrieval, document delivery, translation, technical support, and training services. Technical infrastructure is based on a public packet-switching network and institutional users may receive…

  15. Cloud Infrastructures for In Silico Drug Discovery: Economic and Practical Aspects

    PubMed Central

    Clematis, Andrea; Quarati, Alfonso; Cesini, Daniele; Milanesi, Luciano; Merelli, Ivan

    2013-01-01

    Cloud computing opens new perspectives for small-medium biotechnology laboratories that need to perform bioinformatics analysis in a flexible and effective way. This seems particularly true for hybrid clouds that couple the scalability offered by general-purpose public clouds with the greater control and ad hoc customizations supplied by the private ones. A hybrid cloud broker, acting as an intermediary between users and public providers, can support customers in the selection of the most suitable offers, optionally adding the provisioning of dedicated services with higher levels of quality. This paper analyses some economic and practical aspects of exploiting cloud computing in a real research scenario for the in silico drug discovery in terms of requirements, costs, and computational load based on the number of expected users. In particular, our work is aimed at supporting both the researchers and the cloud broker delivering an IaaS cloud infrastructure for biotechnology laboratories exposing different levels of nonfunctional requirements. PMID:24106693
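
    A toy version of the kind of cost-and-load estimate such an analysis rests on is sketched below; the rates, job sizes, and linear cost model are all invented for illustration, not figures from the paper.

    ```python
    # Toy cost/load estimate of the kind a hybrid cloud broker analysis
    # performs: given expected users and per-job demands, size the
    # infrastructure and price it. Every number here is an assumption.

    def monthly_cost(users, jobs_per_user, core_hours_per_job,
                     cores_per_vm=8, vm_hour_rate_eur=0.40):
        core_hours = users * jobs_per_user * core_hours_per_job
        vm_hours = core_hours / cores_per_vm
        return core_hours, vm_hours * vm_hour_rate_eur

    for users in (10, 50, 200):
        ch, cost = monthly_cost(users, jobs_per_user=20, core_hours_per_job=12)
        print(f"{users:4d} users -> {ch:9.0f} core-hours, ~EUR {cost:8.0f}/month")
    ```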

  16. AstroCloud, a Cyber-Infrastructure for Astronomy Research: Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Li, C.; Wang, J.; Cui, C.; He, B.; Fan, D.; Yang, Y.; Chen, J.; Zhang, H.; Yu, C.; Xiao, J.; Wang, C.; Cao, Z.; Fan, Y.; Hong, Z.; Li, S.; Mi, L.; Wan, W.; Wang, J.; Yin, S.

    2015-09-01

    AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on CloudStack, an open-source cloud platform, we set up the cloud computing environment for the AstroCloud project. It consists of five distributed nodes across the mainland of China. Users can process and analyse data in this cloud computing environment. Based on GlusterFS, we built a scalable cloud storage system. Each user has a private space, which can be shared among different virtual machines and desktop systems. With this environment, astronomers can easily access astronomical data collected by different telescopes and data centers, and data producers can archive their datasets safely.

  17. EnviroAtlas: Two Use Cases in the EnviroAtlas

    EPA Science Inventory

    EnviroAtlas is an online spatial decision support tool for viewing and analyzing the supply, demand, and drivers of change related to natural and built infrastructure at multiple scales for the nation. To maximize usefulness to a broad range of users, EnviroAtlas contains trainin...

  18. The GEOSS User Requirement Registry (URR): A Cross-Cutting Service-Oriented Infrastructure Linking Science, Society and GEOSS

    NASA Astrophysics Data System (ADS)

    Plag, H.-P.; Foley, G.; Jules-Plag, S.; Ondich, G.; Kaufman, J.

    2012-04-01

    The Group on Earth Observations (GEO) is implementing the Global Earth Observation System of Systems (GEOSS) as a user-driven service infrastructure responding to the needs of users in nine interdependent Societal Benefit Areas (SBAs) of Earth observations (EOs). GEOSS applies an interdisciplinary scientific approach integrating observations, research, and knowledge in these SBAs in order to enable scientific interpretation of the collected observations and the extraction of actionable information. Using EOs to actually produce these societal benefits means getting the data and information to users, i.e., decision-makers. Thus, GEO needs to know what the users need and how they would use the information. The GEOSS User Requirements Registry (URR) is developed as a service-oriented infrastructure enabling a wide range of users, including science and technology (S&T) users, to express their needs in terms of EOs and to understand the benefits of GEOSS for their fields. S&T communities need to be involved in both the development and the use of GEOSS, and the development of the URR accounts for the special needs of these communities. The GEOSS Common Infrastructure (GCI) at the core of GEOSS includes system-oriented registries enabling users to discover, access, and use EOs and derived products and services available through GEOSS. In addition, the user-oriented URR is a place for the collection, sharing, and analysis of user needs and EO requirements, and it provides means for an efficient dialog between users and providers. The URR is a community-based infrastructure for the publishing, viewing, and analyzing of user-need related information. The data model of the URR has a core of seven relations for User Types, Applications, Requirements, Research Needs, Infrastructure Needs, Technology Needs, and Capacity Building Needs. The URR also includes a Lexicon, a number of controlled vocabularies, and
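
    The seven-relation core of the data model can be pictured as linked records. Below is a minimal sketch of three of the relations as plain dataclasses, with invented keys and link fields; the real URR schema is not reproduced here.

    ```python
    # Sketch of part of the URR's seven-relation core as plain dataclasses.
    # Keys, link fields and example values are invented assumptions.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class UserType:
        id: str
        name: str                      # e.g. "coastal-zone manager" (invented)

    @dataclass
    class Application:
        id: str
        title: str
        sba: str                       # one of the nine Societal Benefit Areas
        user_type_ids: List[str] = field(default_factory=list)

    @dataclass
    class Requirement:
        id: str
        application_id: str
        observable: str                # e.g. "sea surface temperature"
        accuracy: str

    # The remaining four relations (Research Needs, Infrastructure Needs,
    # Technology Needs, Capacity Building Needs) would follow the same
    # pattern, each linking back to Applications or Requirements.

    app = Application("app-17", "Storm-surge early warning", "Disasters", ["ut-3"])
    req = Requirement("req-42", app.id, "coastal sea level", "5 cm")
    print(app, req, sep="\n")
    ```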

  19. Hazard Management with DOORS: Rail Infrastructure Projects

    NASA Astrophysics Data System (ADS)

    Hughes, Dave; Saeed, Amer

    LOI is a major rail infrastructure project that will contribute to a modernised transport system in time for the 2012 Olympic Games. A review of the procedures and tool infrastructure was conducted in early 2006, coinciding with a planned move to main works. A hazard log support tool was needed to provide an automatic audit trail and version control, and to support collaborative working. A DOORS-based Hazard Log (DHL) was selected as the tool strategy. A systematic approach was followed for the development of DHL; after a series of tests and acceptance gateways, DHL was handed over to the project in autumn 2006. The first few months were used for operational trials, and the Hazard Management Procedure was modified to be a hybrid approach that used the strengths of both DHL and Excel. The user experience in the deployment of DHL is summarised and directions for future improvement identified.

  20. An authentication infrastructure for today and tomorrow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engert, D.E.

    1996-06-01

    The Open Software Foundation's Distributed Computing Environment (OSF/DCE) was originally designed to provide a secure environment for distributed applications. By combining it with Kerberos Version 5 from MIT, it can be extended to provide network security as well. This combination can be used to build both an inter- and intra-organizational infrastructure while providing single sign-on for the user with overall improved security. The ESnet community of the Department of Energy is building just such an infrastructure. ESnet has modified these systems to improve their interoperability, while encouraging the developers to incorporate these changes and work more closely together to continue to improve the interoperability. The success of this infrastructure depends on its flexibility to meet the needs of many applications and network security requirements. The open nature of Kerberos, combined with the vendor support of OSF/DCE, provides the infrastructure for today and tomorrow.

  1. System Dynamics Approach for Critical Infrastructure and Decision Support. A Model for a Potable Water System.

    NASA Astrophysics Data System (ADS)

    Pasqualini, D.; Witkowski, M.

    2005-12-01

    The Critical Infrastructure Protection / Decision Support System (CIP/DSS) project, supported by the Science and Technology Office, has been developing a risk-informed decision support system that provides insights for making critical infrastructure protection decisions. The system considers seventeen Department of Homeland Security-defined critical infrastructures (potable water system, telecommunications, public health, economics, etc.) and their primary interdependencies. These infrastructures have been modeled in a single model, the CIP/DSS Metropolitan Model. The modeling approach used is system dynamics, which combines control theory and nonlinear dynamics: a set of coupled differential equations seeks to explain how the structure of a given system determines its behavior. In this poster we present a system dynamics model for one of the seventeen critical infrastructures, a generic metropolitan potable water system (MPWS). The goals are threefold: 1) to gain a better understanding of the MPWS infrastructure; 2) to identify improvements that would help protect the MPWS; and 3) to understand the consequences, interdependencies, and impacts when perturbations occur to the system. The model represents raw water sources, the metropolitan water treatment process, storage of treated water, damage and repair to the MPWS, distribution of water, and end-user demand, but does not explicitly represent the detailed network topology of an actual MPWS. The MPWS model depends upon inputs from the metropolitan population, energy, telecommunication, public health, and transportation models as well as the national water and transportation models. We present modeling results and sensitivity analyses indicating critical choke points and the negative and positive feedback loops in the system. A general scenario is also analyzed in which the potable water system responds to a generic disruption.
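
    To give a flavor of the stock-and-flow formulation such a model uses, the toy sketch below integrates a single treated-water stock under a generic disruption. All rates, capacities, and the feedback rule are invented for illustration; this is not the CIP/DSS MPWS model.

    ```python
    # Toy system dynamics sketch: one stock (treated-water storage) with a
    # treatment inflow, a demand outflow, and one balancing feedback loop.
    dt, t_end = 0.1, 30.0          # time step and horizon (days)
    storage = 50.0                 # treated-water stock (ML), assumed
    treatment_rate = 10.0          # nominal plant inflow (ML/day), assumed
    base_demand = 10.0             # nominal end-user demand (ML/day), assumed

    history = []
    for step in range(int(t_end / dt)):
        t = step * dt
        # generic disruption: treatment capacity halved from day 10 to day 15
        inflow = treatment_rate * (0.5 if 10.0 <= t < 15.0 else 1.0)
        # balancing feedback: demand is curtailed as the stored volume runs low
        outflow = base_demand * min(1.0, storage / 20.0)
        storage += (inflow - outflow) * dt   # Euler step for the stock equation
        history.append(storage)

    print(f"minimum stored water over the run: {min(history):.1f} ML")
    ```

    Sweeping the disruption window or the curtailment threshold is the kind of sensitivity analysis that exposes choke points in the coupled model.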

  2. Solar research with ALMA: Czech node of European ARC as your user-support infrastructure

    NASA Astrophysics Data System (ADS)

    Bárta, M.; Skokić, I.; Brajša, R.; Czech ARC Node Team

    2017-08-01

    ALMA (Atacama Large Millimeter/sub-millimeter Array) is by far the largest project among current ground-based observational facilities in astronomy and astrophysics. It is built and operated in worldwide cooperation (ESO, NRAO, NAOJ) at an altitude of 5000 m in the Atacama desert, Chile. Because of its unprecedented capabilities, ALMA is considered a cutting-edge research facility in astrophysics, with the potential for many breakthrough discoveries in the next decade and beyond. Although it is not an exclusively solar-research-dedicated instrument, science observations of the Sun are now possible and recently started in observing Cycle 4 (2016-2017). In order to facilitate access to this top-class but very complicated device for researchers lacking technical expertise, a network of three ALMA Regional Centers (ARCs) has been formed in Europe, North America, and East Asia as a user-support infrastructure and interface between the observatory and the user community. After a short introduction to ALMA, the roles of the ARCs and hints on how to utilize their services will be presented, with emphasis on the specific (and in Europe unique) mission of the Czech ARC node in solar research with ALMA. Finally, peculiarities of solar observations that demanded the development of the specific Solar ALMA Observing Modes will be discussed, and the results of the Commissioning and Science Verification observing campaigns (solar ALMA maps) will be shown.

  3. Helix Nebula: Enabling federation of existing data infrastructures and data services to an overarching cross-domain e-infrastructure

    NASA Astrophysics Data System (ADS)

    Lengert, Wolfgang; Farres, Jordi; Lanari, Riccardo; Casu, Francesco; Manunta, Michele; Lassalle-Balier, Gerard

    2014-05-01

    Helix Nebula has established a growing public-private partnership of more than 30 commercial cloud providers, SMEs, and publicly funded research organisations and e-infrastructures. The Helix Nebula strategy is to establish a federated cloud service across Europe. Three high-profile flagships, sponsored by CERN (high energy physics), EMBL (life sciences) and ESA/DLR/CNES/CNR (earth science), have been deployed and extensively tested within this federated environment. The commitments behind these initial flagships have created a critical mass that attracts suppliers and users to the initiative, to work together towards an "Information as a Service" market place. Significant progress has been achieved in implementing the following four programmatic goals (as outlined in the Strategic Plan, Ref. 1): (1) establish a cloud computing infrastructure for the European Research Area (ERA) serving as a platform for innovation and evolution of the overall infrastructure; (2) identify and adopt suitable policies for trust, security and privacy that can be provided at a European level by the European cloud computing framework and infrastructure; (3) create a light-weight governance structure for the future European cloud computing infrastructure that involves all the stakeholders and can evolve over time as the infrastructure, services and user base grow; and (4) define a funding scheme involving the three stakeholder groups (service suppliers; users; EC and national funding agencies) in a public-private-partnership model to implement a cloud computing infrastructure that delivers a sustainable business environment adhering to European-level policies. Now, in 2014, a first version of this generic cross-domain e-infrastructure is ready to go into operation, building on a federation of European industry and contributors (data, tools, knowledge, ...). This presentation describes how Helix Nebula is being used in the domain of earth science, focusing on geohazards. The so-called "Supersite Exploitation Platform" (SSEP) provides scientists with an overarching federated e-infrastructure offering very fast access to (i) large volumes of data (EO/non-space data), (ii) computing resources (e.g. hybrid cloud/grid), (iii) processing software (e.g. toolboxes, RTMs, retrieval baselines, visualization routines), and (iv) general platform capabilities (e.g. user management and access control, accounting, information portal, collaborative tools, social networks etc.). In this federation each data provider remains in full control of the implementation of its data policy. This presentation outlines the architecture (technical and services) supporting very heterogeneous science domains as well as the procedures for newcomers to join the Helix Nebula Market Place. Ref. 1: http://cds.cern.ch/record/1374172/files/CERN-OPEN-2011-036.pdf

  4. Quantitative Investigation of the Technologies That Support Cloud Computing

    ERIC Educational Resources Information Center

    Hu, Wenjin

    2014-01-01

    Cloud computing is dramatically shaping modern IT infrastructure. It virtualizes computing resources, provides elastic scalability, serves as a pay-as-you-use utility, simplifies the IT administrators' daily tasks, enhances the mobility and collaboration of data, and increases user productivity. We focus on providing generalized black-box…

  5. A Security Architecture for Grid-enabling OGC Web Services

    NASA Astrophysics Data System (ADS)

    Angelini, Valerio; Petronzio, Luca

    2010-05-01

    In the proposed presentation we describe an architectural solution for enabling secure access to Grids and possibly other large-scale on-demand processing infrastructures through OGC (Open Geospatial Consortium) Web Services (OWS). This work has been carried out in the context of the security thread of the G-OWS Working Group. G-OWS (gLite enablement of OGC Web Services) is an international open initiative started in 2008 by the European CYCLOPS, GENESI-DR, and DORII Project Consortia in order to collect and coordinate experiences in the enablement of OWSs on top of the gLite Grid middleware. G-OWS investigates the development of Spatial Data and Information Infrastructures (SDI and SII) based on Grid/Cloud capacity in order to enable Earth Science applications and tools. Concerning security, the integration of OWS-compliant infrastructures and gLite Grids must address relevant challenges, due to their respective design principles. OWSs are part of a Web-based architecture that delegates security aspects to other specifications, whereas the gLite middleware implements the Grid paradigm with a strong security model (the gLite Grid Security Infrastructure: GSI). In our work we propose a Security Architectural Framework allowing the seamless use of Grid-enabled OGC Web Services through the federation of existing security systems (mostly web based) with the gLite GSI. This is made possible by mediating between different security realms, whose mutual trust is established in advance during the deployment of the system itself. Our architecture is composed of three security tiers: the user's security system, a specific G-OWS security system, and the gLite Grid Security Infrastructure. Applying the separation-of-concerns principle, each of these tiers is responsible for controlling access to a well-defined resource set, respectively: the user's organization resources, the geospatial resources and services, and the Grid resources. While the gLite middleware is tied to a consolidated security approach based on X.509 certificates, our system is able to support different kinds of user security infrastructures. Our central component, the G-OWS Security Framework, is based on the OASIS WS-Trust specifications and on the OGC GeoRM architectural framework. This makes it possible to satisfy advanced requirements such as the enforcement of specific geospatial policies and chained secure web service requests. The typical use case is a scientist belonging to a given organization who issues a request to a G-OWS Grid-enabled Web Service. The system initially asks the user to authenticate to his/her organization's security system and, after verification of the user's security credentials, it translates the user's digital identity into a G-OWS identity. This identity is linked to a set of attributes describing the user's access rights to the G-OWS services and resources. Inside the G-OWS security system, access restrictions are applied using the enhanced geospatial capabilities specified by OGC GeoXACML. If the required action needs to make use of the Grid environment, the system checks whether the user is entitled to access a Grid infrastructure. In that case his/her identity is translated into a temporary Grid security token using the Short Lived Credential Services (IGTF standard). In our case, for the specific gLite Grid infrastructure, some information (VOMS attributes) is plugged into the Grid security token to grant access to the user's Virtual Organization Grid resources. The resulting token is used to submit the request to the Grid and is also used by the various gLite middleware elements to verify the user's grants. Based on the presented framework, the G-OWS Security Working Group developed a prototype enabling the execution of OGC Web Services on the EGEE Production Grid through federation with a Shibboleth-based security infrastructure. Future plans aim to integrate other Web authentication services such as OpenID, Kerberos and WS-Federation.
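
    The identity-translation step described above can be pictured with the sketch below. It is a conceptual stand-in only: the real framework uses WS-Trust, GeoXACML, and IGTF short-lived credential services, whereas this toy code maps a verified home-organization identity to a G-OWS identity and signs a temporary token with an HMAC key; all names and attribute values are invented.

    ```python
    # Conceptual sketch of federated identity translation; not the G-OWS API.
    import hashlib
    import hmac
    import json
    import time

    GOWS_SIGNING_KEY = b"gows-demo-key"   # placeholder for an STS signing key

    def translate_identity(home_org_identity: dict) -> dict:
        """Map a verified home-organization identity to a G-OWS identity."""
        return {
            "subject": home_org_identity["user"],
            "attributes": ["geoprocessing:execute"],   # access rights (assumed)
            "grid_eligible": home_org_identity.get("grid", False),
        }

    def issue_short_lived_token(gows_identity: dict, ttl_s: int = 3600) -> str:
        """Issue a temporary signed token standing in for a Grid credential."""
        payload = json.dumps({**gows_identity, "exp": time.time() + ttl_s})
        sig = hmac.new(GOWS_SIGNING_KEY, payload.encode(),
                       hashlib.sha256).hexdigest()
        return f"{payload}.{sig}"

    user = {"user": "alice@university.example", "grid": True}
    token = issue_short_lived_token(translate_identity(user))
    ```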

  6. The Virtual Geophysics Laboratory (VGL): Scientific Workflows Operating Across Organizations and Across Infrastructures

    NASA Astrophysics Data System (ADS)

    Cox, S. J.; Wyborn, L. A.; Fraser, R.; Rankine, T.; Woodcock, R.; Vote, J.; Evans, B.

    2012-12-01

    The Virtual Geophysics Laboratory (VGL) is a web portal that provides geoscientists with an integrated online environment that: seamlessly accesses geophysical and geoscience data services from the AuScope national geoscience information infrastructure; loosely couples these data to a variety of geoscience software tools; and provides large-scale processing facilities via cloud computing. VGL is a collaboration between CSIRO, Geoscience Australia, National Computational Infrastructure, Monash University, Australian National University and the University of Queensland. VGL provides a distributed system whereby a user can enter an online virtual laboratory and seamlessly connect to OGC web services for geoscience data. The data is supplied in open, standards-based formats such as GeoSciML. A VGL user employs a web mapping interface to discover the data sources and to define a subset using spatial and attribute filters. Once the data is selected, the user is not required to download it: VGL collates the service query information for use later in the processing workflow, when it is staged directly to the computing facilities. The combination of deferred data download and access to cloud computing enables VGL users to access their data at higher resolutions and to undertake larger-scale inversions and more complex models and simulations than their own local computing facilities might allow. Inside the Virtual Geophysics Laboratory, the user has access to a library of existing models, complete with exemplar workflows for specific scientific problems based on those models. For example, the user can load a geological model published by Geoscience Australia, apply a basic deformation workflow provided by a CSIRO scientist, and have it run in a scientific code from Monash. Finally, the user can publish these results to share with a colleague or cite in a paper. This opens new opportunities for access and collaboration, as all the resources (models, code, data, processing) are shared in the one virtual laboratory. VGL provides end users with an intuitive, user-centered interface that leverages cloud storage and cloud and cluster processing from both research communities and commercial suppliers (e.g. Amazon). As the underlying data and information services are agnostic of the scientific domain, they can support many other data types. This fundamental characteristic results in a highly reusable virtual laboratory infrastructure that could also serve, for example, natural hazards, satellite processing, soil geochemistry, climate modeling, and agricultural crop modeling.
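
    The "discover, filter, defer download" pattern can be sketched with OWSLib, a common Python client for OGC services. The endpoint and feature type below are placeholders rather than the actual AuScope services; in VGL itself the resulting query would be staged to the cloud facility instead of being read locally.

    ```python
    # Sketch of a server-side-filtered WFS request; URL and typename assumed.
    from owslib.wfs import WebFeatureService

    wfs = WebFeatureService(
        url="https://services.example.org/geoscience/wfs",  # hypothetical
        version="1.1.0",
    )

    # Spatial and attribute filtering happen on the server, so only the
    # user-selected subset is ever transferred.
    response = wfs.getfeature(
        typename=["gsml:Borehole"],          # illustrative GeoSciML type
        bbox=(115.0, -35.0, 120.0, -30.0),   # lon/lat spatial filter
        maxfeatures=100,
    )
    print(response.read()[:200])
    ```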

  7. Incorporation of Personal Single Nucleotide Polymorphism (SNP) Data into a National Level Electronic Health Record for Disease Risk Assessment, Part 2: The Incorporation of SNP into the National Health Information System of Turkey

    PubMed Central

    Beyan, Timur

    2014-01-01

    Background A personalized medicine approach provides opportunities for predictive and preventive medicine. Using genomic, clinical, environmental, and behavioral data, the tracking and management of individual wellness is possible. A prolific way to carry this personalized approach into routine practices can be accomplished by integrating clinical interpretations of genomic variations into electronic medical record (EMR)/electronic health record (EHR) systems. Today, various central EHR infrastructures have been constituted in many countries of the world, including Turkey. Objective As an initial attempt to develop a sophisticated infrastructure, we have concentrated on incorporating the personal single nucleotide polymorphism (SNP) data into the National Health Information System of Turkey (NHIS-T) for disease risk assessment, and evaluated the performance of various predictive models for prostate cancer cases. We present our work as a miniseries containing three parts: (1) an overview of requirements, (2) the incorporation of SNP into the NHIS-T, and (3) an evaluation of SNP data incorporated into the NHIS-T for prostate cancer. Methods For the second article of this miniseries, we have analyzed the existing NHIS-T and proposed possible extensional architectures. In light of the literature survey and the characteristics of the NHIS-T, we have proposed and argued opportunities and obstacles for a SNP-incorporated NHIS-T. A prototype with complementary capabilities (knowledge base and end-user applications) for these architectures has been designed and developed. Results In the proposed architectures, the clinically relevant personal SNP (CR-SNP) and clinicogenomic associations are shared between central repositories and end users via the NHIS-T infrastructure. To produce these files, we need to develop a national-level clinicogenomic knowledge base. Regarding clinicogenomic decision support, we planned to complete interpretation of these associations on the end-user applications. This approach gives us the flexibility to add/update envirobehavioral parameters and family health history that will be monitored or collected by end users. Conclusions Our results emphasized that even though the existing NHIS-T messaging infrastructure supports the integration of SNP data and clinicogenomic association, it is critical to develop a national-level, accredited knowledge base and better end-user systems for the interpretation of genomic, clinical, and envirobehavioral parameters. PMID:25599817

  8. Incorporation of personal single nucleotide polymorphism (SNP) data into a national level electronic health record for disease risk assessment, part 2: the incorporation of SNP into the national health information system of Turkey.

    PubMed

    Beyan, Timur; Aydın Son, Yeşim

    2014-08-11

    A personalized medicine approach provides opportunities for predictive and preventive medicine. Using genomic, clinical, environmental, and behavioral data, the tracking and management of individual wellness is possible. A prolific way to carry this personalized approach into routine practices can be accomplished by integrating clinical interpretations of genomic variations into electronic medical record (EMR)/electronic health record (EHR) systems. Today, various central EHR infrastructures have been constituted in many countries of the world, including Turkey. As an initial attempt to develop a sophisticated infrastructure, we have concentrated on incorporating the personal single nucleotide polymorphism (SNP) data into the National Health Information System of Turkey (NHIS-T) for disease risk assessment, and evaluated the performance of various predictive models for prostate cancer cases. We present our work as a miniseries containing three parts: (1) an overview of requirements, (2) the incorporation of SNP into the NHIS-T, and (3) an evaluation of SNP data incorporated into the NHIS-T for prostate cancer. For the second article of this miniseries, we have analyzed the existing NHIS-T and proposed possible extensional architectures. In light of the literature survey and the characteristics of the NHIS-T, we have proposed and argued opportunities and obstacles for a SNP-incorporated NHIS-T. A prototype with complementary capabilities (knowledge base and end-user applications) for these architectures has been designed and developed. In the proposed architectures, the clinically relevant personal SNP (CR-SNP) and clinicogenomic associations are shared between central repositories and end users via the NHIS-T infrastructure. To produce these files, we need to develop a national-level clinicogenomic knowledge base. Regarding clinicogenomic decision support, we planned to complete interpretation of these associations on the end-user applications. This approach gives us the flexibility to add/update envirobehavioral parameters and family health history that will be monitored or collected by end users. Our results emphasized that even though the existing NHIS-T messaging infrastructure supports the integration of SNP data and clinicogenomic association, it is critical to develop a national-level, accredited knowledge base and better end-user systems for the interpretation of genomic, clinical, and envirobehavioral parameters.
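
    The CR-SNP idea lends itself to a small illustration. In the sketch below, a personal SNP profile is matched against a toy clinicogenomic knowledge base; the rsIDs are real prostate-cancer-associated markers, but the risk genotypes, odds ratios, and record structure are invented for demonstration and are not taken from the NHIS-T design.

    ```python
    # Illustrative CR-SNP lookup against a toy clinicogenomic knowledge base.
    CLINICOGENOMIC_KB = {
        # rsID -> (risk genotype, associated condition, odds ratio) -- assumed
        "rs1447295": ("AA", "prostate cancer", 1.4),
        "rs6983267": ("GG", "prostate cancer", 1.3),
    }

    def assess_risk(personal_snps: dict) -> list:
        """Return knowledge-base associations matched by a personal profile."""
        hits = []
        for rsid, genotype in personal_snps.items():
            entry = CLINICOGENOMIC_KB.get(rsid)
            if entry and entry[0] == genotype:
                hits.append({"rsid": rsid, "condition": entry[1], "or": entry[2]})
        return hits

    profile = {"rs1447295": "AA", "rs6983267": "GT"}
    print(assess_risk(profile))   # one matched association
    ```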

  9. Software defined networking (SDN) over space division multiplexing (SDM) optical networks: features, benefits and experimental demonstration.

    PubMed

    Amaya, N; Yan, S; Channegowda, M; Rofoee, B R; Shu, Y; Rashidi, M; Ou, Y; Hugues-Salas, E; Zervas, G; Nejabati, R; Simeonidou, D; Puttnam, B J; Klaus, W; Sakaguchi, J; Miyazawa, T; Awaji, Y; Harai, H; Wada, N

    2014-02-10

    We present results from the first demonstration of a fully integrated SDN-controlled bandwidth-flexible and programmable SDM optical network utilizing sliceable self-homodyne spatial superchannels to support dynamic bandwidth and QoT provisioning, infrastructure slicing and isolation. Results show that SDN is a suitable control plane solution for the high-capacity flexible SDM network. It is able to provision end-to-end bandwidth and QoT requests according to user requirements, considering the unique characteristics of the underlying SDM infrastructure.

  10. Environmentally Preferable Coatings for Structural Steel Project

    NASA Technical Reports Server (NTRS)

    Lewis, Pattie L. (Editor)

    2014-01-01

    The Ground Systems Development and Operations (GSDO) Program at NASA John F. Kennedy Space Center (KSC) has the primary objective of modernizing and transforming the launch and range complex at KSC to benefit current and future NASA programs along with other emerging users. Described as the "launch support and infrastructure modernization program" in the NASA Authorization Act of 2010, the GSDO Program will develop and implement shared infrastructure and process improvements to provide more flexible, affordable, and responsive capabilities to a multi-user community. In support of the GSDO Program, the objective of this project is to determine the feasibility of environmentally friendly corrosion-resistant coatings for launch facilities and ground support equipment. The focus of the project is corrosion resistance and survivability, with the goal of reducing the amount of maintenance required to preserve the performance of launch facilities while reducing mission risk. KSC has a large number of facilities and structures with metallic structural and non-structural components in a highly corrosive environment. Metals require periodic maintenance activity to guard against the insidious effects of corrosion and thus ensure that structures meet or exceed their design or performance life. The standard practice for protecting metallic substrates in atmospheric environments is the application of a corrosion-protective coating system.

  11. Use Model for a User Centred Design in Multidisciplinary Teams.

    PubMed

    Clark, Colin; Michelle, Jess; Shahi, Sepideh; Stolarick, Kevin; Trevinarus, Jutta; Vanderheiden, Gregg; Vimarlund, Vivian

    2017-01-01

    The Use Model identifies the user groups who will be using the services and products that the Prosperity4All infrastructure offers. The Model provides developers with a tool to keep the full diversity of users in mind while designing and building the infrastructure.

  12. Mars mission effects on Space Station evolution

    NASA Technical Reports Server (NTRS)

    Askins, Barbara S.; Cook, Stephen G.

    1989-01-01

    The permanently manned Space Station, scheduled to be operational in low Earth orbit by the mid-1990s, will provide accommodations for science, applications, technology, and commercial users, and will develop enabling capabilities for future missions. A major aspect of the baseline Space Station design is that provisions for evolution to greater capabilities are included in the system and subsystem designs. User requirements are the basis for the conceptual evolution modes and the infrastructure to support them. Four such modes are discussed in support of a human mission to Mars, along with some of the near-term actions that protect the Space Station's future ability to support Mars missions. The evolution modes include crew and payload transfer, storage, checkout, assembly, maintenance, repair, and fueling.

  13. Watch-and-Comment as an Approach to Collaboratively Annotate Points of Interest in Video and Interactive-TV Programs

    NASA Astrophysics Data System (ADS)

    Pimentel, Maria Da Graça C.; Cattelan, Renan G.; Melo, Erick L.; Freitas, Giliard B.; Teixeira, Cesar A.

    In earlier work we proposed the Watch-and-Comment (WaC) paradigm as the seamless capture of multimodal comments made by one or more users while watching a video, resulting in the automatic generation of multimedia documents specifying annotated interactive videos. The aim is to allow services to be offered by applying document engineering techniques to the multimedia document generated automatically. The WaC paradigm was demonstrated with a WaCTool prototype application which supports multimodal annotation over video frames and segments, producing a corresponding interactive video. In this chapter, we extend the WaC paradigm to consider contexts in which several viewers may use their own mobile devices while watching and commenting on an interactive-TV program. We first review our previous work. Next, we discuss scenarios in which mobile users can collaborate via the WaC paradigm. We then present a new prototype application which allows users to employ their mobile devices to collaboratively annotate points of interest in video and interactive-TV programs. We also detail the current software infrastructure which supports our new prototype; the infrastructure extends the Ginga middleware for the Brazilian Digital TV with an implementation of the UPnP protocol - the aim is to provide the seamless integration of the users' mobile devices into the TV environment. As a result, the work reported in this chapter defines the WaC paradigm for the mobile-user as an approach to allow the collaborative annotation of the points of interest in video and interactive-TV programs.

  14. A Service Oriented Infrastructure for Earth Science exchange

    NASA Astrophysics Data System (ADS)

    Burnett, M.; Mitchell, A.

    2008-12-01

    NASA's Earth Science Distributed Information System (ESDIS) program has developed an infrastructure for the exchange of Earth Observation related resources. Fundamentally a platform for Service Oriented Architectures, ECHO provides standards-based interfaces for the basic interactions of the SOA pattern: Publish, Find and Bind. This infrastructure enables the realization of the benefits of Service Oriented Architectures, namely the reduction of stove-piped systems, the opportunity for reuse, and the flexibility to meet dynamic business needs, on a global scale. ECHO is the result of the infusion of IT technologies, including Web Services and Service Oriented Architecture standards. The infrastructure is based on standards and leverages registries for data, services, clients and applications. As an operational system, ECHO currently represents over 110 million Earth Observation resources from a wide number of provider organizations. These partner organizations each have a primary mission: serving a particular facet of the Earth Observation community. Through ECHO, those partners can serve the needs of not only their target portion of the community, but also enable a wider range of users to discover and leverage their data resources, thereby increasing the value of their offerings. The Earth Observation community benefits from this infrastructure because it provides a set of common mechanisms for the discovery of, and access to, resources from a much wider range of data and service providers. ECHO enables innovative clients to be built for targeted user types and missions; several examples of such clients are already in progress. Applications built on this infrastructure can include user-driven GUI clients (web-based or thick clients), analysis programs (as intermediate components of larger systems), models, or decision support systems. This paper will provide insight into the development of ECHO, as technologies were evaluated for infusion, and a summary of how technologies were leveraged into a significant operational system for the Earth Observation community.
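
    The "Find" leg of the Publish-Find-Bind pattern can be sketched as a simple catalog search. The endpoint below is NASA's later Common Metadata Repository, which grew out of this infrastructure; treat the URL and response shape as assumptions rather than the historical ECHO API.

    ```python
    # Sketch of a catalog "Find" call; endpoint and fields are assumptions.
    import json
    import urllib.parse
    import urllib.request

    SEARCH_URL = "https://cmr.earthdata.nasa.gov/search/collections.json"

    def find_collections(keyword: str, page_size: int = 5) -> list:
        query = urllib.parse.urlencode({"keyword": keyword,
                                        "page_size": page_size})
        with urllib.request.urlopen(f"{SEARCH_URL}?{query}") as resp:
            entries = json.load(resp)["feed"]["entry"]
        return [entry.get("title") for entry in entries]

    print(find_collections("sea surface temperature"))
    ```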

  15. The open science grid

    NASA Astrophysics Data System (ADS)

    Pordes, Ruth; OSG Consortium; Petravick, Don; Kramer, Bill; Olson, Doug; Livny, Miron; Roy, Alain; Avery, Paul; Blackburn, Kent; Wenaus, Torre; Würthwein, Frank; Foster, Ian; Gardner, Rob; Wilde, Mike; Blatecky, Alan; McGee, John; Quick, Rob

    2007-07-01

    The Open Science Grid (OSG) provides a distributed facility where the Consortium members provide guaranteed and opportunistic access to shared computing and storage resources. OSG provides support for and evolution of the infrastructure through activities that cover operations, security, software, troubleshooting, addition of new capabilities, and support for existing and engagement with new communities. The OSG SciDAC-2 project provides specific activities to manage and evolve the distributed infrastructure and support its use. The innovative aspects of the project are the maintenance and performance of a collaborative (shared & common) petascale national facility over tens of autonomous computing sites, for many hundreds of users, transferring terabytes of data a day, executing tens of thousands of jobs a day, and providing robust and usable resources for scientific groups of all types and sizes. More information can be found at the OSG web site: www.opensciencegrid.org.

  16. The Age of the Supercomputer Gives Way to the Age of the Super Infrastructure.

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    1997-01-01

    In October 1997, the National Science Foundation will discontinue financial support for two university-based supercomputer facilities to concentrate resources on partnerships led by facilities at the University of California, San Diego and the University of Illinois, Urbana-Champaign. The reconfigured program will develop more user-friendly and…

  17. Alternative to Nitric Acid Passivation

    NASA Technical Reports Server (NTRS)

    Kessel, Kurt R.

    2015-01-01

    The Ground Systems Development and Operations (GSDO) Program at NASA John F. Kennedy Space Center (KSC), Florida, has the primary objective of modernizing and transforming the launch and range complex at KSC to benefit current and future NASA programs along with other emerging users. Described as the launch support and infrastructure modernization program in the NASA Authorization Act of 2010, the GSDO Program will develop and implement shared infrastructure and process improvements to provide more flexible, affordable, and responsive capabilities to a multi-user community. In support of NASA and the GSDO Program, the objective of this project is to qualify citric acid as an environmentally preferable alternative to nitric acid for passivation of stainless steel alloys. This project is a direct follow-on to United Space Alliance (USA) work at KSC to optimize the parameters for the use of citric acid and verify its effectiveness. This project will build off of the USA study to further evaluate citric acid's effectiveness and suitability for corrosion protection of a number of stainless steel alloys used by NASA, the Department of Defense (DoD), and the European Space Agency (ESA).

  18. The construction of a public key infrastructure for healthcare information networks in Japan.

    PubMed

    Sakamoto, N

    2001-01-01

    The digital signature is a key technology in the forthcoming Internet society for electronic healthcare as well as for electronic commerce. Efficient exchange of authorized information with digital signatures in healthcare information networks requires the construction of a public key infrastructure (PKI). In order to introduce a PKI to healthcare information networks in Japan, we proposed the development of a user authentication system based on a PKI for user management, user authentication and privilege management of healthcare information systems. In this paper, we describe the design of the user authentication system and its implementation. The user authentication system provides a certification authority service and a privilege management service, and comprises a user authentication client and user authentication servers. It is designed on the basis of an X.509 PKI and is implemented using OpenSSL and OpenLDAP. It was incorporated into the financial information management system for the national university hospitals and has been working successfully for about one year. The hospitals plan to use it as the user authentication method for their whole healthcare information systems. One implementation of the system is free to the national university hospitals with the permission of the Japanese Ministry of Education, Culture, Sports, Science and Technology. Another implementation is open to other healthcare institutes with the support of the Medical Information System Development Center (MEDIS-DC). We are moving forward to a nationwide construction of a PKI for healthcare information networks based on it.
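
    The core trust check in such a PKI is verifying that a user certificate was signed by the certification authority. A minimal sketch with Python's `cryptography` package follows, assuming RSA keys and PEM files on disk; the described system itself was built with OpenSSL and OpenLDAP.

    ```python
    # Minimal X.509 chain check: did the CA's key sign the user certificate?
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import padding

    with open("user_cert.pem", "rb") as f:      # hypothetical file names
        user_cert = x509.load_pem_x509_certificate(f.read())
    with open("ca_cert.pem", "rb") as f:
        ca_cert = x509.load_pem_x509_certificate(f.read())

    # Raises InvalidSignature if the CA key did not sign this certificate.
    ca_cert.public_key().verify(
        user_cert.signature,
        user_cert.tbs_certificate_bytes,
        padding.PKCS1v15(),                     # assumes an RSA CA key
        user_cert.signature_hash_algorithm,
    )
    print("certificate chains to the CA")
    ```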

  19. Watershed Management Optimization Support Tool (WMOST) ...

    EPA Pesticide Factsheets

    EPA's Watershed Management Optimization Support Tool (WMOST) version 2 is a decision support tool designed to facilitate integrated water management by communities at the small watershed scale. WMOST allows users to look across management options in stormwater (including green infrastructure), wastewater, drinking water, and land conservation programs to find the least-cost solutions. The pdf version of these presentations accompanies the recorded webinar with closed captions, to be posted on the WMOST web page. The webinar was recorded at the time a training workshop took place for EPA's Watershed Management Optimization Support Tool (WMOST, v2).
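
    The least-cost screening WMOST performs is, at heart, a constrained optimization. The toy linear program below picks the cheapest mix of water-management options subject to a demand constraint and capacity caps; the option names, costs, and capacities are invented for illustration and are not WMOST data.

    ```python
    # Toy least-cost screening across water management options (scipy).
    from scipy.optimize import linprog

    # Decision variables: MG/yr supplied by (0) purchased drinking water,
    # (1) stormwater capture via green infrastructure, (2) water reuse.
    cost = [1500.0, 900.0, 1200.0]        # $/MG, assumed
    A_ub = [[-1.0, -1.0, -1.0]]           # meet >= 100 MG/yr of demand
    b_ub = [-100.0]
    bounds = [(0, None), (0, 40.0), (0, 30.0)]   # capacity caps, assumed

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    print(res.x, res.fun)   # least-cost mix and total annual cost
    ```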

  20. The Federal Geospatial Platform a shared infrastructure for publishing, discovering and exploiting public data and spatial applications.

    NASA Astrophysics Data System (ADS)

    Dabolt, T. O.

    2016-12-01

    The proliferation of open data and data services continues to thrive and is creating new challenges for how researchers, policy analysts and other decision makers can quickly discover and use relevant data. While traditional metadata catalog approaches used by applications such as data.gov prove to be useful starting points for data search, they can quickly frustrate end users who are seeking ways to quickly find and then use data in machine-to-machine environments. The Geospatial Platform is overcoming these obstacles and providing end users and application developers with a richer, more productive user experience. The Geospatial Platform leverages a collection of open source and commercial technology hosted on Amazon Web Services, providing an ecosystem of services delivering trusted, consistent data in open formats to all users, as well as a shared infrastructure for federal partners to serve their spatial data assets. It supports a diverse array of communities of practice, on topics ranging from the 16 National Geospatial Data Assets themes to homeland security and climate adaptation. Come learn how you can contribute your data and leverage others', or check it out on your own at https://www.geoplatform.gov/

  1. Communications system evolutionary scenarios for Martian SEI support

    NASA Technical Reports Server (NTRS)

    Kwong, Paulman W.; Bruno, Ronald C.

    1992-01-01

    In the Space Exploration Initiative (SEI) mission scenarios, expanding human presence is the primary driver for high data rate Mars-Earth communications. To support an expanding human presence, the data rate requirement will grow gradually, following the phased implementation over time of the evolving SEI mission. Similarly, the growth and evolution of the space communications infrastructure to serve this requirement will also be gradual, to efficiently exploit the useful life of the installed communications infrastructure and to ensure backward compatibility with long-term users. In work conducted over the past year, a number of alternatives for supporting high data rate Mars-Earth communications have been analyzed with respect to their compatibility with gradual evolution of the space communications infrastructure. The alternatives include RF, millimeter wave (MMW), and optical implementations, and incorporate both surface and space-based relay terminals in the Mars and Earth regions. Each alternative is evaluated with respect to its ability to efficiently meet a projected growth in data rate over time, its technology readiness, and its capability to satisfy the key conditions and constraints imposed by evolutionary transition. As a result of this analysis, a set of attractive alternative communications architectures has been identified and described, and a road map is developed that illustrates the most rational and beneficial evolutionary paths for the communications infrastructure.

  2. Spatial Data Services for Interdisciplinary Applications from the NASA Socioeconomic Data and Applications Center

    NASA Astrophysics Data System (ADS)

    Chen, R. S.; MacManus, K.; Vinay, S.; Yetman, G.

    2016-12-01

    The Socioeconomic Data and Applications Center (SEDAC), one of 12 Distributed Active Archive Centers (DAACs) in the NASA Earth Observing System Data and Information System (EOSDIS), has developed a variety of operational spatial data services aimed at providing online access, visualization, and analytic functions for geospatial socioeconomic and environmental data. These services include: open web services that implement Open Geospatial Consortium (OGC) specifications such as Web Map Service (WMS), Web Feature Service (WFS), and Web Coverage Service (WCS); spatial query services that support Web Processing Service (WPS) and Representational State Transfer (REST); and web map clients and a mobile app that utilize SEDAC and other open web services. These services may be accessed from a variety of external map clients and visualization tools such as NASA's WorldView, NOAA's Climate Explorer, and ArcGIS Online. More than 200 data layers related to population, settlements, infrastructure, agriculture, environmental pollution, land use, health, hazards, climate change and other aspects of sustainable development are available through WMS, WFS, and/or WCS. Version 2 of the SEDAC Population Estimation Service (PES) supports spatial queries through WPS and REST in the form of a user-defined polygon or circle. The PES returns an estimate of the population residing in the defined area for a specific year (2000, 2005, 2010, 2015, or 2020) based on SEDAC's Gridded Population of the World version 4 (GPWv4) dataset, together with measures of accuracy. The SEDAC Hazards Mapper and the recently released HazPop iOS mobile app enable users to easily submit spatial queries to the PES and see the results. SEDAC has developed an operational virtualized backend infrastructure to manage these services and support their continual improvement as standards change, new data and services become available, and user needs evolve. An ongoing challenge is to improve the reliability and performance of the infrastructure, in conjunction with external services, to meet both research and operational needs.
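
    A PES-style spatial query reduces to posting a polygon and a year and reading back an estimate. The sketch below shows the shape of such a request; the endpoint and payload field names are hypothetical stand-ins, not the documented PES interface.

    ```python
    # Sketch of a REST spatial query in the PES style; URL and fields assumed.
    import json
    import urllib.request

    PES_URL = "https://sedac.example.org/pes/v2/estimate"   # hypothetical

    payload = json.dumps({
        "year": 2015,
        # user-defined polygon as a closed lon/lat ring
        "polygon": [[-74.3, 40.5], [-73.7, 40.5], [-73.7, 40.9],
                    [-74.3, 40.9], [-74.3, 40.5]],
    }).encode()

    req = urllib.request.Request(
        PES_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))   # e.g. a population estimate plus accuracy
    ```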

  3. Infrastructure Management Information System User Manual

    DOT National Transportation Integrated Search

    1998-10-01

    This publication describes and explains the user interface for the Infrastructure Management Information System (IMIS). The IMIS is designed to answer questions regarding public water supply, wastewater treatment, and census information. This publica...

  4. A reference architecture for integrated EHR in Colombia.

    PubMed

    de la Cruz, Edgar; Lopez, Diego M; Uribe, Gustavo; Gonzalez, Carolina; Blobel, Bernd

    2011-01-01

    The implementation of national EHR infrastructures has to start with a detailed definition of the overall structure and behavior of the EHR system (the system architecture). Architectures have to be open, scalable, flexible, user-accepted and user-friendly, trustworthy, and based on standards, including terminologies and ontologies. The GCM provides an architectural framework created for analyzing any kind of system, including EHR system architectures. The objective of this paper is to propose a reference architecture for the implementation of an integrated EHR in Colombia, based on the current state of systems architectural models and EHR standards. The proposed EHR architecture defines a set of services (elements) and their interfaces to support the exchange of clinical documents, offering an open, scalable, flexible and semantically interoperable infrastructure. The architecture was tested in a pilot tele-consultation project in Colombia, where dental EHRs are exchanged.

  5. Quasi-experimental study designs series-paper 11: supporting the production and use of health systems research syntheses that draw on quasi-experimental study designs.

    PubMed

    Lavis, John N; Bärnighausen, Till; El-Jardali, Fadi

    2017-09-01

    To describe the infrastructure available to support the production of policy-relevant health systems research syntheses, particularly those incorporating quasi-experimental evidence, and the tools available to support the use of these syntheses. Literature review. The general challenges associated with the available infrastructure include its sporadic nature or limited coverage of issues and countries, whereas the specific challenges related to policy-relevant syntheses of quasi-experimental evidence include the lack of a mechanism to register synthesis titles and scoping review protocols, the limited number of groups preparing user-friendly summaries, and the difficulty of finding quasi-experimental studies for inclusion in rapid syntheses and research syntheses more generally. Although some new tools have emerged in recent years, such as guidance workbooks and citizen briefs and panels, challenges in using the available tools to support the use of policy-relevant syntheses of quasi-experimental evidence arise from such studies potentially being harder for policymakers and stakeholders to commission and understand. Policymakers, stakeholders, and researchers need to expand the coverage and institutionalize the use of the available infrastructure and tools to support the use of health system research syntheses containing quasi-experimental evidence. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. WCSC environmental process improvement study and demonstration program

    NASA Technical Reports Server (NTRS)

    Pawlick, Joseph F., Jr.; Severo, Orlando C.

    1993-01-01

    CSTAR's objective to develop commercial infrastructure is multi-faceted and includes diverse elements of the orbital and suborbital missions. The goals of this eight-month project with the WCSC are aimed at simplifying the environmental assessment, approval, and licensing process for commercial users. Included in this overarching set of goals are two specific processes: (1) air pollution control, and (2) the environmental assessment mechanism. Resolution of the potentially user-unfriendly aspects of these environmentally sensitive criteria is readily transferable to other ranges where commercial space activity will be supported.

  7. Introducing Live ePortfolios to Support Self Organised Learning

    ERIC Educational Resources Information Center

    Kirkham, Thomas; Winfield, Sandra; Smallwood, Angela; Coolin, Kirstie; Wood, Stuart; Searchwell, Louis

    2009-01-01

    This paper presents a platform on which a new generation of applications targeted to aid the self-organised learner can be presented. The new application is enabled by innovations in trust-based security of data built upon emerging infrastructures to aid federated data access in the UK education sector. Within the proposed architecture, users and…

  8. Strategic Planning for a Data-Driven, Shared-Access Research Enterprise: Virginia Tech Research Data Assessment and Landscape Study

    ERIC Educational Resources Information Center

    Shen, Yi

    2016-01-01

    The data landscape study at Virginia Tech addresses the changing modes of faculty scholarship and supports the development of a user-centric data infrastructure, management, and curation system. The study investigates faculty researchers' current practices in organizing, describing, and preserving data and the emerging needs for services and…

  9. Missile Defense Information Technology Small Business Conference

    DTIC Science & Technology

    2009-09-01

    Briefing-slide fragments covering the supported user base; numbers of workstations, servers, special circuits, and sites; MDA contract areas (Ground Test, MDSEC, Infrastructure, BMDS Support, JTAAS-SETA, Modeling & Simulation, Analysis, Tenants); and MDA engineering design functions. No coherent abstract is recoverable from the extracted fragments.

  10. Visualizing common operating picture of critical infrastructure

    NASA Astrophysics Data System (ADS)

    Rummukainen, Lauri; Oksama, Lauri; Timonen, Jussi; Vankka, Jouko

    2014-05-01

    This paper presents a solution for visualizing the common operating picture (COP) of critical infrastructure (CI). The purpose is to improve the situational awareness (SA) of the strategic-level actor and the source system operator in order to support decision making. The information is obtained through the Situational Awareness of Critical Infrastructure and Networks (SACIN) framework. The system consists of an agent-based solution for gathering, storing, and analyzing the information, together with the user interface (UI) presented in this paper. The UI consists of multiple views visualizing information from the CI in different ways. The CI actors are categorized into 11 separate sectors, and events are used to present meaningful incidents. Past and current states, together with geographical distribution and logical dependencies, are presented to the user. The current states are visualized as segmented circles representing event categories. The geographical distribution of assets is displayed with a well-known map tool. Logical dependencies are presented in a simple directed graph, and users also have a timeline to review past events. The objective of the UI is to provide an easily understandable overview of the CI status. Testing methods such as a walkthrough, an informal walkthrough, and the Situation Awareness Global Assessment Technique (SAGAT) were therefore used in the evaluation of the UI. Results showed that users were able to obtain an understanding of the current state of the CI, and the usability of the UI was rated as good. In particular, the designated display for the CI overview and the timeline were found to be efficient.

  11. Managing Sustainable Data Infrastructures: The Gestalt of EOSDIS

    NASA Technical Reports Server (NTRS)

    Behnke, Jeanne; Lowe, Dawn; Lindsay, Francis; Lynnes, Chris; Mitchell, Andrew

    2016-01-01

    EOSDIS epitomizes a system of systems, whose many varied and distributed parts are integrated into a single, highly functional, organized science data system. A distributed architecture was adopted to ensure discipline-specific support for the science data, while also leveraging standards and establishing policies and tools to enable interdisciplinary research and analysis across multiple scientific instruments. EOSDIS is composed of system elements such as geographically distributed archive centers used to manage the stewardship of data. The infrastructure consists of the underlying capabilities and connections that enable the primary system elements to function together. For example, one key infrastructure component is the common metadata repository, which enables discovery of all data within the EOSDIS system. EOSDIS employs processes and standards to ensure partners can work together effectively and provide coherent services to users.

  12. The Climate-G Portal: a Grid Enabled Scientifc Gateway for Climate Change

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Negro, Alessandro; Aloisio, Giovanni

    2010-05-01

    Grid portals are web gateways that aim to conceal the underlying infrastructure through pervasive, transparent, user-friendly, ubiquitous and seamless access to heterogeneous and geographically spread resources (i.e. storage, computational facilities, services, sensors, networks, databases). Ultimately, they provide an enhanced problem-solving environment able to deal with modern, large-scale scientific and engineering problems. Scientific gateways can revolutionize the way scientists and researchers organize and carry out their activities. Access to distributed resources, complex workflow capabilities, and community-oriented functionalities are just some of the features that can be provided by such a web-based environment. In the context of the EGEE NA4 Earth Science Cluster, Climate-G is a distributed testbed focusing on climate change research topics. The Euro-Mediterranean Center for Climate Change (CMCC) is actively participating in the testbed, providing the scientific gateway (the Climate-G Portal) for access to the entire infrastructure. The Climate-G Portal has to face critical challenges and satisfy key requirements; the most relevant ones are presented and discussed in the following. Transparency: the portal has to provide transparent access to the underlying infrastructure, shielding users from low-level details and the complexity of a distributed grid environment. Security: users must be authenticated and authorized on the portal to access and exploit portal functionalities, and a wide set of roles is needed to assign the proper one to each user; access to the computational grid must be completely secured, since the target infrastructure for running jobs is a production grid environment, so a security infrastructure (based on X.509v3 digital certificates) is strongly needed. Pervasiveness and ubiquity: access to the system must be pervasive and ubiquitous, which follows naturally from the web-based approach. Usability and simplicity: the portal has to provide simple, high-level and user-friendly interfaces to ease access to and exploitation of the entire system. Coexistence of general-purpose and domain-oriented services: along with general-purpose services (file transfer, job submission, etc.), the portal has to provide domain-based services and functionalities; subsetting of data, visualization of 2D maps around a virtual globe, and delivery of maps through OGC-compliant interfaces (e.g. Web Map Service, WMS) are just some examples. Since April 2009, about 70 users (85% from the climate change community) have been granted access to the portal. A key challenge of this work is the idea of providing users with an integrated working environment: a place where scientists can find huge amounts of data, complete metadata support, a wide set of data access services, data visualization and analysis tools, easy access to the underlying grid infrastructure, and advanced monitoring interfaces.

  13. Federation in genomics pipelines: techniques and challenges.

    PubMed

    Chaterji, Somali; Koo, Jinkyu; Li, Ninghui; Meyer, Folker; Grama, Ananth; Bagchi, Saurabh

    2017-08-29

    Federation is a popular concept in building distributed cyberinfrastructures, whereby computational resources are provided by multiple organizations through a unified portal, decreasing the complexity of moving data back and forth among multiple organizations. Federation has been used in bioinformatics only to a limited extent, namely for federation of datastores, e.g. the SBGrid Consortium for structural biology and the Gene Expression Omnibus (GEO) for functional genomics. Here, we posit that it is important to federate both computational resources (CPU, GPU, FPGA, etc.) and datastores to support popular bioinformatics portals, with fast-increasing data volumes and increasing processing requirements. A prime example, and one that we discuss here, is in genomics and metagenomics. It is critical that the processing of the data be done without having to transport the data across large network distances. We exemplify our design and development through our experience with metagenomics-RAST (MG-RAST), the most popular metagenomics analysis pipeline. Currently, it is hosted completely at Argonne National Laboratory. However, through a recently started collaborative National Institutes of Health project, we are taking steps toward federating this infrastructure. Because MG-RAST is a widely used resource, we have to move toward federation without disrupting its 50K annual users. In this article, we describe the computational tools that will be useful for federating a bioinformatics infrastructure and the open research challenges that we see in federating such infrastructures. It is hoped that our manuscript can serve to spur greater federation of bioinformatics infrastructures by showing the steps involved, and thus allow them to scale to support larger user bases. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
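
    The data-locality principle behind this federation (route work to where a replica of the input already lives) can be shown in a few lines. The catalogs, site names, and load figures below are invented for illustration; MG-RAST's actual scheduler is far more involved.

    ```python
    # Toy placement rule: run a job at the least-loaded site holding a replica.
    REPLICA_CATALOG = {                    # dataset -> sites (invented)
        "mgm4440026.3": {"anl", "cloud-west"},
        "mgm4441679.3": {"cloud-east"},
    }
    SITE_LOAD = {"anl": 0.7, "cloud-west": 0.3, "cloud-east": 0.5}  # utilization

    def place_job(dataset_id: str) -> str:
        """Pick the least-loaded federated site that already has the data."""
        sites = REPLICA_CATALOG.get(dataset_id, set(SITE_LOAD))
        return min(sites, key=SITE_LOAD.__getitem__)

    print(place_job("mgm4440026.3"))   # -> "cloud-west"
    ```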

  14. Common MD-IS infrastructure for wireless data technologies

    NASA Astrophysics Data System (ADS)

    White, Malcolm E.

    1995-12-01

    The expansion of global networks, caused by growth and acquisition within the commercial sector, is forcing users to move away from proprietary systems in favor of standards-based, open systems architectures. The same is true in the wireless data communications arena, where operators of proprietary wireless data networks have endeavored to convince users that their particular implementation provides the best service. However, most of the vendors touting these solutions have failed to gain the critical mass that might have led to their technologies' adoption as a de facto standard, and have been held back by a lack of applications and the high cost of mobile devices. The advent of the cellular digital packet data (CDPD) specification and its support by much of the public cellular service industry has set the stage for ubiquitous coverage of wireless packet data services across the United States. Although CDPD was developed for operation over the advanced mobile phone system (AMPS) cellular network, many of the defined protocols are industry standards that can be applied to the construction of a common infrastructure supporting multiple airlink standards. This approach offers overall cost savings and operational efficiency for service providers, hardware and software developers, and end users alike, and could be equally advantageous for those service operators using proprietary end system protocols, should they wish to migrate towards an open standard.

  15. International epidemiology of HIV and AIDS among injecting drug users.

    PubMed

    Des Jarlais, D C; Friedman, S R; Choopanya, K; Vanichseni, S; Ward, T P

    1992-10-01

    HIV/AIDS and IV drug use (IVDU) are of significant multinational scope and growing. Increased IVDU in many countries is supported by geographical proximity to illicit drug-trafficking distribution routes, law-enforcement efforts that increase the demand for more efficient drug distribution and consumption, and infrastructural and social modernization. Given the failure of intensified law-enforcement efforts to thwart the use and proliferation of illegal drugs, countries with substantial IVDU should shift focus from preventing use to preventing HIV transmission within drug-user populations. With HIV seroprevalence rates rapidly reaching 40-50% in some developing-country IVDU groups, a variety of prevention programs is warranted. Such programs should be supported and implemented while prevention remains feasible. This paper examines the variation in HIV seroprevalence among IV drug users, rapid HIV spread among users, HIV among IVDUs in Bangkok, emerging issues in HIV transmission among IVDUs, non-AIDS manifestations of HIV infection among IVDUs, prevention programs and their effectiveness, and harm reduction.

  16. StarTrax --- The Next Generation User Interface

    NASA Astrophysics Data System (ADS)

    Richmond, Alan; White, Nick

    StarTrax is a software package to be distributed to end users for installation on their local computing infrastructure. It will provide access to many services of the HEASARC, i.e. bulletins, catalogs, and proposal and analysis tools, initially for the ROSAT MIPS (Mission Information and Planning System) and later for the Next Generation Browse. A user activating the GUI will reach all HEASARC capabilities through a uniform view of the system, independent of the local computing environment and of the networking method used to access StarTrax. It is intended for users who prefer the point-and-click metaphor of modern GUI technology to classical command-line interfaces (CLI). Notable strengths include ease of use, excellent portability, very robust server support, a feedback button on every dialog, and a painstakingly crafted User Guide. It is designed to support a large range of input devices, including terminals, workstations and personal computers. XVT's Portability Toolkit is used to build the GUI in C/C++ to run on OSF/Motif (UNIX or VMS), OPEN LOOK (UNIX), Macintosh, MS-Windows (DOS), or character-based systems.

  17. EUFAR the unique portal for airborne research in Europe

    NASA Astrophysics Data System (ADS)

    Gérard, Elisabeth; Brown, Philip

    2016-04-01

    Created in 2000 and supported by the EU Framework Programmes since then, EUFAR was born out of the necessity to create a central network and access point for the airborne research community in Europe. With the aim of supporting researchers by granting them access to research infrastructures not accessible in their home countries, EUFAR also provides technical support and training in the field of airborne research for the environmental and geo-sciences. Today, EUFAR2 (2014-2018) coordinates and facilitates transnational access to 18 instrumented aircraft and 3 remote-sensing instruments through the 13 operators who are part of EUFAR's current 24-partner European consortium. In addition, the current project supports networking and research activities focused on providing an enabling environment for, and promoting, airborne research. The EUFAR2 activities cover three objectives, supported by the internet website www.eufar.net: (i) Institutional: improvement of access to the research infrastructures and development of the future fleet according to the recommendations of the strategic advisory committee (SAC); (ii) Innovation: improvement of scientific knowledge and promotion of innovative instruments, processes and services for the emergence of new industrial technologies, with an identification of industrial needs by the SAC; (iii) Service: optimisation and harmonisation of the use of the research infrastructures through the development of the community of young researchers in airborne science, of standards and protocols, and of the airborne central database. With the launch of a brand-new website (www.eufar.net) in mid-November 2015, EUFAR aims to improve the user experience on the website, which serves as a source of information and a hub where users can collaborate, learn, share expertise and best practices, and apply for transnational access and for funded education and training opportunities within the network. With its newly designed, eye-catching interface, the website offers easy navigation and user-friendly functionality. New features also include a section on news and airborne research stories to keep users up to date on EUFAR's activities, a career section, photo galleries, and much more. By elaborating new solutions for the web portal, EUFAR continues to serve as an interactive and dynamic platform bringing together experts, early-stage researchers, operators, data users, industry and other stakeholders in the airborne research community. A main focus of the current project is the establishment of a sustainable legal structure for EUFAR. This is critical to ensuring the continuity of EUFAR and securing, at the least, partial financial independence from the European Commission, which has funded the project since its start. After careful examination of the different legal forms relevant for EUFAR, the arguments are strongly in favour of establishing an international non-profit association under Belgian law (AISBL). Together with the implementation of an Open Access scheme by means of resource-sharing to support the mobility of personnel across countries, envisaged in 2016, such a sustainable structure would contribute substantially toward broadening the user base of existing airborne research facilities in Europe and mobilising additional resources to this end. In essence, this would cement EUFAR's position as the key portal for airborne research in Europe.

  18. Building a federated data infrastructure for integrating the European Supersites

    NASA Astrophysics Data System (ADS)

    Freda, Carmela; Cocco, Massimo; Puglisi, Giuseppe; Borgstrom, Sven; Vogfjord, Kristin; Sigmundsson, Freysteinn; Ergintav, Semih; Meral Ozel, Nurcan; Consortium, Epos

    2017-04-01

    The integration of satellite and in-situ Earth observations fostered by the GEO Geohazards Supersites and National Laboratories (GSNL) initiative is aimed at providing access to spaceborne and in-situ geoscience data for selected sites prone to earthquakes, volcanic eruptions and/or other environmental hazards. The initiative was launched with the "Frascati declaration" at the conclusion of the 3rd International Geohazards workshop of the Group on Earth Observations (GEO), held in November 2007 in Frascati, Italy. The development of the GSNL and the integration of in-situ and space Earth observations require the implementation of in-situ e-infrastructures and services for scientific users and other stakeholders. The European Commission has funded three projects to support the development of the European supersites: FUTUREVOLC for the Icelandic volcanoes, MED-SUV for Mt. Etna and Campi Flegrei/Vesuvius (Italy), and MARSITE for the Marmara Sea near-fault observatory (Turkey). Because the establishment of a network of supersites in Europe will, among other advantages, facilitate the link with the Global Earth Observation System of Systems (GEOSS), EPOS (the European Plate Observing System) has supported these initiatives by incorporating the observing systems and infrastructures developed in these three projects into its implementation plan, which aims to integrate existing and new research infrastructures for the solid Earth sciences. In this contribution we present the EPOS federated approach and the key actions needed to: i) develop sustainable long-term Earth observation strategies preceding and following earthquakes and volcanic eruptions; ii) develop the innovative integrated e-infrastructure component necessary to create an effective service for users; iii) promote the strategic and outreach actions needed to meet specific user needs; and iv) develop expertise in the use and interpretation of Supersites data in order to promote capacity building and the timely transfer of scientific knowledge. All of these actions will facilitate new scientific discoveries through the availability of unprecedented data sets, and will increase resilience and preparedness in society. Making observations of the processes controlling natural phenomena readily available, and promoting their comparison with numerical simulations and their interpretation through theoretical analysis, will foster scientific excellence in solid Earth research. The EPOS federated approach may serve as a model for other regions of the world and could therefore contribute to developing the supersite initiative globally.

  19. SeaDataNet - Pan-European infrastructure for marine and ocean data management: Unified access to distributed data sets (www.seadatanet.org)

    NASA Astrophysics Data System (ADS)

    Schaap, Dick M. A.; Maudire, Gilbert

    2010-05-01

    SeaDataNet is a leading infrastructure in Europe for marine & ocean data management. It is actively operating and further developing a pan-European infrastructure for managing, indexing and providing access to ocean and marine data sets and data products, acquired via research cruises and other observational activities, in situ and by remote sensing. The basis of SeaDataNet is the interconnection of 40 National Oceanographic Data Centres (NODCs) and marine data centres from 35 countries around the European seas into a distributed network of data resources with common standards for metadata, vocabularies, data transport formats, quality control methods and flags, and access. Most of the NODCs also operate and/or are developing national networks with other institutes in their countries to ensure national coverage and long-term stewardship of available data sets. The majority of data managed by SeaDataNet partners concerns physical oceanography, marine chemistry and hydrography, with a substantial volume of marine biology, geology and geophysics. These are partly owned by the partner institutes themselves and for a major part owned by other organizations in their countries. The SeaDataNet infrastructure is implemented with support of the EU via the FP6 SeaDataNet project, to provide a pan-European data management system adapted both to the fragmented observation system and to the users' need for integrated access to data, metadata, products and services. The SeaDataNet project has a duration of 5 years and started in 2006, but it builds upon earlier data management infrastructure projects, undertaken over a period of 20 years by an expanding network of oceanographic data centres from the countries around all European seas. Its predecessor project, Sea-Search, had a strict focus on metadata. SeaDataNet maintains significant interest in the further development of the metadata infrastructure, extending its services with the provision of easy data access and generic data products. Version 1 of its infrastructure upgrade was launched in April 2008 and work is now well underway to include all 40 data centres at the V1 level. It comprises the network of 40 interconnected data centres (NODCs) and a central SeaDataNet portal. V1 provides users with a unified and transparent overview of the metadata and controlled access to the large collections of data sets that are managed at these data centres.
    The SeaDataNet V1 infrastructure comprises the following middleware services:
    • Discovery services = Metadata directories and User interfaces
    • Vocabulary services = Common vocabularies and Governance
    • Security services = Authentication, Authorization & Accounting
    • Delivery services = Requesting and Downloading of data sets
    • Viewing services = Mapping of metadata
    • Monitoring services = Statistics on system usage and performance, and Registration of data requests and transactions
    • Maintenance services = Entry and updating of metadata by data centres
    Good progress is also being made with extending the SeaDataNet infrastructure with V2 services:
    • Viewing services = Quick views and Visualisation of data and data products
    • Product services = Generic and standard products
    • Exchange services = Transformation of SeaDataNet portal CDI output to INSPIRE compliance
    As a basis for the V1 services, common standards have been defined for metadata and data formats, common vocabularies, quality flags and quality control methods, based on international standards such as ISO 19115, OGC, NetCDF (CF) and ODV, on best practices from IOC and ICES, and following INSPIRE developments. An important objective of the SeaDataNet V1 infrastructure is to provide transparent access to the distributed data sets via a unique user interface and download service. In the SeaDataNet V1 architecture, the Common Data Index (CDI) V1 metadata service provides the link between discovery and delivery of data sets. The CDI user interface enables users to gain detailed insight into the availability and geographical distribution of marine data archived at the connected data centres, and provides sufficient information to allow the user to assess the data's relevance. Moreover, the CDI user interface provides the means for downloading data sets in common formats via a transaction mechanism. The SeaDataNet portal provides registered users access to these distributed data sets via the CDI V1 Directory and a shopping-basket mechanism. This allows registered users to locate data of interest and submit data requests. The requests are forwarded automatically from the portal to the relevant SeaDataNet data centres. This process is controlled via the Request Status Manager (RSM) web service at the portal and a Download Manager (DM) Java software module implemented at each of the data centres. The RSM also enables registered users to check the status of their requests regularly and to download data sets after access has been granted. Data centres can follow all transactions for their data sets online and can handle requests which require their consent. The actual delivery of data sets takes place between the user and the selected data centre. Very good progress is being made with connecting all SeaDataNet data centres and their data sets to the CDI V1 system. At present, the CDI V1 system provides users functionality to discover and download more than 500,000 data sets, a number which is steadily increasing. The SeaDataNet architecture provides a coherent system of the various V1 services and the inclusion of the V2 services. For the implementation, a range of technical components have been defined and developed. These make use of recent web technologies and also comprise Java components, to provide multi-platform support and syntactic interoperability. To facilitate sharing of resources and interoperability, SeaDataNet has adopted the technology of SOAP Web services for various communication tasks.
    The SeaDataNet architecture has been designed as a multi-disciplinary system from the beginning. It is able to support a wide variety of data types and to serve several sector communities. SeaDataNet is willing to share its technologies and expertise, to spread and expand its approach, and to build bridges to other well-established infrastructures in the marine domain. Therefore SeaDataNet has developed a strategy of seeking active cooperation on a national scale with other data-holding organisations via its NODC networks, and on an international scale with other European and international data management initiatives and networks. This is done with the objective of achieving a wider coverage of data sources and overall interoperability between data infrastructures in the marine and ocean domains. Recent examples are the EU FP7 projects Geo-Seas for geology and geophysical data sets, UpgradeBlackSeaScene for a Black Sea data management infrastructure, and CaspInfo for a Caspian Sea data management infrastructure, as well as the EU EMODNET pilot projects for hydrographic, chemical, and biological data sets. All these projects are adopting the SeaDataNet standards and extending its services. Active cooperation also takes place with EuroGOOS and MyOcean in the domain of real-time and delayed-mode metocean monitoring data. SeaDataNet partners: IFREMER (France), MARIS (Netherlands), HCMR/HNODC (Greece), ULg (Belgium), OGS (Italy), NERC/BODC (UK), BSH/DOD (Germany), SMHI (Sweden), IEO (Spain), RIHMI/WDC (Russia), IOC (International), ENEA (Italy), INGV (Italy), METU (Turkey), CLS (France), AWI (Germany), IMR (Norway), NERI (Denmark), ICES (International), EC-DG JRC (International), MI (Ireland), IHPT (Portugal), RIKZ (Netherlands), RBINS/MUMM (Belgium), VLIZ (Belgium), MRI (Iceland), FIMR (Finland), IMGW (Poland), MSI (Estonia), IAE/UL (Latvia), CMR (Lithuania), SIO/RAS (Russia), MHI/DMIST (Ukraine), IO/BAS (Bulgaria), NIMRD (Romania), TSU (Georgia), INRH (Morocco), IOF (Croatia), PUT (Albania), NIB (Slovenia), UoM (Malta), OC/UCY (Cyprus), IOLR (Israel), NCSR/NCMS (Lebanon), CNR-ISAC (Italy), ISMAL (Algeria), INSTM (Tunisia)

  20. Barriers to integrating information technology in Saudi Arabia science education

    NASA Astrophysics Data System (ADS)

    Al-Alwani, Abdulkareem Eid Salamah

    This study examined the current level of information technology integration in science education in the Yanbu school district in Saudi Arabia, and barriers to its use. Sub-domains investigated included: infrastructure and resources, policy and support, science teachers' personal beliefs, and staff development. A survey determined demographic data and level of technology implementation, personal computer use, and current instructional practice. Mean frequency of information technology use was 1-2 times during a semester. Science teachers rated barriers limiting use of technology in teaching on a scale ranging from 0 (does not limit) to 3 (greatly limits). Results found all four factors were significant barriers: infrastructure and resources (M = 2.06, p < .001), staff development (M = 2.02, p < .001), policy and support (M = 1.84, p < .041) and science teachers' personal beliefs regarding technology (M = 1.15, p < .001). Regression analysis found that location, level of training, teaching experience, and gender predicted frequency of use (F(3,168) = 3.63, R2 = .10, p < .014). Teachers who received in-service training programs used IT significantly more frequently than those who did not receive any training (t = 2.41, p = 0.017). Teachers who received both pre-service and in-service training used IT significantly more frequently than those who did not receive any training (t = 2.61, p = 0.01). Low technology users perceived that there was no support or incentive for using technology, while high technology users did not perceive these barriers (r = -0.18, p = .01). High technology users held positive personal beliefs about how information technology benefits learning, while low technology users held negative beliefs about technology use (r = -0.20, p = .003). The more barriers science teachers experienced, the less likely they were to be information technology users (r = -0.16, p = .02). There is a need for more computers in schools, more teacher training, more time for teachers to learn to use technology, and more readily available technical support staff. Further studies are needed to represent all science teachers in Saudi Arabia, to assess the technology capacity of all schools, and to assess in-service staff development strategies.

  1. A User Authentication Scheme Based on Elliptic Curves Cryptography for Wireless Ad Hoc Networks

    PubMed Central

    Chen, Huifang; Ge, Linlin; Xie, Lei

    2015-01-01

    Because a wireless ad hoc network (WANET) operates without infrastructure support, it is vulnerable to various attacks. Moreover, user authentication is the first safety barrier in a network. Mutual trust is achieved by a protocol that enables communicating parties to authenticate each other and exchange session keys at the same time. For the resource-constrained WANET, an efficient and lightweight user authentication scheme is necessary. In this paper, we propose a user authentication scheme based on the self-certified public key system and elliptic curve cryptography for a WANET. Using the proposed scheme, efficient two-way user authentication and secure session key agreement can be achieved. Security analysis shows that our proposed scheme is resilient to commonly known attacks. In addition, performance analysis shows that our proposed scheme performs similarly to or better than some existing user authentication schemes. PMID:26184224
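
    The session-key agreement at the heart of such schemes can be illustrated with elliptic-curve Diffie-Hellman, the primitive on which ECC-based authentication builds. Below is a minimal Python sketch using the cryptography package; it illustrates only the underlying key-agreement mechanism, not the authors' self-certified protocol.

    ```python
    # Minimal sketch of elliptic-curve Diffie-Hellman (ECDH) session-key
    # agreement, the primitive underlying ECC-based authentication schemes.
    # Illustration only; not the paper's self-certified protocol.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each node generates an ephemeral elliptic-curve key pair.
    node_a_priv = ec.generate_private_key(ec.SECP256R1())
    node_b_priv = ec.generate_private_key(ec.SECP256R1())

    # Each side combines its own private key with the peer's public key;
    # both arrive at the same shared secret.
    shared_a = node_a_priv.exchange(ec.ECDH(), node_b_priv.public_key())
    shared_b = node_b_priv.exchange(ec.ECDH(), node_a_priv.public_key())
    assert shared_a == shared_b

    # Derive a symmetric session key from the shared secret.
    session_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"wanet-session",
    ).derive(shared_a)
    ```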

  2. A User Authentication Scheme Based on Elliptic Curves Cryptography for Wireless Ad Hoc Networks.

    PubMed

    Chen, Huifang; Ge, Linlin; Xie, Lei

    2015-07-14

    Because a wireless ad hoc network (WANET) operates without infrastructure support, it is vulnerable to various attacks. Moreover, user authentication is the first safety barrier in a network. Mutual trust is achieved by a protocol that enables communicating parties to authenticate each other and exchange session keys at the same time. For the resource-constrained WANET, an efficient and lightweight user authentication scheme is necessary. In this paper, we propose a user authentication scheme based on the self-certified public key system and elliptic curve cryptography for a WANET. Using the proposed scheme, efficient two-way user authentication and secure session key agreement can be achieved. Security analysis shows that our proposed scheme is resilient to commonly known attacks. In addition, performance analysis shows that our proposed scheme performs similarly to or better than some existing user authentication schemes.

  3. Modeling Freight Ocean Rail and Truck Transportation Flows to Support Policy Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gearhart, Jared Lee; Wang, Hao; Nozick, Linda Karen

    Freight transportation represents about 9.5% of GDP, is responsible for about 8% of greenhouse gas emissions, and supports the import and export of about $3.6 trillion in international trade; hence it is important that our national freight transportation system is designed and operated efficiently and embodies user fees and other policies that balance costs and environmental consequences. This paper therefore develops a mathematical model to estimate international and domestic freight flows across ocean, rail and truck modes, which can be used to study the impacts of changes in our infrastructure as well as the imposition of new user fees and changes in operating policies. The model is applied to two case studies: (1) a disruption of the maritime ports at Los Angeles/Long Beach similar to the impacts that would be felt in an earthquake; and (2) implementation of new user fees at the California ports.
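
    The core assignment step such a model performs, routing freight at minimum cost subject to capacities and fees, can be sketched as a small minimum-cost-flow problem. The toy network below is hypothetical (node names, costs, capacities, and the port fee are invented for illustration) and uses networkx rather than the authors' model.

    ```python
    # Toy sketch of multimodal freight assignment as a minimum-cost flow,
    # in the spirit of the paper's model; the network, unit costs, and the
    # per-unit "user fee" on the port arc are all hypothetical.
    import networkx as nx

    G = nx.DiGraph()
    G.add_node("overseas", demand=-100)        # 100 units of freight enter here
    G.add_node("inland_market", demand=100)    # ... and must reach this market

    port_fee = 5  # hypothetical user fee added to the port arc's unit cost
    G.add_edge("overseas", "port_LA", weight=10 + port_fee, capacity=80)
    G.add_edge("overseas", "port_alt", weight=14, capacity=60)
    G.add_edge("port_LA", "inland_market", weight=4, capacity=80)   # rail leg
    G.add_edge("port_alt", "inland_market", weight=6, capacity=60)  # truck leg

    flow = nx.min_cost_flow(G)     # optimal routing given fees and capacities
    print(flow["overseas"])        # -> {'port_LA': 80, 'port_alt': 20}
    ```

    Setting the capacity of the port_LA arcs to zero would mimic the port-disruption case study, and varying port_fee mimics the user-fee study.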

  4. The GENIUS Grid Portal and robot certificates: a new tool for e-Science

    PubMed Central

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Milanesi, Luciano; Maggi, Giorgio Pietro; Vicario, Saverio

    2009-01-01

    Background Grid technology is the computing model which allows users to share a wide plethora of distributed computational resources regardless of their geographical location. Up to now, the strict security policies required in order to access distributed computing resources have been a rather big limiting factor when trying to broaden the usage of Grids to a wide community of users. Grid security is indeed based on the Public Key Infrastructure (PKI) of X.509 certificates, and the procedure to obtain and manage those certificates is unfortunately not straightforward. A first step to make Grids more appealing for new users has recently been achieved with the adoption of robot certificates. Methods Robot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful, for instance, to automate grid service monitoring, data processing production, and distributed data collection systems. Basically, these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by everyone interested in running the related applications in a Grid environment using a user-friendly graphical interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended in order to support the new authentication based on the adoption of these robot certificates. Results The work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can thus be extended to users belonging to several scientific domains, providing an asset in raising Grid awareness among a wide number of potential users. Conclusion The adoption of Grid portals extended with robot certificates can contribute substantially to creating transparent access to the computational resources of Grid Infrastructures, enhancing the spread of this new paradigm in researchers' working life to address new global scientific challenges. The evaluated solution can of course be extended to other portals, applications and scientific communities. PMID:19534747

  5. The GENIUS Grid Portal and robot certificates: a new tool for e-Science.

    PubMed

    Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Milanesi, Luciano; Maggi, Giorgio Pietro; Vicario, Saverio

    2009-06-16

    Grid technology is the computing model which allows users to share a wide plethora of distributed computational resources regardless of their geographical location. Up to now, the strict security policies required in order to access distributed computing resources have been a rather big limiting factor when trying to broaden the usage of Grids to a wide community of users. Grid security is indeed based on the Public Key Infrastructure (PKI) of X.509 certificates, and the procedure to obtain and manage those certificates is unfortunately not straightforward. A first step to make Grids more appealing for new users has recently been achieved with the adoption of robot certificates. Robot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful, for instance, to automate grid service monitoring, data processing production, and distributed data collection systems. Basically, these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by everyone interested in running the related applications in a Grid environment using a user-friendly graphical interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended in order to support the new authentication based on the adoption of these robot certificates. The work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can thus be extended to users belonging to several scientific domains, providing an asset in raising Grid awareness among a wide number of potential users. The adoption of Grid portals extended with robot certificates can contribute substantially to creating transparent access to the computational resources of Grid Infrastructures, enhancing the spread of this new paradigm in researchers' working life to address new global scientific challenges. The evaluated solution can of course be extended to other portals, applications and scientific communities.

  6. The Cloud Area Padovana: from pilot to production

    NASA Astrophysics Data System (ADS)

    Andreetto, P.; Costa, F.; Crescente, A.; Dorigo, A.; Fantinel, S.; Fanzago, F.; Sgaravatto, M.; Traldi, S.; Verlato, M.; Zangrando, L.

    2017-10-01

    The Cloud Area Padovana has been running for almost two years. This is an OpenStack-based scientific cloud, spread across two different sites: the INFN Padova Unit and the INFN Legnaro National Labs. The hardware resources have been scaled horizontally and vertically, by upgrading some hypervisors and by adding new ones: currently it provides about 1100 cores. Some in-house developments were also integrated into the OpenStack dashboard, such as a tool for user and project registration with direct support for the INFN-AAI Identity Provider as a new option for user authentication. In collaboration with the EU-funded INDIGO-DataCloud project, integration with Docker-based containers has been tested and will be available in production soon. This computing facility now satisfies the computational and storage demands of more than 70 users affiliated with about 20 research projects. We present here the architecture of this Cloud infrastructure and the tools and procedures used to operate it. We also focus on the lessons learnt in these two years, describing the problems that were found and the corrective actions that had to be applied. We discuss the chosen strategy for upgrades, which balances the need to promptly integrate new OpenStack developments, the demand to reduce downtime of the infrastructure, and the need to limit the effort required for such updates. We also discuss how this Cloud infrastructure is being used, focusing in particular on two big physics experiments which are intensively exploiting this computing facility: CMS and SPES. CMS deployed on the cloud a complex computational infrastructure, composed of several user interfaces for job submission in the Grid environment/local batch queues or for interactive processes; this is fully integrated with the local Tier-2 facility. To avoid a static allocation of the resources, an elastic cluster, based on cernVM, has been configured: it automatically creates and deletes virtual machines according to user needs. SPES, using a client-server system called TraceWin, exploits INFN's virtual resources to perform a very large number of simulations on about a thousand elastically managed nodes.
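
    The elastic-cluster behaviour described above, creating and deleting virtual machines to follow demand, can be pictured with a small reconciliation loop. The decision rule, thresholds, and function names below are hypothetical, standing in for the cernVM-based cluster logic rather than reproducing it.

    ```python
    # Toy sketch of the elastic-cluster idea: grow when jobs queue up,
    # shrink when workers sit idle. The rule and the cap are hypothetical;
    # a real system would call the cloud API to start and stop VMs.
    def reconcile(queued_jobs, idle_vms, running_vms, max_vms=50):
        """Return the VM action needed to match capacity to demand."""
        if queued_jobs > 0 and running_vms < max_vms:
            # Backlog: add enough VMs to drain the queue, within the cap.
            return ("start", min(queued_jobs, max_vms - running_vms))
        if queued_jobs == 0 and idle_vms > 0:
            # No backlog: release idle VMs back to the shared cloud.
            return ("stop", idle_vms)
        return ("noop", 0)

    print(reconcile(queued_jobs=12, idle_vms=0, running_vms=40))  # ('start', 10)
    print(reconcile(queued_jobs=0, idle_vms=5, running_vms=20))   # ('stop', 5)
    ```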

  7. WebProtégé: A Collaborative Ontology Editor and Knowledge Acquisition Tool for the Web

    PubMed Central

    Tudorache, Tania; Nyulas, Csongor; Noy, Natalya F.; Musen, Mark A.

    2012-01-01

    In this paper, we present WebProtégé—a lightweight ontology editor and knowledge acquisition tool for the Web. With the wide adoption of Web 2.0 platforms and the gradual adoption of ontologies and Semantic Web technologies in the real world, we need ontology-development tools that are better suited for the novel ways of interacting, constructing and consuming knowledge. Users today take Web-based content creation and online collaboration for granted. WebProtégé integrates these features as part of the ontology development process itself. We tried to lower the entry barrier to ontology development by providing a tool that is accessible from any Web browser, has extensive support for collaboration, and a highly customizable and pluggable user interface that can be adapted to any level of user expertise. The declarative user interface enabled us to create custom knowledge-acquisition forms tailored for domain experts. We built WebProtégé using the existing Protégé infrastructure, which supports collaboration on the back end side, and the Google Web Toolkit for the front end. The generic and extensible infrastructure allowed us to easily deploy WebProtégé in production settings for several projects. We present the main features of WebProtégé and its architecture and describe briefly some of its uses for real-world projects. WebProtégé is free and open source. An online demo is available at http://webprotege.stanford.edu. PMID:23807872

  8. Web Services and Handle Infrastructure - WDCC's Contributions to International Projects

    NASA Astrophysics Data System (ADS)

    Föll, G.; Weigelt, T.; Kindermann, S.; Lautenschlager, M.; Toussaint, F.

    2012-04-01

    Climate science's demands on data management are growing rapidly as climate models grow in the precision with which they depict spatial structures and in the completeness with which they describe a vast range of physical processes. The ExArch project is exploring the challenges of developing a software management infrastructure which will scale to the multi-exabyte archives of climate data that are likely to be crucial to major policy decisions by the end of the decade. The ExArch approach to the future integration of exascale climate archives is based, on the one hand, on a distributed web-service architecture providing data analysis and quality-control functionality across archives; on the other hand, a consistent persistent-identifier infrastructure is deployed to support distributed data management and data replication. Distributed data analysis functionality is based on the CDO (Climate Data Operators) package. The CDO tool is used for processing the archived data and metadata: it is a collection of command-line operators to manipulate and analyse climate and forecast model data, supporting a range of formats and providing over 500 operators. CDO is presently designed to work in a scripting environment with local files. ExArch will extend the tool to support efficient usage in an exascale archive with distributed data and computational resources by providing flexible scheduling capabilities. Quality control will become increasingly important in an exascale computing context. Researchers will be dealing with millions of data files from multiple sources and will need to know whether the files satisfy a range of basic quality criteria. Hence ExArch will provide a flexible and extensible quality control system. The data will be held at more than 30 computing centres and data archives around the world, but to users it will appear as a single archive thanks to a standardized ExArch Web Processing Service. Data infrastructures such as the one built by ExArch can greatly benefit from assigning persistent identifiers (PIDs) to the main entities, such as data and metadata records. A PID should then not only consist of a globally unique identifier, but should also support built-in facilities to relate PIDs to each other, to build multi-hierarchical virtual collections, and to enable attaching basic metadata directly to PIDs. With such a toolset, PIDs can support crucial data management tasks. For example, the data replication performed in ExArch can be supported through PIDs, as they help to establish durable links between identical copies. By linking derivative data objects together, their provenance can be traced with a level of detail and reliability currently unavailable in the Earth system modelling domain. Regarding data transfers, virtual collections of PIDs may be used to package data prior to transmission: if the PID of such a collection is used as the primary key in data transfers, the safety of the transfer and the traceability of data objects across repositories increase. End-users can benefit from PIDs as well, since they make data discovery independent of particular storage sites and enable user-friendly communication about primary research objects. A generic PID system can in fact be a fundamental building block for scientific e-infrastructures across projects and domains.
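
    The PID facilities described above, globally unique names, typed relations between PIDs, attached metadata, and virtual collections, can be pictured with a minimal in-memory registry. The record layout, prefix, and relation names below are invented for illustration and are not the Handle System's actual data model.

    ```python
    # Minimal in-memory picture of the PID ideas described above: each PID
    # carries basic metadata plus typed links to other PIDs, and a virtual
    # collection is itself a PID whose members are other PIDs. Illustrative
    # only; not the Handle System's actual record format.
    import uuid

    registry = {}

    def mint_pid(metadata=None, relations=None):
        pid = f"hdl-demo/{uuid.uuid4().hex[:8]}"   # hypothetical prefix
        registry[pid] = {"metadata": metadata or {},
                         "relations": relations or {}}
        return pid

    original = mint_pid({"type": "dataset", "site": "archive-A"})
    replica = mint_pid({"type": "dataset", "site": "archive-B"},
                       relations={"is_replica_of": [original]})
    collection = mint_pid({"type": "collection", "title": "transfer batch 1"},
                          relations={"has_member": [original, replica]})

    # Replication and provenance questions become graph lookups over PIDs:
    print(registry[replica]["relations"]["is_replica_of"])   # -> [original]
    print(registry[collection]["relations"]["has_member"])
    ```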

  9. Integration of end-user Cloud storage for CMS analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage, named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit end-user Cloud storage for distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities, as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and the infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.

  10. Integration of end-user Cloud storage for CMS analysis

    DOE PAGES

    Riahi, Hassen; Aimar, Alberto; Ayllon, Alejandro Alvarez; ...

    2017-05-19

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage, named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit end-user Cloud storage for distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities, as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and the infrastructure changes needed in order to transparently integrate end-user Cloud storage with the CMS distributed computing model. We describe the new challenges faced in data management between Grid and Cloud and how they were addressed, along with details of the support for Cloud storage recently introduced into the WLCG data movement middleware, FTS3. Finally, the commissioning experience of CERNBox for the distributed data analysis activity is also presented.

  11. A Bottom-Up Geospatial Data Update Mechanism for Spatial Data Infrastructure Updating

    NASA Astrophysics Data System (ADS)

    Tian, W.; Zhu, X.; Liu, Y.

    2012-08-01

    Currently, the top-down spatial data update mechanism has made big progress and is widely applied in many SDIs (spatial data infrastructures). However, this mechanism still has some issues. For example, the update schedule is tied to the professional departments' projects, and the resulting cycle is usually too long for end-users; moving data from collection to publication costs the professional departments too much time and energy; and the geospatial information often lacks sufficient attribute detail. Finding an effective way to deal with these problems has therefore become important. Emerging Internet technologies, 3S techniques (remote sensing, GIS and GPS), and the spread of geographic information knowledge among the public have promoted the booming development of volunteered geospatial information in the geosciences. Volunteered geospatial information is a current "hotspot" that attracts many researchers to study its data quality and credibility, accuracy, sustainability, social benefit, applications and so on. In addition, a few scholars have paid attention to the value of VGI in supporting SDI updating. On that basis, this paper presents a bottom-up update mechanism from VGI to SDI, which includes the processes of matching homonymous elements between VGI and SDI vector data, change detection, SDI spatial database updating, and publication of new data products to end-users. The feasibility of the proposed update cycle is then discussed in depth: it can detect changed elements in a timely manner and shorten the update period, provide more accurate geometry and attribute data for the spatial data infrastructure, and support update propagation.
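
    The matching and change-detection steps in such a mechanism can be sketched with simple geometry-similarity tests. The sketch below uses shapely; the tolerances, sample coordinates, and decision rule are hypothetical, not the paper's actual matching algorithm.

    ```python
    # Sketch of the element-matching / change-detection step: pair a VGI
    # feature with a candidate SDI feature, then flag it as changed when
    # the geometries diverge. Tolerances and coordinates are hypothetical.
    from shapely.geometry import LineString

    sdi_road = LineString([(0, 0), (10, 0)])         # authoritative geometry
    vgi_road = LineString([(0, 0.2), (10.4, 0.1)])   # volunteered geometry

    def match_and_detect(sdi_geom, vgi_geom, match_tol=1.0, change_tol=0.05):
        """Match by geometric proximity, then flag a change worth propagating."""
        distance = sdi_geom.hausdorff_distance(vgi_geom)
        if distance > match_tol:
            return "no match"     # likely a different real-world object
        if distance > change_tol:
            return "changed"      # candidate update for the SDI database
        return "unchanged"

    print(match_and_detect(sdi_road, vgi_road))  # -> "changed"
    ```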

  12. Big Data Analytics for Disaster Preparedness and Response of Mobile Communication Infrastructure during Natural Hazards

    NASA Astrophysics Data System (ADS)

    Zhong, L.; Takano, K.; Ji, Y.; Yamada, S.

    2015-12-01

    The disruption of telecommunications is one of the most critical failures during natural hazards. With the rapid expansion of mobile communications, the mobile communication infrastructure plays a fundamental role in disaster response and recovery activities. For this reason, its disruption can lead to loss of life and property through information delays and errors. Therefore, disaster preparedness and response of the mobile communication infrastructure itself is quite important. In many experienced disasters, the disruption of mobile communication networks was caused by network congestion and subsequent long-term power outages. In order to reduce this disruption, knowledge of communication demands during disasters is necessary, and big data analytics provides a very promising way to predict communication demands by analyzing the large amount of operational data of mobile users in a large-scale mobile network. Under the US-Japan collaborative project on 'Big Data and Disaster Research (BDD)', supported by the Japan Science and Technology Agency (JST) and the National Science Foundation (NSF), we are investigating the application of big data techniques to the disaster preparedness and response of mobile communication infrastructure. Specifically, in this research we exploit the large volume of operational information about mobile users to predict communication needs at different times and locations. By incorporating other data, such as the shake distribution of an estimated major earthquake and the power outage map, we are able to predict stranded people whose safety is difficult to confirm or who cannot ask for help due to network disruption. In addition, these results can further help network operators assess the vulnerability of their infrastructure and make suitable decisions for disaster preparedness and response. In this presentation, we introduce the results we obtained from big data analytics of mobile users' statistical information and discuss the implications of these results.

  13. Lowering the Barriers to Using Data: Enabling Desktop-based HPD Science through Virtual Environments and Web Data Services

    NASA Astrophysics Data System (ADS)

    Druken, K. A.; Trenham, C. E.; Steer, A.; Evans, B. J. K.; Richards, C. J.; Smillie, J.; Allen, C.; Pringle, S.; Wang, J.; Wyborn, L. A.

    2016-12-01

    The Australian National Computational Infrastructure (NCI) provides access to petascale data in climate, weather, Earth observations, and genomics, and terascale data in astronomy, geophysics, ecology and land use, as well as the social sciences. The data is centralized in a closely integrated High Performance Computing (HPC), High Performance Data (HPD) and cloud facility. Despite this, there remain significant barriers for many users to find and access the data: simply hosting a large volume of data is not helpful if researchers are unable to find, access, and use the data for their particular need. Use cases demonstrate that we need to support a diverse range of users who are increasingly crossing traditional research discipline boundaries. To support their varying experience, access needs and research workflows, NCI has implemented an integrated data platform providing a range of services that enable users to interact with our data holdings. These services include:
    - A GeoNetwork catalog built on standardized Data Management Plans to search collection metadata and find relevant datasets;
    - Web data services to download or remotely access data via OPeNDAP, WMS, WCS and other protocols;
    - Virtual Desktop Infrastructure (VDI) built on a highly integrated on-site cloud with access to both the HPC peak machine and the research data collections. The VDI is a fully featured environment allowing visualization, code development and analysis to take place on an interactive desktop; and
    - A Learning Management System (LMS) containing User Guides, Use Case examples and Jupyter Notebooks structured into courses, so that users can teach themselves how to use these facilities with examples from our system across a range of disciplines.
    We will briefly present these components, and discuss how we engage with data custodians and consumers to develop standardized data structures and services that support the range of needs. We will also highlight some key developments that have improved the user experience of these services, particularly in enabling transdisciplinary science. This work combines with other developments at NCI to increase the confidence of scientists from any field to undertake research and analysis on these important data collections, regardless of their preferred work environment or level of skill.
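
    As one concrete illustration of the web data services listed above, an OPeNDAP endpoint lets a user subset a large remote dataset server-side instead of downloading whole files. The minimal sketch below uses xarray; the endpoint URL, variable name, and coordinate names are hypothetical.

    ```python
    # Minimal sketch of the web-data-service idea: open a remote dataset
    # over OPeNDAP and subset it so that only the slice of interest crosses
    # the network. URL, variable, and coordinate names are hypothetical.
    import xarray as xr

    URL = "https://dapds.example.org/thredds/dodsC/climate/tas_monthly.nc"

    ds = xr.open_dataset(URL)                  # lazy open; no bulk download
    subset = ds["tas"].sel(lat=slice(-45, -10),
                           lon=slice(110, 155))  # e.g. an Australian box
    clim = subset.mean(dim="time")             # computed on the fetched slice
    clim.to_netcdf("tas_climatology_aus.nc")   # save the small result locally
    ```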

  14. Self-service for software development projects and HPC activities

    NASA Astrophysics Data System (ADS)

    Husejko, M.; Høimyr, N.; Gonzalez, A.; Koloventzos, G.; Asbury, D.; Trzcinska, A.; Agtzidis, I.; Botrel, G.; Otto, J.

    2014-05-01

    This contribution describes how CERN has implemented several essential tools for agile software development processes, ranging from version control (Git) to issue tracking (Jira) and documentation (Wikis). Running such services in a large organisation like CERN requires many administrative actions both by users and service providers, such as creating software projects, managing access rights, users and groups, and performing tool-specific customisation. Dealing with these requests manually would be a time-consuming task. Another area of our CERN computing services that has required dedicated manual support has been clusters for specific user communities with special needs. Our aim is to move all our services to a layered approach, with server infrastructure running on the internal cloud computing infrastructure at CERN. This contribution illustrates how we plan to optimise the management of our services by means of an end-user-facing platform acting as a portal into all the related services for software projects, inspired by popular portals for open-source development such as Sourceforge, GitHub and others. Furthermore, the contribution will discuss recent activities with tests and evaluations of High Performance Computing (HPC) applications on different hardware and software stacks, and plans to offer a dynamically scalable HPC service at CERN, based on affordable hardware.

  15. Prioritized service system behavior

    NASA Astrophysics Data System (ADS)

    Oliver, Huw

    2001-07-01

    Internet technology is becoming the infrastructure of the future for any information that can be transmitted digitally, including voice, audio, video and data services of all kinds. The trend to integrate voice and data traffic observed in the Internet is expected to continue until the full integration of all media types is achieved. At the same time, it is obvious that the business model employed for current Internet usage is not sustainable for the creation of an infrastructure suitable to support a diverse and ever-increasing range of application services. Currently the Internet provides only a single class of best-effort service, and prices are mainly built on flat-fee, access-based schemes. We propose the use of pricing mechanisms for controlling demand for scarce resources, in order to improve the economic efficiency of the system. Standard results in economic theory suggest that increasing the value of the network services to the users is beneficial to both the users and the network operator (since the operator can charge them more and recover a bigger percentage of their surplus). Using pricing mechanisms helps in that respect. When demand is high, prices are raised, deterring users with a low valuation for the service from using it. This leaves resources available for the users who value them more, and who are hence ready to pay more.
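
    The mechanism in this abstract, raising the price until demand fits within capacity so that the scarce resource goes to the users who value it most, can be illustrated with a toy computation. The valuations, capacity, and price step below are invented for illustration.

    ```python
    # Toy illustration of the pricing mechanism described above: raise the
    # price until demand no longer exceeds capacity, so the scarce resource
    # serves the users who value it most. All numbers are hypothetical.
    user_valuations = [9, 7, 5, 4, 2, 1]   # willingness to pay, per user
    capacity = 3                            # concurrent users the link supports

    price = 0.0
    while sum(v >= price for v in user_valuations) > capacity:
        price += 0.5                        # raise price while demand > supply

    served = [v for v in user_valuations if v >= price]
    print(price, served)                    # -> 4.5 [9, 7, 5]
    ```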

  16. WLCG scale testing during CMS data challenges

    NASA Astrophysics Data System (ADS)

    Gutsche, O.; Hajdu, C.

    2008-07-01

    The CMS computing model to process and analyze LHC collision data follows a data-location-driven approach and uses the WLCG infrastructure to provide access to Grid resources. In preparation for data taking, CMS tests its computing model during dedicated data challenges. An important part of the challenges is the test of user analysis, which poses a special challenge for the infrastructure with its random distributed access patterns. The CMS Remote Analysis Builder (CRAB) handles all interactions with the WLCG infrastructure transparently for the user. During the 2006 challenge, CMS set its goal to test the infrastructure at a scale of 50,000 user jobs per day using CRAB. Both direct submissions by individual users and automated submissions by robots were used to achieve this goal. A report is given on the outcome of the user analysis part of the challenge, using both the EGEE and OSG parts of the WLCG. In particular, the difference in submission between the two Grid middlewares (resource broker vs. direct submission) is discussed. Finally, an outlook for the 2007 data challenge is given.

  17. SeaDataNet II - Second phase of developments for the pan-European infrastructure for marine and ocean data management

    NASA Astrophysics Data System (ADS)

    Schaap, Dick M. A.; Fichaut, Michele

    2013-04-01

    The second phase of the SeaDataNet project started in October 2011 and runs for another 4 years, with the aim of upgrading the SeaDataNet infrastructure built during the previous years. The numbers of the project are quite impressive: 59 institutions from 35 different countries are involved. In particular, 45 data centres are sharing human and financial resources in a common effort to sustain an operationally robust and state-of-the-art pan-European infrastructure providing up-to-date, high-quality access to ocean and marine metadata, data and data products. The main objective of SeaDataNet II is to improve operations and to progress towards an efficient data management infrastructure able to handle the diversity and large volume of data collected via the pan-European oceanographic fleet and the new observation systems, both in real time and in delayed mode. The infrastructure is based on a semi-distributed system that incorporates and enhances the existing NODC network. SeaDataNet aims at serving users from science, environmental management, policy making, and economic sectors. Better integrated data systems are vital for these users to achieve improved scientific research and results, to support marine environmental and integrated coastal zone management, to establish indicators of Good Environmental Status for sea basins, and to support offshore industry developments, shipping, fisheries, and other economic activities. The recent EU communication "MARINE KNOWLEDGE 2020 - marine data and observation for smart and sustainable growth" states that the creation of marine knowledge begins with observation of the seas and oceans. In addition, directives, policies and science programmes require reporting on the state of the seas and oceans in an integrated pan-European manner: of particular note are INSPIRE, MSFD, WISE-Marine and the GMES Marine Core Service. These underpin the importance of a well-functioning marine and ocean data management infrastructure. SeaDataNet is now one of the major players in informatics in oceanography, and collaborative relationships have been created with other EU and non-EU projects. In particular, SeaDataNet has recognised roles in the continuous serving of common vocabularies, the provision of tools for data management, and giving access to metadata, data sets and data products of importance for society. The SeaDataNet infrastructure comprises a network of interconnected data centres and a central SeaDataNet portal. The portal provides users not only with background information about SeaDataNet and the various SeaDataNet standards and tools, but also with a unified and transparent overview of the metadata and controlled access to the large collections of data sets managed by the interconnected data centres. The presentation will give information on the present services of the SeaDataNet infrastructure and highlight a number of key achievements in SeaDataNet II so far.

  18. Characteristics of the auto users and non-users of central Texas toll roads.

    DOT National Transportation Integrated Search

    2009-08-01

    As toll road usage increases to finance new road infrastructure or add capacity to existing road infrastructure, the question of who does and does not use toll roads becomes increasingly important to toll road developers, financiers, Traffic and ...

  19. A Climate Information Platform for Copernicus (CLIPC): managing the data flood

    NASA Astrophysics Data System (ADS)

    Juckes, Martin; Swart, Rob; Bärring, Lars; Groot, Annemarie; Thysse, Peter; Som de Cerff, Wim; Costa, Luis; Lückenkötter, Johannes; Callaghan, Sarah; Bennett, Victoria

    2016-04-01

    The FP7 project "Climate Information Platform for Copernicus" (CLIPC) is developing a demonstration portal for the Copernicus Climate Change Service (C3S). The project confronts many problems associated with the huge diversity of underlying data, complex multi-layered uncertainties, and extremely complex and evolving user requirements. The infrastructure is founded on a comprehensive approach to managing data and documentation, using global domain-independent standards where possible. An extensive thesaurus of terms provides both a robust and flexible foundation for data discovery services and accessible definitions to support users. It is, of course, essential to provide information to users through an interface which reflects their expectations rather than the intricacies of abstract data models. CLIPC has reviewed user engagement activities from other collaborative European projects, conducted user polls, interviews and meetings, and is now entering an evaluation phase in which users discuss new features and options in the portal design. The CLIPC portal will provide access to raw climate science data and climate impact indicators derived from that data. The portal needs the flexibility to support access to extremely large datasets as well as providing means to manipulate data and explore complex products interactively.

  20. ESA SSA Programme in support of Space Weather forecasting

    NASA Astrophysics Data System (ADS)

    Luntama, J.; Glover, A.; Hilgers, A. M.

    2010-12-01

    In 2009 the European Space Agency (ESA) started a new programme called the Space Situational Awareness (SSA) Preparatory Programme. The objective of the programme is to support independent European utilisation of, and access to, space research and services. This will be achieved by providing timely, high-quality data, information, services and knowledge regarding the environment, the threats and the sustainable exploitation of the outer space surrounding the planet Earth. SSA serves the implementation of the strategic missions of the European Space Policy, based on the peaceful use of outer space by all states, by supporting the autonomous capacity to securely and safely operate the critical European space infrastructures. The SSA Preparatory Programme will establish the initial elements that will eventually lead to the full deployment of the European SSA services. The SWE segment of the SSA will provide user services related to the monitoring of the Sun, the solar wind, the radiation belts, the magnetosphere and the ionosphere. These services will include near-real-time information and forecasts about the characteristics of the space environment and predictions of space weather impacts on sensitive spaceborne and ground-based infrastructure. The SSA SWE system will also include the establishment of a permanent database for analysis, model development and scientific research. These services will support a wide variety of user domains, including spacecraft designers, spacecraft operators, human spaceflight, users and operators of transionospheric radio links, and the space weather research community. The precursor SWE services to be established starting in 2010 will include a selected subset of these services, based on pre-existing space weather applications and services in Europe. This paper will present the key characteristics of the SSA SWE system that is currently being designed. The presentation will focus on the system characteristics that support space weather forecasting and the related services, and will show results from the analysis of the existing European assets and the identified development needs in the mid- and long-term future to ensure forecasting capability for the services requested by the SSA SWE users. The analysis covers the future SSA SWE space segment and the service development needs for the ground segment.

  1. Characteristics of the truck users and non-users of Texas toll roads.

    DOT National Transportation Integrated Search

    2009-08-01

    As the use of toll roads increases to finance new road infrastructure or add capacity to existing road infrastructure, the question of who uses and who does not use toll roads becomes increasingly important to toll road developers, financiers, Traffic an...

  2. Linking information and people in a social system for academic conferences

    NASA Astrophysics Data System (ADS)

    Brusilovsky, Peter; Oh, Jung Sun; López, Claudia; Parra, Denis; Jeng, Wei

    2017-04-01

    This paper investigates the feasibility of maintaining a social information system to support attendees at an academic conference. The main challenge of this work was to create an infrastructure where users' social activities, such as bookmarking, tagging, and social linking could be used to enhance user navigation and maximize the users' ability to locate two important types of information in conference settings: presentations to attend and attendees to meet. We developed Conference Navigator 3, a social conference support system that integrates a conference schedule planner with a social linking service. We examined its potential and functions in the context of a medium-scale academic conference. In this paper, we present the design of the system's socially enabled features and report the results of a conference-based study. Our study demonstrates the feasibility of social information systems for supporting academic conferences. Despite the low number of potential users and the short timeframe in which conferences took place, the usage of the system was high enough to provide sufficient data for social mechanisms. The study shows that most critical social features were highly appreciated and used, and provides direction for further research.
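
    As a minimal illustration of the social mechanisms described above, the sketch below ranks talks by how many distinct users bookmarked them, roughly the kind of signal a system like Conference Navigator 3 could use to enhance navigation. The data and names are invented for illustration.

        from collections import Counter

        # Hypothetical bookmark log: (user_id, talk_id) pairs such as a social
        # conference support system might collect. Illustrative data only.
        bookmarks = [
            ("u1", "talk-42"), ("u2", "talk-42"), ("u3", "talk-42"),
            ("u1", "talk-07"), ("u2", "talk-19"),
        ]

        def rank_talks(bookmarks, top_n=3):
            """Rank talks by how many distinct users bookmarked them."""
            counts = Counter(talk for _, talk in set(bookmarks))
            return counts.most_common(top_n)

        print(rank_talks(bookmarks))  # e.g. [('talk-42', 3), ('talk-07', 1), ('talk-19', 1)]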

  3. Analysis of Pervasive Mobile Ad Hoc Routing Protocols

    NASA Astrophysics Data System (ADS)

    Qadri, Nadia N.; Liotta, Antonio

    Mobile ad hoc networks (MANETs) are a fundamental element of pervasive networks and therefore, of pervasive systems that truly support pervasive computing, where user can communicate anywhere, anytime and on-the-fly. In fact, future advances in pervasive computing rely on advancements in mobile communication, which includes both infrastructure-based wireless networks and non-infrastructure-based MANETs. MANETs introduce a new communication paradigm, which does not require a fixed infrastructure - they rely on wireless terminals for routing and transport services. Due to highly dynamic topology, absence of established infrastructure for centralized administration, bandwidth constrained wireless links, and limited resources in MANETs, it is challenging to design an efficient and reliable routing protocol. This chapter reviews the key studies carried out so far on the performance of mobile ad hoc routing protocols. We discuss performance issues and metrics required for the evaluation of ad hoc routing protocols. This leads to a survey of existing work, which captures the performance of ad hoc routing algorithms and their behaviour from different perspectives and highlights avenues for future research.
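
    To make the metrics discussion concrete, the following sketch computes two metrics commonly used in such evaluations, packet delivery ratio (PDR) and mean end-to-end delay, from synthetic trace records; the record layout is an assumption, not a specific simulator's format.

        # Synthetic send/receive traces; field layout is assumed for illustration.
        sent = [("p1", 0.00), ("p2", 0.10), ("p3", 0.20), ("p4", 0.30)]  # (pkt, t_sent)
        received = {"p1": 0.05, "p2": 0.21, "p4": 0.44}                  # pkt -> t_recv

        # Packet delivery ratio: fraction of sent packets that were delivered.
        pdr = len(received) / len(sent)

        # Mean end-to-end delay over the delivered packets only.
        delays = [received[p] - t for p, t in sent if p in received]
        mean_delay = sum(delays) / len(delays)

        print(f"PDR = {pdr:.2f}, mean end-to-end delay = {mean_delay:.3f} s")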

  4. SeaDataCloud - further developing the pan-European SeaDataNet infrastructure for marine and ocean data management

    NASA Astrophysics Data System (ADS)

    Schaap, Dick M. A.; Fichaut, Michele

    2017-04-01

    SeaDataCloud marks the third phase of developing the pan-European SeaDataNet infrastructure for marine and ocean data management. The SeaDataCloud project is funded by the EU and runs for 4 years from 1st November 2016. It succeeds the successful SeaDataNet II (2011-2015) and SeaDataNet (2006-2011) projects. SeaDataNet has set up and operates a pan-European infrastructure for managing marine and ocean data and is undertaken by National Oceanographic Data Centres (NODCs) and oceanographic data focal points from 34 coastal states in Europe. The infrastructure comprises a network of interconnected data centres and a central SeaDataNet portal. The portal provides users with a harmonised set of metadata directories and controlled access to the large collections of datasets managed by the interconnected data centres. The population of the directories has increased considerably through cooperation with, and involvement in, many associated EU projects and initiatives such as EMODnet. SeaDataNet at present gives overview of and access to more than 1.9 million data sets for physical oceanography, chemistry, geology, geophysics, bathymetry and biology from more than 100 connected data centres in 34 countries riparian to European seas. SeaDataNet is also active in setting and governing marine data standards, and in exploring and establishing interoperability solutions to connect to other e-infrastructures on the basis of ISO (19115, 19139) and OGC (WMS, WFS, CS-W and SWE) standards. Standards and associated SeaDataNet tools are made available at the SeaDataNet portal for wide uptake by data handling and managing organisations. SeaDataCloud aims at further developing standards, innovating services and products, adopting new technologies, and giving more attention to users. Moreover, it implements a cooperation between the SeaDataNet consortium of marine data centres and the EUDAT consortium of e-infrastructure service providers. SeaDataCloud aims to considerably advance services and increase their usage by adopting cloud and High Performance Computing technology. It will empower researchers with a packaged collection of services and tools, tailored to their specific needs, supporting research and enabling the generation of added-value products from marine and ocean data. Substantial activities will focus on developing added-value services, such as data subsetting, analysis, visualisation, and publishing workflows for both regular and advanced users, as part of a Virtual Research Environment (VRE). SeaDataCloud targets a number of leading user communities whose new challenges drive the upgrading and expansion of the SeaDataNet standards and services: science, EMODnet, the Copernicus Marine Environmental Monitoring Service (CMEMS) and EuroGOOS, and international scientific programmes. The presentation will describe the present services of the SeaDataNet infrastructure, the new challenges taken up in SeaDataCloud, and a number of key achievements in SeaDataCloud so far.
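
    For readers unfamiliar with the catalogue standards mentioned above, the sketch below shows a generic CSW discovery query using the OWSLib Python library. The endpoint URL is a placeholder, not a confirmed SeaDataNet address, so this is only indicative of how a CS-W-based discovery service is typically queried.

        # Minimal CSW discovery sketch with OWSLib; endpoint is hypothetical.
        from owslib.csw import CatalogueServiceWeb
        from owslib.fes import PropertyIsLike

        csw = CatalogueServiceWeb("https://example.org/csw")  # placeholder endpoint
        query = PropertyIsLike("csw:AnyText", "%temperature%")
        csw.getrecords2(constraints=[query], maxrecords=10)

        # Print identifier and title of each discovered metadata record.
        for rec_id, rec in csw.records.items():
            print(rec_id, "-", rec.title)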

  5. The Satellite Data Thematic Core Service within the EPOS Research Infrastructure

    NASA Astrophysics Data System (ADS)

    Manunta, Michele; Casu, Francesco; Zinno, Ivana; De Luca, Claudio; Buonanno, Sabatino; Zeni, Giovanni; Wright, Tim; Hooper, Andy; Diament, Michel; Ostanciaux, Emilie; Mandea, Mioara; Walter, Thomas; Maccaferri, Francesco; Fernandez, Josè; Stramondo, Salvatore; Bignami, Christian; Bally, Philippe; Pinto, Salvatore; Marin, Alessandro; Cuomo, Antonio

    2017-04-01

    EPOS, the European Plate Observing System, is a long-term plan to facilitate the integrated use of data, data products, software and services available from distributed Research Infrastructures (RIs) for solid Earth science in Europe. Indeed, EPOS integrates a large number of existing European RIs belonging to several fields of Earth science, from seismology to geodesy, near-fault and volcanic observatories, as well as anthropogenic hazards. The EPOS vision is that the integration of the existing national and trans-national research infrastructures will increase access to and use of the multidisciplinary data recorded by the solid Earth monitoring networks, acquired in laboratory experiments and/or produced by computational simulations. The establishment of EPOS will foster the interoperability of products and services in the Earth science field for a worldwide community of users. Accordingly, the EPOS aim is to integrate the diverse and advanced European Research Infrastructures for solid Earth science, and build on new e-science opportunities to monitor and understand the dynamic and complex solid-Earth system. One of the EPOS Thematic Core Services (TCS), referred to as Satellite Data, aims at developing, implementing and deploying advanced satellite data products and services, mainly based on Copernicus data (namely Sentinel acquisitions), for the Earth science community. This work presents the technological enhancements, fostered by EPOS, to deploy effective satellite services in a harmonized and integrated way. In particular, the Satellite Data TCS will deploy five services, EPOSAR, GDM, COMET, 3D-Def and MOD, which are mainly based on the exploitation of SAR data acquired by the Sentinel-1 constellation and designed to provide information on Earth surface displacements. The planned services will provide both advanced DInSAR products (deformation maps, velocity maps, deformation time series) and value-added measurements (source models, 3D displacement maps, seismic hazard maps). Moreover, the services will release both on-demand and systematic products. The latter will be generated and made available to users on a continuous basis, by processing each Sentinel-1 acquisition as soon as it is available, over a defined number of areas of interest; the former will allow users to select data, areas, and time periods to carry out their own analyses via an on-line platform. The satellite components will be integrated within the EPOS infrastructure through a common and harmonized interface that will allow users to search, process and share remote sensing images and results. This gateway to the satellite services will be the ESA Geohazards Exploitation Platform (GEP), a cloud-based platform for satellite Earth Observation designed to support the scientific community in the understanding of high-impact natural disasters. The Satellite Data TCS will use GEP as the common interface toward the main EPOS portal to provide EPOS users not only with data products but also with relevant processing and visualisation software, thus allowing users to gather and process large datasets on a cloud-computing infrastructure without any need to download them locally.
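
    Purely as an illustration of the on-demand mode described above, the following sketch shows what a client-side processing request might look like. The endpoint, parameter names and token are hypothetical; the actual GEP/EPOS interfaces are not documented in this abstract and may differ substantially.

        # Hypothetical on-demand DInSAR request; nothing here is a real API.
        import requests

        payload = {
            "service": "EPOSAR",                            # one of the five planned services
            "area_of_interest": [14.1, 40.7, 14.6, 40.9],   # lon/lat bounding box (assumed format)
            "time_range": ["2016-01-01", "2016-12-31"],
            "product": "deformation_time_series",
        }
        resp = requests.post(
            "https://example.org/api/ondemand",             # placeholder endpoint
            json=payload,
            headers={"Authorization": "Bearer <token>"},    # elided credential
            timeout=30,
        )
        resp.raise_for_status()
        print(resp.json().get("job_id"))                    # assumed response field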

  6. Virtual Labs (Science Gateways) as platforms for Free and Open Source Science

    NASA Astrophysics Data System (ADS)

    Lescinsky, David; Car, Nicholas; Fraser, Ryan; Friedrich, Carsten; Kemp, Carina; Squire, Geoffrey

    2016-04-01

    The Free and Open Source Software (FOSS) movement promotes community engagement in software development, as well as provides access to a range of sophisticated technologies that would be prohibitively expensive if obtained commercially. However, as geoinformatics and eResearch tools and services become more dispersed, it becomes more complicated to identify and interface between the many required components. Virtual Laboratories (VLs, also known as Science Gateways) simplify the management and coordination of these components by providing a platform linking many, if not all, of the steps in particular scientific processes. These enable scientists to focus on their science, rather than the underlying supporting technologies. We describe a modular, open source, VL infrastructure that can be reconfigured to create VLs for a wide range of disciplines. Development of this infrastructure has been led by CSIRO in collaboration with Geoscience Australia and the National Computational Infrastructure (NCI) with support from the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service (ANDS). Initially, the infrastructure was developed to support the Virtual Geophysical Laboratory (VGL), and has subsequently been repurposed to create the Virtual Hazards Impact and Risk Laboratory (VHIRL) and the reconfigured Australian National Virtual Geophysics Laboratory (ANVGL). During each step of development, new capabilities and services have been added and/or enhanced. We plan on continuing to follow this model using a shared, community code base. The VL platform facilitates transparent and reproducible science by providing access to both the data and methodologies used during scientific investigations. This is further enhanced by the ability to set up and run investigations using computational resources accessed through the VL. Data is accessed using registries pointing to catalogues within public data repositories (notably including the NCI National Environmental Research Data Interoperability Platform), or by uploading data directly from user-supplied addresses or files. Similarly, scientific software is accessed through registries pointing to software repositories (e.g., GitHub). Runs are configured by using or modifying default templates designed by subject matter experts. After the appropriate computational resources are identified by the user, Virtual Machines (VMs) are spun up and jobs are submitted to service providers (currently the NeCTAR public cloud or Amazon Web Services). Following completion of the jobs, the results can be reviewed and downloaded if desired. By providing a unified platform for science, the VL infrastructure enables sophisticated provenance capture and management. The source of input data (including both collection and queries), user information, software information (version and configuration details) and output information are all captured and managed as a VL resource which can be linked to output data sets. This provenance resource provides a mechanism for publication and citation for Free and Open Source Science.
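
    The provenance capture described above can be pictured as a simple record linking inputs, software, compute resources and outputs. The sketch below is a minimal stand-in with assumed field names, not the VL platform's actual schema.

        # Condensed provenance-record sketch; field names are assumptions.
        from dataclasses import dataclass, field, asdict
        from datetime import datetime, timezone

        @dataclass
        class ProvenanceRecord:
            user: str
            input_data: list          # catalogue queries or dataset URIs
            software: str             # repository URL plus version/commit
            compute_resource: str     # e.g. cloud provider and VM flavour
            outputs: list = field(default_factory=list)
            started: str = field(
                default_factory=lambda: datetime.now(timezone.utc).isoformat())

        record = ProvenanceRecord(
            user="jane.researcher",
            input_data=["https://example.org/catalogue?q=magnetotellurics"],
            software="https://github.com/example/inversion@v1.2.0",
            compute_resource="nectar-cloud/m1.xlarge",
        )
        record.outputs.append("https://example.org/results/run-0042")
        print(asdict(record))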

  7. Making Temporal Search More Central in Spatial Data Infrastructures

    NASA Astrophysics Data System (ADS)

    Corti, P.; Lewis, B.

    2017-10-01

    A temporally enabled Spatial Data Infrastructure (SDI) is a framework of geospatial data, metadata, users, and tools intended to provide an efficient and flexible way to use spatial information which includes the historical dimension. One of the key software components of an SDI is the catalogue service which is needed to discover, query, and manage the metadata. A search engine is a software system capable of supporting fast and reliable search, which may use any means necessary to get users to the resources they need quickly and efficiently. These techniques may include features such as full text search, natural language processing, weighted results, temporal search based on enrichment, visualization of patterns in distributions of results in time and space using temporal and spatial faceting, and many others. In this paper we will focus on the temporal aspects of search which include temporal enrichment using a time miner - a software engine able to search for date components within a larger block of text, the storage of time ranges in the search engine, handling historical dates, and the use of temporal histograms in the user interface to display the temporal distribution of search results.
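
    A toy version of the time-miner idea described above is sketched below: it extracts four-digit years from free text and bins them by decade, the kind of counts a temporal histogram or facet could display. Real date mining handles far more formats; everything here is illustrative.

        import re
        from collections import Counter

        def mine_years(text):
            """Extract plausible 4-digit years (1000-2999) from a block of text."""
            return [int(y) for y in re.findall(r"\b([12]\d{3})\b", text)]

        doc = "Surveyed 1912-1914; boundaries revised in 1952 and digitised in 2009."
        years = mine_years(doc)

        # Decade-level counts, as a temporal histogram in a UI might show them.
        histogram = Counter((y // 10) * 10 for y in years)
        print(sorted(histogram.items()))  # [(1910, 2), (1950, 1), (2000, 1)]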

  8. The EGI-Engage EPOS Competence Center - Interoperating heterogeneous AAI mechanisms and Orchestrating distributed computational resources

    NASA Astrophysics Data System (ADS)

    Bailo, Daniele; Scardaci, Diego; Spinuso, Alessandro; Sterzel, Mariusz; Schwichtenberg, Horst; Gemuend, Andre

    2016-04-01

    The mission of the EGI-Engage project [1] is to accelerate the implementation of the Open Science Commons vision, where researchers from all disciplines have easy and open access to the innovative digital services, data, knowledge and expertise they need for collaborative and excellent research. The Open Science Commons is grounded on three pillars: the e-Infrastructure Commons, an ecosystem of services that constitute the foundation layer of distributed infrastructures; the Open Data Commons, where observations, results and applications are increasingly available for scientific research and for anyone to use and reuse; and the Knowledge Commons, in which communities have shared ownership of knowledge, participate in the co-development of software and are technically supported to exploit state-of-the-art digital services. To develop the Knowledge Commons, EGI-Engage is supporting the work of a set of community-specific Competence Centres, with participants from user communities (scientific institutes), National Grid Initiatives (NGIs), and technology and service providers. Competence Centres collect and analyse requirements, integrate community-specific applications into state-of-the-art services, foster interoperability across e-Infrastructures, and evolve services through a user-centric development model. One of these Competence Centres is focused on the European Plate Observing System (EPOS) [2] as representative of the solid Earth science communities. EPOS is a pan-European long-term plan to integrate data, software and services from the distributed (and already existing) Research Infrastructures all over Europe in the domain of solid Earth science. EPOS will enable innovative multidisciplinary research for a better understanding of the Earth's physical and chemical processes that control earthquakes, volcanic eruptions, ground instability and tsunami, as well as the processes driving tectonics and Earth's surface dynamics, and will improve our ability to manage the use of the subsurface of the Earth. EPOS started its Implementation Phase in October 2015 and is now actively working to integrate multidisciplinary data into a single e-infrastructure. Multidisciplinary data are organized and governed by the Thematic Core Services (TCS) - Europe-wide organizations and e-Infrastructures providing community-specific data and data products - and are driven by various scientific communities encompassing a wide spectrum of Earth science disciplines. TCS data, data products and services will be integrated into the Integrated Core Services (ICS) system, which will ensure their interoperability and access to these services by the scientific community as well as other users within society. The goal of the EPOS Competence Centre (EPOS CC) is to tackle two of the main challenges that the ICS will face in the near future, by taking advantage of the technical solutions provided by EGI. To this end, we will present the two pilot use cases the EGI-EPOS CC is developing: (1) the AAI pilot, dealing with the provision of transparent and homogeneous access to the ICS infrastructure for users owning different kinds of credentials (e.g. eduGAIN, OpenID Connect, X.509 certificates, etc.); here the focus is on the mechanisms that allow credential delegation. (2) The computational pilot, which improves the back-end services of an existing application in the field of computational seismology, developed in the context of the EC-funded project VERCE. The application allows the processing and comparison of data resulting from the simulation of seismic wave propagation following a real earthquake and real measurements recorded by seismographs. While the simulation data is produced directly by the users and stored in a Data Management System, the observations need to be pre-staged from institutional data services, which are maintained by the community itself. This use case aims at exploiting the EGI FedCloud e-infrastructure for data-intensive analysis and also explores possible interaction with other common data infrastructure initiatives such as EUDAT. In the presentation, the current state of the two use cases, together with the open challenges and future applications, will be discussed. Also, possible integration of EGI solutions with EPOS and other e-infrastructure providers will be considered. [1] EGI-ENGAGE https://www.egi.eu/about/egi-engage/ [2] EPOS http://www.epos-eu.org/

  9. Security and Policy for Group Collaboration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ian Foster; Carl Kesselman

    2006-07-31

    “Security and Policy for Group Collaboration” was a Collaboratory Middleware research project aimed at providing the fundamental security and policy infrastructure required to support the creation and operation of distributed, computationally enabled collaborations. The project developed infrastructure that exploits innovative new techniques to address challenging issues of scale, dynamics, distribution, and role. To greatly reduce the cost of adding new members to a collaboration, we developed and evaluated new techniques for creating and managing credentials based on public key certificates, including support for online certificate generation, online certificate repositories, and support for multiple certificate authorities. To facilitate the integration of new resources into a collaboration, we significantly improved the integration of local security environments. To make it easy to create and change the role and associated privileges of both resources and participants of a collaboration, we developed community-wide authorization services that provide distributed, scalable means for specifying policy. These services make possible the delegation of capability from the community to a specific user, class of user, or resource. Finally, we instantiated our research results into a framework that makes them usable by a wide range of collaborative tools. The resulting mechanisms and software have been widely adopted within DOE projects and in many other scientific projects. The widespread adoption of our Globus Toolkit technology has provided, and continues to provide, a natural dissemination and technology transfer vehicle for our results.
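
    The community-wide authorization idea can be illustrated with a small sketch: the community maps members to roles with capabilities, and a resource asks whether a requested action is permitted. All names are invented; this is not the project's actual interface.

        # Toy community policy: roles map to capabilities, members map to roles.
        COMMUNITY_POLICY = {
            "role:analyst": {"read"},
            "role:operator": {"read", "write"},
        }
        MEMBERSHIP = {"alice": "role:operator", "bob": "role:analyst"}

        def is_authorized(user, action):
            """Check a user's delegated capabilities against community policy."""
            role = MEMBERSHIP.get(user)
            return action in COMMUNITY_POLICY.get(role, set())

        assert is_authorized("alice", "write")
        assert not is_authorized("bob", "write")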

  10. Analysing the usage and evidencing the importance of fast chargers for the adoption of battery electric vehicles

    DOE PAGES

    Neaimeh, Myriam; Salisbury, Shawn D.; Hill, Graeme A.; ...

    2017-06-27

    An appropriate charging infrastructure is one of the key aspects needed to support the mass adoption of battery electric vehicles (BEVs), and it is suggested that publicly available fast chargers could play a key role in this infrastructure. As fast charging is a relatively new technology, very little research has been conducted on the topic using real-world datasets, and it is of utmost importance to measure actual usage of this technology and provide evidence of its importance to properly inform infrastructure planning. 90,000 fast charge events collected from the first large-scale roll-outs and evaluation projects of fast charging infrastructure in the UK and the US, and 12,700 driving days collected from 35 BEVs in the UK, were analysed. Using multiple regression analysis, we examined the relationship between daily driving distance and standard and fast charging and demonstrated that fast chargers are more influential. Fast chargers enabled using BEVs on journeys above their single-charge range that would have been impractical using standard chargers. Fast chargers could help overcome perceived and actual range barriers, making BEVs more attractive to future users. At the current BEV market share, there is a vital need for policy support to accelerate the development of fast charge networks.
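
    The analysis pattern, though not the data, can be sketched as an ordinary least-squares fit of daily driving distance on counts of standard and fast charge events. The numbers below are synthetic and chosen only to mirror the qualitative finding.

        import numpy as np

        # Synthetic design matrix: intercept, standard charges/day, fast charges/day.
        X = np.array([[1, 1, 0], [1, 2, 0], [1, 1, 1], [1, 0, 2], [1, 2, 1]], dtype=float)
        y = np.array([55.0, 80.0, 130.0, 190.0, 160.0])   # km driven that day (synthetic)

        # Ordinary least squares via numpy; beta holds the fitted coefficients.
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        b0, b_std, b_fast = beta
        print(f"standard: +{b_std:.1f} km/event, fast: +{b_fast:.1f} km/event")
        # A larger fast-charging coefficient would echo the paper's conclusion.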

  11. Analysing the usage and evidencing the importance of fast chargers for the adoption of battery electric vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neaimeh, Myriam; Salisbury, Shawn D.; Hill, Graeme A.

    An appropriate charging infrastructure is one of the key aspects needed to support the mass adoption of battery electric vehicles (BEVs), and it is suggested that publicly available fast chargers could play a key role in this infrastructure. As fast charging is a relatively new technology, very little research has been conducted on the topic using real-world datasets, and it is of utmost importance to measure actual usage of this technology and provide evidence of its importance to properly inform infrastructure planning. 90,000 fast charge events collected from the first large-scale roll-outs and evaluation projects of fast charging infrastructure in the UK and the US, and 12,700 driving days collected from 35 BEVs in the UK, were analysed. Using multiple regression analysis, we examined the relationship between daily driving distance and standard and fast charging and demonstrated that fast chargers are more influential. Fast chargers enabled using BEVs on journeys above their single-charge range that would have been impractical using standard chargers. Fast chargers could help overcome perceived and actual range barriers, making BEVs more attractive to future users. At the current BEV market share, there is a vital need for policy support to accelerate the development of fast charge networks.

  12. A near miss: the importance of context in a public health informatics project in a New Zealand case study.

    PubMed

    Wells, Stewart; Bullen, Chris

    2008-01-01

    This article describes the near failure of an information technology (IT) system designed to support a government-funded, primary care-based hepatitis B screening program in New Zealand. Qualitative methods were used to collect data and construct an explanatory model. Multiple incorrect assumptions were made about participants, primary care workflows and IT capacity, software vendor user knowledge, and the health IT infrastructure. Political factors delayed system development and it was implemented untested, almost failing. An intensive rescue strategy included system modifications, relaxation of data validity rules, close engagement with software vendors, and provision of intensive on-site user support. This case study demonstrates that consideration of the social, political, technological, and health care contexts is important for successful implementation of public health informatics projects.

  13. e-Science on Earthquake Disaster Mitigation by EUAsiaGrid

    NASA Astrophysics Data System (ADS)

    Yen, Eric; Lin, Simon; Chen, Hsin-Yen; Chao, Li; Huang, Bor-Shoh; Liang, Wen-Tzong

    2010-05-01

    Although earthquakes cannot be predicted at this moment, with the aid of accurate seismic wave propagation analysis we can simulate the potential hazards at all distances from possible fault sources by understanding the source rupture process during large earthquakes. With the integration of a strong ground-motion sensor network, an earthquake data center and seismic wave propagation analysis over the gLite e-Science infrastructure, we can build much better knowledge of the impact and vulnerability associated with potential earthquake hazards. This application also demonstrates an e-Science approach to investigating unknown Earth structure. Regional integration of earthquake sensor networks can aid fast event reporting and accurate event data collection. Federation of earthquake data centers entails consolidation and sharing of seismology and geology knowledge. Capability building in seismic wave propagation analysis implies the predictability of potential hazard impacts. With the gLite infrastructure and the EUAsiaGrid collaboration framework, earth scientists from Taiwan, Vietnam, the Philippines and Thailand are working together to alleviate potential seismic threats by making use of Grid technologies and to support seismology research through e-Science. A cross-continental e-infrastructure, based on EGEE and EUAsiaGrid, has been established for seismic wave forward simulation and risk estimation. Both the computing challenge of seismic wave analysis among five European and Asian partners and the data challenge of data center federation have been exercised and verified. A Seismogram-on-Demand service has also been developed for the automatic generation of the seismogram at any sensor point for a specific epicenter. To ease access to all the services based on users' workflows and retain maximal flexibility, a Seismology Science Gateway integrating data, computation, workflow, services and user communities will be implemented based on typical use cases. In the future, extension of the earthquake wave propagation work to tsunami mitigation will be feasible once the user community support is in place.

  14. Validating the usability of an interactive Earth Observation based web service for landslide investigation

    NASA Astrophysics Data System (ADS)

    Albrecht, Florian; Weinke, Elisabeth; Eisank, Clemens; Vecchiotti, Filippo; Hölbling, Daniel; Friedl, Barbara; Kociu, Arben

    2017-04-01

    Regional authorities and infrastructure maintainers in almost all mountainous regions of the Earth need detailed and up-to-date landslide inventories for hazard and risk management. Landslide inventories are usually compiled through ground surveys and manual image interpretation following landslide-triggering events. We developed a web service that uses Earth Observation (EO) data to support the mapping and monitoring tasks for improving the collection of landslide information. The planned validation of the EO-based web service covers not only the analysis of the achievable landslide information quality but also the usability and user friendliness of the user interface. The underlying validation criteria are based on the user requirements and the defined tasks and aims in the work description of the FFG project Land@Slide (EO-based landslide mapping: from methodological developments to automated web-based information delivery). The service will be validated in collaboration with stakeholders, decision makers and experts. Users are requested to test the web service functionality and give feedback through a web-based questionnaire by following the subsequently described workflow. The users will operate the web service via the responsive user interface and can extract landslide information from EO data. They compare it to reference data for quality assessment, for monitoring changes and for assessing landslide-affected infrastructure. An overview page lets the user explore a list of example projects with resulting landslide maps and mapping workflow descriptions. The example projects include mapped landslides in several test areas in Austria and Northern Italy. Landslides were extracted from high resolution (HR) and very high resolution (VHR) satellite imagery, such as Landsat, Sentinel-2, SPOT-5, WorldView-2/3 or Pléiades. The user can create his/her own project by selecting available satellite imagery or by uploading new data. Subsequently, a new landslide extraction workflow can be initiated through the functionality that the web service provides: (1) a segmentation of the image into spectrally homogeneous objects, (2) a classification of the objects into landslide and non-landslide areas and (3) an editing tool for the manual refinement of extracted landslide boundaries. In addition, the user interface of the web service provides tools that enable the user (4) to perform monitoring that identifies changes between landslide maps from different points in time, (5) to validate the landslide maps by comparing them to reference data, and (6) to assess affected infrastructure by comparing the landslide maps to respective infrastructure data. After exploring the web service functionality, the users are asked to fill in the online validation protocol in the form of a questionnaire in order to provide their feedback. Concerning usability, we evaluate how intuitively the web service functionality can be operated, how well the integrated help information guides the users, and what kind of background information, e.g. remote sensing concepts and theory, is necessary for a practitioner to fully exploit the value of EO data. The feedback will be used to improve the user interface and to implement additional functionality.

  15. Activities report of PTT Research

    NASA Astrophysics Data System (ADS)

    In the field of postal infrastructure research, activities were performed on postcode readers, radiolabels, and techniques of operations research and artificial intelligence. In the field of telecommunication, transportation, and information, research was conducted on multipurpose coding schemes, speech recognition, hypertext, a multimedia information server, security of electronic data interchange, document retrieval, improvement of the quality of user interfaces, domotics living support (techniques), and standardization of telecommunication protocols. In the field of telecommunication infrastructure and provisions research, activities were performed on universal personal telecommunications, advanced broadband network technologies, coherent techniques, measurement of audio quality, near field facilities, local beam communication, local area networks, network security, coupling of broadband and narrowband integrated services digital networks, digital mapping, and standardization of protocols.

  16. The EDRN knowledge environment: an open source, scalable informatics platform for biological sciences research

    NASA Astrophysics Data System (ADS)

    Crichton, Daniel; Mahabal, Ashish; Anton, Kristen; Cinquini, Luca; Colbert, Maureen; Djorgovski, S. George; Kincaid, Heather; Kelly, Sean; Liu, David

    2017-05-01

    We describe here the Early Detection Research Network (EDRN) for Cancer's knowledge environment. It is an open source platform built by NASA's Jet Propulsion Laboratory with contributions from the California Institute of Technology and the Geisel School of Medicine at Dartmouth. It uses tools like Apache OODT, Plone, and Solr, and borrows heavily from the ontological infrastructure of JPL's Planetary Data System. It has accumulated data on hundreds of thousands of biospecimens and serves over 1300 registered users across the National Cancer Institute (NCI). The scalable computing infrastructure is built so that we can reach out to other agencies, provide homogeneous access, and provide seamless analytics support and bioinformatics tools through community engagement.

  17. InterMine Webservices for Phytozome (Rev2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, Joseph; Goodstein, David; Rokhsar, Dan

    2014-07-10

    A data warehousing framework provides a useful infrastructure for providers and users of genomic data. For providers, the infrastructure gives a consistent mechanism for extracting raw data, while for users, the web services supported by the software allow them to make complex, and often unique, queries of the data. Previously, phytozome.net used BioMart to provide the infrastructure. As the complexity, scale and diversity of the dataset has grown, we decided to implement an InterMine web service on our servers. This change was largely motivated by the ability to have a more complex table structure and a richer web reporting mechanism than BioMart. For InterMine to achieve its more complex database schema, it requires an XML description of the data and an appropriate loader. Unlimited one-to-many and many-to-many relationships between the tables can be enabled in the schema. We have implemented support for: (1) genomes and annotations for the data in Phytozome - this set is the 48 organisms currently stored in a back-end CHADO datastore, with data loaders that are modified versions of the CHADO data adapters from FlyMine; (2) Interproscan results for all proteins in the Phytozome database; (3) clusters of proteins grouped hierarchically by similarity; (4) Cufflinks results from tissue-specific RNA-Seq data of Phytozome organisms; (5) diversity data (GATK and SnpEFF results) from sets of individual organisms. The last two datatypes are new in this implementation of our web services. We anticipate that the scale of these data will increase considerably in the near future.
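
    A query against such an InterMine instance might look like the sketch below, which uses the official intermine Python client. The service URL is indicative only, and the chosen class, views and constraint are assumptions for illustration.

        # Sketch of an InterMine web-service query; URL and fields are assumed.
        from intermine.webservice import Service

        service = Service("https://phytozome-next.jgi.doe.gov/phytomine/service")
        query = service.new_query("Protein")
        query.add_view("primaryIdentifier", "organism.shortName", "length")
        query.add_constraint("organism.shortName", "=", "A. thaliana")

        # Iterate over the first ten result rows returned by the service.
        for row in query.rows(size=10):
            print(row["primaryIdentifier"], row["length"])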

  18. Measuring Systemic Impacts of Bike Infrastructure Projects

    DOT National Transportation Integrated Search

    2018-05-01

    This paper qualitatively identifies the impacts of bicycle infrastructure on all roadway users, including safety, operations, and travel route choice. Bicycle infrastructure includes shared lanes, conventional bike lanes, and separated bike lanes. Th...

  19. Web-based access to near real-time and archived high-density time-series data: cyber infrastructure challenges & developments in the open-source Waveform Server

    NASA Astrophysics Data System (ADS)

    Reyes, J. C.; Vernon, F. L.; Newman, R. L.; Steidl, J. H.

    2010-12-01

    The Waveform Server is an interactive web-based interface to multi-station, multi-sensor and multi-channel high-density time-series data stored in Center for Seismic Studies (CSS) 3.0 schema relational databases (Newman et al., 2009). In the last twelve months, based on expanded specifications and current user feedback, both the server-side infrastructure and the client-side interface have been extensively rewritten. The Python Twisted server-side code-base has been fundamentally modified and now presents waveform data stored in cluster-based databases using a multi-threaded architecture, in addition to supporting the pre-existing single-database model. This allows interactive web-based access to high-density (broadband @ 40 Hz to strong motion @ 200 Hz) waveform data that can span multiple years, the common lifetime of broadband seismic networks. The client-side interface expands on its use of simple JSON-based AJAX queries to incorporate a variety of User Interface (UI) improvements, including standardized calendars for defining time ranges, on-the-fly data calibration to display SI-unit data, and increased rendering speed. This presentation will outline the various cyber infrastructure challenges we have faced while developing this application, the use cases currently in existence, and the limitations of web-based application development.
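
    The on-the-fly calibration mentioned above can be illustrated with a short sketch: raw counts from a CSS3.0-style record are scaled by a calibration factor and converted for SI-unit display. The convention and the numbers assumed here are illustrative, not the Waveform Server's actual code.

        import numpy as np

        # Raw digitizer counts and an assumed CSS3.0-style calibration factor.
        raw_counts = np.array([120, 135, 129, 98, 60], dtype=float)
        calib = 0.025   # nm per count at the calibration period (assumed value)

        # Scale counts to displacement and convert nanometres to metres (SI).
        displacement_nm = raw_counts * calib
        displacement_m = displacement_nm * 1e-9
        print(displacement_m)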

  20. National Stormwater Calculator: Low Impact Development ...

    EPA Pesticide Factsheets

    Stormwater discharges continue to cause impairment of our Nation’s waterbodies. EPA has developed the National Stormwater Calculator (SWC) to help support local, state, and national stormwater management objectives to reduce runoff through infiltration and retention using green infrastructure practices as low impact development (LID) controls. The primary focus of the SWC is to inform site developers on how well they can meet a desired stormwater retention target with and without the use of green infrastructure. It can also be used by landscapers and homeowners. Platform. The SWC is a Windows-based desktop program that requires an internet connection. A mobile web application version that will be compatible with all operating systems is currently being developed and is expected to be released in the fall of 2017. Cost Module. An LID cost estimation module within the application allows planners and managers to evaluate LID controls based on comparison of regional and national project planning level cost estimates (capital and average annual maintenance) and predicted LID control performance. Cost estimation is accomplished based on user-identified size configuration of the LID control infrastructure and other key project and site-specific variables. This includes whether the project is being applied as part of new development or redevelopment and if there are existing site constraints. Climate Scenarios. The SWC allows users to consider how runoff may vary based

  1. Installed Base as a Facilitator for User-Driven Innovation: How Can User Innovation Challenge Existing Institutional Barriers?

    PubMed Central

    Andersen, Synnøve Thomassen; Jansen, Arild

    2012-01-01

    The paper addresses an ICT-based, user-driven innovation process in the health sector in rural areas of Norway. The empirical base is the introduction of a new model for psychiatric health provision. This model is supported by a technical solution based on mobile phones that is aimed at helping communication between professional health personnel and patients. This innovation was made possible through the use of standard mobile technology rather than more sophisticated systems. The users were heavily involved in the development work. Our analysis shows that by thinking in terms of simple, small-scale solutions, including taking the user's needs and premises as the point of departure rather than focusing on advanced technology, the implementation process was made possible. We show that by combining theory on information infrastructures, user-oriented system development, and innovation in a three-layered analytical framework, we can explain the interrelationship between the technical, organizational, and health professional factors that made this innovation a success. PMID:23304134

  2. Disaster Response and Decision Support in Partnership with the California Earthquake Clearinghouse

    NASA Astrophysics Data System (ADS)

    Glasscoe, M. T.; Rosinski, A.; Vaughan, D.; Morentz, J.

    2014-12-01

    Getting the right information to the right people at the right time is critical during a natural disaster. E-DECIDER (Emergency Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response) is a NASA decision support system designed to produce remote sensing and geophysical modeling data products that are relevant to the emergency preparedness and response communities and to serve as a gateway enabling the delivery of NASA decision support products to these communities. The E-DECIDER decision support system has several tools, services, and products that have been used to support end-user exercises in partnership with the California Earthquake Clearinghouse since 2012, including near real-time deformation modeling results and on-demand maps of critical infrastructure potentially exposed to damage by a disaster. E-DECIDER's underlying service architecture allows the system to facilitate delivery of NASA decision support products to the Clearinghouse through XchangeCore Web Service Data Orchestration, which allows trusted information exchange among partner agencies. This in turn allows Clearinghouse partners to visualize data products produced by E-DECIDER and other NASA projects through incident command software such as SpotOnResponse or ArcGIS Online.

  3. Proposed Requirements-driven User-scenario Development Protocol for the Belmont Forum E-Infrastructure and Data Management Cooperative Research Agreement

    NASA Astrophysics Data System (ADS)

    Wee, B.; Car, N.; Percivall, G.; Allen, D.; Fitch, P. G.; Baumann, P.; Waldmann, H. C.

    2014-12-01

    The Belmont Forum E-Infrastructure and Data Management Cooperative Research Agreement (CRA) is designed to foster a global community to collaborate on e-infrastructure challenges. One of the deliverables is an implementation plan to address global data infrastructure interoperability challenges and align existing domestic and international capabilities. Work package three (WP3) of the CRA focuses on the harmonization of global data infrastructure for sharing environmental data. One of the subtasks under WP3 is the development of user scenarios that guide the development of applicable deliverables. This paper describes the proposed protocol for user scenario development. It enables the solicitation of user scenarios from a broad constituency and exposes the mechanisms by which those solicitations are evaluated against requirements that map to the Belmont Challenge. The underlying principle of traceability forms the basis for a structured, requirements-driven approach resulting in work products amenable to trade-off analyses and objective prioritization. The protocol adopts the ISO Reference Model for Open Distributed Processing (RM-ODP) as a top-level framework. User scenarios are developed within RM-ODP's "Enterprise Viewpoint". To harmonize with existing frameworks, the protocol utilizes the conceptual constructs of "scenarios", "use cases", "use case categories", and use case templates as adopted by recent GEOSS Architecture Implementation Project (AIP) deliverables and CSIRO's eReefs project. These constructs are encapsulated under the larger construct of "user scenarios". Once user scenarios are ranked by goodness-of-fit to the Belmont Challenge, secondary scoring metrics may be generated, such as goodness-of-fit to FutureEarth science themes. The protocol also facilitates an assessment of the ease of implementing a given user scenario using existing GEOSS AIP deliverables. In summary, the protocol results in a traceability graph that can be extended to coordinate across research programmes. If implemented using appropriate technologies and harmonized with existing ontologies, this approach enables queries, sensitivity analyses, and visualization of complex relationships.
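
    The ranking step described above can be pictured as a weighted goodness-of-fit score over traceable criteria, as in the sketch below; the scenarios, criteria and weights are invented for illustration.

        # Toy weighted goodness-of-fit scoring for user scenarios.
        scenarios = {
            "coastal-flood-forecast": {"data_sharing": 5, "interoperability": 4},
            "soil-moisture-archive":  {"data_sharing": 3, "interoperability": 5},
        }
        weights = {"data_sharing": 0.6, "interoperability": 0.4}

        def fit_score(criteria):
            """Weighted sum of per-criterion scores (criteria are assumed)."""
            return sum(weights[c] * s for c, s in criteria.items())

        ranked = sorted(scenarios, key=lambda n: fit_score(scenarios[n]), reverse=True)
        for name in ranked:
            print(f"{name}: {fit_score(scenarios[name]):.1f}")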

  4. Columbus VIII - Symposium on Space Station Utilization, 8th, Munich, Germany, Mar. 30-Apr. 4, 1992, Selected Papers

    NASA Astrophysics Data System (ADS)

    1993-03-01

    The symposium includes topics on the Columbus Programme and Precursor missions, the user support and ground infrastructure, the scientific requirements for the Columbus payloads, the payload operations, and the Mir missions. Papers are presented on Columbus Precursor Spacelab missions, the role of the APM Centre in the support of Columbus Precursor flights, the refined decentralized concept and development support, the Microgravity Advanced Research and Support (MARS) Center update, and the Columbus payload requirements in human physiology. Attention is also given to the fluid science users requirements, European space science and Space Station Freedom, payload operations for the Precursor Mission E1, and the strategic role of automation and robotics for Columbus utilization. Other papers are on a joint Austro-Soviet space project AUSTROMIR-91; a study of cognitive functions in microgravity, COGIMIR; the influence of microgravity on immune system and genetic information; and the Mir'92 project. (For individual items see A93-26552 to A93-26573)

  5. The Importance of Biodiversity E-infrastructures for Megadiverse Countries

    PubMed Central

    Canhos, Dora A. L.; Sousa-Baena, Mariane S.; de Souza, Sidnei; Maia, Leonor C.; Stehmann, João R.; Canhos, Vanderlei P.; De Giovanni, Renato; Bonacelli, Maria B. M.; Los, Wouter; Peterson, A. Townsend

    2015-01-01

    Addressing the challenges of biodiversity conservation and sustainable development requires global cooperation, support structures, and new governance models to integrate diverse initiatives and achieve massive, open exchange of data, tools, and technology. The traditional paradigm of sharing scientific knowledge through publications is not sufficient to meet contemporary demands that require not only the results but also data, knowledge, and skills to analyze the data. E-infrastructures are key in facilitating access to data and providing the framework for collaboration. Here we discuss the importance of e-infrastructures of public interest and the lack of long-term funding policies. We present the example of Brazil’s speciesLink network, an e-infrastructure that provides free and open access to biodiversity primary data and associated tools. SpeciesLink currently integrates 382 datasets from 135 national institutions and 13 institutions from abroad, openly sharing ~7.4 million records, 94% of which are associated to voucher specimens. Just as important as the data is the network of data providers and users. In 2014, more than 95% of its users were from Brazil, demonstrating the importance of local e-infrastructures in enabling and promoting local use of biodiversity data and knowledge. From the outset, speciesLink has been sustained through project-based funding, normally public grants for 2–4-year periods. In between projects, there are short-term crises in trying to keep the system operational, a fact that has also been observed in global biodiversity portals, as well as in social and physical sciences platforms and even in computing services portals. In the last decade, the open access movement propelled the development of many web platforms for sharing data. Adequate policies unfortunately did not follow the same tempo, and now many initiatives may perish. PMID:26204382

  6. The Importance of Biodiversity E-infrastructures for Megadiverse Countries.

    PubMed

    Canhos, Dora A L; Sousa-Baena, Mariane S; de Souza, Sidnei; Maia, Leonor C; Stehmann, João R; Canhos, Vanderlei P; De Giovanni, Renato; Bonacelli, Maria B M; Los, Wouter; Peterson, A Townsend

    2015-07-01

    Addressing the challenges of biodiversity conservation and sustainable development requires global cooperation, support structures, and new governance models to integrate diverse initiatives and achieve massive, open exchange of data, tools, and technology. The traditional paradigm of sharing scientific knowledge through publications is not sufficient to meet contemporary demands that require not only the results but also data, knowledge, and skills to analyze the data. E-infrastructures are key in facilitating access to data and providing the framework for collaboration. Here we discuss the importance of e-infrastructures of public interest and the lack of long-term funding policies. We present the example of Brazil's speciesLink network, an e-infrastructure that provides free and open access to biodiversity primary data and associated tools. SpeciesLink currently integrates 382 datasets from 135 national institutions and 13 institutions from abroad, openly sharing ~7.4 million records, 94% of which are associated to voucher specimens. Just as important as the data is the network of data providers and users. In 2014, more than 95% of its users were from Brazil, demonstrating the importance of local e-infrastructures in enabling and promoting local use of biodiversity data and knowledge. From the outset, speciesLink has been sustained through project-based funding, normally public grants for 2-4-year periods. In between projects, there are short-term crises in trying to keep the system operational, a fact that has also been observed in global biodiversity portals, as well as in social and physical sciences platforms and even in computing services portals. In the last decade, the open access movement propelled the development of many web platforms for sharing data. Adequate policies unfortunately did not follow the same tempo, and now many initiatives may perish.

  7. A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems.

    PubMed

    Shen, Lili; Guo, Jiming; Wang, Lei

    2018-06-06

    The network real-time kinematic (RTK) technique can provide centimeter-level real time positioning solutions and play a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in the support of large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we proposed a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution with different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) according to the data set, while the grid approaches are all predefined. The side-effects of clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can be safely used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
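
    A simplified, leader-style stand-in for the clustering idea is sketched below: each user joins the nearest existing cluster centre within the 10 km radius threshold reported above, otherwise a new cluster is created. This is not the authors' SOSC algorithm, and the distance computation is a flat-Earth approximation.

        import math

        RADIUS_KM = 10.0  # cluster radius threshold reported in the abstract

        def dist_km(a, b):
            """Approximate distance between two (lat, lon) points in km."""
            dlat = (a[0] - b[0]) * 111.0
            dlon = (a[1] - b[1]) * 111.0 * math.cos(math.radians((a[0] + b[0]) / 2))
            return math.hypot(dlat, dlon)

        def cluster(users):
            """Assign each user to the nearest centre within RADIUS_KM, else open a new cluster."""
            centres, assignment = [], []
            for u in users:
                near = [(dist_km(u, c), i) for i, c in enumerate(centres)]
                best = min(near, default=(None, None))
                if near and best[0] <= RADIUS_KM:
                    assignment.append(best[1])
                else:
                    centres.append(u)            # user becomes a new cluster centre
                    assignment.append(len(centres) - 1)
            return centres, assignment

        users = [(30.50, 114.30), (30.52, 114.32), (30.70, 114.60)]
        print(cluster(users))  # two clusters: the first pair together, the third alone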

  8. AWARE: Adaptive Software Monitoring and Dynamic Reconfiguration for Critical Infrastructure Protection

    DTIC Science & Technology

    2015-04-29

    in which we applied these adaptation patterns to an adaptive news web server intended to tolerate extremely heavy, unexpected loads. To address...collection of existing models used as benchmarks for OO-based refactoring and an existing web-based repository called REMODD to provide users with model...invariant properties. Specifically, we developed Avida-MDE (based on the Avida digital evolution platform) to support the automatic generation of software

  9. T-dominance: Prioritized Defense Deployment for BYOD Security (Post Print)

    DTIC Science & Technology

    2013-10-01

    infrastructure. Employees’ demand/satisfaction, decreased IT acquisition and support cost, and increased use of cloud/virtualization technologies in...example, a report [8] on hijacking hotel Wi-Fi hotspots for drive-by malware attacks on laptops comes close to what we have in mind; practical man-in...obtaining unwarranted privilege, are often ignored for convenience, or circumvented for customization by the users. Rootkits, like iOS Jailbreak5, are

  10. Engineering With Nature Geographic Project Mapping Tool (EWN ProMap)

    DTIC Science & Technology

    2015-07-01

    EWN ProMap database provides numerous case studies for infrastructure projects such as breakwaters, river engineering dikes, and seawalls that have...the EWN Project Mapping Tool (EWN ProMap) is to assist users in their search for case study information that can be valuable for developing EWN ideas...Essential elements of EWN include: (1) using science and engineering to produce operational efficiencies supporting sustainable delivery of

  11. Municipal water reuse for urban agriculture in Namibia: Modeling nutrient and salt flows as impacted by sanitation user behavior.

    PubMed

    Woltersdorf, L; Scheidegger, R; Liehr, S; Döll, P

    2016-03-15

    Adequate sanitation, wastewater treatment and irrigation infrastructure is often lacking in urban areas of developing countries. While treated, nutrient-rich reuse water is a precious resource for crop production in dry regions, excessive salinity might harm the crops. The aim of this study was to quantify, from a system perspective, the nutrient and salt flows of a new infrastructure connecting water supply, sanitation, wastewater treatment and nutrient-rich water reuse for the irrigation of agriculture. For this, we developed and applied a quantitative assessment method to understand the benefits of, and to support the management of, the new water infrastructure in an urban area in semi-arid Namibia. The nutrient and salt flows, as affected by sanitation user behavior, were quantified by mathematical material flow analysis that accounts for the low availability of suitable and certain data in developing countries by including data ranges and assessing the effects of different assumptions in separate cases. The nutrient and leaching requirements of a crop scheme were also calculated. We found that, with ideal sanitation use, 100% of nutrients and salts are reclaimed and the slightly saline reuse water is sufficient to fertigate 10 m(2)/cap/yr (90% uncertainty interval 7-12 m(2)/cap/yr). However, only 50% of the P contained in human excreta could finally be used for crop nutrition. During the pilot phase, fewer sanitation users than expected used slightly more water per capita, used the toilets less frequently and practiced open defecation more frequently. Therefore, it was only possible to reclaim about 85% of nutrients from human excreta; the reuse water was non-saline and contained fewer nutrients, so that P was the limiting factor for crop fertigation. To reclaim all nutrients from human excreta and fertigate a larger agricultural area, sanitation user behavior needs to be improved. The results and the methodology of this study can be generalized and used worldwide in other semi-arid regions requiring irrigation for agriculture, as well as in urban areas of developing countries with inadequate sanitation infrastructure. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. A Broker-based approach for GEOSS authentication/authorization services

    NASA Astrophysics Data System (ADS)

    Santoro, Mattia; Nativi, Stefano

    2015-04-01

    The Group on Earth Observations (GEO) is a voluntary partnership of governments and international organizations coordinating efforts to build a Global Earth Observation System of Systems (GEOSS). GEOSS aims to achieve societal benefits through voluntary contribution and sharing of resources to better understand the relationships between society and the environment in which we live. The GEOSS Common Infrastructure (GCI) implements a digital infrastructure (e-infrastructure) that coordinates access to these systems, interconnecting and harmonizing their data, applications, models, and products. The GCI component implementing the interoperability arrangements needed to interconnect the data systems contributing to GEOSS is the GEO DAB (Discovery and Access Broker). This provides a unique entry point to which client applications (i.e. the portals and apps) can connect for exploiting (searching, discovering, and accessing) resources available through the GCI. The GEO DAB implements the brokering approach (Nativi et al., 2013) to build a flexible and scalable System of Systems. GEOSS data providers ask for information about who accessed their resources and, in some cases, want to limit the data download. GEOSS users ask for a profiled interaction with the system based on their needs and expertise level. This raised the need for an enrichment of GEO DAB functionalities, i.e. user authentication/authorization. Besides, authentication and authorization are necessary for GEOSS to provide moderated social services - e.g. feedback messages, data "fit for use" comments, etc. In the development of this new functionality, the need to support existing and widely used user credentials (e.g. Google, Twitter, etc.) stems from the GEOSS principles of building on existing systems and lowering entry barriers for users. To cope with these requirements and face the heterogeneity of technologies used by the different data systems and client applications, a broker-based approach to authentication/authorization was introduced as a new functionality of the GEO DAB. This new capability was demonstrated at the last GEO-XI Plenary (November 2014). This work will be presented and discussed. References: Nativi, S.; Craglia, M.; Pearlman, J., "Earth Science Infrastructures Interoperability: The Brokering Approach," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 3, pp. 1118-1129, June 2013.

  13. Involving Users to Improve the Collaborative Logical Framework

    PubMed Central

    2014-01-01

    In order to support collaboration in web-based learning, there is a need for intelligent support that facilitates its management during the design, development, and analysis of the collaborative learning experience and supports both students and instructors. At the aDeNu research group we have proposed the Collaborative Logical Framework (CLF) to create effective scenarios that support learning through interaction, exploration, discussion, and collaborative knowledge construction. This approach draws on artificial intelligence techniques to support and foster effective student collaboration. At the same time, the instructors' workload is reduced, as some of their tasks—especially those related to monitoring student behavior—are automated. After introducing the CLF approach, we present in this paper two formative evaluations with users, carried out to improve the design of this collaborative tool and thus enrich the personalized support provided. In the first, we analyze, following the layered evaluation approach, the results of an observational study with 56 participants. In the second, we tested the infrastructure for gathering emotional data in another observational study with 17 participants. PMID:24592196

  14. EGI-EUDAT integration activity - Pair data and high-throughput computing resources together

    NASA Astrophysics Data System (ADS)

    Scardaci, Diego; Viljoen, Matthew; Vitlacil, Dejan; Fiameni, Giuseppe; Chen, Yin; Sipos, Gergely; Ferrari, Tiziana

    2016-04-01

    EGI (www.egi.eu) is a publicly funded e-infrastructure that gives scientists access to more than 530,000 logical CPUs, 200 PB of disk capacity and 300 PB of tape storage to drive research and innovation in Europe. The infrastructure provides both high-throughput computing and cloud compute/storage capabilities. Resources are provided by about 350 resource centres distributed across 56 countries in Europe, the Asia-Pacific region, Canada and Latin America. EUDAT (www.eudat.eu) is a collaborative pan-European infrastructure providing research data services, training and consultancy for researchers, research communities, research infrastructures and data centres. EUDAT's vision is to enable European researchers and practitioners from any research discipline to preserve, find, access, and process data in a trusted environment, as part of a Collaborative Data Infrastructure (CDI) conceived as a network of collaborating, cooperating centres, combining the richness of numerous community-specific data repositories with the permanence and persistence of some of Europe's largest scientific data centres. In the context of their flagship projects, EGI-Engage and EUDAT2020, EGI and EUDAT started a collaboration in March 2015 to harmonise the two infrastructures, covering technical interoperability, authentication, authorisation and identity management, policy and operations. The main objective of this work is to provide end users with seamless access to an integrated infrastructure offering both EGI and EUDAT services, thereby pairing data and high-throughput computing resources together. To define the roadmap of this collaboration, EGI and EUDAT selected a set of relevant user communities, already collaborating with both infrastructures, which could bring requirements and help assign the right priorities to each of them. In this way, the activity has been driven by the end users from the beginning. The identified user communities are relevant European research infrastructures in the fields of Earth science (EPOS and ICOS), bioinformatics (BBMRI and ELIXIR) and space physics (EISCAT-3D). The first outcome of this activity has been the definition of a generic use case that captures the typical user scenario with respect to the integrated use of the EGI and EUDAT infrastructures. This generic use case allows a user to instantiate a set of virtual machine images on the EGI Federated Cloud to perform computational jobs that analyse data previously stored on EUDAT long-term storage systems. The results of such analyses can be staged back to EUDAT storage and, if needed, assigned persistent identifiers (PIDs) for future use. The implementation of this generic use case requires the following integration activities between EGI and EUDAT: (1) harmonisation of the user authentication and authorisation models, and (2) implementation of interface connectors between the relevant EGI and EUDAT services, particularly the EGI cloud compute facilities and the EUDAT long-term storage and PID systems. In the presentation, the collected user requirements and the implementation status of the generic use case will be shown. Furthermore, how the generic use case is currently applied to satisfy EPOS and ICOS needs will be described.
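
    The generic use case lends itself to a simple orchestration sketch. The Python below mimics the stage-analyse-archive-mint-PID loop with hypothetical stand-in classes; none of the method names correspond to actual EGI or EUDAT service interfaces.

      # Sketch of the generic use case: stage data out of long-term storage,
      # analyse it on cloud compute, archive the result and mint a PID.
      # All classes, methods and identifiers here are illustrative stand-ins.
      class EudatStorage:
          def stage_to(self, dataset: str, target_url: str) -> str:
              print(f"staging {dataset} to {target_url}")
              return f"{target_url}/{dataset}"

          def store(self, result_path: str) -> str:
              print(f"storing {result_path} on long-term storage")
              return "eudat://archive/" + result_path

          def mint_pid(self, stored_url: str) -> str:
              # EUDAT assigns persistent identifiers (PIDs) to archived objects.
              return "pid:21.T0000/" + stored_url.rsplit("/", 1)[-1]

      class EgiFederatedCloud:
          def launch_vm(self, image: str) -> "EgiFederatedCloud":
              print(f"instantiating VM image {image}")
              return self

          def run_job(self, input_url: str) -> str:
              print(f"analysing {input_url}")
              return "analysis_output.nc"

      eudat, egi = EudatStorage(), EgiFederatedCloud()
      staged = eudat.stage_to("climate_obs_2015.nc", "egi://scratch")
      output = egi.launch_vm("analysis-image-v1").run_job(staged)
      pid = eudat.mint_pid(eudat.store(output))
      print("result archived under", pid)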

  15. Services Oriented Smart City Platform Based On 3d City Model Visualization

    NASA Astrophysics Data System (ADS)

    Prandi, F.; Soave, M.; Devigili, F.; Andreolli, M.; De Amicis, R.

    2014-04-01

    The rapid technological evolution that characterizes all the disciplines involved in the broad concept of smart cities is becoming a key factor in triggering true user-driven innovation. However, to extend the smart city concept to a wide geographical target, an infrastructure is required that allows the integration of heterogeneous geographical information and sensor networks into a common technological ground. In this context, 3D city models will play an increasingly important role in our daily lives and become an essential part of the modern city information infrastructure (Spatial Data Infrastructure). The work presented in this paper describes an innovative service-oriented architecture software platform aimed at providing smart-city services on top of 3D urban models. 3D city models are the basis of many applications and can become the platform for integrating city information within the smart-city context. In particular, the paper investigates how the efficient visualisation of 3D city models using different levels of detail (LODs) is one of the pivotal technological challenges in supporting smart-city applications. The goal is to provide the final user with realistic and abstract 3D representations of the urban environment and the possibility to interact with the massive amount of semantic information contained in the geospatial 3D city model. The proposed solution, using OGC standards and a custom service to provide 3D city models, lets users consume the services and interact with the 3D model via the Web in a more effective way.
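
    A core ingredient of efficient 3D city model visualisation is choosing a level of detail as a function of viewing context. The sketch below shows a minimal distance-based LOD selector in Python; the thresholds and the LOD semantics (roughly CityGML-style) are illustrative assumptions, not the paper's algorithm.

      # Illustrative distance-based LOD selection of the kind a 3D city model
      # streaming service might apply; thresholds are invented placeholders.
      def select_lod(camera_distance_m: float) -> int:
          """Return a CityGML-style LOD: 1 = block model ... 3 = detailed."""
          if camera_distance_m > 2000:
              return 1   # coarse block volumes for far views
          if camera_distance_m > 500:
              return 2   # roof shapes, no facade detail
          return 3       # full architectural detail near the viewer

      for d in (5000, 800, 120):
          print(f"{d:>5} m -> LOD{select_lod(d)}")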

  16. Astronomical Infrastructure for Data Access (AIDA): service activities for higher education and outreach

    NASA Astrophysics Data System (ADS)

    Iafrate, G.; Ramella, M.; Boch, T.; Bonnarel, F.; Chèreau, F.; Fernique, P.; Osuna, P.

    2009-04-01

    We present preliminary simple interfaces developed to enable students, teachers, amateur astronomers and the general public to access and use the wealth of astronomical data available in ground-based and space archives through the European Virtual Observatory (EuroVO). The development of these outreach interfaces is the aim of a work package of EuroVO-AIDA (Astronomical Infrastructure for Data Access), a project supported by the EU in the framework of the FP7 Infrastructure Scientific Research Repositories initiative (project RI2121104). The aim of AIDA is to create an operating infrastructure enabling and stimulating new scientific usage of astronomy digital repositories. EuroVO-AIDA is a collaboration between six European countries (PI Françoise Genova, CDS). The professional tools we adapt to the requirements of outreach activities are Aladin (CDS), Stellarium/VirGO (ESO) and VOSpec (ESA VO). Some initial requirements were set a priori in order to produce a first version of the simplified interfaces, but the plan is to test these initial simplified versions with a sample of target users in order to take their feedback into account in the development of the final outreach interface. The core of the test program consists of use cases we designed and complemented with multilingual documentation covering both the astrophysical context and the use of the software. In the special case of students in the age group 14-18 and their teachers, we take our use cases to schools. We run the tests in classrooms, supporting students working on PCs connected to the internet. At the current stage of the project, we are collecting user feedback. Relevant links: Euro-VO AIDA Overview http://www.euro-vo.org/pub/aida/overview.html Euro-VO AIDA WP5 http://cds.u-strasbg.fr/twikiAIDA/bin/view/EuroVOAIDA/WP5WorkProgramme

  17. SeaBIRD: A Flexible and Intuitive Planetary Datamining Infrastructure

    NASA Astrophysics Data System (ADS)

    Politi, R.; Capaccioni, F.; Giardino, M.; Fonte, S.; Capria, M. T.; Turrini, D.; De Sanctis, M. C.; Piccioni, G.

    2018-04-01

    Description of SeaBIRD (Searchable and Browsable Infrastructure for Repository of Data), a software and hardware infrastructure for multi-mission planetary data mining, with a web-based GUI and an API set for integration into users' software.

  18. Community Needs Assessment and Portal Prototype Development for an Arctic Spatial Data Infrastructure (ASDI)

    NASA Astrophysics Data System (ADS)

    Wiggins, H. V.; Warnick, W. K.; Hempel, L. C.; Henk, J.; Sorensen, M.; Tweedie, C. E.; Gaylord, A. G.

    2007-12-01

    As the creation and use of geospatial data in research, management, logistics, and education applications have proliferated, there is now tremendous potential for advancing science through a variety of cyber-infrastructure applications, including Spatial Data Infrastructure (SDI) and related technologies. SDIs provide a necessary and common framework of standards, securities, policies, procedures, and technology to support the effective acquisition, coordination, dissemination and use of geospatial data by multiple and distributed stakeholder and user groups. Despite the numerous research activities in the Arctic, there is no established SDI and, because of this lack of a coordinated infrastructure, there is inefficiency, duplication of effort, and reduced data quality and searchability of arctic geospatial data. The urgency of establishing this framework is significant considering the myriad of data being collected in celebration of the International Polar Year (IPY) in 2007-2008 and the current international momentum for an improved and integrated circum-arctic terrestrial-marine-atmospheric environmental observatories network. The key objective of this project is to lay the foundation for full implementation of an Arctic Spatial Data Infrastructure (ASDI) through an assessment of community needs, readiness, and resources and through the development of a prototype web-mapping portal.

  19. QOS-aware error recovery in wireless body sensor networks using adaptive network coding.

    PubMed

    Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah

    2014-12-29

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures that require a strict guarantee of quality of service (QoS) in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill both users'/applications' and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. Network coding-based error recovery is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, under dynamic network environments and changing user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs from both perspectives. Based on these requirements, this paper proposes an adaptive, network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.
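
    To make the mechanism concrete: in its simplest form, network coding-based error recovery sends a few coded (e.g. XOR-combined) packets alongside a generation of data packets, and an adaptive variant tunes the amount of redundancy to the observed network state. The Python sketch below illustrates that idea with a single-parity code and an invented adaptation rule; it is not the paper's scheme.

      # Sketch of adaptive network coding: the number of redundant coded
      # packets per generation is adapted to the observed loss rate.
      # The single-parity XOR code and the adaptation rule are illustrative.
      from functools import reduce

      def xor_packets(packets):
          """Combine equal-length packets bytewise with XOR (a parity packet)."""
          return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*packets))

      def redundancy_for(loss_rate: float, generation_size: int) -> int:
          # More observed loss -> more coded packets, bounded for energy cost.
          return min(generation_size, max(1, round(loss_rate * generation_size * 2)))

      generation = [b"\x01\x02", b"\x03\x04", b"\x05\x06", b"\x07\x08"]
      for loss in (0.05, 0.20, 0.40):
          n = redundancy_for(loss, len(generation))
          coded = [xor_packets(generation)] * n  # simplistic single-parity code
          print(f"loss {loss:.0%}: send {len(generation)} data + {len(coded)} coded packets")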

  20. Environmental System Science Data Infrastructure for a Virtual Ecosystem (ESS-DIVE) - A New U.S. DOE Data Archive

    NASA Astrophysics Data System (ADS)

    Agarwal, D.; Varadharajan, C.; Cholia, S.; Snavely, C.; Hendrix, V.; Gunter, D.; Riley, W. J.; Jones, M.; Budden, A. E.; Vieglais, D.

    2017-12-01

    The ESS-DIVE archive is a new U.S. Department of Energy (DOE) data archive designed to provide long-term stewardship and use of data from observational, experimental, and modeling activities in the earth and environmental sciences. The ESS-DIVE infrastructure is constructed with the long-term vision of enabling broad access to and usage of the DOE-sponsored data stored in the archive. It is designed as a scalable framework that incentivizes data providers to contribute well-structured, high-quality data to the archive and that enables the user community to easily build data processing, synthesis, and analysis capabilities using those data. The key innovations in our design include: (1) application of user-experience research methods to understand the needs of users and data contributors; (2) support for early data archiving during project data QA/QC and before public release; (3) a focus on implementing data standards in collaboration with the community; (4) support for community-built tools for data search, interpretation, analysis, and visualization; (5) a data fusion database to support search of the data extracted from submitted packages and of data available in partner data systems such as the Earth System Grid Federation (ESGF) and DataONE; and (6) support for archiving data packages that are not to be released to the public. ESS-DIVE data contributors will be able to archive and version their data and metadata, obtain data DOIs, search for and access ESS data and metadata via web and programmatic portals, and provide data and metadata in standardized forms. The ESS-DIVE archive and catalog will be federated with other existing catalogs, allowing cross-catalog metadata search and data exchange with existing systems, including DataONE's Metacat search. ESS-DIVE is operated by a multidisciplinary team from Berkeley Lab, the National Center for Ecological Analysis and Synthesis (NCEAS), and DataONE. The primary data copies are hosted at DOE's NERSC supercomputing facility, with replicas at DataONE nodes.

  1. Envri Cluster - a Community-Driven Platform of European Environmental Researcher Infrastructures for Providing Common E-Solutions for Earth Science

    NASA Astrophysics Data System (ADS)

    Asmi, A.; Sorvari, S.; Kutsch, W. L.; Laj, P.

    2017-12-01

    European long-term environmental research infrastructures (often referred to as ESFRI RIs) are the core facilities providing services for scientists in their quest to understand and predict the complex Earth system and its functioning, which requires long-term efforts to identify environmental changes (trends, thresholds and resilience, interactions and feedbacks). Many of the research infrastructures were originally developed to respond to the needs of their specific research communities; however, it is clear that strong collaboration among research infrastructures is needed to serve trans-boundary research, which requires exploring scientific questions at the intersection of different scientific fields, conducting joint research projects and developing concepts, devices, and methods that can be used to integrate knowledge. European environmental research infrastructures have already worked together successfully for many years and have established a cluster - the ENVRI cluster - for their collaborative work. The ENVRI cluster acts as a collaborative platform where the RIs can jointly agree on common solutions for their operations, draft strategies and policies, and share best practices and knowledge. The supporting project for the ENVRI cluster, the ENVRIplus project, brings together 21 European research infrastructures and infrastructure networks to work on joint technical solutions, data interoperability, access management, training, strategies and dissemination efforts. The ENVRI cluster acts as a one-stop shop for multidisciplinary RI users and other collaborative initiatives, projects and programmes, and coordinates and implements jointly agreed RI strategies.

  2. ISTIMES Integrated System for Transport Infrastructures Surveillance and Monitoring by Electromagnetic Sensing

    NASA Astrophysics Data System (ADS)

    Argenti, M.; Giannini, V.; Averty, R.; Bigagli, L.; Dumoulin, J.

    2012-04-01

    The EC FP7 ISTIMES project has the goal of realizing an ICT-based system exploiting distributed and local sensors for non-destructive electromagnetic monitoring, in order to make critical transport infrastructures more reliable and safe. Higher situation awareness, thanks to real-time, detailed information and images of the monitored infrastructure status, improves decision-making capabilities for emergency management stakeholders. Web-enabled sensors and a service-oriented approach form the core of the architecture, providing a system that adopts open standards (e.g. OGC SWE, OGC CSW) and strives for full interoperability with other GMES and European Spatial Data Infrastructure initiatives as well as compliance with INSPIRE. The system exploits an open, easily scalable network architecture to accommodate a wide range of sensors, integrated with a set of tools for handling, analyzing and processing large data volumes from different organizations with different data models. Situation awareness tools are also integrated in the system. The definition of sensor observations and services follows a metadata model based on the ISO 19115 core set of metadata elements and the O&M model of OGC SWE. The ISTIMES infrastructure is based on an e-infrastructure for geospatial data sharing, with a Data Catalog that implements the discovery services for sensor data retrieval, acting as a broker through static connections based on standard SOS and WNS interfaces; a Decision Support component that helps decision makers by providing support for data fusion and inference and the generation of situation indexes; a Presentation component that implements system-user interaction services for information publication and rendering, by means of a web portal using SOA design principles; and a security framework using Shibboleth open-source middleware based on the Security Assertion Markup Language, supporting Single Sign-On (SSO). ACKNOWLEDGEMENT - The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 225663
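
    Since the Data Catalog brokers sensor data retrieval over standard SOS interfaces, a client interaction reduces to building a standards-conformant request. The sketch below assembles an OGC SOS 2.0 GetObservation request in key-value-pair form; the endpoint, offering and property identifiers are placeholders, and only the parameter names come from the SOS standard.

      # Sketch of the standards-based access path: an OGC SOS GetObservation
      # request in key-value-pair (KVP) form. Endpoint and identifiers are
      # hypothetical; the parameter names follow the SOS 2.0 KVP binding.
      from urllib.parse import urlencode

      BASE = "https://example.org/istimes/sos"  # hypothetical SOS endpoint
      params = {
          "service": "SOS",
          "version": "2.0.0",
          "request": "GetObservation",
          "offering": "bridge_strain_sensors",            # assumed offering id
          "observedProperty": "urn:ogc:def:property:strain",
          "temporalFilter": "om:phenomenonTime,2012-01-01/2012-01-02",
      }
      print(f"{BASE}?{urlencode(params)}")  # URL is built, not sent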

  3. Smart Valley Infrastructure.

    ERIC Educational Resources Information Center

    Maule, R. William

    1994-01-01

    Discusses prototype information infrastructure projects in northern California's Silicon Valley. The strategies of the public and private telecommunications carriers vying for backbone services and industries developing end-user infrastructure technologies via office networks, set-top box networks, Internet multimedia, and "smart homes"…

  4. Waggle: A Framework for Intelligent Attentive Sensing and Actuation

    NASA Astrophysics Data System (ADS)

    Sankaran, R.; Jacob, R. L.; Beckman, P. H.; Catlett, C. E.; Keahey, K.

    2014-12-01

    Advances in sensor-driven computation and computationally steered sensing will greatly enable future research in fields including the environmental and atmospheric sciences. We will present "Waggle," an open-source hardware and software infrastructure developed with two goals: (1) reducing the separation and latency between sensing and computing and (2) improving the reliability and longevity of sensing-actuation platforms in challenging and costly deployments. Inspired by "deep-space probe" systems, the Waggle platform design includes features that can support longitudinal studies, deployments with varying communication links, and remote management capabilities. Waggle lowers the barrier for scientists to incorporate real-time data from their sensors into their computations and to manipulate the sensors or provide feedback through actuators. A standardized software and hardware design allows quick addition of new sensors/actuators and associated software in the nodes and enables them to be coupled with computational codes both in situ and on external compute infrastructure. The Waggle framework currently drives the deployment of two observational systems - a portable and self-sufficient weather platform for the study of small-scale effects in Chicago's urban core, and an open-ended distributed instrument in Chicago that aims to support several research pursuits across a broad range of disciplines including urban planning, microbiology and computer science. Built around open-source software, hardware, and the Linux OS, the Waggle system comprises two components - the Waggle field node and the Waggle cloud-computing infrastructure. The Waggle field node affords a modular, scalable, fault-tolerant, secure, and extensible platform for hosting sensors and actuators in the field. It supports in situ computation and data storage, and integration with cloud-computing infrastructure. The Waggle cloud infrastructure is designed with the goal of scaling to several hundreds of thousands of Waggle nodes. It supports aggregating data from sensors hosted by the nodes, staging computation, relaying feedback to the nodes and serving data to end users. We will discuss the Waggle design principles and their applicability to various observational research pursuits, and demonstrate its capabilities.
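
    One way to picture the field-node/cloud split is the message a node emits for aggregation. The Python sketch below builds such a message with local, in situ timestamping; the envelope fields are illustrative and do not reflect the actual Waggle protocol.

      # Sketch of a field-node-to-cloud sensor message; the envelope fields
      # are invented for illustration, not the Waggle wire format.
      import json, time

      def sensor_message(node_id: str, sensor: str, value: float, unit: str) -> str:
          return json.dumps({
              "node": node_id,
              "sensor": sensor,
              "value": value,
              "unit": unit,
              "timestamp": time.time(),  # local in-situ timestamping
          })

      # A node could buffer messages locally and forward them whenever the
      # (possibly intermittent) uplink to the cloud infrastructure is up.
      print(sensor_message("chicago-node-042", "air_temperature", 23.7, "degC"))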

  5. Swiss Experiment: Design, implementation and use of a cross-disciplinary infrastructure for data intensive science

    NASA Astrophysics Data System (ADS)

    Dawes, N.; Salehi, A.; Clifton, A.; Bavay, M.; Aberer, K.; Parlange, M. B.; Lehning, M.

    2010-12-01

    It has long been known that environmental processes are cross-disciplinary, but data have continued to be acquired and held for single purposes. Swiss Experiment is a rapidly evolving, cross-disciplinary, distributed sensor data infrastructure, where tools for the environmental science community stem directly from computer science research. The platform uses the bleeding edge of computer science to acquire, store and distribute data and metadata from all environmental science disciplines at a variety of temporal and spatial resolutions. SwissEx is simultaneously developing new technologies to allow low-cost, high spatial and temporal resolution measurements, such that small areas can be intensively monitored. These data are then combined with existing widespread, low-density measurements in the cross-disciplinary platform to provide well-documented datasets that are of use to multiple research disciplines. We present a flexible, generic infrastructure at an advanced stage of development. The infrastructure makes the most of Web 2.0 technologies for a collaborative working environment and as a user interface for a metadata database. This environment is already closely integrated with GSN, an open-source database middleware developed under Swiss Experiment for the acquisition and storage of generic time-series data (2D and 3D). GSN can be queried directly by common data processing packages and makes data available in real time to models and third-party software interfaces via its web service interface. It also provides real-time push or pull data exchange between instances, a user management system that leaves data owners in charge of their data, advanced real-time processing and much more. The SwissEx interface is steadily gaining users and supporting environmental science in Switzerland. It is also an integral part of the environmental education projects ClimAtscope and O3E, where the technologies can provide rapid feedback of results for children of all ages and where the data from their own stations can be compared to national data networks.
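
    Because GSN exposes time-series data through a web service interface, a typical client pull is just a parameterised HTTP request. The sketch below is an assumption-laden illustration: the URL pattern, virtual-sensor name and field names are invented, not the documented GSN API.

      # Hypothetical sketch of pulling a time-series window from a GSN
      # instance over HTTP; the endpoint and parameters are assumptions.
      import urllib.request
      from urllib.parse import urlencode

      GSN = "http://gsn.example.org"  # hypothetical GSN deployment
      query = urlencode({
          "vs": "wannengrat_wind",           # virtual sensor name (assumed)
          "field": "wind_speed",
          "from": "2010-01-01T00:00:00",
          "to": "2010-01-02T00:00:00",
      })
      url = f"{GSN}/data?{query}"
      # with urllib.request.urlopen(url) as resp:   # uncomment to fetch
      #     print(resp.read().decode())
      print("would fetch:", url)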

  6. Cyber Security Threats to Safety-Critical, Space-Based Infrastructures

    NASA Astrophysics Data System (ADS)

    Johnson, C. W.; Atencia Yepez, A.

    2012-01-01

    Space-based systems play an important role within national critical infrastructures. They are being integrated into advanced air-traffic management applications, rail signalling systems, energy distribution software, etc. Unfortunately, the end users of communications, location sensing and timing applications often fail to understand that these infrastructures are vulnerable to a wide range of security threats. The following pages focus on concerns associated with potential cyber-attacks. These are important because future attacks may invalidate many of the safety assumptions that support the provision of critical space-based services. These safety assumptions are based on standard forms of hazard analysis that ignore cyber-security considerations. This is a significant limitation when, for instance, security attacks can simultaneously exploit multiple vulnerabilities in a manner that would never occur without a deliberate adversary seeking to damage space-based systems and ground infrastructures. We address this concern through the development of a combined safety and security risk assessment methodology. The aim is to identify attack scenarios that justify the allocation of additional design resources so that safety barriers can be strengthened to increase our resilience against security threats.

  7. Data interoperability between European Environmental Research Infrastructures and their contribution to global data networks

    NASA Astrophysics Data System (ADS)

    Kutsch, W. L.; Zhao, Z.; Hardisty, A.; Hellström, M.; Chin, Y.; Magagna, B.; Asmi, A.; Papale, D.; Pfeil, B.; Atkinson, M.

    2017-12-01

    Environmental Research Infrastructures (ENVRIs) are expected to become important pillars not only for supporting their own scientific communities, but also (a) for interdisciplinary research and (b) for the European Earth observation programme Copernicus, as a contribution to the Global Earth Observation System of Systems (GEOSS) and to global thematic data networks. As such, it is very important that the data-related activities of the ENVRIs be well integrated. This requires common policies, models and e-infrastructure to optimise technological implementation, define workflows, and ensure coordination, harmonisation, integration and interoperability of data, applications and other services. The key is interoperating common metadata systems, utilising a richer metadata model as the 'switchboard' for interoperation, with formal syntax and declared semantics. The metadata characterises data, services, users and ICT resources (including sensors and detectors). The European cluster project ENVRIplus has developed a reference model (ENVRI RM) for a common data infrastructure architecture to promote interoperability among ENVRIs. The presentation will provide an overview of recent progress and give examples of the integration of ENVRI data in global integration networks.

  8. Toward a new information infrastructure in health technology assessment: communication, design, process, and results.

    PubMed

    Neikter, Susanna Allgurin; Rehnqvist, Nina; Rosén, Måns; Dahlgren, Helena

    2009-12-01

    The aim of this study was to facilitate effective internal and external communication of an international network and to explore how to support communication and work processes in health technology assessment (HTA). STRUCTURE AND METHODS: The European network for Health Technology Assessment (EUnetHTA) connected sixty-four HTA partner organizations from thirty-three countries. User needs in the different steps of the HTA process were the starting point for developing an information system. A step-wise, interdisciplinary, creative approach was used in developing practical tools. An Information Platform facilitated the exchange of scientific information between partners and with external target groups. More than 200 virtual meetings were set up during the project using an e-meeting tool. A Clearinghouse prototype was developed with the intent of offering a single point of access to HTA-relevant information. This evolved into a next step not planned from the outset: developing a running HTA Information System including several web-based tools to support communication and daily HTA processes. A communication strategy guided the communication effort, focusing on practical tools, creating added value, involving stakeholders, and avoiding duplication of effort. Modern technology enables a new information infrastructure for HTA. The potential of information and communication technology was used as a strategic tool. Several target groups were represented among the partners, which supported collaboration and made it easier to identify user needs. A distinctive visual identity made it easier to gain and maintain visibility on a limited budget.

  9. The NHERI RAPID Facility: Enabling the Next-Generation of Natural Hazards Reconnaissance

    NASA Astrophysics Data System (ADS)

    Wartman, J.; Berman, J.; Olsen, M. J.; Irish, J. L.; Miles, S.; Gurley, K.; Lowes, L.; Bostrom, A.

    2017-12-01

    The NHERI post-disaster, rapid response research (or "RAPID") facility, headquartered at the University of Washington (UW), is a collaboration between UW, Oregon State University, Virginia Tech, and the University of Florida. The RAPID facility will enable natural hazard researchers to conduct next-generation quick-response research through reliable acquisition and community sharing of high-quality, post-disaster data sets that will enable characterization of civil infrastructure performance under natural hazard loads, evaluation of the effectiveness of current and previous design methodologies, understanding of socio-economic dynamics, calibration of computational models used to predict civil infrastructure component and system response, and development of solutions for resilient communities. The facility will provide investigators with the hardware, software and support services needed to collect, process and assess perishable interdisciplinary data following extreme natural hazard events. Support to the natural hazards research community will be provided through training and educational activities, field deployment services, and by promoting public engagement with science and engineering. Specifically, the RAPID facility is undertaking the following strategic activities: (1) acquiring, maintaining, and operating state-of-the-art data collection equipment; (2) developing and supporting mobile applications to support interdisciplinary field reconnaissance; (3) providing advisory services and basic logistics support for research missions; (4) facilitating the systematic archiving, processing and visualization of acquired data in DesignSafe-CI; (5) training a broad user base through workshops and other activities; and (6) engaging the public through citizen science, as well as through community outreach and education. The facility commenced operations in September 2016 and will begin field deployments in September 2018. This poster will provide an overview of the vision for the RAPID facility, the equipment that will be available for use, the facility's operations, and opportunities for user training and facility use.

  10. CERN data services for LHC computing

    NASA Astrophysics Data System (ADS)

    Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.

    2017-10-01

    Dependability, resilience, adaptability and efficiency: growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad range of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical to the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run and 11 PB in July 2016) and with concurrent, complex production workloads. In parallel, our systems provide the platform for continuous user- and experiment-driven workloads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR for large-scale storage; CERNBox for end-user access and sharing; Ceph as the data back-end for the CERN OpenStack infrastructure, NFS services and S3 functionality; and AFS for legacy distributed-file-system services. In this paper we summarise the experience of supporting the LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment, with pluggable protocols, tuneable QoS, sharing capabilities and fine-grained ACL management, while continuing to guarantee dependable and robust services.

  11. Organizing phenological data resources to inform natural resource conservation

    USGS Publications Warehouse

    Rosemartin, Alyssa H.; Crimmins, Theresa M.; Enquist, Carolyn A.F.; Gerst, Katharine L.; Kellermann, Jherime L.; Posthumus, Erin E.; Denny, Ellen G.; Guertin, Patricia; Marsh, Lee; Weltzin, Jake F.

    2014-01-01

    Changes in the timing of plant and animal life cycle events, in response to climate change, are already happening across the globe. The impacts of these changes may affect biodiversity via disruption to mutualisms, trophic mismatches, invasions and population declines. To understand the nature, causes and consequences of changed, varied or static phenologies, new data resources and tools are being developed across the globe. The USA National Phenology Network is developing a long-term, multi-taxa phenological database, together with a customizable infrastructure, to support conservation and management needs. We present current and potential applications of the infrastructure, across scales and user groups. The approaches described here are congruent with recent trends towards multi-agency, large-scale research and action.

  12. Xi-CAM v1.2.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PANDOLFI, RONALD; KUMAR, DINESH; VENKATAKRISHNAN, SINGANALLUR

    Xi-CAM aims to provide a community-driven platform for multimodal analysis in synchrotron science. The platform core provides a robust plugin infrastructure for extensibility, allowing continuing development to simply add further functionality. Current modules include tools for characterization with (GI)SAXS, tomography, and XAS. This will continue to serve as a development base as algorithms for multimodal analysis develop. Seamless remote data access, visualization and analysis are key elements of Xi-CAM and will become critical to synchrotron data infrastructure as expectations for future data volumes and acquisition rates rise with continuously increasing throughputs. The highly interactive design elements of Xi-CAM will similarly support a generation of users who depend on immediate data quality feedback during high-throughput or burst acquisition modes.

  13. The Human-Robot Interaction Operating System

    NASA Technical Reports Server (NTRS)

    Fong, Terrence; Kunz, Clayton; Hiatt, Laura M.; Bugajska, Magda

    2006-01-01

    In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.

  14. Mixed Methodology to Predict Social Meaning for Decision Support

    DTIC Science & Technology

    2013-09-01

    regular usage of Standard American English (SAE) that also ranges in use of stylistic features that identify users as members of certain street gangs ... membership based solely on their use of language. While aspects of gang language, such as the stylistic tendencies of the language of graffiti (Adams and ... stylistics of gang language online, as a mode of code switching that reflects the infrastructure of the larger gang community, has been little studied

  15. CMS distributed data analysis with CRAB3

    NASA Astrophysics Data System (ADS)

    Mascheroni, M.; Balcas, J.; Belforte, S.; Bockelman, B. P.; Hernandez, J. M.; Ciangottini, D.; Konstantinov, P. B.; Silva, J. M. D.; Ali, M. A. B. M.; Melo, A. M.; Riahi, H.; Tanasijczuk, A. J.; Yusli, M. N. B.; Wolf, M.; Woodard, A. E.; Vaandering, E.

    2015-12-01

    The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. The new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.
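
    The client/server split described above can be pictured as a thin client posting a task description to the central server's REST interface. The Python sketch below builds (without sending) such a request; the resource path, field names and values are placeholders, not the actual CRAB3 REST schema.

      # Sketch of a lightweight client talking to a central REST server;
      # endpoint, fields and values are hypothetical stand-ins.
      import json
      import urllib.request

      def submit_task(server: str, task: dict) -> urllib.request.Request:
          body = json.dumps(task).encode()
          return urllib.request.Request(
              f"{server}/crabserver/task",   # hypothetical resource path
              data=body,
              headers={"Content-Type": "application/json"},
              method="POST",
          )

      req = submit_task("https://cmsweb.example.org", {
          "workflow": "analysis_2015",
          "dataset": "/SingleMuon/Run2015-v1/AOD",   # illustrative dataset
          "splitting": "FileBased",
          "units_per_job": 10,
      })
      print(req.full_url, req.get_method())  # request built, not sent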

  16. Multi-Sensor Distributive On-line Processing, Visualization, and Analysis Infrastructure for an Agricultural Information System at the NASA Goddard Earth Sciences DAAC

    NASA Astrophysics Data System (ADS)

    Teng, W.; Berrick, S.; Leptoukh, G.; Liu, Z.; Rui, H.; Pham, L.; Shen, S.; Zhu, T.

    2004-12-01

    The Goddard Space Flight Center Earth Sciences Data and Information Services Center (GES DISC) Distributed Active Archive Center (DAAC) is developing an Agricultural Information System (AIS), evolved from an existing TRMM Online Visualization and Analysis System (TOVAS), which will operationally provide precipitation and other satellite data products and services. AIS outputs will be integrated into existing operational decision support systems for global crop monitoring, such as that of the U.N. World Food Program. The ability to use the raw data stored in the GES DAAC archives is highly dependent on having a detailed understanding of the data's internal structure and physical implementation. To gain this understanding is a time-consuming process and not a productive investment of the user's time. This is an especially difficult challenge when users need to deal with multi-sensor data that usually are of different structures and resolutions. The AIS has taken a major step towards meeting this challenge by incorporating an underlying infrastructure, called the GES-DISC Interactive Online Visualization and Analysis Infrastructure or "Giovanni," that integrates various components to support web interfaces that allow users to perform interactive analysis on-line without downloading any data. Several instances of the Giovanni-based interface have been or are being created to serve users of TRMM precipitation, MODIS aerosol, and SeaWiFS ocean color data, as well as agricultural applications users. Giovanni-based interfaces are simple to use but powerful. The user selects geophysical parameters, area of interest, and time period; and the system generates an output on screen in a matter of seconds. The currently available output options are (1) area plot - averaged or accumulated over any available data period for any rectangular area; (2) time plot - time series averaged over any rectangular area; (3) Hovmoller plots - longitude-time and latitude-time plots; (4) ASCII output - for all plot types; and (5) image animation - for area plot. Planned output options for the near-future include correlation plots and GIS-compatible outputs. The AIS will enable the remote, interoperable access to distributed data, because the current Giovanni implementation incorporates the GrADS-DODS Server (GDS), a stable, secure data server that provides subsetting and analysis services across the Internet, for any GrADS-readable data set. The subsetting capability allows users to retrieve a specified spatial region from a large data set, eliminating the need to first download the entire data set. The analysis capability allows users to retrieve the results of an operation applied to one or more data sets on the server. The Giovanni-GDS technology allows the serving of data, through convenient on-line analysis tools, from any location where GDS and a few GrADS scripts are installed. The GES-DISC implementation of this technology is unique in the way it enables multi-sensor processing and analysis.
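
    The subsetting capability described above rests on server-side hyperslab requests of the kind OPeNDAP-family servers such as GDS accept. The sketch below builds such a constraint-expression URL in Python; the dataset path, variable name and index ranges are invented for illustration.

      # Sketch of server-side subsetting: an OPeNDAP-style constraint
      # expression asks the server for just a slice, so the user never
      # downloads the whole data set. Dataset name and indices are invented.
      def opendap_subset_url(base, var, t, lat, lon):
          """Build a hyperslab request: [start:stride:stop] per dimension."""
          ce = (f"{var}[{t[0]}:1:{t[1]}]"
                f"[{lat[0]}:1:{lat[1]}][{lon[0]}:1:{lon[1]}]")
          return f"{base}.ascii?{ce}"

      url = opendap_subset_url(
          "http://example.gov/dods/trmm_3b42",   # hypothetical GDS dataset
          "precip", t=(0, 30), lat=(100, 120), lon=(200, 220),
      )
      print(url)
      # An "area plot averaged over a region" then reduces this slice,
      # e.g. a mean over the lat/lon axes, on the server or the client.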

  17. Geographic Hotspots of Critical National Infrastructure.

    PubMed

    Thacker, Scott; Barr, Stuart; Pant, Raghav; Hall, Jim W; Alderson, David

    2017-12-01

    Failure of critical national infrastructures can result in major disruptions to society and the economy. Understanding the criticality of individual assets and the geographic areas in which they are located is essential for targeting investments to reduce risks and enhance system resilience. Within this study we provide new insights into the criticality of real-life critical infrastructure networks by integrating high-resolution data on infrastructure location, connectivity, interdependence, and usage. We propose a metric of infrastructure criticality in terms of the number of users who may be directly or indirectly disrupted by the failure of physically interdependent infrastructures. Kernel density estimation is used to integrate spatially discrete criticality values associated with individual infrastructure assets, producing a continuous surface from which statistically significant infrastructure criticality hotspots are identified. We develop a comprehensive and unique national-scale demonstration for England and Wales that utilizes previously unavailable data from the energy, transport, water, waste, and digital communications sectors. The testing of 200,000 failure scenarios identifies that hotspots are typically located around the periphery of urban areas where there are large facilities upon which many users depend or where several critical infrastructures are concentrated in one location. © 2017 Society for Risk Analysis.
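
    The hotspot step can be illustrated compactly: discrete, criticality-weighted asset locations are smoothed into a continuous surface by kernel density estimation and thresholded. The Python sketch below uses synthetic data and a simplistic threshold in place of the paper's significance testing.

      # Sketch of criticality-weighted kernel density estimation over
      # synthetic asset locations; not the study's actual data or test.
      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)
      xy = rng.uniform(0, 100, size=(2, 200))                  # asset locations (km)
      criticality = rng.lognormal(mean=6, sigma=1, size=200)   # users disrupted

      kde = gaussian_kde(xy, weights=criticality)   # criticality-weighted KDE
      gx, gy = np.mgrid[0:100:200j, 0:100:200j]
      surface = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

      # Cells far above the mean surface value flag candidate hotspots; a
      # full analysis would test significance (e.g. against relabelling).
      threshold = surface.mean() + 3 * surface.std()
      print("hotspot cells:", int((surface > threshold).sum()))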

  18. The process of installing REDCap, a web based database supporting biomedical research: the first year.

    PubMed

    Klipin, M; Mare, I; Hazelhurst, S; Kramer, B

    2014-01-01

    Clinical and research data are essential for patient care, research and healthcare system planning. REDCap™ is a web-based tool for research data curatorship developed at Vanderbilt University in Nashville, USA. The Faculty of Health Sciences at the University of the Witwatersrand, Johannesburg, South Africa identified the need for a cost-effective data management instrument. REDCap was installed as per the user agreement with Vanderbilt University in August 2012. In order to assist other institutions that may lack the in-house information technology capacity, this paper describes the installation and support of REDCap and incorporates an analysis of user uptake over the first year of use. We reviewed the staffing requirements, costs of installation, process of installation and necessary infrastructure, and end-user requests following the introduction of REDCap at Wits. The University Legal Office and Human Research Ethics Committee were consulted regarding the REDCap end-user agreement. Bi-monthly user meetings resulted in a training workshop in August 2013. We compared our REDCap software user numbers and records before and after the first training workshop. Human resources were recruited from existing staff. Installation costs were limited to servers and security certificates. The total cost of providing a functional REDCap platform was less than $9000. Eighty-one (81) users were registered in the first year. After the first training workshop, user numbers increased by 59 in one month, and the total number of active users reached 140 by the end of August 2013. Custom software applications for REDCap were created through collaboration between clinicians and software developers. REDCap was installed and maintained at limited cost. A small number of people with defined skills can support multiple REDCap users in two to four hours a week. End-user training increased the number of users, the number of projects created and the number of projects moved to production.

  19. The Process of Installing REDCap, a Web Based Database Supporting Biomedical Research

    PubMed Central

    Mare, I.; Hazelhurst, S.; Kramer, B.

    2014-01-01

    Summary Background Clinical and research data are essential for patient care, research and healthcare system planning. REDCap™ is a web-based tool for research data curatorship developed at Vanderbilt University in Nashville, USA. The Faculty of Health Sciences at the University of the Witwatersrand, Johannesburg, South Africa identified the need for a cost-effective data management instrument. REDCap was installed as per the user agreement with Vanderbilt University in August 2012. Objectives In order to assist other institutions that may lack the in-house information technology capacity, this paper describes the installation and support of REDCap and incorporates an analysis of user uptake over the first year of use. Methods We reviewed the staffing requirements, costs of installation, process of installation and necessary infrastructure, and end-user requests following the introduction of REDCap at Wits. The University Legal Office and Human Research Ethics Committee were consulted regarding the REDCap end-user agreement. Bi-monthly user meetings resulted in a training workshop in August 2013. We compared our REDCap software user numbers and records before and after the first training workshop. Results Human resources were recruited from existing staff. Installation costs were limited to servers and security certificates. The total cost of providing a functional REDCap platform was less than $9000. Eighty-one (81) users were registered in the first year. After the first training workshop, user numbers increased by 59 in one month, and the total number of active users reached 140 by the end of August 2013. Custom software applications for REDCap were created through collaboration between clinicians and software developers. Conclusion REDCap was installed and maintained at limited cost. A small number of people with defined skills can support multiple REDCap users in two to four hours a week. End-user training increased the number of users, the number of projects created and the number of projects moved to production. PMID:25589907

  20. Lowering Entry Barriers for Multidisciplinary Cyber(e)-Infrastructures

    NASA Astrophysics Data System (ADS)

    Nativi, S.

    2012-04-01

    Multidisciplinarity is more and more important for studying the Earth system and addressing global changes. Multidisciplinary cyber(e)-infrastructures are an important instrument to achieve that. In recent years, several European, US and international initiatives have been launched to build multidisciplinary infrastructures, including: the Infrastructure for Spatial Information in the European Community (INSPIRE), Global Monitoring for Environment and Security (GMES), the Data Observation Network for Earth (DataONE), and the Global Earth Observation System of Systems (GEOSS). The majority of these initiatives are developing service-based digital infrastructures that ask scientific communities (i.e. disciplinary users and data producers) to implement a set of standards for information interoperability. For scientific communities, this has represented an entry barrier which, in several cases, has proved to be high. In fact, neither data producers nor users seem willing to invest precious resources in becoming experts on interoperability solutions - on the contrary, they are focused on developing disciplinary and thematic capacities. Therefore, an important research topic is lowering the entry barriers for joining multidisciplinary cyber(e)-infrastructures. This presentation will introduce a new approach to achieving the multidisciplinary interoperability underpinning multidisciplinary infrastructures, lowering the present entry barriers for both users and data producers. This is called the brokering approach: it extends the service-based paradigm by introducing a new brokering layer, or cloud, which is in charge of managing all the interoperability complexity (e.g. data discovery, access, and use), thus easing the burden on users and producers. This approach has been successfully tested in the framework of several European FP7 projects and in GEOSS.

  1. The Swedish Research Infrastructure for Ecosystem Science - SITES

    NASA Astrophysics Data System (ADS)

    Lindroth, A.; Ahlström, M.; Augner, M.; Erefur, C.; Jansson, G.; Steen Jensen, E.; Klemedtsson, L.; Langenheder, S.; Rosqvist, G. N.; Viklund, J.

    2017-12-01

    The vision of SITES is to promote long-term, field-based ecosystem research at a world-class level by offering an infrastructure with excellent technical and scientific support and services, attracting both national and international researchers. In addition, SITES will make data freely and easily available through an advanced data portal, which will add value to the research. During the first funding period, three innovative joint integrating facilities were established through a researcher-driven procedure: SITES Water, SITES Spectral, and SITES AquaNet. These new facilities make it possible to study terrestrial and limnic ecosystem processes across a range of ecosystem types and climatic gradients, with common protocols and similar equipment. In addition, user-driven development at the nine individual stations has resulted in, e.g., the design of a long-term agricultural systems experiment and the installation of weather stations, flux systems, etc. at various stations. SITES, with its integrative approach and broad coverage of climate and ecosystem types across Sweden, constitutes an excellent platform for state-of-the-art research projects. SITES supports the development of: a better understanding of the way in which key ecosystems function and interact with each other at the landscape level and with the climate system in terms of mass and energy exchanges; a better understanding of the role of different organisms in controlling different processes and, ultimately, the functioning of ecosystems; new strategies for forest management to better meet the many and varied requirements from nature conservation, climate, and wood, fibre and energy supply points of view; and agricultural systems that better utilize resources and minimize adverse impacts on the environment. Collaboration with other similar infrastructures and networks is a high priority for SITES. This will enable us to make use of each other's experiences, harmonize metadata for easier exchange of data, and support each other in widening the user community.

  2. [ECRIN (European clinical research infrastructures network), a pan-European infrastructure for clinical research].

    PubMed

    Demotes-Mainard, Jacques

    2010-12-01

    Clinical research plays a key role both in the development of innovative health products and in the optimisation of medical strategies, leading to evidence-based practice and healthcare cost containment. ECRIN is a distributed, ESFRI-roadmap, pan-European infrastructure designed to support multinational clinical research, making Europe a single area for clinical studies, taking advantage of its population size to access patients, and unlocking latent scientific potential by providing services to multinational studies. Servicing of multinational trials started during the preparatory phase, and ECRIN has applied for ERIC status in 2011. In parallel, ECRIN has also proposed an FP7 integrating activity project to further develop, upgrade and expand the ECRIN infrastructure built up during the past FP6 and FP7 projects, facilitating an efficient organization of clinical research in Europe, with ECRIN developing generic tools and providing generic services for multinational studies, and supporting the construction of pan-European disease-oriented networks that will in turn act as ECRIN users. This organization will improve Europe's attractiveness for industry trials, boost its scientific competitiveness, and result in better healthcare for European citizens. The three medical areas supported in this project (rare diseases, medical devices, and nutrition) will serve as pilots for other biomedical research fields. By creating a single area for clinical research in Europe, this structure will contribute to the implementation of the Europe 2020 flagship initiative 'Innovation Union', whose objectives include the defragmentation of research and educational capacities, tackling the major societal challenges (starting with healthy aging), and removing barriers to bringing ideas to market.

  3. The Earth System Grid Federation (ESGF) Project

    NASA Astrophysics Data System (ADS)

    Carenton-Madiec, Nicolas; Denvil, Sébastien; Greenslade, Mark

    2015-04-01

    The Earth System Grid Federation (ESGF) Peer-to-Peer (P2P) enterprise system is a collaboration that develops, deploys and maintains software infrastructure for the management, dissemination, and analysis of model output and observational data. ESGF's primary goal is to facilitate advancements in Earth system science. It is an interagency and international effort led by the US Department of Energy (DOE) and co-funded by the National Aeronautics and Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA), the National Science Foundation (NSF), the Infrastructure for the European Network of Earth System Modelling (IS-ENES) and international laboratories such as the Max Planck Institute for Meteorology (MPI-M), the German Climate Computing Centre (DKRZ), the Australian National University (ANU) National Computational Infrastructure (NCI), the Institut Pierre-Simon Laplace (IPSL), and the British Atmospheric Data Centre (BADC). Its main mission is to support current CMIP5 activities and prepare for future assessments. The ESGF architecture is based on a system of autonomous and distributed nodes, which interoperate through common acceptance of federation protocols and trust agreements. Data are stored at multiple nodes around the world and served through local data and metadata services. Nodes exchange information about their data holdings and services, and trust each other for registering users and establishing access control decisions. The net result is that a user can use a web browser, connect to any node, and seamlessly find and access data throughout the federation. This collaborative working organization and distributed architecture highlighted the need to define integration and testing processes that ensure the quality of software releases and interoperability. This presentation will introduce the ESGF project and demonstrate the range of tools and processes that have been set up to support release management activities.
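
    From the user's point of view, the federation means any node can answer a search that spans all data holdings. As a hedged illustration, the sketch below assembles a faceted search request of the general kind ESGF nodes expose; the node URL is a placeholder and the facet values are examples, not a definitive description of the ESGF search API.

      # Sketch of a faceted, federation-wide search request; the node URL
      # is a placeholder and the facet names follow common CMIP5 usage.
      from urllib.parse import urlencode

      node = "https://esgf-node.example.org/esg-search/search"  # placeholder
      params = {
          "project": "CMIP5",
          "experiment": "historical",
          "variable": "tas",               # near-surface air temperature
          "time_frequency": "mon",
          "format": "application/solr+json",
          "limit": 10,
      }
      print(f"{node}?{urlencode(params)}")  # URL is built, not sent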

  4. Data Quality, Provenance and IPR Management services: their role in empowering geospatial data suppliers and users

    NASA Astrophysics Data System (ADS)

    Millard, Keiran

    2015-04-01

    This paper looks at current experiences of geospatial users and geospatial suppliers and how they have been limited by the lack of suitable frameworks for managing and communicating data quality, data provenance and intellectual property rights (IPR). Current political and technological drivers mean that increasing volumes of geospatial data are available through a plethora of different products and services, and whilst this is inherently a good thing it does create a new generation of challenges. This paper considers two examples of where these issues have been examined and looks at the challenges and possible solutions from a data user and data supplier perspective. The first example is the IQmulus project, which is researching fusion environments for big geospatial point clouds and coverages. The second example is the EU Emodnet programme, which is establishing thematic data portals for public marine and coastal data. IQmulus examines big geospatial data: data from sources such as LIDAR, SONAR and numerical simulations. These data are simply too big for routine and ad-hoc analysis, yet with the right infrastructure in place they could yield a myriad of disparate, and readily usable, information products. IQmulus is researching how to deliver this infrastructure technically, but a financially sustainable delivery depends on being able to track and manage ownership and IPR across the numerous data sets being processed. This becomes complex when the data are composed of multiple overlapping coverages; however, managing it allows users to be delivered highly bespoke products that meet their budget and technical needs. The Emodnet programme delivers harmonised marine data at the EU scale across seven thematic portals. As part of the Emodnet programme, a series of 'check points' has been initiated to examine how useful these services and other public data services actually are for solving real-world problems. One key finding is that users have been confused by the fact that data from the same source often appear across multiple platforms, and that current ISO 19115-style metadata catalogues do not help the vast majority of users in making data selections. To address this, we have looked at approaches used in the leisure industry, which has established tools that help users select the best hotel for their needs from the metadata available, supported by peer-to-peer rating. We have looked into how this approach can support users in selecting the best data to meet their needs.
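
    As an illustration of the catalogue interfaces in question, the sketch below queries an ISO 19115-style metadata catalogue through OGC CSW, assuming the Python OWSLib package; the endpoint URL and search term are placeholders, not a specific Emodnet service:

    ```python
    # Search a CSW catalogue for records matching a free-text term and
    # print the titles a user would have to choose between.
    from owslib.csw import CatalogueServiceWeb
    from owslib.fes import PropertyIsLike

    csw = CatalogueServiceWeb("https://example.org/csw")  # hypothetical endpoint
    query = PropertyIsLike("csw:AnyText", "%bathymetry%")
    csw.getrecords2(constraints=[query], maxrecords=10)

    for rec_id, rec in csw.records.items():
        # title and abstract are often all a non-expert user sees --
        # the paper's point about needing richer selection support
        print(rec_id, "-", rec.title)
    ```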

  5. EUDAT: A New Cross-Disciplinary Data Infrastructure For Science

    NASA Astrophysics Data System (ADS)

    Lecarpentier, Damien; Michelini, Alberto; Wittenburg, Peter

    2013-04-01

    In recent years significant investments have been made by the European Commission and European member states to create a pan-European e-Infrastructure supporting multiple research communities. As a result, a European e-Infrastructure ecosystem is currently taking shape, with communication networks, distributed grids and HPC facilities providing European researchers from all fields with state-of-the-art instruments and services that support the deployment of new research facilities on a pan-European level. However, the accelerated proliferation of data - newly available from powerful new scientific instruments, simulations and the digitization of existing resources - has created a new impetus for increasing efforts and investments in order to tackle the specific challenges of data management, and to ensure a coherent approach to research data access and preservation. EUDAT is a pan-European initiative that started in October 2011 and which aims to help overcome these challenges by laying out the foundations of a Collaborative Data Infrastructure (CDI) in which centres offering community-specific support services to their users could rely on a set of common data services shared between different research communities. Although research communities from different disciplines have different ambitions and approaches - particularly with respect to data organization and content - they also share many basic service requirements. This commonality makes it possible for EUDAT to establish common data services, designed to support multiple research communities, as part of this CDI. During the first year, EUDAT has been reviewing the approaches and requirements of a first subset of communities from linguistics (CLARIN), solid earth sciences (EPOS), climate sciences (ENES), environmental sciences (LIFEWATCH), and biological and medical sciences (VPH), and shortlisted four generic services to be deployed as shared services on the EUDAT infrastructure: data replication from site to site, data staging to compute facilities, metadata, and easy storage. A number of enabling services such as distributed authentication and authorization, persistent identifiers, hosting of services, workspaces and a centre registry were also discussed. The services being designed in EUDAT will thus be of interest to a broad range of communities that lack their own robust data infrastructures, or that are simply looking for additional storage and/or computing capacities to better access, use, re-use, and preserve their data. The first pilots were completed in 2012, and a pre-production operational infrastructure comprising five sites (RZG, CINECA, SARA, CSC, FZJ), offering 480 TB of online storage and 4 PB of near-line (tape) storage and initially serving four user communities (ENES, EPOS, CLARIN, VPH), was established. These services shall be available to all communities in a production environment by 2014. Although EUDAT has initially focused on a subset of research communities, it aims to engage with other communities interested in adapting their solutions or contributing to the design of the infrastructure. Discussions with other research communities - belonging to the fields of environmental sciences, biomedical science, physics, social sciences and humanities - have already begun and are following a pattern similar to the one we adopted with the initial communities. The next step will consist of integrating representatives from these communities into the existing pilots and task forces so as to include them in the process of designing the services and, ultimately, shaping the future CDI.
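
    Persistent identifiers are among the enabling services listed above. A minimal sketch of resolving one through the public Handle System REST API, assuming the Python requests package (the handle itself is a fabricated placeholder, not a real EUDAT PID):

    ```python
    # Resolve a Handle-style persistent identifier to its registered
    # values, which typically map the PID to one or more data locations.
    import requests

    handle = "11100/0000-0000-0000-0000"  # hypothetical prefix/suffix
    resp = requests.get(f"https://hdl.handle.net/api/handles/{handle}",
                        timeout=30)
    if resp.ok:
        for value in resp.json().get("values", []):
            print(value["type"], "->", value["data"]["value"])
    else:
        print("handle not found:", resp.status_code)
    ```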

  6. HPC in a HEP lab: lessons learned from setting up cost-effective HPC clusters

    NASA Astrophysics Data System (ADS)

    Husejko, Michal; Agtzidis, Ioannis; Baehler, Pierre; Dul, Tadeusz; Evans, John; Himyr, Nils; Meinhard, Helge

    2015-12-01

    In this paper we present our findings gathered during the evaluation and testing of Windows Server High-Performance Computing (Windows HPC) in view of potentially using it as a production HPC system for engineering applications. The Windows HPC package, an extension of Microsoft's Windows Server product, provides all essential interfaces, utilities and management functionality for creating, operating and monitoring a Windows-based HPC cluster infrastructure. The evaluation and test phase focused on verifying the functionalities of Windows HPC, its performance, its support of commercial tools, and its integration with the users' work environment. We describe the constraints imposed by the way the CERN Data Centre is operated, by the licensing of engineering tools, and by the scalability and behaviour of the HPC engineering applications used at CERN. We present an initial set of requirements, which was created based on the above constraints and on requests from the CERN engineering user community. We explain how we have configured Windows HPC clusters to provide the job scheduling functionalities required to support the CERN engineering user community: quality of service, user- and project-based priorities, and fair access to limited resources. Finally, we present several performance tests we carried out to verify Windows HPC performance and scalability.
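
    As a conceptual illustration of the scheduling policies mentioned (this is not Windows HPC's actual API), the sketch below orders jobs by project priority and then penalizes owners who have already consumed a large share of the resources:

    ```python
    # Toy fair-share ordering: higher project priority schedules earlier,
    # heavy past usage by the same owner pushes a job back in the queue.
    from dataclasses import dataclass, field
    import heapq

    @dataclass(order=True)
    class Job:
        sort_key: float
        name: str = field(compare=False)

    def fair_share_order(jobs, usage, priorities):
        """jobs: (name, project, owner) tuples; usage: cores used per owner."""
        heap = []
        for name, project, owner in jobs:
            # lower key = scheduled earlier
            key = -priorities.get(project, 1) + 0.1 * usage.get(owner, 0)
            heapq.heappush(heap, Job(key, name))
        return [heapq.heappop(heap).name for _ in range(len(heap))]

    jobs = [("ansys-run", "engineering", "alice"),
            ("field-solve", "engineering", "bob"),
            ("batch-test", "commissioning", "alice")]
    print(fair_share_order(jobs, usage={"alice": 64},
                           priorities={"engineering": 5}))
    ```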

  7. The Use of Spatial Data Infrastructure in Environmental Management: An Example from the Spatial Planning Practice in Poland.

    PubMed

    Zwirowicz-Rutkowska, Agnieszka; Michalik, Anna

    2016-10-01

    Today's technology plays a crucial role in the effective use of environmental information, including geographic information systems and infrastructures. The purpose of this research is to identify the way in which the Polish spatial data infrastructure (PSDI) supports policies and activities that may have an impact on the environment in relation to one group of users, namely urban planners, and their tasks concerning environmental management. The study is based on a survey conducted in July and August 2014. Moreover, the authors' expert knowledge gained through urban development practice and through analysis of the environmental conservation regulations and spatial planning in Poland has been used to define the scope of environmental management in both spatial planning studies and spatial data sources. The research included assessment of data availability, infrastructure usability, and the infrastructure's impact on the decision-making process. The results showed that the PSDI is valuable because it allows for the acquisition of data on environmental monitoring and on agricultural and aquaculture facilities. It also has a positive impact on decision-making processes and improves numerous planners' activities concerning both the inclusion of environmental indicators in spatial plans and the support of nature conservation and environmental management in the process of working on future land use. However, even though the infrastructure solves certain problems with data accessibility, further improvements might be proposed. The importance of the SDI in environmental management is noticeable and could be considered from many standpoints: data, communities engaged in policy or decision-making concerning environmental issues, and data providers.

  8. MOBBED: a computational data infrastructure for handling large collections of event-rich time series datasets in MATLAB

    PubMed Central

    Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A.

    2013-01-01

    Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED are maintained at http://vislab.github.com/MobbedMatlab/ PMID:24124417
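
    To illustrate the kind of cross-dataset event query such a database enables, here is a generic sketch in Python with psycopg2; the connection string and table layout are hypothetical and do not represent MOBBED's actual MATLAB API or schema:

    ```python
    # Count events of a given type per dataset without reading the
    # underlying sensor frames -- the core advantage of an event database.
    import psycopg2

    conn = psycopg2.connect("dbname=eeg_store user=lab")  # hypothetical DB
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT dataset_id, COUNT(*) AS n
            FROM events
            WHERE event_type = %s
            GROUP BY dataset_id
            """,
            ("button_press",),
        )
        for dataset_id, n in cur.fetchall():
            print(dataset_id, n)
    ```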

  9. MOBBED: a computational data infrastructure for handling large collections of event-rich time series datasets in MATLAB.

    PubMed

    Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A

    2013-01-01

    Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED are maintained at http://vislab.github.com/MobbedMatlab/

  10. The Chandra Source Catalog: Processing and Infrastructure

    NASA Astrophysics Data System (ADS)

    Evans, Janet; Evans, Ian N.; Glotfelty, Kenny J.; Hain, Roger; Hall, Diane M.; Miller, Joseph B.; Plummer, David A.; Zografou, Panagoula; Primini, Francis A.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.

    2009-09-01

    Chandra Source Catalog processing recalibrates each observation using the latest available calibration data, and employs a wavelet-based source detection algorithm to identify all the X-ray sources in the field of view. Source properties are then extracted from each detected source that is a candidate for inclusion in the catalog. Catalog processing is completed by matching sources across multiple observations, merging common detections, and applying quality assurance checks. The Chandra Source Catalog processing system shares a common processing infrastructure and utilizes much of the functionality that is built into the Standard Data Processing (SDP) pipeline system that provides calibrated Chandra data to end-users. Other key components of the catalog processing system have been assembled from the portable CIAO data analysis package. Minimal new software tool development has been required to support the science algorithms needed for catalog production. Since processing pipelines must be instantiated for each detected source, the number of pipelines that are run during catalog construction is a factor of order 100 times larger than for SDP. The increased computational load, and inherent parallel nature of the processing, is handled by distributing the workload across a multi-node Beowulf cluster. Modifications to the SDP automated processing application to support catalog processing, and extensions to Chandra Data Archive software to ingest and retrieve catalog products, complete the upgrades to the infrastructure to support catalog processing.
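
    The per-source fan-out pattern described above can be sketched as follows; this is an illustrative Python analogue of the workload distribution (the real system dispatches pipelines across a Beowulf cluster, and the function names here are invented):

    ```python
    # Run one pipeline instance per detected source, distributed across
    # worker processes, and ingest results as they complete.
    from multiprocessing import Pool

    def run_source_pipeline(source_id):
        # placeholder for per-source property extraction
        return source_id, f"properties({source_id})"

    if __name__ == "__main__":
        detected_sources = [f"src-{i:05d}" for i in range(1000)]
        with Pool(processes=8) as pool:
            for source_id, result in pool.imap_unordered(run_source_pipeline,
                                                         detected_sources):
                pass  # ingest result into the catalog archive
    ```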

  11. A game based virtual campus tour

    NASA Astrophysics Data System (ADS)

    Razia Sulthana, A.; Arokiaraj Jovith, A.; Saveetha, D.; Jaithunbi, A. K.

    2018-04-01

    The aim of the application is to create a virtual reality game whose purpose is to showcase the facilities of SRM University in an entertaining manner. The virtual prototype of the institution is deployed in a game engine, which lets students look over the infrastructure and thereby reduces resource utilization; time and money are the resources of concern today. The virtual campus application assists the end user even from a remote location: the virtual world simulates the exact location, and the user is virtually transported to the university with the help of a VR headset. This is a dynamic application in which the user can move in any direction; the VR headset provides an interface for gyro input, which is used to start and stop movement. Virtual Campus is size-efficient, occupies minimal space, and scales across mobile devices. This gaming application helps end users explore the campus while having fun, and it is a user-friendly application that supports users worldwide.

  12. Facilitating biomedical researchers' interrogation of electronic health record data: Ideas from outside of biomedical informatics.

    PubMed

    Hruby, Gregory W; Matsoukas, Konstantina; Cimino, James J; Weng, Chunhua

    2016-04-01

    Electronic health records (EHR) are a vital data resource for research uses, including cohort identification, phenotyping, pharmacovigilance, and public health surveillance. To realize the promise of EHR data for accelerating clinical research, it is imperative to enable efficient and autonomous EHR data interrogation by end users such as biomedical researchers. This paper surveys state-of-the-art approaches and key methodological considerations for this purpose. We adapted a previously published conceptual framework for interactive information retrieval, which defines three entities (user, channel, and source), elaborating on channels for query formulation in the context of helping end users interrogate EHR data. We show that current progress in biomedical informatics lies mainly in support for query execution and information modeling, primarily due to an emphasis on infrastructure development for data integration and data access via self-service query tools, but that it has neglected the user support needed during iterative query formulation, which can be costly and error-prone. In contrast, the information science literature offers elaborate theories and methods for user modeling and query formulation support. The two bodies of literature are complementary, implying opportunities for cross-disciplinary idea exchange. On this basis, we outline directions for future informatics research to improve our understanding of user needs and requirements for facilitating autonomous interrogation of EHR data by biomedical researchers. We suggest that cross-disciplinary translational research between biomedical informatics and information science can benefit our research in facilitating efficient data access in the life sciences.
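
    A minimal sketch of the iterative query-formulation loop the authors argue needs better support, with a hypothetical patient table and criteria (illustrative only, not a tool from the surveyed literature):

    ```python
    # A researcher relaxes cohort criteria based on returned counts --
    # the re-query cycle that self-service tools leave unsupported.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE patients (id INTEGER, age INTEGER, dx TEXT)")
    conn.executemany("INSERT INTO patients VALUES (?, ?, ?)",
                     [(1, 67, "E11"), (2, 45, "E11"), (3, 72, "I10")])

    criteria = {"dx": "E11", "min_age": 65}
    while True:
        n = conn.execute(
            "SELECT COUNT(*) FROM patients WHERE dx = ? AND age >= ?",
            (criteria["dx"], criteria["min_age"])).fetchone()[0]
        print(f"cohort size with {criteria}: {n}")
        if n >= 2 or criteria["min_age"] <= 0:
            break
        criteria["min_age"] -= 5  # relax a criterion and re-query
    ```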

  13. STUDY ON SUPPORTING FOR DRAWING UP THE BCP FOR URBAN EXPRESSWAY NETWORK USING BY TRAFFIC SIMULATION SYSTEM

    NASA Astrophysics Data System (ADS)

    Yamawaki, Masashi; Shiraki, Wataru; Inomo, Hitoshi; Yasuda, Keiichi

    The urban expressway network is an important infrastructure for executing disaster restoration. It is therefore necessary to draw up a BCP (Business Continuity Plan) that enables road users' safety to be secured and facilities to be restored. It is important that each urban expressway manager decide on and improve effective BCP countermeasures for disasters by assuming various disaster situations. In this study, we develop a traffic simulation system that can reproduce various disaster situations and traffic behaviour, and we examine methods to support drawing up a BCP for an urban expressway network. For disasters beyond the design assumptions, such as a tsunami generated by a huge earthquake, we examine approaches to securing the safety of users and cars on the Hanshin Expressway network as well as on general roads, and we aim to propose a tsunami countermeasure not considered in the current urban expressway BCP.

  14. Digital divide, biometeorological data infrastructures and human vulnerability definition

    NASA Astrophysics Data System (ADS)

    Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko

    2018-05-01

    The design and implementation of any climate-related health service nowadays implies confronting the digital divide, since such a service requires access to, and the ability to use, complex technological devices, massive meteorological data, the user's geographic location, and biophysical information. This article presents in detail the co-creation of a biometeorological data infrastructure, a complex platform formed by multiple components: a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets, and a mobile application. The system produces four daily world maps of the partial density of atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool for delineating individual vulnerability to meteorological changes as one key factor in the definition of any biometeorological risk. This technological approach to studying weather-related health impacts is the initial seed for the definition of biometeorological profiles of persons and for the future development of customized climate services.
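
    The mapped quantity can be illustrated with textbook physics; this worked sketch computes the partial density of atmospheric oxygen from the ideal gas law for dry air, and is not the project's actual algorithm:

    ```python
    # Partial O2 density: rho = p_O2 * M_O2 / (R * T), with p_O2 taken
    # as the O2 volume fraction times station pressure (dry air).
    O2_FRACTION = 0.2095        # volume fraction of O2 in dry air
    M_O2 = 0.032                # kg/mol
    R = 8.314                   # J/(mol*K)

    def oxygen_partial_density(pressure_pa, temperature_k):
        """Return partial O2 density in g/m^3 for dry air."""
        p_o2 = O2_FRACTION * pressure_pa
        return 1000.0 * p_o2 * M_O2 / (R * temperature_k)

    # standard sea-level conditions: about 283 g/m^3
    print(round(oxygen_partial_density(101325.0, 288.15), 1))
    ```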

  15. Digital divide, biometeorological data infrastructures and human vulnerability definition.

    PubMed

    Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko

    2018-05-01

    The design and implementation of any climate-related health service nowadays implies confronting the digital divide, since such a service requires access to, and the ability to use, complex technological devices, massive meteorological data, the user's geographic location, and biophysical information. This article presents in detail the co-creation of a biometeorological data infrastructure, a complex platform formed by multiple components: a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets, and a mobile application. The system produces four daily world maps of the partial density of atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool for delineating individual vulnerability to meteorological changes as one key factor in the definition of any biometeorological risk. This technological approach to studying weather-related health impacts is the initial seed for the definition of biometeorological profiles of persons and for the future development of customized climate services.

  16. Digital divide, biometeorological data infrastructures and human vulnerability definition

    NASA Astrophysics Data System (ADS)

    Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko

    2017-06-01

    The design and implementation of any climate-related health service nowadays implies confronting the digital divide, since such a service requires access to, and the ability to use, complex technological devices, massive meteorological data, the user's geographic location, and biophysical information. This article presents in detail the co-creation of a biometeorological data infrastructure, a complex platform formed by multiple components: a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets, and a mobile application. The system produces four daily world maps of the partial density of atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool for delineating individual vulnerability to meteorological changes as one key factor in the definition of any biometeorological risk. This technological approach to studying weather-related health impacts is the initial seed for the definition of biometeorological profiles of persons and for the future development of customized climate services.

  17. Building sustainable multi-functional prospective electronic clinical data systems.

    PubMed

    Randhawa, Gurvaneet S; Slutsky, Jean R

    2012-07-01

    A better alignment in the goals of the biomedical research enterprise and the health care delivery system can help fill the large gaps in our knowledge of the impact of clinical interventions on patient outcomes in the real world. There are several initiatives underway to align the research priorities of patients, providers, researchers, and policy makers. These include Agency for Healthcare Research and Quality (AHRQ)-supported projects to build flexible prospective clinical electronic data infrastructure that meets the needs of these diverse users. AHRQ has previously supported the creation of 2 distributed research networks as a new approach to conduct comparative effectiveness research (CER) while protecting a patient's confidential information and the proprietary needs of a clinical organization. It has applied its experience in building these networks in directing the American Recovery and Reinvestment Act funds for CER to support new clinical electronic infrastructure projects that can be used for several purposes, including CER, quality improvement, clinical decision support, and disease surveillance. In addition, AHRQ has funded a new Electronic Data Methods forum to advance methods in clinical informatics, research analytics, and governance by actively engaging investigators from the American Recovery and Reinvestment Act-funded projects and external stakeholders.

  18. Emerging Communication Technologies (ECT) Phase 4 Report

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Harris, William G.; Marin, Jose A.; Nelson, Richard A.

    2005-01-01

    The Emerging Communication Technology (ECT) project investigated three First Mile communication technologies in support of NASA's Crew Exploration Vehicle (CEV), the Advanced Range Technology Working Group (ARTWG), and the Advanced Spaceport Technology Working Group (ASTWG). These First Mile technologies have the purpose of interconnecting mobile users with existing Range Communication infrastructures on a 24/7 basis. ECT is a continuation of the Range Information System Management (RISM) task started in 2002. This is the fourth year of the project.

  19. Partnerships form the basis for implementing a National Space Weather Plan

    NASA Astrophysics Data System (ADS)

    Spann, James F.; Giles, Barbara L.

    2017-08-01

    The 2017 Space Weather Enterprise Forum, held June 27, focused on the vital role of partnerships in establishing an effective and successful national space weather program. Experts and users from government agencies, industry, and academia, along with policy makers, gathered to discuss space weather impacts and mitigation strategies, the relevant services and supporting infrastructure, and the vital role cross-cutting partnerships must play in successful implementation of the National Space Weather Action Plan.

  20. The Ashore Infrastructure Requirements Needed to Support Mobile Maintenance Facilities (MMF) for Intermediate Maintenance on the Next Generation Aircraft Carrier (CVNX)

    DTIC Science & Technology

    1999-12-01

    Watt, Michael R. (Naval Postgraduate School, Monterey, CA). [Only report-documentation-page and table-of-contents fragments were recovered for this record; the contents cover background information and current military users of mobile facilities, including the United States Marine Corps (USMC).]

  1. Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) Technology Infrastructure for a Distributed Data Network

    PubMed Central

    Schilling, Lisa M.; Kwan, Bethany M.; Drolshagen, Charles T.; Hosokawa, Patrick W.; Brandt, Elias; Pace, Wilson D.; Uhrich, Christopher; Kamerick, Michael; Bunting, Aidan; Payne, Philip R.O.; Stephens, William E.; George, Joseph M.; Vance, Mark; Giacomini, Kelli; Braddy, Jason; Green, Mika K.; Kahn, Michael G.

    2013-01-01

    Introduction: Distributed Data Networks (DDNs) offer infrastructure solutions for sharing electronic health data from across disparate data sources to support comparative effectiveness research. Data sharing mechanisms must address technical and governance concerns stemming from network security and data disclosure laws and best practices, such as HIPAA. Methods: The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) deploys TRIAD grid technology, a common data model, detailed technical documentation, and custom software for data harmonization to facilitate data sharing in collaboration with stakeholders in the care of safety net populations. Data sharing partners host TRIAD grid nodes containing harmonized clinical data within their internal or hosted network environments. Authorized users can use a central web-based query system to request analytic data sets. Discussion: SAFTINet DDN infrastructure achieved a number of data sharing objectives, including scalable and sustainable systems for ensuring harmonized data structures and terminologies and secure distributed queries. Initial implementation challenges were resolved through iterative discussions, development and implementation of technical documentation, governance, and technology solutions. PMID:25848567
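
    The distributed-query pattern can be sketched as follows; this is a conceptual Python illustration of a broker fanning a request out to partner nodes and aggregating counts, and does not represent SAFTINet's actual TRIAD interfaces:

    ```python
    # Fan the same analytic request out to each partner node; only
    # aggregate results (counts) ever leave the partners' environments.
    from concurrent.futures import ThreadPoolExecutor

    PARTNER_NODES = ["https://node-a.example", "https://node-b.example"]

    def query_node(node_url, request):
        # placeholder for an authenticated grid call returning counts only
        return {"node": node_url, "cohort_count": 42}

    def federated_query(request):
        with ThreadPoolExecutor() as pool:
            results = list(pool.map(lambda u: query_node(u, request),
                                    PARTNER_NODES))
        return sum(r["cohort_count"] for r in results)

    print(federated_query({"dx": "diabetes", "measure": "count"}))
    ```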

  2. Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) Technology Infrastructure for a Distributed Data Network.

    PubMed

    Schilling, Lisa M; Kwan, Bethany M; Drolshagen, Charles T; Hosokawa, Patrick W; Brandt, Elias; Pace, Wilson D; Uhrich, Christopher; Kamerick, Michael; Bunting, Aidan; Payne, Philip R O; Stephens, William E; George, Joseph M; Vance, Mark; Giacomini, Kelli; Braddy, Jason; Green, Mika K; Kahn, Michael G

    2013-01-01

    Distributed Data Networks (DDNs) offer infrastructure solutions for sharing electronic health data from across disparate data sources to support comparative effectiveness research. Data sharing mechanisms must address technical and governance concerns stemming from network security and data disclosure laws and best practices, such as HIPAA. The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) deploys TRIAD grid technology, a common data model, detailed technical documentation, and custom software for data harmonization to facilitate data sharing in collaboration with stakeholders in the care of safety net populations. Data sharing partners host TRIAD grid nodes containing harmonized clinical data within their internal or hosted network environments. Authorized users can use a central web-based query system to request analytic data sets. SAFTINet DDN infrastructure achieved a number of data sharing objectives, including scalable and sustainable systems for ensuring harmonized data structures and terminologies and secure distributed queries. Initial implementation challenges were resolved through iterative discussions, development and implementation of technical documentation, governance, and technology solutions.

  3. Assessing the uptake of persistent identifiers by research infrastructure users

    PubMed Central

    Maull, Keith E.

    2017-01-01

    Significant progress has been made in the past few years in the development of recommendations, policies, and procedures for creating and promoting citations to data sets, software, and other research infrastructures like computing facilities. Open questions remain, however, about the extent to which referencing practices of authors of scholarly publications are changing in ways desired by these initiatives. This paper uses four focused case studies to evaluate whether research infrastructures are being increasingly identified and referenced in the research literature via persistent citable identifiers. The findings of the case studies show that references to such resources are increasing, but that the patterns of these increases are variable. In addition, the study suggests that citation practices for data sets may change more slowly than citation practices for software and research facilities, due to the inertia of existing practices for referencing the use of data. Similarly, existing practices for acknowledging computing support may slow the adoption of formal citations for computing resources. PMID:28394907
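
    A minimal sketch of what a machine-actionable persistent identifier enables: resolving a DOI with content negotiation to retrieve citation metadata. The DOI below is the DOI Handbook's example identifier, used as a placeholder:

    ```python
    # Ask the DOI resolver for citation metadata (CSL JSON) instead of a
    # landing page, via standard content negotiation.
    import requests

    doi = "10.1000/xyz123"  # placeholder DOI
    resp = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
        timeout=30,
    )
    if resp.ok:
        meta = resp.json()
        print(meta.get("title"), meta.get("publisher"))
    else:
        print("no metadata for", doi, resp.status_code)
    ```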

  4. The National Information Infrastructure: Agenda for Action.

    ERIC Educational Resources Information Center

    Department of Commerce, Washington, DC. Information Infrastructure Task Force.

    The National Information Infrastructure (NII) is planned as a web of communications networks, computers, databases, and consumer electronics that will put vast amounts of information at the users' fingertips. Private sector firms are beginning to develop this infrastructure, but essential roles remain for the Federal Government. The National…

  5. Building capacity for service user and carer involvement in research: the implications and impact of best research for best health.

    PubMed

    Minogue, Virginia; Girdlestone, John

    2010-01-01

    The purpose of this paper is to examine the role of service user and carer involvement in NHS research and describe the nature of this involvement in three specialist mental health Trusts. It also aims to discuss the value of service user and carer involvement and present the perspective of the service user and research manager. The paper reviews patient and public involvement policy and practice in the NHS and NHS research. It examines the effectiveness of involvement activity and utilises a case example to demonstrate the impact of patient/service user involvement on the NHS and the individuals who take part. The paper concludes that service user involvement is essential if research is to support the development of health services that clearly reflect the needs of the service user and impact positively on service quality. Service user involvement is an established element of NHS research and development at both national and local level. The Department of Health strategy for research, Best Research for Best Health, reiterates both the importance of research that benefits the patient and the involvement of the service user in the research process. Despite this, the changes in Department of Health support funding for research, introduced by the strategy, may inadvertently lead to some NHS Trusts experiencing difficulty in resourcing this important activity. The paper illustrates the effectiveness of successful patient and public involvement in research. It also identifies how involvement has developed in a fragmented and uncoordinated way and how it is threatened by a failure to embed it more consistently in research infrastructure.

  6. IoT Applications with 5G Connectivity in Medical Tourism Sector Management: Third-Party Service Scenarios.

    PubMed

    Psiha, Maria M; Vlamos, Panayiotis

    2017-01-01

    5G is the next generation of mobile communication technology. The current generation of wireless technologies is evolving toward 5G to better serve end users and transform our society. Supported by 5G cloud technology, personal devices will extend their capabilities to various applications, supporting smart life, and will have a significant role in health, medical tourism, security, safety, and social life applications. The next wave of mobile communication is to mobilize and automate industries and industry processes via Machine-Type Communication (MTC) and the Internet of Things (IoT). The current key performance indicators for the 5G infrastructure for the fully connected society are sufficient to satisfy most of the technical requirements in the healthcare sector. Thus, 5G can be considered a door opener for new possibilities and use cases, many of which are as yet unknown. In this paper we present heterogeneous use cases in the medical tourism sector, based on 5G infrastructure technologies and third-party cloud services.

  7. S3DB core: a framework for RDF generation and management in bioinformatics infrastructures

    PubMed Central

    2010-01-01

    Background Biomedical research is set to greatly benefit from the use of semantic web technologies in the design of computational infrastructure. However, beyond well defined research initiatives, substantial issues of data heterogeneity, source distribution, and privacy currently stand in the way of the personalization of Medicine. Results A computational framework for bioinformatic infrastructure was designed to deal with the heterogeneous data sources and the sensitive mixture of public and private data that characterizes the biomedical domain. This framework consists of a logical model built with semantic web tools, coupled with a Markov process that propagates user operator states. An accompanying open source prototype was developed to meet a series of applications that range from collaborative multi-institution data acquisition efforts to data analysis applications that need to quickly traverse complex data structures. This report describes the two abstractions underlying the S3DB-based infrastructure, logical and numerical, and discusses its generality beyond the immediate confines of existing implementations. Conclusions The emergence of the "web as a computer" requires a formal model for the different functionalities involved in reading and writing to it. The S3DB core model proposed was found to address the design criteria of biomedical computational infrastructure, such as those supporting large scale multi-investigator research, clinical trials, and molecular epidemiology. PMID:20646315
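
    For illustration, RDF generation of the kind such a framework manages can be sketched with the rdflib package; the namespace and statements below are invented, not the S3DB core model itself:

    ```python
    # Build a small RDF graph of subject-predicate-object statements and
    # serialize it as Turtle.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    EX = Namespace("https://example.org/study/")

    g = Graph()
    sample = EX["sample42"]
    g.add((sample, RDF.type, EX.TumorSample))
    g.add((sample, EX.collectedBy, EX.lab7))
    g.add((sample, EX.kras_status, Literal("G12D")))

    print(g.serialize(format="turtle"))
    ```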

  8. Multi-Sector Sustainability Browser (MSSB) User Manual: A ...

    EPA Pesticide Factsheets

    EPA’s Sustainable and Healthy Communities (SHC) Research Program is developing methodologies, resources, and tools to assist community members and local decision makers in implementing policy choices that facilitate sustainable approaches in managing their resources affecting the built environment, natural environment, and human health. In order to assist communities and decision makers in implementing sustainable practices, EPA is developing computer-based systems including models, databases, web tools, and web browsers to help communities decide upon approaches that support their desired outcomes. Communities need access to resources that will allow them to achieve their sustainability objectives through intelligent decisions in four key sustainability areas: • Land Use • Buildings and Infrastructure • Transportation • Materials Management (i.e., Municipal Solid Waste [MSW] processing and disposal) The Multi-Sector Sustainability Browser (MSSB) is designed to support sustainable decision-making for communities, local and regional planners, and policy and decision makers. This document is an EPA technical report that serves as the user manual for the MSSB tool; its purpose is to provide users with basic guidance on the use of the tool.

  9. Applications of CCSDS recommendations to Integrated Ground Data Systems (IGDS)

    NASA Technical Reports Server (NTRS)

    Mizuta, Hiroshi; Martin, Daniel; Kato, Hatsuhiko; Ihara, Hirokazu

    1993-01-01

    This paper describes an application of the CCSDS Principal Network (CPN) service model to communications network elements of a postulated Integrated Ground Data System (IGDS). Functions are drawn principally from COSMICS (Cosmic Information and Control System), an integrated space control infrastructure, and the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). From functional requirements, this paper derives a set of five communications network partitions which, taken together, support proposed space control infrastructures and data distribution systems. Our functional analysis indicates that the five network partitions derived in this paper should effectively interconnect the users, centers, processors, and other architectural elements of an IGDS. This paper illustrates a useful application of the CCSDS (Consultative Committee for Space Data Systems) Recommendations to ground data system development.

  10. Oceanids command and control (C2) data system - Marine autonomous systems data for vehicle piloting, scientific data users, operational data assimilation, and big data

    NASA Astrophysics Data System (ADS)

    Buck, J. J. H.; Phillips, A.; Lorenzo, A.; Kokkinaki, A.; Hearn, M.; Gardner, T.; Thorne, K.

    2017-12-01

    The National Oceanography Centre (NOC) operates a fleet of approximately 36 autonomous marine platforms including submarine gliders, autonomous underwater vehicles, and autonomous surface vehicles. Each platform effectively has the capability to observe the ocean and collect data akin to a small research vessel. This is creating growth in data volumes and complexity while the amount of resource available to manage data remains static. The Oceanids Command and Control (C2) project aims to solve these issues by fully automating data archival, processing and dissemination. The data architecture being implemented jointly by NOC and the Scottish Association for Marine Science (SAMS) includes a single Application Programming Interface (API) gateway to handle authentication, forwarding and delivery of both metadata and data. Technicians and principal investigators will enter expedition data prior to deployment of vehicles, enabling automated data processing when vehicles are deployed. The system will support automated metadata acquisition from platforms as this technology moves towards operational implementation. The metadata exposure to the web builds on a prototype developed by the European Commission-supported SenseOCEAN project and uses open standards including World Wide Web Consortium (W3C) RDF/XML, the Semantic Sensor Network ontology, and the Open Geospatial Consortium (OGC) SensorML standard. Data will be delivered in the marine-domain Everyone's Glider Observatory (EGO) format and OGC Observations and Measurements. Additional formats will be served by implementation of endpoints such as the NOAA ERDDAP tool. This standardised data delivery via the API gateway enables timely near-real-time data to be served to Oceanids users, BODC users, operational users and big data systems. The use of open standards will also enable web interfaces to be rapidly built on the API gateway, and delivery to European research infrastructures that include aligned reference models for data infrastructure.
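
    A sketch of how a user might pull near-real-time data through an ERDDAP endpoint of the kind mentioned above; the server URL and dataset ID are placeholders, not the operational Oceanids services:

    ```python
    # Fetch a subset of a tabledap dataset as CSV and load it into pandas.
    import io
    import pandas as pd
    import requests

    SERVER = "https://example.org/erddap"      # hypothetical ERDDAP instance
    DATASET = "glider_sg510_delayed"           # hypothetical dataset ID

    url = (f"{SERVER}/tabledap/{DATASET}.csv"
           "?time,latitude,longitude,temperature"
           "&time>=2017-01-01T00:00:00Z")
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    # ERDDAP CSV puts column names on row 0 and units on row 1
    df = pd.read_csv(io.StringIO(resp.text), skiprows=[1])
    print(df.head())
    ```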

  11. Bridging the Host-Network Divide: Survey, Taxonomy, and Solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fink, Glenn A.; Duggirala, Vedavyas; Correa, Ricardo

    2007-04-17

    Abstract: "This paper presents a new direction in security awareness tools for system administration--the Host-Network (HoNe) Visualizer. Our requirements for the HoNe Visualizer come from needs system administrators expressed in interviews, from reviewing the literature, and from conducting usability studies with prototypes. We present a tool taxonomy that serves as a framework for our literature review, and we use the taxonomy to show what is missing in the administrator's arsenal. Then we unveil our tool and its supporting infrastructure that we believe will fill the empty niche. We found that most security tools provide either an internal view of amore » host or an external view of traffic on a network. Our interviewees revealed how they must construct a mental end-to-end view from separate tools that individually give an incomplete view, expending valuable time and mental effort. Because of limitations designed into TCP/IP [RFC-791, RFC-793], no tool can effectively correlate host and network data into an end-to-end view without kernel modifications. Currently, no other visualization exists to support end-to-end analysis. But HoNe's infrastructure overcomes TCP/IP's limitations bridging the network and transport layers in the network stack and making end-to-end correlation possible. The capstone is the HoNe Visualizer that amplifies the users' cognitive power and reduces their mental workload by illustrating the correlated data graphically. Users said HoNe would be particularly good for discovering day-zero exploits. Our usability study revealed that users performed better on intrusion detection tasks using our visualization than with tools they were accustomed to using regardless of their experience level."« less

  12. Intelligent Transportation Infrastructure Deployment Analysis System

    DOT National Transportation Integrated Search

    1997-01-01

    Much of the work on Intelligent Transportation Systems (ITS) to date has emphasized technologies, Standards/protocols, architecture, user services, core infrastructure requirements, and various other technical and institutional issues. ITS implementa...

  13. Cyber infrastructure for Fusarium: three integrated platforms supporting strain identification, phylogenetics, comparative genomics and knowledge sharing.

    PubMed

    Park, Bongsoo; Park, Jongsun; Cheong, Kyeong-Chae; Choi, Jaeyoung; Jung, Kyongyong; Kim, Donghan; Lee, Yong-Hwan; Ward, Todd J; O'Donnell, Kerry; Geiser, David M; Kang, Seogchan

    2011-01-01

    The fungal genus Fusarium includes many plant and/or animal pathogenic species and produces diverse toxins. Although accurate species identification is critical for managing such threats, it is difficult to identify Fusarium morphologically. Fortunately, extensive molecular phylogenetic studies, founded on well-preserved culture collections, have established a robust foundation for Fusarium classification. Genomes of four Fusarium species have been published with more being currently sequenced. The Cyber infrastructure for Fusarium (CiF; http://www.fusariumdb.org/) was built to support archiving and utilization of rapidly increasing data and knowledge and consists of Fusarium-ID, Fusarium Comparative Genomics Platform (FCGP) and Fusarium Community Platform (FCP). The Fusarium-ID archives phylogenetic marker sequences from most known species along with information associated with characterized isolates and supports strain identification and phylogenetic analyses. The FCGP currently archives five genomes from four species. Besides supporting genome browsing and analysis, the FCGP presents computed characteristics of multiple gene families and functional groups. The Cart/Favorite function allows users to collect sequences from Fusarium-ID and the FCGP and analyze them later using multiple tools without requiring repeated copying-and-pasting of sequences. The FCP is designed to serve as an online community forum for sharing and preserving accumulated experience and knowledge to support future research and education.
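
    A toy sketch of marker-based identification of the kind Fusarium-ID performs, scoring a query sequence against reference markers by shared k-mers; real identification uses alignment and phylogenetic analysis, and the sequences below are fabricated stand-ins:

    ```python
    # Score a query against reference marker sequences by k-mer overlap
    # and return the best-matching name.
    def kmers(seq, k=8):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    REFERENCES = {
        "F. graminearum": "ATGGCCAAGTTGACCGCTGAAATCCGTGAG",
        "F. oxysporum":   "ATGGCTAAATTGACTGCCGAGATTCGCGAA",
    }

    def identify(query, k=8):
        q = kmers(query, k)
        scores = {name: len(q & kmers(ref, k)) / max(len(q), 1)
                  for name, ref in REFERENCES.items()}
        return max(scores.items(), key=lambda kv: kv[1])

    print(identify("ATGGCCAAGTTGACCGCTGAAATCCGTGAG"))
    ```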

  14. Cyber infrastructure for Fusarium: three integrated platforms supporting strain identification, phylogenetics, comparative genomics and knowledge sharing

    PubMed Central

    Park, Bongsoo; Park, Jongsun; Cheong, Kyeong-Chae; Choi, Jaeyoung; Jung, Kyongyong; Kim, Donghan; Lee, Yong-Hwan; Ward, Todd J.; O'Donnell, Kerry; Geiser, David M.; Kang, Seogchan

    2011-01-01

    The fungal genus Fusarium includes many plant and/or animal pathogenic species and produces diverse toxins. Although accurate species identification is critical for managing such threats, it is difficult to identify Fusarium morphologically. Fortunately, extensive molecular phylogenetic studies, founded on well-preserved culture collections, have established a robust foundation for Fusarium classification. Genomes of four Fusarium species have been published with more being currently sequenced. The Cyber infrastructure for Fusarium (CiF; http://www.fusariumdb.org/) was built to support archiving and utilization of rapidly increasing data and knowledge and consists of Fusarium-ID, Fusarium Comparative Genomics Platform (FCGP) and Fusarium Community Platform (FCP). The Fusarium-ID archives phylogenetic marker sequences from most known species along with information associated with characterized isolates and supports strain identification and phylogenetic analyses. The FCGP currently archives five genomes from four species. Besides supporting genome browsing and analysis, the FCGP presents computed characteristics of multiple gene families and functional groups. The Cart/Favorite function allows users to collect sequences from Fusarium-ID and the FCGP and analyze them later using multiple tools without requiring repeated copying-and-pasting of sequences. The FCP is designed to serve as an online community forum for sharing and preserving accumulated experience and knowledge to support future research and education. PMID:21087991

  15. Evolutionary Space Communications Architectures for Human/Robotic Exploration and Science Missions

    NASA Technical Reports Server (NTRS)

    Bhasin, Kul; Hayden, Jeffrey L.

    2004-01-01

    NASA enterprises have growing needs for an advanced, integrated, communications infrastructure that will satisfy the capabilities needed for multiple human, robotic and scientific missions beyond 2015. Furthermore, the reliable, multipoint infrastructure is required to provide continuous, maximum coverage of areas of concentrated activities, such as around Earth and in the vicinity of the Moon or Mars, with access made available on demand of the human or robotic user. As a first step, the definitions of NASA's future space communications and networking architectures are underway. Architectures that describe the communications and networking needed between the nodal regions consisting of Earth, Moon, Lagrange points, Mars, and the places of interest within the inner and outer solar system have been laid out. These architectures will need the modular flexibility that must be included in the communication and networking technologies to enable the infrastructure to grow in capability with time and to transform from supporting robotic missions in the solar system to supporting human ventures to Mars, Jupiter, Jupiter's moons, and beyond. The protocol-based networking capability seamlessly connects the backbone, access, inter-spacecraft and proximity network elements of the architectures employed in the infrastructure. In this paper, we present the summary of NASA's near and long term needs and capability requirements that were gathered by participative methods. We describe an integrated architecture concept and model that will enable communications for evolutionary robotic and human science missions. We then define the communication nodes, their requirements, and various options to connect them.

  16. Evolutionary Space Communications Architectures for Human/Robotic Exploration and Science Missions

    NASA Astrophysics Data System (ADS)

    Bhasin, Kul; Hayden, Jeffrey L.

    2004-02-01

    NASA enterprises have growing needs for an advanced, integrated, communications infrastructure that will satisfy the capabilities needed for multiple human, robotic and scientific missions beyond 2015. Furthermore, the reliable, multipoint infrastructure is required to provide continuous, maximum coverage of areas of concentrated activities, such as around Earth and in the vicinity of the Moon or Mars, with access made available on demand of the human or robotic user. As a first step, the definitions of NASA's future space communications and networking architectures are underway. Architectures that describe the communications and networking needed between the nodal regions consisting of Earth, Moon, Lagrange points, Mars, and the places of interest within the inner and outer solar system have been laid out. These architectures will need the modular flexibility that must be included in the communication and networking technologies to enable the infrastructure to grow in capability with time and to transform from supporting robotic missions in the solar system to supporting human ventures to Mars, Jupiter, Jupiter's moons, and beyond. The protocol-based networking capability seamlessly connects the backbone, access, inter-spacecraft and proximity network elements of the architectures employed in the infrastructure. In this paper, we present the summary of NASA's near and long term needs and capability requirements that were gathered by participative methods. We describe an integrated architecture concept and model that will enable communications for evolutionary robotic and human science missions. We then define the communication nodes, their requirements, and various options to connect them.

  17. Users' perception as a tool to improve urban beach planning and management.

    PubMed

    Cervantes, Omar; Espejel, Ileana; Arellano, Evarista; Delhumeau, Sheila

    2008-08-01

    Four beaches that share physiographic characteristics (sandy, wide, and long) but differ in socioeconomic and cultural terms (three are located in northwestern Mexico and one in California, USA) were evaluated by beach users. Surveys (565) composed of 36 questions were handed out to beach users on weekends and holidays in 2005. The 25 questions that revealed the most information were selected by factor analysis and classified by cluster analysis. Beach users' preferences were assigned a value by comparing the present survey results with the characteristics of an "ideal" recreational urban beach. Cluster analysis separated three groups of questions: (a) services and infrastructure, (b) recreational activities, and (c) beach conditions. Cluster linkage distance (r=0.82, r=0.78, r=0.67) was used as a weight and multiplied by the value of the beach descriptive factors. Mazatlán and Oceanside obtained the highest values because they have sufficient infrastructure and services; in contrast, Ensenada and Rosarito were rated medium and low because infrastructure and services are lacking. The proposed method can contribute to improving current beach evaluations because the final score represents the beach users' evaluation of the quality of the beach. The weight considered in the present study marks the beach users' preferences among the studied beaches, and adding this weight to beach evaluation will contribute to more specific beach planning in which users' perception is considered.
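
    The scoring step reads as a weighted sum. Here is a worked sketch using the cluster-linkage weights reported above; the factor values are invented for illustration:

    ```python
    # Beach score = sum over question groups of (cluster weight * factor
    # value); the weights are those reported in the abstract.
    WEIGHTS = {
        "services_infrastructure": 0.82,
        "recreational_activities": 0.78,
        "beach_conditions": 0.67,
    }

    def beach_score(factor_values):
        return sum(WEIGHTS[g] * v for g, v in factor_values.items())

    # hypothetical factor values on a 0-1 scale for one beach
    print(round(beach_score({"services_infrastructure": 0.9,
                             "recreational_activities": 0.7,
                             "beach_conditions": 0.5}), 3))
    ```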

  18. Approach to sustainable e-Infrastructures - The case of the Latin American Grid

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Diacovo, Ramon; Brasileiro, Francisco; Carvalho, Diego; Dutra, Inês; Faerman, Marcio; Gavillet, Philippe; Hoeger, Herbert; Lopez Pourailly, Maria Jose; Marechal, Bernard; Garcia, Rafael Mayo; Neumann Ciuffo, Leandro; Ramos Pollan, Paul; Scardaci, Diego; Stanton, Michael

    2010-05-01

    The EELA (E-Infrastructure shared between Europe and Latin America) and EELA-2 (E-science grid facility for Europe and Latin America) projects, co-funded by the European Commission under FP6 and FP7, respectively, have been successful in building a high capacity, production-quality, scalable Grid Facility for a wide spectrum of applications (e.g. Earth & Life Sciences, High energy physics, etc.) from several European and Latin American User Communities. This paper presents the 4-year experience of EELA and EELA-2 in: • Providing each Member Institution the unique opportunity to benefit of a huge distributed computing platform for its research activities, in particular through initiatives such as OurGrid which proposes a so-called Opportunistic Grid Computing well adapted to small and medium Research Laboratories such as most of those of Latin America and Africa; • Developing a realistic strategy to ensure the long-term continuity of the e-Infrastructure in the Latin American continent, beyond the term of the EELA-2 project, in association with CLARA and collaborating with EGI. Previous interactions between EELA and African Grid members at events such as the IST Africa'07, 08 and 09, the International Conference on Open Access'08 and EuroAfriCa-ICT'08, to which EELA and EELA-2 contributed, have shown that the e-Infrastructure situation in Africa compares well with the Latin American one. This means that African Grids are likely to face the same problems that EELA and EELA-2 experienced, especially in getting the necessary User and Decision Makers support to create NGIs and, later, a possible continent-wide African Grid Initiative (AGI). The hope is that the EELA-2 endeavour towards sustainability as described in this presentation could help the progress of African Grids.

  19. FIN-EPOS - Finnish national initiative of the European Plate Observing System: Bringing Finnish solid Earth infrastructures into EPOS

    NASA Astrophysics Data System (ADS)

    Vuorinen, Tommi; Korja, Annakaisa

    2017-04-01

    The FIN-EPOS consortium is a joint community of Finnish national research institutes tasked with operating and maintaining solid-earth geophysical and geological observatories and laboratories in Finland. These national research infrastructures (NRIs) seek to join the EPOS research infrastructure (EPOS RI) and further pursue Finland's participation as a founding member in EPOS ERIC (European Research Infrastructure Consortium). Current partners of FIN-EPOS are the University of Helsinki (UH), the University of Oulu (UO), the Finnish Geospatial Research Institute (FGI) of the National Land Survey (NLS), the Finnish Meteorological Institute (FMI), the Geological Survey of Finland (GTK), CSC - IT Center for Science, and MIKES Metrology at VTT Technical Research Centre of Finland Ltd. The consortium is hosted by the Institute of Seismology, UH (ISUH). The primary purpose of the consortium is to act as a coordinating body between the various NRIs and the EPOS RI. FIN-EPOS engages in planning and development of the national EPOS RI and will support the partner NRIs during the EPOS implementation phase (IP). FIN-EPOS also promotes awareness of EPOS in Finland and is open to new partner NRIs that would benefit from participating in EPOS. The consortium additionally seeks to advance solid Earth science education, technologies and innovations in Finland, and actively engages in Nordic co-operation and collaboration among solid Earth RIs. The main short-term objective of FIN-EPOS is to make Finnish geoscientific data provided by NRIs interoperable with the Thematic Core Services (TCS) in the EPOS IP. Consortium partners commit to applying and following the metadata and data format standards provided by EPOS. FIN-EPOS will also provide a national Finnish-language web portal where users are identified and their user rights for EPOS resources are defined.

  20. High-performance integrated virtual environment (HIVE): a robust infrastructure for next-generation sequence data analysis

    PubMed Central

    Simonyan, Vahan; Chumakov, Konstantin; Dingerdissen, Hayley; Faison, William; Goldweber, Scott; Golikov, Anton; Gulzar, Naila; Karagiannis, Konstantinos; Vinh Nguyen Lam, Phuc; Maudru, Thomas; Muravitskaja, Olesja; Osipova, Ekaterina; Pan, Yang; Pschenichnov, Alexey; Rostovtsev, Alexandre; Santana-Quintero, Luis; Smith, Krista; Thompson, Elaine E.; Tkachenko, Valery; Torcivia-Rodriguez, John; Wan, Quan; Wang, Jing; Wu, Tsung-Jung; Wilson, Carolyn; Mazumder, Raja

    2016-01-01

    The High-performance Integrated Virtual Environment (HIVE) is a distributed storage and compute environment designed primarily to handle next-generation sequencing (NGS) data. This multicomponent cloud infrastructure provides secure web access for authorized users to deposit, retrieve, annotate and compute on NGS data, and to analyse the outcomes using web interface visual environments appropriately built in collaboration with research and regulatory scientists and other end users. Unlike many massively parallel computing environments, HIVE uses a cloud control server which virtualizes services, not processes. It is both very robust and flexible due to the abstraction layer introduced between computational requests and operating system processes. The novel paradigm of moving computations to the data, instead of moving data to computational nodes, has proven to be significantly less taxing for both hardware and network infrastructure. The honeycomb data model developed for HIVE integrates metadata into an object-oriented model. Its distinction from other object-oriented databases is in the additional implementation of a unified application program interface to search, view and manipulate data of all types. This model simplifies the introduction of new data types, thereby minimizing the need for database restructuring and streamlining the development of new integrated information systems. The honeycomb model employs a highly secure hierarchical access control and permission system, allowing determination of data access privileges in a finely granular manner without flooding the security subsystem with a multiplicity of rules. HIVE infrastructure will allow engineers and scientists to perform NGS analysis in a manner that is both efficient and secure. HIVE is actively supported in public and private domains, and project collaborations are welcomed. Database URL: https://hive.biochemistry.gwu.edu PMID:26989153

  1. High-performance integrated virtual environment (HIVE): a robust infrastructure for next-generation sequence data analysis.

    PubMed

    Simonyan, Vahan; Chumakov, Konstantin; Dingerdissen, Hayley; Faison, William; Goldweber, Scott; Golikov, Anton; Gulzar, Naila; Karagiannis, Konstantinos; Vinh Nguyen Lam, Phuc; Maudru, Thomas; Muravitskaja, Olesja; Osipova, Ekaterina; Pan, Yang; Pschenichnov, Alexey; Rostovtsev, Alexandre; Santana-Quintero, Luis; Smith, Krista; Thompson, Elaine E; Tkachenko, Valery; Torcivia-Rodriguez, John; Voskanian, Alin; Wan, Quan; Wang, Jing; Wu, Tsung-Jung; Wilson, Carolyn; Mazumder, Raja

    2016-01-01

    The High-performance Integrated Virtual Environment (HIVE) is a distributed storage and compute environment designed primarily to handle next-generation sequencing (NGS) data. This multicomponent cloud infrastructure provides secure web access for authorized users to deposit, retrieve, annotate and compute on NGS data, and to analyse the outcomes using web interface visual environments appropriately built in collaboration with research and regulatory scientists and other end users. Unlike many massively parallel computing environments, HIVE uses a cloud control server which virtualizes services, not processes. It is both very robust and flexible due to the abstraction layer introduced between computational requests and operating system processes. The novel paradigm of moving computations to the data, instead of moving data to computational nodes, has proven to be significantly less taxing for both hardware and network infrastructure. The honeycomb data model developed for HIVE integrates metadata into an object-oriented model. Its distinction from other object-oriented databases is in the additional implementation of a unified application program interface to search, view and manipulate data of all types. This model simplifies the introduction of new data types, thereby minimizing the need for database restructuring and streamlining the development of new integrated information systems. The honeycomb model employs a highly secure hierarchical access control and permission system, allowing determination of data access privileges in a finely granular manner without flooding the security subsystem with a multiplicity of rules. HIVE infrastructure will allow engineers and scientists to perform NGS analysis in a manner that is both efficient and secure. HIVE is actively supported in public and private domains, and project collaborations are welcomed. Database URL: https://hive.biochemistry.gwu.edu. © The Author(s) 2016. Published by Oxford University Press.
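
    As a loose illustration of the honeycomb idea described in the two records above, the hypothetical sketch below shows one unified search interface over typed metadata objects, with access rights resolved hierarchically instead of through per-object rules. All class and method names are invented for illustration; they are not HIVE's actual interfaces.

```python
# Hypothetical honeycomb-style object model: every data type shares one
# search API, and permissions are inherited down an object hierarchy.
# Names are invented; this is not HIVE's real implementation.
from dataclasses import dataclass, field

@dataclass
class HiveObject:
    obj_id: str
    obj_type: str                 # e.g. "sequence-read", "alignment", "report"
    metadata: dict
    parent: "HiveObject | None" = None
    acl: dict = field(default_factory=dict)   # user -> "read" or "write"

    def can_read(self, user: str) -> bool:
        """Walk up the hierarchy until an access rule for the user is found."""
        node = self
        while node is not None:
            if user in node.acl:
                return node.acl[user] in ("read", "write")
            node = node.parent
        return False

class Registry:
    """One search interface for every object type (the 'unified API')."""
    def __init__(self):
        self._objects: list[HiveObject] = []

    def register(self, obj: HiveObject):
        self._objects.append(obj)

    def search(self, user: str, **criteria) -> list[HiveObject]:
        return [o for o in self._objects
                if o.can_read(user)
                and all(o.metadata.get(k) == v for k, v in criteria.items())]

# Usage: a project node grants read access once; its children inherit it.
project = HiveObject("p1", "project", {"name": "ngs-run-42"}, acl={"alice": "read"})
reads = HiveObject("r1", "sequence-read", {"organism": "poliovirus"}, parent=project)
reg = Registry()
reg.register(project); reg.register(reads)
print([o.obj_id for o in reg.search("alice", organism="poliovirus")])  # ['r1']
```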

  2. The Anatomy of a Grid portal

    NASA Astrophysics Data System (ADS)

    Licari, Daniele; Calzolari, Federico

    2011-12-01

    In this paper we introduce a new way to deal with Grid portals, referring to our implementation. L-GRID is a light portal to access the EGEE/EGI Grid infrastructure via the Web, allowing users to submit their jobs from a common Web browser in a few minutes, without any knowledge of the Grid infrastructure. It provides control over the complete lifecycle of a Grid job, from its submission and status monitoring to the output retrieval. The system, implemented as a client-server architecture, is based on the Globus Grid middleware. The client-side application is based on a Java applet; the server relies on a Globus User Interface. There is no need for user registration on the server side; the user needs only a personal X.509 certificate. The system is user-friendly, secure (it uses the SSL protocol and mechanisms for dynamic delegation and identity creation in public key infrastructures), highly customizable, open source, and easy to install. The personal X.509 certificate never leaves the local machine. The portal reduces the time spent on job submission while granting higher efficiency and a better security level in proxy delegation and management.
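
    As a rough illustration of the client-side pattern just described, the sketch below keeps the X.509 credential on the local machine and uses it only to open a mutually authenticated TLS session. The endpoint URL and form field are invented for illustration; they are not L-GRID's actual interface.

```python
# Schematic sketch of certificate-based job submission (not L-GRID's real
# API): the user's X.509 key pair stays on the client host and is used only
# to authenticate the TLS connection.
import requests

# Hypothetical portal endpoint; the record above does not publish one.
PORTAL_URL = "https://lgrid.example.org/api/submit"

def submit_job(jdl_text: str, usercert: str, userkey: str) -> str:
    """POST a job description, authenticating with the personal certificate."""
    resp = requests.post(
        PORTAL_URL,
        data={"jdl": jdl_text},          # hypothetical form field
        cert=(usercert, userkey),        # client-side X.509 authentication
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text                     # e.g. a job identifier

# job_id = submit_job('Executable = "/bin/hostname";',
#                     "usercert.pem", "userkey.pem")
```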

  3. Instinctive analytics for coalition operations (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    de Mel, Geeth R.; La Porta, Thomas; Pham, Tien; Pearson, Gavin

    2017-05-01

    The success of future military coalition operations—be they combat or humanitarian—will increasingly depend on a system's ability to share data and processing services (e.g. aggregation, summarization, fusion), and to automatically compose services in support of complex tasks at the network edge. We call such an infrastructure instinctive—i.e., an infrastructure that reacts instinctively to address the analytics task at hand. However, developing such an infrastructure is made complex in the coalition environment by its dynamism, both in terms of user requirements and of service availability. To address this challenge, in this paper we highlight our research vision and sketch some initial solutions. Specifically, we propose means to (1) automatically infer formal task requirements from mission specifications; (2) discover data, services, and their features automatically to satisfy the identified requirements; (3) create and augment shared domain models automatically; (4) efficiently offload services to the network edge and across coalition boundaries, adhering to their computational properties and costs; and (5) optimally allocate and adjust services while respecting the constraints of the operating environment and service fit. We envision that the research will result in a framework that enables self-description, discovery, and assembly capabilities for both data and services in support of coalition mission goals.

  4. Interface methods for using intranet portal organizational memory information system.

    PubMed

    Ji, Yong Gu; Salvendy, Gavriel

    2004-12-01

    In this paper, an intranet portal is considered as an information infrastructure (organizational memory information system, OMIS) supporting organizational learning. The properties and hierarchical structure of information and knowledge in an intranet portal OMIS were identified as a problem for the navigation tools of an intranet portal interface. The problem relates to the navigation and retrieval functions of an intranet portal OMIS and is expected to adversely affect user performance, satisfaction, and usefulness. To address the problem, a conceptual model for the navigation tools of an intranet portal interface was proposed, and an experiment using a crossover design was conducted with 10 participants. In the experiment, a separate access method (tabbed tree tool) was compared to a unified access method (single tree tool). The results indicate that each information/knowledge repository for which a user has different structural knowledge should be handled separately, with separate access, to increase user satisfaction and the usefulness of the OMIS and to improve user performance in navigation.

  5. Challenges to overcome: energy supply for remote consumers in the Russian Arctic

    NASA Astrophysics Data System (ADS)

    Morgunova, M. O.; Solovyev, D. A.

    2017-11-01

    The paper explores the challenges of power supply for remote users through the case of the Northern Sea Route (NSR) supportive infrastructure development and the specially protected natural areas (NPA) of the Russian Arctic. The study is based on a comprehensive analysis of relevant data on the state of renewable energy in the Russian Arctic. The paper gives policy recommendations on how to extend the use of renewable energy power plants in the region, optimize their input, and increase cost-effectiveness and safety.

  6. Space Station Needs, Attributes and Architectural Options. Contractor orientation briefings

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Requirements are considered for user missions involving life sciences, astrophysics, environmental observation, Earth and planetary exploration, materials processing, Spacelab payloads, technology development, and communications. Plans to exchange data with potential cooperating nations and ESA are reviewed. The capability of the space shuttle to support space station activities is discussed. The status of the OAST space station technology study, conceptual architectures for a space station, elements of the space-based infrastructure, and the use of the shuttle external tank are also considered.

  7. Enriching Spatial Data Infrastructure (sdi) by User Generated Contents for Transportation

    NASA Astrophysics Data System (ADS)

    Shakeri, M.; Alimohammadi, A.; Sadeghi-Niaraki, A.; Alesheikh, A. A.

    2013-09-01

    Spatial data is one of the most critical elements underpinning decision making in many disciplines. Accessing and sharing spatial data have always been a great struggle for researchers. Spatial data infrastructure (SDI) plays a key role in spatial data sharing by providing a suitable platform for collaboration and cooperation among the different data-producer organizations. In recent years, the SDI vision has moved toward a user-centric platform, which has led to the development of a new and enriched generation of SDI (the third generation). This vision is to provide an environment in which users can cooperate to handle spatial data in an effective and satisfactory way. User-centric SDI concentrates on users and their requirements and preferences, whereas past SDI initiatives mainly concentrated on technological issues such as data harmonization, standardized metadata models, and standardized web services for data discovery, visualization and download. At the same time, new technologies such as GPS-equipped smartphones, navigation devices and Web 2.0 technologies have enabled citizens to actively participate in the production and sharing of spatial information. This has led to the emergence of a new phenomenon called Volunteered Geographic Information (VGI). VGI describes any type of voluntarily collected content that has a geographic element. Its distinctive feature is that the geographic information can be collected and produced by citizens with varying levels of formal expertise and knowledge of spatial or geographic concepts. Ordinary citizens can thus provide massive sources of information that cannot be ignored, and these can serve as valuable spatial information sources in SDI, usable for completing, improving and updating existing databases. Spatial information and technologies are an important part of transportation systems. Planning, design and operation of transportation systems require the exchange of large volumes of spatial data and often close cooperation among various organizations. However, there is no technical and organizational process that yields a data infrastructure suited to the diverse needs of transportation. Hence, common standards and a simple data exchange mechanism are strongly needed in the transportation field for decision support. Since one of the main purposes of transportation projects is to improve the quality of services provided to users, it is necessary to involve the users themselves in the decision-making processes, through public participation and involvement in all stages of transportation projects. In other words, using public knowledge and information as another source of information is very important for making better and more efficient decisions. Public participation in transportation projects can also help organizations enhance their public support, because a lack of public support can lead to the failure of technically valid projects. However, due to the complexity of transportation tasks and the lack of appropriate environments and methods for facilitating public participation and for collecting and analyzing public information and opinions, public participation in this field has not been well addressed so far. This paper reviews previous research on enriched SDI development and its movement toward VGI, focusing on public participation in transportation projects. To this end, the methods and models used in previous research are first studied and classified. Then, previous approaches to VGI and transportation are conceptualized within SDI. Finally, a method for transportation projects is suggested. Results indicate the success of the new generation of SDI in integrating public participation into transportation projects.

  8. Assessing Socioeconomic Impacts of Cascading Infrastructure Disruptions in a Dynamic Human-Infrastructure Network

    DTIC Science & Technology

    2016-07-01

    CAC: common access card; DoD: Department of Defense; FOUO: For Official Use Only; GIS: geographic information systems; GUI: graphical user interface; HISA: … The report, as per the requirements of this project, is UNCLASS/For Official Use Only (FOUO), with access restricted to DoD common access card (CAC) users.

  9. A Possible Approach for Addressing Neglected Human Factors Issues of Systems Engineering

    NASA Technical Reports Server (NTRS)

    Johnson, Christopher W.; Holloway, C. Michael

    2011-01-01

    The increasing complexity of safety-critical applications has led to the introduction of decision support tools in the transportation and process industries. Automation has also been introduced to support operator intervention in safety-critical applications. These innovations help reduce overall operator workload, and filter application data to maximize the finite cognitive and perceptual resources of system operators. However, these benefits do not come without a cost. Increased computational support for the end-users of safety-critical applications leads to increased reliance on engineers to monitor and maintain automated systems and decision support tools. This paper argues that by focussing on the end-users of complex applications, previous research has tended to neglect the demands that are being placed on systems engineers. The argument is illustrated through discussing three recent accidents. The paper concludes by presenting a possible strategy for building and using highly automated systems based on increased attention by management and regulators, improvements in competency and training for technical staff, sustained support for engineering team resource management, and the development of incident reporting systems for infrastructure failures. This paper represents preliminary work, about which we seek comments and suggestions.

  10. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  11. Collaboratively Architecting a Scalable and Adaptable Petascale Infrastructure to Support Transdisciplinary Scientific Research for the Australian Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Evans, B. J. K.; Pugh, T.; Lescinsky, D. T.; Foster, C.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) at the Australian National University (ANU) is a partnership between CSIRO, ANU, the Bureau of Meteorology (BoM) and Geoscience Australia. Recent investments in a 1.2 PFlop supercomputer (Raijin), ~20 PB of data storage using Lustre filesystems, and a 3000-core high-performance cloud have created a hybrid platform for high-performance computing and data-intensive science to enable large-scale earth and climate systems modelling and analysis. There are >3000 users actively logging in and >600 projects on the NCI system. Efficiently scaling and adapting data and software systems to petascale infrastructures requires the collaborative development of an architecture that is designed, programmed and operated to enable users to interactively invoke different forms of in-situ computation over complex and large-scale data collections. NCI makes available major and long-tail data collections from both the government and research sectors based on six themes: 1) weather, climate and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, bio and social. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. Collections are the operational form for data management and access. Similar data types from individual custodians are managed cohesively. Use of international standards for discovery and interoperability allows complex interactions within and between the collections. This design facilitates a transdisciplinary approach to research and enables a shift from small-scale, 'stove-piped' science efforts to large-scale, collaborative systems science. This new and complex infrastructure requires a move to shared, globally trusted software frameworks that can be maintained and updated. Workflow engines become essential and need to integrate provenance, versioning, traceability, repeatability and publication. There are also human resource challenges, as highly skilled HPC/HPD specialists, specialist programmers, and data scientists are required whose skills can support scaling to the new paradigm of effective and efficient data-intensive earth science analytics on petascale, and soon exascale, systems.

  12. NCBI GEO: mining tens of millions of expression profiles--database and tools update.

    PubMed

    Barrett, Tanya; Troup, Dennis B; Wilhite, Stephen E; Ledoux, Pierre; Rudnev, Dmitry; Evangelista, Carlos; Kim, Irene F; Soboleva, Alexandra; Tomashevsky, Maxim; Edgar, Ron

    2007-01-01

    The Gene Expression Omnibus (GEO) repository at the National Center for Biotechnology Information (NCBI) archives and freely disseminates microarray and other forms of high-throughput data generated by the scientific community. The database has a minimum information about a microarray experiment (MIAME)-compliant infrastructure that captures fully annotated raw and processed data. Several data deposit options and formats are supported, including web forms, spreadsheets, XML and Simple Omnibus Format in Text (SOFT). In addition to data storage, a collection of user-friendly web-based interfaces and applications are available to help users effectively explore, visualize and download the thousands of experiments and tens of millions of gene expression patterns stored in GEO. This paper provides a summary of the GEO database structure and user facilities, and describes recent enhancements to database design, performance, submission format options, data query and retrieval utilities. GEO is accessible at http://www.ncbi.nlm.nih.gov/geo/
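
    Beyond the web interfaces, GEO records can also be queried programmatically through NCBI's public E-utilities, where the "gds" database backs GEO DataSets. A small example follows; the search term is only an illustration.

```python
# Query GEO DataSets through NCBI's public E-utilities (esearch endpoint).
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def geo_search(term: str, retmax: int = 5) -> str:
    """Return the raw XML listing of GEO DataSets IDs matching `term`."""
    query = urllib.parse.urlencode({"db": "gds", "term": term, "retmax": retmax})
    with urllib.request.urlopen(f"{BASE}?{query}", timeout=30) as resp:
        return resp.read().decode()

# Illustrative search term.
print(geo_search("breast cancer AND expression profiling by array"))
```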

  13. Quantum secured gigabit optical access networks

    PubMed Central

    Fröhlich, Bernd; Dynes, James F.; Lucamarini, Marco; Sharpe, Andrew W.; Tam, Simon W.-B.; Yuan, Zhiliang; Shields, Andrew J.

    2015-01-01

    Optical access networks connect multiple endpoints to a common network node via shared fibre infrastructure. They will play a vital role to scale up the number of users in quantum key distribution (QKD) networks. However, the presence of power splitters in the commonly used passive network architecture makes successful transmission of weak quantum signals challenging. This is especially true if QKD and data signals are multiplexed in the passive network. The splitter introduces an imbalance between quantum signal and Raman noise, which can prevent the recovery of the quantum signal completely. Here we introduce a method to overcome this limitation and demonstrate coexistence of multi-user QKD and full power data traffic from a gigabit passive optical network (GPON) for the first time. The dual feeder implementation is compatible with standard GPON architectures and can support up to 128 users, highlighting that quantum protected GPON networks could be commonplace in the future. PMID:26656307

  14. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies indicating that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technology infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will take the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  15. IBEX: An open infrastructure software platform to facilitate collaborative work in radiomics

    PubMed Central

    Zhang, Lifei; Fried, David V.; Fave, Xenia J.; Hunter, Luke A.; Court, Laurence E.

    2015-01-01

    Purpose: Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. Methods: The IBEX software package was developed using the MATLAB and C/C++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate if their relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and C/C++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX's functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation results, are embedded in the IBEX workflow: image data, feature algorithms, and model validation, including newly developed ones from different users, can be easily and consistently shared so that results can be more easily reproduced between institutions. Results: Researchers with a variety of technical skill levels, including radiation oncologists, physicists, and computer scientists, have found the IBEX software to be intuitive, powerful, and easy to use. IBEX can be run on any computer with the Windows operating system and 1 GB of RAM. The authors fully validated the implementation of all importers, preprocessing algorithms, and feature extraction algorithms. Windows version 1.0 beta of stand-alone IBEX and IBEX's source code can be downloaded. Conclusions: The authors successfully implemented IBEX, an open infrastructure software platform that streamlines common radiomics workflow tasks. Its transparency, flexibility, and portability can greatly accelerate the pace of radiomics research and pave the way toward successful clinical translation. PMID:25735289

  16. IBEX: an open infrastructure software platform to facilitate collaborative work in radiomics.

    PubMed

    Zhang, Lifei; Fried, David V; Fave, Xenia J; Hunter, Luke A; Yang, Jinzhong; Court, Laurence E

    2015-03-01

    Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. The IBEX software package was developed using the MATLAB and c/c++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate if their relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and c/c++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX's functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation results, are embedded in the IBEX workflow: image data, feature algorithms, and model validation, including newly developed ones from different users, can be easily and consistently shared so that results can be more easily reproduced between institutions. Researchers with a variety of technical skill levels, including radiation oncologists, physicists, and computer scientists, have found the IBEX software to be intuitive, powerful, and easy to use. IBEX can be run on any computer with the Windows operating system and 1 GB of RAM. The authors fully validated the implementation of all importers, preprocessing algorithms, and feature extraction algorithms. Windows version 1.0 beta of stand-alone IBEX and IBEX's source code can be downloaded. The authors successfully implemented IBEX, an open infrastructure software platform that streamlines common radiomics workflow tasks. Its transparency, flexibility, and portability can greatly accelerate the pace of radiomics research and pave the way toward successful clinical translation.

  17. Progress of the European Assistive Technology Information Network.

    PubMed

    Gower, Valerio; Andrich, Renzo

    2015-01-01

    The European Assistive Technology Information Network (EASTIN), launched in 2005 as the result of a collaborative EU project, provides information on assistive technology products and related material through the website www.eastin.eu. In the past few years several advancements have been implemented on the EASTIN website thanks to the contribution of EU-funded projects, including a multilingual query processing component for supporting non-expert users, a user rating and comment facility, and a detailed taxonomy for the description of ICT-based assistive products. Recently, within the framework of the EU-funded project Cloud4All, the EASTIN information system has also been federated with the Unified Listing of assistive products, one of the building blocks of the Global Public Inclusive Infrastructure initiative.

  18. CMS distributed data analysis with CRAB3

    DOE PAGES

    Mascheroni, M.; Balcas, J.; Belforte, S.; ...

    2015-12-23

    The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB was successfully employed by an average of 350 distinct users each week, executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage at the execution site to the desired storage location. Furthermore, the new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, its operation and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.

  19. Kwf-Grid workflow management system for Earth science applications

    NASA Astrophysics Data System (ADS)

    Tran, V.; Hluchy, L.

    2009-04-01

    In this paper, we present a workflow management tool for Earth science applications in EGEE. The workflow management tool was originally developed within the K-wf Grid project for GT4 middleware and has many advanced features, such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting the workflow management tool to gLite middleware for Earth science applications. The K-wf Grid workflow management system was developed within the "Knowledge-based Workflow System for Grid Applications" project under the 6th Framework Programme. The workflow management system is intended to: - semi-automatically compose a workflow of Grid services, - execute the composed workflow application in a Grid computing environment, - monitor the performance of the Grid infrastructure and the Grid applications, - analyze the resulting monitoring information, - capture the knowledge that is contained in the information by means of intelligent agents, - and finally to reuse the joined knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. K-wf Grid workflow engines can support different types of jobs (e.g. GRAM jobs, web services) in a workflow. A new class of gLite job has been added to the system, allowing it to manage and execute gLite jobs on the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite allows EGEE users to use the system and benefit from its advanced features. The system has been initially tested and evaluated with applications from the ES clusters.

  20. The IRI/LDEO Climate Data Library: Helping People use Climate Data

    NASA Astrophysics Data System (ADS)

    Blumenthal, M. B.; Grover-Kopec, E.; Bell, M.; del Corral, J.

    2005-12-01

    The IRI Climate Data Library (http://iridl.ldeo.columbia.edu/) is a library of datasets. By library we mean a collection of things, collected from both near and far, designed to make them more accessible for the library's users. Our datasets come from many different sources, many different "data cultures", and many different formats. By dataset we mean a collection of data organized as multidimensional dependent variables, independent variables, and sub-datasets, along with the metadata (particularly use-metadata) that makes it possible to interpret the data in a meaningful manner. Ingrid, which provides the infrastructure for the Data Library, is an environment that lets one work with datasets: read, write, request, serve, view, select, calculate, transform, and more. It hides an extraordinary amount of technical detail from the user, letting the user think in terms of manipulations of datasets rather than manipulations of files of numbers. Among other things, this hidden technical detail could include accessing data on servers elsewhere, performing only the small needed portion of an enormous calculation, or translating to and from a variety of formats and between "data cultures". These operations are presented as a collection of virtual directories and documents on a web server, so that an ordinary web client can instantiate a calculation simply by requesting the resulting document or image. Building on this infrastructure, we (and others) have created collections of dynamically updated images to facilitate monitoring aspects of the climate system, as well as linking these images to the underlying data. We have also created specialized interfaces to address the particular needs of user groups that IRI needs to support.
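
    Because Ingrid exposes calculations as virtual documents, any HTTP client can trigger one by requesting a URL. The sketch below illustrates the idea in Python; the dataset path is a hypothetical placeholder, and the exact operator syntax should be taken from the Data Library documentation rather than from this example.

```python
# Requesting an Ingrid URL triggers the server-side calculation and returns
# the resulting document. The dataset path below is a hypothetical
# placeholder, not a known working dataset.
import urllib.request

url = ("http://iridl.ldeo.columbia.edu/"
       "SOURCES/.EXAMPLE/.dataset/.var/"   # placeholder dataset path
       "%5BT%5Daverage/"                   # '[T]average': average over time
       "data.nc")                          # ask for the result as netCDF

with urllib.request.urlopen(url, timeout=60) as resp:
    payload = resp.read()                  # bytes of the computed product

print(f"received {len(payload)} bytes")
```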

  1. Research Challenges in Managing and Using Service Level Agreements

    NASA Astrophysics Data System (ADS)

    Rana, Omer; Ziegler, Wolfgang

    A Service Level Agreement (SLA) represents an agreement between a service user and a provider in the context of a particular service provision. SLAs contain Quality of Service properties that must be maintained by a provider, as agreed between the provider and the user/client. These are generally defined as a set of Service Level Objectives (SLOs). These properties need to be measurable and must be monitored during the provision of the service that has been agreed in the SLA. The SLA must also contain a set of penalty clauses specifying what happens when service providers fail to deliver the pre-agreed quality. Hence, an SLA may be used by both a user and a provider: from a user's perspective, an SLA defines what is required, often in terms of non-functional attributes of service provision; from a provider's perspective, an SLA may be used to support capacity planning, especially if a provider is making its capability available to multiple users. A client and provider may also use an SLA to manage their behaviour over time, for instance to optimise long-running revenue (cost) or QoS attributes (such as execution time). The lifecycle of an SLA is outlined, along with various uses of SLAs to support infrastructure management. A discussion of WS-Agreement, the emerging standard for specifying SLAs, is also provided.
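
    To make the terms concrete, here is a minimal sketch of an SLA as a data structure: measurable SLOs checked against monitored values, with penalty clauses applied on violation. The field names and penalty model are invented for illustration; this is not WS-Agreement's actual schema.

```python
# Minimal sketch of an SLA: Service Level Objectives plus penalty clauses
# evaluated against monitored metric values. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class SLO:
    metric: str          # e.g. "response_time_ms"
    threshold: float     # agreed bound
    comparator: str      # "<=" or ">="

    def satisfied(self, observed: float) -> bool:
        if self.comparator == "<=":
            return observed <= self.threshold
        return observed >= self.threshold

@dataclass
class SLA:
    provider: str
    consumer: str
    objectives: list
    penalty_per_violation: float   # e.g. credit owed per breached SLO

    def assess(self, observations: dict) -> float:
        """Return the total penalty for a set of monitored metric values."""
        violations = [slo for slo in self.objectives
                      if not slo.satisfied(observations[slo.metric])]
        return len(violations) * self.penalty_per_violation

sla = SLA("provider-A", "user-B",
          [SLO("response_time_ms", 200, "<="), SLO("availability_pct", 99.5, ">=")],
          penalty_per_violation=10.0)
# One SLO breached (response time), one met (availability): penalty 10.0.
print(sla.assess({"response_time_ms": 250, "availability_pct": 99.9}))
```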

  2. Primary care access barriers as reported by nonurgent emergency department users: implications for the US primary care infrastructure.

    PubMed

    Hefner, Jennifer L; Wexler, Randy; McAlearney, Ann Scheck

    2015-01-01

    The objective was to explore variation by insurance status in patient-reported barriers to accessing primary care. The authors fielded a brief, anonymous, voluntary survey of nonurgent emergency department (ED) visits at a large academic medical center and conducted descriptive analysis and thematic coding of 349 open-ended survey responses. The privately insured predominantly reported primary care infrastructure barriers: wait time in clinic and for an appointment, constraints related to conventional business hours, and difficulty finding a primary care provider (because of geography or lack of new patient openings). Half of those insured by Medicaid and/or Medicare also reported these infrastructure barriers. In contrast, the uninsured predominantly reported insurance, income, and transportation barriers. Given that insured nonurgent ED users frequently report infrastructure barriers, these should be the focus of patient-level interventions to reduce nonurgent ED use and of health system-level policies to enhance the capacity of the US primary care infrastructure. © 2014 by the American College of Medical Quality.

  3. Supporting NEESPI with Data Services - The SIB-ESS-C e-Infrastructure

    NASA Astrophysics Data System (ADS)

    Gerlach, R.; Schmullius, C.; Frotscher, K.

    2009-04-01

    Data discovery and retrieval are commonly among the first steps performed for any Earth science study. The way scientific data are searched and accessed has changed significantly over the past two decades. In particular, the development of the World Wide Web and the technologies that evolved along with it shortened the data discovery and data exchange process. At the same time, the amount of data collected and distributed by earth scientists has increased exponentially, requiring new concepts for data management and sharing. One such concept is to build Spatial Data Infrastructures (SDIs) or e-Infrastructures. These infrastructures usually contain components for data discovery, allowing users (or other systems) to query a catalogue or registry and retrieve metadata on available data holdings and services. Data access is typically granted using FTP/HTTP protocols or, more advanced, through Web Services. A Service Oriented Architecture (SOA) approach based on standardized services enables users to benefit from interoperability among different systems and to integrate distributed services into their applications. The Siberian Earth System Science Cluster (SIB-ESS-C) being established at the University of Jena (Germany) is such a spatial data infrastructure, following these principles and implementing standards published by the Open Geospatial Consortium (OGC) and the International Organization for Standardization (ISO). The prime objective is to provide researchers with a focus on Siberia with the technical means for data discovery, data access, data publication and data analysis. The region of interest covers the entire Asian part of the Russian Federation from the Urals to the Pacific Ocean, including the Ob, Lena and Yenissey river catchments. The aim of SIB-ESS-C is to provide a comprehensive set of data products for Earth system science in this region. Although SIB-ESS-C will be equipped with processing capabilities for in-house data generation (mainly from Earth observation), the current data holdings of SIB-ESS-C have been created in collaboration with a number of partners in previous and ongoing research projects (e.g. SIBERIA-II, SibFORD, IRIS). At the current development stage the SIB-ESS-C system comprises a federated metadata catalogue accessible through the SIB-ESS-C Web Portal or from any OGC-CSW compliant client. Due to full interoperability with other metadata catalogues, users of the SIB-ESS-C Web Portal are able to search external metadata repositories. The Web Portal also contains a simple visualization component, which will be extended to a comprehensive visualization and analysis tool in the near future. All data products are already accessible as a Web Map Service and will soon be made available as Web Feature and Web Coverage Services, allowing users to incorporate the data directly into their applications. The SIB-ESS-C infrastructure will be further developed as one node in a network of similar systems (e.g. NASA GIOVANNI) in the NEESPI region.
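
    As a sketch of how a client might exercise the OGC standards named above, the example below uses the OWSLib package (pip install owslib) to query a CSW catalogue and fetch a WMS layer. The endpoint URLs are placeholders; the record does not give the actual SIB-ESS-C service addresses.

```python
# Discover metadata via OGC CSW and fetch a rendered layer via OGC WMS,
# using OWSLib. Both endpoint URLs are hypothetical placeholders.
from owslib.csw import CatalogueServiceWeb
from owslib.wms import WebMapService

# Query a catalogue service for the first page of metadata records.
csw = CatalogueServiceWeb("http://sib-ess-c.example.org/csw")
csw.getrecords2(maxrecords=10)
for rec_id, rec in csw.records.items():
    print(rec_id, rec.title)

# Retrieve a rendered map image for the first advertised WMS layer.
wms = WebMapService("http://sib-ess-c.example.org/wms", version="1.1.1")
first_layer = list(wms.contents)[0]
img = wms.getmap(layers=[first_layer],
                 srs="EPSG:4326",
                 bbox=(60.0, 45.0, 180.0, 80.0),   # roughly Siberia
                 size=(600, 300),
                 format="image/png")
open("layer.png", "wb").write(img.read())
```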

  4. The new generation of OpenGL support in ROOT

    NASA Astrophysics Data System (ADS)

    Tadel, M.

    2008-07-01

    OpenGL has been promoted to become the main 3D rendering engine of the ROOT framework. This required a major re-modularization of OpenGL support on all levels, from basic window-system specific interface to medium-level object-representation and top-level scene management. This new architecture allows seamless integration of external scene-graph libraries into the ROOT OpenGL viewer as well as inclusion of ROOT 3D scenes into external GUI and OpenGL-based 3D-rendering frameworks. Scene representation was removed from inside of the viewer, allowing scene-data to be shared among several viewers and providing for a natural implementation of multi-view canvas layouts. The object-graph traversal infrastructure allows free mixing of 3D and 2D-pad graphics and makes implementation of ROOT canvas in pure OpenGL possible. Scene-elements representing ROOT objects trigger automatic instantiation of user-provided rendering-objects based on the dictionary information and class-naming convention. Additionally, a finer, per-object control over scene-updates is available to the user, allowing overhead-free maintenance of dynamic 3D scenes and creation of complex real-time animations. User-input handling was modularized as well, making it easy to support application-specific scene navigation, selection handling and tool management.
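
    For readers who want to try the GL rendering path, here is a short PyROOT sketch (Python is used for consistency with the other examples in this collection). It assumes a ROOT build with OpenGL support, and the histogram content is arbitrary demo data.

```python
# Enable OpenGL-backed canvas rendering in ROOT and draw a GL surface plot.
# Assumes a ROOT installation built with OpenGL support.
import ROOT

ROOT.gStyle.SetCanvasPreferGL(True)        # route pad rendering through OpenGL
c = ROOT.TCanvas("c", "GL surface", 700, 500)

h = ROOT.TH2F("h", "demo surface;x;y", 40, -3, 3, 40, -3, 3)
rng = ROOT.TRandom3(42)
for _ in range(20000):                     # fill with a 2D Gaussian cloud
    h.Fill(rng.Gaus(0, 1), rng.Gaus(0, 1))

h.Draw("GLSURF1")                          # GL-based surface drawing option
c.Update()
```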

  5. Towards sustainable infrastructure management: knowledge-based service-oriented computing framework for visual analytics

    NASA Astrophysics Data System (ADS)

    Vatcha, Rashna; Lee, Seok-Won; Murty, Ajeet; Tolone, William; Wang, Xiaoyu; Dou, Wenwen; Chang, Remco; Ribarsky, William; Liu, Wanqiu; Chen, Shen-en; Hauser, Edd

    2009-05-01

    Infrastructure management (and its associated processes) is complex to understand and perform, which makes efficient, effective, and informed decision making hard. The management involves a multi-faceted operation that requires the most robust data fusion, visualization and decision making. In order to protect and build sustainable critical assets, we present our ongoing multi-disciplinary large-scale project that establishes the Integrated Remote Sensing and Visualization (IRSV) system, with a focus on supporting bridge structure inspection and management. This project involves specific expertise from civil engineers, computer scientists, geographers, and real-world practitioners from industry and local and federal government agencies. IRSV is being designed to accommodate the essential needs of the following aspects: 1) better understanding and enforcement of the complex inspection process, bridging the gap between evidence gathering and decision making through the implementation of an ontological knowledge engineering system; 2) aggregation, representation and fusion of complex multi-layered heterogeneous data (e.g. infrared imaging, aerial photos and ground-mounted LIDAR) with domain application knowledge to support a machine-understandable recommendation system; 3) robust visualization techniques with large-scale analytical and interactive visualizations that support users' decision making; and 4) integration of these needs through a flexible Service-oriented Architecture (SOA) framework to compose and provide services on demand. IRSV is expected to serve as a management and data visualization tool for construction deliverable assurance and infrastructure monitoring, both periodically (annually, monthly, or even daily if needed) and after extreme events.

  6. A European Federated Cloud: Innovative distributed computing solutions by EGI

    NASA Astrophysics Data System (ADS)

    Sipos, Gergely; Turilli, Matteo; Newhouse, Steven; Kacsuk, Peter

    2013-04-01

    The European Grid Infrastructure (EGI) is the result of pioneering work that has, over the last decade, built a collaborative production infrastructure of uniform services through the federation of national resource providers that supports multi-disciplinary science across Europe and around the world. This presentation will provide an overview of the recently established 'federated cloud computing services' that the National Grid Initiatives (NGIs), operators of EGI, offer to scientific communities. The presentation will explain the technical capabilities of the 'EGI Federated Cloud' and the processes whereby earth and space science researchers can engage with it. EGI's resource centres have been providing services for collaborative, compute- and data-intensive applications for over a decade. Besides the well-established 'grid services', several NGIs already offer privately run cloud services to their national researchers. Many of these researchers recently expressed the need to share these cloud capabilities within their international research collaborations - a model similar to the way the grid emerged through the federation of institutional batch computing and file storage servers. To facilitate the setup of a pan-European cloud service from the NGIs' resources, the EGI-InSPIRE project established a Federated Cloud Task Force in September 2011. The Task Force has a mandate to identify and test technologies for a multinational federated cloud that could be provisioned within EGI by the NGIs. A guiding principle for the EGI Federated Cloud is to remain technology neutral and flexible for both resource providers and users: • Resource providers are allowed to use any cloud hypervisor and management technology to join virtualised resources into the EGI Federated Cloud as long as the site is subscribed to the user-facing interfaces selected by the EGI community. • Users can integrate high level services - such as brokers, portals and customised Virtual Research Environments - with the EGI Federated Cloud as long as these services access cloud resources through the user-facing interfaces selected by the EGI community. The Task Force will be closed in May 2013. It already • Identified key enabling technologies by which a multinational, federated 'Infrastructure as a Service' (IaaS) type cloud can be built from the NGIs' resources; • Deployed a test bed to evaluate the integration of virtualised resources within EGI and to engage with early adopter use cases from different scientific domains; • Integrated cloud resources into the EGI production infrastructure through cloud specific bindings of the EGI information system, monitoring system, authentication system, etc.; • Collected and catalogued requirements concerning the federated cloud services from the feedback of early adopter use cases; • Provided feedback and requirements to relevant technology providers on their implementations and worked with these providers to address those requirements; • Identified issues that need to be addressed by other areas of EGI (such as portal solutions, resource allocation policies, marketing and user support) to reach a production system. The Task Force will publish a blueprint in April 2013. The blueprint will drive the establishment of a production level EGI Federated Cloud service after May 2013.

  7. Extensible Adaptable Simulation Systems: Supporting Multiple Fidelity Simulations in a Common Environment

    NASA Technical Reports Server (NTRS)

    McLaughlin, Brian J.; Barrett, Larry K.

    2012-01-01

    Common practice in the development of simulation systems is meeting all user requirements within a single instantiation. The Joint Polar Satellite System (JPSS) presents a unique challenge: to establish a simulation environment that meets the needs of a diverse user community while also spanning a multi-mission environment over decades of operation. In response, the JPSS Flight Vehicle Test Suite (FVTS) is architected with an extensible infrastructure that supports the operation of multiple observatory simulations for a single mission, and multiple missions within a common system perimeter. For the JPSS-1 satellite, multiple-fidelity flight observatory simulations are necessary to support the distinct user communities consisting of the Common Ground System development team, the Common Ground System Integration & Test team, and the Mission Rehearsal Team/Mission Operations Team. These key requirements present several challenges to FVTS development. First, the FVTS must ensure all critical user requirements are satisfied by at least one fidelity instance of the observatory simulation. Second, the FVTS must allow for tailoring of the system instances to function in diverse operational environments, from the high-security operations environment at the NOAA Satellite Operations Facility (NSOF) to the ground system factory floor. Finally, the FVTS must provide the ability to execute sustaining engineering activities on a subset of the system without impacting system availability to parallel users. The FVTS approach of allowing for multiple-fidelity copies of observatory simulations represents a unique concept in simulator capability development and corresponds to the JPSS Ground System goals of establishing a capability that is flexible, extensible, and adaptable.

  8. Geovisualization applications to examine and explore high-density and hierarchical critical infrastructure data

    NASA Astrophysics Data System (ADS)

    Edsall, Robert; Hembree, Harvey

    2018-05-01

    The geospatial research and development team in the National and Homeland Security Division at Idaho National Laboratory was tasked with providing tools to derive insight from the substantial amount of data currently available - and continuously being produced - associated with the critical infrastructure of the US. This effort is in support of the Department of Homeland Security, whose mission includes the protection of this infrastructure and the enhancement of its resilience to hazards, both natural and human. We present geovisual-analytics-based approaches for analysis of vulnerabilities and resilience of critical infrastructure, designed so that decision makers, analysts, and infrastructure owners and managers can manage risk, prepare for hazards, and direct resources before and after an incident that might result in an interruption in service. Our designs are based on iterative discussions with DHS leadership and analysts, who in turn will use these tools to explore and communicate data in partnership with utility providers, law enforcement, and emergency response and recovery organizations, among others. In most cases these partners desire summaries of large amounts of data, but increasingly, our users seek the additional capability of focusing on, for example, a specific infrastructure sector, a particular geographic region, or time period, or of examining data at a variety of generalization or aggregation levels. These needs align well with tenets of information-visualization design; in this paper, selected applications among those that we have designed are described and positioned within geovisualization, geovisual analytics, and information visualization frameworks.

  9. Databases for multilevel biophysiology research available at Physiome.jp.

    PubMed

    Asai, Yoshiyuki; Abe, Takeshi; Li, Li; Oka, Hideki; Nomura, Taishin; Kitano, Hiroaki

    2015-01-01

    Physiome.jp (http://physiome.jp) is a portal site inaugurated in 2007 to support model-based research in physiome and systems biology. At Physiome.jp, several tools and databases are available to support construction of physiological, multi-hierarchical, large-scale models. There are three databases in Physiome.jp, housing mathematical models, morphological data, and time-series data. In late 2013, the site was fully renovated, and in May 2015, new functions were implemented to provide information infrastructure to support collaborative activities for developing models and performing simulations within the database framework. This article describes updates to the databases implemented since 2013, including cooperation among the three databases, interactive model browsing, user management, version management of models, management of parameter sets, and interoperability with applications.

  10. Building a Cloud Infrastructure for a Virtual Environmental Observatory

    NASA Astrophysics Data System (ADS)

    El-khatib, Y.; Blair, G. S.; Gemmell, A. L.; Gurney, R. J.

    2012-12-01

    Environmental science is often fragmented: data is collected by different organizations using mismatched formats and conventions, and models are misaligned and run in isolation. Cloud computing offers considerable potential for resolving such issues by supporting data from different sources and at various scales, and by integrating models to create more sophisticated and collaborative software services. The Environmental Virtual Observatory pilot (EVOp) project, funded by the UK Natural Environment Research Council, aims to demonstrate how cloud computing principles and technologies can be harnessed to develop more effective solutions to pressing environmental issues. The EVOp infrastructure is tailored, constructed from resources in both private clouds (owned and managed by us) and public clouds (leased from third-party providers). All system assets are accessible via a uniform web service interface in order to enable versatile and transparent resource management, and to support fundamental infrastructure properties such as reliability and elasticity. The abstraction that this 'everything as a service' principle brings also supports mashups, i.e. combining different web services (such as models) and data resources of different origins (in situ gauging stations, warehoused data stores, external sources, etc.). We adopt the RESTful style of web services in order to draw a clear line between client and server (i.e. cloud host) and also to keep the server completely stateless. This significantly improves the scalability of the infrastructure and enables easy infrastructure management. For instance, tasks such as load balancing and failure recovery are greatly simplified without the need for techniques such as advance resource reservation or shared block devices. Upon this infrastructure, we developed a web portal composed of a bespoke collection of web-based visualization tools to help bring out relationships or patterns within the data. The portal was designed for use without any programming prerequisites by stakeholders from different backgrounds such as scientists, policy makers, local communities, and the general public. The development of the portal was carried out using an iterative behaviour-driven approach. We have developed six distinct storyboards to determine the requirements of different users. From these, we identified two storyboards to implement during the pilot phase. The first explores flooding at a local catchment scale for farmers and the public. We simulate hydrological interactions to determine where saturated land-surface areas develop. Model parameter values representing catchment characteristics can be specified either explicitly (for domain specialists) or indirectly using one of several predefined land use scenarios (for less familiar audiences). The second storyboard investigates diffuse agricultural pollution at a national level, with regulators as users. We study the flux of nitrogen and phosphorus from land to rivers and coastal regions at various scales of drainage and reporting units. This is particularly useful for uncovering the impact of existing policy instruments, or the risk posed by future environmental changes, on the levels of N and P flux.
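
    The 'everything as a service' principle described above is easy to illustrate. Below is a minimal Python sketch of a stateless RESTful model endpoint in the spirit of the EVOp design; the route, parameter names, and the toy saturation rule are invented for illustration and are not the actual EVOp API.

```python
# Minimal sketch of a stateless RESTful model service. Endpoint and
# parameters are hypothetical, not the actual EVOp interface.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/models/saturation/run", methods=["POST"])
def run_saturation_model():
    # All inputs arrive with the request; the server keeps no session
    # state, so any replica behind a load balancer can answer it.
    params = request.get_json(force=True)
    land_use = params.get("land_use", "default")       # hypothetical parameter
    rainfall_mm = float(params.get("rainfall_mm", 0))  # hypothetical parameter
    # Stand-in for the actual hydrological computation.
    saturated_fraction = min(1.0, rainfall_mm / 100.0)
    return jsonify({"land_use": land_use,
                    "saturated_fraction": saturated_fraction})

if __name__ == "__main__":
    app.run()
```

    Because no state lives on the server between requests, load balancing and failure recovery reduce to redirecting traffic, which is exactly the simplification the abstract describes.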

  11. Cloud computing applications for biomedical science: A perspective.

    PubMed

    Navale, Vivek; Bourne, Philip E

    2018-06-01

    Biomedical research has become a digital data-intensive endeavor, relying on secure and scalable computing, storage, and network infrastructure, which has traditionally been purchased, supported, and maintained locally. For certain types of biomedical applications, cloud computing has emerged as an alternative to locally maintained traditional computing approaches. Cloud computing offers users pay-as-you-go access to services such as hardware infrastructure, platforms, and software for solving common biomedical computational problems. Cloud computing services offer secure on-demand storage and analysis and are differentiated from traditional high-performance computing by their rapid availability and scalability of services. As such, cloud services are engineered to address big data problems and enhance the likelihood of data and analytics sharing, reproducibility, and reuse. Here, we provide an introductory perspective on cloud computing to help the reader determine its value to their own research.

  12. Cloud computing applications for biomedical science: A perspective

    PubMed Central

    2018-01-01

    Biomedical research has become a digital data–intensive endeavor, relying on secure and scalable computing, storage, and network infrastructure, which has traditionally been purchased, supported, and maintained locally. For certain types of biomedical applications, cloud computing has emerged as an alternative to locally maintained traditional computing approaches. Cloud computing offers users pay-as-you-go access to services such as hardware infrastructure, platforms, and software for solving common biomedical computational problems. Cloud computing services offer secure on-demand storage and analysis and are differentiated from traditional high-performance computing by their rapid availability and scalability of services. As such, cloud services are engineered to address big data problems and enhance the likelihood of data and analytics sharing, reproducibility, and reuse. Here, we provide an introductory perspective on cloud computing to help the reader determine its value to their own research. PMID:29902176

  13. LifeWatch - a Large-scale eScience Infrastructure to Assist in Understanding and Managing our Planet's Biodiversity

    NASA Astrophysics Data System (ADS)

    Hernández Ernst, Vera; Poigné, Axel; Los, Walter

    2010-05-01

    Understanding and managing the complexity of the biodiversity system in relation to global changes concerning land use and climate change, with their social and economic implications, is crucial to mitigate species loss and biodiversity changes in general. The sustainable development and exploitation of existing biodiversity resources require flexible and powerful infrastructures offering, on the one hand, access to large-scale databases of observations and measurements, to advanced analytical and modelling software, and to high-performance computing environments and, on the other hand, the interlinkage of European scientific communities with each other and with national policies. The European Strategy Forum on Research Infrastructures (ESFRI) selected the "LifeWatch e-science and technology infrastructure for biodiversity research" as a promising development to construct facilities that contribute to meeting those challenges. LifeWatch collaborates with other selected initiatives (e.g. ICOS, ANAEE, NOHA, and LTER-Europa) to achieve the integration of the infrastructures at landscape and regional scales. This should result in a cooperating cluster of such infrastructures supporting an integrated approach for data capture and transmission, data management and harmonisation. In addition, facilities for exploration, forecasting, and presentation using heterogeneous and distributed data and tools should allow interdisciplinary scientific research at any spatial and temporal scale. LifeWatch is an example of a new generation of interoperable research infrastructures based on standards and a service-oriented architecture that allow for linkage with external resources and associated infrastructures. External data sources will be established data aggregators such as the Global Biodiversity Information Facility (GBIF) for species occurrences and other EU Networks of Excellence like the Long-Term Ecological Research Network (LTER), GMES, and GEOSS for terrestrial monitoring, the MARBEF network for marine data, and the Consortium of European Taxonomic Facilities (CETAF) and its European Distributed Institute of Taxonomy (EDIT) for taxonomic data. However, "smaller" networks and "volunteer scientists" may also send data (e.g. GPS-supported species observations) to a LifeWatch repository. Autonomously operating wireless environmental sensors and other smart hand-held devices will contribute to increased data capture activities. In this way LifeWatch will directly underpin the development of GEOBON, the biodiversity component of GEOSS, the Global Earth Observation System of Systems. To overcome the major technical difficulties imposed by the variety of current and future technologies, protocols, data formats, etc., LifeWatch will define and use common open interfaces. For this purpose, the LifeWatch Reference Model was developed during the preparatory phase, specifying the service-oriented architecture underlying the ICT infrastructure. The Reference Model identifies key requirements and key architectural concepts to support workflows for scientific in-silico experiments, tracking of provenance, and semantic enhancement, besides meeting the functional requirements mentioned before. It provides guidelines for the specification and implementation of services and information models, defining as well a number of generic services and models.
Another key issue addressed by the Reference Model is that the cooperation of the many developer teams residing in many European countries has to be organized to obtain compatible results; conformance with the specifications and policies of the Reference Model will therefore be required. The LifeWatch Reference Model is based on the ORCHESTRA Reference Model for geospatial-oriented architectures and service networks, which provides a generic framework and has been endorsed as best practice by the Open Geospatial Consortium (OGC). The LifeWatch infrastructure will allow (interdisciplinary) scientific researchers to collaborate by creating e-Laboratories or by composing e-Services which can be shared and jointly developed. To this end, a long-term vision for the LifeWatch Biodiversity Workbench Portal has been developed as a one-stop application for the LifeWatch infrastructure based on existing and emerging technologies. There the user can find all available resources such as data, workflows, and tools, and access LifeWatch applications that integrate different resources and provide key capabilities such as resource discovery and visualisation, creation of workflows, creation and management of provenance, and support for collaborative activities. While LifeWatch developers will construct components for solving generic LifeWatch tasks, users may add their own facilities to fulfil individual needs. Examples of the application of the LifeWatch Reference Model and the LifeWatch Biodiversity Workbench Portal will be given.

  14. A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jack Dongarra; Shirley Moore; Bart Miller; Jeffrey Hollingsworth

    2005-03-15

    The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale, long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front-end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies: the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images. The Paradyn and KOJAK projects have made use of this infrastructure to build performance measurement and analysis tools that scale to long-running programs on large parallel and distributed systems and that automate much of the search for performance bottlenecks.
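
    To make the front-end/back-end separation concrete, here is a conceptual Python sketch: tool developers code against a platform-independent front end, while each platform supplies its own low-level substrate. The class and method names are illustrative only, not the project's actual (C-level) PAPI/Dyninst interfaces.

```python
# Conceptual sketch of a platform-independent performance front end backed
# by a platform-dependent substrate. Names are illustrative assumptions.
import abc
import resource
import time

class PerfBackend(abc.ABC):
    """Platform-dependent substrate that supplies raw performance data."""
    @abc.abstractmethod
    def read_counters(self) -> dict: ...

class PosixBackend(PerfBackend):
    """One concrete substrate, using POSIX rusage as a stand-in for
    hardware counters."""
    def read_counters(self) -> dict:
        usage = resource.getrusage(resource.RUSAGE_SELF)
        return {"cpu_user_s": usage.ru_utime,
                "cpu_sys_s": usage.ru_stime,
                "wall_s": time.perf_counter()}

class PerfTool:
    """Front end: portable, well-defined calls for requesting data."""
    def __init__(self, backend: PerfBackend):
        self.backend = backend

    def snapshot(self) -> dict:
        return self.backend.read_counters()

tool = PerfTool(PosixBackend())
before = tool.snapshot()
sum(i * i for i in range(10**6))  # workload to measure
after = tool.snapshot()
print({k: after[k] - before[k] for k in before})
```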

  15. ATLAS user analysis on private cloud resources at GoeGrid

    NASA Astrophysics Data System (ADS)

    Glaser, F.; Nadal Serrano, J.; Grabowski, J.; Quadt, A.

    2015-12-01

    User analysis job demands can exceed available computing resources, especially before major conferences, and ATLAS physics results can potentially be delayed by the lack of resources. For these reasons, cloud research and development activities are now included in the skeleton of the ATLAS computing model, which has been extended to use resources from commercial and private cloud providers to satisfy the demand. However, most of these activities have focused on Monte-Carlo production jobs, extending the resources at Tier-2. To evaluate the suitability of the cloud-computing model for user analysis jobs, we developed a framework to launch an ATLAS user analysis cluster in a cloud infrastructure on demand and evaluated two solutions. The first solution is entirely integrated into the Grid infrastructure using the same mechanism already in use at Tier-2: a designated Panda-Queue is monitored, and additional worker nodes are launched in a cloud environment and assigned to a corresponding HTCondor queue according to the demand. In this way, the use of cloud resources is completely transparent to the user. However, with this approach, submitted user analysis jobs can still suffer a certain delay from waiting time in the queue, and the deployed infrastructure lacks customizability. Our second solution therefore offers the possibility to easily deploy a fully private, customizable analysis cluster on private cloud resources belonging to the university.
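
    The demand-driven scaling logic of the first solution can be sketched as follows. This is a hedged illustration: get_idle_jobs() and launch_worker_vm() are hypothetical placeholders standing in for the Panda-Queue/HTCondor monitoring and the cloud-provider calls, which the abstract does not specify.

```python
# Toy sketch of demand-driven worker scaling: watch a queue and start
# cloud worker nodes when waiting jobs exceed capacity.
JOBS_PER_WORKER = 8
MAX_WORKERS = 50

def get_idle_jobs() -> int:
    """Placeholder: would query the monitored Panda-Queue / HTCondor pool."""
    return 23  # demo value

def launch_worker_vm() -> None:
    """Placeholder: would boot one worker node via the cloud provider's API."""
    print("booting one worker VM")

def scale_once(current_workers: int) -> int:
    idle = get_idle_jobs()
    needed = -(-idle // JOBS_PER_WORKER)  # ceiling division
    target = min(needed, MAX_WORKERS)
    for _ in range(max(0, target - current_workers)):
        launch_worker_vm()
        current_workers += 1
    return current_workers

workers = scale_once(0)  # demo: launches ceil(23 / 8) = 3 workers
print(f"workers now: {workers}")
```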

  16. VRE4EIC: A Reference Architecture and Components for Research Access

    NASA Astrophysics Data System (ADS)

    Bailo, Daniele; Jeffery, Keith G.; Atakan, Kuvvet; Harrison, Matt

    2017-04-01

    VRE4EIC (www.vre4eic.eu) is an EC H2020 project with the objective of providing a reference architecture and components for a VRE (Virtual Research Environment). SGs (Science Gateways) in North America and VLs (Virtual Laboratories) in Australasia are similar - but significantly different - concepts. A VRE provides not only access to ICT services, data, software components and equipment but also a collaborative working environment for cooperation, and supports the research lifecycle from idea to publication. Europe has a large number of RIs (Research Infrastructures); the major ones are coordinated and planned through the ESFRI (European Strategy Forum on Research Infrastructures) roadmap. Most RIs - such as EPOS - provide a user interface portal function, ranging from (1) a simple list of assets (such as services, datasets, software components, workflows, equipment, and experts, although many provide only information about data) with URLs on which the user can click to download; (2) to an end-user facility for constructing queries to find relevant assets and subsets of them, more or less integrated as a downloaded combined dataset; (3) in a few cases, to a facility for constructing workflows to achieve the scientific objective. The portal has the scope of the individual RI. The aim of VRE4EIC is to provide a reference architecture, software components and a prototype implementation of a VRE which allows user access and all the portal functions (and more) not only to an individual RI - such as EPOS - but across RIs, thus encouraging multidisciplinary research. Two RIs, EPOS and ENVRIplus (itself spanning 21 RIs), are represented within the project as requirements stakeholders, validators of the architecture and evaluators of the prototype system developed. The characterisation of many more RIs - and their requirements - has been done to ensure wide applicability. The virtualisation across RIs is achieved by using a rich metadata catalog based on CERIF (Common European Research Information Format: an EU Recommendation to Member States that is supported, developed and promoted by euroCRIS, www.eurocris.org). The VRE4EIC catalog system harvests from individual RI catalogs (with conversion, since they use many different metadata formats) to give the user of VRE4EIC a 'canonical view' over the RIs and their assets. The VRE4EIC user interface provides portal functions for each and all RIs but also a workflow construction facility. The project expects the RIs to use middleware developed in other projects to facilitate workflow deployment across the eIs (e-Infrastructures) such as GEANT, EUDAT, EGI and OpenAIRE, and will itself use the same mechanisms. After 15 months of the project we have validated the requirements from the RIs, defined the architecture and started work on the metadata mapping and conversion. The intention is to have the prototype at M24 for evaluation by the RI partners (and some external RIs), leading to a refined architecture and software stack for production use after M36.
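
    The harvesting-with-conversion step can be illustrated with a toy sketch: records in two invented source formats are mapped into one canonical view, loosely analogous to (but far simpler than) the CERIF-based mappings the project performs.

```python
# Toy sketch of harvesting heterogeneous RI catalogs into one canonical
# view. The two source formats and all field names are invented.
def from_format_a(record: dict) -> dict:
    return {"title": record["name"], "identifier": record["id"],
            "ri": record["source_ri"]}

def from_format_b(record: dict) -> dict:
    return {"title": record["dc:title"], "identifier": record["dc:identifier"],
            "ri": record["provider"]}

harvested = [
    (from_format_a, {"name": "Seismic waveforms", "id": "epos-001",
                     "source_ri": "EPOS"}),
    (from_format_b, {"dc:title": "Aerosol profiles", "dc:identifier": "env-042",
                     "provider": "ENVRIplus"}),
]

# One uniform 'canonical view' across RIs, whatever the source format.
canonical_catalog = [convert(rec) for convert, rec in harvested]
for entry in canonical_catalog:
    print(entry)
```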

  17. Spatial Knowledge Infrastructures - Creating Value for Policy Makers and Benefits for the Community

    NASA Astrophysics Data System (ADS)

    Arnold, L. M.

    2016-12-01

    The spatial data infrastructure is arguably one of the most significant advancements in the spatial sector. It's been a game changer for governments, providing for the coordination and sharing of spatial data across organisations and the provision of accessible information to the broader community of users. Today, however, end-users such as policy-makers require far more from these spatial data infrastructures. They want more than just data; they want the knowledge that can be extracted from data, and they don't want to have to download, manipulate and process data in order to get the knowledge they seek. It's time for the spatial sector to reduce its focus on data in spatial data infrastructures and take a more proactive step in emphasising and delivering the knowledge value. Nowadays, decision-makers want to be able to query the data at will to meet their immediate need for knowledge. This is a new value proposition for the decision-making consumer and will require a shift in thinking. This paper presents a model for a Spatial Knowledge Infrastructure and the underpinning methods that will realise a new real-time approach to delivering knowledge. The methods embrace the new capabilities afforded by the semantic web, domain and process ontologies, and natural language query processing. Semantic Web technologies today have the potential to transform the spatial industry into more than just a distribution channel for data. The Semantic Web RDF (Resource Description Framework) enables meaning to be drawn from data automatically. While pushing data out to end-users will remain a central role for data producers, the power of the semantic web is that end-users have the ability to marshal a broad range of spatial resources via a query to extract knowledge from available data. This can be done without actually having to configure systems specifically for the end-user. All data producers need to do is make data accessible in RDF, and spatial analytics does the rest.
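
    A small self-contained example of the RDF-and-query idea, using the rdflib Python library with invented example data: once data are published as RDF, an end user can extract knowledge with a SPARQL query, with no user-specific configuration by the producer.

```python
# Illustration of querying spatial RDF data with SPARQL via rdflib.
# The vocabulary and data are invented for the example.
from rdflib import Graph

turtle_data = """
@prefix ex: <http://example.org/> .
ex:stationA ex:locatedIn ex:RegionNorth ; ex:floodRisk "high" .
ex:stationB ex:locatedIn ex:RegionSouth ; ex:floodRisk "low" .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

# "Which assets in the north have high flood risk?" expressed as SPARQL.
query = """
PREFIX ex: <http://example.org/>
SELECT ?asset WHERE {
  ?asset ex:locatedIn ex:RegionNorth ;
         ex:floodRisk "high" .
}
"""
for row in g.query(query):
    print(row.asset)
```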

  18. Data and Models as Social Objects in the HydroShare System for Collaboration in the Hydrology Community and Beyond

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Idaszak, R.; Horsburgh, J. S.; Ames, D. P.; Goodall, J. L.; Band, L. E.; Merwade, V.; Couch, A.; Hooper, R. P.; Maidment, D. R.; Dash, P. K.; Stealey, M.; Yi, H.; Gan, T.; Castronova, A. M.; Miles, B.; Li, Z.; Morsy, M. M.; Crawley, S.; Ramirez, M.; Sadler, J.; Xue, Z.; Bandaragoda, C.

    2016-12-01

    How do you share and publish hydrologic data and models for a large collaborative project? HydroShare is a new, web-based system for sharing hydrologic data and models with specific functionality aimed at making collaboration easier. HydroShare has been developed with U.S. National Science Foundation support under the auspices of the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) to support the collaboration and community cyberinfrastructure needs of the hydrology research community. Within HydroShare, we have developed new functionality for creating datasets, describing them with metadata, and sharing them with collaborators. We cast hydrologic datasets and models as "social objects" that can be shared, collaborated around, annotated, published and discovered. In addition to data and model sharing, HydroShare supports web application programs (apps) that can act on data stored in HydroShare, just as software programs on your PC act on your data locally. This can free you from some of the limitations of local computing capacity and challenges in installing and maintaining software on your own PC. HydroShare's web-based cyberinfrastructure can take work off your desk or laptop computer and onto infrastructure or "cloud" based data and processing servers. This presentation will describe HydroShare's collaboration functionality that enables both public and private sharing with individual users and collaborative user groups, and makes it easier for collaborators to iterate on shared datasets and models, creating multiple versions along the way, and publishing them with a permanent landing page, metadata description, and citable Digital Object Identifier (DOI) when the work is complete. This presentation will also describe the web app architecture that supports interoperability with third party servers functioning as application engines for analysis and processing of big hydrologic datasets. While developed to support the cyberinfrastructure needs of the hydrology community, the informatics infrastructure for programmatic interoperability of web resources has a generality beyond the solution of hydrology problems that will be discussed.
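
    As a rough illustration of programmatic access to such a system, the hedged sketch below issues a keyword search over HTTP. The endpoint path and response fields are assumptions for illustration; consult the HydroShare REST API documentation (or its official client package) for the actual interface.

```python
# Hedged sketch of keyword search against a HydroShare-style web API.
# The path and response fields are assumptions, not a verified contract.
import requests

BASE = "https://www.hydroshare.org/hsapi"  # assumed public API root

def search_resources(keyword: str):
    resp = requests.get(f"{BASE}/resource/",
                        params={"full_text_search": keyword})
    resp.raise_for_status()
    return resp.json()

# Usage (uncomment to try against the live service):
# results = search_resources("streamflow")
# for res in results.get("results", []):
#     print(res.get("resource_title"), res.get("resource_id"))
```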

  19. Frequency Count Attribute Oriented Induction of Corporate Network Data for Mapping Business Activity

    NASA Astrophysics Data System (ADS)

    Tanutama, Lukas

    2014-03-01

    Companies increasingly rely on the Internet for effective and efficient business communication. As the Information Technology infrastructure backbone for business activities, the corporate network connects the company to the Internet and enables its activities globally. It carries data packets generated by the activities of users performing their business tasks. Traditionally, infrastructure operations mainly maintain data-carrying capacity and network device performance. It would be advantageous if a company knew what activities are running in its network. This research provides a simple method of mapping the business activity reflected in the network data. To map corporate users' activities, a slightly modified Attribute Oriented Induction (AOI) approach was applied to mine the network data. The frequency of each protocol invoked was counted to show what the user intended to do. The collected data consisted of samples taken within a certain sampling period; sampling was necessary because of the enormous number of data packets generated. Only Internet-related protocols were of interest, while intranet protocols were ignored. It can be concluded that the method can provide management with a general overview of the usage of its infrastructure and lead to an efficient, effective and secure ICT infrastructure.
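
    The frequency-count step is simple enough to sketch. In the toy example below, each sampled packet is generalized to its application protocol by destination port and then counted; the port map and samples are invented for illustration.

```python
# Toy sketch of the frequency-count step of a modified AOI approach:
# generalize each sampled packet to a protocol attribute, then count.
from collections import Counter

PORT_TO_PROTOCOL = {80: "HTTP", 443: "HTTPS", 25: "SMTP", 53: "DNS"}

sampled_ports = [443, 80, 443, 53, 25, 443, 80, 53, 443]  # sampled packets

protocol_counts = Counter(
    PORT_TO_PROTOCOL.get(port, "OTHER") for port in sampled_ports
)
total = sum(protocol_counts.values())
for protocol, count in protocol_counts.most_common():
    print(f"{protocol}: {count} ({100 * count / total:.0f}%)")
```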

  20. Transportation Infrastructure Robustness : Joint Engineering and Economic Analysis

    DOT National Transportation Integrated Search

    2017-11-01

    The objectives of this study are to develop a methodology for assessing the robustness of transportation infrastructure facilities and to assess the effect of damage to such facilities on travel demand and on the welfare of the facilities' users. The robustness...

  1. Applications of connected vehicle infrastructure technologies to enhance transit service efficiency and safety.

    DOT National Transportation Integrated Search

    2016-09-30

    Implementing Connected Vehicle Infrastructure (CVI) applications for handheld devices into public transportation transit systems would provide transit agencies and their users with two-directional information flow from traveler-to-agencies, agencies-...

  2. The WMO RA VI Regional Climate Centre Network - a support to users in Europe

    NASA Astrophysics Data System (ADS)

    Rösner, S.

    2012-04-01

    Climate, like weather, has no limits. Therefore the World Meteorological Organization (WMO), a specialized United Nations organization, has established a three-level infrastructure to better serve its member countries. This structure comprises Global Producing Centres for Long-range Forecasts (GPCs), Regional Climate Centres (RCCs) and National Meteorological or Hydrometeorological Services (NMHSs), in most cases representing their countries in WMO governance bodies. The elements of this infrastructure are also part of and contribute to the Global Framework for Climate Services (GFCS), whose establishment was agreed by World Climate Conference 3 (WCC-3) and last year's Sixteenth World Meteorological Congress (WMO Cg-XVI). RCCs are the core element of this infrastructure at the regional level and are being established in all WMO Regional Associations (RAs), i.e. Africa (RA I); Asia (II); South America (III); North America, Central America and the Caribbean (IV); South-West Pacific (V); Europe (VI). Addressing inter-regional areas of common interest like the Mediterranean or the Polar Regions may require inter-regional RCCs. For each region the RCCs follow a user-driven approach with regard to governance and structure as well as to the products generated for the users in the respective region. However, there are common guidelines all RCCs have to follow. This is to make sure that services are provided based on the best scientific standards, and are routinely and reliably generated and made available in an operational mode. These guidelines are being developed within WMO and make use of decades of experience gained in operational weather forecasting. Based on the requirements of the 50 member countries of WMO RA VI, it was agreed to establish the WMO RCC as a network of centres of excellence that create regional products, including long-range forecasts, that support regional and national climate activities, and thereby strengthen the capacity of WMO Members in the region to deliver better climate services to national users. On 1 June 2009 the WMO RA VI Pilot RCC-Network started its pilot phase to demonstrate its capability to provide, on an operational day-to-day basis, the products agreed upon by the member countries of RA VI. On 5 October 2011 the process to become a formally designated WMO RA VI RCC-Network was initiated, and it is expected that the designation will take place in mid-to-late 2012. The presentation will describe the global and regional activities related to RCCs and explain in more detail the situation in WMO RA VI (Europe).

  3. [Development of a secure and cost-effective infrastructure for the access of arbitrary web-based image distribution systems].

    PubMed

    Hackländer, T; Kleber, K; Schneider, H; Demabre, N; Cramer, B M

    2004-08-01

    To build an infrastructure that gives on-call radiologists and external users teleradiological access to the HTML-based image distribution system inside the hospital via the Internet. In addition, no investment costs should arise on the user side, and the image data should be renamed using cryptographic techniques before being sent. A pure HTML-based system manages the image distribution inside the hospital, with an open-source project extending this system through a secure gateway outside the firewall of the hospital. The gateway handles the communication between the external users and the HTML server within the network of the hospital. A second firewall is installed between the gateway and the external users and builds up a virtual private network (VPN). A connection between the gateway and an external user is only acknowledged if the computers involved authenticate each other via certificates and the external users authenticate via a multi-stage password system. All data are transferred encrypted. External users only get access to images that have previously been renamed to a pseudonym by automated processing. With an ADSL Internet connection, external users achieve an image load frequency of 0.4 CT images per second. More than 90% of the delay during image transfer results from security checks within the firewalls. Data passing the gateway induce no measurable delay. Project goals were realized by means of an infrastructure that works vendor-independently with any HTML-based image distribution system. The requirements of data security were met using state-of-the-art web techniques. Adequate access and transfer speed led to widespread acceptance of the system by external users.
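
    The renaming of identifiers to pseudonyms via cryptographic techniques can be sketched with a keyed hash. The key handling and identifier format below are simplified assumptions, not the published system's implementation.

```python
# Illustrative pseudonymization of image identifiers with a keyed hash
# (HMAC-SHA256). Key management is simplified for the sketch.
import hashlib
import hmac

SECRET_KEY = b"keep-this-inside-the-hospital"  # never leaves the intranet

def pseudonymize(patient_id: str) -> str:
    digest = hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # stable pseudonym for external use

print(pseudonymize("DOE^JOHN^1965-04-12"))  # same input -> same pseudonym
```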

  4. A social science data-fusion tool and the Data Management through e-Social Science (DAMES) infrastructure.

    PubMed

    Warner, Guy C; Blum, Jesse M; Jones, Simon B; Lambert, Paul S; Turner, Kenneth J; Tan, Larry; Dawson, Alison S F; Bell, David N F

    2010-08-28

    The last two decades have seen substantially increased potential for quantitative social science research. This has been made possible by the significant expansion of publicly available social science datasets, the development of new analytical methodologies, such as microsimulation, and increases in computing power. These rich resources do, however, bring with them substantial challenges associated with organizing and using data. These processes are often referred to as 'data management'. The Data Management through e-Social Science (DAMES) project is working to support activities of data management for social science research. This paper describes the DAMES infrastructure, focusing on the data-fusion process that is central to the project approach. It covers: the background and requirements for provision of resources by DAMES; the use of grid technologies to provide easy-to-use tools and user front-ends for several common social science data-management tasks such as data fusion; the approach taken to solve problems related to data resources and metadata relevant to social science applications; and the implementation of the architecture that has been designed to achieve this infrastructure.
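
    As a toy illustration of the kind of data-fusion task such tools support, the sketch below links a survey dataset to a harmonized occupation coding with pandas; the data and column names are invented.

```python
# Toy data-fusion example: join survey records to a harmonized coding
# scheme on a shared key. All data and column names are invented.
import pandas as pd

survey = pd.DataFrame({
    "person_id": [1, 2, 3],
    "occ_code": ["2311", "5223", "2311"],  # national occupation codes
})
codings = pd.DataFrame({
    "occ_code": ["2311", "5223"],
    "isco_group": ["Teaching", "Skilled trades"],  # harmonized scheme
})

fused = survey.merge(codings, on="occ_code", how="left")
print(fused)
```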

  5. Transformational Spaceport and Range Concept of Operations: A Vision to Transform Ground and Launch Operations

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The Transformational Concept of Operations (CONOPS) provides a long-term, sustainable vision for future U.S. space transportation infrastructure and operations. This vision presents an interagency concept, developed cooperatively by the Department of Defense (DoD), the Federal Aviation Administration (FAA), and the National Aeronautics and Space Administration (NASA), for the upgrade, integration, and improved operation of major infrastructure elements of the nation's space access systems. The interagency vision described in the Transformational CONOPS would transform today's space launch infrastructure into a shared system that supports worldwide operations for a variety of users. The system concept is sufficiently flexible and adaptable to support new types of missions for exploration, commercial enterprise, and national security, as well as to endure further into the future, when space transportation technology may be sufficiently advanced to enable routine public space travel as part of the global transportation system. The vision for future space transportation operations is based on a system-of-systems architecture that integrates the major elements of the future space transportation system - transportation nodes (spaceports), flight vehicles and payloads, tracking and communications assets, and flight traffic coordination centers - into a transportation network that concurrently accommodates multiple types of mission operators, payloads, and vehicle fleets. This system concept also establishes a common framework for defining a detailed CONOPS for the major elements of the future space transportation system. The resulting set of four CONOPS (see Figure 1 below) describes the common vision for a shared future space transportation system (FSTS) infrastructure from a variety of perspectives.

  6. NOAA Big Data Partnership RFI

    NASA Astrophysics Data System (ADS)

    de la Beaujardiere, J.

    2014-12-01

    In February 2014, the US National Oceanic and Atmospheric Administration (NOAA) issued a Big Data Request for Information (RFI) to industry and other organizations (e.g., non-profits, research laboratories, and universities) to assess capability and interest in establishing partnerships to position a copy of NOAA's vast data holdings in the Cloud, co-located with easy and affordable access to analytical capabilities. This RFI was motivated by a number of concerns. First, NOAA's data facilities do not necessarily have sufficient network infrastructure to transmit all available observations and numerical model outputs to all potential users, or sufficient infrastructure to support simultaneous computation by many users. Second, the available data are distributed across multiple services and data facilities, making it difficult to find and integrate data for cross-domain analysis and decision-making. Third, large datasets require users to have substantial network, storage, and computing capabilities of their own in order to fully interact with and exploit the latent value of the data. Finally, there may be commercial opportunities for value-added products and services derived from our data. Putting a working copy of data in the Cloud outside of NOAA's internal networks and infrastructures should reduce demands and risks on our systems, and should enable users to interact with multiple datasets and create new lines of business (much like the industries built on government-furnished weather or GPS data). The NOAA Big Data RFI therefore solicited information on technical and business approaches regarding possible partnership(s) that -- at no net cost to the government and with minimum impact on existing data facilities -- would unleash the commercial potential of its environmental observations and model outputs. NOAA would retain the master archival copy of its data. Commercial partners would not be permitted to charge fees for access to the NOAA data they receive, but would be able to develop and sell value-added products and services. This effort is still very much in the initial market-research phase and involves complexity in both technical and business domains. This paper will discuss the current status of the activity and potential next steps.

  7. Freva - Freie Univ Evaluation System Framework for Scientific Infrastructures in Earth System Modeling

    NASA Astrophysics Data System (ADS)

    Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Schartner, Thomas; Kirchner, Ingo; Rust, Henning W.; Cubasch, Ulrich; Ulbrich, Uwe

    2016-04-01

    The Freie Univ Evaluation System Framework (Freva - freva.met.fu-berlin.de) is a software infrastructure for standardized data and tool solutions in Earth system science. Freva runs on high performance computers to handle customizable evaluation systems of research projects, institutes or universities. It combines different software technologies into one common hybrid infrastructure, including all features present in the shell and web environment. The database interface satisfies the international standards provided by the Earth System Grid Federation (ESGF). Freva indexes different data projects into one common search environment by storing the meta data information of the self-describing model, reanalysis and observational data sets in a database. This meta data system, with its advanced but easy-to-handle search tool, supports users, developers and their plugins in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. The integrated web-shell (shellinabox) adds a degree of freedom in the choice of the working environment and can be used as a gate to the research project's HPC. Plugins are able to integrate their results, e.g. post-processed data, into the user's database. This allows post-processing plugins to feed statistical analysis plugins, which fosters an active exchange between the plugin developers of a research project. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a database. Configurations and results of the tools can be shared among scientists via the shell or web system. Plugged-in tools therefore benefit from transparency and reproducibility. Furthermore, if configurations match when starting an evaluation plugin, the system suggests using results already produced by other users - saving CPU hours, I/O, disk space and time. The efficient interaction between different technologies improves the Earth system modeling science framed by Freva.
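
    The plugin contract behind this design can be sketched generically. The class and function names below are illustrative only, not Freva's actual API; the point is that a tool declares parameters and a run entry point, and the framework records every configuration for sharing and reproducibility.

```python
# Conceptual sketch of a plugin contract with a history/configuration
# sub-system. Names are illustrative, not Freva's real interface.
import json
import time

class AnalysisPlugin:
    parameters: dict = {}

    def run(self, config: dict) -> dict:
        raise NotImplementedError

class BiasPlugin(AnalysisPlugin):
    parameters = {"dataset": str, "variable": str}

    def run(self, config):
        # Stand-in for the actual analysis.
        return {"dataset": config["dataset"], "bias": 0.0}

def execute(plugin: AnalysisPlugin, config: dict, history: list) -> dict:
    result = plugin.run(config)
    # Store configuration and result, enabling sharing and reuse of runs.
    history.append({"time": time.time(), "config": config, "result": result})
    return result

history = []
execute(BiasPlugin(), {"dataset": "reanalysis-x", "variable": "tas"}, history)
print(json.dumps(history, indent=2))
```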

  8. Freva - Freie Univ Evaluation System Framework for Scientific HPC Infrastructures in Earth System Modeling

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Schartner, T.; Grieger, J.; Kirchner, I.; Rust, H.; Cubasch, U.; Ulbrich, U.

    2017-12-01

    The Freie Univ Evaluation System Framework (Freva - freva.met.fu-berlin.de) is a software infrastructure for standardized data and tool solutions in Earth system science (e.g. www-miklip.dkrz.de, cmip-eval.dkrz.de). Freva runs on high performance computers to handle customizable evaluation systems of research projects, institutes or universities. It combines different software technologies into one common hybrid infrastructure, including all features present in the shell and web environment. The database interface satisfies the international standards provided by the Earth System Grid Federation (ESGF). Freva indexes different data projects into one common search environment by storing the meta data information of the self-describing model, reanalysis and observational data sets in a database. This meta data system, with its advanced but easy-to-handle search tool, supports users, developers and their plugins in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. The integrated web-shell (shellinabox) adds a degree of freedom in the choice of the working environment and can be used as a gate to the research project's HPC. Plugins are able to integrate their results, e.g. post-processed data, into the user's database. This allows post-processing plugins to feed statistical analysis plugins, which fosters an active exchange between the plugin developers of a research project. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a database. Configurations and results of the tools can be shared among scientists via the shell or web system. Furthermore, if configurations match when starting an evaluation plugin, the system suggests using results already produced by other users - saving CPU hours, I/O, disk space and time. The efficient interaction between different technologies improves the Earth system modeling science framed by Freva.

  9. Managing Sustainable Data Infrastructures: The Gestalt of EOSDIS

    NASA Astrophysics Data System (ADS)

    Behnke, J.; Lindsay, F. E.; Lowe, D. R.; Mitchell, A. E.; Lynnes, C.

    2016-12-01

    NASA's Earth Observing System Data and Information System (EOSDIS) has been a central component of the NASA Earth observation program since the 1990s. The data collected by NASA's remote sensing instruments represent a significant public investment in research. EOSDIS provides free and open access to these data to a worldwide public research community. From the very beginning, EOSDIS was conceived as a system built on partnerships between NASA Centers, US agencies and academia. EOSDIS manages a wide range of Earth science discipline data that include cryosphere, land cover change, polar processes, field campaigns, ocean surface, digital elevation, atmosphere dynamics and composition, and inter-disciplinary research, among many others. Over the years, EOSDIS has evolved to support increasingly complex and diverse NASA Earth Science data collections. EOSDIS epitomizes a system of systems, whose many varied and distributed parts are integrated into a single, highly functional, organized science data system. A distributed architecture was adopted to ensure discipline-specific support for the science data, while also leveraging standards and establishing policies and tools to enable interdisciplinary research and analysis across multiple scientific instruments. EOSDIS is composed of system elements such as geographically distributed archive centers used to manage the stewardship of data. The infrastructure consists of underlying capabilities and connections that enable the primary system elements to function together. For example, one key infrastructure component is the common metadata repository, which enables discovery of all data within the EOSDIS system. EOSDIS employs processes and standards to ensure partners can work together effectively and provide coherent services to users. While the separation into domain-specific science archives helps to manage the wide variety of missions and datasets, the common services and practices serve to knit the overall system together into a coherent whole, with sharing of data, metadata, information and software making EOSDIS more than the simple sum of its parts. This paper will describe those parts and how the whole system works together to deliver Earth science data to millions of users.
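
    As an illustration of discovery through a common metadata repository, the sketch below performs a keyword search over HTTP. The URL and parameters follow NASA's public CMR search interface as commonly documented, but treat them as assumptions and verify against the current CMR documentation.

```python
# Hedged sketch: keyword discovery against a common metadata repository.
# Endpoint and response shape are assumed from public CMR documentation.
import requests

CMR = "https://cmr.earthdata.nasa.gov/search/collections.json"

resp = requests.get(CMR, params={"keyword": "sea surface temperature",
                                 "page_size": 5})
resp.raise_for_status()
for entry in resp.json()["feed"]["entry"]:
    print(entry["title"])
```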

  10. A group communication approach for mobile computing mobile channel: An ISIS tool for mobile services

    NASA Astrophysics Data System (ADS)

    Cho, Kenjiro; Birman, Kenneth P.

    1994-05-01

    This paper examines group communication as an infrastructure to support mobility of users, and presents a simple scheme that supports user mobility by switching a control point between replicated servers. We describe the design and implementation of a set of tools, called Mobile Channel, for use with the ISIS system. Mobile Channel is based on a combination of two replication schemes: the primary-backup approach and the state machine approach. Mobile Channel implements a reliable one-to-many FIFO channel, in which a mobile client sees a single reliable server, while the servers, acting as a state machine, see multicast messages from clients. Migrations of mobile clients are handled as an intentional primary switch, and hand-offs or server failures are completely masked from mobile clients. To achieve high performance, servers are replicated at a sliding-window level. Our scheme provides a simple abstraction of migration, eliminates complicated hand-off protocols, provides fault tolerance, and is implemented within the existing group communication mechanism.
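
    The primary-switch idea can be sketched as a toy illustration (not the ISIS/Mobile Channel implementation): every replica sees every message, so promoting a new primary during a hand-off loses nothing and is invisible to the client.

```python
# Toy illustration of switching a control point between replicated
# servers. This is a conceptual sketch only.
class Replica:
    def __init__(self, name):
        self.name = name
        self.log = []          # replicated FIFO message log

    def deliver(self, msg):
        self.log.append(msg)   # in ISIS this would arrive via group multicast

class MobileChannel:
    def __init__(self, replicas):
        self.replicas = replicas
        self.primary = replicas[0]

    def send(self, msg):
        for r in self.replicas:   # state-machine style: all replicas see it
            r.deliver(msg)

    def switch_primary(self, name):
        # Hand-off: the client keeps one logical endpoint throughout.
        self.primary = next(r for r in self.replicas if r.name == name)

chan = MobileChannel([Replica("cell-A"), Replica("cell-B")])
chan.send("hello")
chan.switch_primary("cell-B")   # client migrated; no messages lost
chan.send("world")
print(chan.primary.name, chan.primary.log)
```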

  11. Utilization of Multimedia Laboratory: An Acceptance Analysis using TAM

    NASA Astrophysics Data System (ADS)

    Modeong, M.; Palilingan, V. R.

    2018-02-01

    Multimedia is often utilized by teachers to present learning materials. Learning delivered through multimedia generally enables people to understand up to 60% of the information presented. To apply creative learning in the classroom, multimedia presentation requires a laboratory that provides for multimedia needs. This study aims to reveal the level of student acceptance of the multimedia laboratories by explaining the direct and indirect effects of internal support and technology infrastructure. The Technology Acceptance Model (TAM) is used as the basis of measurement in this research; through perceived usefulness, perceived ease of use, and intention to use, it is recognized as capable of predicting user acceptance of technology. This study used a quantitative method. Data analysis used path analysis with model trimming, which improves the structure of the path model by removing exogenous variables that have insignificant path coefficients. The results state that Internal Support and Technology Infrastructure are well mediated by the TAM variables to measure the level of technology acceptance. The implications suggest that TAM can measure the success of multimedia laboratory utilization in the Faculty of Engineering, UNIMA.
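
    Path-analysis trimming of the kind described can be sketched with ordinary regressions. In the toy example below the data are randomly generated and the variable names merely mirror the study's constructs; predictors with insignificant path coefficients are flagged for removal.

```python
# Toy sketch of path-coefficient estimation and model trimming using
# statsmodels OLS. Data are synthetic; names mirror the study's constructs.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
internal_support = rng.normal(size=n)
tech_infrastructure = rng.normal(size=n)
# Perceived usefulness driven by infrastructure only, plus noise.
usefulness = 0.6 * tech_infrastructure + rng.normal(scale=0.5, size=n)

X = sm.add_constant(np.column_stack([internal_support, tech_infrastructure]))
fit = sm.OLS(usefulness, X).fit()

names = ["const", "internal_support", "tech_infrastructure"]
for name, coef, p in zip(names, fit.params, fit.pvalues):
    keep = "keep" if p < 0.05 else "trim"   # trimming rule
    print(f"{name}: coef={coef:.2f}, p={p:.3f} -> {keep}")
```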

  12. The Role of Social Media in the Civic Co-Management of Urban Infrastructure Resilience

    NASA Astrophysics Data System (ADS)

    Turpin, E.; Holderness, T.; Wickramasuriya, R.

    2014-12-01

    As cities evolve to become increasingly complex systems of people and interconnected infrastructure, the impacts of extreme events and long-term climatological change are significantly heightened (Walsh et al. 2011). Understanding the resilience of urban systems and the impacts of infrastructure failure is therefore key to understanding the adaptability of cities to climate change (Rosenzweig 2011). Such information is particularly critical in developing nations, which are predicted to bear the brunt of climate change (Douglas et al. 2008) but often lack the resources and data required to make informed decisions regarding infrastructure and societal resilience (e.g. Paar & Rekittke 2011). We propose that mobile social media in a people-as-sensors paradigm provides a means of monitoring the response of a city to cascading infrastructure failures induced by extreme weather events. Such an approach is welcome in developing nations, where crowd-sourced data are increasingly being used as an alternative to missing or incomplete formal data sources to help solve infrastructure challenges (Holderness 2014). In this paper we present PetaJakarta.org as a case study that harnesses the power of social media to gather, sort and display information about flooding for residents of Jakarta, Indonesia in real time, recuperating the failures of infrastructure and monitoring systems through a web of social media connections. Our GeoSocial Intelligence Framework enables the capture and comprehension of significant time-critical information to support decision-making and, as a means of transparent communication while maintaining user privacy, to enable civic co-management processes that aid city-scale climate adaptation and resilience. PetaJakarta empowers community residents to collect and disseminate situational information about flooding, via the social media network Twitter, to provide city-scale decision support for Jakarta's Emergency Management Team, and a neighbourhood-scale public information service for individuals and communities to alert them of nearby flood events. References: Douglas, I., et al. (2008) Environment & Urbanization; Holderness, T. (2014) IEEE Technology & Society Magazine; Paar, P. & Rekittke, J. (2011) Future Internet; Rosenzweig, C. (2011) Scientific American; Walsh, C. L., et al. (2011) Urban Design & Planning.
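
    The people-as-sensors filtering step can be sketched in a few lines: keep geotagged reports that fall inside a city bounding box and mention flooding. The records and bounding-box values below are invented for illustration and are not PetaJakarta's actual pipeline.

```python
# Toy sketch of filtering geotagged social media reports for flood events.
# Coordinates and records are illustrative only.
JAKARTA_BBOX = (-6.4, 106.6, -6.0, 107.0)  # (lat_min, lon_min, lat_max, lon_max)

reports = [
    {"text": "banjir near the station!", "lat": -6.2, "lon": 106.8},
    {"text": "sunny afternoon", "lat": -6.2, "lon": 106.8},
    {"text": "flood on the toll road", "lat": -6.9, "lon": 107.6},  # outside bbox
]

def is_flood_report(r):
    lat_min, lon_min, lat_max, lon_max = JAKARTA_BBOX
    in_bbox = lat_min <= r["lat"] <= lat_max and lon_min <= r["lon"] <= lon_max
    mentions_flood = any(w in r["text"].lower() for w in ("flood", "banjir"))
    return in_bbox and mentions_flood

for r in filter(is_flood_report, reports):
    print(r["text"])
```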

  13. Towards usable and interdisciplinary e-infrastructure (Invited)

    NASA Astrophysics Data System (ADS)

    de Roure, D.

    2010-12-01

    e-Science and cyberinfrastructure at their outset tended to focus on ‘big science’ and cross-organisational infrastructures, demonstrating complex engineering with the promise of high returns. It soon became evident that the key to researchers harnessing new technology for everyday use is a user-centric approach which empowers the user - both from a developer and an end-user viewpoint. For example, this philosophy is demonstrated in workflow systems for systematic data processing and in the Web 2.0 approach as exemplified by the myExperiment social web site for sharing workflows, methods and ‘research objects’. Hence the most disruptive aspect of Cloud and virtualisation is perhaps that they make new computational resources and applications usable, creating a flourishing ecosystem for routine processing and innovation alike - and in this we must consider software sustainability. This talk will discuss the changing nature of the e-Science digital ecosystem, focus on the e-infrastructure for cross-disciplinary work, and highlight issues in sustainable software development in this context.

  14. Distributed observing facility for remote access to multiple telescopes

    NASA Astrophysics Data System (ADS)

    Callegari, Massimo; Panciatici, Antonio; Pasian, Fabio; Pucillo, Mauro; Santin, Paolo; Aro, Simo; Linde, Peter; Duran, Maria A.; Rodriguez, Jose A.; Genova, Francoise; Ochsenbein, Francois; Ponz, J. D.; Talavera, Antonio

    2000-06-01

    The REMOT (Remote Experiment Monitoring and conTrol) project was financed in 1996 by the European Community in order to investigate the possibility of generalizing remote access to scientific instruments. After the feasibility of this idea was demonstrated, the DYNACORE (DYNAmically COnfigurable Remote Experiment monitoring and control) project was initiated as a REMOT follow-up. Its purpose is to develop software technology to support scientists in two different domains, astronomy and plasma physics. The resulting system allows (1) simultaneous multiple-user access to different experimental facilities, (2) dynamic adaptability to different kinds of real instruments, (3) exploitation of the communication infrastructure's features, (4) ease of use through intuitive graphical interfaces, and (5) additional inter-user communication using off-the-shelf products such as video-conference tools, chat programs and shared blackboards.

  15. Building a Prototype of LHC Analysis Oriented Computing Centers

    NASA Astrophysics Data System (ADS)

    Bagliesi, G.; Boccali, T.; Della Ricca, G.; Donvito, G.; Paganoni, M.

    2012-12-01

    A consortium of four LHC Computing Centers (Bari, Milano, Pisa and Trieste) was formed in 2010 to prototype analysis-oriented facilities for CMS data analysis, supported by a grant from the Italian Ministry of Research. The consortium aims to realize an ad-hoc infrastructure to ease analysis activities on the huge data set collected at the LHC collider. While “Tier2” computing centres, specialized in organized processing tasks like Monte Carlo simulation, are nowadays a well-established concept with years of running experience, sites specialized for chaotic end-user analysis activities do not yet have a de facto standard implementation. In our effort, we focus on all the aspects that can make the analysis tasks easier for a physics user who is not a computing expert. On the storage side, we are experimenting with storage techniques allowing remote data access and with storage optimization for the typical analysis access patterns. On the networking side, we are studying the differences between flat and tiered LAN architectures, also using virtual partitioning of the same physical network for the different use patterns. Finally, on the user side, we are developing tools and instruments to allow for exhaustive monitoring of their processes at the site, and for an efficient support system in case of problems. We will report on the results of tests executed on the different subsystems and describe the layout of the infrastructure in place at the sites participating in the consortium.

  16. Interactive analysis of geographically distributed population imaging data collections over light-path data networks

    NASA Astrophysics Data System (ADS)

    van Lew, Baldur; Botha, Charl P.; Milles, Julien R.; Vrooman, Henri A.; van de Giessen, Martijn; Lelieveldt, Boudewijn P. F.

    2015-03-01

    The cohort size required in epidemiological imaging genetics studies often mandates the pooling of data from multiple hospitals. Patient data, however, are subject to strict privacy protection regimes, and physical data storage may be legally restricted to a hospital network. To enable biomarker discovery, fast data access and interactive data exploration must be combined with high-performance computing resources, while respecting privacy regulations. We present a system that uses fast and inherently secure light-paths to access distributed data, thereby obviating the need for a central data repository. A secure private cloud computing framework facilitates interactive, computationally intensive exploration of this geographically distributed, privacy-sensitive data. As a proof of concept, MRI brain imaging data hosted at two remote sites were processed in response to a user command at a third site. The system was able to automatically start virtual machines, run a selected processing pipeline and write results to a user-accessible database, while keeping the data stored locally in the hospitals. Individual tasks took approximately 50% longer compared to a locally hosted blade server, but the cloud infrastructure reduced the total elapsed time by a factor of 40 using 70 virtual machines in the cloud. We demonstrated that the combination of light-paths and a private cloud is a viable means of building an infrastructure for secure data analysis. The system requires further work in the areas of error handling, load balancing and secure support of multiple users.

  17. Incentivizing biodiversity conservation in artisanal fishing communities through territorial user rights and business model innovation.

    PubMed

    Gelcich, Stefan; Donlan, C Josh

    2015-08-01

    Territorial user rights for fisheries are being promoted to enhance the sustainability of small-scale fisheries. Using Chile as a case study, we designed a market-based program aimed at improving fishers' livelihoods while incentivizing the establishment and enforcement of no-take areas within areas managed under territorial user rights regimes. Building on explicit enabling conditions (i.e., high levels of governance, participation, and empowerment), we used a place-based, human-centered approach to design a program that will have the necessary support and buy-in from local fishers to result in landscape-scale biodiversity benefits. Transactional infrastructure must be complex enough to capture the biodiversity benefits being created, but simple enough that the program can be scaled up and is attractive to potential financiers. Biodiversity benefits created must be commoditized, and desired behavioral changes must be verified within a transactional context. Demand must be generated for fisher-created biodiversity benefits in order to attract financing and to scale the market model. Important design decisions around these three components - supply, transactional infrastructure, and demand - must be made based on local social-ecological conditions. Our market model, which is being piloted in Chile, is a flexible foundation on which to base scalable opportunities to operationalize a scheme that incentivizes local, verifiable biodiversity benefits via conservation behaviors by fishers, and that could well result in significant marine conservation gains and novel cross-sector alliances. © 2015, Society for Conservation Biology.

  18. Hazards and accessibility: combining and visualizing threat and open infrastructure data for disaster management

    NASA Astrophysics Data System (ADS)

    Tost, Jordi; Ehmel, Fabian; Heidmann, Frank; Olen, Stephanie M.; Bookhagen, Bodo

    2018-05-01

    The assessment of natural hazards and risk has traditionally been built upon the estimation of threat maps, which are used to depict potential danger posed by a particular hazard throughout a given area. But when a hazard event strikes, infrastructure is a significant factor that can determine whether the situation becomes a disaster. The vulnerability of the population in a region depends not only on the area's local threat, but also on its geographical accessibility. This makes threat maps by themselves insufficient for supporting real-time decision-making, especially for tasks that involve the use of the road network, such as management of relief operations, aid distribution, or planning of evacuation routes. To overcome this problem, this paper proposes a multidisciplinary approach divided into two parts: first, the fusion of satellite-based threat data with open infrastructure data from OpenStreetMap, which yields a threat-based routing service; second, the visualization of these data through cartographic generalization and schematization. This emphasizes critical areas along roads in a simple way and allows users to visually evaluate the impact natural hazards may have on infrastructure. We develop and illustrate this methodology with a case study of landslide threat for an area in Colombia.
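
    The threat-based routing idea can be illustrated by inflating each road segment's cost with a local threat score, so the "shortest" path trades distance against exposure. The tiny graph, threat values, and weighting factor below are invented for illustration, not taken from the paper.

```python
import networkx as nx

G = nx.Graph()
# (from, to, length in km, threat score in [0, 1] sampled along the segment)
edges = [("A", "B", 4.0, 0.1), ("B", "D", 5.0, 0.8),
         ("A", "C", 6.0, 0.0), ("C", "D", 6.5, 0.1)]
ALPHA = 10.0  # how strongly threat penalizes a segment

for u, v, length, threat in edges:
    G.add_edge(u, v, cost=length * (1.0 + ALPHA * threat))

route = nx.shortest_path(G, "A", "D", weight="cost")
print(route)  # ['A', 'C', 'D'] -- longer in km, but avoids the high-threat B-D leg
```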

  19. Intelligent systems technology infrastructure for integrated systems

    NASA Technical Reports Server (NTRS)

    Lum, Henry

    1991-01-01

    A system infrastructure must be properly designed and integrated from the conceptual development phase to accommodate evolutionary intelligent technologies. Several technology development activities were identified that may have application to rendezvous and capture systems. Optical correlators in conjunction with fuzzy logic control might be used for the identification, tracking, and capture of either cooperative or non-cooperative targets without the intensive computational requirements associated with vision processing. A hybrid digital/analog system was developed and tested with a robotic arm. An aircraft refueling application demonstration is planned within two years; initially this demonstration will be ground-based, with a follow-on air-based demonstration. System dependability measurement and modeling techniques are being developed for fault management applications. This involves the use of incremental solution/evaluation techniques and modularized systems to facilitate reuse and to take advantage of natural partitions in system models. Though not yet commercially available and currently subject to accuracy limitations, technology is being developed to perform optical matrix operations to enhance computational speed. Optical terrain recognition, using camera image sequences processed with optical correlators, is being developed to determine position and velocity in support of lander guidance. The system is planned for testing in conjunction with the Dryden Flight Research Facility. Advanced architecture technology is defining open architecture design constraints, test bed concepts (processors, multiple hardware/software and multi-dimensional user support, knowledge/tool sharing infrastructure), and software engineering interface issues.

  20. The National Information Infrastructure: Agenda for Action.

    ERIC Educational Resources Information Center

    Microcomputers for Information Management, 1995

    1995-01-01

    Discusses the National Information Infrastructure and the role of the government. Topics include private sector investment; universal service; technological innovation; user orientation; information security and network reliability; management of the radio frequency spectrum; intellectual property rights; coordination with other levels of…

  1. Provenance Storage, Querying, and Visualization in PBase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kianmajd, Parisa; Ludascher, Bertram; Missier, Paolo

    2015-01-01

    We present PBase, a repository for scientific workflows and their corresponding provenance information that facilitates the sharing of experiments among the scientific community. PBase is interoperable since it uses ProvONE, a standard provenance model for scientific workflows. Workflows and traces are stored in RDF, and with the support of SPARQL and the tree cover encoding, the repository provides a scalable infrastructure for querying the provenance data. Furthermore, through its user interface, it is possible to: visualize workflows and execution traces; visualize reachability relations within these traces; issue SPARQL queries; and visualize query results.
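
    The kind of SPARQL-over-RDF provenance query PBase supports can be illustrated in a few lines with rdflib. The tiny graph and the ex: vocabulary below are invented stand-ins, not the actual ProvONE terms or PBase API.

```python
from rdflib import Graph

# A toy provenance graph: two execution traces generated by one workflow.
g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:trace1 ex:wasGeneratedBy ex:workflowA .
ex:trace2 ex:wasGeneratedBy ex:workflowA .
""", format="turtle")

# SPARQL query: find every trace produced by workflowA.
q = """
PREFIX ex: <http://example.org/>
SELECT ?trace WHERE { ?trace ex:wasGeneratedBy ex:workflowA . }
"""
for row in g.query(q):
    print(row.trace)  # lists both execution traces of workflowA
```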

  2. An aircraft Earth station for general aviation

    NASA Technical Reports Server (NTRS)

    Matyas, R.; Boughton, J.; Lyons, R.; Spenler, S.; Rigley, J.

    1990-01-01

    While the focus has been on international commercial air traffic, an opportunity exists to provide satellite communications to smaller aircraft. For these users, equipment cost and weight critically impact the decision to install satellite communications equipment. Less apparent to the operator is the need for a system infrastructure that is supported both regionally and internationally and that is compatible with the ground segment being installed for commercial aeronautical satellite communications. A system concept and a low-cost terminal intended to satisfy the small-aircraft market are described.

  3. Generating a Corpus of Mobile Forensic Images for Masquerading user Experimentation.

    PubMed

    Guido, Mark; Brooks, Marc; Grover, Justin; Katz, Eric; Ondricek, Jared; Rogers, Marcus; Sharpe, Lauren

    2016-11-01

    The Periodic Mobile Forensics (PMF) system investigates user behavior on mobile devices. It applies forensic techniques to an enterprise mobile infrastructure, utilizing an on-device agent named TractorBeam. The agent collects changed storage locations for later acquisition, reconstruction, and analysis. TractorBeam provides its data to an enterprise infrastructure that consists of a cloud-based queuing service, relational database, and analytical framework for running forensic processes. During a 3-month experiment with Purdue University, TractorBeam was utilized in a simulated operational setting across 34 users to evaluate techniques to identify masquerading users (i.e., users other than the intended device user). The research team surmises that all masqueraders are undesirable to an enterprise, even when a masquerader lacks malicious intent. The PMF system reconstructed 821 forensic images, extracted one million audit events, and accurately detected masqueraders. Evaluation revealed that developed methods reduced storage requirements 50-fold. This paper describes the PMF architecture, performance of TractorBeam throughout the protocol, and results of the masquerading user analysis. © 2016 American Academy of Forensic Sciences.

  4. Virtual Hubs for facilitating access to Open Data

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo; Latre, Miguel Á.; Ernst, Julia; Brumana, Raffaella; Brauman, Stefan; Nativi, Stefano

    2015-04-01

    In October 2014 the ENERGIC-OD (European NEtwork for Redistributing Geospatial Information to user Communities - Open Data) project, funded by the European Union under the Competitiveness and Innovation framework Programme (CIP), started. In response to the EU call, the general objective of the project is to "facilitate the use of open (freely available) geographic data from different sources for the creation of innovative applications and services through the creation of Virtual Hubs". In ENERGIC-OD, Virtual Hubs are conceived as information systems supporting the full life cycle of Open Data: publishing, discovery and access. They facilitate the use of Open Data by lowering, and possibly removing, the main barriers that hamper geo-information (GI) usage by end-users and application developers. Data and data-service heterogeneity is recognized as one of the major barriers to Open Data (re-)use. It forces end-users and developers to spend considerable effort on accessing different infrastructures and harmonizing datasets. Such heterogeneity cannot be completely removed through the adoption of standard specifications for service interfaces, metadata and data models, since different infrastructures adopt different standards to address specific challenges and use-cases. Thus, beyond a certain extent, heterogeneity is irreducible, especially in interdisciplinary contexts. ENERGIC-OD Virtual Hubs address heterogeneity by adopting a mediation and brokering approach: specific components (brokers) are dedicated to harmonizing service interfaces, metadata and data models, enabling seamless discovery of and access to heterogeneous infrastructures and datasets. As an innovation project, ENERGIC-OD will integrate several existing technologies to implement Virtual Hubs as single points of access to geospatial datasets provided by new or existing platforms and infrastructures, including INSPIRE-compliant systems and Copernicus services. ENERGIC-OD will deploy a set of five Virtual Hubs (VHs) at the national level, in France, Germany, Italy, Poland and Spain, and an additional one at the European level. VHs will be provided according to the cloud Software-as-a-Service model. The main expected impact of the VHs is the creation of new business opportunities by opening up access to Research Data and Public Sector Information. Therefore, ENERGIC-OD addresses not only end-users, who will have the opportunity to access a VH through a geo-portal, but also application developers, who will be able to access VH functionalities through simple Application Programming Interfaces (APIs). The ENERGIC-OD Consortium will develop ten different applications on top of the deployed VHs. These aim to demonstrate how VHs facilitate the development of new and multidisciplinary applications based on the full exploitation of (open) GI, hence stimulating innovation and business activities.
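
    The mediation/brokering approach can be illustrated with a toy adapter layer that maps heterogeneous source metadata onto one common model, so clients always see a single schema. The field names, source labels, and records below are invented for illustration and are not the ENERGIC-OD API.

```python
# Per-source adapters: each knows one source's native metadata schema.
def from_inspire(rec):
    return {"title": rec["md:title"], "bbox": rec["md:extent"]}

def from_copernicus(rec):
    return {"title": rec["productName"], "bbox": rec["footprint"]}

BROKERS = {"inspire": from_inspire, "copernicus": from_copernicus}

def discover(source, raw_records):
    """Return records from any source in the harmonized common model."""
    return [BROKERS[source](r) for r in raw_records]

print(discover("inspire", [{"md:title": "Land cover", "md:extent": [5, 44, 12, 48]}]))
print(discover("copernicus", [{"productName": "S2 tile", "footprint": [6, 45, 7, 46]}]))
```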

  5. The Contribution of the Geodetic Community (WG4) to EPOS

    NASA Astrophysics Data System (ADS)

    Fernandes, R. M. S.; Bastos, L. C.; Bruyninx, C.; D'Agostino, N.; Dousa, J.; Ganas, A.; Lidberg, M.; Nocquet, J.-M.

    2012-04-01

    WG4 - "EPOS Geodetic Data and Infrastructure" - is the Working Group of the EPOS project responsible for defining and preparing the integration of the existing pan-European geodetic infrastructures into a single, consistent future infrastructure that supports the European geosciences, which is the ultimate goal of the EPOS project. WG4 is formed by representatives of the participating EPOS countries and of EUREF (European Reference Frame), which also ensures the inclusion of, and contact with, countries that are not formally part of the current phase of EPOS. In practice, the fact that Europe is made up of many countries (with different laws and policies) and lacks an infrastructure similar to UNAVCO (which concentrates the effort of the local geoscience community) makes it difficult to create a common geodetic infrastructure serving not only the entire geoscience community but also many other areas of great socio-economic impact. The benefits of creating such an infrastructure (shared and easily accessed by all) for optimizing existing and future geodetic resources are evident. This presentation details the work being carried out within WG4 on defining strategies and implementing solutions that will permit end-users, and in particular geo-scientists, to access geodetic data, derived solutions, and associated metadata through transparent and uniform processes. Discussed issues include access to high-rate data in near real-time, storage and backup of historical and future data, the sustainability of the networks in order to achieve long-term stability of the observation infrastructure, seamless access to the data, open data policies, and processing tools.

  6. Nuclear Energy Infrastructure Database Fitness and Suitability Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heidrich, Brenden

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation (NE-4) initiated the Nuclear Energy-Infrastructure Management Project by tasking the Nuclear Science User Facilities (NSUF) to create a searchable and interactive database of all pertinent NE-supported or related infrastructure. This database will be used for analyses to establish needs, redundancies, efficiencies, distributions, etc., in order to best understand the utility of NE's infrastructure and inform the content of the infrastructure calls. The NSUF developed the database by utilizing data and policy direction from a wide variety of reports from the Department of Energy, the National Research Council, the International Atomic Energy Agency and various other federal and civilian resources. The NEID contains data on 802 R&D instruments housed in 377 facilities at 84 institutions in the US and abroad. A Database Review Panel (DRP) was formed to review and provide advice on the development, implementation and utilization of the NEID. The panel comprises five members with expertise in nuclear energy-associated research, intended to represent the major constituencies associated with nuclear energy research: academia, industry, research reactors, national laboratories, and Department of Energy program management. The Nuclear Energy Infrastructure Database Review Panel concludes that the NSUF has succeeded in creating a capability and infrastructure database that identifies and documents the major nuclear energy research and development capabilities across the DOE complex. The effort to maintain and expand the database will be ongoing. Detailed information on many facilities must still be gathered from the associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements.

  7. Assistive Awareness in Smart Grids

    NASA Astrophysics Data System (ADS)

    Bourazeri, Aikaterini; Almajano, Pablo; Rodriguez, Inmaculada; Lopez-Sanchez, Maite

    The following sections are included: * Introduction * Background * The User-Infrastructure Interface * User Engagement through Assistive Awareness * Research Impact * Serious Games for Smart Grids * Serious Game Technology * Game scenario * Game mechanics * Related Work * Summary and Conclusions

  8. The Falcon Telescope Network

    NASA Astrophysics Data System (ADS)

    Chun, F.; Tippets, R.; Dearborn, M.; Gresham, K.; Freckleton, R.; Douglas, M.

    2014-09-01

    The Falcon Telescope Network (FTN) is a global network of small aperture telescopes developed by the Center for Space Situational Awareness Research in the Department of Physics at the United States Air Force Academy (USAFA). Consisting of commercially available equipment, the FTN is a collaborative effort between USAFA and other educational institutions ranging from two- and four-year colleges to major research universities. USAFA provides the equipment (e.g. telescope, mount, camera, filter wheel, dome, weather station, computers and storage devices) while the educational partners provide the building and infrastructure to support an observatory. The user base includes USAFA along with K-12 and higher-education faculty and students. Since the FTN serves a general-purpose mission, objects of interest include satellites, astronomical research targets, and STEM support images. The raw imagery, all in the public domain, will be accessible to FTN partners and will be archived at USAFA in the Cadet Space Operations Center. FTN users will be able to submit observational requests via a web interface. The requests will then be prioritized based on the type of user, the object of interest, and a user-defined priority; a network-wide schedule will be developed every 24 hours, and each FTN site will autonomously execute its portion of the schedule. After an observational request is completed, the FTN user will receive notification of collection and a link to the data. The Falcon Telescope Network is an ambitious endeavor, but demonstrates the cooperation that can be achieved by multiple educational institutions.
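
    The described prioritization can be sketched as a simple multi-key sort over pending requests. The weights and request fields below are invented for illustration, not the FTN's actual scheduling policy.

```python
# Lower weights sort first; these rankings are hypothetical.
USER_WEIGHT = {"usafa": 0, "partner": 1, "k12": 2}
OBJECT_WEIGHT = {"satellite": 0, "astronomy": 1, "stem": 2}

requests = [
    {"id": 17, "user": "k12", "object": "stem", "priority": 1},
    {"id": 42, "user": "usafa", "object": "satellite", "priority": 3},
    {"id": 23, "user": "partner", "object": "astronomy", "priority": 1},
]

# Rank by user type, then object class, then the user-assigned priority,
# producing the network-wide schedule each site would execute its share of.
schedule = sorted(requests, key=lambda r: (USER_WEIGHT[r["user"]],
                                           OBJECT_WEIGHT[r["object"]],
                                           r["priority"]))
print([r["id"] for r in schedule])  # [42, 23, 17]
```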

  9. A National contribution to the GEO Science and Technology roadmap: GIIDA Project

    NASA Astrophysics Data System (ADS)

    Nativi, Stefano; Mazzetti, Paolo; Guzzetti, Fausto; Oggioni, Alessandro; Pirrone, Nicola; Santolieri, Rosalia; Viola, Angelo; Tartari, Gianni; Santoro, Mattia

    2010-05-01

    The GIIDA (Gestione Integrata e Interoperativa dei Dati Ambientali) project is an initiative of the Italian National Research Council (CNR) launched in 2008 as an inter-departmental project, aiming to design and develop a multidisciplinary e-infrastructure (cyber-infrastructure) for the management, processing, and evaluation of Earth and environmental resources - i.e. data, services, models, sensors, best practices. GIIDA has been contributing to the implementation of the GEO (Group on Earth Observations) Science and Technology (S&T) roadmap by: (a) linking relevant S&T communities to GEOSS (GEO System of Systems); (b) ensuring that GEOSS is built based on state-of-the-art science and technology. GIIDA co-ordinates the CNR's digital infrastructure development for Earth Observation resource sharing and cooperates with other national agencies and existing projects pursuing the same objective. For the CNR, GIIDA provides an interface to European and international interoperability programmes (e.g. INSPIRE, and GMES). It builds a national network for dialogue and resolution of issues at varying scientific and technical levels. To achieve such goals, GIIDA introduced a set of guidance principles:
    • To shift from a "traditional" data-centric approach to a more advanced service-based solution for Earth System Science and environmental information.
    • To shift the focus from Data to Information Spatial Infrastructures in order to support decision-making.
    • To be interoperable with analogous national (e.g. SINAnet, and the INSPIRE National Infrastructure) and international initiatives (e.g. INSPIRE, GMES, SEIS, and GEOSS).
    • To reinforce the Italian presence in the European and international programmes concerning digital infrastructures, geospatial information, and the Mega-Science approach.
    • To apply the national and international Information Technology (IT) standards for achieving multi-disciplinary interoperability in the Earth and Space Sciences (e.g. ISO, OGC, CEN, CNIPA).
    In keeping with GEOSS, the GIIDA infrastructure adopts a System of Systems architectural approach in order to federate the existing systems managed by a set of recognized Thematic Areas (i.e. Risks, Biodiversity, Climate Change, Air Quality, Land and Water Quality, Ocean and Marine Resources, Joint Research and Public Administration infrastructures). The GIIDA system of systems will contribute to the development of multidisciplinary teams studying the global Earth systems in order to address the needs arising from the GEO Societal Benefit Areas (SBAs). GIIDA issued a Call for Pilots, receiving more than 20 high-level projects that are contributing to the development of the GIIDA system. A nation-wide environmental research infrastructure must be interconnected with analogous digital infrastructures operated by other important stakeholders, such as public users and private companies. In fact, the long-term sustainability of a "System of Systems" requires synergies between all the involved stakeholder domains: Users, Governance, Capacity provision, and Research. Therefore, in order to increase the effectiveness of GIIDA's contribution to a national environmental e-infrastructure, collaborations were activated with relevant actors from the other stakeholder domains at the national level (e.g. ISPRA SINAnet).

  10. Psychological Usability of Layered Application Software Platforms

    NASA Technical Reports Server (NTRS)

    Uhiarik, John

    1999-01-01

    This grant provided Graduate Research Fellowship Program support to James Michael Herold to obtain a graduate degree from the Department of Psychology at Kansas State University and conduct usability testing of graphical user interfaces at the Kennedy Space Center. The student independently took an additional internship at Boll Laboratories without informing his graduate advisor or the Department of Psychology. Because he was not making progress toward his degree, he elected not to pursue his graduate studies at Kansas State University and self-terminated from the program (again without informing his advisor or the Department of Psychology). What he accomplished for NASA in terms of usability testing at the Kennedy Space Center is unclear. NASA terminated support for the project on 07/30/99, including a $4,000 commitment to provide infrastructure support to the Department of Psychology.

  11. Development of a web service for analysis in a distributed network.

    PubMed

    Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila

    2014-01-01

    We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. We recently developed and published a web service for model construction and data analysis in a distributed environment. This recent paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. We followed a two-stage development approach by first implementing the backbone system and incrementally improving the user experience through interactions with potential users during the development. Our system went through various stages such as proof of concept, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support cross-platform deployment. During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We worked out solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes.

  12. Development of a Web Service for Analysis in a Distributed Network

    PubMed Central

    Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila

    2014-01-01

    Objective: We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. Background: We recently developed and published a web service for model construction and data analysis in a distributed environment. This recent paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. Methods: We followed a two-stage development approach by first implementing the backbone system and incrementally improving the user experience through interactions with potential users during the development. Our system went through various stages such as proof of concept, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support cross-platform deployment. Discussion: During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We worked out solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Conclusion: Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes. PMID:25848586
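
    The aggregation scheme at the heart of GLORE can be sketched in a few lines of numpy: each site shares only the gradient and Hessian of its local logistic log-likelihood, and a coordinating server performs Newton-Raphson updates. This is a minimal illustration on simulated data, not the published implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three "sites", each holding its own patients (features X, outcomes y).
sites = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50)) for _ in range(3)]

def local_terms(X, y, beta):
    """Gradient and Hessian of the logistic log-likelihood on local data only."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                         # score vector
    hess = -(X * (p * (1 - p))[:, None]).T @ X   # Hessian (negative definite)
    return grad, hess

beta = np.zeros(3)
for _ in range(10):  # Newton-Raphson at the server: only aggregates cross sites
    grads, hesss = zip(*(local_terms(X, y, beta) for X, y in sites))
    beta -= np.linalg.solve(sum(hesss), sum(grads))
print(beta)  # same estimate as pooling the rows, without sharing patient data
```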

  13. A Battery-Aware Algorithm for Supporting Collaborative Applications

    NASA Astrophysics Data System (ADS)

    Rollins, Sami; Chang-Yit, Cheryl

    Battery-powered devices such as laptops, cell phones, and MP3 players are becoming ubiquitous. There are several significant ways in which the ubiquity of battery-powered technology impacts the field of collaborative computing. First, applications such as collaborative data gathering become possible. Also, existing applications that depend on collaborating devices to maintain the system infrastructure must be reconsidered. Fundamentally, the problem lies in the fact that collaborative applications often require end-user computing devices to perform tasks that happen in the background and are not directly advantageous to the user. In this work, we seek to better understand how laptop users use the batteries attached to their devices and analyze a battery-aware alternative to Gnutella’s ultrapeer selection algorithm. Our algorithm provides insight into how system maintenance tasks can be allocated to battery-powered nodes. The most significant result of our study indicates that a large portion of laptop users can participate in system maintenance without sacrificing any of their battery. These results show great promise for existing collaborative applications as well as new applications, such as collaborative data gathering, that rely upon battery-powered devices.
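
    A toy version of such a battery-aware selection rule is sketched below: prefer nodes on AC power, then nodes with the most battery remaining, so background maintenance work lands on peers that sacrifice the least. The node attributes and threshold are invented, not the paper's measured traces or its exact algorithm.

```python
nodes = [
    {"id": "n1", "on_ac": False, "battery": 0.35},
    {"id": "n2", "on_ac": True,  "battery": 0.90},
    {"id": "n3", "on_ac": False, "battery": 0.80},
]

def pick_ultrapeers(nodes, k=2, min_battery=0.5):
    # Only nodes on AC power or with ample charge are eligible for maintenance.
    eligible = [n for n in nodes if n["on_ac"] or n["battery"] >= min_battery]
    # Prefer AC-powered nodes first, then the highest remaining battery.
    eligible.sort(key=lambda n: (not n["on_ac"], -n["battery"]))
    return [n["id"] for n in eligible[:k]]

print(pick_ultrapeers(nodes))  # ['n2', 'n3'] -- n1 keeps its battery for the user
```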

  14. SAME4HPC: A Promising Approach in Building a Scalable and Mobile Environment for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karthik, Rajasekar

    2014-01-01

    In this paper, an architecture for building a Scalable And Mobile Environment For High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, there exists a significant number of low-end and older devices that rely heavily on the server or the cloud infrastructure to do the heavy lifting. Our architecture aims to support both types of devices to provide high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu, GeoServer, and high-performance JavaScript frameworks are some of the key open-source and industry-standard components adopted in this architecture.

  15. Developing standards for a national spatial data infrastructure

    USGS Publications Warehouse

    Wortman, Kathryn C.

    1994-01-01

    The concept of a framework for data and information linkages among producers and users, known as a National Spatial Data Infrastructure (NSDI), is built upon four corners: data, technology, institutions, and standards. Standards are paramount to increase the efficiency and effectiveness of the NSDI. Historically, data standards and specifications have been developed with a very limited scope - they were parochial, and even competitive in nature, and promoted the sharing of data and information within only a small community at the expense of more open sharing across many communities. Today, an approach is needed to grow and evolve standards to support open systems and provide consistency and uniformity among data producers. There are several significant ongoing activities in geospatial data standards: transfer or exchange, metadata, and data content. In addition, standards in other areas are under discussion, including data quality, data models, and data collection.

  16. IGI (the Italian Grid initiative) and its impact on the Astrophysics community

    NASA Astrophysics Data System (ADS)

    Pasian, F.; Vuerli, C.; Taffoni, G.

    IGI - the Association for the Italian Grid Infrastructure - has been established as a consortium of 14 different national institutions to provide long-term sustainability to the Italian Grid. Its formal predecessor, the Grid.it project, came to a close in 2006; to extend the benefits of this project, IGI has taken over and acts as the national coordinator for the different sectors of the Italian e-Infrastructure present in EGEE. IGI plans to support activities in a vast range of scientific disciplines - e.g. Physics, Astrophysics, Biology, Health, Chemistry, Geophysics, Economy, Finance - and possible extensions to other sectors such as Civil Protection, e-Learning, and dissemination in universities and secondary schools. Among these, the Astrophysics community is active as a user, by porting applications of various kinds, but also as a resource provider in terms of computing power and storage, and as a middleware developer.

  17. The European seismological waveform framework EIDA

    NASA Astrophysics Data System (ADS)

    Trani, Luca; Koymans, Mathijs; Quinteros, Javier; Heinloo, Andres; Euchner, Fabian; Strollo, Angelo; Sleeman, Reinoud; Clinton, John; Stammler, Klaus; Danecek, Peter; Pedersen, Helle; Ionescu, Constantin; Pinar, Ali; Evangelidis, Christos

    2017-04-01

    The ORFEUS (www.orfeus-eu.org) European Integrated Data Archive (EIDA, www.orfeus-eu.org/eida/eida.html) federates (currently) 11 major European seismological data centres into a common organisational and operational framework which offers: (a) transparent and uniform access tools, advanced services and products for seismological waveform data; (b) a platform for establishing common policies for the curation of seismological waveform data and the description of waveform data by standardised quality metrics; (c) proper attribution and citation (e.g. data ownership). Since its establishment in 2013, EIDA has been seamlessly collecting and distributing large amounts of seismological data and products to the research community and beyond. A major task of EIDA is the ongoing improvement of the services, tools and products portfolio in order to meet users' increasingly demanding requirements. At present EIDA is entering a new operational phase and will become the reference infrastructure for seismological waveform data in the pan-European infrastructure for solid-Earth science, EPOS (European Plate Observing System, www.epos-ip.org). The EIDA Next Generation developments, initiated within the H2020 project EPOS-IP, will provide a new infrastructure that will support the seismological and multidisciplinary EPOS community, facilitating interoperability in a broader context. EIDA NG comprises a number of new services and products, e.g. a Routing Service, an Authentication Service, the WFCatalog, a Mediator, and the Station Book, with more to come in the near future. In this contribution we present the current status of the EIDA NG developments and provide an overview of the usage of the new services and their impact on the user community.

  18. An Integrated Web-based Decision Support System in Disaster Risk Management

    NASA Astrophysics Data System (ADS)

    Aye, Z. C.; Jaboyedoff, M.; Derron, M. H.

    2012-04-01

    Nowadays, web-based decision support systems (DSS) play an essential role in disaster risk management because they help decision makers improve their performance and make better decisions without having to solve complex problems themselves, while reducing the required human resources and time. Since the decision-making process is one of the main factors that influence the damages and losses a society suffers, it is extremely important to make the right decisions at the right time by combining available risk information with advanced web technology for Geographic Information Systems (GIS) and Decision Support Systems (DSS). This paper presents an integrated web-based decision support system (DSS) showing how to use risk information in risk management efficiently and effectively, while highlighting the importance of a decision support system in the field of risk reduction. Going beyond conventional systems, it allows users to define their own strategies, from risk identification to risk reduction, which leads to an integrated approach to risk management. In addition, it considers the complexity of a changing environment from different perspectives and sectors, with diverse stakeholders involved in the development process. The aim of this platform is to contribute to the natural hazards and geosciences community by developing an open-source web platform where users can analyze risk profiles and make decisions by performing cost-benefit analysis, Environmental Impact Assessment (EIA) and Strategic Environmental Assessment (SEA) with the support of the other tools and resources provided. There are different access rights to the system depending on user profiles and responsibilities. The system is still under development; the current version provides map viewing, basic GIS functionality, assessment of important infrastructure (e.g. bridges, hospitals) affected by landslides, and visualization of the impact-probability matrix in the socio-economic dimension.
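
    In its simplest form, the cost-benefit step such a DSS supports reduces to comparing the discounted expected losses avoided by a mitigation measure with its cost. The sketch below uses invented figures purely for illustration; it is not the platform's actual calculation.

```python
def npv_of_mitigation(annual_loss, reduction, cost, years=20, rate=0.04):
    """Net present value of avoided losses minus the mitigation cost."""
    avoided = annual_loss * reduction              # expected annual loss avoided
    pv = sum(avoided / (1 + rate) ** t for t in range(1, years + 1))
    return pv - cost

# A measure cutting expected annual landslide losses of 2 M by 60%, costing 15 M:
print(round(npv_of_mitigation(2_000_000, 0.6, 15_000_000)))  # > 0 => worthwhile
```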

  19. Scientific Infrastructure To Support Manned And Unmanned Aircraft, Tethered Balloons, And Related Aerial Activities At Doe Arm Facilities On The North Slope Of Alaska

    NASA Astrophysics Data System (ADS)

    Ivey, M.; Dexheimer, D.; Hardesty, J.; Lucero, D. A.; Helsel, F.

    2015-12-01

    The U.S. Department of Energy (DOE), through its scientific user facility, the Atmospheric Radiation Measurement (ARM) facilities, provides scientific infrastructure and data to the international Arctic research community via its research sites located on the North Slope of Alaska. DOE has recently invested in improvements to facilities and infrastructure to support operations of unmanned aerial systems for science missions in the Arctic and North Slope of Alaska. A new ground facility, the Third ARM Mobile Facility, was installed at Oliktok Point Alaska in 2013. Tethered instrumented balloons were used to make measurements of clouds in the boundary layer including mixed-phase clouds. A new Special Use Airspace was granted to DOE in 2015 to support science missions in international airspace in the Arctic. Warning Area W-220 is managed by Sandia National Laboratories for DOE Office of Science/BER. W-220 was successfully used for the first time in July 2015 in conjunction with Restricted Area R-2204 and a connecting Altitude Reservation Corridor (ALTRV) to permit unmanned aircraft to operate north of Oliktok Point. Small unmanned aircraft (DataHawks) and tethered balloons were flown at Oliktok during the summer and fall of 2015. This poster will discuss how principal investigators may apply for use of these Special Use Airspaces, acquire data from the Third ARM Mobile Facility, or bring their own instrumentation for deployment at Oliktok Point, Alaska. The printed poster will include the standard DOE funding statement.

  20. Concept of a spatial data infrastructure for web-mapping, processing and service provision for geo-hazards

    NASA Astrophysics Data System (ADS)

    Weinke, Elisabeth; Hölbling, Daniel; Albrecht, Florian; Friedl, Barbara

    2017-04-01

    Geo-hazards and their effects are distributed geographically over wide regions. Effective mapping and monitoring are essential for hazard assessment and mitigation, and are often best achieved using satellite imagery and new object-based image analysis approaches to identify and delineate geo-hazard objects (landslides, floods, forest fires, storm damages, etc.). At the moment, several local and national databases and platforms provide and publish data on different types of geo-hazards, as well as web-based risk maps and decision support systems. The European Commission also implemented the Copernicus Emergency Management Service (EMS) in 2015, which publishes information about natural and man-made disasters and risks. Currently, however, no platform for landslides or geo-hazards as such exists that enables the integration of the user into the mapping and monitoring process. In this study we introduce the concept of a spatial data infrastructure for object delineation, web processing and service provision of landslide information, with the focus on user interaction in all processes. A first prototype for the processing and mapping of landslides in Austria and Italy has been developed within the project Land@Slide, funded by the Austrian Research Promotion Agency FFG in the Austrian Space Applications Program ASAP. The spatial data infrastructure and its services for the mapping, processing and analysis of landslides can be extended to other regions and to all types of geo-hazards for analysis and delineation based on Earth Observation (EO) data. The architecture of the first prototypical spatial data infrastructure includes four main areas of technical components. The data tier consists of a file storage system and the spatial data catalogue for the management of EO data, other geospatial data on geo-hazards, as well as descriptions and protocols for data processing and analysis. An interface to extend the data integration from external sources (e.g. Sentinel-2 data) is planned to enable rapid mapping. The server tier consists of Java-based web and GIS servers. Sub-services and main services form the service tier. Sub-services include map services, feature editing services, geometry services, geoprocessing services and metadata services. For (meta)data provision and to support data interoperability, OGC web standards and a REST interface are used. Four central main services are designed and developed: (1) a mapping service (including image segmentation and classification approaches), (2) a monitoring service to monitor changes over time, (3) a validation service to analyze landslide delineations from different sources, and (4) an infrastructure service to identify infrastructure affected by landslides. The main services use and combine parts of the sub-services. Furthermore, a series of client applications based on new technology standards makes use of the data and services offered by the spatial data infrastructure. Next steps include extending the current spatial data infrastructure to other areas and geo-hazard types, towards a spatial data infrastructure that can assist targeted mapping and monitoring of geo-hazards in a global context.

  1. Surface transportation: clear federal role and criteria-based selection process could improve three national and regional infrastructure programs.

    DOT National Transportation Integrated Search

    2009-02-01

    To help meet increasing transportation demands, the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) created three programs to invest federal funds in national and regional transportation infrastructur...

  2. The JASMIN Cloud: specialised and hybrid to meet the needs of the Environmental Sciences Community

    NASA Astrophysics Data System (ADS)

    Kershaw, Philip; Lawrence, Bryan; Churchill, Jonathan; Pritchard, Matt

    2014-05-01

    Cloud computing provides enormous opportunities for the research community. The large public cloud providers provide near-limitless scaling capability. However, adapting Cloud to scientific workloads is not without its problems. The commodity nature of the public cloud infrastructure can be at odds with the specialist requirements of the research community. Issues such as trust, ownership of data, WAN bandwidth and costing models make additional barriers to more widespread adoption. Alongside the application of public cloud for scientific applications, a number of private cloud initiatives are underway in the research community of which the JASMIN Cloud is one example. Here, cloud service models are being effectively super-imposed over more established services such as data centres, compute cluster facilities and Grids. These have the potential to deliver the specialist infrastructure needed for the science community coupled with the benefits of a Cloud service model. The JASMIN facility based at the Rutherford Appleton Laboratory was established in 2012 to support the data analysis requirements of the climate and Earth Observation community. In its first year of operation, the 5PB of available storage capacity was filled and the hosted compute capability used extensively. JASMIN has modelled the concept of a centralised large-volume data analysis facility. Key characteristics have enabled success: peta-scale fast disk connected via low latency networks to compute resources and the use of virtualisation for effective management of the resources for a range of users. A second phase is now underway funded through NERC's (Natural Environment Research Council) Big Data initiative. This will see significant expansion to the resources available with a doubling of disk-based storage to 12PB and an increase of compute capacity by a factor of ten to over 3000 processing cores. This expansion is accompanied by a broadening in the scope for JASMIN, as a service available to the entire UK environmental science community. Experience with the first phase demonstrated the range of user needs. A trade-off is needed between access privileges to resources, flexibility of use and security. This has influenced the form and types of service under development for the new phase. JASMIN will deploy a specialised private cloud organised into "Managed" and "Unmanaged" components. In the Managed Cloud, users have direct access to the storage and compute resources for optimal performance but for reasons of security, via a more restrictive PaaS (Platform-as-a-Service) interface. The Unmanaged Cloud is deployed in an isolated part of the network but co-located with the rest of the infrastructure. This enables greater liberty to tenants - full IaaS (Infrastructure-as-a-Service) capability to provision customised infrastructure - whilst at the same time protecting more sensitive parts of the system from direct access using these elevated privileges. The private cloud will be augmented with cloud-bursting capability so that it can exploit the resources available from public clouds, making it effectively a hybrid solution. A single interface will overlay the functionality of both the private cloud and external interfaces to public cloud providers giving users the flexibility to migrate resources between infrastructures as requirements dictate.

  3. Distributed data analysis in ATLAS

    NASA Astrophysics Data System (ADS)

    Nilsson, Paul; Atlas Collaboration

    2012-12-01

    Data analysis using grid resources is one of the fundamental challenges to be addressed before the start of LHC data taking. The ATLAS detector will produce petabytes of data per year, and roughly one thousand users will need to run physics analyses on this data. Appropriate user interfaces and helper applications have been made available to ensure that the grid resources can be used without requiring expertise in grid technology. These tools enlarge the number of grid users from a few production administrators to potentially all participating physicists. ATLAS makes use of three grid infrastructures for the distributed analysis: the EGEE sites, the Open Science Grid, and NorduGrid. These grids are managed by the gLite workload management system, the PanDA workload management system, and ARC middleware; many sites can be accessed via both the gLite WMS and PanDA. Users can choose between two front-end tools to access the distributed resources. Ganga is a tool co-developed with LHCb to provide a common interface to the multitude of execution backends (local, batch, and grid). The PanDA workload management system provides a set of utilities called PanDA Client; with these tools users can easily submit Athena analysis jobs to the PanDA-managed resources. Distributed data is managed by Don Quixote 2, a system developed by ATLAS; DQ2 is used to replicate datasets according to the data distribution policies and maintains a central catalog of file locations. The operation of the grid resources is continually monitored by the Ganga Robot functional testing system, and infrequent site stress tests are performed using the HammerCloud system. In addition, the DAST shift team is a group of power users who take shifts to provide distributed analysis user support; this team has effectively relieved the burden of support from the developers.

  4. Towards an EO-based Landslide Web Mapping and Monitoring Service

    NASA Astrophysics Data System (ADS)

    Hölbling, Daniel; Weinke, Elisabeth; Albrecht, Florian; Eisank, Clemens; Vecchiotti, Filippo; Friedl, Barbara; Kociu, Arben

    2017-04-01

    National and regional authorities and infrastructure maintainers in mountainous regions require accurate knowledge of the location and spatial extent of landslides for hazard and risk management. Information on landslides is often collected by a combination of ground surveying and manual image interpretation following landslide-triggering events. However, the high workload and limited time for data acquisition result in a trade-off between completeness, accuracy and detail. Remote sensing data offer great potential for mapping and monitoring landslides in a fast and efficient manner. While facing an increased availability of high-quality Earth Observation (EO) data and new computational methods, there is still a lack of science-policy interaction and of innovative tools and methods that can easily be used by stakeholders and users to support their daily work. Taking up this issue, we introduce an innovative and user-oriented EO-based web service for landslide mapping and monitoring. Three central design components of the service are presented: (1) the user requirements definition, (2) the semi-automated image analysis methods implemented in the service, and (3) the web mapping application with its responsive user interface. User requirements were gathered during semi-structured interviews with regional authorities. The potential users were asked if and how they employ remote sensing data for landslide investigation and what their expectations of a landslide web mapping service are regarding reliability and usability. The interviews revealed the capability of our service for landslide documentation and mapping as well as monitoring of selected landslide sites, for example to complete and update landslide inventory maps. In addition, the users see considerable potential for landslide rapid mapping. The user requirements analysis served as the basis for the service concept definition. Optical satellite imagery from different high resolution (HR) and very high resolution (VHR) sensors, e.g. Landsat, Sentinel-2, SPOT-5, WorldView-2/3, was acquired for different study areas in the Alps. Object-based image analysis (OBIA) methods were used for semi-automated mapping of landslides. Selected mapping routines and results, including step-by-step guidance, are integrated in the service by means of a web processing chain. This allows the user to gain insights into the service idea, the potential of semi-automated mapping methods, and the applicability of various satellite data for specific landslide mapping tasks. Moreover, an easy-to-use and guided classification workflow, which includes image segmentation, statistical classification and manual editing options, enables users to perform their own analyses. For validation, the classification results can be downloaded or compared against uploaded reference data using the implemented tools. Furthermore, users can compare the classification results to freely available data such as OpenStreetMap to identify landslide-affected infrastructure (e.g. roads, buildings). They can also upload infrastructure data available at their organization for specific assessments, or monitor the evolution of selected landslides over time. Further actions will include the validation of the service in collaboration with stakeholders, decision makers and experts, which is essential to produce landslide information products that can assist the targeted management of natural hazards, and the evaluation of the potential towards the development of an operational Copernicus downstream service.
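
    A heavily condensed sketch of the OBIA workflow such a service wraps (segmentation, per-object features, rule-based classification) is given below, using a synthetic image and a toy spectral index rather than real satellite data; operational runs would use richer features and trained classifiers.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

rng = np.random.default_rng(1)
img = rng.random((60, 60, 3)).astype(np.float32)
img[20:35, 10:40] *= [1.6, 0.7, 0.7]   # a bright-red "bare soil" patch (synthetic)

# Step 1: image segmentation into objects (graph-based Felzenszwalb method).
segments = felzenszwalb(img, scale=50, sigma=0.8, min_size=20)

# Step 2: one per-object feature -- a toy red-minus-green "spectral index".
redness = img[..., 0] - img[..., 1]

# Step 3: rule-based classification of objects by their mean index value.
candidates = [s for s in np.unique(segments)
              if redness[segments == s].mean() > 0.2]
print(f"{len(candidates)} candidate landslide segment(s)")
```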

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Ching-Yen; Shepelev, Aleksey; Qiu, Charlie

    With an increased number of Electric Vehicles (EVs) on the roads, charging infrastructure is gaining an ever-more important role in simultaneously meeting the needs of the local distribution grid and of EV users. This paper proposes a mesh-network RFID system for user identification and charging authorization as part of a smart charging infrastructure providing charge monitoring and control. The Zigbee-based mesh-network RFID provides a cost-efficient solution to identify and authorize vehicles for charging, and would allow EV charging to be conducted effectively while observing grid constraints and meeting the needs of EV drivers.
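
    The authorization logic described can be illustrated with a toy flow: a tag read at a charging station is checked against enrolled users, and a session is granted only if the local feeder has headroom. Tags, limits, and loads below are invented, not the paper's parameters or protocol.

```python
ENROLLED = {"04A1B2": "alice", "09C3D4": "bob"}  # RFID tag -> enrolled user
FEEDER_LIMIT_KW = 50.0                           # local distribution constraint

active_sessions_kw = [7.2, 7.2]                  # chargers already running

def authorize(tag, requested_kw=7.2):
    if tag not in ENROLLED:
        return False, "unknown tag"
    if sum(active_sessions_kw) + requested_kw > FEEDER_LIMIT_KW:
        return False, "feeder at capacity"       # grid constraint observed
    active_sessions_kw.append(requested_kw)
    return True, f"charging authorized for {ENROLLED[tag]}"

print(authorize("04A1B2"))   # (True, 'charging authorized for alice')
print(authorize("FFFFFF"))   # (False, 'unknown tag')
```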

  6. Using Integrated Earth and Social Science Data for Disaster Risk Assessment

    NASA Astrophysics Data System (ADS)

    Downs, R. R.; Chen, R. S.; Yetman, G.

    2016-12-01

    Society faces many different risks from both natural and technological hazards. In some cases, disaster risk managers focus on only a few risks, e.g., in regions where a single hazard, such as earthquakes, dominates. More often, however, disaster risk managers deal with multiple hazards that pose diverse threats to life, infrastructure, and livelihoods. From the viewpoint of scientists, hazards are often studied based on traditional disciplines such as seismology, hydrology, climatology, and epidemiology. But from the viewpoint of disaster risk managers, data are needed on all hazards in a specific region and on the exposure and vulnerability of population, infrastructure, and economic resources and activity. Such managers also need to understand how hazards, exposures, and vulnerabilities may interact, and how human and environmental systems respond, to hazard events, as in the case of the Fukushima nuclear disaster that followed from the Sendai earthquake and tsunami. In this regard, geospatial tools that enable visualization and analysis of both Earth and social science data can support the use case of disaster risk managers who need to quickly assess where specific hazard events occur relative to population and critical infrastructure. Such information can help them assess the potential severity of actual or predicted hazard events, identify population centers or key infrastructure at risk, and visualize hazard dynamics, e.g., earthquakes and their aftershocks or the paths of severe storms. This can then inform efforts to mitigate risks across multiple hazards, including reducing exposure and vulnerability, strengthening system resiliency, improving disaster response mechanisms, and targeting mitigation resources to the highest or most critical risks. We report here on initial efforts to develop hazard mapping tools that draw on open web services and support simple spatial queries about population exposure. The NASA Socioeconomic Data and Applications Center (SEDAC) Hazards Mapper, a web-based mapping tool, enables users to estimate population living in areas subject to flood or tornado warnings, near recent earthquakes, or around critical infrastructure. The HazPop mobile app, implemented for iOS devices, utilizes location services to support disaster risk managers working in field conditions.
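
    The population-exposure query behind tools like the Hazards Mapper can be sketched with geopandas: buffer a hazard point and sum the population of the units it intersects. The geometries and counts below are invented; real queries would run against gridded population layers such as SEDAC's.

```python
import geopandas as gpd
from shapely.geometry import Point, box

# Toy administrative units with population counts.
units = gpd.GeoDataFrame(
    {"pop": [120_000, 30_000, 55_000]},
    geometry=[box(0, 0, 1, 1), box(1, 0, 2, 1), box(0, 1, 1, 2)],
)

# A hazard footprint: an earthquake epicenter with a shaking radius (toy units).
quake = Point(0.9, 0.5).buffer(0.3)

# Spatial query: which units does the footprint touch, and how many people live there?
exposed = units[units.intersects(quake)]
print(int(exposed["pop"].sum()))   # 150000 people within the footprint
```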

  7. "Science SQL" as a Building Block for Flexible, Standards-based Data Infrastructures

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2016-04-01

    We have learnt to live with the pain of separating data and metadata into non-interoperable silos. For metadata, we enjoy the flexibility of databases, be they relational, graph, or some other NoSQL. In contrast, users still "drown in files", an unstructured, low-level archiving paradigm. It is time to bridge this chasm, which once was technologically induced but today can be overcome. One building block towards a common re-integrated information space is support for massive multi-dimensional spatio-temporal arrays. These "datacubes" appear as sensor, image, simulation, and statistics data in all science and engineering domains, and beyond. For example, 2-D satellite imagery, 2-D x/y/t image timeseries and x/y/z geophysical voxel data, and 4-D x/y/z/t climate data contribute to today's data deluge in the Earth sciences. Virtual observatories in the Space sciences routinely generate Petabytes of such data. Life sciences deal with microarray data, confocal microscopy, and human brain data, which all fall into the same category. The ISO SQL/MDA (Multi-Dimensional Arrays) candidate standard extends SQL with modelling and query support for n-D arrays ("datacubes") in a flexible, domain-neutral way. This heralds a new generation of services with new quality parameters, such as flexibility, ease of access, embedding into well-known user tools, and scalability mechanisms that remain completely transparent to users. Technology like the EU rasdaman ("raster data manager") Array Database system can support all of the above examples simultaneously. This is practically proven: as of today, rasdaman is in operational use on hundreds of Terabytes of satellite image timeseries datacubes, with transparent query distribution across more than 1,000 nodes. Array Databases offering SQL/MDA therefore constitute a natural common building block for next-generation data infrastructures. As initiator and editor of the standard, we present principles, implementation facets, and application examples as a basis for further discussion. Further, we highlight recent implementation progress in parallelization, data distribution, and query optimization, showing their effects on real-life use cases.
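
    What a datacube query expresses can be shown next to its numpy equivalent. The SQL below follows the general SELECT-over-a-cube shape of such array queries, but its exact syntax is an approximation written for illustration, not quoted from the SQL/MDA standard.

```python
import numpy as np

# Approximate, illustrative array-SQL: spatial mean over one time slice of a cube.
query = """
SELECT avg_cells(c[*:*, *:*, 5])   -- spatial mean of the time slice t = 5
FROM climate_cube AS c
"""
print(query)

# What the server would evaluate, expressed in numpy on a toy x/y/t datacube:
cube = np.random.default_rng(2).random((90, 180, 10))
print(cube[:, :, 5].mean())
```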

  8. Sankofa pediatric HIV disclosure intervention cyber data management: building capacity in a resource-limited setting and ensuring data quality

    PubMed Central

    Catlin, Ann Christine; Fernando, Sumudinie; Gamage, Ruwan; Renner, Lorna; Antwi, Sampson; Tettey, Jonas Kusah; Amisah, Kofi Aikins; Kyriakides, Tassos; Cong, Xiangyu; Reynolds, Nancy R.; Paintsil, Elijah

    2015-01-01

    Prevalence of pediatric HIV disclosure is low in resource-limited settings. Innovative, culturally sensitive, and patient-centered disclosure approaches are needed. Conducting such studies in resource-limited settings is not trivial considering the challenges of capturing, cleaning, and storing clinical research data. To overcome some of these challenges, the Sankofa pediatric disclosure intervention adopted an interactive cyber infrastructure for data capture and analysis. The Sankofa Project database system is built on the HUBzero cyber infrastructure (https://hubzero.org), an open source software platform. The hub database components support: (1) data management – the “databases” component creates, configures, and manages database access, backup, repositories, applications, and access control; (2) data collection – the “forms” component is used to build customized web case report forms that incorporate common data elements and include tailored form submit processing to handle error checking, data validation, and data linkage as the data are stored to the database; and (3) data exploration – the “dataviewer” component provides powerful methods for users to view, search, sort, navigate, explore, map, graph, visualize, aggregate, drill-down, compute, and export data from the database. The Sankofa cyber data management tool supports a user-friendly, secure, and systematic collection of all data. We have screened more than 400 child–caregiver dyads and enrolled nearly 300 dyads, with tens of thousands of data elements. The dataviews have successfully supported all data exploration and analysis needs of the Sankofa Project. Moreover, the ability of the sites to query and view data summaries has proven to be an incentive for collecting complete and accurate data. The data system has all the desirable attributes of an electronic data capture tool. It also provides an added advantage of building data management capacity in resource-limited settings due to its innovative data query and summary views and availability of real-time support by the data management team. PMID:26616131

  9. Spatial Data Infrastructures (SDIs): Improving the Scientific Environmental Data Management and Visualization with ArcGIS Platform

    NASA Astrophysics Data System (ADS)

    Shrestha, S. R.; Hogeweg, M.; Rose, B.; Turner, A.

    2017-12-01

    Meeting the requirement for quality, authoritatively sourced data can often be challenging when working with scientific data. In addition, the lack of a standard mechanism to discover, access, and use such data can be cumbersome. This results in slow research, poor dissemination and missed opportunities for research to positively impact policy and knowledge. There is widespread recognition that authoritative datasets are maintained by multiple organizations following various standards, and addressing these challenges will involve multiple stakeholders as well. The bottom line is that organizations need a mechanism to efficiently create, share, catalog, and discover data, and the ability to apply these to create authoritative information products and applications is powerful and provides value. In real-world applications, individual organizations develop, modify, finalize, and support foundational data for distributed users across the system and thus require an efficient method of data management. For this, the SDI (Spatial Data Infrastructure) framework can be applied, enabling GIS users to make efficient and powerful decisions based on strong visualization and analytics. Working with research institutions, governments, and organizations across the world, we have developed a Hub framework for data and analysis sharing that is driven by outcome-centric goals and applies common methodologies and standards. SDIs are an operational capability that should be equitably accessible to policy-makers, analysts, departments and public communities. These SDIs need to align with operational workflows and support social communication and collaboration. The Hub framework integrates data across agencies, projects and organizations to support interoperability and drive coordination. We will present and share how Esri has been supporting the development of local, state, and national SDIs for many years and show some use cases for applications of planetary SDI. We will also share what makes an SDI successful, how organizations have used the ArcGIS platform to quickly stand up key SDI products and applications, and describe some typical SDI scenarios.

  10. Sankofa pediatric HIV disclosure intervention cyber data management: building capacity in a resource-limited setting and ensuring data quality.

    PubMed

    Catlin, Ann Christine; Fernando, Sumudinie; Gamage, Ruwan; Renner, Lorna; Antwi, Sampson; Tettey, Jonas Kusah; Amisah, Kofi Aikins; Kyriakides, Tassos; Cong, Xiangyu; Reynolds, Nancy R; Paintsil, Elijah

    2015-01-01

    Prevalence of pediatric HIV disclosure is low in resource-limited settings. Innovative, culturally sensitive, and patient-centered disclosure approaches are needed. Conducting such studies in resource-limited settings is not trivial considering the challenges of capturing, cleaning, and storing clinical research data. To overcome some of these challenges, the Sankofa pediatric disclosure intervention adopted an interactive cyber infrastructure for data capture and analysis. The Sankofa Project database system is built on the HUBzero cyber infrastructure ( https://hubzero.org ), an open source software platform. The hub database components support: (1) data management - the "databases" component creates, configures, and manages database access, backup, repositories, applications, and access control; (2) data collection - the "forms" component is used to build customized web case report forms that incorporate common data elements and include tailored form submit processing to handle error checking, data validation, and data linkage as the data are stored to the database; and (3) data exploration - the "dataviewer" component provides powerful methods for users to view, search, sort, navigate, explore, map, graph, visualize, aggregate, drill-down, compute, and export data from the database. The Sankofa cyber data management tool supports a user-friendly, secure, and systematic collection of all data. We have screened more than 400 child-caregiver dyads and enrolled nearly 300 dyads, with tens of thousands of data elements. The dataviews have successfully supported all data exploration and analysis needs of the Sankofa Project. Moreover, the ability of the sites to query and view data summaries has proven to be an incentive for collecting complete and accurate data. The data system has all the desirable attributes of an electronic data capture tool. It also provides an added advantage of building data management capacity in resource-limited settings due to its innovative data query and summary views and availability of real-time support by the data management team.

  11. Interactive Model-Centric Systems Engineering (IMCSE) Phase 1

    DTIC Science & Technology

    2014-09-30

    and supporting infrastructure… testing. 4. Supporting MPTs. During Phase 1, the opportunity to develop several MPTs to support IMCSE arose, including supporting infrastructure… Analysis will be completed and tested with a case application, along with preliminary supporting infrastructure, which will then be used to inform the…

  12. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics

    PubMed Central

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A.; Caron, Christophe

    2015-01-01

    Summary: The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. Availability and implementation: The http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). Contact: contact@workflow4metabolomics.org PMID:25527831

  13. Workflow4Metabolomics: a collaborative research infrastructure for computational metabolomics.

    PubMed

    Giacomoni, Franck; Le Corguillé, Gildas; Monsoor, Misharl; Landi, Marion; Pericard, Pierre; Pétéra, Mélanie; Duperier, Christophe; Tremblay-Franco, Marie; Martin, Jean-François; Jacob, Daniel; Goulitquer, Sophie; Thévenot, Etienne A; Caron, Christophe

    2015-05-01

    The complex, rapidly evolving field of computational metabolomics calls for collaborative infrastructures where the large volume of new algorithms for data pre-processing, statistical analysis and annotation can be readily integrated whatever the language, evaluated on reference datasets and chained to build ad hoc workflows for users. We have developed Workflow4Metabolomics (W4M), the first fully open-source and collaborative online platform for computational metabolomics. W4M is a virtual research environment built upon the Galaxy web-based platform technology. It enables ergonomic integration, exchange and running of individual modules and workflows. Alternatively, the whole W4M framework and computational tools can be downloaded as a virtual machine for local installation. The http://workflow4metabolomics.org homepage enables users to open a private account and access the infrastructure. W4M is developed and maintained by the French Bioinformatics Institute (IFB) and the French Metabolomics and Fluxomics Infrastructure (MetaboHUB). Contact: contact@workflow4metabolomics.org.
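
    Because W4M is built on Galaxy, its workflows can in principle also be driven programmatically. A minimal sketch using the community bioblend client library (an assumption: the abstract does not mention bioblend, and the URL, API key, and identifiers below are placeholders):

        from bioblend.galaxy import GalaxyInstance

        # Connect to a Galaxy server; credentials are placeholders.
        gi = GalaxyInstance(url="https://workflow4metabolomics.org", key="YOUR_API_KEY")

        workflows = gi.workflows.get_workflows()   # workflows visible to this account
        wf_id = workflows[0]["id"]                 # pick one (illustrative)

        # Map a workflow input step to a dataset already present in a history.
        inputs = {"0": {"src": "hda", "id": "DATASET_ID"}}
        invocation = gi.workflows.invoke_workflow(wf_id, inputs=inputs)
        print(invocation.get("state"))             # inspect the invocation record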

  14. Corelli: a peer-to-peer dynamic replication service for supporting latency-dependent content in community networks

    NASA Astrophysics Data System (ADS)

    Tyson, Gareth; Mauthe, Andreas U.; Kaune, Sebastian; Mu, Mu; Plagemann, Thomas

    2009-01-01

    The quality of service for latency-dependent content, such as video streaming, largely depends on the distance and available bandwidth between the consumer and the content. Poor provision of these qualities results in reduced user experience and increased overhead. To alleviate this, many systems operate caching and replication, utilising dedicated resources to move the content closer to the consumer. Latency-dependent content creates particular issues for community networks, which often display the property of strong internal connectivity yet poor external connectivity. However, unlike traditional networks, communities often cannot deploy dedicated infrastructure for both monetary and practical reasons. To address these issues, this paper proposes Corelli, a peer-to-peer replication infrastructure designed for use in community networks. In Corelli, high capacity peers in communities autonomously build a distributed cache to dynamically pre-fetch content early on in its popularity lifecycle. By exploiting the natural proximity of peers in the community, users can gain extremely low latency access to content whilst reducing egress utilisation. Through simulation, it is shown that Corelli considerably increases accessibility and improves performance for latency-dependent content. Further, Corelli is shown to offer adaptive and resilient mechanisms that ensure that it can respond to variations in churn, demand and popularity.
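
    The pre-fetching idea is easy to sketch: a high-capacity peer counts requests and pulls content into its cache as soon as an item's popularity starts rising. The toy Python below illustrates the principle only; it is not the authors' implementation, and the threshold policy is invented for the example.

        from collections import Counter

        class PrefetchCache:
            """Toy popularity-driven pre-fetch cache in the spirit of Corelli."""
            def __init__(self, capacity, threshold):
                self.capacity = capacity
                self.threshold = threshold   # requests before an item is pre-fetched
                self.requests = Counter()
                self.cache = {}

            def observe(self, item_id, fetch):
                """Record a request; pre-fetch once the item looks popular."""
                self.requests[item_id] += 1
                if item_id not in self.cache and self.requests[item_id] >= self.threshold:
                    if len(self.cache) >= self.capacity:   # evict the least popular item
                        coldest = min(self.cache, key=self.requests.__getitem__)
                        del self.cache[coldest]
                    self.cache[item_id] = fetch(item_id)   # pull from a remote peer

        cache = PrefetchCache(capacity=100, threshold=3)
        for _ in range(3):
            cache.observe("video-42", fetch=lambda i: "<content of %s>" % i)
        print("video-42" in cache.cache)   # True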

  15. Services and the National Information Infrastructure. Report of the Information Infrastructure Task Force Committee on Applications and Technology, Technology Policy Working Group. Draft for Public Comment.

    ERIC Educational Resources Information Center

    Office of Science and Technology Policy, Washington, DC.

    In this report, the National Information Infrastructure (NII) services issue is addressed, and activities to advance the development of NII services are recommended. The NII is envisioned to grow into a seamless web of communications networks, computers, databases, and consumer electronics that will put vast amounts of information at users'…

  16. Enabling Research without Geographical Boundaries via Collaborative Research Infrastructures

    NASA Astrophysics Data System (ADS)

    Gesing, S.

    2016-12-01

    Collaborative research infrastructures on a global scale for the earth and space sciences face a plethora of challenges, from technical implementation to organizational aspects. Science gateways - also known as virtual research environments (VREs) or virtual laboratories - address part of these challenges by providing end-to-end solutions that help researchers focus on their specific research questions without needing to become acquainted with the technical details of the complex underlying infrastructures. In general, they provide a single point of entry to tools and data irrespective of organizational boundaries and thus make scientific discoveries easier and faster. The importance of science gateways has been recognized at both the national and international level by funding bodies and organizations. For example, the US NSF has just funded a Science Gateways Community Institute, which offers support, consultancy and openly accessible software repositories for users and developers; Horizon 2020 provides funding for virtual research environments in Europe, which has led to projects such as VRE4EIC (A Europe-wide Interoperable Virtual Research Environment to Empower Multidisciplinary Research Communities and Accelerate Innovation and Collaboration); national or continental research infrastructures such as XSEDE in the USA, Nectar in Australia and EGI in Europe support the development and uptake of science gateways; and the global initiatives International Coalition on Science Gateways, the RDA Virtual Research Environment Interest Group and the IEEE Technical Area on Science Gateways have been founded to provide global leadership on future directions for science gateways and to raise awareness of them. This presentation will give an overview of these projects and initiatives, which aim to support domain researchers and developers with measures for the efficient creation of science gateways and for increasing their usability and sustainability, across the breadth of topics in the context of science gateways. It will detail the challenges the community faces in collaborative research on a global scale without geographical boundaries and will offer suggestions for further enhancing outreach to domain researchers.

  17. Developing a Web-based system by integrating VGI and SDI for real estate management and marketing

    NASA Astrophysics Data System (ADS)

    Salajegheh, J.; Hakimpour, F.; Esmaeily, A.

    2014-10-01

    The importance of property in many respects, especially its impact on various sectors of the economy and on national macroeconomics, is clear. Because of the real, multi-dimensional and heterogeneous nature of housing as a commodity, several problems arise for the people involved in this field: the lack of an integrated system containing comprehensive property information, the limited awareness among some actors in this field of such information, and the lack of clear and comprehensive rules and regulations for trading and pricing. This research aims at the implementation of a crowd-sourced Web-based real estate support system. Creating a Spatial Data Infrastructure (SDI) within this system for collecting, updating and integrating all official data about property is also an objective of this study. The system uses a Web 2.0 broker and technologies such as Web services and service composition. This work aims to provide comprehensive and diverse information about property from different sources. For this purpose, a five-level real estate support system architecture is used. The PostgreSQL DBMS is used to implement the system, GeoServer is used as the map server and reference implementation of OGC (Open Geospatial Consortium) standards, and an Apache server is used to serve web pages and user interfaces. Integrating the introduced methods and technologies provides a proper environment for various users to use the system and share their information. This goal can only be achieved through cooperation among all organizations involved in real estate, implementing their required infrastructures as interoperable Web services.
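
    Since the system exposes data through GeoServer via OGC standards, a client can retrieve property features with a standard WFS GetFeature request. A minimal sketch (the endpoint and layer name are placeholders, not the paper's deployment):

        import requests

        params = {
            "service": "WFS",
            "version": "2.0.0",
            "request": "GetFeature",
            "typeNames": "realestate:parcels",    # hypothetical layer name
            "outputFormat": "application/json",   # GeoServer's GeoJSON output
            "count": 10,
        }
        resp = requests.get("https://example.org/geoserver/wfs", params=params, timeout=30)
        resp.raise_for_status()
        for feature in resp.json()["features"]:
            print(feature["id"], feature["properties"])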

  18. Lowering the barriers to computational modeling of Earth's surface: coupling Jupyter Notebooks with Landlab, HydroShare, and CyberGIS for research and education.

    NASA Astrophysics Data System (ADS)

    Bandaragoda, C.; Castronova, A. M.; Phuong, J.; Istanbulluoglu, E.; Strauch, R. L.; Nudurupati, S. S.; Tarboton, D. G.; Wang, S. W.; Yin, D.; Barnhart, K. R.; Tucker, G. E.; Hutton, E.; Hobley, D. E. J.; Gasparini, N. M.; Adams, J. M.

    2017-12-01

    The ability to test hypotheses about hydrology, geomorphology and atmospheric processes is invaluable to research in the era of big data. Although community resources are available, there remain significant educational, logistical and time-investment barriers to their use. Knowledge infrastructure is an emerging intellectual framework for understanding how people create, share and distribute knowledge - activities that have been dramatically transformed by Internet technologies. In addition to the technical and social components of a cyberinfrastructure system, knowledge infrastructure considers the educational, institutional, and open source governance components required to advance knowledge. We are designing an infrastructure environment that lowers common barriers to reproducing modeling experiments for earth surface investigation. Landlab is an open-source modeling toolkit for building, coupling, and exploring two-dimensional numerical models. HydroShare is an online collaborative environment for sharing hydrologic data and models. CyberGIS-Jupyter is an innovative cyberGIS framework for achieving data-intensive, reproducible, and scalable geospatial analytics using the Jupyter Notebook, based on ROGER - the first cyberGIS supercomputer - so that models can be elastically reproduced through cloud computing approaches. Our team of geomorphologists, hydrologists, and computer geoscientists has created a new infrastructure environment that combines these three pieces of software to enable knowledge discovery. Through this novel integration, any user can interactively execute and explore their shared data and model resources. Landlab on HydroShare with CyberGIS-Jupyter supports the modeling continuum from fully developed modeling applications to prototyping new science tools, hands-on research demonstrations for training workshops, and classroom applications. Computational geospatial models based on big data and high performance computing can now be more efficiently developed, improved, scaled, and seamlessly reproduced among multidisciplinary users, thereby expanding the active learning curriculum and research opportunities for students in earth surface modeling and informatics.
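
    As a flavor of the kind of experiment the paper describes sharing and executing in notebooks, here is a minimal Landlab sketch (grid size and uplift/erosion parameters are arbitrary illustration values, not taken from the paper):

        from landlab import RasterModelGrid
        from landlab.components import FlowAccumulator, FastscapeEroder

        grid = RasterModelGrid((50, 50), xy_spacing=100.0)   # 50x50 nodes, 100 m spacing
        z = grid.add_zeros("topographic__elevation", at="node")
        z += 0.01 * grid.x_of_node                           # gentle initial slope

        fa = FlowAccumulator(grid)
        sp = FastscapeEroder(grid, K_sp=1e-5)

        for _ in range(100):                                 # uplift plus stream-power erosion
            z[grid.core_nodes] += 0.001 * 1000.0             # uplift rate * dt
            fa.run_one_step()
            sp.run_one_step(dt=1000.0)

        print(z.mean())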

  19. Building a Community Infrastructure for Scalable On-Line Performance Analysis Tools around Open|Speedshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Barton

    2014-06-30

    Peta-scale computing environments pose significant challenges for both system and application developers, and addressing them requires more than simply scaling up existing tera-scale solutions. Performance analysis tools play an important role in gaining the needed understanding, but previous monolithic tools with fixed feature sets have not sufficed. Instead, this project worked on the design, implementation, and evaluation of a general, flexible tool infrastructure supporting the construction of performance tools as "pipelines" of high-quality tool building blocks. These tool building blocks provide common performance tool functionality, and are designed for scalability, lightweight data acquisition and analysis, and interoperability. For this project, we built on Open|SpeedShop, a modular and extensible open source performance analysis tool set. The design and implementation of such a general and reusable infrastructure targeted at petascale systems required us to address several challenging research issues. All components needed to be designed for scale, a task made more difficult by the need to provide general modules. The infrastructure needed to support online data aggregation to cope with the large amounts of performance and debugging data. We needed to be able to map any combination of tool components to each target architecture. And we needed to design interoperable tool APIs and workflows that were concrete enough to support the required functionality, yet provided the necessary flexibility to address a wide range of tools. A major result of this project is the ability to use this scalable infrastructure to quickly create tools that match a machine architecture and a performance problem that needs to be understood. Another benefit is the ability for application engineers to use the highly scalable, interoperable version of Open|SpeedShop, which is reassembled from the tool building blocks into a flexible, multi-user set of tools. This set of tools is targeted at Office of Science Leadership Class computer systems and selected Office of Science application codes. We describe the contributions made by the team at the University of Wisconsin. The project built on the efforts in Open|SpeedShop funded by DOE/NNSA and the DOE/NNSA Tri-Lab community, extended Open|SpeedShop to the Office of Science Leadership Class Computing Facilities, and addressed new challenges found on these cutting-edge systems. Work done under this project at Wisconsin can be divided into two categories: new algorithms and techniques for debugging, and foundational infrastructure work on our Dyninst binary analysis and instrumentation toolkits and MRNet scalability infrastructure.

  20. Tavaxy: integrating Taverna and Galaxy workflows with cloud computing support.

    PubMed

    Abouelhoda, Mohamed; Issa, Shadi Alaa; Ghanem, Moustafa

    2012-05-04

    Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a cloud-enabled web-interface or downloaded and installed to run within the user's local environment. All resources related to Tavaxy are available at http://www.tavaxy.org.
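
    The workflow-pattern idea underlying Tavaxy can be sketched generically (this is not Tavaxy's own API): workflows are composed from reusable patterns such as sequence and parallel split, and a composed block can itself serve as a step in a larger, hierarchical workflow.

        from concurrent.futures import ThreadPoolExecutor

        def sequence(*steps):
            """Sequence pattern: run steps one after another."""
            def run(data):
                for step in steps:
                    data = step(data)
                return data
            return run

        def parallel_split(*branches):
            """Parallel-split pattern: run branches concurrently on the same input."""
            def run(data):
                with ThreadPoolExecutor() as pool:
                    futures = [pool.submit(branch, data) for branch in branches]
                    return [f.result() for f in futures]
            return run

        # A hierarchical workflow: the parallel block is embedded as one step.
        clean = lambda seqs: [s.strip().upper() for s in seqs]
        count_gc = lambda seqs: sum(s.count("G") + s.count("C") for s in seqs)
        count_len = lambda seqs: sum(len(s) for s in seqs)

        workflow = sequence(clean, parallel_split(count_gc, count_len))
        print(workflow(["acgt", "ggcc "]))   # [6, 8]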

  1. IT Infrastructure Components for Biobanking

    PubMed Central

    Prokosch, H.U.; Beck, A.; Ganslandt, T.; Hummel, M.; Kiehntopf, M.; Sax, U.; Ückert, F.; Semler, S.

    2010-01-01

    Objective: Within translational research projects in recent years, large biobanks have been established, mostly supported by homegrown, proprietary software solutions. No general requirements for biobanking IT infrastructures have been published yet. This paper presents an exemplary biobanking IT architecture, a requirements specification for a biorepository management tool and exemplary illustrations of three major types of requirements. Methods: We have pursued a comprehensive literature review for biobanking IT solutions and established an interdisciplinary expert panel for creating the requirements specification. The exemplary illustrations were derived from a requirements analysis within two university hospitals. Results: The requirements specification comprises a catalog with more than 130 detailed requirements grouped into 3 major categories and 20 subcategories. Special attention is given to multitenancy capabilities in order to support the project-specific definition of varying research and biobanking contexts, the definition of workflows to track sample processing, sample transportation and sample storage, and the automated integration of preanalytic handling and storage robots. Conclusion: IT support for biobanking projects can be based on a federated architectural framework comprising primary data sources for clinical annotations, a pseudonymization service, a clinical data warehouse with a flexible and user-friendly query interface and a biorepository management system. Flexibility and scalability of all such components are vital since large medical facilities such as university hospitals will have to support biobanking for varying monocentric and multicentric research scenarios and multiple medical clients. PMID:23616851

  2. IT Infrastructure Components for Biobanking.

    PubMed

    Prokosch, H U; Beck, A; Ganslandt, T; Hummel, M; Kiehntopf, M; Sax, U; Uckert, F; Semler, S

    2010-01-01

    Within translational research projects in recent years, large biobanks have been established, mostly supported by homegrown, proprietary software solutions. No general requirements for biobanking IT infrastructures have been published yet. This paper presents an exemplary biobanking IT architecture, a requirements specification for a biorepository management tool and exemplary illustrations of three major types of requirements. We have pursued a comprehensive literature review for biobanking IT solutions and established an interdisciplinary expert panel for creating the requirements specification. The exemplary illustrations were derived from a requirements analysis within two university hospitals. The requirements specification comprises a catalog with more than 130 detailed requirements grouped into 3 major categories and 20 subcategories. Special attention is given to multitenancy capabilities in order to support the project-specific definition of varying research and biobanking contexts, the definition of workflows to track sample processing, sample transportation and sample storage, and the automated integration of preanalytic handling and storage robots. IT support for biobanking projects can be based on a federated architectural framework comprising primary data sources for clinical annotations, a pseudonymization service, a clinical data warehouse with a flexible and user-friendly query interface and a biorepository management system. Flexibility and scalability of all such components are vital since large medical facilities such as university hospitals will have to support biobanking for varying monocentric and multicentric research scenarios and multiple medical clients.
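
    One component of the federated framework above is a pseudonymization service. A common approach is keyed hashing, sketched below as a generic illustration rather than the authors' implementation; the key and identifier format are hypothetical.

        import hashlib
        import hmac

        SECRET_KEY = b"replace-with-a-managed-secret"   # held only by the service

        def pseudonymize(patient_id):
            """Derive a stable, non-reversible pseudonym from a patient identifier."""
            digest = hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256)
            return digest.hexdigest()[:16]

        # The same input always yields the same pseudonym, enabling record linkage
        # across sources without exposing the clinical identifier.
        print(pseudonymize("PAT-000123"))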

  3. Mapping the Human Planet: Integrating Settlement, Infrastructure, and Population Data to Support Sustainable Development, Climate, and Disaster Data Needs

    NASA Astrophysics Data System (ADS)

    Chen, R. S.; de Sherbinin, A. M.; Yetman, G.; Downs, R. R.

    2017-12-01

    A central issue in international efforts to address climate change, large-scale disaster risk, and overall sustainable development is the exposure of human settlements and population to changing climate patterns and a range of geological, climatological, technological, and other hazards. The present and future location of human activities is also important in mitigation and adaptation to climate change, and to ensuring that we "leave no one behind" in achieving the Sustainable Development Goals adopted by the international community in September 2015. The extent and quality of built infrastructure are key factors in the mortality, morbidity, and economic impacts of disasters, and are simultaneously essential to sustainable development. Earth observations have great potential to improve the coverage, consistency, timeliness, and richness of data on settlements, infrastructure, and population, in ways that complement existing and emerging forms of socioeconomic data collection such as censuses, surveys, and cell phone and Internet traffic. Night-time lights from the Suomi-NPP satellite may be able to provide near real-time data on occupancy and economic activity. New "big data" capabilities make it possible to rapidly process high-resolution (50-cm) imagery to detect structures and changes in structures, especially in rural areas where other data are limited. A key challenge is to ensure that these types of data can be translated into forms useful in a range of applications and for diverse user communities, including national statistical offices, local government planners, development and humanitarian organizations, community groups, and the private sector. We report here on efforts, in coordination with the GEO Human Planet Initiative, to develop new data on settlements, infrastructure, and population, together with open data services and tools, to support disaster risk assessment, climate vulnerability analysis, and sustainable development decision making.

  4. ISTIMES project: status and outcomes

    NASA Astrophysics Data System (ADS)

    Cuomo, V.; Proto, M.; Soldovieri, F.

    2012-04-01

    ISTIMES is a project approved in the Seventh Framework Programme of the European Union under the Joint Call FP7-ICT-SEC-2007-1. It has a three-year duration and will be completed by June 2012. According to the aims of the proposal, the ISTIMES project has designed, assessed and developed a prototypical modular and scalable ICT-based system, exploiting distributed and local sensors, for non-destructive electromagnetic monitoring; the specific application field was the reliability and safety of critical transport infrastructures, although the modularity of the ISTIMES approach has permitted extending it successfully to other critical infrastructures, such as dams. The continuous and fruitful involvement of end users (such as the Italian Civil Protection) made it possible to develop applications focused on user needs. ISTIMES couples routine monitoring of infrastructures with high situational awareness during crisis management, providing updated and detailed real and near-real-time information about infrastructure status to improve decision support for emergency and disaster stakeholders. The system exploits an open network architecture that can accommodate a wide range of heterogeneous sensors, static and mobile, and can be easily scaled up to allow the integration of additional sensors and interfaces with other networks. It relies on state-of-the-art electromagnetic sensors, enabling a network of terrestrial sensors supported by specific satellite and airborne measurements. The integration of electromagnetic technologies with new ICT information and telecommunications systems enables remotely controlled monitoring and surveillance at different temporal and spatial scales, providing indexes and images of the critical transport infrastructures. The project has exploited, assessed and improved many different non-invasive technologies based on electromagnetic sensing, such as: optic fiber sensors; Synthetic Aperture Radar (SAR) satellite platforms; hyperspectral spectroscopy; infrared thermography; ground penetrating radar; low-frequency geophysical techniques; and ground-based SAR and optical cameras for the assessment of the dynamic behaviour of the infrastructure. A great effort has been devoted to transferring these novel, state-of-the-art technologies from the laboratory to actual field applications by adapting and improving them and developing prototypes for the specific application domain of monitoring transport and critical infrastructures. Sensor synergy, data cross-correlation and novel concepts of information fusion have made it possible to carry out multi-method, multi-resolution and multi-scale electromagnetic detection and monitoring of the infrastructure, including surface and subsurface aspects. The project has developed an ICT architecture based on web sensors and service-oriented technologies that comply with specific end-user requirements, including interoperability, economic convenience, exportability, efficiency and reliability. The efforts have focused mainly on the creation of web-based interfaces able to control "non-standard" sensors, such as the ones proposed in the project, and on the standardization necessary for full interoperability and modularity of the monitoring system. In addition, the system is able to provide a more easily accessible and transparent scheme for use by different end users and to integrate the monitoring results and images with other kinds of information, such as GIS layers and historical datasets relating to the site.
The ISTIMES system has been evaluated at two test sites and two test beds. At the two test sites of the Montagnole rock-fall station (Chambery, France) and the Hydrogeosite Laboratory (Potenza, Italy), attention was paid to a thorough analysis of the performance of the in situ sensing techniques, also investigating, with good outcomes, the possibility of correlating and obtaining synergy from the different sensors. In particular, it is worth noting that the experiment realized at Montagnole is unique, at least at the European level, regarding both the high mechanical impact on real-scale elements of a civil engineering structure and the exploitation of all sensor techniques set up in a cooperative way. The effectiveness of the overall monitoring system has been assessed by experiments at real test beds: the Sihlhochstrasse bridge, a 1.5 km bridge on one of the main entrance roads to the city of Zurich (Switzerland), and the Varco Izzo railway tunnel and Musmeci motorway bridge located in the area of the city of Potenza in the Basilicata region (Italy), an area affected by high seismic risk. In particular, for the Musmeci bridge, the main entrance road to Potenza and a masterpiece of architectural/civil engineering realized by Sergio Musmeci in the 1960s, all the sensing technologies involved in the project have been exploited to perform monitoring and diagnostics; the Musmeci bridge results have also been correlated and tested by comparison with the sensors most commonly used by civil engineers for this kind of infrastructure (Proto et al., 2010). Acknowledgment: The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n. 225663.

  5. Space Images for NASA JPL Android Version

    NASA Technical Reports Server (NTRS)

    Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice

    2013-01-01

    This software addresses the demand for easily accessible NASA JPL images and videos by providing a user-friendly and simple graphical user interface that can be run via the Android platform from any location where an Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. This system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user rating. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input. This allows for countless combinations of images returned. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking as user's favorites, and image metadata searchable for instant results.

  6. Evaluation of the Responsiveness Index of the Family Health Strategy in rural areas.

    PubMed

    Shimizu, Helena Eri; Trindade, Josélia de Souza; Mesquita, Monique Santos de; Ramos, Maíra Catharina

    2018-01-01

    Objective: To evaluate the responsiveness of Family Health Strategy units in the rural area of the Federal District registered in the National Program for Improvement of Access and Quality of Basic Care. Method: A descriptive study, which used a questionnaire to evaluate the following dimensions: a) respect for people: dignity, confidentiality of information, autonomy, communication; b) customer orientation: facilities, choice of professional, agile service and social support. Results: The users' assessment of responsiveness was 0.755. The dimensions related to respect for people received an index of 0.814 and customer orientation 0.599. Conclusion: Care is given that shows respect for human dignity, but progress needs to be made in ensuring confidentiality and the autonomy of users. Infrastructure is poor and care is not agile, highlighting the need for greater investments in rural areas.

  7. Infrastructure sensing.

    PubMed

    Soga, Kenichi; Schooling, Jennifer

    2016-08-06

    Design, construction, maintenance and upgrading of civil engineering infrastructure require fresh thinking to minimize use of materials, energy and labour. This can only be achieved by understanding the performance of the infrastructure, both during its construction and throughout its design life, through innovative monitoring. Advances in sensor systems offer intriguing possibilities to radically alter methods of condition assessment and monitoring of infrastructure. In this paper, it is hypothesized that the future of infrastructure relies on smarter information; the rich information obtained from embedded sensors within infrastructure will act as a catalyst for new design, construction, operation and maintenance processes for integrated infrastructure systems linked directly with user behaviour patterns. Some examples of emerging sensor technologies for infrastructure sensing are given. They include distributed fibre-optics sensors, computer vision, wireless sensor networks, low-power micro-electromechanical systems, energy harvesting and citizens as sensors.

  8. Infrastructure sensing

    PubMed Central

    Soga, Kenichi; Schooling, Jennifer

    2016-01-01

    Design, construction, maintenance and upgrading of civil engineering infrastructure require fresh thinking to minimize use of materials, energy and labour. This can only be achieved by understanding the performance of the infrastructure, both during its construction and throughout its design life, through innovative monitoring. Advances in sensor systems offer intriguing possibilities to radically alter methods of condition assessment and monitoring of infrastructure. In this paper, it is hypothesized that the future of infrastructure relies on smarter information; the rich information obtained from embedded sensors within infrastructure will act as a catalyst for new design, construction, operation and maintenance processes for integrated infrastructure systems linked directly with user behaviour patterns. Some examples of emerging sensor technologies for infrastructure sensing are given. They include distributed fibre-optics sensors, computer vision, wireless sensor networks, low-power micro-electromechanical systems, energy harvesting and citizens as sensors. PMID:27499845

  9. A national-scale authentication infrastructure.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, R.; Engert, D.; Foster, I.

    2000-12-01

    Today, individuals and institutions in science and industry are increasingly forming virtual organizations to pool resources and tackle a common goal. Participants in virtual organizations commonly need to share resources such as data archives, computer cycles, and networks - resources usually available only with restrictions based on the requested resource's nature and the user's identity. Thus, any sharing mechanism must have the ability to authenticate the user's identity and determine if the user is authorized to request the resource. Virtual organizations tend to be fluid, however, so authentication mechanisms must be flexible and lightweight, allowing administrators to quickly establish and change resource-sharing arrangements. However, because virtual organizations complement rather than replace existing institutions, sharing mechanisms cannot change local policies and must allow individual institutions to maintain control over their own resources. Our group has created and deployed an authentication and authorization infrastructure that meets these requirements: the Grid Security Infrastructure. GSI offers secure single sign-ons and preserves site control over access policies and local security. It provides its own versions of common applications, such as FTP and remote login, and a programming interface for creating secure applications.
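
    GSI rests on X.509 certificate-based mutual authentication. The sketch below illustrates that underlying mechanism with Python's standard ssl module (a generic illustration, not the GSI toolkit itself; the hostnames and file names are placeholders):

        import socket
        import ssl

        # Trust the virtual organization's CA bundle; verify the server's identity.
        context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                             cafile="ca-bundle.pem")
        # Present the user's credential so the server can authenticate us too.
        context.load_cert_chain(certfile="usercert.pem", keyfile="userkey.pem")

        with socket.create_connection(("gridftp.example.org", 2811)) as sock:
            with context.wrap_socket(sock, server_hostname="gridftp.example.org") as tls:
                print("authenticated server:", tls.getpeercert()["subject"])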

  10. Boosting a Low-Cost Smart Home Environment with Usage and Access Control Rules.

    PubMed

    Barsocchi, Paolo; Calabrò, Antonello; Ferro, Erina; Gennaro, Claudio; Marchetti, Eda; Vairo, Claudio

    2018-06-08

    Smart Home has gained widespread attention due to its flexible integration into everyday life. Pervasive sensing technologies are used to recognize and track the activities that people perform during the day, and to allow communication and cooperation of physical objects. Usually, the available infrastructures and applications leveraging these smart environments have a critical impact on the overall cost of Smart Home construction, preferably need to be installed during home construction, and are still not user-centric. In this paper, we propose a low-cost, easy-to-install, user-friendly, dynamic and flexible infrastructure able to perform runtime resource management by decoupling the different levels of control rules. The basic idea relies on the use of off-the-shelf sensors and technologies to guarantee the regular exchange of critical information, without requiring the user to develop accurate models for managing resources or regulating their access and usage. This simplifies continuous updating and improvement, reduces maintenance effort and improves residents' living conditions and security. A first validation of the proposed infrastructure on a case study is also presented.
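
    The decoupling of control-rule levels can be illustrated with a small, generic sketch: access and usage decisions are data-driven rules evaluated over who asks, for which resource, and for what action, rather than logic hard-wired into each device. The roles, resources, and actions below are hypothetical.

        from dataclasses import dataclass

        @dataclass
        class Rule:
            role: str        # who the rule applies to
            resource: str    # which device or resource
            action: str      # e.g. "read", "actuate"
            allow: bool

        RULES = [
            Rule("resident", "thermostat", "actuate", True),
            Rule("guest", "thermostat", "actuate", False),
            Rule("guest", "thermostat", "read", True),
        ]

        def is_allowed(role, resource, action):
            for rule in RULES:   # first matching rule wins; default deny
                if (rule.role, rule.resource, rule.action) == (role, resource, action):
                    return rule.allow
            return False

        print(is_allowed("guest", "thermostat", "actuate"))   # False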

  11. Training NOAA Staff on Effective Communication Methods with Local Climate Users

    NASA Astrophysics Data System (ADS)

    Timofeyeva, M. M.; Mayes, B.

    2011-12-01

    Since 2002, the NOAA National Weather Service (NWS) Climate Services Division (CSD) has offered training opportunities to NWS staff. As a result of the eight-year-long development of the training program, NWS offers three training courses and about 25 online distance learning modules covering various climate topics: climate data and observations, climate variability and change, and NWS national and local climate products, their tools, skill, and interpretation. Leveraging climate information and expertise available at all NOAA line offices and partners allows delivery of the most advanced knowledge and is a very critical aspect of the training program. Among the NWS's challenges in providing local climate services is communicating highly technical scientific information effectively to local users. Addressing this challenge requires a well-trained, climate-literate workforce at the local level, capable of communicating NOAA climate products and services as well as providing climate-sensitive decision support. Trained NWS climate service personnel use proactive and reactive approaches and professional education methods in communicating climate variability and change information to local users. Both scientifically unimpaired messages and amiable communication techniques, such as a storytelling approach, are important in developing an engaged dialog between climate service providers and users. Several pilot projects NWS CSD conducted in the past year applied the NWS climate services training program to training events for NOAA technical user groups. The technical user groups included natural resources managers, engineers, hydrologists, and planners for transportation infrastructure. Training professional user groups required tailoring the instruction to the potential applications of each group of users. Training technical users identified the following critical issues: (1) knowledge of the target audience's expectations, initial knowledge status, and potential use of climate information; (2) leveraging partnerships with climate services providers; and (3) applying the 3H training approach, where the first H stands for Head (trusted science), the second H stands for Heart (make it easy), and the third H for Hand (support with applications).

  12. Planning: supporting and optimizing clinical guidelines execution.

    PubMed

    Anselma, Luca; Montani, Stefania

    2008-01-01

    A crucial feature of computerized clinical guidelines (CGs) lies in the fact that they may be used not only as conventional documents (as if they were just free text) describing general procedures that users have to follow. In fact, thanks to a description of their actions and control flow in some semiformal representation language, CGs can also take advantage of Computer Science methods and Information Technology infrastructures and techniques to become executable documents, in the sense that they may support clinical decision making and the execution of clinical procedures. In order to reach this goal, some advanced planning techniques, originally developed within the Artificial Intelligence (AI) community, may be (at least partially) resorted to, after a proper adaptation to the specific needs of CGs has been carried out.

  13. Web-Based Integrated Research Environment for Aerodynamic Analyses and Design

    NASA Astrophysics Data System (ADS)

    Ahn, Jae Wan; Kim, Jin-Ho; Kim, Chongam; Cho, Jung-Hyun; Hur, Cinyoung; Kim, Yoonhee; Kang, Sang-Hyun; Kim, Byungsoo; Moon, Jong Bae; Cho, Kum Won

    e-AIRS [1,2], an abbreviation of 'e-Science Aerospace Integrated Research System,' is a virtual organization designed to support aerodynamic flow analyses in aerospace engineering using the e-Science environment. As the first step toward a virtual aerospace engineering organization, e-AIRS intends to give full support to the aerodynamic research process. Currently, e-AIRS can handle both computational and experimental aerodynamic research on the e-Science infrastructure. In detail, users can conduct a full CFD (Computational Fluid Dynamics) research process, request wind tunnel experiments, perform comparative analysis between computational predictions and experimental measurements, and finally, collaborate with other researchers using the web portal. The present paper describes those services and the internal architecture of the e-AIRS system.

  14. Energy Recovery Hydropower: Prospects for Off-Setting Electricity Costs for Agricultural, Municipal, and Industrial Water Providers and Users; July 2017 - September 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levine, Aaron L.; Curtis, Taylor L.; Johnson, Kurt

    Energy recovery hydropower is one of the most cost-effective types of new hydropower development because it is constructed using existing infrastructure, and it is typically able to complete Federal Energy Regulatory Commission (FERC) review in 60 days. Recent changes in federal and state policy have supported energy recovery hydropower. In addition, some states have developed programs and policies to support energy recovery hydropower, including resource assessments, regulatory streamlining initiatives, and grant and loan programs to reduce project development costs. This report examines current federal and state policy drivers for energy recovery hydropower, reviews market trends, and looks ahead at future federal resource assessments and hydropower reform legislation.

  15. CrossTalk: The Journal of Defense Software Engineering. Volume 27, Number 5, September/October 2014

    DTIC Science & Technology

    2014-10-01

    CMSP Infrastructure. 24. CMSP Infrastructure sends message via broadcast to mobile devices in the designated area(s). 25. Mobile device users... infrastructure could potentially threaten our way of life. Given the swiftness of technological change, it is excusable that organizations might... system, which is diagramed in Fig. 1, would expand these options to mobile devices. FEMA established the message structure and the approvals needed to

  16. Cultured Construction: Global Evidence of the Impact of National Values on Piped-to-Premises Water Infrastructure Development.

    PubMed

    Kaminsky, Jessica A

    2016-07-19

    In 2016, the global community undertook the Sustainable Development Goals. One of these goals seeks to achieve universal and equitable access to safe and affordable drinking water for all people by the year 2030. In support of this undertaking, this paper seeks to discover the cultural work done by piped water infrastructure across 33 nations with developed and developing economies that have experienced change in the percentage of population served by piped-to-premises water infrastructure at the national level of analysis. To do so, I regressed the 1990-2012 change in piped-to-premises water infrastructure coverage against Hofstede's cultural dimensions, controlling for per capita GDP, the 1990 baseline level of coverage, percent urban population, overall 1990-2012 change in improved sanitation (all technologies), and per capita freshwater resources. Separate analyses were carried out for the urban, rural, and aggregate national contexts. Hofstede's dimensions provide a measure of cross-cultural difference; high or low scores are not in any way intended to represent better or worse but rather serve as a quantitative way to compare aggregate preferences for ways of being and doing. High scores in the cultural dimensions of Power Distance, Individualism-Collectivism, and Uncertainty Avoidance explain increased access to piped-to-premises water infrastructure in the rural context. Higher Power Distance and Uncertainty Avoidance scores are also statistically significant for increased coverage in the urban and national aggregate contexts. These results indicate that, as presently conceived, piped-to-premises water infrastructure fits best with spatial contexts that prefer hierarchy and centralized control. Furthermore, water infrastructure is understood to reduce uncertainty regarding the provision of individually valued benefits. The results of this analysis identify global trends that enable engineers and policy makers to design and manage more culturally appropriate and socially sustainable water infrastructure by better fitting technologies to user preferences.
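
    The regression design described above can be sketched in Python with statsmodels; the column names are hypothetical stand-ins for the paper's variables, and the data file is a placeholder.

        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("water_coverage.csv")   # hypothetical: one row per country

        model = smf.ols(
            "coverage_change_1990_2012 ~ power_distance + individualism"
            " + uncertainty_avoidance + gdp_per_capita + baseline_coverage_1990"
            " + pct_urban + sanitation_change + freshwater_per_capita",
            data=df,
        ).fit()
        print(model.summary())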

  17. Informatics infrastructure for syndrome surveillance, decision support, reporting, and modeling of critical illness.

    PubMed

    Herasevich, Vitaly; Pickering, Brian W; Dong, Yue; Peters, Steve G; Gajic, Ognjen

    2010-03-01

    To develop and validate an informatics infrastructure for syndrome surveillance, decision support, reporting, and modeling of critical illness. Using open-schema data feeds imported from electronic medical records (EMRs), we developed a near-real-time relational database (Multidisciplinary Epidemiology and Translational Research in Intensive Care Data Mart). Imported data domains included physiologic monitoring, medication orders, laboratory and radiologic investigations, and physician and nursing notes. Open database connectivity supported the use of Boolean combinations of data that allowed authorized users to develop syndrome surveillance, decision support, and reporting (data "sniffers") routines. Random samples of database entries in each category were validated against corresponding independent manual reviews. The Multidisciplinary Epidemiology and Translational Research in Intensive Care Data Mart accommodates, on average, 15,000 admissions to the intensive care unit (ICU) per year and 200,000 vital records per day. Agreement between database entries and manual EMR audits was high for sex, mortality, and use of mechanical ventilation (kappa, 1.0 for all) and for age and laboratory and monitored data (Bland-Altman mean difference +/- SD, 1(0) for all). Agreement was lower for interpreted or calculated variables, such as specific syndrome diagnoses (kappa, 0.5 for acute lung injury), duration of ICU stay (mean difference +/- SD, 0.43+/-0.2), or duration of mechanical ventilation (mean difference +/- SD, 0.2+/-0.9). Extraction of essential ICU data from a hospital EMR into an open, integrative database facilitates process control, reporting, syndrome surveillance, decision support, and outcome research in the ICU.
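
    A data "sniffer" of the kind described is essentially a Boolean combination of near-real-time fields. The toy rule below flags a record for acute-lung-injury review; the thresholds and field names are illustrative, not the validated Data Mart rules.

        def ali_sniffer(record):
            """Flag possible acute lung injury from two Boolean conditions."""
            hypoxemia = record["pao2"] / record["fio2"] < 300   # PaO2/FiO2 ratio
            bilateral_infiltrates = record["cxr_bilateral_infiltrates"]
            return hypoxemia and bilateral_infiltrates

        patient = {"pao2": 70.0, "fio2": 0.5, "cxr_bilateral_infiltrates": True}
        print(ali_sniffer(patient))   # True -> surface for clinician review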

  18. Digital Libraries: Situating Use in Changing Information Infrastructure.

    ERIC Educational Resources Information Center

    Bishop, Ann Peterson; Neumann, Laura J.; Star, Susan Leigh; Merkel, Cecelia; Ignacio, Emily; Sandusky, Robert J.

    2000-01-01

    Reviews empirical studies about how digital libraries evolve for use in scientific and technical work based on the Digital Libraries Initiative (DLI) at the University of Illinois. Discusses how users meet infrastructure and document disaggregation; describes use of the DLI testbed of full text journal articles; and explains research methodology.…

  19. The EVER-EST portal as support for the Sea Monitoring Virtual Research Community, through the sharing of resources, enabling dynamic collaboration and promoting community engagement

    NASA Astrophysics Data System (ADS)

    Foglini, Federica; Grande, Valentina; De Leo, Francesco; Mantovani, Simone; Ferraresi, Sergio

    2017-04-01

    EVER-EST offers a framework based on advanced services delivered both at the e-infrastructure and domain-specific level, with the objective of supporting each phase of the Earth Science research and information lifecycle. It provides innovative e-research services to Earth Science user communities for communication, cross-validation and the sharing of knowledge and science outputs. The project follows a user-centric approach: real use cases taken from pre-selected Virtual Research Communities (VRCs) covering different Earth Science research scenarios drive the implementation of the Virtual Research Environment (VRE) services and capabilities. The Sea Monitoring community is involved in the evaluation of the EVER-EST infrastructure. The community of potential users is wide and heterogeneous, including both multi-disciplinary scientists and national/international agencies and authorities (e.g. MPA directors, technicians from regional agencies like ARPA in Italy, and technicians working for the Ministry of the Environment) concerned with adopting better ways of measuring the quality of the environment. The scientific community has the main role of assessing the best criteria and indicators for defining Good Environmental Status (GES) in their own subregions, and of implementing methods, protocols and tools for monitoring the GES descriptors. According to the Marine Strategy Framework Directive (MSFD), the environmental status of marine waters is defined by 11 descriptors, with a proposed set of 29 associated criteria and 56 different indicators. The objective of the Sea Monitoring VRC is to provide useful and applicable contributions to the evaluation of the descriptors D1 Biodiversity, D2 Non-indigenous species and D6 Seafloor integrity (http://ec.europa.eu/environment/marine/good-environmental-status/index_en.htm). The main challenges for the community members are: (1) discovery of existing data and products distributed among different infrastructures; (2) sharing methodologies for GES evaluation and monitoring; (3) working on the same workflows and data; and (4) adopting shared, powerful tools for data processing (e.g. software and servers). The Sea Monitoring portal provides the VRC users with tools and services aimed at enhancing their ability to interoperate and share knowledge, experience and methods for GES assessment and monitoring, such as: digital information services for data management, exploitation and preservation (accessibility of heterogeneous data sources, including associated documentation); e-collaboration services to communicate and share knowledge, ideas, protocols and workflows; e-learning services to facilitate the use of common workflows for assessing GES indicators; and e-research services for workflow management, validation and verification, as well as visualization and interactive services. The current study is co-financed by the European Union's Horizon 2020 research and innovation programme under the EVER-EST project (Grant Agreement No. 674907).

  20. Tracking the NGS revolution: managing life science research on shared high-performance computing clusters.

    PubMed

    Dahlö, Martin; Scofield, Douglas G; Schaal, Wesley; Spjuth, Ola

    2018-05-01

    Next-generation sequencing (NGS) has transformed the life sciences, and many research groups are newly dependent upon computer clusters to store and analyze large datasets. This creates challenges for e-infrastructures accustomed to hosting computationally mature research in other sciences. Using data gathered from our own clusters at UPPMAX computing center at Uppsala University, Sweden, where core hour usage of ∼800 NGS and ∼200 non-NGS projects is now similar, we compare and contrast the growth, administrative burden, and cluster usage of NGS projects with projects from other sciences. The number of NGS projects has grown rapidly since 2010, with growth driven by entry of new research groups. Storage used by NGS projects has grown more rapidly since 2013 and is now limited by disk capacity. NGS users submit nearly twice as many support tickets per user, and 11 more tools are installed each month for NGS projects than for non-NGS projects. We developed usage and efficiency metrics and show that computing jobs for NGS projects use more RAM than non-NGS projects, are more variable in core usage, and rarely span multiple nodes. NGS jobs use booked resources less efficiently for a variety of reasons. Active monitoring can improve this somewhat. Hosting NGS projects imposes a large administrative burden at UPPMAX due to large numbers of inexperienced users and diverse and rapidly evolving research areas. We provide a set of recommendations for e-infrastructures that host NGS research projects. We provide anonymized versions of our storage, job, and efficiency databases.
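    As a rough illustration (not the authors' code) of the booked-versus-used efficiency metrics described above, such a metric might be computed from job accounting records; the record field names here are hypothetical:

      # Hypothetical accounting records: cores booked, average cores actually
      # used, and walltime. Efficiency = used core-hours / booked core-hours.

      def job_efficiency(job):
          """Fraction of booked core-hours actually used by one job."""
          booked = job["cores_booked"] * job["walltime_h"]
          used = job["cores_used_avg"] * job["walltime_h"]
          return used / booked if booked else 0.0

      def project_efficiency(jobs):
          """Core-hour-weighted mean efficiency across a project's jobs."""
          booked = sum(j["cores_booked"] * j["walltime_h"] for j in jobs)
          used = sum(j["cores_used_avg"] * j["walltime_h"] for j in jobs)
          return used / booked if booked else 0.0

      ngs_jobs = [
          {"cores_booked": 16, "cores_used_avg": 4.0, "walltime_h": 12.0},
          {"cores_booked": 8,  "cores_used_avg": 7.5, "walltime_h": 2.0},
      ]
      print(f"project efficiency: {project_efficiency(ngs_jobs):.2f}")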

  1. Perception of Urban Environmental Risks and the Effects of Urban Green Infrastructures (UGIs) on Human Well-being in Four Public Green Spaces of Guangzhou, China.

    PubMed

    Duan, Junya; Wang, Yafei; Fan, Chen; Xia, Beicheng; de Groot, Rudolf

    2018-05-28

    Cities face many challenging environmental problems that affect human well-being. Environmental risks can be reduced by Urban Green Infrastructures (UGIs). The effects of UGIs on the urban environment have been widely studied, but less attention has been given to the public perception of these effects. This paper presents the results of a study in Guangzhou, China, on UGI users' perceptions of these effects and their relationship with sociodemographic variables. A questionnaire survey was conducted in four public green spaces. Descriptive statistics, a binary logistic regression model and cross-tabulation analysis were applied to the data from 396 valid questionnaires. The results show that UGI users were more concerned about poor air quality and high temperature than about flooding events. Their awareness of environmental risks was partly in accordance with official records. Regarding the perception of the impacts of environmental risks on human well-being, elderly and female respondents with higher education levels were the most sensitive to these impacts. The respondents' perceptions of these impacts differed among the different green spaces. The effects of UGIs were well perceived and directly observed by the UGI users, but were not significantly influenced by most sociodemographic variables. Moreover, tourists had a lower perception of the impacts of environmental risks and the effects of UGIs than residents did. This study provides strong support for UGIs as an effective tool to mitigate environmental risks. Local governments should consider the role of UGIs in environmental risk mitigation and human well-being with regard to urban planning and policy making.

  2. Tracking the NGS revolution: managing life science research on shared high-performance computing clusters

    PubMed Central

    2018-01-01

    Abstract Background Next-generation sequencing (NGS) has transformed the life sciences, and many research groups are newly dependent upon computer clusters to store and analyze large datasets. This creates challenges for e-infrastructures accustomed to hosting computationally mature research in other sciences. Using data gathered from our own clusters at UPPMAX computing center at Uppsala University, Sweden, where core hour usage of ∼800 NGS and ∼200 non-NGS projects is now similar, we compare and contrast the growth, administrative burden, and cluster usage of NGS projects with projects from other sciences. Results The number of NGS projects has grown rapidly since 2010, with growth driven by entry of new research groups. Storage used by NGS projects has grown more rapidly since 2013 and is now limited by disk capacity. NGS users submit nearly twice as many support tickets per user, and 11 more tools are installed each month for NGS projects than for non-NGS projects. We developed usage and efficiency metrics and show that computing jobs for NGS projects use more RAM than non-NGS projects, are more variable in core usage, and rarely span multiple nodes. NGS jobs use booked resources less efficiently for a variety of reasons. Active monitoring can improve this somewhat. Conclusions Hosting NGS projects imposes a large administrative burden at UPPMAX due to large numbers of inexperienced users and diverse and rapidly evolving research areas. We provide a set of recommendations for e-infrastructures that host NGS research projects. We provide anonymized versions of our storage, job, and efficiency databases. PMID:29659792

  3. Project Integration Architecture: Implementation of the CORBA-Served Application Infrastructure

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    The Project Integration Architecture (PIA) has been demonstrated in a single-machine C++ implementation prototype. The architecture is in the process of being migrated to a Common Object Request Broker Architecture (CORBA) implementation. The migration of the Foundation Layer interfaces is fundamentally complete. The implementation of the Application Layer infrastructure for that migration is reported. The Application Layer provides for distributed user identification and authentication, per-user/per-instance access controls, server administration, the formation of mutually-trusting application servers, a server locality protocol, and an ability to search for interface implementations through such trusted server networks.
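    A hypothetical sketch of per-user/per-instance access control of the kind the Application Layer provides; the class and method names below are assumptions for illustration, not the actual PIA/CORBA interfaces:

      # Each protected object instance carries its own ACL mapping users to
      # rights, so permissions can differ per user AND per instance.

      class AccessControlledObject:
          def __init__(self, instance_id):
              self.instance_id = instance_id
              self._acl = {}                     # user -> set of rights

          def grant(self, user, right):
              self._acl.setdefault(user, set()).add(right)

          def check(self, user, right):
              if right not in self._acl.get(user, set()):
                  raise PermissionError(
                      f"{user} lacks '{right}' on instance {self.instance_id}")

      obj = AccessControlledObject("wing-model-42")
      obj.grant("alice", "read")
      obj.check("alice", "read")                 # passes silently
      # obj.check("bob", "write")                # would raise PermissionError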

  4. Mobile Support For Logistics

    DTIC Science & Technology

    2016-03-01

    Infrastructure to Support Mobile Devices (Takai, 2012, p. 2). The objectives needed in order to meet this goal are to: evolve spectrum management, expand infrastructure to support wireless capabilities, and establish a mobile device security architecture (Takai, 2012, p. 2). By expanding infrastructure to... often used on Mobile Ad-Hoc Networks (MANETs). MANETs are infrastructure-less networks that include, but are not limited to, mobile devices. These...

  5. Small scale green infrastructure design to meet different urban hydrological criteria.

    PubMed

    Jia, Z; Tang, S; Luo, W; Li, S; Zhou, M

    2016-04-15

    As small scale green infrastructures, rain gardens have been widely advocated for urban stormwater management in the contemporary low impact development (LID) era. This paper presents a simple method that consists of hydrological models and the matching plots of nomographs to provide an informative and practical tool for rain garden sizing and hydrological evaluation. The proposed method considers design storms, infiltration rates and the runoff contribution area ratio of the rain garden, allowing users to size a rain garden for a specific site with hydrological reference and to predict overflow of the rain garden under different storms. The nomographs provide a visual presentation of the sensitivity of the different design parameters. Subsequent application of the proposed method to a case study conducted in a sub-humid region in China showed that the method accurately predicted the design storms for the existing rain garden, and that the predicted overflows under large storm events were within 13-50% of the measured volumes. The results suggest that the nomograph approach is a practical tool for quick selection or assessment of design options that incorporate key hydrological parameters of rain gardens or other infiltration-type green infrastructure. The graphic approach as displayed by the nomographs allows urban planners to demonstrate the hydrological effect of small scale green infrastructure and to gain more support for promoting low impact development. Copyright © 2016 Elsevier Ltd. All rights reserved.
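    For orientation, a generic first-pass sizing calculation of the kind such nomographs encode; this is a simple water-balance sketch under assumed parameter values, not the authors' model:

      # Size a rain garden so it stores plus infiltrates the runoff from a
      # design storm. All parameter values below are hypothetical.

      def rain_garden_area(P_mm, A_contrib_m2, C, f_mm_h, t_h, d_mm):
          """Garden area (m2) needed to capture a design storm without overflow.

          P_mm         design storm depth
          A_contrib_m2 contributing (drainage) area
          C            runoff coefficient of the contributing surface
          f_mm_h       soil infiltration rate
          t_h          storm duration
          d_mm         ponding (storage) depth of the garden
          """
          runoff_m3 = C * (P_mm / 1000.0) * A_contrib_m2
          capacity_m = (d_mm + f_mm_h * t_h) / 1000.0   # per m2 of garden
          return runoff_m3 / capacity_m

      area = rain_garden_area(P_mm=40, A_contrib_m2=500, C=0.9,
                              f_mm_h=15, t_h=2, d_mm=150)
      print(f"required garden area: {area:.1f} m2")     # 100 m2, a 1:5 area ratio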

  6. Multiphysics Application Coupling Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Michael T.

    2013-12-02

    This particular consortium implementation of the software integration infrastructure will, in large part, refactor portions of the Rocstar multiphysics infrastructure. Development of this infrastructure originated at the University of Illinois DOE ASCI Center for Simulation of Advanced Rockets (CSAR) to support the center's massively parallel multiphysics simulation application, Rocstar, and has continued at IllinoisRocstar, a small company formed near the end of the university-based program. IllinoisRocstar is now licensing these new developments as free, open-source software, in the hope of improving its own and others' access to infrastructure that can be readily utilized in developing coupled or composite software systems, with particular attention to more rapid production and utilization of multiphysics applications in the HPC environment. There are two major pieces to the consortium implementation: the Application Component Toolkit (ACT) and the Multiphysics Application Coupling Toolkit (MPACT). The current development focus is the ACT, which is (and will be) the substrate for MPACT. The ACT itself is built up from the components described in the technical approach. In particular, the ACT has the following major components: 1. The Component Object Manager (COM): the COM package provides encapsulation of user applications and their data; COM also provides the inter-component function call mechanism. 2. The System Integration Manager (SIM): the SIM package provides constructs and mechanisms for orchestrating composite systems of multiply integrated pieces.
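    A hypothetical sketch of the encapsulation and inter-component call mechanism the COM description refers to; the names and interfaces below are illustrative, not the IllinoisRocstar API:

      # Components register named functions with a manager; other components
      # invoke them by qualified name without linking to each other directly.

      class ComponentManager:
          def __init__(self):
              self._functions = {}           # "Component.function" -> callable

          def register(self, component, name, func):
              self._functions[f"{component}.{name}"] = func

          def call(self, qualified_name, *args, **kwargs):
              return self._functions[qualified_name](*args, **kwargs)

      com = ComponentManager()
      com.register("FluidSolver", "advance", lambda dt: f"advanced fluid by {dt}s")
      print(com.call("FluidSolver.advance", 0.01))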

  7. A Secure and Efficient Communications Architecture for Global Information Grid Users Via Cooperating Space Assets

    DTIC Science & Technology

    2008-06-19

    ground troop component of a deployed contingency, and not a stationary infrastructure. With respect to fast-moving vehicles and aircraft, troops... the rapidly-moving user. In fact, the Control Group users could have been randomly assigned the Stationary, Sea, or Ground Mobility Category... additional re-keying on the non-stationary users, just as they induce no re-keying on the Stationary users (assuming those fast-moving aircraft have the...

  8. Assessment of the Adequacy of U.S.-Canadian Infrastructure to Accommodate Trade through Eastern Border Crossings. Appendix 1. Descriptive Profiles of Maine Frontier

    DOT National Transportation Integrated Search

    1999-06-08

    This document describes the process used in developing a list of rural Intelligent Transportation Systems (ITS) user needs. It gives information on a workshop focusing on rural ITS user needs, and it also presents a list of rural ITS user needs based...

  9. Virtual Induction Loops Based on Cooperative Vehicular Communications

    PubMed Central

    Gramaglia, Marco; Bernardos, Carlos J.; Calderon, Maria

    2013-01-01

    Induction loop detectors have become the most utilized sensors in traffic management systems. The gathered traffic data is used to improve traffic efficiency (e.g., warning users about congested areas or planning new infrastructure). Despite their usefulness, their deployment and maintenance costs are high. Vehicular networks are an emerging technology that can support novel strategies for ubiquitous and more cost-effective traffic data gathering. In this article, we propose and evaluate VIL (Virtual Induction Loop), a simple and lightweight traffic monitoring system based on cooperative vehicular communications. The proposed solution has been experimentally evaluated through simulation using real vehicular traces. PMID:23348033
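    As a rough sketch of the general idea (the paper's VIL message format and logic are not reproduced here), a virtual loop can count vehicles whose periodic position beacons cross a chosen point on the road:

      # Count a vehicle when consecutive beacons straddle the virtual loop.
      # The beacon fields and single-lane geometry are simplifying assumptions.

      loop_position_m = 1000.0           # virtual loop location along the road
      last_position = {}                 # vehicle id -> last beaconed position
      count = 0

      def on_beacon(vehicle_id, position_m):
          """Handle one cooperative-awareness beacon (id, position along road)."""
          global count
          prev = last_position.get(vehicle_id)
          if prev is not None and prev < loop_position_m <= position_m:
              count += 1                 # vehicle crossed the virtual loop
          last_position[vehicle_id] = position_m

      for vid, pos in [("a", 990), ("b", 995), ("a", 1005), ("b", 998), ("b", 1010)]:
          on_beacon(vid, pos)
      print(f"vehicles detected: {count}")   # 2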

  10. KSC-2011-7851

    NASA Image and Video Library

    2011-11-21

    CAPE CANAVERAL, Fla. – Members of the media tour several facilities, including the Multi-Payload Processing Facility, during the 21st Century Ground Systems Program Tour at Kennedy Space Center in Florida. Other tour stops were the Launch Equipment Test Facility, the Operations & Checkout Building and the Canister Rotation Facility. NASA’s 21st Century Ground Systems Program was initiated at Kennedy Space Center to establish the needed launch and processing infrastructure to support the Space Launch System Program and to work toward transforming the landscape of the launch site for a multi-faceted user community. Photo credit: NASA/Jim Grossmann

  11. KSC-2011-7846

    NASA Image and Video Library

    2011-11-21

    CAPE CANAVERAL, Fla. – Members of the media tour several facilities, including the Launch Equipment Test Facility in the Industrial Area, during the 21st Century Ground Systems Program Tour at Kennedy Space Center in Florida. Other tour stops were the Operations & Checkout Building, the Multi-Payload Processing Facility and the Canister Rotation Facility. NASA’s 21st Century Ground Systems Program was initiated at Kennedy Space Center to establish the needed launch and processing infrastructure to support the Space Launch System Program and to work toward transforming the landscape of the launch site for a multi-faceted user community. Photo credit: NASA/Jim Grossmann

  12. KSC-2011-7847

    NASA Image and Video Library

    2011-11-21

    CAPE CANAVERAL, Fla. – Members of the media tour several facilities, including the Launch Equipment Test Facility in the Industrial Area, during the 21st Century Ground Systems Program Tour at Kennedy Space Center in Florida. Other tour stops were the Operations & Checkout Building, the Multi-Payload Processing Facility and the Canister Rotation Facility. NASA’s 21st Century Ground Systems Program was initiated at Kennedy Space Center to establish the needed launch and processing infrastructure to support the Space Launch System Program and to work toward transforming the landscape of the launch site for a multi-faceted user community. Photo credit: NASA/Jim Grossmann

  13. Demonstration and field trial of a resilient hybrid NG-PON test-bed

    NASA Astrophysics Data System (ADS)

    Prat, Josep; Polo, Victor; Schrenk, Bernhard; Lazaro, Jose A.; Bonada, Francesc; Lopez, Eduardo T.; Omella, Mireia; Saliou, Fabienne; Le, Quang T.; Chanclou, Philippe; Leino, Dmitri; Soila, Risto; Spirou, Spiros; Costa, Liliana; Teixeira, Antonio; Tosi-Beleffi, Giorgio M.; Klonidis, Dimitrios; Tomkos, Ioannis

    2014-10-01

    A multi-layer next generation PON prototype has been built and tested, to show the feasibility of extended hybrid DWDM/TDM-XGPON FTTH networks with resilient optically-integrated ring-trees architecture, supporting broadband multimedia services. It constitutes a transparent common platform for the coexistence of multiple operators sharing the optical infrastructure of the central metro ring, passively combining the access and the metropolitan network sections. It features 32 wavelength connections at 10 Gbps, up to 1000 users distributed in 16 independent resilient sub-PONs over 100 km. This paper summarizes the network operation, demonstration and field trial results.
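    A quick back-of-envelope check of the reported figures; the uniform-sharing assumption is ours, since actual per-user rates depend on the TDM scheduling within each sub-PON:

      # 32 wavelengths at 10 Gbps shared by up to 1000 users.
      wavelengths, rate_gbps, users = 32, 10, 1000
      aggregate_gbps = wavelengths * rate_gbps          # 320 Gbps aggregate
      per_user_mbps = aggregate_gbps * 1000 / users     # ~320 Mbps average
      print(aggregate_gbps, per_user_mbps)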

  14. GEOSS authentication/authorization services: a Broker-based approach

    NASA Astrophysics Data System (ADS)

    Santoro, M.; Nativi, S.

    2014-12-01

    The vision of the Global Earth Observation System of Systems (GEOSS) is the achievement of societal benefits through voluntary contribution and sharing of resources to better understand the relationships between the society and the environment where we live. The GEOSS Common Infrastructure (GCI) allows users to search, access, and use the resources contributed by the GEOSS members. The GEO DAB (Discovery and Access Broker) is the GCI component in charge of interconnecting the heterogeneous data systems contributing to GEOSS. Client applications (i.e. the portals and apps) can connect to the GEO DAB as a unique entry point to discover and access resources available through the GCI, with no need to implement the many service protocols and models applied by the GEOSS data providers. The GEO DAB implements the brokering approach (Nativi et al., 2013) to build a flexible and scalable System of Systems. User authentication/authorization functionality is becoming more and more important for GEOSS data providers and users. Providers ask for information about who accessed their resources and, in some cases, want to limit data download. Users ask for a profiled interaction with the system based on their needs and expertise level. In addition, authentication and authorization are necessary for GEOSS to provide moderated social services, e.g. feedback messages and data "fit for use" comments. In keeping with the GEOSS principles of building on existing systems and lowering entry barriers for users, an objective of the authentication/authorization development was to support existing and widely used user credentials (e.g. Google, Twitter). Due to the heterogeneity of technologies used by the different providers and applications, a broker-based approach to authentication/authorization was introduced as a new functionality of the GEO DAB. This new capability will be demonstrated at the next GEO XI Plenary (November 2014). This work will be presented and discussed. References: Nativi, S.; Craglia, M.; Pearlman, J., "Earth Science Infrastructures Interoperability: The Brokering Approach," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 3, pp. 1118-1129, June 2013.
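    A minimal sketch of the broker-based idea (not the GEO DAB implementation): the broker delegates credential checks to pluggable external identity providers so users can keep their existing accounts. The provider names and token handling below are placeholders:

      # The broker exposes one login entry point and routes each request to
      # the matching identity provider; real providers would validate tokens
      # against their own endpoints (e.g. via OAuth).

      class IdentityProvider:
          def authenticate(self, token):
              raise NotImplementedError

      class GoogleProvider(IdentityProvider):
          def authenticate(self, token):
              # Placeholder: a real implementation validates the token remotely.
              return {"user": "demo@example.org", "provider": "google"}

      class AuthBroker:
          def __init__(self):
              self._providers = {}

          def register(self, name, provider):
              self._providers[name] = provider

          def login(self, provider_name, token):
              return self._providers[provider_name].authenticate(token)

      broker = AuthBroker()
      broker.register("google", GoogleProvider())
      print(broker.login("google", token="opaque-token"))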

  15. Literature review of methods to determine road user costs in construction zones

    DOT National Transportation Integrated Search

    1997-01-01

    As freeway construction increases with the need to expand, repair and maintain the existing infrastructure, the desire to quantify the inconvenience or delay costs to users of a freeway undergoing construction has increased and become necessary...

  16. GraphMeta: Managing HPC Rich Metadata in Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Dong; Chen, Yong; Carns, Philip

    High-performance computing (HPC) systems face increasingly critical metadata management challenges, especially in the approaching exascale era. These challenges arise not only from exploding metadata volumes, but also from increasingly diverse metadata, which contains data provenance and arbitrary user-defined attributes in addition to traditional POSIX metadata. This 'rich' metadata is becoming critical to supporting advanced data management functionality such as data auditing and validation. In our prior work, we identified a graph-based model as a promising solution to uniformly manage HPC rich metadata due to its flexibility and generality. However, at the same time, graph-based HPC rich metadata management also introduces significant challenges to the underlying infrastructure. In this study, we first identify the challenges the underlying infrastructure faces in supporting scalable, high-performance rich metadata management. Based on that, we introduce GraphMeta, a graph-based engine designed for this use case. It achieves performance scalability by introducing a new graph partitioning algorithm and a write-optimal storage engine. We evaluate GraphMeta under both synthetic and real HPC metadata workloads, compare it with other approaches, and demonstrate its advantages in terms of efficiency and usability for rich metadata management in HPC systems.
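    As a rough sketch of the graph-based model described above (not GraphMeta's actual engine or API), rich metadata can be held as a property graph whose edges answer provenance queries; the schema below is hypothetical:

      # Files, jobs, and users as vertices; provenance relations as edges;
      # arbitrary user-defined attributes sit alongside POSIX-style ones.

      vertices = {
          "user:alice":   {"type": "user"},
          "job:1234":     {"type": "job", "queue": "normal"},
          "file:in.dat":  {"type": "file", "size": 2**30},
          "file:out.dat": {"type": "file", "experiment": "run-7"},  # user-defined
      }
      edges = [
          ("user:alice", "submitted", "job:1234"),
          ("job:1234", "read", "file:in.dat"),
          ("job:1234", "wrote", "file:out.dat"),
      ]

      def provenance(target):
          """Walk edges backwards: where did this object come from?"""
          for src, rel, dst in edges:
              if dst == target:
                  yield (src, rel)
                  yield from provenance(src)

      print(list(provenance("file:out.dat")))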

  17. A concept for ubiquitous robotics in industrial environment

    NASA Astrophysics Data System (ADS)

    Sallinen, Mikko; Heilala, Juhani; Kivikunnas, Sauli

    2007-09-01

    In this paper a concept for industrial ubiquitous robotics is presented. The concept combines two different approaches to managing agile, adaptable production: firstly, the human operator is kept strongly in the production loop, and secondly, the robot workcell is made more autonomous and smarter in managing production. Such an autonomous robot cell can be called a production island. Communication with the human operator working in this kind of smart industrial environment can be divided into two levels: body area communication and operator-infrastructure communication, the latter covering devices, machines and infrastructure. Body area communication can be supportive in two directions: data is recorded by measuring physical actions, such as hand movements and body gestures, or information such as guides and operation manuals is provided to the user. Body area communication can be carried out using short-range communication technologies such as NFC (Near Field Communication), an RFID-type technology. For operator-infrastructure communication, WLAN or Bluetooth communication can be used. Going beyond current Human-Machine Interaction (HMI) systems, the presented system concept is designed to fulfill the requirements of hybrid, knowledge-intensive manufacturing in the future, where humans and robots operate in close co-operation.

  18. A decadal view of biodiversity informatics: challenges and priorities

    PubMed Central

    2013-01-01

    Biodiversity informatics plays a central enabling role in the research community's efforts to address scientific conservation and sustainability issues. Great strides have been made in the past decade establishing a framework for sharing data, where taxonomy and systematics has been perceived as the most prominent discipline involved. To some extent this is inevitable, given the use of species names as the pivot around which information is organised. To address the urgent questions around conservation, land-use, environmental change, sustainability, food security and ecosystem services that are facing Governments worldwide, we need to understand how the ecosystem works. So, we need a systems approach to understanding biodiversity that moves significantly beyond taxonomy and species observations. Such an approach needs to look at the whole system to address species interactions, both with their environment and with other species. It is clear that some barriers to progress are sociological, basically persuading people to use the technological solutions that are already available. This is best addressed by developing more effective systems that deliver immediate benefit to the user, hiding the majority of the technology behind simple user interfaces. An infrastructure should be a space in which activities take place and, as such, should be effectively invisible. This community consultation paper positions the role of biodiversity informatics, for the next decade, presenting the actions needed to link the various biodiversity infrastructures invisibly and to facilitate understanding that can support both business and policy-makers. The community considers the goal in biodiversity informatics to be full integration of the biodiversity research community, including citizens’ science, through a commonly-shared, sustainable e-infrastructure across all sub-disciplines that reliably serves science and society alike. PMID:23587026

  19. The MMI Semantic Framework: Rosetta Stones for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Rueda, C.; Bermudez, L. E.; Graybeal, J.; Alexander, P.

    2009-12-01

    Semantic interoperability—the exchange of meaning among computer systems—is needed to successfully share data in Ocean Science and across all Earth sciences. The best approach toward semantic interoperability requires a designed framework, and operationally tested tools and infrastructure within that framework. Currently available technologies make a scientific semantic framework feasible, but its development requires sustainable architectural vision and development processes. This presentation outlines the MMI Semantic Framework, including recent progress on it and its client applications. The MMI Semantic Framework consists of tools, infrastructure, and operational and community procedures and best practices, to meet short-term and long-term semantic interoperability goals. The design and prioritization of the semantic framework capabilities are based on real-world scenarios in Earth observation systems. We describe some key use cases, as well as the associated requirements for building the overall infrastructure, which is realized through the MMI Ontology Registry and Repository. This system includes support for community creation and sharing of semantic content, ontology registration, version management, and seamless integration of user-friendly tools and application programming interfaces. The presentation describes the architectural components for semantic mediation and the registry and repository for vocabularies, ontologies, and term mappings. We show how the technologies and approaches in the framework can address community needs for managing and exchanging semantic information. We will demonstrate how different types of users and client applications exploit the tools and services for data aggregation, visualization, archiving, and integration. Specific examples from OOSTethys (http://www.oostethys.org) and the Ocean Observatories Initiative Cyberinfrastructure (http://www.oceanobservatories.org) will be cited. Finally, we show how semantic augmentation of web services standards could be performed using framework tools.
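    A minimal sketch of the 'Rosetta stone' idea behind the framework: mappings between equivalent terms in different community vocabularies, so one community's term can be translated into another's. The URIs below are illustrative examples, not entries from the MMI registry:

      # Term mappings as (subject, relation, object) triples between
      # vocabulary URIs; translation looks the term up in either direction.

      term_mappings = [
          ("http://vocab.example.org/variablename/seaWaterTemperature",
           "sameAs",
           "http://ont.example.org/cf/parameter/sea_water_temperature"),
      ]

      def translate(term, target_namespace):
          """Return the equivalent term in the target vocabulary, if mapped."""
          for subj, rel, obj in term_mappings:
              if subj == term and obj.startswith(target_namespace):
                  return obj
              if obj == term and subj.startswith(target_namespace):
                  return subj
          return None

      print(translate("http://vocab.example.org/variablename/seaWaterTemperature",
                      "http://ont.example.org/cf/parameter/"))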

  20. A decadal view of biodiversity informatics: challenges and priorities.

    PubMed

    Hardisty, Alex; Roberts, Dave; Addink, Wouter; Aelterman, Bart; Agosti, Donat; Amaral-Zettler, Linda; Ariño, Arturo H; Arvanitidis, Christos; Backeljau, Thierry; Bailly, Nicolas; Belbin, Lee; Berendsohn, Walter; Bertrand, Nic; Caithness, Neil; Campbell, David; Cochrane, Guy; Conruyt, Noël; Culham, Alastair; Damgaard, Christian; Davies, Neil; Fady, Bruno; Faulwetter, Sarah; Feest, Alan; Field, Dawn; Garnier, Eric; Geser, Guntram; Gilbert, Jack; Grosche; Grosser, David; Hardisty, Alex; Herbinet, Bénédicte; Hobern, Donald; Jones, Andrew; de Jong, Yde; King, David; Knapp, Sandra; Koivula, Hanna; Los, Wouter; Meyer, Chris; Morris, Robert A; Morrison, Norman; Morse, David; Obst, Matthias; Pafilis, Evagelos; Page, Larry M; Page, Roderic; Pape, Thomas; Parr, Cynthia; Paton, Alan; Patterson, David; Paymal, Elisabeth; Penev, Lyubomir; Pollet, Marc; Pyle, Richard; von Raab-Straube, Eckhard; Robert, Vincent; Roberts, Dave; Robertson, Tim; Rovellotti, Olivier; Saarenmaa, Hannu; Schalk, Peter; Schaminee, Joop; Schofield, Paul; Sier, Andy; Sierra, Soraya; Smith, Vince; van Spronsen, Edwin; Thornton-Wood, Simon; van Tienderen, Peter; van Tol, Jan; Tuama, Éamonn Ó; Uetz, Peter; Vaas, Lea; Vignes Lebbe, Régine; Vision, Todd; Vu, Duong; De Wever, Aaike; White, Richard; Willis, Kathy; Young, Fiona

    2013-04-15

    Biodiversity informatics plays a central enabling role in the research community's efforts to address scientific conservation and sustainability issues. Great strides have been made in the past decade establishing a framework for sharing data, where taxonomy and systematics has been perceived as the most prominent discipline involved. To some extent this is inevitable, given the use of species names as the pivot around which information is organised. To address the urgent questions around conservation, land-use, environmental change, sustainability, food security and ecosystem services that are facing Governments worldwide, we need to understand how the ecosystem works. So, we need a systems approach to understanding biodiversity that moves significantly beyond taxonomy and species observations. Such an approach needs to look at the whole system to address species interactions, both with their environment and with other species. It is clear that some barriers to progress are sociological, basically persuading people to use the technological solutions that are already available. This is best addressed by developing more effective systems that deliver immediate benefit to the user, hiding the majority of the technology behind simple user interfaces. An infrastructure should be a space in which activities take place and, as such, should be effectively invisible. This community consultation paper positions the role of biodiversity informatics, for the next decade, presenting the actions needed to link the various biodiversity infrastructures invisibly and to facilitate understanding that can support both business and policy-makers. The community considers the goal in biodiversity informatics to be full integration of the biodiversity research community, including citizens' science, through a commonly-shared, sustainable e-infrastructure across all sub-disciplines that reliably serves science and society alike.

  1. Open-source mobile digital platform for clinical trial data collection in low-resource settings.

    PubMed

    van Dam, Joris; Omondi Onyango, Kevin; Midamba, Brian; Groosman, Nele; Hooper, Norman; Spector, Jonathan; Pillai, Goonaseelan Colin; Ogutu, Bernhards

    2017-02-01

    Governments, universities and pan-African research networks are building durable infrastructure and capabilities for biomedical research in Africa. This offers the opportunity to adopt from the outset innovative approaches and technologies that would be challenging to retrofit into fully established research infrastructures such as those regularly found in high-income countries. In this context we piloted the use of a novel mobile digital health platform, designed specifically for low-resource environments, to support high-quality data collection in a clinical research study. Our primary aim was to assess the feasibility of using a mobile digital platform for clinical trial data collection in a low-resource setting. Secondarily, we sought to explore the potential benefits of such an approach. The investigative site was a research institute in Nairobi, Kenya. We integrated an open-source platform for mobile data collection commonly used in the developing world with an open-source, standard platform for electronic data capture in clinical trials. The integration was developed using common data standards (Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model), maximising the potential to extend the approach to other platforms. The system was deployed in a pharmacokinetic study involving healthy human volunteers. The electronic data collection platform successfully supported conduct of the study. Multidisciplinary users reported high levels of satisfaction with the mobile application and highlighted substantial advantages when compared with traditional paper record systems. The new system also demonstrated a potential for expediting data quality review. This pilot study demonstrated the feasibility of using a mobile digital platform for clinical research data collection in low-resource settings. Sustainable scientific capabilities and infrastructure are essential to attract and support clinical research studies. Since many research structures in Africa are being developed anew, stakeholders should consider implementing innovative technologies and approaches.
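    As a simplified illustration of the integration route described above, the sketch below maps one mobile-collected record into CDISC ODM-style XML; the element nesting follows the ODM pattern, but the OIDs are invented and the output is not a complete, validated ODM 1.3 document:

      # Build an ODM-style document from a hypothetical collected form record.
      import xml.etree.ElementTree as ET

      record = {"subject": "S001", "form": "F.VITALS",
                "items": {"IT.WEIGHT": "72.5", "IT.HEIGHT": "178"}}

      odm = ET.Element("ODM")
      clinical = ET.SubElement(odm, "ClinicalData", StudyOID="ST.PK01",
                               MetaDataVersionOID="v1")
      subject = ET.SubElement(clinical, "SubjectData", SubjectKey=record["subject"])
      form = ET.SubElement(subject, "FormData", FormOID=record["form"])
      group = ET.SubElement(form, "ItemGroupData", ItemGroupOID="IG.VITALS")
      for oid, value in record["items"].items():
          ET.SubElement(group, "ItemData", ItemOID=oid, Value=value)

      print(ET.tostring(odm, encoding="unicode"))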

  2. BioenergyKDF: Enabling Spatiotemporal Data Synthesis and Research Collaboration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, Aaron T; Movva, Sunil; Karthik, Rajasekar

    2014-01-01

    The Bioenergy Knowledge Discovery Framework (BioenergyKDF) is a scalable, web-based collaborative environment for scientists working on bioenergy-related research, in which the connections between data, literature, and models can be explored and more clearly understood. The fully operational and deployed system, built on multiple open source libraries and architectures, stores contributions from the community of practice and makes them easy to find, but that is just its base functionality. The BioenergyKDF provides a national spatiotemporal decision support capability that enables data sharing, analysis, modeling, and visualization, and fosters the development and management of the U.S. bioenergy infrastructure, which is an essential component of the national energy infrastructure. The BioenergyKDF is built on a flexible, customizable platform that can be extended to support the requirements of any user community, especially those that work with spatiotemporal data. While there are several community data-sharing software platforms available, some developed and distributed by national governments, none of them have the full suite of capabilities available in BioenergyKDF. For example, its component-based platform and database-independent architecture allow it to be quickly deployed to existing infrastructure and to connect to existing data repositories (spatial or otherwise). As new data, analyses, and features are added, the BioenergyKDF will help lead research and support decisions concerning bioenergy into the future, and will also enable the development and growth of additional communities of practice both inside and outside of the Department of Energy. These communities will be able to leverage the substantial investment the agency has made in the KDF platform to quickly stand up systems that are customized to their data and research needs.

  3. Supporting open collaboration in science through explicit and linked semantic description of processes

    USGS Publications Warehouse

    Gil, Yolanda; Michel, Felix; Ratnakar, Varun; Read, Jordan S.; Hauder, Matheus; Duffy, Christopher; Hanson, Paul C.; Dugan, Hilary

    2015-01-01

    The Web was originally developed to support collaboration in science. Although scientists benefit from many forms of collaboration on the Web (e.g., blogs, wikis, forums, code sharing, etc.), most collaborative projects are coordinated over email, phone calls, and in-person meetings. Our goal is to develop a collaborative infrastructure for scientists to work on complex science questions that require multi-disciplinary contributions to gather and analyze data, that cannot occur without significant coordination to synthesize findings, and that grow organically to accommodate new contributors as needed as the work evolves over time. Our approach is to develop an organic data science framework that is based on a task-centered organization of the collaboration, incorporates principles from the social sciences for successful on-line communities, and exposes an open science process. Our approach is implemented as an extension of a semantic wiki platform, and captures formal representations of task decomposition structures, relations between tasks and users, and other properties of tasks, data, and other relevant science objects. All these entities are captured through the semantic wiki user interface, represented as semantic web objects, and exported as linked data.
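    A minimal sketch of the linked-data export step described above, representing a task decomposition as RDF with the rdflib package; the namespace and property names are hypothetical, not the project's actual vocabulary:

      # Requires: pip install rdflib
      from rdflib import Graph, Namespace, Literal, URIRef

      ODS = Namespace("http://example.org/ods#")     # hypothetical vocabulary
      g = Graph()

      parent = URIRef("http://example.org/task/estimate-lake-carbon")
      child = URIRef("http://example.org/task/gather-sensor-data")

      # Task decomposition plus task-user relations as triples.
      g.add((parent, ODS.hasSubTask, child))
      g.add((child, ODS.owner, Literal("jread")))
      g.add((child, ODS.status, Literal("in-progress")))

      print(g.serialize(format="turtle"))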

  4. Building an Intelligent Water Information System - American River Prototype

    NASA Astrophysics Data System (ADS)

    Glaser, S. D.; Bales, R. C.; Conklin, M. H.

    2013-12-01

    With better management, California's existing water supplies could go further to meeting the needs of the state's urban and agricultural uses. For example, California's water reservoirs are currently controlled and regulated using forecasts based upon more than 75 years of historical data. In the face of global climate change, these forecasts are becoming increasingly inadequate to precisely manage water resources. Leveraging the newest frontiers of information technology, we are developing a basin-scale real-time intelligent water infrastructure system that enables more information-intensive decision support. The complete system is made up of four key components. First, a strategically deployed ground-observation system will complement satellite measurements and provide continuous and accurate estimates of snowpack, soil moisture, vegetation state and energy balance across watersheds. Using our recently developed but mature technologies, we deliver measurements of hydrologic variables over a multi-tiered network of wireless sensor arrays, with a granularity of time and space previously unheard of. Second, satellite and aircraft remote sensing provide the only practical means of spatially continuous basin-wide measurement and monitoring of snow properties, vegetation characteristics and other watershed conditions. The ground-based system is designed to blend with remote sensing data on Sierra Nevada snow properties, and to provide value-added products of unprecedented spatial detail and accuracy that are usable at the watershed level. Third, together the satellite and ground-based data make possible the updating of forecast tools, and routine use of physically based hydrologic models. The decision-support framework will provide tools to extract and visualize information of interest from the measured and modeled data, to assess uncertainties, and to optimize operations. Fourth, the advanced cyber infrastructure blends and transforms the numbers recorded by sensors into information in a form that is useful for decision-making. In a sense it 'monetizes' the data. It is the cyber infrastructure that links measurements, data processing, models and users. System software must provide flexibility for multiple types of access, from user queries to automated and direct links with analysis tools and decision-support systems. We are currently installing a basin-scale ground-based sensor network focusing on measurements of snowpack, solar radiation, temperature, rH and soil moisture across the American River basin. Although this is a research network, it also provides core elements of a full ground-based operational system.

  5. Multi-Level Data-Security and Data-Protection in a Distributed Search Infrastructure for Digital Medical Samples.

    PubMed

    Witt, Michael; Krefting, Dagmar

    2016-01-01

    Human sample data is stored in biobanks with software managing digital derived sample data. When these stand-alone components are connected and a search infrastructure is employed users become able to collect required research data from different data sources. Data protection, patient rights, data heterogeneity and access control are major challenges for such an infrastructure. This dissertation will investigate concepts for a multi-level security architecture to comply with these requirements.

  6. Service on demand for ISS users

    NASA Astrophysics Data System (ADS)

    Hüser, Detlev; Berg, Marco; Körtge, Nicole; Mildner, Wolfgang; Salmen, Frank; Strauch, Karsten

    2002-07-01

    Since the ISS started its operational phase, the need for logistics scenarios and solutions supporting the utilisation of the station and its facilities has become increasingly important. Our contribution to this challenge is a SERVICE On DEMAND for ISS users, which offers business-friendly engineering and logistics support for the resupply of the station. Especially the utilisation by commercial and industrial users is supported and simplified by this service. Our industrial team, consisting of OHB-System and BEOS, provides experience and development support for space-dedicated hard- and software elements, their transportation and operation. Furthermore, we operate as the interface between the customer and the envisaged space authorities. Due to a variety of tailored service elements and the ongoing servicing, customers can concentrate on their payload content or mission objectives and don't have to deal with space-specific techniques and regulations. The SERVICE On DEMAND includes the following elements: ITR is our in-orbit platform service. ITR is a transport rack, used in the SPACEHAB logistics double module, for active and passive payloads at subrack and drawer level of different standards. Due to its unique late-access and early-retrieval capability, ITR increases the flexibility of transport capabilities to and from the ISS. RIST is our multi-functional test facility for ISPR-based experiment drawer and locker payloads. The test program concentrates on physical and functional interface and performance testing at the payload developer's site prior to shipment for integration and launch. The RIST service program comprises consulting, planning and engineering as well. The RIST test suitcase is planned to be available for lease or rent to users, too. AMTSS is an advanced multimedia terminal consulting service for communication with the space station's scientific facilities, as part of the user home-base. This unique ISS multimedia kit combines communication technologies, software tools and hardware to provide simple and cost-efficient access to data from the station, using the interconnection ground subnetwork. BEOLOG is our efficient ground logistics service for the transportation of payload hardware and support equipment from the user location to the launch/landing sites for the ISS service flights and back home. The main function of this service is the planning and organisation of all packaging, handling, storage & transportation tasks according to international rules. In conclusion, we offer novel service elements for logistics ground- and flight-infrastructure, dedicated to ISS users. These services can be easily adapted to the needs of users and are suitable for other μg-platforms as well.

  7. A Framework to Support the Sharing and Reuse of Computable Phenotype Definitions Across Health Care Delivery and Clinical Research Applications.

    PubMed

    Richesson, Rachel L; Smerek, Michelle M; Blake Cameron, C

    2016-01-01

    The ability to reproducibly identify clinically equivalent patient populations is critical to the vision of learning health care systems that implement and evaluate evidence-based treatments. The use of common or semantically equivalent phenotype definitions across research and health care use cases will support this aim. Currently, there is no single consolidated repository for computable phenotype definitions, making it difficult to find all definitions that already exist, and also hindering the sharing of definitions between user groups. Drawing from our experience in an academic medical center that supports a number of multisite research projects and quality improvement studies, we articulate a framework that will support the sharing of phenotype definitions across research and health care use cases, and highlight gaps and areas that need attention and collaborative solutions. An infrastructure for re-using computable phenotype definitions and sharing experience across health care delivery and clinical research applications includes: access to a collection of existing phenotype definitions, information to evaluate their appropriateness for particular applications, a knowledge base of implementation guidance, supporting tools that are user-friendly and intuitive, and a willingness to use them. We encourage prospective researchers and health administrators to re-use existing EHR-based condition definitions where appropriate and share their results with others to support a national culture of learning health care. There are a number of federally funded resources to support these activities, and research sponsors should encourage their use.

  8. A Framework to Support the Sharing and Reuse of Computable Phenotype Definitions Across Health Care Delivery and Clinical Research Applications

    PubMed Central

    Richesson, Rachel L.; Smerek, Michelle M.; Blake Cameron, C.

    2016-01-01

    Introduction: The ability to reproducibly identify clinically equivalent patient populations is critical to the vision of learning health care systems that implement and evaluate evidence-based treatments. The use of common or semantically equivalent phenotype definitions across research and health care use cases will support this aim. Currently, there is no single consolidated repository for computable phenotype definitions, making it difficult to find all definitions that already exist, and also hindering the sharing of definitions between user groups. Method: Drawing from our experience in an academic medical center that supports a number of multisite research projects and quality improvement studies, we articulate a framework that will support the sharing of phenotype definitions across research and health care use cases, and highlight gaps and areas that need attention and collaborative solutions. Framework: An infrastructure for re-using computable phenotype definitions and sharing experience across health care delivery and clinical research applications includes: access to a collection of existing phenotype definitions, information to evaluate their appropriateness for particular applications, a knowledge base of implementation guidance, supporting tools that are user-friendly and intuitive, and a willingness to use them. Next Steps: We encourage prospective researchers and health administrators to re-use existing EHR-based condition definitions where appropriate and share their results with others to support a national culture of learning health care. There are a number of federally funded resources to support these activities, and research sponsors should encourage their use. PMID:27563686

  9. Scaling of an information system in a public healthcare market--infrastructuring from the vendor's perspective.

    PubMed

    Johannessen, Liv Karen; Obstfelder, Aud; Lotherington, Ann Therese

    2013-05-01

    The purpose of this paper is to explore the making and scaling of information infrastructures, as well as how the conditions for scaling a component may change for the vendor. The first research question is how the making and scaling of a healthcare information infrastructure can be done and by whom. The second question is what scope for manoeuvre there might be for vendors aiming to expand their market. This case study is based on an interpretive approach, whereby data is gathered through participant observation and semi-structured interviews. A case study of the making and scaling of an electronic system for general practitioners ordering laboratory services from hospitals is described as comprising two distinct phases. The first may be characterized as an evolving phase, when development, integration and implementation were achieved in small steps, and the vendor, together with end users, had considerable freedom to create the solution according to the users' needs. The second phase was characterized by a large-scale procurement process over which regional healthcare authorities exercised much more control and the needs of groups other than the end users influenced the design. The making and scaling of healthcare information infrastructures is not simply a process of evolution, in which the end users use and change the technology. It also consists of large steps, during which different actors, including vendors and healthcare authorities, may make substantial contributions. This process requires work, negotiation and strategies. The conditions for the vendor may change dramatically, from considerable freedom and close relationships with users and customers in the small-scale development, to losing control of the product and being required to engage in more formal relations with customers in the wider public healthcare market. Onerous procurement processes may be one of the reasons why large-scale implementation of information projects in healthcare is difficult and slow. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  10. MicroArray Facility: a laboratory information management system with extended support for Nylon based technologies.

    PubMed

    Honoré, Paul; Granjeaud, Samuel; Tagett, Rebecca; Deraco, Stéphane; Beaudoing, Emmanuel; Rougemont, Jacques; Debono, Stéphane; Hingamp, Pascal

    2006-09-20

    High throughput gene expression profiling (GEP) is becoming a routine technique in life science laboratories. With experimental designs that repeatedly span thousands of genes and hundreds of samples, relying on a dedicated database infrastructure is no longer an option. GEP technology is a fast moving target, with new approaches constantly broadening the field diversity. This technology heterogeneity, compounded by the informatics complexity of GEP databases, means that software developments have so far focused on mainstream techniques, leaving less typical yet established techniques such as Nylon microarrays at best partially supported. MAF (MicroArray Facility) is the laboratory database system we have developed for managing the design, production and hybridization of spotted microarrays. Although it can support the widely used glass microarrays and oligo-chips, MAF was designed with the specific idiosyncrasies of Nylon based microarrays in mind. Notably single channel radioactive probes, microarray stripping and reuse, vector control hybridizations and spike-in controls are all natively supported by the software suite. MicroArray Facility is MIAME supportive and dynamically provides feedback on missing annotations to help users estimate effective MIAME compliance. Genomic data such as clone identifiers and gene symbols are also directly annotated by MAF software using standard public resources. The MAGE-ML data format is implemented for full data export. Journalized database operations (audit tracking), data anonymization, material traceability and user/project level confidentiality policies are also managed by MAF. MicroArray Facility is a complete data management system for microarray producers and end-users. Particular care has been devoted to adequately model Nylon based microarrays. The MAF system, developed and implemented in both private and academic environments, has proved a robust solution for shared facilities and industry service providers alike.

  11. MicroArray Facility: a laboratory information management system with extended support for Nylon based technologies

    PubMed Central

    Honoré, Paul; Granjeaud, Samuel; Tagett, Rebecca; Deraco, Stéphane; Beaudoing, Emmanuel; Rougemont, Jacques; Debono, Stéphane; Hingamp, Pascal

    2006-01-01

    Background High throughput gene expression profiling (GEP) is becoming a routine technique in life science laboratories. With experimental designs that repeatedly span thousands of genes and hundreds of samples, relying on a dedicated database infrastructure is no longer an option. GEP technology is a fast moving target, with new approaches constantly broadening the field diversity. This technology heterogeneity, compounded by the informatics complexity of GEP databases, means that software developments have so far focused on mainstream techniques, leaving less typical yet established techniques such as Nylon microarrays at best partially supported. Results MAF (MicroArray Facility) is the laboratory database system we have developed for managing the design, production and hybridization of spotted microarrays. Although it can support the widely used glass microarrays and oligo-chips, MAF was designed with the specific idiosyncrasies of Nylon based microarrays in mind. Notably single channel radioactive probes, microarray stripping and reuse, vector control hybridizations and spike-in controls are all natively supported by the software suite. MicroArray Facility is MIAME supportive and dynamically provides feedback on missing annotations to help users estimate effective MIAME compliance. Genomic data such as clone identifiers and gene symbols are also directly annotated by MAF software using standard public resources. The MAGE-ML data format is implemented for full data export. Journalized database operations (audit tracking), data anonymization, material traceability and user/project level confidentiality policies are also managed by MAF. Conclusion MicroArray Facility is a complete data management system for microarray producers and end-users. Particular care has been devoted to adequately model Nylon based microarrays. The MAF system, developed and implemented in both private and academic environments, has proved a robust solution for shared facilities and industry service providers alike. PMID:16987406

  12. Bottom-up capacity building for data providers in RITMARE

    NASA Astrophysics Data System (ADS)

    Pepe, Monica; Basoni, Anna; Bastianini, Mauro; Fugazza, Cristiano; Menegon, Stefano; Oggioni, Alessandro; Pavesi, Fabio; Sarretta, Alessandro; Carrara, Paola

    2014-05-01

    RITMARE is a Flagship Project by the Italian Ministry of Research, coordinated by the National Research Council (CNR). It aims at the interdisciplinary integration of Italian marine research. Sub-project 7 shall create an interoperable infrastructure for the project, capable of interconnecting the whole community of researchers involved. It will allow the coordination and sharing of data, processes, and information produced by the other sub-projects [1]. Spatial Data Infrastructures (SDIs) allow for interoperable sharing among heterogeneous, distributed spatial content providers. The INSPIRE Directive [2] regulates the development of a pan-European SDI despite the great variety of national approaches to managing spatial data. However, six years after its adoption, its growth is still hampered by technological, cultural, and methodological gaps. In particular, in the research sector, actors may not be inclined to comply with INSPIRE (or may not feel compelled to) because they are too concentrated on domain-specific activities or hindered by technological issues. Indeed, the available technologies and tools for enabling standard-based discovery and access services are far from user-friendly and require time-consuming activities, such as metadata creation. Moreover, the INSPIRE implementation guidelines do not accommodate an essential component of environmental research, that is, in situ observations. In order to overcome most of the aforementioned issues and to enable researchers to actively contribute to the creation of the project infrastructure, a bottom-up approach has been adopted: a software suite has been developed, called the Starter Kit, which is offered to research data production units so that they can become autonomous, independent nodes of data provision. The Starter Kit enables the provision of geospatial resources, either geodata (e.g., maps and layers) or observations pulled from sensors, which are made accessible according to the OGC standards defined for the specific category of data (WMS, WFS, WCS, and SOS). Resources are annotated by fine-grained metadata that is compliant with standards (e.g., INSPIRE, SensorML) and also semantically enriched by leveraging controlled vocabularies and RDF-based data structures (e.g., the FOAF description of the project's organisation). The Starter Kit is packaged as an off-the-shelf virtual machine and is made available under an open license (GPL v3) and with extensive support tools. Among the most innovative features of the architecture is the user-friendly, extensible approach to metadata creation. On the one hand, the number of metadata items that need to be provided by the user is reduced to the minimum by recourse to controlled vocabularies and context information. The semantic underpinning of these data structures enables advanced discovery functionalities. On the other hand, the templating mechanism adopted in metadata editing allows further schemata to be easily plugged in. The Starter Kit provides a consistent framework for capacity building that brings the heterogeneous actors in the project under the same umbrella, while preserving individual practices, formats, and workflows. At the same time, users are empowered with standard-compliant web services that can be discovered and accessed both locally and remotely, such as through the RITMARE infrastructure itself. References: [1] Carrara, P., Sarretta, A., Giorgetti, A., Ribera D'Alcalà, M., Oggioni, A., & Partescano, E. (2013). An interoperable infrastructure for the Italian Marine Research. IMDIS 2013. [2] European Commission, "Establishing an Infrastructure for Spatial Information in the European Community (INSPIRE)," Directive 2007/2/EC, Official Journal of the European Union, vol. 50, no. L 108, 2007, pp. 1-14.
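    As an illustration of what the Starter Kit's standard services enable on the client side, a consumer might discover published map layers with the third-party OWSLib package; the endpoint URL below is hypothetical:

      # Requires: pip install OWSLib. Connects to a (hypothetical) WMS endpoint
      # and lists the layers its capabilities document advertises.
      from owslib.wms import WebMapService

      wms = WebMapService("http://sp7.example.org/geoserver/wms", version="1.3.0")
      for name, layer in wms.contents.items():
          print(name, "-", layer.title)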

  13. Public-Private Partnership: Joint recommendations to improve downloads of large Earth observation data

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Murphy, K. J.; Baynes, K.; Lynnes, C.

    2016-12-01

    With the volume of Earth observation data expanding rapidly, cloud computing is quickly changing the way Earth observation data is processed, analyzed, and visualized. The cloud infrastructure provides the flexibility to scale up to large volumes of data and to handle high-velocity data streams efficiently. Having freely available Earth observation data collocated on a cloud infrastructure creates opportunities for innovation and value-added data re-use in ways unforeseen by the original data provider. These innovations spur new industries and applications and spawn new scientific pathways that were previously limited by data volume and computational infrastructure issues. NASA, in collaboration with Amazon, Google, and Microsoft, has jointly developed a set of recommendations to enable efficient transfer of Earth observation data from existing data systems to a cloud computing infrastructure. The purpose of these recommendations is to provide guidelines against which all data providers can evaluate their existing data systems, and to help correct any issues uncovered, so as to enable efficient search, access, and use of large volumes of data. Additionally, these guidelines ensure that all cloud providers use a common methodology for bulk-downloading data from data providers, so that data providers need not build custom capabilities to meet the needs of individual cloud providers. The intent is to share these recommendations with other Federal agencies and organizations that serve Earth observation data. The adoption of these recommendations will also benefit data users interested in moving large volumes of data from data systems to any other location, including the cloud providers, cloud users such as scientists, and other users working in high-performance computing environments who need to move large volumes of data.
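
    The joint recommendations themselves are summarized above rather than reproduced; purely as an illustration of the bulk-transfer pattern they address, the sketch below downloads every granule listed in a manifest in parallel. The manifest name, URLs, and worker count are hypothetical.

```python
# Illustrative bulk-download sketch (not the actual NASA/cloud-provider
# methodology): fetch all granule URLs listed in a manifest, in parallel.
import concurrent.futures
import os
import requests

def fetch(url, dest_dir="granules"):
    os.makedirs(dest_dir, exist_ok=True)
    local = os.path.join(dest_dir, url.rsplit("/", 1)[-1])
    with requests.get(url, stream=True, timeout=60) as r:
        r.raise_for_status()
        with open(local, "wb") as f:
            for chunk in r.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
                f.write(chunk)
    return local

urls = [line.strip() for line in open("manifest.txt") if line.strip()]
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    for path in pool.map(fetch, urls):
        print("downloaded", path)
```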

  14. An Optimizing Space Data-Communications Scheduling Method and Algorithm with Interference Mitigation, Generalized for a Broad Class of Optimization Problems

    NASA Technical Reports Server (NTRS)

    Rash, James

    2014-01-01

    NASA's space data-communications infrastructure (the Space Network and the Ground Network) provides scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft. The Space Network operates several orbiting geostationary platforms (the Tracking and Data Relay Satellite System (TDRSS)), each with its own service-delivery antennas onboard. The Ground Network operates service-delivery antennas at ground stations located around the world. Together, these networks enable data transfer between user spacecraft and their mission control centers on Earth. Scheduling data-communications events for spacecraft that use the NASA communications infrastructure (the relay satellites and the ground stations) can be accomplished today with software having an operational heritage dating from the 1980s or earlier. An implementation of the scheduling methods and algorithms disclosed and formally specified herein will produce globally optimized schedules, with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary algorithms, a class of probabilistic strategies for searching large solution spaces, are the essential technology invoked and exploited in this disclosure. Also disclosed are secondary methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithms themselves. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure within the expected range of future users and space- or ground-based service-delivery assets. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally. The generalized methods and algorithms are applicable to a very broad class of combinatorial-optimization problems that encompasses, among many others, the problem of generating optimal space data-communications schedules.
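
    The disclosed methods are formally specified in the source document, not here; the toy sketch below only illustrates the evolutionary-algorithm machinery the abstract names, with an invented fitness function that rewards served requests and penalizes antenna/slot clashes and a made-up RFI exclusion window.

```python
# Minimal evolutionary-algorithm sketch for a toy contact-scheduling problem.
# An illustration of the general technique named in the abstract, not the
# disclosed NASA method; requests, assets, and fitness terms are invented.
import random

N_REQUESTS, N_SLOTS, N_ANTENNAS = 12, 8, 3

def random_schedule():
    # Each request gets an (antenna, time-slot) assignment.
    return [(random.randrange(N_ANTENNAS), random.randrange(N_SLOTS))
            for _ in range(N_REQUESTS)]

def fitness(sched):
    served = len(set(sched))            # distinct (antenna, slot) pairs used
    clashes = len(sched) - served       # two requests on one antenna/slot
    rfi = sum(1 for (a, s) in sched if a == 0 and s in (3, 4))  # toy RFI window
    return served - 2 * clashes - 3 * rfi

def mutate(sched, rate=0.1):
    return [(random.randrange(N_ANTENNAS), random.randrange(N_SLOTS))
            if random.random() < rate else gene for gene in sched]

def crossover(a, b):
    cut = random.randrange(1, N_REQUESTS)
    return a[:cut] + b[cut:]

pop = [random_schedule() for _ in range(50)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                     # keep the best schedules
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(40)]
best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```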

  15. Interactive energy atlas for Colorado and New Mexico: an online resource for decisionmakers

    USGS Publications Warehouse

    Carr, Natasha B.; Ignizio, Drew A.; Diffendorfer, James E.; Latysh, Natalie; Matherne, Ann Marie; Linard, Joshua I.; Leib, Kenneth J.; Hawkins, Sarah J.

    2013-01-01

    Throughout the western United States, increased demand for energy is driving the rapid development of nonrenewable and renewable energy resources. Resource managers must balance the benefits of energy development with the potential consequences for ecological resources and ecosystem services. To facilitate access to geospatial data related to energy resources, energy infrastructure, and natural resources that may be affected by energy development, the U.S. Geological Survey has developed an online Interactive Energy Atlas (Energy Atlas) for Colorado and New Mexico. The Energy Atlas is designed to meet the needs of varied users who seek information about energy in the western United States. The Energy Atlas has two primary capabilities: a geographic information system (GIS) data viewer and an interactive map gallery. The GIS data viewer allows users to preview and download GIS data related to energy potential and development in Colorado and New Mexico. The interactive map gallery contains a collection of maps that compile and summarize thematically related data layers in a user-friendly format. The maps are dynamic, allowing users to explore data at different resolutions and obtain information about the features being displayed. The Energy Atlas also includes an interactive decision-support tool, which allows users to explore the potential consequences of energy development for species that vary in their sensitivity to disturbance.

  16. NASA's Earth Observing Data and Information System - Near-Term Challenges

    NASA Technical Reports Server (NTRS)

    Behnke, Jeanne; Mitchell, Andrew; Ramapriyan, Hampapuram

    2018-01-01

    NASA's Earth Observing System Data and Information System (EOSDIS) has been a central component of the NASA Earth observation program since the 1990s. EOSDIS manages data covering a wide range of Earth science disciplines, including the cryosphere, land cover change, polar processes, field campaigns, ocean surface, digital elevation, atmosphere dynamics and composition, and inter-disciplinary research, among many others. One of the key components of EOSDIS is a set of twelve discipline-based Distributed Active Archive Centers (DAACs) distributed across the United States. Managed by NASA's Earth Science Data and Information System (ESDIS) Project at Goddard Space Flight Center, these DAACs serve over 3 million users globally. The ESDIS Project provides the infrastructure support for EOSDIS, which includes other components such as the Science Investigator-led Processing Systems (SIPS), common metadata and metrics management systems, specialized network systems, standards management, and centralized support for the use of commercial cloud capabilities. Given the long-term requirements, the rapid pace of information technology, and the changing expectations of the user community, EOSDIS has evolved continually over the past three decades. However, many challenges remain. Challenges addressed in this paper include: growing data volume and variety, achieving consistency across a diverse set of data producers, managing information about a large number of datasets, migration to a cloud computing environment, optimizing data discovery and access, incorporating user feedback from a diverse community, keeping metadata updated as data collections grow and age, and ensuring that all the content that future users will need to understand the datasets is identified and preserved.

  17. Data management in Oceanography at SOCIB

    NASA Astrophysics Data System (ADS)

    Joaquin, Tintoré; March, David; Lora, Sebastian; Sebastian, Kristian; Frontera, Biel; Gómara, Sonia; Pau Beltran, Joan

    2014-05-01

    SOCIB, the Balearic Islands Coastal Ocean Observing and Forecasting System (http://www.socib.es), is a Marine Research Infrastructure: a multiplatform, distributed, and integrated system, a facility of facilities that extends from the nearshore to the open sea and provides free, open, and quality-controlled data. SOCIB has three major infrastructure components: (1) a distributed multiplatform observing system, (2) a numerical forecasting system, and (3) a data management and visualization system. We present the spatial data infrastructure and applications developed at SOCIB. One of the major goals of the SOCIB Data Centre is to provide users with a system to locate and download the data of interest (near real-time and delayed mode) and to visualize and manage the information. Following SOCIB principles, data need to be (1) discoverable and accessible, (2) freely available, and (3) interoperable and standardized. In consequence, the SOCIB Data Centre Facility is implementing a general data management system that guarantees international standards, quality assurance, and interoperability. The combination of different sources and types of information requires appropriate methods to ingest, catalogue, display, and distribute this information. The SOCIB Data Centre is responsible for directing the different stages of data management, ranging from data acquisition to distribution and visualization through web applications. The system implemented relies on open-source solutions. The data life cycle comprises the following stages:
    • Acquisition: the data managed by SOCIB mostly come from its own observation platforms, numerical models, or information generated by the activities of the SIAS Division.
    • Processing: applications developed at SOCIB process all collected platform data, performing calibration, derivation, quality control, and standardization.
    • Archival: storage in netCDF and spatial databases.
    • Distribution: data web services using Thredds, Geoserver, and SOCIB's own RESTful services.
    • Catalogue: metadata provided through the ncISO plugin in Thredds and through Geonetwork.
    • Visualization: web and mobile applications presenting SOCIB data to different user profiles.
    SOCIB data services and applications have been developed to respond to science and society needs (e.g., European initiatives such as Emodnet or Copernicus) by targeting different user profiles (e.g., researchers, technicians, policy and decision makers, educators, students, and society in general). For example, SOCIB has developed applications to: 1) allow researchers and technicians to access oceanographic information; 2) provide decision support for oil spill response; 3) disseminate information about the coastal state for tourists and recreational users; 4) present coastal research in educational programs; and 5) offer easy and fast access to marine information through mobile devices. In conclusion, the organizational and conceptual structure of SOCIB's Data Centre and the components developed provide an example of marine information systems within the framework of new ocean observatories and/or marine research infrastructures.
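
    Purely as an illustration of the distribution stage, the sketch below reads one variable from a hypothetical THREDDS OPeNDAP endpoint with xarray, subsetting on the server side rather than downloading the whole file; the URL and variable name are invented placeholders.

```python
# Hypothetical sketch: reading a SOCIB-style netCDF product through a
# THREDDS OPeNDAP endpoint. Dataset URL and variable name are illustrative.
import xarray as xr

url = "http://thredds.example.org/thredds/dodsC/mooring/salinity_latest.nc"
ds = xr.open_dataset(url)          # lazy access; OPeNDAP subsets server-side

# Pull only the last 24 time steps of one variable instead of the whole file.
recent = ds["sea_water_salinity"].isel(time=slice(-24, None)).load()
print(recent.values)
```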

  18. JTS and its Application in Environmental Protection Applications

    NASA Astrophysics Data System (ADS)

    Atanassov, Emanouil; Gurov, Todor; Slavov, Dimitar; Ivanovska, Sofiya; Karaivanova, Aneta

    2010-05-01

    Environmental protection has been identified as a domain of high interest for South East Europe, addressing practical problems related to security and quality of life. The gridification of the Bulgarian applications MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies, which aims to develop an efficient Grid implementation of a sensitivity analysis of the Danish Eulerian Model), MSACM (Multi-Scale Atmospheric Composition Modeling, which aims to produce an integrated, multi-scale, Balkan-region-oriented modelling system able to interface the scales of the problem, from emissions on the urban scale to their transport and transformation on the local and regional scales), and MSERRHSA (Modeling System for Emergency Response to the Release of Harmful Substances in the Atmosphere, which aims to develop and deploy a modeling system for emergency response to the release of harmful substances in the atmosphere, targeted at the SEE and more specifically the Balkan region) faces several challenges. These applications are resource-intensive, in terms of both CPU utilization and data transfer and storage. Their use for operational purposes imposes requirements on the availability of resources that are difficult to meet in a dynamically changing Grid environment. The validation of the applications is resource-intensive and time-consuming. The successful resolution of these problems requires collaborative work and support from the infrastructure operators; the operators, however, are interested in avoiding underutilization of resources. That is why we developed the Job Track Service and tested it during the development of the Grid implementations of MCSAES, MSACM, and MSERRHSA. The Job Track Service (JTS) is a grid middleware component that facilitates the provision of Quality of Service in grid infrastructures using gLite middleware, such as EGEE and SEEGRID. The service is based on messaging middleware and uses standard protocols like AMQP (Advanced Message Queuing Protocol) and XMPP (eXtensible Messaging and Presence Protocol) for real-time communication, while its security model is based on GSI authentication. It enables resource owners to offer the most popular types of execution QoS to some of their users, using a standardized model. The first version of the service offered services to individual users. In this work we describe a new version of the Job Track Service offering application-specific functionality, geared towards the specific needs of the environmental modelling and protection applications and oriented towards collaborative usage by groups and subgroups of users. We used the modular design of the JTS to implement plugins enabling smoother interaction between the users and the Grid environment. Our experience shows improved response times and a decreased failure rate in executions of the applications. In this work we also present observations from the use of the South East European Grid infrastructure.
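
    As a hedged illustration of the messaging layer described above, the sketch below publishes a job-status event over AMQP with the pika client; the broker host, queue name, and payload fields are invented, and this is not the actual JTS code.

```python
# Illustrative sketch (not the JTS implementation): publishing a job-status
# message over AMQP with pika. Host, queue, and payload are hypothetical.
import json
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters(host="broker.example.org"))
channel = conn.channel()
channel.queue_declare(queue="jobtrack", durable=True)

status = {"job_id": "mcsaes-00042", "state": "RUNNING", "site": "BG01-IPP"}
channel.basic_publish(
    exchange="",
    routing_key="jobtrack",
    body=json.dumps(status),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
conn.close()
```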

  19. A GeoServices Infrastructure for Near-Real-Time Access to Suomi NPP Satellite Data

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Valente, E. G.; Hao, W.; Chettri, S.

    2012-12-01

    The new Suomi National Polar-orbiting Partnership (NPP) satellite extends NASA's moderate-resolution, multispectral observations with a suite of powerful imagers and sounders to support a broad array of research and applications. However, NPP data products consist of a complex set of data and metadata files in highly specialized formats, which NPP's operational ground segment delivers to users only after a delay of several hours. This severely limits their use in critical applications such as weather forecasting, emergency and disaster response, search and rescue, and other activities that require near-real-time access to satellite observations. Alternative approaches, based on distributed Direct Broadcast facilities, can reduce the delay in NPP data delivery from hours to minutes, and can make products more directly usable by practitioners in the field. To assess and fulfill this potential, we are developing a suite of software that couples Direct Broadcast data feeds with a streamlined, scalable processing chain and geospatial Web services, so as to permit many more time-sensitive applications to use NPP data. The resulting geoservices infrastructure links a variety of end-user tools and applications to NPP data from different sources, and to other rapidly changing geospatial data. By using well-known, standard software interfaces (such as OGC Web Services or OPeNDAP), this infrastructure serves a variety of end-user analysis and visualization tools, giving them access to datasets of arbitrary size and resolution and allowing them to request and receive tailored products on demand. The standards-based approach may also streamline data sharing among independent satellite receiving facilities, helping them interoperate in providing frequent, composite views of continent-scale or global regions. To enable others to build similar or derived systems, the service components we are developing (based in part on the Community Satellite Processing Package (CSPP) from the University of Wisconsin and the International Polar-Orbiter Processing Package (IPOPP) from NASA) are being released as open source software. Furthermore, they are configured to operate in a cloud computing environment, so as to allow even small organizations to process and serve NPP data without large hardware investments, and to maintain near-real-time performance cost-effectively by growing and shrinking their use of computing resources to meet large, rapid fluctuations in end-user demand, data availability, and processing needs. (This is especially important for polar-orbiting satellites like NPP, which pass within range of a receiver only a few times each day.) We will discuss the design of the infrastructure, highlight its capabilities, and sketch its potential to facilitate broad access to satellite data processing and visualization, and to enhance near-real-time applications via distributed NPP data streams.

  20. Using Multi-modal Sensing for Human Activity Modeling in the Real World

    NASA Astrophysics Data System (ADS)

    Harrison, Beverly L.; Consolvo, Sunny; Choudhury, Tanzeem

    Traditionally, smart environments have been understood to represent those (often physical) spaces where computation is embedded into the users' surrounding infrastructure, buildings, homes, and workplaces. Users of this "smartness" move in and out of these spaces. Ambient intelligence assumes that users are automatically and seamlessly provided with context-aware, adaptive information, applications, and even sensing, though this remains a significant challenge even when limited to these specialized, instrumented locales. Since not all environments are "smart", the experience is not a pervasive one; rather, users move between these intelligent islands of computationally enhanced space while we still aspire to a more ideal anytime, anywhere experience. Two key technological trends are helping to bridge the gap between these smart environments and make the associated experience more persistent and pervasive. Smaller and more computationally sophisticated mobile devices allow sensing, communication, and services to be experienced more directly and continuously by users. Improved infrastructure and the availability of uninterrupted data streams, for instance location-based data, enable new services and applications to persist across environments.

  1. Signature scheme based on bilinear pairs

    NASA Astrophysics Data System (ADS)

    Tong, Rui Y.; Geng, Yong J.

    2013-03-01

    An identity-based signature scheme is proposed using bilinear pairing technology. The scheme uses the user's identity information, such as an email address, IP address, or telephone number, as the public key, which eliminates the cost of building and managing a public key infrastructure; and it avoids the problem of the private key generation center forging signatures by using the CL-PKC (certificateless public key cryptography) framework to generate the user's private key.
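
    As a hedged illustration only: the abstract does not give the scheme's equations, so the runnable toy below shows the algebra of a classic identity-based signature in the Cha-Cheon style, which schemes of this kind resemble (the CL-PKC variant additionally mixes in a user-chosen secret). The "pairing" is a deliberately insecure stand-in over modular integers, chosen only so the bilinearity check can execute; real implementations use pairings on elliptic curves.

```python
# Runnable toy of identity-based signing algebra (Cha-Cheon style). The
# bilinear map here is insecure by construction and for demonstration only.
import random
from hashlib import sha256

q = 1019                      # toy group order (prime)
p = 2 * q + 1                 # 2039, also prime, so g below has order q
g = 4                         # generator of the order-q subgroup of Z_p*

def e(a, b):                  # toy bilinear map: e(a, b) = g^(a*b) mod p
    return pow(g, (a * b) % q, p)

def H(*parts):                # hash arbitrary inputs into Z_q
    return int.from_bytes(sha256(repr(parts).encode()).digest(), "big") % q

# Setup (PKG): master secret s; public parameters P = 1 and P_pub = s * P.
s = random.randrange(1, q)
P, P_pub = 1, s

# Extract: the identity string itself acts as the public key; the PKG
# derives the matching private key, so no certificate directory is needed.
ID = "alice@example.org"
Q_id = H(ID)
d_id = (s * Q_id) % q

# Sign a message.
m = b"telemetry packet 42"
r = random.randrange(1, q)
U = (r * Q_id) % q
h = H(m, U)
V = ((r + h) * d_id) % q

# Verify: e(V, P) == e(U + h*Q_id, P_pub) holds by bilinearity.
assert e(V, P) == e((U + h * Q_id) % q, P_pub)
print("signature verified against identity", ID)
```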

  2. Supporting the scientific lifecycle through cloud services

    NASA Astrophysics Data System (ADS)

    Gensch, S.; Klump, J. F.; Bertelmann, R.; Braune, C.

    2014-12-01

    Cloud computing has made resources and applications available for numerous use cases, ranging from business processes in the private sector to scientific applications. Developers have created tools for data management, collaborative writing, social networking, data access and visualization, project management, and many more, either for free or as paid premium services with additional or extended features. Scientists have begun to incorporate tools that fit their needs into their daily work. To satisfy specialized needs, some cloud applications specifically address the needs of scientists for sharing research data, literature search, laboratory documentation, or data visualization. Cloud services may vary in extent, user coverage, and inter-service integration, and are also at risk of being abandoned or changed when the service providers alter their business model or leave the field entirely. Within the project Academic Enterprise Cloud we examine cloud-based services that support the research lifecycle, using feature models to describe key properties in the areas of infrastructure and service provision, compliance with legal regulations, and data curation. Emphasis is put on the term Enterprise, so as to establish an academic cloud service provider infrastructure that satisfies the demands of the research community through continuous provision across the whole cloud stack. This could make the research community independent of service providers with respect to changes in terms of service, ensuring full control of its extent and usage. This shift towards a self-empowered scientific cloud provider infrastructure and its community raises questions about the feasibility of provision and overall costs. Legal aspects and licensing issues have to be considered when moving data into cloud services, especially when personal data is involved. Educating researchers about cloud-based tools is important to help in the transition towards effective and safe use. Scientists can benefit from the provision of standard services, like weblog and website creation, virtual machine deployment, and groupware provision, using cloud-based app-store-like portals. And, unlike in an industrial environment, researchers will want to keep their existing user profile when moving from one institution to another.

  3. Digital Library Storage using iRODS Data Grids

    NASA Astrophysics Data System (ADS)

    Hedges, Mark; Blanke, Tobias; Hasan, Adil

    Digital repository software provides a powerful and flexible infrastructure for managing and delivering complex digital resources and metadata. However, issues can arise in managing the very large, distributed data files that may constitute these resources. This paper describes an implementation approach that combines the Fedora digital repository software with a storage layer implemented as a data grid, using the iRODS middleware developed by DICE (Data Intensive Cyber Environments) as the successor to SRB. This approach allows us to use Fedora's flexible architecture to manage the structure of resources and to provide application-layer services to users. The grid-based storage layer provides efficient support for managing and processing the underlying distributed data objects, which may be very large (e.g., audio-visual material). The Rule Engine built into iRODS is used to integrate complex workflows at the data level that need not be visible to users, e.g., digital preservation functionality.
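
    As a hedged sketch of the storage layer's role, using today's python-irodsclient (which postdates the paper), the snippet below pushes a large master file into an iRODS zone; connection details and paths are placeholders, and replica placement is left to the grid's rules.

```python
# Hypothetical sketch: a repository's storage layer registering a large
# audio-visual master into an iRODS data grid. Credentials are placeholders.
from irods.session import iRODSSession

with iRODSSession(host="irods.example.org", port=1247,
                  user="repo", password="secret", zone="archiveZone") as session:
    # iRODS (via its Rule Engine) decides on which resource server the
    # replicas actually land; the client only names the logical path.
    session.data_objects.put("masters/interview_0042.mxf",
                             "/archiveZone/home/repo/interview_0042.mxf")
```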

  4. Diamond Eye: a distributed architecture for image data mining

    NASA Astrophysics Data System (ADS)

    Burl, Michael C.; Fowlkes, Charless; Roden, Joe; Stechert, Andre; Mukhtar, Saleem

    1999-02-01

    Diamond Eye is a distributed software architecture, which enables users (scientists) to analyze large image collections by interacting with one or more custom data mining servers via a Java applet interface. Each server is coupled with an object-oriented database and a computational engine, such as a network of high-performance workstations. The database provides persistent storage and supports querying of the 'mined' information. The computational engine provides parallel execution of expensive image processing, object recognition, and query-by-content operations. Key benefits of the Diamond Eye architecture are: (1) the design promotes trial evaluation of advanced data mining and machine learning techniques by potential new users (all that is required is to point a web browser to the appropriate URL), (2) software infrastructure that is common across a range of science mining applications is factored out and reused, and (3) the system facilitates closer collaborations between algorithm developers and domain experts.

  5. Full Scale Software Support on Mobile Lightweight Devices by Utilization of All Types of Wireless Technologies

    NASA Astrophysics Data System (ADS)

    Krejcar, Ondrej

    The new generation of lightweight mobile devices can run full-scale applications with nearly the same comfort as desktop devices, subject to a few limitations. One of these is the limited transfer speed of wireless connectivity. The main area of interest is a model of a radio-frequency-based system enhancement for locating and tracking users of a mobile information system. The experimental framework prototype uses a wireless network infrastructure to let a lightweight mobile device determine its indoor or outdoor position. User location is used for prebuffering data and pushing information from the server to the user's PDA. All server data are saved as artifacts along with their position information within a building or larger-area environment. Accessing prebuffered data on a lightweight mobile device can greatly improve the response time needed to view large multimedia data. This can help in the design of new full-scale applications for lightweight mobile devices.
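
    A toy sketch of the prebuffering idea follows: given the device's estimated position, artifacts within a radius are fetched into a local cache before the user requests them. The artifact records, coordinates, and fetch function are invented for illustration.

```python
# Toy sketch of location-driven prebuffering: cache nearby artifacts ahead
# of the user's request. All data below is hypothetical.
from math import hypot

artifacts = [  # (artifact_id, x, y) positions in building coordinates
    ("floorplan-2F", 10.0, 4.0),
    ("video-kiosk-A", 12.5, 6.0),
    ("poster-B", 40.0, 18.0),
]
cache = {}

def fetch(artifact_id):
    return b"...payload..."          # placeholder for a server download

def prebuffer(x, y, radius=8.0):
    """Cache every artifact within `radius` of the user's position."""
    for aid, ax, ay in artifacts:
        if aid not in cache and hypot(ax - x, ay - y) <= radius:
            cache[aid] = fetch(aid)

prebuffer(11.0, 5.0)                 # user located near the 2F kiosk
print(sorted(cache))                 # -> ['floorplan-2F', 'video-kiosk-A']
```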

  6. MACBenAbim: A Multi-platform Mobile Application for searching keyterms in Computational Biology and Bioinformatics.

    PubMed

    Oluwagbemi, Olugbenga O; Adewumi, Adewole; Esuruoso, Abimbola

    2012-01-01

    Computational biology and bioinformatics are gradually gaining ground in Africa and other developing nations of the world. However, in these countries, challenges for computational biology and bioinformatics education include inadequate infrastructure and a lack of readily available complementary and motivational tools to support learning as well as research. This has discouraged many promising undergraduates, postgraduates, and researchers from aspiring to further study in these fields. In this paper, we develop and describe MACBenAbim (Multi-platform Mobile Application for Computational Biology and Bioinformatics), a flexible, user-friendly tool to search for, define, and describe the meanings of key terms in computational biology and bioinformatics, thus expanding the frontiers of knowledge of its users. The tool is also capable of visualizing results in a mobile multi-platform context. MACBenAbim is available from the authors for non-commercial purposes.

  7. Security Isn't Just for Techies Anymore

    ERIC Educational Resources Information Center

    Mills, Lane B.

    2004-01-01

    School district networks are particularly difficult to protect given the diverse types of users, software, equipment, and connections that most school districts provide. Vulnerabilities in the security of a school district's technology infrastructure can relate to users, data, software, hardware, and transmission. This article discusses different…

  8. Bottom to Top Approach for Railway KPI Generation

    NASA Astrophysics Data System (ADS)

    Villarejo, Roberto; Johansson, Carl-Anders; Leturiondo, Urko; Simon, Victor; Seneviratne, Dammika; Galar, Diego

    2017-09-01

    Railway maintenance, especially infrastructure maintenance, produces a vast amount of data. However, having data is not synonymous with having information; data must be processed to extract information. In railway maintenance, the development of key performance indicators (KPIs) linked to punctuality or capacity can help plan and schedule maintenance, thus aligning the maintenance department with corporate objectives. There is a need for an improved method of analysing railway data to find the relevant KPIs. The system should support maintainers, answering such questions as what maintenance should be done, where, and when. It should equip the user with knowledge of the infrastructure's condition and configuration, and of the traffic situation, so that maintenance resources can be targeted only to those areas needing work. The amount of information is vast, so it must be hierarchized and aggregated, and users must filter out the useless indicators. Data are fused by compiling several individual indicators into a single index; the resulting composite indicators measure multidimensional concepts that cannot be captured by a single index. The paper describes a method of monitoring a complex entity, as sketched below: a plurality of use indices and weighting values are used to create a composite, aggregated use index from a combination of lower-level use indices and weighting values. The resulting composite and aggregated indicators can serve as a decision-making tool for asset managers at different hierarchical levels.
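
    A minimal sketch of the weighted bottom-to-top aggregation just described follows; the indicator values and weights are invented, with traffic-based weights letting busy sections dominate the line-level index.

```python
# Toy bottom-to-top KPI aggregation: section-level indices are fused from
# condition indicators, then weighted into one line-level composite index.
def composite(indices, weights):
    """Weighted aggregation of normalised indices (all in [0, 1])."""
    total = sum(weights)
    return sum(i * w for i, w in zip(indices, weights)) / total

# Track-section level: invented geometry, rail, and ballast indicators.
section_a = composite([0.9, 0.7, 0.8], weights=[2, 1, 1])
section_b = composite([0.5, 0.6, 0.4], weights=[2, 1, 1])

# Line level: sections weighted by annual traffic (tonnes, invented).
line_kpi = composite([section_a, section_b], weights=[30_000, 8_000])
print(f"line-level use index: {line_kpi:.2f}")   # -> 0.76
```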

  9. Molecular Genetics Information System (MOLGENIS): alternatives in developing local experimental genomics databases.

    PubMed

    Swertz, Morris A; De Brock, E O; Van Hijum, Sacha A F T; De Jong, Anne; Buist, Girbe; Baerends, Richard J S; Kok, Jan; Kuipers, Oscar P; Jansen, Ritsert C

    2004-09-01

    Genomic research laboratories need adequate infrastructure to support management of their data production and research workflow. But what makes infrastructure adequate? A lack of appropriate criteria makes any decision on buying or developing a system difficult. Here, we report on the decision process in the case of a molecular genetics group establishing a microarray laboratory. Five typical requirements for experimental genomics database systems were identified: (i) the ability to evolve with the fast-developing genomics field; (ii) a suitable data model to deal with local diversity; (iii) suitable storage of data files in the system; (iv) easy exchange with other software; and (v) low maintenance costs. The computer scientists and the researchers of the local microarray laboratory considered alternative solutions to these five requirements and chose the following options: (i) use of automatic code generation; (ii) a customized data model based on standards; (iii) storage of datasets as black boxes instead of decomposing them into database tables; (iv) loose linking to other programs for improved flexibility; and (v) a low-maintenance web-based user interface. Our team evaluated existing microarray databases and then decided to build a new system, the Molecular Genetics Information System (MOLGENIS), implemented using code generation in a period of three months. This case can provide valuable insights and lessons to both software developers and user communities embarking on large-scale genomic projects. http://www.molgenis.nl

  10. Building an infrastructure at PICKSC for the educational use of kinetic software tools

    NASA Astrophysics Data System (ADS)

    Mori, W. B.; Decyk, V. K.; Tableman, A.; Fonseca, R. A.; Tsung, F. S.; Hu, Q.; Winjum, B. J.; Amorim, L. D.; An, W.; Dalichaouch, T. N.; Davidson, A.; Joglekar, A.; Li, F.; May, J.; Touati, M.; Xu, X. L.; Yu, P.

    2016-10-01

    One aim of the Particle-In-Cell and Kinetic Simulation Center (PICKSC) at UCLA is to coordinate the community development of educational software for undergraduate and graduate courses in plasma physics and computer science. The rich array of physical behaviors exhibited by plasmas can be difficult for students to grasp. If they are given the ability to quickly and easily explore plasma physics through kinetic simulations, and to make illustrative visualizations of plasma waves, particle motion in electromagnetic fields, instabilities, or other phenomena, then they can be equipped with first-hand experiences that inform and contextualize conventional texts and lectures. We are developing an infrastructure that lets any interested person take our kinetic codes, run them without any prerequisite knowledge, and explore desired scenarios. Furthermore, we are actively interested in ideas and input from other plasma physicists. This poster aims to illustrate what we have developed and to gather a community of interested users and developers. Supported by NSF under Grant ACI-1339893.

  11. Strengthening Data Confidentiality and Integrity Protection in the Context of a Multi-Centric Information System Dedicated to Autism Spectrum Disorder.

    PubMed

    Ben Said, Mohamed; Robel, Laurence; Golse, Bernard; Jais, Jean Philippe

    2017-01-01

    Autism spectrum disorders (ASD) are complex neuro-developmental disorders affecting children at an early age. Diagnosis relies on multidisciplinary investigations in psychiatry, neurology, genetics, electrophysiology, neuro-imagery, audiology, and ophthalmology. To support clinicians, researchers, and public health decision makers, we developed an information system dedicated to ASD, called TEDIS. It was designed to manage systematic, exhaustive, and continuous multi-centric patient data collection via secured internet connections. TEDIS will be deployed in nine ASD expert assessment centers in the Île-de-France district. We present the security policy and infrastructure developed in the context of TEDIS to protect patient privacy and clinical information. The TEDIS security policy was organized around governance, an ethical and organisational chart agreement, patients' consent, controlled user access, patients' privacy protection, and constrained access to patients' data. The security infrastructure was enriched with further technical solutions to reinforce ASD patients' privacy protection. The solutions were tested in a local secured intranet environment and showed fluid functionality with consistent, transparent, and safe encryption-decryption results.
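
    Purely as an illustration of encrypting records at rest (not the TEDIS implementation, whose technical solutions are not detailed in the abstract), the sketch below uses the cryptography package's Fernet recipe; the record contents are invented.

```python
# Illustrative only: symmetric encryption of a patient record at rest with
# the `cryptography` library's Fernet recipe. The record is hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in production, held in a key store
box = Fernet(key)

record = b'{"patient_id": "centre03-0178", "assessment": "ADOS-2 ..."}'
token = box.encrypt(record)          # ciphertext safe to persist in the database
assert box.decrypt(token) == record  # transparent decryption for allowed users
```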

  12. Applying a multi-replication framework to support dynamic situation assessment and predictive capabilities

    NASA Astrophysics Data System (ADS)

    Lammers, Craig; McGraw, Robert M.; Steinman, Jeffrey S.

    2005-05-01

    Technological advances and emerging threats reduce the time between target detection and action to the order of a few minutes. To effectively assist with the decision-making process, C4I decision support tools must quickly and dynamically predict and assess alternative Courses Of Action (COAs) to help Commanders anticipate potential outcomes. These capabilities can be provided through faster-than-real-time predictive simulation of plans that is continuously re-calibrated against the real-time picture. This capability allows decision-makers to assess the effects of re-tasking opportunities, giving them tremendous freedom to make time-critical, mid-course decisions. This paper presents an overview of, and demonstrates the use of, a software infrastructure that supports dynamic situation assessment and prediction (DSAP) capabilities. These DSAP capabilities are demonstrated through the use of a Multi-Replication Framework that supports (1) predictive simulations using JSAF (Joint Semi-Automated Forces); (2) real-time simulation, also using JSAF, as a state estimation mechanism; and (3) real-time C4I data updates through TBMCS (Theater Battle Management Core Systems). This infrastructure allows multiple replications of a simulation to be executed simultaneously over a grid, faster than real time, calibrated with live data feeds. A cost-evaluator mechanism analyzes potential outcomes and prunes simulations that diverge from the real-time picture, as in the sketch below. In particular, this paper serves to walk a user through the process of using the Multi-Replication Framework as an enhanced decision aid.
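
    A conceptual sketch of the replicate-and-prune loop follows; the toy state model, threshold, and COA labels are invented and stand in for JSAF replications and the live C4I feed.

```python
# Toy multi-replication loop: several candidate-COA replications advance
# faster than real time while a cost evaluator prunes divergent ones.
import random

def step(state):
    return state + random.gauss(0.5, 0.2)   # surrogate for one sim step

replications = {f"coa-{i}": 0.0 for i in range(8)}   # candidate COAs

for tick in range(1, 21):
    live = 0.5 * tick                        # stands in for the live C4I feed
    replications = {k: step(v) for k, v in replications.items()}
    # Cost evaluator: drop replications too far from the observed state.
    survivors = {k: v for k, v in replications.items() if abs(v - live) < 2.0}
    if survivors:                            # never prune the last candidates
        replications = survivors

print("surviving COAs:", sorted(replications))
```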

  13. Fabrication Infrastructure to Enable Efficient Exploration and Utilization of Space

    NASA Technical Reports Server (NTRS)

    Howell, Joe T.; Fikes, John C.; McLemore, Carole A.; Manning, Curtis W.; Good, Jim

    2007-01-01

    Unlike past one-at-a-time mission approaches, system-of-systems infrastructures will be needed to enable ambitious scenarios for sustainable future space exploration and utilization. Fabrication infrastructure will be needed to support habitat structure development, tool and mechanical part fabrication, and the repair and replacement of ground support and space mission hardware such as life support items, vehicle components, and crew systems. The fabrication infrastructure will need the In Situ Fabrication and Repair (ISFR) element, working in conjunction with the In Situ Resource Utilization (ISRU) element, to live off the land. The ISFR element supports the entire life cycle of exploration by reducing downtime due to failed components; decreasing risk to crew by recovering quickly from degraded operation of equipment; improving system functionality with advanced geometry capabilities; and enhancing mission safety by reducing assembly part counts of original designs where possible. This paper addresses fabrication infrastructures that support efficient, affordable, and reliable space exploration systems and logistics; such infrastructures allow sustained, affordable, and highly effective operations on the Moon, Mars, and beyond.

  14. The history of infrastructures and the future of cyberinfrastructure in the Earth system sciences

    NASA Astrophysics Data System (ADS)

    Edwards, P. N.

    2012-12-01

    Infrastructures display similar historical patterns of inception, development, growth and decay. They typically begin as centralized systems which later proliferate into competing variants. Users' desire for seamless functionality tends eventually to push these variants toward interoperability, usually through "gateway" technologies that link incompatible systems into networks. Another stage is reached when these networks are linked to others, as in the cases of container transport (connecting trucking, rail, and shipping) or the Internet. End stages of infrastructure development include "splintering" (specialized service tiering) and decay, as newer infrastructures displace older ones. Temporal patterns are also visible in historical infrastructure development. This presentation, by a historian of science and technology, describes these patterns through examples of both physical and digital infrastructures, focusing on the global weather forecast infrastructure since the 19th century. It then investigates how some of these patterns might apply to the future of cyberinfrastructure for the Earth system sciences.

  15. Synergy Between Archives, VO, and the Grid at ESAC

    NASA Astrophysics Data System (ADS)

    Arviset, C.; Alvarez, R.; Gabriel, C.; Osuna, P.; Ott, S.

    2011-07-01

    Over the years, in support of the Science Operations Centers at ESAC, we have set up two Grid infrastructures. These have been built: 1) to facilitate daily research for scientists at ESAC; 2) to provide high computing capabilities for project data processing pipelines (e.g., Herschel); and 3) to support science operations activities (e.g., calibration monitoring). Furthermore, closer collaboration between the science archives, the Virtual Observatory (VO), and data processing activities has led to another Grid use case: the Remote Interface to XMM-Newton SAS Analysis (RISA). This web-service-based system allows users to launch SAS tasks transparently on the Grid, save results on HTTP-based storage, and visualize them through VO tools. This paper presents real, operational use cases of Grid usage in these contexts.

  16. GENESI-DR - A single access point to Earth Science data

    NASA Astrophysics Data System (ADS)

    Cossu, R.; Goncalves, P.; Pacini, F.

    2009-04-01

    The amount of information being generated about our planet is increasing at an exponential rate, but it must be easily accessible in order to apply it to the global needs relating to the state of the Earth. Currently, information about the state of the Earth, relevant services, analysis results, applications, and tools is accessible in a very scattered and uncoordinated way, often through individual initiatives from Earth Observation mission operators, scientific institutes dealing with ground measurements, service companies, data catalogues, etc. A dedicated infrastructure providing transparent access to all this will support Earth Science communities by allowing them to easily and quickly derive objective information and share knowledge across all environmentally sensitive domains. The use of high-speed networks (GÉANT) and experimentation with new technologies, like BitTorrent, will also contribute to better services for the Earth Science communities. GENESI-DR (Ground European Network for Earth Science Interoperations - Digital Repositories), an ESA-led, European Commission (EC)-funded two-year project, is taking the lead in providing reliable, easy, long-term access to Earth Science data via the Internet. The project will allow scientists from different Earth Science disciplines located across Europe to locate, access, combine, and integrate historical and fresh Earth-related data from space, airborne, and in-situ sensors archived in large distributed repositories. GENESI-DR builds a federated collection of heterogeneous digital Earth Science repositories to establish a dedicated infrastructure providing transparent access to all of them, allowing Earth Science communities to easily and quickly derive objective information and share knowledge. The federated digital repositories, seen as service and data providers, will share access to their resources (catalogue functions, data access, processing services, etc.) and will adhere to a common set of standards, policies, and interfaces. End-users will be provided with a virtual collection of digital Earth Science data, irrespective of its location in the various federated repositories. The GENESI-DR objectives have led to the identification of the basic GENESI-DR infrastructure requirements:
    • the capability, for Earth Science users, to discover data from different European Earth Science Digital Repositories through the same interface in a transparent and homogeneous way;
    • ease and speed of access to large volumes of coherently maintained, distributed data in an effective and timely way;
    • the capability, for DR owners, to easily make their data available to a significantly larger audience with no need to duplicate the data in a different storage system.
    Data discovery is based on a Central Discovery Service, which allows users and applications to easily query information about data collections and products existing in heterogeneous catalogues at federated DR sites. This service can be accessed by users via a web interface, the GENESI-DR Web Portal, or by external applications via open standardized interfaces exposed by the system. The Central Discovery Service identifies the DRs providing products that comply with the user's search criteria and returns the corresponding access points to the requester. By employing different, efficient data transfer technologies such as HTTPS, GridFTP, and BitTorrent, the infrastructure provides ease and speed of access. Conversely, for data publishing, GENESI-DR provides several mechanisms to assist DR owners in producing metadata catalogues. In order to reach its objectives, the GENESI-DR e-Infrastructure will be validated against user needs for accessing and sharing Earth Science data. Initially, four specific applications in the land, atmosphere, and marine domains have been selected:
    • near-real-time orthorectification for agricultural crop monitoring;
    • urban area mapping in support of emergency response;
    • data assimilation in GlobModel, addressing major environmental and health issues in Europe, with a particular focus on air quality;
    • SeaDataNet, to aid environmental assessments and to forecast the physical state of the oceans in near real time.
    Other applications will complement these during the second half of the project. GENESI-DR also aims to develop common approaches to preserving the historical archives and the ability to access the derived user information as both software and hardware transformations occur. Ensuring access to Earth Science data for future generations is of utmost importance because it allows for the continuity of knowledge generation. For instance, scientists accessing today's climate change data in 50 years will be able to better understand and detect trends in global warming and apply this knowledge to ongoing natural phenomena. GENESI-DR will work towards harmonising operations and applying approved standards, policies, and interfaces at key Earth Science data repositories. To help with this undertaking, GENESI-DR will establish links with relevant organisations and programmes such as space agencies, institutional environmental programmes, international Earth Science programmes, and standardisation bodies.

  17. Problem severity, technology adoption, and intent to seek online counseling among overseas Filipino workers.

    PubMed

    Hechanova, Ma Regina M; Tuliao, Antover P; Teh, Lota A; Alianan, Arsenio S; Acosta, Avegale

    2013-08-01

    This study examined the factors that influence intent to seek online counseling among overseas Filipino workers (OFWs). A survey of 365 OFWs revealed that problem severity and technology adoption predict intent to use online counseling. Among the three factors of technology adoption, perceived ease of use and the perceived presence of organizational and technological infrastructure to support use predicted intent to use online counseling. Our hypothesis of an interaction between problem severity and facilitating conditions was supported. Among individuals with low problem severity, those who perceive the presence of organizational and technological infrastructure to support use have a higher intent to use online counseling. However, at higher levels of problem severity, the effect of facilitating conditions seems to disappear. These findings highlight the crucial role of preventive online mental health services. The study contributes to theory by integrating the stage model of help-seeking behaviors and technology adoption theory in predicting intent to use online counseling: intent to seek online counseling is affected by the existence and perceived gravity of a problem, moderated by technology adoption factors, particularly facilitating conditions. These findings have implications for the need to educate potential users about the advantages of counseling and to ensure that migrant workers have access to technology and that the technology is easy to use.

  18. Commercial Contributions to the Success of the HEDS Enterprise: A Working Model

    NASA Technical Reports Server (NTRS)

    Nall, Mark; Askew, Ray

    2000-01-01

    The future of NASA involves the exploration of space beyond the confines of orbit about the Earth. This includes robotic investigations and the Human Exploration and Development of Space (HEDS). The HEDS Strategic Plan states: "HEDS will join with the private sector to stimulate opportunities for commercial development in space as a key to future settlement. Near-term efforts will emphasize joint pilot projects that provide clear benefit to Earth from the development of near-Earth space." In support of this endeavor, NASA has established the commercial development of space as a prime goal and is exploring all the ways in which it might contribute to this development. NASA has long supported the development of space for commercial use. In 1985 it formally established and funded a program which created a number of joint ventures between universities and industry for this purpose, known as the Centers for the Commercial Development of Space (CCDS). In 1999 NASA established a broader policy on commercialization with the aim of encouraging near-term commercial investment in conjunction with the International Space Station. Joint pilot projects will be initiated to stimulate this near-term investment. The long-term development of commercial concepts utilizing space access continues through the activities of the Commercial Space Centers (CSC), a subset of the original CCDS group. These Centers primarily require access to space for the conduct of their work. The remainder of the initial Centers focus on the development of tools and infrastructure to support users of the space environment. It is in this arena that long-term development for commercial use and infrastructure development will occur. This paper provides a retrospective examination of the Commercial Centers, the variety of models employed, the lessons learned, and the progress to date. This review provides the basis for how successful models can be employed to accelerate private investment in the development of the infrastructure necessary for the success of the HEDS enterprise.

  19. Parallel digital forensics infrastructure.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a parallel digital forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time for these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.
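
    As a hedged illustration of the kind of embarrassingly parallel workload such an infrastructure supports (not the Sandia architecture itself), the sketch below hashes every file in an evidence tree across worker processes; the directory name is a placeholder.

```python
# Illustrative parallel forensics kernel: divide per-file SHA-256 hashing of
# a large evidence tree among CPU cores. "evidence" is a hypothetical path.
import hashlib
import pathlib
from multiprocessing import Pool

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):  # 1 MiB blocks
            h.update(block)
    return str(path), h.hexdigest()

if __name__ == "__main__":
    files = [p for p in pathlib.Path("evidence").rglob("*") if p.is_file()]
    with Pool() as pool:                       # one worker per CPU core
        for name, digest in pool.imap_unordered(sha256_of, files):
            print(digest, name)
```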

  20. AstrodyToolsWeb an e-Science project in Astrodynamics and Celestial Mechanics fields

    NASA Astrophysics Data System (ADS)

    López, R.; San-Juan, J. F.

    2013-05-01

    Astrodynamics Web Tools, AstrodyToolsWeb (http://tastrody.unirioja.es), is an ongoing collaborative web-tools computing infrastructure project specially designed to support scientific computation. AstrodyToolsWeb provides project collaborators with all the technical and human facilities needed to wrap, manage, and use specialized non-commercial software tools in the fields of Astrodynamics and Celestial Mechanics, with the aim of optimizing the use of resources, both human and material. The project is open to collaboration from the whole scientific community in order to create a library of useful tools and their corresponding theoretical backgrounds. AstrodyToolsWeb offers a user-friendly web interface for choosing applications, entering data, and selecting appropriate constraints in an intuitive and easy way. The application is then executed in real time whenever possible; critical information about program behavior (errors and logs) and output, including the postprocessing and interpretation of results (graphical representation of data, statistical analysis, or other manipulations), is shown via the same web interface or can be downloaded to the user's computer.

  1. The VISPA internet platform for outreach, education and scientific research in various experiments

    NASA Astrophysics Data System (ADS)

    van Asseldonk, D.; Erdmann, M.; Fischer, B.; Fischer, R.; Glaser, C.; Heidemann, F.; Müller, G.; Quast, T.; Rieger, M.; Urban, M.; Welling, C.

    2015-12-01

    VISPA provides a graphical front-end to computing infrastructures, giving its users all the functionality needed for working conditions comparable to a personal computer. It is a framework that can be extended with custom applications to support individual needs, e.g. graphical interfaces for experiment-specific software. By design, VISPA serves as a multipurpose platform for many disciplines and experiments, as demonstrated by the following use cases: a GUI for the analysis framework OFFLINE of the Pierre Auger collaboration, submission and monitoring of computing jobs, university teaching of hundreds of students, and outreach activities, especially in CERN's open data initiative. Serving heterogeneous user groups and applications has given us a great deal of experience, which helps us mature the system, i.e. improve its robustness and responsiveness and the interplay of its components. Among the lessons learned are the choice of a file system, the implementation of websockets, efficient load balancing, and the fine-tuning of existing technologies like RPC over SSH. We present the improved server setup in detail and report on the performance, the user acceptance, and the realized applications of the system.

  2. CEMS: Building a Cloud-Based Infrastructure to Support Climate and Environmental Data Services

    NASA Astrophysics Data System (ADS)

    Kershaw, P. J.; Curtis, M.; Pechorro, E.

    2012-04-01

    CEMS, the facility for Climate and Environmental Monitoring from Space, is a new joint collaboration between academia and industry to bring together their collective expertise to support research into climate change and to provide a catalyst for growth in related Earth Observation (EO) technologies and services in the commercial sector. A recent major investment by the UK Space Agency has made possible the development of a dedicated facility at ISIC, the International Space Innovation Centre at Harwell in the UK. CEMS has a number of key elements: the provision of access to large-volume EO and climate datasets co-located with high-performance computing facilities, and a flexible infrastructure to support the needs of research projects in the academic community and new business opportunities for commercial companies. Expertise and tools for scientific data quality and integrity are another essential component, giving users confidence in, and transparency of, its data, services, and products. Central to the development of this infrastructure is the utilisation of cloud-based technology: multi-tenancy and the dynamic provisioning of resources are key characteristics to exploit in order to support the range of organisations using the facilities and the varied use cases. The hosting of processing services and applications next to the data within the CEMS facility is another important capability. With the expected exponential increase in data volumes within the climate science and EO domains, it is becoming increasingly impracticable for organisations to retrieve this data over networks and provide the necessary storage. Consider, for example, the factor of ~20 increase in data volumes expected for the ESA Sentinel missions over the equivalent Envisat instruments. We explore the options for the provision of a hybrid community/private cloud, looking at offerings from the commercial sector and developments in the Open Source community. Building on this virtualisation layer, a further core services tier will support and serve applications as part of a service-oriented architecture. We consider the constituent services in this layer to support access to the data, data processing, and the orchestration of workflows.

  3. Fast Risk Assessment Software For Natural Hazard Phenomena Using Georeference Population And Infrastructure Data Bases

    NASA Astrophysics Data System (ADS)

    Marrero, J. M.; Pastor Paz, J. E.; Erazo, C.; Marrero, M.; Aguilar, J.; Yepes, H. A.; Estrella, C. M.; Mothes, P. A.

    2015-12-01

    Disaster Risk Reduction (DRR) requires an integrated multi-hazard assessment approach to natural hazard mitigation. In the case of volcanic risk, long-term hazard maps are generally developed on the basis of the most probable scenarios (likelihood of occurrence) or worst cases. In the short term, however, expected scenarios may vary substantially depending on monitoring data or new knowledge. In this context, the time required to obtain and process data is critical for optimum decision making. The availability of up-to-date volcanic scenarios is as crucial as having those scenarios accompanied by efficient estimations of their impact on populations and infrastructure. To address this impact estimation during volcanic crises, or other natural hazards, a web interface has been developed to execute an ANSI C application. This application computes, in a matter of seconds, the demographic and infrastructure impact that any natural hazard may cause, employing an overlay-layer approach. The web interface is tailored to users involved in the volcanic crisis management of Cotopaxi volcano (Ecuador). The population database and the cartographic basis used are in the public domain, published by the National Office of Statistics of Ecuador (INEC, by its Spanish acronym). To run the application and obtain results, the user uploads a raster file containing information on the volcanic hazard (or any other natural hazard) and defines categories to group the population or infrastructure potentially affected. The results are displayed in a user-friendly report.
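
    A minimal numpy sketch of the overlay-layer computation follows: a hazard raster is intersected with a co-registered population raster and the exposed population is totalled per hazard category. The arrays are toy stand-ins for the georeferenced INEC data the real service uses.

```python
# Toy overlay-layer impact calculation: sum population per hazard category.
import numpy as np

population = np.array([[120,  80,  0],
                       [300,  50, 10],
                       [ 90, 200,  5]])
hazard = np.array([[2, 1, 0],          # e.g. 0 = none, 1 = moderate, 2 = high
                   [2, 2, 0],
                   [1, 0, 0]])

for level in (1, 2):
    exposed = population[hazard == level].sum()   # boolean-mask overlay
    print(f"hazard level {level}: {exposed} people exposed")
```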

  4. Bridging the Digital Divide: Developing Mexico’s Information and Communication Technology Infrastructure

    DTIC Science & Technology

    2011-10-28

    since the liberalization of Mexico's telecommunications industry in the early 1990s, public spending on infrastructure... experience backlash when the national government experiences political, social, or fiscal hardship related to economic liberalization. The tension... million users and a...

  5. The cloud services innovation platform- enabling service-based environmental modelling using infrastructure-as-a-service cloud computing

    USDA-ARS?s Scientific Manuscript database

    Service oriented architectures allow modelling engines to be hosted over the Internet abstracting physical hardware configuration and software deployments from model users. Many existing environmental models are deployed as desktop applications running on user's personal computers (PCs). Migration ...

  6. Intelligent Data Reduction (IDARE)

    NASA Technical Reports Server (NTRS)

    Brady, D. Michael; Ford, Donnie R.

    1990-01-01

    A description of the Intelligent Data Reduction (IDARE) expert system and an IDARE user's manual are given. IDARE is a data reduction system with the addition of a user profile infrastructure. The system was tested on a nickel-cadmium battery testbed. Information is given on installing, loading, and maintaining the IDARE system.

  7. Provenance for Runtime Workflow Steering and Validation in Computational Seismology

    NASA Astrophysics Data System (ADS)

    Spinuso, A.; Krischer, L.; Krause, A.; Filgueira, R.; Magnoni, F.; Muraleedharan, V.; David, M.

    2014-12-01

    Provenance systems may be offered by modern workflow engines to collect metadata about data transformations at runtime. If combined with effective visualisation and monitoring interfaces, these provenance recordings can speed up the validation process of an experiment, suggesting interactive or automated interventions with immediate effects on the lifecycle of a workflow run. For instance, in the field of computational seismology, if we consider research applications performing long-lasting cross-correlation analysis and high-resolution simulations, the immediate notification of logical errors and rapid access to intermediate results can produce reactions that foster more efficient progress of the research. These applications are often executed in secured and sophisticated HPC and HTC infrastructures, highlighting the need for a comprehensive framework that facilitates the extraction of fine-grained provenance and the development of provenance-aware components, leveraging the scalability characteristics of the adopted workflow engines, whose enactment can be mapped to different technologies (MPI, Storm clusters, etc.). This work looks at the adoption of the W3C-PROV concepts and data model within a user-driven processing and validation framework for seismic data, which also supports computational and data-management steering. Validation needs to balance automation with user intervention, considering the scientist as part of the archiving process. Therefore, the provenance data is enriched with community-specific metadata vocabularies and control messages, making an experiment reproducible and its description consistent with the community's understanding. Moreover, it can contain user-defined terms and annotations. The current implementation of the system is supported by the EU-funded VERCE project (http://verce.eu). Besides the provenance generation mechanisms, it provides a prototype browser-based user interface and a web API built on top of a NoSQL storage technology, experimenting with ways to ensure rapid and flexible access to the lineage traces. It supports users with the visualisation of graphical products and offers combined operations to access and download data that may be selectively stored at runtime into dedicated data archives.
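
    As a minimal illustration of the W3C-PROV data model referred to above (and not the VERCE implementation itself), the sketch below records the lineage of a single processing step with the Python prov package; the namespace and the entity and activity names are invented.

```python
# Toy W3C-PROV lineage record for one workflow step, using the "prov" package.
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace("seis", "http://example.org/seismology#")  # invented namespace

raw = doc.entity("seis:raw_waveform")            # input trace
xcorr = doc.activity("seis:cross_correlation")   # the data transformation
result = doc.entity("seis:correlation_product")  # derived product

doc.used(xcorr, raw)               # the activity consumed the raw trace...
doc.wasGeneratedBy(result, xcorr)  # ...and generated the derived product

print(doc.get_provn())  # serialise as PROV-N for inspection
```

    In a provenance-aware engine, records like these would be emitted automatically at runtime for every transformation rather than written by hand.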

  8. Exploratory Mixed-Method Study of End-User Computing within an Information Technology Infrastructure Library U.S. Army Service Delivery Environment

    ERIC Educational Resources Information Center

    Manzano, Sancho J., Jr.

    2012-01-01

    Empirical studies of what is known as end-user computing have been conducted from as early as the 1980s through to present-day IT employees. Many studies have used quantitative instruments, such as those by Cotterman and Kumar (1989) and Rockart and Flannery (1983). Qualitative studies on end-user computing classifications have been conducted by…

  9. The Resilient Infrastructure Initiative

    DOE PAGES

    Clifford, Megan

    2016-10-01

    Infrastructure is, by design, largely unnoticed until it breaks down and services fail. This includes water supplies, gas pipelines, bridges and dams, phone lines and cell towers, roads and culverts, railways, and the electric grid—all of the complex systems that keep our societies and economies running. Climate change, population growth, increased urbanization, system aging, and outdated design standards stress existing infrastructure and its ability to satisfy the rapidly changing demands of users. The resilience of both physical and cyber infrastructure systems is therefore critical to a community as it prepares for, responds to, and recovers from a disaster, whether natural or man-made.

  10. Irrigation Dynamics and Tactics - Developing a Sustainable and Profitable Irrigation Strategy for Agricultural Areas

    NASA Astrophysics Data System (ADS)

    Van Opstal, J.; Neale, C. M. U.; Lecina, S.

    2014-12-01

    Irrigation management is a dynamic process that adapts according to weather conditions and water availability, as well as socio-economic influences. The goal of water users is to adapt their management to achieve maximum profits. However, these decisions should take into account the environmental impact on the surroundings. Agricultural irrigation systems need to be viewed as systems that are an integral part of a watershed. Changes in the infrastructure, operation and management of an irrigated area therefore have an impact on the water quantity and quality available to other water users. A strategy can be developed for decision-makers using an irrigation system modelling tool. Such a tool can simulate the impact of the infrastructure, operation and management of an irrigation area on its hydrology and agricultural productivity. This combination of factors is successfully simulated with the Ador model, which is able to reproduce on-farm irrigation and water delivery by a canal system. Model simulations for this study are supported with spatial analysis tools using GIS and remote sensing. Continuous measurements of drainage water will be added to address the water quality aspects. The Bear River Canal Company, located in Northern Utah (U.S.A.), is used as a case study for this research. The irrigation area encompasses 26,000 ha and grows mainly alfalfa, grains, corn and onions. The model allows the simulation of different strategies related to water delivery, on-farm water use, crop rotations, and reservoir and network capacities under different weather and water availability conditions. Such changes in the irrigation area will have consequences for farmers in the study area regarding crop production, and for downstream users concerning both the quantity and quality of outflows. The findings from this study give insight to decision-makers and water users for changing irrigation water delivery strategies to improve the sustainability and profitability of agriculture in the future.

  11. Pimp your landscape: a tool for qualitative evaluation of the effects of regional planning measures on ecosystem services.

    PubMed

    Fürst, Christine; Volk, Martin; Pietzsch, Katrin; Makeschin, Franz

    2010-12-01

    The article presents the platform "Pimp your landscape" (PYL), which aims, first, to support planners by simulating alternative land-use scenarios and evaluating benefits or risks for regionally important ecosystem services. Second, PYL supports the integration of information on environmental and landscape conditions into impact assessment. Third, PYL supports the integration of the impacts of planning measures on ecosystem services. PYL is a modified 2-D cellular automaton with GIS features. The cells have the major attribute "land-use type" and can be supplemented with additional information, such as specifics regarding geology, topography and climate. The GIS features support the delineation of non-cellular infrastructural elements, such as roads or water bodies. An evaluation matrix represents the core element of the system. In this matrix, values on a relative scale from 0 (lowest) to 100 (highest) are assigned to the land-use types and infrastructural elements depending on their effect on ecosystem services, as sketched in the example below. The option to configure rules describing the impact of environmental attributes and proximity effects on cell values and land-use transition probabilities is of particular importance. The user interface and usage of the platform are demonstrated with an application case. Constraints and limits of the current version are discussed, including the need to consider landscape-structure aspects such as patch size, fragmentation and spatial connectivity in the evaluation. As further development, it is planned to include the impact of land management practices, to support climate change adaptation and mitigation strategies in regional planning.
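
    As a toy illustration of the evaluation-matrix idea (not the actual PYL implementation), the sketch below scores a small cellular landscape for one ecosystem service and re-evaluates it after a hypothetical planning measure; the land-use codes and score values are invented.

```python
# Miniature cellular landscape with an evaluation matrix for one service.
import numpy as np

# hypothetical land-use codes: 0 = forest, 1 = cropland, 2 = urban
landscape = np.array([
    [0, 0, 1],
    [0, 1, 1],
    [2, 2, 1],
])

# evaluation "matrix" for a single ecosystem service, e.g. water retention,
# on the 0 (lowest) to 100 (highest) scale described in the article
scores = {0: 90, 1: 55, 2: 10}

cell_values = np.vectorize(scores.get)(landscape)
print("mean service value before:", cell_values.mean())

# a planning measure: convert one urban cell back to forest, then re-evaluate
landscape[2, 0] = 0
print("mean service value after:", np.vectorize(scores.get)(landscape).mean())
```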

  12. The VERCE Science Gateway: enabling user friendly seismic waves simulations across European HPC infrastructures

    NASA Astrophysics Data System (ADS)

    Spinuso, Alessandro; Krause, Amy; Ramos Garcia, Clàudia; Casarotti, Emanuele; Magnoni, Federica; Klampanos, Iraklis A.; Frobert, Laurent; Krischer, Lion; Trani, Luca; David, Mario; Leong, Siew Hoon; Muraleedharan, Visakh

    2014-05-01

    The EU-funded project VERCE (Virtual Earthquake and seismology Research Community in Europe) aims to deploy technologies which satisfy the HPC and data-intensive requirements of modern seismology. As a result of VERCE's official collaboration with the EU project SCI-BUS, access to computational resources, like local clusters and international infrastructures (EGI and PRACE), is made homogeneous and integrated within a dedicated science gateway based on the gUSE framework. In this presentation we give a detailed overview of the progress achieved with the development of the VERCE Science Gateway, according to a use-case driven implementation strategy. More specifically, we show how the computational technologies and data services have been integrated within a tool for Seismic Forward Modelling, whose objective is to offer the possibility of performing simulations of seismic waves as a service to the seismological community. We will introduce the interactive components of the OGC map-based web interface and how it supports the user in setting up the simulation. We will go through the selection of input data, which are either fetched from federated seismological web services, adopting community standards, or provided by the users themselves by accessing their own document data store. The HPC scientific codes can be selected from a number of waveform simulators, currently available to the seismological community as batch tools or with limited configuration capabilities in their interactive online versions. The results will be staged out from the HPC via a secure GridFTP transfer to a VERCE data layer managed by iRODS. The provenance information of the simulation will be automatically catalogued by the data layer via NoSQL technologies. We will demonstrate how data access, validation and visualisation can be supported by a general purpose provenance framework which, besides common provenance concepts imported from the OPM and W3C-PROV initiatives, also offers an extensible metadata archive including community- and user-defined metadata and annotations. Finally, we will show how the VERCE Gateway platform allows the customisation of the pre- and post-processing phases of the simulation workflows, thanks to the availability of a registry of processing elements (PEs), which are easily developed and maintained by the seismologists.
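
    The gateway's own data services are not exposed in this abstract, so as a hedged stand-in the snippet below fetches input waveforms over the community-standard FDSN web service interface using ObsPy, the kind of standards-based retrieval described; the data centre, network and station codes are arbitrary examples, not VERCE defaults.

```python
# Fetching one hour of broadband data from a federated FDSN web service.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")  # example data centre
t0 = UTCDateTime("2014-05-01T00:00:00")
stream = client.get_waveforms(network="IU", station="ANMO",
                              location="00", channel="BHZ",
                              starttime=t0, endtime=t0 + 3600)
print(stream)  # ready for pre-processing ahead of a forward simulation
```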

  13. The Czech National Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.

    2017-10-01

    The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a sufficiently fast network to provide transparent access to all resources. We describe in more detail the computing infrastructure, which is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest share of CPUs is still accessible via distributed Torque servers, providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided to the EGI FedCloud environment with a cloud interface, and a Hadoop cluster is provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFS4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics will be given.
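
    As an illustration of how a user might reach the Torque-based batch environment mentioned above, this hedged sketch writes and submits a minimal PBS job script through qsub; the job name, resource requests and runtime behaviour are invented, not MetaCentrum defaults.

```python
# Submit a toy batch job to a Torque/PBS server via qsub.
import subprocess

job_script = """#!/bin/bash
#PBS -N demo_job
#PBS -l nodes=1:ppn=2,mem=4gb,walltime=01:00:00
cd "$PBS_O_WORKDIR"
echo "running on $(hostname)"
"""

with open("demo_job.pbs", "w") as f:
    f.write(job_script)

job_id = subprocess.check_output(["qsub", "demo_job.pbs"], text=True).strip()
print("submitted job:", job_id)
```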

  14. Advances of NOAA Training Program in Climate Services

    NASA Astrophysics Data System (ADS)

    Timofeyeva, M. M.

    2012-12-01

    Since 2002, NOAA's National Weather Service (NWS) Climate Services Division (CSD) has offered numerous training opportunities to NWS staff. After eight years of development, the training program offers three instructor-led courses and roughly 25 online (distance learning) modules covering various climate topics, such as climate data and observations, climate variability and change, and NWS national/local climate products (tools, skill, and interpretation). Leveraging climate information and expertise available at all NOAA line offices and partners allows for the delivery of the most advanced knowledge and is a critical aspect of the training program. The emerging NOAA Climate Service (NCS) requires a well-trained, climate-literate workforce at the local level capable of delivering NOAA's climate products and services as well as providing climate-sensitive decision support. NWS Weather Forecast Offices and River Forecast Centers presently serve as local outlets for NCS climate services. Trained NWS climate service personnel use proactive and reactive approaches and professional education methods in communicating climate variability and change information to local users. Both scientifically sound messages and amiable communication techniques are important in developing an engaged dialogue between climate service providers and users. Several pilot projects conducted by the NWS CSD this past year apply the program's training lessons and expertise to specialized external user group training. The technical user groups included natural resources managers, engineers, hydrologists, and planners for transportation infrastructure. Training professional user groups required tailoring instruction to the potential applications for each group of users. Training technical users identified the following critical issues: (1) knowledge of the target audience's expectations, initial knowledge, and potential use of climate information; (2) leveraging partnerships with climate service providers; and (3) applying the 3H training approach, where the first H stands for Head (trusted science), the second H for Heart (make it easy), and the third H for Hand (support with applications).

  15. Development and Operation of the Americas ALOS Data Node

    NASA Astrophysics Data System (ADS)

    Arko, S. A.; Marlin, R. H.; La Belle-Hamer, A. L.

    2004-12-01

    In the spring of 2005, the Japanese Aerospace Exploration Agency (JAXA) will launch the next generation in advanced remote sensing satellites. The Advanced Land Observing Satellite (ALOS) includes three sensors, two visible imagers and one L-band polarimetric SAR, providing high-quality remote sensing data to the scientific and commercial communities throughout the world. Focusing on remote sensing and scientific pursuits, ALOS will image nearly the entire Earth using all three instruments during its expected three-year lifetime. These data sets offer the potential for data continuation of older satellite missions as well as new products for the growing user community. One of the unique features of the ALOS mission is the data distribution approach. JAXA has created a worldwide cooperative data distribution network. The data nodes are NOAA/ASF representing the Americas ALOS Data Node (AADN), ESA representing the ALOS European and African Node (ADEN), Geoscience Australia representing Oceania, and JAXA representing the Asian continent. The AADN is the sole agency responsible for archival, processing and distribution of L0 and L1 products to users in both North and South America. In support of this mission, AADN is currently developing a processing and distribution infrastructure to provide easy access to these data sets. Utilizing a custom, grid-based process controller and media generation system, the overall infrastructure has been designed to provide maximum throughput while requiring a minimum of operator input and maintenance. This paper will present an overview of the ALOS system, details of each sensor's capabilities, and a description of the processing and distribution system being developed by AADN to provide these valuable data sets to users throughout North and South America.

  16. Information-computational platform for collaborative multidisciplinary investigations of regional climatic changes and their impacts

    NASA Astrophysics Data System (ADS)

    Gordov, Evgeny; Lykosov, Vasily; Krupchatnikov, Vladimir; Okladnikov, Igor; Titov, Alexander; Shulgina, Tamara

    2013-04-01

    Analysis of the growing volume of climate change related data from sensors and model outputs requires collaborative multidisciplinary efforts from researchers. To do this in a timely and reliable way, one needs a modern information-computational infrastructure supporting integrated studies in the field of environmental sciences. The recently developed experimental software and hardware platform Climate (http://climate.scert.ru/) provides the required environment for regional climate change investigations. The platform combines a modern web 2.0 approach, GIS functionality and capabilities to run climate and meteorological models, process large geophysical datasets and support the relevant analysis. It also supports joint software development by distributed research groups, and the organization of thematic education for students and post-graduate students. In particular, the platform software includes dedicated modules for numerical processing of regional and global modeling results for subsequent analysis and visualization. Runs of the integrated WRF and «Planet Simulator» models, preprocessing of modeling results and visualization are also provided. All functions of the platform are accessible to a user through a web portal using a common graphical web browser, in the form of an interactive graphical user interface which provides, in particular, capabilities for selecting a geographical region of interest (pan and zoom), manipulating data layers (order, enable/disable, feature extraction) and visualizing results. The platform provides users with capabilities for heterogeneous geophysical data analysis, including high-resolution data, and for discovering tendencies in climatic and ecosystem changes in the framework of different multidisciplinary research efforts. Using it, even an unskilled user without specific knowledge can perform reliable computational processing and visualization of large meteorological, climatic and satellite monitoring datasets through the unified graphical web interface. Partial support of RF Ministry of Education and Science grant 8345, SB RAS Program VIII.80.2 and Projects 69, 131, 140 and APN CBA2012-16NSY project is acknowledged.

  17. Complex Networks and Critical Infrastructures

    NASA Astrophysics Data System (ADS)

    Setola, Roberto; de Porcellinis, Stefano

    The term “Critical Infrastructures” indicates all those technological infrastructures, such as electric grids, telecommunication networks, railways, healthcare systems, financial circuits, etc., that are increasingly relevant for the welfare of our countries. Each of these infrastructures is a complex, highly non-linear, geographically dispersed cluster of systems that interacts with its human owners, operators and users and with the other infrastructures. Their increased relevance and the current political and technological scenarios, which have raised their exposure to accidental failure and deliberate attack, demand different and innovative protection strategies (generally indicated as CIP, Critical Infrastructure Protection). To this end it is mandatory to understand the mechanisms that regulate the dynamics of these infrastructures. In this framework, an interesting approach is the one provided by complex networks. In this paper we illustrate some results achieved by considering structural and functional properties of the corresponding topological networks, both when each infrastructure is treated as an autonomous system and when the dependencies existing among the different infrastructures are also taken into account.
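
    A minimal sketch of this complex-network viewpoint, under an invented topology: model an infrastructure as a graph and compute a structural indicator such as betweenness centrality, which flags the nodes whose failure would disrupt the most paths.

```python
# Structural analysis of a toy infrastructure graph with networkx.
import networkx as nx

grid = nx.Graph()
grid.add_edges_from([
    ("plant", "sub1"), ("plant", "sub2"),
    ("sub1", "city_a"), ("sub1", "city_b"), ("sub2", "city_b"),
])

centrality = nx.betweenness_centrality(grid)
critical = max(centrality, key=centrality.get)
print("most critical node:", critical, round(centrality[critical], 3))

# Interdependencies between two infrastructures can be modelled the same way,
# e.g. by adding cross-layer edges such as a telecom node fed by a substation.
```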

  18. A Web-Based Decision Support System for Assessing Regional Water-Quality Conditions and Management Actions

    NASA Astrophysics Data System (ADS)

    Booth, N. L.; Everman, E.; Kuo, I.; Sprague, L.; Murphy, L.

    2011-12-01

    A new web-based decision support system has been developed as part of the U.S. Geological Survey (USGS) National Water Quality Assessment (NAWQA) Program's effort to provide ready access to SPAtially Referenced Regressions On Watershed attributes (SPARROW) results on stream water-quality conditions and to offer sophisticated scenario-testing capabilities for research and water-quality planning via an intuitive graphical user interface with a map-based display. The SPARROW Decision Support System (DSS) is delivered through a web browser over an Internet connection, making it widely accessible to the public in a format that allows users to easily display water-quality conditions, the distribution of nutrient sources, nutrient delivery to downstream waterbodies, and simulations of altered nutrient inputs, including atmospheric and agricultural sources. The DSS offers other features for analysis, including various background map layers, model output exports, and the ability to save and share prediction scenarios. SPARROW models currently supported by the DSS are based on the modified digital versions of the 1:500,000-scale River Reach File (RF1) and 1:100,000-scale National Hydrography Dataset (medium-resolution, NHDPlus) stream networks. The underlying modeling framework and server infrastructure illustrate innovations in the information technology and geosciences fields for delivering SPARROW model predictions over the web by performing intensive model computations and map visualizations of the predicted conditions within the stream network.

  19. A Web-Based Decision Support System for Assessing Regional Water-Quality Conditions and Management Actions

    USGS Publications Warehouse

    Booth, N.L.; Everman, E.J.; Kuo, I.-L.; Sprague, L.; Murphy, L.

    2011-01-01

    The U.S. Geological Survey National Water Quality Assessment Program has completed a number of water-quality prediction models for nitrogen and phosphorus for the conterminous United States as well as for regional areas of the nation. In addition to estimating water-quality conditions at unmonitored streams, the calibrated SPAtially Referenced Regressions On Watershed attributes (SPARROW) models can be used to produce estimates of yield, flow-weighted concentration, or load of constituents in water under various land-use condition, land-use change, or resource management scenarios. A web-based decision support infrastructure has been developed to provide access to SPARROW simulation results on stream water-quality conditions and to offer sophisticated scenario testing capabilities for research and water-quality planning via a graphical user interface with familiar controls. The SPARROW decision support system (DSS) is delivered through a web browser over an Internet connection, making it widely accessible to the public in a format that allows users to easily display water-quality conditions and to describe, test, and share modeled scenarios of future conditions. SPARROW models currently supported by the DSS are based on the modified digital versions of the 1:500,000-scale River Reach File (RF1) and 1:100,000-scale National Hydrography Dataset (medium-resolution, NHDPlus) stream networks. © 2011 American Water Resources Association. This article is a U.S. Government work and is in the public domain in the USA.

  20. Value-added Data Services at the Goddard Earth Sciences Data and Information Services Center

    NASA Technical Reports Server (NTRS)

    Leptoukh, Gregory G.; Alcott, Gary T.; Kempler, Steven J.; Lynnes, Christopher S.; Vollmer, Bruce E.

    2004-01-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC), in addition to serving the Earth Science community as one of the major Distributed Active Archive Centers (DAACs), provides much more than just data. Among the value-added services available to general users are subsetting data spatially and/or by parameter, online analysis (to avoid unnecessarily downloading all the data), and assistance in obtaining data from other centers. Services available to data producers and high-volume users include consulting on building new products with standard formats and metadata and construction of data management systems. A particularly useful service is data processing at the DISC (i.e., close to the input data) with the user's algorithm. This can take a number of different forms: as a configuration-managed algorithm within the main processing stream; as a stand-alone program next to the on-line data storage; as build-it-yourself code within the Near-Archive Data Mining (NADM) system; or as an on-the-fly analysis with simple algorithms embedded into the web-based tools. Partnerships between the GES DISC and scientists, both producers and users, allow the scientists to concentrate on science, while the GES DISC handles the data management, e.g., formats, integration, and data processing. The existing data management infrastructure at the GES DISC supports a wide spectrum of options: from simple data support to sophisticated on-line analysis tools, producing economies of scale and rapid time-to-deploy. At the same time, such partnerships allow the GES DISC to serve the user community more efficiently and to better prioritize on-line holdings. Several examples of successful partnerships are described in the presentation.
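
    The abstract does not say how the subsetting services are implemented; as a generic stand-in, the sketch below performs parameter and spatial subsetting of a remote dataset over OPeNDAP with xarray, so that only the requested slab crosses the network. The URL, variable and coordinate names are placeholders, not a real GES DISC endpoint.

```python
# Parameter + spatial subsetting of a remote dataset via OPeNDAP (sketch).
import xarray as xr

url = "https://example.org/opendap/sample_dataset"  # hypothetical endpoint
ds = xr.open_dataset(url)  # lazy open; requires a netCDF/OPeNDAP-enabled backend

# select one parameter and a lat/lon window; only this slab is transferred
subset = ds["precipitation"].sel(lat=slice(30, 50), lon=slice(-110, -90))
print(float(subset.mean()))
```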

  1. The GÉANT network: addressing current and future needs of the HEP community

    NASA Astrophysics Data System (ADS)

    Capone, Vincenzo; Usman, Mian

    2015-12-01

    The GÉANT infrastructure is the backbone that serves the scientific communities in Europe for their data movement needs and their access to international research and education networks. Using its extensive fibre footprint and infrastructure in Europe, the GÉANT network delivers a portfolio of services aimed to best fit the specific needs of its users, including Authentication and Authorization Infrastructure, end-to-end performance monitoring, and advanced network services (dynamic circuits, L2-L3VPN, MD-VPN). This talk will outline the factors that help the GÉANT network respond to the needs of the High Energy Physics community, both in Europe and worldwide. The Pan-European network provides the connectivity between 40 European national research and education networks. In addition, GÉANT connects the European NRENs to the R&E networks in other world regions and reaches over 110 NRENs worldwide, making GÉANT the best-connected research and education network, with multiple intercontinental links to different continents, e.g. North and South America, Africa and Asia-Pacific. The High Energy Physics computational needs have always had, and will keep having, a leading role among the scientific user groups of the GÉANT network: the LHCONE overlay network has been built, in collaboration with the other major world RENs, specifically to address the particular needs of LHC data movement. Recently, as a result of a series of coordinated efforts, the LHCONE network has been expanded to the Asia-Pacific area and is going to include some of the main regional R&E networks in the area. The LHC community is not the only one actively using a distributed computing model (hence the need for a high-performance network); new communities are arising, such as BELLE II. GÉANT is also deeply involved with the BELLE II experiment, providing full support for its distributed computing model along with a perfSONAR-based network monitoring system. GÉANT has also coordinated the setup of the network infrastructure for the BELLE II Trans-Atlantic Data Challenge, and has been active in helping the BELLE II community sort out end-to-end performance issues. In this talk we will provide information about the current GÉANT network architecture and international connectivity, along with upcoming upgrades and planned and foreseeable improvements. We will also describe the implementation of the solutions provided to support the LHC and BELLE II experiments.

  2. The Digital Divide: The View from Latin America and the Caribbean.

    ERIC Educational Resources Information Center

    Rodriguez, Adolfo

    This paper discusses the digital divide from the perspective of Latin America and the Caribbean. Highlights include: new issues that make access to electronic resources difficult for users; differences in technological infrastructure among countries; how Internet users are distributed worldwide; Internet access in Africa; the number of students…

  3. Modular Laboratories—Cost-Effective and Sustainable Infrastructure for Resource-Limited Settings

    PubMed Central

    Bridges, Daniel J.; Colborn, James; Chan, Adeline S. T.; Winters, Anna M.; Dengala, Dereje; Fornadel, Christen M.; Kosloff, Barry

    2014-01-01

    High-quality laboratory space to support basic science, clinical research projects, or health services is often severely lacking in the developing world. Moreover, the construction of suitable facilities using traditional methods is time-consuming, expensive, and challenging to implement. Three real-world examples showing how shipping containers can be converted into modern laboratories are highlighted, including use as an insectary, a molecular laboratory, and a BSL-3 containment laboratory. These modular conversions have a number of advantages over brick-and-mortar construction and provide a cost-effective and timely solution offering high-quality, user-friendly laboratory space applicable within the developing world. PMID:25223943

  4. IMT-2000 Satellite Standards with Applications to Mobile Air Traffic Communications Networks

    NASA Technical Reports Server (NTRS)

    Shamma, Mohammed A.

    2004-01-01

    The International Mobile Telecommunications-2000 (IMT-2000) standard, and more specifically its satellite component, is investigated as a potential alternative for communications to aircraft mobile users en route and in the terminal area. Its application to Air Traffic Management (ATM) communication needs is considered. A summary of the specifications of the IMT-2000 satellite standards is outlined. It is shown via a systems research analysis that it is possible to support most air traffic communication needs via an IMT-2000 infrastructure. This technology can complement existing or future digital aeronautical communications technologies such as VDL2, VDL3, Mode S, and UAT.

  5. Payload operations management of a planned European SL-Mission employing establishments of ESA and national agencies

    NASA Technical Reports Server (NTRS)

    Joensson, Rolf; Mueller, Karl L.

    1994-01-01

    Spacelab (SL) missions with Payload Operations (P/L OPS) from Europe involve numerous space agencies, various ground infrastructure systems and national user organizations. An effective management structure must bring together different entities, facilities and people, but at the same time keep interfaces, costs and schedule under strict control. This paper outlines the management concept for P/L OPS of a planned European SL mission. The proposal draws on the relevant experience in Europe, acquired via the ESA/NASA mission SL-1, the execution of two German SL missions, and the involvement in, or support of, several NASA missions.

  6. A collaborative environment for shared classification of neuroimages: The experience of the Colibri project.

    PubMed

    Alloni, Anna; Lanzola, Giordano; Triulzi, Fabio; Bellazzi, Riccardo; Reni, Gianluigi

    2015-08-01

    The Colibri project is introduced, whose aim is to set up a shared database of Magnetic Resonance images of pediatric patients affected by rare neurological disorders. The project involves 19 Italian centers of excellence in pediatric neuro-radiology and is supported by the nationwide coordinating center for the Information and Communication Technology research infrastructure. After a first year devoted to design and implementation, the system went into service in November 2014 at the centers involved in the project. This paper illustrates an initial assessment of user perception and provides some preliminary statistics about the system's use.

  7. LAVA web-based remote simulation: enhancements for education and technology innovation

    NASA Astrophysics Data System (ADS)

    Lee, Sang Il; Ng, Ka Chun; Orimoto, Takashi; Pittenger, Jason; Horie, Toshi; Adam, Konstantinos; Cheng, Mosong; Croffie, Ebo H.; Deng, Yunfei; Gennari, Frank E.; Pistor, Thomas V.; Robins, Garth; Williamson, Mike V.; Wu, Bo; Yuan, Lei; Neureuther, Andrew R.

    2001-09-01

    The Lithography Analysis using Virtual Access (LAVA) web site at http://cuervo.eecs.berkeley.edu/Volcano/ has been enhanced with new optical and deposition applets, graphical infrastructure and linkage to parallel execution on networks of workstations. More than ten new graphical user interface applets have been designed to support education, illustrate novel concepts from research, and explore the usage of parallel machines. These applets have been improved through feedback and classroom use. Over the last year, LAVA provided industry and other academic communities with 1,300 sessions and 700 rigorous simulations per month among the SPLAT, SAMPLE2D, SAMPLE3D, TEMPEST, STORM, and BEBS simulators.

  8. Data discovery and data processing for environmental research infrastructures

    NASA Astrophysics Data System (ADS)

    Los, Wouter; Beranzoli, Laura; Corriero, Giuseppe; Cossu, Roberto; Fiore, Nicola; Hardisty, Alex; Legré, Yannick; Pagano, Pasquale; Puglisi, Giuseppe; Sorvari, Sanna; Turunen, Esa

    2013-04-01

    The European ENVRI project (Common Operations of Environmental Research Infrastructures) is addressing common ICT solutions for the research infrastructures selected in the ESFRI Roadmap. More specifically, the project is looking for solutions that will assist interdisciplinary users who want to benefit from the data and other services of more than a single research infrastructure. However, the infrastructure architectures, the data, data formats, scales and granularity are very different. Indeed, they deal with diverse scientific disciplines, from plate tectonics, the deep sea, and the sea and land surface up to the atmosphere and troposphere, from the non-living to the living environment, and with a variety of instruments producing increasingly larger amounts of data. One approach in the ENVRI project is to design a common Reference Model that will serve to promote infrastructure interoperability at the data, technical and service levels. The analysis of the characteristics of the environmental research infrastructures assisted in developing the Reference Model, which is also an example for comparable infrastructures worldwide. Still, it is already important for users to have facilities for multi-disciplinary data discovery and data processing; the rise of systems research, addressing Earth as a single complex and coupled system, requires such capabilities. So another approach in the project is to adapt existing ICT solutions to short-term applications. This is being tested in a few case studies. One is looking for possible coupled processes following a volcanic eruption in the vertical column from the deep sea to the troposphere. Another deals with volcanic or human impacts on atmospheric and sea CO2 pressure and the implications for ocean acidification, marine biodiversity and ecosystems. A third deals with the variety of sensor and satellite data sensing the area around a volcanic cone. Preliminary results from these studies will be reported. The common results will assist in shaping more generic solutions to be adopted by the appropriate research infrastructures.

  9. CTserver: A Computational Thermodynamics Server for the Geoscience Community

    NASA Astrophysics Data System (ADS)

    Kress, V. C.; Ghiorso, M. S.

    2006-12-01

    The CTserver platform is an Internet-based computational resource that provides on-demand services in Computational Thermodynamics (CT) to a diverse geoscience user base. This NSF-supported resource can be accessed at ctserver.ofm-research.org. The CTserver infrastructure leverages a high-quality and rigorously tested software library of routines for computing equilibrium phase assemblages and for evaluating internally consistent thermodynamic properties of materials, e.g. mineral solid solutions and a variety of geological fluids, including magmas. Thermodynamic models are currently available for 167 phases. Recent additions include Duan, Møller and Weare's model for supercritical C-O-H-S, extended to include SO2 and S2 species, and an entirely new associated solution model for O-S-Fe-Ni sulfide liquids. This software library is accessed via the CORBA Internet protocol for client-server communication. CORBA provides a standardized, object-oriented, language and platform independent, fast, low-bandwidth interface to phase property modules running on the server cluster. Network transport, language translation and resource allocation are handled by the CORBA interface. Users access server functionality in two principal ways. Clients written as browser-based Java applets may be downloaded which provide specific functionality such as retrieval of thermodynamic properties of phases, computation of phase equilibria for systems of specified composition, or modeling the evolution of these systems along some particular reaction path. This level of user interaction requires minimal programming effort and is ideal for classroom use. A more universal and flexible mode of CTserver access involves making remote procedure calls from user programs directly to the server public interface. The CTserver infrastructure relieves the user of the burden of implementing and testing the often complex thermodynamic models of real liquids and solids. A pilot application of this distributed architecture involves CFD computation of magma convection at Volcan Villarrica with magma properties and phase proportions calculated at each spatial node and at each time step via distributed function calls to MELTS-objects executing on the CTserver. Documentation and programming examples are provided at http://ctserver.ofm-research.org.

  10. Doing Your Science While You're in Orbit

    NASA Astrophysics Data System (ADS)

    Green, Mark L.; Miller, Stephen D.; Vazhkudai, Sudharshan S.; Trater, James R.

    2010-11-01

    Large-scale neutron facilities such as the Spallation Neutron Source (SNS) located at Oak Ridge National Laboratory need easy-to-use access to Department of Energy Leadership Computing Facilities and experiment repository data. The Orbiter thick- and thin-client and its supporting Service Oriented Architecture (SOA) based services (available at https://orbiter.sns.gov) consist of standards-based components that are reusable and extensible for accessing high performance computing, data and computational grid infrastructure, and cluster-based resources easily from a user-configurable interface. The primary Orbiter system goals consist of (1) developing infrastructure for the creation and automation of virtual instrumentation experiment optimization, (2) developing user interfaces for thin- and thick-client access, (3) providing a prototype incorporating major instrument simulation packages, and (4) facilitating neutron science community access and collaboration. Secure Orbiter SOA authentication and authorization is achieved through the developed Virtual File System (VFS) services, which use Role-Based Access Control (RBAC) for data repository file access, thin- and thick-client functionality and application access, and computational job workflow management. The VFS Relational Database Management System (RDBMS) consists of approximately 45 database tables describing 498 user accounts with 495 groups over 432,000 directories with 904,077 repository files. Over 59 million NeXus file metadata records are associated with the 12,800 unique NeXus file field/class names generated from the 52,824 repository NeXus files. Services are currently available that enable (a) summary dashboards of data repository status with Quality of Service (QoS) metrics, (b) full-text search of data repository NeXus file field/class names within a Google-like interface, (c) a fully functional RBAC browser for the read-only data repository and shared areas, (d) user/group-defined and shared metadata for data repository files, and (e) user, group, repository, and web 2.0 based global positioning with additional service capabilities. The progress of integrating the SNS-based Orbiter SOA with the Distributed Data Analysis for Neutron Scattering Experiments (DANSE) software development project is summarized, with an emphasis on DANSE Central Services and the Virtual Neutron Facility (VNF). Additionally, DANSE's utilization of the Orbiter SOA authentication, authorization, and data transfer services, following best-practice implementations, is presented.

  11. Open Clients for Distributed Databases

    NASA Astrophysics Data System (ADS)

    Chayes, D. N.; Arko, R. A.

    2001-12-01

    We are actively developing a collection of open source example clients that demonstrate the use of our "back end" data management infrastructure. The data management system is reported elsewhere at this meeting (Arko and Chayes: A Scaleable Database Infrastructure). In addition to their primary goal of being examples for others to build upon, some of these clients may have limited utility in themselves. More information about the clients and the data infrastructure is available online at http://data.ldeo.columbia.edu. The examples to be demonstrated include several web-based clients: those developed for the Community Review System of the Digital Library for Earth System Education, a real-time watch-stander's log book, an offline interface for using log book entries, a simple client to search multibeam metadata, and others. These are Internet-enabled, generally web-based front ends that support searches against one or more relational databases using industry-standard SQL queries. In addition to the web-based clients, simple SQL searches from within Excel and similar applications will be demonstrated. By defining, documenting and publishing a clear interface to the fully searchable databases, it becomes relatively easy to construct client interfaces that are optimized for specific applications, in comparison to building a monolithic data and user interface system.
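
    As a bare-bones illustration of the client pattern described, a thin front end issuing industry-standard SQL against a published database interface, the sketch below uses Python's built-in sqlite3 module as a stand-in for the real server; the table and column names are hypothetical.

```python
# Minimal SQL client against a placeholder metadata database.
import sqlite3

conn = sqlite3.connect("metadata.db")  # stand-in for the published DBMS interface
cur = conn.cursor()
cur.execute(
    "SELECT cruise_id, start_date FROM multibeam_metadata "
    "WHERE region = ? ORDER BY start_date",  # parameterised, standard SQL
    ("North Atlantic",),
)
for cruise_id, start_date in cur.fetchall():
    print(cruise_id, start_date)
conn.close()
```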

  12. Software-Defined Network Solutions for Science Scenarios: Performance Testing Framework and Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Settlemyer, Bradley; Kettimuthu, R.; Boley, Josh

    High-performance scientific workflows utilize supercomputers, scientific instruments, and large storage systems. Their execution requires fast setup of a small number of dedicated network connections across the geographically distributed facility sites. We present Software-Defined Network (SDN) solutions consisting of site daemons that use dpctl, Floodlight, ONOS, or OpenDaylight controllers to set up these connections. The development of these SDN solutions could be quite disruptive to the infrastructure, while requiring close coordination among multiple sites; in addition, the large number of possible controller and device combinations to investigate could make the infrastructure unavailable to regular users for extended periods of time. In response, we develop a Virtual Science Network Environment (VSNE) using virtual machines, Mininet, and custom scripts that supports the development, testing, and evaluation of SDN solutions without the constraints and expenses of multi-site physical infrastructures; furthermore, the chosen solutions can be directly transferred to production deployments. By complementing VSNE with a physical testbed, we conduct targeted performance tests of various SDN solutions to help choose the best candidates. In addition, we propose a switching response method to assess the setup times and throughput performance of different SDN solutions, and present experimental results that show their advantages and limitations.
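
    As a hedged sketch of the kind of virtual test network VSNE assembles with Mininet, the snippet below defines a tiny two-site topology attached to a remote SDN controller; the topology and the controller address are invented, not the paper's configuration.

```python
# Two-switch virtual topology managed by a remote SDN controller (Mininet API).
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import Topo

class TwoSiteTopo(Topo):
    def build(self):
        s1, s2 = self.addSwitch("s1"), self.addSwitch("s2")
        h1, h2 = self.addHost("h1"), self.addHost("h2")
        self.addLink(h1, s1)
        self.addLink(h2, s2)
        self.addLink(s1, s2)  # the inter-site path the controller sets up

net = Mininet(topo=TwoSiteTopo(),
              controller=lambda name: RemoteController(name, ip="127.0.0.1"))
net.start()
print(net.pingAll())  # connectivity check through the controller
net.stop()
```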

  13. Development and implementation of an Integrated Water Resources Management System (IWRMS)

    NASA Astrophysics Data System (ADS)

    Flügel, W.-A.; Busch, C.

    2011-04-01

    One of the innovative objectives of the EC project BRAHMATWINN was the development of a stakeholder-oriented Integrated Water Resources Management System (IWRMS). The toolset integrates the findings of the project and presents them in a user-friendly way for decision support in sustainable integrated water resources management (IWRM) in river basins. IWRMS is a framework that integrates different types of basin information and supports the development of IWRM options for climate change mitigation. It is based on the River Basin Information System (RBIS) data models and delivers a graphical user interface for stakeholders. A special interface was developed for the integration of the enhanced DANUBIA model input and the NetSyMod model with its Mulino decision support system (mulino mDss) component. The web-based IWRMS contains and combines different types of data and methods to provide river basin data and information for decision support. IWRMS is based on a three-tier software framework which uses (i) HTML/JavaScript at the client tier, (ii) the PHP programming language for the application tier, and (iii) a PostgreSQL/PostGIS database tier to manage and store all data, except the DANUBIA modelling raw data, which are file based and registered in the database tier. All three tiers can reside on one or different computers and are adapted to the local hardware infrastructure. Both IWRMS and RBIS are built on Open Source Software (OSS) components, and flexible, time-saving access to the database is provided by web-based interfaces for data visualization and retrieval. The IWRMS is accessible via the BRAHMATWINN homepage: http://www.brahmatwinn.uni-jena.de, and a user manual for the RBIS is available for download as well.

  14. Tavaxy: Integrating Taverna and Galaxy workflows with cloud computing support

    PubMed Central

    2012-01-01

    Background: Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. Results: In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Conclusions: Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a cloud-enabled web-interface or downloaded and installed to run within the user's local environment. All resources related to Tavaxy are available at http://www.tavaxy.org. PMID:22559942

  15. Siberian Earth System Science Cluster - A web-based Geoportal to provide user-friendly Earth Observation Products for supporting NEESPI scientists

    NASA Astrophysics Data System (ADS)

    Eberle, J.; Gerlach, R.; Hese, S.; Schmullius, C.

    2012-04-01

    To provide earth observation products for the area of Siberia, the Siberian Earth System Science Cluster (SIB-ESS-C) was established as a spatial data infrastructure at the University of Jena (Germany), Department for Earth Observation. This spatial data infrastructure implements standards published by the Open Geospatial Consortium (OGC) and the International Organization for Standardization (ISO) for data discovery, data access, data processing and data analysis. The objective of SIB-ESS-C is to facilitate environmental research and Earth system science in Siberia. The project region covers the entire Asian part of the Russian Federation, approximately between 58°E - 170°W and 48°N - 80°N. To provide discovery, access and analysis services, a web portal was published for searching and visualising the available data. This web portal is based on current web technologies such as AJAX, the Drupal Content Management System as backend software, and a user-friendly interface with drag-and-drop and other mouse events. To offer a wide range of regularly updated earth observation products, several products from the MODIS sensor aboard the Aqua and Terra satellites are processed. A direct connection to NASA archive servers makes it possible to download MODIS Level 3 and 4 products and integrate them into the SIB-ESS-C infrastructure. These data are delivered in the Hierarchical Data Format (HDF). For visualisation and further analysis, the data are reprojected and converted to GeoTIFF, and global products are clipped to the project area. All these steps are implemented as an automatic process chain (one step of which is sketched below); whenever new MODIS data are available within the infrastructure, this process chain is executed. With the link to a MODIS catalogue system, the system gets new data daily. With the implemented analysis processes, time-series data can be analysed, for example to plot a trend or to compare different time series against one another. Scientists working in this area with MODIS data can use this service through the web portal instead of manually searching the NASA archive, processing the data themselves and downloading the results: the portal provides the regularly updated products directly.
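
    The abstract does not name the tooling behind the process chain; as one plausible sketch of the reproject-convert-clip step, the snippet below uses GDAL's Python bindings with a hypothetical MODIS HDF subdataset name and placeholder bounds for the project window.

```python
# Reproject an HDF subdataset to GeoTIFF and clip it to a study area (sketch).
from osgeo import gdal

gdal.UseExceptions()

# hypothetical subdataset string; real names come from gdal.Info() on the HDF file
src = "HDF4_EOS:EOS_GRID:MOD13A2.hdf:MODIS_Grid_16DAY:NDVI"

gdal.Warp(
    "ndvi_clip.tif",
    src,
    dstSRS="EPSG:4326",                      # reproject to geographic lat/lon
    outputBounds=(60.0, 48.0, 140.0, 75.0),  # placeholder window (lon/lat)
    format="GTiff",
)
```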

  16. Geospatial-enabled Data Exploration and Computation through Data Infrastructure Building Blocks

    NASA Astrophysics Data System (ADS)

    Song, C. X.; Biehl, L. L.; Merwade, V.; Villoria, N.

    2015-12-01

    Geospatial data are present everywhere today with the proliferation of location-aware computing devices and sensors. This is especially true in the scientific community, where large amounts of data are driving research and education activities in many domains. Collaboration over geospatial data, for example in modeling, data analysis and visualization, must still overcome the barriers of specialized software and expertise, among other challenges. The GABBs project aims at enabling broader access to geospatial data exploration and computation by developing spatial data infrastructure building blocks that leverage the capabilities of the end-to-end application service and virtualized computing framework in HUBzero. Funded by the NSF Data Infrastructure Building Blocks (DIBBs) initiative, GABBs provides a geospatial data architecture that integrates spatial data management, mapping and visualization, and will be made available as open source. The outcome of the project will enable users to rapidly create tools and share geospatial data and tools on the web for interactive exploration of data, without requiring significant software development skills, GIS expertise or IT administrative privileges. This presentation will describe the development of the geospatial data infrastructure building blocks and the scientific use cases that help drive the software development, as well as seek feedback from the user communities.

  17. Enabling end-user network monitoring via the multicast consolidated proxy monitor

    NASA Astrophysics Data System (ADS)

    Kanwar, Anshuman; Almeroth, Kevin C.; Bhattacharyya, Supratik; Davy, Matthew

    2001-07-01

    The debugging of problems in IP multicast networks relies heavily on an eclectic set of stand-alone tools. These tools traditionally neither provide a consistent interface nor generate readily interpretable results. We propose the "Multicast Consolidated Proxy Monitor" (MCPM), an integrated system for collecting, analyzing and presenting multicast monitoring results to both the end user and the network operator at the user's Internet Service Provider (ISP). The MCPM accesses network state information not normally visible to end users and acts as a proxy for disseminating this information. Functionally, through this architecture, we aim to (a) provide a view of the multicast network at varying levels of granularity, (b) provide end users with a limited ability to query the multicast infrastructure in real time, and (c) protect the infrastructure from an overwhelming amount of monitoring load through load control. Operationally, our scheme allows scaling to the ISP's dimensions, adaptability to new protocols (introduced as multicast evolves), threshold detection for crucial parameters, and an access-controlled, customizable interface design. Although the multicast scenario is used to illustrate the benefits of consolidated monitoring, the ultimate aim is to scale the scheme to unicast IP networks.

  18. VESPA: A community-driven Virtual Observatory in Planetary Science

    NASA Astrophysics Data System (ADS)

    Erard, S.; Cecconi, B.; Le Sidaner, P.; Rossi, A. P.; Capria, M. T.; Schmitt, B.; Génot, V.; André, N.; Vandaele, A. C.; Scherf, M.; Hueso, R.; Määttänen, A.; Thuillot, W.; Carry, B.; Achilleos, N.; Marmo, C.; Santolik, O.; Benson, K.; Fernique, P.; Beigbeder, L.; Millour, E.; Rousseau, B.; Andrieu, F.; Chauvin, C.; Minin, M.; Ivanoski, S.; Longobardo, A.; Bollard, P.; Albert, D.; Gangloff, M.; Jourdane, N.; Bouchemit, M.; Glorian, J.-M.; Trompet, L.; Al-Ubaidi, T.; Juaristi, J.; Desmars, J.; Guio, P.; Delaa, O.; Lagain, A.; Soucek, J.; Pisa, D.

    2018-01-01

    The VESPA data access system focuses on applying Virtual Observatory (VO) standards and tools to Planetary Science. Building on a previous EC-funded Europlanet program, it reached maturity during the first year of the new Europlanet 2020 program (started in 2015 for 4 years). The infrastructure has been upgraded to handle many fields of Solar System studies, with a focus both on users and on data providers. This paper describes the broad outlines of the current VESPA infrastructure as seen by a potential user, and provides examples of real use cases in several thematic areas. These use cases are also intended to identify hints for future developments and adaptations of VO tools to Planetary Science.

  19. Hydropower licensing and evolving climate: climate knowledge to support risk assessment for long-term infrastructure decisions

    NASA Astrophysics Data System (ADS)

    Ray, A. J.; Walker, S. H.; Trainor, S. F.; Cherry, J. E.

    2014-12-01

    This presentation focuses on linking climate knowledge to the complicated decision process for hydropower dam licensing, and to the affected parties involved in that process. The U.S. Federal Energy Regulatory Commission issues licenses for nonfederal hydroelectric operations, typically for 30-50 years, and infrastructure lifespans are longer still, a time frame similar to that of the anticipated risks of changing climate and hydrology. Resources managed by other federal and state agencies such as the NOAA National Marine Fisheries Service may be affected by new or re-licensed projects. The federal Integrated Licensing Process gives affected parties the opportunity to recommend issues for consultative investigation and possible mitigation, such as impacts to downstream fisheries. New or re-licensed projects have the potential to "pre-adapt" by considering and incorporating risks of climate change into their planned operations as license terms and conditions. Hundreds of hydropower facilities will be up for relicensing in the coming years (over 100 in the western Sierra Nevada alone, along with large-scale water projects such as the proposed Lake Powell Pipeline), as well as proposed new dams such as the Susitna project in Alaska. Therefore, there is a need for comprehensive guidance on delivering climate analysis to support understanding of the risks of hydropower projects to other affected resources, and decisions on licensing. While each project will have a specific context, many of the questions will be similar. We will also discuss best practices for the use of climate science in water project planning and management, and how creating the best and most appropriate science is still a developing art. We will discuss the potential reliability of that science for consideration in long-term planning, licensing, and mitigation planning for those projects. For science to be "actionable," it must be understood and accepted by the potential users. This process is a negotiation, with climate scientists needing to understand and respond to the concerns of users, and users developing a better understanding of the state of climate science in order to make an informed choice. We will also discuss what is needed to streamline providing that analysis for the many re-licensing decisions expected in the upcoming years.

  20. Consolidation and development roadmap of the EMI middleware

    NASA Astrophysics Data System (ADS)

    Kónya, B.; Aiftimiei, C.; Cecchi, M.; Field, L.; Fuhrmann, P.; Nilsen, J. K.; White, J.

    2012-12-01

    Scientific research communities have benefited recently from the increasing availability of computing and data infrastructures with unprecedented capabilities for large scale distributed initiatives. These infrastructures are largely defined and enabled by the middleware they deploy. One of the major issues in the current usage of research infrastructures is the need to use similar but often incompatible middleware solutions. The European Middleware Initiative (EMI) is a collaboration of the major European middleware providers ARC, dCache, gLite and UNICORE. EMI aims to: deliver a consolidated set of middleware components for deployment in EGI, PRACE and other Distributed Computing Infrastructures; extend the interoperability between grids and other computing infrastructures; strengthen the reliability of the services; establish a sustainable model to maintain and evolve the middleware; fulfil the requirements of the user communities. This paper presents the consolidation and development objectives of the EMI software stack covering the last two years. The EMI development roadmap is introduced along the four technical areas of compute, data, security and infrastructure. The compute area plan focuses on consolidation of standards and agreements through a unified interface for job submission and management, a common format for accounting, the wide adoption of GLUE schema version 2.0 and the provision of a common framework for the execution of parallel jobs. The security area is working towards a unified security model and lowering the barriers to Grid usage by allowing users to gain access with their own credentials. The data area is focusing on implementing standards to ensure interoperability with other grids and industry components and to reuse already existing clients in operating systems and open source distributions. One of the highlights of the infrastructure area is the consolidation of the information system services via the creation of a common information backbone.
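
    As an illustration of the compute-area consolidation goals, a minimal sketch of what a common accounting record shared across middleware stacks might look like; the field names here are assumptions for illustration, not the format EMI actually standardized:

      from dataclasses import dataclass, asdict
      import json

      @dataclass
      class UsageRecord:
          """Hypothetical common accounting record for a completed grid job."""
          job_id: str
          user_dn: str          # credential the user authenticated with
          site: str
          wall_seconds: int
          cpu_seconds: int
          middleware: str       # e.g. 'ARC', 'gLite', 'UNICORE'

      record = UsageRecord('job-0001', '/DC=eu/CN=jane', 'EGI-SITE-01', 3600, 3480, 'ARC')
      print(json.dumps(asdict(record)))  # one serialization every provider could emit

    The point of such a shared format is that accounting consumers need only one parser, regardless of which middleware executed the job.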

  1. Final report for the Integrated and Robust Security Infrastructure (IRSI) laboratory directed research and development project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, R.L.; Hamilton, V.A.; Istrail, G.G.

    1997-11-01

    This report describes the results of a Sandia-funded laboratory-directed research and development project titled "Integrated and Robust Security Infrastructure" (IRSI). IRSI was to provide a broad range of commercial-grade security services to any software application. IRSI has two primary goals: application transparency and manageable public key infrastructure. IRSI must provide its security services to any application without the need to modify the application to invoke the security services. Public key mechanisms are well suited for a network with many end users and systems. There are many issues that make it difficult to deploy and manage a public key infrastructure. IRSI addressed some of these issues to create a more manageable public key infrastructure.

  2. Service management at CERN with Service-Now

    NASA Astrophysics Data System (ADS)

    Toteva, Z.; Alvarez Alonso, R.; Alvarez Granda, E.; Cheimariou, M.-E.; Fedorko, I.; Hefferman, J.; Lemaitre, S.; Clavo, D. Martin; Martinez Pedreira, P.; Pera Mira, O.

    2012-12-01

    The Information Technology (IT) and the General Services (GS) departments at CERN have decided to combine their extensive experience in support for IT and non-IT services towards a common goal - to bring the services closer to the end user based on Information Technology Infrastructure Library (ITIL) best practice. The collaborative efforts have so far produced definitions for the incident and request fulfilment processes, which are based on a unique two-dimensional service catalogue that combines both the user and the support-team views of all services. After an extensive evaluation of the available industrial solutions, Service-Now was selected as the tool to implement the CERN Service-Management processes. The initial release of the tool provided an attractive web portal for the users and successfully implemented two basic ITIL processes: incident management and request fulfilment. It also integrated with the CERN personnel databases and the LHC GRID ticketing system. Subsequent releases continued to integrate with other third-party tools, such as the facility management systems of CERN, and to implement new processes such as change management. Independently from those new development activities, it was decided to simplify the request fulfilment process in order to achieve easier acceptance by the CERN user community. We believe that, due to the high modularity of the Service-Now tool, the parallel design of ITIL processes (e.g., event management) and non-ITIL processes (e.g., computer centre hardware management) will be easily achieved. This presentation will describe the experience that we have acquired and the techniques that were followed to achieve the CERN customization of the Service-Now tool.
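
    A minimal sketch of the two-dimensional catalogue idea, with invented service and team names: each entry is addressable both by the user-facing service and by the support team behind it, so the same catalogue serves both views:

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class CatalogueEntry:
          user_service: str    # what the end user sees ("Mail", "Wi-Fi", ...)
          support_team: str    # who resolves tickets for it
          process: str         # ITIL process handling the ticket

      CATALOGUE = [
          CatalogueEntry('Mail', 'IT-Mail-Team', 'incident management'),
          CatalogueEntry('Mail', 'IT-Mail-Team', 'request fulfilment'),
          CatalogueEntry('Conference Rooms', 'GS-Facilities', 'request fulfilment'),
      ]

      # User view: everything one can report or request for a given service.
      print([e.process for e in CATALOGUE if e.user_service == 'Mail'])
      # Support view: everything a given team is responsible for.
      print([e.user_service for e in CATALOGUE if e.support_team == 'GS-Facilities'])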

  3. Identifying Audiences of E-Infrastructures - Tools for Measuring Impact

    PubMed Central

    van den Besselaar, Peter

    2012-01-01

    Research evaluation should take into account the intended scholarly and non-scholarly audiences of the research output. This holds too for research infrastructures, which often aim at serving a large variety of audiences. With research and research infrastructures moving to the web, new possibilities are emerging for evaluation metrics. This paper proposes a feasible indicator for measuring the scope of audiences who use web-based e-infrastructures, as well as the frequency of use. In order to apply this indicator, a method is needed for classifying visitors to e-infrastructures into relevant user categories. The paper proposes such a method, based on an inductive logic program and a Bayesian classifier. The method is tested, showing that the visitors are efficiently classified with 90% accuracy into the selected categories. Consequently, the method can be used to evaluate the use of the e-infrastructure within and outside academia. PMID:23239995
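
    A minimal sketch of the classification step, assuming scikit-learn and invented session features; the paper's actual feature set and its inductive-logic component are not reproduced here:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB

      # Hypothetical training data: visitor sessions described by coarse features.
      sessions = [
          'edu-domain cited-page long-session',
          'edu-domain api-access bulk-download',
          'com-domain landing-page short-session',
          'gov-domain report-download long-session',
      ]
      labels = ['academic', 'academic', 'industry', 'government']

      vec = CountVectorizer()
      X = vec.fit_transform(sessions)
      clf = MultinomialNB().fit(X, labels)   # the Bayesian-classifier half of the method

      # Classify a new visitor session into a user category.
      print(clf.predict(vec.transform(['com-domain api-access short-session'])))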

  4. Modeling and Managing Risk in Billing Infrastructures

    NASA Astrophysics Data System (ADS)

    Baiardi, Fabrizio; Telmon, Claudio; Sgandurra, Daniele

    This paper discusses risk modeling and risk management in information and communications technology (ICT) systems for which the attack impact distribution is heavy tailed (e.g., power law distribution) and the average risk is unbounded. Systems with these properties include billing infrastructures used to charge customers for services they access. Attacks against billing infrastructures can be classified as peripheral attacks and backbone attacks. The goal of a peripheral attack is to tamper with user bills; a backbone attack seeks to seize control of the billing infrastructure. The probability distribution of the overall impact of an attack on a billing infrastructure also has a heavy-tailed curve. This implies that the probability of a massive impact cannot be ignored and that the average impact may be unbounded - thus, even the most expensive countermeasures would be cost effective. Consequently, the only strategy for managing risk is to increase the resilience of the infrastructure by employing redundant components.
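
    A short worked example makes the unbounded-average claim concrete; the Pareto form and exponent range below are illustrative assumptions, not parameters from the paper. If the impact X of an attack follows a power law

      \Pr[X > x] = \left(\frac{x}{x_m}\right)^{-\alpha}, \qquad x \ge x_m,\ \alpha > 0,

    then the expected impact is

      E[X] = \int_{x_m}^{\infty} \alpha\, x_m^{\alpha}\, x^{-\alpha}\, dx = \frac{\alpha\, x_m}{\alpha - 1} \quad \text{for } \alpha > 1,

    and the integral diverges for \alpha \le 1: the average impact is unbounded, which is why even arbitrarily expensive countermeasures can remain cost-effective in expectation.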

  5. A roadmap for the implementation of mHealth innovations for image-based diagnostic support in clinical and public-health settings: a focus on front-line health workers and health-system organizations.

    PubMed

    Wallis, Lee; Hasselberg, Marie; Barkman, Catharina; Bogoch, Isaac; Broomhead, Sean; Dumont, Guy; Groenewald, Johann; Lundin, Johan; Norell Bergendahl, Johan; Nyasulu, Peter; Olofsson, Maud; Weinehall, Lars; Laflamme, Lucie

    2017-06-01

    Diagnostic support for clinicians is a domain of application of mHealth technologies with a slow uptake despite promising opportunities, such as image-based clinical support. The absence of a roadmap for the adoption and implementation of these types of applications is a further obstacle. This article provides the groundwork for a roadmap to implement image-based support for clinicians, focusing on how to overcome potential barriers affecting front-line users, the health-care organization and the technical system. A consensual approach was used during a two-day roundtable meeting gathering a convenience sample of stakeholders (n = 50) from the clinical, research, policymaking and business fields and from different countries. A series of sessions was held, including small group discussions followed by reports to the plenary. Session moderators synthesized the reports into a number of theme-specific strategies that were presented to the participants again at the end of the meeting for them to determine their individual priority. Four to seven strategies were derived from each thematic session. Once reviewed and prioritized by the participants, some received greater priority than others. As an example, of the seven strategies related to the front-line users, three received greater priority: the need for any system to significantly add value for the users; the usability of mHealth apps; and the goodness-of-fit into the work flow. Further, three aspects cut across the themes: ease of integration of the mHealth applications; a solid ICT infrastructure and support network; and interoperability. Research and development in image-based diagnostics paves the way to making health care more accessible and more equitable. The successful implementation of these solutions will necessitate a seamless introduction into routines, adequate technical support and significant added value.

  6. Cloud flexibility using DIRAC interware

    NASA Astrophysics Data System (ADS)

    Fernandez Albor, Víctor; Seco Miguelez, Marcos; Fernandez Pena, Tomas; Mendez Muñoz, Victor; Saborido Silva, Juan Jose; Graciani Diaz, Ricardo

    2014-06-01

    Communities at different locations run their computing jobs on dedicated infrastructures without the need to worry about software, hardware or even the site where their programs are going to be executed. Nevertheless, this usually implies that they are restricted to certain types or versions of an Operating System, because either their software needs a specific version of a system library or a specific platform is required by the collaboration to which they belong. In this scenario, if a data center wants to serve software to incompatible communities, it has to split its physical resources among those communities. This splitting inevitably leads to an underuse of resources, because the data center is bound to have periods where one or more of its subclusters are idle. It is in this situation that Cloud Computing provides the flexibility and reduction in computational cost that data centers are searching for. This paper describes a set of realistic tests that we ran on one such implementation. The tests comprise software from three different HEP communities (Auger, LHCb and QCD phenomenologists) and the Parsec Benchmark Suite, running on one or more of three Linux flavors (SL5, Ubuntu 10.04 and Fedora 13). The implemented infrastructure has, at the cloud level, CloudStack, which manages the virtual machines (VMs) and the hosts on which they run, and, at the user level, the DIRAC framework along with a VM extension that submits, monitors and keeps track of the user jobs and also requests CloudStack to start or stop the necessary VMs. In this infrastructure, the community software is distributed via CernVM-FS, which has proven to be a reliable and scalable software distribution system. With the resulting infrastructure, users can send their jobs transparently to the data center. The main purpose of this system is the creation of a flexible, multiplatform cluster with a scalable method of software distribution for several VOs. Users from different communities need not care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user.
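
    A minimal sketch of submitting a job through the DIRAC Python API, assuming a configured DIRAC client installation; the executable name and platform tag are placeholders, not values taken from the paper:

      # Assumes a configured DIRAC client environment.
      from DIRAC.Interfaces.API.Dirac import Dirac
      from DIRAC.Interfaces.API.Job import Job

      job = Job()
      job.setName('community-analysis')
      job.setExecutable('run_analysis.sh')          # hypothetical payload script
      job.setPlatform('Linux_x86_64_glibc-2.12')    # hypothetical platform tag

      dirac = Dirac()
      result = dirac.submitJob(job)   # the VM extension matches or boots a suitable VM
      print(result['Value'] if result['OK'] else result['Message'])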

  7. Twenty-Five Year Site Plan FY2013 - FY2037

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, William H.

    2012-07-12

    Los Alamos National Laboratory (the Laboratory) is the nation's premier national security science laboratory. Its mission is to develop and apply science and technology to ensure the safety, security, and reliability of the United States (U.S.) nuclear stockpile; reduce the threat of weapons of mass destruction, proliferation, and terrorism; and solve national problems in defense, energy, and the environment. The fiscal year (FY) 2013-2037 Twenty-Five Year Site Plan (TYSP) is a vital component for planning to meet the National Nuclear Security Administration (NNSA) commitment to ensure the U.S. has a safe, secure, and reliable nuclear deterrent. The Laboratory also uses the TYSP as an integrated planning tool to guide development of an efficient and responsive infrastructure that effectively supports the Laboratory's missions and workforce. Emphasizing the Laboratory's core capabilities, this TYSP reflects the Laboratory's role as a prominent contributor to NNSA missions through its programs and campaigns. The Laboratory is aligned with Nuclear Security Enterprise (NSE) modernization activities outlined in the NNSA Strategic Plan (May 2011) which include: (1) ensuring laboratory plutonium space effectively supports pit manufacturing and enterprise-wide special nuclear materials consolidation; (2) constructing the Chemistry and Metallurgy Research Replacement Nuclear Facility (CMRR-NF); (3) establishing shared user facilities to more cost effectively manage high-value, experimental, computational and production capabilities; and (4) modernizing enduring facilities while reducing the excess facility footprint. This TYSP is viewed by the Laboratory as a vital planning tool to develop an efficient and responsive infrastructure. Long range facility and infrastructure development planning are critical to assure sustainment and modernization. Out-year re-investment is essential for sustaining existing facilities, and will be re-evaluated on an annual basis. At the same time, major modernization projects will require new line-item funding. This document is, in essence, a roadmap that defines a path forward for the Laboratory to modernize, streamline, consolidate, and sustain its infrastructure to meet its national security mission.

  8. NASA GES DISC On-line Visualization and Analysis System for Gridded Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Leptoukh, Gregory G.; Berrick, S.; Rui, H.; Liu, Z.; Zhu, T.; Teng, W.; Shen, S.; Qin, J.

    2005-01-01

    The ability to use data stored in the current NASA Earth Observing System (EOS) archives for studying regional or global phenomena is highly dependent on having a detailed understanding of the data's internal structure and physical implementation. Gaining this understanding and applying it to data reduction is a time-consuming task that must be undertaken before the core investigation can begin. This is an especially difficult challenge when science objectives require users to deal with large multi-sensor data sets that are usually of different formats, structures, and resolutions. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has taken a major step towards meeting this challenge by developing an infrastructure with a Web interface that allows users to perform interactive analysis online without downloading any data: the GES-DISC Interactive Online Visualization and Analysis Infrastructure, or "Giovanni." Giovanni provides interactive online analysis tools to facilitate users' research. Several instances of this interface have been created to serve TRMM users, aerosol scientists, and Ocean Color and agriculture applications users. The first generation of these tools supports gridded data only. The user selects geophysical parameters, an area of interest and a time period, and the system generates an output on screen in a matter of seconds. The currently available output options are: area plots, averaged or accumulated over any available data period for any rectangular area; time plots, i.e. time series averaged over any rectangular area; Hovmoller plots, i.e. image views of longitude-time and latitude-time cross sections; ASCII output for all plot types; and image animation for area plots. Another analysis suite deals with parameter intercomparison: scatter plots, temporal correlation maps, GIS-compatible outputs, etc. This allows users to focus on data content (i.e. science parameters) and eliminates the need for expensive learning, development and processing tasks that are redundantly incurred by an archive's user community. The current implementation utilizes the GrADS-DODS Server (GDS), and provides subsetting and analysis services across the Internet for any GrADS-readable dataset. The subsetting capability allows users to retrieve a specified temporal and/or spatial subdomain from a large dataset, eliminating the need to download everything simply to access a small relevant portion of a dataset. The analysis capability allows users to retrieve the results of an operation applied to one or more datasets on the server. We use this approach to read pre-processed binary files and/or to read and extract the needed parts directly from HDF or HDF-EOS files. These subsets then serve as inputs into GrADS analysis scripts. Giovanni can be used in a wide variety of Earth science applications, such as the study and monitoring of climate and weather events, and modeling, and it can be easily configured for new applications.
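
    A minimal sketch of the server-side subsetting idea from a client's perspective, assuming xarray with OPeNDAP support; the dataset URL and variable name are hypothetical:

      import xarray as xr

      # Hypothetical GrADS-DODS/OPeNDAP endpoint; only the requested subset is transferred.
      URL = 'http://example.gov/dods/trmm_3b42_daily'

      ds = xr.open_dataset(URL)                        # lazy open, no bulk download
      subset = ds['precipitation'].sel(
          lat=slice(-10, 10), lon=slice(90, 150),      # spatial subdomain
          time=slice('2004-01-01', '2004-12-31'))      # temporal subdomain
      area_mean = subset.mean(dim=('lat', 'lon'))      # "time plot" style averaging
      area_mean.to_dataframe().to_csv('area_mean.csv') # ASCII-style output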

  9. The EPOS Architecture: Integrated Services for solid Earth Science

    NASA Astrophysics Data System (ADS)

    Cocco, Massimo; Consortium, Epos

    2013-04-01

    The European Plate Observing System (EPOS) represents a scientific vision and an IT approach in which innovative multidisciplinary research is made possible for a better understanding of the physical processes controlling earthquakes, volcanic eruptions, unrest episodes and tsunamis as well as those driving tectonics and Earth surface dynamics. EPOS has a long-term plan to facilitate integrated use of data, models and facilities from existing (but also new) distributed research infrastructures, for solid Earth science. One primary purpose of EPOS is to take full advantage of the new e-science opportunities becoming available. The aim is to obtain an efficient and comprehensive multidisciplinary research platform for the Earth sciences in Europe. The EPOS preparatory phase (EPOS PP), funded by the European Commission within the Capacities program, started on November 1st 2010 and has completed its first two years of activity. EPOS is presently mid-way through its preparatory phase and to date it has achieved all the objectives, milestones and deliverables planned in its roadmap towards construction. The EPOS mission is to integrate the existing research infrastructures (RIs) in solid Earth science, ensuring increased accessibility and usability of multidisciplinary data from monitoring networks, laboratory experiments and computational simulations. This is expected to enhance worldwide interoperability in the Earth Sciences and establish a leading, integrated European infrastructure offering services to researchers and other stakeholders. The Preparatory Phase aims at leveraging the project to the level of maturity required to implement the EPOS construction phase, with a defined legal structure, detailed technical planning and financial plan. We will present the EPOS architecture, which relies on the integration of the main outcomes from legal, governance and financial work following the strategic EPOS roadmap and according to the technical work done during the first two years in order to establish an effective implementation plan guaranteeing long term sustainability for the infrastructure and the associated services. We plan to describe the RIs to be integrated in EPOS and to illustrate the initial suite of integrated and thematic core services to be offered to the users. We will present examples of combined data analyses and we will address the importance of opening our research infrastructures to users from different communities. We will describe the use-cases identified so far in order to allow stakeholders and potential future users to understand and interact with the EPOS infrastructure. In this framework, we also discuss the global perspectives for data infrastructures in order to verify the coherency of the EPOS plans and present the EPOS contributions. We also discuss the international cooperation initiatives in which EPOS is involved, emphasizing the implications for solid Earth data infrastructures. In particular, EPOS and the satellite Earth Observation communities are collaborating in order to promote the integration of data from in-situ monitoring networks and satellite observing systems. Finally, we will also discuss the priorities for the third year of activity and the key actions planned to better involve users in EPOS. In particular, we will discuss the work done to finalize the design phase as well as the activities to start the validation and testing phase of the EPOS infrastructure.

  10. Use of Electronic Journals in Astronomy and Astrophysics Libraries and Information Centres in India: A Librarians' Perspective

    NASA Astrophysics Data System (ADS)

    Pathak, S. K.; Deshpande, N. J.; Rai, V.

    2010-10-01

    The objectives of this study were to find out whether librarians are satisfied with the present infrastructure for electronic journals and also to find out whether librarians are taking advantage of consortia. A structured questionnaire for librarians was divided into eight parts which were further sub-divided and designed to get information on various aspects of library infrastructure and usage of electronic journals. The goal was to find out the basic minimum infrastructure needed to provide access to electronic journals to a community of users and to facilitate communication in all major astronomy & astrophysics organizations in India. The study aims to highlight key insights from responses of librarians who are responsible for managing astronomy & astrophysics libraries in India and to identify the information needs of the users. Each community and discipline will have its own specific legacy of journal structure, reading, publishing, and researching practices, and time will show which kinds of e-journals are most effective and useful.

  11. Modernization of the NASA scientific and technical information program

    NASA Technical Reports Server (NTRS)

    Cotter, Gladys A.; Hunter, Judy F.; Ostergaard, K.

    1993-01-01

    The NASA Scientific and Technical Information Program utilizes a technology infrastructure assembled in the mid 1960s to late 1970s to process and disseminate its information products. When this infrastructure was developed it placed NASA as a leader in processing STI. The retrieval engine for the STI database was the first of its kind and was used as the basis for developing commercial, other U.S., and foreign government agency retrieval systems. Due to the combination of changes in user requirements and the tremendous increase in technological capabilities readily available in the marketplace, this infrastructure is no longer the most cost-effective or efficient methodology available. Consequently, the NASA STI Program is pursuing a modernization effort that applies new technology to current processes to provide near-term benefits to the user. In conjunction with this activity, we are developing a long-term modernization strategy designed to transition the Program to a multimedia, global 'library without walls.' Critical pieces of the long-term strategy include streamlining access to sources of STI by using advances in computer networking and graphical user interfaces; creating and disseminating technical information in various electronic media including optical disks, video, and full text; and establishing a Technology Focus Group to maintain a current awareness of emerging technology and to plan for the future.

  12. Social and structural aspects of the overdose risk environment in St. Petersburg, Russia.

    PubMed

    Green, Traci C; Grau, Lauretta E; Blinnikova, Ksenia N; Torban, Mikhail; Krupitsky, Evgeny; Ilyuk, Ruslan; Kozlov, Andrei; Heimer, Robert

    2009-05-01

    While overdose is a common cause of mortality among opioid injectors worldwide, little information exists on opioid overdoses or how context may influence overdose risk in Russia. This study sought to uncover social and structural aspects contributing to fatal overdose risk in St. Petersburg and assess prevention intervention feasibility. Twenty-one key informant interviews were conducted with drug users, treatment providers, toxicologists, police, and ambulance staff. Thematic coding of interview content was conducted to elucidate elements of the overdose risk environment. Several factors within St. Petersburg's environment were identified as shaping illicit drug users' risk behaviours and contributing to conditions of suboptimal response to overdose in the community. Most drug users live and experience overdoses at home, where family and home environment may mediate or moderate risk behaviours. The overdose risk environment is also worsened by inefficient emergency response infrastructure, insufficient cardiopulmonary resuscitation and naloxone training resources, and the preponderance of abstinence-based treatment approaches to the exclusion of other treatment modalities. However, attitudes of drug users and law enforcement officials generally support overdose prevention intervention feasibility. Modifiable aspects of the risk environment suggest community-based and structural interventions, including overdose response training for drug users and professionals that encompasses naloxone distribution to the users and equipping more ambulances with naloxone. Local social and structural elements influence risk environments for overdose. Interventions at the community and structural levels to prevent and respond to opioid overdoses are needed for and integral to reducing overdose mortality in St. Petersburg.

  13. Information Technology: Better Informed Decision Making Needed on Navy’s Next Generation Enterprise Network Acquisition

    DTIC Science & Technology

    2011-03-01

    million. To bridge the time frame between the end of the NMCI contract and the full transition to NGEN, DON awarded a $3.7 billion continuity of...leasehold improvements; and moveable infrastructure associated with local network operations. [Table fragment: End-User Hardware - December 2011 - Provide end-user...]

  14. Communication and Information Systems Infrastructure Skills.

    ERIC Educational Resources Information Center

    Maughan, George R.

    2001-01-01

    Asserts that users and managers of information technology (IT) in higher education institutions need evolving skills as well as an awareness of how changing technology makes them dependent on each other in new ways. Describes the roles and skills of the core IT workforce, department managers, and universal users, and addresses training needs. (EV)

  15. Timeline analysis tools for law enforcement

    NASA Astrophysics Data System (ADS)

    Mucks, John

    1997-02-01

    The timeline analysis system (TAS) was developed by Rome Laboratory to assist intelligence analysts with the comprehension of large amounts of information. Under the TAS program, data visualization, manipulation and reasoning tools were developed in close coordination with end users. The initial TAS prototype was developed for foreign command and control analysts at Space Command in Colorado Springs and was fielded there in 1989. The TAS prototype replaced manual paper timeline maintenance and analysis techniques and has become an integral part of Space Command's information infrastructure. TAS was designed to be domain independent and has been tailored and proliferated to a number of other users. The TAS program continues to evolve because of strong user support. User-funded enhancements and Rome Lab-funded technology upgrades have significantly enhanced TAS over the years and will continue to do so for the foreseeable future. TAS was recently provided to the New York State Police (NYSP) for evaluation using actual case data. Timeline analysis, it turns out, is a popular methodology in law enforcement. The evaluation has led to a more comprehensive application and evaluation project sponsored by the National Institute of Justice (NIJ). This paper describes the capabilities of TAS, the results of the initial NYSP evaluation and the plan for a more comprehensive NYSP evaluation.

  16. Integration of external metadata into the Earth System Grid Federation (ESGF)

    NASA Astrophysics Data System (ADS)

    Berger, Katharina; Levavasseur, Guillaume; Stockhause, Martina; Lautenschlager, Michael

    2015-04-01

    International projects with high data volumes usually disseminate their data in a federated data infrastructure, e.g. the Earth System Grid Federation (ESGF). The ESGF aims to make the geographically distributed data seamlessly discoverable and accessible. Additional data-related information is currently collected and stored in separate repositories by each data provider. This scattered but useful information is not, or only partly, available to ESGF users. Examples of such additional information systems are ES-DOC/metafor for model and simulation information, IPSL's versioning information, CHARMe for user annotations, DKRZ's quality information and data citation information. The ESGF Quality Control working team (esgf-qcwt) aims to integrate these valuable pieces of additional information into the ESGF in order to make them available to users and data archive managers by (i) integrating external information into the ESGF portal, (ii) integrating links to external information objects into the ESGF metadata index, e.g. by the use of PIDs (Persistent IDentifiers), and (iii) automating the collection of external information during the ESGF data publication process. For the sixth phase of CMIP (Coupled Model Intercomparison Project), the ESGF metadata index is to be enriched with additional information on data citation, file versions, etc. This information will support users directly and can be automatically exploited by higher-level services (human and machine readability).
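
    A minimal sketch of the enrichment step at publication time, using invented record fields; real ESGF index schemas and PID formats are not reproduced here:

      # Hypothetical ESGF-style dataset record before enrichment.
      record = {
          'dataset_id': 'cmip6.example.r1i1p1f1.v20150101',
          'title': 'Example simulation output',
      }

      # External information objects to link, keyed by source (assumed names).
      external = {
          'pid': 'hdl:00.00000/EXAMPLE',               # placeholder persistent identifier
          'citation_url': 'http://example.org/cite/EXAMPLE',
          'errata_url': 'http://example.org/errata/EXAMPLE',
      }

      def enrich(rec: dict, extra: dict) -> dict:
          """Attach links to external info objects during publication."""
          rec = dict(rec)
          rec.update({f'external_{k}': v for k, v in extra.items()})
          return rec

      print(enrich(record, external))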

  17. Individual and Environmental Correlates to Quality of Life in Park Users in Colombia

    PubMed Central

    Camargo, Diana Marina

    2017-01-01

    Purpose: To explore individual and environmental correlates to quality of life (QoL) in park users in Colombia. Methods: A cross-sectional study with face-to-face interviews was conducted with 1392 park users from ten parks in Colombia. The survey included sociodemographic questions and health condition assessed with the EuroQol-5-Dimensions-5-Levels; in addition, questions about accessibility to the parks and perceptions about the quality of infrastructure and green areas were asked. The Spanish version of the EUROHIS-QOL 8-item questionnaire was applied to assess QoL. Log-binomial regression models were applied for the analyses. Results: Years of schooling, visits to the park with a companion, active use of the park, a maximum score for quality of trees and walking paths, and the perception of safety on the way to the park were positively associated with a better QoL (p < 0.05). Health conditions related to problems in the ability to perform activities of daily living and anxiety/depression showed negative associations. Conclusions: The present study contributes to the Latin American studies by providing information on how parks in an intermediate city may contribute to increased QoL of park users through safety in neighborhoods, social support, active use, and aesthetics, cleanliness, and care of green areas. PMID:29048373

  18. Understanding USGS user needs and Earth observing data use for decision making

    NASA Astrophysics Data System (ADS)

    Wu, Z.

    2016-12-01

    The US Geological Survey (USGS) initiated the Requirements, Capabilities and Analysis for Earth Observations (RCA-EO) project in the Land Remote Sensing (LRS) program, collaborating with the National Oceanic and Atmospheric Administration (NOAA) to jointly develop the supporting information infrastructure - the Earth Observation Requirements Evaluation System (EORES). RCA-EO enables us to collect information on current data products and projects across the USGS and to evaluate the impacts of Earth observation data from all sources, including spaceborne, airborne, and ground-based platforms. EORES allows users to query, filter, and analyze the usage and impacts of Earth observation data at different organizational levels within the bureau. We engaged over 500 subject matter experts and evaluated more than 1000 different Earth observing data sources and products. RCA-EO provides a comprehensive way to evaluate the impacts of Earth observing data on USGS mission areas and programs through the survey of 345 key USGS products and services. We paid special attention to user feedback about Earth observing data to inform decision making on improving user satisfaction. We believe the approach and philosophy of RCA-EO can be applied in a much broader scope to derive comprehensive knowledge of the impacts and usage of Earth observing systems and to inform data product development and remote sensing technology innovation.

  19. Infrastructure issues.

    PubMed

    Hagland, Mark

    2010-03-01

    CIOs must ensure the creation of a technology foundation underlying the implementation of new applications, in order to guarantee continuous computing and other essential characteristics of IT service for end-users, going forward. Focusing on the needs of end-users will be essential to creating that foundation. End-user expectations are already outstripping technological capabilities, putting pressure on CIOs to carefully balance the offering of highly desired applications with the creation of a strong tech foundation to undergird those apps.

  20. Future Naval Use of COTS Networking Infrastructure

    DTIC Science & Technology

    2009-07-01

    user to benefit from Google's vast databases and computational resources. Obviously, the ability to harness the full power of the Cloud could be... [Front-matter fragments omitted: findings, action items, take-aways, and appendices including the terms of reference and sample definitions of Cloud Computing.] While Cloud Computing is developing in many variations - including Infrastructure as a Service (IaaS), Platform as

  1. The virtual machine (VM) scaler: an infrastructure manager supporting environmental modeling on IaaS clouds

    USDA-ARS?s Scientific Manuscript database

    Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific m...

  2. Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth

    DTIC Science & Technology

    2007-09-01

    [GAO report header omitted.] ...authority to conduct evaluations on his own initiative. It addresses (1) the challenges and associated risks the Army faces in providing for timely...but it faces several complex implementation challenges that risk late provision of needed infrastructure to adequately support incoming personnel

  3. COOPEUS - connecting research infrastructures in environmental sciences

    NASA Astrophysics Data System (ADS)

    Koop-Jakobsen, Ketil; Waldmann, Christoph; Huber, Robert

    2015-04-01

    The COOPEUS project was initiated in 2012, bringing together 10 research infrastructures (RIs) in environmental sciences from the EU and US in order to improve the discovery, access, and use of environmental information and data across scientific disciplines and across geographical borders. The COOPEUS mission is to facilitate readily accessible research infrastructure data to advance our understanding of Earth systems through an international community-driven effort, by: bringing together both user communities and top-down directives to address evolving societal and scientific needs; removing technical, scientific, cultural and geopolitical barriers to data use; and coordinating the flow, integrity and preservation of information. A survey of data availability was conducted among the COOPEUS research infrastructures for the purpose of discovering impediments to open international and cross-disciplinary sharing of environmental data. The survey showed that the majority of data offered by the COOPEUS research infrastructures is available via the internet (>90%), but the accessibility of these data differs significantly among research infrastructures; only 45% offer open access to their data, whereas the remaining infrastructures offer restricted access, e.g. they do not release raw or sensitive data, demand user registration, or require permission prior to the release of data. These rules and regulations are often in place as standard practice, whereas formal data policies are lacking in 40% of the infrastructures, primarily in the EU. In order to improve this situation, COOPEUS has established a common data-sharing policy, agreed upon by all the COOPEUS research infrastructures. To investigate the existing opportunities for improving interoperability among environmental research infrastructures, COOPEUS explored the opportunities with the GEOSS common infrastructure (GCI) by holding a hands-on workshop. Through exercises in directly registering resources, the first steps were taken to implement the GCI as a platform for documenting the capabilities of the COOPEUS research infrastructures. COOPEUS recognizes the potential for the GCI to become an important platform promoting cross-disciplinary approaches in the study of multifaceted environmental challenges. Recommendations from the workshop participants also revealed that, in order to attract research infrastructures to use the GCI, the registration process must be simplified and accelerated. However, the data policies of individual research infrastructures, or the lack thereof, can also prevent use of the GCI or other portals, due to ambiguities regarding data-management authority and data ownership. COOPEUS shall continue to promote cross-disciplinary data exchange in the environmental field and will in the future expand to include other geographical areas as well.

  4. The Visit-Data Warehouse: Enabling Novel Secondary Use of Health Information Exchange Data

    PubMed Central

    Fleischman, William; Lowry, Tina; Shapiro, Jason

    2014-01-01

    Introduction/Objectives: Health Information Exchange (HIE) efforts face challenges with data quality and performance, and this becomes especially problematic when data are leveraged for uses beyond primary clinical use. We describe a secondary data infrastructure, focusing on patient-encounter, nonclinical data, that was built on top of a functioning HIE platform to support novel secondary data uses and prevent potentially negative impacts these uses might otherwise have had on HIE system performance. Background: HIE efforts have generally formed for the primary clinical use of individual clinical providers searching for data on individual patients under their care, but many secondary uses have been proposed and are being piloted to support care management, quality improvement, and public health. Description of the HIE and Base Infrastructure: This infrastructure review describes a module built into the Healthix HIE. Healthix, based in the New York metropolitan region, comprises 107 participating organizations with 29,946 acute-care beds in 383 facilities, and includes more than 9.2 million unique patients. The primary infrastructure is based on the InterSystems proprietary Caché data model distributed across servers in multiple locations, and uses a master patient index to link individual patients' records across multiple sites. We built a parallel platform, the "visit data warehouse," of patient-encounter data (demographics, date, time, and type of visit) using a relational database model to allow accessibility using standard database tools and flexibility for developing secondary data use cases. The four secondary use cases are the following: (1) tracking encounter-based metrics in a newly established geriatric emergency department (ED), (2) creating a dashboard to provide a visual display as well as a tabular output of near-real-time de-identified encounter data from the data warehouse, (3) tracking frequent ED users as part of a regional approach to case-management intervention, and (4) improving an existing quality improvement program that analyzes patients with return visits to EDs within 72 hours of discharge. Results/Lessons Learned: Setting up a separate, near-real-time, encounters-based relational database to complement an HIE built on a hierarchical database is feasible, and may be necessary to support many secondary uses of HIE data. As of November 2014, the visit-data warehouse (VDW) built by Healthix is undergoing technical validation testing and is updated on an hourly basis. We had to address data integrity issues with both nonstandard and missing HL7 messages because of varied HL7 implementations across the HIE. Also, given our HIE's federated structure, some sites expressed concerns regarding data centralization for the VDW. An established and stable HIE governance structure was critical in overcoming this initial reluctance. Conclusions: As secondary use of HIE data becomes more prevalent, it may be increasingly necessary to build separate infrastructure to support secondary use without compromising performance. More research is needed to determine optimal ways of building such infrastructure and validating its use for secondary purposes. PMID:25848595
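
    A minimal sketch of the fourth use case against a relational visit store, using an invented table layout (the warehouse's actual schema is not given in the record):

      import sqlite3

      con = sqlite3.connect(':memory:')
      con.execute("""CREATE TABLE visits (
          patient_id TEXT, facility TEXT, visit_type TEXT, ts TEXT)""")
      con.executemany("INSERT INTO visits VALUES (?, ?, ?, ?)", [
          ('p1', 'ED-A', 'discharge', '2014-11-01 10:00'),
          ('p1', 'ED-B', 'arrival',   '2014-11-03 08:00'),  # return within 72h
          ('p2', 'ED-A', 'discharge', '2014-11-01 09:00'),
      ])

      # Patients who returned to any ED within 72 hours of a discharge.
      rows = con.execute("""
          SELECT d.patient_id, d.facility, r.facility, r.ts
          FROM visits d JOIN visits r
            ON r.patient_id = d.patient_id
           AND d.visit_type = 'discharge' AND r.visit_type = 'arrival'
           AND julianday(r.ts) - julianday(d.ts) BETWEEN 0 AND 3
      """).fetchall()
      print(rows)

    Running such queries against this parallel store, rather than the clinical Caché platform, is what shields primary HIE performance from the secondary workload.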

  5. Building the interspace: Digital library infrastructure for a University Engineering Community

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schatz, B.

    A large-scale digital library is being constructed and evaluated at the University of Illinois, with the goal of bringing professional search and display to Internet information services. A testbed planned to grow to 10K documents and 100K users is being constructed in the Grainger Engineering Library Information Center, as a joint effort of the University Library and the National Center for Supercomputing Applications (NCSA), with evaluation and research by the Graduate School of Library and Information Science and the Department of Computer Science. The electronic collection will be articles from engineering and science journals and magazines, obtained directly from publishers in SGML format and displayed containing all text, figures, tables, and equations. The publisher partners include IEEE Computer Society, AIAA (Aerospace Engineering), American Physical Society, and Wiley & Sons. The software will be based upon NCSA Mosaic as a network engine connected to commercial SGML displayers and full-text searchers. The users will include faculty/students across the midwestern universities in the Big Ten, with evaluations via interviews, surveys, and transaction logs. Concurrently, research into scaling the testbed is being conducted. This includes efforts in computer science, information science, library science, and information systems. These efforts will evaluate different semantic retrieval technologies, including automatic thesaurus and subject classification graphs. New architectures will be designed and implemented for a next generation digital library infrastructure, the Interspace, which supports interaction with information spread across information spaces within the Net.

  6. Mission Exploitation Platform PROBA-V

    NASA Astrophysics Data System (ADS)

    Goor, Erwin

    2016-04-01

    VITO and partners developed an end-to-end solution to drastically improve the exploitation of the PROBA-V EO-data archive (http://proba-v.vgt.vito.be/), the past SPOT-VEGETATION mission and derived vegetation parameters by researchers, service providers and end-users. The analysis of time series of data (+1PB) is addressed, as well as large-scale on-demand processing of near-real-time data. From November 2015, an operational Mission Exploitation Platform (MEP) PROBA-V, as an ESA pathfinder project, will be gradually deployed at the VITO data center with direct access to the complete data archive. Several applications will be released to the users, e.g.:
    - A time series viewer, showing the evolution of PROBA-V bands and derived vegetation parameters for any area of interest.
    - Full-resolution viewing services for the complete data archive.
    - On-demand processing chains, e.g. for the calculation of N-daily composites.
    - A Virtual Machine with access to the data archive and tools to work with the data, e.g. various toolboxes and support for R and Python.
    After an initial release in January 2016, a research platform will gradually be deployed, allowing users to design, debug and test applications on the platform. From the MEP PROBA-V, access to Sentinel-2 and Landsat data will be addressed as well, e.g. to support the Cal/Val activities of the users. Users can make use of powerful web-based tools and can self-manage virtual machines to perform their work on the infrastructure at VITO, with access to the complete data archive. To realise this, private cloud technology (OpenStack) is used and a distributed processing environment is built based on Hadoop. The Hadoop ecosystem offers many technologies (Spark, Yarn, Accumulo, etc.) which we integrate with several open-source components. The impact of this MEP on the user community will be high and will completely change the way of working with the data, opening the large time series to a larger community of users. The presentation will address these benefits for the users and discuss the technical challenges in implementing this MEP.
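
    A minimal sketch of the kind of N-daily compositing step such a processing chain might run, using NumPy with invented array shapes; the platform's actual Hadoop/Spark implementation is not shown:

      import numpy as np

      def ndaily_composite(daily: np.ndarray, n: int) -> np.ndarray:
          """Maximum-value composite of daily images over windows of n days.

          daily: array of shape (days, rows, cols), NaN where no observation.
          """
          days = (daily.shape[0] // n) * n            # drop the incomplete tail window
          windows = daily[:days].reshape(-1, n, *daily.shape[1:])
          return np.nanmax(windows, axis=1)           # one image per n-day window

      rng = np.random.default_rng(0)
      stack = rng.random((10, 4, 4))                  # 10 hypothetical daily images
      stack[stack < 0.2] = np.nan                     # simulate missing observations
      print(ndaily_composite(stack, 5).shape)         # -> (2, 4, 4)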

  7. High-throughput neuroimaging-genetics computational infrastructure

    PubMed Central

    Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D.; Franco, Joseph; Toga, Arthur W.

    2014-01-01

    Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics and associations that are not readily visible by human exploration of the raw dataset. Results interpretation includes scientific visualization, community validation of findings, and reproducibility of findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates the data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects, which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer's and Parkinson's data, we provide several examples of translational applications using this infrastructure. PMID:24795619
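
    A minimal sketch of the portable-workflow idea, building a tiny XML description with Python's standard library; the element names are invented for illustration and are not the Pipeline environment's actual schema:

      import xml.etree.ElementTree as ET

      # Hypothetical two-step workflow: skull-strip, then segment.
      pipeline = ET.Element('pipeline', name='demo-workflow')
      for step, tool in [('step1', 'skull_strip'), ('step2', 'segment')]:
          module = ET.SubElement(pipeline, 'module', id=step, executable=tool)
          ET.SubElement(module, 'input', name='image')
          ET.SubElement(module, 'output', name='result')
      # Wire step1's output to step2's input (assumed connection element).
      ET.SubElement(pipeline, 'connect', source='step1.result', target='step2.image')

      xml_text = ET.tostring(pipeline, encoding='unicode')
      print(xml_text)  # this string is what would travel from client to server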

  8. TRENT2D WG: a smart web infrastructure for debris-flow modelling and hazard assessment

    NASA Astrophysics Data System (ADS)

    Zorzi, Nadia; Rosatti, Giorgio; Zugliani, Daniel; Rizzi, Alessandro; Piffer, Stefano

    2016-04-01

    Mountain regions are naturally exposed to geomorphic flows, which involve large amounts of sediments and induce significant morphological modifications. The physical complexity of this class of phenomena represents a challenging issue for modelling, leading to elaborate theoretical frameworks and sophisticated numerical techniques. In general, geomorphic-flow models have proven to be valid tools in hazard assessment and management. However, model complexity seems to represent one of the main obstacles to the diffusion of advanced modelling tools among practitioners and stakeholders, although the EU Flood Directive (2007/60/EC) requires risk management and assessment to be based on "best practices and best available technologies". Furthermore, several cutting-edge models are not particularly user-friendly, and multiple stand-alone software packages are needed to pre- and post-process modelling data. For all these reasons, users often resort to quicker and rougher approaches, possibly leading to unreliable results. Therefore, some effort seems necessary to overcome these drawbacks, with the purpose of supporting and encouraging a widespread diffusion of the most reliable, although sophisticated, modelling tools. With this aim, this work presents TRENT2D WG, a new smart modelling solution for the state-of-the-art model TRENT2D (Armanini et al., 2009, Rosatti and Begnudelli, 2013), which simulates debris flows and hyperconcentrated flows adopting a two-phase description over a mobile bed. TRENT2D WG is a web infrastructure combining the advantages offered by the Software-as-a-Service (SaaS) delivery model and by WebGIS technology, and hosting a complete and user-friendly working environment for modelling. In order to develop TRENT2D WG, the model TRENT2D was converted into a service and exposed on a cloud server, transferring computational burdens from the user hardware to a high-performing server and reducing computational time. Then, the system was equipped with an interface supporting Web-based GIS functionalities, making the model accessible through the World Wide Web. Furthermore, WebGIS technology allows georeferenced model input data and simulation results to be produced, managed, displayed and processed in a unique and intuitive working environment. Thanks to its large flexibility, TRENT2D WG was also equipped with a BUWAL-type procedure (Heinimann et al., 1998) to assess and map debris-flow hazard. In this way, model results can be used straightforwardly as input data for the hazard-mapping procedure, avoiding work fragmentation and taking wide advantage of the functionalities offered by WebGIS technology. TRENT2D WG is intended to become a reliable tool for researchers, practitioners and stakeholders, supporting modelling and hazard mapping effectively and encouraging connections between the research field and professional needs at a working scale.
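
    A minimal sketch of the SaaS interaction pattern from a client's point of view, assuming the requests library and an invented endpoint and payload; the real TRENT2D WG API is not documented in the record:

      import requests  # third-party HTTP client (assumes the service speaks JSON over HTTP)

      BASE = 'https://example.org/trent2d/api'   # hypothetical service endpoint

      # Submit a simulation: the heavy computation runs on the server, not locally.
      job = requests.post(f'{BASE}/simulations', json={
          'dem_layer': 'basin_dem_v1',           # invented georeferenced input names
          'hydrograph': 'event_2016_06',
          'model': 'two-phase-mobile-bed',
      }).json()

      # Poll for completion, then fetch georeferenced results for the WebGIS viewer.
      status = requests.get(f"{BASE}/simulations/{job['id']}").json()
      if status.get('state') == 'done':
          print(status['result_layers'])         # e.g. flow depth, deposit thickness maps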

  9. Rehabilitation, Replacement and Redesign of the Nation's Water and Wastewater Infrastructure as a Valuable Adaptation Opportunity

    EPA Science Inventory

    In support of the Agency's Sustainable Water Infrastructure Initiative, EPA's Office of Research and Development initiated the Aging Water Infrastructure Research Program in 2007. The program, with its core focus on the support of strategic asset management, is designed to facili...

  10. Group of Eight Infrastructure Condition Survey 2007. Aggregated Data

    ERIC Educational Resources Information Center

    Group of Eight (NJ1), 2008

    2008-01-01

    The "Group of Eight Infrastructure Condition Survey 2007" represents the Go8's first effort to enhance the quality of information available about the condition of building and support infrastructure of member universities, their capital investment trends and challenges. The survey aims to support the systematic benchmarking of facilities…

  11. Brokering Capabilities for EarthCube - supporting Multi-disciplinary Earth Science Research

    NASA Astrophysics Data System (ADS)

    Jodha Khalsa, Siri; Pearlman, Jay; Nativi, Stefano; Browdy, Steve; Parsons, Mark; Duerr, Ruth; Pearlman, Francoise

    2013-04-01

    The goal of NSF's EarthCube is to create a sustainable infrastructure that enables the sharing of all geosciences data, information, and knowledge in an open, transparent and inclusive manner. Brokering of data and improvements in discovery and access are key to data exchange and the promotion of collaboration across the geosciences. In this presentation we describe an evolutionary process of infrastructure and interoperability development focused on participation of existing science research infrastructures and augmenting them for improved access. All geosciences communities already have, to a greater or lesser degree, elements of an information infrastructure in place. These elements include resources such as data archives, catalogs, and portals as well as vocabularies, data models, protocols, best practices and other community conventions. What is necessary now is a process for leveraging these diverse infrastructure elements into an overall infrastructure that provides easy discovery, access and utilization of resources across disciplinary boundaries. Brokers connect disparate systems with only minimal burdens upon those systems, and enable the infrastructure to adjust to new technical developments and scientific requirements as they emerge. Robust cyberinfrastructure will arise only when social, organizational, and cultural issues are resolved in tandem with the creation of technology-based services. This is a governance issue, but is facilitated by infrastructure capabilities that can impact the uptake of new interdisciplinary collaborations and exchange. Thus brokering must address both the cyberinfrastructure and computer technology requirements and the social issues required to allow improved cross-domain collaborations. This is best done through use-case-driven requirements and agile, iterative development methods. It is important to start by solving real (not hypothetical) information access and use problems via small pilot projects that develop capabilities targeted to specific communities. Brokering, as a critical capability for connecting systems, evolves over time through more connections and increased functionality. This adaptive process allows for continual evaluation as to how well science-driven use cases are being met. There is a near-term, and possibly unique, opportunity through EarthCube and European e-Infrastructure projects to increase the impact and interconnectivity of projects. In the developments described in this presentation, brokering has been demonstrated to be an essential part of a robust, adaptive technical infrastructure, and demonstration and user scenarios can address both the governance and the detailed implementation paths forward. The EarthCube Brokering roadmap proposes the expansion of brokering pilots into fully operational prototypes that work with the broader science and informatics communities to answer these questions, connect existing and emerging systems, and evolve the EarthCube infrastructure.
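
    The core brokering pattern, one client query mediated across heterogeneous catalogs without modifying those catalogs, can be sketched in a few lines. The two catalog interfaces below are hypothetical stand-ins with deliberately different field names; they are not real EarthCube or community catalog APIs.

```python
# A minimal sketch of the brokering pattern: one query fanned out to
# heterogeneous catalogs and normalized into a common record shape.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    url: str
    source: str

def query_catalog_a(term):
    # Stand-in for an OpenSearch-style catalog (hypothetical response shape).
    return [{"name": f"{term} grid", "link": "https://a.example/1"}]

def query_catalog_b(term):
    # Stand-in for a CSW-style catalog with different field names.
    return [{"dc_title": f"{term} sensor data", "dc_uri": "https://b.example/2"}]

def broker_search(term: str) -> list[Record]:
    """Mediate one query across both catalogs without changing either one."""
    results = [Record(r["name"], r["link"], "catalog-a") for r in query_catalog_a(term)]
    results += [Record(r["dc_title"], r["dc_uri"], "catalog-b") for r in query_catalog_b(term)]
    return results

for rec in broker_search("sea ice"):
    print(rec.source, rec.title, rec.url)
```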

  12. Elastic Cloud Computing Infrastructures in the Open Cirrus Testbed Implemented via Eucalyptus

    NASA Astrophysics Data System (ADS)

    Baun, Christian; Kunze, Marcel

    Cloud computing realizes the advantages and overcomes some restrictions of the grid computing paradigm. Elastic infrastructures can easily be created and managed by cloud users. In order to accelerate research on data center management and cloud services, the OpenCirrus™ research testbed has been started by HP, Intel and Yahoo!. Although commercial cloud offerings are proprietary, Open Source solutions exist in the field of IaaS with Eucalyptus, PaaS with AppScale, and at the applications layer with Hadoop MapReduce. This paper examines the I/O performance of cloud computing infrastructures implemented with Eucalyptus in contrast to Amazon S3.
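
    Because Eucalyptus exposes an S3-compatible storage interface (Walrus), the kind of I/O comparison the paper describes can be run with one client against either backend. The sketch below uses boto3's real S3 client API; the endpoint URL, credentials, and bucket name are placeholders, the bucket is assumed to exist, and the measurement is deliberately simplistic.

```python
# A minimal sketch of a timed put/get throughput probe against any
# S3-compatible endpoint (Amazon S3 or a Eucalyptus/Walrus installation).
import time
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://walrus.example:8773/services/Walrus",  # omit for AWS S3
    aws_access_key_id="PLACEHOLDER",
    aws_secret_access_key="PLACEHOLDER",
)

BUCKET = "io-benchmark"              # assumed to exist already
payload = b"x" * (32 * 1024 * 1024)  # 32 MiB test object

def timed_put_get(key: str) -> tuple[float, float]:
    """Return (upload_MBps, download_MBps) for one object."""
    t0 = time.perf_counter()
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)
    up = len(payload) / (time.perf_counter() - t0) / 1e6

    t0 = time.perf_counter()
    s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    down = len(payload) / (time.perf_counter() - t0) / 1e6
    return up, down

print(timed_put_get("probe-0"))
```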

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clifford, Megan

    Infrastructure is, by design, largely unnoticed until it breaks down and services fail. This includes water supplies, gas pipelines, bridges and dams, phone lines and cell towers, roads and culverts, railways, and the electric grid—all of the complex systems that keep our societies and economies running. Climate change, population growth, increased urbanization, system aging, and outdated design standards stress existing infrastructure and its ability to satisfy the rapidly changing demands from users. The resilience of both physical and cyber infrastructure systems is therefore critical to a community as it prepares for, responds to, and recovers from a disaster, whether natural or man-made.

  14. Risk assessment of sewer condition using artificial intelligence tools: application to the SANEST sewer system.

    PubMed

    Sousa, V; Matos, J P; Almeida, N; Saldanha Matos, J

    2014-01-01

    Operation, maintenance and rehabilitation comprise the main concerns of wastewater infrastructure asset management. Given the nature of the service provided by a wastewater system and the characteristics of the supporting infrastructure, technical issues are relevant to support asset management decisions. In particular, in densely urbanized areas served by large, complex and aging sewer networks, the sustainability of the infrastructures largely depends on the implementation of an efficient asset management system. The efficiency of such a system may be enhanced with technical decision support tools. This paper describes the role of artificial intelligence tools such as artificial neural networks and support vector machines for assisting the planning of operation and maintenance activities of wastewater infrastructures. A case study of the application of this type of tool to the wastewater infrastructures of Sistema de Saneamento da Costa do Estoril is presented.
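
    As an illustration of the kind of decision-support tool the paper discusses, the sketch below trains a support vector machine to flag pipes for inspection. The feature set, toy data, and threshold labels are illustrative assumptions, not the SANEST dataset or the authors' model.

```python
# A minimal sketch of SVM-based sewer-condition screening using scikit-learn;
# features and data are invented for illustration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical features per pipe: [age_years, diameter_mm, depth_m, length_m]
X = np.array([
    [55, 300, 2.1, 40], [12, 500, 3.0, 60], [70, 250, 1.8, 35],
    [8, 600, 2.5, 80], [45, 300, 2.0, 50], [30, 400, 2.8, 45],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = inspect soon, 0 = acceptable condition

# Scaling matters for SVMs; a pipeline keeps it coupled to the classifier.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X, y)

candidate = np.array([[60, 300, 2.2, 55]])
print(model.predict(candidate), model.predict_proba(candidate))
```

    In practice such a model would be trained on CCTV inspection records and used to prioritize which pipes to inspect next, which is the planning role the abstract assigns to these tools.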

  15. Consideration of an Applied Model of Public Health Program Infrastructure

    PubMed Central

    Lavinghouze, Rene; Snyder, Kimberly; Rieker, Patricia; Ottoson, Judith

    2015-01-01

    Systemic infrastructure is key to public health achievements. Individual public health program infrastructure feeds into this larger system. Although program infrastructure is rarely defined, it needs to be operationalized for effective implementation and evaluation. The Ecological Model of Infrastructure (EMI) is one approach to defining program infrastructure. The EMI consists of 5 core (Leadership, Partnerships, State Plans, Engaged Data, and Managed Resources) and 2 supporting (Strategic Understanding and Tactical Action) elements that are enveloped in a program’s context. We conducted a literature search across public health programs to determine support for the EMI. Four of the core elements were consistently addressed, and the other EMI elements were intermittently addressed. The EMI provides an initial and partial model for understanding program infrastructure, but additional work is needed to identify evidence-based indicators of infrastructure elements that can be used to measure success and link infrastructure to public health outcomes, capacity, and sustainability. PMID:23411417

  16. Current status of the EPOS WG4 - GNSS and Other Geodetic Data

    NASA Astrophysics Data System (ADS)

    Fernandes, Rui; Bastos, Luisa; Bruyninx, Carine; D'Agostino, Nicola; Dousa, Jan; Ganas, Athanassios; Lidberg, Martin; Nocquet, Jean-Mathieu

    2014-05-01

    WG4 - "EPOS Geodetic Data and Other Geodetic Data" is the Working Group of the EPOS project in charge of defining and preparing the integration of the existing pan-European geodetic infrastructures that will support European geosciences, which is the ultimate goal of the EPOS project. WG4 is formed by representatives of the 23 participating EPOS countries, but it is also open to the entire geodetic community. In fact, WG4 already includes members from countries that are not formally part of EPOS in this first phase. In the current phase, the geodetic component of EPOS (WG4) deals essentially with research infrastructures focused on continuously operating GNSS (cGNSS). The decision to concentrate efforts on what is presently the most widespread geodetic tool supporting Solid Earth research was made in order to optimize existing resources. Nevertheless, WG4 will continue to pursue the development of tools and methodologies that give the EPOS community access to other geodetic information (e.g., gravimetry). Furthermore, although the focus is on Solid Earth applications, other research and technical applications (e.g., reference frames, meteorology, space weather) can also benefit from the efforts of EPOS WG4 towards the optimization of geodetic resources in Europe. We will present and discuss the plans for the implementation of the thematic and core services (TCS) for geodetic data within EPOS and the related business plan. We will focus on strategies for implementing the best solutions that will permit end-users, and in particular geoscientists, to access geodetic data, derived solutions, and associated metadata using transparent and uniform processes. Five pillars have been proposed for the TCS: Dissemination, Preservation, Monitoring, and Analysis of geodetic data, plus the Support and Governance Infrastructure. Current proposals and remaining open questions will be discussed.

  17. Comparison of Computer-based Clinical Decision Support Systems and Content for Diabetes Mellitus.

    PubMed

    Kantor, M; Wright, A; Burton, M; Fraser, G; Krall, M; Maviglia, S; Mohammed-Rajput, N; Simonaitis, L; Sonnenberg, F; Middleton, B

    2011-01-01

    Computer-based clinical decision support (CDS) systems have been shown to improve quality of care and workflow efficiency, and health care reform legislation relies on electronic health records and CDS systems to improve the cost and quality of health care in the United States; however, the heterogeneity of CDS content and infrastructure of CDS systems across sites is not well known. We aimed to determine the scope of CDS content in diabetes care at six sites, assess the capabilities of CDS in use at these sites, characterize the scope of CDS infrastructure at these sites, and determine how the sites use CDS beyond individual patient care in order to identify characteristics of CDS systems and content that have been successfully implemented in diabetes care. We compared CDS systems in six collaborating sites of the Clinical Decision Support Consortium. We gathered CDS content on care for patients with diabetes mellitus and surveyed institutions on characteristics of their site, the infrastructure of CDS at these sites, and the capabilities of CDS at these sites. The approach to CDS and the characteristics of CDS content varied among sites. Some commonalities included providing customizability by role or user, applying sophisticated exclusion criteria, and using CDS automatically at the time of decision-making. Many messages were actionable recommendations. Most sites had monitoring rules (e.g. assessing hemoglobin A1c), but few had rules to diagnose diabetes or suggest specific treatments. All sites had numerous prevention rules including reminders for providing eye examinations, influenza vaccines, lipid screenings, nephropathy screenings, and pneumococcal vaccines. Computer-based CDS systems vary widely across sites in content and scope, but both institution-created and purchased systems had many similar features and functionality, such as integration of alerts and reminders into the decision-making workflow of the provider and providing messages that are actionable recommendations.
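
    A monitoring rule of the kind surveyed here (e.g., the HbA1c checks, with role-based exclusion criteria and actionable messages) can be sketched as a small function. The thresholds, field names, and patient-record shape below are illustrative assumptions, not any consortium site's actual rule base.

```python
# A minimal sketch of an HbA1c monitoring rule with exclusion criteria and
# actionable recommendations; all clinical thresholds here are illustrative.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Patient:
    has_diabetes: bool
    last_hba1c_pct: float | None
    last_hba1c_date: date | None

def hba1c_monitoring_rule(p: Patient, today: date) -> list[str]:
    """Fire actionable reminders at decision time, after exclusion checks."""
    if not p.has_diabetes:          # exclusion: rule applies to diabetics only
        return []
    alerts = []
    if p.last_hba1c_date is None or today - p.last_hba1c_date > timedelta(days=180):
        alerts.append("Order HbA1c: no result in the last 6 months.")
    elif p.last_hba1c_pct is not None and p.last_hba1c_pct >= 9.0:
        alerts.append("HbA1c >= 9%: consider treatment intensification.")
    return alerts

print(hba1c_monitoring_rule(
    Patient(True, 9.4, date(2011, 1, 10)), today=date(2011, 6, 1)))
```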

  18. IEDA: Making Small Data BIG Through Interdisciplinary Partnerships Among Long-tail Domains

    NASA Astrophysics Data System (ADS)

    Lehnert, K. A.; Carbotte, S. M.; Arko, R. A.; Ferrini, V. L.; Hsu, L.; Song, L.; Ghiorso, M. S.; Walker, D. J.

    2014-12-01

    The Big Data world in the Earth Sciences so far exists primarily for disciplines that generate massive volumes of observational or computed data using large-scale, shared instrumentation such as global sensor networks, satellites, or high-performance computing facilities. These data are typically managed and curated by well-supported community data facilities that also provide the tools for exploring the data through visualization or statistical analysis. In many other domains, especially those where data are primarily acquired by individual investigators or small teams (known as 'Long-tail data'), data are poorly shared and integrated, lacking a community-based data infrastructure that ensures persistent access, quality control, standardization, and integration of data, as well as appropriate tools to fully explore and mine the data within the context of broader Earth Science datasets. IEDA (Integrated Earth Data Applications, www.iedadata.org) is a data facility funded by the US NSF to develop and operate data services that support data stewardship throughout the full life cycle of observational data in the solid earth sciences, with a focus on the data management needs of individual researchers. IEDA builds on a strong foundation of mature disciplinary data systems for marine geology and geophysics, geochemistry, and geochronology. These systems have dramatically advanced data resources in those long-tail Earth science domains. IEDA has strengthened these resources by establishing a consolidated, enterprise-grade infrastructure that is shared by the domain-specific data systems, and implementing joint data curation and data publication services that follow community standards. In recent years, other domain-specific data efforts have partnered with IEDA to take advantage of this infrastructure and improve data services to their respective communities with formal data publication, long-term preservation of data holdings, and better sustainability. IEDA hopes to foster such partnerships with streamlined data services, including user-friendly, single-point interfaces for data submission, discovery, and access across the partner systems to support interdisciplinary science.

  19. Flexible Workflow Software enables the Management of an Increased Volume and Heterogeneity of Sensors, and evolves with the Expansion of Complex Ocean Observatory Infrastructures.

    NASA Astrophysics Data System (ADS)

    Tomlin, M. C.; Jenkyns, R.

    2015-12-01

    Ocean Networks Canada (ONC) collects data from observatories in the northeast Pacific, Salish Sea, Arctic Ocean, Atlantic Ocean, and land-based sites in British Columbia. Data are streamed, collected autonomously, or transmitted via satellite from a variety of instruments. The Software Engineering group at ONC develops and maintains Oceans 2.0, an in-house software system that acquires and archives data from sensors, and makes data available to scientists, the public, government and non-government agencies. The Oceans 2.0 workflow tool was developed by ONC to manage a large volume of tasks and processes required for instrument installation, recovery and maintenance activities. Since 2013, the workflow tool has supported 70 expeditions and grown to include 30 different workflow processes for the increasing complexity of infrastructures at ONC. The workflow tool strives to keep pace with an increasing heterogeneity of sensors, connections and environments by supporting versioning of existing workflows, and allowing the creation of new processes and tasks. Despite challenges in training and gaining mutual support from multidisciplinary teams, the workflow tool has become invaluable in project management in an innovative setting. It provides a collective place to contribute to ONC's diverse projects and expeditions and encourages more repeatable processes, while promoting interactions between the multidisciplinary teams who manage various aspects of instrument development and the data they produce. The workflow tool inspires documentation of terminologies and procedures, and effectively links to other tools at ONC such as JIRA, Alfresco and Wiki. Motivated by growing sensor schemes, modes of collecting data, archiving, and data distribution at ONC, the workflow tool ensures that infrastructure is managed completely from instrument purchase to data distribution. It integrates all areas of expertise and helps fulfill ONC's mandate to offer quality data to users.
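
    The versioning behaviour described above (new process definitions added without invalidating work already in flight) can be sketched with a small registry data model. Class and field names here are illustrative assumptions, not the Oceans 2.0 implementation.

```python
# A minimal sketch of a versioned workflow-process registry; the names are
# invented for illustration, not taken from Oceans 2.0.
from dataclasses import dataclass

@dataclass
class Process:
    name: str        # e.g. "instrument-installation"
    version: int
    tasks: list[str]

class WorkflowRegistry:
    """Keep every published version so in-flight expeditions are never broken."""
    def __init__(self):
        self._versions: dict[str, list[Process]] = {}

    def publish(self, name: str, tasks: list[str]) -> Process:
        history = self._versions.setdefault(name, [])
        proc = Process(name, version=len(history) + 1, tasks=tasks)
        history.append(proc)   # older versions remain retrievable
        return proc

    def latest(self, name: str) -> Process:
        return self._versions[name][-1]

reg = WorkflowRegistry()
reg.publish("instrument-installation", ["bench test", "deploy", "verify data"])
reg.publish("instrument-installation", ["bench test", "calibrate", "deploy", "verify data"])
print(reg.latest("instrument-installation").version)  # -> 2
```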

  20. Simulation and Verification of Synchronous Set Relations in Rewriting Logic

    NASA Technical Reports Server (NTRS)

    Rocha, Camilo; Munoz, Cesar A.

    2011-01-01

    This paper presents a mathematical foundation and a rewriting logic infrastructure for the execution and property verification of synchronous set relations. The mathematical foundation is given in the language of abstract set relations. The infrastructure consists of an order-sorted rewrite theory in Maude, a rewriting logic system, that enables the synchronous execution of a set relation provided by the user. By using the infrastructure, existing algorithm verification techniques already available in Maude for traditional asynchronous rewriting, such as reachability analysis and model checking, are automatically available to synchronous set rewriting. The use of the infrastructure is illustrated with an executable operational semantics of a simple synchronous language and the verification of temporal properties of a synchronous system.
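
    The synchronous idea, rewriting every element of the set in the same step rather than picking one redex at a time, can be illustrated outside Maude with a toy interpreter. The rules and step semantics below are simplified assumptions for illustration; the paper's actual infrastructure is an order-sorted rewrite theory in Maude.

```python
# A minimal sketch of one synchronous rewriting step over a set: all applicable
# rules fire on all elements simultaneously; rules and semantics are toy examples.
from typing import Callable, FrozenSet

Rule = Callable[[int], set[int]]  # empty result set means "rule not applicable"

def synchronous_step(state: FrozenSet[int], rules: list[Rule]) -> FrozenSet[int]:
    """Apply every applicable rule to every element in the same step."""
    out: set[int] = set()
    for elem in state:
        fired = False
        for rule in rules:
            successors = rule(elem)
            if successors:
                out |= successors
                fired = True
        if not fired:
            out.add(elem)      # elements with no applicable rule persist
    return frozenset(out)

# Toy rules: halve even numbers; clamp anything above a bound down to 16.
rules: list[Rule] = [
    lambda n: {n // 2} if n % 2 == 0 else set(),
    lambda n: {16} if n > 16 else set(),
]

state = frozenset({3, 8, 40})
for _ in range(3):
    state = synchronous_step(state, rules)
    print(sorted(state))
```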
