Sample records for access protocol OPeNDAP

  1. Data Access Tools And Services At The Goddard Distributed Active Archive Center (GDAAC)

    NASA Technical Reports Server (NTRS)

    Pham, Long; Eng, Eunice; Sweatman, Paul

    2003-01-01

    As one of the largest providers of Earth Science data from the Earth Observing System, the GDAAC provides the latest data products from the Moderate Resolution Imaging Spectroradiometer (MODIS), the Atmospheric Infrared Sounder (AIRS), and the Solar Radiation and Climate Experiment (SORCE) via the GDAAC's data pool (50 TB of disk cache). To make this huge volume of data more accessible to the public and science communities, the GDAAC offers multiple data access tools and services: the Open-source Project for a Network Data Access Protocol (OPeNDAP), the Grid Analysis and Display System (GrADS) Data Server (GDS), the Live Access Server (LAS), the OpenGIS Web Map Server (WMS) and Near Archive Data Mining (NADM). The objective is to assist users in electronically retrieving a smaller, usable portion of data for further analysis. The OPeNDAP server, formerly known as the Distributed Oceanographic Data System (DODS), allows the user to retrieve data without worrying about the data format. OPeNDAP is capable of server-side subsetting of HDF, HDF-EOS, netCDF, JGOFS, ASCII, DSP, FITS and binary data formats. The GDS is capable of serving the same data formats as OPeNDAP and adds server-side analysis: users can analyze the data on the server, thereby decreasing the computational load on their own systems. The LAS is a flexible server that allows users to visualize data graphically on the fly, to request different file formats and to compare variables from distributed locations. Users of LAS also have the option of using other available graphics viewers such as IDL, Matlab or GrADS. The WMS is based on OPeNDAP and serves geospatial information; it supports the OpenGIS protocol to provide data in GIS-friendly formats for analysis and visualization. NADM is another means of access to the GDAAC's data pool: it gives users the capability to use a browser to upload their C, FORTRAN or IDL algorithms, test them, and mine data in the data pool. With NADM, the GDAAC provides an environment physically close to the data source, benefiting users with data-mining or data-reduction algorithms by reducing large volumes of data before transmission over the network.
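
    A minimal sketch of the server-side subsetting described above, using the Python pydap client. The URL and variable name are placeholders, not real GDAAC paths; any OPeNDAP-served granule would behave the same way.

        from pydap.client import open_url

        # Hypothetical OPeNDAP endpoint for a MODIS-like granule.
        url = "https://example.nasa.gov/opendap/MODIS/sample_granule.hdf"

        dataset = open_url(url)                     # transfers only the DDS/DAS metadata
        sst = dataset["sea_surface_temperature"]    # still no data moved
        # Slicing becomes a DAP constraint expression, so the server extracts
        # and returns only this 100 x 100 patch.
        patch = sst[0:100, 0:100]
        print(patch.shape)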

  2. Data Integration Support for Data Served in the OPeNDAP and OGC Environments

    NASA Technical Reports Server (NTRS)

    McDonald, Kenneth R.; Wharton, Stephen W. (Technical Monitor)

    2006-01-01

    NASA is coordinating a technology development project to construct a gateway between system components built upon the Open-source Project for a Network Data Access Protocol (OPeNDAP) and those made available via interfaces specified by the Open Geospatial Consortium (OGC). This project is funded through the Advanced Collaborative Connections for Earth-Sun System Science (ACCESS) Program and is a NASA contribution to the Committee on Earth Observation Satellites (CEOS) Working Group on Information Systems and Services (WGISS). The motivation for the project is the set of data integration needs that have been expressed by the Coordinated Enhanced Observing Period (CEOP), an international program that is addressing the study of the global water cycle. CEOP is assembling a large collection of in situ and satellite data and model results from a wide variety of sources covering 35 sites around the globe. The data are provided by systems based on either the OPeNDAP or OGC protocols, but the research community desires access to the full range of data and associated services from a single client. This presentation will discuss the current status of the OPeNDAP/OGC Gateway Project. The project builds upon an early prototype that illustrated the feasibility of such a gateway and was demonstrated to the CEOP science community. In its first year as an ACCESS project, the effort has focused on the design of the catalog and data services that will be provided by the gateway and the mappings between the metadata and services provided in the two environments.

  3. Customer-oriented Data Formats and Services for Global Land Data Assimilation System (GLDAS) Products at the NASA GES DISC

    NASA Astrophysics Data System (ADS)

    Fang, H.; Kato, H.; Rodell, M.; Teng, W. L.; Vollmer, B. E.

    2008-12-01

    The Global Land Data Assimilation System (GLDAS) has been generating a series of land surface state (e.g., soil moisture and surface temperature) and flux (e.g., evaporation and sensible heat flux) products, simulated by four land surface models (CLM, Mosaic, Noah and VIC). These products are now accessible at the Hydrology Data and Information Services Center (HDISC), a component of the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). Current GLDAS data hosted at HDISC include a set of 1.0° data products, covering 1979 to the present, from the four models, and a 0.25° data product, covering 2000 to the present, from the Noah model. In addition to basic anonymous FTP downloading, users can avail themselves of several advanced data search and downloading services, such as Mirador and OPeNDAP. Mirador is a Google-based search tool that provides keyword searching and on-the-fly spatial and parameter subsetting of selected data. OPeNDAP (Open-source Project for a Network Data Access Protocol) enables remote OPeNDAP clients to access OPeNDAP-served data regardless of local storage format. Additional data services to become available in the near future from HDISC include (1) on-the-fly conversion of GLDAS data to NetCDF and binary formats; (2) temporal aggregation of GLDAS files; and (3) Giovanni, an online visualization and analysis tool that provides a simple way to visualize, analyze, and access vast amounts of data without having to download them.
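
    The OPeNDAP access described above can be exercised from, for example, the Python netCDF4 library, which speaks DAP when handed an http URL (provided the underlying netCDF C library was built with DAP support). The URL and variable name below are illustrative placeholders, not actual HDISC paths.

        from netCDF4 import Dataset

        url = "https://example.gesdisc.nasa.gov/opendap/GLDAS/sample.nc"
        ds = Dataset(url)                   # opens remotely; no bulk download
        soil = ds.variables["SoilMoist"]    # lazy handle to the remote array
        top_layer = soil[0, 0, :, :]        # only this slice is transferred
        print(top_layer.shape)
        ds.close()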

  4. OPeNDAP Server4: Building a High-Performance Server for the DAP by Leveraging Existing Software

    NASA Astrophysics Data System (ADS)

    Potter, N.; West, P.; Gallagher, J.; Garcia, J.; Fox, P.

    2006-12-01

    OPeNDAP has been working in conjunction with NCAR/ESSL/HAO to develop a modular, high-performance data server that will be the successor to the current OPeNDAP data server. The new server, called Server4, is really two servers: a 'Back-End' data server, which reads information from various types of data sources and packages the results in DAP objects, and a 'Front-End', which receives client DAP requests and then decides how to use features of the Back-End data server to build the correct responses. This architecture can be configured in several interesting ways: the Front- and Back-End components can run on the same or different machines, depending on security and performance needs; new Front-End software can be written to support other network data access protocols; and local applications can interact directly with the Back-End data server. The new server's Back-End component uses the server infrastructure developed by HAO for the Earth System Grid II project; the extensions needed to use it as part of the new OPeNDAP server were minimal. The HAO server was modified so that it loads 'data handlers' at run time. Each data handler module need only satisfy a simple interface, which both enables the existing data handlers written for the old OPeNDAP server to be used directly and simplifies writing new handlers from scratch. The Back-End server leverages high-performance features developed for the ESG II project, so applications that interact with it directly can read large volumes of data efficiently. The Front-End module of Server4 uses the Java Servlet system in place of the Common Gateway Interface (CGI) used in the past. New Front-End modules can be written to support different network data access protocols, so that the same server will ultimately be able to support more than the DAP/2.0 protocol; as an example, we will discuss a SOAP interface that is currently in development. In addition to support for DAP/2.0 and prototypical support for a SOAP interface, the new server supports the THREDDS cataloging protocol. THREDDS is tightly integrated into the Front-End of Server4, which can make full use of advanced THREDDS features such as attribute specification and inheritance and custom catalogs that segue into automatically generated catalogs, while providing a default behavior that requires almost no catalog configuration.
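
    The run-time "data handler" interface described above is the key extension point. The actual Server4 handlers are compiled modules, but the pattern can be sketched in Python; every name here is invented for illustration, not the Server4 API.

        import importlib

        class DataHandler:
            """Minimal contract a data handler must satisfy."""
            extensions = ()                      # file suffixes this handler claims
            def describe(self, path):            # build the DAP structure metadata
                raise NotImplementedError
            def read(self, path, constraint):    # build the DAP data response
                raise NotImplementedError

        class NetCDFHandler(DataHandler):
            extensions = (".nc", ".nc4")
            def describe(self, path):
                return f"Dataset {{ ... }} {path};"      # placeholder DDS text
            def read(self, path, constraint):
                return b"...DAP-encoded bytes..."        # placeholder payload

        def load_handler(module_name, class_name):
            """Load a handler module at run time, as the Back-End server does."""
            module = importlib.import_module(module_name)
            return getattr(module, class_name)()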

  5. Intro and Recent Advances: Remote Data Access via OPeNDAP Web Services

    NASA Technical Reports Server (NTRS)

    Fulker, David

    2016-01-01

    During the upcoming Summer 2016 meeting of the ESIP Federation (July 19-22), OPeNDAP will hold a Developers and Users Workshop. While a broad set of topics will be covered, a key focus is capitalizing on recent EOSDIS-sponsored advances in Hyrax, OPeNDAP's own software for server-side realization of the DAP2 and DAP4 protocols. These Hyrax advances are as important to data users as to data providers, and the workshop will include hands-on experiences of value to both. Specifically, a balanced set of presentations and hands-on tutorials will address advances in (1) server installation, (2) server configuration, (3) Hyrax aggregation capabilities, (4) support for data access from clients that are HTTP-based, JSON-based or OGC-compliant (especially WCS and WMS), (5) support for DAP4, (6) use and extension of server-side computational capabilities, and (7) several performance-affecting matters. Topics 2 through 7 will be relevant to data consumers, to data providers and, notably (due to the open-source nature of all OPeNDAP software), to developers wishing to extend Hyrax, to build compatible clients and servers, and/or to employ Hyrax as middleware that enables interoperability across a variety of end-user and source-data contexts. A session for contributed talks will elaborate the topics listed above and embrace additional ones.
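
    The HTTP-based access in topic 4 needs nothing beyond a URL: appending a suffix to the dataset path selects the response form. The endpoint below is a placeholder, and the suffix set is illustrative of Hyrax conventions rather than a statement of the workshop content.

        import urllib.request

        base = "https://example.org/opendap/hyrax/sample.h5"   # placeholder URL
        # DAP2 structure (.dds), DAP2 attributes (.das), DAP4 metadata (.dmr)
        for suffix in (".dds", ".das", ".dmr"):
            with urllib.request.urlopen(base + suffix) as resp:
                print(suffix, resp.status, resp.read(80))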

  6. Guided Tour of Pythonian Museum

    NASA Technical Reports Server (NTRS)

    Lee, H. Joe

    2017-01-01

    At http://hdfeos.org/zoo, we have a large collection of Python examples for working with NASA HDF (Hierarchical Data Format) products. During this hands-on Python tutorial session, we'll present a few common hacks to access and visualize local NASA HDF data. We'll also cover how to access remote data served by OPeNDAP (Open-source Project for a Network Data Access Protocol). Finally, we will demonstrate how you can use Python as a glue language for your entire data workflow, from searching for data to analyzing it with machine learning.
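
    In the spirit of the tutorial, a small sketch of the remote-access piece: reading one field from an OPeNDAP-served NASA granule with pydap. The URL and variable name are placeholders of the kind the zoo examples use.

        import numpy as np
        from pydap.client import open_url

        ds = open_url("https://example.org/opendap/MOD08_D3_sample.hdf")  # placeholder
        print(list(ds.keys()))                    # browse the available variables
        aod = ds["Aerosol_Optical_Depth_Mean"]    # hypothetical variable name
        field = np.asarray(aod[:].data)           # assumes a plain array variable
        print(field.shape, np.nanmin(field), np.nanmax(field))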

  7. NCAR's Research Data Archive: OPeNDAP Access for Complex Datasets

    NASA Astrophysics Data System (ADS)

    Dattore, R.; Worley, S. J.

    2014-12-01

    Many datasets have complex structures, including hundreds of parameters and numerous vertical levels, grid resolutions, and temporal products. Making these data accessible is a challenge for a data provider. OPeNDAP is a powerful protocol for delivering multi-file datasets in real time so that they can be ingested by many analysis and visualization tools, but for these datasets there are too many choices about how to aggregate: simple aggregation schemes can fail to support, or at least severely complicate, many potential studies based on complex datasets. We address this issue by using a rich file-content metadata collection to create a real-time, customized OPeNDAP service that matches the full suite of access possibilities for complex datasets. The Climate Forecast System Reanalysis (CFSR) and its extension, the Climate Forecast System Version 2 (CFSv2), produced by the National Centers for Environmental Prediction (NCEP) and hosted by the Research Data Archive (RDA) at the Computational and Information Systems Laboratory (CISL) at NCAR, are examples of complex datasets that are difficult to aggregate with existing data server software. CFSR and CFSv2 contain 141 distinct parameters on 152 vertical levels, six grid resolutions and 36 products (analyses, n-hour forecasts, multi-hour averages, etc.), where not all parameter/level combinations are available at all grid resolution/product combinations. These data are archived in the RDA with the data structure provided by the producer; no additional reorganization or aggregation has been applied. Since 2011, users have been able to request customized subsets (e.g., temporal, parameter, spatial) of CFSR/CFSv2, which are processed in delayed mode and then downloaded to a user's system. Until now, this complexity has made it difficult to provide real-time OPeNDAP access to the data. We have developed a service that leverages the already-existing subsetting interface and allows users to create a virtual dataset with its own structure (DDS, DAS). The user receives a URL to the customized dataset that can be used by existing tools to ingest, analyze, and visualize the data. This presentation will detail the metadata system and OPeNDAP server that enable user-customized, real-time access and show an example of how a visualization tool can access the data.
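
    Once the service mints the virtual dataset, its structure can be checked before any data move, using the standard DAP metadata responses. The URL below stands in for the one the RDA interface returns.

        import urllib.request

        virtual = "https://rda.example.edu/opendap/request/12345"   # placeholder
        for ext in (".dds", ".das"):        # structure, then attributes
            with urllib.request.urlopen(virtual + ext) as resp:
                print(resp.read().decode()[:300])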

  8. Share Data with OPeNDAP Hyrax: New Features and Improvements

    NASA Technical Reports Server (NTRS)

    Gallagher, James

    2016-01-01

    During the upcoming Summer 2016 meeting of the ESIP Federation (July 19-22), OPeNDAP will hold a Developers and Users Workshop. While a broad set of topics will be covered, a key focus is capitalizing on recent EOSDIS-sponsored advances in Hyrax, OPeNDAP's own software for server-side realization of the DAP2 and DAP4 protocols. These Hyrax advances are as important to data users as to data providers, and the workshop will include hands-on experiences of value to both. Specifically, a balanced set of presentations and hands-on tutorials will address advances in (1) server installation, (2) server configuration, (3) Hyrax aggregation capabilities, (4) support for data access from clients that are HTTP-based, JSON-based or OGC-compliant (especially WCS and WMS), (5) support for DAP4, (6) use and extension of server-side computational capabilities, and (7) several performance-affecting matters. Topics 2 through 7 will be relevant to data consumers, to data providers and, notably (due to the open-source nature of all OPeNDAP software), to developers wishing to extend Hyrax, to build compatible clients and servers, and/or to employ Hyrax as middleware that enables interoperability across a variety of end-user and source-data contexts. A session for contributed talks will elaborate the topics listed above and embrace additional ones.

  9. NASA's Big Earth Data Initiative Accomplishments

    NASA Technical Reports Server (NTRS)

    Klene, Stephan A.; Pauli, Elisheva; Pressley, Natalie N.; Cechini, Matthew F.; McInerney, Mark

    2017-01-01

    The goal of NASA's effort for the Big Earth Data Initiative (BEDI) is to improve the usability, discoverability, and accessibility of Earth observation data in support of societal benefit areas. Accomplishments: in support of BEDI goals, datasets have been entered into the Common Metadata Repository (CMR), made available via the Open-source Project for a Network Data Access Protocol (OPeNDAP), and registered with Digital Object Identifiers (DOIs); to support fast visualization, many layers have been added to the Global Imagery Browse Services (GIBS).

  10. NASA's Big Earth Data Initiative Accomplishments

    NASA Astrophysics Data System (ADS)

    Klene, S. A.; Pauli, E.; Pressley, N. N.; Cechini, M. F.; McInerney, M.

    2017-12-01

    The goal of NASA's effort for the Big Earth Data Initiative (BEDI) is to improve the usability, discoverability, and accessibility of Earth observation data in support of societal benefit areas. Accomplishments: in support of BEDI goals, datasets have been entered into the Common Metadata Repository (CMR), made available via the Open-source Project for a Network Data Access Protocol (OPeNDAP), and registered with Digital Object Identifiers (DOIs); to support fast visualization, many layers have been added to the Global Imagery Browse Services (GIBS).

  11. Extending OPeNDAP's Data-Access Protocol to Include Enhanced Pre-Retrieval Operations

    NASA Astrophysics Data System (ADS)

    Fulker, D. W.

    2013-12-01

    We describe plans to extend OPeNDAP's Web-services protocol as a Building Block for NSF's EarthCube initiative. Though some data-access services have offered forms of subset selection for decades, other pre-retrieval operations have been unavailable, in part because their benefits (over equivalent post-retrieval actions) are only now becoming fully evident. This is due in part to rapid growth in the volumes of data that are pertinent to the geosciences, exacerbated by limitations such as Internet speeds and latencies as well as pressures toward data usage on ever-smaller devices. In this context, as recipients of a "Building Blocks" award from the most recent round of EarthCube funding, we are launching the specification and prototype implementation of a new Open Data Services Invocation Protocol (ODSIP), by which clients may invoke a newly rich set of data-acquisition services, ranging from statistical summarization and criteria-driven subsetting to re-gridding/resampling. ODSIP will be an extension to DAP4, the latest version of OPeNDAP's widely used data access protocol, which underpins a number of open-source, multilingual, client-server systems (offering data access as a Web service), including THREDDS, PyDAP, GrADS, ERDDAP and Ferret, as well as OPeNDAP's own Hyrax servers. We are motivated by the idea that key parts of EarthCube can be built effectively around clients and servers that employ a common and conceptually rich protocol for data acquisition. This concept extends 'data provision' to include pre-retrieval operations that, even when invoked by remote clients, exhibit the efficiencies of data-proximate computation. Our aim for ODSIP is to embed a largely domain-neutral algebra of server functions that, despite being deliberately compact, can fulfill a broad range of user needs for pre-retrieval operations. To that end, our approach builds upon languages and tools that have proven effective in multi-domain contexts, and we will employ a user-centered design process built around three science scenarios: (1) accelerated visualization/analysis of model outputs on non-rectangular meshes (over coastal North Carolina); (2) dynamic downscaling of climate predictions for regional utility (over Hawaii); and (3) feature-oriented retrievals of satellite imagery (focusing on satellite-derived sea-surface-temperature fronts). These scenarios will test important aspects of the server-function algebra. The Hawaii climate study requires coping with issues of scale on rectangular grids, placing strong emphasis on statistical functions. The east-coast storm-surge study requires irregular grids, thus exploring mathematical challenges that have been addressed in many domains via the GridFields library, which we will employ; we think important classes of geoscience problems in multiple domains, those involving discontinuities, for example, are essentially intractable without polygonal meshes. The sea-surface-fronts study integrates vector-style features with array-style coverages, thus touching on the kinds of mathematics that arise when mixing Eulerian and Lagrangian frameworks. Our presentation will sketch the context for ODSIP, our process for a user-centered design, and our hopes for how ODSIP, as an emerging cyberinfrastructure concept for the geosciences, may serve as a fundamental building block for EarthCube.
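
    Pre-retrieval operations of the kind ODSIP envisions would be carried in the DAP query string, keeping the computation next to the data. The function name, its syntax, and the dataset URL below are hypothetical illustrations of the idea, not the ODSIP specification.

        import urllib.parse
        import urllib.request

        base = "https://example.org/opendap/model_output.nc"
        expr = 'mean(SST,"time")'     # hypothetical statistical-summary function
        url = base + ".ascii?" + urllib.parse.quote(expr, safe='(),"')
        with urllib.request.urlopen(url) as resp:
            print(resp.read().decode()[:300])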

  12. NASA Update for Unidata Stratcomm

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris

    2017-01-01

    The NASA representative to the Unidata Strategic Committee presented a semiannual update on NASA's work with and use of Unidata technologies. The talk updated Unidata on the program of cloud-computing prototypes underway for the Earth Observing System Data and Information System (EOSDIS). Also discussed was a trade study on the use of the Open-source Project for a Network Data Access Protocol (OPeNDAP) with Web Object Storage in the cloud.

  13. A price and performance comparison of three different storage architectures for data in cloud-based systems

    NASA Astrophysics Data System (ADS)

    Gallagher, J. H. R.; Jelenak, A.; Potter, N.; Fulker, D. W.; Habermann, T.

    2017-12-01

    Providing data services based on cloud computing technology that are equivalent to those developed for traditional computing and storage systems is critical for successful migration to cloud-based architectures for data production, scientific analysis and storage. OPeNDAP Web-service capabilities (comprising the Data Access Protocol (DAP) specification plus open-source software for realizing DAP in servers and clients) are among the most widely deployed means of achieving data-as-service functionality in the Earth sciences. OPeNDAP services are especially common in traditional data center environments, where servers offer access to datasets stored in (very large) file systems, and a preponderance of the source data for these services is stored in the Hierarchical Data Format Version 5 (HDF5). Three candidate architectures for serving NASA satellite Earth Science HDF5 data via Hyrax running on Amazon Web Services (AWS) were developed, and their performance was examined for a set of representative use cases, in terms of both runtime and incurred cost. The three architectures differ in how HDF5 files are stored in the Amazon Simple Storage Service (S3) and how the Hyrax server (as an EC2 instance) retrieves their data. Results for both serial and parallel access to HDF5 data in S3 will be presented. While the study focused on HDF5 data, OPeNDAP and the Hyrax data server, the architectures are generic, and the analysis can be extrapolated to many different data formats, web APIs, and data servers.
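
    One storage pattern of the kind compared above can be sketched directly: keep whole HDF5 objects in S3 and pull only the byte ranges a request needs. Bucket, key, and offsets are hypothetical; the study's actual architectures differ in detail.

        import boto3

        s3 = boto3.client("s3")
        # Fetch one chunk of a granule by byte range instead of the whole
        # object, the way a server can satisfy a small subset request.
        resp = s3.get_object(Bucket="example-bucket",
                             Key="granules/sample.h5",
                             Range="bytes=4096-65535")
        chunk = resp["Body"].read()
        print(len(chunk))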

  14. Using OPeNDAP's Data-Services Framework to Lift Mash-Ups above Blind Dates

    NASA Astrophysics Data System (ADS)

    Gallagher, J. H. R.; Fulker, D. W.

    2015-12-01

    OPeNDAP's data-as-service framework (Hyrax) matches diverse sources with many end-user tools and contexts. Keys to its flexibility include: a data model embracing tabular data alongside n-dimensional arrays and other structures useful in geoinformatics; a REST-like protocol that supports, via suffix notation, a growing set of output forms (netCDF, XML, etc.) plus a query syntax for subsetting, which applies to tabular data via constraints on column values or to array-style data via constraints on indices or coordinates; and a handler-style architecture that admits a growing set of input types. Community members may contribute handlers, making Hyrax effective as middleware, where N sources are mapped to M outputs with order N+M effort (not NxM). Hyrax offers virtual aggregations of source data, enabling granularity aimed at users, not data collectors. OPeNDAP-access libraries exist in multiple languages, including Python, Java, and C++. Recent enhancements are increasing this framework's interoperability (i.e., its mash-up) potential. Extensions implemented as servlets running adjacent to Hyrax are enriching the forms of aggregation and enabling new protocols: user-specified aggregations, namely applying a query to (huge) lists of source granules and receiving one (large) table or zipped netCDF file; the OGC (Open Geospatial Consortium) protocols WMS and WCS; and a Webification (W10n) protocol that returns JavaScript Object Notation (JSON). Extensions to OPeNDAP's query language are reducing transfer volumes and enabling new forms of inspection. Advances underway include: functions that, for triangular-mesh sources, return sub-meshes specified via geospatial bounding boxes; functions that, for data from multiple satellite-borne sensors (with differing orbits), select observations based on coincidence; calculations of means, histograms, etc., that greatly reduce output volumes; and paths for communities to contribute new server functions (e.g., in Python) that data providers may incorporate into Hyrax via installation parameters. One could say Hyrax itself is a mash-up, but we suggest it as an instrument for a mash-up artist's toolbox. This instrument can support mash-ups built on netCDF files, OGC protocols, JavaScript Web pages, and/or programs written in Python, Java, C or C++.
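
    Concretely, the suffix picks the output form and the query carries the constraint, so the two access styles above differ only in query syntax. Both URLs are hypothetical (and selection operators such as > may need percent-encoding in a real request).

        import urllib.request

        # Array-style data: subset by index ranges (start:stride:stop).
        grid_url = ("https://example.org/opendap/sst.nc.ascii"
                    "?SST[0:1:0][100:1:120][200:1:220]")

        # Tabular (sequence) data: project columns, then select by value.
        table_url = ("https://example.org/opendap/casts.dat.ascii"
                     "?cast.temp,cast.depth&cast.depth>500")

        # Either URL can be fetched with any HTTP client:
        print(urllib.request.urlopen(grid_url).read()[:200])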

  15. Advancing the Power and Utility of Server-Side Aggregation

    NASA Technical Reports Server (NTRS)

    Fulker, Dave; Gallagher, James

    2016-01-01

    During the upcoming Summer 2016 meeting of the ESIP Federation (July 19-22), OPeNDAP will hold a Developers and Users Workshop. While a broad set of topics will be covered, a key focus is capitalizing on recent EOSDIS-sponsored advances in Hyrax, OPeNDAP's own software for server-side realization of the DAP2 and DAP4 protocols. These Hyrax advances are as important to data users as to data providers, and the workshop will include hands-on experiences of value to both. Specifically, a balanced set of presentations and hands-on tutorials will address advances in (1) server installation, (2) server configuration, (3) Hyrax aggregation capabilities, (4) support for data access from clients that are HTTP-based, JSON-based or OGC-compliant (especially WCS and WMS), (5) support for DAP4, (6) use and extension of server-side computational capabilities, and (7) several performance-affecting matters. Topics 2 through 7 will be relevant to data consumers, to data providers and, notably (due to the open-source nature of all OPeNDAP software), to developers wishing to extend Hyrax, to build compatible clients and servers, and/or to employ Hyrax as middleware that enables interoperability across a variety of end-user and source-data contexts. A session for contributed talks will elaborate the topics listed above and embrace additional ones.

  16. A Flexible Component based Access Control Architecture for OPeNDAP Services

    NASA Astrophysics Data System (ADS)

    Kershaw, Philip; Ananthakrishnan, Rachana; Cinquini, Luca; Lawrence, Bryan; Pascoe, Stephen; Siebenlist, Frank

    2010-05-01

    Network data access services such as OPeNDAP enable widespread access to data across user communities. However, without ready means to restrict access to data for such services, data providers and data owners are constrained from making their data more widely available. Even with such capability, the range of different security technologies available can make interoperability between services and user client tools a challenge. OPeNDAP is a key data access service in the infrastructure under development to support CMIP5 (the Coupled Model Intercomparison Project, Phase 5). The work is being carried out as part of an international collaboration including the US Earth System Grid and Curator projects and the EU-funded IS-ENES and Metafor projects. This infrastructure will bring together petabytes of climate model data and associated metadata from over twenty modelling centres around the world in a federation with a core archive mirrored at three data centres. A security system is needed to meet the requirements of organisations responsible for model data, including the ability to restrict data access to registered users, keep them up to date with changes to data and services, audit access and protect finite computing resources. Individual organisations have existing tools and services, such as OPeNDAP, with which users in the climate research community are already familiar. The security system should overlay access control in a way that maintains the usability and ease of access of these services. The BADC (British Atmospheric Data Centre) has been working in collaboration with the Earth System Grid development team and partner organisations to develop the security architecture. OpenID and MyProxy were selected at an early stage in the ESG project to provide single sign-on capability across the federation of participating organisations. Building on the existing OPeNDAP specification, an architecture based on pluggable server-side components has been developed at the BADC. These components filter requests to the service they protect and apply the required authentication and authorisation schemes. Filters have been developed for OpenID and SSL client-based authentication, the latter enabling access with MyProxy-issued credentials. By preserving a clear separation between the security and application functionality, multiple authentication technologies may be supported without the need for modification to the underlying OPeNDAP application. The software has been developed in the Python programming language, securing the Python-based OPeNDAP implementation, PyDAP. It utilises the Python WSGI (Web Server Gateway Interface) specification to create distinct security filter components. Work is also currently underway to develop a parallel Java-based filter implementation to secure the THREDDS Data Server. Whilst the ability to apply this flexible approach to the server-side security layer is important, the development of compatible client software is vital to the take-up of these services across a wide user base. To date, PyDAP- and wget-based clients have been tested, and work is planned to integrate the required security interface into the netCDF API. This forms part of ongoing collaboration with the OPeNDAP user and development community to ensure interoperability.
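
    A toy version of the pluggable WSGI filter idea: a middleware that wraps the PyDAP (or any WSGI) application and rejects requests lacking credentials before they reach it. Real filters delegate to OpenID or SSL/MyProxy; the header check here is only a stand-in.

        def auth_filter(app):
            """Wrap a WSGI app with a (stub) authentication check."""
            def middleware(environ, start_response):
                user = environ.get("HTTP_X_REMOTE_USER")   # stand-in credential
                if user is None:
                    start_response("401 Unauthorized",
                                   [("Content-Type", "text/plain")])
                    return [b"Authentication required\n"]
                return app(environ, start_response)        # authorized: pass through
            return middleware

        # Usage sketch: protected_app = auth_filter(pydap_wsgi_application)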

  17. TES/Aura L2 Atmospheric Temperatures Limb V6 (TL2TLS)

    Atmospheric Science Data Center

    2018-03-01

    TES/Aura L2 Atmospheric Temperatures Limb (TL2TLS). Level: L2. Platform: TES/Aura. Spatial coverage: 27 x 23 km (limb). Access: OPeNDAP. Parameters: Atmospheric Temperature, Temperature Precision, Vertical Resolution.

  18. TES/Aura L2 Atmospheric Temperatures Limb V6 (TL2ATMTL)

    Atmospheric Science Data Center

    2018-03-01

    TES/Aura L2 Atmospheric Temperatures Limb (TL2ATMTL). Level: L2. Platform: TES/Aura. Spatial coverage: 27 x 23 km (limb). Access: OPeNDAP. Parameters: Atmospheric Temperature, Temperature Precision, Vertical Resolution.

  19. TES/Aura L2 Ammonia (NH3) Lite Nadir V6 (TL2NH3LN)

    Atmospheric Science Data Center

    2017-07-20

    TES/Aura L2 Ammonia (NH3) Lite Nadir (TL2NH3LN). Level: L2. Instrument: TES/Aura. Spatial coverage: 5.3 km (nadir). Access: OPeNDAP. Parameters: Ammonia. Order data: Earthdata Search.

  20. Aggregating Queries Against Large Inventories of Remotely Accessible Data

    NASA Astrophysics Data System (ADS)

    Gallagher, J. H. R.; Fulker, D. W.

    2016-12-01

    Those seeking to discover data for a specific purpose often encounter search results that are so large as to be useless without computing assistance. This situation arises with increasing frequency, in part because repositories contain ever greater numbers of granules, and their granularities may be poorly aligned or even orthogonal to the data-selection needs of the user. This presentation describes a recently developed service for simultaneously querying large lists of OPeNDAP-accessible granules to extract specified data. The specifications include a richly expressive set of data-selection criteria, applicable to content as well as metadata, and the service has been tested successfully against lists naming hundreds of thousands of granules. Querying such numbers of local files (i.e., granules) on a desktop or laptop computer is practical (e.g., with a scripting language), but this practicality is diminished when the data are remote and thus best accessed through a Web-services interface. In these cases, which are increasingly common, scripted queries can take many hours because of inherent network latencies. Furthermore, communication dropouts can add fragility to such scripts, yielding gaps in the acquired results. In contrast, OPeNDAP's new aggregated-query services enable data discovery in the context of very large inventory sizes. These capabilities have been developed for use with OPeNDAP's Hyrax server, an open-source realization of DAP (the "Data Access Protocol," a specification widely used in NASA, NOAA and other data-intensive contexts). These aggregated-query services exhibit good response times (on the order of seconds, not hours) even for inventories that list hundreds of thousands of source granules.
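
    For contrast, the client-side pattern the service replaces: apply one constraint to every granule in an inventory, paying a network round trip per granule. The inventory file, URLs, and variable are hypothetical.

        import urllib.request

        constraint = "?SST[0:1:0][0:1:9][0:1:9]"     # same subset from every granule
        with open("granule_urls.txt") as inventory:   # one OPeNDAP URL per line
            for line in inventory:
                url = line.strip() + ".dods" + constraint
                data = urllib.request.urlopen(url).read()
                # ...accumulate results; each iteration pays full latency,
                # which is what the server-side aggregated query eliminates.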

  21. GI-axe: an access broker framework for the geosciences

    NASA Astrophysics Data System (ADS)

    Boldrini, E.; Nativi, S.; Santoro, M.; Papeschi, F.; Mazzetti, P.

    2012-12-01

    The efficient and effective discovery of heterogeneous geospatial resources (e.g., data and services) is currently addressed by implementing "discovery brokering components" such as GI-cat, which is successfully used by the GEO brokering framework. A related (and subsequent) problem is the access of discovered resources. As in the discovery case, there is a clear challenge: the geospatial community makes use of heterogeneous access protocols and data models. In fact, different standards (and best practices) are defined and used by the diverse geoscience domains and communities of practice. Moreover, through a client application, users want to access diverse data to be used jointly in a common geospatial environment (CGE): a geospatial environment characterized by a spatio-temporal CRS (coordinate reference system), resolution, and extension. Users want to define a CGE and get the selected data ready to be used in such an environment. Finally, they want to download data according to a common encoding (either binary or textual). It is therefore possible to introduce the concept of an "access brokering component" that addresses all these intermediation needs in a way that is transparent to both clients (i.e., users) and access servers (i.e., data providers). This work presents GI-axe: a flexible access broker capable of mediating among the different access standards and of delivering data according to a CGE previously specified by the user. In doing so, GI-axe complements the capabilities of the brokered access servers, in keeping with the brokering principles. Consider a sample use case: a user needs to access a global temperature dataset available online on a THREDDS Data Server and a rainfall dataset accessible through a WFS (she/he may have obtained the datasets as a search result from a discovery broker). Distribution metadata accompanying the temperature dataset further indicate that a given OPeNDAP service has to be accessed to retrieve it. At this point, the user would be in charge of searching for an existing OPeNDAP client and retrieving the desired data with the desired CGE; worse, he/she might need to write his/her own OPeNDAP client. Meanwhile, the user would have to use a GIS to access the rainfall data and perform all the transformations necessary to obtain the same CGE. The GI-axe access broker takes this interoperability burden off the user by bearing the charge of accessing the available services and performing the adaptations needed to deliver both datasets according to the same CGE. GI-axe can also expose both the TDS and the WFS as (for example) a WMS, allowing the user to work with a single and (perhaps) more familiar client. In this way, the user can concentrate on the scientific rather than the technological aspects of the work. GI-axe was first developed and tested in the multidisciplinary interoperability framework of the European Community-funded EuroGEOSS project. Presently, it is used in the GEOSS Discovery & Access Brokering framework.

  22. Applications For Real Time NOMADS At NCEP To Disseminate NOAA's Operational Model Data Base

    NASA Astrophysics Data System (ADS)

    Alpert, J. C.; Wang, J.; Rutledge, G.

    2007-05-01

    A wide range of environmental information, in digital form, with metadata descriptions and supporting infrastructure, is contained in the NOAA Operational Modeling Archive Distribution System (NOMADS) and its Real Time (RT) project prototype at the National Centers for Environmental Prediction (NCEP). NOMADS is now delivering on its goal of a seamless framework, from archival to real-time data dissemination, for NOAA's operational model data holdings. A process is under way to make NOMADS part of NCEP's operational production of products. A goal is to foster collaborations among the research and education communities, value-added retailers, and public access for science and development. In the National Research Council's "Completing the Forecast," Recommendation 3.4 states: "NOMADS should be maintained and extended to include (a) long-term archives of the global and regional ensemble forecasting systems at their native resolution, and (b) re-forecast datasets to facilitate post-processing." As one of many participants in NOMADS, NCEP serves the operational model database using the Data Access Protocol (OPeNDAP, also known as DODS) and other services, so that participants can serve their data sets and users can obtain them. Using the NCEP global ensemble data as an example, we show an OPeNDAP client application that provides a request-and-fulfill mechanism for access to the complex ensemble matrix of holdings. As an example of the DAP service, we show a client application that accesses the global or regional ensemble data set to produce user-selected weather-element event probabilities. The event probabilities are easily extended over model forecast time to show probability histograms defining the future trend of user-selected events. This approach ensures an efficient use of computer resources because users transmit only the data necessary for their tasks. Data sets are served via OPeNDAP, allowing commercial clients such as MATLAB or IDL, as well as free clients such as GrADS, to access the NCEP real-time database. We will demonstrate how users can use NOMADS services to repackage area subsets and select levels and variables that are sent to a user-selected FTP site. NOMADS can also display plots on demand for area subsets, selected levels, time series and selected variables.
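
    The request-and-fulfill pattern from a client's perspective, sketched with pydap: pull one field for a few ensemble members over a regional window rather than whole files. The dataset path and variable name are illustrative, not actual NOMADS holdings.

        from pydap.client import open_url

        ds = open_url("https://nomads.example.gov/dods/gens/latest")  # placeholder
        precip = ds["apcpsfc"]      # hypothetical accumulated-precipitation variable
        # Members 0-4, first forecast time, regional lat/lon window only:
        subset = precip[0:5, 0:1, 100:140, 200:260]
        print(subset.shape)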

  23. Sustainability of Open-Source Software Organizations as Underpinning for Sustainable Interoperability on Large Scales

    NASA Astrophysics Data System (ADS)

    Fulker, D. W.; Gallagher, J. H. R.

    2015-12-01

    OPeNDAP's Hyrax data server is an open-source framework fostering interoperability via easily deployed Web services. Compatible with the solutions listed in the (PA001) session description (federation, rigid standards and brokering/mediation), the framework can support tight or loose coupling, even with dependence on community-contributed software. Hyrax is a Web-services framework with a middleware-like design and a handler-style architecture that together reduce the interoperability challenge (for N datatypes and M user contexts) to an O(N+M) problem, similar to brokering. Combined with an open-source ethos, this reduction makes Hyrax a community tool for gaining interoperability; e.g., in its response to the Big Earth Data Initiative (BEDI), NASA references OPeNDAP-based interoperability. Assuming its suitability, the question becomes: how sustainable is OPeNDAP, a small not-for-profit that produces open-source software, i.e., has no software sales? In other words, if geoscience interoperability depends on OPeNDAP and similar organizations, are those entities in turn sustainable? Jim Collins (in Good to Great) highlights three questions that successful companies can answer (paraphrased here): What is your passion? Where is your world-class excellence? What drives your economic engine? We attempt to shed light on OPeNDAP's sustainability by examining these. Passion: OPeNDAP has a focused passion for improving the effectiveness of scientific data sharing and use as deeply cooperative community endeavors. Excellence: OPeNDAP has few peers in remote scientific data access; its skills include computer science with experience in data science, (operational, secure) Web services, and software design (for servers and clients, where the latter vary from Web pages to standalone apps and end-user programs). Economic engine: OPeNDAP is an engineering-services organization more than a product company, despite software being key to OPeNDAP's reputation; in essence, the provision of engineering expertise, via contracts and grants, is the economic engine. Hence sustainability, as needed to address global grand challenges in geoscience, depends on agencies' and others' abilities and willingness to offer grants and let contracts for continually upgrading open-source software from OPeNDAP and others.

  24. ERDDAP - An Easier Way for Diverse Clients to Access Scientific Data From Diverse Sources

    NASA Astrophysics Data System (ADS)

    Mendelssohn, R.; Simons, R. A.

    2008-12-01

    ERDDAP is a new open-source, web-based service that aggregates data from other web services: OPeNDAP grid servers (THREDDS), OPeNDAP sequence servers (Dapper), the NOS SOAP service, SOS (IOOS, OOStethys), microWFS, and DiGIR (OBIS, BMDE). Regardless of the data source, ERDDAP makes all datasets available to clients via standard (and enhanced) DAP requests and makes some datasets accessible via WMS. A client's request also specifies the desired format for the results, e.g., .asc, .csv, .das, .dds, .dods, .htmlTable, .xhtml, .mat, netCDF, .kml, .png, or .pdf (formats more directly useful to clients). ERDDAP interprets a client request, requests the data from the data source (in the appropriate way), reformats the data source's response, and sends the result to the client. Thus ERDDAP makes data from diverse sources available to diverse clients via standardized interfaces. Clients don't have to install libraries to get data from ERDDAP, because ERDDAP is RESTful and resource-oriented: a URL completely defines a data request, and the URL can be used in any application that can send a URL and receive a file. This also makes it easy to use ERDDAP in mashups with other web services. ERDDAP could be extended to support other protocols. ERDDAP's hub-and-spoke architecture simplifies adding support for new types of data sources and new types of clients. ERDDAP includes metadata management support, catalog services, and services to make graphs and maps.
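
    ERDDAP's resource-oriented style in one line: the URL names the dataset, the subset, and (via the file-type suffix) the result format, so plain HTTP is the only client machinery needed. The server address, dataset ID, and variable below are placeholders following griddap URL conventions.

        import urllib.request

        url = ("https://erddap.example.gov/erddap/griddap/sampleSST.csv"
               "?sst[(2008-06-01T00:00:00Z)][(30.0):(40.0)][(-80.0):(-70.0)]")
        csv_bytes = urllib.request.urlopen(url).read()   # the same URL works in a browser
        print(csv_bytes[:200])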

  25. Efficiently Serving HDF5 Products via OPeNDAP

    NASA Technical Reports Server (NTRS)

    Yang, Kent

    2017-01-01

    Hyrax OPeNDAP services are widely used by Earth Science data centers at NASA, NOAA and other organizations to serve end users. In this talk, we will present some key features added to the HDF5 Hyrax OPeNDAP handler that can help data centers better serve HDF5/netCDF-4 data products. Among these new features, we will focus on the following: (1) DAP4 support; (2) memory-cache and disk-cache support that can reduce service access time; and (3) an enhancement that allows swath-like HDF5 products to be visualized by CF-aware client tools. We will also discuss in depth the role of the HDF5 handler in the recent study of the Hyrax service in a cloud environment.

  26. Using Open and Interoperable Ways to Publish and Access LANCE AIRS Near-Real Time Data

    NASA Astrophysics Data System (ADS)

    Zhao, P.; Lynnes, C.; Vollmer, B.; Savtchenko, A. K.; Yang, W.

    2011-12-01

    Atmospheric Infrared Sounder (AIRS) Near-Real Time (NRT) data from the Land Atmosphere Near real-time Capability for EOS (LANCE) provide information on the global and regional atmospheric state with very low latency. An open and interoperable platform is useful for facilitating access to and integration of LANCE AIRS NRT data. This paper discusses the use of open-source software components to build Web services for publishing and accessing AIRS NRT data in the context of a Service Oriented Architecture (SOA). The AIRS NRT data have also been made available through an OPeNDAP server. OPeNDAP allows several open-source netCDF-based tools, such as the Integrated Data Viewer, Ferret and Panoply, to directly display the Level 2 data over the network. To enable users to locate swath data files in the OPeNDAP server that lie within a certain geographical area, graphical "granule maps" are being added to show the outline of each file on a map of the Earth. The metadata of AIRS NRT data and services are then explored to implement information advertisement and discovery in catalogue systems. Datacasting, an RSS-based technology for accessing Earth Science data and information that facilitates subscribing to AIRS NRT data availability and filtering, downloading and viewing the data, is also discussed. To provide an easy entry point to AIRS NRT data and services, a Web portal designed for customized data downloading and visualization is introduced.

  27. Citation and Recognition of contributions using Semantic Provenance Knowledge Captured in the OPeNDAP Software Framework

    NASA Astrophysics Data System (ADS)

    West, P.; Michaelis, J.; Lebot, T.; McGuinness, D. L.; Fox, P. A.

    2014-12-01

    Providing proper citation and attribution for published data, derived data products, and the software tools used to generate them has always been an important aspect of scientific research. However, it is often the case that this type of detailed citation and attribution is lacking. This is in part because it often requires manual markup, since dynamic generation of this type of provenance information is not typically done by the tools used to access, manipulate, transform, and visualize data. In addition, the tools themselves lack the information needed for proper citation. The OPeNDAP Hyrax Software Framework is a tool that provides access to different types of data in different formats, along with the ability to constrain, manipulate, and transform them into a common format, the DAP (Data Access Protocol), in order to derive new data products. A user, or another software client, specifies an HTTP URL in order to access a particular piece of data and appropriately transform it to suit a specific purpose. The resulting data products, however, do not contain any information about what data were used to create them or the software process used to generate them, let alone information that would allow proper citing and attribution of downstream researchers and tool developers. We will present our approach to provenance capture in Hyrax, including a mechanism that can be used to report back to the hosting site any derived products, such as publications and reports, using the W3C PROV recommendation's pingback service. We will demonstrate our utilization of Semantic Web and Web standards, the development of an information model that extends the PROV model for provenance capture, and the development of the pingback service. We will present our findings, as well as our practices for providing provenance information, visualizing that information, and developing pingback services, to better enable scientists and tool developers to be recognized and properly cited for their contributions.

  28. Common Patterns with End-to-end Interoperability for Data Access

    NASA Astrophysics Data System (ADS)

    Gallagher, J.; Potter, N.; Jones, M. B.

    2010-12-01

    At first glance, using common storage formats and open standards should be enough to ensure interoperability between data servers and client applications, but that is often not the case. In the REAP (Realtime Environment for Analytical Processing; NSF #0619060) project, we integrated access to data from OPeNDAP servers into the Kepler workflow system and found that, as in previous cases, we spent the bulk of our effort addressing the twin issues of data-model compatibility and integration strategies. Implementing seamless data access between a remote data source and a client application (data sink) can be broken down into two kinds of issues. First, the solution must address any differences in the data models used by the data source (OPeNDAP) and the data sink (the Kepler workflow system). If these models match completely, there is little work to be done; however, that is rarely the case. To map OPeNDAP's data model to Kepler's, we used two techniques (ignoring trivial conversions): on-the-fly type mapping and out-of-band communication. Type conversion takes place for both data and metadata, because Kepler requires a priori knowledge of some aspects (e.g., syntactic metadata) of the data to build a workflow. In addition, OPeNDAP's constraint expression syntax was used to send out-of-band information to restrict the data requested from the server, facilitating changes in the returned data's type. This provides a way for users to exert fine-grained control over the data request, at the cost of requiring that users understand a little about the data source's processing capabilities. The second set of issues for end-to-end data access concerns integration strategies. OPeNDAP provides several different tools for bringing data into an application: C++, C and Java libraries that provide functions for newly written software; the netCDF library, which enables existing applications to read from servers using an older interface; and simple file transfers. These options affect seamlessness in that they represent tradeoffs between new development (required by the first option) and cumbersome extra user actions (required by the last option). The middle option, adding new functionality to an existing library (netCDF), is very appealing because practice has shown that it can be very effective over a wide range of clients, but such libraries are very hard to build: correctly writing a new implementation of an existing API that preserves the original's exact semantics can be a daunting task. In the example discussed here, we developed a new module for Kepler using OPeNDAP's Java API. This provided a way to leverage internal optimizations for data organization in Kepler, and we felt that this outweighed the additional cost of new development and the need for users to learn how to use a new Kepler module. While common storage formats and open standards play an important role in data access, our work with the Kepler workflow system reinforces the experience that matching the data models of the data server (source) and user client (sink) and choosing the most appropriate integration strategy are critical to achieving interoperability.
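
    The out-of-band, fine-grained control described above rides in the DAP constraint expression itself; the URL and variable below are placeholders.

        import urllib.request

        # Request only 'salinity', decimated to every 4th index, as ASCII:
        url = "https://example.org/opendap/cruise.nc.ascii?salinity[0:4:99]"
        print(urllib.request.urlopen(url).read().decode()[:300])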

  29. A New Look at Data Usage by Using Metadata Attributes as Indicators of Data Quality

    NASA Technical Reports Server (NTRS)

    Won, Young-In; Wanchoo, Lalit; Behnke, Jeanne

    2016-01-01

    This study reviews the key metrics (users, distributed volume, and files) in multiple ways to gain an understanding of the significance of the metadata. Characterizing the usability of data by key metadata elements, such as discipline and study area, will assist in understanding how user needs have evolved over time. The data-usage pattern based on product level provides insight into the level of data quality. In addition, the data metrics by various services, such as the Open-source Project for a Network Data Access Protocol (OPeNDAP) and subsetting, address how these services have extended the usage of data. Overall, this study presents the usage of data and metadata through metrics analyses, which may assist data centers in better supporting the needs of their users.

  30. The Time Series Data Server (TSDS) for Standards-Compliant, Convenient, and Efficient Access to Time Series Data

    NASA Astrophysics Data System (ADS)

    Lindholm, D. M.; Weigel, R. S.; Wilson, A.; Ware Dewolfe, A.

    2009-12-01

    Data analysis in the physical sciences is often plagued by the difficulty of acquiring the desired data. A great deal of work has been done in the area of metadata and data discovery; however, many such discoveries simply provide links that lead directly to a data file. Often these files are impractically large, containing more time samples or variables than desired, and are slow to access. Once these files are downloaded, format issues further complicate using the data. Some data servers have begun to address these problems by improving data virtualization and ease of use. However, these services often don't scale to large datasets. Also, the generic nature of the data models used by these servers, while providing greater flexibility, may complicate setup for data providers and omit semantics that would otherwise simplify use for clients, machine or human. The Time Series Data Server (TSDS) aims to address these problems within the limited, yet common, domain of time series data. With the simplifying assumption that all data products served are a function of time, the server can optimize for data access based on time subsets, a common use case. The server also supports requests for specific variables, which can be of type scalar, structure, or sequence, as well as data types with higher-level semantics, such as "spectrum." The TSDS is implemented using Java Servlet technology and can be dropped into any servlet container and customized for a data provider's needs. The interface is based on OPeNDAP (http://opendap.org) and conforms to the Data Access Protocol (DAP) 2.0, a NASA standard (ESDS-RFC-004), which defines a simple HTTP request-and-response paradigm. Thus a TSDS server instance is a compliant OPeNDAP server that can be accessed by any OPeNDAP client or directly via RESTful web service requests. The TSDS reads the data that it serves into a common data model via the NetCDF Markup Language (NcML, http://www.unidata.ucar.edu/software/netcdf/ncml/), which enables dataset virtualization. An NcML file can expose a single file, a subset, or an aggregation of files as a single logical dataset. With the appropriate NcML adapter, the TSDS can read data from its native format, eliminating the need for data providers to reformat their data and lowering the barrier to integration. Data can even be read via remote services, which is important for enabling VxOs to be truly virtual. The TSDS provides reading, writing, and filtering capabilities through a modular framework. A collection of standard modules is available, and customized modules are easy to create and integrate. In this way the TSDS can read and write data in a variety of formats and apply filters to them in a manner customizable to meet the needs of both data providers and consumers. The TSDS server is currently in use serving solar irradiance data from the LASP Interactive Solar IRradiance Datacenter (LISIRD, http://lasp.colorado.edu/lisird/), and is being introduced into the space physics virtual observatory community. The TSDS software is Open Source and available at SourceForge.
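
    Because a TSDS instance is a compliant DAP 2.0 server, a time-bounded request for selected variables is just a URL. The host, dataset, variable names, and selection clauses below are invented for illustration (in a real request the comparison operators may need percent-encoding).

        import urllib.request

        url = ("https://tsds.example.edu/tsds/irradiance.asc"
               "?time,tsi&time>=2009-01-01&time<=2009-01-31")
        print(urllib.request.urlopen(url).read().decode()[:300])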

  31. Interoperability Using Lightweight Metadata Standards: Service & Data Casting, OpenSearch, OPM Provenance, and Shared SciFlo Workflows

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.

    2011-12-01

    Under several NASA grants, we are generating multi-sensor merged atmospheric datasets to enable the detection of instrument biases and studies of climate trends over decades of data. For example, under a NASA MEaSUREs grant we are producing a water vapor climatology from the A-Train instruments, stratified by the CloudSat cloud classification for each geophysical scene. The generation and proper use of such multi-sensor climate data records (CDRs) require a high level of openness, transparency, and traceability. To make the datasets self-documenting and provide access to full metadata and traceability, we have implemented a set of capabilities and services using known, interoperable protocols. These protocols include OpenSearch, OPeNDAP, the Open Provenance Model, service- and data-casting technologies using Atom feeds, and REST-callable analysis workflows implemented as SciFlo (XML) documents. We advocate that our approach can serve as a blueprint for how to openly "document and serve" complex, multi-sensor CDRs with full traceability. The capabilities and services provided include: discovery of the collections by keyword search, exposed using the OpenSearch protocol; space/time query across the CDRs' granules and all of the input datasets via OpenSearch; user-level configuration of the production workflows, so that scientists can select additional physical variables from the A-Train to add to the next iteration of the merged datasets; efficient data merging using on-the-fly OPeNDAP variable slicing and spatial subsetting of data out of input netCDF and HDF files (without moving the entire files); self-documenting CDRs published in a highly usable netCDF4 format, with groups used to organize the variables, CF-style attributes for each variable, numeric array compression, and links to OPM provenance; recording of processing provenance and data lineage into a queryable provenance trail in Open Provenance Model (OPM) format, auto-captured by the workflow engine; open publishing of all of the workflows used to generate products as machine-callable REST web services, using the capabilities of the SciFlo workflow engine; advertising of the metadata (e.g., physical variables provided, space/time bounding box) for our prepared datasets as "datacasts" using the Atom feed format; publishing of all datasets via our "DataDrop" service, which exploits the WebDAV protocol to enable scientists to access remote data directories as local files on their laptops; rich "web browse" of the CDRs with full metadata and the provenance trail one click away; and advertising of all services as Google-discoverable "service casts" using the Atom format. The presentation will describe our use of the interoperable protocols and demonstrate the capabilities and service GUIs.

  12. OPeNDAP servers like Hyrax and TDS can easily support common single-sign-on authentication protocols using the Apache httpd and related software; adding support for these protocols to clients can be more challenging

    NASA Astrophysics Data System (ADS)

    Gallagher, J. H. R.; Potter, N.; Evans, B. J. K.

    2016-12-01

    OPeNDAP, in conjunction with the Australian National University, documented the installation process needed to add authentication to OPeNDAP-enabled data servers (Hyrax, TDS, etc.) and examined 13 OPeNDAP clients to determine how best to add authentication using LDAP, Shibboleth and OAuth2 (we used NASA's URS). We settled on a server configuration (architecture) that uses the Apache web server and a collection of open-source modules to perform the authentication and authorization actions. This is not the only way to accomplish those goals, but using Apache strikes a good balance: it leverages existing work that has been well vetted, and it supports a wide variety of web services, including those that depend on a servlet engine such as Tomcat (which both Hyrax and TDS do). Our work shows how LDAP, OAuth2 and Shibboleth can all be accommodated using this readily available software stack. Also important is that the Apache software is very widely used and is fairly robust, which is extremely important for security software components. In order to make use of a server requiring authentication, clients must support the authentication process. Because HTTP has included authentication for well over a decade, and because HTTP/HTTPS can be used by simply linking programs with a library, both the LDAP and OAuth2/URS authentication schemes have almost universal support within the OPeNDAP client base. The clients, i.e., the HTTP client libraries they employ, understand how to submit the credentials to the correct server when confronted by an HTTP/S Unauthorized (401) response. Interestingly, OAuth2 can achieve its SSO objectives while relying entirely on normative HTTP transport. All 13 of the clients examined worked. The situation with Shibboleth is different. While Shibboleth does use HTTP, it also requires the client to either scrape a web page or support the SAML 2.0 ECP profile, which, for programmatic clients, means using SOAP messages. Since working with SOAP is outside the scope of HTTP, support for Shibboleth must be added explicitly into the client software. Some of the potential burden of enabling OPeNDAP clients to work with Shibboleth may be mitigated by getting both the NetCDF-C and NetCDF-Java libraries to use the Shibboleth ECP profile. If done, this would get 9 of the 13 clients we examined working.
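
    The client-side half of the LDAP and OAuth2/URS flows discussed above amounts to an HTTP library replaying credentials when challenged. A minimal sketch, with a placeholder data URL and placeholder credentials (real URS deployments redirect to the Earthdata Login host, and many HTTP libraries can also pick up credentials from ~/.netrc):

        import requests

        data_url = "https://example.org/opendap/granule.nc.dods"  # hypothetical

        session = requests.Session()
        session.auth = ("username", "password")  # placeholder credentials

        # Follow redirects to the authentication service and back to the data.
        resp = session.get(data_url, allow_redirects=True)
        if resp.status_code == 401:
            # An HTTP Unauthorized challenge the client could not satisfy:
            # exactly the case the survey of 13 clients probed.
            raise RuntimeError("authentication failed")
        resp.raise_for_status()
        print(len(resp.content), "bytes of DAP response")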

  13. The Live Access Server Scientific Product Generation Through Workflow Orchestration

    NASA Astrophysics Data System (ADS)

    Hankin, S.; Calahan, J.; Li, J.; Manke, A.; O'Brien, K.; Schweitzer, R.

    2006-12-01

    The Live Access Server (LAS) is a well-established Web application for display and analysis of geoscience data sets. The software, which can be downloaded and installed by anyone, gives data providers an easy way to establish services for their on-line data holdings, so their users can make plots; create and download data sub-sets; compare (difference) fields; and perform simple analyses. Now at version 7.0, LAS has been in operation since 1994. The current "Armstrong" release of LAS V7 consists of three components in a tiered architecture: user interface, workflow orchestration, and Web Services. The LAS user interface (UI) communicates with the LAS Product Server via an XML protocol embedded in an HTTP "get" URL. Libraries (APIs) have been developed in Java, JavaScript, and Perl that can readily generate this URL. As a result of this flexibility it is common to find LAS user interfaces of radically different character, tailored to the nature of specific datasets or the mindset of specific users. When a request is received by the LAS Product Server (LPS, the workflow orchestration component), business logic converts this request into a series of Web Service requests invoked via SOAP. These "back-end" Web services perform data access and generate products (visualizations, data subsets, analyses, etc.). LPS then packages these outputs into final products (typically HTML pages) via Jakarta Velocity templates for delivery to the end user. "Fine grained" data access is performed by back-end services that may utilize JDBC for database access; the OPeNDAP "DAPPER" protocol; or (in principle) the OGC WFS protocol. Back-end visualization services are commonly legacy science applications wrapped in Java or Python (or Perl) classes and deployed as Web Services accessible via SOAP. Ferret is the default visualization application used by LAS, though other applications such as Matlab, CDAT, and GrADS can also be used. Other back-end services may include generation of Google Earth layers using KML; generation of maps via WMS or ArcIMS protocols; and data manipulation with Unix utilities.
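
    To make the "XML protocol embedded in an HTTP get URL" concrete, here is a rough sketch of generating such a request in Python. The element and attribute names in the XML are illustrative assumptions for shape only, not the actual LAS request schema:

        from urllib.parse import urlencode

        # Illustrative request document (assumed structure, for shape only).
        las_xml = (
            '<lasRequest>'
            '<link match="/lasdata/operations/plot"/>'
            '<args>'
            '<link match="/lasdata/datasets/sst/variables/temp"/>'
            '<region><range type="x" low="-180" high="180"/>'
            '<range type="y" low="-90" high="90"/></region>'
            '</args>'
            '</lasRequest>'
        )

        product_server = "http://example.org/las/ProductServer.do"  # hypothetical
        url = product_server + "?" + urlencode({"xml": las_xml})
        print(url)  # any HTTP-capable language can generate and issue this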

  14. Using Python Packages in 6D (Py)Ferret: EOF Analysis, OPeNDAP Sequence Data

    NASA Astrophysics Data System (ADS)

    Smith, K. M.; Manke, A.; Hankin, S. C.

    2012-12-01

    PyFerret was designed to provide the easy access, analysis, and display of data found in Ferret within the simple yet powerful Python scripting/programming language. This has enabled PyFerret to take advantage of a large and expanding collection of third-party scientific Python modules. Furthermore, ensemble and forecast axes have been added to Ferret and PyFerret for creating and working with collections of related data in Ferret's delayed-evaluation and minimal-data-access mode of operation. These axes simplify processing and visualization of these collections of related data. As one example, an empirical orthogonal function (EOF) analysis Python module was developed, taking advantage of the linear algebra module and other standard functionality in NumPy for efficient numerical array processing. This EOF analysis module is used in a Ferret function to provide an ensemble of the levels of data explained by each EOF and Time Amplitude Function (TAF) product. Another example makes use of the PyDAP Python module to provide OPeNDAP sequence data for use in Ferret, preserving the minimal data access characteristic of Ferret.
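
    The EOF module itself is the authors'; as a self-contained illustration of the underlying computation, a minimal NumPy sketch using synthetic data and illustrative variable names:

        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.standard_normal((120, 50))    # 120 times x 50 spatial points

        anom = data - data.mean(axis=0)          # remove the time mean
        # SVD of the anomaly matrix: columns of U scaled by s are the time
        # amplitude functions (TAFs); rows of Vt are the spatial EOF patterns.
        U, s, Vt = np.linalg.svd(anom, full_matrices=False)

        variance_fraction = s**2 / np.sum(s**2)  # fraction explained per EOF
        tafs = U * s                             # time amplitude functions
        eofs = Vt                                # spatial patterns

        print("EOF1 explains %.1f%% of the variance" % (100 * variance_fraction[0]))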

  15. Collaboration using open standards and open source software (examples of DIAS/CEOS Water Portal)

    NASA Astrophysics Data System (ADS)

    Miura, S.; Sekioka, S.; Kuroiwa, K.; Kudo, Y.

    2015-12-01

    The DIAS/CEOS Water Portal is a part of the DIAS (Data Integration and Analysis System, http://www.editoria.u-tokyo.ac.jp/projects/dias/?locale=en_US) systems for data distribution to users including, but not limited to, scientists, decision makers, and officers such as river administrators. One of the functions of this portal is to enable one-stop search of, and access to, various water-related data archived at multiple data centers located all over the world. This portal itself does not store data. Instead, according to requests made by users on the web page, it retrieves data from distributed data centers on the fly and lets users download the data and see rendered images/plots. Our system mainly relies on the open source software GI-cat (http://essi-lab.eu/do/view/GIcat) and open standards such as the OGC CSW, OpenSearch, and OPeNDAP protocols to enable the above functions. Details on how it works will be introduced during the presentation. Although some data centers have unique metadata formats and/or data search protocols, our portal's brokering function enables users to search across various data centers at one time. This portal is also connected to other data brokering systems, including the GEOSS DAB (Discovery and Access Broker). As a result, users can search over thousands of datasets and millions of files at one time. Users can access the DIAS/CEOS Water Portal system at http://waterportal.ceos.org/.

  16. OceanNOMADS: A New Distribution Node for Operational Ocean Model Output

    NASA Astrophysics Data System (ADS)

    Cross, S.; Vance, T.; Breckenridge, T.

    2009-12-01

    The NOAA National Operational Model Archive and Distribution System (NOMADS) is a distributed, web-services based project providing real-time and retrospective access to climate and weather model data and related datasets. OceanNOMADS is a new NOMADS node dedicated to ocean model and related data, with an initial focus on operational ocean models from NOAA and the U.S. Navy. The node offers data access through a Thematic Real-time Environmental Distributed Data Services (THREDDS) server via the commonly used OPeNDAP protocol. The primary server is operated by the National Coastal Data Development Center and hosted by the Northern Gulf Institute at Stennis Space Center, MS. In cooperation with the National Marine Fisheries Service and Mississippi State University (MSU), a duplicate server is being installed at MSU with a 1-gigabit connection to the National Lambda Rail. This setup will allow us to begin to quantify the benefit of high-speed data connections to scientists needing remote access to these large datasets. Work is also underway on the next generation of services from OceanNOMADS, including user-requested server-side data reformatting, regridding, and aggregation, as well as tools for model-data comparison.
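
    Access through a THREDDS/OPeNDAP endpoint like the one described typically looks as follows from a client. This is a hedged sketch with a placeholder URL and an assumed variable name, using the netCDF4-python library (which must be built with DAP support):

        from netCDF4 import Dataset

        url = "http://example.org/thredds/dodsC/ocean_model/latest.nc"  # hypothetical
        ds = Dataset(url)                 # opens the remote dataset lazily

        sst = ds.variables["water_temp"]  # assumed variable name
        # Index-based subsetting; the DAP constraint is generated under the
        # hood, so only the requested slab crosses the network.
        surface_field = sst[0, 0, 100:200, 300:400]
        print(surface_field.shape, surface_field.mean())
        ds.close()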

  17. HDF-EOS Web Server

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: extract metadata in Object Definition Language (ODL) from an HDF-EOS file; convert the metadata from ODL to Extensible Markup Language (XML); reformat the XML metadata into human-readable Hypertext Markup Language (HTML); publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeNDAP) server computer; and reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, which is a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth science data.

  18. Metadata in the Wild: An Empirical Survey of OPeNDAP-accessible Metadata and its Implications for Discovery

    NASA Astrophysics Data System (ADS)

    Hardy, D.; Janée, G.; Gallagher, J.; Frew, J.; Cornillon, P.

    2006-12-01

    The OPeNDAP Data Access Protocol (DAP) is a community standard for sharing scientific data across the Internet. Data providers using DAP have adopted a variety of metadata conventions to improve data utility, such as COARDS (1995) and CF (2003). Our results show, however, that metadata do not follow these conventions in practice. We collected metadata from over a hundred DAP servers, tens of thousands of data objects, and hundreds of collections. We found that a minority claim to adhere to a metadata convention, and a small percentage accurately adhere to their stated convention. We present descriptive statistics of our survey and highlight common traits such as well-populated attributes. Our empirical results indicate that unified search services cannot rely solely on metadata conventions. Although we encourage all providers to adopt a small subset of the CF convention for discovery purposes, we have no evidence to suggest that improved conventions would simplify the fundamental problem of heterogeneity. Large-scale discovery services must find methods for integrating incompatible metadata.
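
    A rough sketch of the kind of harvest behind such a survey: fetch the DAS (attribute) response from each DAP dataset and tally the convention each one claims. The server URLs are placeholders, and the regular expression assumes the usual DAS layout of Conventions "..." attribute lines:

        import re
        from collections import Counter

        import requests

        dataset_urls = [
            "http://example.org/opendap/sst.nc",      # hypothetical
            "http://example.org/opendap/chlor_a.nc",  # hypothetical
        ]

        claimed = Counter()
        for url in dataset_urls:
            das = requests.get(url + ".das").text
            m = re.search(r'Conventions"?\s+"([^"]+)"', das)
            claimed[m.group(1) if m else "none stated"] += 1

        for convention, count in claimed.most_common():
            print(convention, count)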

  19. Interoperable Data Access Services for NOAA IOOS

    NASA Astrophysics Data System (ADS)

    de La Beaujardiere, J.

    2008-12-01

    The Integrated Ocean Observing System (IOOS) is intended to enhance our ability to collect, deliver, and use ocean information. The goal is to support research and decision-making by providing data on our open oceans, coastal waters, and Great Lakes in the formats, rates, and scales required by scientists, managers, businesses, governments, and the public. The US National Oceanic and Atmospheric Administration (NOAA) is the lead agency for IOOS. NOAA's IOOS office supports the development of regional coastal observing capability and promotes data management efforts to increase data accessibility. Geospatial web services have been established at NOAA data providers including the National Data Buoy Center (NDBC), the Center for Operational Oceanographic Products and Services (CO-OPS), and CoastWatch, and at regional data provider sites. Services established include Open-source Project for a Network Data Access Protocol (OpenDAP), Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), and OGC Web Coverage Service (WCS). These services provide integrated access to data holdings that have been aggregated at each center from multiple sources. We wish to collaborate with other groups to improve our service offerings to maximize interoperability and enhance cross-provider data integration, and to share common service components such as registries, catalogs, data conversion, and gateways. This paper will discuss the current status of NOAA's IOOS efforts and possible next steps.

  20. Development of a Web-Based Visualization Platform for Climate Research Using Google Earth

    NASA Technical Reports Server (NTRS)

    Sun, Xiaojuan; Shen, Suhung; Leptoukh, Gregory G.; Wang, Panxing; Di, Liping; Lu, Mingyue

    2011-01-01

    Recently, it has become easier to access climate data from satellites, ground measurements, and models from various data centers. However, searching, accessing, and processing heterogeneous data from different sources are very time-consuming tasks. There is a lack of a comprehensive visual platform to acquire distributed and heterogeneous scientific data and to render processed images from a single access point for climate studies. This paper documents the design and implementation of a Web-based visual, interoperable, and scalable platform that is able to access climatological fields from models, satellites, and ground stations from a number of data sources, using Google Earth (GE) as a common graphical interface. The development is based on the TCP/IP protocol and various open-source data sharing components, such as OPeNDAP, GDS, Web Processing Service (WPS), and Web Mapping Service (WMS). The visualization capability of integrating various measurements into GE dramatically extends the awareness and visibility of scientific results. Using the embedded geographic information in GE, the designed system improves our understanding of the relationships of different elements in a four-dimensional domain. The system enables easy and convenient synergistic research on a virtual platform for professionals and the general public, greatly advancing global data sharing and scientific research collaboration.

  1. Creating and Searching a Local Inventory for Data Granules in a Remote Archive

    NASA Astrophysics Data System (ADS)

    Cornillon, P. C.

    2016-12-01

    More often than not, search capabilities for network-accessible data do not exist or do not meet the requirements of the user. For large archives this can make finding data of interest tedious at best. This summer, the author encountered such a problem with regard to the two existing archives of VIIRS L2 sea surface temperature (SST) fields obtained with the new ACSPO retrieval algorithm: one at the Jet Propulsion Laboratory's PO-DAAC and the other at NOAA's National Centers for Environmental Information (NCEI). In both cases the data were available via FTP and OPeNDAP, but there was no search capability at the PO-DAAC and the NCEI archive was incomplete. Furthermore, because it had to meet the needs of a broad range of datasets and users, the beta version of the search engine at NCEI was cumbersome for the searches of interest. Although some of these problems have since been resolved (and may be described in other posters/presentations at this meeting), the solution described in this presentation offers the user the ability to develop a search capability for archives lacking one, and/or to configure searches more to his or her preferences than the generic searches offered by the data provider. The solution, a Matlab script, used HTML access to the PO-DAAC web site to locate all VIIRS 10-minute granules and OPeNDAP access to acquire the bounding box for each granule from the metadata bound to the file. This task required several hours of wall time to acquire the data and to write the bounding boxes, with the associated FTP and OPeNDAP URLs, to a local file for the 110,000+ granule archive. A second Matlab script searched the local archive, in seconds, for granules falling in a user-defined space-time window, and an ASCII file of the associated wget commands was generated. This file was then executed to acquire the data of interest. The wget commands can be configured to acquire entire files via FTP or a subset of each file via OPeNDAP. Furthermore, the search capability, based on bounding boxes and rectangular regions, could easily be modified to further refine the search. Finally, the script that builds the inventory has been designed to update the local inventory in minutes per month rather than hours.
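
    The author's scripts were Matlab; as a language-neutral sketch of the search step, the following Python reads a local inventory and writes wget commands for granules intersecting a region. The column layout (one URL plus four bounding-box values per line) is an assumption, and a time-window test would be one more comparison per line:

        def search_inventory(path, bbox, out_script="get_granules.sh"):
            """Write wget commands for granules whose box intersects bbox."""
            w, e, s, n = bbox
            with open(path) as inv, open(out_script, "w") as out:
                for line in inv:
                    url, gw, ge, gs, gn = line.split()
                    gw, ge, gs, gn = map(float, (gw, ge, gs, gn))
                    # Rectangular intersection test (no date-line handling).
                    if gw <= e and ge >= w and gs <= n and gn >= s:
                        out.write("wget %s\n" % url)

        search_inventory("viirs_inventory.txt", bbox=(-75.0, -60.0, 30.0, 45.0))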

  2. Real-Time Access to Altimetry and Operational Oceanography Products via OPeNDAP/LAS Technologies : the Example of Aviso, Mercator and Mersea Projects

    NASA Astrophysics Data System (ADS)

    Baudel, S.; Blanc, F.; Jolibois, T.; Rosmorduc, V.

    2004-12-01

    The Products and Services (P&S) department in the Space Oceanography Division at CLS is in charge of distributing and promoting altimetry and operational oceanography data. P&S is thus involved in the Aviso satellite altimetry project, in the Mercator ocean operational forecasting system, and in the European Godae/Mersea ocean portal. Aiming at standardisation and a common vision and management of all these ocean data, these projects led to the implementation of several OPeNDAP/LAS Internet servers. OPeNDAP allows users to extract, via client software (such as IDL, Matlab or Ferret), only the data they are interested in, sparing them the need to download entire files. OPeNDAP supports extraction by geographic area, time period, oceanic variable, and output format. LAS is an OPeNDAP data access web server whose special feature is to unify, in a single view, access to multiple types of data from distributed data sources. The LAS can make requests to different remote OPeNDAP servers, which makes it possible to compare, or compute statistics over, several different data types. Aviso is the CNES/CLS service which has distributed altimetry products since 1993. The Aviso LAS distributes several Ssalto/Duacs altimetry products such as delayed-time and near-real-time mean sea level anomaly, absolute dynamic topography, absolute geostrophic velocities, gridded significant wave height, and gridded wind speed modulus. Mercator-Ocean is a French operational oceanography centre which distributes its products by several means, among them LAS/OPeNDAP servers, as part of the Mercator Mersea-strand1 contribution. 3D ocean descriptions (temperature, salinity, current and other oceanic variables) of the North Atlantic and Mediterranean are available in real time and updated weekly. The LAS feature of making requests to several remote data centres with the same OPeNDAP configuration was particularly well suited to the Mersea strand-1 problem. This European project (June 2003 to June 2004), sponsored by the European Commission, was the first experience of an integrated operational oceanography project. The objective was the assessment of several existing operational in situ and satellite monitoring and numerical forecasting systems for the future elaboration (Mersea Integrated Project, 2004-2008) of an integrated system able to deliver, operationally, information products (physical, chemical, biological) to end users in several domains related to environment, security and safety. Five forecasting ocean models with data assimilation, coming from operational in situ or satellite data centres, were intercompared. The main difficulty of this LAS implementation lay in the definition of the ocean model metrics and the adoption of a common file format, which required the model teams to produce the same datasets in the same formats (NetCDF, COARDS/CF convention). Note that this was a pioneering approach and that it has been adopted by Godae standards (see F. Blanc's paper in this session). Building on these web technologies and turning to more user-oriented concerns, future work includes the implementation of a Map Server, an open-source GIS server which will communicate with the OPeNDAP server. The Map Server will be able to manipulate raster and vector multidisciplinary remote data simultaneously. The aim is to construct a complete web-based oceanic data distribution service; the projects in which we are involved allow us to progress towards that goal.

  3. Global Ocean Currents Database

    NASA Astrophysics Data System (ADS)

    Boyer, T.; Sun, L.

    2016-02-01

    NOAA's National Centers for Environmental Information has released an ocean currents database portal that aims 1) to integrate global ocean currents observations from a variety of instruments, with different resolutions, accuracies, and responses to spatial and temporal variability, into a uniform network Common Data Form (NetCDF) format, and 2) to provide dedicated online data discovery of, and access to, NCEI-hosted and distributed sources of ocean currents data. The portal provides a tailored web application that allows users to search for ocean currents data by platform type and by spatial/temporal range of interest. The dedicated web application is available at http://www.nodc.noaa.gov/gocd/index.html. The NetCDF format supports widely used data access protocols and catalog services such as OPeNDAP (Open-source Project for a Network Data Access Protocol) and THREDDS (Thematic Real-time Environmental Distributed Data Services), which let GOCD users work with the data files in their favorite analysis and visualization client software without downloading them to a local machine. The potential users of the ocean currents database include, but are not limited to: 1) ocean modelers, for model skill assessment; 2) scientists and researchers, for studying the impact of ocean circulation on climate variability; 3) the ocean shipping industry, for safe navigation and for finding optimal routes for ship fuel efficiency; 4) ocean resources managers, for planning optimal sites for waste and sewage dumping and for renewable hydro-kinetic energy; and 5) state and federal governments, to provide historical (analyzed) ocean circulation as an aid for search and rescue.

  4. Optimizing the Use of Storage Systems Provided by Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Gallagher, J. H.; Potter, N.; Byrne, D. A.; Ogata, J.; Relph, J.

    2013-12-01

    Cloud computing systems present a set of features that include familiar computing resources (albeit augmented to support dynamic scaling of processing power) bundled with a mix of conventional and unconventional storage systems. The Linux base of many cloud environments (e.g., Amazon) makes it tempting to assume that any Unix software will run efficiently in this environment without change. OPeNDAP and NODC collaborated on a short project to explore how the S3 and Glacier storage systems provided by the Amazon cloud computing infrastructure could be used with a data server developed primarily to access data stored in a traditional Unix file system. Our work used the Amazon cloud system, but we strove for designs that could be adapted easily to other systems like OpenStack. Lastly, we evaluated different architectures from a computer security perspective. We found that there are considerable issues associated with treating S3 as if it were a traditional file system, even though doing so is conceptually simple. These issues include performance penalties: a software tool that emulates a traditional file system on top of S3 performs poorly compared to storing data directly in S3. We also found there are important benefits, beyond performance, to ensuring that data written to S3 can be accessed directly, without relying on a specific software tool. To provide a hierarchical organization to the data stored in S3, we wrote 'catalog' files, using XML. These catalog files map discrete files to S3 access keys. Like a traditional file system's directories, the catalogs can also contain references to other catalogs, providing a simple but effective hierarchy overlaid on top of S3's flat storage space. An added benefit of these catalogs is that they can be viewed in a web browser; our storage scheme provides both efficient access for the data server and access via a web browser. We also looked at the Glacier storage system and found that its response characteristics are very different from a traditional file system or database; it behaves like a near-line storage system. To be used by a traditional data server, the underlying access protocol must support asynchronous access, because the Glacier system takes a minimum of four hours to deliver any data object; systems built with the expectation of instant access (i.e., most web systems) must be fundamentally changed to use Glacier. Part of a related project has been to develop an asynchronous access mode for OPeNDAP, and we have developed a design using that new addition to the DAP protocol with Glacier as a near-line mass store. In summary, we found that both S3 and Glacier require special treatment to be used effectively by a data server. It is important to add (new) interfaces to data servers that enable them to use these storage devices through their native interfaces. We also found that our designs could easily map to a cloud environment based on OpenStack. Lastly, we noted that while these designs invite more liberal use of remote references for data objects, that can expose software to new security risks.
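
    A minimal sketch of the XML catalog idea described above: a document that maps logical file names to S3 access keys, with catalog references playing the role of subdirectories. The element names and keys are our illustration, not a published schema:

        import xml.etree.ElementTree as ET

        catalog = ET.Element("catalog", name="sst")
        ET.SubElement(catalog, "file", name="sst_2013_01.nc",
                      key="s3://example-bucket/ab12cd34")  # hypothetical key
        ET.SubElement(catalog, "file", name="sst_2013_02.nc",
                      key="s3://example-bucket/ef56gh78")
        # A child catalog reference provides the hierarchy overlay.
        ET.SubElement(catalog, "catalogRef", href="sst/2012/catalog.xml")

        ET.ElementTree(catalog).write("catalog.xml", encoding="utf-8",
                                      xml_declaration=True)
        print(open("catalog.xml").read())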

  5. NASA GES DISC support of CO2 Data from OCO-2, ACOS, and AIRS

    NASA Technical Reports Server (NTRS)

    Wei, Jennifer C; Vollmer, Bruce E.; Savtchenko, Andrey K.; Hearty, Thomas J; Albayrak, Rustem Arif; Deshong, Barbara E.

    2013-01-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is the data center assigned to archive and distribute current AIRS and ACOS data, as well as data from the upcoming OCO-2 mission. The GES DISC archives and supports data containing information on CO2 along with other atmospheric composition, atmospheric dynamics, modeling, and precipitation data. Along with data stewardship, an important mission of the GES DISC is to facilitate access to, and enhance the usability of, the data, as well as to broaden the user base. The GES DISC strives to promote awareness of the science content and novelty of the data by working with Science Team members and releasing news articles as appropriate. Analyses of events that are of interest to the general public, and that help in understanding the goals of NASA Earth Observing missions, have been among the most popular practices. Users have unrestricted access to a user-friendly search interface, Mirador, that allows temporal, spatial, keyword, and event searches, as well as an ontology-driven drill-down. Variable subsetting, format conversion, quality screening, and quick browse are among the services available in Mirador. The majority of the GES DISC data are also accessible through OPeNDAP (Open-source Project for a Network Data Access Protocol) and WMS (Web Map Service). These services add more options for specialized subsetting, format conversion, and image viewing, and contribute to data interoperability.

  6. Solar Irradiance Data Products at the LASP Interactive Solar IRradiance Datacenter (LISIRD)

    NASA Astrophysics Data System (ADS)

    Lindholm, D. M.; Ware DeWolfe, A.; Wilson, A.; Pankratz, C. K.; Snow, M. A.; Woods, T. N.

    2011-12-01

    The Laboratory for Atmospheric and Space Physics (LASP) has developed the LASP Interactive Solar IRradiance Datacenter (LISIRD, http://lasp.colorado.edu/lisird/) web site to provide access to a comprehensive set of solar irradiance measurements and related datasets. Current data holdings include products from NASA missions SORCE, UARS, SME, and TIMED-SEE. The data provided covers a wavelength range from soft X-ray (XUV) at 0.1 nm up to the near infrared (NIR) at 2400 nm, as well as Total Solar Irradiance (TSI). Other datasets include solar indices, spectral and flare models, solar images, and more. The LISIRD web site features updated plotting, browsing, and download capabilities enabled by dygraphs, JavaScript, and Ajax calls to the LASP Time Series Server (LaTiS). In addition to the web browser interface, most of the LISIRD datasets can be accessed via the LaTiS web service interface that supports the OPeNDAP standard. OPeNDAP clients and other programming APIs are available for making requests that subset, aggregate, or filter data on the server before it is transported to the user. This poster provides an overview of the LISIRD system, summarizes the datasets currently available, and provides details on how to access solar irradiance data products through LISIRD's interfaces.
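
    A hedged example of the server-side subsetting the LaTiS interface supports: an OPeNDAP-style request with a projection, a selection, and an output-format suffix. The dataset and variable names below are illustrative assumptions:

        import requests

        base = "http://lasp.colorado.edu/lisird/latis/dap"
        query = "/sorce_tsi_24hr.csv?time,tsi&time>=2010-01-01&time<2010-02-01"
        resp = requests.get(base + query)
        resp.raise_for_status()
        print(resp.text.splitlines()[:5])  # header plus first daily TSI values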

  7. A Sample Data Publication: Interactive Access, Analysis and Display of Remotely Stored Datasets From Hurricane Charley

    NASA Astrophysics Data System (ADS)

    Weber, J.; Domenico, B.

    2004-12-01

    This paper is an example of what we call data interactive publications. With a properly configured workstation, readers can click on "hotspots" in the document that launch an interactive analysis tool called the Unidata Integrated Data Viewer (IDV). The IDV enables readers to access, analyze, and display datasets on remote servers, as well as documents describing them. Beyond the parameters and datasets initially configured into the paper, the analysis tool will have access to all the other dataset parameters as well as to a host of other datasets on remote servers. These data interactive publications are built on top of several data delivery, access, discovery, and visualization tools developed by Unidata and its partner organizations. For purposes of illustrating this integrative technology, we will use data from the event of Hurricane Charley over Florida from August 13-15, 2004. This event illustrates how components of this process fit together. The Local Data Manager (LDM), Open-source Project for a Network Data Access Protocol (OPeNDAP) and Abstract Data Distribution Environment (ADDE) services, Thematic Realtime Environmental Distributed Data Service (THREDDS) cataloging services, and the IDV are highlighted in this example of a publication with embedded pointers for accessing and interacting with remote datasets. An important objective of this paper is to illustrate how these integrated technologies foster the creation of documents that allow the reader to learn the scientific concepts by direct interaction with illustrative datasets, and help build a framework for integrated Earth System science.

  8. The DIAS/CEOS Water Portal, distributed system using brokering architecture

    NASA Astrophysics Data System (ADS)

    Miura, Satoko; Sekioka, Shinichi; Kuroiwa, Kaori; Kudo, Yoshiyuki

    2015-04-01

    The DIAS/CEOS Water Portal is one of the DIAS (Data Integration and Analysis System, http://www.editoria.u-tokyo.ac.jp/projects/dias/?locale=en_US) systems for data distribution to users including, but not limited to, scientists, decision makers, and officers such as river administrators. This portal has two main functions: one is to search and access data, and the other is to register and share use cases that use datasets provided via this portal. This presentation focuses on the first function, to search and access data. The portal system is distributed in the sense that, while the portal system is located in Tokyo, the data are located in archive centers distributed around the globe. For example, some in-situ data are archived at the National Center for Atmospheric Research (NCAR) Earth Observing Laboratory in Boulder, Colorado, USA. The NWP station time series and global gridded model output data are archived at the Max Planck Institute for Meteorology (MPIM) in cooperation with the World Data Center for Climate in Hamburg, Germany. Part of the satellite data is archived in DIAS storage at the University of Tokyo, Japan. This portal itself does not store data. Instead, according to requests made by users on the web page, it retrieves data from distributed data centers on the fly and lets users download the data and see rendered images/plots. Although some data centers have unique metadata formats and/or data search protocols, our portal's brokering function enables users to search across various data centers at one time, like one-stop shopping. This portal is also connected to other data brokering systems, including the GEOSS DAB (Discovery and Access Broker). As a result, users can search over thousands of datasets and millions of files at one time. Our system mainly relies on the open source software GI-cat (http://essi-lab.eu/do/view/GIcat), the OpenSearch protocol, and the OPeNDAP protocol to enable the above functions. Details on how it works will be introduced during the presentation. Users can access the DIAS/CEOS Water Portal system at http://waterportal.ceos.org/.

  9. TES/Aura L2 Water Vapor (H2O) Limb V6 (TL2H2OLS)

    Atmospheric Science Data Center

    2018-03-01

    TES/Aura L2 Water Vapor (H2O) Limb (TL2H2OLS) News:  TES News ... Level:  L2 Platform:  TES/Aura L2 Water Vapor Spatial Coverage:  27 x 23 km Limb ... Access:  OPeNDAP Parameters:  H2O Water Volume Mixing Ratio Precision Vertical Resolution Order ...

  10. TES/Aura L2 Water Vapor (H2O) Limb V6 (TL2H2OL)

    Atmospheric Science Data Center

    2018-03-01

    TES/Aura L2 Water Vapor (H2O) Limb (TL2H2OL) News:  TES News ... Level:  L2 Platform:  TES/Aura L2 Water Vapor Spatial Coverage:  27 x 23 km Limb ... Access: OPeNDAP Parameters:  H2O Water Volume Mixing Ratio Precision Vertical Resolution Order ...

  11. The NCAR Research Data Archive's Hybrid Approach for Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Schuster, D.; Worley, S. J.

    2013-12-01

    The NCAR Research Data Archive (RDA, http://rda.ucar.edu) maintains a variety of data discovery and access capabilities for its 600+ dataset collections to support the varying needs of a diverse user community. In-house-developed and standards-based community tools offer services to more than 10,000 users annually. By number of users, the largest group is external and accesses the RDA through web-based protocols; the internal NCAR HPC users are fewer in number, but typically access more data volume. This paper will detail the data discovery and access services maintained by the RDA to support both user groups, and show metrics that illustrate how the community is using the services. The distributed search capability enabled by standards-based community tools, such as Geoportal and an OAI-PMH access point that serves multiple metadata standards, provides pathways for external users to initially discover RDA holdings. From there, in-house-developed web interfaces leverage primary discovery-level metadata databases that support keyword and faceted searches. Internal NCAR HPC users, or those familiar with the RDA, may go directly to the dataset collection of interest and refine their search based on rich file-collection metadata. Multiple levels of metadata have proven to be invaluable for discovery within terabyte-sized archives composed of many atmospheric or oceanic levels, hundreds of parameters, and often numerous grid and time resolutions. Once users find the data they want, their access needs may vary as well. A THREDDS data server running on targeted dataset collections enables remote file access through OPeNDAP and other web-based protocols, primarily for external users. In-house-developed tools give all users the capability to submit data subset extraction and format conversion requests through scalable, HPC-based delayed-mode batch processing. Users can monitor their RDA-based data processing progress and receive instructions on how to access the data when they are ready. External users are provided with RDA-server-generated scripts to download the resulting request output. Similarly, they can download native dataset collection files or partial files using Wget- or cURL-based scripts supplied by the RDA server. Internal users can access the resulting request output or native dataset collection files directly from centralized file systems.

  12. Data Publishing and Sharing Via the THREDDS Data Repository

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Caron, J.; Davis, E.; Baltzer, T.

    2007-12-01

    The terms "Team Science" and "Networked Science" have been coined to describe a virtual organization of researchers tied via some intellectual challenge, but often located in different organizations and locations. A critical component to these endeavors is publishing and sharing of content, including scientific data. Imagine pointing your web browser to a web page that interactively lets you upload data and metadata to a repository residing on a remote server, which can then be accessed by others in a secure fasion via the web. While any content can be added to this repository, it is designed particularly for storing and sharing scientific data and metadata. Server support includes uploading of data files that can subsequently be subsetted, aggregrated, and served in NetCDF or other scientific data formats. Metadata can be associated with the data and interactively edited. The THREDDS Data Repository (TDR) is a server that provides client initiated, on demand, location transparent storage for data of any type that can then be served by the THREDDS Data Server (TDS). The TDR provides functionality to: * securely store and "own" data files and associated metadata * upload files via HTTP and gridftp * upload a collection of data as single file * modify and restructure repository contents * incorporate metadata provided by the user * generate additional metadata programmatically * edit individual metadata elements The TDR can exist separately from a TDS, serving content via HTTP. Also, it can work in conjunction with the TDS, which includes functionality to provide: * access to data in a variety of formats via -- OPeNDAP -- OGC Web Coverage Service (for gridded datasets) -- bulk HTTP file transfer * a NetCDF view of datasets in NetCDF, OPeNDAP, HDF-5, GRIB, and NEXRAD formats * serving of very large volume datasets, such as NEXRAD radar * aggregation into virtual datasets * subsetting via OPeNDAP and NetCDF Subsetting services This talk will discuss TDR/TDS capabilities as well as how users can install this software to create their own repositories.

  13. Collaboration tools and techniques for large model datasets

    USGS Publications Warehouse

    Signell, R.P.; Carniel, S.; Chiggiato, J.; Janekovic, I.; Pullen, J.; Sherwood, C.R.

    2008-01-01

    In MREA and many other marine applications, it is common to have multiple models running with different grids, run by different institutions. Techniques and tools are described for low-bandwidth delivery of data from large multidimensional datasets, such as those from meteorological and oceanographic models, directly into generic analysis and visualization tools. Output is stored using the NetCDF CF Metadata Conventions, and then delivered to collaborators over the web via OPeNDAP. OPeNDAP datasets served by different institutions are then organized via THREDDS catalogs. Tools and procedures are then used which enable scientists to explore data on the original model grids using tools they are familiar with. It is also low-bandwidth, enabling users to extract just the data they require, an important feature for access from ship or remote areas. The entire implementation is simple enough to be handled by modelers working with their webmasters - no advanced programming support is necessary. © 2007 Elsevier B.V. All rights reserved.

  14. LASP Time Series Server (LaTiS): Overcoming Data Access Barriers via a Common Data Model in the Middle Tier (Invited)

    NASA Astrophysics Data System (ADS)

    Lindholm, D. M.; Wilson, A.

    2010-12-01

    The Laboratory for Atmospheric and Space Physics at the University of Colorado has developed an open-source, OPeNDAP-compliant, Java-Servlet-based, RESTful web service to serve time series data. In addition to handling OPeNDAP-style requests and returning standard responses, existing modules for alternate output formats can be reused or customized. It is also simple to reuse or customize modules to directly read various native data sources and even to perform some processing on the server. The server is built around a common data model based on the Unidata Common Data Model (CDM), which merges the NetCDF, HDF, and OPeNDAP data models. The server framework features a modular architecture that supports pluggable Readers, Writers, and Filters via the common interface to the data, enabling a workflow that reads data from their native form, performs some processing on the server, and presents the results to the client in its preferred form. The service is currently being used operationally to serve time series data for the LASP Interactive Solar IRradiance Datacenter (LISIRD, http://lasp.colorado.edu/lisird/) and as part of the Time Series Data Server (TSDS, http://tsds.net/). I will present the data model and how it enables reading, writing, and processing concerns to be separated into loosely coupled components. I will also share thoughts for evolving beyond the time series abstraction and providing a general purpose data service that can be orchestrated into larger workflows.
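
    A conceptual sketch (not the actual LaTiS API) of the reader/filter/writer separation that a common data model makes possible; here the "model" is simply a sequence of (time, value) samples:

        import csv
        import io
        from typing import Iterable, Iterator, Tuple

        Sample = Tuple[float, float]

        def read_samples(text: str) -> Iterator[Sample]:
            """Reader: adapt a native form (CSV here) into the common model."""
            for row in csv.reader(io.StringIO(text)):
                yield float(row[0]), float(row[1])

        def select_range(samples: Iterable[Sample], lo, hi) -> Iterator[Sample]:
            """Filter: server-side processing expressed against the model."""
            return (s for s in samples if lo <= s[0] <= hi)

        def write_json(samples: Iterable[Sample]) -> str:
            """Writer: present the model in the client's preferred form."""
            return "[" + ",".join("[%g,%g]" % s for s in samples) + "]"

        raw = "0,10.5\n1,11.0\n2,10.8\n"
        print(write_json(select_range(read_samples(raw), 1, 2)))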

  15. High Availability Applications for NOMADS at the NOAA Web Operations Center Aimed at Providing Reliable Real Time Access to Operational Model Data

    NASA Astrophysics Data System (ADS)

    Alpert, J. C.; Rutledge, G.; Wang, J.; Freeman, P.; Kang, C. Y.

    2009-05-01

    The NOAA Operational Model Archive and Distribution System (NOMADS) is now delivering high-availability services as part of NOAA's official real-time data dissemination at its Web Operations Center (WOC). The WOC is a web service used by all organizational units in NOAA and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value-added retailers, and the public for science and development efforts aimed at advancing modeling and GEO-related tasks. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing, and area-subsetting the large matrix of real-time model data holdings. This approach ensures an efficient use of computer resources because users transmit/receive only the data necessary for their tasks, including metadata. Data sets served in this way from a high-availability server offer vast possibilities for the creation of new products by value-added retailers and the scientific community. New applications to access data and observations for verification of gridded model output, and progress toward integrated access to conventional and non-conventional observations, will be discussed. We will demonstrate how users can use NOMADS services to extract area subsets, either by repackaging GRIB2 files or by selecting values by ensemble component, (forecast) time, vertical level, global horizontal location, and variable: virtually a 6-dimensional analysis service across the internet.
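
    The "slicing and dicing" described above maps directly onto DAP hyperslab constraints. A hedged sketch against a hypothetical GDS dataset (the URL, variable name, and dimension order are assumptions):

        import requests

        url = "http://example.org/dods/gens"          # hypothetical GDS dataset
        # One ensemble member, four forecast times, one level, and a regional
        # window of an assumed 5-D grid: [ens][time][lev][lat][lon].
        ce = "?tmpprs[0][0:3][10][200:239][400:459]"
        resp = requests.get(url + ".ascii" + ce)
        resp.raise_for_status()
        print(resp.text[:400])                        # header and first values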

  16. Extending the GI Brokering Suite to Support New Interoperability Specifications

    NASA Astrophysics Data System (ADS)

    Boldrini, E.; Papeschi, F.; Santoro, M.; Nativi, S.

    2014-12-01

    The GI brokering suite provides the discovery, access, and semantic brokers (i.e., GI-cat, GI-axe, GI-sem) that empower a brokering framework for multi-disciplinary and multi-organizational interoperability. The GI suite has been successfully deployed in the framework of several programmes and initiatives, such as European Union funded projects, NSF BCube, and the intergovernmental coordinated effort Global Earth Observation System of Systems (GEOSS). Each GI suite broker facilitates interoperability for a particular functionality (i.e., discovery, access, semantic extension) between a set of brokered resources published by autonomous providers (e.g., data repositories, web services, semantic assets) and a set of heterogeneous consumers (e.g., client applications, portals, apps). A wide set of data models, encoding formats, and service protocols are already supported by the GI suite, such as the ones defined by international standardizing organizations like OGC and ISO (e.g., WxS, CSW, SWE, GML, netCDF) and by community specifications (e.g., THREDDS, OpenSearch, OPeNDAP, ESRI APIs). Using the GI suite, resources published by a particular community or organization through their specific technology (e.g., OPeNDAP/netCDF) can be transparently discovered, accessed, and used by different communities utilizing their preferred tools (e.g., a GIS visualizing WMS layers). Since information technology is a moving target, new standards and technologies continuously emerge and are adopted in the Earth Science context too. Therefore, the GI brokering suite was conceived to be flexible and to accommodate new interoperability protocols and data models. For example, the GI suite has recently added support for well-used specifications introduced to implement Linked Data, the Semantic Web, and specific community needs. Among others, these include:
    - DCAT: an RDF vocabulary designed to facilitate interoperability between Web data catalogs.
    - CKAN: a data management system for data distribution, particularly used by public administrations.
    - CERIF: used by CRIS (Current Research Information System) instances.
    - Hyrax server: a scientific dataset publishing component.
    This presentation will discuss these and other recent GI suite extensions implemented to support new interoperability protocols in use by the Earth Science communities.

  17. CMCC Data Distribution Centre

    NASA Astrophysics Data System (ADS)

    Aloisio, Giovanni; Fiore, Sandro; Negro, A.

    2010-05-01

    The CMCC Data Distribution Centre (DDC) is the primary entry point (web gateway) to the CMCC. It is a Data Grid Portal providing a ubiquitous and pervasive way to ease data publishing, climate metadata search, dataset discovery, metadata annotation, data access, data aggregation, sub-setting, etc. The grid portal security model includes the use of the HTTPS protocol for secure communication with the client (based on X509v3 certificates that must be loaded into the browser) and secure cookies to establish and maintain user sessions. The CMCC DDC is now in a pre-production phase and is currently used only by internal users (CMCC researchers and climate scientists). The most important component already available in the CMCC DDC is the Search Engine, which allows users to perform, through web interfaces, distributed search and discovery activities by introducing one or more of the following search criteria: horizontal extent (which can be specified by interacting with a geographic map), vertical extent, temporal extent, keywords, topics, creation date, etc. By means of this page the user submits the first step of the query process on the metadata DB; then she can choose one or more datasets, retrieving and displaying the complete XML metadata description (from the browser). This way, the second step of the query process is carried out by accessing a specific XML document of the metadata DB. Finally, through the web interface, the user can access and download (partially or totally) the data stored on the storage devices by accessing OPeNDAP servers and other available grid storage interfaces. Requests concerning datasets stored in deep storage will be served asynchronously.

  18. OpenClimateGIS - A Web Service Providing Climate Model Data in Commonly Used Geospatial Formats

    NASA Astrophysics Data System (ADS)

    Erickson, T. A.; Koziol, B. W.; Rood, R. B.

    2011-12-01

    The goal of the OpenClimateGIS project is to make climate model datasets readily available in commonly used, modern geospatial formats used by GIS software, browser-based mapping tools, and virtual globes. The climate modeling community typically stores climate data in multidimensional gridded formats capable of efficiently storing large volumes of data (such as netCDF and GRIB), while the geospatial community typically uses flexible vector and raster formats that are capable of storing small volumes of data (relative to the multidimensional gridded formats). OpenClimateGIS seeks to address this difference in data formats by clipping climate data to user-specified vector geometries (i.e., areas of interest) and translating the gridded data on-the-fly into multiple vector formats. The OpenClimateGIS system does not store climate data archives locally, but rather works in conjunction with external climate archives that expose climate data via the OPeNDAP protocol. OpenClimateGIS provides a RESTful API web service for accessing climate data resources via HTTP, allowing a wide range of applications to access the climate data. The OpenClimateGIS system has been developed using open source development practices, and the source code is publicly available. The project integrates libraries from several other open source projects (including Django, PostGIS, numpy, Shapely, and netcdf4-python). OpenClimateGIS development is supported by a grant from NOAA's Climate Program Office.
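
    As a loudly hypothetical illustration of the RESTful pattern (the resource path and parameter names below are invented for shape, not the actual OpenClimateGIS API): a single GET names a variable, a time range, an area of interest, and an output format:

        import requests

        params = {
            "variable": "tas",                        # assumed variable name
            "time": "2000-01-01/2000-12-31",          # assumed range syntax
            "geometry": "state_boundaries/colorado",  # assumed AOI resource
            "format": "geojson",
        }
        resp = requests.get("http://example.org/ocgis/subset",  # hypothetical
                            params=params)
        resp.raise_for_status()
        with open("tas_colorado.geojson", "wb") as f:
            f.write(resp.content)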

  19. User-Friendly Data Servers for Climate Studies at the Asia-Pacific Data-Research Center (APDRC)

    NASA Astrophysics Data System (ADS)

    Yuan, G.; Shen, Y.; Zhang, Y.; Merrill, R.; Waseda, T.; Mitsudera, H.; Hacker, P.

    2002-12-01

    The APDRC was recently established within the International Pacific Research Center (IPRC) at the University of Hawaii. The APDRC mission is to increase understanding of climate variability in the Asia-Pacific region by developing the computational, data-management, and networking infrastructure necessary to make data resources readily accessible and usable by researchers, and by undertaking data-intensive research activities that will both advance knowledge and lead to improvements in data preparation and data products. A focus of recent activity is the implementation of user-friendly data servers. The APDRC is currently running a Live Access Server (LAS) developed at NOAA/PMEL to provide access to and visualization of gridded climate products via the web. The LAS also allows users to download the selected data subsets in various formats (such as binary, netCDF and ASCII). Most of the datasets served by the LAS are also served through our OPeNDAP server (formerly DODS), which allows users to directly access the data using their desktop client tools (e.g. GrADS, Matlab and Ferret). In addition, the APDRC is running an OPeNDAP Catalog/Aggregation Server (CAS) developed by Unidata at UCAR to serve climate data and products such as model output and satellite-derived products. These products are often large (> 2 GB) and are therefore stored as multiple files (stored separately in time or in parameters). The CAS remedies the inconvenience of multiple files and allows access to the whole dataset (or any subset that cuts across the multiple files) via a single request command from any DODS enabled client software. Once the aggregation of files is configured at the server (CAS), the process of aggregation is transparent to the user. The user only needs to know a single URL for the entire dataset, which is, in fact, stored as multiple files. CAS even allows aggregation of files on different systems and at different locations. Currently, the APDRC is serving NCEP, ECMWF, SODA, WOCE-Satellite, TMI, GPI and GSSTF products through the CAS. The APDRC is also running an EPIC server developed by PMEL/NOAA. EPIC is a web-based, data search and display system suited for in situ (station versus gridded) data. The process of locating and selecting individual station data from large collections (millions of profiles or time series, etc.) of in situ data is a major challenge. Serving in situ data on the Internet faces two problems: the irregularity of data formats; and the large quantity of data files. To solve the first problem, we have converted the in situ data into netCDF data format. The second problem was solved by using the EPIC server, which allows users to easily subset the files using a friendly graphical interface. Furthermore, we enhanced the capability of EPIC and configured OPeNDAP into EPIC to serve the numerous in situ data files and to export them to users through two different options: 1) an OPeNDAP pointer file of user-selected data files; and 2) a data package that includes meta-information (e.g., location, time, cruise no, etc.), a local pointer file, and the data files that the user selected. Option 1) is for those who do not want to download the selected data but want to use their own application software (such as GrADS, Matlab and Ferret) for access and analysis; option 2) is for users who want to store the data on their own system (e.g. laptops before going for a cruise) for subsequent analysis. 
Currently, WOCE CTD and bottle data, the WOCE current meter data, and some Argo float data are being served on the EPIC server.

  20. NASA's Earth Science Data Systems Standards Process Experiences

    NASA Technical Reports Server (NTRS)

    Ullman, Richard E.; Enloe, Yonsook

    2007-01-01

    NASA has impaneled several internal working groups to provide recommendations to NASA management on ways to evolve and improve Earth Science Data Systems. One of these working groups is the Standards Process Group (SPG). The SPG is drawn from NASA-funded Earth Science Data Systems stakeholders, and it directs a process of community review and evaluation of proposed NASA standards. The working group's goal is to promote interoperability and interuse of NASA Earth Science data through broader use of standards that have proven implementation and operational benefit to NASA Earth science, by facilitating NASA management endorsement of proposed standards. The SPG now has two years of experience with this approach to the identification of standards. We will discuss real examples of the different types of candidate standards that have been proposed to NASA's Standards Process Group, such as OPeNDAP's Data Access Protocol, the Hierarchical Data Format, and the Open Geospatial Consortium's Web Map Server. Each of the three types of proposals requires a different sort of criteria for understanding the broad concepts of "proven implementation" and "operational benefit" in the context of NASA Earth Science data systems. We will discuss how our Standards Process has evolved with our experiences with the three candidate standards.

  1. Data Access Services that Make Remote Sensing Data Easier to Use

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher

    2010-01-01

    This slide presentation reviews some of the processes that NASA uses to make remote sensing data easy to use over the World Wide Web. This work involves much research into data formats, geolocation structures, and quality indicators, often followed by coding a preprocessing program. Only then are the data usable within the analysis tool of choice. The Goddard Earth Sciences Data and Information Services Center is deploying a variety of data access services that are designed to dramatically shorten the time consumed in the data preparation step. On-the-fly conversion to the standard network Common Data Form (netCDF) format with Climate and Forecast (CF) conventions imposes a standard coordinate system framework that makes data instantly readable by several tools, such as the Integrated Data Viewer, the Grid Analysis and Display System, Panoply, and Ferret. A similar benefit is achieved by serving data through the Open Source Project for a Network Data Access Protocol (OPeNDAP), which also provides subsetting. The Data Quality Screening Service goes a step further, filtering out data points based on quality-control flags, following science team recommendations or user-specified criteria. Further still, the Giovanni online analysis system goes beyond handling formatting and quality to provide visualization and basic statistics of the data. This general approach of automating the preparation steps has the important added benefit of enabling use of the data by non-human users (i.e., computer programs), which often make sub-optimal use of the available data due to the need to hard-code data preparation on the client side.

  2. TRMM Precipitation Application Examples Using Data Services at NASA GES DISC

    NASA Technical Reports Server (NTRS)

    Liu, Zhong; Ostrenga, D.; Teng, W.; Kempler, S.; Greene, M.

    2012-01-01

    Data services to support precipitation applications are important for maximizing the societal benefits of the NASA TRMM (Tropical Rainfall Measuring Mission) and the future GPM (Global Precipitation Measurement) missions. TRMM application examples using data services at the NASA GES DISC, including samples from users around the world, will be presented in this poster. Precipitation applications often require near-real-time support. The GES DISC provides such support through: 1) near-real-time precipitation products through TOVAS; 2) maps of current conditions for monitoring precipitation and its anomaly around the world; 3) a user-friendly tool (TOVAS) to analyze and visualize near-real-time and historical precipitation products; and 4) the GES DISC Hurricane Portal, which provides near-real-time monitoring services for the Atlantic basin. Since the launch of TRMM, the GES DISC has developed data services to support precipitation applications around the world. In addition to the near-real-time services, other services include: 1) the user-friendly TRMM Online Visualization and Analysis System (TOVAS; URL: http://disc2.nascom.nasa.gov/Giovanni/tovas/); 2) Mirador (http://mirador.gsfc.nasa.gov/), a simplified interface for searching, browsing, and ordering Earth science data at the GES DISC; Mirador is designed to be fast and easy to learn; 3) data via OPeNDAP (http://disc.sci.gsfc.nasa.gov/services/opendap/); OPeNDAP provides remote access to individual variables within datasets in a form usable by many tools, such as IDV, McIDAS-V, Panoply, Ferret and GrADS; and 4) the Open Geospatial Consortium (OGC) Web Map Service (WMS) (http://disc.sci.gsfc.nasa.gov/services/wxs_ogc.shtml); WMS is a standard interface that enables clients to build customized maps from data served across different networks.
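    For readers unfamiliar with WMS, the interface boils down to an HTTP request with standardized query parameters. The sketch below uses a hypothetical endpoint and layer name; the real GES DISC WMS documents its own layers at the URL given above.

```python
# A minimal WMS 1.1.1 GetMap request; endpoint and layer name are hypothetical.
import requests

params = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "TRMM_3B43_rainfall",       # hypothetical layer name
    "SRS": "EPSG:4326", "BBOX": "-180,-40,180,40",
    "WIDTH": 720, "HEIGHT": 160, "FORMAT": "image/png",
}
r = requests.get("http://disc.example.nasa.gov/wms", params=params)  # hypothetical
with open("trmm_rainfall.png", "wb") as f:
    f.write(r.content)
```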

  3. Visualization of Coastal Data Through KML

    NASA Astrophysics Data System (ADS)

    Damsma, T.; Baart, F.; de Boer, G.; van Koningsveld, M.; Bruens, A.

    2009-12-01

    As a country that lies mostly below sea level, the Netherlands has a long history of coastal engineering and is world renowned for its leading role in Integrated Coastal Zone Management (ICZM). Within the framework of Building with Nature (a Dutch ICZM research program), an OPeNDAP server is used to host several datasets of the Dutch coast. Among these sets are bathymetric data, cross-shore profiles, and water level time series, some of which date back to the eighteenth century. The challenge with hosting this amount of data lies more in dissemination and accessibility than in the technical aspects (tracing, accessing, gathering, unifying and storing). With so much data in different sets, how can one easily know when and where data are available, and of what quality they are? Recent work using Google Earth as a visual front-end for this database has proven very encouraging. Taking full advantage of the four-dimensional (3D + time) visualization capabilities allows researchers, consultants and the general public to view, access and interact with the data. Within MATLAB, a set of generic tools has been developed for the easy creation of, among others, the products listed below (a short KML sketch follows the list):

    • A high resolution, time animated, historic bathymetry of the entire Dutch coast.
    • 3D curvilinear computation grids.
    • A 3D contour plot of the Westerschelde Estuary.
    • Time animated wind and water flow fields, both with traditional quiver diagrams and arrows that move with the flow field.
    • Various overviews of markers containing direct web links to data and metadata on the OPeNDAP server.

    [Figure captions: wind field (arrows) and water level elevation from model calculations of Katrina, animated over 14 days; coastal cross-sections (with exaggerated height) and 2D positions of high and low water lines, animated over 40 years.]
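    To make the KML output concrete, here is a minimal Python sketch of a placemark whose description links back to data on the OPeNDAP server. The names, coordinates and URL are illustrative placeholders; the project's actual MATLAB toolbox generates far richer, time-animated KML.

```python
# Write a minimal KML placemark linking a coastal station to its OPeNDAP URL.
# All names, coordinates and the URL are illustrative placeholders.
kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Cross-shore profile (illustrative)</name>
    <description><![CDATA[
      Data and metadata: <a href="http://opendap.example.nl/transects/profile040.nc">OPeNDAP</a>
    ]]></description>
    <Point><coordinates>4.57,52.46,0</coordinates></Point>
  </Placemark>
</kml>
"""
with open("profile040.kml", "w") as f:
    f.write(kml)
```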

    • Distributed data discovery, access and visualization services to Improve Data Interoperability across different data holdings

      NASA Astrophysics Data System (ADS)

      Palanisamy, G.; Krassovski, M.; Devarakonda, R.; Santhana Vannan, S.

      2012-12-01

      The current climate debate is highlighting the importance of free, open, and authoritative sources of high-quality climate data that are available for peer review and for collaborative purposes. It is increasingly important to allow various organizations around the world to share climate data in an open manner, and to enable them to perform dynamic processing of climate data. This advanced access to data can be enabled via Web-based services, using common "community agreed" standards, without organizations having to change the internal structure used to describe their data. The modern scientific community has become diverse and increasingly complex in nature. To meet the demands of such a diverse user community, the modern data supplier has to provide data and other related information through searchable, data- and process-oriented tools. This can be accomplished by setting up an online, Web-based system with a relational database as a back end. The following common features of web data access/search systems will be outlined in the proposed presentation:

      - Flexible data discovery
      - Data in commonly used formats (e.g., CSV, NetCDF)
      - Metadata prepared in standard formats (FGDC, ISO 19115, EML, DIF, etc.)
      - Data subsetting capabilities and the ability to narrow down to individual data elements
      - Standards-based data access protocols and mechanisms (SOAP, REST, OPeNDAP, OGC, etc.)
      - Integration of services across different data systems (discovery to access, visualization and subsetting)

      This presentation will also include specific examples of the integration of various data systems developed by Oak Ridge National Laboratory's Climate Change Science Institute, and of their ability to communicate with each other to enable better data interoperability and data integration. References: [1] Devarakonda, Ranjeet, and Harold Shanafield. "Drupal: Collaborative framework for science research." Collaboration Technologies and Systems (CTS), 2011 International Conference on. IEEE, 2011. [2] Devarakonda, R., Shrestha, B., Palanisamy, G., Hook, L. A., Killeffer, T. S., Boden, T. A., ... & Lazer, K. (2014). The new online metadata editor for generating structured metadata. Oak Ridge National Laboratory (ORNL).
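      As a hedged illustration of the "one query, several output formats" pattern listed above, the fragment below issues a REST-style subset request. The endpoint, dataset identifier and parameter names are hypothetical, since real ORNL services define their own query vocabularies.

```python
# Hypothetical REST subset request: same query, selectable output format.
import requests

base = "http://data.example.ornl.gov/api/subset"          # hypothetical endpoint
query = {"dataset": "daymet_daily", "variable": "prcp",   # hypothetical names
         "bbox": "-90,35,-80,40", "format": "csv"}        # or "netcdf"
r = requests.get(base, params=query)
with open("subset.csv", "wb") as f:
    f.write(r.content)
```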

    • Tropical Rainfall Measuring Mission (TRMM) Precipitation Data and Services for Research and Applications

      NASA Technical Reports Server (NTRS)

      Liu, Zhong; Ostrenga, Dana; Teng, William; Kempler, Steven

      2012-01-01

      Precipitation is a critical component of the Earth's hydrological cycle. Launched on 27 November 1997, TRMM is a joint U.S.-Japan satellite mission to provide the first detailed and comprehensive data set of the four-dimensional distribution of rainfall and latent heating over the vastly under-sampled tropical and subtropical oceans and continents (40 S - 40 N). Over the past 14 years, TRMM has been a major data source for meteorological, hydrological and other research and application activities around the world. The purpose of this short article is to describe the TRMM archive and near-real-time precipitation data sets and services that the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) provides for research and applications. TRMM data consist of orbital data from TRMM instruments at the sensor's resolution, gridded data at a range of spatial and temporal resolutions, subsets, ground-based instrument data, and ancillary data. Data analysis, display, and delivery are facilitated by the following services: (1) Mirador (data search and access); (2) TOVAS (TRMM Online Visualization and Analysis System); (3) OPeNDAP (Open-source Project for a Network Data Access Protocol); (4) the GrADS Data Server (GDS); and (5) the Open Geospatial Consortium (OGC) Web Map Service (WMS) for the GIS community. Precipitation data application services are available to support a wide variety of applications around the world. Future plans include enhanced and new services to address data-related issues from the user community. Meanwhile, the GES DISC is preparing for the Global Precipitation Measurement (GPM) mission, which is scheduled for launch in 2014.

    • Scientific Platform as a Service - Tools and solutions for efficient access to and analysis of oceanographic data

      NASA Astrophysics Data System (ADS)

      Vines, Aleksander; Hansen, Morten W.; Korosov, Anton

      2017-04-01

      Existing international and Norwegian infrastructure projects, e.g., NorDataNet, NMDC and NORMAP, provide open data access through the OPeNDAP protocol following the Climate and Forecast (CF) metadata conventions, which are designed to promote the processing and sharing of files created with the NetCDF application programming interface (API). This approach is now also being implemented in the Norwegian Sentinel Data Hub (satellittdata.no) to provide satellite EO data to the user community. Simultaneously with providing simplified and unified data access, these projects also seek to use and establish common standards for use and discovery metadata. This in turn allows the development of standardized tools for data search and (subset) streaming over the internet to perform actual scientific analysis. A combination of software tools, which we call a Scientific Platform as a Service (SPaaS), will take advantage of these opportunities to harmonize and streamline the search, retrieval and analysis of integrated satellite and auxiliary observations of the oceans in a seamless system. The SPaaS is a cloud solution for the integration of analysis tools with scientific datasets via an API. The core part of the SPaaS is a distributed metadata catalog that stores granular metadata describing the structure, location and content of available satellite, model, and in situ datasets. The analysis tools include software for visualization (also online), interactive in-depth analysis, and server-based processing chains. The API conveys search requests between system nodes (i.e., interactive and server tools) and provides easy access to the metadata catalog, data repositories, and the tools. The SPaaS components are integrated in virtual machines, whose provisioning and deployment are automated using existing state-of-the-art open-source tools (e.g., Vagrant, Ansible, Docker). The open-source code for the scientific tools and virtual machine configurations is under version control at https://github.com/nansencenter/, and is coupled to an online continuous integration system (e.g., Travis CI).

    • Lowering the Barriers to Using Data: Enabling Desktop-based HPD Science through Virtual Environments and Web Data Services

      NASA Astrophysics Data System (ADS)

      Druken, K. A.; Trenham, C. E.; Steer, A.; Evans, B. J. K.; Richards, C. J.; Smillie, J.; Allen, C.; Pringle, S.; Wang, J.; Wyborn, L. A.

      2016-12-01

      The Australian National Computational Infrastructure (NCI) provides access to petascale data in climate, weather, Earth observations, and genomics, and terascale data in astronomy, geophysics, ecology and land use, as well as the social sciences. The data are centralized in a closely integrated High Performance Computing (HPC), High Performance Data (HPD) and cloud facility. Despite this, there remain significant barriers for many users to find and access the data: simply hosting a large volume of data is not helpful if researchers are unable to find, access, and use the data for their particular need. Use cases demonstrate we need to support a diverse range of users who are increasingly crossing traditional research discipline boundaries. To support their varying experience, access needs and research workflows, NCI has implemented an integrated data platform providing a range of services that enable users to interact with our data holdings. These services include:

      - A GeoNetwork catalog built on standardized Data Management Plans to search collection metadata and find relevant datasets;
      - Web data services to download or remotely access data via OPeNDAP, WMS, WCS and other protocols;
      - Virtual Desktop Infrastructure (VDI) built on a highly integrated on-site cloud with access to both the HPC peak machine and research data collections; the VDI is a fully featured environment allowing visualization, code development and analysis to take place in an interactive desktop environment; and
      - A Learning Management System (LMS) containing User Guides, Use Case examples and Jupyter Notebooks structured into courses, so that users can self-teach how to use these facilities with examples from our system across a range of disciplines.

      We will briefly present these components, and discuss how we engage with data custodians and consumers to develop standardized data structures and services that support the range of needs. We will also highlight some key developments that have improved user experience in utilizing the services, particularly enabling transdisciplinary science. This work combines with other developments at NCI to increase the confidence of scientists from any field to undertake research and analysis on these important data collections, regardless of their preferred work environment or level of skill.

    • Evolution of the Data Access Protocol in Response to Community Needs

      NASA Astrophysics Data System (ADS)

      Gallagher, J.; Caron, J. L.; Davis, E.; Fulker, D.; Heimbigner, D.; Holloway, D.; Howe, B.; Moe, S.; Potter, N.

      2012-12-01

      Under the aegis of the OPULS (OPeNDAP-Unidata Linked Servers) project, funded by NOAA, version 2 of OPeNDAP's Data Access Protocol (DAP2) is being updated to version 4. DAP4 is the first major upgrade in almost two decades and will embody three main areas of advancement. First, the data-model extensions developed by the OPULS team focus on three areas: better support for coverages, access to HDF5 files, and access to relational databases. DAP2 support for coverages (defined as sampled functions) was limited to simple rectangular coverages that work well for (some) model outputs and processed satellite data but that cannot represent trajectories or satellite swath data, for example. We have extended the coverage concept in DAP4 to remove these limitations. These changes are informed by work at Unidata on the Common Data Model and also by the OGC's abstract coverages specification. In a similar vein, we have extended DAP2's support for relations by including the concept of foreign keys, so that tables can be explicitly related to one another. Second, the web interfaces (web services) that provide access to data via DAP will be more clearly defined and will use other, orthogonal standards where they are appropriate. An important case is the XML interface, which provides a cleaner way to build other response media types such as JSON and RDF (for metadata) and to build support for Atom, thus simplifying the integration of DAP servers with tools that support OpenSearch. Input from the ESIP Federation and work performed with IOOS have informed our choices here. Last, DAP4-compliant servers will support richer data-processing capabilities than DAP2, enabling a wider array of server functions that manipulate data before returning values. Two projects are currently exploring just what can be done even with DAP2's server-function model: the MIIC project at LaRC and OPULS itself (with work performed at the University of Washington). Both projects have demonstrated that server functions can be used to perform operations on large volumes of data and return results that are far smaller than would be required to achieve the same outcomes via client-side processing. We are using information from these efforts to inform the design of server functions in DAP4. Each of the three areas of DAP4 advancement is being guided by input from a number of community members, including an OPULS Advisory Committee.
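      A concrete reminder of what the protocol already offers in DAP2: subsetting is expressed as a constraint appended to the dataset URL, so the server trims the data before any array values are transferred. The host and variable below are hypothetical; the bracketed `[start:stride:stop]` syntax is standard DAP2.

```python
# DAP2 constraint expression: ask the server for one time step and a 10x10 tile.
# Host and variable name are hypothetical; the syntax is standard DAP2.
import requests

url = ("http://server.example.org/opendap/sst.nc.ascii"
       "?sst[0:1:0][100:1:109][200:1:209]")
print(requests.get(url).text[:400])   # first few lines of the ASCII response
```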

    • Ocean data management in OMP Data Service

      NASA Astrophysics Data System (ADS)

      Fleury, Laurence; André, François; Belmahfoud, Nizar; Boichard, Jean-Luc; Brissebrat, Guillaume; Ferré, Hélène; Mière, Arnaud

      2014-05-01

      The Observatoire Midi-Pyrénées Data Service (SEDOO) is a development team dedicated to environmental data management and to setting up dissemination applications in the framework of intensive field campaigns and long-term observation networks. SEDOO has developed some applications dealing with ocean data only, but also generic databases that can store and distribute multidisciplinary datasets. SEDOO is in charge of in situ data management and the data portals for international and multidisciplinary programmes as large as African Monsoon Multidisciplinary Analyses (AMMA) and Mediterranean Integrated STudies at Regional And Local Scales (MISTRALS).

      The AMMA and MISTRALS databases are distributed, and the data portals provide access to datasets managed by other data centres (IPSL, CORIOLIS...) through interoperability protocols (OPeNDAP, XML requests...). AMMA and MISTRALS metadata (data descriptions) are standardized and comply with international standards (ISO 19115/19139; the INSPIRE European Directive; the Global Change Master Directory Thesaurus). Most of the AMMA and MISTRALS in situ ocean data sets are homogenized and inserted into a relational database, in order to enable accurate data selection and the download of different data sets in a shared format. Data selection criteria include location, period, physical property name, physical property range... The data extraction procedure includes output format selection among CSV, NetCDF, NASA Ames...

      The AMMA database - http://database.amma-international.org/ - contains field campaign observations from the Gulf of Guinea (EGEE 2005-2007) and the tropical Atlantic Ocean (AEROSE-II 2006...), as well as long-term monitoring data (PIRATA, ARGO...). Operational analyses (MERCATOR) and satellite products (TMI, SSMI...) are managed by the IPSL data centre and can be accessed as well. They have been projected onto regular latitude-longitude grids and converted into the NetCDF format.

      The MISTRALS data portal - http://mistrals.sedoo.fr/ - provides access to ocean datasets produced by the contributing programmes: the Hydrological cycle in the Mediterranean eXperiment (HyMeX), the Chemistry-Aerosol Mediterranean eXperiment (ChArMEx), the Marine Mediterranean eXperiment (MERMeX)... The programmes include many field campaigns from 2011 to 2015, collecting general and specific properties. Long-term monitoring networks, such as the Mediterranean Ocean Observing System on Environment (MOOSE) and the Mediterranean Eurocentre for Underwater Sciences and Technologies (MEUST-SE), contribute to the MISTRALS data portal as well. Relevant model outputs and satellite products managed by external data centres (IPSL, ENEA...) can also be accessed.

      SEDOO manages the data of the SSS (Sea Surface Salinity) national observation service: http://sss.sedoo.fr/. SSS aims at collecting, validating, archiving and distributing in situ SSS measurements derived from Voluntary Observing Ship programs. The SSS data user interface enables users to build multi-criteria data requests and to download the relevant datasets. SEDOO also contributes to the SOLWARA project, which aims at understanding the oceanic circulation in the Coral Sea and the Solomon Sea and its role in both the climate system and ocean chemistry. The research programme includes in situ measurements, numerical modelling and compiled analyses of past data. The website http://thredds.sedoo.fr/solwara/ enables users to access, visualize and download SOLWARA gridded data and model simulations, using THREDDS associated services (OPeNDAP, NCSS and WMS). In order to improve the applications' user-friendliness, the SSS and SOLWARA web interfaces are JEE applications built with the GWT framework, and they share many modules.

    • Best Practices for International Collaboration and Applications of Interoperability within a NASA Data Center

      NASA Astrophysics Data System (ADS)

      Moroni, D. F.; Armstrong, E. M.; Tauer, E.; Hausman, J.; Huang, T.; Thompson, C. K.; Chung, N.

      2013-12-01

      The Physical Oceanography Distributed Active Archive Center (PO.DAAC) is one of 12 data centers sponsored by NASA's Earth Science Data and Information System (ESDIS) project. The PO.DAAC is tasked with the archival and distribution of NASA Earth science missions specific to physical oceanography, many of which have interdisciplinary applications for weather forecasting/monitoring, ocean biology, ocean modeling, and climate studies. PO.DAAC has a 20-year history of cross-project and international collaborations with partners in Europe, Japan, Australia, and the UK. Domestically, the PO.DAAC has successfully established lasting partnerships with non-NASA institutions and projects including the National Oceanic and Atmospheric Administration (NOAA), the United States Navy, Remote Sensing Systems, and Unidata. A key component of these partnerships is PO.DAAC's direct involvement with international working groups and science teams, such as the Group for High Resolution Sea Surface Temperature (GHRSST), the International Ocean Vector Winds Science Team (IOVWST), the Ocean Surface Topography Science Team (OSTST), and the Committee on Earth Observing Satellites (CEOS). To help bolster new and existing collaborations, the PO.DAAC has established a standardized approach to its internal Data Management and Archiving System (DMAS), utilizing a Data Dictionary to provide the baseline standard for the entry and capture of dataset and granule metadata. Furthermore, the PO.DAAC has established an end-to-end Dataset Lifecycle Policy, built upon both internal and external recommendations of best practices toward data stewardship. Together, DMAS, the Data Dictionary, and the Dataset Lifecycle Policy provide the infrastructure to enable standardized data and metadata to be fully ingested and harvested to facilitate interoperability and compatibility across data access protocols, tools, and services. The Dataset Lifecycle Policy provides the checks and balances to help ensure that all incoming HDF- and netCDF-based datasets meet minimum compliance requirements with the Lawrence Livermore National Laboratory's actively maintained Climate and Forecast (CF) conventions, with additional goals of meeting metadata standards provided by the Attribute Convention for Dataset Discovery (ACDD), the International Organization for Standardization (ISO) 19100-series, and the Federal Geographic Data Committee (FGDC). By default, DMAS ensures all datasets are compliant with NASA's Global Change Master Directory (GCMD) and NASA's Reverb data discovery clearinghouse (also known as ECHO). For data access, PO.DAAC offers several widely-used technologies, including the File Transfer Protocol (FTP), the Open-source Project for a Network Data Access Protocol (OPeNDAP), and Thematic Realtime Environmental Distributed Data Services (THREDDS). These access technologies are available directly to users or through PO.DAAC's web interfaces, specifically the High-level Tool for Interactive Data Extraction (HiTIDE), the Live Access Server (LAS), and PO.DAAC's set of search, image, and Consolidated Web Services (CWS). Lastly, PO.DAAC's newly introduced, standards-based CWS provide singular endpoints for search, imaging, and extraction capabilities, respectively, across L2/L3/L4 datasets. Altogether, these tools, services and policies serve to provide flexible, interoperable functionality for both users and data providers.
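      A minimal sketch of the kind of compliance check described above, written with the netCDF4 Python library: it merely sniffs the global attributes that CF and ACDD expect. This is an illustration, not the PO.DAAC pipeline; the file path is a placeholder, and real compliance checkers go much deeper.

```python
# Sniff a granule's global attributes for CF and ACDD conventions (sketch only).
from netCDF4 import Dataset

ds = Dataset("granule.nc")  # placeholder path
print("CF tagged:", "CF" in getattr(ds, "Conventions", ""))
for attr in ("title", "summary", "time_coverage_start"):  # a few ACDD attributes
    print(attr, "present:", hasattr(ds, attr))
ds.close()
```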

    • Newly Released TRMM Version 7 Products, Other Precipitation Datasets and Data Services at NASA GES DISC

      NASA Technical Reports Server (NTRS)

      Liu, Zhong; Ostrenga, D.; Teng, W. L.; Trivedi, Bhagirath; Kempler, S.

      2012-01-01

      The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is home to global precipitation product archives, in particular, the Tropical Rainfall Measuring Mission (TRMM) products. TRMM is a joint U.S.-Japan satellite mission to monitor tropical and subtropical (40 S - 40 N) precipitation and to estimate its associated latent heating. The TRMM satellite provides the first detailed and comprehensive dataset on the four-dimensional distribution of rainfall and latent heating over vastly undersampled tropical and subtropical oceans and continents. The TRMM satellite was launched on November 27, 1997. TRMM data products are archived at and distributed by the GES DISC. The newly released TRMM Version 7 introduces several changes, including new parameters, new products, metadata, data structures, etc. For example, hydrometeor profiles in 2A12 now have 28 layers (14 in V6). New parameters have been added to several popular Level-3 products, such as 3B42 and 3B43. Version 2.2 of the Global Precipitation Climatology Project (GPCP) dataset has been added to the TRMM Online Visualization and Analysis System (TOVAS; URL: http://disc2.nascom.nasa.gov/Giovanni/tovas/), allowing online analysis and visualization without downloading data and software. The GPCP dataset extends back to 1979. Version 3 of the Global Precipitation Climatology Centre (GPCC) monitoring product has been updated in TOVAS as well. The product provides global gauge-based monthly rainfall along with the number of gauges per grid. The dataset begins in January 1986. To facilitate data and information access and support precipitation research and applications, we have developed a Precipitation Data and Information Services Center (PDISC; URL: http://disc.gsfc.nasa.gov/precipitation). In addition to TRMM, PDISC provides current and past observational precipitation data. Users can access precipitation data archives consisting of both remote sensing and in-situ observations. Users can use these data products to conduct a wide variety of activities, including case studies, model evaluation, uncertainty investigation, etc. To support Earth science applications, PDISC provides users near-real-time precipitation products over the Internet. At PDISC, users can access tools and software. Documentation, FAQ and assistance are also available. Other capabilities include: 1) Mirador (http://mirador.gsfc.nasa.gov/), a simplified interface for searching, browsing, and ordering Earth science data at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC); Mirador is designed to be fast and easy to learn; 2) TOVAS; 3) NetCDF data download for the GIS community; 4) data via OPeNDAP (http://disc.sci.gsfc.nasa.gov/services/opendap/); OPeNDAP provides remote access to individual variables within datasets in a form usable by many tools, such as IDV, McIDAS-V, Panoply, Ferret and GrADS; and 5) the Open Geospatial Consortium (OGC) Web Map Service (WMS) (http://disc.sci.gsfc.nasa.gov/services/wxs_ogc.shtml); WMS is a standard interface that enables clients to build customized maps from data served across different networks.

    • Newly Released TRMM Version 7 Products, GPCP Version 2.2 Precipitation Dataset and Data Services at NASA GES DISC

      NASA Astrophysics Data System (ADS)

      Ostrenga, D.; Liu, Z.; Teng, W. L.; Trivedi, B.; Kempler, S.

      2011-12-01

      The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is home to global precipitation product archives, in particular, the Tropical Rainfall Measuring Mission (TRMM) products. TRMM is a joint U.S.-Japan satellite mission to monitor tropical and subtropical (40deg S - 40deg N) precipitation and to estimate its associated latent heating. The TRMM satellite provides the first detailed and comprehensive dataset on the four-dimensional distribution of rainfall and latent heating over vastly undersampled tropical and subtropical oceans and continents. The TRMM satellite was launched on November 27, 1997. TRMM data products are archived at and distributed by the GES DISC. The newly released TRMM Version 7 introduces several changes, including new parameters, new products, metadata, data structures, etc. For example, hydrometeor profiles in 2A12 now have 28 layers (14 in V6). New parameters have been added to several popular Level-3 products, such as 3B42 and 3B43. Version 2.2 of the Global Precipitation Climatology Project (GPCP) dataset has been added to the TRMM Online Visualization and Analysis System (TOVAS; URL: http://disc2.nascom.nasa.gov/Giovanni/tovas/), allowing online analysis and visualization without downloading data and software. The GPCP dataset extends back to 1979. Results of a basic intercomparison between the new and previous versions of both TRMM and GPCP will be presented to help understand changes in data product characteristics. To facilitate data and information access and support precipitation research and applications, we have developed a Precipitation Data and Information Services Center (PDISC; URL: http://disc.gsfc.nasa.gov/precipitation). In addition to TRMM, PDISC provides current and past observational precipitation data. Users can access precipitation data archives consisting of both remote sensing and in-situ observations. Users can use these data products to conduct a wide variety of activities, including case studies, model evaluation, uncertainty investigation, etc. To support Earth science applications, PDISC provides users near-real-time precipitation products over the Internet. At PDISC, users can access tools and software. Documentation, FAQ and assistance are also available. Other capabilities include: 1) Mirador (http://mirador.gsfc.nasa.gov/), a simplified interface for searching, browsing, and ordering Earth science data at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC); Mirador is designed to be fast and easy to learn; 2) TOVAS; 3) NetCDF data download for the GIS community; 4) data via OPeNDAP (http://disc.sci.gsfc.nasa.gov/services/opendap/); OPeNDAP provides remote access to individual variables within datasets in a form usable by many tools, such as IDV, McIDAS-V, Panoply, Ferret and GrADS; and 5) the Open Geospatial Consortium (OGC) Web Map Service (WMS) (http://disc.sci.gsfc.nasa.gov/services/wxs_ogc.shtml); WMS is a standard interface that enables clients to build customized maps from data served across different networks. More details, along with examples, will be presented.

    • Oceanotron, Scalable Server for Marine Observations

      NASA Astrophysics Data System (ADS)

      Loubrieu, T.; Bregent, S.; Blower, J. D.; Griffiths, G.

      2013-12-01

      Ifremer, the French marine institute, is deeply involved in data management for different ocean in-situ observation programs (ARGO, OceanSites, GOSUD, ...) and other European programs aiming at networking ocean in-situ observation data repositories (myOcean, seaDataNet, Emodnet). To capitalize on the effort of implementing advanced data dissemination services (visualization, download with subsetting) for these programs and, more generally, for water-column observation repositories, Ifremer decided in 2010 to develop the Oceanotron server. Given the diversity of data repository formats (RDBMS, netCDF, ODV, ...) and the temperamental nature of the standard interoperability interface profiles (OGC/WMS, OGC/WFS, OGC/SOS, OPeNDAP, ...), the server is designed to manage plugins:

      - StorageUnits, which read specific data repository formats (netCDF/OceanSites, RDBMS schemas, the ODV binary format);
      - FrontDesks, which receive external requests and return results for interoperable protocols (OGC/WMS, OGC/SOS, OPeNDAP); and
      - TransformationUnits, which may be inserted in between and enable ocean-business-related transformations of the features (for example, conversion of vertical coordinates from pressure in dB to meters below the sea surface).

      The server is released under an open-source license so that partners can develop their own plugins. Within the myOcean project, the University of Reading has plugged in a WMS implementation as an Oceanotron FrontDesk. The modules are connected together by sharing the same information model for marine observations (or sampling features: vertical profiles, point series and trajectories), dataset metadata and queries. The shared information model is based on the OGC Observations & Measurements and Unidata Common Data Model initiatives. The model is implemented in Java (http://www.ifremer.fr/isi/oceanotron/javadoc/). This inner level of interoperability makes it possible to capitalize on ocean-business expertise in software development without being tied to specific data formats or protocols. Oceanotron is deployed at seven European data centres for marine in-situ observations within myOcean. While additional extensions are still being developed, to promote new collaborative initiatives, work is now underway on continuous and distributed integration (Jenkins, Maven), shared reference documentation (on Alfresco), and code and release dissemination (SourceForge, GitHub).

    • Web-based Reanalysis Intercomparison Tools (WRIT): Comparing Reanalyses and Observational Data

      NASA Astrophysics Data System (ADS)

      Compo, G. P.; Smith, C. A.; Hooper, D. K.

      2014-12-01

      While atmospheric reanalysis datasets are widely used in climate science, many technical issues hinder comparing them to each other and to observations. The reanalysis fields are stored in diverse file architectures, data formats, and resolutions, with metadata, such as variable name and units, that also differ. Individual users have to download the fields, convert them to a common format, store them locally, change variable names, re-grid if needed, and convert units. Comparing reanalyses with observational datasets is difficult for similar reasons. Even if a dataset can be read via the Open-source Project for a Network Data Access Protocol (OPeNDAP) or a similar protocol, most of this work is still needed. All of these tasks take time, effort, and money. To overcome some of the obstacles in reanalysis intercomparison, our group at the Cooperative Institute for Research in Environmental Sciences (CIRES) at the University of Colorado and affiliated colleagues at the National Oceanic and Atmospheric Administration's (NOAA's) Earth System Research Laboratory Physical Sciences Division (ESRL/PSD) have created a set of Web-based Reanalysis Intercomparison Tools (WRIT) at http://www.esrl.noaa.gov/psd/data/writ/. WRIT allows users to easily plot and compare reanalysis and observational datasets, and to test hypotheses. Currently, there are tools to plot monthly mean maps and vertical cross-sections, time series, and trajectories for standard pressure-level and surface variables. Users can refine dates, statistics, and plotting options. Reanalysis datasets currently available include the NCEP/NCAR R1, NCEP/DOE R2, MERRA, ERA-Interim, NCEP CFSR and the 20CR. Observational datasets include those containing precipitation (e.g. GPCP), temperature (e.g. GHCNCAMS), winds (e.g. WASWinds), precipitable water (e.g. NASA NVAP), SLP (HadSLP2), and SST (NOAA ERSST). WRIT also facilitates the mission of the Reanalyses.org website as a convenient toolkit for studying the reanalysis datasets.

    • HYDRA: Hyperspectral Data Research Application

      NASA Astrophysics Data System (ADS)

      Rink, T.; Whittaker, T.

      2005-12-01

      HYDRA is a freely available, easy-to-install tool for the visualization and analysis of large local or remote hyper- and multi-spectral datasets. HYDRA is implemented on top of the open-source VisAD Java library via Jython, the Java implementation of the user-friendly Python programming language. VisAD provides data integration through its generalized data model, as well as user-display interaction and display rendering. Jython has an easy-to-read, concise, scripting-like syntax that eases software development. HYDRA allows the sharing of large datasets through its support of the OPeNDAP and OpenADDE server-client protocols. Users can explore and interrogate data, and subset in physical and/or spectral space, to isolate key areas of interest for further analysis without having to download an entire dataset. It also has an extensible data input architecture for recognizing new instruments and understanding different local file formats; currently, NetCDF and HDF4 are supported.

    • Improving the Accessibility and Use of NASA Earth Science Data

      NASA Technical Reports Server (NTRS)

      Tisdale, Matthew; Tisdale, Brian

      2015-01-01

      Many of the NASA Langley Atmospheric Science Data Center (ASDC) Distributed Active Archive Center (DAAC) multidimensional tropospheric and atmospheric chemistry data products are stored in HDF4, HDF5 or NetCDF format, which traditionally have been difficult to analyze and visualize with geospatial tools. With the rising demand from diverse end-user communities for geospatial tools that handle multidimensional products, several applications, such as ArcGIS, have refined their software. Many geospatial applications now have new functionalities that enable the end user to:

      - Store, serve, and perform analysis on each individual variable, its time dimension, and its vertical dimension;
      - Use NetCDF, GRIB, and HDF raster data formats across applications directly; and
      - Publish output within REST image services or WMS for time- and space-enabled web application development.

      During this webinar, participants will learn how to leverage geospatial applications such as ArcGIS, OPeNDAP and ncWMS in the production of Earth science information, and in increasing data accessibility and usability.

    • Earth Science Data and Applications for K-16 Education from the NASA Langley Atmospheric Science Data Center

      NASA Astrophysics Data System (ADS)

      Phelps, C. S.; Chambers, L. H.; Alston, E. J.; Moore, S. W.; Oots, P. C.

      2005-05-01

      NASA's Science Mission Directorate aims to stimulate public interest in Earth system science and to encourage young scholars to consider careers in science, technology, engineering and mathematics. NASA's Atmospheric Science Data Center (ASDC) at Langley Research Center houses over 700 data sets related to Earth's radiation budget, clouds, aerosols and tropospheric chemistry that are being produced to increase academic understanding of the natural and anthropogenic perturbations that influence global climate change. However, barriers still exist in the use of these actual satellite observations by educators in the classroom to supplement the educational process. Thus, NASA is sponsoring the "Mentoring and inquirY using NASA Data on Atmospheric and earth science for Teachers and Amateurs" (MY NASA DATA) project to systematically support educational activities by reducing the ASDC data holdings to 'microsets' that can be easily accessed and explored by K-16 educators and students. The microsets are available via a Web site (http://mynasadata.larc.nasa.gov) with associated lesson plans, computer tools, data information pages, and a science glossary. A MY NASA DATA Live Access Server (LAS) has been populated with ASDC data such that users can create custom microsets online for desired time series, parameters and geographical regions. The LAS interface is suitable for novice to advanced users, teachers or students. The microsets may be visual representations of data or text output for spreadsheet analysis. Currently, over 148 parameters from the Clouds and the Earth's Radiant Energy System (CERES), Multi-angle Imaging SpectroRadiometer (MISR), Surface Radiation Budget (SRB), Tropospheric Ozone Residual (TOR) and the International Satellite Cloud Climatology Project (ISCCP) are available and provide important information on clouds, fluxes and cycles in the Earth system. Additionally, a MY NASA DATA OPeNDAP server has been established to facilitate file transfer of ASDC data for other client applications such as MATLAB, GrADS, and IDV. OPeNDAP has become a very popular alternative for data access, especially at the university research level, with over 375 OPeNDAP-accessible data sets registered nationally. Teacher workshops will be held each summer for five years to help teachers learn about incorporating NASA microsets in their curriculum. The next MY NASA DATA teacher workshop will be held at Langley Research Center July 25-29, 2005. Workshop participants will create microsets and lesson plans that they believe will help their students understand Earth system concepts. These lesson plans will be reviewed and shared online as user-contributed content.

    • NetCDF-CF-OPeNDAP: Standards for ocean data interoperability and object lessons for community data standards processes

      USGS Publications Warehouse

      Hankin, Steven C.; Blower, Jon D.; Carval, Thierry; Casey, Kenneth S.; Donlon, Craig; Lauret, Olivier; Loubrieu, Thomas; Srinivasan, Ashwanth; Trinanes, Joaquin; Godøy, Øystein; Mendelssohn, Roy; Signell, Richard P.; de La Beaujardiere, Jeff; Cornillon, Peter; Blanc, Frederique; Rew, Russ; Harlan, Jack; Hall, Julie; Harrison, D.E.; Stammer, Detlef

      2010-01-01

      It is generally recognized that meeting society's emerging environmental science and management needs will require the marine data community to provide simpler, more effective and more interoperable access to its data. There is broad agreement, as well, that data standards are the bedrock upon which interoperability will be built. The path that would bring the marine data community to agree upon and utilize such standards, however, is often elusive. In this paper we examine a trio of standards: 1) netCDF files; 2) the Climate and Forecast (CF) metadata convention; and 3) the OPeNDAP data access protocol. Taken together, these standards have brought our community a high level of interoperability for "gridded" data such as model outputs, satellite products and climatological analyses, and they are gaining rapid acceptance for ocean observations. We provide an overview of the scope of the contribution that has been made. We then step back from the information technology considerations to examine the community or "social" process by which the successes were achieved. We contrast this path with the one by which the World Meteorological Organization (WMO) has advanced the Global Telecommunication System (GTS): netCDF/CF/OPeNDAP exemplifies a "bottom up" standards process, whereas GTS is "top down". Both are tales of success at achieving specific purposes, yet each is hampered by technical limitations. These limitations sometimes lead to controversy over whether alternative technological directions should be pursued. Finally, we draw general conclusions regarding the factors that affect the success of a standards development effort - the likelihood that an IT standard will meet its design goals and will achieve community-wide acceptance. We believe that a higher level of thoughtful awareness by scientists, program managers and technology experts of the vital role of standards and the merits of alternative standards processes can help us as a community to reach our interoperability goals faster.
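      To ground the trio of standards in something tangible, here is a toy Python example that writes a small netCDF file carrying CF metadata; served through OPeNDAP, such a file is immediately readable by the gridded-data tools discussed above. The values and file name are invented, and a real CF file would also carry coordinate variables.

```python
# Toy CF-flavored netCDF file: the standards trio in miniature (values invented).
import numpy as np
from netCDF4 import Dataset

ds = Dataset("sst_demo.nc", "w")
ds.Conventions = "CF-1.6"                      # announce the metadata convention
ds.createDimension("time", 1)
ds.createDimension("lat", 2)
ds.createDimension("lon", 2)
sst = ds.createVariable("sst", "f4", ("time", "lat", "lon"))
sst.standard_name = "sea_surface_temperature"  # CF standard name
sst.units = "K"                                # CF requires a units attribute
sst[:] = np.array([[[290.1, 290.4], [289.8, 290.0]]], dtype="f4")
ds.close()
```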

    • Use Cases for Combining Web Services with ArcPython Tools for Enabling Quality Control of Land Remote Sensing Data Products.

      NASA Astrophysics Data System (ADS)

      Krehbiel, C.; Maiersperger, T.; Friesz, A.; Harriman, L.; Quenzer, R.; Impecoven, K.

      2016-12-01

      Three major obstacles facing big Earth data users are data storage, management, and analysis. As the amount of satellite remote sensing data increases, so does the need for better data storage and management strategies to exploit the plethora of data now available. Standard GIS tools can help big Earth data users who interact with and analyze increasingly large and diverse datasets. In this presentation we highlight how NASA's Land Processes Distributed Active Archive Center (LP DAAC) is tackling these big Earth data challenges. We provide a real-life use case to describe three tools and services provided by the LP DAAC for exploiting big Earth data more efficiently in a GIS environment. First, we describe the Open-source Project for a Network Data Access Protocol (OPeNDAP), which provides access to specific subsets of data, minimizing the amount of data that a user downloads and improving the efficiency of data downloading and processing. Next, we cover the LP DAAC's Application for Extracting and Exploring Analysis Ready Samples (AppEEARS), a web application interface for extracting and analyzing land remote sensing data. From there, we review an ArcPython toolbox that was developed to provide quality-control services for land remote sensing data products. Locating and extracting specific subsets of larger big Earth datasets improves data storage and management efficiency for the end user, and quality-control services provide a straightforward interpretation of big Earth data. These tools and services benefit the GIS user community by standardizing workflows and improving data storage, management, and analysis tactics.

    • Storing, Browsing, Querying, and Sharing Data: the THREDDS Data Repository (TDR)

      NASA Astrophysics Data System (ADS)

      Wilson, A.; Lindholm, D.; Baltzer, T.

      2005-12-01

      The Unidata Internet Data Distribution (IDD) network delivers gigabytes of data per day in near real time to sites across the U.S. and beyond. The THREDDS Data Server (TDS) supports public browsing of metadata and data access via OPeNDAP-enabled URLs for datasets such as these. With such large quantities of data, sites generally employ a simple data management policy, keeping the data for a relatively short term, on the order of hours to perhaps a week or two. In order to save interesting data in longer-term storage and make it available for sharing, a user must move the data herself. In this case the user is responsible for determining where space is available, executing the data movement, generating any desired metadata, and setting access control to enable sharing. This task sequence is generally based on the execution of a sequence of low-level, operating-system-specific commands with significant user involvement.

      The LEAD (Linked Environments for Atmospheric Discovery) project is building a cyberinfrastructure to support research and education in mesoscale meteorology. LEAD orchestrations require large, robust, and reliable storage with speedy access to stage data and store both intermediate and final results. These requirements suggest storage solutions that involve distributed storage, replication, and interfacing to archival storage systems such as mass storage systems and tape or removable disks. LEAD requirements also include metadata generation and access in order to support querying.

      In support of both THREDDS and LEAD requirements, Unidata is designing and prototyping the THREDDS Data Repository (TDR), a framework for a modular data repository to support distributed data storage and retrieval using a variety of back-end storage media and interchangeable software components. The TDR interface will provide high-level abstractions for long-term storage; controlled, fast and reliable access; and data movement capabilities via a variety of technologies such as OPeNDAP and GridFTP. The modular structure will allow the substitution of software components so that both simple and complex storage media can be integrated into the repository. It will also allow the integration of different varieties of supporting software. For example, if replication is desired, replica management could be handled via a simple hash table or a complex solution such as the Replica Location Service (RLS). In order to ensure that metadata is available for all the data in the repository, the TDR will also generate THREDDS metadata when necessary. Users will be able to establish levels of access control to their metadata and data. Coupled with a THREDDS Data Server, both browsing via THREDDS catalogs and querying capabilities will be supported. This presentation will describe the motivating factors, current status, and future plans of the TDR.

      References:
      IDD: http://www.unidata.ucar.edu/content/software/idd/index.html
      THREDDS: http://www.unidata.ucar.edu/content/projects/THREDDS/tech/server/ServerStatus.html
      LEAD: http://lead.ou.edu/
      RLS: http://www.isi.edu/~annc/papers/chervenakRLSjournal05.pdf
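      As a small illustration of the TDS side of this picture, the fragment below walks a THREDDS catalog with Unidata's siphon package. This assumes siphon is installed and that Unidata's public demonstration catalog is reachable; catalog layouts change over time.

```python
# Walk a THREDDS catalog: list nested catalogs and datasets at the top level.
# Assumes the siphon package and a reachable public TDS catalog.
from siphon.catalog import TDSCatalog

cat = TDSCatalog("https://thredds.ucar.edu/thredds/catalog/catalog.xml")
print(list(cat.catalog_refs)[:5])   # nested catalog references
print(list(cat.datasets)[:5])       # datasets at this level, if any
```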

  1. Collaborative Science Using Web Services and the SciFlo Grid Dataflow Engine

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.; Yunck, T.

    2006-12-01

    The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via the Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (a tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open-source Project for a Network Data Access Protocol (OPeNDAP) servers. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. Once an analysis has been specified for a chunk or day of data, it can be easily repeated with different control parameters or over months of data. Recently, the Earth Science Information Partners (ESIP) Federation sponsored a collaborative activity in which several ESIP members advertised their respective WMS/WCS and SOAP services, developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. For several scenarios, the same collaborative workflow was executed in three ways: using hand-coded scripts, by executing a SciFlo document, and by executing a BPEL workflow document. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, and further collaborations that are being pursued.

  2. Data Integration Plans for the NOAA National Climate Model Portal (NCMP) (Invited)

    NASA Astrophysics Data System (ADS)

    Rutledge, G. K.; Williams, D. N.; Deluca, C.; Hankin, S. C.; Compo, G. P.

    2010-12-01

    NOAA's National Climatic Data Center (NCDC) and its collaborators have initiated a five-year development and implementation of an operational access capability for the next generation of weather and climate model datasets. The NOAA National Climate Model Portal (NCMP) is being designed using format-neutral, open, web-based standards and tools, so that users at all levels of expertise can gain access to and understanding of many of NOAA's climate and weather model products. NCMP will closely coordinate with and reside under the emerging NOAA Climate Services Portal (NCSP). To carry out its mission, NOAA must be able to successfully integrate model output and other data and information from all of its discipline-specific areas to understand and address the complexity of many environmental problems. The NCMP will be an initial access point for the emerging NCSP, which is the basis for unified access to NOAA climate products and services. NCMP is currently collaborating with the emerging Environmental Projection Center (EPC) expected to be developed at the Earth System Research Laboratory in Boulder, CO. Specifically, NCMP is being designed to:

    - Enable policy makers and resource managers to make informed national and global policy decisions using integrated climate and weather model outputs, observations, information, products, and other services for the scientist and the non-scientist;
    - Identify model-to-observational interoperability requirements for climate and weather system analysis and diagnostics; and
    - Promote the coordination of an international reanalysis observational clearinghouse (i.e., Reanalyses.org) spanning the world's numerical processing centers for an "Ongoing Analysis of the Climate System".

    NCMP will initially provide access capabilities to three of NOAA's high-volume reanalysis data sets of the weather and climate systems: 1) NCEP's Climate Forecast System Reanalysis (CFS-R); 2) NOAA's Climate Diagnostics Center / Earth System Research Laboratory (ESRL) Twentieth Century Reanalysis Project data set (20CR, G. Compo, et al.), a historical reanalysis that will provide climate information from 1850 to the present; and 3) the CPC's Upper Air Reanalysis. NCMP will advance the highly successful NOAA National Operational Model Archive and Distribution System (NOMADS; Rutledge, BAMS 2006) and standards already in use, including Unidata's THREDDS Data Server (TDS), PMEL's Live Access Server (LAS) and the GrADS Data Server (GDS) from COLA; the Department of Energy (DOE) Earth System Grid (ESG) and the associated IPCC climate model archive located at the Program for Climate Model Diagnosis and Intercomparison (PCMDI) through the ESG; NOAA's Unified Access Framework (UAF) effort; and core standards developed by the Open Geospatial Consortium (OGC). The format-neutral OPeNDAP protocol, as used in the NOMADS system, will also be a key aspect of the design of NCMP.

  3. The GeoDataPortal: A Standards-based Environmental Modeling Data Access and Manipulation Toolkit

    NASA Astrophysics Data System (ADS)

    Blodgett, D. L.; Kunicki, T.; Booth, N.; Suftin, I.; Zoerb, R.; Walker, J.

    2010-12-01

    Environmental modelers from fields of study such as climatology, hydrology, geology, and ecology rely on many data sources and processing methods that are common across these disciplines. Interest in inter-disciplinary, loosely coupled modeling and data sharing is increasing among scientists from the USGS, other agencies, and academia. For example, hydrologic modelers need downscaled climate change scenarios and land cover data summarized for the watersheds they are modeling. Subsequently, ecological modelers are interested in soil moisture information for a particular habitat type as predicted by the hydrologic modeler. The USGS Center for Integrated Data Analytics Geo Data Portal (GDP) project seeks to facilitate this loosely coupled modeling and data sharing through broadly applicable open-source web processing services. These services simplify and streamline the time-consuming and resource-intensive tasks that are barriers to inter-disciplinary collaboration. The GDP framework includes a catalog describing projects, models, data, processes, and how they relate. Using newly introduced data, or sources already known to the catalog, the GDP facilitates access to subsets and common derivatives of data in numerous formats on disparate web servers. The GDP performs many of the critical functions needed to summarize data sources into modeling units, regardless of scale or volume. A user can specify their analysis zones or modeling units as an Open Geospatial Consortium (OGC) standard Web Feature Service (WFS). Utilities to cache Shapefiles and other common GIS input formats have been developed to aid in making the geometry available for processing via WFS. Dataset access in the GDP relies primarily on the Unidata NetCDF-Java library's common data model. Data transfer relies on methods provided by Unidata's Thematic Real-time Environmental Distributed Data Services (THREDDS) Data Server (TDS). TDS services of interest include the Open-source Project for a Network Data Access Protocol (OPeNDAP) standard for gridded time series, the OGC's Web Coverage Service for high-density static gridded data, and Unidata's CDM-remote for point time series. OGC WFS and the Sensor Observation Service (SOS) are being explored as mechanisms to serve and access static or time series data attributed to vector geometry. A set of standardized XML-based output formats allows easy transformation into a wide variety of "model-ready" formats. Interested users will have the option of submitting custom transformations to the GDP or transforming the XML output as a post-process. The GDP project aims to support simple, rapid development of thin user interfaces (like web portals) to commonly needed environmental modeling-related data access and manipulation tools. Standalone, service-oriented components of the GDP framework provide the metadata cataloging, data subset access, and spatial-statistics calculations needed to support interdisciplinary environmental modeling.
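    Since the GDP consumes analysis zones as an OGC WFS, a plain HTTP GetFeature request shows the shape of that interaction. The endpoint and type name below are hypothetical placeholders; the query parameters themselves are standard WFS 1.1.0.

```python
# Standard WFS 1.1.0 GetFeature request; endpoint and TYPENAME are hypothetical.
import requests

params = {
    "SERVICE": "WFS", "VERSION": "1.1.0", "REQUEST": "GetFeature",
    "TYPENAME": "sample:watersheds",        # hypothetical feature type
    "BBOX": "-91,42,-88,44,EPSG:4326",
}
r = requests.get("http://gdp.example.usgs.gov/wfs", params=params)  # hypothetical
with open("watersheds.gml", "wb") as f:
    f.write(r.content)
```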

  4. Using STOQS and stoqstoolbox for in situ Measurement Data Access in Matlab

    NASA Astrophysics Data System (ADS)

    López-Castejón, F.; Schlining, B.; McCann, M. P.

    2012-12-01

    This poster presents the stoqstoolbox, an extension to Matlab that simplifies the loading of in situ measurement data directly from STOQS databases. STOQS (Spatial Temporal Oceanographic Query System) is a geospatial database tool designed to provide efficient access to data following the CF-NetCDF Discrete Sampling Geometries convention. Data are loaded from CF-NetCDF files into a STOQS database, where indexes are created on depth, spatial coordinates, and other parameters, e.g., platform type. STOQS provides consistent, simple, and efficient methods to query for data. For example, we can request all measurements with a standard_name of sea_water_temperature between two times and between two depths. Data access is simpler because the data are retrieved by parameter, irrespective of platform or mission file names. Access is more efficient because data are retrieved via the index on depth, and only the requested data are retrieved from the database and transferred into the Matlab workspace. Applications in the stoqstoolbox query the STOQS database via an HTTP REST application programming interface; they follow the Data Access Object pattern, enabling highly customizable query construction. Data are loaded into Matlab structures that clearly indicate latitude, longitude, depth, measurement data value, and platform name. The stoqstoolbox is designed to be used in concert with other tools, such as nctoolbox, which can load data from any OPeNDAP data source. With these two toolboxes a user can easily work with in situ and other gridded data, such as from numerical models and remote sensing platforms. To show the capability of stoqstoolbox, we present an example of model validation using data collected during the May-June 2012 field experiment conducted by the Monterey Bay Aquarium Research Institute (MBARI) in Monterey Bay, California. The data are available from the STOQS server at http://odss.mbari.org/canon/stoqs_may2012/query/. Over 14 million data points of 18 parameters from 6 platforms, measured over a 3-week period, are available on this server. The model used for comparison is the Regional Ocean Modeling System developed by the Jet Propulsion Laboratory for Monterey Bay. The model output is loaded into Matlab using nctoolbox from the JPL server at http://ourocean.jpl.nasa.gov:8080/thredds/dodsC/MBNowcast. Model validation with in situ measurements can be difficult because of different file formats and because data may be spread across individual data systems for each platform. With stoqstoolbox the researcher must know only the URL of the STOQS server and the OPeNDAP URL of the model output. With selected depth and time constraints, a user's Matlab program searches for all in situ measurements available for the same time, depth, and variable as the model. STOQS and stoqstoolbox are open-source software projects supported by MBARI and the David and Lucile Packard Foundation. For more information please see http://code.google.com/p/stoqs.
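
    The stoqstoolbox speaks to STOQS over a plain HTTP REST interface, so the same kind of query can be issued from any language. The sketch below is a hypothetical Python analogue: the endpoint path and query-parameter names are assumptions based on the description above, not the documented STOQS API.

        # Hypothetical STOQS-style REST query from Python. The server base URL
        # comes from the abstract; path and parameter names are invented.
        import requests

        base = "http://odss.mbari.org/canon/stoqs_may2012"
        params = {
            "standard_name": "sea_water_temperature",   # assumed filter name
            "mintime": "2012-05-15T00:00:00",
            "maxtime": "2012-05-16T00:00:00",
            "mindepth": 2, "maxdepth": 30,
        }
        resp = requests.get(f"{base}/measurement.json", params=params, timeout=60)
        resp.raise_for_status()
        for rec in resp.json()[:5]:
            print(rec)   # each record carries lat, lon, depth, value, platform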

  5. A Lightweight Remote Parallel Visualization Platform for Interactive Massive Time-varying Climate Data Analysis

    NASA Astrophysics Data System (ADS)

    Li, J.; Zhang, T.; Huang, Q.; Liu, Q.

    2014-12-01

    Today's climate datasets are characterized by large volume, a high degree of spatiotemporal complexity, and rapid growth over time. Because visualizing large volumes of distributed climate data is computationally intensive, traditional desktop-based visualization applications fail to handle the computational load. Recently, scientists have developed remote visualization techniques to address this computational issue. Remote visualization techniques usually leverage server-side parallel computing capabilities to perform visualization tasks and deliver visualization results to clients through the network. In this research, we aim to build a remote parallel visualization platform for visualizing and analyzing massive climate data. Our visualization platform is built on ParaView, one of the most popular open-source remote visualization and analysis applications. To further enhance the scalability and stability of the platform, we have employed cloud computing techniques to support its deployment. In this platform, all climate datasets are regular grid data stored in NetCDF format. Three types of data access are supported: remote datasets provided by OPeNDAP servers, datasets hosted on the web visualization server, and local datasets. Regardless of the access method, all visualization tasks are completed on the server side to reduce the workload of clients. As a proof of concept, we have implemented a set of scientific visualization methods to show the feasibility of the platform. Preliminary results indicate that the framework can address the computational limitations of desktop-based visualization applications.
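
    The client-server split described above is easiest to see in ParaView's Python scripting interface. The following minimal sketch assumes a pvserver instance is already running and reachable; the host, port, and data path are placeholders:

        # Hedged pvpython sketch: connect to a remote render server so that
        # heavy work stays server-side. Host, port, and file path are invented.
        from paraview.simple import Connect, OpenDataFile, Show, Render, SaveScreenshot

        Connect("viz.example.org", 11111)       # attach to the remote pvserver
        reader = OpenDataFile("/data/climate/tas_monthly.nc")  # NetCDF on the server
        Show(reader)                            # geometry processing happens remotely
        Render()
        SaveScreenshot("tas_preview.png")       # only the rendered image reaches the client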

  6. Integrating Data Distribution and Data Assimilation Between the OOI CI and the NOAA DIF

    NASA Astrophysics Data System (ADS)

    Meisinger, M.; Arrott, M.; Clemesha, A.; Farcas, C.; Farcas, E.; Im, T.; Schofield, O.; Krueger, I.; Klacansky, I.; Orcutt, J.; Peach, C.; Chave, A.; Raymer, D.; Vernon, F.

    2008-12-01

    The Ocean Observatories Initiative (OOI) is an NSF-funded program to establish the ocean observing infrastructure of the 21st century, benefiting research and education. It is currently approaching final design and promises to deliver cyber and physical observatory infrastructure components as well as substantial core instrumentation to study environmental processes of the ocean at various scales, from coastal shelf-slope exchange processes to the deep ocean. The OOI's data distribution network lies at the heart of its cyberinfrastructure, which enables a multitude of science and education applications, ranging from data analysis to processing, visualization, and ontology-supported query and mediation. In addition, it fundamentally supports a class of applications exploiting the knowledge gained from analyzing observational data for objective-driven ocean observing, such as automatically triggered responses to episodic environmental events and interactive instrument tasking and control. The U.S. Department of Commerce through NOAA operates the Integrated Ocean Observing System (IOOS), providing continuous data in various formats, rates, and scales on open oceans and coastal waters to scientists, managers, businesses, governments, and the public to support research and inform decision-making. The NOAA IOOS program initiated development of the Data Integration Framework (DIF) to improve management and delivery of an initial subset of ocean observations, with the expectation of achieving improvements in a select set of NOAA's decision-support tools. Both OOI and NOAA through DIF collaborate on an effort to integrate the data distribution, access, and analysis needs of both programs. We present details and early findings from this collaboration; one part of it is the development of a demonstrator combining web-based user access to oceanographic data through ERDDAP, efficient science data distribution, and scalable, self-healing deployment in a cloud computing environment. ERDDAP is a web-based front-end application integrating oceanographic data sources of various formats, for instance netCDF data files aggregated through NcML or presented using a THREDDS server. The OOI-designed data distribution network provides global traffic management and computational load balancing for observatory resources; it makes use of the OPeNDAP Data Access Protocol (DAP) for efficient canonical science data distribution over the network. A cloud computing strategy is the basis for scalable, self-healing organization of an observatory's computing and storage resources, independent of the physical location and technical implementation of these resources.
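
    ERDDAP's griddap interface, mentioned above, encodes the subset request directly in the URL. A hedged illustration follows: the server and dataset ID are placeholders, while the bracketed constraint syntax is ERDDAP's own.

        # Hedged sketch of an ERDDAP griddap request; server URL and dataset
        # ID are invented, the [(start):(stop)] constraint syntax is ERDDAP's.
        import urllib.request

        url = ("https://erddap.example.org/erddap/griddap/sstAnalysis.csv"
               "?sst[(2008-10-01T00:00:00Z)][(30.0):(40.0)][(-130.0):(-120.0)]")
        with urllib.request.urlopen(url) as resp:
            print(resp.read().decode()[:400])   # CSV rows: time, latitude, longitude, sst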

  7. GES DISC Greenhouse Gas Data Sets and Associated Services

    NASA Technical Reports Server (NTRS)

    Sherman, Elliot; Wei, Jennifer; Vollmer, Bruce; Meyer, David

    2017-01-01

    NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) archives and distributes rich collections of data on atmospheric greenhouse gases from multiple missions. Hosted data include those from the Atmospheric Infrared Sounder (AIRS) mission (which has observed CO2, CH4, ozone, and water vapor since 2002), as well as legacy water vapor and ozone retrievals from the TIROS Operational Vertical Sounder (TOVS) and the Upper Atmosphere Research Satellite (UARS) going back to the early 1980s. GES DISC also archives and supports data from seven projects of the Making Earth System Data Records for Use in Research Environments (MEaSUREs) program that have ozone and water vapor records. Greenhouse gas data from the A-Train satellite constellation are also available: (1) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) ozone, nitrous oxide, and water vapor since 2004; (2) Greenhouse Gases Observing Satellite (GOSAT) CO2 observations since 2009 from the Atmospheric CO2 Observations from Space (ACOS) task; and (3) Orbiting Carbon Observatory-2 (OCO-2) CO2 data since 2014. The most recent related data set that the GES DISC archives is methane flux for North America, produced as part of NASA's Carbon Monitoring System (CMS) project. This dataset contains estimates of methane emission in North America based on an inversion of the GEOS-Chem chemical transport model constrained by GOSAT observations (Turner et al., 2015). Along with data stewardship, an important focus area of the GES DISC is to enhance the usability of its data and broaden its user base. Users have unrestricted access to a new user-friendly search interface, which includes many services such as variable subsetting, format conversion, quality screening, and quick browse. The majority of the GES DISC data sets are also accessible through the Open-source Project for a Network Data Access Protocol (OPeNDAP) and the Web Coverage Service (WCS). The latter two services provide more options for specialized subsetting, format conversion, and image viewing. Additional data exploration, data preview, and preliminary analysis capabilities are available via NASA Giovanni, which obviates the need for users to download the data (Acker and Leptoukh, 2007). Giovanni provides a bridge between the data and science and has been very successful in extending GES DISC data to educational users and to users with limited resources.
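
    As a rough sketch of the OPeNDAP route described above (the URL, variable, and coordinate names here are placeholders, and real GES DISC endpoints sit behind Earthdata Login), a client such as xarray can subset a hosted granule without downloading it whole:

        # Hedged sketch: lazy OPeNDAP access with xarray. URL, variable, and
        # coordinate names are hypothetical; GES DISC requires authentication.
        import xarray as xr

        url = "https://disc.example.nasa.gov/opendap/ACOS/acos_xco2_sample.nc"
        ds = xr.open_dataset(url)              # only metadata is read at this point
        xco2 = ds["xco2"].sel(time="2010-07")  # assumed variable and coordinate
        print(float(xco2.mean()))              # triggers transfer of just this subset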

  8. The Water SWITCH-ON Spatial Information Platform (SIP)

    NASA Astrophysics Data System (ADS)

    Sala Calero, J., Sr.; Boot, G., Sr.; Dihé, P., Sr.; Arheimer, B.

    2017-12-01

    The amount of hydrological open data is continually growing, providing opportunities to the scientific community. Although existing data portals (the GEOSS Portal, the INSPIRE community geoportal, and others) enable access to open data, many users still find browsing through them difficult. Moreover, the time spent on gathering and preparing data is usually more significant than the time spent on the experiment itself. Thus, any improvement in searching, understanding, accessing, or using open data is greatly beneficial. The Spatial Information Platform (SIP) has been developed to tackle these issues within the SWITCH-ON European Commission funded FP7 project. The SIP has been designed as a set of tools based on open standards that provide users with the functionality described by the Publish-Find-Bind (PFB) pattern. In other words, the SIP helps users locate relevant and suitable data for their experiments and analyses, and access and transform it (filtering, extraction, selection, conversion, aggregation). Moreover, the SIP can be used to provide descriptive information about the data and to publish it so others can find and use it. The SIP is based on existing open data protocols such as OGC CSW, OGC WMS, and OPeNDAP, and on open-source components like PostgreSQL/PostGIS, GeoServer, and pyCSW. The SIP is divided into three main user interfaces: the BYOD (Browse Your Open Dataset) web interface, the Expert GUI tool, and the Upload Data and Metadata web interface. The BYOD HTML5 client is the main entry point for users who want to browse through open data in the SIP. The BYOD has a map interface based on the Leaflet JavaScript libraries so that users can search more efficiently. The web-based Open Data Registration Tool is a user-friendly upload and metadata description interface (geographical extent, license, DOI generation). The Expert GUI is a desktop application that provides full metadata editing capabilities for the metadata moderators of the project. In conclusion, the Spatial Information Platform (SIP) provides its community with a set of tools that make hydrological open data easier to understand and use. Moreover, the SIP is based on well-known OGC standards that will allow connection to, and data harvesting from, popular open data portals such as the GEOSS system of systems.
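
    Because the SIP catalogue is served by pyCSW, discovery can also be scripted against the standard CSW interface. A hedged sketch with OWSLib, in which the endpoint URL and search term are invented:

        # Hedged sketch: full-text discovery against an OGC CSW catalogue
        # such as pyCSW. The endpoint URL is a hypothetical placeholder.
        from owslib.csw import CatalogueServiceWeb
        from owslib.fes import PropertyIsLike

        csw = CatalogueServiceWeb("https://sip.example.eu/pycsw/csw")
        query = PropertyIsLike("csw:AnyText", "%river discharge%")
        csw.getrecords2(constraints=[query], maxrecords=10)
        for rid, rec in csw.records.items():
            print(rid, "-", rec.title)          # matching dataset records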

  9. NOAA Operational Model Archive Distribution System (NOMADS): High Availability Applications for Reliable Real Time Access to Operational Model Data

    NASA Astrophysics Data System (ADS)

    Alpert, J. C.; Wang, J.

    2009-12-01

    To reduce the impact of natural hazards and environmental changes, the National Centers for Environmental Prediction (NCEP) provide first-alert environmental prediction services as a preferred partner, and represent a critical national resource to operational and research communities affected by climate, weather, and water. NOMADS is now delivering high availability services as part of NOAA’s official real time data dissemination at its Web Operations Center (WOC) server. The WOC is a web service used by organizational units in and outside NOAA, and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value-added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The user (client) executes what is efficient to execute on the client, and the server efficiently provides format-independent access services. Client applications can execute on the server, if desired, but the same program can be executed on the client side with no loss of efficiency. In this way the paradigm lends itself to aggregation servers that act as servers of servers: listing and searching catalogs of holdings, data mining, and updating information from the metadata descriptions, so that collections of data in disparate places can be simultaneously accessed, with results processed on servers and clients to produce a needed answer. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing, and area sub-setting the large matrix of real time model data holdings. This approach ensures an efficient use of computer resources because users transmit/receive only the data necessary for their tasks, including metadata. Data sets served in this way with a high availability server offer vast possibilities for the creation of new products for value-added retailers and the scientific community. We demonstrate how users can use NOMADS services to select the values of ensemble model runs over the i-th ensemble component, (forecast) time, vertical level, global horizontal location, and variable: virtually a 6-dimensional data cube accessed across the internet. The example application, called the “Ensemble Probability Tool”, makes probability predictions of user-defined weather events that can be used in remote areas for weather-vulnerable circumstances. An application to access data for a verification pilot study, shown in detail in a companion paper (U06) in collaboration with the World Bank, is an example of the high value, usability, and relevance of NCEP products and service capability over a wide spectrum of user and partner needs.
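
    The 6-dimensional slicing that the Ensemble Probability Tool performs can be approximated in a few lines against any GDS/OPeNDAP endpoint. In this hedged sketch the URL, variable name, and dimension order are assumptions, not the actual NOMADS layout:

        # Hedged sketch of ensemble-probability extraction over OPeNDAP.
        # Endpoint, variable name, and dimension order are hypothetical.
        import numpy as np
        from netCDF4 import Dataset

        ds = Dataset("https://nomads.example.gov/dods/gens/gep_all")  # placeholder
        t2m = ds.variables["tmp2m"]        # assumed dims: (ens, time, lat, lon)
        point = t2m[:, 8, 180, 360]        # all members, one lead time, one grid cell
        prob = float(np.mean(point > 300.0))   # P(T2m > 300 K) across members
        print(f"event probability: {prob:.2f}")
        ds.close()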

  10. Inventory of GFS Files on NOMADS

    Science.gov Websites

    [Website index, truncated in source: inventory of GFS files on NOMADS, listing GRIB Filter and OPeNDAP access options by description, filename, and available cycles (e.g., 0.25 degree files, .fFFF, 00/06/12/18 UTC; 0.25 Degree (3 hourly to 240...).]

  11. The Joy of Playing with Oceanographic Data

    NASA Astrophysics Data System (ADS)

    Smith, A. T.; Xing, Z.; Armstrong, E. M.; Thompson, C. K.; Huang, T.

    2013-12-01

    The web is no longer just an afterthought. It is no longer just a presentation layer filled with HTML, CSS, JavaScript, frameworks, 3D, and more. It has become the medium of our communication. It is the database of all databases. It is the computing platform of all platforms. It has transformed the way we do science. Web services are the de facto method for communication between machines over the web. Representational State Transfer (REST) has standardized the way we architect services and their interfaces. In the Earth Science domain, we are familiar with tools and services such as the Open-source Project for Network Data Access Protocol (OPeNDAP), Thematic Realtime Environmental Distributed Data Services (THREDDS), and the Live Access Server (LAS). We are also familiar with various data formats such as NetCDF3/4, HDF4/5, GRIB, TIFF, etc. One of the challenges for the Earth Science community is accessing information within these data. There are community-accepted readers that our users can download and install. However, the Application Programming Interface (API) differs between these readers and is not standardized, which leads to non-portable applications. Webification (w10n) is an emerging technology, developed at the Jet Propulsion Laboratory, which exploits the hierarchical nature of a science data artifact (e.g., a granule file) to assign a URL to each element within the artifact. By embracing standards such as JSON, XML, and HTML5 and predictable URLs, w10n provides a simple interface that enables tool-builders and researchers to develop portable tools and applications to interact with artifacts of various formats. The NASA Physical Oceanography Distributed Active Archive Center (PO.DAAC) is the designated data center for observational products relevant to the physical state of the ocean. Over the past year PO.DAAC has been evaluating w10n technology by webifying its archive holdings, both to provide simplified access to oceanographic science artifacts and as a service to enable future tools and services development. In this talk, we will focus on a w10n-based system called the Distributed Oceanographic Webification Service (DOWS), being developed at PO.DAAC to provide a newer and simpler method for working with observational data artifacts. As part of PO.DAAC's continued effort to provide better tools and services to visualize our data, the talk will also discuss the latest in web-based data visualization tools and frameworks (such as d3.js, Three.js, Leaflet.js, and more) and techniques for working with webified oceanographic science data in both 2D and 3D web approaches.
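
    Under w10n, every element of a granule becomes addressable by URL. The sketch below is purely illustrative of that idea: the host, path, slice notation, and output parameter are assumptions rather than a documented PO.DAAC interface.

        # Hypothetical w10n-style request: a slice of one array inside a
        # granule is addressed directly in the URL. All names are invented.
        import requests

        url = ("https://podaac.example.nasa.gov/w10n/"
               "sst_granule.nc/sea_surface_temp[0:1:9]")
        resp = requests.get(url, params={"output": "json"}, timeout=60)
        resp.raise_for_status()
        print(resp.json())   # JSON rendering of the first ten array values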

  12. The USGODAE Monterey Data Server

    NASA Astrophysics Data System (ADS)

    Sharfstein, P.; Dimitriou, D.; Hankin, S.

    2005-12-01

    The USGODAE Monterey Data Server (http://www.usgodae.org/) has been established at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) as an explicit U.S. contribution to GODAE. The server is operated with oversight and funding from the Office of Naval Research (ONR). Support of the GODAE Monterey Data Server is accomplished by a cooperative effort between FNMOC and NOAA's Pacific Marine Environmental Laboratory (PMEL) in the on-going development of the GODAE server and the support of a collaborative network of GODAE assimilation groups. This server hosts near real-time in-situ oceanographic data available from the Global Telecommunications System (GTS) and other FTP sites, atmospheric forcing fields suitable for driving ocean models, and unique GODAE data sets, including demonstration ocean model products. It supports GODAE participants, as well as the broader oceanographic research community, and is becoming a significant node in the international GODAE program. GODAE is envisioned as a global system of observations, communications, modeling and assimilation, which will deliver regular, comprehensive information on the state of the oceans in a way that will promote and engender wide utility and availability of this resource for maximum benefit to society. It aims to make ocean monitoring and prediction a routine activity in a manner similar to weather forecasting. GODAE will contribute to an information system for the global ocean that will serve interests from climate and climate change to ship routing and fisheries. The USGODAE Server is developed and operated as a prototypical node for this global information system. Presenting data with a consistent interface and ensuring its availability in the maximum number of standard formats is one of the primary challenges in hosting the many diverse formats and broad range of data used by the GODAE community. To this end, all USGODAE data sets are available in their original format via HTTP and FTP. In addition, USGODAE data are served using Local Data Manager (LDM), THREDDS cataloging, OPeNDAP, and GODAE Live Access Server (LAS) from PMEL. Every effort is made to serve USGODAE data through the standards specified by the National Virtual Ocean Data System (NVODS) and the Integrated Ocean Observing System Data Management and Communications (IOOS/DMAC) specifications. USGODAE serves FNMOC GRIB files from the Navy Operational Global Atmospheric Prediction System (NOGAPS) and the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) as OPeNDAP data sets using the GrADS Data Server (GDS). The server also provides several FNMOC custom IEEE binary format high resolution ocean analysis products and model outputs through GDS. These data sets are also made available through LAS. The Server functions as one of two Argo Global Data Assembly Centers (GDACs), hosting the complete collection of quality-controlled Argo temperature/salinity profiling float data. The Argo collection includes all available Delayed-Mode (scientific quality controlled and corrected) data. USGODAE Argo data are served through OPeNDAP and LAS, which provide complete integration of the Argo data set into NVODS and the IOOS/DMAC. By providing researchers flexible, easy access to data through standard Internet and oceanographic interfaces, the USGODAE Monterey Data Server has become an invaluable resource for oceanographic research. 
Also, by promoting community data-serving projects, USGODAE strengthens the community and helps advance data-serving standards.

  13. Inventory of SREF Files on NOMADS

    Science.gov Websites

    [Website index, truncated in source: inventory of SREF files on NOMADS, listing GRIB Filter and OPeNDAP access options by description, filename, and available cycles (e.g., 40km files, .PP.fFF.grib2, 03/09/15/21 UTC; Grid 212 for all members and...).]

  14. Informatic infrastructure for Climatological and Oceanographic data based on THREDDS technology in a Grid environment

    NASA Astrophysics Data System (ADS)

    Tronconi, C.; Forneris, V.; Santoleri, R.

    2009-04-01

    CNR-ISAC-GOS is responsible for the Mediterranean Sea satellite operational system in the framework of the MOON Partnership. This observing system acquires satellite data and produces Near Real Time, Delayed Time, and Re-analysis Ocean Colour and Sea Surface Temperature products covering the Mediterranean and Black Seas and regional basins. In the framework of several projects (MERSEA, PRIMI, Adricosm Star, SeaDataNet, MyOcean, ECOOP), GOS is producing climatological/satellite datasets based on optimal interpolation and a specific regional algorithm for chlorophyll, updated in Near Real Time and in delayed mode. GOS has built: • an informatics infrastructure for data repository and delivery based on THREDDS technology. The datasets are generated in NetCDF format, compliant with both the CF convention and the international satellite-oceanographic specification, as prescribed by GHRSST (for SST). All data produced are made available to users through a THREDDS server catalog. • a LAS, installed in order to exploit the potential of NetCDF data and the OPeNDAP URL; it provides flexible access to geo-referenced scientific data. • a Grid environment based on Globus Technologies (GT4) connecting more than one institute; in particular, exploiting CNR and ESA clusters makes it possible to reprocess 12 years of chlorophyll data in less than one month (estimated processing time on a single-core PC: 9 months). In the poster we will give an overview of: • the features of the THREDDS catalogs, pointing out the powerful characteristics of this new middleware that has replaced the "old" OPeNDAP server; • the importance of adopting a common format (such as NetCDF) for data exchange; • the tools (e.g., LAS) connected with THREDDS and the NetCDF format; • the Grid infrastructure at ISAC. We will also present specific basin-scale High Resolution products and Ultra High Resolution regional/coastal products available in these catalogs.
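
    THREDDS catalogs like the ones described here are machine-readable, so clients can walk them programmatically. A hedged sketch using Unidata's siphon package, with a placeholder catalog URL:

        # Hedged sketch: listing datasets and their OPeNDAP endpoints from a
        # THREDDS catalog with siphon. The catalog URL is hypothetical.
        from siphon.catalog import TDSCatalog

        cat = TDSCatalog("https://thredds.example.cnr.it/thredds/catalog.xml")
        for name, ds in list(cat.datasets.items())[:5]:
            print(name, "->", ds.access_urls.get("OPENDAP"))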

  15. GENESIS SciFlo: Choreographing Interoperable Web Services on the Grid using a Semantically-Enabled Dataflow Execution Environment

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.

    2007-12-01

    The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via the Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid computing standards (WS-* and Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (a tree of operators). The SciFlo client and server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible from OpenGIS Consortium (OGC) Web Mapping Servers and Web Coverage Servers (WMS/WCS), and from Open-source Project for a Network Data Access Protocol (OPeNDAP) servers. SciFlo also publishes its own SOAP services for space/time query and subsetting of Earth Science datasets, and automated access to large datasets via lists of (FTP, HTTP, or DAP) URLs which point to on-line HDF or netCDF files. Typical distributed workflows obtain datasets by calling standard WMS/WCS servers or discovering and fetching data granules from FTP sites; invoke remote analysis operators available as SOAP services (each interface described by a WSDL document); and merge results into binary containers (netCDF or HDF files) for further analysis using local executable operators. Naming conventions (HDF-EOS, and CF-1.0 for netCDF) are exploited to automatically understand and read on-line datasets. More interoperable conventions, and broader adoption of existing conventions, are vital if we are to "scale up" automated choreography of Web Services beyond toy applications. Recently, the ESIP Federation sponsored a collaborative activity in which several ESIP members developed collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine.
We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, the benefits of doing collaborative science analysis at the "touch of a button" once services are connected, and further collaborations that are being pursued.
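
    Of the standardized interfaces the authors call for, WCS is among the most directly scriptable. A hedged example with OWSLib follows; the endpoint, coverage identifier, output format, and resolution are invented for illustration:

        # Hedged sketch: pulling a spatial subset from an OGC WCS 1.0.0
        # endpoint with OWSLib. URL and coverage identifier are hypothetical.
        from owslib.wcs import WebCoverageService

        wcs = WebCoverageService("https://wcs.example.nasa.gov/wcs", version="1.0.0")
        print(list(wcs.contents)[:5])                 # coverages offered by the server
        cov = wcs.getCoverage(identifier="AIRS_temperature",  # assumed coverage name
                              bbox=(-120, 30, -110, 40),
                              crs="EPSG:4326", resx=0.5, resy=0.5,
                              format="NetCDF")
        with open("subset.nc", "wb") as f:
            f.write(cov.read())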

  16. Integrating ArcGIS Online with GEOSS Data Access Broker

    NASA Astrophysics Data System (ADS)

    Lucchi, Roberto; Hogeweg, Marten

    2014-05-01

    The Global Earth Observation System of Systems (GEOSS) seeks to address nine societal benefit areas for Earth observations: disasters, health, energy, climate, agriculture, ecosystems, biodiversity, water, and weather. As governments and their partners continue to monitor the face of the Earth, the collection, storage, analysis, and sharing of these observations remain fragmented, incomplete, or redundant. Major observational gaps also remain (particularly as we seek to look beneath the surface of the land and the water). As such, GEO's credo is that "decision makers need a global, coordinated, comprehensive, and sustained system of observing systems." Not surprisingly, one of the largest blocks of issues facing GEOSS is in the area of data: access to data (including building services to make the data more accessible), inadequate data integration and interoperability, error and uncertainty of observations, spatial and temporal gaps in observations, and the related issues of user involvement and capacity building. This is especially true for people who stand to gain the most benefit from the datasets but don't have the resources or knowledge to use them. Esri has millions of GIS and imagery users in hundreds of thousands of organizations around the world that work in the aforementioned nine GEO societal benefit areas. Esri is therefore proud to have entered into a partnership with GEOSS, more specifically by way of a Memorandum of Understanding (MOU) between Esri and the Earth and Space Science Informatics (ESSI) Laboratory of Prof. Stefano Nativi at the CNR (National Research Council of Italy) Institute of Atmospheric Pollution Research. Esri is working with the ESSI Lab to integrate ArcGIS Online, by way of the ArcGIS Online API, into the GEOSS Data Access Broker (DAB), resulting in the discoverability of all public content from ArcGIS Online through many of the search portals that participate in this network (e.g., DataONE, CEOS, CUAHSI, OneGeology, IOOS). The synergistic efforts will include: 1) providing the GEOSS community with access to Esri GIS community content, expertise, and technology through the GEOSS DAB, as well as to collaboration tools via the ArcGIS platform; 2) encouraging the Esri GIS community to participate as contributors and users of GEOSS; 3) supporting the extension of GEOSS to include ArcGIS Online publicly available data; 4) collaborating on outreach to both the GIS and GEO communities on effective use of GEOSS, particularly for environmental decision-making; and 5) collaborating on the evolution of GEOSS as an open and interoperable platform in conjunction with the GEOSS community. Protocols such as OPeNDAP and formats such as netCDF will play a critical role. This talk will present the initial results of the collaboration, which include the integration of ArcGIS Online in the GEOSS DAB.

  17. Exploitation of Existing Voice Over Internet Protocol Technology for Department of the Navy Application

    DTIC Science & Technology

    2002-09-01

    [Search-result excerpt, truncated in source: an acronym glossary (LAN, Local Area Network; LDAP, Lightweight Directory Access Protocol; LLQ, Low Latency Queuing; MAC, Media Access Control; MarCorSysCom, Marine...; SDP, Session Description Protocol; SIP, Session Initiation Protocol; SMTP, Simple Mail Transfer Protocol; SPAWAR, Space and Naval Warfare Systems Center; SS7, ...) and an abstract fragment noting that routing calls over the Internet eliminates the cost of accessing the PSTN infrastructure previously required to carry the conversation.]

  18. Protecting intellectual property associated with Canadian academic clinical trials--approaches and impact.

    PubMed

    Ross, Sue; Magee, Laura; Walker, Mark; Wood, Stephen

    2012-12-27

    Intellectual property is associated with the creative work needed to design clinical trials. Two approaches have developed to protect the intellectual property associated with multicentre trial protocols prior to site initiation. The 'open access' approach involves publishing the protocol, permitting easy access to the complete protocol. The main advantages of the open access approach are that the protocol is freely available to all stakeholders, permitting them to discuss the protocol widely with colleagues, assess the quality and rigour of the protocol, determine the feasibility of conducting the trial at their centre, and, after trial completion, evaluate the reported findings based on a full understanding of the protocol. The main potential disadvantage of this approach is the potential for plagiarism; however, if that occurred, it should be easy to identify, given the open access to the original trial protocol, and to ensure that appropriate sanctions are applied to deal with the plagiarism. The 'restricted access' approach involves the use of non-disclosure agreements, legal documents that must be signed between the trial lead centre and collaborative sites. Potential sites must guarantee they will not disclose any details of the study before they are permitted to access the protocol. The main advantages of the restricted access approach accrue to the lead institution and nominated principal investigator, who protect their intellectual property associated with the trial. The main disadvantages are that ownership of the protocol and intellectual property is assigned to the lead institution; defining who 'needs to know' about the study protocol is difficult; and the use of non-disclosure agreements involves review by lawyers and institutional representatives at each site before access is permitted to the protocol, significantly delaying study implementation and adding substantial indirect costs to research institutes. This extra step may discourage sites from joining a trial. It is possible that the restricted access approach may contribute to the failure of well-designed trials without any significant benefit in protecting intellectual property. Funding agencies should formalize rules around open versus restricted access to the study protocol just as they have around open access to results.

  19. Inventory of NAM Files on NOMADS

    Science.gov Websites

    Inventory of NAM Files on NOMADS GRIB Filter options Description Filename Cycles Available 12km nam.tCCz.priconest.hiresfFF.tm00.grib2 00,06,12,18 UTC OPeNDAP options Description Filename Cycles Available Hourly nam1hr_CCz

  20. Access and accounting schemes of wireless broadband

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Huang, Benxiong; Wang, Yan; Yu, Xing

    2004-04-01

    In this paper, two wireless broadband access and accounting schemes are introduced. They differ in the client and access router modules. In one scheme, the Secure Shell (SSH) protocol is used in the access system. The SSH server performs authentication based on private-key cryptography. The advantages of this scheme are the security of the user's information and sophisticated access control. In the other scheme, the Secure Sockets Layer (SSL) protocol is used in the access system. It uses public-key cryptography. Nowadays, web browsers generally combine HTTP and the SSL protocol, and we use the SSL protocol to implement the encryption of the data between the clients and the access router. The schemes are the same in the RADIUS server part. Remote Authentication Dial In User Service (RADIUS), a security protocol in client/server form, is becoming a standard authentication/accounting protocol for access to the Internet. It will be explained in a flow chart. In our scheme, the access router serves as the client to the RADIUS server.

  1. Investigating Access Performance of Long Time Series with Restructured Big Model Data

    NASA Astrophysics Data System (ADS)

    Shen, S.; Ostrenga, D.; Vollmer, B.; Meyer, D. J.

    2017-12-01

    Data sets generated by models are substantially increasing in volume, due to increases in spatial and temporal resolution and in the number of output variables. Many users wish to download subsetted data in preferred data formats and structures, as it is getting increasingly difficult to handle the original full-size data files. For example, application research users, such as those involved with wind or solar energy or extreme weather events, are likely interested only in daily or hourly model data at a single point or for a small area, over a long time period, and prefer to have the data downloaded in a single file. With native model file structures, such as the hourly data from the NASA Modern-Era Retrospective analysis for Research and Applications Version 2 (MERRA-2), it may take over 10 hours to extract the parameters of interest at a single point for 30 years. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is exploring methods to address this particular user need. One approach is to create value-added data by restructuring the data files. Taking MERRA-2 data as an example, we have tested converting hourly data from one-day-per-file into different data cubes, such as one-month, one-year, or whole-mission cubes. Performance is compared for reading local data files and for accessing data through interoperable services, such as OPeNDAP. Results show that, compared to the original file structure, the new data cubes offer much better performance for accessing long time series. We have noticed that performance is associated with the cube size and structure, the compression method, and how the data are accessed. An optimized data cube structure will not only improve data access, but may also enable better online analytic services.
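
    The restructuring idea is simple to sketch with xarray: concatenate many small per-day granules into one long cube so that a later point extraction opens one file instead of thousands. The file names and the variable (T2M) below are placeholders, not the exact MERRA-2 layout used in the study:

        # Hedged sketch: building a time-series-friendly cube from daily
        # granules. Filenames and variable name are hypothetical.
        import glob
        import xarray as xr

        daily = xr.open_mfdataset(sorted(glob.glob("MERRA2_400.*.nc4")),
                                  combine="by_coords")   # many one-day files
        daily[["T2M"]].to_netcdf("merra2_t2m_cube.nc4")  # one multi-year cube

        # A long point time series now comes from a single open/read:
        ts = xr.open_dataset("merra2_t2m_cube.nc4")["T2M"].sel(
            lat=38.9, lon=-77.0, method="nearest")
        print(ts.sizes)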

  2. Investigating Access Performance of Long Time Series with Restructured Big Model Data

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Ostrenga, Dana M.; Vollmer, Bruce E.; Meyer, Dave

    2017-01-01

    Data sets generated by models are substantially increasing in volume, due to increases in spatial and temporal resolution and in the number of output variables. Many users wish to download subsetted data in preferred data formats and structures, as it is getting increasingly difficult to handle the original full-size data files. For example, application research users, such as those involved with wind or solar energy or extreme weather events, are likely interested only in daily or hourly model data at a single point (or for a small area) over a long time period, and prefer to have the data downloaded in a single file. With native model file structures, such as the hourly data from the NASA Modern-Era Retrospective analysis for Research and Applications Version 2 (MERRA-2), it may take over 10 hours to extract the parameters of interest at a single point for 30 years. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is exploring methods to address this particular user need. One approach is to create value-added data by restructuring the data files. Taking MERRA-2 data as an example, we have tested converting hourly data from one-day-per-file into different data cubes, such as one-month or one-year cubes. Performance is compared for reading local data files and for accessing data through interoperable services, such as OPeNDAP. Results show that, compared to the original file structure, the new data cubes offer much better performance for accessing long time series. We have noticed that performance is associated with the cube size and structure, the compression method, and how the data are accessed. An optimized data cube structure will not only improve data access, but may also enable better online analysis services.

  3. Protecting intellectual property associated with Canadian academic clinical trials - approaches and impact

    PubMed Central

    2012-01-01

    Intellectual property is associated with the creative work needed to design clinical trials. Two approaches have developed to protect the intellectual property associated with multicentre trial protocols prior to site initiation. The ‘open access’ approach involves publishing the protocol, permitting easy access to the complete protocol. The main advantages of the open access approach are that the protocol is freely available to all stakeholders, permitting them to discuss the protocol widely with colleagues, assess the quality and rigour of the protocol, determine the feasibility of conducting the trial at their centre, and, after trial completion, evaluate the reported findings based on a full understanding of the protocol. The main potential disadvantage of this approach is the potential for plagiarism; however, if that occurred, it should be easy to identify, given the open access to the original trial protocol, and to ensure that appropriate sanctions are applied to deal with the plagiarism. The ‘restricted access’ approach involves the use of non-disclosure agreements, legal documents that must be signed between the trial lead centre and collaborative sites. Potential sites must guarantee they will not disclose any details of the study before they are permitted to access the protocol. The main advantages of the restricted access approach accrue to the lead institution and nominated principal investigator, who protect their intellectual property associated with the trial. The main disadvantages are that ownership of the protocol and intellectual property is assigned to the lead institution; defining who ‘needs to know’ about the study protocol is difficult; and the use of non-disclosure agreements involves review by lawyers and institutional representatives at each site before access is permitted to the protocol, significantly delaying study implementation and adding substantial indirect costs to research institutes. This extra step may discourage sites from joining a trial. It is possible that the restricted access approach may contribute to the failure of well-designed trials without any significant benefit in protecting intellectual property. Funding agencies should formalize rules around open versus restricted access to the study protocol just as they have around open access to results. PMID:23270486

  4. Medium Access Control Protocols for Cognitive Radio Ad Hoc Networks: A Survey

    PubMed Central

    Islam, A. K. M. Muzahidul; Baharun, Sabariah; Mansoor, Nafees

    2017-01-01

    New wireless network paradigms will demand higher spectrum use and availability to cope with emerging data-hungry devices. Traditional static spectrum allocation policies cause spectrum scarcity, and new paradigms such as Cognitive Radio (CR), along with new protocols and techniques, need to be developed in order to achieve efficient spectrum usage. Medium Access Control (MAC) protocols are accountable for recognizing free spectrum, scheduling available resources, and coordinating the coexistence of heterogeneous systems and users. This paper provides an ample review of state-of-the-art MAC protocols, focusing mainly on Cognitive Radio Ad Hoc Networks (CRAHNs). First, a description of the fundamental cognitive radio functions is presented. Next, MAC protocols are divided into three groups based on their channel access mechanism, namely time-slotted protocols, random access protocols, and hybrid protocols. In each group, a detailed and comprehensive explanation of the latest MAC protocols is presented, as well as the pros and cons of each protocol. A discussion on future challenges for CRAHN MAC protocols is included, with a comparison of the protocols from a functional perspective. PMID:28926952

  5. Utilizing Satellite-derived Precipitation Products in Hydrometeorological Applications

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Ostrenga, D.; Teng, W. L.; Kempler, S. J.; Huffman, G. J.

    2012-12-01

    Each year, droughts and floods around the world cause severe property damage and human casualties. Accurate measurements and forecasts are important for preparedness and mitigation efforts. Through multi-satellite blending techniques, significant progress has been made over the past decade in satellite-based precipitation product development, in terms of the products' spatial and temporal resolutions as well as their timely availability. These new products are widely used in various research and applications. In particular, the TRMM Multi-satellite Precipitation Analysis (TMPA) products archived and distributed by the NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) provide 3-hourly, daily, and monthly near-global (50° N - 50° S) precipitation datasets for research and applications. Two versions of TMPA products are available: research (3B42, 3B43, rain-gauge adjusted) and near-real-time (3B42RT). At GES DISC, we have developed precipitation data services to support hydrometeorological applications, in order to maximize the TRMM mission's societal benefits. In this presentation, we will present examples of utilizing TMPA precipitation products in hydrometeorological applications, including: 1) monitoring global floods and droughts; 2) providing data services to support the USDA Crop Explorer; 3) supporting hurricane monitoring activities and research; and 4) performing retrospective analog-year analyses to improve USDA's world agricultural supply and demand estimates. We will also present precipitation data services that can be used to support hydrometeorological applications, including: 1) the user-friendly TRMM Online Visualization and Analysis System (TOVAS; URL: http://disc2.nascom.nasa.gov/Giovanni/tovas/); 2) Mirador (http://mirador.gsfc.nasa.gov/), a simplified interface for searching, browsing, and ordering Earth science data at GES DISC; 3) the Simple Subset Wizard (http://disc.sci.gsfc.nasa.gov/SSW/) for data subsetting and format conversion; 4) data via OPeNDAP (http://disc.sci.gsfc.nasa.gov/services/opendap/), which can be used for remote access to individual variables within datasets in a form usable by many tools, such as IDV, McIDAS-V, Panoply, Ferret, and GrADS; 5) the GrADS-DODS Data Server, or GDS (http://disc2.nascom.nasa.gov/dods/); 6) the Open Geospatial Consortium (OGC) Web Map Service (WMS) (http://disc.sci.gsfc.nasa.gov/services/wxs_ogc.shtml), which allows the use of data and enables clients to build customized maps with data coming from a different network; and 7) NASA gridded hydrological data access through CUAHSI HIS (Consortium of Universities for the Advancement of Hydrologic Science, Inc. - Hydrologic Information Systems).
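
    For the OPeNDAP route in item 4 of the second list, a client such as pydap can pull a small TMPA window without downloading whole granules. A hedged sketch, in which the dataset URL, variable name, and index ranges are illustrative rather than the actual GES DISC layout:

        # Hedged sketch: subsetting a precipitation grid over OPeNDAP with
        # pydap. URL, variable name, and indices are hypothetical.
        from pydap.client import open_url

        ds = open_url("https://disc.example.nasa.gov/opendap/TRMM_3B42/3B42_daily.nc")
        precip = ds["precipitation"]          # assumed (time, lat, lon) variable
        window = precip[0, 400:420, 700:720]  # one day, small lat/lon window
        print(window.data)                    # only this window crossed the network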

  6. A simple, effective media access protocol system for integrated, high data rate networks

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Maly, K.; Overstreet, C. M.; Khanna, S.; Zhang, L.

    1992-01-01

    The operation and performance of a dual media access protocol for integrated, gigabit networks are described. Unlike other dual protocols, each protocol supports a different class of traffic. The Carrier Sensed Multiple Access-Ring Network (CSMA/RN) protocol and the Circulating Reservation Packet (CRP) protocol support asynchronous and synchronous traffic, respectively. The two protocols operate with minimal impact upon each other. Performance information demonstrates that they support a complete range of integrated traffic loads, do not require call setup/termination or a special node for synchronous traffic control, and provide effective pre-use and recovery. The CRP also provides guaranteed access and fairness control for the asynchronous system. The paper demonstrates that the CSMA-CRP system fulfills many of the requirements for gigabit LAN-MAN networks most effectively and simply. To accomplish this, CSMA-CRP features are compared against similar ring and bus systems, such as the Cambridge Fast Ring, Metaring, Cyclic Reservation Multiple Access, and the Distributed Queue Dual Bus (DQDB).

  7. Connecting long-tail scientists with big data centers using SaaS

    NASA Astrophysics Data System (ADS)

    Percivall, G. S.; Bermudez, L. E.

    2012-12-01

    Big data centers and long-tail scientists represent two extremes in the geoscience research community. Interoperability and inter-use based on software-as-a-service (SaaS) increase access to big data holdings by this underserved community of scientists. Large, institutional data centers have long been recognized as vital resources in the geoscience community. Permanent data archiving and dissemination centers provide "access to the data and (are) a critical source of people who have experience in the use of the data and can provide advice and counsel for new applications." [NRC] The "long tail of science" comprises the geoscience researchers that work separately from institutional data centers [Heidorn]. Long-tail scientists need to be efficient consumers of data from large, institutional data centers. Discussions in NSF EarthCube capture the challenges: "Like the vast majority of NSF-funded researchers, Alice (a long-tail scientist) works with limited resources. In the absence of suitable expertise and infrastructure, the apparently simple task that she assigns to her graduate student becomes an information discovery and management nightmare. Downloading and transforming datasets takes weeks." [Foster, et al.] The long-tail metaphor points to methods to bridge the gap, i.e., the Web. A decade ago, OGC began building a geospatial information space using open web standards for geoprocessing [ORM]. Recently, [Foster, et al.] accurately observed that "by adopting, adapting, and applying semantic web and SaaS technologies, we can make the use of geoscience data as easy and convenient as consumption of online media." SaaS places web services into cloud computing. SaaS for geospatial data is emerging rapidly, building on the first-generation geospatial web, e.g., the OGC Web Coverage Service [WCS] and the Data Access Protocol [DAP]. Several recent examples show progress in applying SaaS to the geosciences: - NASA's Earth Data Coherent Web has a goal to improve the science user experience, using web services (e.g., W*S, SOAP, RESTful) to reduce barriers to using EOSDIS data [ECW]. - NASA's LANCE provides direct access to vast amounts of satellite data using the OGC Web Map Tile Service (WMTS). - NOAA's Unified Access Framework for Gridded Data (UAF Grid) is a web-service-based capability for direct access to a variety of datasets using netCDF, OPeNDAP, THREDDS, WMS, and WCS [UAF]. Tools to access SaaS offerings are many and varied: some proprietary, others open source; some run in browsers, others as stand-alone applications. What is required is interoperability through the web interfaces offered by the data centers. NOAA's UAF service stack supports Matlab, ArcGIS, Ferret, GrADS, Google Earth, IDV, and LAS. Any SaaS that offers OGC Web Services (WMS, WFS, WCS) can be accessed by scores of clients [OGC]. While there has been much progress in recent years toward offering web services for the long tail of scientists, more needs to be done. Web services offer data access, but more than access is needed for inter-use of data, e.g., defining data schemas that allow for data fusion, and addressing coordinate systems, spatial geometry, and semantics for observations. Connecting long-tail scientists with large data centers using SaaS and, in the future, the semantic web will address this large and currently underserved user community.
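
    As a concrete taste of the "scores of clients" point, any generic OGC client can pull a map from such a service. A hedged OWSLib sketch, with an invented server URL and layer name:

        # Hedged sketch: a WMS GetMap request with OWSLib. The endpoint and
        # layer name are hypothetical placeholders.
        from owslib.wms import WebMapService

        wms = WebMapService("https://maps.example.gov/wms", version="1.1.1")
        img = wms.getmap(layers=["sea_surface_temperature"],  # assumed layer
                         srs="EPSG:4326",
                         bbox=(-180, -90, 180, 90),
                         size=(1024, 512),
                         format="image/png")
        with open("sst.png", "wb") as f:
            f.write(img.read())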

  8. Synthesizing Existing CSMA and TDMA Based MAC Protocols for VANETs

    PubMed Central

    Huang, Jiawei; Li, Qi; Zhong, Shaohua; Liu, Lianhai; Zhong, Ping; Wang, Jianxin; Ye, Jin

    2017-01-01

    Many Carrier Sense Multiple Access (CSMA) and Time Division Multiple Access (TDMA) based medium access control (MAC) protocols for vehicular ad hoc networks (VANETs) have been proposed recently. Contrary to the common perception that they are competitors, we argue that the underlying strategies used in these MAC protocols are complementary. Based on this insight, we design CTMAC, a MAC protocol that synthesizes the existing strategies, namely random channel access (used in CSMA-style protocols) and arbitrated channel reservation (used in TDMA-based protocols). CTMAC swiftly changes its strategy according to the vehicle density, and its performance is better than that of the state-of-the-art protocols. We evaluate CTMAC using at-scale simulations. Our results show that CTMAC reduces the channel completion time and increases the network goodput by 45% for a wide range of application workloads and network settings. PMID:28208590

  9. The ChArMEx database

    NASA Astrophysics Data System (ADS)

    Ferré, Hélène; Descloitres, Jacques; Fleury, Laurence; Boichard, Jean-Luc; Brissebrat, Guillaume; Focsa, Loredana; Henriot, Nicolas; Mastrorillo, Laurence; Mière, Arnaud; Vermeulen, Anne

    2013-04-01

    The Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at a scientific assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project includes long-term monitoring of environmental parameters, intensive field campaigns, use of satellite data, and modelling studies. ChArMEx scientists therefore produce and need access to a wide diversity of data. In this context, the objective of the database task is to organize data management, the distribution system, and services such as facilitating the exchange of information and stimulating collaboration between researchers within the ChArMEx community, and beyond. The database relies on a strong collaboration between the OMP and ICARE data centres and falls within the scope of the Mediterranean Integrated Studies at Regional And Local Scales (MISTRALS) program data portal. All the data produced by or of interest for the ChArMEx community will be documented in the data catalogue and accessible through the database website: http://mistrals.sedoo.fr/ChArMEx. The database website offers different tools: - A registration procedure which enables any scientist to accept the data policy and apply for a user database account. - Forms to document observations or products that will be provided to the database in compliance with international metadata standards (ISO 19115-19139; INSPIRE; Global Change Master Directory Thesaurus). - A search tool to browse the catalogue using thematic, geographic, and/or temporal criteria. - Sorted lists of the datasets by thematic keywords, by measured parameters, by instruments, or by platform type. - A shopping-cart web interface to order in situ data files. At present, datasets from the background monitoring station of Ersa, Cape Corsica, and from the 2012 ChArMEx pre-campaign are available. - A user-friendly access to satellite products (SEVIRI, TRMM, PARASOL...) stored in the ICARE data archive, using the OPeNDAP protocol. The website will soon offer new facilities. In particular, many in situ datasets will be homogenized and inserted in a relational database, in order to enable more accurate data selection and download of different datasets in a shared format. In order to meet the operational needs of the airborne and ground-based observational teams during the ChArMEx 2012 pre-campaign and the 2013 experiment, a day-to-day quick-look and report display website has been developed too: http://choc.sedoo.org. It offers a convenient way to browse weather conditions and chemical composition during the campaign periods.

  10. Best Practices for Preparing Interoperable Geospatial Data

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Santhana Vannan, S.; Cook, R. B.; Wilson, B. E.; Beaty, T. W.

    2010-12-01

    Geospatial data is critically important for a wide scope of research and applications: carbon cycle and ecosystems, climate change, land use and urban planning, environmental protection, etc. Geospatial data is created by different organizations using different methods (remote sensing observations, field surveys, model simulations, etc.) and stored in various formats. Geospatial data is therefore diverse and heterogeneous, which creates a major barrier to sharing and using it, especially when targeting a broad user community. Many efforts have been made to address different aspects of using geospatial data by improving its interoperability. For example, the specification for Open Geospatial Consortium (OGC) catalog services defines a standard way for geospatial information discovery, while OGC Web Coverage Services (WCS) and OPeNDAP define interoperable protocols for geospatial data access. But the reality is that standard mechanisms for data discovery and access alone are not enough: the geospatial data content itself has to be organized in standard, easily understandable, and readily usable formats. The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) archives data and information relevant to biogeochemical dynamics, ecological data, and environmental processes. The Modeling and Synthesis Thematic Data Center (MAST-DC) prepares and distributes both input and output data of carbon cycle models and provides data support for synthesis and terrestrial model intercomparison at multiple scales. Both of these NASA-funded data centers compile and distribute a large amount of diverse geospatial data and have broad user communities, including GIS users, Earth science researchers, and ecosystem modeling teams. The ORNL DAAC and MAST-DC address this geospatial data interoperability issue by standardizing the data content and feeding it into a well-designed Spatial Data Infrastructure (SDI) which provides interoperable mechanisms to advertise, visualize, and distribute the standardized geospatial data. In this presentation, we summarize the experience gained and the best practices for geospatial data standardization. The presentation will describe how diverse and historical data archived in the ORNL DAAC were converted into standard, non-proprietary formats; what tools were used to make the conversion; how spatial and temporal information is properly captured in a consistent manner; how to name a data file or a variable to make it both human-friendly and semantically interoperable; how the NetCDF file format and CF convention can promote data usage in the ecosystem modeling user community; how standardized geospatial data can be fed into OGC Web Services to support on-demand data visualization and access; and how metadata should be collected and organized so that they can be discovered through standard catalog services.

  11. The ChArMEx database

    NASA Astrophysics Data System (ADS)

    Ferré, Helene; Belmahfoud, Nizar; Boichard, Jean-Luc; Brissebrat, Guillaume; Descloitres, Jacques; Fleury, Laurence; Focsa, Loredana; Henriot, Nicolas; Mastrorillo, Laurence; Mière, Arnaud; Vermeulen, Anne

    2014-05-01

    The Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at a scientific assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project includes long-term monitoring of environmental parameters, intensive field campaigns, use of satellite data, and modelling studies. ChArMEx scientists therefore produce and need access to a wide diversity of data. In this context, the objective of the database task is to organize the data management and distribution system and related services, so as to facilitate the exchange of information and stimulate collaboration between researchers within the ChArMEx community, and beyond. The database relies on a strong collaboration between the OMP and ICARE data centres and has been set up in the framework of the Mediterranean Integrated Studies at Regional And Local Scales (MISTRALS) program data portal. All the data produced by or of interest for the ChArMEx community will be documented in the data catalogue and accessible through the database website: http://mistrals.sedoo.fr/ChArMEx. At present, the ChArMEx database contains about 75 datasets, including 50 in situ datasets (2012 and 2013 campaigns, Ersa background monitoring station), 25 model outputs (dust model intercomparison, MEDCORDEX scenarios), and a high-resolution emission inventory over the Mediterranean. Many in situ datasets have been inserted in a relational database, in order to enable more accurate data selection and download of different datasets in a shared format. The database website offers different tools:
    - A registration procedure which enables any scientist to accept the data policy and apply for a user database account.
    - A data catalogue that complies with international metadata standards (ISO 19115-19139; INSPIRE European Directive; Global Change Master Directory Thesaurus).
    - Metadata forms to document observations or products that will be provided to the database.
    - A search tool to browse the catalogue using thematic, geographic and/or temporal criteria.
    - A shopping-cart web interface to order in situ data files.
    - A web interface to select and access homogenized datasets.
    Interoperability between the two data centres is being set up using the OPeNDAP protocol. The data portal will soon offer user-friendly access to satellite products managed by the ICARE data centre (SEVIRI, TRMM, PARASOL...). In order to meet the operational needs of the airborne and ground-based observational teams during the ChArMEx 2012 and 2013 campaigns, a day-to-day chart and report display website has also been developed: http://choc.sedoo.org. It offers a convenient way to browse weather conditions and chemical composition during the campaign periods.

  12. A Cloud-Based Infrastructure for Near-Real-Time Processing and Dissemination of NPP Data

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Valente, E. G.; Chettri, S. S.

    2011-12-01

    We are building a scalable cloud-based infrastructure for generating and disseminating near-real-time data products from a variety of geospatial and meteorological data sources, including the new National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP). Our approach relies on linking Direct Broadcast and other data streams to a suite of scientific algorithms coordinated by NASA's International Polar-Orbiter Processing Package (IPOPP). The resulting data products are directly accessible to a wide variety of end-user applications, via industry-standard protocols such as OGC Web Services, Unidata Local Data Manager, or OPeNDAP, using open source software components. The processing chain employs on-demand computing resources from Amazon.com's Elastic Compute Cloud and NASA's Nebula cloud services. Our current prototype targets short-term weather forecasting, in collaboration with NASA's Short-term Prediction Research and Transition (SPoRT) program and the National Weather Service. Direct Broadcast is especially crucial for NPP, whose current ground segment is unlikely to deliver data quickly enough for short-term weather forecasters and other near-real-time users. Direct Broadcast also allows full local control over data handling, from the receiving antenna to end-user applications: this provides opportunities to streamline processes for data ingest, processing, and dissemination, and thus to make interpreted data products (Environmental Data Records) available to practitioners within minutes of data capture at the sensor. Cloud computing lets us grow and shrink computing resources to meet large and rapid fluctuations in data availability (twice daily for polar orbiters) - and similarly large fluctuations in demand from our target (near-real-time) users. This offers a compelling business case for cloud computing: the processing or dissemination systems can grow arbitrarily large to sustain near-real-time data access despite surges in data volumes or user demand, but that computing capacity (and hourly costs) can be dropped almost instantly once the surge passes. Cloud computing also allows low-risk experimentation with a variety of machine architectures (processor types; bandwidth, memory, and storage capacities, etc.) and of system configurations (including massively parallel computing patterns). Finally, our service-based approach (in which user applications invoke software processes on a Web-accessible server) facilitates access into datasets of arbitrary size and resolution, and allows users to request and receive tailored products on demand. To maximize the usefulness and impact of our technology, we have emphasized open, industry-standard software interfaces. We are also using and developing open source software to facilitate the widespread adoption of similar, derived, or interoperable systems for processing and serving near-real-time data from NPP and other sources.

  13. Oceans 2.0: Interactive tools for the Visualization of Multi-dimensional Ocean Sensor Data

    NASA Astrophysics Data System (ADS)

    Biffard, B.; Valenzuela, M.; Conley, P.; MacArthur, M.; Tredger, S.; Guillemot, E.; Pirenne, B.

    2016-12-01

    Ocean Networks Canada (ONC) operates ocean observatories on all three of Canada's coasts. The instruments produce 280 gigabytes of data per day, with half a petabyte archived so far. In 2015, 13 terabytes were downloaded by over 500 users from across the world. ONC's data management system is referred to as "Oceans 2.0" owing to its interactive, participative features. A key element of Oceans 2.0 is real-time data acquisition and processing: custom device drivers implement the input-output protocol of each instrument. Automatic parsing and calibration take place on the fly, followed by event detection and quality control. All raw data are stored in a file archive, while the processed data are copied to fast databases. Interactive access to processed data is provided through data download and visualization/quick-look features that are adapted to diverse data types (scalar, acoustic, video, multi-dimensional, etc.). Data may be post-processed or re-processed to add features or analyses, correct errors, update calibrations, etc. A robust storage structure has been developed, consisting of an extensive file system and a NoSQL database (Cassandra). Cassandra is a node-based, open-source, distributed database management system. It is scalable and offers improved performance for big data. A key feature is data summarization. The system has also been integrated with web services and an ERDDAP OPeNDAP server, capable of serving scalar and multidimensional data from Cassandra for fixed or mobile devices. A complex data viewer has been developed making use of the big-data capability to interactively display live or historic echo sounder and acoustic Doppler current profiler data, where users can scroll, apply processing filters and zoom through gigabytes of data with simple interactions. This new technology brings scientists one step closer to a comprehensive, web-based data analysis environment in which visual assessment, filtering, event detection and annotation can be integrated.
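
    For a sense of how an ERDDAP endpoint like the one described above is typically queried, here is a minimal hedged sketch; the host, dataset ID, and variable name are placeholders rather than ONC's actual services, and the constraint assumes a dataset with a single time axis.

    ```python
    # Hypothetical ERDDAP griddap request; host/dataset/variable are placeholders.
    import urllib.request

    base = "https://example.org/erddap/griddap/hypothetical_timeseries"
    # griddap constraint syntax: variable[(start):(stop)] for each dimension
    query = "temperature[(2016-01-01T00:00:00Z):(2016-01-07T00:00:00Z)]"

    with urllib.request.urlopen(f"{base}.csv?{query}") as resp:
        print(resp.read().decode()[:400])  # first rows of the CSV response
    ```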

  14. An Innovative Open Data-driven Approach for Improved Interpretation of Coverage Data at NASA JPL's PO.DAAC

    NASA Astrophysics Data System (ADS)

    McGibbney, L. J.; Armstrong, E. M.

    2016-12-01

    Figuratively speaking, Scientific Datasets (SD) are shared by data producers in a multitude of shapes, sizes and flavors. Primarily, however, they exist as machine-independent manifestations supporting the creation, access, and sharing of array-oriented SD that can on occasion be spread across multiple files. Within the Earth sciences, the most notable general examples include the HDF family, NetCDF, etc., with other formats such as GRIB used pervasively within specific domains such as the oceanographic, atmospheric and meteorological sciences. Such file formats contain coverage data, i.e. a digital representation of some spatio-temporal phenomenon. A challenge for large data producers such as NASA and NOAA, as well as for consumers of coverage datasets (particularly surrounding visualization and interactive use within web clients), is that this is still not a straightforward issue due to size, serialization and inherent complexity. Additionally, existing data formats are either unsuitable for the Web (like netCDF files) or hard to interpret independently due to missing standard structures and metadata (e.g. the OPeNDAP protocol). Therefore, alternative, Web-friendly manifestations of such datasets are required. CoverageJSON is an emerging data format for publishing coverage data to the web in a web-friendly way which fits in with the linked-data publication paradigm, hence lowering the barrier to interpretation for consumers via mobile devices and client applications, etc., as well as for data producers, who can build next-generation web-friendly services around datasets. This work will detail how CoverageJSON is being evaluated at NASA JPL's PO.DAAC as an enabling data representation format for publishing SD as Linked Open Data, embedded within SD landing pages as well as via semantic data repositories. We are currently evaluating how utilization of CoverageJSON within SD landing pages addresses the long-standing acknowledgement that SD producers are not currently addressing content-based optimization within their SD landing pages for better crawlability by commercial search engines.
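
    To make the format concrete, the sketch below builds a minimal CoverageJSON-style gridded coverage as a Python dict; the "referencing" member and other details of the specification are omitted for brevity, and all values are invented.

    ```python
    # Minimal, trimmed CoverageJSON-style document (illustrative values only).
    import json

    coverage = {
        "type": "Coverage",
        "domain": {
            "type": "Domain",
            "domainType": "Grid",
            "axes": {
                "x": {"values": [-10.0, -5.0]},
                "y": {"values": [40.0, 45.0]},
                "t": {"values": ["2016-01-01T00:00:00Z"]},
            },
        },
        "parameters": {
            "SST": {
                "type": "Parameter",
                "unit": {"symbol": "K"},
                "observedProperty": {"label": {"en": "Sea surface temperature"}},
            }
        },
        "ranges": {
            "SST": {
                "type": "NdArray",
                "dataType": "float",
                "axisNames": ["t", "y", "x"],
                "shape": [1, 2, 2],
                "values": [290.1, 290.4, 291.0, 290.8],
            }
        },
    }

    print(json.dumps(coverage, indent=2))
    ```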

  15. A New Look at Data Usage by Using Metadata Attributes as Indicators of Data Quality

    NASA Astrophysics Data System (ADS)

    Won, Y. I.; Wanchoo, L.; Behnke, J.

    2016-12-01

    NASA's Earth Observing System Data and Information System (EOSDIS) stores and distributes data from EOS satellites, as well as ancillary, airborne, in-situ, and socio-economic data. Twelve EOSDIS data centers support different scientific disciplines by providing products and services tailored to specific science communities. Although discipline oriented, these data centers provide the common data management functions of ingest, archive and distribution, as well as documentation of their data and services on their websites. The Earth Science Data and Information System (ESDIS) Project collects metrics on these functions from the EOSDIS data centers on a daily basis through a tool called the ESDIS Metrics System (EMS); these metrics are used in this study. The implementation of the Earthdata Login - formerly known as the User Registration System (URS) - across the various NASA data centers provides the EMS with additional information about users obtaining data products from EOSDIS data centers. These additional user attributes collected by the Earthdata Login, such as the user's primary area of study, can augment the understanding of data usage, which in turn can help the EOSDIS program better understand users' needs. This study will review the key metrics (users, distributed volume, and files) in multiple ways to gain an understanding of the significance of the metadata. Characterizing the usability of data by key metadata elements such as discipline and study area will assist in understanding how the users have evolved over time. The data usage pattern based on version numbers may also provide some insight into the level of data quality. In addition, data metrics by various services such as the Open-source Project for a Network Data Access Protocol (OPeNDAP), Web Map Service (WMS), Web Coverage Service (WCS), and subsetting will address how these services have extended the usage of data. Overall, this study will present the usage of data and metadata through metrics analyses and will assist data centers in better supporting the needs of their users.

  16. 75 FR 69502 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-12

    ... Connection testing [using current Nasdaq access protocols] during the normal operating hours of the NTF; No Charge--For Idle Connection testing [using current Nasdaq access protocols]; $333/hour--For Active Connection testing [using current Nasdaq access protocols] at all times other than the normal operating hours...

  17. EUMIS - an open portal framework for interoperable marine environmental services

    NASA Astrophysics Data System (ADS)

    Hamre, T.; Sandven, S.; Leadbetter, A.; Gouriou, V.; Dunne, D.; Grant, M.; Treguer, M.; Torget, Ø.

    2012-04-01

    NETMAR (Open service network for marine environmental data) is an FP7 project that aims to develop a pilot European Marine Information System (EUMIS) for searching, downloading and integrating satellite, in situ and model data from ocean and coastal areas. EUMIS will use a semantic framework coupled with ontologies for identifying and accessing distributed data, such as near-real-time, forecast and historical data. Four pilots have been defined to clarify the needs for satellite, in situ and model-based products and services in selected user communities:
    - Pilot 1: Arctic Sea Ice Monitoring and Forecasting
    - Pilot 2: Oil spill drift forecast and shoreline cleanup assessment services in France
    - Pilot 3: Ocean colour - Marine Ecosystem, Research and Monitoring
    - Pilot 4: International Coastal Atlas Network (ICAN) for coastal zone management
    NETMAR is developing a set of data delivery services for the targeted user communities by means of standard web-GIS and OPeNDAP protocols. Processing services and adaptive service-chaining services will also be developed, to enable users to generate new products suited to their needs. Both data retrieved from online repositories and products generated dynamically can be accessed and visualised in the EUMIS portal. For this purpose, a GIS Viewer, a Service Chaining Editor and an Ontology Browser/Discovery Client have been developed and integrated in EUMIS. The EUMIS portal is developed using a portal framework that is compliant with the JSR-168 (Java Portlet Specification 1.0) and JSR-286 (Java Portlet Specification 2.0) standards. These standards define the interface (contract) and lifecycle management for a portal system component, a portlet, which can be implemented in a number of programming languages, not only Java. The GIS Viewer is developed using a combination of Java, JavaScript and JSF (e.g. MapFaces). The Service Chaining Editor is implemented in JavaScript (using libraries such as jQuery and WireIt), and the Ontology Browser/Discovery Client by means of Adobe Flex. In addition to the portlets developed in the project, we have also used several of the pre-built portlets that come with the Liferay Community Edition portal framework, notably the wiki, forum and RSS feed portlets. The presentation will focus on the developed system components and show some examples of products and services from the defined pilots.

  18. Testing the US Integrated Ocean Observing System Data Discovery and Distribution Infrastructure with Real-World Problems

    NASA Astrophysics Data System (ADS)

    Snowden, D. P.; Signell, R.; Knee, K.; Kupiec, J.; Bird, A.; Fratantonio, B.; Koeppen, W.; Wilcox, K.

    2014-12-01

    The distributed, service-oriented architecture of the US Integrated Ocean Observing System (US IOOS) has been implemented mostly independently by US IOOS partners, using different software approaches and different levels of compliance to standards. Some uniformity has been imparted by documenting the intended output data formats and content and service interface behavior. But to date, a rigorous testing of the distributed system of systems has not been done. To assess the functionality of this system, US IOOS is conducting a system integration test (http://github.com/ioos/system-test) that evaluates whether the services (i.e. SOS, OPeNDAP, WMS, CS/W) deployed by the 17 Federal partners and 11 Regional Associations can solve real-world problems. Scenarios were selected that both address IOOS societal goals and test different functionality of the data architecture. For example, one scenario performs an assessment of water level forecast skill by prompting the user for a bounding box and a temporal extent, searching metadata catalogs via a Catalog Services for the Web (CS/W) interface to discover available sea level observations and model results, extracting data from the identified service endpoints (either OPeNDAP or SOS), interpolating both modeled and observed data onto a common time base, and then comparing the skill of the various models. Other scenarios explore issues such as hypoxia and wading bird habitats. For each scenario, the entire workflow (user input, search, access, analysis and visualization) is captured in an IPython Notebook on GitHub. This allows the scenarios to be self-documenting as well as reproducible by anyone, using free software. The Python packages required to run the scenarios are all available on GitHub, and Conda packages are available on binstar.org so that users can easily run the scenarios using the free Anaconda Python distribution. With the advent of hosted services such as Wakari, it is possible for anyone to reproduce these workflows for free, without installing any software locally, using just their web browser. Thus, in addition to serving as a system integration test, this project provides examples that anyone in the geoscience community can adapt to solve other real-world problems.
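
    A condensed sketch of the discovery step in such a scenario, using OWSLib (a library commonly used in the IOOS notebooks); the catalog URL, bounding box, and search text below are placeholders.

    ```python
    # Hedged sketch: CS/W discovery of sea level datasets via OWSLib.
    from owslib.csw import CatalogueServiceWeb
    from owslib import fes

    csw = CatalogueServiceWeb("https://example.org/csw")  # hypothetical catalog
    bbox = fes.BBox([-77.0, 34.0, -70.0, 42.0])           # lon/lat bounding box
    keyword = fes.PropertyIsLike("apiso:AnyText", literal="%sea_surface_height%")

    csw.getrecords2(constraints=[fes.And([bbox, keyword])], maxrecords=20)
    for rec_id, rec in csw.records.items():
        print(rec.title)
        # rec.references lists service endpoints (OPeNDAP, SOS, ...) to pull from
    ```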

  19. FD/DAMA Scheme For Mobile/Satellite Communications

    NASA Technical Reports Server (NTRS)

    Yan, Tsun-Yee; Wang, Charles C.; Cheng, Unjeng; Rafferty, William; Dessouky, Khaled I.

    1992-01-01

    Integrated-Adaptive Mobile Access Protocol (I-AMAP) proposed to allocate communication channels to subscribers in first-generation MSAT-X mobile/satellite communication network. Based on concept of frequency-division/demand-assigned multiple access (FD/DAMA) where partition of available spectrum adapted to subscribers' demands for service. Requests processed, and competing requests resolved according to channel-access protocol, or free-access tree algorithm described in "Connection Protocol for Mobile/Satellite Communications" (NPO-17735). Assigned spectrum utilized efficiently.

  20. Earth System Model Development and Analysis using FRE-Curator and Live Access Servers: On-demand analysis of climate model output with data provenance.

    NASA Astrophysics Data System (ADS)

    Radhakrishnan, A.; Balaji, V.; Schweitzer, R.; Nikonov, S.; O'Brien, K.; Vahlenkamp, H.; Burger, E. F.

    2016-12-01

    There are distinct phases in the development cycle of an Earth system model. During the model development phase, scientists make changes to code and parameters and require rapid access to results for evaluation. During the production phase, scientists may make an ensemble of runs with different settings, and produce large quantities of output that must be further analyzed and quality controlled for scientific papers and submission to international projects such as the Coupled Model Intercomparison Project (CMIP). During this phase, provenance is a key concern: being able to track back from outputs to inputs. We will discuss one of the paths taken at GFDL in delivering tools across this lifecycle, offering on-demand analysis of data by integrating the use of GFDL's in-house FRE-Curator, Unidata's THREDDS, and NOAA PMEL's Live Access Servers (LAS). Experience over this lifecycle suggests that a major difficulty in developing analysis capabilities lies only partly in the scientific content; much of the effort is devoted to answering the questions "where is the data?" and "how do I get to it?". FRE-Curator is the name of a database-centric paradigm used at NOAA GFDL to ingest information about model runs into an RDBMS (the Curator database). The components of FRE-Curator are integrated into the Flexible Runtime Environment workflow and can be invoked during climate model simulation. The front end to FRE-Curator, known as the Model Development Database Interface (MDBI), provides in-house web-based access to GFDL experiments: metadata, analysis output and more. In order to provide on-demand visualization, MDBI uses the Live Access Server, a highly configurable web server designed to provide flexible access to geo-referenced scientific data that makes use of OPeNDAP. Model output stored in GFDL's tape archive, the size of the database and experiments, and continuous model development initiatives with more dynamic configurations all add complexity and challenges to providing an on-demand visualization experience for GFDL users.

  1. On Ramps: Options and Issues in Accessing the Internet.

    ERIC Educational Resources Information Center

    Bocher, Bob

    1995-01-01

    Outlines the basic options that schools and libraries have for accessing the Internet, focusing on four models: direct connection; dial access using SLIP/PPP (Serial Line Internet Protocol/Point-to-Point Protocol); dial-up using terminal emulation mode; and dial access through commercial online services. Discusses access option issues such as…

  2. Validity of Assessments of Youth Access to Tobacco: The Familiarity Effect

    PubMed Central

    Landrine, Hope; Klonoff, Elizabeth A.

    2003-01-01

    Objectives. We examined the standard compliance protocol and its validity as a measure of youth access to tobacco. Methods. In Study 1, youth smokers reported buying cigarettes in stores where they are regular customers. In Study 2, youths attempted to purchase cigarettes by using the Standard Protocol, in which they appeared at stores once for cigarettes, and by using the Familiarity Protocol, in which they were rendered regular customers by purchasing nontobacco items 4 times and then requested cigarettes during their fifth visit. Results. Sales to youths aged 17 years in the Familiarity Protocol were significantly higher than sales to the same age group in the Standard Protocol (62.5% vs. 6%, respectively). Conclusions. The Standard Protocol does not match how youths obtain cigarettes. Access is low for stranger youths within compliance studies, but access is high for familiar youths outside of compliance studies. PMID:14600057

  3. A slotted access control protocol for metropolitan WDM ring networks

    NASA Astrophysics Data System (ADS)

    Baziana, P. A.; Pountourakis, I. E.

    2009-03-01

    In this study we focus on the serious scalability problems that many access protocols for WDM ring networks introduce due to the use of a dedicated wavelength per access node for either transmission or reception. We propose an efficient slotted MAC protocol suitable for WDM ring metropolitan area networks. The proposed network architecture employs a separate wavelength for control information exchange prior to the data packet transmission. Each access node is equipped with a pair of tunable transceivers for data communication and a pair of fixed-tuned transceivers for control information exchange. Each access node also includes a set of fixed delay lines for synchronization purposes, to hold the data packets while the control information is processed. An efficient access algorithm is applied to avoid both data-wavelength and receiver collisions. In our protocol, each access node is capable of transmitting and receiving over any of the data wavelengths, addressing the scalability issues. Two different slot reuse schemes are considered: the source and the destination stripping schemes. For both schemes, performance evaluation is provided via an analytic model. The analytical results are validated by a discrete-event simulation model that uses Poisson traffic sources. Simulation results show that the proposed protocol achieves efficient bandwidth utilization, especially under high load. Comparative simulation results also show that our protocol achieves significant performance improvement compared with other WDMA protocols which restrict transmission to a dedicated data wavelength. Finally, performance evaluation is explored for diverse buffer sizes and numbers of access nodes and data wavelengths.

  4. Spacelab system analysis: The modified free access protocol: An access protocol for communication systems with periodic and Poisson traffic

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Owens, John; Daniel, Steven

    1989-01-01

    The protocol definition and terminal hardware for the modified free access (MFA) protocol, a communications protocol similar to Ethernet, are developed. An MFA protocol simulator and a CSMA/CD math model are also developed. The protocol is tailored to communication systems where the total traffic may be divided into scheduled traffic and Poisson traffic. The scheduled traffic should occur on a periodic basis but may occur after a given event such as a request for data from a large number of stations. The Poisson traffic will include alarms and other random traffic. The purpose of the protocol is to guarantee that scheduled packets will be delivered without collision. This is required in many control and data collection systems. The protocol uses standard Ethernet hardware and software, requiring minimal modifications to an existing system. The modification to the protocol only affects the Ethernet transmission privileges and does not affect the Ethernet receiver.

  5. Scalable Lunar Surface Networks and Adaptive Orbit Access

    NASA Technical Reports Server (NTRS)

    Wang, Xudong

    2015-01-01

    Teranovi Technologies, Inc., has developed innovative network architecture, protocols, and algorithms for both lunar surface and orbit access networks. A key component of the overall architecture is a medium access control (MAC) protocol that includes a novel mechanism of overlaying time division multiple access (TDMA) and carrier sense multiple access with collision avoidance (CSMA/CA), ensuring scalable throughput and quality of service. The new MAC protocol is compatible with legacy Institute of Electrical and Electronics Engineers (IEEE) 802.11 networks. Advanced features include efficient power management, adaptive channel width adjustment, and error control capability. A hybrid routing protocol combines the advantages of ad hoc on-demand distance vector (AODV) routing and disruption/delay-tolerant network (DTN) routing. Performance is significantly better than AODV or DTN and will be particularly effective for wireless networks with intermittent links, such as lunar and planetary surface networks and orbit access networks.

  6. Interoperability in the Planetary Science Archive (PSA)

    NASA Astrophysics Data System (ADS)

    Rios Diaz, C.

    2017-09-01

    The protocols and standards currently supported by the recently released new version of the Planetary Science Archive are the Planetary Data Access Protocol (PDAP), the EuroPlanet Table Access Protocol (EPN-TAP) and Open Geospatial Consortium (OGC) standards. We explore these protocols in more detail, providing scientifically useful examples of their usage within the PSA.

  7. Visualization of ocean forecast in BYTHOS

    NASA Astrophysics Data System (ADS)

    Zhuk, E.; Zodiatis, G.; Nikolaidis, A.; Stylianou, S.; Karaolia, A.

    2016-08-01

    The Cyprus Oceanography Center has been constantly searching for new ideas for developing and implementing innovative methods concerning the use of information systems in oceanography, to suit both the Center's monitoring and forecasting products. Within this scope, two major online data management and visualization systems have been developed and utilized: CYCOFOS and BYTHOS. The Cyprus Coastal Ocean Forecasting and Observing System (CYCOFOS) provides a variety of operational predictions, such as ultra-high, high and medium resolution ocean forecasts in the Levantine Basin, offshore and coastal sea state forecasts in the Mediterranean and Black Sea, tide forecasting in the Mediterranean, ocean remote sensing in the Eastern Mediterranean, and coastal and offshore monitoring. As a rich internet application, BYTHOS enables scientists to search, visualize and download oceanographic data online and in real time. The most recent improvement of BYTHOS is its extension to access and visualize CYCOFOS data and to overlay forecast fields and observing data. The CYCOFOS data are stored on an OPeNDAP server in netCDF format; PHP and Python scripts were developed to search, process and visualize them. Data visualization is achieved through MapServer. The BYTHOS forecast access interface allows users to search for the required forecast field by type, parameter, region, level and time. It also provides the ability to overlay different forecast and observing data, which can be used for integrated analysis of sea basin conditions.
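
    As a rough illustration of this kind of server-side access to netCDF forecast fields over OPeNDAP, a minimal sketch follows; the URL and variable names are hypothetical, and the netCDF4 library must be built with DAP support for remote URLs to work.

    ```python
    # Hedged sketch: reading a remote forecast field over OPeNDAP.
    from netCDF4 import Dataset

    url = "http://example.org/opendap/cycofos/forecast.nc"  # placeholder URL
    ds = Dataset(url)                  # opens the remote dataset lazily
    temp = ds.variables["temperature"] # hypothetical variable name
    field = temp[0, 0, :, :]           # first time step, surface level only;
                                       # the subset is fetched server-side
    print(field.shape, getattr(temp, "units", "unknown units"))
    ds.close()
    ```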

  8. Tools, Services & Support of NASA Salinity Mission Data Archival Distribution through PO.DAAC

    NASA Astrophysics Data System (ADS)

    Tsontos, V. M.; Vazquez, J.

    2017-12-01

    The Physical Oceanography Distributed Active Archive Center (PO.DAAC) serves as the designated NASA repository and distribution node for all Aquarius/SAC-D and SMAP sea surface salinity (SSS) mission data products, in close collaboration with the projects. In addition to these official mission products, which by December 2017 will include the Aquarius V5.0 end-of-mission data, PO.DAAC archives and distributes high-value, principal-investigator-led satellite SSS products, as well as datasets from NASA's "Salinity Processes in the Upper Ocean Regional Study" (SPURS 1 & 2) field campaigns in the N. Atlantic salinity maximum and high-rainfall E. Tropical Pacific regions. Here we report on the status of these data holdings at PO.DAAC and the range of data services and access tools provided in support of NASA salinity. These include user support and data discovery services, OPeNDAP and THREDDS web services for subsetting/extraction, and visualization via LAS and SOTO. Emphasis is placed on newer capabilities, including PO.DAAC's consolidated web services (CWS) and the advanced L2 subsetting tool HiTIDE.
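
    A hedged sketch of the kind of OPeNDAP subsetting such services enable, using xarray; the URL, variable name, and coordinate ranges are placeholders rather than actual PO.DAAC endpoints.

    ```python
    # Hypothetical OPeNDAP subsetting of a gridded SSS product with xarray.
    import xarray as xr

    url = "https://example.org/opendap/smap_sss_8day.nc"  # placeholder URL
    ds = xr.open_dataset(url)  # metadata only; data values fetched on demand

    # Pull just a small box from the Atlantic salinity-maximum region
    # (assumes ascending lat/lon coordinates and a variable named "sss")
    box = ds["sss"].sel(lat=slice(20, 30), lon=slice(-45, -30))
    print(float(box.mean()))
    ```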

  9. Common Data Format (CDF) and Coordinated Data Analysis Web (CDAWeb)

    NASA Technical Reports Server (NTRS)

    Candey, Robert M.

    2010-01-01

    The Coordinated Data Analysis Web (CDAWeb) data browsing system provides plotting, listing and open access via FTP, HTTP, and web services (REST, SOAP, OPeNDAP) for data from most NASA Heliophysics missions and is heavily used by the community. Combining data from many instruments and missions enables broad research analysis and correlation and coordination with other experiments and missions. Crucial to its effectiveness is the use of a standard self-describing data format, in this case the Common Data Format (CDF), also developed at the Space Physics Data Facility, and the use of metadata standards (easily edited with SKTeditor). CDAWeb is based on a set of IDL routines, CDAWlib. The CDF project also maintains software and services for translating between many standard formats (CDF, netCDF, HDF, FITS, XML).

  10. Experiences with http/WebDAV protocols for data access in high throughput computing

    NASA Astrophysics Data System (ADS)

    Bernabeu, Gerard; Martinez, Francisco; Acción, Esther; Bria, Arnau; Caubet, Marc; Delfino, Manuel; Espinal, Xavier

    2011-12-01

    In the past, access to remote storage was considered to be at least one order of magnitude slower than local disk access. Improvements in network technologies have made remote disk a practical alternative: such accesses can today reach throughput levels similar to or exceeding those of local disks. Common choices of access protocol in the WLCG collaboration are RFIO, [GSI]DCAP, GRIDFTP, XROOTD and NFS. The HTTP protocol is a promising alternative, as it is simple and lightweight. It also enables the use of standard technologies such as HTTP caching or load balancing, which can be used to improve service resilience and scalability or to boost performance for some use cases seen in HEP, such as "hot files". WebDAV extensions allow writing data, giving HTTP enough functionality to work as a remote access protocol. This paper will show our experiences with the WebDAV door for dCache, in terms of functionality and performance, applied to some of the HEP workflows in the LHC Tier-1 at PIC.
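
    A minimal sketch of the read/write access WebDAV adds on top of plain HTTP, using Python's requests library against a hypothetical dCache WebDAV door; the endpoint, port, and credentials are invented, and production setups typically authenticate with X.509 certificates instead.

    ```python
    # Hedged sketch: WebDAV write plus plain HTTP read against remote storage.
    import requests

    base = "https://example.org:2880/pnfs/example/user"  # hypothetical WebDAV door
    auth = ("user", "secret")                            # placeholder credentials

    # WebDAV write: an HTTP PUT stores the object on the remote storage element
    requests.put(f"{base}/test.dat", data=b"payload", auth=auth)

    # Plain HTTP read works for any client, so standard caches and load
    # balancers can sit in front of the storage to help with "hot files"
    r = requests.get(f"{base}/test.dat", auth=auth)
    print(r.status_code, len(r.content))
    ```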

  11. Development of ITSASGIS-5D: seeking interoperability between Marine GIS layers and scientific multidimensional data using open source tools and OGC services for multidisciplinary research.

    NASA Astrophysics Data System (ADS)

    Sagarminaga, Y.; Galparsoro, I.; Reig, R.; Sánchez, J. A.

    2012-04-01

    Since 2000, an intense effort has been under way in AZTI's Marine Research Division to set up a data management system capable of gathering all the marine datasets produced by different in-house research projects. For that purpose, a corporate GIS was designed that included a data and metadata repository, a database, a layer catalog and search application, and an internet map viewer. Several layers, mostly dealing with physical, chemical and biological in situ sampling, and basic and thematic cartography including bathymetry, geomorphology, species habitat maps, and human pressure and activity maps, were successfully gathered in this system. It was soon realised that new marine technologies yielding continuous multidimensional data, sometimes called Fluid Earth System (FES) data, were difficult to handle in this structure. The affected data mainly included numerical oceanographic and meteorological models, remote sensing data, coastal radar data, and some in situ observational systems such as CTD casts and moored or Lagrangian buoys. A management system for gridded multidimensional data was developed using standardized formats (netCDF following the CF conventions) and tools such as the THREDDS catalog (UNIDATA/UCAR), providing web services such as OPeNDAP, NCSS and WCS, as well as the ncWMS service developed by the Reading e-Science Centre. At present, a system (ITSASGIS-5D) is being developed, based on OGC standards and open-source tools, to allow interoperability between all the data types mentioned above. On the server side, this system includes PostgreSQL/PostGIS databases and GeoServer for GIS layers, and THREDDS/OPeNDAP and ncWMS services for FES gridded data. Moreover, an online client is being developed to allow joint access, user configuration, data visualisation and query, and data distribution. The client uses the MapFish, ExtJS/GeoExt and OpenLayers libraries. This presentation will describe and demonstrate the elements of the first released version of the system, together with topics to be developed in future versions, which include, among others, the integration of GeoNetwork libraries and tools for both FES and GIS metadata management, and the use of the new OGC Sensor Observation Service (SOS) to integrate non-gridded multidimensional data such as time series, depth profiles or trajectories provided by different observational systems. The final aim of this approach is to contribute to the multidisciplinary access and use of marine data for management and research activities, and to facilitate the implementation of integrated, ecosystem-based approaches in the fields of fisheries advice and management, marine spatial planning, and the implementation of European policies such as the Water Framework Directive, the Marine Strategy Framework Directive and the Habitats Directive.
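
    To illustrate how a client like this might consume an ncWMS/GeoServer layer, a hedged OWSLib sketch follows; the service URL, layer name, and bounding box are placeholders, not the actual ITSASGIS-5D endpoints.

    ```python
    # Hypothetical WMS GetMap request via OWSLib.
    from owslib.wms import WebMapService

    wms = WebMapService("https://example.org/ncWMS/wms", version="1.1.1")
    img = wms.getmap(layers=["sea_water_temperature"],   # placeholder layer
                     srs="EPSG:4326",
                     bbox=(-4.0, 43.0, -1.0, 44.5),      # Bay of Biscay box
                     size=(512, 256),
                     format="image/png",
                     transparent=True)
    with open("sst.png", "wb") as f:
        f.write(img.read())  # rendered map tile, ready for a web viewer
    ```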

  12. Distributed reservation control protocols for random access broadcasting channels

    NASA Technical Reports Server (NTRS)

    Greene, E. P.; Ephremides, A.

    1981-01-01

    Attention is given to a communication network consisting of an arbitrary number of nodes which can communicate with each other via a time-division multiple access (TDMA) broadcast channel. The reported investigation is concerned with the development of efficient distributed multiple access protocols for traffic consisting primarily of single packet messages in a datagram mode of operation. The motivation for the design of the protocols came from the consideration of efficient multiple access utilization of moderate to high bandwidth (4-40 Mbit/s capacity) communication satellite channels used for the transmission of short (1000-10,000 bits) fixed length packets. Under these circumstances, the ratio of roundtrip propagation time to packet transmission time is between 100 to 10,000. It is shown how a TDMA channel can be adaptively shared by datagram traffic and constant bandwidth users such as in digital voice applications. The distributed reservation control protocols described are a hybrid between contention and reservation protocols.

  13. Unleashing Geophysics Data with Modern Formats and Services

    NASA Astrophysics Data System (ADS)

    Ip, Alex; Brodie, Ross C.; Druken, Kelsey; Bastrakova, Irina; Evans, Ben; Kemp, Carina; Richardson, Murray; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    Geoscience Australia (GA) is the national steward of large volumes of geophysical data extending over the entire Australasian region and spanning many decades. The volume and variety of data which must be managed, coupled with the increasing need to support machine-to-machine data access, mean that the old "click-and-ship" model delivering data as downloadable files for local analysis is rapidly becoming unviable - a "big data" problem not unique to geophysics. The Australian Government, through the Research Data Services (RDS) Project, recently funded the Australian National Computational Infrastructure (NCI) to organize a wide range of Earth systems data from diverse collections including geoscience, geophysics, environment, climate, weather, and water resources onto a single High Performance Data (HPD) node. This platform, which now contains over 10 petabytes of data, is called the National Environmental Research Data Interoperability Platform (NERDIP), and is designed to facilitate broad user access, maximise reuse, and enable integration. GA has contributed several hundred terabytes of geophysical data to the NERDIP. Historically, geophysical datasets have been stored in a range of formats, with metadata of varying quality and accessibility, and without standardised vocabularies. This has made it extremely difficult to aggregate original data from multiple surveys (particularly un-gridded geophysics point/line data) into standard formats suited to High Performance Computing (HPC) environments. To address this, it was decided to use the NERDIP-preferred Hierarchical Data Format (HDF) 5, which is a proven, standard, open, self-describing and high-performance format supported by extensive software tools, libraries and data services. The Network Common Data Form (NetCDF) 4 API facilitates the use of data in HDF5, whilst the NetCDF Climate and Forecast (CF) conventions further constrain NetCDF4/HDF5 data so as to provide greater inherent interoperability. The first geophysical data collection selected for transformation by GA was Airborne ElectroMagnetics (AEM) data, which was held in proprietary-format files with associated ISO 19115 metadata in a separate relational database. Existing NetCDF-CF metadata profiles were enhanced to cover AEM and other geophysical data types, and work is underway to formalise the new geophysics vocabulary as a proposed extension to the CF conventions. The richness and flexibility of HDF5's internal indexing mechanisms has allowed lossless restructuring of the AEM data for efficient storage, subsetting and access via either the NetCDF4/HDF5 APIs or Open-source Project for a Network Data Access Protocol (OPeNDAP) data services. This approach not only supports large-scale HPC processing, but also interactive access to a wide range of geophysical data in user-friendly environments such as iPython notebooks and more sophisticated cloud-enabled portals such as the Virtual Geophysics Laboratory (VGL). As multidimensional AEM datasets are relatively complex compared to other geophysical data types, the general approach employed in this project for modernising AEM data is likely to be applicable to other geophysics data types. When combined with the use of standards-based data services and APIs, a coordinated, systematic modernisation will result in vastly improved accessibility to, and usability of, geophysical data in a wide range of computational environments both within and beyond the geophysics community.
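
    A minimal h5py sketch of the kind of chunked, compressed HDF5 restructuring described above; the dataset layout, names, and sizes are invented for illustration and are not GA's actual schema.

    ```python
    # Illustrative sketch: storing un-gridded AEM line data in HDF5 with
    # chunking and compression, which enables efficient subsetting later.
    import numpy as np
    import h5py

    fiducials = np.arange(100_000)               # point index along flight lines
    conductivity = np.random.rand(100_000, 30)   # 30 depth layers (synthetic)

    with h5py.File("aem_survey.h5", "w") as f:
        f.attrs["title"] = "Hypothetical AEM survey"
        d = f.create_dataset("conductivity", data=conductivity,
                             chunks=(4096, 30), compression="gzip")
        d.attrs["units"] = "S/m"
        f.create_dataset("fiducial", data=fiducials)
    ```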

  14. A hybrid MAC protocol design for energy-efficient very-high-throughput millimeter wave, wireless sensor communication networks

    NASA Astrophysics Data System (ADS)

    Jian, Wei; Estevez, Claudio; Chowdhury, Arshad; Jia, Zhensheng; Wang, Jianxin; Yu, Jianguo; Chang, Gee-Kung

    2010-12-01

    This paper presents an energy-efficient medium access control (MAC) protocol for very-high-throughput millimeter-wave (mm-wave) wireless sensor communication networks (VHT-MSCNs) based on hybrid multiple-access techniques: frequency-division multiple access (FDMA) and time-division multiple access (TDMA). An energy-efficient superframe for wireless sensor communication networks employing directional mm-wave wireless access technologies is proposed for systems that require very high throughput, such as high-definition video signals, for sensing, processing, transmitting, and actuating functions. Energy consumption modeling for each network element and comparisons among various multiple-access technologies in terms of power and MAC-layer operations are investigated to evaluate the energy-efficiency improvement of the proposed MAC protocol.

  15. J-Plus Web Portal

    NASA Astrophysics Data System (ADS)

    Civera Lorenzo, Tamara

    2017-10-01

    Brief presentation about the J-PLUS EDR data access web portal (http://archive.cefca.es/catalogues/jplus-edr), where the different services available to retrieve images and catalogue data are presented. The J-PLUS Early Data Release (EDR) archive includes two types of data: images, and dual and single catalogue data which include parameters measured from the images. The J-PLUS web portal offers catalogue data and images through several online data access tools and services, each suited to a particular need:
    - Coverage map
    - Sky navigator
    - Object visualization
    - Image search
    - Cone search
    - Object list search
    - Virtual Observatory services: Simple Cone Search, Simple Image Access Protocol, Simple Spectral Access Protocol, Table Access Protocol
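
    As a hedged illustration of querying a Virtual Observatory Table Access Protocol service like the one listed above, using the pyvo library; the service URL and table name are placeholders, not the actual J-PLUS endpoints.

    ```python
    # Hypothetical TAP/ADQL query via pyvo.
    import pyvo

    tap = pyvo.dal.TAPService("https://example.org/tap")  # placeholder URL
    result = tap.search("SELECT TOP 10 ra, dec FROM hypothetical.catalogue")
    for row in result:
        print(row["ra"], row["dec"])
    ```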

  16. ESMPy and OpenClimateGIS: Python Interfaces for High Performance Grid Remapping and Geospatial Dataset Manipulation

    NASA Astrophysics Data System (ADS)

    O'Kuinghttons, Ryan; Koziol, Benjamin; Oehmke, Robert; DeLuca, Cecelia; Theurich, Gerhard; Li, Peggy; Jacob, Joseph

    2016-04-01

    The Earth System Modeling Framework (ESMF) Python interface (ESMPy) supports analysis and visualization in Earth system modeling codes by providing access to a variety of tools for data manipulation. ESMPy started as a Python interface to the ESMF grid remapping package, which provides mature and robust high-performance and scalable grid remapping between 2D and 3D logically rectangular and unstructured grids and sets of unconnected data. ESMPy now also interfaces with OpenClimateGIS (OCGIS), a package that performs subsetting, reformatting, and computational operations on climate datasets. ESMPy exposes a subset of ESMF grid remapping utilities. This includes bilinear, finite element patch recovery, first-order conservative, and nearest neighbor grid remapping methods. There are also options to ignore unmapped destination points, mask points on source and destination grids, and provide grid structure in the polar regions. Grid remapping on the sphere takes place in 3D Cartesian space, so the pole problem is not an issue as it can be with other grid remapping software. Remapping can be done between any combination of 2D and 3D logically rectangular and unstructured grids with overlapping domains. Grid pairs where one side of the regridding is represented by an appropriate set of unconnected data points, as is commonly found with observational data streams, are also supported. There is a developing interoperability layer between ESMPy and OpenClimateGIS (OCGIS). OCGIS is a pure Python, open source package designed for geospatial manipulation, subsetting, and computation on climate datasets stored in local NetCDF files or accessible remotely via the OPeNDAP protocol. Interfacing with OCGIS has brought GIS-like functionality to ESMPy (i.e. subsetting, coordinate transformations) as well as additional file output formats (i.e. CSV, ESRI Shapefile). ESMPy is distinguished by its strong emphasis on open source, community governance, and distributed development. The user base has grown quickly, and the package is integrating with several other software tools and frameworks. These include the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT), Iris, PyFerret, cf-python, and the Community Surface Dynamics Modeling System (CSDMS). ESMPy minimum requirements include Python 2.6, Numpy 1.6.1 and an ESMF installation. Optional dependencies include NetCDF and OCGIS-related dependencies: GDAL, Shapely, and Fiona. ESMPy is regression tested nightly, and supported on Darwin, Linux and Cray systems with the GNU compiler suite and MPI communications. OCGIS is supported on Linux, and also undergoes nightly regression testing. Both packages are installable from Anaconda channels. Upcoming development plans for ESMPy involve a higher-order conservative grid remapping method. Future OCGIS development will focus on mesh and location stream interoperability and streamlined access to ESMPy's MPI implementation.
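
    A hedged sketch of ESMPy bilinear regridding between two spherical grids, following the call pattern of older ESMPy releases of the kind described here (where the module is imported as ESMF; newer releases rename it esmpy). Grid sizes and the synthetic field are invented.

    ```python
    # Hedged sketch: bilinear regridding from a coarse to a fine global grid.
    import numpy as np
    import ESMF

    def make_grid(nx, ny):
        grid = ESMF.Grid(np.array([nx, ny]), staggerloc=ESMF.StaggerLoc.CENTER,
                         coord_sys=ESMF.CoordSys.SPH_DEG)
        lon, lat = grid.get_coords(0), grid.get_coords(1)
        lon[...] = np.linspace(-180, 180, nx)[:, None]  # fill cell-center coords
        lat[...] = np.linspace(-89, 89, ny)[None, :]
        return grid

    src = ESMF.Field(make_grid(72, 36), name="src")   # 5-degree source grid
    dst = ESMF.Field(make_grid(144, 72), name="dst")  # 2.5-degree destination

    src.data[...] = np.cos(np.deg2rad(src.grid.get_coords(1)))  # synthetic field

    regrid = ESMF.Regrid(src, dst, regrid_method=ESMF.RegridMethod.BILINEAR,
                         unmapped_action=ESMF.UnmappedAction.IGNORE)
    dst = regrid(src, dst)
    print(dst.data.mean())
    ```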

  17. Data Standardization for Carbon Cycle Modeling: Lessons Learned

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Liu, S.; Cook, R. B.; Post, W. M.; Huntzinger, D. N.; Schwalm, C.; Schaefer, K. M.; Jacobson, A. R.; Michalak, A. M.

    2012-12-01

    Terrestrial biogeochemistry modeling is a crucial component of carbon cycle research and provides unique capabilities to understand terrestrial ecosystems. The Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP) aims to identify key differences in model formulation that drive observed differences in model predictions of biospheric carbon exchange. To do so, the MsTMIP framework provides standardized prescribed environmental driver data and a standard model protocol to facilitate comparisons of modeling results from nearly 30 teams. Model performance is then evaluated against a variety of carbon-cycle related observations (remote sensing, atmospheric, and flux tower-based observations) using quantitative performance measures and metrics in an integrated evaluation framework. As part of this effort, we have harmonized highly diverse and heterogeneous environmental driver data, model outputs, and observational benchmark data sets to facilitate use and analysis by the MsTMIP team. In this presentation, we will describe the lessons learned from this data-intensive carbon cycle research. The data harmonization activity itself can be made more efficient with the consideration of proper tools, version control, workflow management, and collaboration within the whole team. The adoption of on-demand, interoperable protocols and services (e.g. OPeNDAP and Open Geospatial Consortium web services) makes data visualization and distribution more flexible: users can customize and download data for a specific spatial extent and temporal period, and at different resolutions. The effort to properly organize data in an open and standard format (e.g. Climate and Forecast (CF) compliant netCDF) allows the data to be analysed by a dispersed set of researchers more efficiently, and maximizes the longevity and utilization of the data. The lessons learned from this specific experience can benefit efforts by the broader community to leverage diverse data resources more efficiently in scientific research.
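
    As an example of the "open and standard format" point, here is a minimal sketch of writing CF-style netCDF with the netCDF4 library; the variable names, units, and values are illustrative only, not the MsTMIP protocol's actual conventions.

    ```python
    # Hedged sketch: a small CF-style netCDF file with coordinate variables.
    import numpy as np
    from netCDF4 import Dataset

    with Dataset("example_model_output.nc", "w") as ds:
        ds.Conventions = "CF-1.6"
        ds.createDimension("time", None)   # unlimited time dimension
        ds.createDimension("lat", 180)
        ds.createDimension("lon", 360)

        t = ds.createVariable("time", "f8", ("time",))
        t.units = "days since 2000-01-01 00:00:00"
        t.calendar = "standard"
        t[0] = 0.0

        lat = ds.createVariable("lat", "f4", ("lat",))
        lat.units, lat.standard_name = "degrees_north", "latitude"
        lat[:] = np.linspace(-89.5, 89.5, 180)

        lon = ds.createVariable("lon", "f4", ("lon",))
        lon.units, lon.standard_name = "degrees_east", "longitude"
        lon[:] = np.linspace(-179.5, 179.5, 360)

        gpp = ds.createVariable("gpp", "f4", ("time", "lat", "lon"))
        gpp.units = "kg m-2 s-1"
        gpp.long_name = "gross primary productivity of carbon"
        gpp[0, :, :] = np.zeros((180, 360), dtype="f4")  # placeholder field
    ```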

  18. Health care access for rural youth on equal terms? A mixed methods study protocol in northern Sweden.

    PubMed

    Goicolea, Isabel; Carson, Dean; San Sebastian, Miguel; Christianson, Monica; Wiklund, Maria; Hurtig, Anna-Karin

    2018-01-11

    The purpose of this paper is to propose a protocol for researching the impact of rural youth health service strategies on health care access. There has been no published comprehensive assessment of the effectiveness of youth health strategies in rural areas, and there is no clearly articulated model of how such assessments might be conducted. The protocol described here aims to gather information to: i) assess rural youth access to health care according to their needs; ii) identify and understand the strategies developed in rural areas to promote youth access to health care; and iii) propose actions for further improvement. The protocol is described with particular reference to research being undertaken in the four northernmost counties of Sweden, which contain a widely dispersed and diverse youth population. The protocol proposes qualitative and quantitative methodologies sequentially in four phases. First, to map youth access to health care according to their health care needs, including assessing horizontal equity (equal use of health care for equivalent health needs) and vertical equity (people with greater health needs should receive more health care than those with lesser needs). Second, a multiple case study design investigates strategies developed across the region (youth clinics, internet applications, public health programs) to improve youth access to health care. Third, qualitative comparative analysis of the 24 rural municipalities in the region identifies the best combination of conditions leading to high youth access to health care. Fourth, a concept mapping study involving rural stakeholders, care providers and youth provides recommended actions to improve rural youth access to health care. The implementation of this research protocol will contribute to 1) generating knowledge that could help strengthen rural youth access to health care, and 2) advancing the application of mixed methods to explore access to health care.

  19. Structural barriers in access to medical marijuana in the USA-a systematic review protocol.

    PubMed

    Valencia, Celina I; Asaolu, Ibitola O; Ehiri, John E; Rosales, Cecilia

    2017-08-07

    There are 43 state medical marijuana programs in the USA, yet limited evidence is available on the demographic characteristics of the patient population accessing these programs. Moreover, insights into the social and structural barriers that inform patients' success in accessing medical marijuana are limited. A current gap in the scientific literature exists regarding generalizable data on the social, cultural, and structural mechanisms that hinder access to medical marijuana among qualifying patients. The goal of this systematic review, therefore, is to identify the aforementioned mechanisms that inform disparities in access to medical marijuana in the USA. This scoping review protocol outlines the proposed study design for the systematic review and evaluation of peer-reviewed scientific literature on structural barriers to medical marijuana access. The protocol follows the guidelines set forth by the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) checklist. The overarching goal of this study is to rigorously evaluate the existing peer-reviewed data on access to medical marijuana in the USA. Income, ethnic background, stigma, and physician preferences have been posited as the primary structural barriers influencing medical marijuana patient population demographics in the USA. Identification of structural barriers to accessing medical marijuana provides a framework for future policies and programs. Evidence-based policies and programs for increasing medical marijuana access help minimize the disparity of access among qualifying patients.

  20. Advanced teleprocessing systems

    NASA Astrophysics Data System (ADS)

    Kleinrock, L.; Gerla, M.

    1982-09-01

    This Annual Technical Report covers the period from October 1, 1981 to September 30, 1982. This contract has three designated primary research areas: packet radio systems, resource sharing and allocation, and distributed processing and control. This report contains abstracts of publications which summarize research results in these areas, followed by the main body of the report, which is devoted to a study of the channel access protocols executed by the nodes of a network to schedule their transmissions on a multi-access broadcast channel. In particular, the main body consists of a Ph.D. dissertation, Channel Access Protocols for Multi-Hop Broadcast Packet Radio Networks. This work discusses some new channel access protocols useful for mobile radio networks. Included is an analysis of slotted ALOHA and some tight bounds on the performance of all possible protocols in a mobile environment.
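
    The slotted ALOHA analysis mentioned here rests on the classical textbook result that, with Poisson-distributed transmission attempts at offered load G per slot, throughput is S = G e^{-G}, peaking at 1/e when G = 1. A minimal sketch of that formula (standard material, not taken from the report itself):

```python
import math

def slotted_aloha_throughput(g: float) -> float:
    """Expected successful transmissions per slot for slotted ALOHA with
    offered load g (mean Poisson transmission attempts per slot)."""
    return g * math.exp(-g)

# Throughput peaks at g = 1, where S = 1/e ~ 0.368.
for g in (0.5, 1.0, 2.0):
    print(f"G={g:.1f}  S={slotted_aloha_throughput(g):.3f}")
```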

  1. 75 FR 69492 - Self-Regulatory Organizations; NASDAQ OMX BX, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-12

    ... month.[the following charges: $285/hour--For Active Connection testing using current Exchange access... using current Exchange access protocols; $333/hour--For Active Connection testing using current Exchange... a fee of $285 per hour for active connection testing using current BX access protocols during the...

  2. Data and Data Products for Climate Research: Web Services at the Asia-Pacific Data-Research Center (APDRC)

    NASA Astrophysics Data System (ADS)

    DeCarlo, S.; Potemra, J. T.; Wang, K.

    2012-12-01

    The International Pacific Research Center (IPRC) at the University of Hawaii maintains a data center for climate studies called the Asia-Pacific Data-Research Center (APDRC). This data center was designed within a center of excellence in climate research with the intention of serving the needs of the research scientist. The APDRC provides easy access to a large collection of climate data and data products for a wide variety of users. The data center maintains an archive of approximately 100 data sets, including in-situ and remote data as well as a range of model-based output. All data are available via on-line browsing tools such as a Live Access Server (LAS) and DChart, and direct binary access is available through OPeNDAP services. On-line tutorials on how to use these services are now available. Users can keep up to date with new data and product announcements via the APDRC facebook page. The main focus of the APDRC has been climate scientists, and the services are therefore streamlined to such users, both in the number and types of data served and in the way data are served. In addition, due to the integration of the APDRC within the IPRC, several value-added data products (for example, gridded fields derived from Argo profiling floats) have been developed via a variety of research activities. The APDRC, therefore, has three main foci: 1. acquisition of climate-related data, 2. maintenance of integrated data servers, and 3. development and distribution of data products. The APDRC can be found at http://apdrc.soest.hawaii.edu. The presentation will provide an overview along with specific examples of the data, data products and data services available at the APDRC.
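
    The direct binary access described here is what OPeNDAP clients consume. A minimal sketch of remote access from Python, assuming a netCDF4-python build with DAP support; the dataset path and variable name below are hypothetical, so browse the APDRC catalog for real URLs:

```python
# Remote access to an APDRC dataset via OPeNDAP (no file download needed).
from netCDF4 import Dataset

url = "http://apdrc.soest.hawaii.edu/dods/public_data/EXAMPLE_DATASET"  # hypothetical path
ds = Dataset(url)                    # opens the remote dataset over DAP

print(ds.variables.keys())           # inspect available variables
sst = ds.variables["sst"]            # hypothetical variable name
print(sst.shape, sst.dimensions)     # only metadata has crossed the network so far
ds.close()
```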

  3. A Brief Survey of Media Access Control, Data Link Layer, and Protocol Technologies for Lunar Surface Communications

    NASA Technical Reports Server (NTRS)

    Wallett, Thomas M.

    2009-01-01

    This paper surveys some of the existing media access control and data link layer technologies for possible application in lunar surface communications, together with the advanced wideband Direct Sequence Code Division Multiple Access (DSCDMA) conceptual systems utilizing phased-array technology that will evolve in the next decade. Time Division Multiple Access (TDMA) and Code Division Multiple Access (CDMA) are standard Media Access Control (MAC) techniques that can be incorporated into lunar surface communications architectures. Another novel hybrid technique, recently developed for use with smart antenna technology, combines the advantages of CDMA with those of TDMA. The relatively new and varied wireless LAN data link layer protocols that are continually under development offer distinct advantages for lunar surface applications over the legacy protocols, which are not wireless. Several communication transport and routing protocols can also be chosen with characteristics commensurate with smart antenna systems to provide spacecraft communications for high-capacity links on the surface of the Moon. The proper choices depend on the specific communication requirements.

  4. Research on a Queue Scheduling Algorithm in Wireless Communications Network

    NASA Astrophysics Data System (ADS)

    Yang, Wenchuan; Hu, Yuanmei; Zhou, Qiancai

    This paper proposes QS-CT, a queue scheduling mechanism based on multiple access in ad hoc networks, which adds queue scheduling to the RTS-CTS-DATA exchange of a multiple access protocol. By assigning different scheduling mechanisms to different queues, it makes channel access fairer and more effective and greatly enhances network performance. To evaluate the final performance of a network running QS-CT, we simulate it and compare it with MACA/C-T without QS-CT. The simulation results show that, in contrast to MACA/C-T, QS-CT greatly improves throughput, delay, packet loss rate and other key indicators.

  5. A New Cellular Architecture for Information Retrieval from Sensor Networks through Embedded Service and Security Protocols

    PubMed Central

    Shahzad, Aamir; Landry, René; Lee, Malrey; Xiong, Naixue; Lee, Jongho; Lee, Changhoon

    2016-01-01

    Substantial changes have occurred in the Information Technology (IT) sectors and, with these changes, the demand for remote access to field sensor information has increased. This allows visualization, monitoring, and control through various electronic devices, such as laptops, tablets, iPads, PCs, and cellular phones. The smart phone is considered a more reliable, faster and more efficient device for accessing and monitoring industrial systems and their corresponding information interfaces anywhere and anytime. This study describes the deployment of a protocol whereby industrial system information can be securely accessed by cellular phones via a Supervisory Control And Data Acquisition (SCADA) server. To achieve the study goals, proprietary protocol interconnectivity with non-proprietary protocols and the usage of interconnectivity services are considered in detail. They support the visualization of the SCADA system information and the related operations through smart phones. The intelligent sensors are configured and designated to process real information via cellular phones by employing information exchange services between the proprietary protocol and non-proprietary protocols. SCADA cellular access raises security concerns. To address these challenges, a cryptography-based security method is considered and deployed, and it can be considered part of the proprietary protocol. Subsequently, transmission flows from the smart phones through a cellular network. PMID:27314351
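
    The paper does not publish its cipher suite, so as an illustration of the kind of cryptography-based protection it describes for SCADA traffic over cellular links, here is a sketch using authenticated encryption (AES-GCM from the `cryptography` package); the key handling, payload, and associated-data label are all assumptions:

```python
# Illustrative authenticated encryption of a sensor reading before it
# crosses the cellular network. AES-GCM is our choice for illustration;
# the paper's actual security method is not specified in this abstract.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=256)   # assumed pre-shared between server and phone
aesgcm = AESGCM(key)

reading = b'{"sensor": "pump-7", "pressure_kPa": 412.5}'  # hypothetical payload
nonce = os.urandom(12)                                     # must be unique per message
ciphertext = aesgcm.encrypt(nonce, reading, b"scada-v1")   # encrypts and authenticates

# Receiver side: decryption raises an exception if the message was tampered with.
plaintext = aesgcm.decrypt(nonce, ciphertext, b"scada-v1")
assert plaintext == reading
```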

  6. A New Cellular Architecture for Information Retrieval from Sensor Networks through Embedded Service and Security Protocols.

    PubMed

    Shahzad, Aamir; Landry, René; Lee, Malrey; Xiong, Naixue; Lee, Jongho; Lee, Changhoon

    2016-06-14

    Substantial changes have occurred in the Information Technology (IT) sectors and, with these changes, the demand for remote access to field sensor information has increased. This allows visualization, monitoring, and control through various electronic devices, such as laptops, tablets, iPads, PCs, and cellular phones. The smart phone is considered a more reliable, faster and more efficient device for accessing and monitoring industrial systems and their corresponding information interfaces anywhere and anytime. This study describes the deployment of a protocol whereby industrial system information can be securely accessed by cellular phones via a Supervisory Control And Data Acquisition (SCADA) server. To achieve the study goals, proprietary protocol interconnectivity with non-proprietary protocols and the usage of interconnectivity services are considered in detail. They support the visualization of the SCADA system information and the related operations through smart phones. The intelligent sensors are configured and designated to process real information via cellular phones by employing information exchange services between the proprietary protocol and non-proprietary protocols. SCADA cellular access raises security concerns. To address these challenges, a cryptography-based security method is considered and deployed, and it can be considered part of the proprietary protocol. Subsequently, transmission flows from the smart phones through a cellular network.

  7. Development of a database system for near-future climate change projections under the Japanese National Project SI-CAT

    NASA Astrophysics Data System (ADS)

    Nakagawa, Y.; Kawahara, S.; Araki, F.; Matsuoka, D.; Ishikawa, Y.; Fujita, M.; Sugimoto, S.; Okada, Y.; Kawazoe, S.; Watanabe, S.; Ishii, M.; Mizuta, R.; Murata, A.; Kawase, H.

    2017-12-01

    Analyses of large ensemble data are quite useful for producing probabilistic projections of climate change effects. Ensemble data of "+2K future climate simulations" are currently produced by the Japanese national project "Social Implementation Program on Climate Change Adaptation Technology (SI-CAT)" as a part of the database for Policy Decision making for Future climate change (d4PDF; Mizuta et al. 2016) produced by the Program for Risk Information on Climate Change. Those data consist of global warming simulations and regional downscaling simulations. Considering that those data volumes are too large (a few petabytes) to download to a user's local computer, a user-friendly system is required to search and download the data that satisfy the users' requests. Under SI-CAT, we are developing "a database system for near-future climate change projections" that provides functions for finding the necessary data. The database system mainly consists of a relational database, a data download function and a user interface. The relational database, built on PostgreSQL, is the key component among them. Temporally and spatially compressed data are registered in the relational database. As a first step, we developed the relational database for precipitation, temperature and typhoon track data, according to requests by SI-CAT members. The data download function, using the Open-source Project for a Network Data Access Protocol (OPeNDAP), provides temporally and spatially extracted data based on search results obtained from the relational database. We also developed a web-based user interface for using the relational database and the data download function. A prototype of the database system is currently in operational testing on our local server. The database system will be released on the Data Integration and Analysis System Program (DIAS) in fiscal year 2017. The techniques behind the database system might also be quite useful for simulation and observational data in other research fields. We report the current status of development and some case studies of the database system.
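
    To make the search-then-download division of labour concrete, here is a sketch of the kind of query such a PostgreSQL database could answer before granules are handed to the OPeNDAP download function. The table name, columns, and connection details are hypothetical; the paper does not publish a schema:

```python
# Find ensemble members matching a search condition in the relational
# database, then pass the matching granule URLs to an OPeNDAP client.
import psycopg2

conn = psycopg2.connect(dbname="sicat", host="localhost", user="reader")  # hypothetical
cur = conn.cursor()

cur.execute(
    """
    SELECT member_id, granule_url
      FROM precipitation_summary            -- hypothetical table of compressed summaries
     WHERE region = %s AND season = %s AND mean_precip_mm > %s
    """,
    ("Kanto", "JJA", 300.0),
)
for member_id, granule_url in cur.fetchall():
    print(member_id, granule_url)           # each URL would be subset via OPeNDAP
conn.close()
```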

  8. Development of an Oceanographic Data Archiving and Service System for the Korean Researchers

    NASA Astrophysics Data System (ADS)

    Kim, Sung Dae; Park, Hyuk Min; Baek, Sang Ho

    2014-05-01

    The Oceanographic Data and Information Center of the Korea Institute of Ocean Science and Technology (KIOST) started to develop an oceanographic data archiving and service system in 2010 to support Korean ocean researchers by continuously providing quality-controlled data. Many physical oceanographic data available in the public domain, as well as Korean domestic data, are collected periodically, quality controlled, manipulated and provided to ocean modelers, who need ocean data continuously, and to marine biologists, who are less familiar with physical data but need it. The northern and southern limits of the spatial coverage are 20°N and 55°N, and the western and eastern limits are 110°E and 150°E, respectively. To archive TS (temperature and salinity) profile data, ARGO data were gathered from the ARGO GDACs (France and USA), and many historical TS profile data observed by CTD, OSD and BT were retrieved from World Ocean Database 2009. Quality control software for TS profile data, which meets the QC criteria suggested by the ARGO program and the GTSPP (Global Temperature-Salinity Profile Program), was programmed and applied to the collected data. By the end of 2013, the total number of vertical profiles from the ARGO GDACs was 59,642 and the total number of station data from WOD 2009 was 1,604,422. We also collect the global satellite SST data produced by NCDC and global SSH data from AVISO every day. An automatic program was coded to collect the satellite data, extract sub-sets covering the North West Pacific area and produce distribution maps. The total number of collected satellite data sets was 3,613 by the end of 2013. We use three different data services to provide the archived data to Korean experts. An FTP service allows data users to download data in the original format. We developed a TS database system using the Oracle RDBMS to contain all collected temperature-salinity data and to support SQL data retrieval with various conditions. The KIOST ocean data portal is used as the data retrieval service of the TS DB; it uses a GIS interface built with open-source GIS software. We also installed the Live Access Server developed by US PMEL to serve the satellite netCDF data files, which supports on-the-fly visualization, and an OPeNDAP (Open-source Project for a Network Data Access Protocol) service for remote connection and sub-setting of large data sets.
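
    A flavour of the ARGO/GTSPP-style quality control mentioned above is the broad range check applied to each profile level. The sketch below uses illustrative limits and ARGO-style flag values; the operational criteria are defined by the ARGO and GTSPP QC manuals, not by this example:

```python
# Illustrative broad range check for a temperature-salinity profile.
def broad_range_check(profile):
    """profile: list of (depth_m, temp_C, sal_psu) tuples.
    Returns one flag per level: 1 = good, 4 = bad (ARGO-style values)."""
    flags = []
    for depth, temp, sal in profile:
        ok = (0.0 <= depth <= 11000.0 and      # example global limits only
              -2.5 <= temp <= 40.0 and
              0.0 <= sal <= 41.0)
        flags.append(1 if ok else 4)
    return flags

profile = [(10.0, 18.2, 34.5), (50.0, 99.9, 34.6)]  # second level is clearly bad
print(broad_range_check(profile))                    # [1, 4]
```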

  9. An improved ATAC-seq protocol reduces background and enables interrogation of frozen tissues.

    PubMed

    Corces, M Ryan; Trevino, Alexandro E; Hamilton, Emily G; Greenside, Peyton G; Sinnott-Armstrong, Nicholas A; Vesuna, Sam; Satpathy, Ansuman T; Rubin, Adam J; Montine, Kathleen S; Wu, Beijing; Kathiria, Arwa; Cho, Seung Woo; Mumbach, Maxwell R; Carter, Ava C; Kasowski, Maya; Orloff, Lisa A; Risca, Viviana I; Kundaje, Anshul; Khavari, Paul A; Montine, Thomas J; Greenleaf, William J; Chang, Howard Y

    2017-10-01

    We present Omni-ATAC, an improved ATAC-seq protocol for chromatin accessibility profiling that works across multiple applications with substantial improvement of signal-to-background ratio and information content. The Omni-ATAC protocol generates chromatin accessibility profiles from archival frozen tissue samples and 50-μm sections, revealing the activities of disease-associated DNA elements in distinct human brain structures. The Omni-ATAC protocol enables the interrogation of personal regulomes in tissue context and translational studies.

  10. Abbreviated MRI Protocols: Wave of the Future for Breast Cancer Screening.

    PubMed

    Chhor, Chloe M; Mercado, Cecilia L

    2017-02-01

    The purpose of this article is to describe the use of abbreviated breast MRI protocols for improving access to screening for women at intermediate risk. Breast MRI is not a cost-effective modality for screening women at intermediate risk, including those with dense breast tissue as the only risk. Abbreviated breast MRI protocols have been proposed as a way of achieving efficiency and rapid throughput. Use of these abbreviated protocols may increase availability and provide women with greater access to breast MRI.

  11. Xrootd in dCache - design and experiences

    NASA Astrophysics Data System (ADS)

    Behrmann, Gerd; Ozerov, Dmitry; Zangerl, Thomas

    2011-12-01

    dCache is a well established distributed storage solution used in both high energy physics computing and other disciplines. An overview of the implementation of the xrootd data access protocol within dCache is presented. The performance of various access mechanisms is studied and compared, and it is concluded that our implementation is as performant as other protocols. This makes dCache a compelling alternative to the Scalla software suite implementation of xrootd, with added value from broad protocol support, including the IETF-approved NFS 4.1 protocol.

  12. Design of the frame structure for a multiservice interactive system using ATM-PON

    NASA Astrophysics Data System (ADS)

    Nam, Jae-Hyun; Jang, Jongwook; Lee, Jung-Tae

    1998-10-01

    The MAC (Medium Access Control) protocol controls the B-NT1s' (optical network units') access to the shared capacity on the PON; this protocol is particularly important when TDMA (Time Division Multiple Access) multiplexing is used on the upstream. To control the upstream traffic, some kind of access protocol has to be implemented. There are roughly two different approaches to using request cells: in a collision-free way, or such that collisions in a request slot are allowed. The objective of this paper is to describe a MAC-protocol structure that supports both approaches and hybrids of them. In our paper we guarantee the QoS (Quality of Service) of each B-NT1 through the LOC, LOV and LOA fields, which are the length fields of the cells transmitted at each B-NT1. Each B-NT1 transmits its request status in a request cell.

  13. Jade: using on-demand cloud analysis to give scientists back their flow

    NASA Astrophysics Data System (ADS)

    Robinson, N.; Tomlinson, J.; Hilson, A. J.; Arribas, A.; Powell, T.

    2017-12-01

    The UK's Met Office generates 400 TB of weather and climate data every day by running physical models on its Top-20 supercomputer. As data volumes explode, there is a danger that analysis workflows become dominated by watching progress bars rather than thinking about science. We have been researching how we can use distributed computing to allow analysts to process these large volumes of high-velocity data in a way that is easy, effective and cheap. Our prototype analysis stack, Jade, tries to encapsulate this. Functionality includes: an under-the-hood Dask engine, which parallelises and distributes computations without the need to retrain analysts; hybrid compute clusters (AWS, Alibaba, and local compute) comprising many thousands of cores; clusters which autoscale up and down in response to calculation load using Kubernetes, balancing the cluster across providers based on the current price of compute; and lazy data access from cloud storage via containerised OPeNDAP. This technology stack allows us to perform calculations many orders of magnitude faster than is possible on local workstations. It is also possible to outperform dedicated local compute clusters, as cloud compute can, in principle, scale much further. The use of ephemeral compute resources also makes this implementation cost efficient.
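
    A minimal sketch of the Dask idea described here: the analyst writes ordinary array code, Dask builds a lazy task graph, and a cluster evaluates it in parallel. This example uses a local cluster and synthetic data; in a Jade-like deployment the client would instead point at an autoscaling Kubernetes-backed scheduler:

```python
# Lazy, parallel analysis with Dask; nothing below is specific to Jade.
import dask.array as da
from dask.distributed import Client

client = Client()  # local scheduler; swap in a remote cluster address in production

# A large array is never materialised in full: it exists as chunked tasks.
field = da.random.random((40000, 50000), chunks=(2000, 2000))
zonal_mean = field.mean(axis=1)      # still lazy: just extends the task graph
result = zonal_mean[:5].compute()    # only the chunks needed for 5 rows are computed
print(result)
client.close()
```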

  14. GES DISC Datalist Enables Easy Data Selection For Natural Phenomena Studies

    NASA Technical Reports Server (NTRS)

    Li, Angela; Shie, Chung-Lin; Hegde, Mahabaleshwa; Petrenko, Maksym; Teng, William; Bryant, Keith; Liu, Zhong; Hearty, Thomas; Shen, Suhung; Seiler, Edward

    2017-01-01

    In order to investigate and assess natural hazards such as tropical storms, winter storms, volcanic eruptions, floods, and drought in a timely manner, the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has been developing an efficient data search and access service. Called "Datalist," this service enables users to acquire their data of interest "all at once," with minimum effort. A Datalist is a virtual collection of predefined or user-defined data variables from one or more archived data sets. Datalists are more than just data. Datalists effectively provide users with a sophisticated integrated data and services package, including metadata, citation, documentation, visualization, and data-specific services (e.g., subset and OPeNDAP), all available from one-stop shopping. The predefined Datalists, created by the experienced GES DISC science support team, should save a significant amount of time that users would otherwise have to spend. The Datalist service is an extension of the new GES DISC website, which is completely data-driven. A Datalist, also known as "data bundle," is treated just as any other data set. Being a virtual collection, a Datalist requires no extra storage space.

  15. Accessing Suomi NPP OMPS Products Through the GES DISC Online Data Services

    NASA Astrophysics Data System (ADS)

    Johnson, J. E.; Wei, J. C.; Garasimov, I.; Vollmer, B.

    2017-12-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is the primary archive of the latest versions of atmospheric composition data from the Suomi National Polar-orbiting Partnership (NPP) Ozone Mapping Profiler Suite (OMPS) mission. OMPS consists of three spectrometers: a Nadir Mapper (300-420 nm) with 50×50 km2 resolution and a 2600 km wide swath, a Nadir Profiler (250-310 nm) with a 250×250 km2 footprint, and a three-slit Limb Profiler (290-1000 nm) making 3 vertical profiles spaced about 250 km apart with 1-2 km vertical resolution up to 65 km altitude. OMPS measures primarily ozone, both total column and vertical profiles, but also provides measurements of NO2 and SO2 total and tropospheric columns and aerosol extinction profiles. Also available from OMPS are the Level-1B calibrated and geolocated radiances. All data products are generated at the OMPS Science Investigator Processing System (SIPS) at NASA/GSFC. This presentation will provide an overview of the OMPS products available at the GES DISC archive, as well as demonstrate the various data services provided by the GES DISC. Traditionally, users have accessed data by downloading files using anonymous FTP. Although one may still download the full OMPS data products from the archive (using HTTPS instead), the GES DISC now also offers online data services that spare users from having to download the full data files to their desktop computers. Users can access the data through a desktop client tool (such as IDL, Matlab or Panoply) using OPeNDAP. Other data services include file subsetters (spatial, temporal, and/or by variable), as well as data visualization and exploration services for users to preview or quickly analyze the data. Since TOMS and EOS Aura data products are also available from the GES DISC archive, these can easily be accessed and compared with the OMPS data.
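
    The practical benefit of the OPeNDAP route is server-side subsetting: slicing a remote variable sends a constraint expression, so only the requested subset crosses the network. A sketch with placeholder URL and variable name (real endpoints are listed in the GES DISC OPeNDAP catalog):

```python
# Server-side subsetting of a hypothetical OMPS granule via OPeNDAP.
from netCDF4 import Dataset

url = ("https://acdisc.gesdisc.eosdis.nasa.gov/opendap/"
       "EXAMPLE_OMPS_GRANULE.h5")                 # hypothetical granule path
ds = Dataset(url)

o3 = ds.variables["ColumnAmountO3"]               # hypothetical variable name
# The slice below is translated into an OPeNDAP constraint, so only this
# sub-array is transferred, not the whole file.
subset = o3[100:200, 0:36]
print(subset.mean())
ds.close()
```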

  16. Global Precipitation Measurement (GPM) Mission Products and Services at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC)

    NASA Technical Reports Server (NTRS)

    Liu, Z.; Ostrenga, D.; Vollmer, B.; Kempler, S.; Deshong, B.; Greene, M.

    2015-01-01

    The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observation System Data Information System (EOSDIS). The GES DISC is also home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 17 years, the GES DISC has served the scientific and other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently or soon to be available include: Level-1 GPM Microwave Imager (GMI), partner radiometer and DPR products; Level-2 Goddard Profiling Algorithm (GPROF) GMI, partner and DPR products; Level-3 daily and monthly products and DPR products; and Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final). A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm). Data services currently or soon to be available include the Google-like Mirador (http://mirador.gsfc.nasa.gov/) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data version control and provenance; documentation; science support for proper data usage, FAQ and help desk; and monitoring services (e.g., Current Conditions) for applications. The Unified User Interface (UUI) is the next step in the evolution of the GES DISC web site. It attempts to provide seamless access to data, information and services through a single interface, without sending the user to different applications or URLs (e.g., search, access, subset, Giovanni, documents).

  17. Web-accessible molecular modeling with Rosetta: The Rosetta Online Server that Includes Everyone (ROSIE).

    PubMed

    Moretti, Rocco; Lyskov, Sergey; Das, Rhiju; Meiler, Jens; Gray, Jeffrey J

    2018-01-01

    The Rosetta molecular modeling software package provides a large number of experimentally validated tools for modeling and designing proteins, nucleic acids, and other biopolymers, with new protocols being added continually. While Rosetta is freely available to academic users, external usage is limited by the need for expertise in the Unix command-line environment. To make Rosetta protocols available to a wider audience, we previously created a web server called Rosetta Online Server that Includes Everyone (ROSIE), which provides a common environment for hosting web-accessible Rosetta protocols. Here we describe a simplification of the ROSIE protocol specification format that permits easier implementation of Rosetta protocols. Whereas the previous format required creating multiple separate files in different locations, the new format allows specification of the protocol in a single file. This new, simplified protocol specification has more than doubled the number of Rosetta protocols available under ROSIE. These new applications include pKa determination, lipid accessibility calculation, ribonucleic acid redesign, protein-protein docking, protein-small molecule docking, symmetric docking, antibody docking, cyclic toxin docking, critical binding peptide determination, and mapping of small molecule binding sites. ROSIE is freely available to academic users at http://rosie.rosettacommons.org. © 2017 The Protein Society.

  18. Redactions in protocols for drug trials: what industry sponsors concealed.

    PubMed

    Marquardsen, Mikkel; Ogden, Michelle; Gøtzsche, Peter C

    2018-04-01

    Objective To describe the redactions in contemporary protocols for industry-sponsored randomised drug trials with patient-relevant outcomes and to evaluate whether there was a legitimate rationale for the redactions. Design Cohort study. Under the Freedom of Information Act, we requested access to trial protocols approved by a research ethics committee in Denmark from October 2012 to March 2013. We received 17 consecutive protocols, which had been redacted before we received them, and nine protocols without redactions. In five additional cases, the companies refused to let the committees give us access, and in three other cases, documents were missing. Participants Not applicable. Setting Not applicable. Main outcome measure Amount and nature of redactions in 22 predefined key protocol variables. Results The redactions were most widespread in those sections of the protocol where there is empirical evidence of substantial problems with the trustworthiness of published drug trials: data analysis, handling of missing data, detection and analysis of adverse events, definition of the outcomes, interim analyses and premature termination of the study, the sponsor's access to incoming data while the study is running, ownership of the data, and investigators' publication rights. The parts of the text that were redacted differed widely, both between companies and within the same company. Conclusions We could not identify any legitimate rationale for the redactions. The current mistrust in industry-sponsored drug trials can only change if the industry offers unconditional access to its trial protocols and other relevant documents and data.

  19. Efficient Automated Inventories and Aggregations for Satellite Data Using OPeNDAP and THREDDS

    NASA Astrophysics Data System (ADS)

    Gallagher, J.; Cornillon, P. C.; Potter, N.; Jones, M.

    2011-12-01

    Organizing online data presents a number of challenges, among which is keeping inventories current. It is preferable to have these descriptions built and maintained by automated systems, because many online data sets are dynamic, changing as new data are added or moved and as computer resources are reallocated within an organization. Automated systems can make periodic checks and update records accordingly, tracking these conditions and providing up-to-date inventories and aggregations. In addition, automated systems can enforce a high degree of uniformity across a number of remote sites, something that is hard to achieve with inventories written by people. While inventories for online data can be built using a brute-force algorithm that reads information from each granule in the data set, that approach ignores some important aspects of these data sets and discards some key opportunities for optimization. First, many data sets that consist of a large number of granules exhibit a high degree of similarity between granules, and second, the URLs that reference the individual granules typically contain metadata themselves. We present software that crawls servers for online data and builds inventories and aggregations automatically, using simple rules to organize the discrete URLs into logical groups that correspond to the data sets as a typical user would perceive them. Special attention is paid to recognizing patterns in the collections of URLs and using these patterns to limit reading from the data granules themselves. To date the software has crawled over 4 million URLs that reference online data from approximately 10 data servers and has built approximately 400 inventories. When compared to brute-force techniques, the combination of targeted direct reads from selected granules and analysis of the URLs yields improvements of several to many orders of magnitude, depending on the data set organization. We conclude the presentation with observations about the crawler and ways that the metadata sources it uses can be changed to improve its operation, including improved catalog organization at data sites and ways that the crawler can be bundled with data servers to improve efficiency. The crawler, written in Java, reads THREDDS catalogs and other metadata from OPeNDAP servers and is available from opendap.org as open-source software.
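
    The crawler itself is written in Java, but the first step it performs, walking a THREDDS catalog and collecting granule references, is easy to sketch. The toy version below uses the standard THREDDS InvCatalog namespace; the catalog URL is a placeholder and the pattern analysis the paper emphasises is omitted:

```python
# Toy THREDDS catalog crawl: yield dataset urlPath attributes, following
# nested catalogRef links to a bounded depth.
import requests
import xml.etree.ElementTree as ET

NS = {"t": "http://www.unidata.ucar.edu/namespaces/thredds/InvCatalog/v1.0",
      "x": "http://www.w3.org/1999/xlink"}

def crawl(catalog_url, depth=0, max_depth=2):
    root = ET.fromstring(requests.get(catalog_url, timeout=30).content)
    for ds in root.iter("{%s}dataset" % NS["t"]):
        if ds.get("urlPath"):                      # leaf granule reference
            yield ds.get("urlPath")
    if depth < max_depth:                          # recurse into sub-catalogs
        for ref in root.iter("{%s}catalogRef" % NS["t"]):
            href = ref.get("{%s}href" % NS["x"])
            if href:
                yield from crawl(requests.compat.urljoin(catalog_url, href),
                                 depth + 1, max_depth)

for url_path in crawl("http://example.org/thredds/catalog.xml"):  # placeholder URL
    print(url_path)
```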

  20. A universal data access and protocol integration mechanism for smart home

    NASA Astrophysics Data System (ADS)

    Shao, Pengfei; Yang, Qi; Zhang, Xuan

    2013-03-01

    With communication interfaces in home electronics that are either non-standardized or missing altogether, there is no perfect solution that addresses every aspect of smart homes based on existing protocols and technologies. In addition, a central control unit (CCU) that works point-to-point between the multiple application interfaces and the underlying hardware interfaces has a complicated architecture and poor performance. A flexible data access and protocol integration mechanism is therefore required. The current paper offers a universal, comprehensive data access and protocol integration mechanism for the smart home. The universal mechanism works as a middleware adapter with unified agreements on the communication interfaces and protocols, offering an abstraction of the application level from hardware specifics and decoupling the hardware interface modules from the application level. Further abstraction of the application interfaces and the underlying hardware interfaces is implemented in an adaptation layer, which provides unified interfaces for more flexible user applications and hardware protocol integration. This new universal mechanism fundamentally changes the architecture of the smart home and meets the practical requirements of smart homes in a more flexible and desirable way.
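
    The middleware-adapter idea maps naturally onto the classic adapter pattern: protocol-specific modules implement one uniform interface and the CCU dispatches through it instead of point-to-point. A schematic sketch in which all class, protocol, and device names are illustrative, not taken from the paper:

```python
# Adapter-style decoupling of application logic from hardware protocols.
from abc import ABC, abstractmethod

class HardwareAdapter(ABC):
    """Unified interface every protocol module must implement."""
    @abstractmethod
    def read(self, device_id: str) -> dict: ...
    @abstractmethod
    def write(self, device_id: str, command: dict) -> None: ...

class ZigBeeAdapter(HardwareAdapter):
    def read(self, device_id):              # real code would speak ZigBee here
        return {"device": device_id, "protocol": "zigbee", "state": "on"}
    def write(self, device_id, command):
        print(f"zigbee -> {device_id}: {command}")

class CCU:
    """Application code sees one interface, whatever the wire protocol."""
    def __init__(self):
        self._adapters: dict[str, HardwareAdapter] = {}
    def register(self, protocol: str, adapter: HardwareAdapter) -> None:
        self._adapters[protocol] = adapter
    def read(self, protocol: str, device_id: str) -> dict:
        return self._adapters[protocol].read(device_id)

ccu = CCU()
ccu.register("zigbee", ZigBeeAdapter())
print(ccu.read("zigbee", "lamp-1"))
```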

  1. Interoperable Access to Near Real Time Ocean Observations with the Observing System Monitoring Center

    NASA Astrophysics Data System (ADS)

    O'Brien, K.; Hankin, S.; Mendelssohn, R.; Simons, R.; Smith, B.; Kern, K. J.

    2013-12-01

    The Observing System Monitoring Center (OSMC), a project funded by the National Oceanic and Atmospheric Administration's Climate Observations Division (COD), exists to join the discrete 'networks' of in situ ocean observing platforms -- ships, surface floats, profiling floats, tide gauges, etc. -- into a single, integrated system. The OSMC is addressing this goal through capabilities in three areas focusing on the needs of specific user groups: 1) it provides real-time monitoring of the integrated observing system assets to assist management in optimizing the cost-effectiveness of the system for the assessment of climate variables; 2) it makes the stream of real-time data coming from the observing system available to scientific end users in an easy-to-use form; and 3) in the future, it will unify the delayed-mode data from platform-focused data assembly centers into a standards-based distributed system that is readily accessible to interested users from the science and education communities. In this presentation, we focus on the efforts of the OSMC to provide interoperable access to the near real time data stream that is available via the Global Telecommunications System (GTS). This is a very rich data source, and includes data from nearly all of the oceanographic platforms that are actively observing. We will discuss how the data are served out using a number of widely used web services (including OPeNDAP and SOS) and downloadable file formats (KML, csv, xls, netCDF), so that they can be accessed in web browsers and popular desktop analysis tools. We will also discuss our use of the Environmental Research Division's Data Access Program (ERDDAP), available from NOAA/NMFS, which has allowed us to achieve our goals of serving the near real time data. From an interoperability perspective, it is important to note that access to this stream of data is not just for humans, but also for machine-to-machine requests. We will also delve into how we configured access to the near real time ocean observations in accordance with the Climate and Forecast (CF) metadata conventions describing the various 'feature types' associated with particular in situ observation types, or discrete sampling geometries (DSG). Wrapping up, we will discuss some of the ways this data source is already being used.
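
    Machine-to-machine access through ERDDAP follows a RESTful pattern: the query is encoded in the URL and the response comes back in the requested format. A sketch of a tabledap CSV request read straight into pandas; the server, dataset ID, and variable names are placeholders:

```python
# RESTful ERDDAP tabledap request: variables and constraints in the URL,
# CSV back. In ERDDAP CSV output the second row carries units, hence skiprows.
import pandas as pd

base = "https://example.noaa.gov/erddap/tabledap/osmc_realtime.csv"  # placeholder
query = ("?platform_code,time,latitude,longitude,sst"
         "&time>=2013-09-01&time<=2013-09-02")
df = pd.read_csv(base + query, skiprows=[1])   # drop the units row
print(df.head())
```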

  2. Pace: Privacy-Protection for Access Control Enforcement in P2P Networks

    NASA Astrophysics Data System (ADS)

    Sánchez-Artigas, Marc; García-López, Pedro

    In open environments such as peer-to-peer (P2P) systems, the decision to collaborate with multiple users (e.g., by granting access to a resource) is hard to make in practice due to extreme decentralization and the lack of trusted third parties. The literature contains a plethora of applications in which a scalable solution for distributed access control is crucial. This fact motivates us to propose a protocol to enforce access control, applicable to networks consisting entirely of untrusted nodes. The main feature of our protocol is that it protects both sensitive permissions and sensitive policies, and does not rely on any centralized authority. We analyze the efficiency (computational effort and communication overhead) as well as the security of our protocol.

  3. Research of Ad Hoc Networks Access Algorithm

    NASA Astrophysics Data System (ADS)

    Xiang, Ma

    With the continuous development of mobile communication technology, the ad hoc access network has become a hot research topic. Ad hoc access network nodes can be used to expand the capacity and multi-hop communication range of a mobile communication system, even for businesses adjacent to the community, and to improve edge data rates. When the ad hoc network serves as an access network to the Internet, the gateway discovery protocol is very important for choosing the most appropriate gateway to guarantee connectivity between the ad hoc network and IP-based fixed networks. The paper proposes a QoS gateway discovery protocol which uses time delay and route stability as the gateway selection conditions. Based on this gateway discovery protocol, it also proposes a fast handover scheme which decreases the handover time and improves handover efficiency.

  4. Optimizing Libraries’ Content Findability Using Simple Object Access Protocol (SOAP) With Multi-Tier Architecture

    NASA Astrophysics Data System (ADS)

    Lahinta, A.; Haris, I.; Abdillah, T.

    2017-03-01

    The aim of this paper is to describe a developed application of the Simple Object Access Protocol (SOAP) as a model for improving the findability of libraries' digital content on the library web. The study applies XML text-based protocol tools to collect data about libraries' visibility performance in book search results. Models from the integrated Web Services Description Language (WSDL) and Universal Description, Discovery and Integration (UDDI) are applied to analyse SOAP as an element within the system. The results showed that the developed application of SOAP with a multi-tier architecture can help people easily access the library server website of Gorontalo Province, and supports access to digital collections, subscription databases, and library catalogs in each regency or city library in Gorontalo Province.
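
    For readers unfamiliar with SOAP's wire format, a hand-rolled SOAP 1.1 call looks like the sketch below: an XML envelope POSTed over HTTP with a SOAPAction header. The endpoint, namespace, and operation names are all hypothetical, invented to stand in for the kind of catalog-search service the paper describes:

```python
# Minimal SOAP 1.1 request without a client library; everything below the
# imports is illustrative, not the paper's actual service.
import requests

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <SearchCatalog xmlns="http://example.gorontalo.go.id/library">
      <Title>data mining</Title>
      <MaxResults>10</MaxResults>
    </SearchCatalog>
  </soap:Body>
</soap:Envelope>"""

resp = requests.post(
    "http://example.gorontalo.go.id/library/service.asmx",   # hypothetical endpoint
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.gorontalo.go.id/library/SearchCatalog"},
    timeout=30,
)
print(resp.status_code, resp.text[:200])
```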

  5. Changing knowledge perspective in a changing world: The Adriatic multidisciplinary TDS approach

    NASA Astrophysics Data System (ADS)

    Bergamasco, Andrea; Carniel, Sandro; Nativi, Stefano; Signell, Richard P.; Benetazzo, Alvise; Falcieri, Francesco M.; Bonaldo, Davide; Minuzzo, Tiziano; Sclavo, Mauro

    2013-04-01

    The use and exploitation of the marine environment has increased markedly in recent years, calling for better description, monitoring and understanding of its behavior. However, marine scientists and managers often spend too much time accessing and reformatting data instead of focusing on discovering new knowledge from the processes observed and the data acquired. There is therefore a need to make our approach to data mining more efficient, especially in a world where rapid climate change imposes quick choices. In this context, it is mandatory to explore ways of making large amounts of distributed data usable in an efficient and easy way, an effort that requires standardized data protocols, web services and standards-based tools. Following the US-IOOS approach, which has been adopted in many oceanographic and meteorological sectors, we present a CNR experience in the direction of setting up a national Italian IOOS framework (at the moment confined to the Adriatic Sea environment), using the THREDDS (THematic Real-time Environmental Distributed Data Services) Data Server (TDS). The TDS is middleware designed to fill the gap between data providers and data users; it provides services allowing data users to find the data sets pertaining to their scientific needs and to access, visualize and use them in an easy way, without the need to download files to the local workspace. In order to achieve this result, the data providers must make their data available in a standard form that the TDS understands, with sufficient metadata so that the data can be read and searched for in a standard way. The TDS core is a NetCDF-Java library implementing a Common Data Model (CDM), as developed by Unidata (http://www.unidata.ucar.edu), allowing access to "array-based" scientific data. Climate and Forecast (CF) compliant NetCDF files can be read directly with no modification, while non-compliant files can be modified to meet the appropriate metadata requirements. Once standardized in the CDM, the TDS makes datasets available through a series of web services such as OPeNDAP or the Open Geospatial Consortium Web Coverage Service (WCS), allowing data users to easily obtain small subsets from large datasets and to quickly visualize their content using tools such as GODIVA2 or the Integrated Data Viewer (IDV). In addition, an ISO metadata service is available through the TDS that can be harvested by catalogue broker services (e.g. GI-cat) to enable distributed search across federated data servers. Examples of TDS datasets of evolving oceanographic fields (currents, waves, sediments, ...) will be described and discussed; some examples can be accessed directly at the Venice site http://tds.ve.ismar.cnr.it:8080/thredds/catalog.html (Bergamasco et al., 2012), also within the framework of the RITMARE Project. References Bergamasco A., Benetazzo A., Carniel S., Falcieri F., Minuzzo T., Signell R.P. and M. Sclavo, 2012. From interoperability to knowledge discovery using large model datasets in the marine environment: the THREDDS Data Server example. Advances in Oceanography and Limnology, 3(1), 41-50. DOI:10.1080/19475721.2012.669637

  6. Browsing for the Best Internet Access Provider?

    ERIC Educational Resources Information Center

    Weil, Marty

    1996-01-01

    Highlights points to consider when choosing an Internet Service Provider. Serial Line Internet Protocol (SLIP) and Point to Point Protocol (PPP) are compared regarding price, performance, bandwidth, speed, and technical support. Obtaining access via local, national, consumer online, and telephone-company providers is discussed. A pricing chart and…

  7. Direct data access protocols benchmarking on DPM

    NASA Astrophysics Data System (ADS)

    Furano, Fabrizio; Devresse, Adrien; Keeble, Oliver; Mancinelli, Valentina

    2015-12-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, recent releases of DPM can report monitoring information about any data access protocol to the same monitoring infrastructure that is used to monitor the Xrootd deployments. Our goal is to evaluate under which circumstances the HTTP-based protocols can be good enough for batch or interactive data access. In this contribution we show and discuss the results that our test systems have collected under circumstances that include ROOT analyses using TTreeCache and stress tests on the metadata performance.

  8. Performance comparison of token ring protocols for hard-real-time communication

    NASA Technical Reports Server (NTRS)

    Kamat, Sanjay; Zhao, Wei

    1992-01-01

    The ability to guarantee the deadlines of synchronous messages while maintaining a good aggregate throughput is an important consideration in the design of distributed real-time systems. In this paper, we study two token ring protocols, the priority-driven protocol and the timed token protocol, for their suitability for hard real-time systems. Both protocols use a token to control access to the transmission medium. In a priority-driven protocol, messages are assigned priorities and the protocol ensures that messages are transmitted in the order of their priorities. Timed token protocols do not provide for priority arbitration but ensure that the maximum access delay for a station is bounded. For both protocols, we first derive the schedulability conditions under which the transmission deadlines of a given set of synchronous messages can be guaranteed. Subsequently, we use these schedulability conditions to quantitatively compare the average-case behavior of the protocols. This comparison demonstrates that each of the protocols has its domain of superior performance and neither dominates the other over the entire range of operating conditions.
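
    To give the flavour of a timed token schedulability test, the sketch below checks two constraints commonly cited in the FDDI-style timed token literature: the target token rotation time (TTRT) should not exceed half the smallest message deadline, and the per-rotation synchronous allocations plus protocol overhead must fit within the TTRT. The paper's exact derived conditions differ in detail; this is an illustration of the shape of such tests, not the paper's result:

```python
# Illustrative timed token feasibility check (assumed, textbook-style conditions).
def timed_token_feasible(alloc, deadlines, ttrt, overhead):
    """alloc: per-station synchronous bandwidth allocations (s per rotation);
    deadlines: message-stream deadlines (s); ttrt, overhead in seconds."""
    protocol_constraint = ttrt <= min(deadlines) / 2.0   # bounded access delay
    bandwidth_constraint = sum(alloc) + overhead <= ttrt  # allocations fit a rotation
    return protocol_constraint and bandwidth_constraint

print(timed_token_feasible(alloc=[0.002, 0.003, 0.001],
                           deadlines=[0.02, 0.05],
                           ttrt=0.008, overhead=0.001))   # True for these numbers
```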

  9. 76 FR 24862 - Proposed Information Collection; Comment Request; Protocol for Access to Tissue Specimen Samples...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-03

    ... Collection; Comment Request; Protocol for Access to Tissue Specimen Samples From the National Marine Mammal Tissue Bank AGENCY: National Oceanic and Atmospheric Administration (NOAA), Commerce. ACTION: Notice... National Marine Mammal Tissue Bank (NMMTB) was established by the National Marine Fisheries Service (NMFS...

  10. Securing TCP/IP and Dial-up Access to Administrative Data.

    ERIC Educational Resources Information Center

    Conrad, L. Dean

    1992-01-01

    This article describes Arizona State University's solution to the security risk inherent in general-access systems such as TCP/IP (Transmission Control Protocol/Internet Protocol). Advantages and disadvantages of various options are compared, and the process of selecting a log-on authentication approach involving generation of a different password at…

  11. Comparison between publicly accessible publications, registries, and protocols of phase III trials indicated persistence of selective outcome reporting.

    PubMed

    Zhang, Sheng; Liang, Fei; Li, Wenfeng

    2017-11-01

    The decision by leading journals to make protocols of phase III randomized controlled trials (RCTs) publicly accessible was a landmark event in clinical trial reporting. Here, we compared primary outcomes defined in protocols with those in publications describing the trials and in trial registration. We identified phase III RCTs published between January 1, 2012, and June 30, 2015, in The New England Journal of Medicine, The Lancet, The Journal of the American Medical Association, and The BMJ with available protocols. Consistency in primary outcomes between protocols and registries (articles) was evaluated. We identified 299 phase III RCTs with available protocols in this analysis. Of these, 25 trials (8.4%) had some discrepancy in primary outcomes between publications and protocols. Types of discrepancies included a protocol-defined primary outcome reported as a nonprimary outcome in the publication (11 trials, 3.7%), a protocol-defined primary outcome omitted in the publication (10 trials, 3.3%), a new primary outcome introduced in the publication (8 trials, 2.7%), a protocol-defined nonprimary outcome reported as a primary outcome in the publication (4 trials, 1.3%), and different timing of assessment of the primary outcome (4 trials, 1.3%). Of the trials with discrepancies in primary outcomes, 15 trials (60.0%) had discrepancies that favored statistically significant results. Registration could be seen as a valid surrogate for the protocol in 237 of 299 trials (79.3%) with regard to the primary outcome. Despite unrestricted public access to protocols, selective outcome reporting persists in a small fraction of phase III RCTs. Only studies from four leading journals were included, which may cause selection bias and limit the generalizability of this finding. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Channel Access in Erlang

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicklaus, Dennis J.

    2013-10-13

    We have developed an Erlang language implementation of the Channel Access protocol. Included are low-level functions for encoding and decoding Channel Access protocol network packets as well as higher level functions for monitoring or setting EPICS process variables. This provides access to EPICS process variables for the Fermilab Acnet control system via our Erlang-based front-end architecture without having to interface to C/C++ programs and libraries. Erlang is a functional programming language originally developed for real-time telecommunications applications. Its network programming features and list management functions make it particularly well-suited for the task of managing multiple Channel Access circuits and PV monitors.
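
    The low-level encoding and decoding mentioned here works on the standard Channel Access message header, which is 16 bytes of big-endian fields: command, payload size, data type, data count, and two 32-bit parameters. A Python rather than Erlang sketch of decoding it, with illustrative field values; consult the EPICS CA protocol documentation before relying on the exact layout:

```python
# Decode a Channel Access message header (assumed 16-byte big-endian layout).
import struct

CA_HEADER = struct.Struct(">HHHHII")   # command, payload size, dtype, dcount, p1, p2

def decode_header(packet: bytes) -> dict:
    cmd, payload_size, dtype, dcount, p1, p2 = CA_HEADER.unpack(packet[:16])
    return {"command": cmd, "payload_size": payload_size,
            "data_type": dtype, "data_count": dcount,
            "param1": p1, "param2": p2}

# Illustrative header bytes only, not a captured packet:
pkt = CA_HEADER.pack(0, 0, 1, 13, 0, 0)
print(decode_header(pkt))
```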

  13. Trade Study: Storing NASA HDF5/netCDF-4 Data in the Amazon Cloud and Retrieving Data via Hyrax Server / THREDDS Data Server

    NASA Technical Reports Server (NTRS)

    Habermann, Ted; Jelenak, Aleksander; Lee, Joe; Yang, Kent; Gallagher, James; Potter, Nathan

    2017-01-01

    As part of the overall effort to understand the implications of migrating ESDIS data and services to the cloud, we are testing several common OPeNDAP and HDF use cases against three architectures for general performance and cost characteristics. The architectures include retrieving entire files, retrieving datasets using HTTP range GETs, and retrieving elements of datasets (chunks) with HTTP range GETs. We will describe these architectures and discuss our approach to estimating cost.
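
    Two of the three architectures rest on HTTP range GETs, where the client asks the object store for a byte interval instead of the whole file. A minimal sketch; the object URL is a placeholder:

```python
# Fetch one byte range (e.g. a single HDF5 chunk) instead of the whole granule.
import requests

url = "https://example-bucket.s3.amazonaws.com/granule.h5"   # placeholder object
resp = requests.get(url, headers={"Range": "bytes=4096-8191"}, timeout=30)

# 206 Partial Content means the server honoured the range request;
# resp.content now holds only the requested 4 KiB slice.
print(resp.status_code, len(resp.content))
```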

  14. The Intersystem - Internetworking for space systems

    NASA Astrophysics Data System (ADS)

    Landauer, C.

    This paper is a description of the Intersystem, which is a mechanism for internetworking among existing and planned military satellite communication systems. The communication systems interconnected with this mechanism are called member systems, and the interconnected set of communication systems is called the Intersystem. The Intersystem is implemented with higher layer protocols that impose a common organization on the different signaling conventions, so that end users of different systems can communicate with each other. The Intersystem coordinates member system access and resource requests with Intersystem Resource Controllers (IRCs), which are processors that implement the Intersystem protocols and have interfaces to the member systems' own access and resource control mechanisms. The IRCs are connected to each other to form the IRC Subnetwork. Terminals request services from the IRC Subnetwork using the Intersystem Access Control Protocols, and the IRC Subnetwork's responses to the requests are coordinated using the RCRC (Resource Controller to Resource Controller) Protocols.

  15. On the designing of a tamper resistant prescription RFID access control system.

    PubMed

    Safkhani, Masoumeh; Bagheri, Nasour; Naderi, Majid

    2012-12-01

    Recently, Chen et al. proposed a novel tamper-resistant prescription RFID access control system, published in the Journal of Medical Systems. In this paper we consider the security of the proposed protocol and identify some existing weaknesses. The main attack is a reader impersonation attack which allows an active adversary to impersonate a legitimate doctor, e.g. the patient's doctor, to access the patient's tag and change the patient's prescription. The presented attack is quite efficient: to impersonate a doctor, the adversary only needs to eavesdrop on one session between the doctor and the patient's tag, after which she can impersonate the doctor with a success probability of 1. In addition, we present efficient impersonation of the reader and tag to the back-end database, as well as de-synchronization and traceability attacks against the protocol. Finally, we propose an improved version of the protocol which is more efficient than the original while providing the desired security against the presented attacks.

  16. A Fair Contention Access Scheme for Low-Priority Traffic in Wireless Body Area Networks

    PubMed Central

    Sajeel, Muhammad; Bashir, Faisal; Asfand-e-yar, Muhammad; Tauqir, Muhammad

    2017-01-01

    Recently, wireless body area networks (WBANs) have attracted significant consideration in ubiquitous healthcare. A number of medium access control (MAC) protocols, primarily derived from the superframe structure of IEEE 802.15.4, have been proposed in the literature. These MAC protocols aim to provide quality of service (QoS) by prioritizing different traffic types in WBANs. A contention access period (CAP) with high contention in priority-based MAC protocols can result in a higher number of collisions and retransmissions. During the CAP, traffic classes with higher priority dominate low-priority traffic; this leads to starvation of low-priority traffic, adversely affecting WBAN throughput, delay, and energy consumption. Hence, this paper proposes a traffic-adaptive priority-based superframe structure that is able to reduce contention in the CAP and provides a fair chance for low-priority traffic. Simulation results in ns-3 demonstrate that the proposed MAC protocol, called traffic-adaptive priority-based MAC (TAP-MAC), achieves low energy consumption, high throughput, and low latency compared to the IEEE 802.15.4 standard and the most recent priority-based MAC protocol, called PA-MAC. PMID:28832495

  17. An Adaptive OFDMA-Based MAC Protocol for Underwater Acoustic Wireless Sensor Networks

    PubMed Central

    Khalil, Issa M.; Gadallah, Yasser; Hayajneh, Mohammad; Khreishah, Abdallah

    2012-01-01

    Underwater acoustic wireless sensor networks (UAWSNs) have many applications across various civilian and military domains. However, they suffer from the limited available bandwidth of acoustic signals and harsh underwater conditions. In this work, we present an Orthogonal Frequency Division Multiple Access (OFDMA)-based Media Access Control (MAC) protocol that is configurable to suit the operating requirements of the underwater sensor network. The protocol has three modes of operation, namely random, equal opportunity and energy-conscious modes of operation. Our MAC design approach exploits the multi-path characteristics of a fading acoustic channel to convert it into parallel independent acoustic sub-channels that undergo flat fading. Communication between node pairs within the network is done using subsets of these sub-channels, depending on the configurations of the active mode of operation. Thus, the available limited bandwidth gets fully utilized while completely avoiding interference. We derive the mathematical model for optimal power loading and subcarrier selection, which is used as basis for all modes of operation of the protocol. We also conduct many simulation experiments to evaluate and compare our protocol with other Code Division Multiple Access (CDMA)-based MAC protocols. PMID:23012517
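
    The abstract mentions a mathematical model for optimal power loading across the flat-fading sub-channels. A standard way to realise such loading is water-filling; the sketch below implements the textbook water-filling rule p_i = max(0, mu - 1/g_i) by bisection on the water level mu, as an illustration of the idea rather than the authors' exact model:

```python
# Textbook water-filling power allocation over parallel sub-channels.
import numpy as np

def water_filling(gains, total_power):
    """gains: linear sub-channel SNR gains g_i; returns powers p_i with
    p_i = max(0, mu - 1/g_i) such that sum(p_i) = total_power."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + (1.0 / g).max()   # bracket the water level
    for _ in range(100):                           # bisection on mu
        mu = (lo + hi) / 2.0
        p = np.maximum(0.0, mu - 1.0 / g)
        lo, hi = (mu, hi) if p.sum() < total_power else (lo, mu)
    return p

p = water_filling([2.0, 1.0, 0.25], total_power=1.0)
print(p, p.sum())   # stronger sub-channels receive more power; weak ones may get none
```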

  18. An adaptive OFDMA-based MAC protocol for underwater acoustic wireless sensor networks.

    PubMed

    Khalil, Issa M; Gadallah, Yasser; Hayajneh, Mohammad; Khreishah, Abdallah

    2012-01-01

    Underwater acoustic wireless sensor networks (UAWSNs) have many applications across various civilian and military domains. However, they suffer from the limited available bandwidth of acoustic signals and harsh underwater conditions. In this work, we present an Orthogonal Frequency Division Multiple Access (OFDMA)-based Media Access Control (MAC) protocol that is configurable to suit the operating requirements of the underwater sensor network. The protocol has three modes of operation, namely random, equal opportunity and energy-conscious modes of operation. Our MAC design approach exploits the multi-path characteristics of a fading acoustic channel to convert it into parallel independent acoustic sub-channels that undergo flat fading. Communication between node pairs within the network is done using subsets of these sub-channels, depending on the configurations of the active mode of operation. Thus, the available limited bandwidth gets fully utilized while completely avoiding interference. We derive the mathematical model for optimal power loading and subcarrier selection, which is used as basis for all modes of operation of the protocol. We also conduct many simulation experiments to evaluate and compare our protocol with other Code Division Multiple Access (CDMA)-based MAC protocols.

  19. Channel MAC Protocol for Opportunistic Communication in Ad Hoc Wireless Networks

    NASA Astrophysics Data System (ADS)

    Ashraf, Manzur; Jayasuriya, Aruna; Perreau, Sylvie

    2008-12-01

    Despite significant research effort, the performance of distributed medium access control methods has failed to meet theoretical expectations. This paper proposes a protocol named "Channel MAC" that performs fully distributed medium access control based on opportunistic communication principles. In this protocol, a node accesses the channel when the channel quality rises beyond a threshold while its neighbouring nodes are silent. Once a node starts transmitting, it keeps transmitting until the channel becomes "bad." We derive an analytical throughput limit for Channel MAC in a shared multiple access environment. Furthermore, three performance metrics of Channel MAC (throughput, fairness, and delay) are analysed in single hop and multihop scenarios using NS2 simulations. The simulation results show a throughput improvement of up to 130% for Channel MAC over IEEE 802.11. We also show that the Channel MAC mechanism reduces the severe resource starvation problem (unfairness) that IEEE 802.11 exhibits in some network scenarios.
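
    A minimal sketch of the access rule described above: transmit when the channel gain rises past a threshold and hold the channel until it turns "bad". The exponential (Rayleigh-power) fading process, threshold and trace length are illustrative assumptions, not taken from the paper:

    ```python
    import random

    def channel_mac_trace(steps=20, threshold=1.2, seed=1):
        """Toy trace of the Channel MAC rule for one node.

        The node seizes the channel when the fading gain rises above
        `threshold` and keeps transmitting until the gain drops below it
        again ("bad" channel). Fading model and threshold are illustrative.
        """
        rng = random.Random(seed)
        transmitting = False
        for t in range(steps):
            gain = rng.expovariate(1.0)   # exponential power gain (Rayleigh fading)
            if not transmitting and gain > threshold:
                transmitting = True       # channel turned "good": start sending
            elif transmitting and gain <= threshold:
                transmitting = False      # channel turned "bad": stop sending
            print(f"t={t:2d} gain={gain:4.2f} {'TX' if transmitting else 'idle'}")

    channel_mac_trace()
    ```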

  20. Traffic Adaptive Energy Efficient and Low Latency Medium Access Control for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Yadav, Rajesh; Varma, Shirshu; Malaviya, N.

    2008-05-01

    Medium access control for wireless sensor networks has been a very active research area in recent years. Traditional wireless medium access control protocols such as IEEE 802.11 are not suitable for sensor network applications because sensor nodes are battery powered, and recharging them is expensive and often not feasible. Most of the literature on medium access for sensor networks focuses on energy efficiency. The proposed MAC protocol addresses the energy inefficiency caused by idle listening, control packet overhead and overhearing, while taking node latency into consideration based on the network traffic. Simulation experiments have been performed to demonstrate the effectiveness of the proposed approach, and the simulation results have been validated against an analytical model. The protocol has been simulated in Network Simulator ns-2.

  1. Simple Spectral Lines Data Model Version 1.0

    NASA Astrophysics Data System (ADS)

    Osuna, Pedro; Salgado, Jesus; Guainazzi, Matteo; Dubernet, Marie-Lise; Roueff, Evelyne; Osuna, Pedro; Salgado, Jesus

    2010-12-01

    This document presents a data model to describe spectral line transitions in the context of the Simple Line Access Protocol defined by the IVOA (cf. Ref[13], IVOA Simple Line Access Protocol). The main objective of the model is to integrate with and support the Simple Line Access Protocol, with which it forms a compact unit. This integration allows seamless access to spectral line transitions available worldwide in the VO context. The model does not provide a complete description of atomic and molecular physics, whose scope is outside this document. In the astrophysical sense, a line is considered the result of a transition between two energy levels. On the basis of this assumption, a whole set of objects and attributes has been derived to properly define the information necessary to describe lines appearing in astrophysical contexts. The document has been written taking into account available information from many different line data providers (see the acknowledgments section).
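
    A SLAP service backed by this model is queried with a simple parameterized HTTP GET. The sketch below builds such a query; the REQUEST and WAVELENGTH (metre-range) parameters follow the SLAP specification, while the base URL is a hypothetical placeholder (real endpoints are found through VO registries):

    ```python
    import urllib.parse

    # Hypothetical SLAP endpoint; substitute a registry-discovered service.
    BASE = "http://example.org/slap"

    params = {
        "REQUEST": "queryData",         # the SLAP query action
        "WAVELENGTH": "5.1e-7/5.2e-7",  # metre range: lines between 510 and 520 nm
    }
    url = BASE + "?" + urllib.parse.urlencode(params)
    print(url)
    # An HTTP GET on this URL returns a VOTable whose rows are line
    # transitions carrying the data model's fields (species, initial and
    # final energy level, and so on).
    ```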

  2. Web-based visualization of gridded datasets using OceanBrowser

    NASA Astrophysics Data System (ADS)

    Barth, Alexander; Watelet, Sylvain; Troupin, Charles; Beckers, Jean-Marie

    2015-04-01

    OceanBrowser is a web-based visualization tool for gridded oceanographic data sets. Those data sets are typically four-dimensional (longitude, latitude, depth and time). OceanBrowser allows one to visualize horizontal sections at a given depth and time in order to examine the horizontal distribution of a given variable. It also offers the possibility to display the results on an arbitrary vertical section, which can be generated along a fixed distance from the coast or at a fixed ocean depth. To study the evolution of a variable in time, the horizontal and vertical sections can also be animated. The user can customize the plot by changing the color map, the range of the color bar and the type of the plot (linearly interpolated color, simple contours, filled contours), and can download the current view as a simple image or as a Keyhole Markup Language (KML) file for visualization in applications such as Google Earth. The data products can also be accessed as NetCDF files and through OPeNDAP. Third-party layers from a web map service can also be integrated. OceanBrowser is used within the SeaDataNet project (http://gher-diva.phys.ulg.ac.be/web-vis/) and EMODNET Chemistry (http://oceanbrowser.net/emodnet/) to distribute gridded data sets interpolated from in situ observations using DIVA (Data-Interpolating Variational Analysis).
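
    Because the products are exposed through OPeNDAP, a horizontal section like those OceanBrowser draws can be pulled with any OPeNDAP-aware client, with the server doing the subsetting. A minimal sketch with the netCDF4-python library; the URL and variable name are hypothetical placeholders:

    ```python
    from netCDF4 import Dataset  # netCDF4-python opens OPeNDAP URLs directly

    # Hypothetical OPeNDAP endpoint; actual URLs are listed in the
    # SeaDataNet / EMODNET Chemistry catalogues.
    url = "http://example.org/opendap/climatology.nc"

    ds = Dataset(url)                   # remote open; nothing downloaded yet
    temp = ds.variables["temperature"]  # assumed variable name
    # Slice one horizontal section (first time step, first depth level);
    # only this slab crosses the network.
    section = temp[0, 0, :, :]
    ```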

  3. Optimizing the NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Maa, Ming-Hokng

    1996-01-01

    The NASA Technical Report Server (NTRS), a World Wide Web distribution service for NASA technical publications, is modified for performance enhancement, greater protocol support, and human interface optimization. Results include: parallel database queries, significantly decreasing user access times by an average factor of 2.3; access from clients behind firewalls and/or proxies that truncate excessively long Uniform Resource Locators (URLs); access to non-Wide Area Information Server (WAIS) databases and compatibility with the Z39.50 protocol; and a streamlined user interface.
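
    The parallel-query optimization amounts to fanning one search out to all databases concurrently instead of serially, so total latency approaches that of the slowest node alone. A minimal sketch of the idea; the query function and database names are placeholders, not NTRS internals:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def query(database, terms):
        """Stand-in for a single-database search; hypothetical, not NTRS code."""
        return f"results for {terms!r} from {database}"

    databases = ["LaRC", "GSFC", "JPL", "Ames"]  # illustrative node names

    # Issue the same search against every database at once; total latency
    # then approaches that of the slowest single node.
    with ThreadPoolExecutor(max_workers=len(databases)) as pool:
        futures = [pool.submit(query, db, "wind tunnel") for db in databases]
        results = [f.result() for f in futures]

    print(results)
    ```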

  4. The US Culture Collection Network responding to the requirements of the Nagoya Protocol on Access and Benefit Sharing

    USDA-ARS's Scientific Manuscript database

    The US Culture Collection Network held a meeting to share information about how collections are responding to the requirements of the recently enacted Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Bio...

  5. Numerical simulation of the optimal two-mode attacks for two-way continuous-variable quantum cryptography in reverse reconciliation

    NASA Astrophysics Data System (ADS)

    Zhang, Yichen; Li, Zhengyu; Zhao, Yijia; Yu, Song; Guo, Hong

    2017-02-01

    We analyze the security of the two-way continuous-variable quantum key distribution protocol in reverse reconciliation against general two-mode attacks, which represent all accessible attacks at fixed channel parameters. Rather than considering one specific attack model, expressions for the secret key rate of the two-way protocol are derived against all accessible attack models. It is found that there is an optimal two-mode attack that minimizes the performance of the protocol in terms of both secret key rate and maximal transmission distance. We identify this optimal two-mode attack, give its specific attack model, and show the performance of the two-way protocol against it. Even under the optimal two-mode attack, the performance of the two-way protocol is still better than that of the corresponding one-way protocol, which shows the advantage of making double use of the quantum channel and the potential of long-distance secure communication using a two-way protocol.

  6. Standardised online data access and publishing for Earth Systems and Climate data in Australia

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Druken, K. A.; Trenham, C.; Wang, J.; Wyborn, L. A.; Smillie, J.; Allen, C.; Porter, D.

    2015-12-01

    The National Computational Infrastructure (NCI) hosts Australia's largest repository (10+ PB) of research data collections spanning a wide range of fields from climate, coasts, oceans, and geophysics through to astronomy, bioinformatics, and the social sciences. Spatial scales range from global to local ultra-high resolution, requiring storage volumes from MB to PB. The data have been organised to be highly connected to both the NCI HPC and cloud resources (e.g., interactive visualisation and analysis environments). Researchers can login to utilise the high performance infrastructure for these data collections, or access the data via standards-based web services. Our aim is to provide a trusted platform to support interdisciplinary research across all the collections as well as services for use of the data within individual communities. We thus cater to a wide range of researcher needs, whilst needing to maintain a consistent approach to data management and publishing. All research data collections hosted at NCI are governed by a data management plan, prior to being published through a variety of platforms and web services such as OPeNDAP, HTTP, and WMS. The data management plan ensures the use of standard formats (when available) that comply with relevant data conventions (e.g., CF-Convention) and metadata standards (e.g., ISO19115). Digital Object Identifiers (DOIs) can be minted at NCI and assigned to datasets and collections. Large scale data growth and use in a variety of research fields has led to a rise in, and acceptance of, open spatial data formats such as NetCDF4/HDF5, prompting a need to extend these data conventions to fields such as geophysics and satellite Earth observations. The fusion of DOI-minted data that is discoverable and accessible via metadata and web services, creates a complete picture of data hosting, discovery, use, and citation. This enables standardised and reproducible data analysis.

  7. Data aggregation in wireless sensor networks using the SOAP protocol

    NASA Astrophysics Data System (ADS)

    Al-Yasiri, A.; Sunley, A.

    2007-07-01

    Wireless sensor networks (WSN) offer an increasingly attractive method of data gathering in distributed system architectures, with dynamic access via wireless connectivity. Wireless sensor networks have physical and resource limitations; this leads to increased complexity for application developers and often results in applications that are closely coupled with network protocols. In this paper, a data aggregation framework using SOAP (Simple Object Access Protocol) on wireless sensor networks is presented. The framework works as a middleware for aggregating data measured by a number of nodes within a network. The aim of the study is to assess the suitability of the protocol in such environments, where resources are limited compared to traditional networks.
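
    The framework's basic exchange is an ordinary SOAP call carried over HTTP to a node or aggregation point. A minimal sketch of such a request, with a hypothetical endpoint and element names; the per-message XML overhead visible here is exactly what makes SOAP's suitability on constrained nodes worth assessing:

    ```python
    import urllib.request

    # Minimal SOAP 1.1 envelope asking a (hypothetical) sensor aggregation
    # service for one node's readings; element names are illustrative only.
    envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetReadings xmlns="urn:wsn-aggregation">
          <nodeId>17</nodeId>
        </GetReadings>
      </soap:Body>
    </soap:Envelope>"""

    request = urllib.request.Request(
        "http://example.org/wsn/aggregator",   # hypothetical endpoint
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "urn:wsn-aggregation#GetReadings"},
    )
    # urllib.request.urlopen(request) would return the SOAP response
    # envelope carrying the aggregated measurements.
    ```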

  8. Wireless access to a pharmaceutical database: a demonstrator for data driven Wireless Application Protocol (WAP) applications in medical information processing.

    PubMed

    Schacht Hansen, M; Dørup, J

    2001-01-01

    The Wireless Application Protocol technology implemented in newer mobile phones has built-in facilities for handling much of the information processing needed in clinical work. To test a practical approach we ported a relational database of the Danish pharmaceutical catalogue to Wireless Application Protocol using open source freeware at all steps. We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language script for easy update of the database. Data were distributed in 35 interrelated tables. Each pharmaceutical brand name was given its own card with links to general information about the drug, active substances, contraindications etc. Access was available through 1) browsing therapeutic groups and 2) searching for a brand name. The database interface was programmed in the server-side scripting language PHP3. A free, open source Wireless Application Protocol gateway to a pharmaceutical catalogue was established to allow dial-in access independent of commercial Wireless Application Protocol service providers. The application was tested on the Nokia 7110 and Ericsson R320s cellular phones. We have demonstrated that Wireless Application Protocol-based access to a dynamic clinical database can be established using open source freeware. The project opens perspectives for a further integration of Wireless Application Protocol phone functions in clinical information processing: Global System for Mobile communication telephony for bilateral communication, asynchronous unilateral communication via e-mail and Short Message Service, built-in calculator, calendar, personal organizer, phone number catalogue and Dictaphone function via answering machine technology. An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. However, if Wireless Application Protocol phones are to become effective tools for physicians, special attention must be paid to the limitations of the devices. Input tools of Wireless Application Protocol phones should be improved, for instance by increased use of speech control.

  9. Wireless access to a pharmaceutical database: A demonstrator for data driven Wireless Application Protocol applications in medical information processing

    PubMed Central

    Hansen, Michael Schacht

    2001-01-01

    Background The Wireless Application Protocol technology implemented in newer mobile phones has built-in facilities for handling much of the information processing needed in clinical work. Objectives To test a practical approach we ported a relational database of the Danish pharmaceutical catalogue to Wireless Application Protocol using open source freeware at all steps. Methods We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language script for easy update of the database. Data were distributed in 35 interrelated tables. Each pharmaceutical brand name was given its own card with links to general information about the drug, active substances, contraindications etc. Access was available through 1) browsing therapeutic groups and 2) searching for a brand name. The database interface was programmed in the server-side scripting language PHP3. Results A free, open source Wireless Application Protocol gateway to a pharmaceutical catalogue was established to allow dial-in access independent of commercial Wireless Application Protocol service providers. The application was tested on the Nokia 7110 and Ericsson R320s cellular phones. Conclusions We have demonstrated that Wireless Application Protocol-based access to a dynamic clinical database can be established using open source freeware. The project opens perspectives for a further integration of Wireless Application Protocol phone functions in clinical information processing: Global System for Mobile communication telephony for bilateral communication, asynchronous unilateral communication via e-mail and Short Message Service, built-in calculator, calendar, personal organizer, phone number catalogue and Dictaphone function via answering machine technology. An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. However, if Wireless Application Protocol phones are to become effective tools for physicians, special attention must be paid to the limitations of the devices. Input tools of Wireless Application Protocol phones should be improved, for instance by increased use of speech control. PMID:11720946

  10. The HyMeX database

    NASA Astrophysics Data System (ADS)

    Brissebrat, Guillaume; Mastrorillo, Laurence; Ramage, Karim; Boichard, Jean-Luc; Cloché, Sophie; Fleury, Laurence; Klenov, Ludmila; Labatut, Laurent; Mière, Arnaud

    2013-04-01

    The international HyMeX (HYdrological cycle in the Mediterranean EXperiment) project aims at a better understanding and quantification of the hydrological cycle and related processes in the Mediterranean, with emphasis on high-impact weather events, inter-annual to decadal variability of the Mediterranean coupled system, and associated trends in the context of global change. The project includes long-term monitoring of environmental parameters, intensive field campaigns, use of satellite data, modelling studies, as well as post-event field surveys and value-added product processing. The HyMeX database therefore incorporates various dataset types from different disciplines, either operational or research. The database relies on a strong collaboration between the OMP and IPSL data centres. Field data, which are 1D time series, maps or pictures, are managed by the OMP team, while gridded data (satellite products, model outputs, radar data...) are managed by the IPSL team. At present, the HyMeX database contains about 150 datasets, including 80 hydrological, meteorological, ocean and soil in situ datasets, 30 radar datasets, 15 satellite products, 15 atmosphere, ocean and land surface model outputs from operational (re-)analyses or forecasts and from research simulations, and 5 post-event survey datasets. The data catalogue complies with international standards (ISO 19115; INSPIRE; Directory Interchange Format; Global Change Master Directory Thesaurus). It includes all the datasets stored in the HyMeX database, as well as external datasets relevant to the project. All the data, whatever their type, are accessible through a single gateway. The database website http://mistrals.sedoo.fr/HyMeX offers different tools: - A registration procedure which enables any scientist to accept the data policy and apply for a user database account. - A search tool to browse the catalogue using thematic, geographic and/or temporal criteria. - Sorted lists of the datasets by thematic keywords, by measured parameters, by instruments or by platform type. - Forms to document observations or products that will be provided to the database. - A shopping-cart web interface to order in situ data files. - Ftp facilities to access gridded data. The website will soon propose new facilities. Many in situ datasets have already been homogenized and inserted in a relational database, in order to enable more accurate data selection and download of different datasets in a shared format. Interoperability between the two data centres will be enhanced by the OPeNDAP communication protocol associated with the THREDDS catalogue software, which may also be implemented in other data centres that manage data of interest to the HyMeX project. In order to meet the operational needs of the HyMeX 2012 campaigns, a day-to-day quick-look and report display website has also been developed: http://sop.hymex.org. It offers a convenient way to browse meteorological conditions and data during the campaign periods.

  11. Wireless Distribution Systems To Support Medical Response to Disasters

    PubMed Central

    Arisoylu, Mustafa; Mishra, Rajesh; Rao, Ramesh; Lenert, Leslie A.

    2005-01-01

    We discuss the design of multi-hop access networks with multiple gateways that support medical response to disasters. We examine and implement protocols to ensure high-bandwidth, robust, self-healing and secure wireless multi-hop access networks for extreme conditions. Address management, path setup, gateway discovery and selection protocols are described. Future directions and plans are also considered. PMID:16779171

  12. Enabling data-driven provenance in NetCDF, via OGC WPS operations. Climate Analysis services use case.

    NASA Astrophysics Data System (ADS)

    Mihajlovski, A.; Spinuso, A.; Plieger, M.; Som de Cerff, W.

    2016-12-01

    Modern climate analysis platforms provide generic and standardized ways of accessing data and processing services, typically supported by a wide range of OGC formats and interfaces. However, instrumentally tracing the lineage of the transformations occurring on a dataset, and its provenance, remains an open challenge. It requires standards-driven and interoperable solutions to facilitate understanding and the sharing of self-describing data products, fostering collaboration among peers. The CLIPC portal provides a real use case in which instrumented provenance management is fundamental. CLIPC provides a single point of access for scientific information on climate change. The data about the physical environment which are used to inform climate change policy and adaptation measures come from several categories: satellite measurements, terrestrial observing systems, model projections and simulations, and re-analyses. This is made possible through the Copernicus Earth Observation Programme for Europe. With a backbone combining WPS and OPeNDAP services, CLIPC has two themes: 1. Harmonized access to climate datasets derived from models, observations and re-analyses; 2. A climate impact tool kit to evaluate, rank and aggregate indicators. The climate impact tool kit is realised through the orchestration of a number of WPS that ingest, normalize and combine NetCDF files. The WPS allowing this specific computation are hosted by the climate4impact portal, which is a more generic climate data-access and processing service. In this context, guaranteeing validation and reproducibility of results is a clearly stated requirement for improving the quality of the combined analysis. Two core contributions are a provenance wrapper around WPS services and provenance tracing within the NetCDF format, both adopting and extending the W3C PROV model. To disseminate indicator data and create transformed data products, a standardized provenance, metadata and processing infrastructure is being researched for CLIPC. These efforts will lead towards the provision of tools for further web-service processing development and optimisation, opening up possibilities to scale and to administer user- and data-driven workflows.
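
    The W3C PROV model the authors extend can be exercised with the Python prov package. A hedged sketch of recording one WPS transformation step; the namespace and file names are illustrative, not the CLIPC vocabulary:

    ```python
    from prov.model import ProvDocument  # pip install prov

    doc = ProvDocument()
    doc.add_namespace("ex", "http://example.org/clipc#")  # illustrative namespace

    src = doc.entity("ex:tasmax_input.nc")        # input NetCDF file (assumed name)
    out = doc.entity("ex:heatwave_indicator.nc")  # derived indicator product
    wps = doc.activity("ex:wps_compute_indicator")

    doc.used(wps, src)               # the WPS process read the input ...
    doc.wasGeneratedBy(out, wps)     # ... and generated the output
    doc.wasDerivedFrom(out, src)     # direct dataset lineage

    print(doc.serialize(indent=2))   # PROV-JSON, embeddable in NetCDF attributes
    ```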

  13. Provenance in Data Interoperability for Multi-Sensor Intercomparison

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris; Leptoukh, Greg; Berrick, Steve; Shen, Suhung; Prados, Ana; Fox, Peter; Yang, Wenli; Min, Min; Holloway, Dan; Enloe, Yonsook

    2008-01-01

    As our inventory of Earth science data sets grows, the ability to compare, merge and fuse multiple datasets grows in importance. This requires a deeper data interoperability than we have now. Efforts such as the Open Geospatial Consortium and OPeNDAP (Open-source Project for a Network Data Access Protocol) have broken down format barriers to interoperability; the next challenge is the semantic aspects of the data. Consider the issues when satellite data are merged, cross-calibrated, validated, inter-compared and fused. We must match up data sets that are related, yet different in significant ways: the phenomenon being measured, measurement technique, location in space-time, or quality of the measurements. If subtle distinctions between similar measurements are not clear to the user, results can be meaningless or lead to an incorrect interpretation of the data. Most of these distinctions trace to how the data came to be: sensors, processing and quality assessment. For example, monthly averages of satellite-based aerosol measurements often show significant discrepancies, which might be due to differences in spatio-temporal aggregation, sampling issues, sensor biases, algorithm differences or calibration issues. Provenance information must be captured in a semantic framework that allows data inter-use tools to incorporate it and aid in the interpretation of comparison or merged products. Semantic web technology allows us to encode our knowledge of measurement characteristics, phenomena measured, space-time representation, and data quality attributes in a well-structured, machine-readable ontology and rulesets. An analysis tool can use this knowledge to show users the provenance-related distinctions between two variables, advising on options for further data processing and analysis. An additional problem for workflows distributed across heterogeneous systems is the retrieval and transport of provenance. Provenance may be either embedded within the data payload, or transmitted from server to client by an out-of-band mechanism. The out-of-band mechanism is more flexible in the richness of provenance information that can be accommodated, but it relies on a persistent framework and can be difficult for legacy clients to use. We are prototyping the embedded model, incorporating provenance within metadata objects in the data payload. Thus, it always remains with the data. The downside is a limit to the size of provenance metadata that we can include, an issue that will eventually need resolution to encompass the richness of provenance information required for data intercomparison and merging.

  14. Bundle Data Approach at GES DISC Targeting Natural Hazards

    NASA Technical Reports Server (NTRS)

    Shie, Chung-Lin; Shen, Suhung; Kempler, Steven J.

    2015-01-01

    Severe natural phenomena such as hurricanes, volcanoes, blizzards, floods and droughts have the potential to cause immeasurable property damage, great socioeconomic impact, and tragic loss of human life. Searching for and assessing the Big, i.e., massive and heterogeneous, scientific data (particularly satellite and model products) needed to investigate those natural hazards has, however, become a daunting task for Earth scientists and applications researchers, especially during recent decades. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has for years served Big Earth science data, and the pertinent information and services, to users of diverse communities. In order to help our users readily acquire online (i.e., with minimum effort) the data they request from the enormous resources at GES DISC for studying a targeted hazard event, we initiated a Bundle Data approach in 2014, first targeting the hurricane event topic; we have recently worked on new topics such as volcano and blizzard. The bundle data of a specific hazard event is an integrated data package consisting of a series of datasets containing the relevant (knowledge-based) data variables, readily accessible to users via a system-prearranged table linking those variables to the proper datasets (URLs). This online approach has been developed by utilizing a few existing data services, such as Mirador as search engine, Giovanni for visualization, and OPeNDAP for data access. The online Data Cookbook site at GES DISC is the current host for the bundle data. We are also planning an Automated Virtual Collection Framework that shall eventually accommodate the bundle data, as well as further improve our management of Big Data.

  15. New Solutions for Enabling Discovery of User-Centric Virtual Data Products in NASA's Common Metadata Repository

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Gilman, J.; Baynes, K.; Shum, D.

    2015-12-01

    This talk introduces a new NASA Earth Observing System Data and Information System (EOSDIS) capability to automatically generate and maintain derived, Virtual Product information allowing DAACs and Data Providers to create tailored and more discoverable variations of their products. After this talk the audience will be aware of the new EOSDIS Virtual Product capability, applications of it, and how to take advantage of it. Much of the data made available in the EOSDIS are organized for generation and archival rather than for discovery and use. The EOSDIS Common Metadata Repository (CMR) is launching a new capability providing automated generation and maintenance of user-oriented Virtual Product information. DAACs can easily surface variations on established data products tailored to specific use cases and users, leveraging DAAC-exposed services such as custom ordering or access services like OPeNDAP for on-demand product generation and distribution. Virtual Data Products enjoy support for spatial and temporal information, keyword discovery, association with imagery, and are fully discoverable by tools such as NASA Earthdata Search, Worldview, and Reverb. Virtual Product generation has applicability across many use cases: - Describing derived products such as Surface Kinetic Temperature information (AST_08) from source products (ASTER L1A) - Providing streamlined access to data products (e.g. AIRS) containing many (>800) data variables covering an enormous variety of physical measurements - Attaching additional EOSDIS offerings such as Visual Metadata, external services, and documentation metadata - Publishing alternate formats for a product (e.g. netCDF for HDF products) with the actual conversion happening on request - Publishing granules to be modified by on-the-fly services, like GES-DISC's Data Quality Screening Service - Publishing "bundled" products where granules from one product correspond to granules from one or more other related products

  16. National Climate Assessment - Land Data Assimilation System (NCA-LDAS) Data and Services at NASA GES DISC

    NASA Technical Reports Server (NTRS)

    Rui, Hualan; Vollmer, Bruce; Teng, Bill; Jasinski, Michael; Mocko, David; Loeser, Carlee; Kempler, Steven

    2016-01-01

    The National Climate Assessment-Land Data Assimilation System (NCA-LDAS) is an integrated terrestrial water analysis and one of NASA's contributions to the NCA of the United States. The NCA-LDAS has undergone extensive development, including multi-variate assimilation of remotely sensed water states and anomalies as well as evaluation and verification studies, led by the Goddard Space Flight Center's Hydrological Sciences Laboratory (HSL). The resulting NCA-LDAS data have recently been released to the general public and include output from the Noah land-surface model (LSM) version 3.3 (Noah-3.3) and the Catchment LSM version Fortuna-2.5 (CLSM-F2.5). Standard LSM output variables, including soil moisture, soil temperature, surface fluxes, snow cover depth, groundwater, and runoff, are provided, as well as streamflow from a river routing scheme. The NCA-LDAS data are archived at and distributed by the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). The data can be accessed via HTTP, OPeNDAP, Mirador search and download, and NASA Earthdata Search. To further facilitate access and use, the NCA-LDAS data are integrated into NASA Giovanni, for quick visualization and analysis, and into the Data Rods system, for retrieval of time series over long periods. The temporal and spatial resolutions of the NCA-LDAS data are, respectively, daily averages and 0.125x0.125 degree, covering North America (25N-53N; 125W-67W) and the period January 1979 to December 2015. The data files are in self-describing, machine-independent, CF-compliant netCDF-4 format.
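
    Since the data are CF-compliant netCDF-4 served through OPeNDAP, a time series for one grid cell can be extracted without downloading whole files. A minimal sketch with xarray; the URL and the variable and coordinate names are assumptions, not the official ones (real endpoints at GES DISC sit behind an Earthdata login):

    ```python
    import xarray as xr

    # Hypothetical OPeNDAP URL for an NCA-LDAS granule.
    url = "https://example.org/opendap/NCALDAS_NOAH0125_D.nc4"

    ds = xr.open_dataset(url)  # lazy open: metadata only, values fetched on demand
    # Extract a soil-moisture series at one grid cell; variable and
    # coordinate names here are illustrative assumptions.
    series = ds["SoilMoist"].sel(lat=35.0, lon=-97.0, method="nearest")
    series.to_dataframe().to_csv("soil_moisture_35N_97W.csv")
    ```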

  17. "Bundle Data" Approach at GES DISC Targeting Natural Hazards

    NASA Astrophysics Data System (ADS)

    Shie, C. L.; Shen, S.; Kempler, S. J.

    2015-12-01

    Severe natural phenomena such as hurricanes, volcanoes, blizzards, floods and droughts have the potential to cause immeasurable property damage, great socioeconomic impact, and tragic loss of human life. Searching for and assessing the "Big", i.e., massive and heterogeneous, scientific data (particularly satellite and model products) needed to investigate those natural hazards has, however, become a daunting task for Earth scientists and applications researchers, especially during recent decades. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has for years served "Big" Earth science data, and the pertinent information and services, to users of diverse communities. In order to help our users readily acquire online (i.e., with minimum effort) the data they request from the enormous resources at GES DISC for studying a targeted hazard/event, we initiated a "Bundle Data" approach in 2014, first targeting the hurricane event/topic; we have recently worked on new topics such as volcano and blizzard. The "bundle data" of a specific hazard/event is an integrated data package consisting of a series of datasets containing the relevant ("knowledge-based") data variables, readily accessible to users via a system-prearranged table linking those variables to the proper datasets (URLs). This online approach has been developed by utilizing a few existing data services, such as Mirador as search engine, Giovanni for visualization, and OPeNDAP for data access. The online "Data Cookbook" site at GES DISC is the current host for the "bundle data". We are also planning an "Automated Virtual Collection Framework" that shall eventually accommodate the "bundle data", as well as further improve our management of "Big Data".

  18. The U.S. Culture Collection Network Responding to the Requirements of the Nagoya Protocol on Access and Benefit Sharing

    Treesearch

    Kevin McCluskey; Katharine B. Barker; Hazel A. Barton; Kyria Boundy-Mills; Daniel R. Brown; Jonathan A. Coddington; Kevin Cook; Philippe Desmeth; David Geiser; Jessie A. Glaeser; Stephanie Greene; Seogchan Kang; Michael W. Lomas; Ulrich Melcher; Scott E. Miller; David R. Nobles; Kristina J. Owens; Jerome H. Reichman; Manuela da Silva; John Wertz; Cale Whitworth; David Smith; Steven E. Lindow

    2017-01-01

    The U.S. Culture Collection Network held a meeting to share information about how culture collections are responding to the requirements of the recently enacted Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity (CBD). The meeting included representatives...

  19. Development and validation of a remote home safety protocol.

    PubMed

    Romero, Sergio; Lee, Mi Jung; Simic, Ivana; Levy, Charles; Sanford, Jon

    2018-02-01

    Environmental assessments and subsequent modifications conducted by healthcare professionals can enhance home safety and promote independent living. However, travel time, expense and the availability of qualified professionals can limit the broad application of this intervention. Remote technology has the potential to increase access to home safety evaluations. This study describes the development and validation of a remote home safety protocol that can be used by a caregiver of an elderly person to video-record their home environment for later viewing and evaluation by a trained professional. The protocol was developed based on literature reviews and evaluations from clinical and content experts. Cognitive interviews were conducted with a group of six caregivers to validate the protocol. The final protocol included step-by-step directions to record indoor and outdoor areas of the home. The validation process resulted in modifications related to safety, clarity of the protocol, readability, visual appearance, technical descriptions and usability. Our final protocol includes detailed instructions that a caregiver should be able to follow to record a home environment for subsequent evaluation by a home safety professional. Implications for Rehabilitation: The results of this study have several implications for rehabilitation practice. The remote home safety evaluation protocol can potentially improve access to rehabilitation services for clients in remote areas and prevent unnecessary delays for needed care. Using our protocol, a patient's caregiver can partner with therapists to quickly and efficiently evaluate a patient's home before they are released from the hospital. Caregiver narration, which reflects a caregiver's own perspective, is critical to evaluating home safety. In-home safety evaluations, currently not available to all who need them due to access barriers, can enhance a patient's independence and provide a safer home environment.

  20. Analyzing the effect of routing protocols on media access control protocols in radio networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, C. L.; Drozda, M.; Marathe, A.

    2002-01-01

    We study the effect of routing protocols on the performance of media access control (MAC) protocols in wireless radio networks. Three well-known MAC protocols are considered: 802.11, CSMA, and MACA. Similarly, three recently proposed routing protocols are considered: AODV, DSR and LAR scheme 1. The experimental analysis was carried out using GloMoSim, a tool for simulating wireless networks. The main focus of our experiments was to study how the routing protocols affect the performance of the MAC protocols when the underlying network and traffic parameters are varied. The performance of the protocols was measured with respect to five important parameters: (i) number of received packets, (ii) average latency of each packet, (iii) throughput, (iv) long-term fairness and (v) number of control packets at the MAC layer level. Our results show that combinations of routing and MAC protocols yield varying performance under varying network topology and traffic situations. This result has an important implication: no combination of routing protocol and MAC protocol is best over all situations. Also, the performance of protocols at a given level in the protocol stack needs to be studied not locally in isolation but as part of the complete protocol stack. A novel aspect of our work is the use of the statistical technique ANOVA (analysis of variance) to characterize the effect of routing protocols on MAC protocols. This technique is of independent interest and can be utilized in several other simulation and empirical studies.

  1. The U.S. Culture Collection Network Responding to the Requirements of the Nagoya Protocol on Access and Benefit Sharing

    PubMed Central

    Barker, Katharine B.; Barton, Hazel A.; Boundy-Mills, Kyria; Brown, Daniel R.; Coddington, Jonathan A.; Cook, Kevin; Desmeth, Philippe; Geiser, David; Glaeser, Jessie A.; Greene, Stephanie; Kang, Seogchan; Lomas, Michael W.; Melcher, Ulrich; Miller, Scott E.; Nobles, David R.; Owens, Kristina J.; Reichman, Jerome H.; da Silva, Manuela; Wertz, John; Whitworth, Cale; Smith, David

    2017-01-01

    The U.S. Culture Collection Network held a meeting to share information about how culture collections are responding to the requirements of the recently enacted Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity (CBD). The meeting included representatives of many culture collections and other biological collections, the U.S. Department of State, U.S. Department of Agriculture, Secretariat of the CBD, interested scientific societies, and collection groups, including Scientific Collections International and the Global Genome Biodiversity Network. The participants learned about the policies of the United States and other countries regarding access to genetic resources, the definition of genetic resources, and the status of historical materials and genetic sequence information. Key topics included what constitutes access and how the CBD Access and Benefit-Sharing Clearing-House can help guide researchers through the process of obtaining Prior Informed Consent on Mutually Agreed Terms. U.S. scientists and their international collaborators are required to follow the regulations of other countries when working with microbes originally isolated outside the United States, and the local regulations required by the Nagoya Protocol vary by the country of origin of the genetic resource. Managers of diverse living collections in the United States described their holdings and their efforts to provide access to genetic resources. This meeting laid the foundation for cooperation in establishing a set of standard operating procedures for U.S. and international culture collections in response to the Nagoya Protocol. PMID:28811341

  2. The U.S. Culture Collection Network Responding to the Requirements of the Nagoya Protocol on Access and Benefit Sharing.

    PubMed

    McCluskey, Kevin; Barker, Katharine B; Barton, Hazel A; Boundy-Mills, Kyria; Brown, Daniel R; Coddington, Jonathan A; Cook, Kevin; Desmeth, Philippe; Geiser, David; Glaeser, Jessie A; Greene, Stephanie; Kang, Seogchan; Lomas, Michael W; Melcher, Ulrich; Miller, Scott E; Nobles, David R; Owens, Kristina J; Reichman, Jerome H; da Silva, Manuela; Wertz, John; Whitworth, Cale; Smith, David

    2017-08-15

    The U.S. Culture Collection Network held a meeting to share information about how culture collections are responding to the requirements of the recently enacted Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity (CBD). The meeting included representatives of many culture collections and other biological collections, the U.S. Department of State, U.S. Department of Agriculture, Secretariat of the CBD, interested scientific societies, and collection groups, including Scientific Collections International and the Global Genome Biodiversity Network. The participants learned about the policies of the United States and other countries regarding access to genetic resources, the definition of genetic resources, and the status of historical materials and genetic sequence information. Key topics included what constitutes access and how the CBD Access and Benefit-Sharing Clearing-House can help guide researchers through the process of obtaining Prior Informed Consent on Mutually Agreed Terms. U.S. scientists and their international collaborators are required to follow the regulations of other countries when working with microbes originally isolated outside the United States, and the local regulations required by the Nagoya Protocol vary by the country of origin of the genetic resource. Managers of diverse living collections in the United States described their holdings and their efforts to provide access to genetic resources. This meeting laid the foundation for cooperation in establishing a set of standard operating procedures for U.S. and international culture collections in response to the Nagoya Protocol.

  3. A Mobile Satellite Experiment (MSAT-X) network definition

    NASA Technical Reports Server (NTRS)

    Wang, Charles C.; Yan, Tsun-Yee

    1990-01-01

    The network architecture development of the Mobile Satellite Experiment (MSAT-X) project for the past few years is described. The results and findings of the network research activities carried out under the MSAT-X project are summarized. A framework is presented upon which the Mobile Satellite Systems (MSSs) operator can design a commercial network. A sample network configuration and its capability are also included under the projected scenario. The Communication Interconnection aspect of the MSAT-X network is discussed. In the MSAT-X network structure two basic protocols are presented: the channel access protocol, and the link connection protocol. The error-control techniques used in the MSAT-X project and the packet structure are also discussed. A description of two testbeds developed for experimentally simulating the channel access protocol and link control protocol, respectively, is presented. A sample network configuration and some future network activities of the MSAT-X project are also presented.

  4. Distributed reservation-based code division multiple access

    NASA Astrophysics Data System (ADS)

    Wieselthier, J. E.; Ephremides, A.

    1984-11-01

    The use of spread spectrum signaling, motivated primarily by its antijamming capabilities in military applications, leads naturally to the use of Code Division Multiple Access (CDMA) techniques that permit the successful simultaneous transmission by a number of users over a wideband channel. In this paper we address some of the major issues that are associated with the design of multiple access protocols for spread spectrum networks. We then propose, analyze, and evaluate a distributed reservation-based multiple access protocol that does in fact exploit CDMA properties. Especially significant is the fact that no acknowledgment or feedback information from the destination is required (thus facilitating communication with a radio-silent mode), nor is any form of coordination among the users necessary.

  5. CSMA/RN: A universal protocol for gigabit networks

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Maly, Kurt J.; Overstreet, C. Michael; Khanna, S.; Paterra, Frank

    1990-01-01

    Networks must provide intelligent access so that nodes can share the communications resources. In the range of 100 Mbps to 1 Gbps, the demand access class of protocols has been studied extensively. Many use some form of slot or reservation system, and many use the concept of attempt-and-defer to determine the presence or absence of incoming information. The random access class of protocols, like shared channel systems (Ethernet), also uses the concept of attempt-and-defer in the form of carrier sensing to alleviate the damaging effects of collisions. In CSMA/CD, the sensing of interference is on a global basis. All the systems discussed above have one aspect in common: they examine activity on the network, either locally or globally, and react with some attempt-and-X mechanism. Of the attempt-and-X mechanisms discussed, one is obviously missing: attempt and truncate. Attempt and truncate is studied here in a ring configuration called the Carrier Sensed Multiple Access Ring Network (CSMA/RN). The system features of CSMA/RN are described, including the node operations for inserting and removing messages and for handling integrated traffic. The performance and operational features, based on analytical and simulation studies, indicate that CSMA/RN is a useful and adaptable protocol over a wide range of network conditions. Finally, the research and development activities necessary to demonstrate and realize the potential of CSMA/RN as a universal, gigabit network protocol are outlined.

  6. Tag Content Access Control with Identity-based Key Exchange

    NASA Astrophysics Data System (ADS)

    Yan, Liang; Rong, Chunming

    2010-09-01

    Radio Frequency Identification (RFID) technology, used to identify objects and users, has recently been applied to many applications such as retail and supply chains. How to prevent tag content from unauthorized readout is a core problem of RFID privacy. The hash-lock access control protocol can make a tag release its content only to a reader that knows the secret key shared between them. However, in order to obtain the shared secret key required by this protocol, the reader needs to communicate with a back-end database. In this paper, we propose to use an identity-based secret key exchange approach to generate the secret key required for the hash-lock access control protocol. With this approach, not only is the back-end database connection no longer needed, but the tag cloning problem can also be eliminated at the same time.
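
    For context, the hash-lock scheme the paper builds on can be sketched in a few lines: the tag stores only metaID = h(key) and unlocks when presented with a key that hashes to it. In the minimal sketch below, the dictionary standing in for the key lookup plays the role of the back-end database step that the proposed identity-based key exchange is meant to replace:

    ```python
    import hashlib

    def h(key: bytes) -> bytes:
        return hashlib.sha256(key).digest()

    class Tag:
        """Hash-lock tag: stores only metaID = h(key) and stays locked."""
        def __init__(self, key: bytes):
            self.meta_id = h(key)
        def query(self) -> bytes:
            return self.meta_id              # reveals nothing about the content
        def unlock(self, key: bytes) -> bool:
            return h(key) == self.meta_id    # opens only for the right key

    key = b"shared-secret"                   # illustrative key material
    tag = Tag(key)
    reader_db = {h(key): key}                # stand-in for the back-end lookup
    assert tag.unlock(reader_db[tag.query()])
    ```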

  7. Securing the AliEn File Catalogue - Enforcing authorization with accountable file operations

    NASA Astrophysics Data System (ADS)

    Schreiner, Steffen; Bagnasco, Stefano; Sankar Banerjee, Subho; Betev, Latchezar; Carminati, Federico; Vladimirovna Datskova, Olga; Furano, Fabrizio; Grigoras, Alina; Grigoras, Costin; Mendez Lorenzo, Patricia; Peters, Andreas Joachim; Saiz, Pablo; Zhu, Jianlin

    2011-12-01

    The AliEn Grid Services, as operated by the ALICE Collaboration in its global physics analysis grid framework, are based on a central File Catalogue together with a distributed set of storage systems and the possibility to register links to external data resources. This paper describes several identified vulnerabilities in the AliEn File Catalogue access protocol regarding fraud and unauthorized file alteration, and presents a more secure and revised design: a new mechanism, called the LFN Booking Table, is introduced in order to keep track of access authorization in the transient state of files entering or leaving the File Catalogue. Due to a simplification of the original Access Envelope mechanism for xrootd-protocol-based storage systems, fundamental computational improvements of the mechanism were achieved, as well as an up to 50% reduction of the credential's size. By extending the access protocol with signed status messages from the underlying storage system, the File Catalogue receives trusted information about a file's size and checksum, and the protocol is no longer dependent on client trust. Altogether, the revised design complies with atomic and consistent transactions and allows for accountable, authentic, and traceable file operations. This paper describes these changes as part of, and beyond, the development of AliEn version 2.19.

  8. Accessing Multi-Dimensional Images and Data Cubes in the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Tody, Douglas; Plante, R. L.; Berriman, G. B.; Cresitello-Dittmar, M.; Good, J.; Graham, M.; Greene, G.; Hanisch, R. J.; Jenness, T.; Lazio, J.; Norris, P.; Pevunova, O.; Rots, A. H.

    2014-01-01

    Telescopes across the spectrum are routinely producing multi-dimensional images and datasets, such as Doppler velocity cubes, polarization datasets, and time-resolved “movies.” Examples of current telescopes producing such multi-dimensional images include the JVLA, ALMA, and the IFU instruments on large optical and near-infrared wavelength telescopes. In the near future, both the LSST and JWST will also produce such multi-dimensional images routinely. High-energy instruments such as Chandra produce event datasets that are also a form of multi-dimensional data, in effect being a very sparse multi-dimensional image. Ensuring that the data sets produced by these telescopes can be both discovered and accessed by the community is essential and is part of the mission of the Virtual Observatory (VO). The Virtual Astronomical Observatory (VAO, http://www.usvao.org/), in conjunction with its international partners in the International Virtual Observatory Alliance (IVOA), has developed a protocol and an initial demonstration service designed for the publication, discovery, and access of arbitrarily large multi-dimensional images. The protocol describing multi-dimensional images is the Simple Image Access Protocol, version 2, which provides the minimal set of metadata required to characterize a multi-dimensional image for its discovery and access. A companion Image Data Model formally defines the semantics and structure of multi-dimensional images independently of how they are serialized, while providing capabilities such as support for sparse data that are essential to deal effectively with large cubes. A prototype data access service has been deployed and tested, using a suite of multi-dimensional images from a variety of telescopes. The prototype has demonstrated the capability to discover and remotely access multi-dimensional data via standard VO protocols. The prototype informs the specification of a protocol that will be submitted to the IVOA for approval, with an operational data cube service to be delivered in mid-2014. An associated user-installable VO data service framework will provide the capabilities required to publish VO-compatible multi-dimensional images or data cubes.
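
    A discovery query against a SIAv2 service of this kind is a parameterized HTTP GET returning a VOTable of matching datasets. The sketch below assumes a hypothetical base URL; the POS and BAND values are illustrative:

    ```python
    import urllib.parse

    # Hypothetical SIAv2 service; real base URLs come from a VO registry.
    BASE = "http://example.org/sia2/query"

    params = {
        # All values illustrative: a 0.1-degree cone with a band constraint.
        "POS": "CIRCLE 180.0 -30.0 0.1",   # ICRS degrees: ra, dec, radius
        "BAND": "2.1e-3/2.3e-3",           # wavelength interval in metres
    }
    print(BASE + "?" + urllib.parse.urlencode(params))
    # The response is a VOTable listing matching multi-dimensional datasets,
    # each with an access URL for retrieval or server-side cut-out.
    ```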

  9. An Ultra-low-power Medium Access Control Protocol for Body Sensor Network.

    PubMed

    Li, Huaming; Tan, Jindong

    2005-01-01

    In this paper, a medium access control (MAC) protocol designed for Body Sensor Networks (BSN-MAC) is proposed. BSN-MAC is an adaptive, feedback-based and IEEE 802.15.4-compatible MAC protocol. Due to the traffic coupling and sensor diversity characteristics of BSNs, common MAC protocols cannot satisfy the unique requirements of the biomedical sensors in a BSN. BSN-MAC exploits feedback information from the deployed sensors to form a closed-loop control of the MAC parameters. A control algorithm is proposed to enable the BSN coordinator to adjust parameters of the IEEE 802.15.4 superframe to achieve both energy efficiency and low latency on energy-critical nodes. We evaluate the performance of BSN-MAC using energy efficiency as the primary metric.

  10. Space Network IP Services (SNIS): An Architecture for Supporting Low Earth Orbiting IP Satellite Missions

    NASA Technical Reports Server (NTRS)

    Israel, David J.

    2005-01-01

    The NASA Space Network (SN) supports a variety of missions using the Tracking and Data Relay Satellite System (TDRSS), which includes ground stations in White Sands, New Mexico and Guam. A Space Network IP Services (SNIS) architecture is being developed to support future users with requirements for end-to-end Internet Protocol (IP) communications. This architecture will support all IP protocols, including Mobile IP, over TDRSS Single Access, Multiple Access, and Demand Access Radio Frequency (RF) links. This paper will describe this architecture and how it can enable Low Earth Orbiting IP satellite missions.

  11. An access technology delivery protocol for children with severe and multiple disabilities: a case demonstration.

    PubMed

    Mumford, Leslie; Lam, Rachel; Wright, Virginia; Chau, Tom

    2014-08-01

    This study applied response efficiency theory to create the Access Technology Delivery Protocol (ATDP), a child and family-centred collaborative approach to the implementation of access technologies. We conducted a descriptive, mixed methods case study to demonstrate the ATDP method with a 12-year-old boy with no reliable means of access to an external device. Evaluations of response efficiency, satisfaction, goal attainment, technology use and participation were made after 8 and 16 weeks of training with a custom smile-based access technology. At the 16 week mark, the new access technology offered better response quality; teacher satisfaction was high; average technology usage was 3-4 times per week for up to 1 h each time; switch sensitivity and specificity reached 78% and 64%, respectively, and participation scores increased by 38%. This case supports further development and testing of the ATDP with additional children with multiple or severe disabilities.

  12. Application of an access technology delivery protocol to two children with cerebral palsy.

    PubMed

    Mumford, Leslie; Chau, Tom

    2015-07-14

    This study further delineates the merits and limitations of the Access Technology Delivery Protocol (ATDP) through its application to two children with severe disabilities. We conducted mixed methods case studies to demonstrate the ATDP with two children with no reliable means of access to an external device. Evaluations of response efficiency, satisfaction, goal attainment, technology use and participation were made after 8 and 16 weeks of training with custom access technologies. After 16 weeks, one child's switch offered improved response efficiency, high teacher satisfaction and increased participation. The other child's switch resulted in improved satisfaction and switch effectiveness but lower overall efficiency. The latter child was no longer using his switch by the end of the study. These contrasting findings indicate that changes to any contextual factors that may impact the user's switch performance should mandate a reassessment of the access pathway. Secondly, it is important to ensure that individuals who will be responsible for switch training be identified at the outset and engaged throughout the ATDP. Finally, the ATDP should continue to be tested with individuals with severe disabilities to build an evidence base for the delivery of response efficient access solutions. Implications for Rehabilitation: A data-driven, comprehensive access technology delivery protocol for children with complex communication needs could help to mitigate technology abandonment. Successful adoption of an access technology requires personalized design, training of the technology user, the teaching staff, the caregivers and other communication partners, and integration with functional activities.

  13. Honest broker protocol streamlines research access to data while safeguarding patient privacy.

    PubMed

    Silvey, Scott A; Schulte, Janet; Smaltz, Detlev H; Kamal, Jyoti

    2008-11-06

    At Ohio State University Medical Center, The Honest Broker Protocol provides a streamlined mechanism whereby investigators can obtain de-identified clinical data for non-FDA research without having to invest the significant time and effort necessary to craft a formalized protocol for IRB approval.

  14. 28 CFR 115.221 - Evidence protocol and forensic medical examinations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... Department of Justice's Office on Violence Against Women publication, “A National Protocol for Sexual Assault... for investigating allegations of sexual abuse, the agency shall follow a uniform evidence protocol... developed after 2011. (c) The agency shall offer all victims of sexual abuse access to forensic medical...

  15. 28 CFR 115.21 - Evidence protocol and forensic medical examinations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Office on Violence Against Women publication, “A National Protocol for Sexual Assault Medical Forensic... allegations of sexual abuse, the agency shall follow a uniform evidence protocol that maximizes the potential.... (c) The agency shall offer all victims of sexual abuse access to forensic medical examinations...

  16. 28 CFR 115.221 - Evidence protocol and forensic medical examinations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... Department of Justice's Office on Violence Against Women publication, “A National Protocol for Sexual Assault... for investigating allegations of sexual abuse, the agency shall follow a uniform evidence protocol... developed after 2011. (c) The agency shall offer all victims of sexual abuse access to forensic medical...

  17. 28 CFR 115.221 - Evidence protocol and forensic medical examinations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... Department of Justice's Office on Violence Against Women publication, “A National Protocol for Sexual Assault... for investigating allegations of sexual abuse, the agency shall follow a uniform evidence protocol... developed after 2011. (c) The agency shall offer all victims of sexual abuse access to forensic medical...

  18. 28 CFR 115.21 - Evidence protocol and forensic medical examinations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Office on Violence Against Women publication, “A National Protocol for Sexual Assault Medical Forensic... allegations of sexual abuse, the agency shall follow a uniform evidence protocol that maximizes the potential.... (c) The agency shall offer all victims of sexual abuse access to forensic medical examinations...

  19. 28 CFR 115.21 - Evidence protocol and forensic medical examinations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Office on Violence Against Women publication, “A National Protocol for Sexual Assault Medical Forensic... allegations of sexual abuse, the agency shall follow a uniform evidence protocol that maximizes the potential.... (c) The agency shall offer all victims of sexual abuse access to forensic medical examinations...

  20. MAC Protocol for Ad Hoc Networks Using a Genetic Algorithm

    PubMed Central

    Elizarraras, Omar; Panduro, Marco; Méndez, Aldo L.

    2014-01-01

    The problem of obtaining the transmission rate in an ad hoc network consists of adjusting the power of each node so that the required signal-to-interference ratio (SIR) and the energy needed to transmit from one node to another are achieved at the same time. An optimal transmission rate for each node in a medium access control (MAC) protocol based on CSMA-CDMA (carrier sense multiple access-code division multiple access) for ad hoc networks can therefore be obtained using evolutionary optimization. This work proposes a genetic algorithm for transmission rate selection assuming perfect power control; our approach achieves a 10% improvement over the scheme that uses the handshaking phase to adjust the transmission rate. Furthermore, this paper proposes a genetic algorithm that jointly addresses power combining, interference, data rate, and energy while ensuring the signal-to-interference ratio in an ad hoc network. The proposed genetic algorithm performs about 15% better than the unoptimized CSMA-CDMA protocol. Simulations confirm the effectiveness of the proposed protocol in terms of throughput. PMID:25140339
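
    As an illustration of the evolutionary approach described above, the sketch below evolves a per-node rate vector under a toy SIR constraint. The gain model, SIR threshold, and GA parameters are invented for the example and are not the authors' formulation.

    ```python
    import random

    # Toy GA for per-node transmission-rate selection (illustrative only).
    # Fitness rewards aggregate rate but is zeroed when any node's
    # signal-to-interference ratio (SIR) falls below a threshold -- a
    # stand-in for the paper's CSMA-CDMA constraint, not its actual model.

    N_NODES, POP, GENS = 8, 30, 60
    RATES = [1, 2, 4, 8]                                  # candidate rates (arbitrary units)
    GAIN = [random.uniform(0.5, 1.5) for _ in range(N_NODES)]  # hypothetical link gains
    SIR_MIN = 0.12

    def sir(i, rates):
        # Higher rates at other nodes raise the interference seen by node i.
        interference = sum(GAIN[j] * rates[j] for j in range(N_NODES) if j != i) / 8.0
        return GAIN[i] / (1.0 + interference)

    def fitness(rates):
        if any(sir(i, rates) < SIR_MIN for i in range(N_NODES)):
            return 0.0
        return float(sum(rates))

    def crossover(a, b):
        cut = random.randrange(1, N_NODES)
        return a[:cut] + b[cut:]

    def mutate(ch, p=0.1):
        return [random.choice(RATES) if random.random() < p else g for g in ch]

    pop = [[random.choice(RATES) for _ in range(N_NODES)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:POP // 2]                            # keep the best half
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(POP - len(elite))]

    best = max(pop, key=fitness)
    print("best rate vector:", best, "fitness:", fitness(best))
    ```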

  1. Security Analysis and Improvements of Authentication and Access Control in the Internet of Things

    PubMed Central

    Ndibanje, Bruce; Lee, Hoon-Jae; Lee, Sang-Gon

    2014-01-01

    Internet of Things is a ubiquitous concept where physical objects are connected over the internet and are provided with unique identifiers to enable their self-identification to other devices and the ability to continuously generate data and transmit it over a network. Hence, the security of the network, data and sensor devices is a paramount concern in the IoT network as it grows very fast in terms of exchanged data and interconnected sensor nodes. This paper analyses the authentication and access control method used in the Internet of Things presented by Jing et al. (Authentication and Access Control in the Internet of Things. In Proceedings of the 2012 32nd International Conference on Distributed Computing Systems Workshops, Macau, China, 18–21 June 2012, pp. 588–592). According to our analysis, Jing et al.'s protocol is costly in message exchange, and its security assessment is not strong enough for such a protocol. Therefore, we propose improvements to the protocol to fill the discovered weakness gaps. The protocol enhancements provide users with services such as user anonymity, mutual authentication, and secure session key establishment. Finally, the performance and security analysis show that the improved protocol withstands popular attacks and achieves better efficiency at low communication cost. PMID:25123464

  2. Security analysis and improvements of authentication and access control in the Internet of Things.

    PubMed

    Ndibanje, Bruce; Lee, Hoon-Jae; Lee, Sang-Gon

    2014-08-13

    Internet of Things is a ubiquitous concept where physical objects are connected over the internet and are provided with unique identifiers to enable their self-identification to other devices and the ability to continuously generate data and transmit it over a network. Hence, the security of the network, data and sensor devices is a paramount concern in the IoT network as it grows very fast in terms of exchanged data and interconnected sensor nodes. This paper analyses the authentication and access control method used in the Internet of Things presented by Jing et al. (Authentication and Access Control in the Internet of Things. In Proceedings of the 2012 32nd International Conference on Distributed Computing Systems Workshops, Macau, China, 18-21 June 2012, pp. 588-592). According to our analysis, Jing et al.'s protocol is costly in message exchange, and its security assessment is not strong enough for such a protocol. Therefore, we propose improvements to the protocol to fill the discovered weakness gaps. The protocol enhancements provide users with services such as user anonymity, mutual authentication, and secure session key establishment. Finally, the performance and security analysis show that the improved protocol withstands popular attacks and achieves better efficiency at low communication cost.

  3. 47 CFR 79.109 - Activating accessibility features.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.109 Activating accessibility features. (a) Requirements... video programming transmitted in digital format simultaneously with sound, including apparatus designed to receive or display video programming transmitted in digital format using Internet protocol, with...

  4. VoIP Accessibility: A Usability Study of Voice over Internet Protocol (VoIP) Systems and A Survey of VoIP Users with Vision Loss

    ERIC Educational Resources Information Center

    Packer, Jaclyn; Reuschel, William

    2018-01-01

    Introduction: Accessibility of Voice over Internet Protocol (VoIP) systems was tested with a hands-on usability study and an online survey of VoIP users who are visually impaired. The survey examined the importance of common VoIP features, and both methods assessed difficulty in using those features. Methods: The usability test included four paid…

  5. Comparison of two MAC protocols based on LEO satellite networks

    NASA Astrophysics Data System (ADS)

    Guan, Mingxiang; Wang, Ruichun

    2009-12-01

    With the development of LEO satellite communication, providing various kinds of services is a basic requirement. Considering the weak channel collision detection ability, long propagation delay and heavy load in LEO satellite communication systems, an adaptive access control protocol, APRMA, is proposed. Different access probability functions for different services are obtained, and appropriate access probabilities for voice and data users are updated slot by slot based on estimates of the voice traffic and the channel status. Simulation results demonstrate that the APRMA improves system performance compared with the conventional PRMA, with an acceptable trade-off between the QoS of voice and the delay of data. The APRMA protocol is also suitable for HAPS (high altitude platform station) systems, which share the characteristics of weak channel collision detection ability, long propagation delay and heavy load.

  6. Peer Review and Publication of Research Protocols and Proposals: A Role for Open Access Journals

    PubMed Central

    2004-01-01

    Peer-review and publication of research protocols offer several advantages to all parties involved. Among these are the following opportunities for authors: external expert opinion on the methods, demonstration to funding agencies of prior expert review of the protocol, proof of priority of ideas and methods, and solicitation of potential collaborators. We think that review and publication of protocols is an important role for Open Access journals. Because of their electronic form, openness for readers, and author-pays business model, they are better suited than traditional journals to ensure the sustainability and quality of protocol reviews and publications. In this editorial, we describe the workflow for investigators in eHealth research, from protocol submission to a funding agency, to protocol review and (optionally) publication at JMIR, to registration of trials at the International eHealth Study Registry (IESR), and to publication of the report. One innovation at JMIR is that protocol peer reviewers will be paid an honorarium, which will be drawn partly from a new submission fee for protocol reviews. Separating the article processing fee into a submission and a publishing fee will allow authors to opt for “peer-review only” (without subsequent publication) at reduced costs, if they wish to await a funding decision or for other reasons decide not to make the protocol public. PMID:15471763

  7. Peer-review and publication of research protocols and proposals: a role for open access journals.

    PubMed

    Eysenbach, Gunther

    2004-09-30

    Peer-review and publication of research protocols offer several advantages to all parties involved. Among these are the following opportunities for authors: external expert opinion on the methods, demonstration to funding agencies of prior expert review of the protocol, proof of priority of ideas and methods, and solicitation of potential collaborators. We think that review and publication of protocols is an important role for Open Access journals. Because of their electronic form, openness for readers, and author-pays business model, they are better suited than traditional journals to ensure the sustainability and quality of protocol reviews and publications. In this editorial, we describe the workflow for investigators in eHealth research, from protocol submission to a funding agency, to protocol review and (optionally) publication at JMIR, to registration of trials at the International eHealth Study Registry (IESR), and to publication of the report. One innovation at JMIR is that protocol peer reviewers will be paid an honorarium, which will be drawn partly from a new submission fee for protocol reviews. Separating the article processing fee into a submission and a publishing fee will allow authors to opt for "peer-review only" (without subsequent publication) at reduced costs, if they wish to await a funding decision or for other reasons decide not to make the protocol public.

  8. A carrier sensed multiple access protocol for high data rate ring networks

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Maly, Kurt J.; Overstreet, C. Michael; Khanna, S.; Paterra, Frank

    1990-01-01

    The results of the study of a simple but effective media access protocol for high data rate networks are presented. The protocol is based on the fact that at high data rates networks can contain multiple messages simultaneously over their span, and that in a ring, nodes can detect the presence of a message arriving from the immediate upstream neighbor. When an incoming signal is detected, the node must either abort or truncate a message it is presently sending. Thus, the protocol, with local carrier sensing and multiple access, is designated CSMA/RN. The performance of CSMA/RN with attempt and truncate is studied using analytic and simulation models. Three performance factors are presented: wait (access) time, service time, and response (end-to-end travel) time. The service time is basically a function of the network rate; it changes by a factor of 1 between no load and full load. Wait time, which is zero for no load, remains small for load factors up to 70 percent of full load. Response time, which adds travel time while on the network to wait and service time, is mainly a function of network length, especially for longer-distance networks. Simulation results are shown for CSMA/RN where messages are removed at the destination. A wide range of local and metropolitan area network parameters, including variations in message size, network length, and node count, are studied. Finally, a scaling factor based upon the ratio of message to network length demonstrates that the results, and hence the CSMA/RN protocol, are applicable to wide area networks.

  9. Review and publication of protocol submissions to Trials - what have we learned in 10 years?

    PubMed

    Li, Tianjing; Boutron, Isabelle; Al-Shahi Salman, Rustam; Cobo, Erik; Flemyng, Ella; Grimshaw, Jeremy M; Altman, Douglas G

    2016-12-16

    Trials has 10 years of experience in providing open access publication of protocols for randomised controlled trials. In this editorial, the senior editors and editors-in-chief of Trials discuss editorial issues regarding managing trial protocol submissions, including the content and format of the protocol, timing of submission, approaches to tracking protocol amendments, and the purpose of peer reviewing a protocol submission. With the clarification and guidance provided, we hope we can make the process of publishing trial protocols more efficient and useful to trial investigators and readers.

  10. O How Wondrous Is E-Mail!

    ERIC Educational Resources Information Center

    Buchanan, Larry

    1998-01-01

    Addresses the use of e-mail for communication and collaborative projects in schools. Discusses the effectiveness of an e-mail system based on a UNIX host; problems with POP (post office protocol) client programs; and the new Internet Mail Access Protocol (IMAP) which addresses most of the shortcomings of the POP protocol while keeping advantages…

  11. Energy Efficient Medium Access Control Protocol for Clustered Wireless Sensor Networks with Adaptive Cross-Layer Scheduling.

    PubMed

    Sefuba, Maria; Walingo, Tom; Takawira, Fambirai

    2015-09-18

    This paper presents an Energy Efficient Medium Access Control (MAC) protocol for clustered wireless sensor networks that aims to improve energy efficiency and delay performance. The proposed protocol employs an adaptive cross-layer intra-cluster scheduling and an inter-cluster relay selection diversity. The scheduling is based on available data packets and remaining energy level of the source node (SN). This helps to minimize idle listening on nodes without data to transmit as well as reducing control packet overhead. The relay selection diversity is carried out between clusters, by the cluster head (CH), and the base station (BS). The diversity helps to improve network reliability and prolong the network lifetime. Relay selection is determined based on the communication distance, the remaining energy and the channel quality indicator (CQI) for the relay cluster head (RCH). An analytical framework for energy consumption and transmission delay for the proposed MAC protocol is presented in this work. The performance of the proposed MAC protocol is evaluated based on transmission delay, energy consumption, and network lifetime. The results obtained indicate that the proposed MAC protocol provides improved performance compared to traditional cluster-based MAC protocols.
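
    The inter-cluster relay selection described above combines communication distance, remaining energy and CQI; a minimal sketch of such a scoring rule is shown below. The linear weighting and all constants are hypothetical stand-ins, not the paper's actual metric.

    ```python
    from dataclasses import dataclass

    # Illustrative relay-cluster-head (RCH) scoring for the inter-cluster
    # diversity step: shorter distance, more residual energy and better
    # channel quality all raise the score. Weights are assumptions.

    @dataclass
    class Relay:
        name: str
        distance_m: float   # CH-to-RCH communication distance
        energy_j: float     # remaining battery energy
        cqi: float          # channel quality indicator, normalized 0..1

    def score(r: Relay, w_dist=0.4, w_energy=0.3, w_cqi=0.3,
              max_dist=100.0, max_energy=5.0):
        # Normalize each factor to 0..1; shorter distance is better.
        return (w_dist * (1 - min(r.distance_m / max_dist, 1.0))
                + w_energy * min(r.energy_j / max_energy, 1.0)
                + w_cqi * r.cqi)

    candidates = [Relay("RCH-1", 40.0, 3.2, 0.7),
                  Relay("RCH-2", 75.0, 4.8, 0.9),
                  Relay("RCH-3", 25.0, 1.1, 0.5)]
    print("selected relay:", max(candidates, key=score).name)
    ```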

  12. Energy Efficient Medium Access Control Protocol for Clustered Wireless Sensor Networks with Adaptive Cross-Layer Scheduling

    PubMed Central

    Sefuba, Maria; Walingo, Tom; Takawira, Fambirai

    2015-01-01

    This paper presents an Energy Efficient Medium Access Control (MAC) protocol for clustered wireless sensor networks that aims to improve energy efficiency and delay performance. The proposed protocol employs an adaptive cross-layer intra-cluster scheduling and an inter-cluster relay selection diversity. The scheduling is based on available data packets and remaining energy level of the source node (SN). This helps to minimize idle listening on nodes without data to transmit as well as reducing control packet overhead. The relay selection diversity is carried out between clusters, by the cluster head (CH), and the base station (BS). The diversity helps to improve network reliability and prolong the network lifetime. Relay selection is determined based on the communication distance, the remaining energy and the channel quality indicator (CQI) for the relay cluster head (RCH). An analytical framework for energy consumption and transmission delay for the proposed MAC protocol is presented in this work. The performance of the proposed MAC protocol is evaluated based on transmission delay, energy consumption, and network lifetime. The results obtained indicate that the proposed MAC protocol provides improved performance than traditional cluster based MAC protocols. PMID:26393608

  13. Zero-Copy Objects System

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

    Zero-Copy Objects System software enables application data to be encapsulated in layers of communication protocol without being copied. Indirect referencing enables application source data, either in memory or in a file, to be encapsulated in place within an unlimited number of protocol headers and/or trailers. Zero-copy objects (ZCOs) are abstract data access representations designed to minimize I/O (input/output) in the encapsulation of application source data within one or more layers of communication protocol structure. They are constructed within the heap space of a Simple Data Recorder (SDR) data store to which all participating layers of the stack must have access. Each ZCO contains general information enabling access to the core source data object (an item of application data), together with (a) a linked list of zero or more specific extents that reference portions of this source data object, and (b) linked lists of protocol header and trailer capsules. The concatenation of the headers (in ascending stack sequence), the source data object extents, and the trailers (in descending stack sequence) constitute the transmitted data object constructed from the ZCO. This scheme enables a source data object to be encapsulated in a succession of protocol layers without ever having to be copied from a buffer at one layer of the protocol stack to an encapsulating buffer at a lower layer of the stack. For large source data objects, the savings in copy time and reduction in memory consumption may be considerable.
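
    The ZCO layout described above can be sketched schematically as follows. The class and method names are hypothetical, the encapsulation ordering follows the successive-layering description loosely, and the flight implementation is C code operating inside the SDR heap rather than Python.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    # Schematic model of a zero-copy object (ZCO): the transmitted frame is
    # the concatenation of protocol headers, the referenced source-data
    # extents, and trailers. memoryview slices stand in for references into
    # the SDR heap, so the payload bytes are never copied per layer.

    @dataclass
    class Extent:
        source: bytes        # stands in for a reference into heap or file
        offset: int
        length: int

        def view(self) -> memoryview:
            # Slicing a memoryview does not copy the underlying bytes.
            return memoryview(self.source)[self.offset:self.offset + self.length]

    @dataclass
    class ZeroCopyObject:
        extents: List[Extent] = field(default_factory=list)
        headers: List[bytes] = field(default_factory=list)
        trailers: List[bytes] = field(default_factory=list)

        def encapsulate(self, header: bytes, trailer: bytes = b"") -> None:
            # Each protocol layer adds capsules without touching the extents.
            self.headers.append(header)
            if trailer:
                self.trailers.append(trailer)

        def transmit_order(self):
            # The last layer to encapsulate sits outermost on the frame.
            yield from reversed(self.headers)
            for e in self.extents:
                yield e.view()
            yield from self.trailers

    payload = b"application source data"
    zco = ZeroCopyObject(extents=[Extent(payload, 0, len(payload))])
    zco.encapsulate(b"[bundle-hdr]")
    zco.encapsulate(b"[ltp-hdr]", b"[ltp-trl]")
    print(b"".join(bytes(part) for part in zco.transmit_order()))
    ```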

  14. Quantum Tomography Protocols with Positivity are Compressed Sensing Protocols (Open Access)

    DTIC Science & Technology

    2015-12-08

    Quantum tomography protocols with positivity are compressed sensing protocols. Kalev, Amir; Kosut, Robert L.; Deutsch, Ivan H. Characterising complex quantum systems is a vital task in quantum information science. Quantum tomography, the standard tool used for this purpose, uses a well-designed measurement record to reconstruct quantum states and processes. It is, however, notoriously inefficient. Recently, the classical signal…

  15. 47 CFR 9.7 - Access to 911 and E911 service capabilities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Access to 911 and E911 service capabilities. 9.7 Section 9.7 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL INTERCONNECTED VOICE OVER INTERNET PROTOCOL SERVICES § 9.7 Access to 911 and E911 service capabilities. (a) Access. Subject to the...

  16. 47 CFR 9.7 - Access to 911 and E911 service capabilities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false Access to 911 and E911 service capabilities. 9.7 Section 9.7 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL INTERCONNECTED VOICE OVER INTERNET PROTOCOL SERVICES § 9.7 Access to 911 and E911 service capabilities. (a) Access. Subject to the...

  17. 47 CFR 9.7 - Access to 911 and E911 service capabilities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false Access to 911 and E911 service capabilities. 9.7 Section 9.7 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL INTERCONNECTED VOICE OVER INTERNET PROTOCOL SERVICES § 9.7 Access to 911 and E911 service capabilities. (a) Access. Subject to the...

  18. Guidelines for Outsourcing Remote Access.

    ERIC Educational Resources Information Center

    Hassler, Ardoth; Neuman, Michael

    1996-01-01

    Discusses the advantages and disadvantages of outsourcing remote access to campus computer networks and the Internet, focusing on improved service, cost-sharing, partnerships with vendors, supported protocols, bandwidth, scope of access, implementation, support, network security, and pricing. Includes a checklist for a request for proposals on…

  19. Implementation of a written protocol for management of central venous access devices: a theoretical and practical education, including bedside examinations.

    PubMed

    Ahlin, Catharina; Klang-Söderkvist, Birgitta; Brundin, Seija; Hellström, Birgitta; Pettersson, Karin; Johansson, Eva

    2006-01-01

    The objectives of this study were to evaluate registered nurses' (RN) compliance with a local clinical central venous access device (CVAD) protocol after completing an educational program and to determine RNs' perception of the program. Seventy-five RNs working in hematology participated in the educational part of the program. Sixty-eight RNs were examined while changing CVAD dressings or placing a Huber needle into a port on actual patients. Sixty percent of the RNs passed the examination and reported that the program increased their knowledge. The results indicated that the educational program could be recommended for use when implementing a new clinical protocol.

  20. The USGODAE Monterey Data Server

    NASA Astrophysics Data System (ADS)

    Sharfstein, P. J.; Dimitriou, D.; Hankin, S. C.

    2004-12-01

    With oversight from the U.S. Global Ocean Data Assimilation Experiment (GODAE) Steering Committee and funding from the Office of Naval Research, the USGODAE Monterey Data Server has been established at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) as an explicit U.S. contribution to GODAE. Support of the Monterey Data Server is accomplished by a cooperative effort between FNMOC and NOAA's Pacific Marine Environmental Laboratory (PMEL) in the on-going development of the server and the support of a collaborative network of GODAE assimilation groups. This server hosts near real-time in-situ oceanographic data, atmospheric forcing fields suitable for driving ocean models, and unique GODAE data sets, including demonstration ocean model products. GODAE is envisioned as a global system of observations, communications, modeling and assimilation, which will deliver regular, comprehensive information on the state of the oceans in a way that will promote and engender wide utility and availability of this resource for maximum benefit to society. It aims to make ocean monitoring and prediction a routine activity in a manner similar to weather forecasting. GODAE will contribute to an information system for the global ocean that will serve interests from climate and climate change to ship routing and fisheries. The USGODAE Server is developed and operated as a prototypical node for this global information system. Because of the broad range and diverse formats of data used by the GODAE community, presenting data with a consistent interface and ensuring its availability in standard formats is a primary challenge faced by the USGODAE Server project. To this end, all USGODAE data sets are available via HTTP and FTP. In addition, USGODAE data are served using Local Data Manager (LDM), THREDDS cataloging, OPeNDAP, and Live Access Server (LAS) from PMEL. Every effort is made to serve USGODAE data through the standards specified by the National Virtual Ocean Data System (NVODS) and the Integrated Ocean Observing System Data Management and Communications (IOOS/DMAC). To provide surface forcing, fluxes, and boundary conditions for ocean model research, USGODAE serves global data from the Navy Operational Global Atmospheric Prediction System (NOGAPS) and regional data from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS). Global meteorological data and observational data from the FNMOC Ocean QC process are posted in near real-time to USGODAE. These include T/S profiles, in-situ and satellite sea surface temperature (SST), satellite altimetry, and SSM/I sea ice. They contain all of the unclassified in-situ and satellite observations used to initialize the FNMOC NOGAPS model. Also, the Naval Oceanographic Office provides daily satellite SST and SSH retrievals to USGODAE. The USGODAE Server functions as one of two Argo Global Data Assembly Centers (GDACs), hosting the complete collection of quality-controlled Argo T/S profiling float data. USGODAE Argo data are served through OPeNDAP and LAS, providing complete integration into NVODS and the IOOS/DMAC. Due to its high reliability, ease of data access, and increasing breadth of data, the USGODAE Server is becoming an invaluable resource for both the GODAE community and the general oceanographic community. Continued integration of model, forcing, and in-situ data sets from providers throughout the world is making the USGODAE Monterey Data Server a key part of the international GODAE project.

  1. Bearer channel control protocol for the dynamic VB5.2 interface in ATM access networks

    NASA Astrophysics Data System (ADS)

    Fragoulopoulos, Stratos K.; Mavrommatis, K. I.; Venieris, Iakovos S.

    1996-12-01

    In multi-vendor systems, a customer connected to an Access Network (AN) must be capable of selecting a specific Service Node (SN) according to the services the SN provides. The multiplicity of technologically varying ANs calls for the definition of a standard reference point between the AN and the SN, widely known as the VB interface. Two versions are currently offered. The VB5.1 is simpler to implement but is not as flexible as the VB5.2, which supports switched connections. The VB5.2 functionality is closely coupled to the Broadband Bearer Channel Connection Protocol (B-BCCP). The B-BCCP is used for conveying the necessary information for dynamic resource allocation, traffic policing and routing in the AN, as well as for information exchange concerning the status of the AN before a new call is established by the SN. By relying on such a protocol for the exchange of information instead of intercepting and interpreting signalling messages in the AN, the architecture of the AN is simplified because processing-related functionality is not duplicated. In this paper a prominent B-BCCP candidate is defined, called the Service node Access network Interaction Protocol.

  2. The deployment of a large scale object store at the RAL Tier-1

    NASA Astrophysics Data System (ADS)

    Dewhurst, A.; Johnson, I.; Adams, J.; Canning, B.; Vasilakakos, G.; Packer, A.

    2017-10-01

    Since 2014, the RAL Tier-1 has been working on deploying a Ceph-backed object store. The aim is to replace Castor for disk-only storage. This new service must be scalable to meet the data demands of the LHC to 2020 and beyond. As well as offering access protocols the LHC experiments currently use, it must also provide industry standard access protocols. In order to keep costs down the service must use erasure coding rather than replication to ensure data reliability. This paper will present details of the storage service setup, which has been named Echo, as well as the experience gained from running it. The RAL Tier-1 has also been developing XrootD and GridFTP plugins for Ceph. Both plugins are built on top of the same libraries that write striped data into Ceph, and therefore data written by one protocol will be accessible by the other. In the long term we hope the LHC experiments will migrate to industry standard protocols, so these plugins will only provide the features needed by the LHC experiments. This paper will report on the development and testing of these plugins.

  3. Increasing the Accessibility of Science for All Students

    ERIC Educational Resources Information Center

    Langley-Turnbaugh, S. J.; Wilson, G.; Lovewell, L.

    2009-01-01

    This paper evaluates the accessibility of selected field and laboratory high school science activities, and provides suggestions for increasing accessibility for students with disabilities. We focused on GLOBE (Global Learning Observations to Benefit the Environment) protocols, specifically the new Seasons and Biomes investigation currently being…

  4. Traffic placement policies for a multi-band network

    NASA Technical Reports Server (NTRS)

    Maly, Kurt J.; Foudriat, E. C.; Game, David; Mukkamala, R.; Overstreet, C. Michael

    1990-01-01

    Recently protocols were introduced that enable the integration of synchronous traffic (voice or video) and asynchronous traffic (data) and extend the size of local area networks without loss in speed or capacity. One of these is DRAMA, a multiband protocol based on broadband technology. It provides dynamic allocation of bandwidth among clusters of nodes in the total network. A number of traffic placement policies for such networks are proposed and evaluated. Metrics used for performance evaluation include average network access delay, degree of fairness of access among the nodes, and network throughput. The feasibility of the DRAMA protocol is established through simulation studies. DRAMA provides effective integration of synchronous and asychronous traffic due to its ability to separate traffic types. Under the suggested traffic placement policies, the DRAMA protocol is shown to handle diverse loads, mixes of traffic types, and numbers of nodes, as well as modifications to the network structure and momentary traffic overloads.

  5. A MAC Protocol for Medical Monitoring Applications of Wireless Body Area Networks

    PubMed Central

    Shu, Minglei; Yuan, Dongfeng; Zhang, Chongqing; Wang, Yinglong; Chen, Changfang

    2015-01-01

    Targeting the medical monitoring applications of wireless body area networks (WBANs), a hybrid medium access control protocol using an interrupt mechanism (I-MAC) is proposed to improve the energy and time slot utilization efficiency and to meet the data delivery delay requirement at the same time. Unlike existing hybrid MAC protocols, a superframe structure with a longer length is adopted to avoid unnecessary beacons. The time slots are mostly allocated to nodes with periodic data sources. Short interruption slots are inserted into the superframe to convey the urgent data and to guarantee the real-time requirements of these data. During these interruption slots, the coordinator can break the running superframe and start a new superframe. A contention access period (CAP) is only activated when there are more data that need to be delivered. Experimental results show the effectiveness of the proposed MAC protocol in WBANs with low urgent traffic. PMID:26046596

  6. Agreements between Industry and Academia on Publication Rights: A Retrospective Study of Protocols and Publications of Randomized Clinical Trials.

    PubMed

    Kasenda, Benjamin; von Elm, Erik; You, John J; Blümle, Anette; Tomonaga, Yuki; Saccilotto, Ramon; Amstutz, Alain; Bengough, Theresa; Meerpohl, Joerg J; Stegert, Mihaela; Olu, Kelechi K; Tikkinen, Kari A O; Neumann, Ignacio; Carrasco-Labra, Alonso; Faulhaber, Markus; Mulla, Sohail M; Mertz, Dominik; Akl, Elie A; Bassler, Dirk; Busse, Jason W; Ferreira-González, Ignacio; Lamontagne, Francois; Nordmann, Alain; Gloy, Viktoria; Raatz, Heike; Moja, Lorenzo; Ebrahim, Shanil; Schandelmaier, Stefan; Sun, Xin; Vandvik, Per O; Johnston, Bradley C; Walter, Martin A; Burnand, Bernard; Schwenkglenks, Matthias; Hemkens, Lars G; Bucher, Heiner C; Guyatt, Gordon H; Briel, Matthias

    2016-06-01

    Little is known about publication agreements between industry and academic investigators in trial protocols and the consistency of these agreements with corresponding statements in publications. We aimed to investigate (i) the existence and types of publication agreements in trial protocols, (ii) the completeness and consistency of the reporting of these agreements in subsequent publications, and (iii) the frequency of co-authorship by industry employees. We used a retrospective cohort of randomized clinical trials (RCTs) based on archived protocols approved by six research ethics committees between 13 January 2000 and 25 November 2003. Only RCTs with industry involvement were eligible. We investigated the documentation of publication agreements in RCT protocols and statements in corresponding journal publications. Of 647 eligible RCT protocols, 456 (70.5%) mentioned an agreement regarding publication of results. Of these 456, 393 (86.2%) documented an industry partner's right to disapprove or at least review proposed manuscripts; 39 (8.6%) agreements were without constraints of publication. The remaining 24 (5.3%) protocols referred to separate agreement documents not accessible to us. Of those 432 protocols with an accessible publication agreement, 268 (62.0%) trials were published. Most agreements documented in the protocol were not reported in the subsequent publication (197/268 [73.5%]). Of 71 agreements reported in publications, 52 (73.2%) were concordant with those documented in the protocol. In 14 of 37 (37.8%) publications in which statements suggested unrestricted publication rights, at least one co-author was an industry employee. In 25 protocol-publication pairs, author statements in publications suggested no constraints, but 18 corresponding protocols documented restricting agreements. Publication agreements constraining academic authors' independence are common. Journal articles seldom report on publication agreements, and, if they do, statements can be discrepant with the trial protocol.

  7. PANATIKI: A Network Access Control Implementation Based on PANA for IoT Devices

    PubMed Central

    Sanchez, Pedro Moreno; Lopez, Rafa Marin; Gomez Skarmeta, Antonio F.

    2013-01-01

    Internet of Things (IoT) networks are the pillar of recent novel scenarios, such as smart cities or e-healthcare applications. Among other challenges, these networks cover the deployment and interaction of small devices with constrained capabilities and Internet protocol (IP)-based networking connectivity. These constrained devices usually require connection to the Internet to exchange information (e.g., management or sensing data) or access network services. However, only authenticated and authorized devices can, in general, establish this connection. The so-called authentication, authorization and accounting (AAA) services are in charge of performing these tasks on the Internet. Thus, it is necessary to deploy protocols that allow constrained devices to verify their credentials against AAA infrastructures. The Protocol for Carrying Authentication for Network Access (PANA) has been standardized by the Internet engineering task force (IETF) to carry the Extensible Authentication Protocol (EAP), which provides flexible authentication upon the presence of AAA. To the best of our knowledge, this paper is the first deep study of the feasibility of EAP/PANA for network access control in constrained devices. We provide light-weight versions and implementations of these protocols to fit them into constrained devices. These versions have been designed to reduce the impact in standard specifications. The goal of this work is two-fold: (1) to demonstrate the feasibility of EAP/PANA in IoT devices; (2) to provide the scientific community with the first light-weight interoperable implementation of EAP/PANA for constrained devices in the Contiki operating system (Contiki OS), called PANATIKI. The paper also shows a testbed, simulations and experimental results obtained from real and simulated constrained devices. PMID:24189332

  8. PANATIKI: a network access control implementation based on PANA for IoT devices.

    PubMed

    Moreno Sanchez, Pedro; Marin Lopez, Rafa; Gomez Skarmeta, Antonio F

    2013-11-01

    Internet of Things (IoT) networks are the pillar of recent novel scenarios, such as smart cities or e-healthcare applications. Among other challenges, these networks cover the deployment and interaction of small devices with constrained capabilities and Internet protocol (IP)-based networking connectivity. These constrained devices usually require connection to the Internet to exchange information (e.g., management or sensing data) or access network services. However, only authenticated and authorized devices can, in general, establish this connection. The so-called authentication, authorization and accounting (AAA) services are in charge of performing these tasks on the Internet. Thus, it is necessary to deploy protocols that allow constrained devices to verify their credentials against AAA infrastructures. The Protocol for Carrying Authentication for Network Access (PANA) has been standardized by the Internet engineering task force (IETF) to carry the Extensible Authentication Protocol (EAP), which provides flexible authentication upon the presence of AAA. To the best of our knowledge, this paper is the first deep study of the feasibility of EAP/PANA for network access control in constrained devices. We provide light-weight versions and implementations of these protocols to fit them into constrained devices. These versions have been designed to reduce the impact in standard specifications. The goal of this work is two-fold: (1) to demonstrate the feasibility of EAP/PANA in IoT devices; (2) to provide the scientific community with the first light-weight interoperable implementation of EAP/PANA for constrained devices in the Contiki operating system (Contiki OS), called PANATIKI. The paper also shows a testbed, simulations and experimental results obtained from real and simulated constrained devices.

  9. Using Internet Audio to Enhance Online Accessibility

    ERIC Educational Resources Information Center

    Schwartz, Linda Matula

    2004-01-01

    Accessibility to online education programs is an important factor that requires continued research, improvement, and regulation. Particularly valuable in the enhancement of online accessibility is the Voice-over Internet Protocol (VOIP) medium. VOIP compresses analog voice data and converts it into digital packets for transmission over the…

  10. NADA Protocol for Behavioral Health. Putting Tools in the Hands of Behavioral Health Providers: The Case for Auricular Detoxification Specialists.

    PubMed

    Stuyt, Elizabeth B; Voyles, Claudia A; Bursac, Sara

    2018-02-07

    Background: The National Acupuncture Detoxification Association (NADA) protocol, a simple standardized auricular treatment, has the potential to provide vast public health relief on issues currently challenging our world. This includes but is not limited to addiction, such as the opioid epidemic, but also encompasses mental health, trauma, PTSD, chronic stress, and the symptoms associated with these conditions. Simple accessible tools that improve outcomes can make profound differences. We assert that the NADA protocol can have greatest impact when broadly applied by behavioral health professionals, Auricular Detoxification Specialists (ADSes). Methods: The ADS concept is described, along with how current laws vary from state to state. Using available national data, a survey of practitioners in three selected states with vastly different laws regarding ADSes, and interviews of publicly funded programs which are successfully incorporating the NADA protocol, we consider possible effects of ADS-friendly conditions. Results: The data presented support the idea that conditions conducive to ADS practice lead to greater implementation. Program interviews reflect settings in which adding ADSes can in turn lead to improved outcomes. Discussion: The primary purpose of non-acupuncturist ADSes is to expand access to this simple but effective treatment to all who are suffering from addictions, stress, or trauma, and to allow programs to incorporate acupuncture in the form of the NADA protocol at minimal cost, when and where it is needed. States that have changed laws to allow ADS practice for this standardized ear acupuncture protocol have seen increased access to this treatment, benefiting both patients and the programs.

  11. Similarity searching and scaffold hopping in synthetically accessible combinatorial chemistry spaces.

    PubMed

    Boehm, Markus; Wu, Tong-Ying; Claussen, Holger; Lemmen, Christian

    2008-04-24

    Large collections of combinatorial libraries are an integral element in today's pharmaceutical industry. It is of great interest to perform similarity searches against all virtual compounds that are synthetically accessible by any such library. Here we describe the successful application of a new software tool CoLibri on 358 combinatorial libraries based on validated reaction protocols to create a single chemistry space containing over 10^12 possible products. Similarity searching with FTrees-FS allows the systematic exploration of this space without the need to enumerate all product structures. The search result is a set of virtual hits which are synthetically accessible by one or more of the existing reaction protocols. Grouping these virtual hits by their synthetic protocols allows the rapid design and synthesis of multiple follow-up libraries. Such library ideas support hit-to-lead design efforts for tasks like follow-up from high-throughput screening hits or scaffold hopping from one hit to another attractive series.
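
    To make the idea of ranking virtual products concrete, here is a generic fingerprint-similarity sketch. Note that FTrees-FS itself compares feature-tree descriptors and never enumerates the product space, so this illustrates similarity searching in general, not the CoLibri/FTrees-FS internals; all feature names are invented.

    ```python
    # Generic similarity search over feature-set "fingerprints", shown only
    # to illustrate ranking candidates against a query structure.

    def tanimoto(a: set, b: set) -> float:
        # Tanimoto (Jaccard) coefficient on feature sets.
        return len(a & b) / len(a | b) if a | b else 0.0

    query = {"aromatic_ring", "amide", "halogen"}
    virtual_products = {
        "product_1": {"aromatic_ring", "amide", "hydroxyl"},
        "product_2": {"aliphatic_chain", "amine"},
        "product_3": {"aromatic_ring", "amide", "halogen", "ether"},
    }

    hits = sorted(virtual_products.items(),
                  key=lambda kv: tanimoto(query, kv[1]), reverse=True)
    for name, feats in hits:
        print(f"{name}: {tanimoto(query, feats):.2f}")
    ```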

  12. Geospatial Brokering - Challenges and Future Directions

    NASA Astrophysics Data System (ADS)

    White, C. E.

    2012-12-01

    An important feature of many brokers is to facilitate straightforward human access to scientific data while maintaining programmatic access to it for system solutions. Standards-based protocols are critical for this, and there are a number of protocols to choose from. In this discussion, we will present a web application solution that leverages certain protocols - e.g., OGC CSW, REST, and OpenSearch - to provide programmatic as well as human access to geospatial resources. We will also discuss managing resources to reduce duplication yet increase discoverability, federated search solutions, and architectures that combine human-friendly interfaces with powerful underlying data management. The changing requirements witnessed in brokering solutions over time, our recent experience participating in the EarthCube brokering hack-a-thon, and evolving interoperability standards provide insight to future technological and philosophical directions planned for geospatial broker solutions. There has been much change over the past decade, but with the unprecedented data collaboration of recent years, in many ways the challenges and opportunities are just beginning.
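
    Programmatic access via OpenSearch, for example, typically reduces to a templated HTTP query. The endpoint and parameter names below are hypothetical placeholders for what a broker's OpenSearch description document would actually define.

    ```python
    import requests

    # Minimal OpenSearch-style query against a geospatial broker.
    # ENDPOINT and the parameter names are assumptions for illustration.

    ENDPOINT = "https://broker.example.org/opensearch"   # hypothetical
    params = {
        "q": "sea surface temperature",   # free-text search term
        "bbox": "-130,30,-110,50",        # lon/lat bounding box: W,S,E,N
        "format": "atom",                 # many brokers also offer json
        "count": 10,
    }

    resp = requests.get(ENDPOINT, params=params, timeout=30)
    resp.raise_for_status()
    print(resp.text[:500])   # Atom feed of matching dataset records
    ```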

  13. Patient-Reported Outcome (PRO) Assessment in Clinical Trials: A Systematic Review of Guidance for Trial Protocol Writers

    PubMed Central

    Calvert, Melanie; Kyte, Derek; Duffy, Helen; Gheorghe, Adrian; Mercieca-Bebber, Rebecca; Ives, Jonathan; Draper, Heather; Brundage, Michael; Blazeby, Jane; King, Madeleine

    2014-01-01

    Background: Evidence suggests there are inconsistencies in patient-reported outcome (PRO) assessment and reporting in clinical trials, which may limit the use of these data to inform patient care. For trials with a PRO endpoint, routine inclusion of key PRO information in the protocol may help improve trial conduct and the reporting and appraisal of PRO results; however, it is currently unclear exactly what PRO-specific information should be included. The aim of this review was to summarize the current PRO-specific guidance for clinical trial protocol developers. Methods and Findings: We searched the MEDLINE, EMBASE, CINHAL and Cochrane Library databases (inception to February 2013) for PRO-specific guidance regarding trial protocol development. Further guidance documents were identified via Google, Google scholar, requests to members of the UK Clinical Research Collaboration registered clinical trials units and international experts. Two independent investigators undertook title/abstract screening, full text review and data extraction, with a third involved in the event of disagreement. 21,175 citations were screened and 54 met the inclusion criteria. Guidance documents were difficult to access: electronic database searches identified just 8 documents, with the remaining 46 sourced elsewhere (5 from citation tracking, 27 from hand searching, 7 from the grey literature review and 7 from experts). 162 unique PRO-specific protocol recommendations were extracted from included documents. A further 10 PRO recommendations were identified relating to supporting trial documentation. Only 5/162 (3%) recommendations appeared in ≥50% of guidance documents reviewed, indicating a lack of consistency. Conclusions: PRO-specific protocol guidelines were difficult to access, lacked consistency and may be challenging to implement in practice. There is a need to develop easily accessible consensus-driven PRO protocol guidance. Guidance should be aimed at ensuring key PRO information is routinely included in appropriate trial protocols, in order to facilitate rigorous collection/reporting of PRO data, to effectively inform patient care. PMID:25333995

  14. Strategies for Optimal MAC Parameters Tuning in IEEE 802.15.6 Wearable Wireless Sensor Networks.

    PubMed

    Alam, Muhammad Mahtab; Ben Hamida, Elyes

    2015-09-01

    Wireless body area networks (WBAN) have penetrated immensely in revolutionizing the classical health-care system. Recently, a number of WBAN applications have emerged which introduce potential limits to existing solutions. In particular, the IEEE 802.15.6 standard has provided great flexibility, provisions and capabilities to deal with emerging applications. In this paper, we investigate the application-specific throughput analysis by fine-tuning the physical (PHY) and medium access control (MAC) parameters of the IEEE 802.15.6 standard. Based on PHY characterizations in narrow band, at the MAC layer, carrier sense multiple access collision avoidance (CSMA/CA) and scheduled access protocols are extensively analyzed. It is concluded that the IEEE 802.15.6 standard can satisfy the throughput requirements of most WBAN applications, achieving a maximum of 680 Kbps. However, for emerging applications that require high quality audio or video transmission, the standard is not able to meet their constraints. Moreover, delay, energy efficiency and successful packet reception are considered key performance metrics for comparing the MAC protocols. The CSMA/CA protocol provides the best results for meeting the delay constraints of medical and non-medical WBAN applications, whereas the scheduled access approach performs very well in both energy efficiency and packet reception ratio.

  15. Internet and Intranet Use with a PC: Effects of Adapter Cards, Windows Versions and TCP/IP Software on Networking Performance.

    ERIC Educational Resources Information Center

    Nieuwenhuysen, Paul

    1997-01-01

    Explores data transfer speeds obtained with various combinations of hardware and software components through a study of access to the Internet from a notebook computer connected to a local area network based on Ethernet and TCP/IP (transmission control protocol/Internet protocol) network protocols. Upgrading is recommended for higher transfer…

  16. Applications of Multi-Channel Safety Authentication Protocols in Wireless Networks.

    PubMed

    Chen, Young-Long; Liau, Ren-Hau; Chang, Liang-Yu

    2016-01-01

    People can use their web browsers or mobile devices to access web services and applications built into these servers. Users have to input their identity and password to log in to the server. The identity and password may be appropriated by hackers when the network environment is not safe. A multiple secure authentication protocol can improve the security of the network environment. Mobile devices can be used to pass the authentication messages through Wi-Fi or 3G networks to serve as a second communication channel. However, existing multiple secure authentication protocols do not take the number of messages exchanged into account: the more messages are transmitted, the easier they are for hackers to collect and decode. In this paper, we propose two schemes which allow the server to validate the user and reduce the number of messages using the XOR operation. Our schemes can improve the security of the authentication protocol. The experimental results show that our proposed authentication protocols are more secure and effective. In regard to applications of second authentication communication channels for a smart access control system, identity identification and E-wallet, our proposed authentication protocols can ensure the safety of persons and property, and achieve more effective security management mechanisms.
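
    To illustrate the XOR idea only (this is not the authors' scheme and is not secure as written), the toy exchange below folds a session key into a single masked message, so no extra round trip is needed to carry the key separately.

    ```python
    import hashlib
    import os

    # Didactic XOR-based key delivery: the server masks a fresh session key
    # with a keyed digest so the client can recover it with one XOR.

    def h(*parts: bytes) -> bytes:
        return hashlib.sha256(b"|".join(parts)).digest()

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    shared_secret = h(b"user-password")   # established out of band
    nonce = os.urandom(32)                # server challenge
    session_key = os.urandom(32)

    # Server -> client: nonce plus the session key masked by a keyed digest.
    masked = xor(session_key, h(shared_secret, nonce))

    # Client recovers the session key with a single XOR.
    recovered = xor(masked, h(shared_secret, nonce))
    assert recovered == session_key
    print("session key established:", recovered.hex()[:16], "...")
    ```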

  17. Common Data Models and Efficient Reproducible Workflows for Distributed Ocean Model Skill Assessment

    NASA Astrophysics Data System (ADS)

    Signell, R. P.; Snowden, D. P.; Howlett, E.; Fernandes, F. A.

    2014-12-01

    Model skill assessment requires discovery, access, analysis, and visualization of information from both sensors and models, and traditionally has been possible only for a few experts. The US Integrated Ocean Observing System (US-IOOS) consists of 17 Federal Agencies and 11 Regional Associations that produce data from various sensors and numerical models: exactly the information required for model skill assessment. US-IOOS is seeking to develop documented skill assessment workflows that are standardized, efficient, and reproducible so that a much wider community can participate in the use and assessment of model results. Standardization requires common data models for observational and model data. US-IOOS relies on the CF Conventions for observations and structured grid data, and on the UGRID Conventions for unstructured (e.g. triangular) grid data. This allows applications to obtain only the data they require in a uniform and parsimonious way using web services: OPeNDAP for model output and OGC Sensor Observation Service (SOS) for observed data. Reproducibility is enabled with IPython Notebooks shared on GitHub (http://github.com/ioos). These capture the entire skill assessment workflow, including user input, search, access, analysis, and visualization, ensuring that workflows are self-documenting and reproducible by anyone, using free software. Python packages required to run the workflows (pyugrid for the UGRID common data model, pyoos, and the British Met Office Iris package) are available on GitHub and on Binstar.org so that users can run scenarios using the free Anaconda Python distribution. Hosted services such as Wakari enable anyone to reproduce these workflows for free, without installing any software locally, using just their web browser. We are also experimenting with Wakari Enterprise, which allows multi-user access from a web browser to an IPython Server running where large quantities of model output reside, increasing the efficiency. The open development and distribution of these workflows, and the software on which they depend, is an educational resource for those new to the field and a center of focus where practitioners can contribute new software and ideas.
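
    A typical first step in such a workflow is parsimonious OPeNDAP access, where only the requested slab of model output crosses the network. The URL, variable name and array shape below are hypothetical stand-ins for a CF-compliant model endpoint.

    ```python
    from netCDF4 import Dataset   # netCDF4 built with OPeNDAP (DAP) support

    # Subset a model field over OPeNDAP: opening the URL transfers only
    # metadata; indexing the variable fetches just the requested slab.

    URL = "http://example.org/thredds/dodsC/model/forecast.nc"  # hypothetical
    nc = Dataset(URL)
    sst = nc.variables["temp"]               # assumed 4-D: time, level, y, x
    surface = sst[0, 0, 100:200, 100:200]    # one time, one level, 100x100 tile
    print(surface.shape, float(surface.mean()))
    nc.close()
    ```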

  18. The TDR: A Repository for Long Term Storage of Geophysical Data and Metadata

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Baltzer, T.; Caron, J.

    2006-12-01

    For many years Unidata has provided easy, low cost data access to universities and research labs. Historically Unidata technology provided access to data in near real time. In recent years Unidata has additionally turned to providing middleware to serve longer term data and associated metadata via its THREDDS technology, the most recent offering being the THREDDS Data Server (TDS). The TDS provides middleware for metadata access and management, OPeNDAP data access, and integration with the Unidata Integrated Data Viewer (IDV), among other benefits. The TDS was designed to support rolling archives of data, that is, data that exist only for a relatively short, predefined time window. Now we are creating an addition to the TDS, called the THREDDS Data Repository (TDR), which allows users to store and retrieve data and other objects for an arbitrarily long time period. Data in the TDR can also be served by the TDS. The TDR performs important functions of locating storage for the data, moving the data to and from the repository, assigning unique identifiers, and generating metadata. The TDR framework supports pluggable components that allow tailoring an implementation for a particular application. The Linked Environments for Atmospheric Discovery (LEAD) project provides an excellent use case for the TDR. LEAD is a multi-institutional Large Information Technology Research project funded by the National Science Foundation (NSF). The goal of LEAD is to create a framework based on Grid and Web Services to support mesoscale meteorology research and education. This includes capabilities such as launching forecast models, mining data for meteorological phenomena, and dynamic workflows that are automatically reconfigurable in response to changing weather. LEAD presents unique challenges in managing and storing large data volumes from real-time observational systems as well as data that are dynamically created during the execution of adaptive workflows. For example, in order to support storage of many large data products, the LEAD implementation of the TDR will provide a variety of data movement options, including gridftp. It will have a web service interface and will be callable programmatically as well as via interactive user requests. Future plans include the use of a mass storage device to provide robust long term storage. This talk will present the current state of the TDR effort.

  19. 21 CFR 1311.130 - Requirements for establishing logical access control-Institutional practitioner.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... practitioner's hard token or any other authentication factor required by the practitioner's two-factor authentication protocol is lost, stolen, or compromised. Such access must be terminated immediately upon...

  20. Improving accessibility and discovery of ESA planetary data through the new planetary science archive

    NASA Astrophysics Data System (ADS)

    Macfarlane, A. J.; Docasal, R.; Rios, C.; Barbarisi, I.; Saiz, J.; Vallejo, F.; Besse, S.; Arviset, C.; Barthelemy, M.; De Marchi, G.; Fraga, D.; Grotheer, E.; Heather, D.; Lim, T.; Martinez, S.; Vallat, C.

    2018-01-01

    The Planetary Science Archive (PSA) is the European Space Agency's (ESA) repository of science data from all planetary science and exploration missions. The PSA provides access to scientific data sets through various interfaces at http://psa.esa.int. Driven largely by the evolution of the PDS standards, which all new ESA planetary missions shall follow, and by the need to update the interfaces to the archive, the PSA has undergone a major re-engineering. In order to maximise the scientific exploitation of ESA's planetary data holdings, significant improvements have been made by utilising the latest technologies and implementing widely recognised open standards. To help users handle and visualise the many products in the archive that have associated spatial data, the new PSA supports Geographical Information Systems (GIS) by implementing the standards approved by the Open Geospatial Consortium (OGC). The modernised PSA also attempts to increase interoperability with the international community by implementing recognised planetary-science-specific protocols such as PDAP (Planetary Data Access Protocol) and EPN-TAP (EuroPlanet Table Access Protocol). In this paper we describe some of the methods by which the archive may be accessed, and present the challenges faced in consolidating data sets of the older PDS3 version of the standards with the new PDS4 deliveries into a single data model mapping, so as to ensure transparent access to the data for users and services whilst maintaining high performance.
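
    For a sense of how an EPN-TAP service is queried, the sketch below issues a synchronous ADQL query using the generic TAP /sync convention. The service URL and table name are hypothetical placeholders; granule_uid, target_name, and access_url are standard EPN-TAP core columns.

        # Hedged sketch: synchronous ADQL query against an EPN-TAP service.
        # The endpoint and table name are hypothetical placeholders.
        import requests

        TAP_SYNC = "http://example.org/tap/sync"  # hypothetical EPN-TAP endpoint
        adql = ("SELECT TOP 10 granule_uid, target_name, access_url "
                "FROM schema.epn_core WHERE target_name = 'Mars'")

        resp = requests.get(TAP_SYNC,
                            params={"REQUEST": "doQuery", "LANG": "ADQL",
                                    "QUERY": adql, "FORMAT": "votable"},
                            timeout=30)
        resp.raise_for_status()
        print(resp.text[:500])  # VOTable XML describing the matching granules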

  2. The Many Faces of the Economic Bulletin Board.

    ERIC Educational Resources Information Center

    Boettcher, Jennifer

    1996-01-01

    The Economic Bulletin Board (EBB), a one-stop site for economic statistics and government-sponsored business information, can be accessed on the World Wide Web, gopher, telnet, file transfer protocol, dial-up, and fax. Each access method has advantages and disadvantages related to connections, pricing, depth of access, retrieval, and system…

  3. An Efficient and QoS Supported Multichannel MAC Protocol for Vehicular Ad Hoc Networks

    PubMed Central

    Tan, Guozhen; Yu, Chao

    2017-01-01

    Vehicular Ad Hoc Networks (VANETs) employ multiple channels to provide a variety of safety and non-safety (transport efficiency and infotainment) applications, based on the IEEE 802.11p and IEEE 1609.4 protocols. Different types of applications require different levels of Quality-of-Service (QoS) support. Recently, transport efficiency and infotainment applications (e.g., electronic map download and Internet access) have received more and more attention, and such applications are expected to become a major market driver in the near future. In this paper, we propose an Efficient and QoS-supported Multichannel Medium Access Control (EQM-MAC) protocol for VANETs in a highway environment. The EQM-MAC protocol utilizes the service channel resources for non-safety message transmissions during the whole synchronization interval, and it dynamically adjusts the minimum contention window size for different non-safety services according to traffic conditions. Theoretical model analysis and extensive simulation results show that the EQM-MAC protocol can support QoS services while ensuring high saturation throughput and low transmission delay for non-safety applications. PMID:28991217

  4. [A security protocol for the exchange of personal medical data via Internet: monitoring treatment and drug effects].

    PubMed

    Viviani, R; Fischer, J; Spitzer, M; Freudenmann, R W

    2004-04-01

    We present a security protocol for the exchange of medical data via the Internet, based on the type/domain model. We discuss two applications of the protocol: in a system for the exchange of data for quality assurance, and in an on-line database of adverse reactions to drug use. We state that a type/domain security protocol can successfully comply with the complex requirements for data privacy and accessibility typical of such applications.

  5. Data Democratization - Promoting Real-Time Data Sharing and Use throughout the Americas

    NASA Astrophysics Data System (ADS)

    Yoksas, T. C.

    2006-05-01

    The Unidata Program Center (Unidata) of the University Corporation for Atmospheric Research (UCAR) is actively involved in international collaborations whose goals are real-time sharing of hydro-meteorological data by institutions of higher education throughout the Americas; the distribution of analysis and visualization tools for those data; and the establishment of server sites that provide easy-to-use, programmatic remote access to a wide variety of datasets. Data sharing capabilities are being provided by Unidata's Internet Data Distribution (IDD) system, a community-based effort that has been the primary source of real-time meteorological data for approximately 150 US universities for over a decade. A collaboration among Unidata, Brazil's Centro de Previsão de Tempo e Estudos Climáticos (CPTEC), the Universidade Federal do Rio de Janeiro (UFRJ), and the Universidade de São Paulo (USP) has resulted in the creation of a Brazilian peer of the North American IDD, the IDD-Brasil. Collaboration among Unidata, the Universidad de Costa Rica (UCR), and the University of Puerto Rico at Mayaguez (UPRM) seeks to extend IDD data sharing throughout Central America and the Caribbean in an IDD-Caribe. Collaboration between Unidata and the Caribbean Institute for Meteorology and Hydrology (CIMH), a World Meteorological Organization (WMO) Regional Meteorological Training Center (RMTC) based in Barbados, has been launched to investigate the possibility of expanding IDD data sharing throughout Caribbean RMTC member countries. Most recently, efforts aimed at creating a data sharing network for researchers on the Antarctic continent have resulted in the establishment of the Antarctic-IDD. Data analysis and visualization capabilities are provided by Unidata through a suite of freely available applications: the National Centers for Environmental Prediction (NCEP) GEneral Meteorology PAcKage (GEMPAK); the Unidata Integrated Data Viewer (IDV); and the University of Wisconsin Space Science and Engineering Center (SSEC) Man-computer Interactive Data Access System (McIDAS). Remote data access capabilities are provided by Unidata's Thematic Realtime Environmental Data Services (THREDDS) servers (which incorporate Open-source Project for a Network Data Access Protocol (OPeNDAP) data services) and the Abstract Data Distribution Environment (ADDE) of McIDAS. It is envisioned that the data sharing capabilities available in the IDD, IDD-Brasil, and IDD-Caribe, the remote data access capabilities available in THREDDS and ADDE, and the analysis capabilities available in GEMPAK, the IDV, and McIDAS will help foster new collaborations among prominent university educators and researchers, national meteorological agencies, and WMO Regional Meteorological Training Centers throughout North, Central, and South America.

  6. Distribution of Information in Ad Hoc Networks

    DTIC Science & Technology

    2007-09-01

    (3) MACA and MACAW Protocols. One of the first protocols conceived for wireless local area networks is MACA [21] (Multiple Access with Collision Avoidance). The transmitter sends a small packet, or RTS (Request To Send), which has little…

  7. Receiver Statistics for Cognitive Radios in Dynamic Spectrum Access Networks

    DTIC Science & Technology

    2012-02-28

    …(SNR) are employed by many protocols and processes in direct-sequence (DS) spread-spectrum packet radio networks, including soft-decision decoding… adaptive modulation protocols, and power adjustment protocols. For DS spread spectrum, we have introduced and evaluated SNR estimators that employ… obtained during demodulation in a binary CDMA receiver. We investigated several methods to apply the proposed metric to the demodulator's soft-decision…

  8. Q-Learning and p-persistent CSMA based rendezvous protocol for cognitive radio networks operating with shared spectrum activity

    NASA Astrophysics Data System (ADS)

    Watson, Clifton L.; Biswas, Subir

    2014-06-01

    With an increasing demand for spectrum, dynamic spectrum access (DSA) has been proposed as a viable means of providing the flexibility and greater access to spectrum necessary to meet this demand. Within the DSA concept, unlicensed secondary users temporarily "borrow" or access licensed spectrum, while respecting the licensed primary user's (PU) rights to that spectrum. As key enablers for DSA, cognitive radios (CRs) are based on software-defined radios, which allow them to sense, learn, and adapt to the spectrum environment. These radios can operate independently and rapidly switch channels. Thus, the initial setup and maintenance of cognitive radio networks depend upon the ability of CR nodes to find each other, in a process known as rendezvous, and create a link on a common channel for the exchange of data and control information. In this paper, we propose a novel rendezvous protocol, known as QLP, which is based on Q-learning and the p-persistent CSMA protocol. With the QLP protocol, CR nodes learn which channels are best for rendezvous and thus adapt their behavior to visit those channels more frequently. We demonstrate through simulation that the QLP protocol provides a rendezvous capability for DSA environments with different dynamics of PU activity, while attempting to achieve the following performance goals: (1) minimize the average time-to-rendezvous, (2) maximize system throughput, (3) minimize primary user interference, and (4) minimize collisions among CR nodes.
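
    To make the learning idea concrete, here is a toy, self-contained sketch of stateless Q-learning over a set of channels; the idle probabilities, reward values, and learning parameters are invented for illustration and are not taken from the paper.

        # Toy sketch of Q-learning channel ranking for rendezvous.
        # All numbers are invented; this is not the paper's QLP algorithm.
        import random

        N_CHANNELS = 5
        ALPHA, EPSILON = 0.1, 0.1            # learning rate, exploration rate
        p_idle = [0.9, 0.4, 0.7, 0.2, 0.6]   # hypothetical PU idle probabilities
        q = [0.0] * N_CHANNELS

        for _ in range(10_000):
            if random.random() < EPSILON:                      # explore
                ch = random.randrange(N_CHANNELS)
            else:                                              # exploit best channel
                ch = max(range(N_CHANNELS), key=q.__getitem__)
            reward = 1.0 if random.random() < p_idle[ch] else -1.0
            q[ch] += ALPHA * (reward - q[ch])                  # stateless Q-update

        ranking = sorted(range(N_CHANNELS), key=q.__getitem__, reverse=True)
        print("learned channel preference:", ranking)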

  9. Tolcapone suppresses ethanol intake in alcohol-preferring rats performing a novel cued access protocol.

    PubMed

    McCane, Aqilah M; Czachowski, Cristine L; Lapish, Christopher C

    2014-09-01

    Dopamine (DA) has been shown to play a central role in regulating motivated behavior and encoding reward. Chronic drug abuse elicits a state of hypodopaminergia in the mesocorticolimbic (MCL) system in both humans and preclinical rodent models of addiction, including those modeling alcohol use disorders (AUD). Working under the hypothesis that reductions in the bioavailability of DA play an integral role in the expression of the excessive drinking phenotype, the catechol-O-methyltransferase (COMT) inhibitor tolcapone was used as a means to amplify cortical DA concentration and drinking behaviors were then assessed. Sucrose and ethanol (EtOH) consumption were measured in P and Wistar rats in both a free choice drinking protocol and a novel cued access protocol. Tolcapone attenuated the consumption of EtOH, and to a lesser extent sucrose, in P rats in the cued access protocol, while no effect was observed in the free choice drinking protocol. Tolcapone also decreased EtOH consumption in high drinking Wistar rats. A follow-up experiment using the indirect DA agonist d-amphetamine showed no change in EtOH consumption. Collectively, these data suggest that COMT inhibitors may be capable of alleviating the extremely motivating or salient nature of stimuli associated with alcohol. The hypothesis is put forth that the relative specificity of tolcapone for cortical DA systems may mediate the suppression of the high seeking/drinking phenotype. Copyright © 2014 by the Research Society on Alcoholism.

  10. Energy-efficient boarder node medium access control protocol for wireless sensor networks.

    PubMed

    Razaque, Abdul; Elleithy, Khaled M

    2014-03-12

    This paper introduces the design, implementation, and performance analysis of the scalable and mobility-aware hybrid protocol named boarder node medium access control (BN-MAC) for wireless sensor networks (WSNs), which leverages the characteristics of scheduled and contention-based MAC protocols. Like contention-based MAC protocols, BN-MAC achieves high channel utilization, network adaptability under heavy traffic and mobility, and low latency and overhead. Like schedule-based MAC protocols, BN-MAC reduces idle listening time, emissions, and collision handling at low cost at one-hop neighbor nodes and achieves high channel utilization under heavy network loads. BN-MAC is particularly designed for region-wise WSNs. Each region is controlled by a boarder node (BN), which is of paramount importance. The BN coordinates with the remaining nodes within and beyond the region. Unlike other hybrid MAC protocols, BN-MAC incorporates three promising models that further reduce the energy consumption, idle listening time, overhearing, and congestion to improve the throughput and reduce the latency. One of the models used with BN-MAC is automatic active and sleep (AAS), which reduces the idle listening time. When nodes finish their monitoring process, AAS lets them automatically go into the sleep state to avoid the idle listening state. Another model used in BN-MAC is the intelligent decision-making (IDM) model, which helps the nodes sense the nature of the environment. Based on the nature of the environment, the nodes decide whether to use the active or passive mode. This decision power of the nodes further reduces energy consumption because the nodes turn off the radio of the transceiver in the passive mode. The third model is the least-distance smart neighboring search (LDSNS), which determines the shortest efficient path to the one-hop neighbor and also provides cross-layering support to handle the mobility of the nodes. The BN-MAC also incorporates a semi-synchronous feature with a low duty cycle, which is advantageous for reducing the latency and energy consumption for several WSN application areas to improve the throughput. BN-MAC uses a unique window slot size to ease contention resolution for improved throughput. BN-MAC also prefers to communicate within a one-hop destination using Anycast, which balances load to maintain network reliability. BN-MAC is introduced with the goal of supporting four major application areas: monitoring and behavioral areas, controlling natural disasters, human-centric applications, and tracking mobility and static home automation devices from remote places. These application areas require a congestion-free mobility-supported MAC protocol to guarantee reliable data delivery. BN-MAC was evaluated using network simulator-2 (ns2) and compared with other hybrid MAC protocols, such as Zebra medium access control (Z-MAC), advertisement-based MAC (A-MAC), Speck-MAC, adaptive duty cycle SMAC (ADC-SMAC), and low-power real-time medium access control (LPR-MAC). The simulation results indicate that BN-MAC is a robust and energy-efficient protocol that outperforms other hybrid MAC protocols in the context of quality of service (QoS) parameters, such as energy consumption, latency, throughput, channel access time, successful delivery rate, coverage efficiency, and average duty cycle.

  12. A Secure and Efficient Handover Authentication Protocol for Wireless Networks

    PubMed Central

    Wang, Weijia; Hu, Lei

    2014-01-01

    Handover authentication protocol is a promising access control technology in the fields of WLANs and mobile wireless sensor networks. In this paper, we first review an efficient handover authentication protocol, named PairHand, together with its known security attacks and improvements. Then, we present an improved key recovery attack using the linear combination method and reanalyze its feasibility on the improved PairHand protocol. Finally, we present a new handover authentication protocol, which not only achieves the same desirable efficiency features of PairHand, but also enjoys provable security in the random oracle model. PMID:24971471

  13. User Procedures Standardization for Network Access. NBS Technical Note 799.

    ERIC Educational Resources Information Center

    Neumann, A. J.

    User access procedures to information systems have become of crucial importance with the advent of computer networks, which have opened new types of resources to a broad spectrum of users. This report surveys user access protocols of six representative systems: BASIC, GE MK II, INFONET, MEDLINE, NIC/ARPANET and SPIRES. Functional access…

  14. Tools to Ease Your Internet Adventures: Part I.

    ERIC Educational Resources Information Center

    Descy, Don E.

    1993-01-01

    This first of a two-part series highlights three tools that improve accessibility to Internet resources: (1) Alex, a database that accesses files in FTP (file transfer protocol) sites; (2) Archie, software that searches for file names with a user's search term; and (3) Gopher, a menu-driven program to access Internet sites. (LRW)

  15. 15 CFR 784.1 - Complementary access: General information on the purpose of complementary access, affected...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    … may be conducted, on a selective basis, to verify the absence of undeclared nuclear material and nuclear-related activities at reportable uranium hard-rock mines and ore beneficiation plants (see § 783.1…). [Bureau of Industry and Security, Department of Commerce; Additional Protocol Regulations; Complementary Access]

  16. Semiquantum key distribution with secure delegated quantum computation

    PubMed Central

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-01

    Semiquantum key distribution allows a quantum party to share a random key with a "classical" party who can only prepare and measure qubits in the computational basis, or reorder some qubits, when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show that the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and that it is experimentally feasible with current technology. As one party of our protocol requires the fewest quantum resources, the protocol can be more practical and can significantly widen the applicability scope of quantum key distribution. PMID:26813384

  17. Two Mechanisms to Avoid Control Conflicts Resulting from Uncoordinated Intent

    NASA Technical Reports Server (NTRS)

    Mishkin, Andrew H.; Dvorak, Daniel L.; Wagner, David A.; Bennett, Matthew B.

    2013-01-01

    This software implements a real-time access control protocol that is intended to make all connected users aware of the presence of other connected users, and of which of them is currently in control of the system. Here, "in control" means that a single user is authorized and enabled to issue instructions to the system. The software also implements a goal scheduling mechanism that can detect situations where plans for the operation of a target system proposed by different users overlap and interact in conflicting ways. In such situations, the system can either simply report the conflict (rejecting one goal or the entire plan), or reschedule the goals in a way that does not conflict. The access control mechanism (and associated control protocol) is unique. Other access control mechanisms are generally intended to authenticate users or exclude unauthorized access. This software does neither, and would likely depend on some other mechanism to support those requirements.

  18. An Action-Based Fine-Grained Access Control Mechanism for Structured Documents and Its Application

    PubMed Central

    Su, Mang; Li, Fenghua; Tang, Zhi; Yu, Yinyan; Zhou, Bo

    2014-01-01

    This paper presents an action-based fine-grained access control mechanism for structured documents. First, we define a describing model for structured documents and analyze the application scenarios. The describing model supports permission management on the chapters, pages, sections, words, and pictures of structured documents. Second, based on the action-based access control (ABAC) model, we propose a fine-grained control protocol for structured documents by introducing temporal state and environmental state. The protocol, covering the different stages from document creation through permission specification and usage control, is specified using Z-notation. Finally, we give the implementation of our mechanism and compare it with existing methods. The results show that our mechanism provides a better solution for fine-grained access control of structured documents in complicated networks. Moreover, it is more flexible and practical. PMID:25136651

  20. A prototype of Virtual Observatory access for planetary data in the framework of Europlanet-RI/IDIS

    NASA Astrophysics Data System (ADS)

    Gangloff, M.; Cecconi, B.; Bourrel, N.; Jacquey, C.; Le Sidaner, P.; Berthier, J.; André, N.; Pallier, E.; Erard, S.; Aboudarham, J.; Chanteur, G. M.; Capria, M. T.; Khodachenko, M.; Manaud, N.; Schmidt, W.; Schmitt, B.; Topf, F.; Trautan, F.; Sarkissian, A.

    2011-12-01

    Europlanet RI is a four-year project supported by the European Union under the Seventh Framework Programme. Launched in January 2009, it is an Integrated Infrastructure Initiative, i.e., a combination of Networking Activities, Transnational Access Activities, and Joint Research Activities. The Networking Activities aim at further fostering a culture of cooperation in the field of planetary sciences. The objective of the Transnational Access Activities is to provide transnational access to a range of laboratory and field site facilities tailored to the needs of planetary research, and on-line access to the available planetary science data, information, and software tools, through the IDIS e-service. The overall aim of the Joint Research Activities (JRA) is to improve the services provided by the ensemble of Transnational Access Activities. In Europlanet RI, JRA4 must prepare essential tools for IDIS (Integrated and Distributed Information Service), allowing the planetary science community to interrogate selected data centres, access and process data, and visualize the results. This is the first step towards a planetary Virtual Observatory. The first requirement for different data centres to be able to operate together collectively is adequate standardization; in particular, a common description of data and services is essential. This is why the major part of the JRA4/Task2 activity is focussing on data models, associated dictionaries, and protocols for exchanging queries. A specific data model is being developed for IDIS, associated with the PDAP protocol, a standard defined by the IPDA (International Planetary Data Alliance). The scope of this prototype is to demonstrate the capabilities of the IDIS data model and the PDAP protocol to search and retrieve data in the wide topical planetology context.

  1. Recommendations for a service framework to access astronomical archives

    NASA Technical Reports Server (NTRS)

    Travisano, J. J.; Pollizzi, J.

    1992-01-01

    There are a large number of astronomical archives and catalogs on-line for network access, with many different user interfaces and features. Some systems are moving towards distributed access, supplying users with client software for their home sites which connects to servers at the archive site. Many of the issues involved in defining a standard framework of services that archive/catalog suppliers can use to achieve a basic level of interoperability are described. Such a framework would simplify the development of client and server programs to access the wide variety of astronomical archive systems. The primary services that are supplied by current systems include: catalog browsing, dataset retrieval, name resolution, and data analysis. The following issues (and probably more) need to be considered in establishing a standard set of client/server interfaces and protocols: Archive Access - dataset retrieval, delivery, file formats, data browsing, analysis, etc.; Catalog Access - database management systems, query languages, data formats, synchronous/asynchronous mode of operation, etc.; Interoperability - transaction/message protocols, distributed processing mechanisms (DCE, ONC/SunRPC, etc.), networking protocols, etc.; Security - user registration, authorization/authentication mechanisms, etc.; Service Directory - service registration, lookup, port/task mapping, parameters, etc.; Software - public vs. proprietary, client/server software, standard interfaces to client/server functions, software distribution, operating system portability, data portability, etc. Several archive/catalog groups, notably the Astrophysics Data System (ADS), are already working in many of these areas. In the process of developing StarView, which is the user interface to the Space Telescope Data Archive and Distribution Service (ST-DADS), these issues and the work of others were analyzed. A framework of standard interfaces for accessing services on any archive system which would benefit archive user and supplier alike is proposed.

  2. Collaboratively Enabling Reanalysis Intercomparison Using the Earth System Grid Federation (ESGF): A Case Study.

    NASA Astrophysics Data System (ADS)

    Potter, G. L.; Bosilovich, M. G.; Carriere, L.; McInerney, M.; Nadeau, D.; Shen, Y.

    2014-12-01

    The NASA Climate Model Data Service (CDS) and the NASA Center for Climate Simulation (NCCS) are collaborating to provide an end-to-end system for the comparative study of the major reanalysis projects: ECMWF ERA-Interim, NASA/GMAO MERRA, NOAA/NCEP CFSR, NOAA/ESRL 20CR, JMA JRA-25, and JRA-55. These reanalyses have been repackaged to adhere to the CMIP5 standards and published on the ESGF. Reanalysis centers provide interfaces to the various reanalyses, but each data set requires some effort to compare with other reanalyses or with atmospheric model output. The repackaging for ESGF required reformatting, restructuring, and modifications to the metadata to facilitate the ESGF search capabilities. Once this was done, the data structure was the same as that used by the very successful CMIP3 and CMIP5 archives, making comparison among reanalyses and climate models a relatively easy exercise. The data can now be accessed using WGET, OPeNDAP, or HTTPServer at https://earthsystemcog.org/projects/ana4mips/ . An example using this interface will be shown, including a comparison of the reanalyses' portrayal of the surface heat balance during the 2010 Russian heat wave. We have found that although the different reanalyses produce very similar atmospheric features of the heat wave, the surface energy balance terms such as latent and sensible heat show considerable differences. This comparison helps point out systematic differences in the reanalyses' surface moisture and may lead to a better understanding of the differences.
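
    The sketch below shows the kind of programmatic OPeNDAP access this repackaging enables; the dataset URL is a hypothetical placeholder, while hfls is the CMIP5 standard name for surface upward latent heat flux.

        # Hedged sketch: server-side subsetting of a repackaged reanalysis
        # over OPeNDAP. The URL is hypothetical; only the slab crosses the network.
        from netCDF4 import Dataset

        URL = "http://example.org/thredds/dodsC/ana4mips/hfls_MERRA.nc"  # hypothetical

        with Dataset(URL) as nc:
            hfls = nc.variables["hfls"]                  # surface latent heat flux
            summer_2010 = hfls[0:4, 100:140, 200:260]    # time, lat, lon subset
            print(summer_2010.shape)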

  3. Interoperable Solar Data and Metadata via LISIRD 3

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D. M.; Pankratz, C. K.; Snow, M. A.; Woods, T. N.

    2015-12-01

    LISIRD 3 is a major upgrade of the LASP Interactive Solar Irradiance Data Center (LISIRD), which serves several dozen space-based solar irradiance and related data products to the public. Through interactive plots, LISIRD 3 provides data browsing supported by data subsetting and aggregation. Because it incorporates a semantically enabled metadata repository, LISIRD 3 shows users current, vetted, consistent information about the datasets offered. Users can now also search for datasets based on metadata fields such as dataset type and/or spectral or temporal range. This semantic database enables metadata browsing, so users can discover the relationships between datasets, instruments, spacecraft, missions, and PIs. The database also enables the creation and publication of metadata records in a variety of formats, such as SPASE or ISO, making these datasets more discoverable, and it opens the possibility of a public SPARQL endpoint, making the metadata browsable in an automated fashion. LISIRD 3's data access middleware, LaTiS, provides dynamic, on-demand reformatting of data and timestamps, subsetting and aggregation, and other server-side functionality via a RESTful, OPeNDAP-compliant API, enabling interoperability between LASP datasets and many common tools. LISIRD 3's templated front-end design, coupled with the uniform data interface offered by LaTiS, allows easy integration of new datasets. Consequently, the number and variety of datasets offered by LISIRD has grown to several dozen, with many more to come. This poster will discuss the design and implementation of LISIRD 3, including the tools used, the capabilities enabled, and the issues encountered.
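
    As an illustration of the uniform interface idea, the sketch below requests a time-subset of a dataset as CSV through a LaTiS-style, OPeNDAP-like URL; the base URL, dataset name, and parameter names are hypothetical placeholders.

        # Hedged sketch: time-subsetted CSV from a LaTiS-style RESTful API.
        # Endpoint, dataset, and parameter names are hypothetical placeholders.
        import requests

        BASE = "http://example.org/lisird/latis/dap"   # hypothetical endpoint
        query = "time,irradiance&time>=2010-01-01&time<2010-02-01"

        resp = requests.get(f"{BASE}/tsi_dataset.csv?{query}", timeout=30)
        resp.raise_for_status()
        for line in resp.text.splitlines()[:5]:
            print(line)   # server reformatted and subsetted the data on demand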

  4. Technologies for Online Data Management of Oceanographic Data

    NASA Astrophysics Data System (ADS)

    Zodiatis, G.; Hayes, D.; Karaolia, A.; Stylianou, S.; Nikolaidis, A.; Constantinou, I.; Michael, S.; Galanis, G.; Georgiou, G.

    2012-04-01

    The need for efficient and effective on-line data management is widely recognized today by the marine research community. The Cyprus Oceanography Center at the University of Cyprus, realizing this need, is continuously working in this area and has developed a variety of data management and visualization tools which are currently utilized for both the Mediterranean and the Black Sea. Bythos, CYCOFOS, and the LAS server are three different systems employed by the Oceanography Center, each one dealing with different data sets and processes. Bythos is a rich internet application that combines the latest technologies and enables scientists to search, visualize, and download climatological oceanographic data, with capabilities of being applied worldwide. CYCOFOS is an operational coastal ocean forecasting and observing system which provides near-real-time predictions of sea currents, hydrological characteristics, waves, swells, and tides, along with remote sensing and in-situ data from various remote observing platforms in the Mediterranean Sea, the EEZ, and the coastal areas of Cyprus. LAS (Live Access Server) is deployed to present various types of distributed data sets as a unified virtual database through the use of OPeNDAP networking. It was first applied to provide an integrated, high-resolution system for monitoring the energy potential from sea waves in the Exclusive Economic Zone of Cyprus and the Eastern Mediterranean Levantine Basin. This paper presents the aforementioned technologies as currently adopted by the Cyprus Oceanography Center and describes how they support both its research and operational activities in the Mediterranean.

  5. Meta Data Mining in Earth Remote Sensing Data Archives

    NASA Astrophysics Data System (ADS)

    Davis, B.; Steinwand, D.

    2014-12-01

    Modern search and discovery tools for satellite-based remote sensing data are often catalog based and rely on query systems which use scene- (or granule-) based metadata for those queries. While these traditional catalog systems are often robust, very little has been done in the way of metadata mining to aid in the search and discovery process. The recently coined term "Big Data" can be applied to the remote sensing world's efforts to derive information from the vast data holdings of satellite-based land remote sensing data. Large catalog-based search and discovery systems, such as the United States Geological Survey's Earth Explorer system and the NASA Earth Observing System Data and Information System's Reverb-ECHO system, provide comprehensive access to these data holdings but do little to expose the underlying scene-based metadata. These catalog-based systems are extremely flexible, but they are manually intensive and often require a high level of user expertise. Exposing scene-based metadata to external, web-based services can enable machine-driven queries to aid in the search and discovery process. Furthermore, services which expose additional scene-based content data (such as product quality information) are now available and can provide a "deeper look" into remote sensing data archives too large for efficient manual search methods. This presentation shows examples of the mining of Landsat and ASTER scene-based metadata, and an experimental service using OPeNDAP to extract information from the quality band of multiple granules in the MODIS archive.
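
    The sketch below illustrates the quality-band idea: pull only a granule's quality band over OPeNDAP, rather than downloading the whole file, and summarize it. The granule URL, the band name, and the meaning of the flag value are hypothetical placeholders.

        # Hedged sketch: read just a quality band from a granule over OPeNDAP.
        # URL, variable name, and flag convention are hypothetical placeholders.
        import numpy as np
        from netCDF4 import Dataset

        GRANULE = "http://example.org/opendap/MODIS/granule_A2014001.nc"  # hypothetical

        with Dataset(GRANULE) as nc:
            qa = nc.variables["pixel_reliability"][:]      # quality band only
            good = np.count_nonzero(qa == 0) / qa.size     # assume 0 means "good"
            print(f"{good:.1%} of pixels flagged good")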

  6. Internet Connections: Understanding Your Access Options.

    ERIC Educational Resources Information Center

    Notess, Greg R.

    1994-01-01

    Describes levels of Internet connectivity, physical connections, and connection speeds. Compares options for connecting to the Internet, including terminal accounts, dial-up terminal accounts, direct connections through a local area network, and direct connections using SLIP (Serial Line Internet Protocol) or PPP (Point-to-Point Protocol). (eight…

  7. Satellite Communications Using Commercial Protocols

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Griner, James H.; Dimond, Robert; Frantz, Brian D.; Kachmar, Brian; Shell, Dan

    2000-01-01

    NASA Glenn Research Center has been working with industry, academia, and other government agencies in assessing commercial communications protocols for satellite and space-based applications. In addition, NASA Glenn has been developing and advocating new satellite-friendly modifications to existing communications protocol standards. This paper summarizes recent research into the applicability of various commercial standard protocols for use over satellite and space-based communications networks as well as expectations for future protocol development. It serves as a reference point from which the detailed work can be readily accessed. Areas that will be addressed include asynchronous-transfer-mode quality of service; completed and ongoing work of the Internet Engineering Task Force; data-link-layer protocol development for unidirectional link routing; and protocols for aeronautical applications, including mobile Internet protocol routing for wireless/mobile hosts and the aeronautical telecommunications network protocol.

  8. Agreements between Industry and Academia on Publication Rights: A Retrospective Study of Protocols and Publications of Randomized Clinical Trials

    PubMed Central

    Kasenda, Benjamin; von Elm, Erik; You, John J.; Tomonaga, Yuki; Saccilotto, Ramon; Amstutz, Alain; Bengough, Theresa; Meerpohl, Joerg J.; Stegert, Mihaela; Olu, Kelechi K.; Tikkinen, Kari A. O.; Neumann, Ignacio; Carrasco-Labra, Alonso; Faulhaber, Markus; Mulla, Sohail M.; Mertz, Dominik; Akl, Elie A.; Bassler, Dirk; Busse, Jason W.; Nordmann, Alain; Gloy, Viktoria; Ebrahim, Shanil; Schandelmaier, Stefan; Sun, Xin; Vandvik, Per O.; Johnston, Bradley C.; Walter, Martin A.; Burnand, Bernard; Hemkens, Lars G.; Bucher, Heiner C.; Guyatt, Gordon H.; Briel, Matthias

    2016-01-01

    Background Little is known about publication agreements between industry and academic investigators in trial protocols and the consistency of these agreements with corresponding statements in publications. We aimed to investigate (i) the existence and types of publication agreements in trial protocols, (ii) the completeness and consistency of the reporting of these agreements in subsequent publications, and (iii) the frequency of co-authorship by industry employees. Methods and Findings We used a retrospective cohort of randomized clinical trials (RCTs) based on archived protocols approved by six research ethics committees between 13 January 2000 and 25 November 2003. Only RCTs with industry involvement were eligible. We investigated the documentation of publication agreements in RCT protocols and statements in corresponding journal publications. Of 647 eligible RCT protocols, 456 (70.5%) mentioned an agreement regarding publication of results. Of these 456, 393 (86.2%) documented an industry partner’s right to disapprove or at least review proposed manuscripts; 39 (8.6%) agreements were without constraints of publication. The remaining 24 (5.3%) protocols referred to separate agreement documents not accessible to us. Of those 432 protocols with an accessible publication agreement, 268 (62.0%) trials were published. Most agreements documented in the protocol were not reported in the subsequent publication (197/268 [73.5%]). Of 71 agreements reported in publications, 52 (73.2%) were concordant with those documented in the protocol. In 14 of 37 (37.8%) publications in which statements suggested unrestricted publication rights, at least one co-author was an industry employee. In 25 protocol-publication pairs, author statements in publications suggested no constraints, but 18 corresponding protocols documented restricting agreements. Conclusions Publication agreements constraining academic authors’ independence are common. Journal articles seldom report on publication agreements, and, if they do, statements can be discrepant with the trial protocol. PMID:27352244

  9. Determining Appropriate Coupling between User Experiences and Earth Science Data Services

    NASA Astrophysics Data System (ADS)

    Moghaddam-Taaheri, E.; Pilone, D.; Newman, D. J.; Mitchell, A. E.; Goff, T. D.; Baynes, K.

    2012-12-01

    NASA's Earth Observing System ClearingHOuse (ECHO) is a format-agnostic metadata repository supporting over 3000 collections and 100M granules. ECHO exposes FTP and RESTful Data Ingest APIs in addition to both SOAP and RESTful search and order capabilities. Built on top of ECHO is a human-facing search and order web application named Reverb. Reverb exposes ECHO's capabilities through an interactive, Web 2.0 application designed around searching for Earth Science data and downloading or ordering data of interest. ECHO and Reverb have supported the concept of Earth Science data services for several years, but only for discovery; invocation of these services was not a primary capability of the user experience. As more and more Earth Science data moves online and away from the concept of data ordering, progress has been made in making on-demand services available for directly accessed data. These concepts have existed through access mechanisms such as OPeNDAP but are proliferating to accommodate a wider variety of services and service providers. Recently, the EOSDIS Service Interface (ESI) was defined and integrated into the ECS system. The ESI allows data providers to expose a wide variety of service capabilities including reprojection, reformatting, spatial and band subsetting, and resampling. ECHO and Reverb were tasked with making these services available to end-users in a meaningful and usable way that integrated into the existing search and ordering workflow. This presentation discusses the challenges associated with exposing disparate service capabilities while presenting a meaningful and cohesive user experience. Specifically, we'll discuss:
    - Benefits and challenges of tightly coupling the user interface with underlying services
    - Approaches to generic service descriptions
    - Approaches to dynamic user interfaces that better describe service capabilities while minimizing application coupling
    - Challenges associated with traditional WSDL / UDDI style service descriptions
    - A walkthrough of the solution used by ECHO and Reverb to integrate and expose ESI-compliant services to our users

  10. PAVICS: A Platform for the Analysis and Visualization of Climate Science

    NASA Astrophysics Data System (ADS)

    Gauvin St-Denis, B.; Landry, T.; Huard, D. B.; Byrns, D.; Chaumont, D.; Foucher, S.

    2016-12-01

    Climate service providers are boundary organizations working at the interface of climate science research and users of climate information. Users include academics in other disciplines looking for credible, customized future climate scenarios, government planners, resource managers, asset owners, and service utilities. These users are looking for relevant information regarding the impacts of climate change, as well as information to guide decisions regarding adaptation options. As climate change concerns become mainstream, the pressure on climate service providers to deliver tailored, high-quality information in a timely manner increases rapidly. To meet this growing demand, Ouranos, a climate service center located in Montreal, is collaborating with the Centre de recherche informatique de Montreal (CRIM) to develop a web-based climate data analysis platform interacting with RESTful services covering data access and retrieval, geospatial analysis, bias correction, distributed climate indicator computing, and results visualization. The project, financed by CANARIE, relies on the experience of the UV-CDAT and ESGF-CWT teams, as well as on the Birdhouse framework developed by the German Climate Computing Centre (DKRZ) and the French IPSL. Climate data are accessed through OPeNDAP, while computations are carried out through WPS. Regions such as watersheds or user-defined polygons, used as spatial selections for computations, are managed by GeoServer, which also provides WMS, WFS, and WPS capabilities. The services are hosted on independent servers communicating over a high-throughput network. Deployment, maintenance, and collaboration with other development teams are eased by the use of Docker and OpenStack VMs. Web-based tools are developed with modern web frameworks such as React-Redux, OpenLayers 3, Cesium, and Plotly. Although the main objective of the project is to build a functioning, usable data analysis pipeline within two years, time is also devoted to exploring emerging technologies and assessing their potential. For instance, sandbox environments will store climate data in HDFS, process it with Apache Spark, and allow interaction through Jupyter Notebooks. Data streaming of observational data with OpenGL and Cesium is also being considered.
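
    A rough sketch of how a client might drive one of these WPS-based indicator computations with OWSLib follows; the service URL, process identifier, and input names are hypothetical placeholders, not PAVICS's actual API.

        # Hedged sketch: invoke a WPS process with OWSLib and poll for the result.
        # Service URL, process identifier, and inputs are hypothetical.
        from owslib.wps import WebProcessingService, monitorExecution

        wps = WebProcessingService("http://example.org/wps")   # hypothetical endpoint
        execution = wps.execute(
            "climate_indicator",                               # hypothetical process
            inputs=[("resource", "http://example.org/thredds/dodsC/tasmax.nc"),
                    ("indicator", "tx_days_above")],
        )
        monitorExecution(execution)                  # poll the asynchronous job
        for output in execution.processOutputs:
            print(output.identifier, output.reference)   # URL(s) of result files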

  11. Dynamic reusable workflows for ocean science

    USGS Publications Warehouse

    Signell, Richard; Fernandez, Filipe; Wilcox, Kyle

    2016-01-01

    Digital catalogs of ocean data have been available for decades, but advances in standardized services and software for catalog search and data access make it now possible to create catalog-driven workflows that automate — end-to-end — data search, analysis and visualization of data from multiple distributed sources. Further, these workflows may be shared, reused and adapted with ease. Here we describe a workflow developed within the US Integrated Ocean Observing System (IOOS) which automates the skill-assessment of water temperature forecasts from multiple ocean forecast models, allowing improved forecast products to be delivered for an open water swim event. A series of Jupyter Notebooks are used to capture and document the end-to-end workflow using a collection of Python tools that facilitate working with standardized catalog and data services. The workflow first searches a catalog of metadata using the Open Geospatial Consortium (OGC) Catalog Service for the Web (CSW), then accesses data service endpoints found in the metadata records using the OGC Sensor Observation Service (SOS) for in situ sensor data and OPeNDAP services for remotely-sensed and model data. Skill metrics are computed and time series comparisons of forecast model and observed data are displayed interactively, leveraging the capabilities of modern web browsers. The resulting workflow not only solves a challenging specific problem, but highlights the benefits of dynamic, reusable workflows in general. These workflows adapt as new data enters the data system, facilitate reproducible science, provide templates from which new scientific workflows can be developed, and encourage data providers to use standardized services. As applied to the ocean swim event, the workflow exposed problems with two of the ocean forecast products which led to improved regional forecasts once errors were corrected. While the example is specific, the approach is general, and we hope to see increased use of dynamic notebooks across the geoscience domains.
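
    The catalog-search step of such a workflow can be sketched with OWSLib as below; the CSW endpoint and the search text are hypothetical placeholders, and the records' reference entries are where the OPeNDAP and SOS endpoints would be found.

        # Hedged sketch: CSW search for datasets, then list service endpoints.
        # The catalog URL and query text are hypothetical placeholders.
        from owslib.csw import CatalogueServiceWeb
        from owslib.fes import PropertyIsLike

        csw = CatalogueServiceWeb("http://example.org/csw")   # hypothetical endpoint
        query = PropertyIsLike("csw:AnyText", "%sea_water_temperature%")
        csw.getrecords2(constraints=[query], maxrecords=10)

        for rec in csw.records.values():
            print(rec.title)
            for ref in rec.references:     # endpoints (OPeNDAP, SOS, WMS, ...)
                print("  ", ref.get("scheme"), ref.get("url"))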

  12. Visualization of GPM Standard Products at the Precipitation Processing System (PPS)

    NASA Astrophysics Data System (ADS)

    Kelley, O.

    2010-12-01

    Many of the standard data products for the Global Precipitation Measurement (GPM) constellation of satellites will be generated at and distributed by the Precipitation Processing System (PPS) at NASA Goddard. PPS will provide several means to visualize these data products. These visualization tools will be used internally by PPS analysts to investigate potential anomalies in the data files, and these tools will also be made available to researchers. Currently, a free data viewer called THOR, the Tool for High-resolution Observation Review, can be downloaded and installed on Linux, Windows, and Mac OS X systems. THOR can display swath and grid products, and to a limited degree, the low-level data packets that the satellite itself transmits to the ground system. Observations collected since the 1997 launch of the Tropical Rainfall Measuring Mission (TRMM) satellite can be downloaded from the PPS FTP archive, and in the future, many of the GPM standard products will also be available from this FTP site. To provide easy access to this 80 terabyte and growing archive, PPS currently operates an on-line ordering tool called STORM that provides geographic and time searches, browse-image display, and the ability to order user-specified subsets of standard data files. Prior to the anticipated 2013 launch of the GPM core satellite, PPS will expand its visualization tools by integrating an on-line version of THOR within STORM to provide on-the-fly image creation of any portion of an archived data file at a user-specified degree of magnification. PPS will also provide OPeNDAP access to the data archive and OGC WMS image creation of both swath and gridded data products. During the GPM era, PPS will continue to provide real-time, globally gridded 3-hour rainfall estimates to the public in a compact binary format (3B42RT) and in a GIS format (2-byte TIFF images + ESRI WorldFiles).
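
    The sketch below shows what an OGC WMS GetMap request against such a service could look like; the server URL and layer name are hypothetical placeholders, while the query parameters follow the standard WMS 1.3.0 convention (note the lat,lon axis order of EPSG:4326 in that version).

        # Hedged sketch: standard WMS 1.3.0 GetMap request for a rain-rate map.
        # Server URL and layer name are hypothetical placeholders.
        import requests

        WMS = "http://example.org/wms"            # hypothetical endpoint
        params = {
            "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
            "LAYERS": "precipitation_3hr",        # hypothetical layer
            "CRS": "EPSG:4326", "BBOX": "-40,-180,40,180",  # lat,lon order
            "WIDTH": "1024", "HEIGHT": "256", "FORMAT": "image/png",
        }
        resp = requests.get(WMS, params=params, timeout=30)
        resp.raise_for_status()
        with open("precip.png", "wb") as f:
            f.write(resp.content)                 # rendered rain-rate map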

  13. Advertisement-Based Energy Efficient Medium Access Protocols for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Ray, Surjya Sarathi

    One of the main challenges that prevents the large-scale deployment of Wireless Sensor Networks (WSNs) is providing the applications with the required quality of service (QoS) given the sensor nodes' limited energy supplies. WSNs are an important tool in supporting applications ranging from environmental and industrial monitoring, to battlefield surveillance and traffic control, among others. Most of these applications require sensors to function for long periods of time without human intervention and without battery replacement. Therefore, energy conservation is one of the main goals for protocols for WSNs. Energy conservation can be performed in different layers of the protocol stack. In particular, as the medium access control (MAC) layer can access and control the radio directly, large energy savings is possible through intelligent MAC protocol design. To maximize the network lifetime, MAC protocols for WSNs aim to minimize idle listening of the sensor nodes, packet collisions, and overhearing. Several approaches such as duty cycling and low power listening have been proposed at the MAC layer to achieve energy efficiency. In this thesis, I explore the possibility of further energy savings through the advertisement of data packets in the MAC layer. In the first part of my research, I propose Advertisement-MAC or ADV-MAC, a new MAC protocol for WSNs that utilizes the concept of advertising for data contention. This technique lets nodes listen dynamically to any desired transmission and sleep during transmissions not of interest. This minimizes the energy lost in idle listening and overhearing while maintaining an adaptive duty cycle to handle variable loads. Additionally, ADV-MAC enables energy efficient MAC-level multicasting. An analytical model for the packet delivery ratio and the energy consumption of the protocol is also proposed. The analytical model is verified with simulations and is used to choose an optimal value of the advertisement period. Simulations show that the optimized ADV-MAC provides substantial energy gains (50% to 70% less than other MAC protocols for WSNs such as T-MAC and S-MAC for the scenarios investigated) while faring as well as T-MAC in terms of packet delivery ratio and latency. Although ADV-MAC provides substantial energy gains over S-MAC and T-MAC, it is not optimal in terms of energy savings because contention is done twice -- once in the Advertisement Period and once in the Data Period. In the next part of my research, the second contention in the Data Period is eliminated and the advantages of contention-based and TDMA-based protocols are combined to form Advertisement based Time-division Multiple Access (ATMA), a distributed TDMA-based MAC protocol for WSNs. ATMA utilizes the bursty nature of the traffic to prevent energy waste through advertisements and reservations for data slots. Extensive simulations and qualitative analysis show that with bursty traffic, ATMA outperforms contention-based protocols (S-MAC, T-MAC and ADV-MAC), a TDMA based protocol (TRAMA) and hybrid protocols (Z-MAC and IEEE 802.15.4). ATMA provides energy reductions of up to 80%, while providing the best packet delivery ratio (close to 100%) and latency among all the investigated protocols. Simulations alone cannot reflect many of the challenges faced by real implementations of MAC protocols, such as clock-drift, synchronization, imperfect physical layers, and irregular interference from other transmissions. 
Such issues may cripple a protocol that otherwise performs very well in software simulations. Hence, to validate my research, I conclude with a hardware implementation of the ATMA protocol on SORA (Software Radio), developed by Microsoft Research Asia. SORA is a reprogrammable Software Defined Radio (SDR) platform that satisfies the throughput and timing requirements of modern wireless protocols while utilizing the rich general purpose PC development environment. Experimental results obtained from the hardware implementation of ATMA closely mirror the simulation results obtained for a single hop network with 4 nodes.

  14. Network support for turn-taking in multimedia collaboration

    NASA Astrophysics Data System (ADS)

    Dommel, Hans-Peter; Garcia-Luna-Aceves, Jose J.

    1997-01-01

    The effectiveness of collaborative multimedia systems depends on the regulation of access to their shared resources, such as continuous media or instruments used concurrently by multiple parties. Existing applications use only simple protocols to mediate such resource contention. Their cooperative rules follow a strict agenda and are largely application-specific. The inherent problem of floor control lacks a systematic methodology. This paper presents a general model of floor control for correct, scalable, fine-grained, and fair resource sharing that integrates user interaction with network conditions and adapts to various media types. The notion of turn-taking, known from psycholinguistic studies of discourse structure, is adapted for this framework. Viewed as a computational analogy to speech communication, online collaboration revolves around dynamically allocated access permissions called floors. The control semantics of floors derives from concurrency control methodology. An explicit specification and verification of a novel distributed Floor Control Protocol are presented. Hosts assume sharing roles that allow for efficient dissemination of control information, agreeing on a floor holder which is granted mutually exclusive access to a resource. Performance-analytic aspects of floor control protocols are also briefly discussed.

  15. Chart Card: feasibility of a tool for improving emergency department care in sickle cell disease.

    PubMed

    Neumayr, Lynne; Pringle, Steven; Giles, Stephen; Quirolo, Keith C; Paulukonis, Susan; Vichinsky, Elliott P; Treadwell, Marsha J

    2010-11-01

    Patients with sickle cell disease (SCD) are concerned with emergency department care, including time to treatment and staff attitudes and knowledge. Providers are concerned about rapid access to patient information and SCD treatment protocols. A software application that stores and retrieves encrypted personal medical information on a plastic credit card-sized Chart Card was designed. To determine the applicability and feasibility of the Chart Card on patient satisfaction with emergency department care and provider accessibility to patient information and care protocols. One-half of 44 adults (aged ≥18 years) and 50 children with SCD were randomized to either the Chart Card or usual care. Patient satisfaction was surveyed pre- and post-implementation of the Chart Card program, and emergency department staff were surveyed about familiarity with SCD treatment protocols. Patient satisfaction with emergency department care and efficacy in health care increased post Chart Card implementation. Providers valued immediate access to patient information and SCD treatment guidelines. The technology has potential for application in the treatment of other illnesses in other settings.

  16. Community-Based Services that Facilitate Interoperability and Intercomparison of Precipitation Datasets from Multiple Sources

    NASA Technical Reports Server (NTRS)

    Liu, Zhong; Kempler, Steven; Teng, William; Leptoukh, Gregory; Ostrenga, Dana

    2010-01-01

    Over the past 12 years, large volumes of precipitation data have been generated from space-based observatories (e.g., TRMM), merging of data products (e.g., gridded 3B42), models (e.g., GMAO), climatologies (e.g., Chang SSM/I derived rain indices), field campaigns, and ground-based measuring stations. The science research, applications, and education communities have greatly benefited from the unrestricted availability of these data from the Goddard Earth Sciences Data and Information Services Center (GES DISC) and, in particular, the services tailored toward precipitation data access and usability. In addition, tools and services that are responsive to the expressed evolving needs of the precipitation data user communities have been developed at the Precipitation Data and Information Services Center (PDISC) (http://disc.gsfc.nasa.gov/precipitation or google NASA PDISC), located at the GES DISC, to provide users with quick data exploration and access capabilities. In recent years, data management and access services have become increasingly sophisticated, such that they now afford researchers, particularly those interested in multi-data set science analysis and/or data validation, the ability to homogenize data sets, in order to apply multi-variant, comparison, and evaluation functions. Included in these services is the ability to capture data quality and data provenance. These interoperability services can be directly applied to future data sets, such as those from the Global Precipitation Measurement (GPM) mission. This presentation describes the data sets and services at the PDISC that are currently used by precipitation science and applications researchers, and which will be enhanced in preparation for GPM and associated multi-sensor data research. Specifically, the GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni) will be illustrated. Giovanni enables scientific exploration of Earth science data without researchers having to perform the complicated data access and match-up processes. In addition, PDISC tool and service capabilities being adapted for GPM data will be described, including the Google-like Mirador data search and access engine; semantic technology to help manage large amounts of multi-sensor data and their relationships; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion to various formats (e.g., netCDF, HDF, KML (for Google Earth)); visualization and analysis of Level 2 data profiles and maps; parameter and spatial subsetting; time and temporal aggregation; regridding; data version control and provenance; continuous archive verification; and expertise in data-related standards and interoperability. The goal of providing these services is to further the progress towards a common framework by which data analysis/validation can be more easily accomplished.
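
    Server-side subsetting is what such OPeNDAP services provide: a client opens a remote dataset URL, and only the requested slice crosses the network. A hedged sketch using the netCDF4 library (the URL and variable name are placeholders, not a real GES DISC endpoint):

      # Hedged sketch of OPeNDAP access with the netCDF4 library; the URL and the
      # variable name are placeholders, not a real GES DISC/PDISC endpoint.
      from netCDF4 import Dataset

      url = "https://example.gov/opendap/precipitation_daily.nc"  # hypothetical
      ds = Dataset(url)                    # opens remotely; metadata only, no bulk download
      precip = ds.variables["precip"]      # hypothetical variable name
      # Indexing triggers a server-side subset: only this slice is transferred.
      subset = precip[0:10, 100:140, 200:260]   # (time, lat, lon)
      print(subset.shape)
      ds.close()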

  17. An Educators' Guide to Information Access across the Internet.

    ERIC Educational Resources Information Center

    Hazari, Sunil

    1994-01-01

    A discussion of tools available for use of the Internet, particularly by college and university educators and students, offers information on use of various services, including electronic mailing list servers, data communications protocols for networking, inter-host connections, file transfer protocol, gopher software, bibliographic searching,…

  18. 47 CFR 79.4 - Closed captioning of video programming delivered using Internet protocol.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Closed captioning of video programming... (CONTINUED) BROADCAST RADIO SERVICES ACCESSIBILITY OF VIDEO PROGRAMMING Video Programming Owners, Providers, and Distributors § 79.4 Closed captioning of video programming delivered using Internet protocol. (a...

  19. 77 FR 6094 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-07

    ....-International Atomic Energy Agency Additional Protocol. Under the U.S.-International Atomic Energy Agency (IAEA...-related activities to the IAEA and potentially provide access to IAEA inspectors for verification purposes. The U.S.-IAEA Additional Protocol permits the United States unilaterally to declare exclusions from...

  20. Combining novel technologies with improved logistics to reduce hemodialysis vascular access dysfunction.

    PubMed

    Roy-Chaudhury, P; Lee, T; Duncan, H; El-Khatib, M

    2009-01-01

    Hemodialysis (HD) vascular access dysfunction is currently a huge clinical problem for which there are no effective therapies. There are, however, a number of promising technologies that are currently at the experimental or clinical trial stage. We believe that the application of these novel technologies in combination with better clinical protocols for vascular access care could significantly reduce the current problems associated with HD vascular access.

  1. Global Access to Library of Congress' Digital Resources: National Digital Library and Internet Resources.

    ERIC Educational Resources Information Center

    Chen, Ching-chih

    1996-01-01

    Summarizes how the Library of Congress' digital library collections can be accessed globally via the Internet and World Wide Web. Outlines the resources found in each of the various access points: gopher, online catalog, library and legislative Web sites, legal and copyright databases, and FTP (file transfer protocol) sites. (LAM)

  2. 106-17 Telemetry Standards Chapter 7 Packet Telemetry Downlink

    DTIC Science & Technology

    2017-07-31

    Acronyms IP Internet Protocol IPv4 Internet Protocol, Version 4 IPv6 Internet Protocol, Version 6 LLP low-latency PTDP MAC media access control...o 4’b0101: PT Internet Protocol (IP) Packet o 4’b0110: PT Chapter 24 TmNSMessage Packet o 4’b0111 – 4’b1111: Reserved • Fragment (bits 17 – 16...packet is defined as a free-running 12-bit counter. The PT test counter packet shall consist of one 12-bit word and shall be encoded as one 24-bit

  3. Easy Online Access to Helpful Internet Guides.

    ERIC Educational Resources Information Center

    Tuss, Joan

    1993-01-01

    Lists recommended guides to the Internet that are available electronically. Basic commands needed to use anonymous ftp (file transfer protocol) are explained. An annotation and command formats to access, scan, retrieve, and exit each file are included for 11 titles. (EAM)

  4. An extended smart utilization medium access control (ESU-MAC) protocol for ad hoc wireless systems

    NASA Astrophysics Data System (ADS)

    Vashishtha, Jyoti; Sinha, Aakash

    2006-05-01

    The demand for spontaneous setup of a wireless communication system has increased in recent years for areas like battlefields, disaster relief operations etc., where a pre-deployment of network infrastructure is difficult or unavailable. A mobile ad-hoc network (MANET) is a promising solution, but poses a lot of challenges for all the design layers, specifically the medium access control (MAC) layer. Recent existing works have used the concepts of multi-channel and power control in designing MAC layer protocols. SU-MAC, developed by the same authors, efficiently uses the 'available' data and control bandwidth to send control information and results in increased throughput via decreasing contention on the control channel. However, the SU-MAC protocol was limited to static ad-hoc networks and also faced the busy-receiver node problem. We present the Extended SU-MAC (ESU-MAC) protocol, which works with mobile nodes. Also, we significantly improve the scheme of control information exchange in ESU-MAC to overcome the busy-receiver node problem and thus further avoid the blockage of the control channel for longer periods of time. A power control scheme is used as before to reduce interference and to effectively re-use the available bandwidth. Simulation results show that the ESU-MAC protocol is promising for mobile ad-hoc networks in terms of reduced contention at the control channel and improved throughput because of channel re-use. Results show a considerable increase in throughput compared to SU-MAC, which can be attributed to increased accessibility of the control channel and improved utilization of data channels due to the superior control information exchange scheme.

  5. Multiple Access Schemes for Lunar Missions

    NASA Technical Reports Server (NTRS)

    Deutsch, Leslie; Hamkins, Jon; Stocklin, Frank J.

    2010-01-01

    Two years ago, the NASA Coding, Modulation, and Link Protocol (CMLP) study was completed. The study, led by the authors of this paper, recommended codes, modulation schemes, and desired attributes of link protocols for all space communication links in NASA's future space architecture. Portions of the NASA CMLP team were reassembled to resolve one open issue: the use of multiple access (MA) communication from the lunar surface. The CMLP-MA team analyzed and simulated two candidate multiple access schemes that were identified in the original CMLP study: Code Division MA (CDMA) and Frequency Division MA (FDMA) based on a bandwidth-efficient Continuous Phase Modulation (CPM) with a superimposed Pseudo-Noise (PN) ranging signal (CPM/PN). This paper summarizes the results of the analysis and simulation of the CMLP-MA study and describes the final recommendations.
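
    The CDMA half of that trade-off can be illustrated with direct-sequence spreading: users share the channel under orthogonal codes, and the receiver separates them by correlation. A minimal numpy sketch (toy codes and bits, noiseless channel; not the CMLP-MA study's simulation):

      # Minimal direct-sequence CDMA illustration; codes, bits and the noiseless
      # channel are illustrative assumptions, not the study's simulation setup.
      import numpy as np

      c1 = np.array([+1, +1, -1, -1])   # orthogonal (Walsh-like) spreading codes
      c2 = np.array([+1, -1, +1, -1])
      bits1, bits2 = np.array([+1, -1]), np.array([-1, -1])

      # Each user's bits are spread by its code; the channel simply sums the chips.
      tx = np.concatenate([b1 * c1 + b2 * c2 for b1, b2 in zip(bits1, bits2)])

      def despread(rx, code):
          # Correlate each bit-length chunk of the received signal with one code.
          return np.sign(rx.reshape(-1, len(code)) @ code)

      print(despread(tx, c1))   # recovers user 1: [ 1. -1.]
      print(despread(tx, c2))   # recovers user 2: [-1. -1.]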

  6. Application of open source standards and technologies in the http://climate4impact.eu/ portal

    NASA Astrophysics Data System (ADS)

    Plieger, Maarten; Som de Cerff, Wim; Pagé, Christian; Tatarinova, Natalia

    2015-04-01

    This presentation will demonstrate how to calculate and visualize the climate index SU (number of summer days) on the climate4impact portal. The following topics will be covered during the demonstration: - Security: Login using OpenID for access to the Earth System Grid Federation (ESGF) data nodes. The ESGF works in conjunction with several external websites and systems. The climate4impact portal uses X509-based short-lived credentials, generated on behalf of the user with a MyProxy service. Single Sign-On (SSO) is used to make these websites and systems work together. - Discovery: Faceted search based on e.g. variable name, model and institute using the ESGF search services. A catalog browser allows for browsing through CMIP5 and any other climate model data catalogues (e.g. ESSENCE, EOBS, UNIDATA). - Processing using Web Processing Services (WPS): Transform data, subset, export into other formats, and perform climate index calculations using Web Processing Services implemented by PyWPS, based on NCAR NCPP OpenClimateGIS and IS-ENES2 ICCLIM. - Visualization using Web Map Services (WMS): Visualize data from ESGF data nodes using ADAGUC Web Map Services. The aim of climate4impact is to enhance the use of climate research data and the interaction with climate effect/impact communities. The portal is based on 21 impact use cases from 5 different European countries, and is evaluated by a user panel consisting of use case owners. It has been developed within the European projects IS-ENES and IS-ENES2 for more than 5 years, and its development currently continues within IS-ENES2 and CLIPC. As the climate impact community is very broad, the focus is mainly on the scientific impact community. This work has resulted in the ENES portal interface for climate impact communities and can be visited at http://climate4impact.eu/. The current work on climate4impact has two main objectives. The first is a web interface that automatically generates a graphical user interface for WPS endpoints. The WPS calculates climate indices and subsets data using OpenClimateGIS/ICCLIM on data stored in ESGF data nodes. Data is then transmitted from ESGF nodes over secured OpenDAP and becomes available in a new, per-user, secured OpenDAP server. The results can then be visualized again using ADAGUC WMS. Dedicated wizards for the processing of climate indices will be developed in close collaboration with users. The second is to expose climate4impact services as standardized services that can be used by other portals. This adds interoperability between portals and enables the design of specific portals aimed at different impact communities, whether thematic or national.
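
    From a client's point of view, the WPS side of such a portal is just an OGC service endpoint. A hedged sketch with OWSLib (the endpoint URL, process identifier and input name are placeholders, not the actual climate4impact services):

      # Hedged sketch of a WPS interaction using OWSLib; the URL, the process
      # identifier and the input name are hypothetical, not climate4impact's API.
      from owslib.wps import WebProcessingService

      wps = WebProcessingService("https://example.eu/wps")   # hypothetical endpoint
      wps.getcapabilities()
      for proc in wps.processes:
          print(proc.identifier, "-", proc.title)

      # An Execute request would then pass named inputs to one of those processes,
      # e.g. a summer-days calculation over an OPeNDAP dataset reference.
      execution = wps.execute("calc_SU",
                              inputs=[("dataset", "https://example.eu/opendap/tasmax.nc")])
      print(execution.status)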

  7. RosettaScripts: a scripting language interface to the Rosetta macromolecular modeling suite.

    PubMed

    Fleishman, Sarel J; Leaver-Fay, Andrew; Corn, Jacob E; Strauch, Eva-Maria; Khare, Sagar D; Koga, Nobuyasu; Ashworth, Justin; Murphy, Paul; Richter, Florian; Lemmon, Gordon; Meiler, Jens; Baker, David

    2011-01-01

    Macromolecular modeling and design are increasingly useful in basic research, biotechnology, and teaching. However, the absence of a user-friendly modeling framework that provides access to a wide range of modeling capabilities is hampering the wider adoption of computational methods by non-experts. RosettaScripts is an XML-like language for specifying modeling tasks in the Rosetta framework. RosettaScripts provides access to protocol-level functionalities, such as rigid-body docking and sequence redesign, and allows fast testing and deployment of complex protocols without need for modifying or recompiling the underlying C++ code. We illustrate these capabilities with RosettaScripts protocols for the stabilization of proteins, the generation of computationally constrained libraries for experimental selection of higher-affinity binding proteins, loop remodeling, small-molecule ligand docking, design of ligand-binding proteins, and specificity redesign in DNA-binding proteins.

  8. A global water resources ensemble of hydrological models: the eartH2Observe Tier-1 dataset

    NASA Astrophysics Data System (ADS)

    Schellekens, Jaap; Dutra, Emanuel; Martínez-de la Torre, Alberto; Balsamo, Gianpaolo; van Dijk, Albert; Sperna Weiland, Frederiek; Minvielle, Marie; Calvet, Jean-Christophe; Decharme, Bertrand; Eisner, Stephanie; Fink, Gabriel; Flörke, Martina; Peßenteiner, Stefanie; van Beek, Rens; Polcher, Jan; Beck, Hylke; Orth, René; Calton, Ben; Burke, Sophia; Dorigo, Wouter; Weedon, Graham P.

    2017-07-01

    The dataset presented here consists of an ensemble of 10 global hydrological and land surface models for the period 1979-2012, using a reanalysis-based meteorological forcing dataset (0.5° resolution). The dataset represents the current state of the art in global hydrological modelling and serves as a benchmark for further improvements in the coming years. A signal-to-noise ratio analysis revealed low inter-model agreement over (i) snow-dominated regions and (ii) tropical rainforest and monsoon areas. The large uncertainty of precipitation in the tropics is not reflected in the ensemble runoff. Verification of the results against benchmark datasets for evapotranspiration, snow cover, snow water equivalent, soil moisture anomaly and total water storage anomaly using the tools from the International Land Model Benchmarking Project (ILAMB) showed overall useful model performance, while the ensemble mean generally outperformed the single model estimates. The results also show that there is currently no single best model for all variables and that model performance is spatially variable. In our unconstrained model runs the ensemble mean of total runoff into the ocean was 46,268 km³ yr⁻¹ (334 kg m⁻² yr⁻¹), while the ensemble mean of total evaporation was 537 kg m⁻² yr⁻¹. All data are made available openly through a Water Cycle Integrator portal (WCI, wci.earth2observe.eu), and via direct HTTP and FTP download. The portal follows the protocols of the Open Geospatial Consortium, such as OPeNDAP, WCS and WMS. The DOI for the data is https://doi.org/10.5281/zenodo.167070.

  9. Integrated AUTODIN System Architecture Report. Part 2.

    DTIC Science & Technology

    1979-03-01

    Link Modes Protocols and end-to- end host protocols Codes ASCII, ITA#2 ASCII, Others (Trans- parent to network) Speeds 45 thru 4800 bps 110 bps thru 56K ...service facilities such as AMPEs, subscriber access lines, modems , multiplexers, concentrators, interface development to include software design and...Protocol) CODES - ASCII and ITA#2 (others transparent) SPEEDS - 45.5bps - 56K bps FORMATS - AUTODIN II Segment Formats, JANAP 128, ACP 126/127, DOI 103

  10. Intrusion Detection for Defense at the MAC and Routing Layers of Wireless Networks

    DTIC Science & Technology

    2007-01-01

    Space DoS Denial of Service DSR Dynamic Source Routing IDS Intrusion Detection System LAR Location-Aided Routing MAC Media Access Control MACA Multiple...different mobility parameters. They simulate interaction between three MAC protocols (MACA, 802.11 and CSMA) and three routing protocols (AODV, DSR

  11. The Historian and Electronic Research: File Transfer Protocol (FTP).

    ERIC Educational Resources Information Center

    McCarthy, Michael J.

    1993-01-01

    Asserts that the Internet will become the academic communication medium for historians in the 1990s. Describes the "file transfer protocol" (FTP) access approach to the Internet and discusses its significance for historical research. Includes instructions for using FTP and a list of history-related FTP sites. (CFR)

  12. Migrating an Online Service to WAP - A Case Study.

    ERIC Educational Resources Information Center

    Klasen, Lars

    2002-01-01

    Discusses mobile access via wireless application protocol (WAP) to online services that is offered in Sweden through InfoTorg. Topics include the Swedish online market; filtering HTML data from an Internet/Web server into WML (wireless markup language); mobile phone technology; microbrowsers; WAP protocol; and future possibilities. (LRW)

  13. 78 FR 54612 - Closed Captioning of Internet Protocol-Delivered Video Programming: Implementation of the Twenty...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-05

    ... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 79 [MB Docket No. 11-154; DA 13-1785] Closed Captioning of Internet Protocol-Delivered Video Programming: Implementation of the Twenty-First Century Communications and Video Accessibility Act of 2010 AGENCY: Federal Communications Commission. ACTION: Proposed...

  14. The management of Convulsive Refractory Status Epilepticus in adults in the UK: No consistency in practice and little access to continuous EEG monitoring.

    PubMed

    Patel, Mitesh; Bagary, Manny; McCorry, Dougall

    2015-01-01

    Convulsive Status Epilepticus (CSE) is a common neurological emergency with patients presenting with prolonged epileptic activity. Sub-optimal management is coupled with high morbidity and mortality. Continuous electroencephalogram (EEG) monitoring is considered essential by the National Institute for Health and Care Excellence (NICE) in the management of Convulsive Refractory Status Epilepticus (CRSE). The aim of this research was to determine current clinical practice in the management of CRSE amongst adults in intensive care units (ICUs) in the UK and to establish if the use of a standardised protocol requires re-enforcement within trusts. 75 randomly selected UK NHS Trusts were contacted and asked to complete a questionnaire in addition to providing their protocol for CRSE management in ICU. 55 (73%) trusts responded. While 31 (56% of responders) had a protocol available in ICU for the early stages of CSE, just 21 (38%) trusts had specific guidelines if CRSE occurred. Only 23 (42%) trusts involved neurologists at any stage of management and just 18 (33%) had access to continuous EEG monitoring. This study identifies significant inconsistency in the management of CSE in ICUs across the UK. A minority of ICU units have a protocol for CRSE or access to continuous EEG monitoring despite it being considered fundamental for management and supported by NICE guidance. Copyright © 2014 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.

  15. Protocol for Communication Networking for Formation Flying

    NASA Technical Reports Server (NTRS)

    Jennings, Esther; Okino, Clayton; Gao, Jay; Clare, Loren

    2009-01-01

    An application-layer protocol and a network architecture have been proposed for data communications among multiple autonomous spacecraft that are required to fly in a precise formation in order to perform scientific observations. The protocol could also be applied to other autonomous vehicles operating in formation, including robotic aircraft, robotic land vehicles, and robotic underwater vehicles. A group of spacecraft or other vehicles to which the protocol applies could be characterized as a precision-formation-flying (PFF) network, and each vehicle could be characterized as a node in the PFF network. In order to support precise formation flying, it would be necessary to establish a corresponding communication network, through which the vehicles could exchange position and orientation data and formation-control commands. The communication network must enable communication during early phases of a mission, when little positional knowledge is available. Particularly during early mission phases, the distances among vehicles may be so large that communication could be achieved only by relaying across multiple links. The large distances and need for omnidirectional coverage would limit communication links to operation at low bandwidth during these mission phases. Once the vehicles were in formation and distances were shorter, the communication network would be required to provide high-bandwidth, low-jitter service to support tight formation-control loops. The proposed protocol and architecture, intended to satisfy the aforementioned and other requirements, are based on a standard layered-reference-model concept. The proposed application protocol would be used in conjunction with conventional network, data-link, and physical-layer protocols. The proposed protocol includes the ubiquitous Institute of Electrical and Electronics Engineers (IEEE) 802.11 medium access control (MAC) protocol to be used in the data-link layer. In addition to its widespread and proven use in diverse local-area networks, this protocol offers both (1) a random-access mode needed for the early PFF deployment phase and (2) a time-bounded-services mode needed during PFF-maintenance operations. Switching between these two modes could be controlled by upper-layer entities using standard link-management mechanisms. Because the early deployment phase of a PFF mission can be expected to involve multihop relaying to achieve network connectivity (see figure), the proposed protocol includes the open shortest path first (OSPF) network protocol that is commonly used in the Internet. Each spacecraft in a PFF network would be in one of seven distinct states as the mission evolved from initial deployment, through coarse formation, and into precise formation. Reconfiguration of the formation to perform different scientific observations would also cause state changes among the network nodes. The application protocol provides for recognition and tracking of the seven states for each node and for protocol changes under specified conditions to adapt the network and satisfy communication requirements associated with the current PFF mission phase. Except during early deployment, when peer-to-peer random-access discovery methods would be used, the application protocol provides for operation in a centralized manner.
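
    A sketch of the per-node state tracking described above (the record says there are seven states but does not name them, so the labels below, and the mapping to 802.11 MAC modes, are hypothetical):

      # Per-node mission-state tracking; the seven state labels and the mapping to
      # 802.11 MAC modes are hypothetical, since the record does not enumerate them.
      from enum import Enum, auto

      class PFFState(Enum):
          DEPLOYED = auto()
          DISCOVERING = auto()
          RELAYING = auto()
          COARSE_FORMATION = auto()
          PRECISE_FORMATION = auto()
          RECONFIGURING = auto()
          FAULT = auto()

      # Upper layers switch the 802.11 MAC between its two modes by state.
      MAC_MODE = {
          PFFState.DEPLOYED: "random-access",
          PFFState.DISCOVERING: "random-access",
          PFFState.RELAYING: "random-access",
          PFFState.COARSE_FORMATION: "time-bounded",
          PFFState.PRECISE_FORMATION: "time-bounded",
          PFFState.RECONFIGURING: "time-bounded",
          PFFState.FAULT: "random-access",
      }

      print(MAC_MODE[PFFState.COARSE_FORMATION])   # -> time-bounded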

  16. Effect of a feed/fast protocol on pH in the proximal equine stomach.

    PubMed

    Husted, L; Sanchez, L C; Baptiste, K E; Olsen, S N

    2009-09-01

    Risk factors for the development of gastric squamous ulcers include various management procedures, such as intermittent feed deprivation that can occur during weight management regimens or stall and dry lot confinement. To investigate the effect of intermittent feed deprivation relative to continuous feed intake on proximal intragastric pH, specifically in the region of the squamous mucosa of the lesser curvature. In 6 horses, pH electrodes were placed just inside the oesophageal sphincter in the stomach for each of two 72 h protocols (A and B) in a randomised, cross-over design. Protocol A consisted of 12 h fed, 12 h fasted, 24 h fed and 24 h fasted, in sequence. Protocol B consisted of 72 h fed. During the fed periods of each protocol, horses had ad libitum access to coastal Bermuda hay and were fed sweet feed (1 kg, b.i.d.). Horses had ad libitum access to water at all times. Proximal intragastric pH was significantly lower during protocol A than during protocol B. However, hourly mean pH was significantly different only during the day and evening hours between protocols. During protocol B, mean proximal pH decreased significantly from 03.00 to 09.00 h compared to 19.00 to 23.00 h. A moderate positive correlation of hay intake vs. proximal gastric pH could be established. Intermittent feed deprivation decreased proximal gastric pH in horses relative to those horses for which feed was not restricted. However, the effect was only significant when fasting occurred during the day and evening hours, as a nocturnal decrease in pH occurred simultaneously in the fed horses. Episodes of daytime feed deprivation should be avoided if possible, as proximal gastric acid exposure rapidly increases during such events.

  17. Gaining Access to the Internet.

    ERIC Educational Resources Information Center

    Notess, Greg R.

    1992-01-01

    Discusses Internet services and protocols (i.e., electronic mail, file transfer, and remote login) and provides instructions for retrieving guides and directories of the Internet. Services providing access to the Internet are described, including bulletin board systems, regional networks, nationwide connections, and library organizations; and a…

  18. Quantum CSMA/CD Synchronous Communication Protocol with Entanglement

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zeng, Binyang; Gong, Lihua

    By utilizing the characteristics of quantum entanglement, a quantum synchronous communication protocol for Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is presented. The proposed protocol divides the link into busy time and leisure time, where the data frames are sent via classical channels, the distribution of quantum entanglement is assumed to be completed at leisure time, and the quantum acknowledgement frames are sent via quantum entanglement channels. The time span between two successfully delivered messages can be significantly reduced in this proposed protocol. It is shown that the performance of the CSMA/CD protocol can be improved significantly since collisions can be reduced to a certain extent. The proposed protocol has great significance in quantum communication.

  19. On the Design of a Comprehensive Authorisation Framework for Service Oriented Architecture (SOA)

    DTIC Science & Technology

    2013-07-01

    Authentication Server AZM Authorisation Manager AZS Authorisation Server BP Business Process BPAA Business Process Authorisation Architecture BPAD Business...Internet Protocol Security JAAS Java Authentication and Authorisation Service MAC Mandatory Access Control RBAC Role Based Access Control RCA Regional...the authentication process, make authorisation decisions using application specific access control functions that results in the practice of

  20. Interdisciplinary Approach to the Development of Accessible Computer-Administered Measurement Instruments.

    PubMed

    Magasi, Susan; Harniss, Mark; Heinemann, Allen W

    2018-01-01

    Principles of fairness in testing require that all test takers, including people with disabilities, have an equal opportunity to demonstrate their capacity on the construct being measured. Measurement design features and assessment protocols can pose barriers for people with disabilities. Fairness in testing is a fundamental validity issue at all phases in the design, administration, and interpretation of measurement instruments in clinical practice and research. There is limited guidance for instrument developers on how to develop and evaluate the accessibility and usability of measurement instruments. This article describes a 6-stage iterative process for developing accessible computer-administered measurement instruments grounded in the procedures implemented across several major measurement initiatives. A key component of this process is interdisciplinary teams of accessibility experts, content and measurement experts, information technology experts, and people with disabilities working together to ensure that measurement instruments are accessible and usable by a wide range of users. The development of accessible measurement instruments is not only an ethical requirement, it also ensures better science by minimizing measurement bias, missing data, and attrition due to mismatches between the target population and test administration platform and protocols. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  1. Finding, Browsing and Getting Data Easily Using SPDF Web Services

    NASA Technical Reports Server (NTRS)

    Candey, R.; Chimiak, R.; Harris, B.; Johnson, R.; Kovalick, T.; Lal, N.; Leckner, H.; Liu, M.; McGuire, R.; Papitashvili, N.

    2010-01-01

    The NASA GSFC Space Physics Data Facility (SPDF) provides heliophysics science-enabling information services for enhancing scientific research and enabling integration of these services into the Heliophysics Data Environment paradigm, via standards-based (SOAP) and Representational State Transfer (REST) web services in addition to web browser, FTP, and OPeNDAP interfaces. We describe these interfaces and the philosophies behind these web services, and show how to call them from various languages, such as IDL and Perl. We are working towards a "one simple line to call" philosophy extolled in the recent VxO discussions. Combining data from many instruments and missions enables broad research analysis and correlation and coordination with other experiments and missions.
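
    The REST side of such services reduces to plain HTTP plus a documented URL scheme. A hedged sketch with the requests library (the service root and parameters are illustrative, not the documented SPDF API):

      # Hedged sketch of a REST-style query; the endpoint path and parameters are
      # illustrative assumptions, not the documented SPDF/CDAWeb web-service API.
      import requests

      base = "https://example.gsfc.nasa.gov/WS/rest"   # hypothetical service root
      resp = requests.get(
          f"{base}/datasets",
          params={"instrument": "magnetometer",
                  "start": "2010-01-01", "stop": "2010-01-02"},
          headers={"Accept": "application/json"},
          timeout=30,
      )
      resp.raise_for_status()
      for entry in resp.json().get("datasets", []):
          print(entry.get("id"), entry.get("label"))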

  2. Obtaining i.v. fosfomycin through an expanded-access protocol.

    PubMed

    Frederick, Corey M; Burnette, Jennifer; Aragon, Laura; Gauthier, Timothy P

    2016-08-15

    One hospital's experience with procuring i.v. fosfomycin via an expanded-access protocol to treat a panresistant infection is described. In mid-2014, a patient at a tertiary care institution had an infection caused by a gram-negative pathogen expressing notable drug resistance. Once it was determined by the infectious diseases (ID) attending physician that i.v. fosfomycin was a possible treatment for this patient, the ID pharmacist began the process of drug procurement. The research and ID pharmacists completed an investigational new drug (IND) application, which required patient-specific details and contributions from the ID physician. After obtaining approval of the IND, an Internet search identified a product vendor in the United Kingdom, who was then contacted to begin the drug purchasing and acquisition processes. Authorization of the transaction required signatures from key senior hospital administrators, including the chief financial officer and the chief operating officer. Approximately 6 days after beginning the acquisition process, the research pharmacist arranged for the wholesaler to expedite product delivery. The ID pharmacist contacted the wholesaler's shipping company at the U.S. Customs Office, providing relevant contact information to ensure that any unexpected circumstances could be quickly addressed. The product arrived at the U.S. Customs Office 8 days after beginning the acquisition process and was held in the U.S. Customs Office for 2 days. The patient received the first dose of i.v. fosfomycin 13 days after starting the expanded-access protocol process. I.V. fosfomycin was successfully procured through an FDA expanded-access protocol by coordinating efforts among ID physicians, pharmacists, and hospital executives. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  3. Saccadic vector optokinetic perimetry in children with neurodisability or isolated visual pathway lesions: observational cohort study.

    PubMed

    Tailor, Vijay; Glaze, Selina; Unwin, Hilary; Bowman, Richard; Thompson, Graham; Dahlmann-Noor, Annegret

    2016-10-01

    Children and adults with neurological impairments are often not able to access conventional perimetry; however, information about the visual field is valuable. A new technology, saccadic vector optokinetic perimetry (SVOP), may have improved accessibility, but its accuracy has not been evaluated. We aimed to explore accessibility, testability and accuracy of SVOP in children with neurodisability or isolated visual pathway deficits. Cohort study; recruitment October 2013-May 2014, at children's eye clinics at a tertiary referral centre and a regional Child Development Centre; full orthoptic assessment, SVOP (central 30° of the visual field) and confrontation visual fields (CVF). Group 1: age 1-16 years, neurodisability (n=16), group 2: age 10-16 years, confirmed or suspected visual field defect (n=21); group 2 also completed Goldmann visual field testing (GVFT). Group 1: testability with a full 40-point test protocol is 12.5%; with reduced test protocols, testability is 100%, but plots may be clinically meaningless. Children (44%) and parents/carers (62.5%) find the test easy. SVOP and CVF agree in 50%. Group 2: testability is 62% for the 40-point protocol, and 90.5% for reduced protocols. Corneal changes in childhood glaucoma interfere with SVOP testing. All children and parents/carers find SVOP easy. Overall agreement with GVFT is 64.7%. While SVOP is highly accessible to children, many cannot complete a full 40-point test. Agreement with current standard tests is moderate to poor. Abnormal saccades cause an apparent non-specific visual field defect. In children with glaucoma or nystagmus SVOP calibration often fails. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  4. IP access networks with QoS support

    NASA Astrophysics Data System (ADS)

    Sargento, Susana; Valadas, Rui J. M. T.; Goncalves, Jorge; Sousa, Henrique

    2001-07-01

    The increasing demand for new services and applications is pushing for drastic changes in the design of access networks targeted mainly at residential and SOHO users. Future access networks will provide full service integration (including multimedia), resource sharing at the packet level and QoS support. It is expected that using IP as the base technology, the ideal plug-and-play scenario, where the management actions of the access network operator are kept to a minimum, will be achieved easily. This paper proposes an architecture for access networks based on layer 2 or layer 3 multiplexers that allows a number of simplifications in the network elements and protocols (e.g. in the routing and addressing functions). We discuss two possible steps in the evolution of access networks towards a more efficient support of IP-based services. The first one still provides no QoS support and was designed with the goal of reusing current technologies as much as possible; it is based on tunneling to transport PPP sessions. The second one introduces QoS support through the use of emerging technologies and protocols. We illustrate the different phases of a multimedia Internet access session, using SIP for session initiation, COPS for the management of QoS policies including the AAA functions, and RSVP for resource reservation.

  5. Analysis of practical backoff protocols for contention resolution with multiple servers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, L.A.; MacKenzie, P.D.

    Backoff protocols are probably the most widely used protocols for contention resolution in multiple access channels. In this paper, we analyze the stochastic behavior of backoff protocols for contention resolution among a set of clients and servers, each server being a multiple access channel that deals with contention like an Ethernet channel. We use the standard model in which each client generates requests for a given server according to a Bernoulli distribution with a specified mean. The client-server request rate of a system is the maximum over all client-server pairs (i, j) of the sum of all request rates associated with either client i or server j. Our main result is that any superlinear polynomial backoff protocol is stable for any multiple-server system with a sub-unit client-server request rate. We confirm the practical relevance of our result by demonstrating experimentally that the average waiting time of requests is very small when such a system is run with reasonably few clients and reasonably small request rates such as those that occur in actual Ethernets. Our result is the first proof of stability for any backoff protocol for contention resolution with multiple servers. Our result is also the first proof that any weakly acknowledgment-based protocol is stable for contention resolution with multiple servers and such high request rates. Two special cases of our result are of interest. Hastad, Leighton and Rogoff have shown that for a single-server system with a sub-unit client-server request rate any modified superlinear polynomial backoff protocol is stable. These modified backoff protocols are similar to standard backoff protocols but require more random bits to implement. The special case of our result in which there is only one server extends the result of Hastad, Leighton and Rogoff to standard (practical) backoff protocols. Finally, our result applies to dynamic routing in optical networks.
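
    A toy Monte Carlo version of this setting (Bernoulli arrivals, one multiple-access channel per server, quadratic i.e. superlinear polynomial backoff) reproduces the flavor of the result; all parameters are illustrative, not those of the paper:

      # Toy simulation of quadratic (superlinear polynomial) backoff with multiple
      # servers; parameters are illustrative, not those of the paper's analysis.
      import random

      def simulate(clients=8, servers=2, rate=0.05, steps=20000, seed=1):
          rng = random.Random(seed)
          queue = [0] * clients        # outstanding requests per client
          attempts = [0] * clients     # failed attempts for the head request
          wait_until = [0] * clients
          total_backlog = served = 0
          for t in range(steps):
              for c in range(clients):             # Bernoulli arrivals
                  if rng.random() < rate:
                      queue[c] += 1
              for s in range(servers):             # each client uses one fixed server
                  ready = [c for c in range(clients)
                           if c % servers == s and queue[c] and t >= wait_until[c]]
                  if len(ready) == 1:              # exactly one sender: success
                      c = ready[0]
                      queue[c] -= 1; attempts[c] = 0; served += 1
                  else:                            # collision: back off quadratically
                      for c in ready:
                          attempts[c] += 1
                          wait_until[c] = t + rng.randint(1, (attempts[c] + 1) ** 2)
              total_backlog += sum(queue)
          return total_backlog / max(served, 1)    # rough proxy for waiting time

      print(simulate())   # stays small for sub-unit request rates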

  6. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Transport Protocol (Transmission Control Protocol/User Datagram Protocol [TCP/UDP]) Analysis

    DTIC Science & Technology

    2015-09-01

    the network Mac8 Medium Access Control ( Mac ) (Ethernet) address observed as destination for outgoing packets subsessionid8 Zero-based index of...15. SUBJECT TERMS tactical networks, data reduction, high-performance computing, data analysis, big data 16. SECURITY CLASSIFICATION OF: 17...Integer index of row cts_deid Device (instrument) Identifier where observation took place cts_collpt Collection point or logical observation point on

  7. Very High-Speed Report File System

    DTIC Science & Technology

    1992-12-15

    1.5 and 45 Mb/s and is expected 1 Introduction to reach 150 Mb/s. These new technologies pose some challenges to The Internet Protocol (IP) family (IP... Internet Engineering Task Force (IETF) has R taken up the issue, but a definitive answer is probably some time away. The basic issues are the choice of AAL...by an IEEE 802. la Subnetwork Access Protocol (SNAP) However, with a large number of networks all header. The third proposal identifies the protocol

  8. Performance Analysis of Modified Accelerative Preallocation MAC Protocol for Passive Star-Coupled WDMA Networks

    NASA Astrophysics Data System (ADS)

    Yun, Changho; Kim, Kiseon

    2006-04-01

    For the passive star-coupled wavelength-division multiple-access (WDMA) network, a modified accelerative preallocation WDMA (MAP-WDMA) media access control (MAC) protocol is proposed, which is based on AP-WDMA. To show the advantages of MAP-WDMA as an adequate MAC protocol for the network over AP-WDMA, the channel utilization, the channel-access delay, and the latency of MAP-WDMA are investigated and compared with those of AP-WDMA under various data traffic patterns, including uniform, quasi-uniform type, disconnected type, mesh type, and ring type data traffics, as well as the assumption that a given number of network stations is equal to that of channels, in other words, without channel sharing. As a result, the channel utilization of MAP-WDMA can be competitive with respect to that of AP-WDMA at the expense of insignificantly higher latency. Namely, if the number of network stations is small, MAP-WDMA provides better channel utilization for uniform, quasi-uniform-type, and disconnected-type data traffics at all data traffic loads, as well as for mesh and ring-type data traffics at low data traffic loads. Otherwise, MAP-WDMA only outperforms AP-WDMA for the first three data traffics at higher data traffic loads. In the aspect of channel-access delay, MAP-WDMA gives better performance than AP-WDMA, regardless of data traffic patterns and the number of network stations.

  9. Designing of routing algorithms in autonomous distributed data transmission system for mobile computing devices with ‘WiFi-Direct’ technology

    NASA Astrophysics Data System (ADS)

    Nikitin, I. A.; Sherstnev, V. S.; Sherstneva, A. I.; Botygin, I. A.

    2017-02-01

    The results of research into existing routing protocols for wireless networks and their main features are discussed in the paper. Based on these protocol data, routing protocols for wireless networks, including search routing algorithms and phone directory exchange algorithms, are designed with the ‘WiFi-Direct’ technology. The algorithms were designed without the IP protocol, which increases their efficiency by working only with the MAC addresses of the devices. The developed algorithms are expected to be used in mobile software engineering with the Android platform taken as the base. Simpler algorithms and formats than those of the well-known routing protocols, together with the rejection of the IP protocol, make it possible to use the developed protocols on more primitive mobile devices. Implementing the protocols in industry makes it possible to create data transmission networks among working places and mobile robots without any access points.
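
    A sketch of the underlying idea of routing directly on MAC addresses (the table structure and names are assumptions, not the paper's code):

      # Sketch of MAC-address next-hop routing with no IP layer, in the spirit of
      # the Wi-Fi Direct design above; the table structure is an assumption.
      routing_table = {}   # destination MAC -> (next-hop MAC, hop count)

      def update_route(dst_mac, next_hop_mac, hops):
          # Keep only the shortest advertised route to each destination.
          best = routing_table.get(dst_mac)
          if best is None or hops < best[1]:
              routing_table[dst_mac] = (next_hop_mac, hops)

      def next_hop(dst_mac):
          entry = routing_table.get(dst_mac)
          return entry[0] if entry else None   # None would trigger a route search

      update_route("aa:bb:cc:00:00:02", "aa:bb:cc:00:00:01", hops=2)
      update_route("aa:bb:cc:00:00:02", "aa:bb:cc:00:00:03", hops=1)
      print(next_hop("aa:bb:cc:00:00:02"))     # -> aa:bb:cc:00:00:03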

  10. Cryptanalysis and improvement of an improved two factor authentication protocol for telecare medical information systems.

    PubMed

    Chaudhry, Shehzad Ashraf; Naqvi, Husnain; Shon, Taeshik; Sher, Muhammad; Farash, Mohammad Sabzinejad

    2015-06-01

    Telecare medical information systems (TMIS) provide rapid and convenient health care services remotely. Efficient authentication is a prerequisite to guarantee the security and privacy of patients in TMIS. Authentication is used to verify the legality of the patients and the TMIS server during remote access. Very recently, Islam et al. (J. Med. Syst. 38(10):135, 2014) proposed a two-factor authentication protocol for TMIS using elliptic curve cryptography (ECC) to improve Xu et al.'s (J. Med. Syst. 38(1):9994, 2014) protocol. They claimed their improved protocol to be efficient and to provide all security requirements. However, our analysis reveals that Islam et al.'s protocol suffers from user impersonation and server impersonation attacks. We therefore propose an enhanced protocol. The proposed protocol, while delivering all the virtues of Islam et al.'s protocol, resists all known attacks.

  11. A Study of Shared-Memory Mutual Exclusion Protocols Using CADP

    NASA Astrophysics Data System (ADS)

    Mateescu, Radu; Serwe, Wendelin

    Mutual exclusion protocols are an essential building block of concurrent systems: indeed, such a protocol is required whenever a shared resource has to be protected against concurrent non-atomic accesses. Hence, many variants of mutual exclusion protocols exist in the shared-memory setting, such as Peterson's or Dekker's well-known protocols. Although the functional correctness of these protocols has been studied extensively, relatively little attention has been paid to their non-functional aspects, such as their performance in the long run. In this paper, we report on experiments with the performance evaluation of mutual exclusion protocols using Interactive Markov Chains. Steady-state analysis provides an additional criterion for comparing protocols, which complements the verification of their functional properties. We also carefully re-examined the functional properties, whose accurate formulation as temporal logic formulas in the action-based setting turns out to be quite involved.
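
    Peterson's protocol itself fits in a few lines; a transliteration to Python threads for illustration (real hardware would need memory barriers, and this demo leans on CPython's effectively sequentially consistent execution):

      # Peterson's two-process mutual exclusion, transliterated to Python threads.
      # Real hardware needs memory barriers; CPython's bytecode-level atomicity
      # makes this demo behave sequentially consistently enough to illustrate it.
      import threading

      flag = [False, False]   # flag[i]: process i wants to enter
      turn = 0                # whose turn it is to wait
      counter = 0             # the shared resource to protect

      def worker(i):
          global turn, counter
          other = 1 - i
          for _ in range(100_000):
              flag[i] = True
              turn = other                       # politely yield priority
              while flag[other] and turn == other:
                  pass                           # busy-wait outside the critical section
              counter += 1                       # critical section
              flag[i] = False                    # exit protocol

      threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
      for t in threads: t.start()
      for t in threads: t.join()
      print(counter)   # 200000 when mutual exclusion holds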

  12. Extremely high data-rate, reliable network systems research

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Maly, Kurt J.; Mukkamala, R.; Murray, Nicholas D.; Overstreet, C. Michael

    1990-01-01

    Significant progress was made over the year in the four focus areas of this research group: gigabit protocols, extensions of metropolitan protocols, parallel protocols, and distributed simulations. Two activities, a network management tool and the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol, have developed to the point that a patent is being applied for in the next year; a tool set for distributed simulation using the language SIMSCRIPT also has commercial potential and is to be further refined. The year's results for each of these areas are summarized and next year's activities are described.

  13. Automating individualized coaching and authentic role-play practice for brief intervention training.

    PubMed

    Hayes-Roth, B; Saker, R; Amano, K

    2010-01-01

    Brief intervention helps to reduce alcohol abuse, but there is a need for accessible, cost-effective training of clinicians. This study evaluated STAR Workshop, a web-based training system that automates efficacious techniques for individualized coaching and authentic role-play practice. We compared STAR Workshop to a web-based, self-guided e-book and a no-treatment control, for training the Engage for Change (E4C) brief intervention protocol. Subjects were medical and nursing students. Brief written skill probes tested subjects' performance of individual protocol steps, in different clinical scenarios, at three test times: pre-training, post-training, and post-delay (two weeks). Subjects also did live phone interviews with a standardized patient, post-delay. STAR subjects performed significantly better than both other groups. They showed significantly greater improvement from pre-training probes to post-training and post-delay probes. They scored significantly higher on post-delay phone interviews. STAR Workshop appears to be an accessible, cost-effective approach for training students to use the E4C protocol for brief intervention in alcohol abuse. It may also be useful for training other clinical interviewing protocols.

  14. A Survey of Authentication Schemes in Telecare Medicine Information Systems.

    PubMed

    Aslam, Muhammad Umair; Derhab, Abdelouahid; Saleem, Kashif; Abbas, Haider; Orgun, Mehmet; Iqbal, Waseem; Aslam, Baber

    2017-01-01

    E-Healthcare is an emerging field that provides mobility to its users. The protected health information of the users is stored at a remote server (Telecare Medical Information System) and can be accessed by the users at any time. Many authentication protocols have been proposed to ensure secure authenticated access to the Telecare Medical Information System. These protocols are designed to provide certain properties such as: anonymity, untraceability, unlinkability, privacy, confidentiality, availability and integrity. They also aim to build a key exchange mechanism, which provides security against attacks such as: identity theft, password guessing, denial of service, impersonation and insider attacks. This paper reviews these proposed authentication protocols and discusses their strengths and weaknesses in terms of ensured security and privacy properties, and computation cost. The schemes are divided into three broad categories of one-factor, two-factor and three-factor authentication schemes. Inter-category and intra-category comparisons have been performed for these schemes, and based on the derived results we propose future directions and recommendations that can be very helpful to researchers who work on the design and implementation of authentication protocols.

  15. Land-mobile satellite communication system

    NASA Technical Reports Server (NTRS)

    Yan, Tsun-Yee (Inventor); Rafferty, William (Inventor); Dessouky, Khaled I. (Inventor); Wang, Charles C. (Inventor); Cheng, Unjeng (Inventor)

    1993-01-01

    A satellite communications system includes an orbiting communications satellite for relaying communications to and from a plurality of ground stations, and a network management center for making connections via the satellite between the ground stations in response to connection requests received via the satellite from the ground stations, the network management center being configured to provide both open-end service and closed-end service. The network management center of one embodiment is configured to provide both types of service according to a predefined channel access protocol that enables the ground stations to request the type of service desired. The channel access protocol may be configured to adaptively allocate channels to open-end service and closed-end service according to changes in the traffic pattern and include a free-access tree algorithm that coordinates collision resolution among the ground stations.

  16. Secure Publish-Subscribe Protocols for Heterogeneous Medical Wireless Body Area Networks

    PubMed Central

    Picazo-Sanchez, Pablo; Tapiador, Juan E.; Peris-Lopez, Pedro; Suarez-Tangil, Guillermo

    2014-01-01

    Security and privacy issues in medical wireless body area networks (WBANs) constitute a major unsolved concern because of the challenges posed by the scarcity of resources in WBAN devices and the usability restrictions imposed by the healthcare domain. In this paper, we describe a WBAN architecture based on the well-known publish-subscribe paradigm. We present two protocols for publishing data and sending commands to a sensor that guarantee confidentiality and fine-grained access control. Both protocols are based on a recently proposed ciphertext policy attribute-based encryption (CP-ABE) scheme that is lightweight enough to be embedded into wearable sensors. We show how sensors can implement lattice-based access control (LBAC) policies using this scheme, which are highly appropriate for the eHealth domain. We report experimental results with a prototype implementation demonstrating the suitability of our proposed solution. PMID:25460814

  17. Increasing access to kidney transplantation for sensitized recipient through three-way kidney paired donation with desensitization: The first Indian report

    PubMed Central

    Kute, Vivek B; Patel, Himanshu V; Shah, Pankaj R; Modi, Pranjal R; Shah, Veena R; Rizvi, Sayyed J; Pal, Bipin C; Modi, Manisha P; Shah, Priya S; Varyani, Umesh T; Wakhare, Pavan S; Shinde, Saiprasad G; Ghodela, Viajay A; Patel, Minaxi H; Trivedi, Varsha B; Trivedi, Hargovind L

    2016-01-01

    The combination of kidney paired donation (KPD) with desensitization represents a promising method of increasing the rate of living donor kidney transplantation (LDKT) in immunologically challenging patients. Patients who are difficult to match and desensitize due to strong donor-specific antibodies may be transplanted through a combination of a desensitization and KPD protocol with a more immunologically favorable donor. We present our experience of combining a desensitization protocol with three-way KPD, which contributed to successful LDKT in a highly sensitized end-stage renal disease patient. All recipients were discharged with normal and stable allograft function at 24 months of follow-up. We believe that this is the first report from India where a three-way KPD exchange was performed with the combination of KPD and desensitization. The combination of a desensitization protocol with KPD improves access to and outcomes of LDKT. PMID:27803919

  18. The long underestimated carbonyl function of carbohydrates – an organocatalyzed shot into carbohydrate chemistry.

    PubMed

    Mahrwald, R

    2015-09-21

    The aggressive and strong development of organocatalysis provides several protocols for the convenient utilization of the carbonyl function of unprotected carbohydrates in C-C bond formation processes. These amine-catalyzed mechanisms enable multiple cascade protocols for the synthesis of a wide range of carbohydrate-derived compound classes. Several only slightly different protocols have been developed for the application of 1,3-dicarbonyl compounds in the stereoselective chain elongation of unprotected carbohydrates and the synthesis of highly functionalized C-glycosides of defined configuration. In addition, C-glycosides can also be accessed by amine-catalyzed reactions with methyl ketones. By a one-pot cascade reaction of isocyanides with unprotected aldoses and amino acids, access to glycopeptide mimetics of defined configuration is achieved. Depending on the reaction conditions, different origins of control over the installation of configuration during the bond-formation process were observed.

  19. Access Protocol For An Industrial Optical Fibre LAN

    NASA Astrophysics Data System (ADS)

    Senior, John M.; Walker, William M.; Ryley, Alan

    1987-09-01

    A structure for OSI levels 1 and 2 of a local area network suitable for use in a variety of industrial environments is reported. It is intended that the LAN will utilise optical fibre technology at the physical level and a hybrid of dynamically optimisable token passing and CSMA/CD techniques at the data link (IEEE 802 medium access control - logical link control) level. An intelligent token passing algorithm is employed which dynamically allocates tokens according to the known upper limits on the requirements of each device. In addition, a system of stochastic tokens is used to increase efficiency when the stochastic traffic is significant. The protocol also allows user-defined priority systems to be employed and is suitable for distributed or centralised implementation. The results of computer-simulated performance characteristics for the protocol using a star-ring topology are reported, which demonstrate its ability to perform efficiently with the device and traffic loads anticipated within an industrial environment.

  20. Traffic management mechanism for intranets with available-bit-rate access to the Internet

    NASA Astrophysics Data System (ADS)

    Hassan, Mahbub; Sirisena, Harsha R.; Atiquzzaman, Mohammed

    1997-10-01

    The design of a traffic management mechanism for intranets connected to the Internet via an available-bit-rate access link is presented. The selection of control parameters for optimum performance of this mechanism is shown through analysis. An estimate for the packet loss probability at the access gateway is derived for random fluctuation of the available bit rate of the access link. Some implementation strategies for this mechanism in the standard intranet protocol stack are also suggested.

  1. Access Control Mechanism for IoT Environments Based on Modelling Communication Procedures as Resources.

    PubMed

    Cruz-Piris, Luis; Rivera, Diego; Marsa-Maestre, Ivan; de la Hoz, Enrique; Velasco, Juan R

    2018-03-20

    Internet growth has generated new types of services where the use of sensors and actuators is especially remarkable. These services compose what is known as the Internet of Things (IoT). One of the biggest current challenges is obtaining a safe and easy access control scheme for the data managed in these services. We propose integrating IoT devices in an access control system designed for Web-based services by modelling certain IoT communication elements as resources. This would allow us to obtain a unified access control scheme between heterogeneous devices (IoT devices, Internet-based services, etc.). To achieve this, we have analysed the most relevant communication protocols for these kinds of environments and then we have proposed a methodology which allows the modelling of communication actions as resources. Then, we can protect these resources using access control mechanisms. The validation of our proposal has been carried out by selecting a communication protocol based on message exchange, specifically Message Queuing Telemetry Transport (MQTT). As an access control scheme, we have selected User-Managed Access (UMA), an existing Open Authorization (OAuth) 2.0 profile originally developed for the protection of Internet services. We have performed tests focused on validating the proposed solution in terms of the correctness of the access control system. Finally, we have evaluated the energy consumption overhead when using our proposal.
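
    As a concrete illustration of the kind of exchange being protected, the sketch below publishes a reading over MQTT with the paho-mqtt client (1.x constructor style). The broker address, topic, and the convention of carrying the UMA/OAuth access token in the password field are assumptions made for illustration; in the paper's architecture the token is issued by a separate UMA authorization server and enforced on the resource side.

        # Hypothetical MQTT publish guarded by a UMA/OAuth-style token.
        import paho.mqtt.client as mqtt

        TOKEN = "uma-access-token"   # obtained out of band from the authorization server
        client = mqtt.Client(client_id="sensor-42")       # paho-mqtt 1.x style
        client.username_pw_set(username="sensor-42", password=TOKEN)
        client.connect("broker.example.org", 1883)
        # Whether this publish is permitted is the access-control system's decision.
        client.publish("building/floor1/temperature", payload="21.5", qos=1)
        client.disconnect()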

  2. Access Control Mechanism for IoT Environments Based on Modelling Communication Procedures as Resources

    PubMed Central

    2018-01-01

    Internet growth has generated new types of services where the use of sensors and actuators is especially remarkable. These services compose what is known as the Internet of Things (IoT). One of the biggest current challenges is obtaining a safe and easy access control scheme for the data managed in these services. We propose integrating IoT devices in an access control system designed for Web-based services by modelling certain IoT communication elements as resources. This would allow us to obtain a unified access control scheme between heterogeneous devices (IoT devices, Internet-based services, etc.). To achieve this, we have analysed the most relevant communication protocols for these kinds of environments and then we have proposed a methodology which allows the modelling of communication actions as resources. Then, we can protect these resources using access control mechanisms. The validation of our proposal has been carried out by selecting a communication protocol based on message exchange, specifically Message Queuing Telemetry Transport (MQTT). As an access control scheme, we have selected User-Managed Access (UMA), an existing Open Authorization (OAuth) 2.0 profile originally developed for the protection of Internet services. We have performed tests focused on validating the proposed solution in terms of the correctness of the access control system. Finally, we have evaluated the energy consumption overhead when using our proposal. PMID:29558406

  3. A performance study of WebDav access to storages within the Belle II collaboration

    NASA Astrophysics Data System (ADS)

    Pardi, S.; Russo, G.

    2017-10-01

    WebDav and HTTP are becoming popular protocols for data access in the High Energy Physics community. The most widely used Grid and Cloud storage solutions provide such interfaces, so tuning and performance evaluation have become crucial to promoting the adoption of these protocols within the Belle II community. In this work, we present the results of a large-scale test activity carried out with the goal of evaluating the performance and reliability of the WebDav protocol and studying its possible adoption for user analysis. More specifically, we considered a pilot infrastructure composed of a set of storage elements configured with the WebDav interface, hosted at the Belle II sites. The performance tests include a comparison with xrootd and gridftp. As reference tests we used a set of analysis jobs running under the Belle II software framework, accessing the input data with the ROOT I/O library, in order to simulate a realistic user activity as closely as possible. The final analysis shows the possibility of achieving promising performance with WebDav on different storage systems, and gives interesting feedback for the Belle II community and for other high energy physics experiments.
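
    For readers unfamiliar with WebDav-style data access, the sketch below shows the underlying idea of partial reads over HTTP using the Python requests library. The URL and credentials are placeholders, and the tests described above went through the ROOT I/O layer rather than a plain HTTP client.

        # Hypothetical ranged read from a WebDAV storage element.
        import requests

        url = "https://storage.example.org/webdav/belle2/sample.root"
        headers = {"Range": "bytes=0-1048575"}   # request only the first 1 MiB
        resp = requests.get(url, headers=headers, auth=("user", "pass"), timeout=30)
        resp.raise_for_status()
        print(resp.status_code, len(resp.content))   # 206 means a partial read was served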

  4. Providing Internet Access to High-Resolution Lunar Images

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in the tagged image file format (TIFF) or the Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
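
    A WMS interaction of the kind the OnMoon server accepts can be sketched by assembling a standard GetMap request. The host and layer names below are invented, and EPSG:4326 stands in for one of the Moon-specific coordinate systems the server actually supports.

        # Hypothetical OGC WMS 1.1.1 GetMap request URL.
        from urllib.parse import urlencode

        params = {
            "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
            "LAYERS": "clementine_uvvis",   # invented Lunar image layer
            "STYLES": "",
            "SRS": "EPSG:4326",             # a Moon-specific CRS would be used in practice
            "BBOX": "-180,-90,180,90",
            "WIDTH": "1024", "HEIGHT": "512",
            "FORMAT": "image/jpeg",
        }
        print("http://onmoon.example.nasa.gov/wms?" + urlencode(params))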

  5. South Dakota Department of Education Data Access Policy

    ERIC Educational Resources Information Center

    South Dakota Department of Education, 2015

    2015-01-01

    The South Dakota Department of Education (DOE) collects education records from local schools and districts in accordance with federal and state laws and regulations. This policy document establishes the procedures and protocols for accessing, maintaining, disclosing, and disposing of confidential data records, including data records containing…

  6. Diagnostic and Therapeutic Knowledge and Practices in the Management of Congenital Syphilis by Pediatricians in Public Maternity Hospitals in Brazil.

    PubMed

    Dos Santos, Raquel Rodrigues; Niquini, Roberta Pereira; Bastos, Francisco Inácio; Domingues, Rosa Maria Soares Madeira

    2017-01-01

    The study aimed to assess conformity with Brazil's standard protocol for diagnostic and therapeutic practices in the management of congenital syphilis by pediatricians in public maternity hospitals. A cross-sectional study was conducted in 2015 with 41 pediatricians working in all the public maternity hospitals in Teresina, the capital of Piauí State, Northeast Brazil, through self-completed questionnaires. The study assessed the conformity of knowledge and practices according to the Brazilian Ministry of Health protocols. The study has made evident low access to training courses (54%) and insufficient knowledge of the case definition of congenital syphilis (42%) and rapid tests for syphilis (39%). Flaws were observed in the diagnostic workup and treatment of newborns. Requesting VDRL (88%) and correct treatment of neurosyphilis (88%) were the practices that showed the highest conformity with standard protocols. Low conformity with protocols leads to missed opportunities for identifying and adequately treating congenital syphilis. Based on the barriers identified in the study, better access to diagnostic and treatment protocols, improved recording on prenatal cards and hospital patient charts, availability of tests and medicines, and educational work with pregnant women should be urgently implemented, aiming to reverse the currently inadequate management of congenital syphilis and to curb its spread.

  7. Performance Analysis of the IEEE 802.11p Multichannel MAC Protocol in Vehicular Ad Hoc Networks

    PubMed Central

    2017-01-01

    Vehicular Ad Hoc Networks (VANETs) employ multiple channels to provide a variety of safety and non-safety applications, based on the IEEE 802.11p and IEEE 1609.4 protocols. The safety applications require timely and reliable transmissions, while the non-safety applications require efficient and high throughput. In the IEEE 1609.4 protocol, the operating interval is divided into alternating Control Channel (CCH) and Service Channel (SCH) intervals of identical length. During the CCH interval, nodes transmit safety-related messages and control messages, and the Enhanced Distributed Channel Access (EDCA) mechanism is employed to allow four Access Categories (ACs) within a station with different priorities according to their criticality for the vehicle's safety. During the SCH interval, the non-safety messages are transmitted. An analytical model is proposed in this paper to evaluate the performance, reliability and efficiency of the IEEE 802.11p and IEEE 1609.4 protocols. The proposed model improves on existing work by taking several aspects and the characteristics of multichannel switching into design consideration. Extensive performance evaluations based on analysis and simulation help to validate the accuracy of the proposed model and analyze the capabilities and limitations of the IEEE 802.11p and IEEE 1609.4 protocols, and enhancement suggestions are given. PMID:29231882
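
    The channel alternation at the heart of IEEE 1609.4 is easy to picture in code. The toy model below is not the paper's analytical model; it assumes the commonly cited defaults of a 100 ms sync interval split evenly between CCH and SCH, and it ignores guard intervals.

        # Toy IEEE 1609.4-style channel alternation (assumed 100 ms sync, 50/50 split).
        SYNC_MS, CCH_MS = 100, 50

        def interval_at(t_ms):
            """Which interval a time offset (ms within a UTC second) falls in."""
            return "CCH" if (t_ms % SYNC_MS) < CCH_MS else "SCH"

        for t in (0, 49, 50, 99, 150):
            print(t, interval_at(t))   # CCH, CCH, SCH, SCH, SCH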

  8. Performance Analysis of the IEEE 802.11p Multichannel MAC Protocol in Vehicular Ad Hoc Networks.

    PubMed

    Song, Caixia

    2017-12-12

    Vehicular Ad Hoc Networks (VANETs) employ multiple channels to provide a variety of safety and non-safety applications, based on the IEEE 802.11p and IEEE 1609.4 protocols. The safety applications require timely and reliable transmissions, while the non-safety applications require efficient and high throughput. In the IEEE 1609.4 protocol, the operating interval is divided into alternating Control Channel (CCH) and Service Channel (SCH) intervals of identical length. During the CCH interval, nodes transmit safety-related messages and control messages, and the Enhanced Distributed Channel Access (EDCA) mechanism is employed to allow four Access Categories (ACs) within a station with different priorities according to their criticality for the vehicle's safety. During the SCH interval, the non-safety messages are transmitted. An analytical model is proposed in this paper to evaluate the performance, reliability and efficiency of the IEEE 802.11p and IEEE 1609.4 protocols. The proposed model improves on existing work by taking several aspects and the characteristics of multichannel switching into design consideration. Extensive performance evaluations based on analysis and simulation help to validate the accuracy of the proposed model and analyze the capabilities and limitations of the IEEE 802.11p and IEEE 1609.4 protocols, and enhancement suggestions are given.

  9. Intelligent Cooperative MAC Protocol for Balancing Energy Consumption

    NASA Astrophysics Data System (ADS)

    Wu, S.; Liu, K.; Huang, B.; Liu, F.

    To extend the lifetime of wireless sensor networks, we propose an intelligent balanced-energy-consumption cooperative MAC protocol (IBEC-CMAC) based on the multi-node cooperative transmission model. The protocol prioritizes access to high-quality channels to reduce the energy consumed by each transmission. It can also balance energy consumption among cooperative nodes by using nodes with high residual energy instead of excessively draining any single node's energy. Simulation results show that IBEC-CMAC can obtain longer network lifetime and higher energy utilization than direct transmission.
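
    An energy-balancing relay choice of this general flavor can be sketched in a few lines. The weighting rule below is purely illustrative (the protocol in the paper is more involved): it favors cooperators with good channels while discounting those that are nearly drained.

        # Illustrative relay selection: channel quality weighted by residual energy.
        def pick_relay(candidates):
            """candidates: list of (node_id, channel_gain, residual_energy)."""
            return max(candidates, key=lambda c: c[1] * c[2])[0]

        nodes = [("a", 0.9, 0.20), ("b", 0.7, 0.90), ("c", 0.5, 0.95)]
        print(pick_relay(nodes))   # 'b': decent channel and plenty of energy left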

  10. On Alarm Protocol in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Cichoń, Jacek; Kapelko, Rafał; Lemiesz, Jakub; Zawada, Marcin

    We consider the problem of an efficient alarm protocol for ad-hoc radio networks consisting of devices that try to gain access for transmission through a shared radio communication channel. The problem arises in settings where sensors must quickly inform the target user about an alert situation such as the presence of fire, dangerous radiation, or seismic vibrations. In this paper, we present a protocol which uses O(log n) time slots and show that Ω(log n / log log n) is a lower bound on the number of time slots used.
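
    A round-based simulation conveys why O(log n) slots suffice on a collision-prone shared channel. The decaying transmission probability below is a standard device in this protocol family, used here purely as an illustration; the paper's actual algorithm and constants differ.

        # Toy alarm rounds: in slot k each alarmed sensor fires with prob. 2**-k,
        # so some slot has about one expected transmitter whatever n is.
        import math, random

        def first_collision_free_slot(n):
            max_slots = 2 * max(1, math.ceil(math.log2(n))) + 4
            for k in range(max_slots):
                transmitters = sum(random.random() < 2.0 ** -k for _ in range(n))
                if transmitters == 1:      # exactly one sender: alarm heard
                    return k
            return None                    # rare failure in this toy version

        print(first_collision_free_slot(1000))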

  11. Device USB interface and software development for electric parameter measuring instrument

    NASA Astrophysics Data System (ADS)

    Li, Deshi; Chen, Jian; Wu, Yadong

    2003-09-01

    Aimed at general device development, this paper discusses USB interface and software development. Taking the PDIUSBD12, which supports a parallel interface, as an example, the paper analyzes its technical characteristics. Different interface circuits were designed with an 80C52 single-chip microcomputer and a TMS320C54-series digital signal processor, and address allocation and register access were analyzed. In accordance with the USB 1.1 standard protocol, the device software and application-layer protocol were designed, the data exchange protocol was specified, and the system functions were implemented.

  12. The MISTRALS programme data portal

    NASA Astrophysics Data System (ADS)

    Fleury, Laurence; Brissebrat, Guillaume; Belmahfoud, Nizar; Boichard, Jean-Luc; Brosolo, Laetitia; Cloché, Sophie; Descloitres, Jacques; Ferré, Hélène; Focsa, Loredana; Henriot, Nicolas; Labatut, Laurent; Mière, Arnaud; Petit de la Villéon, Loïc; Ramage, Karim; Schmechtig, Catherine; Vermeulen, Anne; André, François

    2015-04-01

    Mediterranean Integrated STudies at Regional And Local Scales (MISTRALS) is a decennial programme of systematic observations and research dedicated to understanding the environmental processes of the Mediterranean Basin and their evolution under global change. It is composed of eight multidisciplinary projects that cover all the components of the Earth system (atmosphere, ocean, continental surfaces, lithosphere...) and their interactions, all the disciplines (physics, chemistry, marine biogeochemistry, biology, geology, sociology...) and different time scales. For example, the Hydrological cycle in the Mediterranean eXperiment (HyMeX) aims at improving the predictability of extreme rainfall events and assessing the social and economic vulnerability to extreme events and adaptation capacity. The Paleo Mediterranean Experiment (PaleoMeX) is dedicated to the study of the interactions between climate, societies and civilizations of the Mediterranean world during the last 10000 years. Many long-term monitoring research networks are associated with MISTRALS, such as the Mediterranean Ocean Observing System on Environment (MOOSE), the Centre d'Observation Régional pour la Surveillance du Climat et de l'environnement Atmosphérique et océanographique en Méditerranée occidentale (CORSICA) and the environmental observations from the Mediterranean Eurocentre for Underwater Sciences and Technologies (MEUST-SE). The data generated or used by the different MISTRALS projects are therefore very heterogeneous. They include in situ observations, satellite products, model outputs, social science surveys... Some datasets are automatically produced by operational networks, and others come from research instruments and analysis procedures. They correspond to different time scales (historical time series, observatories, campaigns...) and are managed by several data centres. They originate from many scientific communities with different data sharing practices, specific expectations, and different file formats and data processing tools. The MISTRALS data portal - http://mistrals.sedoo.fr/ - has been designed and developed as a unified tool for sharing scientific data in spite of these many sources of heterogeneity, and for fostering collaboration between research communities. The metadata (data descriptions) are standardized and comply with international standards (ISO 19115-19139; INSPIRE European Directive; Global Change Master Directory Thesaurus). A search tool allows users to browse the catalog by keyword or multicriteria selection (area, period, physical property...) and to access data. Data sets managed by different data centres (ICARE, IPSL, SEDOO, CORIOLIS) are available through interoperability protocols (OPeNDAP, xml requests...) or archive synchronisation. Every in situ data set is available in its native format, but the most commonly used data sets have been homogenized (property names, units, quality flags...) and inserted into a relational database, in order to enable accurate data selection and the download of different data sets in a shared format. At present the MISTRALS data portal provides access to about 550 datasets. It counts more than 600 registered users and about 100 data requests every month. The number of available datasets is increasing daily, due to the provision of campaign datasets (2012, 2013, 2014) by several projects. Every scientist is invited to browse the catalog, complete the online registration form and use MISTRALS data. Feel free to contact mistrals-contact@sedoo.fr for any question.
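
    Accessing one of the portal's OPeNDAP-served datasets looks like the sketch below, using the netCDF4-python library, which speaks OPeNDAP natively. The dataset URL and variable name are invented for illustration; real endpoints are found through the catalog.

        # Hypothetical OPeNDAP access to a portal dataset; only the requested
        # subset crosses the network, since the server does the slicing.
        from netCDF4 import Dataset

        url = "http://mistrals.sedoo.fr/opendap/hymex/sop1/precip.nc"  # invented path
        with Dataset(url) as ds:
            rain = ds.variables["precipitation"]   # invented variable name
            print(rain.dimensions, rain.shape)
            subset = rain[0:10, :, :]              # server-side subsetting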

  13. The MISTRALS programme data portal

    NASA Astrophysics Data System (ADS)

    Fleury, Laurence; Brissebrat, Guillaume; Belmahfoud, Nizar; Boichard, Jean-Luc; Brosolo, Laetitia; Cloché, Sophie; Descloitres, Jacques; Ferré, Hélène; Focsa, Loredana; Labatut, Laurent; Mastrorillo, Laurence; Mière, Arnaud; Petit de la Villéon, Loïc; Ramage, Karim; Schmechtig, Catherine

    2014-05-01

    Mediterranean Integrated STudies at Regional And Local Scales (MISTRALS) is a decennial programme of systematic observations and research dedicated to understanding the environmental processes of the Mediterranean Basin and their evolution under global change. It is composed of eight multidisciplinary projects that cover all the components of the Earth system (atmosphere, ocean, continental surfaces, lithosphere...) and their interactions, many disciplines (physics, chemistry, marine biogeochemistry, biology, geology, sociology...) and different time scales. For example, the Hydrological cycle in the Mediterranean eXperiment (HyMeX) aims at improving the predictability of extreme rainfall events and assessing the social and economic vulnerability to extreme events and adaptation capacity, while the Paleo Mediterranean Experiment (PaleoMeX) is dedicated to the study of the interactions between climate, societies and civilizations of the Mediterranean world during the last 10000 years. Many long-term monitoring research networks are associated with MISTRALS, like the Mediterranean Ocean Observing System on Environment (MOOSE), the Centre d'Observation Régional pour la Surveillance du Climat et de l'environnement Atmosphérique et océanographique en Méditerranée occidentale (CORSICA) and the environmental observations from the Mediterranean Eurocentre for Underwater Sciences and Technologies (MEUST-SE). The data generated or used by the different MISTRALS projects are therefore very heterogeneous. They include in situ observations, satellite products, model outputs, qualitative field surveys... Some datasets are automatically produced by operational networks, and others come from research instruments and analysis procedures. They correspond to different time scales (historical time series, observatories, campaigns...) and are managed by different data centres. They originate from many scientific communities, with varied data sharing cultures, specific expectations, and different file formats and data processing tools. The MISTRALS data portal - http://mistrals.sedoo.fr/ - has been designed and developed as a unified tool to share scientific data in spite of these many sources of heterogeneity, and to foster collaboration between research communities. The metadata (data descriptions) are standardized and comply with international standards (ISO 19115-19139; INSPIRE European Directive; Global Change Master Directory Thesaurus). A search tool allows users to browse the catalogue by keyword or by multicriteria selection (location, period, physical property...) and to access data. Data sets managed by different data centres (ICARE, IPSL, SEDOO, CORIOLIS) are available through interoperability protocols (OPeNDAP, xml requests...) or archive synchronisation. At present the MISTRALS data portal provides access to more than 400 datasets and counts more than 500 registered users. The number of available datasets is increasing daily, due to the provision of campaign datasets (2012, 2013) by several projects. Every in situ data set is available in its native format, but the most frequently used data sets have been homogenized (property names, units, quality flags...) and inserted into a relational database, in order to enable more accurate data selection and the download of different datasets in a shared format. Every scientist is invited to make use of the different MISTRALS tools and data. Do not hesitate to browse the catalogue and fill in the online registration form. Feel free to contact mistrals-contact@sedoo.fr for any question.

  14. The MISTRALS programme data portal

    NASA Astrophysics Data System (ADS)

    Brissebrat, Guillaume; Albert-Aguilar, Alexandre; Belmahfoud, Nizar; Cloché, Sophie; Darras, Sabine; Descloitres, Jacques; Ferré, Hélène; Fleury, Laurence; Focsa, Loredana; Henriot, Nicolas; Labatut, Laurent; Petit de la Villéon, Loïc; Ramage, Karim; Schmechtig, Catherine; Vermeulen, Anne

    2016-04-01

    Mediterranean Integrated STudies at Regional And Local Scales (MISTRALS) is a decennial programme of systematic observations and research dedicated to understanding the environmental processes of the Mediterranean Basin and their evolution under global change. It is composed of eight multidisciplinary projects that cover all the components of the Earth system (atmosphere, ocean, continental surfaces, lithosphere...) and their interactions, all the disciplines (physics, chemistry, marine biogeochemistry, biology, geology, sociology...) and different time scales. For example, the Hydrological cycle in the Mediterranean eXperiment (HyMeX) aims at improving the predictability of extreme rainfall events and assessing the social and economic vulnerability to extreme events and adaptation capacity. The Paleo Mediterranean Experiment (PaleoMeX) is dedicated to the study of the interactions between climate, societies and civilizations of the Mediterranean world during the last 10000 years. Many long-term monitoring research networks are associated with MISTRALS, such as the Mediterranean Ocean Observing System on Environment (MOOSE), the Centre d'Observation Régional pour la Surveillance du Climat et de l'environnement Atmosphérique et océanographique en Méditerranée occidentale (CORSICA) and the environmental observations from the Mediterranean Eurocentre for Underwater Sciences and Technologies (MEUST-SE). The data generated or used by the different MISTRALS projects are therefore very heterogeneous. They include in situ observations, satellite products, model outputs, social science surveys... Some datasets are automatically produced by operational networks, and others come from research instruments and analysis procedures. They correspond to different time scales (historical time series, observatories, campaigns...) and are managed by several data centres. They originate from many scientific communities with different data sharing practices, specific expectations, and different file formats and data processing tools. The MISTRALS data portal - http://mistrals.sedoo.fr/ - has been designed and developed as a unified tool for sharing scientific data in spite of these many sources of heterogeneity, and for fostering collaboration between research communities. The metadata (data descriptions) are standardized and comply with international standards (ISO 19115-19139; INSPIRE European Directive; Global Change Master Directory Thesaurus). A search tool allows users to browse the catalog by keyword or multicriteria selection (area, period, physical property...) and to access data. Data sets managed by different data centres (ICARE, IPSL, SEDOO, CORIOLIS) are available through interoperability protocols (OPeNDAP, xml requests...) or archive synchronisation. Every in situ data set is available in its native format, but the most commonly used data sets have been homogenized (property names, units, quality flags...) and inserted into a relational database, in order to enable accurate data selection and the download of different data sets in a shared format. At present the MISTRALS data portal provides access to about 600 datasets. It counts more than 675 registered users and about 100 data requests every month. The number of available datasets is increasing daily, due to the provision of campaign datasets by several projects. Every scientist is invited to browse the catalog, complete the online registration form and use MISTRALS data. Feel free to contact mistrals-contact@sedoo.fr for any question.

  15. A Hybrid Lifetime Extended Directional Approach for WBANs

    PubMed Central

    Li, Changle; Yuan, Xiaoming; Yang, Li; Song, Yueyang

    2015-01-01

    Wireless Body Area Networks (WBANs) can provide real-time and reliable health monitoring, owing to their human-centered design and sensor interoperability. WBANs have become a key component of the ubiquitous eHealth (electronic health) revolution that is flourishing on the basis of information and communication technologies. The prime consideration in a WBAN is how to maximize network lifetime with battery-powered sensor nodes under energy constraints. Novel Medium Access Control (MAC) protocols are imperative to satisfy the particular BAN scenario and the need for excellent energy efficiency in healthcare applications. In this paper, we propose a hybrid Lifetime Extended Directional Approach (LEDA) MAC protocol based on IEEE 802.15.6 to reduce energy consumption and prolong network lifetime. The LEDA MAC protocol takes full advantage of the energy-saving superiority of directional transmission, employing a multi-beam directional mode in Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) and a single-beam directional mode in Time Division Multiple Access (TDMA), alternating between data reservation and transmission according to traffic characteristics. Moreover, the impact of some inherent problems of directional antennas, such as deafness and the hidden terminal problem, is reduced because each node generates an individual beam according to its designated user priority. Furthermore, LEDA MAC employs a Dynamic Polled Allocation Period (DPAP) for burst data transmissions to increase network reliability and adaptability. Extensive analysis and simulation results show that the proposed LEDA MAC protocol achieves extended network lifetime with improved performance compared with IEEE 802.15.6. PMID:26556357
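
    The hybrid mode choice can be caricatured in a few lines. The mapping below is an illustrative reading of the abstract, not the paper's specification: contention-based reservation uses the multi-beam CSMA/CA mode, scheduled data uses the single-beam TDMA mode, and bursts fall back to dynamic polling.

        # Illustrative access-mode selection for a LEDA-like hybrid MAC.
        def access_mode(traffic):
            """traffic: 'reservation', 'periodic', or 'burst' (invented labels)."""
            return {
                "reservation": "multi-beam CSMA/CA",
                "periodic":    "single-beam TDMA slot",
                "burst":       "Dynamic Polled Allocation Period (DPAP)",
            }[traffic]

        for t in ("reservation", "periodic", "burst"):
            print(t, "->", access_mode(t))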

  16. OCP: Opportunistic Carrier Prediction for Wireless Networks

    DTIC Science & Technology

    2008-08-01

    Many protocols have been proposed for medium access control in wireless networks. MACA [13], MACAW [3], and FAMA [8] are the earlier proposals for... world performance of carrier sense. In Proceedings of ACM SIGCOMM E-WIND Workshop, 2005. [13] P. Karn. MACA: A new channel access method for packet radio

  17. Cultivating Conditions for Access: A Case for "Case-Making" in Graduate Student Preparation for Interdisciplinary Research

    ERIC Educational Resources Information Center

    Hannah, Mark A.; Arreguin, Alex

    2017-01-01

    Gaining access to interdisciplinary research sites poses unique research challenges to technical and professional communication scholars and practitioners. Drawing on applied experiences in externally funded interdisciplinary research projects and scholarship about interdisciplinary research, this article describes a training protocol for…

  18. 21 CFR 1311.125 - Requirements for establishing logical access control-Individual practitioner.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... substance prescriptions and who has obtained a two-factor authentication credential as provided in § 1311... his two-factor authentication credential to satisfy the logical access controls. The second individual... authentication factor required by the two-factor authentication protocol is lost, stolen, or compromised. Such...

  19. Remote Patron Validation: Posting a Proxy Server at the Digital Doorway.

    ERIC Educational Resources Information Center

    Webster, Peter

    2002-01-01

    Discussion of remote access to library services focuses on proxy servers as a method for remote access, based on experiences at Saint Mary's University (Halifax). Topics include Internet protocol user validation; browser-directed proxies; server software proxies; vendor alternatives for validating remote users; and Internet security issues. (LRW)

  20. Techtalk: Telecommunications for Improving Developmental Education.

    ERIC Educational Resources Information Center

    Caverly, David C.; Broderick, Bill

    1993-01-01

    Explains how to access the Internet, discussing hardware and software considerations, connectivity, and types of access available to users. Describes the uses of electronic mail; TELNET, a method for remotely logging onto another computer; and anonymous File Transfer Protocol (FTP), a method for downloading files from a remote computer. (MAB)
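
    Anonymous FTP of the sort described can be reproduced with Python's standard ftplib; the host and file path below are placeholders.

        # Hypothetical anonymous FTP download.
        from ftplib import FTP

        with FTP("ftp.example.edu") as ftp:
            ftp.login()                    # anonymous login, no credentials
            ftp.cwd("/pub/devedu")
            with open("syllabus.txt", "wb") as f:
                ftp.retrbinary("RETR syllabus.txt", f.write)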

  1. The University of Minnesota's Internet Gopher System: A Tool for Accessing Network-Based Electronic Information.

    ERIC Educational Resources Information Center

    Wiggins, Rich

    1993-01-01

    Describes the Gopher system developed at the University of Minnesota for accessing information on the Internet. Highlights include the need for navigation tools; Gopher clients; FTP (File Transfer Protocol); campuswide information systems; navigational enhancements; privacy and security issues; electronic publishing; multimedia; and future…

  2. Results of the 2009 ASBVd survey of avocado accessions in the national germplasm collection in Florida

    USDA-ARS?s Scientific Manuscript database

    The presence of Avocado Sunblotch Viroid (ASBVd) infection among the avocado (Persea americana Mill.) accessions in the National Germplasm Repository at Miami (NGR-Mia) was established in previous studies. An ASBVd specific reverse transcription-polymerase chain reaction (RT-PCR) protocol was used t...

  3. Factors Affecting the Implementation of Sheltered Instruction Observation Protocols for English Language Learners

    ERIC Educational Resources Information Center

    Calderon, Carlos Trevino

    2012-01-01

    The purpose of this sequential mixed methods case study was to explore the role of a teacher's attitude towards Sheltered Instruction Observation Protocols (SIOP) and how those attitudes affect the program's effectiveness. SIOP is a program designed to mitigate the effects of limited English proficiency and promote equal access to the curriculum…

  4. A System for Distributing Real-Time Customized (NEXRAD-Radar) Geosciences Data

    NASA Astrophysics Data System (ADS)

    Singh, Satpreet; McWhirter, Jeff; Krajewski, Witold; Kruger, Anton; Goska, Radoslaw; Seo, Bongchul; Domaszczynski, Piotr; Weber, Jeff

    2010-05-01

    Hydrometeorologists and hydrologists can benefit from (weather) radar derived rain products, including rain rates and accumulations. The Hydro-NEXRAD system (HNX1) has been in operation since 2006 at IIHR-Hydroscience and Engineering at The University of Iowa. It provides rapid and user-friendly access to such user-customized products, generated using archived Weather Surveillance Doppler Radar (WSR-88D) data from the NEXRAD weather radar network in the United States. HNX1 allows researchers to deal directly with radar-derived rain products, without the burden of the details of radar data collection, quality control, processing, and format conversion. A number of hydrologic applications can benefit from a continuous real-time feed of customized radar-derived rain products. We are currently developing such a system, Hydro-NEXRAD 2 (HNX2). HNX2 collects real-time, unprocessed data from multiple NEXRAD radars as they become available, processes them through a user-configurable pipeline of data-processing modules, and then publishes processed products at regular intervals. Modules in the data processing pipeline encapsulate algorithms such as non-meteorological echo detection, range correction, radar-reflectivity-rain rate (Z-R) conversion, advection correction, merging products from multiple radars, and grid transformations. HNX2's implementation presents significant challenges, including quality-control, error-handling, time-synchronization of data from multiple asynchronous sources, generation of multiple-radar metadata products, distribution of products to a user base with diverse needs and constraints, and scalability. For content management and distribution, HNX2 uses RAMADDA (Repository for Archiving, Managing and Accessing Diverse Data), developed by the UCAR/Unidata Program Center in the United States. RAMADDA allows HNX2 to publish products through automation and gives users multiple access methods to the published products, including simple web-browser based access, and OPeNDAP access. The latter allows a user to set up automation at his/her end, and fetch new data from HNX2 at regular intervals. HNX2 uses a two-dimensional metadata structure called a mosaic for managing metadata of the rain products. Currently, HNX2 is in pre-production state and is serving near real-time rain-rate map data-products for individual radars and merged data-products from seven radars covering the state of Iowa in the United States. These products then drive a rainfall-runoff model called CUENCAS, which is used as part of the Iowa Flood Center (housed at The University of Iowa) real-time flood forecasting system. We are currently developing a generalized scalable framework that will run on inexpensive hardware and will provide products for basins anywhere in the continental United States.
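
    The user-configurable pipeline idea is easy to sketch. The stages below are invented stand-ins operating on plain numbers (real HNX2 modules operate on radar volumes), with a Marshall-Palmer-style Z-R conversion as the middle step.

        # Toy composable processing pipeline in the spirit of HNX2.
        def compose(*stages):
            def pipeline(data):
                for stage in stages:
                    data = stage(data)
                return data
            return pipeline

        remove_clutter = lambda z: [v for v in z if v > 5]             # echo screening
        z_to_rain_rate = lambda z: [(v / 200.0) ** 0.625 for v in z]   # Z = 200 * R**1.6
        accumulate     = lambda r: sum(r)

        hourly_accum = compose(remove_clutter, z_to_rain_rate, accumulate)
        print(hourly_accum([3.0, 40.0, 55.0, 2.0, 61.0]))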

  5. Data Democratization - Promoting Real-Time Data Sharing and Use Worldwide

    NASA Astrophysics Data System (ADS)

    Yoksas, T. C.; Almeida, W. G.; Leon, V. C.

    2007-05-01

    The Unidata Program Center (Unidata) of the University Corporation for Atmospheric Research (UCAR) is actively involved in international collaborations whose goals are the free-and-open sharing of hydro-meteorological data; the distribution of analysis and visualization tools for those data; the establishment of server technologies that provide easy-to-use, programmatic remote-access to a wide variety of datasets; and the building of a community where data, tools, and best practices in education and research are shared. The tools and services provided by Unidata are available to the research and education community free-of-charge. Data sharing capabilities are being provided by Unidata's Internet Data Distribution (IDD) system, a community-based effort that has been the primary source of real-time meteorological data in the US university community for over a decade. A collaboration among Unidata, Brazil's Centro de Previsão de Tempo e Estudos Climáticos (CPTEC), the Universidade Federal do Rio de Janeiro (UFRJ), and the Universidade de São Paulo (USP) has resulted in the creation of a Brazilian peer of the North American IDD, the IDD-Brasil. Collaboration between Unidata and the Universidad de Costa Rica (UCR) seeks to extend IDD data sharing throughout Central America and the Caribbean in an IDD-Caribe. Efforts aimed at creating a data sharing network for researchers on the Antarctic continent have resulted in the establishment of the Antarctic-IDD. Most recently, explorations of data sharing between UCAR and select countries in Africa have begun. Data analysis and visualization capabilities are available through Unidata in a suite of freely-available applications: the National Centers for Environmental Prediction (NCEP) GEneral Meteorology PAcKage (GEMPAK); the Unidata Integrated Data Viewer (IDV); and the University of Wisconsin Space Science and Engineering Center (SSEC) Man-computer Interactive Data Access System (McIDAS). Remote data access capabilities are provided by Unidata's Thematic Realtime Environmental Data Services (THREDDS) servers (which incorporate Open-source Project for a Network Data Access Protocol (OPeNDAP) data services), and the Abstract Data Distribution Environment (ADDE) of McIDAS. It is envisioned that the data sharing capabilities available in the IDD, IDD-Brasil, IDD-Caribe, and Antarctic-IDD, the remote data access capabilities available in THREDDS and ADDE, and the analysis capabilities available in GEMPAK, the IDV, and McIDAS will help foster new collaborations among prominent universities, national meteorological agencies, and WMO Regional Meteorological Training Centers throughout North, Central, and South America, in the Antarctic research community, and eventually in Africa. This paper is intended to inform AGU 2007 Joint Assembly attendees, especially those in Mexico and Central America, of the availability of real-time data and tools to analyze/visualize those data, and to promote the free-and-open sharing of data, especially of locally-held datasets of general interest.

  6. In-memory interconnect protocol configuration registers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Kevin Y.; Roberts, David A.

    Systems, apparatuses, and methods for moving the interconnect protocol configuration registers into the main memory space of a node. The region of memory used for storing the interconnect protocol configuration registers may also be made cacheable to reduce the latency of accesses to the interconnect protocol configuration registers. Interconnect protocol configuration registers which are used during a startup routine may be prefetched into the host's cache to make the startup routine more efficient. The interconnect protocol configuration registers for various interconnect protocols may include one or more of device capability tables, memory-side statistics (e.g., to support two-level memory data mapping decisions), advanced memory and interconnect features such as repair resources and routing tables, prefetching hints, error correcting code (ECC) bits, lists of device capabilities, set and store base address, capability, device ID, status, configuration, capabilities, and other settings.

  7. Recent improvements in the NASA technical report server

    NASA Technical Reports Server (NTRS)

    Maa, Ming-Hokng; Nelson, Michael L.

    1995-01-01

    The NASA Technical Report Server (NTRS), a World Wide Web (WWW) report distribution service, has been modified to allow parallel database queries, significantly decreasing user access time by an average factor of 2.3; access from clients behind firewalls and/or proxies which truncate excessively long Uniform Resource Locators (URLs); access to non-Wide Area Information Server (WAIS) databases; and compatibility with the Z39.50 protocol.
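
    The parallel-query speedup described above can be sketched with Python's standard concurrency tools; the database names and the query function are placeholders rather than the NTRS interface.

        # Hypothetical fan-out of one search across several report databases.
        from concurrent.futures import ThreadPoolExecutor

        def query(db):
            return f"results from {db}"      # stand-in for one database search

        databases = ["larc", "grc", "jpl", "gsfc"]
        with ThreadPoolExecutor(max_workers=len(databases)) as pool:
            for result in pool.map(query, databases):   # queries run concurrently
                print(result)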

  8. Early Access to the Cardiac Catheterization Laboratory for Patients Resuscitated From Cardiac Arrest Due to a Shockable Rhythm: The Minnesota Resuscitation Consortium Twin Cities Unified Protocol.

    PubMed

    Garcia, Santiago; Drexel, Todd; Bekwelem, Wobo; Raveendran, Ganesh; Caldwell, Emily; Hodgson, Lucinda; Wang, Qi; Adabag, Selcuk; Mahoney, Brian; Frascone, Ralph; Helmer, Gregory; Lick, Charles; Conterato, Marc; Baran, Kenneth; Bart, Bradley; Bachour, Fouad; Roh, Steven; Panetta, Carmelo; Stark, Randall; Haugland, Mark; Mooney, Michael; Wesley, Keith; Yannopoulos, Demetris

    2016-01-07

    In 2013 the Minnesota Resuscitation Consortium developed an organized approach for the management of patients resuscitated from shockable rhythms to gain early access to the cardiac catheterization laboratory (CCL) in the metro area of Minneapolis-St. Paul. Eleven hospitals with 24/7 percutaneous coronary intervention capabilities agreed to provide early (within 6 hours of arrival at the Emergency Department) access to the CCL with the intention to perform coronary revascularization for outpatients who were successfully resuscitated from ventricular fibrillation/ventricular tachycardia arrest. Other inclusion criteria were age >18 and <76 and presumed cardiac etiology. Patients with other rhythms, known do not resuscitate/do not intubate, noncardiac etiology, significant bleeding, and terminal disease were excluded. The primary outcome was survival to hospital discharge with favorable neurological outcome. Patients (315 out of 331) who were resuscitated from VT/VF and transferred alive to the Emergency Department had complete medical records. Of those, 231 (73.3%) were taken to the CCL per the Minnesota Resuscitation Consortium protocol while 84 (26.6%) were not taken to the CCL (protocol deviations). Overall, 197 (63%) patients survived to hospital discharge with good neurological outcome (cerebral performance category of 1 or 2). Of the patients who followed the Minnesota Resuscitation Consortium protocol, 121 (52%) underwent percutaneous coronary intervention, and 15 (7%) underwent coronary artery bypass graft. In this group, 151 (65%) survived with good neurological outcome, whereas in the group that did not follow the Minnesota Resuscitation Consortium protocol, 46 (55%) survived with good neurological outcome (adjusted odds ratio: 1.99; [1.07-3.72], P=0.03). Early access to the CCL after cardiac arrest due to a shockable rhythm in a selected group of patients is feasible in a large metropolitan area in the United States and is associated with a 65% survival rate to hospital discharge with a good neurological outcome. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  9. Multi-Bit Quantum Private Query

    NASA Astrophysics Data System (ADS)

    Shi, Wei-Xu; Liu, Xing-Tong; Wang, Jian; Tang, Chao-Jing

    2015-09-01

    Most existing Quantum Private Query (QPQ) protocols provide only a single-bit query service and thus have to be repeated several times when more bits are retrieved. Wei et al.'s scheme for block queries requires a high-dimensional quantum key distribution system to sustain it, which is still restricted to the laboratory. Here, based on Markus Jakobi et al.'s single-bit QPQ protocol, we propose a multi-bit quantum private query protocol in which the user can get access to several bits within one single query. We also extend the proposed protocol to block queries, using a binary matrix to guard database security. Analysis in this paper shows that our protocol has better communication complexity and implementability, and can achieve a considerable level of security.
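
    The classical post-processing stage that Jakobi-style QPQ protocols share is easy to simulate; the quantum phase, which leaves the user knowing roughly one bit of the database's key, is replaced below by a plain random choice, and the variable names are ours.

        # Oblivious retrieval given one known key bit (Jakobi-style post-processing).
        import random

        N = 16
        database = [random.randint(0, 1) for _ in range(N)]
        key = [random.randint(0, 1) for _ in range(N)]   # database knows all of it
        j = random.randrange(N)                          # user knows only key[j]

        i = 5                                  # index the user really wants
        shift = (j - i) % N                    # announced publicly
        cipher = [database[c] ^ key[(c + shift) % N] for c in range(N)]
        recovered = cipher[i] ^ key[j]         # since key[(i + shift) % N] == key[j]
        assert recovered == database[i]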

  10. A hash based mutual RFID tag authentication protocol in telecare medicine information system.

    PubMed

    Srivastava, Keerti; Awasthi, Amit K; Kaul, Sonam D; Mittal, R C

    2015-01-01

    Radio Frequency Identification (RFID) is a technology with multidimensional applications that reduce the complexity of daily life. RFID is in widespread use in areas such as access control, transportation, real-time inventory, asset management and automated payment systems. Recently, this technology has been expanding into healthcare environments, where potential applications include patient monitoring, object traceability and drug administration systems. In this paper, we propose a secure RFID-based protocol for the medical sector. This protocol is based on a hash operation with a synchronized secret. The protocol is safe against active and passive attacks such as forgery, traceability, replay and de-synchronization attacks.
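
    A generic hash-based challenge-response exchange conveys the flavor of such protocols; the sketch below is illustrative only (the paper's protocol additionally updates and re-synchronizes the shared secret after each run).

        # Toy hash-based mutual authentication between a tag and a reader.
        import hashlib, os

        def h(*parts):
            return hashlib.sha256(b"|".join(parts)).hexdigest()

        secret = b"shared-tag-reader-secret"       # provisioned on both sides

        r_reader = os.urandom(8)                   # reader's challenge to the tag
        tag_resp = h(secret, r_reader)             # tag proves knowledge of the secret
        assert tag_resp == h(secret, r_reader)     # reader verifies

        r_tag = os.urandom(8)                      # tag's challenge to the reader
        reader_resp = h(secret, r_tag, b"reader")  # reader proves itself in turn
        assert reader_resp == h(secret, r_tag, b"reader")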

  11. Insecurity of Wireless Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheldon, Frederick T; Weber, John Mark; Yoo, Seong-Moo

    Wireless is a powerful core technology enabling our global digital infrastructure. Wi-Fi networks are susceptible to attacks on Wired Equivalent Privacy (WEP), Wi-Fi Protected Access (WPA), and WPA2. These attack signatures can be profiled into a system that defends against such attacks on the basis of their inherent characteristics. Wi-Fi is the standard protocol for wireless networks used extensively in US critical infrastructures. Since the WEP security protocol was broken, the WPA protocol has been considered the secure alternative compatible with hardware developed for WEP. However, in November 2008, researchers developed an attack on WPA, allowing forgery of Address Resolution Protocol (ARP) packets. Subsequent enhancements have enabled ARP poisoning, cryptosystem denial of service, and man-in-the-middle attacks. Open source systems and methods (OSSM) have long been used to secure networks against such attacks. This article reviews OSSMs and the results of experimental attacks on WPA. These experiments re-created current attacks in a laboratory setting, recording both wired and wireless traffic. The article discusses methods of intrusion detection and prevention in the context of cyber physical protection of critical Internet infrastructure. The basis for this research is a specialized (and undoubtedly incomplete) taxonomy of Wi-Fi attacks and their adaptations to existing countermeasures and protocol revisions. Ultimately, this article aims to provide a clearer picture of how and why wireless protection protocols and encryption must achieve a more scientific basis for detecting and preventing such attacks.

  12. [Venous access in oncology].

    PubMed

    Lesimple, T; Béguec, J F; Levêque, J M

    1998-10-31

    Many treatments administered to cancer patients require venous access, either via a peripheral vein or a larger central vein, at the risk of local or systemic infection, thrombus formation, or venous occlusion and dysfunction. Insertion of a central catheter is an invasive procedure which must be conducted under conditions of rigorous asepsis. Strict rules based on well-defined protocols must be applied throughout its use. Local or systemic infectious complications account for 18 to 25% of all nosocomial infections and are often related to colonisation of the puncture site by a Gram-positive organism. In case of infection, ablation of the central catheter is not mandatory for diagnosis or antibiotic treatment. Reported at varying frequencies in the literature, from 4 to 42%, thrombus formation is unpredictable and often difficult to diagnose. Anticoagulants or fibrinolytic agents are indicated, but it may also be necessary to withdraw the catheter. Displacement, rupture, obstruction and extravasation are frequent complications. Back flow must be checked in all venous accesses and free flow carefully verified. The access must remain patent throughout the period of use, guaranteed by a standard heparinization and rinsing protocol. These complications must not mask the important progress achieved with the use of central venous access for specific and symptomatic treatment in cancer patients.

  13. The Potato Cryobank at The International Potato Center (Cip): A Model for Long Term Conservation of Clonal Plant Genetic Resources Collections of the Future.

    PubMed

    Vollmer, R; Villagaray, R; Egusquiza, V; Espirilla, J; García, M; Torres, A; Rojas, E; Panta, A; Barkley, N A; Ellis, D

    Cryobanks are a secure, efficient and low-cost method for the long-term conservation of plant genetic resources, theoretically for centuries or millennia, with minimal maintenance. The present manuscript describes CIP's modified protocol for potato cryopreservation, its large-scale application, and the establishment of quality and operational standards, which included a viability reassessment of material entering the cryobank. In 2013, CIP established stricter quality and operational standards under which 1,028 potato accessions were cryopreserved with an improved PVS2-droplet protocol. In 2014, the viability of 114 accessions cryopreserved in 2013 was reassessed. The average recovery rate (full plant recovery after LN exposure) of 1,028 cryopreserved Solanum species ranged from 34 to 59%, and 70% of the processed accessions showed a minimum recovery rate of ≥20% and were considered successfully cryopreserved. CIP has established a new high-quality management system for cryobanking. Periodic viability reassessment, strict and clear recovery criteria, and monitoring of the percentage of successful accessions meeting the criteria, as well as contamination rates, are metrics that need to be considered in cryobanks.

  14. A comparison of the additional protocols of the five nuclear weapon states and the ensuing safeguards benefits to international nonproliferation efforts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uribe, Eva C; Sandoval, M Analisa; Sandoval, Marisa N

    2009-01-01

    With the 6 January 2009 entry into force of the Additional Protocol by the United States of America, all five declared Nuclear Weapon States that are party to the Nonproliferation Treaty have signed, ratified, and put into force the Additional Protocol. This paper compares the strengths and weaknesses of the five Additional Protocols in force by the five Nuclear Weapon States with respect to the benefits to international nonproliferation aims. This paper also documents the added safeguards burden that these Additional Protocols put on the five declared Nuclear Weapon States with respect to access to their civilian nuclear programs and the hosting of complementary access activities as part of the Additional Protocol.

  15. Persistent escalation of alcohol drinking in C57BL/6J mice with intermittent access to 20% ethanol

    PubMed Central

    Hwa, Lara S.; Chu, Adam; Levinson, Sally A.; Kayyali, Tala M.; DeBold, Joseph F.; Miczek, Klaus A.

    2011-01-01

    Background Intermittent access to drugs of abuse, as opposed to continuous access, is hypothesized to induce a kindling-type transition from moderate to escalated use, leading to dependence. Intermittent 24-hour cycles of ethanol access and deprivation can generate high levels of voluntary ethanol drinking in rats. Methods The current study uses C57BL/6J mice (B6) in an intermittent access to 20% ethanol protocol to escalate ethanol drinking levels. Adult male and female B6 mice were given intermittent access to 20% ethanol on alternating days of the week with water available ad libitum. Ethanol consumption during the initial 2 hours of access was compared to a short term, limited access “binge” drinking procedure, similar to drinking-in-the-dark (DID). B6 mice were also assessed for ethanol dependence with handling-induced convulsion (HIC), a reliable measure of withdrawal severity. Results After 3 weeks, male mice given intermittent access to ethanol achieved high stable levels of ethanol drinking in excess of 20 g/kg/24h, reaching above 100 mg/dl BEC, and showed a significantly higher ethanol preference than mice given continuous access to ethanol. Also, mice given intermittent access drank about twice as much as DID mice in the initial 2-hour access period. B6 mice that underwent the intermittent access protocol for longer periods of time displayed more severe signs of alcohol withdrawal. Additionally, female B6 mice were given intermittent access to ethanol and drank significantly more than males (ca. 30 g/kg/24h). Discussion The intermittent access method in B6 mice is advantageous because it induces escalated, voluntary, and preferential per os ethanol intake, behavior that may mimic a cardinal feature of human alcohol dependence, though the exact nature and site of ethanol acting in the brain and blood as a result of intermittent access has yet to be determined. PMID:21631540

  16. Accessing Stereochemically Rich Sultams via Microwave-Assisted, Continuous Flow Organic Synthesis (MACOS) Scale-out

    PubMed Central

    Organ, Michael G.; Hanson, Paul R.; Rolfe, Alan; Samarakoon, Thiwanka B.; Ullah, Farman

    2011-01-01

    The generation of stereochemically-rich benzothiaoxazepine-1,1′-dioxides for enrichment of high-throughput screening collections is reported. Utilizing a microwave-assisted, continuous flow organic synthesis platform (MACOS), scale-out of core benzothiaoxazepine-1,1′-dioxide scaffolds has been achieved on multi-gram scale using an epoxide opening/SNAr cyclization protocol. Diversification of these sultam scaffolds was attained via a microwave-assisted intermolecular SNAr reaction with a variety of amines. Overall, a facile, 2-step protocol generated a collection of benzothiaoxazepine-1,1′-dioxides possessing stereochemical complexity in rapid fashion, where all 8 stereoisomers were accessed from commercially available starting materials. PMID:22116791

  17. Remote direct memory access over datagrams

    DOEpatents

    Grant, Ryan Eric; Rashti, Mohammad Javad; Balaji, Pavan; Afsahi, Ahmad

    2014-12-02

    A communication stack for providing remote direct memory access (RDMA) over a datagram network is disclosed. The communication stack has a user level interface configured to accept datagram related input and communicate with an RDMA enabled network interface card (NIC) via an NIC driver. The communication stack also has an RDMA protocol layer configured to supply one or more data transfer primitives for the datagram related input of the user level. The communication stack further has a direct data placement (DDP) layer configured to transfer the datagram related input from a user storage to a transport layer based on the one or more data transfer primitives by way of a lower layer protocol (LLP) over the datagram network.
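
    The layering in the claim can be caricatured with a UDP socket standing in for the lower layer protocol (LLP); everything here is an invented toy, since the real data path lives in an RDMA-enabled NIC and its driver.

        # Toy rendering of RDMA-over-datagram layering: RDMA primitives -> DDP -> LLP.
        import socket

        class DDPLayer:
            """Places user buffers onto the transport, one datagram per transfer."""
            def __init__(self, sock, peer):
                self.sock, self.peer = sock, peer
            def place(self, buffer: bytes):
                self.sock.sendto(buffer, self.peer)

        class RDMALayer:
            """Exposes data-transfer primitives on top of the DDP layer."""
            def __init__(self, ddp):
                self.ddp = ddp
            def rdma_write(self, buffer: bytes):
                self.ddp.place(buffer)

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        rdma = RDMALayer(DDPLayer(sock, ("127.0.0.1", 9999)))
        rdma.rdma_write(b"payload")   # fire-and-forget over the datagram network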

  18. A CoAP-Based Network Access Authentication Service for Low-Power Wide Area Networks: LO-CoAP-EAP.

    PubMed

    Garcia-Carrillo, Dan; Marin-Lopez, Rafael; Kandasamy, Arunprabhu; Pelov, Alexander

    2017-11-17

    The Internet-of-Things (IoT) landscape is expanding with new radio technologies. In addition to the Low-Rate Wireless Personal Area Network (LR-WPAN), the recent set of technologies forming the so-called Low-Power Wide Area Networks (LP-WAN) offers long-range communications, allowing one to send small pieces of information at a reduced energy cost, which promotes the creation of new IoT applications and services. However, LP-WAN technologies pose new challenges since they have strong limitations in the available bandwidth. In general, a first step before a smart object is able to gain access to the network is the process of network access authentication. It involves authentication, authorization and key management operations. This process is of vital importance for operators to control network resources. However, proposals for managing network access authentication in LP-WAN are tailored to the specifics of each technology, which could introduce interoperability problems in the future. In this sense, little effort has been put so far into providing a wireless-independent solution for network access authentication in the area of LP-WAN. To fill this gap, we propose a service named Low-Overhead CoAP-EAP (LO-CoAP-EAP), which is based on previous work designed for LR-WPAN. LO-CoAP-EAP integrates the use of Authentication, Authorization and Accounting (AAA) infrastructures and the Extensible Authentication Protocol (EAP). For this integration, we use the Constrained Application Protocol (CoAP) to design a network authentication service independent of the type of LP-WAN technology. LO-CoAP-EAP represents a trade-off between flexibility, wireless technology independence, scalability and performance in LP-WAN.
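
    A minimal CoAP exchange gives a sense of the constrained transport the service is built on. The sketch uses the aiocoap library; the URI is invented, and the real LO-CoAP-EAP exchange carries EAP payloads over CoAP rather than a plain GET.

        # Hypothetical CoAP GET against an authentication gateway resource.
        import asyncio
        from aiocoap import Context, Message, GET

        async def main():
            ctx = await Context.create_client_context()
            req = Message(code=GET, uri="coap://gw.example.org/auth")
            resp = await ctx.request(req).response
            print(resp.code, resp.payload)

        asyncio.run(main())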

  19. IEEE 802.15.4 Frame Aggregation Enhancement to Provide High Performance in Life-Critical Patient Monitoring Systems

    PubMed Central

    Akbar, Muhammad Sajjad; Yu, Hongnian; Cang, Shuang

    2017-01-01

    In wireless body area sensor networks (WBASNs), Quality of Service (QoS) provision for patient monitoring systems in terms of time-critical deadlines, high throughput and energy efficiency is a challenging task. The periodic data from these systems generates a large number of small packets in a short time period which needs an efficient channel access mechanism. The IEEE 802.15.4 standard is recommended for low power devices and widely used for many wireless sensor networks applications. It provides a hybrid channel access mechanism at the Media Access Control (MAC) layer which plays a key role in overall successful transmission in WBASNs. There are many WBASN’s MAC protocols that use this hybrid channel access mechanism in variety of sensor applications. However, these protocols are less efficient for patient monitoring systems where life critical data requires limited delay, high throughput and energy efficient communication simultaneously. To address these issues, this paper proposes a frame aggregation scheme by using the aggregated-MAC protocol data unit (A-MPDU) which works with the IEEE 802.15.4 MAC layer. To implement the scheme accurately, we develop a traffic patterns analysis mechanism to understand the requirements of the sensor nodes in patient monitoring systems, then model the channel access to find the performance gap on the basis of obtained requirements, finally propose the design based on the needs of patient monitoring systems. The mechanism is initially verified using numerical modelling and then simulation is conducted using NS2.29, Castalia 3.2 and OMNeT++. The proposed scheme provides the optimal performance considering the required QoS. PMID:28134853
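
    The aggregation idea reduces per-frame overhead by packing many small sensor payloads into one MAC-layer frame. The greedy sketch below is illustrative only; the 127-byte budget echoes the IEEE 802.15.4 maximum frame size, and the payloads are invented.

        # Greedy packing of small payloads into aggregated frames.
        def aggregate(packets, max_bytes=127):
            batches, cur, used = [], [], 0
            for p in packets:
                if used + len(p) > max_bytes and cur:
                    batches.append(cur)
                    cur, used = [], 0
                cur.append(p)
                used += len(p)
            if cur:
                batches.append(cur)
            return batches

        readings = [b"ecg:%03d" % i for i in range(40)]   # many tiny packets
        print(len(aggregate(readings)))                   # far fewer frames on air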

  20. IEEE 802.15.4 Frame Aggregation Enhancement to Provide High Performance in Life-Critical Patient Monitoring Systems.

    PubMed

    Akbar, Muhammad Sajjad; Yu, Hongnian; Cang, Shuang

    2017-01-28

    In wireless body area sensor networks (WBASNs), Quality of Service (QoS) provision for patient monitoring systems in terms of time-critical deadlines, high throughput and energy efficiency is a challenging task. The periodic data from these systems generate a large number of small packets in a short time period, which requires an efficient channel access mechanism. The IEEE 802.15.4 standard is recommended for low-power devices and widely used in many wireless sensor network applications. It provides a hybrid channel access mechanism at the Media Access Control (MAC) layer which plays a key role in overall successful transmission in WBASNs. Many WBASN MAC protocols use this hybrid channel access mechanism in a variety of sensor applications. However, these protocols are less efficient for patient monitoring systems, where life-critical data simultaneously require bounded delay, high throughput and energy-efficient communication. To address these issues, this paper proposes a frame aggregation scheme using the aggregated MAC protocol data unit (A-MPDU), which works with the IEEE 802.15.4 MAC layer. To implement the scheme accurately, we develop a traffic pattern analysis mechanism to understand the requirements of the sensor nodes in patient monitoring systems, then model the channel access mechanism to find the performance gap on the basis of the obtained requirements, and finally propose a design based on the needs of patient monitoring systems. The mechanism is initially verified using numerical modelling, and simulations are then conducted using NS2.29, Castalia 3.2 and OMNeT++. The proposed scheme provides optimal performance with respect to the required QoS.

  1. A CoAP-Based Network Access Authentication Service for Low-Power Wide Area Networks: LO-CoAP-EAP

    PubMed Central

    Garcia-Carrillo, Dan; Marin-Lopez, Rafael; Kandasamy, Arunprabhu; Pelov, Alexander

    2017-01-01

    The Internet-of-Things (IoT) landscape is expanding with new radio technologies. In addition to the Low-Rate Wireless Personal Area Network (LR-WPAN), the recent set of technologies making up the so-called Low-Power Wide Area Networks (LP-WAN) offers long-range communications, allowing small pieces of information to be sent at a reduced energy cost, which promotes the creation of new IoT applications and services. However, LP-WAN technologies pose new challenges, since they have strong limitations in the available bandwidth. In general, a first step before a smart object can gain access to the network is the process of network access authentication, which involves authentication, authorization and key management operations. This process is of vital importance for operators to control network resources. However, proposals for managing network access authentication in LP-WAN are tailored to the specifics of each technology, which could introduce interoperability problems in the future. Indeed, little effort has so far been put into providing a wireless-independent solution for network access authentication in the area of LP-WAN. To fill this gap, we propose a service named Low-Overhead CoAP-EAP (LO-CoAP-EAP), which is based on previous work designed for LR-WPAN. LO-CoAP-EAP integrates the use of Authentication, Authorization and Accounting (AAA) infrastructures and the Extensible Authentication Protocol (EAP). For this integration, we use the Constrained Application Protocol (CoAP) to design a network authentication service independent of the type of LP-WAN technology. LO-CoAP-EAP represents a trade-off between flexibility, wireless technology independence, scalability and performance in LP-WAN. PMID:29149040
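
    To make the transport concrete, the following sketch frames a hypothetical EAP Response/Identity message as the payload of a CoAP confirmable POST, the kind of exchange LO-CoAP-EAP builds on. The "auth" URI path, token, and EAP contents are assumptions for illustration, not values prescribed by the paper.

```python
import struct

# Sketch: carrying an EAP message inside a CoAP CON/POST request.
# Header layout follows RFC 7252; the path and payload are made up.

def coap_post(msg_id, uri_path, payload, token=b"\xaa"):
    ver, con_type = 1, 0                      # CoAP version 1, Confirmable
    first = (ver << 6) | (con_type << 4) | len(token)
    code = 0x02                               # 0.02 = POST
    header = struct.pack("!BBH", first, code, msg_id) + token
    # Single Uri-Path option (option number 11 = delta 11 from 0)
    opt = bytes([(11 << 4) | len(uri_path)]) + uri_path.encode()
    return header + opt + b"\xff" + payload   # 0xFF = payload marker

# Hypothetical EAP Response/Identity: code=2, id=1, length, type=1, "node1"
eap = struct.pack("!BBH", 2, 1, 5 + len(b"node1")) + b"\x01" + b"node1"
packet = coap_post(0x1234, "auth", eap)
print(packet.hex())
```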

  2. A Mobility-Aware QoS Signaling Protocol for Ambient Networks

    NASA Astrophysics Data System (ADS)

    Jeong, Seong-Ho; Lee, Sung-Hyuck; Bang, Jongho

    Mobility-aware quality of service (QoS) signaling is crucial to providing seamless multimedia services in ambient environments, where mobile nodes may move frequently between different wireless access networks. The mobility of an IP-based node in ambient networks affects routing paths and, as a result, can have a significant impact on the operation and state management of QoS signaling protocols. In this paper, we first analyze the impact of mobility on QoS signaling protocols and how the protocols operate in mobility scenarios. We then propose an efficient mobility-aware QoS signaling protocol which can operate adaptively in ambient networks. The key features of the protocol include the fast discovery of a crossover node, where the old and new paths converge or diverge due to handover, and localized state management for seamless services. Our analytical and simulation/experimental results show that the proposed protocol works better than existing protocols in IP-based mobile environments.
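
    The crossover-node discovery that the protocol accelerates can be pictured with a simple path comparison: the crossover is the last router shared by the old and new paths after a handover, and only state below it must change. The node names in the sketch are illustrative.

```python
# Sketch: locating the crossover node where an old and a new routing
# path diverge after handover, so state updates can be localized.

def crossover(old_path, new_path):
    """Return the last node shared by both paths from the sender side."""
    cross = None
    for a, b in zip(old_path, new_path):
        if a != b:
            break
        cross = a
    return cross

old = ["CN", "R1", "R2", "R3", "AR_old", "MN"]
new = ["CN", "R1", "R2", "R4", "AR_new", "MN"]
print(crossover(old, new))  # -> "R2": only state below R2 must change
```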

  3. 106-17 Telemetry Management Resources Chapter 25

    DTIC Science & Technology

    2017-07-01

    aspects of the TmNS system. There are two primary protocols for accessing the management resources: the Simple Network Management Protocol (SNMP) and ... management resources, as well as basic HTTP clients and servers for a more RESTful approach to system management. Both tools are available from the ... (Telemetry Standards, RCC Standard 106-17, Chapter 25, July 2017.)

  4. Universal Network Access System

    DTIC Science & Technology

    2003-11-01

    Figure 37 shows the detail of the SCM Tx (LO: local oscillator; LPF: low-pass filter; AMP: amplifier; BPF: band-pass filter) ... with UNAS (BPF: band-pass filter; BM Rx: burst-mode receiver; AWGR: arrayed waveguide grating router; FBG: fiber Bragg grating; TL: tunable laser) ... protocols. Standard specifications and RFCs will be used as guidelines for implementation. (Table 1: UNAS Serial I/O Formats.)

  5. A low power medium access control protocol for wireless medical sensor networks.

    PubMed

    Lamprinos, I; Prentza, A; Sakka, E; Koutsouris, D

    2004-01-01

    The concept of a wireless integrated network of sensors, already applied in several sectors of our everyday life, such as security, transportation and environment monitoring, can also provide an advanced monitoring and control resource for healthcare services. By networking medical sensors wirelessly, attaching them to the patient's body, we create the appropriate infrastructure for continuous, real-time monitoring of the patient without causing discomfort. This infrastructure can improve healthcare by providing the means for flexible acquisition of vital signs, while at the same time offering more convenience to the patient. Given the type of wireless network, traditional medium access control (MAC) protocols cannot take advantage of the application-specific requirements and information characteristics of medical sensor networks, such as the demand for low power consumption and the rather limited and asymmetric data traffic. In this paper, we present the architecture of a low-power MAC protocol designed to support wireless networks of medical sensors. This protocol aims to improve energy efficiency by exploiting the inherent application features and requirements. It is oriented towards the avoidance of the main sources of energy waste, such as idle listening, collisions and excess power expenditure.

  6. Virtual memory support for distributed computing environments using a shared data object model

    NASA Astrophysics Data System (ADS)

    Huang, F.; Bacon, J.; Mapp, G.

    1995-12-01

    Conventional storage management systems provide one interface for accessing memory segments and another for accessing secondary storage objects. This hinders application programming and affects overall system performance due to mandatory data copying and user/kernel boundary crossings, which in the microkernel case may involve context switches. Memory-mapping techniques may be used to provide programmers with a unified view of the storage system. This paper extends such techniques to support a shared data object model for distributed computing environments in which good support for coherence and synchronization is essential. The approach is based on a microkernel, typed memory objects, and integrated coherence control. A microkernel architecture is used to support multiple coherence protocols and the addition of new protocols. Memory objects are typed, and applications can choose the most suitable protocols for different types of object to avoid protocol mismatch. Low-level coherence control is integrated with high-level concurrency control so that the number of messages required to maintain memory coherence is reduced and system-wide synchronization is realized without severely impacting system performance. Together, these features constitute a novel approach to supporting flexible coherence under application control.
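
    A single-machine analogue of the unified view is a memory-mapped file: one interface serves both the memory segment and the secondary storage object behind it. The sketch below shows the idea in Python; the distributed coherence machinery the paper adds is out of scope here.

```python
import mmap, os, struct

# Sketch: a memory-mapped file giving one interface to both "memory"
# and "secondary storage", the unified view generalized by the paper.

path = "shared_object.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)              # back the object with 4 KiB of storage

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    struct.pack_into("!I", mem, 0, 42)   # write through the memory interface
    mem.flush()                          # persist to secondary storage
    print(struct.unpack_from("!I", mem, 0)[0])
    mem.close()
os.remove(path)
```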

  7. Graphical Internet Access on a Budget: Making a Pseudo-SLIP Connection.

    ERIC Educational Resources Information Center

    McCulley, P. Michael

    1995-01-01

    Examines The Internet Adapter (TIA), an Internet protocol that allows computers to be directly on the Internet and access graphics over standard telephone lines using high-speed modems. Compares TIA's system requirements, performance, and costs to other Internet connections. Sidebars describe connections other than TIA and how to find information…

  8. Access and benefit sharing: Best practices for the use and exchange of invertebrate biological control agents

    USDA-ARS?s Scientific Manuscript database

    The Convention on Biological Diversity (CBD) acknowledges the sovereign rights that countries have over their ‘genetic resources’. The Nagoya Protocol, which came into force in 2014, provides a framework for the implementation of an equitable process by which access to, and sharing of benefits between don…

  9. Wireless Computing Architecture III

    DTIC Science & Technology

    2013-09-01

    Acronyms: MIMO, multiple-input and multiple-output; MIMO/CON, MIMO with concurrent channel access and estimation; MU-MIMO, multiuser MIMO; OFDM, orthogonal ... compressive sensing; a design for concurrent channel estimation in scalable multiuser MIMO networking; and novel networking protocols based on machine ... Keywords: network, antenna arrays, UAV networking, angle of arrival, localization, MIMO, access point, channel state information, compressive sensing.

  10. 21 CFR 312.320 - Treatment IND or treatment protocol.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... clinical trial under an IND designed to support a marketing application for the expanded access use, or (ii) All clinical trials of the drug have been completed; and (2) Marketing status. The sponsor is actively pursuing marketing approval of the drug for the expanded access use with due diligence; and (3) Evidence...

  11. Privacy/Security Policy

    Science.gov Websites

    ... collected automatically is: the Internet Protocol (IP) address of the domain from which you access the Internet ... to access, obtain, alter, damage, or destroy information, or otherwise to interfere with the system.

  12. A Stateful Multicast Access Control Mechanism for Future Metro-Area-Networks.

    ERIC Educational Resources Information Center

    Sun, Wei-qiang; Li, Jin-sheng; Hong, Pei-lin

    2003-01-01

    Multicasting is a necessity for a broadband metro-area network; however, security problems exist with current multicast protocols. A stateful multicast access control mechanism, based on MAPE, is proposed. The architecture of MAPE is discussed, as well as the states maintained and messages exchanged. The scheme is flexible and scalable. (Author/AEF)

  13. Embedded controller for GEM detector readout system

    NASA Astrophysics Data System (ADS)

    Zabołotny, Wojciech M.; Byszuk, Adrian; Chernyshova, Maryna; Cieszewski, Radosław; Czarski, Tomasz; Dominik, Wojciech; Jakubowska, Katarzyna L.; Kasprowicz, Grzegorz; Poźniak, Krzysztof; Rzadkiewicz, Jacek; Scholz, Marek

    2013-10-01

    This paper describes the embedded controller used for the multichannel readout system of the GEM detector. The controller is based on an embedded Mini-ITX mainboard running the GNU/Linux operating system, and it offers two interfaces for communicating with the FPGA-based readout system. FPGA configuration and diagnostics are controlled via a low-speed USB-based interface, while high-speed setup of the readout parameters and reception of the measured data are handled by the PCI Express (PCIe) interface. Hardware access is synchronized by a dedicated server written in C. Multiple clients may connect to this server via a TCP/IP network, and different priorities are assigned to individual clients. Specialized protocols have been implemented both for low-level access at the register level and for high-level access with transfer of structured data using the "msgpack" protocol. High-level functionalities have been split between multiple TCP/IP servers for parallel operation. The status of the system may be checked and basic maintenance performed via a web interface, while expert access is possible via an SSH server. The system was designed with reliability and flexibility in mind.
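
    A client of the high-speed structured-data path might look like the following sketch, which sends one msgpack-encoded command over TCP and decodes the reply. The host, port, and command schema are hypothetical, since the record does not publish the wire format.

```python
import socket
import msgpack  # pip install msgpack

# Sketch: exchanging structured data with a controller's TCP server
# using msgpack. Endpoint and message schema are assumptions.

def request(host, port, command):
    with socket.create_connection((host, port)) as sock:
        sock.sendall(msgpack.packb(command))
        unpacker = msgpack.Unpacker()
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            unpacker.feed(chunk)
            for reply in unpacker:
                return reply  # first complete msgpack object

# Hypothetical query for one readout channel's parameters:
# print(request("gem-ctrl.local", 5000, {"cmd": "get", "channel": 3}))
```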

  14. Group Membership Based Authorization to CADC Resources

    NASA Astrophysics Data System (ADS)

    Damian, A.; Dowler, P.; Gaudet, S.; Hill, N.

    2012-09-01

    The Group Membership Service (GMS), implemented at the Canadian Astronomy Data Centre (CADC), is a prototype of what could eventually be an IVOA standard for a distributed and interoperable group membership protocol. Group membership is the core authorization concept that enables teamwork and collaboration amongst astronomers accessing distributed resources and services. The service integrates and complements other access control related IVOA standards such as single-sign-on (SSO) using X.509 proxy certificates and the Credential Delegation Protocol (CDP). The GMS has been used at CADC for several years now, initially as a subsystem and then as a stand-alone Web service. It is part of the authorization mechanism for controlling the access to restricted Web resources as well as the VOSpace service hosted by the CADC. We present the role that GMS plays within the access control system at the CADC, including the functionality of the service and how the different CADC services make use of it to assert user authorization to resources. We also describe the main advantages and challenges of using the service as well as future work to increase its robustness and functionality.

  15. Serving Satellite Remote Sensing Data to User Community through the OGC Interoperability Protocols

    NASA Astrophysics Data System (ADS)

    di, L.; Yang, W.; Bai, Y.

    2005-12-01

    Remote sensing is one of the major methods for collecting geospatial data. A huge amount of remote sensing data has been collected by space agencies and private companies around the world. For example, NASA's Earth Observing System (EOS) is generating more than 3 TB of remote sensing data per day. The data collected by EOS are processed, distributed, archived, and managed by the EOS Data and Information System (EOSDIS). Currently, EOSDIS is managing several petabytes of data. All of those data are not only valuable for global change research, but also useful for local and regional applications and decision making. How to make the data easily accessible to and usable by the user community is one of the key issues for realizing the full potential of these valuable datasets. In the past several years, the Open Geospatial Consortium (OGC) has developed several interoperability protocols aimed at making geospatial data easily accessible to and usable by the user community through the Internet. The protocols particularly relevant to the discovery, access, and integration of multi-source satellite remote sensing data are the Catalog Service for Web (CS/W) and Web Coverage Service (WCS) specifications. The OGC CS/W specifies the interfaces, HTTP protocol bindings, and a framework for defining application profiles required to publish and access digital catalogues of metadata for geographic data, services, and related resource information. The OGC WCS specification defines the interfaces between web-based clients and servers for accessing on-line multi-dimensional, multi-temporal geospatial coverages in an interoperable way. Based on definitions by OGC and ISO 19123, coverage data include all remote sensing images as well as gridded model outputs. The Laboratory for Advanced Information Technology and Standards (LAITS), George Mason University, has been working for many years on developing and implementing OGC specifications to better serve NASA Earth science data to the user community. We have developed the NWGISS software package, which implements multiple OGC specifications, including OGC WMS, WCS, CS/W, and WFS. As a part of the NASA REASoN GeoBrain project, the NWGISS WCS and CS/W servers have been extended to provide operational access to NASA EOS data in data pools through OGC protocols and to make both services chainable in web-service chains. The extensions to the WCS server include the implementation of WCS 1.0.0 and WCS 1.0.2 and the development of a WSDL description of the WCS services. In order to find the on-line EOS data resources, the CS/W server has been extended at the backend to search metadata in NASA ECHO. This presentation reports those extensions and discusses lessons learned from the implementation. It also discusses the advantages, disadvantages, and future improvement of the OGC specifications, particularly the WCS.
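
    A WCS request of the kind NWGISS answers is just an HTTP URL with key-value parameters. The sketch below builds a GetCoverage request following the WCS 1.0.0 KVP binding; the endpoint and coverage name are hypothetical.

```python
from urllib.parse import urlencode

# Sketch: an OGC WCS 1.0.0 GetCoverage request. Parameter names follow
# the WCS 1.0.0 KVP binding; the server and coverage id are made up.

params = {
    "service": "WCS",
    "version": "1.0.0",
    "request": "GetCoverage",
    "coverage": "MOD11A1_LST",     # hypothetical coverage id
    "crs": "EPSG:4326",
    "bbox": "-90,30,-80,40",       # lon/lat subset
    "time": "2005-07-01",
    "width": "512",
    "height": "512",
    "format": "GeoTIFF",
}
url = "http://nwgiss.example.edu/wcs?" + urlencode(params)
print(url)
```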

  16. The Characteristics of Binary Spike-Time-Dependent Plasticity in HfO2-Based RRAM and Applications for Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Zhou, Zheng; Liu, Chen; Shen, Wensheng; Dong, Zhen; Chen, Zhe; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2017-04-01

    A binary spike-time-dependent plasticity (STDP) protocol based on one resistive-switching random access memory (RRAM) device was proposed and experimentally demonstrated in the fabricated RRAM array. Based on the STDP protocol, a novel unsupervised online pattern recognition system including RRAM synapses and CMOS neurons is developed. Our simulations show that the system can efficiently complete the handwritten digit recognition task, which indicates the feasibility of using the RRAM-based binary STDP protocol in neuromorphic computing systems to obtain good performance.
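
    A binary STDP rule is simple enough to state in code: the synapse is driven to one of two resistance states depending on the sign of the spike-timing difference. The sketch below is an illustrative reading of such a rule; the 20 ms window is an assumption, not a value from the paper.

```python
# Sketch: a binary STDP rule of the kind demonstrated on RRAM above.
# The weight is a single bit: potentiate when the presynaptic spike
# precedes the postsynaptic one within a window, depress otherwise.

WINDOW_MS = 20.0   # assumed timing window

def binary_stdp(t_pre, t_post):
    """Return the new binary weight for one pre/post spike pair."""
    dt = t_post - t_pre
    if 0 < dt <= WINDOW_MS:
        return 1   # potentiate: SET the cell to its low-resistance state
    return 0       # depress: RESET to the high-resistance state

print(binary_stdp(t_pre=5.0, t_post=12.0))   # pre before post -> 1
print(binary_stdp(t_pre=12.0, t_post=5.0))   # post before pre -> 0
```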

  17. Internet Protocol-Hybrid Opto-Electronic Ring Network (IP-HORNET): A Novel Internet Protocol-Over-Wavelength Division Multiplexing (IP-Over-WDM) Multiple-Access Metropolitan Area Network (MAN)

    DTIC Science & Technology

    2003-04-01

    usage times. End users may range from today’s typical users, such as home and business users, to futuristic users such as automobiles, appliances, hand ... has the ability to drop a reprogrammable quantity of wavelengths into the node. The second technological requirement is a protocol that automatically ... goal of the R-OADM is to have the ability to drop a reprogrammable number of wavelengths. If it is determined that at peak usage the node must receive M

  18. Generation of comprehensive thoracic oncology database--tool for translational research.

    PubMed

    Surati, Mosmi; Robinson, Matthew; Nandi, Suvobroto; Faoro, Leonardo; Demchuk, Carley; Kanteti, Rajani; Ferguson, Benjamin; Gangadhar, Tara; Hensing, Thomas; Hasina, Rifat; Husain, Aliya; Ferguson, Mark; Karrison, Theodore; Salgia, Ravi

    2011-01-22

    The Thoracic Oncology Program Database Project was created to serve as a comprehensive, verified, and accessible repository for well-annotated cancer specimens and clinical data to be available to researchers within the Thoracic Oncology Research Program. This database also captures a large volume of genomic and proteomic data obtained from various tumor tissue studies. A team of clinical and basic science researchers, a biostatistician, and a bioinformatics expert was convened to design the database. Variables of interest were clearly defined and their descriptions were written in a standard operating manual to ensure consistency of data annotation. Using one protocol for prospective tissue banking and another for retrospective banking, tumor and normal tissue samples were collected from patients consented to these protocols. Clinical information such as demographics, cancer characterization, and treatment plans for these patients was abstracted and entered into a Microsoft Access database. Proteomic and genomic data have been included in the database and have been linked to clinical information for patients described within the database. The data from each table were linked using the relationships function in Microsoft Access to allow the database manager to connect clinical and laboratory information during a query. The queried data can then be exported for statistical analysis and hypothesis generation.

  19. On Increasing Network Lifetime in Body Area Networks Using Global Routing with Energy Consumption Balancing

    PubMed Central

    Tsouri, Gill R.; Prieto, Alvaro; Argade, Nikhil

    2012-01-01

    Global routing protocols in wireless body area networks are considered. Global routing is augmented with a novel link cost function designed to balance energy consumption across the network. The result is a substantial increase in network lifetime at the expense of a marginal increase in energy per bit. Network maintenance requirements are reduced as well, since balancing energy consumption means all batteries need to be serviced at the same time and less frequently. The proposed routing protocol is evaluated using a hardware experimental setup comprising multiple nodes and an access point. The setup is used to assess network architectures, including an on-body access point and an off-body access point with varying number of antennas. Real-time experiments are conducted in indoor environments to assess performance gains. In addition, the setup is used to record channel attenuation data which are then processed in extensive computer simulations providing insight on the effect of protocol parameters on performance. Results demonstrate efficient balancing of energy consumption across all nodes, an average increase of up to 40% in network lifetime corresponding to a modest average increase of 0.4 dB in energy per bit, and a cutoff effect on required transmission power to achieve reliable connectivity. PMID:23201987
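
    The flavor of an energy-balancing link cost can be shown with a small shortest-path example: links into nodes with depleted batteries are made expensive, so traffic detours around them and batteries drain evenly. The cost form and the numbers below are illustrative assumptions, not the paper's exact function.

```python
import heapq

# Sketch: global routing with a link cost that penalizes relaying
# through nodes with depleted batteries, in the spirit of the paper.

def link_cost(tx_energy, residual_fraction):
    return tx_energy / max(residual_fraction, 1e-6)  # full battery = cheap link

def shortest_path(graph, residual, src, dst):
    """graph: {node: {neighbor: tx_energy}}, residual: {node: 0..1}."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, e in graph[u].items():
            nd = d + link_cost(e, residual[v])
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

graph = {"chest": {"wrist": 1.0, "hip": 1.2}, "wrist": {"AP": 1.0},
         "hip": {"AP": 1.1}, "AP": {}}
residual = {"chest": 1.0, "wrist": 0.2, "hip": 0.9, "AP": 1.0}
print(shortest_path(graph, residual, "chest", "AP"))  # detours via "hip"
```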

  20. On increasing network lifetime in body area networks using global routing with energy consumption balancing.

    PubMed

    Tsouri, Gill R; Prieto, Alvaro; Argade, Nikhil

    2012-09-26

    Global routing protocols in wireless body area networks are considered. Global routing is augmented with a novel link cost function designed to balance energy consumption across the network. The result is a substantial increase in network lifetime at the expense of a marginal increase in energy per bit. Network maintenance requirements are reduced as well, since balancing energy consumption means all batteries need to be serviced at the same time and less frequently. The proposed routing protocol is evaluated using a hardware experimental setup comprising multiple nodes and an access point. The setup is used to assess network architectures, including an on-body access point and an off-body access point with varying number of antennas. Real-time experiments are conducted in indoor environments to assess performance gains. In addition, the setup is used to record channel attenuation data which are then processed in extensive computer simulations providing insight on the effect of protocol parameters on performance. Results demonstrate efficient balancing of energy consumption across all nodes, an average increase of up to 40% in network lifetime corresponding to a modest average increase of 0.4 dB in energy per bit, and a cutoff effect on required transmission power to achieve reliable connectivity.

  1. NGSI student activities in open source information analysis in support of the training program of the U.S. DOE laboratories for the entry into force of the additional protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandoval, M Analisa; Uribe, Eva C; Sandoval, Marisa N

    2009-01-01

    In 2008 a joint team from Los Alamos National Laboratory (LANL) and Brookhaven National Laboratory (BNL), consisting of specialists in training IAEA inspectors in the use of complementary access activities, formulated a training program to prepare the U.S. DOE laboratories for the entry into force of the Additional Protocol. As a major part of the support of the activity, LANL summer interns provided open source information analysis to the LANL-BNL mock inspection team. They were part of the Next Generation Safeguards Initiative's (NGSI) summer intern program aimed at producing the next generation of safeguards specialists. This paper describes how they used open source information to 'backstop' the LANL-BNL team's effort to construct meaningful Additional Protocol Complementary Access training scenarios for each of the three DOE laboratories: Lawrence Livermore National Laboratory, Idaho National Laboratory, and Oak Ridge National Laboratory.

  2. Vehicle Density Based Forwarding Protocol for Safety Message Broadcast in VANET

    PubMed Central

    Huang, Jiawei; Wang, Jianxin

    2014-01-01

    In vehicular ad hoc networks (VANETs), the medium access control (MAC) protocol is of great importance for providing time-critical safety applications. Contemporary multihop broadcast protocols in VANETs usually choose the farthest node in broadcast range as the forwarder to reduce the number of forwarding hops. However, in this paper, we demonstrate that the farthest forwarder may experience large contention delay in case of high vehicle density. We propose an IEEE 802.11-based multihop broadcast protocol, VDF, to address the issue of emergency message dissemination. To achieve a tradeoff between contention delay and forwarding hops, VDF adaptively chooses the forwarder according to the vehicle density. Simulation results show that, due to its ability to decrease transmission collisions, the proposed protocol can provide significantly lower broadcast delay. PMID:25121125

  3. Limited school drinking water access for youth

    PubMed Central

    Kenney, Erica L.; Gortmaker, Steven L.; Cohen, Juliana F.W.; Rimm, Eric B.; Cradock, Angie L.

    2016-01-01

    PURPOSE Providing children and youth with safe, adequate drinking water access during school is essential for health. This study utilized objectively measured data to investigate the extent to which schools provide drinking water access that meets state and federal policies. METHODS We visited 59 middle and high schools in Massachusetts during spring 2012. Trained research assistants documented the type, location, and working condition of all water access points throughout each school building using a standard protocol. School food service directors (FSDs) completed surveys reporting water access in cafeterias. We evaluated school compliance with state plumbing codes and federal regulations and compared FSD self-reports of water access with direct observation; data were analyzed in 2014. RESULTS On average, each school had 1.5 (SD: 0.6) water sources per 75 students; 82% (SD: 20) were functioning, and fewer (70%) were both clean and functioning. Less than half of the schools met the federal Healthy Hunger Free Kids Act requirement for free water access during lunch; 18 schools (31%) provided bottled water for purchase but no free water. Slightly over half (59%) met the Massachusetts state plumbing code. FSDs overestimated free drinking water access compared to direct observation (96% FSD-reported versus 48% observed, kappa=0.07, p=0.17). CONCLUSIONS School drinking water access may be limited. In this study, many schools did not meet state or federal policies for minimum student drinking water access. School administrative staff may not accurately report water access. Public health action is needed to increase school drinking water access. IMPLICATIONS AND CONTRIBUTIONS Adolescents’ water consumption is lower than recommended. In a sample of Massachusetts middle and high schools, about half did not meet federal and state minimum drinking water access policies. Direct observation may improve assessments of drinking water access and could be integrated into routine school food service monitoring protocols. PMID:27235376

  4. Server-Controlled Identity-Based Authenticated Key Exchange

    NASA Astrophysics Data System (ADS)

    Guo, Hua; Mu, Yi; Zhang, Xiyong; Li, Zhoujun

    We present a threshold identity-based authenticated key exchange protocol that can be applied to an authenticated server-controlled gateway-user key exchange. The objective is to allow a user and a gateway to establish a shared session key with the permission of the back-end servers, while the back-end servers cannot obtain any information about the established session key. Our protocol has potential applications in strong access control of confidential resources. In particular, our protocol possesses the semantic security and demonstrates several highly-desirable security properties such as key privacy and transparency. We prove the security of the protocol based on the Bilinear Diffie-Hellman assumption in the random oracle model.

  5. The Viability of Hearing Protection Device Fit-Testing at Navy and Marine Corps Accession Points

    PubMed Central

    Federman, Jeremy; Duhon, Christon

    2016-01-01

    Introduction: The viability of hearing protection device (HPD) verification (i.e., fit-testing) on a large scale was investigated to address this gap in a military accession environment. Materials and Methods: Personal Attenuation Ratings (PARs) following self-fitted (SELF-Fit) HPDs were acquired from 320 US Marine Corps training recruits (87.5% male, 12.5% female) across four test protocols (1-, 3-, 5-, and 7-frequency). SELF-Fit failures received follow-up to assess potential causes. Follow-up PARs were acquired (experimenter fit [EXP-Fit], followed by subject re-fit [SUB Re-Fit]). EXP-Fit was intended to provide a perception (dubbed “ear canal muscle memory”) of what a correctly fitted HPD should feel like. SUB Re-Fit was completed following EXP-Fit to determine whether a training recruit could duplicate EXP-Fit on her/his own without assistance. Results: A one-way analysis of variance (ANOVA) (N=320) showed that SELF-Fit means differed significantly between protocols (P < 0.001). Post-hoc analyses showed that the 1-freq SELF-Fit mean was significantly lower than all other protocols (P < 0.03) by 5.6 dB or more. No difference was found between the multi-frequency protocols. For recruits who were followed up with EXP-Fit (n=79), across all protocols, a significant (P < 0.001) mean improvement of 25.68 dB (10.99) was found, but PARs did not differ (P = 0.99) between EXP-Fit protocols. For recruits in the 3-freq and 5-freq protocol groups who experienced all three PAR test methods (n=33), PAR methods differed (P < 0.001) but no method-by-protocol interaction was found (P = 0.46). Post-hoc tests showed that both EXP-Fit and SUB Re-Fit had significantly better attenuation than SELF-Fit (P < 0.001), but no difference was found between EXP-Fit and SUB Re-Fit (P = 0.59). For SELF-Fit, the 1-freq protocol resulted in a 35% pass rate, whereas the 3-, 5-, and 7-freq protocols resulted in >60% pass rates. Results showed that once recruits experienced how HPDs should feel when inserted correctly, they were able to properly replicate the procedure with results similar to the expert fit, suggesting that “ear canal muscle memory” may be a viable training strategy concomitant with HPD verification. Fit-test duration was also measured to examine the tradeoff between results accuracy and the time required to complete each protocol. Discussion: Results from this study showed the critical importance of initial selection and fitting of HPDs followed by verification (i.e., fit-testing) at Navy and Marine Corps accession points. Achieving adequate protection from an HPD is fundamentally dependent on obtaining a proper fit of the issued HPD as well as on the quality of training recruits receive regarding HPD use. PMID:27991461

  6. Protocol for Reducing Time to Antibiotics in Pediatric Patients Presenting to an Emergency Department With Fever and Neutropenia: Efficacy and Barriers.

    PubMed

    Cohen, Clay; King, Amber; Lin, Chee Paul; Friedman, Gregory K; Monroe, Kathy; Kutny, Matthew

    2016-11-01

    Patients with febrile neutropenia are at high risk of morbidity and mortality from infectious causes. Decreasing time to antibiotic (TTA) administration is associated with improved patient outcomes. We sought to reduce TTA for children presenting to the emergency department with fever and neutropenia. In a prospective cohort study with historical comparison, TTA administration was evaluated in patients with neutropenia presenting to the Children's of Alabama Emergency Department. A protocol was established to reduce delays in antibiotic administration and increase the percentage of patients who receive treatment within 60 minutes of presentation. One hundred pre-protocol patient visits between August 2010 and December 2011 were evaluated and 153 post-protocol visits were evaluated between August 2012 and September 2013. We reviewed individual cases to determine barriers to rapid antibiotic administration. Antibiotics were administered in 96.9 ± 57.8 minutes in the pre-protocol patient group, and only 35% of patients received antibiotics within 60 minutes of presentation and 70% received antibiotics within 120 minutes. After implementation of the protocol, TTA for neutropenic patients was decreased to 64.3 ± 28.4 minutes (P < 0.0001) with 51.4% receiving antibiotics within 60 minutes and 93.2% within 120 minutes. Implementing a standard approach to patients at risk for neutropenia decreased TTA. There are numerous challenges in providing timely antibiotics to children with febrile neutropenia. Identified delays included venous access (time to effect of topical anesthetics, and difficulty obtaining access), physicians waiting on laboratory results, and antibiotic availability.

  7. Browsing the World Wide Web from behind a firewall

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simons, R.W.

    1995-02-01

    The World Wide Web provides a unified method of access to various information services on the Internet via a variety of protocols. Mosaic and other browsers give users a graphical interface to the Web that is easier to use and more visually pleasing than any other common Internet information service today. The availability of information via the Web and the number of users accessing it have both grown rapidly in the last year. The interest and investment of commercial firms in this technology suggest that in the near future, access to the Web may become as necessary to doing business as a telephone. This is problematical for organizations that use firewalls to protect their internal networks from the Internet. Allowing all the protocols and types of information found in the Web to pass their firewall will certainly increase the risk of attack by hackers on the Internet. But not allowing access to the Web could be even more dangerous, as frustrated users of the internal network are either unable to do their jobs, or find creative new ways to get around the firewall. The solution to this dilemma adopted at Sandia National Laboratories is described. Discussion also covers risks of accessing the Web, design alternatives considered, and trade-offs used to find the proper balance between access and protection.

  8. Review of the Composability Problem for System Evaluation

    DTIC Science & Technology

    2004-11-01

    ... directory services (e.g., the Lightweight Directory Access Protocol (LDAP)), authentication (e.g., Kerberos), databases, user interfaces (e.g. ... exemplifies this type of development by its use of commercial components and systems for authentication, access management, and directory services

  9. Extending the Wireshark Network Protocol Analyser to Decode Link 16 Tactical Data Link Messages

    DTIC Science & Technology

    2014-01-01

    Interoperability Workshop 2003, Paper No. 03F-SIW-002. 6. Boardman, B. (2008), Introduction to Tactical Data Links in the ADF, accessed from <http ... 2008, Paper No. 08E-SIW-046. 19. Lamping, U. (2013), Wireshark Developer’s Guide for Wireshark 1.11, accessed from <http://www.wireshark.org/docs

  10. How Public Is the Web?: Robots, Access, and Scholarly Communication.

    ERIC Educational Resources Information Center

    Snyder, Herbert; Rosenbaum, Howard

    1998-01-01

    Examines the use of Robot Exclusion Protocol (REP) to restrict the access of search engine robots to 10 major United States university Web sites. An analysis of Web site searching and interviews with Web server administrators shows that the decision to use this procedure is largely technical and is typically made by the Web server administrator.…
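
    The Robot Exclusion Protocol itself is easy to exercise from Python's standard library. The sketch below parses a minimal robots.txt and checks what a robot may fetch; the site and user agent are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# Sketch: how a well-behaved robot honors the Robot Exclusion Protocol
# before fetching a page. The rules and URLs below are made up.

rp = RobotFileParser()
rp.parse("""User-agent: *
Disallow: /private/
""".splitlines())

print(rp.can_fetch("MyIndexBot/1.0",
                   "https://www.example.edu/private/x.html"))  # False
print(rp.can_fetch("MyIndexBot/1.0",
                   "https://www.example.edu/faculty/"))        # True
```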

  11. Architecture and System Engineering Development Study of Space-Based Satellite Networks for NASA Missions

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.

    2003-01-01

    Traditional NASA missions, both near Earth and deep space, have been stovepipe in nature and point-to-point in architecture. Recently, NASA and others have conceptualized missions that require space-based networking. The notion of networks in space is a drastic shift in thinking and requires entirely new architectures, radio systems (antennas, modems, and media access), and possibly even new protocols. A full system engineering study of some key mission architectures will be performed, considering issues such as the science being performed, stationkeeping, antenna size, contact time, data rates, radio-link power requirements, media access techniques, and appropriate networking and transport protocols. This report highlights preliminary architecture concepts and key technologies that will be investigated.

  12. Performance Analysis of IEEE 802.15.3 MAC Protocol with Different ACK Polices

    NASA Astrophysics Data System (ADS)

    Mehta, S.; Kwak, K. S.

    The wireless personal area network (WPAN) is an emerging wireless technology for future short-range indoor and outdoor communication applications. The IEEE 802.15.3 medium access control (MAC) is proposed specially for short-range, high-data-rate applications, to coordinate access to the wireless medium among competing devices. This paper uses an analytical model to study the performance of the WPAN (IEEE 802.15.3) MAC in terms of throughput, efficient bandwidth utilization, and delay with various acknowledgment schemes under different parameters. Some important observations are obtained which can be very useful to protocol architects. Finally, we identify some important research issues for further investigation of possible improvements to the WPAN MAC.

  13. Remote Memory Access Protocol Target Node Intellectual Property

    NASA Technical Reports Server (NTRS)

    Haddad, Omar

    2013-01-01

    The MagnetoSpheric Multiscale (MMS) mission had a requirement to use the Remote Memory Access Protocol (RMAP) over its SpaceWire network. At the time, no known intellectual property (IP) cores were available for purchase. Additionally, MMS preferred to implement the RMAP functionality with control over the low-level details of the design. For example, not all the RMAP standard functionality was needed, and it was desired to implement only the portions of the RMAP protocol that were needed. RMAP functionality had been previously implemented in commercial off-the-shelf (COTS) products, but the IP core was not available for purchase. The RMAP Target IP core is a VHDL (VHSIC Hardware Description Language) description of a digital logic design, suitable for implementation in an FPGA (field-programmable gate array) or ASIC (application-specific integrated circuit), that parses SpaceWire packets conforming to the RMAP standard. The RMAP packet protocol allows a network host to access and control a target device using address mapping. This capability allows SpaceWire devices to be managed in a standardized way that simplifies the hardware design of the device, as well as the development of the software that controls the device. The RMAP Target IP core has some features that are unique and not specified in the RMAP standard. One such feature is the ability to automatically abort transactions if the back-end logic does not respond to read/write requests within a predefined time. When a request times out, the RMAP Target IP core automatically retracts the request and returns a command response with an appropriate status in the response packet's header. Another such feature is the ability to control the SpaceWire node or router using RMAP transactions in the extended address range. This allows the SpaceWire network host to manage the SpaceWire network elements using RMAP packets, which reduces the number of protocols that the network host needs to support.
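
    A software model of the target's parser clarifies what the IP core does with each incoming packet. The sketch below decodes the leading RMAP command fields, with the field layout taken from the published RMAP standard (ECSS-E-ST-50-52C) as best understood here; the example packet bytes are made up.

```python
# Sketch: parsing the first bytes of an RMAP command the way a target
# node does. Layout per the public RMAP standard; example bytes made up.

def parse_rmap_header(pkt):
    tla, proto, instruction, key = pkt[0], pkt[1], pkt[2], pkt[3]
    if proto != 0x01:
        raise ValueError("not an RMAP packet (protocol id != 0x01)")
    return {
        "target_logical_address": tla,
        "is_command": (instruction >> 6) & 0b11 == 0b01,
        "is_write": bool(instruction & 0x20),
        "verify_data": bool(instruction & 0x10),
        "reply_requested": bool(instruction & 0x08),
        "increment_address": bool(instruction & 0x04),
        "key": key,
    }

# Made-up write command: TLA 0xFE, RMAP, instruction 0b01111100, key 0x20
print(parse_rmap_header(bytes([0xFE, 0x01, 0b01111100, 0x20])))
```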

  14. Scalability enhancement of AODV using local link repairing

    NASA Astrophysics Data System (ADS)

    Jain, Jyoti; Gupta, Roopam; Bandhopadhyay, T. K.

    2014-09-01

    Dynamic change in the topology of an ad hoc network makes it difficult to design an efficient routing protocol. Scalability of an ad hoc network is also one of the important criteria of research in this field. Most research on ad hoc networks focuses on routing and medium access protocols and produces simulation results for limited-size networks. Ad hoc on-demand distance vector (AODV) is one of the best reactive routing protocols. In this article, modified routing protocols based on local link repairing of AODV are proposed. A method of finding alternate routes to the next-to-next node in case of link failure is proposed. These protocols are beacon-less, meaning that the periodic hello message is removed from basic AODV to improve scalability. A few control packet formats have been changed to accommodate the suggested modification. The proposed protocols are simulated to investigate scalability performance and compared with the basic AODV protocol. From the simulation results, it is clear that the scalability performance of the routing protocols improves because of the link repairing method. We have tested the protocols for different terrain areas with approximately constant node densities and different traffic loads.
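
    The local repair idea can be sketched in a few lines: when the link to the next hop breaks, the upstream node looks for a neighbor that still reaches the next-to-next node instead of triggering a full route rediscovery. The topology below is illustrative, not from the article.

```python
# Sketch: local link repair around a broken next hop, in the spirit of
# the modified AODV above. Topology and node names are made up.

neighbors = {                     # node -> set of current neighbors
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def local_repair(route, broken_at):
    """route: node list; broken_at: index of node whose next link failed."""
    upstream = route[broken_at]
    failed, next_next = route[broken_at + 1], route[broken_at + 2]
    for candidate in neighbors[upstream] - {failed}:
        if next_next in neighbors[candidate]:     # one-hop detour found
            return route[:broken_at + 1] + [candidate] + route[broken_at + 2:]
    return None                                   # fall back to rediscovery

route = ["A", "B", "D", "E"]       # link A-B fails; A detours via C to D
print(local_repair(route, 0))      # -> ['A', 'C', 'D', 'E']
```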

  15. Global Precipitation Measurement (GPM) Mission Products and Services at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC)

    NASA Technical Reports Server (NTRS)

    Ostrenga, D.; Liu, Z.; Vollmer, B.; Teng, W.; Kempler, S.

    2014-01-01

    On February 27, 2014, the NASA Global Precipitation Measurement (GPM) mission was launched to provide the next-generation global observations of rain and snow (http://pmm.nasa.gov/GPM). The GPM mission consists of an international network of satellites in which a GPM Core Observatory satellite carries both active and passive microwave instruments to measure precipitation and serve as a reference standard, to unify precipitation measurements from a constellation of other research and operational satellites. The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observation System Data Information System (EOSDIS). The GES DISC is home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 16 years, the GES DISC has served the scientific as well as other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently and to-be available include the following: Level-1 GPM Microwave Imager (GMI) and partner radiometer products; Level-2 Goddard Profiling Algorithm (GPROF) GMI and partner products; Level-3 daily and monthly products; and Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final). A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm). Data services that are currently and to-be available include Google-like Mirador (http://mirador.gsfc.nasa.gov) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data version control and provenance; documentation; science support for proper data usage, FAQ, help desk; monitoring services (e.g. Current Conditions) for applications.
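
    Server-side subsetting through the OPeNDAP service mentioned above can be exercised from Python with the pydap client; only the requested slab crosses the network. The dataset URL and variable name below are hypothetical placeholders, not actual GES DISC paths.

```python
from pydap.client import open_url  # pip install pydap

# Sketch: OPeNDAP server-side subsetting of a gridded precipitation
# product. URL and variable name are hypothetical placeholders.

dataset = open_url("https://disc.example.nasa.gov/opendap/GPM/IMERG/2014/precip.nc4")
precip = dataset["precipitationCal"]        # hypothetical variable name

# Only the requested hyperslab is transferred, not the whole file:
subset = precip[0:1, 100:200, 300:400]
print(subset.shape)
```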

  16. Global Precipitation Measurement (GPM) Mission Products and Services at the NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC)

    NASA Technical Reports Server (NTRS)

    Liu, Zhong; Ostrenga, D.; Vollmer, B.; Deshong, B.; Greene, M.; Teng, W.; Kempler, S. J.

    2015-01-01

    On February 27, 2014, the NASA Global Precipitation Measurement (GPM) mission was launched to provide the next-generation global observations of rain and snow (http://pmm.nasa.gov/GPM). The GPM mission consists of an international network of satellites in which a GPM Core Observatory satellite carries both active and passive microwave instruments to measure precipitation and serve as a reference standard, to unify precipitation measurements from a constellation of other research and operational satellites. The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observation System Data Information System (EOSDIS). The GES DISC is home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 16 years, the GES DISC has served the scientific as well as other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently and to-be available include the following: 1. Level-1 GPM Microwave Imager (GMI) and partner radiometer products. 2. Goddard Profiling Algorithm (GPROF) GMI and partner products. 3. Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final). A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm). Data services that are currently and to-be available include Google-like Mirador (http://mirador.gsfc.nasa.gov) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data version control and provenance; documentation; science support for proper data usage, FAQ, help desk; monitoring services (e.g. Current Conditions) for applications. In this presentation, we will present GPM data products and services with examples.

  17. Daymet: Daily Surface Weather Data on a 1-km Grid for North America, Version 2.

    NASA Astrophysics Data System (ADS)

    Devarakonda, R.

    2014-12-01

    Daymet: Daily Surface Weather Data and Climatological Summaries provides gridded estimates of daily weather parameters for North America, including daily continuous surfaces of minimum and maximum temperature, precipitation occurrence and amount, humidity, shortwave radiation, snow water equivalent, and day length. The current data product (Version 2) covers the period January 1, 1980 to December 31, 2013 [1]. Data are available on a daily time step at a 1-km x 1-km spatial resolution in Lambert Conformal Conic projection with a spatial extent that covers North America as meteorological station density allows. Daymet data can be downloaded from 1) the ORNL Distributed Active Archive Center (DAAC) search and order tools (http://daac.ornl.gov/cgi-bin/cart/add2cart.pl?add=1219) or directly from the DAAC FTP site (http://daac.ornl.gov/cgi-bin/dsviewer.pl?ds_id=1219) and 2) the Single Pixel Tool (http://daymet.ornl.gov/singlepixel.html) and THREDDS (Thematic Real-time Environmental Data Services) Data Server (TDS) (http://daymet.ornl.gov/thredds_mosaics.html). The Single Pixel Data Extraction Tool [2] allows users to enter a single geographic point by latitude and longitude in decimal degrees. A routine is executed that translates the (lon, lat) coordinates into projected Daymet (x,y) coordinates. These coordinates are used to access the Daymet database of daily-interpolated surface weather variables. The Single Pixel Data Extraction Tool also provides the option to download multiple coordinates programmatically. The ORNL DAAC's TDS provides customized visualization and access to Daymet time series of North American mosaics. Users can subset and download Daymet data via a variety of community standards, including OPeNDAP, NetCDF Subset service, and Open Geospatial Consortium (OGC) Web Map/Coverage Service. References: [1] Thornton, P. E., Thornton, M. M., Mayer, B. W., Wilhelmi, N., Wei, Y., Devarakonda, R., & Cook, R. (2012). "Daymet: Daily surface weather on a 1 km grid for North America, 1980-2008". Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center for Biogeochemical Dynamics (DAAC), 1. [2] Devarakonda R., et al. 2012. Daymet: Single Pixel Data Extraction Tool. Available [http://daymet.ornl.gov/singlepixel.html].

  18. Global Precipitation Measurement (GPM) Mission Products and Services at the NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC)

    NASA Astrophysics Data System (ADS)

    Ostrenga, D.; Liu, Z.; Vollmer, B.; Teng, W. L.; Kempler, S. J.

    2014-12-01

    On February 27, 2014, the NASA Global Precipitation Measurement (GPM) mission was launched to provide the next-generation global observations of rain and snow (http://pmm.nasa.gov/GPM). The GPM mission consists of an international network of satellites in which a GPM "Core Observatory" satellite carries both active and passive microwave instruments to measure precipitation and serve as a reference standard, to unify precipitation measurements from a constellation of other research and operational satellites. The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observation System Data Information System (EOSDIS). The GES DISC is home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 16 years, the GES DISC has served the scientific as well as other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently and to-be available include the following: Level-1 GPM Microwave Imager (GMI) and partner radiometer products; Goddard Profiling Algorithm (GPROF) GMI and partner products; and Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final). A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm). Data services that are currently and to-be available include Google-like Mirador (http://mirador.gsfc.nasa.gov/) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data version control and provenance; documentation; science support for proper data usage, FAQ, help desk; monitoring services (e.g. Current Conditions) for applications. In this presentation, we will present GPM data products and services with examples.

  19. The Benefits and Complexities of Operating Geographic Information Systems (GIS) in a High Performance Computing (HPC) Environment

    NASA Astrophysics Data System (ADS)

    Shute, J.; Carriere, L.; Duffy, D.; Hoy, E.; Peters, J.; Shen, Y.; Kirschbaum, D.

    2017-12-01

    The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center is building and maintaining an Enterprise GIS capability for its stakeholders, to include NASA scientists, industry partners, and the public. This platform is powered by three GIS subsystems operating in a highly-available, virtualized environment: 1) the Spatial Analytics Platform is the primary NCCS GIS and provides users discoverability of the vast DigitalGlobe/NGA raster assets within the NCCS environment; 2) the Disaster Mapping Platform provides mapping and analytics services to NASA's Disaster Response Group; and 3) the internal (Advanced Data Analytics Platform/ADAPT) enterprise GIS provides users with the full suite of Esri and open source GIS software applications and services. All systems benefit from NCCS's cutting edge infrastructure, to include an InfiniBand network for high speed data transfers; a mixed/heterogeneous environment featuring seamless sharing of information between Linux and Windows subsystems; and in-depth system monitoring and warning systems. Due to its co-location with the NCCS Discover High Performance Computing (HPC) environment and the Advanced Data Analytics Platform (ADAPT), the GIS platform has direct access to several large NCCS datasets including DigitalGlobe/NGA, Landsat, MERRA, and MERRA2. Additionally, the NCCS ArcGIS Desktop Windows virtual machines utilize existing NetCDF and OPeNDAP assets for visualization, modelling, and analysis - thus eliminating the need for data duplication. With the advent of this platform, Earth scientists have full access to vast data repositories and the industry-leading tools required for successful management and analysis of these multi-petabyte, global datasets. The full system architecture and integration with scientific datasets will be presented. Additionally, key applications and scientific analyses will be explained, to include the NASA Global Landslide Catalog (GLC) Reporter crowdsourcing application, the NASA GLC Viewer discovery and analysis tool, the DigitalGlobe/NGA Data Discovery Tool, the NASA Disaster Response Group Mapping Platform (https://maps.disasters.nasa.gov), and support for NASA's Arctic - Boreal Vulnerability Experiment (ABoVE).

  20. Addressing Data Access Needs of the Long-tail Distribution of Geoscientists

    NASA Astrophysics Data System (ADS)

    Malik, T.; Foster, I.

    2012-12-01

    Geoscientists must increasingly consider data from multiple disciplines and make intelligent connections between the data in order to advance research frontiers in mission-critical problems. As a first step towards making timely and relevant connections, scientists require data and resource access, made available through simple and efficient protocols and web services that allow them to conveniently transmit, acquire, process, and inspect data and metadata. The last decade witnessed some vital data and resource access barriers being crossed. "Big iron" data infrastructures provided geoscientists with large volumes of simulation and observational datasets, protocols made data access convenient, and strong governing bodies ensured standards for interoperability, repeatability and auditability. All this remarkable growth in access, however, addresses the needs of publishers of large data and ignores consumers of that data. To date, limited access mechanisms exist for the consumers, who fetch subsets, analyze them, and, more often than not, generate new data and analysis, which finally get published in scientific articles. In this session, we will highlight the data access needs of the long-tail distribution of geoscientists and state-of-the-art cyber-infrastructure approaches proposed to address those needs. The needs and the state of the art arose from discussions held with geoscientists as part of the EarthCube Data Access Workshop, which was coordinated by the authors. Our presentation will summarize the proceedings of the Data Access Workshop. It will present qualifying characteristics of solutions that will continue to serve the needs of these scientists in the long term. Finally, we will present some cyber-infrastructure efforts in building such solutions and also provide a vision of the future CI in which such solutions can be useful.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veseli, S.

    As the number of sites deploying and adopting EPICS Version 4 grows, so does the need to support PV Access from multiple languages. Especially important are the widely used scripting languages that tend to reduce both software development time and the learning curve for new users. In this paper we describe PvaPy, a Python API for the EPICS PV Access protocol and its accompanying structured data API. Rather than implementing the protocol itself in Python, PvaPy wraps the existing EPICS Version 4 C++ libraries using the Boost.Python framework. This approach allows us to benefit from the existing code base and functionality, and to significantly reduce the Python API development effort. PvaPy objects are based on Python dictionaries and provide users with the ability to access even the most complex of PV Data structures in a relatively straightforward way. Its interfaces are easy to use, and include support for advanced EPICS Version 4 features such as implementation of client and server Remote Procedure Calls (RPC).
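
    As a rough sketch of the dictionary-based access the abstract describes, PvaPy usage might look like the following. The channel name is hypothetical, and method names can differ between PvaPy releases:

```python
# Minimal PvaPy sketch; "demo:counter" is a hypothetical channel name.
import pvaccess

ch = pvaccess.Channel("demo:counter")   # connect to a PV over PV Access
pv = ch.get()                           # returns a PvObject
print(pv.toDict())                      # structured PV data as a plain Python dict
```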

  2. Cooperative Energy Harvesting-Adaptive MAC Protocol for WBANs

    PubMed Central

    Esteves, Volker; Antonopoulos, Angelos; Kartsakli, Elli; Puig-Vidal, Manel; Miribel-Català, Pere; Verikoukis, Christos

    2015-01-01

    In this paper, we introduce a cooperative medium access control (MAC) protocol, named cooperative energy harvesting (CEH)-MAC, that adapts its operation to the energy harvesting (EH) conditions in wireless body area networks (WBANs). In particular, the proposed protocol exploits the EH information in order to set an idle time that allows the relay nodes to charge their batteries and complete the cooperation phase successfully. Extensive simulations have shown that CEH-MAC significantly improves the network performance in terms of throughput, delay and energy efficiency compared to the cooperative operation of the baseline IEEE 802.15.6 standard. PMID:26029950
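
    The core idea, sizing the idle time so that a relay can harvest the energy a cooperative transmission costs, can be sketched with a toy calculation. The numbers and the linear harvesting model below are illustrative assumptions, not taken from the paper:

```python
# Toy illustration (not the CEH-MAC algorithm itself): idle long enough for
# a relay to harvest the energy deficit of one cooperative transmission.
def idle_time_s(e_tx_j, e_stored_j, harvest_w):
    """Seconds of idling before the relay can afford to cooperate."""
    deficit = max(0.0, e_tx_j - e_stored_j)
    return deficit / harvest_w

# 5 mJ per transmission, 1 mJ stored, 0.2 mW harvested -> 20 s of idling
print(idle_time_s(e_tx_j=5e-3, e_stored_j=1e-3, harvest_w=2e-4))
```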

  3. [Constraints on publication rights in industry-initiated clinical trials--secondary publication].

    PubMed

    Gøtzsche, Peter C; Hróbjartsson, Asbjørn; Johansen, Helle Krogh; Haahr, Mette T; Altman, Douglas G; Chan, An-Wen

    2006-06-19

    In 22 of 44 industry-initiated clinical trial protocols from 1994-95, it was noted that the sponsor either owned the data or needed to approve the manuscript; another 18 protocols had other constraints. Furthermore, in 16 trials, the sponsor had access to accumulating data, and in an additional 16 trials the sponsor could stop the trial at any time, for any reason. These facts were not noted in any of the trial reports. We found similar constraints on publication rights in 44 protocols from 2004. This tight sponsor control over industry-initiated trials should be changed.

  4. Incorporating Brokers within Collaboration Environments

    NASA Astrophysics Data System (ADS)

    Rajasekar, A.; Moore, R.; de Torcy, A.

    2013-12-01

    A collaboration environment, such as the integrated Rule Oriented Data System (iRODS - http://irods.diceresearch.org), provides interoperability mechanisms for accessing storage systems, authentication systems, messaging systems, information catalogs, networks, and policy engines from a wide variety of clients. The interoperability mechanisms function as brokers, translating actions requested by clients to the protocol required by a specific technology. The iRODS data grid is used to enable collaborative research within hydrology, seismology, earth science, climate, oceanography, plant biology, astronomy, physics, and genomics disciplines. Although each domain has unique resources, data formats, semantics, and protocols, the iRODS system provides a generic framework that is capable of managing collaborative research initiatives that span multiple disciplines. Each interoperability mechanism (broker) is linked to a name space that enables unified access across the heterogeneous systems. The collaboration environment provides not only support for brokers, but also support for virtualization of name spaces for users, files, collections, storage systems, metadata, and policies. The broker enables access to data or information in a remote system using the appropriate protocol, while the collaboration environment provides a uniform naming convention for accessing and manipulating each object. Within the NSF DataNet Federation Consortium project (http://www.datafed.org), three basic types of interoperability mechanisms have been identified and applied: 1) drivers for managing manipulation at the remote resource (such as data subsetting), 2) micro-services that execute the protocol required by the remote resource, and 3) policies for controlling the execution. For example, drivers have been written for manipulating NetCDF and HDF formatted files within THREDDS servers. Micro-services have been written that manage interactions with the CUAHSI data repository, the DataONE information catalog, and the GeoBrain broker. Policies have been written that manage transfer of messages between an iRODS message queue and the Advanced Message Queuing Protocol. Examples of these brokering mechanisms will be presented. The DFC collaboration environment serves as the intermediary between community resources and compute grids, enabling reproducible data-driven research. It is possible to create an analysis workflow that retrieves data subsets from a remote server, assembles the required input files, automates the execution of the workflow, automatically tracks the provenance of the workflow, and shares the input files, workflow, and output files. A collaborator can re-execute a shared workflow, compare results, change input files, and re-execute an analysis.
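
    For a flavor of what client-side access to such a data grid looks like, here is a minimal sketch using the python-irodsclient package (a community client library; all connection details and paths below are hypothetical):

```python
# Hypothetical host, credentials, and path; requires python-irodsclient.
from irods.session import iRODSSession

with iRODSSession(host="irods.example.org", port=1247,
                  user="alice", password="secret", zone="tempZone") as session:
    obj = session.data_objects.get("/tempZone/home/alice/ocean_subset.nc")
    with obj.open("r") as f:
        magic = f.read(4)              # first bytes of the managed object
    print(obj.name, obj.size, magic)
```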

  5. Using Virtual Observatory Services in Sky View

    NASA Technical Reports Server (NTRS)

    McGlynn, Thomas A.

    2007-01-01

    For over a decade SkyView has provided astronomers and the public with easy access to survey and imaging data from all wavelength regimes. SkyView has pioneered many of the concepts that underlie the Virtual Observatory. Recently SkyView has been released as a distributable package which uses VO protocols to access image and catalog services. This chapter describes how to use SkyView as a local service and how to customize it to access additional VO services and local data.

  6. Experimental realization of an entanglement access network and secure multi-party computation

    NASA Astrophysics Data System (ADS)

    Chang, X.-Y.; Deng, D.-L.; Yuan, X.-X.; Hou, P.-Y.; Huang, Y.-Y.; Duan, L.-M.

    2016-07-01

    To construct a quantum network with many end users, it is critical to have a cost-efficient way to distribute entanglement over different network ends. We demonstrate an entanglement access network, where the expensive resource, the entangled photon source at the telecom wavelength and the core communication channel, is shared by many end users. Using this cost-efficient entanglement access network, we report experimental demonstration of a secure multiparty computation protocol, the privacy-preserving secure sum problem, based on the network quantum cryptography.

  7. Experimental realization of an entanglement access network and secure multi-party computation

    NASA Astrophysics Data System (ADS)

    Chang, Xiuying; Deng, Donglin; Yuan, Xinxing; Hou, Panyu; Huang, Yuanyuan; Duan, Luming; Department of Physics, University of Michigan Collaboration; Center for Quantum Information, Tsinghua University Team

    2017-04-01

    To construct a quantum network with many end users, it is critical to have a cost-efficient way to distribute entanglement over different network ends. We demonstrate an entanglement access network, where the expensive resource, the entangled photon source at the telecom wavelength and the core communication channel, is shared by many end users. Using this cost-efficient entanglement access network, we report experimental demonstration of a secure multiparty computation protocol, the privacy-preserving secure sum problem, based on the network quantum cryptography.

  8. Can SNMP be Used to Create a Silent SS in an 802.16 Implementation

    DTIC Science & Technology

    2008-09-01

    wireless transmissions by using the Simple Network Management Protocol (SNMP). SNMP is a networking protocol that can be used by network ...802.16 as a unique networking technology. In a more familiar wireless networking environment like Wi-Fi, there is no central scheduler for access to...much a concern due to the scheduling algorithm, this power saving method provides good transmission security as a

  9. Deployment of 802.15.4 Sensor Networks for C4ISR Operations

    DTIC Science & Technology

    2006-06-01

  10. Fingerprinting Reverse Proxies Using Timing Analysis of TCP Flows

    DTIC Science & Technology

    2013-09-01

    ...This hidden traffic concept supports network access control, security protection through obfuscation, and performance boosts at the Internet facing...

  11. Non-Orthogonal Random Access in MIMO Cognitive Radio Networks: Beamforming, Power Allocation, and Opportunistic Transmission

    PubMed Central

    Lin, Huifa; Shin, Won-Yong

    2017-01-01

    We study secondary random access in multi-input multi-output cognitive radio networks, where a slotted ALOHA-type protocol and successive interference cancellation are used. We first introduce three types of transmit beamforming performed by secondary users, where multiple antennas are used to suppress the interference at the primary base station and/or to increase the received signal power at the secondary base station. Then, we show a simple decentralized power allocation along with the equivalent single-antenna conversion. To exploit the multiuser diversity gain, an opportunistic transmission protocol is proposed, where the secondary users generating less interference are opportunistically selected, resulting in a further reduction of the interference temperature. The proposed methods are validated via computer simulations. Numerical results show that increasing the number of transmit antennas can greatly reduce the interference temperature, while increasing the number of receive antennas leads to a reduction of the total transmit power. Optimal parameter values of the opportunistic transmission protocol are examined according to three types of beamforming and different antenna configurations, in terms of maximizing the cognitive transmission capacity. All the beamforming, decentralized power allocation, and opportunistic transmission protocol are performed by the secondary users in a decentralized manner, thus resulting in an easy implementation in practice. PMID:28076402
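
    As background for the random-access component, the throughput behavior of plain slotted ALOHA can be reproduced with a toy Monte Carlo simulation. Note this models neither the paper's beamforming nor its successive interference cancellation; a slot succeeds only when exactly one user transmits:

```python
# Toy slotted ALOHA simulation: no MIMO or SIC, unlike the paper's setup.
import random

def aloha_throughput(n_users, p_tx, n_slots=100_000, seed=1):
    rng = random.Random(seed)
    successes = sum(
        1 for _ in range(n_slots)
        if sum(rng.random() < p_tx for _ in range(n_users)) == 1
    )
    return successes / n_slots       # approaches n * p * (1 - p)**(n - 1)

for p in (0.05, 0.10, 0.20):
    print(p, round(aloha_throughput(n_users=10, p_tx=p), 3))
```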

  12. Enhanced protocol for real-time transmission of echocardiograms over wireless channels.

    PubMed

    Cavero, Eva; Alesanco, Alvaro; García, Jose

    2012-11-01

    This paper presents a methodology to transmit clinical video over wireless networks in real time. A 3-D set partitioning in hierarchical trees compression prior to transmission is proposed. In order to guarantee the clinical quality of the compressed video, a clinical evaluation specific to each video modality has to be made. This evaluation indicates the minimal transmission rate necessary for an accurate diagnosis. However, the channel conditions produce errors and distort the video. A reliable application protocol is therefore proposed, using a hybrid solution in which either retransmission or retransmission combined with forward error correction (FEC) techniques are used, depending on the channel conditions. In order to analyze the proposed methodology, the 2-D mode of an echocardiogram has been assessed. A bandwidth of 200 kbps is necessary to guarantee its clinical quality. Transmission using the proposed solution, and using retransmission and FEC techniques working separately, has been simulated and compared in high-speed uplink packet access (HSUPA) and worldwide interoperability for microwave access (WiMAX) networks. The proposed protocol guarantees clinical quality at higher bit error rates than the other protocols: at a mobile speed of 60 km/h, the tolerable bit error rate is up to 3.3 times higher for HSUPA and 10 times higher for WiMAX.

  13. Interoperative fundus image and report sharing in compliance with integrating the healthcare enterprise conformance and web access to digital imaging and communication in medicine persistent object protocol.

    PubMed

    Wu, Hui-Qun; Lv, Zheng-Min; Geng, Xing-Yun; Jiang, Kui; Tang, Le-Min; Zhou, Guo-Min; Dong, Jian-Cheng

    2013-01-01

    To address issues in interoperability between different fundus image systems, we proposed a web eye-picture archiving and communication system (PACS) framework in conformance with the digital imaging and communication in medicine (DICOM) and health level 7 (HL7) protocols to realize fundus image and report sharing and communication over the internet. Firstly, a telemedicine-based eye care workflow was established based on the Integrating the Healthcare Enterprise (IHE) Eye Care technical framework. Then, a three-tier, browser/server architecture eye-PACS system was established in conformance with the web access to DICOM persistent object (WADO) protocol. From any client system with a web browser, clinicians can log in to the eye-PACS to observe fundus images and reports. A structured report saved as PDF/HTML (its MIME type), with a reference link to the relevant fundus image using the WADO syntax, provides enough information for clinicians. Functions provided by the open-source Oviyam viewer can be used to query, zoom, move, measure, and view DICOM fundus images. Such a web eye-PACS in compliance with the WADO protocol can store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology.
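
    A WADO-URI retrieval of this kind boils down to an HTTP GET with standardized query parameters; a hedged sketch follows (the server URL and all UIDs are placeholders):

```python
# Sketch of a WADO-URI request; server URL and UIDs are placeholders.
import requests

params = {
    "requestType": "WADO",               # fixed value in the WADO-URI scheme
    "studyUID": "1.2.840.0.1",           # placeholder study instance UID
    "seriesUID": "1.2.840.0.1.1",        # placeholder series instance UID
    "objectUID": "1.2.840.0.1.1.1",      # placeholder SOP instance UID
    "contentType": "application/dicom",  # request the raw DICOM object
}
r = requests.get("https://pacs.example.org/wado", params=params, timeout=30)
r.raise_for_status()
with open("fundus.dcm", "wb") as f:
    f.write(r.content)
```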

  14. Multiple-access relaying with network coding: iterative network/channel decoding with imperfect CSI

    NASA Astrophysics Data System (ADS)

    Vu, Xuan-Thang; Renzo, Marco Di; Duhamel, Pierre

    2013-12-01

    In this paper, we study the performance of the four-node multiple-access relay channel with binary Network Coding (NC) in various Rayleigh fading scenarios. In particular, two relay protocols, decode-and-forward (DF) and demodulate-and-forward (DMF), are considered. In the first case, channel decoding is performed at the relay before NC and forwarding. In the second case, only demodulation is performed at the relay. The contributions of the paper are as follows: (1) two joint network/channel decoding (JNCD) algorithms, which take into account possible decoding errors at the relay, are developed for both DF and DMF relay protocols; (2) both perfect channel state information (CSI) and imperfect CSI at the receivers are studied. In addition, we propose a practical method to forward the relay's error characterization to the destination (quantization of the BER), which results in a fully practical scheme. (3) We show by simulation that the number of pilot symbols only affects the coding gain but not the diversity order, and that the quantization accuracy affects both coding gain and diversity order. Moreover, when compared with recent results using the DMF protocol, our proposed DF protocol algorithm shows an improvement of 4 dB in fully interleaved Rayleigh fading channels and 0.7 dB in block Rayleigh fading channels.

  15. IDMA-Based MAC Protocol for Satellite Networks with Consideration on Channel Quality

    PubMed Central

    2014-01-01

    In order to overcome the shortcomings of existing medium access control (MAC) protocols based on TDMA or CDMA in satellite networks, the interleave division multiple access (IDMA) technique is introduced into satellite communication networks. A novel wide-band IDMA MAC protocol based on channel quality is therefore proposed in this paper, consisting of a dynamic power allocation algorithm, a rate adaptation algorithm, and a call admission control (CAC) scheme. Firstly, the power allocation algorithm, combining IDMA SINR-evolution and channel quality prediction, is developed to guarantee high power efficiency even in terrible channel conditions. Secondly, an effective rate adaptation algorithm is realized, based on accurate per-timeslot channel information and rate degradation. Moreover, based on channel quality prediction, a CAC scheme combining the new power allocation algorithm, rate scheduling, and buffering strategies is proposed for the emerging IDMA systems; it can support a variety of traffic types and offers quality of service (QoS) guarantees corresponding to different priority levels. Simulation results show that the new wide-band IDMA MAC protocol can accurately estimate available resources, considering the effect of multiuser detection (MUD) and the QoS requirements of multimedia traffic, leading to low outage probability as well as high overall system throughput. PMID:25126592

  16. Novel Application of a Reverse Triage Protocol Providing Increased Access to Care in an Outpatient, Primary Care Clinic Setting.

    PubMed

    Sacino, Amanda N; Shuster, Jonathan J; Nowicki, Kamil; Carek, Peter J; Wegman, Martin P; Listhaus, Alyson; Gibney, Joseph M; Chang, Ku-Lang

    2016-02-01

    As the number of patients with access to care increases, outpatient clinics will need to implement innovative strategies to maintain or enhance clinic efficiency. One viable alternative involves reverse triage. A reverse triage protocol was implemented during a student-run free clinic. Each patient's chief complaint(s) were obtained at the beginning of the clinic session and ranked by increasing complexity. "Complexity" was defined as the subjective amount of time required to provide a full, thorough evaluation of a patient. Less complex cases were prioritized first since they could be expedited through clinic processing and allow for more time and resources to be dedicated to complex cases. Descriptive statistics were used to characterize and summarize the data obtained. Categorical variables were analyzed using chi-square. A time series analysis of the outcome versus centered time in weeks was also conducted. The average number of patients seen per clinic session increased by 35% (9.5 versus 12.8) from pre-implementation of the reverse triage protocol to 6 months after the implementation of the protocol. The implementation of a reverse triage in an outpatient setting significantly increased clinic efficiency as noted by a significant increase in the number of patients seen during a clinic session.
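
    Operationally, the reverse-triage policy amounts to shortest-job-first scheduling on estimated complexity, as in this schematic sketch (patients and time estimates are invented):

```python
# Schematic only: serve the least complex (shortest) cases first.
patients = [
    {"name": "A", "complaint": "medication refill",    "est_minutes": 10},
    {"name": "B", "complaint": "chronic back pain",    "est_minutes": 45},
    {"name": "C", "complaint": "blood pressure check", "est_minutes": 15},
]
for p in sorted(patients, key=lambda p: p["est_minutes"]):
    print(p["name"], p["complaint"], p["est_minutes"])
```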

  17. Non-Orthogonal Random Access in MIMO Cognitive Radio Networks: Beamforming, Power Allocation, and Opportunistic Transmission.

    PubMed

    Lin, Huifa; Shin, Won-Yong

    2017-01-01

    We study secondary random access in multi-input multi-output cognitive radio networks, where a slotted ALOHA-type protocol and successive interference cancellation are used. We first introduce three types of transmit beamforming performed by secondary users, where multiple antennas are used to suppress the interference at the primary base station and/or to increase the received signal power at the secondary base station. Then, we show a simple decentralized power allocation along with the equivalent single-antenna conversion. To exploit the multiuser diversity gain, an opportunistic transmission protocol is proposed, where the secondary users generating less interference are opportunistically selected, resulting in a further reduction of the interference temperature. The proposed methods are validated via computer simulations. Numerical results show that increasing the number of transmit antennas can greatly reduce the interference temperature, while increasing the number of receive antennas leads to a reduction of the total transmit power. Optimal parameter values of the opportunistic transmission protocol are examined according to three types of beamforming and different antenna configurations, in terms of maximizing the cognitive transmission capacity. All the beamforming, decentralized power allocation, and opportunistic transmission protocol are performed by the secondary users in a decentralized manner, thus resulting in an easy implementation in practice.

  18. Outcomes from a university-based low-cost in vitro fertilization program providing access to care for a low-resource socioculturally diverse urban community.

    PubMed

    Herndon, Christopher N; Anaya, Yanett; Noel, Martha; Cakmak, Hakan; Cedars, Marcelle I

    2017-10-01

    To report on outcomes from a university-based low-cost and low-complexity IVF program using mild stimulation approaches and simplified protocols to provide basic access to ART to a socioculturally diverse low-income urban population. Retrospective cohort study. Academic infertility center. Sixty-five infertile couples were enrolled from a county hospital serving a low-resource largely immigrant population. Patients were nonrandomly allocated to one of four mild stimulation protocols: clomiphene/letrozole alone, two clomiphene/letrozole-based protocols involving sequential or flare addition of low-dose gonadotropins, and low-dose gonadotropins alone. Clinical fellows managed all aspects of cycle preparation, monitoring, oocyte retrieval, and embryo transfer under an attending preceptor. Retrieval was undertaken without administration of deep anesthesia, and laboratory interventions were minimized. All embryo transfers were performed at the cleavage stage. Sociomedical demographics, treatment response, and pregnancy outcomes were recorded. From August 2010 to June 2016, 65 patients initiated 161 stimulation IVF cycles, which resulted in 107 retrievals, 91 fresh embryo transfers, and 40 frozen embryo transfer cycles. The mean age of patients was 33.3 years, and mean reported duration of infertility was 5.3 years; 33.5% (54/161) of cycles were cancelled before oocyte retrieval, with 13% due to premature ovulation. Overall, the cumulative live birth rate per retrieval, including subsequent use of frozen embryos, was 29.0%; 44.6% (29/65) of patients enrolled in the program achieved pregnancy. Use of mild stimulation protocols, simplified monitoring, and minimized laboratory handling procedures enabled access to care in a low-resource socioculturally diverse infertile population. Copyright © 2017. Published by Elsevier Inc.

  19. The Rockefeller University Navigation Program: A Structured Multidisciplinary Protocol Development and Educational Program to Advance Translational Research

    PubMed Central

    Kost, Rhonda G.; Dowd, Kathleen A.; Hurley, Arlene M.; Rainer, Tyler‐Lauren; Coller, Barry S.

    2014-01-01

    The development of translational clinical research protocols is complex. To assist investigators, we developed a structured supportive guidance process (Navigation) to expedite protocol development to the standards of good clinical practice (GCP), focusing on research ethics and integrity. Navigation consists of experienced research coordinators leading investigators through a concerted multistep protocol development process from concept initiation to submission of the final protocol. To assess the effectiveness of Navigation, we collect data on the experience of investigators, the intensity of support required for protocol development, IRB review outcomes, and protocol start and completion dates. One hundred forty‐four protocols underwent Navigation and achieved IRB approval since the program began in 2007, including 37 led by trainee investigators, 26 led by MDs, 9 by MD/PhDs, 57 by PhDs, and 12 by investigators with other credentials (e.g., RN, MPH). In every year, more than 50% of Navigated protocols were approved by the IRB within 30 days. For trainees who had more than one protocol navigated, the intensity of Navigation support required decreased over time. Navigation can increase access to translational studies for basic scientists, facilitate GCP training for investigators, and accelerate development and approval of protocols of high ethical and scientific quality. PMID:24405608

  20. A Pilot and Feasibility Study of Virtual Reality as a Distraction for Children with Cancer

    ERIC Educational Resources Information Center

    Gershon, Jonathan; Zimand, Elana; Pickering, Melissa; Rothbaum, Barbara Olasov; Hodges, Larry

    2004-01-01

    Objective: To pilot and test the feasibility of a novel technology to reduce anxiety and pain associated with an invasive medical procedure in children with cancer. Method: Children with cancer (ages 7-19) whose treatment protocols required access of their subcutaneous venous port device (port access) were randomly assigned to a virtual reality…

  1. Implementation of an Intensive Treatment Protocol for Adolescents with Panic Disorder and Agoraphobia

    ERIC Educational Resources Information Center

    Angelosante, Aleta G.; Pincus, Donna B.; Whitton, Sarah W.; Cheron, Daniel; Pian, Jessica

    2009-01-01

    New and innovative ways of implementing cognitive-behavioral therapy (CBT) are required to address the varied needs of youth with anxiety disorders. Brief treatment formats may be useful in assisting teens to return to healthy functioning quickly and can make treatment more accessible for those who may not have local access to providers of CBT.…

  2. Privacy & Security Notice | Argonne National Laboratory

    Science.gov Websites

    server logs: The Internet Protocol (IP) address of the domain from which you access the Internet (i.e., 123.456.789.012), whether yours individually or provided as a proxy by your Internet Service Provider (ISP)...

  3. Interactive browsing of 3D environment over the Internet

    NASA Astrophysics Data System (ADS)

    Zhang, Cha; Li, Jin

    2000-12-01

    In this paper, we describe a system for wandering in a realistic environment over the Internet. The environment is captured by the concentric mosaic, compressed via the reference block coder (RBC), and accessed and delivered over the Internet through the virtual media (Vmedia) access protocol. Capturing the environment through the concentric mosaic is easy. We mount a camera at the end of a level beam, and shoot images as the beam rotates. The huge dataset of the concentric mosaic is then compressed through the RBC, which is specifically designed for both high compression efficiency and just-in-time (JIT) rendering. Through the JIT rendering function, only a portion of the RBC bitstream is accessed, decoded and rendered for each virtual view. A multimedia communication protocol, the Vmedia protocol, is then proposed to deliver the compressed concentric mosaic data over the Internet. Only the bitstream segments corresponding to the current view are streamed over the Internet. Moreover, the delivered bitstream segments are managed by a local Vmedia cache so that frequently used bitstream segments need not be streamed over the Internet repeatedly, and Vmedia can handle an RBC bitstream larger than its memory capacity. A Vmedia concentric mosaic interactive browser is developed in which the user can freely wander in a realistic environment, e.g., rotate around, walk forward/backward and sidestep, even under a tight bandwidth of 33.6 kbps.
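
    The Vmedia idea of streaming and caching only the bitstream segments needed for the current view can be approximated, in spirit, with ordinary HTTP range requests. The URL below is a placeholder, and this is not the actual Vmedia protocol:

```python
# Generic segment-fetch-and-cache sketch, not the actual Vmedia protocol.
import requests

URL = "https://example.org/scene.rbc"      # placeholder bitstream location
_cache = {}                                # local cache, as Vmedia maintains

def get_segment(offset, length):
    """Fetch one bitstream segment, reusing the cache when possible."""
    key = (offset, length)
    if key not in _cache:
        hdr = {"Range": f"bytes={offset}-{offset + length - 1}"}
        _cache[key] = requests.get(URL, headers=hdr, timeout=30).content
    return _cache[key]
```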

  4. Abiraterone acetate for patients with metastatic castration-resistant prostate cancer progressing after chemotherapy: final analysis of a multicentre, open-label, early-access protocol trial.

    PubMed

    Sternberg, Cora N; Castellano, Daniel; Daugaard, Gedske; Géczi, Lajos; Hotte, Sebastien J; Mainwaring, Paul N; Saad, Fred; Souza, Ciro; Tay, Miah H; Garrido, José M Tello; Galli, Luca; Londhe, Anil; De Porre, Peter; Goon, Betty; Lee, Emma; McGowan, Tracy; Naini, Vahid; Todd, Mary B; Molina, Arturo; George, Daniel J

    2014-10-01

    In the final analysis of the phase 3 COU-AA-301 study, abiraterone acetate plus prednisone significantly prolonged overall survival compared with prednisone alone in patients with metastatic castration-resistant prostate cancer progressing after chemotherapy. Here, we present the final analysis of an early-access protocol trial that was initiated after completion of COU-AA-301 to enable worldwide preapproval access to abiraterone acetate in patients with metastatic castration-resistant prostate cancer progressing after chemotherapy. We did a multicentre, open-label, early-access protocol trial in 23 countries. We enrolled patients who had metastatic castration-resistant prostate cancer progressing after taxane chemotherapy. Participants received oral doses of abiraterone acetate (1000 mg daily) and prednisone (5 mg twice a day) in 28-day cycles until disease progression, development of sustained side-effects, or abiraterone acetate becoming available in the respective country. The primary outcome was the number of adverse events arising during study treatment and within 30 days of discontinuation. Efficacy measures (time to prostate-specific antigen [PSA] progression and time to clinical progression) were gathered to guide treatment decisions. We included in our analysis all patients who received at least one dose of abiraterone acetate. This study is registered with ClinicalTrials.gov, number NCT01217697. Between Nov 17, 2010, and Sept 30, 2013, 2314 patients were enrolled into the early-access protocol trial. Median follow-up was 5·7 months (IQR 3·5-10·6). 952 (41%) patients had a grade 3 or 4 treatment-related adverse event, and grade 3 or 4 serious adverse events were recorded in 585 (25%) people. The most common grade 3 and 4 adverse events were hepatotoxicity (188 [8%]), hypertension (99 [4%]), cardiac disorders (52 [2%]), osteoporosis (31 [1%]), hypokalaemia (28 [1%]), and fluid retention or oedema (23 [1%]). 172 (7%) patients discontinued the study because of adverse events (64 [3%] were drug-related), as assessed by the investigator, and 171 (7%) people died. The funder assessed causes of death, which were due to disease progression (85 [4%]), an unrelated adverse experience (72 [3%]), and unknown reasons (14 [1%]). Of the 86 deaths not attributable to disease progression, 18 (<1%) were caused by a drug-related adverse event, as assessed by the investigator. Median time to PSA progression was 8·5 months (95% CI 8·3-9·7) and median time to clinical progression was 12·7 months (11·8-13·8). No new safety signals or unexpected adverse events were found in this early-access protocol trial to assess abiraterone acetate for patients with metastatic castration-resistant prostate cancer who progressed after chemotherapy. Future work is needed to ascertain the most effective regimen of abiraterone acetate to optimise patients' outcomes. Janssen Research & Development. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Reporting on blinding in trial protocols and corresponding publications was often inadequate but rarely contradictory.

    PubMed

    Hróbjartsson, Asbjørn; Pildal, Julie; Chan, An-Wen; Haahr, Mette T; Altman, Douglas G; Gøtzsche, Peter C

    2009-09-01

    To compare the reporting on blinding in protocols and articles describing randomized controlled trials. We studied 73 protocols of trials approved by the scientific/ethical committees for Copenhagen and Frederiksberg, 1994 and 1995, and their corresponding publications. Three out of 73 trials (4%) reported blinding in the protocol that contradicted that in the publication (e.g., "open" vs. "double blind"). The proportion of "double-blind" trials with a clear description of the blinding of participants increased from 11 out of 58 (19%) when based on publications alone to 39 (67%) when adding the information in the protocol. The corresponding proportions for the blinding of health care providers were 2 (3%) and 22 (38%); and for the blinding of data collectors, they were 8 (14%) and 14 (24%). In 52 of 58 publications (90%), it was unclear whether all patients, health care providers, and data collectors had been blinded. In 4 of the 52 trials (7%), the protocols clarified that all three key trial persons had been blinded. The reporting on blinding in both trial protocols and publications is often inadequate. We suggest developing international guidelines for the reporting of trial protocols and public access to protocols.

  6. Autoplot: a Browser for Science Data on the Web

    NASA Astrophysics Data System (ADS)

    Faden, J.; Weigel, R. S.; West, E. E.; Merka, J.

    2008-12-01

    Autoplot (www.autoplot.org) is software for plotting data from many different sources and in many different file formats. Data from CDF, CEF, FITS, NetCDF, and OPeNDAP sources can be plotted, along with many other sources such as ASCII tables and Excel spreadsheets. This is done by adapting these various data formats and APIs into a common data model that borrows from the netCDF and CDF data models. Autoplot uses a web browser metaphor to simplify use. The user specifies a parameter URL, for example a CDF file accessible via http with a parameter name appended, and the file resource is downloaded and the parameter is rendered in a scientifically meaningful way. When data span multiple files, the user can use a file name template in the URL to aggregate (combine) a set of remote files. So the problem of aggregating data across file boundaries is handled on the client side, allowing simple web servers to be used. The das2 graphics library provides rich controls for exploring the data. Scripting is supported through Python, providing not just programmatic control but also a way to calculate new parameters in a language that will look familiar to IDL and Matlab users. Autoplot is Java-based software, and will run on most computers without a burdensome installation process. It can also be used as an applet or as a servlet that serves static images. Autoplot was developed as part of the Virtual Radiation Belt Observatory (ViRBO) project, and is also being used for the Virtual Magnetospheric Observatory (VMO). It is expected that this flexible, general-purpose plotting tool will be useful for allowing a data provider to add instant visualization capabilities to a directory of files or for general use in the Virtual Observatory environment.

  7. A Functional Approach to Hyperspectral Image Analysis in the Cloud

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D. M.; Coddington, O.; Pilewskie, P.

    2017-12-01

    Hyperspectral image volumes are very large. A hyperspectral image analysis (HIA) may use 100 TB of data, a huge barrier to their use. Hylatis is a new NASA project to create a toolset for HIA. Through web notebook and cloud technology, Hylatis will provide a more interactive experience for HIA by defining and implementing concepts and operations for HIA, identified and vetted by subject matter experts, and callable within a general purpose language, particularly Python. Hylatis leverages LaTiS, a data access framework developed at LASP. With an OPeNDAP-compliant interface plus additional server-side capabilities, the LaTiS API provides a uniform interface to virtually any data source, and has been applied to various storage systems (file systems, databases, remote servers) and in various domains, including space science, systems administration, and stock quotes. In the LaTiS architecture, data `adapters' read data into a data model, where server-side computations occur. Data `writers' write data from the data model into the desired format. The Hylatis difference is the data model. In LaTiS, data are represented as mathematical functions of independent and dependent variables. Domain semantics are not present at this level, but are instead present in higher software layers. The benefit of a domain-agnostic, mathematical representation is having the power of math, particularly functional algebra, unconstrained by domain semantics. This agnosticism supports reusable server-side functionality applicable in any domain, such as statistical, filtering, or projection operations. Algorithms to aggregate or fuse data can be simpler because domain semantics are separated from the math. Hylatis will map the functional model onto the Spark relational interface, thereby adding a functional interface to that big data engine. This presentation will discuss Hylatis goals, strategies, and current state.
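
    The OPeNDAP-compliant interface means a standard DAP client can request server-side subsets; a minimal sketch with pydap might look like this (the endpoint and variable names are placeholders, and exact slicing behavior varies across pydap versions):

```python
# Placeholder endpoint/variable; assumes the pydap client package.
from pydap.client import open_url

ds = open_url("https://example.edu/opendap/hyperspectral_cube")
radiance = ds["radiance"]         # no pixel data transferred yet
tile = radiance[0, 0:64, 0:64]    # constrained request: one 64x64 tile
print(tile.shape)
```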

  8. Semantics in NETMAR (open service NETwork for MARine environmental data)

    NASA Astrophysics Data System (ADS)

    Leadbetter, Adam; Lowry, Roy; Clements, Oliver

    2010-05-01

    Over recent years, there has been a proliferation of environmental data portals utilising a wide range of systems and services, many of which cannot interoperate. The European Union Framework 7 project NETMAR (which commenced in February 2010) aims to provide a toolkit for building such portals in a coherent manner through the use of chained Open Geospatial Consortium Web Services (WxS), OPeNDAP file access and W3C standards controlled by a Business Process Execution Language workflow. As such, the end product will be configurable by user communities interested in developing a portal for marine environmental data, and will offer search, download and integration tools for a range of satellite, model and observed data from open ocean and coastal areas. Further processing of these data will also be available in order to provide statistics and derived products suitable for decision making in the chosen environmental domain. In order to make the resulting portals truly interoperable, the NETMAR programme requires a detailed definition of the semantics of the services being called and the data which are being requested. A key goal of the NETMAR programme is, therefore, to develop a multi-domain and multilingual ontology of marine data and services. This will allow searches across both human languages and across scientific domains. The approach taken will be to analyse existing semantic resources and provide mappings between them, gluing together the definitions, semantics and workflows of the WxS services. The mappings between terms aim to be more general than the standard "narrower than", "broader than" type seen in the thesauri or simple ontologies implemented by previous programmes. Tools for the development and population of ontologies will also be provided by NETMAR, as there will be instances in which existing resources cannot sufficiently describe newly encountered data or services.

  9. A survey of system architecture requirements for health care-based wireless sensor networks.

    PubMed

    Egbogah, Emeka E; Fapojuwo, Abraham O

    2011-01-01

    Wireless Sensor Networks (WSNs) have emerged as a viable technology for a vast number of applications, including health care applications. To best support these health care applications, WSN technology can be adopted for the design of practical Health Care WSNs (HCWSNs) that support the key system architecture requirements of reliable communication, node mobility support, multicast technology, energy efficiency, and the timely delivery of data. Work in the literature mostly focuses on the physical design of HCWSNs (e.g., wearable sensors, in vivo embedded sensors, et cetera). However, work towards enhancing the communication layers (i.e., routing, medium access control, et cetera) to improve HCWSN performance is largely lacking. In this paper, the information gleaned from an extensive literature survey is shared in an effort to fortify the knowledge base for the communication aspect of HCWSNs. We highlight the major currently existing prototype HCWSNs and also provide the details of their routing protocol characteristics. We also explore the current state of the art in medium access control (MAC) protocols for WSNs, for the purpose of seeking an energy efficient solution that is robust to mobility and delivers data in a timely fashion. Furthermore, we review a number of reliable transport layer protocols, including a network coding based protocol from the literature, that are potentially suitable for delivering end-to-end reliability of data transmitted in HCWSNs. We identify the advantages and disadvantages of the reviewed MAC, routing, and transport layer protocols as they pertain to the design and implementation of a HCWSN. The findings from this literature survey will serve as a useful foundation for designing a reliable HCWSN and also contribute to the development and evaluation of protocols for improving the performance of future HCWSNs. Open issues that require further investigation are highlighted.

  10. Optimizing the MAC Protocol in Localization Systems Based on IEEE 802.15.4 Networks

    PubMed Central

    Claver, Jose M.; Ezpeleta, Santiago

    2017-01-01

    Radio frequency signals are commonly used in the development of indoor localization systems. The infrastructure of these systems includes some beacons placed at known positions that exchange radio packets with users to be located. When the system is implemented using wireless sensor networks, the wireless transceivers integrated in the network motes are usually based on the IEEE 802.15.4 standard. However, CSMA-CA, which is the basis for the medium access protocols in this category of communication systems, is not suitable when several users want to exchange bursts of radio packets with the same beacon to acquire the radio signal strength indicator (RSSI) values needed in the location process. Therefore, new protocols are necessary to avoid the packet collisions that appear when multiple users try to communicate with the same beacons. On the other hand, the RSSI sampling process should be carried out very quickly because some systems cannot tolerate a large delay in the location process. This is even more important when the RSSI sampling process includes measurements at different signal power levels or frequency channels. The principal objective of this work is to speed up the RSSI sampling process in indoor localization systems. To achieve this objective, the main contribution is the proposal of a new MAC protocol that eliminates the medium access contention periods and decreases the number of packet collisions to accelerate the RSSI collection process. Moreover, the protocol increases the overall network throughput by taking advantage of the frequency channel diversity. The presented results show the suitability of this protocol for reducing the RSSI gathering delay and increasing the network throughput in simulated and real environments. PMID:28684666

  11. Optimizing the MAC Protocol in Localization Systems Based on IEEE 802.15.4 Networks.

    PubMed

    Pérez-Solano, Juan J; Claver, Jose M; Ezpeleta, Santiago

    2017-07-06

    Radio frequency signals are commonly used in the development of indoor localization systems. The infrastructure of these systems includes some beacons placed at known positions that exchange radio packets with users to be located. When the system is implemented using wireless sensor networks, the wireless transceivers integrated in the network motes are usually based on the IEEE 802.15.4 standard. However, CSMA-CA, which is the basis for the medium access protocols in this category of communication systems, is not suitable when several users want to exchange bursts of radio packets with the same beacon to acquire the radio signal strength indicator (RSSI) values needed in the location process. Therefore, new protocols are necessary to avoid the packet collisions that appear when multiple users try to communicate with the same beacons. On the other hand, the RSSI sampling process should be carried out very quickly because some systems cannot tolerate a large delay in the location process. This is even more important when the RSSI sampling process includes measurements at different signal power levels or frequency channels. The principal objective of this work is to speed up the RSSI sampling process in indoor localization systems. To achieve this objective, the main contribution is the proposal of a new MAC protocol that eliminates the medium access contention periods and decreases the number of packet collisions to accelerate the RSSI collection process. Moreover, the protocol increases the overall network throughput by taking advantage of the frequency channel diversity. The presented results show the suitability of this protocol for reducing the RSSI gathering delay and increasing the network throughput in simulated and real environments.
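
    For context on why fast RSSI bursts matter, indoor localization commonly inverts a log-distance path-loss model to turn averaged RSSI into a range estimate; a toy version follows (the constants are illustrative, not from the paper):

```python
# Toy log-distance path-loss inversion; constants are illustrative only.
def distance_from_rssi(rssi_dbm, p0_dbm=-40.0, n=2.5, d0_m=1.0):
    """Estimate beacon distance from an averaged RSSI sample."""
    return d0_m * 10 ** ((p0_dbm - rssi_dbm) / (10 * n))

for rssi in (-40, -55, -70):
    print(rssi, "dBm ->", round(distance_from_rssi(rssi), 2), "m")
```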

  12. 47 CFR 54.503 - Other supported special services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... telecommunications carriers include voice mail, interconnected voice over Internet protocol (VoIP), text messaging, Internet access, and installation and maintenance of internal connections in addition to all reasonable...

  13. The OAuth 2.0 Web Authorization Protocol for the Internet Addiction Bioinformatics (IABio) Database.

    PubMed

    Choi, Jeongseok; Kim, Jaekwon; Lee, Dong Kyun; Jang, Kwang Soo; Kim, Dai-Jin; Choi, In Young

    2016-03-01

    Internet addiction (IA) has become a widespread and problematic phenomenon as smart devices pervade society. Moreover, internet gaming disorder leads to increases in social expenditures for individuals and nations alike. Although the prevention and treatment of IA are becoming more important, its diagnosis remains problematic. Understanding the neurobiological mechanism of behavioral addictions is essential for the development of specific and effective treatments. Although there are many databases related to other addictions, a database for IA has not been developed yet. In addition, bioinformatics databases, especially genetic databases, require a high level of security and should be designed based on medical information standards. In this respect, our study proposes the OAuth standard protocol for database access authorization. The proposed IA Bioinformatics (IABio) database system is based on internet user authentication, which is a guideline for medical information standards, and uses OAuth 2.0 for access control technology. This study designed and developed the system requirements and configuration. The OAuth 2.0 protocol is expected to establish the security of personal medical information and be applied to genomic research on IA.
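
    In practice an OAuth 2.0 access-control flow of this kind reduces to a token request followed by bearer-authenticated API calls; a generic sketch follows (the endpoints, client credentials, and scope are placeholders, not the IABio deployment):

```python
# Generic OAuth 2.0 client-credentials flow; all URLs/credentials are placeholders.
import requests

token_resp = requests.post(
    "https://iabio.example.org/oauth/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "research-app",
        "client_secret": "s3cret",
        "scope": "genomics.read",
    },
    timeout=30,
)
token = token_resp.json()["access_token"]

# Use the bearer token to call a protected (hypothetical) resource.
data_resp = requests.get(
    "https://iabio.example.org/api/variants",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
print(data_resp.status_code)
```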

  14. Point-of-Care Ultrasound Assessment of Tropical Infectious Diseases--A Review of Applications and Perspectives.

    PubMed

    Bélard, Sabine; Tamarozzi, Francesca; Bustinduy, Amaya L; Wallrauch, Claudia; Grobusch, Martin P; Kuhn, Walter; Brunetti, Enrico; Joekes, Elizabeth; Heller, Tom

    2016-01-01

    The development of good quality and affordable ultrasound machines has led to the establishment and implementation of numerous point-of-care ultrasound (POCUS) protocols in various medical disciplines. POCUS for major infectious diseases endemic in tropical regions has received less attention, despite its likely even more pronounced benefit for populations with limited access to imaging infrastructure. Focused assessment with sonography for HIV-associated TB (FASH) and echinococcosis (FASE) are the only two POCUS protocols for tropical infectious diseases, which have been formally investigated and which have been implemented in routine patient care today. This review collates the available evidence for FASH and FASE, and discusses sonographic experiences reported for urinary and intestinal schistosomiasis, lymphatic filariasis, viral hemorrhagic fevers, amebic liver abscess, and visceral leishmaniasis. Potential POCUS protocols are suggested and technical as well as training aspects in the context of resource-limited settings are reviewed. Using the focused approach for tropical infectious diseases will make ultrasound diagnosis available to patients who would otherwise have very limited or no access to medical imaging. © The American Society of Tropical Medicine and Hygiene.

  15. Point-of-Care Ultrasound Assessment of Tropical Infectious Diseases—A Review of Applications and Perspectives

    PubMed Central

    Bélard, Sabine; Tamarozzi, Francesca; Bustinduy, Amaya L.; Wallrauch, Claudia; Grobusch, Martin P.; Kuhn, Walter; Brunetti, Enrico; Joekes, Elizabeth; Heller, Tom

    2016-01-01

    The development of good quality and affordable ultrasound machines has led to the establishment and implementation of numerous point-of-care ultrasound (POCUS) protocols in various medical disciplines. POCUS for major infectious diseases endemic in tropical regions has received less attention, despite its likely even more pronounced benefit for populations with limited access to imaging infrastructure. Focused assessment with sonography for HIV-associated TB (FASH) and echinococcosis (FASE) are the only two POCUS protocols for tropical infectious diseases, which have been formally investigated and which have been implemented in routine patient care today. This review collates the available evidence for FASH and FASE, and discusses sonographic experiences reported for urinary and intestinal schistosomiasis, lymphatic filariasis, viral hemorrhagic fevers, amebic liver abscess, and visceral leishmaniasis. Potential POCUS protocols are suggested and technical as well as training aspects in the context of resource-limited settings are reviewed. Using the focused approach for tropical infectious diseases will make ultrasound diagnosis available to patients who would otherwise have very limited or no access to medical imaging. PMID:26416111

  16. BioServices: a common Python package to access biological Web Services programmatically.

    PubMed

    Cokelaer, Thomas; Pultz, Dennis; Harder, Lea M; Serra-Musach, Jordi; Saez-Rodriguez, Julio

    2013-12-15

    Web interfaces provide access to numerous biological databases. Many can be accessed programmatically thanks to Web Services. Building applications that combine several of them would benefit from a single framework. BioServices is a comprehensive Python framework that provides programmatic access to major bioinformatics Web Services (e.g. KEGG, UniProt, BioModels, ChEMBLdb). Wrapping additional Web Services based either on Representational State Transfer or Simple Object Access Protocol/Web Services Description Language technologies is eased by the use of object-oriented programming. BioServices releases and documentation are available at http://pypi.python.org/pypi/bioservices under a GPL-v3 license.
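
    A minimal usage sketch is below. The queries are examples; method signatures and format tokens vary between BioServices releases, and the calls require network access to the underlying services:

```python
# Minimal BioServices sketch; signatures vary by release.
from bioservices import KEGG, UniProt

kegg = KEGG()
entry = kegg.get("hsa:7535")      # KEGG flat-file entry for human ZAP70
print(entry[:200])

u = UniProt()
# Format token is "tsv" in recent releases ("tab" in older ones).
hits = u.search("zap70", frmt="tsv")
print(hits.splitlines()[0])
```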

  17. Utilizing Free and Open Source Software to access, view and compare in situ observations, EO products and model output data

    NASA Astrophysics Data System (ADS)

    Vines, Aleksander; Hamre, Torill; Lygre, Kjetil

    2014-05-01

    The GreenSeas project (Development of global plankton data base and model system for eco-climate early warning) aims to advance the knowledge and predictive capacities of how marine ecosystems will respond to global change. A main task has been to set up a data delivery and monitoring core service following the open and free data access policy implemented in the Global Monitoring for Environment and Security (GMES) programme. The aim is to ensure open and free access to historical plankton data, new data (EO products and in situ measurements), model data (including estimates of simulation error) and biological, environmental and climatic indicators for a range of stakeholders, such as scientists, policy makers and environmental managers. To this end, we have developed a geo-spatial database of both historical and new in situ physical, biological and chemical parameters for the Southern Ocean, Atlantic, Nordic Seas and the Arctic, and organized related satellite-derived quantities and model forecasts in a joint geo-spatial repository. For easy access to these data, we have implemented a web-based GIS (Geographical Information System) where observed, derived and forecast parameters can be searched, displayed, compared and exported. Model forecasts can also be uploaded dynamically to the system, to allow modelers to quickly compare their results with available in situ and satellite observations. The system is built on free and open source technologies: THREDDS Data Server, ncWMS, GeoServer, OpenLayers, PostGIS, Liferay, Apache Tomcat, PRTree, NetCDF-Java, json-simple, Geotoolkit, Highcharts, GeoExt, MapFish, FileSaver, jQuery, jstree and qUnit. We also wanted to use open standards for communication between the different services, so we use WMS, WFS, netCDF, GML, OPeNDAP, JSON and SLD. The main advantage we got from using FOSS was that we did not have to reinvent the wheel, but could build on existing code and functionality for free. Most of this did not require the software to be open source, but in some cases we made minor modifications so that the different technologies would work together, and being able to extract just the parts of the code we needed for a specific task was valuable. One example was reusing code from ncWMS and THREDDS so that our main application could both read netCDF files and present them in the browser. This presentation will focus on both the difficulties we encountered and the advantages we gained in developing this tool with FOSS.
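
    For a sense of how such standards glue clients to the portal, here is a hedged OWSLib sketch issuing a WMS GetMap request. The server URL, layer name, and bounding box are placeholders for a GeoServer/ncWMS endpoint like those listed above:

```python
# Placeholder WMS endpoint and layer; requires the OWSLib package.
from owslib.wms import WebMapService

wms = WebMapService("https://example.org/geoserver/wms", version="1.1.1")
print(list(wms.contents)[:5])            # discover a few advertised layers
img = wms.getmap(layers=["greenseas:chl_a"],
                 srs="EPSG:4326",
                 bbox=(-30.0, 50.0, 20.0, 80.0),  # lon/lat box, North Atlantic
                 size=(600, 400),
                 format="image/png")
with open("chl_a.png", "wb") as f:
    f.write(img.read())
```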

  18. A GeoServices Infrastructure for Near-Real-Time Access to Suomi NPP Satellite Data

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Valente, E. G.; Hao, W.; Chettri, S.

    2012-12-01

    The new Suomi National Polar-orbiting Partnership (NPP) satellite extends NASA's moderate-resolution, multispectral observations with a suite of powerful imagers and sounders to support a broad array of research and applications. However, NPP data products consist of a complex set of data and metadata files in highly specialized formats, which NPP's operational ground segment delivers to users only after a delay of several hours. This severely limits their use in critical applications such as weather forecasting, emergency/disaster response, search and rescue, and other activities that require near-real-time access to satellite observations. Alternative approaches, based on distributed Direct Broadcast facilities, can reduce the delay in NPP data delivery from hours to minutes, and can make products more directly usable by practitioners in the field. To assess and fulfill this potential, we are developing a suite of software that couples Direct Broadcast data feeds with a streamlined, scalable processing chain and geospatial Web services, so as to permit many more time-sensitive applications to use NPP data. The resulting geoservices infrastructure links a variety of end-user tools and applications to NPP data from different sources, and to other rapidly changing geospatial data. By using well-known, standard software interfaces (such as OGC Web Services or OPeNDAP), this infrastructure serves a variety of end-user analysis and visualization tools, giving them access to datasets of arbitrary size and resolution and allowing them to request and receive tailored products on demand. The standards-based approach may also streamline data sharing among independent satellite receiving facilities, thus helping them to interoperate in providing frequent, composite views of continent-scale or global regions. To enable others to build similar or derived systems, the service components we are developing (based in part on the Community Satellite Processing Package (CSPP) from the University of Wisconsin and the International Polar-Orbiter Processing Package (IPOPP) from NASA) are being released as open source software. Furthermore, they are configured to operate in a cloud computing environment, so as to allow even small organizations to process and serve NPP data without large hardware investments, and to maintain near-real-time performance cost-effectively by growing and shrinking their use of computing resources to meet large, rapid fluctuations in end-user demand, data availability, and processing needs. (This is especially important for polar-orbiting satellites like NPP, which pass within range of a receiver only a few times each day.) We will discuss the design of the infrastructure, highlight its capabilities, and sketch its potential to facilitate broad access to satellite data processing and visualization, and to enhance near-real-time applications via distributed NPP data streams.

  19. A New Ticket-Based Authentication Mechanism for Fast Handover in Mesh Network.

    PubMed

    Lai, Yan-Ming; Cheng, Pu-Jen; Lee, Cheng-Chi; Ku, Chia-Yi

    2016-01-01

    Due to the ever-growing worldwide popularity of mobile devices of various kinds, the demands on large-scale wireless network infrastructure development and enhancement have been swelling rapidly in recent years. A mobile device holder can get online at a wireless network access point, which covers a limited area. When the client leaves the access point, there is a temporary disconnection until he/she enters the coverage of another access point. Even when the coverages of two neighboring access points overlap, work remains to be done to make the wireless connection continue smoothly. The action of one wireless network access point passing a client to another access point is referred to as the handover. During handover, for security reasons, the client and the new access point should perform mutual authentication before any Internet access service is actually gained/provided. If the handover protocol is inefficient, interruptions of Internet service can result. In 2013, Li et al. proposed a fast handover authentication mechanism for wireless mesh networks (WMNs) based on tickets. Unfortunately, Li et al.'s work came with some weaknesses. For one thing, some sensitive information such as the time and date of expiration is sent in plaintext, which increases security risks. For another, Li et al.'s protocol requires high-quality tamper-proof devices (TPDs), and this unreasonably high equipment requirement limits its applicability. In this paper, we propose a new efficient handover authentication mechanism. The new mechanism offers a higher level of security on a more scalable ground, with the client's privacy better preserved. The results of our performance analysis suggest that our new mechanism is superior to similar mechanisms in terms of authentication delay.
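
    For illustration only, and not the scheme proposed in this paper or Li et al.'s original, a ticket-based handover can be sketched as a token whose fields are covered by a message authentication code, so a new access point can verify it offline; real schemes would additionally encrypt sensitive fields such as the expiry time rather than send them in plaintext:

        # Hypothetical sketch of a self-verifying handover ticket (HMAC over the
        # ticket body). Not the authors' protocol; key handling is simplified.
        import hmac, hashlib, json, time

        REGION_KEY = b"key-shared-by-trusted-access-points"   # assumption

        def issue_ticket(client_id, lifetime_s=300):
            body = {"cid": client_id, "exp": int(time.time()) + lifetime_s}
            msg = json.dumps(body, sort_keys=True).encode()
            return {"body": body,
                    "tag": hmac.new(REGION_KEY, msg, hashlib.sha256).hexdigest()}

        def verify_ticket(ticket):
            msg = json.dumps(ticket["body"], sort_keys=True).encode()
            ok = hmac.compare_digest(
                hmac.new(REGION_KEY, msg, hashlib.sha256).hexdigest(), ticket["tag"])
            return ok and ticket["body"]["exp"] > time.time()

        t = issue_ticket("client-42")
        assert verify_ticket(t)   # any access point holding REGION_KEY can check this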

  1. Customised search and comparison of in situ, satellite and model data for ocean modellers

    NASA Astrophysics Data System (ADS)

    Hamre, Torill; Vines, Aleksander; Lygre, Kjetil

    2014-05-01

    For the ocean modelling community, the amount of data available from historical and upcoming in situ sensor networks and satellite missions provides a rich opportunity to validate and improve their simulation models. However, the problem of making the different data interoperable and intercomparable remains, due to, among other factors, differences in the terminology and formats used by different data providers, and the different granularity of e.g. in situ data and ocean models. The GreenSeas project (Development of global plankton data base and model system for eco-climate early warning) aims to advance the knowledge and predictive capacities of how marine ecosystems will respond to global change. In the project, one specific objective has been to improve the technology for accessing historical plankton and associated environmental data sets, along with earth observation data and simulation outputs. To this end, we have developed a web portal enabling ocean modellers to easily search for in situ or satellite data overlapping in space and time, and to compare the retrieved data with their model results. The in situ data are retrieved from a geo-spatial repository containing both historical and new physical, biological and chemical parameters for the Southern Ocean, Atlantic, Nordic Seas and the Arctic. Satellite-derived quantities of similar parameters from the same areas are retrieved from another geo-spatial repository established in the project. The two repositories are accessed through standard interfaces, using the Open Geospatial Consortium (OGC) Web Map Service (WMS) and Web Feature Service (WFS), and OPeNDAP protocols, respectively. While the developed data repositories use standard terminology to describe the parameters, the measured in situ biological parameters in particular are too fine-grained to be immediately useful for modelling purposes. Therefore, the plankton parameters were grouped according to category, size and, where available, element. This grouping is reflected in the web portal's graphical user interface, where the groups and subgroups are organized in a tree structure, enabling the modeller to quickly get an overview of available data, going into more detail (subgroups) if needed or staying at a higher level of abstraction (merging the parameters below) if this provides a better basis for comparison with the model parameters. Once the modeller has settled on a suitable level of detail, the system retrieves the available in situ parameters. The modeller can then select among the pre-defined models or upload their own model forecast file (in NetCDF/CF format) for comparison with the retrieved in situ data. The comparison can be shown in different kinds of plots (e.g. scatter plots) or through simple statistical measures, or near-coincident values of in situ and model points can be exported for further analysis in the modeller's own tools. During data search and presentation, the modeller can determine both the query criteria and what associated metadata to include in the display and export of the retrieved data. Satellite-derived parameters can be queried and compared with model results in the same manner. With the developed prototype system, we have demonstrated that a customised tool for searching, presenting, comparing and exporting ocean data from multiple platforms (in situ, satellite, model) makes it easy to compare model results with independent observations. With further enhancement of functionality and the inclusion of more data, we believe the resulting system can greatly benefit the wider community of ocean modellers looking for data and tools to validate their models.
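
    The core of such a comparison can be reduced to a colocation step; the sketch below (variable names and grid are illustrative, not the portal's code) matches each in situ observation to the nearest model grid point with numpy:

        # Illustrative nearest-grid-point colocation of observations with a model field.
        import numpy as np

        def colocate(grid_lat, grid_lon, field, obs_lat, obs_lon):
            """Return the model value at the grid point nearest each observation."""
            i = np.abs(grid_lat[:, None] - obs_lat[None, :]).argmin(axis=0)
            j = np.abs(grid_lon[:, None] - obs_lon[None, :]).argmin(axis=0)
            return field[i, j]

        grid_lat = np.linspace(60.0, 80.0, 201)        # hypothetical 0.1-degree grid
        grid_lon = np.linspace(-20.0, 20.0, 401)
        field = np.random.rand(201, 401)               # stand-in for a forecast field
        matched = colocate(grid_lat, grid_lon, field,
                           np.array([65.3, 71.9]), np.array([2.4, -8.7]))

    Scatter plots and simple statistics (bias, RMSE) then follow directly from the matched pairs.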

  2. An Efficient Mutual Authentication Framework for Healthcare System in Cloud Computing.

    PubMed

    Kumar, Vinod; Jangirala, Srinivas; Ahmad, Musheer

    2018-06-28

    The increasing role of Telecare Medicine Information Systems (TMIS) makes it possible for patients to explore medical treatment and to accumulate and access medical data over an internet connection. Security and privacy preservation are necessary for the patient's medical data in TMIS because of its highly sensitive purpose. Recently, Mohit et al. proposed a mutual authentication protocol for TMIS in the cloud computing environment. In this work, we reviewed their protocol and found that it is not secure against stolen-verifier, many-logged-in-patients and impersonation attacks, fails to preserve patient anonymity, and fails to protect the session key. To enhance the level of security, we propose a new mutual authentication protocol for the same environment. The presented framework is also more capable in terms of computation cost. In addition, the security evaluation shows that the protocol resists all the relevant security attacks, and we also provide a formal security evaluation based on the random oracle model. The performance of the proposed protocol is much better in comparison to the existing protocol.

  3. A lightweight and secure two factor anonymous authentication protocol for Global Mobility Networks.

    PubMed

    Baig, Ahmed Fraz; Hassan, Khwaja Mansoor Ul; Ghani, Anwar; Chaudhry, Shehzad Ashraf; Khan, Imran; Ashraf, Muhammad Usman

    2018-01-01

    Global Mobility Networks (GLOMONETs) in wireless communication permit global roaming services that enable a user to leverage mobile services in any foreign country. Technological growth in wireless communication is also accompanied by new security threats and challenges. A threat-proof authentication protocol in wireless communication may overcome the security flaws by allowing only legitimate users to access a particular service. Recently, Lee et al. found Mun et al.'s scheme vulnerable to different attacks and proposed an advanced secure scheme to overcome the security flaws. However, this article points out that Lee et al.'s scheme lacks user anonymity and local password verification, provides inefficient user authentication, and is vulnerable to replay and DoS attacks. Furthermore, this article presents a more robust anonymous authentication scheme to handle the threats and challenges found in Lee et al.'s protocol. The proposed protocol is formally verified with an automated tool (ProVerif). The proposed protocol has superior efficiency in comparison to the existing protocols.

  4. A review on transport layer protocol performance for delivering video on an adhoc network

    NASA Astrophysics Data System (ADS)

    Suherman; Suwendri; Al-Akaidi, Marwan

    2017-09-01

    The transport layer protocol is responsible for end-to-end data transmission. The transmission control protocol (TCP) provides a reliable connection, while the user datagram protocol (UDP) offers fast but unguaranteed data transfer. Meanwhile, 802.11 (wireless fidelity/WiFi) networks have been widely used as internet hotspots. This paper evaluates TCP, TCP-variant and UDP performance for video transmission over an ad hoc network. A transport-protocol/medium-access cross-layer is proposed that prioritizes TCP acknowledgements to reduce delay. The NS-2 evaluations show that the average delays increase linearly for all the evaluated protocols, while the average packet losses grow logarithmically. UDP produces the lowest transmission delay, 5.4% and 5.8% lower than TCP and the TCP variant respectively, but experiences the highest packet loss. Both TCP and TCP Vegas keep packet loss as low as possible. The proposed cross-layer successfully decreases TCP and TCP Vegas delay by about 0.12% and 0.15%, while losses remain similar.
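
    The TCP/UDP trade-off the paper measures can be seen at the socket level; this minimal Python sketch (placeholder address, no running receiver assumed) contrasts fire-and-forget datagrams with an acknowledged stream:

        # UDP: each datagram is sent immediately, with no delivery guarantee.
        # TCP: the handshake and retransmissions add delay but ensure delivery.
        import socket

        udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        udp.sendto(b"video-frame-0001", ("192.0.2.10", 5004))   # fire and forget
        udp.close()

        tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        tcp.connect(("192.0.2.10", 5005))     # blocks until the peer accepts
        tcp.sendall(b"video-frame-0001")      # retransmitted on loss, delivered in order
        tcp.close()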

  6. Protocol to Exploit Waiting Resources for UASNs.

    PubMed

    Hung, Li-Ling; Luo, Yung-Jeng

    2016-03-08

    The transmission speed of acoustic waves in water is much slower than that of radio waves in terrestrial wireless sensor networks. Thus, the propagation delay in underwater acoustic sensor networks (UASN) is much greater. Longer propagation delay leads to complicated communication and collision problems. To solve collision problems, some studies have proposed waiting mechanisms; however, long waiting times result in low bandwidth utilization. To improve throughput, this study proposes a slotted medium access control protocol to enhance bandwidth utilization in UASNs. The proposed mechanism increases communication by exploiting temporal and spatial resources that are typically idle in order to protect communication against interference. By reducing wait time, network performance and energy consumption can be improved. A performance evaluation demonstrates that when data packets are large or sensor deployment is dense, the proposed protocol consumes less energy and achieves higher throughput than existing protocols.
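
    The scale of the problem is easy to quantify: underwater, the worst-case propagation delay rather than the transmission time dominates the length of a collision-free slot. A back-of-envelope sketch (the figures are illustrative, not from the paper):

        # Why UASN slots are long: sound travels ~1500 m/s, radio ~3e8 m/s.
        SOUND_MPS, RADIO_MPS = 1500.0, 3.0e8

        def slot_length(range_m, packet_bits, bitrate_bps, speed_mps):
            # transmission time plus worst-case propagation (the guard time)
            return packet_bits / bitrate_bps + range_m / speed_mps

        print(slot_length(1000, 2048, 10_000, SOUND_MPS))   # ~0.87 s acoustic
        print(slot_length(1000, 2048, 10_000, RADIO_MPS))   # ~0.20 s radio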

  7. One-Step Synthesis of Aliphatic Potassium Acyltrifluoroborates (KATs) from Organocuprates.

    PubMed

    Liu, Sizhou M; Wu, Dino; Bode, Jeffrey W

    2018-04-20

    A one-step synthesis of aliphatic KATs from organocuprates is reported. Organolithium and organomagnesium reagents were readily transmetalated onto Cu(I) and coupled with a KAT-forming reagent to yield the respective aliphatic KAT. The protocol is suitable for primary, secondary and, for the first time, tertiary alkyl substrates. These protocols considerably expand the range of KATs that can be readily accessed in one step from commercially available starting materials.

  8. Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network

    DTIC Science & Technology

    1989-08-01

    Convolutional codes, in Proc. Int. Conf. Commun., 21.4.1-21.4.5, 1987. [27] J. Hagenauer, "Rate Compatible Punctured Convolutional Codes," in Proc. Int. Conf. ... achieved by using a low-rate (r = 0.5), high-constraint-length (e.g., 32) punctured convolutional code. Code puncturing provides for a variable-rate code ... investigated the use of convolutional codes in Type II Hybrid ARQ protocols. The error ...

  9. Proof of Concept Integration of a Single-Level Service-Oriented Architecture into a Multi-Domain Secure Environment

    DTIC Science & Technology

    2008-03-01

    Machine [29]. OC4J applications support Java Servlets, Web services, and the following J2EE-specific standards: Extensible Markup Language (XML) ... IMAP Internet Message Access Protocol; IP Internet Protocol; IT Information Technology; J2EE Java Enterprise Environment; JSR 168 Java ... LDAP), World Wide Web Distributed Authoring and Versioning (WebDAV), Java Specification Request 168 (JSR 168), and Web Services for Remote ...

  10. 76 FR 59963 - Closed Captioning of Internet Protocol-Delivered Video Programming: Implementation of the Twenty...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-28

    ...In this document, the Commission proposes rules to implement provisions of the Twenty-First Century Communications and Video Accessibility Act of 2010 (``CVAA'') that mandate rules for closed captioning of certain video programming delivered using Internet protocol (``IP''). The Commission seeks comment on rules that would apply to the distributors, providers, and owners of IP-delivered video programming, as well as the devices that display such programming.

  11. Copper-catalyzed oxidative C-O bond formation of 2-acyl phenols and 1,3-dicarbonyl compounds with ethers: direct access to phenol esters and enol esters.

    PubMed

    Park, Jihye; Han, Sang Hoon; Sharma, Satyasheel; Han, Sangil; Shin, Youngmi; Mishra, Neeraj Kumar; Kwak, Jong Hwan; Lee, Cheong Hoon; Lee, Jeongmi; Kim, In Su

    2014-05-16

    A copper-catalyzed oxidative coupling of 2-carbonyl-substituted phenols and 1,3-dicarbonyl compounds with a wide range of dibenzyl or dialkyl ethers is described. This protocol provides an efficient preparation of phenol esters and enol esters in good yields with high chemoselectivity. This method represents an alternative protocol for classical esterification reactions.

  12. A DNA 'barcode blitz': rapid digitization and sequencing of a natural history collection.

    PubMed

    Hebert, Paul D N; Dewaard, Jeremy R; Zakharov, Evgeny V; Prosser, Sean W J; Sones, Jayme E; McKeown, Jaclyn T A; Mantle, Beth; La Salle, John

    2013-01-01

    DNA barcoding protocols require the linkage of each sequence record to a voucher specimen that has, whenever possible, been authoritatively identified. Natural history collections would seem an ideal resource for barcode library construction, but they have never seen large-scale analysis because of concerns linked to DNA degradation. The present study examines the strength of this barrier, carrying out a comprehensive analysis of moth and butterfly (Lepidoptera) species in the Australian National Insect Collection. Protocols were developed that enabled tissue samples, specimen data, and images to be assembled rapidly. Using these methods, a five-person team processed 41,650 specimens representing 12,699 species in 14 weeks. Subsequent molecular analysis took about six months, reflecting the need for multiple rounds of PCR as sequence recovery was impacted by age, body size, and collection protocols. Despite these variables and the fact that specimens averaged 30.4 years old, barcode records were obtained from 86% of the species. In fact, one or more barcode-compliant sequences (>487 bp) were recovered from virtually all species represented by five or more individuals, even when the youngest was 50 years old. By assembling specimen images, distributional data, and DNA barcode sequences on a web-accessible informatics platform, this study has greatly advanced accessibility to information on thousands of species. Moreover, much of the specimen data became publicly accessible within days of its acquisition, while most sequence results saw release within three months. As such, this study reveals the speed with which DNA barcode workflows can mobilize biodiversity data, often providing the first web-accessible information for a species. These results further suggest that existing collections can enable the rapid development of a comprehensive DNA barcode library for the most diverse compartment of terrestrial biodiversity: insects.

  13. Enhanced Multi-Modal Access to Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Lamarra, Norm; Doyle, Richard; Wyatt, Jay

    2003-01-01

    Tomorrow's Interplanetary Network (IPN) will evolve from JPL's Deep-Space Network (DSN) and provide key capabilities to future investigators, such as simplified acquisition of higher-quality science at remote sites and enriched access to these sites. These capabilities could also be used to foster public interest, e.g., by making it possible for students to explore these environments personally, eventually perhaps interacting with a virtual world whose models could be populated by data obtained continuously from the IPN. Our paper looks at JPL's approach to making this evolution happen, starting from improved communications. Evolving space protocols (e.g., today's CCSDS proximity and file-transfer protocols) will provide the underpinning of such communications in the next decades, just as today's rich web was enabled by progress in Internet Protocols starting from the early 1970's (ARPAnet research). A key architectural thrust of this effort is to deploy persistent infrastructure incrementally, using a layered service model, where later higher-layer capabilities (such as adaptive science planning) are enabled by earlier lower-layer services (such as automated routing of object-based messages). In practice, there is also a mind shift needed from an engineering culture raised on point-to-point single-function communications (command uplink, telemetry downlink), to one in which assets are only indirectly accessed, via well-defined interfaces. We are aiming to foster a 'community of access' both among space assets and the humans who control them. This enables appropriate (perhaps eventually optimized) sharing of services and resources to the greater benefit of all participants. We envision such usage to be as automated in the future as using a cell phone is today - with all the steps in creating the real-time link being automated.

  14. High Speed Oblivious Random Access Memory (HS-ORAM)

    DTIC Science & Technology

    2015-09-01

    Bryan Parno, "Non-interactive verifiable computing: Outsourcing computation to untrusted workers", 30th International Cryptology Conference, pp. 465 ... holder or any other person or corporation; or convey any rights or permission to manufacture, use, or sell any patented invention that may relate to ... secure outsourced data access protocols. HS-ORAM deploys a number of server-side software components running inside tamper-proof secure coprocessors.

  15. Identity Management Task Force Report 2008

    DTIC Science & Technology

    2008-01-01

    Telecommunication Grid (GTG) consists of the public-switched telecommunications network (PSTN), various forms of Internet protocol (IP) networks ... to network providers) to a large community of nomadic users and access devices over a wide range of access technologies. The GTG is notional, and ... DOC Dr. Myra Gray, DOD Greg Hall, DNI Celia Hanley, DOD Patrick Hannon, DNI James Hass, IC Linda Hill, SSA Bobby Jones, DOC Patrick Hannon ...

  16. Implementation of radial arterial access for cardiac interventions: a strong case for quality assurance protocols by the nursing staff.

    PubMed

    Steffenino, Giuseppe; Fabrizi, Mauro De Benedetto; Baralis, Giorgio; Tomatis, Marilena; Mogna, Aldo; Dutto, Monica; Dutto, Maria Stefania; Conte, Laura; Lice, Giulietta; Cavallo, Simona; Porcedda, Brunella

    2011-02-01

    Radial arterial access is becoming increasingly popular for coronary angiography and angioplasty. The technique is, however, more demanding than femoral arterial access, and hemostasis is not care-free. A quality assurance program was run by our nursing staff, with patient follow-up, to monitor the implementation of radial arterial access in our laboratory. In 973 consecutive patients, both a hydrophilic sheath and an inflatable bandage for hemostasis were used. Bandage inflation volume and time were both reduced through subsequent data audit and protocol changes (A = 175 patients; B = 297; C = 501). An increase was achieved in the percentage of patients with neither loss of radial pulse nor hematoma of any size (A = 81.3%, B = 90.9%, C = 92.2%, P < 0.001), and no discomfort at all (A = 44.2%, B = 75.1%, C = 89.3%, P < 0.001). Follow-up was available for 965 patients (99%), and in 956, the access site could be re-inspected at least once. There were no vascular complications. Overall, the radial artery pulse was absent at latest follow-up in 0.6% of cases (95% confidence interval 0.21-1.05%). In 460 consecutive patients with complete assessment in protocol C, a palpable arterial pulse was absent in 5% of cases at about 20 h after hemostasis. Barbeau's test was positive in 26.5% of patients (95% confidence interval 22.5-30.6%). They had a significantly lower body weight, a lower systolic blood pressure at hemostasis, and a higher bandage inflation volume; a hematoma of any size and the report of any discomfort were also more frequent. Barbeau's test returned to normal in 30% of them 3-60 days later. Our nurse-led quality assurance program helped us to reduce minor vascular sequelae and improve patient comfort after radial access. Early occlusion of the radial artery as detected by pulse oximeter is frequent, often reversible, and may be mostly related to trauma/occlusion of the artery during hemostasis. © 2011 Italian Federation of Cardiology.

  17. Strategies that reduce 90-day readmissions and inpatient costs after liver transplantation.

    PubMed

    Zeidan, Joseph H; Levi, David M; Pierce, Ruth; Russo, Mark W

    2018-04-25

    Liver transplantation is hospital-resource intensive and associated with high rates of readmission. We have previously shown a reduction in 30-day readmission rates by implementing a specifically designed protocol to increase access to outpatient care. Our aim was to determine whether strategies that reduce 30-day readmission after liver transplant were also effective in reducing 90-day readmission rates and costs. A protocol was developed to reduce inpatient readmissions after liver transplant that expanded outpatient services and provided alternatives to readmission. The 90-day readmission rates and costs were compared before and after implementing the strategies outlined in the protocol. Multivariable analysis was used to control for potential confounding factors. Over the study period, 304 adult primary liver transplants were performed on patients with a median biologic MELD of 22, and 112 (37%) patients were readmitted within 90 days of transplant. The readmission rates before and after implementation of the protocol were 53% and 26%, respectively, p<0.001. The most common reason for readmission was elevated liver tests/rejection (24%). In multivariable analysis, the protocol remained associated with avoiding readmission, OR=0.33, [95% CI 0.20, 0.55], p<0.001. The median length of stay after transplant was 8 days preprotocol and 7 days postprotocol. A greater proportion of patients were discharged to hospital lodging post protocol, 10% versus 19%, p=0.03. Ninety-day readmission costs were reduced by 55%, but total 90-day costs by only 2.7%, due to higher outpatient costs and index admission costs. In conclusion, 90-day readmission rates and readmission costs can be reduced by improving access to outpatient services and hospital-local lodging; total 90-day costs were similar between the two groups because of higher outpatient costs after the protocol was introduced. © 2018 by the American Association for the Study of Liver Diseases.

  18. Explanation of the Nagoya Protocol on Access and Benefit Sharing and its implication for microbiology.

    PubMed

    Smith, David; da Silva, Manuela; Jackson, Julian; Lyal, Christopher

    2017-03-01

    Working with genetic resources and associated data requires greater attention since the Nagoya Protocol on Access and Benefit Sharing (ABS) came into force in October 2014. Biologists must ensure that they have legal clarity in how they can and cannot use the genetic resources on which they carry out research. Not only must they work within the spirit of the Convention on Biological Diversity (https://www.cbd.int/convention/articles/default.shtml?a=cbd-02), but they may also have regulatory requirements to meet. Although the Nagoya Protocol was negotiated and agreed globally, it is the responsibility of each country that ratifies it to introduce its individual implementing procedures and practices. Many countries in Europe, such as the UK, have chosen not to put access controls in place at this time, but others already have laws enacted providing ABS measures under the Convention on Biological Diversity or specifically to implement the Nagoya Protocol. Access legislation is in place in many countries, and information on this can be found at the ABS Clearing House (https://absch.cbd.int/). For example, Brazil, although not a Party to the Nagoya Protocol at the time of writing, has Law 13.123, which entered into force on 17 November 2015, regulated by Decree 8.772, which was published on 11 May 2016. In this case, export of Brazilian genetic resources is not allowed unless the collector is registered in the National System for Genetic Heritage and Associated Traditional Knowledge Management (SisGen). The process entails that a foreign scientist must first of all be registered as working with someone in Brazil and have authorization to collect. The enactment of European Union Regulation No. 511/2014 implements the Nagoya Protocol elements that govern compliance measures for users and offers the opportunity to demonstrate due diligence in sourcing organisms by selecting from the holdings of 'registered collections'. The UK has introduced a Statutory Instrument that puts in place enforcement measures within the UK to implement this European Union Regulation; this is regulated by Regulatory Delivery, Department for Business, Energy and Industrial Strategy. Scientific communities, including the private sector, individual institutions and organizations, have begun to design policy and best practices for compliance. Microbiologists and culture collections alike need to be aware of the legislation of the source country of the materials they use and put in place best practices for compliance; such best practice has been drafted by the Microbial Resource Research Infrastructure, and other research communities such as the Consortium of European Taxonomic Facilities, the Global Genome Biodiversity Network and the International Organisation for Biological Control have published best practice and/or codes of conduct to ensure legitimate exchange and use of genetic resources.

  19. Performance analysis of the ALOHA protocol with replication in a fading channel for the Mobile Satellite Experiment

    NASA Technical Reports Server (NTRS)

    Clare, L. P.; Yan, T.-Y.

    1985-01-01

    The analysis of the ALOHA random access protocol for communications channels with fading is presented. The protocol is modified to send multiple contiguous copies of a message at each transmission attempt. Both pure and slotted ALOHA channels are considered. A general two state model is used for the channel error process to account for the channel fading memory. It is shown that greater throughput and smaller delay may be achieved using repetitions. The model is applied to the analysis of the delay-throughput performance in a fading mobile communications environment. Numerical results are given for NASA's Mobile Satellite Experiment.

  20. Noninvasive measurement of dynamic correlation functions

    NASA Astrophysics Data System (ADS)

    Uhrich, Philipp; Castrignano, Salvatore; Uys, Hermann; Kastner, Michael

    2017-08-01

    The measurement of dynamic correlation functions of quantum systems is complicated by measurement backaction. To facilitate such measurements we introduce a protocol, based on weak ancilla-system couplings, that is applicable to arbitrary (pseudo)spin systems and arbitrary equilibrium or nonequilibrium initial states. Different choices of the coupling operator give access to the real and imaginary parts of the dynamic correlation function. This protocol reduces disturbances due to the early-time measurements to a minimum, and we quantify the deviation of the measured correlation functions from the theoretical, unitarily evolved ones. Implementations of the protocol in trapped ions and other experimental platforms are discussed. For spin-1/2 models and single-site observables we prove that measurement backaction can be avoided altogether, allowing for the use of ancilla-free protocols.
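
    For reference, and using only standard definitions rather than anything specific to this paper's derivation, the quantity in question and the two parts the coupling choices give access to (for Hermitian A and B) are:

        \[
          C(t) \;=\; \langle \psi \,|\, A(t)\, B \,|\, \psi \rangle,
          \qquad A(t) = e^{iHt} A\, e^{-iHt},
        \]
        \[
          \operatorname{Re} C(t) = \tfrac{1}{2}\,\langle \{ A(t),\, B \} \rangle,
          \qquad
          \operatorname{Im} C(t) = \tfrac{1}{2i}\,\langle [\, A(t),\, B \,] \rangle .
        \]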

  1. Venous Access Devices: Clinical Rounds

    PubMed Central

    Matey, Laurl; Camp-Sorrell, Dawn

    2016-01-01

    Nursing management of venous access devices (VADs) requires knowledge of current evidence, as well as knowledge of when evidence is limited. Do you know which practices we do based on evidence and those that we do based on institutional history or preference? This article will present complex VAD infection and occlusion complications and some of the controversies associated with them. Important strategies for identifying these complications, troubleshooting, and evaluating the evidence related to lack of blood return, malposition, infection, access and maintenance protocols, and scope of practice issues are presented. PMID:28083553

  2. Experimental realization of an entanglement access network and secure multi-party computation

    PubMed Central

    Chang, X.-Y.; Deng, D.-L.; Yuan, X.-X.; Hou, P.-Y.; Huang, Y.-Y.; Duan, L.-M.

    2016-01-01

    To construct a quantum network with many end users, it is critical to have a cost-efficient way to distribute entanglement over different network ends. We demonstrate an entanglement access network, where the expensive resource, the entangled photon source at the telecom wavelength and the core communication channel, is shared by many end users. Using this cost-efficient entanglement access network, we report experimental demonstration of a secure multiparty computation protocol, the privacy-preserving secure sum problem, based on the network quantum cryptography. PMID:27404561

  3. Europlanet/IDIS: Combining Diverse Planetary Observations and Models

    NASA Astrophysics Data System (ADS)

    Schmidt, Walter; Capria, Maria Teresa; Chanteur, Gerard

    2013-04-01

    Planetary research involves a diversity of research fields from astrophysics and plasma physics to atmospheric physics, climatology, spectroscopy and surface imaging. Data from all these disciplines are collected from various space-borne platforms or telescopes, supported by modelling teams and laboratory work. In order to interpret one set of data, supporting data from different disciplines and other missions are often needed, while the scientist does not always have the detailed expertise to access and utilize these observations. The Integrated and Distributed Information System (IDIS) [1], developed in the framework of the Europlanet-RI project, implements a Virtual Observatory approach ([2] and [3]), where different data sets, stored in archives around the world and in different formats, are accessed, re-formatted and combined to meet the user's requirements without the need of familiarizing oneself with the different technical details. While observational astrophysical data from different observatories could already earlier be accessed via Virtual Observatories, this concept is now extended to diverse planetary data and related model data sets, spectral data bases etc. A dedicated XML-based Europlanet Data Model (EPN-DM) [4] was developed based on data models from the planetary science community and the Virtual Observatory approach. A dedicated editor simplifies the registration of new resources. As the EPN-DM is a super-set of existing data models, existing archives as well as new spectroscopic or chemical data bases for the interpretation of atmospheric or surface observations, or even modeling facilities at research institutes in Europe or Russia, can be easily integrated and accessed via a Table Access Protocol (EPN-TAP) [5] adapted from the corresponding protocol of the International Virtual Observatory Alliance [6] (IVOA-TAP). EPN-TAP allows one to search catalogues, retrieve data and make them available through standard IVOA tools if the access to the archive is compatible with IVOA standards. For some major data archives with different standards, adaptation tools are available to make the access transparent to the user. EuroPlaNet-IDIS has contributed to the definition of PDAP, the Planetary Data Access Protocol of the International Planetary Data Alliance (IPDA) [7], to access the major planetary data archives of NASA in the USA [8], ESA in Europe [9] and JAXA in Japan [10]. Acknowledgement: Europlanet-RI was funded by the European Commission under the 7th Framework Program, grant 228319 "Capacities Specific Programme" - Research Infrastructures Action. References: [1] Details on IDIS and the Europlanet-RI via Web-site: http://www.idis.europlanet-ri.eu/ [2] Demonstrator implementation for Plasma-VO AMDA: http://cdpp-amda.cesr.fr/DDHTML/index.html [3] Demonstrator implementation for the IDIS-VO: http://www.idis-dyn.europlanet-ri.eu/vodev.shtml [4] Europlanet Data Model EPN-DM: http://www.europlanet-idis.fi/documents/public_documents/EPN-DM-v2.0.pdf [5] Europlanet Table Access Protocol EPN-TAP: http://www.europlanet-idis.fi/documents/public_documents/EPN-TAPV_0.26.pdf [6] International Virtual Observatory Alliance IVOA: http://www.ivoa.net [7] International Planetary Data Alliance IPDA: http://planetarydata.org/ [8] NASA's Planetary Data System: http://pds.jpl.nasa.gov/ [9] ESA's Planetary Science Archive PSA: http://www.sciops.esa.int/index.php?project=PSA [10] JAXA's Data Archive and Transmission System DARTS: http://darts.isas.jaxa.jp/
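
    For a flavour of how such a TAP service is queried in practice, here is a minimal sketch of a synchronous TAP request (ADQL over HTTP); the service URL is a placeholder, while REQUEST/LANG/QUERY are the standard TAP parameters and epn_core is the table name the EPN-TAP convention prescribes:

        # Sketch: a synchronous TAP query against a hypothetical EPN-TAP service.
        import requests

        resp = requests.get(
            "http://example.org/tap/sync",          # placeholder endpoint
            params={
                "REQUEST": "doQuery", "LANG": "ADQL", "FORMAT": "votable",
                "QUERY": "SELECT TOP 10 * FROM epn_core WHERE target_name = 'Mars'",
            },
            timeout=60,
        )
        print(resp.text[:500])   # a VOTable document describing matching granules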

  4. Developing a distributed data dictionary service

    NASA Technical Reports Server (NTRS)

    U'Ren, J.

    2000-01-01

    This paper will explore the use of the Lightweight Directory Access Protocol (LDAP) using the ISO 11179 Data Dictionary Schema as a mechanism for standardizing the structure and communication links between data dictionaries.
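
    A hedged sketch of what such a lookup could look like with the Python ldap3 library; the server, base DN, and attribute names are assumptions standing in for an ISO 11179-style schema:

        # Querying a hypothetical LDAP-backed data dictionary with ldap3.
        from ldap3 import Server, Connection, ALL

        server = Server("ldap://dictionary.example.org", get_info=ALL)
        conn = Connection(server, auto_bind=True)        # anonymous bind
        conn.search(
            search_base="ou=dataElements,o=example",     # assumed directory layout
            search_filter="(cn=spacecraft_id)",
            attributes=["cn", "description"],            # assumed ISO 11179-style fields
        )
        for entry in conn.entries:
            print(entry.entry_dn, entry.description)
        conn.unbind()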

  5. CREATION OF THE MODEL ADDITIONAL PROTOCOL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Houck, F.; Rosenthal, M.; Wulf, N.

    In 1991, the international nuclear nonproliferation community was dismayed to discover that the implementation of safeguards by the International Atomic Energy Agency (IAEA) under its NPT INFCIRC/153 safeguards agreement with Iraq had failed to detect Iraq's nuclear weapon program. It was now clear that ensuring that states were fulfilling their obligations under the NPT would require not just detecting diversion but also the ability to detect undeclared materials and activities. To achieve this, the IAEA initiated what would turn out to be a five-year effort to reappraise the NPT safeguards system. The effort engaged the IAEA and its Member States and led to agreement in 1997 on a new safeguards agreement, the Model Protocol Additional to the Agreement(s) between States and the International Atomic Energy Agency for the Application of Safeguards. The Model Protocol makes explicit that one IAEA goal is to provide assurance of the absence of undeclared nuclear material and activities. The Model Protocol requires an expanded declaration that identifies a State's nuclear potential, empowers the IAEA to raise questions about the correctness and completeness of the State's declaration, and, if needed, allows IAEA access to locations. The information required and the locations available for access are much broader than those provided for under INFCIRC/153. The negotiation was completed in quite a short time because it started with a relatively complete draft of an agreement prepared by the IAEA Secretariat. This paper describes how the Model Protocol was constructed and reviews key decisions that were made both during the five-year period and in the actual negotiation.

  6. Local network assessment

    NASA Astrophysics Data System (ADS)

    Glen, D. V.

    1985-04-01

    Local networks, related standards activities of the Institute of Electrical and Electronics Engineers (IEEE), the American National Standards Institute (ANSI) and other elements are presented. These elements include: (1) technology choices such as topology, transmission media, and access protocols; (2) descriptions of standards for the 802 local area networks (LANs), high speed local networks (HSLNs) and military specification local networks; and (3) intra- and internetworking using bridges and gateways with protocols based on the Open Systems Interconnection (OSI) reference model. The convergence of LAN/PBX technology is also described.

  7. A Secure Three-Factor User Authentication and Key Agreement Protocol for TMIS With User Anonymity.

    PubMed

    Amin, Ruhul; Biswas, G P

    2015-08-01

    Telecare medical information system (TMIS) makes an efficient and convenient connection between patient(s)/user(s) and doctor(s) over the insecure internet. Therefore, data security, privacy and user authentication are enormously important for accessing important medical data over insecure communication. Recently, many user authentication protocols for TMIS have been proposed in the literature, and it has been observed that most of these protocols cannot achieve the complete set of security requirements. In this paper, we have scrutinized two remote user authentication protocols using smart cards (Mishra et al., Xu et al.) and explained that both protocols suffer from several security weaknesses. We have then presented a three-factor user authentication and key agreement protocol usable for TMIS, which fixes the security pitfalls of the above-mentioned schemes. The informal cryptanalysis makes certain that the proposed protocol provides strong protection against the relevant security attacks. Furthermore, the simulator AVISPA tool confirms that the protocol is secure against active and passive attacks, including replay and man-in-the-middle attacks. The security functionality and performance comparison analysis confirm that our protocol not only provides strong protection against security attacks, but also achieves better complexity, along with an efficient login and password change phase and a session key verification property.

  8. Access to high-tech health care. Ethics.

    PubMed

    Merrill, J M

    1991-03-15

    Access to health care has always been limited by personal and social economics. Poverty remains one element that correlates with poor prognosis in all varieties of cancer. Prior to becoming standard therapy, elements of high-tech health care are often widely available as research protocols, participation in which is generally available without considerations of insurance coverage or personal wealth. Any person may still volunteer participation in research protocols and thereby partake in high-tech advances even before these become standard therapy. However, recent developments in the conduct of research now may limit participation. Medicare and third party insurance payers proscribe payment for research project care and always have. Recently, more than ever before, reimbursements to physicians and health care institutions have been more closely scrutinized to reject all payment in research settings. In situations in which cost and availability of the new technology, whether machine or drug, limit participation, research entrepreneurs have made research participation available to only those who can pay for it. These and similar developments threaten to limit access to high-tech health care and to actually impede cancer research.

  9. A Hybrid TDMA/CSMA-Based Wireless Sensor and Data Transmission Network for ORS Intra-Microsatellite Applications

    PubMed Central

    Wang, Long; Liu, Yong; Yin, Zengshan

    2018-01-01

    To achieve launch-on-demand for Operationally Responsive Space (ORS) missions, in this article, an intra-satellite wireless network (ISWN) is presented. It provides a wireless and modularized scheme for intra-spacecraft sensing and data buses. By removing the wired data bus, the commercial off-the-shelf (COTS) based wireless modular architecture will reduce both the volume and weight of the satellite platform, thus achieving rapid design and cost savings in development and launching. Based on the on-orbit data demand analysis, a hybrid time division multiple access/carrier sense multiple access (TDMA/CSMA) protocol is proposed. It includes an improved clear channel assessment (CCA) mechanism and a traffic-adaptive slot allocation method. To analyze the access process, a Markov model is constructed. Then a detailed calculation is given in which the unsaturated cases are considered. Through simulations, the proposed protocol is shown to satisfy the demands well and to perform better than existing schemes. It helps to build a full-wireless satellite instead of the current wired ones, and will contribute to providing dynamic space capabilities for ORS missions. PMID:29757243
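
    As a purely illustrative sketch of the hybrid idea (not the paper's algorithm or its Markov analysis), a frame can be split between demand-driven TDMA grants and a residual CSMA contention period:

        # Hypothetical traffic-adaptive frame builder: TDMA slots granted by demand,
        # leftover slots kept as a CSMA contention period for bursty traffic.
        def build_frame(demands, frame_slots=32, reserved_csma=4):
            """demands: node id -> requested slots; returns (tdma_map, csma_slots)."""
            tdma, cursor = {}, 0
            budget = frame_slots - reserved_csma
            for node, want in sorted(demands.items()):
                grant = min(want, budget - cursor)
                if grant <= 0:
                    break
                tdma[node] = list(range(cursor, cursor + grant))
                cursor += grant
            return tdma, list(range(cursor, frame_slots))

        print(build_frame({"camera": 20, "gyro": 6, "thermal": 3}))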

  11. Analysis of OPACITY and PLAID Protocols for Contactless Smart Cards

    DTIC Science & Technology

    2012-09-01

    [Table-of-contents fragment: Access Control; Threats; Synchronization; Simple Integration and Interoperability; Modes of Operation; Suggested Key ...]

  12. Network Basics.

    ERIC Educational Resources Information Center

    Tennant, Roy

    1992-01-01

    Explains how users can find and access information resources available on the Internet. Highlights include network information centers (NICs); lists, both formal and informal; computer networking protocols, including international standards; electronic mail; remote log-in; and file transfer. (LRW)

  13. Providing comprehensive and consistent access to astronomical observatory archive data: the NASA archive model

    NASA Astrophysics Data System (ADS)

    McGlynn, Thomas; Fabbiano, Giuseppina; Accomazzi, Alberto; Smale, Alan; White, Richard L.; Donaldson, Thomas; Aloisi, Alessandra; Dower, Theresa; Mazzerella, Joseph M.; Ebert, Rick; Pevunova, Olga; Imel, David; Berriman, Graham B.; Teplitz, Harry I.; Groom, Steve L.; Desai, Vandana R.; Landry, Walter

    2016-07-01

    Since the turn of the millennium, astronomical archives have begun providing data to the public through standardized protocols, unifying data from disparate physical sources and wavebands across the electromagnetic spectrum into an astronomical virtual observatory (VO). In October 2014, NASA began support for the NASA Astronomical Virtual Observatories (NAVO) program to coordinate the efforts of NASA astronomy archives in providing data to users through implementation of protocols agreed within the International Virtual Observatory Alliance (IVOA). A major goal of the NAVO collaboration has been to step back from a piecemeal implementation of IVOA standards and define what the appropriate presence for the US and NASA astronomy archives in the VO should be. This includes evaluating what optional capabilities in the standards need to be supported, the specific versions of standards that should be used, and returning feedback to the IVOA, to support modifications as needed. We discuss a standard archive model developed by the NAVO for data archive presence in the virtual observatory built upon a consistent framework of standards defined by the IVOA. Our standard model provides for discovery of resources through the VO registries, access to observation and object data, downloads of image and spectral data and general access to archival datasets. It defines specific protocol versions, minimum capabilities, and all dependencies. The model will evolve as the capabilities of the virtual observatory and needs of the community change.
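
    In practice, the registry-based discovery the model calls for looks like this with the pyvo library (a real package; what comes back depends on the live registry contents at query time):

        # Registry discovery sketch with pyvo: find image services by keyword.
        import pyvo

        services = pyvo.regsearch(keywords="supernova remnant", servicetype="image")
        for svc in list(services)[:5]:
            print(svc.ivoid, svc.res_title)   # identifier and title of each resource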

  14. Interoperability In The New Planetary Science Archive (PSA)

    NASA Astrophysics Data System (ADS)

    Rios, C.; Barbarisi, I.; Docasal, R.; Macfarlane, A. J.; Gonzalez, J.; Arviset, C.; Grotheer, E.; Besse, S.; Martinez, S.; Heather, D.; De Marchi, G.; Lim, T.; Fraga, D.; Barthelemy, M.

    2015-12-01

    As the world becomes increasingly interconnected, there is a greater need to provide interoperability with software and applications that are commonly being used globally. For this purpose, the development of the new Planetary Science Archive (PSA), by the European Space Astronomy Centre (ESAC) Science Data Centre (ESDC), is focused on building a modern science archive that takes into account internationally recognised standards in order to provide access to the archive through tools from third parties, for example the NASA Planetary Data System (PDS), the VESPA project from the Virtual Observatory of Paris, and other international institutions. The protocols and standards currently supported by the new Planetary Science Archive are the Planetary Data Access Protocol (PDAP), the EuroPlanet-Table Access Protocol (EPN-TAP) and Open Geospatial Consortium (OGC) standards. The architecture of the PSA consists of a Geoserver (an open-source map server), the goal of which is to support use cases such as the distribution of search results, sharing and processing data through an OGC Web Feature Service (WFS) and a Web Map Service (WMS). This server also allows the retrieval of requested information in several standard output formats like Keyhole Markup Language (KML), Geography Markup Language (GML), shapefile, JavaScript Object Notation (JSON) and Comma Separated Values (CSV), among others. The provision of these various output formats enables end-users to transfer retrieved data into popular applications such as Google Mars and NASA World Wind.
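
    A sketch of how a third-party client could retrieve features in one of the listed output formats through a GeoServer WFS interface; the endpoint and feature type name are placeholders, not the PSA's actual layer names:

        # WFS 2.0 GetFeature request returning GeoJSON from a GeoServer instance.
        import requests

        r = requests.get(
            "http://example.org/geoserver/wfs",             # hypothetical endpoint
            params={
                "service": "WFS", "version": "2.0.0", "request": "GetFeature",
                "typeNames": "psa:observation_footprints",  # hypothetical feature type
                "outputFormat": "application/json", "count": 10,
            },
            timeout=60,
        )
        features = r.json()["features"]   # GeoJSON features, ready for GIS tools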

  16. A Study of the Seastar Underwater Acoustic Local Area Network Concept

    DTIC Science & Technology

    2007-12-01

    sense multiple access (CSMA) and multiple access with collision avoidance (MACA) are reviewed in [19, 22, 23, 34]. Peripheral nodes using ALOHA and ... transmissions until the channel is clear. However, the long propagation time limits the effectiveness of CSMA for acoustic communications. MACA [22] uses ... MACA protocol, if no ACK message is received after the transmission is completed, the full packet will be retransmitted until reception is ...

  17. MNE7 Access to the Global Commons Outcome 3 Cyber Domain. Objective 3.5 Cyber Situational Awareness. Concept of Employment for Cyber Situational Awareness Within the Global Commons Version 1.0

    DTIC Science & Technology

    2013-02-25

    such as authentication, protocols, and 'signature' management exist, but the imposition of such techniques must be balanced with the legal requirements ... conflicting pressures to keep this data secure and yet allow access by authorised users ... in the sharing network should be ...

  18. Supporting Tablet Configuration, Tracking, and Infection Control Practices in Digital Health Interventions: Study Protocol.

    PubMed

    Furberg, Robert D; Ortiz, Alexa M; Zulkiewicz, Brittany A; Hudson, Jordan P; Taylor, Olivia M; Lewis, Megan A

    2016-06-27

    Tablet-based health care interventions have the potential to encourage patient care in a timelier manner, allow physicians convenient access to patient records, and provide an improved method for patient education. However, along with the continued adoption of tablet technologies, there is a concomitant need to develop protocols focusing on the configuration, management, and maintenance of these devices within the health care setting to support the conduct of clinical research. The objective of this work was to develop three protocols to support tablet configuration, tablet management, and tablet maintenance. The Configurator software, Tile technology, and current infection control recommendations were employed to develop three distinct protocols for tablet-based digital health interventions. Configurator is mobile device management software specifically for iPhone operating system (iOS) devices. The capabilities and current applications of Configurator were reviewed and used to develop the protocol to support device configuration. Tile is a tracking tag associated with a free mobile app available for iOS and Android devices. The features associated with Tile were evaluated and used to develop the Tile protocol to support tablet management. Furthermore, current recommendations on preventing health care-related infections were reviewed to develop the infection control protocol to support tablet maintenance. This article provides three protocols: the Configurator protocol, the Tile protocol, and the infection control protocol. These protocols can help to ensure consistent implementation of tablet-based interventions, enhance fidelity when employing tablets for research purposes, and serve as a guide for tablet deployments within clinical settings.

  19. The IS-ENES climate4impact portal: bridging the CMIP5 and CORDEX data to impact users

    NASA Astrophysics Data System (ADS)

    Som de Cerff, Wim; Plieger, Maarten; Page, Christian; Tatarinova, Natalia; Hutjes, Ronald; de Jong, Fokke; Bärring, Lars; Sjökvist, Elin; Vega Saldarriaga, Manuel; Santiago Cofiño Gonzalez, Antonio

    2015-04-01

    The aim of climate4impact (climate4impact.eu) is to enhance the use of climate research data and the interaction with climate effect/impact communities. The portal is based on 17 impact use cases from 5 different European countries, and is evaluated by a user panel consisting of the use case owners. It was developed within the IS-ENES European project and is currently operated and further developed in the IS-ENES2 project. As the climate impact community is very broad, the focus is mainly on the scientific impact community. Climate4impact is connected to the Earth System Grid Federation (ESGF) nodes containing global climate model (GCM) data from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and regional climate model (RCM) data from the Coordinated Regional Climate Downscaling Experiment (CORDEX). This global network of climate model data centers offers services for data description, discovery and download. The climate4impact portal connects to these services using OpenID, and offers a user interface for searching, visualizing and downloading global climate model data and more. A challenging task is to describe the available model data and how they can be used. The portal informs users about possible caveats when using climate model data. All impact use cases are described in the documentation section, using highlighted keywords pointing to detailed information in the glossary. Climate4impact currently has two main objectives. The first is a web interface which automatically generates a graphical user interface for WPS endpoints. The WPS calculates climate indices and subsets data using OpenClimateGIS/icclim on data stored in ESGF data nodes. Data are transmitted from ESGF nodes over secured OPeNDAP and become available in a new, per-user, secured OPeNDAP server; the results can then be visualized again using ADAGUC WMS. Dedicated wizards for the processing of climate indices will be developed in close collaboration with users. The second is to expose the climate4impact services as standardized services which can be used by other portals (like the future Copernicus platform, developed in the EU FP7 CLIPC project). This has the advantage of adding interoperability between several portals, as well as enabling the design of specific portals aimed at different impact communities, either thematic or national. In the presentation the following subjects will be detailed:
    - Lessons learned developing climate4impact.eu
    - Download: directly from ESGF nodes and other THREDDS catalogs
    - The connection with the downscaling portal of the University of Cantabria
    - Experiences with the question-and-answer site via Askbot
    - Visualization: visualize data from ESGF data nodes using ADAGUC Web Map Services
    - Processing: transform data, subset, export into other formats, and perform climate index calculations using Web Processing Services implemented by PyWPS, based on NCAR NCPP OpenClimateGIS and IS-ENES2 icclim (a minimal sketch of this kind of pipeline follows below)
    - Security: login using OpenID for access to the ESGF data nodes. The ESGF works in conjunction with several external websites and systems. The climate4impact portal uses X509-based short-lived credentials, generated on behalf of the user with a MyProxy service. Single Sign-On (SSO) is used to make these websites and systems work together.
    - Discovery: faceted search based on e.g. variable name, model and institute using the ESGF search services. A catalog browser allows browsing through CMIP5 and any other climate model data catalogues (e.g. ESSENCE, EOBS, UNIDATA).
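
    The subsetting-plus-index pipeline referenced above can be sketched in a few lines; the ESGF OPeNDAP URL here is hypothetical, and "summer days" (daily maximum temperature above 25 degC) is one of the standard indices that icclim implements:

        # Sketch: subset daily-maximum temperature over OPeNDAP, then count the
        # days above 25 degC at one grid cell. URL and indices are illustrative.
        import numpy as np
        from netCDF4 import Dataset

        ds = Dataset("http://esgf.example.org/thredds/dodsC/tasmax_day.nc")  # hypothetical
        tasmax = ds.variables["tasmax"][:, 40, 60]     # all days, one cell (kelvin)
        summer_days = int(np.sum(tasmax - 273.15 > 25.0))
        print(summer_days)
        ds.close()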

  20. Dynamic federations: storage aggregation using open tools and protocols

    NASA Astrophysics Data System (ADS)

    Furano, Fabrizio; Brito da Rocha, Ricardo; Devresse, Adrien; Keeble, Oliver; Álvarez Ayllón, Alejandro; Fuhrmann, Patrick

    2012-12-01

    A number of storage elements now offer standard protocol interfaces like NFS 4.1/pNFS and WebDAV, for access to their data repositories, in line with the standardization effort of the European Middleware Initiative (EMI). Also the LCG FileCatalogue (LFC) can offer such features. Here we report on work that seeks to exploit the federation potential of these protocols and build a system that offers a unique view of the storage and metadata ensemble and the possibility of integration of other compatible resources such as those from cloud providers. The challenge, here undertaken by the providers of dCache and DPM, and pragmatically open to other Grid and Cloud storage solutions, is to build such a system while being able to accommodate name translations from existing catalogues (e.g. LFCs), experiment-based metadata catalogues, or stateless algorithmic name translations, also known as “trivial file catalogues”. Such so-called storage federations of standard-protocol-based storage elements give a unique view of their content, thus promoting simplicity in accessing the data they contain and offering new possibilities for resilience and data placement strategies. The goal is to consider HTTP and NFS4.1-based storage elements and metadata catalogues and make them able to cooperate through an architecture that properly feeds the redirection mechanisms that they are based upon, thus giving the functionalities of a “loosely coupled” storage federation. One of the key requirements is to use standard clients (provided by OSes or open-source distributions, e.g. Web browsers) to access an already aggregated system; this approach is quite different from aggregating the repositories at the client side through some wrapper API, like for instance GFAL, or by developing new custom clients. Other technical challenges that will determine the success of this initiative include performance, latency and scalability, and the ability to create worldwide storage federations that are able to redirect clients to repositories that they can efficiently access, for instance trying to choose the endpoints that are closer or applying other criteria. We believe that the features of a loosely coupled federation of open-protocols-based storage elements will open many possibilities of evolving the current computing models without disrupting them, and, at the same time, will be able to operate with the existing infrastructures, follow their evolution path and add storage centers that can be acquired as a third-party service.
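
    From the client side the mechanism is just standard HTTP; the sketch below (the federation URL is a placeholder) shows how an off-the-shelf client follows the federator's redirect to whichever storage endpoint actually holds a replica:

        # A plain HTTP client traversing a storage federation's redirects.
        import requests

        r = requests.get("http://federation.example.org/fed/data/file.root",
                         allow_redirects=True, timeout=60)
        for hop in r.history:                          # the redirect chain
            print(hop.status_code, "->", hop.headers.get("Location"))
        print("served by:", r.url)                     # the endpoint finally chosen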
